Applied Mathematical Sciences Volume 152 Editors S.S. Antman J.E. Marsden L. Sirovich
Advisors J.K. Hale P. Holmes J. Keener J. Keller B.J. Matkowsky A. Mielke C.S. Peskin K.R. Sreenivasan
For further volumes: http://www.springer.com/series/34
Helge Holden • Nils Henrik Risebro
Front Tracking for Hyperbolic Conservation Laws
Helge Holden Department of Mathematics Norwegian University of Science and Technology Trondheim Norway
[email protected]
Editors: S.S. Antman Department of Mathematics and Institute for Physical Science and Technology University of Maryland College Park MD 20742-4015 USA
[email protected]
Nils Henrik Risebro Department of Mathematics University of Oslo Blindern Norway
[email protected]
J.E. Marsden (1942–2010) California Institute of Technology USA
L. Sirovich Laboratory of Applied Mathematics Department of Biomathematical Sciences Mount Sinai School of Medicine New York, NY 10029-6574 USA
[email protected]
ISSN 0066-5452 ISBN 978-3-540-43289-0 (hardcover) e-ISBN 978-3-642-23911-3 ISBN 978-3-642-23910-6 (softcover) DOI 10.1007/978-3-642-23911-3 Springer Heidelberg Dordrecht London New York Library of Congress Control Number: 2011939410 Mathematics Subject Classification (2000): 35Lxx, 35L65, 58J45
© Springer-Verlag Berlin Heidelberg 2002, Corrected printing 2007, First softcover printing 2011 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
In memory of Raphael, who started it all
Preface
Все счастливые семьи похожи друг на друга, каждая несчастливая семья несчастлива по-своему.¹
Лев Толстой, «Анна Каренина» (1875)

While it is not strictly speaking true that all linear partial differential equations are the same, the theory that encompasses these equations can be considered well developed (and these are the happy families). Large classes of linear partial differential equations can be studied using linear functional analysis, which was developed in part as a tool to investigate important linear differential equations. In contrast to the well-understood (and well-studied) classes of linear partial differential equations, each nonlinear equation presents its own particular difficulties. Nevertheless, over the last forty years some rather general classes of nonlinear partial differential equations have been studied and at least partly understood. These include the theory of viscosity solutions for Hamilton–Jacobi equations, the theory of Korteweg–de Vries equations, as well as the theory of hyperbolic conservation laws. The purpose of this book is to present the modern theory of hyperbolic conservation laws in a largely self-contained manner. In contrast to the modern theory of linear partial differential equations, the mathematician

¹ All happy families resemble one another, but each unhappy family is unhappy in its own way (Leo Tolstoy, Anna Karenina).
interested in nonlinear hyperbolic conservation laws does not have to cover a large body of general theory to understand the results. Therefore, to follow the presentation in this book (with some minor exceptions), the reader does not have to be familiar with many complicated function spaces, nor does he or she have to know much theory of linear partial differential equations. The methods used in this book are almost exclusively constructive, and largely based on the front-tracking construction. We feel that this gives the reader an intuitive feeling for the nonlinear phenomena that are described by conservation laws. In addition, front tracking is a viable numerical tool, and our book is also suitable for practical scientists interested in computations. We focus on scalar conservation laws in several space dimensions and systems of hyperbolic conservation laws in one space dimension. In the scalar case we first discuss the one-dimensional case before we consider its multidimensional generalization. Multidimensional systems will not be treated. For multidimensional equations we combine front tracking with the method of dimensional splitting. We have included a chapter on standard difference methods that provides a brief introduction to the fundamentals of difference methods for conservation laws. This book has grown out of courses we have given over some years: full-semester courses at the Norwegian University of Science and Technology and the University of Oslo, as well as shorter courses at Universität Kaiserslautern and S.I.S.S.A., Trieste. We have taught this material for graduate and advanced undergraduate students. A solid background in real analysis and integration theory is an advantage, but key results concerning compactness and functions of bounded variation are proved in Appendix A. Our main audience consists of students and researchers interested in analytical properties as well as numerical techniques for hyperbolic conservation laws.
We have benefited from the kind advice and careful proofreading of various versions of this manuscript by several friends and colleagues, among them Petter I. Gustafson, Runar Holdahl, Helge Kristian Jenssen, Kenneth H. Karlsen, Odd Kolbjørnsen, Kjetil Magnus Larsen, Knut-Andreas Lie, and Achim Schroll. Special thanks are due to Harald Hanche-Olsen, who has helped us on several occasions with both mathematical and TEX-nical issues. We are also grateful to Trond Iden, from Ordkommisjonen, for helping with technical issues and software for making the figures. Our research has been supported in part by the BeMatA program of the Research Council of Norway. A list of corrections can be found at www.math.ntnu.no/~holden/FrontBook/ Whenever you find an error, please send us an email about it.
Preface
ix
The logical interdependence of the material in this book is depicted in the diagram below. The main line, Chapters 1, 2, 5–7, has most of the emphasis on the theory for systems of conservation laws in one space dimension. Another possible track is Chapters 1–4, with emphasis on numerical methods and theory for scalar equations in one and several space dimensions.

[Dependency diagram with nodes: Chapter 1, Chapter 2, Sections 3.1–2, Chapter 4, Sections 3.3–4, Chapter 5, Chapter 6, Chapter 7]
In the corrected printing of the first edition we have used the opportunity to correct many smaller errors. We are grateful to those who have given us feedback, in particular, F. Gossler, X. Raynaud, M. Rejske, and O. Sete.
Contents

Preface  vii

1 Introduction  1
   1.1 Notes  18

2 Scalar Conservation Laws  23
   2.1 Entropy Conditions  24
   2.2 The Riemann Problem  30
   2.3 Front Tracking  36
   2.4 Existence and Uniqueness  44
   2.5 Notes  56

3 A Short Course in Difference Methods  63
   3.1 Conservative Methods  63
   3.2 Error Estimates  81
   3.3 A Priori Error Estimates  92
   3.4 Measure-Valued Solutions  99
   3.5 Notes  112

4 Multidimensional Scalar Conservation Laws  117
   4.1 Dimensional Splitting Methods  117
   4.2 Dimensional Splitting and Front Tracking  126
   4.3 Convergence Rates  134
   4.4 Operator Splitting: Diffusion  146
   4.5 Operator Splitting: Source  153
   4.6 Notes  157

5 The Riemann Problem for Systems  163
   5.1 Hyperbolicity and Some Examples  164
   5.2 Rarefaction Waves  168
   5.3 The Hugoniot Locus: The Shock Curves  174
   5.4 The Entropy Condition  180
   5.5 The Solution of the Riemann Problem  188
   5.6 Notes  200

6 Existence of Solutions of the Cauchy Problem for Systems  205
   6.1 Front Tracking for Systems  206
   6.2 Convergence  218
   6.3 Notes  229

7 Well-Posedness of the Cauchy Problem for Systems  233
   7.1 Stability  238
   7.2 Uniqueness  265
   7.3 Notes  285

A Total Variation, Compactness, etc.  287
   A.1 Notes  298

B The Method of Vanishing Viscosity  299
   B.1 Notes  312

C Answers and Hints  315

References  349

Index  359
1 Introduction
I have no objection to the use of the term "Burgers' equation" for the nonlinear heat equation (provided it is not written "Burger's equation").
Letter from Burgers to Batchelor (1968)
Hyperbolic conservation laws are partial differential equations of the form

$$\frac{\partial u}{\partial t} + \nabla \cdot f(u) = 0.$$

If we write $f = (f_1, \dots, f_m)$, $x = (x_1, x_2, \dots, x_m) \in \mathbb{R}^m$, and introduce initial data $u_0$ at $t = 0$, the Cauchy problem for hyperbolic conservation laws reads

$$\frac{\partial u(x,t)}{\partial t} + \sum_{j=1}^{m} \frac{\partial}{\partial x_j} f_j(u(x,t)) = 0, \qquad u|_{t=0} = u_0. \tag{1.1}$$
In applications t normally denotes the time variable, while x describes the spatial variation in m space dimensions. The unknown function u (as well as each fj) can be a vector, in which case we say that we have a system of equations, or u and each fj can be a scalar. This book covers the theory of scalar conservation laws in several space dimensions as well as the theory of systems of hyperbolic conservation laws in one space dimension. In the present chapter we study the one-dimensional scalar case to highlight some of the fundamental issues in the theory of conservation laws.

H. Holden and N.H. Risebro, Front Tracking for Hyperbolic Conservation Laws, Applied Mathematical Sciences 152, DOI 10.1007/978-3-642-23911-3_1, © Springer-Verlag Berlin Heidelberg 2011
We use subscripts to denote partial derivatives, i.e., $u_t(x,t) = \partial u(x,t)/\partial t$. Hence we may write (1.1) when m = 1 as

$$u_t + f(u)_x = 0, \qquad u|_{t=0} = u_0. \tag{1.2}$$
If we formally integrate equation (1.2) between two points $x_1$ and $x_2$, we obtain

$$\int_{x_1}^{x_2} u_t \, dx = -\int_{x_1}^{x_2} f(u)_x \, dx = f(u(x_1,t)) - f(u(x_2,t)).$$

Assuming that u is sufficiently regular to allow us to take the derivative outside the integral, we get

$$\frac{d}{dt}\int_{x_1}^{x_2} u(x,t)\,dx = f(u(x_1,t)) - f(u(x_2,t)). \tag{1.3}$$

This equation expresses conservation of the quantity measured by u in the sense that the rate of change in the amount of u between $x_1$ and $x_2$ is given by the difference in f(u) evaluated at these points.¹ Therefore, it is natural to interpret f(u) as the flux density of u. Often, f(u) is referred to as the flux function.

As a simple example of a conservation law, consider a one-dimensional medium consisting of noninteracting particles, or material points, identified by their coordinates y along a line. Let φ(y, t) denote the position of material point y at time t. The velocity and the acceleration of y at time t are given by $\varphi_t(y,t)$ and $\varphi_{tt}(y,t)$, respectively. Assume that for each t, φ( · , t) is strictly increasing, so that two distinct material points cannot occupy the same location at the same time. Then the function φ( · , t) has an inverse ψ( · , t), so that y = ψ(φ(y, t), t) for all t. Hence x = φ(y, t) is equivalent to y = ψ(x, t). Now let u denote the velocity of the material point occupying position x at time t, i.e., u(x, t) = φt(ψ(x, t), t), or equivalently, u(φ(y, t), t) = φt(y, t). Then the acceleration of material point y at time t is

$$\varphi_{tt}(y,t) = u_t(\varphi(y,t),t) + u_x(\varphi(y,t),t)\,\varphi_t(y,t) = u_t(x,t) + u_x(x,t)\,u(x,t).$$

If the material particles are noninteracting, so that they exert no force on each other, and there is no external force acting on them, then Newton's second law requires the acceleration to be zero, giving

$$u_t + \Bigl(\frac{1}{2}u^2\Bigr)_x = 0. \tag{1.4}$$

¹ In physics one normally describes conservation of a quantity in integral form, that is, one starts with (1.3). The differential equation (1.2) then follows under additional regularity conditions on u.
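The derivation above can be checked numerically. The following sketch (an illustration, not from the text; the velocity profile v0 and all numerical parameters are arbitrary choices of ours) follows free particles φ(y, t) = y + v0(y)t, recovers u(x, t) = v0(ψ(x, t)) by inverting φ with bisection, and confirms by finite differences that u satisfies (1.4) before characteristics cross.

```python
import math

def v0(y):
    # illustrative initial velocity profile (an assumption, not from the text)
    return math.exp(-y * y)

def u(x, t):
    """Velocity of the particle occupying position x at time t.

    Free particles move as phi(y, t) = y + v0(y) * t; for small t this map
    is strictly increasing in y, so it can be inverted by bisection, and
    u(x, t) = v0(y) with y = psi(x, t).
    """
    lo, hi = -20.0, 20.0
    for _ in range(200):            # bisection on phi(y, t) - x = 0
        mid = 0.5 * (lo + hi)
        if mid + v0(mid) * t < x:
            lo = mid
        else:
            hi = mid
    return v0(0.5 * (lo + hi))

# Check u_t + (u^2/2)_x = 0 at a sample point, well before breakdown.
x, t, h = 0.3, 0.5, 1e-5
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
flux_x = (u(x + h, t)**2 / 2 - u(x - h, t)**2 / 2) / (2 * h)
residual = u_t + flux_x
print(residual)  # should be close to zero
```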
The last equation, (1.4), is a conservation law; it expresses that u is conserved with a flux density given by u²/2. This equation is often referred to as the Burgers equation without viscosity,² and is in some sense the simplest nonlinear conservation law. Burgers' equation, and indeed any conservation law, is an example of a quasilinear equation, meaning that the highest derivatives occur linearly. A general inhomogeneous quasilinear equation for functions of two variables x and t can be written

$$a(x,t,u)\,u_t + b(x,t,u)\,u_x = c(x,t,u). \tag{1.5}$$

We may consider the solution as the surface $\{(t,x,u(x,t)) \mid (t,x) \in \mathbb{R}^2\}$ in $\mathbb{R}^3$. Let Γ be a given curve in $\mathbb{R}^3$ (which one may think of as the initial data if t is constant) parameterized by (t(η), x(η), z(η)). We want to construct a surface $S \subset \mathbb{R}^3$ parameterized by (t, x, u(x,t)) such that u = u(x,t) satisfies (1.5) and Γ ⊂ S. To this end we solve the system of ordinary differential equations

$$\frac{\partial t}{\partial \xi} = a, \qquad \frac{\partial x}{\partial \xi} = b, \qquad \frac{\partial z}{\partial \xi} = c, \tag{1.6}$$

with

$$t(\xi_0,\eta) = t(\eta), \qquad x(\xi_0,\eta) = x(\eta), \qquad z(\xi_0,\eta) = z(\eta). \tag{1.7}$$

Assume that we can invert the relations x = x(ξ, η), t = t(ξ, η) and write ξ = ξ(x, t), η = η(x, t). Then

$$u(x,t) = z(\xi(x,t), \eta(x,t)) \tag{1.8}$$

satisfies both (1.5) and the condition Γ ⊂ S. However, there are many ifs in the above construction: The solution may only be local, and we may not be able to invert the solution of the differential equation to express (ξ, η) as functions of (x, t). These problems are intrinsic to equations of this type and will be discussed at length. Equation (1.6) is called the characteristic equation, and its solutions are called characteristics. This can sometimes be used to find explicit solutions of conservation laws. In the homogeneous case, that is, when c = 0, the solution u is constant along characteristics, namely,

$$\frac{d}{d\xi}\, u(x(\xi,\eta), t(\xi,\eta)) = u_x x_\xi + u_t t_\xi = u_x b + u_t a = 0. \tag{1.9}$$
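The characteristic system (1.6) can also be integrated numerically. The sketch below (an illustration of ours, not part of the text) applies a classical fourth-order Runge–Kutta step to the coefficients a = 1, b = −x, c = −2z of the equation in Example 1.1 below, and checks the result against its explicit solution u = xe⁻ᵗ.

```python
import math

def rk4_step(f, y, h):
    """One classical Runge-Kutta step for y' = f(y), with y = (t, x, z)."""
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6.0 * (k1i + 2 * k2i + 2 * k3i + k4i)
            for yi, k1i, k2i, k3i, k4i in zip(y, k1, k2, k3, k4)]

# Characteristic system (1.6) for u_t - x u_x = -2u (Example 1.1 below):
# a = 1, b = -x, c = -2z.
def rhs(y):
    t, x, z = y
    return [1.0, -x, -2.0 * z]

eta = 0.7                      # initial data: t = 0, x = eta, z = eta
y = [0.0, eta, eta]
h = 1e-3
for _ in range(1000):          # integrate from xi = 0 to xi = 1
    y = rk4_step(rhs, y, h)

t, x, z = y
print(z, x * math.exp(-t))     # z should agree with the exact solution x e^{-t}
```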
♦ Example 1.1. Define the (quasi)linear equation

$$u_t - x u_x = -2u, \qquad u(x,0) = x,$$

with associated characteristic equations

$$\frac{\partial t}{\partial \xi} = 1, \qquad \frac{\partial x}{\partial \xi} = -x, \qquad \frac{\partial z}{\partial \xi} = -2z.$$

The general solution of the characteristic equations reads

$$t = t_0 + \xi, \qquad x = x_0 e^{-\xi}, \qquad z = z_0 e^{-2\xi}.$$

Parameterizing the initial data for ξ = 0 by t = 0, x = η, and z = η, we obtain

$$t = \xi, \qquad x = \eta e^{-\xi}, \qquad z = \eta e^{-2\xi},$$

which can be inverted to yield $u = u(x,t) = z(\xi,\eta) = x e^{-t}$. ♦

² Henceforth we will adhere to common practice and call it the inviscid Burgers' equation.

♦ Example 1.2. Consider the (quasi)linear equation

$$x u_t - t^2 u_x = 0. \tag{1.10}$$

Its associated characteristic equation is

$$\frac{\partial t}{\partial \xi} = x, \qquad \frac{\partial x}{\partial \xi} = -t^2.$$

This has solutions given implicitly by x²/2 + t³/3 = const, since after all, ∂(x²/2 + t³/3)/∂ξ = 0, so the solution of (1.10) is any function ϕ of x²/2 + t³/3, i.e., u(x, t) = ϕ(x²/2 + t³/3). For example, if we wish to solve the initial value problem u(x, 0) = sin |x|, then u(x, 0) = ϕ(x²/2) = sin |x|. Consequently, $\varphi(x) = \sin\sqrt{2x}$, and the solution u is given by

$$u(x,t) = \sin\sqrt{x^2 + 2t^3/3}, \qquad t \ge 0. \qquad ♦$$

♦ Example 1.3 (Burgers' equation). If we apply this technique to Burgers' equation (1.4) with initial data u(x, 0) = u₀(x), we get that

$$\frac{\partial t}{\partial \xi} = 1, \qquad \frac{\partial x}{\partial \xi} = z, \qquad \frac{\partial z}{\partial \xi} = 0,$$

with initial conditions t(0, η) = 0, x(0, η) = η, and z(0, η) = u₀(η). We cannot solve these equations without knowing more about u₀, but since u (or z) is constant along characteristics, cf. (1.9), we see that
the characteristics are straight lines. In other words, the value of z is transported along characteristics, so that

$$t(\xi,\eta) = \xi, \qquad x(\xi,\eta) = \eta + \xi z = \eta + \xi u_0(\eta), \qquad z(\xi,\eta) = u_0(\eta).$$

We may write this as

$$x = \eta + u_0(\eta)\,t. \tag{1.11}$$

If we solve this equation in terms of η = η(x, t), we can use η to obtain u(x, t) = z(ξ, η) = u₀(η(x, t)), yielding the implicit relation

$$u(x,t) = u_0(x - u(x,t)\,t). \tag{1.12}$$

From equation (1.11) we find that

$$\frac{\partial x}{\partial \eta} = 1 + t\,u_0'(\eta). \tag{1.13}$$

Thus a solution certainly exists for all t > 0 if $u_0' \ge 0$. On the other hand, if $u_0'(\tilde x) < 0$ for some $\tilde x$, then a solution cannot be found for $t > t^* = -1/u_0'(\tilde x)$. For example, if u₀(x) = −arctan(x), there is no smooth solution for t > 1.
Figure 1.1. A multivalued solution.
What actually happens when a smooth solution cannot be defined? From (1.13) we see that for t > t*, there are several η that satisfy (1.11) for each x. In some sense, we can say that the solution u is multivalued at such points. To illustrate this, consider the surface in (t, x, u) space parameterized by ξ and η,

$$\bigl(\xi,\ \eta + \xi u_0(\eta),\ u_0(\eta)\bigr).$$

Let us assume that the initial data are given by u₀(x) = −arctan(x) and t₀ = 0. For each fixed t, the curve traced out by the surface is the graph of a (multivalued) function of x. In Figure 1.1 we see how the multivaluedness starts at t = 1 when the surface "folds over," and
that for t > 1 there are some x that have three associated u values. To continue the solution beyond t = 1 we have to choose among these three u values. In any case, it is impossible to continue the solution and at the same time keep it continuous. ♦

Now we have seen that no matter how smooth the initial function is, we cannot expect to be able to define classical solutions of nonlinear conservation laws for all time. In this case we have to extend the concept of solution in order to allow discontinuities. The standard way of extending the admissible set of solutions to differential equations is to look for generalized functions, or distributions, instead of smooth functions that satisfy the differential equation in the normal way. In the context of conservation laws, a distribution is a continuous linear functional on $C_0^\infty$, the space of infinitely differentiable functions with compact support. Functions in $C_0^\infty$ are often referred to as test functions. In this book we use the following notation: $C^i(U)$ is the set of i times continuously differentiable functions on a set $U \subseteq \mathbb{R}^n$, and $C_0^i(U)$ denotes the set of such functions that have compact support in U. Then $C^\infty(U) = \bigcap_{i=0}^\infty C^i(U)$, and similarly for $C_0^\infty$. Where there is no ambiguity, we sometimes omit the set U and write only $C^0$, etc.

Distributions are linear functionals $h\colon C_0^\infty \to \mathbb{R}$ with the following property: For any family $\varphi_n \in C_0^\infty$ with $\operatorname{supp} \varphi_n \subseteq K$, where K is compact, and such that all derivatives satisfy $\varphi_n^{(m)} \to \varphi^{(m)}$ uniformly ($\varphi^{(m)} = d^m\varphi/dx^m$) for some function $\varphi \in C_0^\infty$, we have that $h(\varphi_n) \to h(\varphi)$. Frequently we write $\langle h, \varphi\rangle = h(\varphi)$. Distributions are natural generalizations of functions, since if we have an integrable function h, we define the linear functional by

$$\langle h, \varphi\rangle := \int_U h(x)\varphi(x)\,dx \tag{1.14}$$

for any φ in $C_0^\infty$. It is not hard to see that (1.14) defines a continuous linear functional in the topology of uniform convergence of all derivatives on $C_0^\infty$: If $\varphi_k$ is some sequence such that $\varphi_k^{(m)} \to \varphi^{(m)}$ for all m, then $\langle h, \varphi_k\rangle \to \langle h, \varphi\rangle$. Hence h is a distribution. Furthermore, if h is a distribution on $\mathbb{R}$, it possesses distributional derivatives of any order, for we define $\langle h^{(m)}, \varphi\rangle := (-1)^m \langle h, \varphi^{(m)}\rangle$. If h is an m times differentiable function, this definition coincides, using integration by parts, with the usual definition of $h^{(m)}(x)$. We can also define partial derivatives of distributions, e.g.,

$$\Bigl\langle \frac{\partial h}{\partial x_i}, \varphi \Bigr\rangle = -\Bigl\langle h, \frac{\partial \varphi}{\partial x_i} \Bigr\rangle.$$

Differentiation of distributions is linear with respect to multiplication by constants, and distributive over addition. Furthermore, if k denotes a constant distribution, $\langle k, \varphi\rangle = k\int\varphi$, then k′ = 0, the zero distribution. Distributions can be multiplied by differentiable functions, and the result is a new distribution. If f is a smooth function and h a distribution, then fh is a distribution with $fh(\varphi) = h(f\varphi) = \langle h, f\varphi\rangle$, and $(fh)' = f'h + fh'$. Two distributions, however, cannot in general be multiplied.

So, the solutions we are looking for are distributions on $C_0^\infty(\mathbb{R} \times [0,\infty))$. Thus we interpret (1.2) in the distributional sense; this means that a solution is a distribution u such that the partial derivatives $u_t$ and $f(u)_x$ satisfy (1.2). Note that the last term $f(u)_x$ is not in general equal to $f'(u)u_x$; this is the case only if u is differentiable. The above remarks also imply that f(u) is not well-defined for all distributions u. We have seen that two distributions cannot always be multiplied, and in general we cannot use functions on distributions unless the distributions take pointwise values as functions. Such distributions always act on test functions as (1.14).

When considering conservation laws, we are usually interested in the initial value problem u(x, 0) = u₀(x) for some given function u₀. This means that the distribution u(x, 0) has to coincide with the distribution defined by the function u₀(x),

$$\langle u(\,\cdot\,,0), \varphi\rangle = \langle u_0, \varphi\rangle = \int_{\mathbb{R}} u_0(x)\varphi(x)\,dx.$$

Since this is to hold for all $\varphi \in C_0^\infty$, u₀(x) = u(x, 0) almost everywhere with respect to the Lebesgue measure.

In this book we employ the usual notation that for any $p \in [1, \infty)$, $L^p(U)$ denotes the set of all measurable functions F such that the integral

$$\int_U |F|^p\,dx$$

is finite. $L^p(U)$ is equipped with the norm

$$\|F\|_p = \|F\|_{L^p(U)} = \Bigl(\int_U |F|^p\,dx\Bigr)^{1/p}.$$

If p = ∞, $L^\infty(U)$ denotes the set of all measurable functions F such that $\operatorname{ess\,sup}_U |F|$ is finite. The space $L^\infty$ has the norm $\|F\|_\infty = \operatorname{ess\,sup}_U |F|$. In addition, we will frequently use the spaces

$$L^p_{loc}(U) = \{f\colon U \to \mathbb{R} \mid f \in L^p(K) \text{ for every compact set } K \subseteq U\}.$$

With this notation, we see that u₀ must be equal to u(x, 0) in $L^1(\mathbb{R})$. Thus, a distributional solution u to (1.2) taking the initial value u₀(x) is a distribution such that $u_t + f(u)_x = 0$ as a distribution, and

$$\lim_{t\to 0} \int_{\mathbb{R}} u(x,t)\,dx = \int_{\mathbb{R}} u_0(x)\,dx.$$
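The machinery of distributional derivatives can be made concrete with a small computation. In the sketch below (our own illustration, not from the text) h is the Heaviside step function; pairing its distributional derivative with a test function via ⟨h′, φ⟩ = −⟨h, φ′⟩ yields φ(0), i.e., h′ acts as the Dirac delta.

```python
import math

def bump(x):
    """A C0-infinity test function supported on (-1, 1)."""
    if abs(x) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - x * x))

def bump_prime(x):
    if abs(x) >= 1.0:
        return 0.0
    return bump(x) * (-2.0 * x) / (1.0 - x * x) ** 2

def heaviside(x):
    return 1.0 if x >= 0.0 else 0.0

# <h', phi> := -<h, phi'> = -\int_0^1 phi'(x) dx = phi(0), by the
# fundamental theorem of calculus and compact support of phi.
n = 100000
a, b = -2.0, 2.0
dx = (b - a) / n
pairing = -sum(heaviside(a + (i + 0.5) * dx) * bump_prime(a + (i + 0.5) * dx)
               for i in range(n)) * dx
print(pairing, bump(0.0))  # the two numbers should agree
```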
Furthermore, the generalized solution u must be a function defined for almost all x and t. Often it is convenient to use a formulation such that the initial condition is directly incorporated in the definition of a weak solution. If we consider test functions on the positive half-line [0, ∞), and distributions v such that v(0) makes sense, the definition of the distributional derivative is modified to

$$\langle v', \varphi\rangle := -v(0)\varphi(0) - \langle v, \varphi'\rangle. \tag{1.15}$$

This definition is reasonable, since it must hold also if v is a function. In this case (1.15) is just the formula for partial integration. Similarly, if we consider test functions in $C_0^\infty(\mathbb{R} \times [0,\infty))$, the partial derivative of a generalized solution u with respect to t is defined to be

$$\langle u_t, \varphi\rangle := -\int_{\mathbb{R}} u(x,0)\varphi(x,0)\,dx - \Bigl\langle u, \frac{\partial\varphi}{\partial t}\Bigr\rangle.$$

Remembering that a generalized solution is a function defined for almost all x and t, we see that

$$\begin{aligned}
0 = \langle 0, \varphi\rangle &= \langle u_t + f(u)_x, \varphi\rangle \\
&= -\int_{\mathbb{R}} u(x,0)\varphi(x,0)\,dx - \Bigl\langle u, \frac{\partial\varphi}{\partial t}\Bigr\rangle - \Bigl\langle f(u), \frac{\partial\varphi}{\partial x}\Bigr\rangle \\
&= -\int_{\mathbb{R}} u(x,0)\varphi(x,0)\,dx - \int_{\mathbb{R}}\int_{[0,\infty)} u\varphi_t\,dt\,dx - \int_{\mathbb{R}}\int_{[0,\infty)} f(u)\varphi_x\,dt\,dx \\
&= -\iint_{\mathbb{R}\times[0,\infty)} \bigl(u\varphi_t + f(u)\varphi_x\bigr)\,dt\,dx - \int_{\mathbb{R}} u(x,0)\varphi(x,0)\,dx.
\end{aligned}$$

Consequently, a solution must satisfy

$$\iint_{\mathbb{R}\times[0,\infty)} \bigl(u\varphi_t + f(u)\varphi_x\bigr)\,dt\,dx + \int_{\mathbb{R}} u_0(x)\varphi(x,0)\,dx = 0 \tag{1.16}$$

for all test functions $\varphi \in C_0^\infty(\mathbb{R} \times [0,\infty))$. A function satisfying (1.16) is called a weak solution of the initial value problem (1.2). Observe in particular that a (regular) smooth solution is a weak solution as well.

So what kind of discontinuities are compatible with (1.16)? If we assume that u is constant outside some finite interval, the remarks below (1.2) imply that

$$\frac{d}{dt}\int_{-\infty}^{\infty} u(x,t)\,dx = 0.$$

Hence, the total amount of u is independent of time, or the area below the graph of u( · , t) is constant.
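Definition (1.16) can be tested numerically for a concrete discontinuous function. The sketch below (our own illustration; the test function, grid sizes, and states are arbitrary choices, not from the text) takes Burgers' flux f(u) = u²/2 and a step traveling at speed s, and evaluates the left-hand side of (1.16) by a midpoint rule. It nearly vanishes for the speed s = 1/2 dictated by the jump, and does not for a wrong speed.

```python
import math

def bump(s):
    """C0-infinity bump supported on (-1, 1)."""
    return math.exp(-1.0 / (1.0 - s * s)) if abs(s) < 1.0 else 0.0

def phi(x, t):      # a test function in C0^infty(R x [0, infinity))
    return bump(x) * bump(t)

def phi_t(x, t, h=1e-6):
    return (phi(x, t + h) - phi(x, t - h)) / (2 * h)

def phi_x(x, t, h=1e-6):
    return (phi(x + h, t) - phi(x - h, t)) / (2 * h)

def weak_residual(s, ul=1.0, ur=0.0, n=200):
    """Left-hand side of (1.16) for the step u = ul if x < s*t, else ur."""
    f = lambda u: 0.5 * u * u
    total = 0.0
    dx = 2.0 / n                      # x in (-1, 1); phi vanishes outside
    dt = 1.0 / n                      # t in (0, 1); phi vanishes for t >= 1
    for i in range(n):
        x = -1.0 + (i + 0.5) * dx
        for j in range(n):
            t = (j + 0.5) * dt
            u = ul if x < s * t else ur
            total += (u * phi_t(x, t) + f(u) * phi_x(x, t)) * dx * dt
    # initial term: u0(x) = ul for x < 0, ur otherwise
    total += sum((ul if (-1.0 + (i + 0.5) * dx) < 0.0 else ur)
                 * phi(-1.0 + (i + 0.5) * dx, 0.0) * dx for i in range(n))
    return total

good = weak_residual(0.5)   # the speed (ul + ur)/2 that conserves u
bad = weak_residual(0.9)    # violates conservation across the jump
print(good, bad)
```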
♦ Example 1.4 (Burgers' equation (cont'd.)). We now wish to determine a discontinuous function such that the graph of the function lies on the surface given earlier with u(x, 0) = −arctan x. Furthermore, the area under the graph of the function should be equal to the area between the x-axis and the surface. In Figure 1.2 we see a section of the surface making up the solution for t = 3. The curve is parameterized by x₀, and explicitly given by

$$u = -\arctan(x_0), \qquad x = x_0 - 3\arctan(x_0).$$

Figure 1.2. Different solutions with u conserved (panels a, b, and c).
The function u is shown by a thick line, and the surface is shown by a dotted line. A function u(x) that has the correct integral, $\int u\,dx = \int u_0\,dx$, is easily found by making any cut from the upper fold to the middle fold at some negative $x_c$ with $x_c \ge -\sqrt{2}$, and then making a cut from the middle part to the lower part at $-x_c$. We see that in all cases, the area below the thick line is the same as the area bounded by the curve $(x(x_0), u(x_0))$. Consequently, conservation of u is not sufficient to determine a unique weak solution. ♦

Let us examine what kind of discontinuities are compatible with (1.16) in the general case. Assume that we have an isolated discontinuity that moves along a smooth curve Γ: x = x(t). That the discontinuity is isolated means that the function u(x, t) is differentiable in a sufficiently small neighborhood of x(t) and satisfies equation (1.2) classically on each side of x(t). We also assume that u is uniformly bounded in a neighborhood of the discontinuity. Now we choose a neighborhood D around the point (x(t), t) and a test function φ(x, t) whose support lies entirely inside the neighborhood. The situation is as depicted in Figure 1.3. The neighborhood consists of two parts D₁ and D₂, and is chosen so small that u is differentiable everywhere inside D except on x(t). Let $D_i^\varepsilon$ denote the set of points

$$D_i^\varepsilon = \{(x,t) \in D_i \mid \operatorname{dist}((x,t),(x(t),t)) > \varepsilon\}.$$

The function u is bounded, and hence

$$0 = \iint_D \bigl(u\varphi_t + f(u)\varphi_x\bigr)\,dx\,dt = \lim_{\varepsilon\to 0} \iint_{D_1^\varepsilon \cup D_2^\varepsilon} \bigl(u\varphi_t + f(u)\varphi_x\bigr)\,dx\,dt. \tag{1.17}$$
Figure 1.3. An isolated discontinuity.
Since u is a classical solution inside each $D_i^\varepsilon$, we can use Green's theorem and obtain

$$\begin{aligned}
\iint_{D_i^\varepsilon} \bigl(u\varphi_t + f(u)\varphi_x\bigr)\,dx\,dt &= \iint_{D_i^\varepsilon} \bigl(u\varphi_t + f(u)\varphi_x + (u_t + f(u)_x)\varphi\bigr)\,dx\,dt \\
&= \iint_{D_i^\varepsilon} \bigl((u\varphi)_t + (f(u)\varphi)_x\bigr)\,dx\,dt \\
&= \int_{\partial D_i^\varepsilon} \varphi\,\bigl(-u\,dx + f(u)\,dt\bigr).
\end{aligned} \tag{1.18}$$

But φ is zero everywhere on $\partial D_i^\varepsilon$ except in the vicinity of x(t). Let $\Gamma_i^\varepsilon$ denote this part of $\partial D_i^\varepsilon$. Then

$$\lim_{\varepsilon\to 0} \int_{\Gamma_i^\varepsilon} \varphi\,\bigl(-u\,dx + f(u)\,dt\bigr) = \pm\int_{\Gamma\cap D} \varphi\,\bigl(-u_{l,r}\,dx + f_{l,r}\,dt\bigr).$$

Here $u_l$ denotes the limit of u(x, t) as x → x(t)−, and $u_r$ the limit as x approaches x(t) from the right, i.e., $u_r = \lim_{x\to x(t)+} u(x,t)$. Similarly, $f_{l,r}$ denotes $f(u_{l,r})$. The reason for the difference in sign is that according to Green's theorem, we must integrate along the boundary counterclockwise. Therefore, the positive sign holds for i = 1, and the negative for i = 2. Using (1.17) we obtain

$$\int_{\Gamma\cap D} \varphi\,\bigl[-\bigl(u_l(t) - u_r(t)\bigr)\,dx + \bigl(f_l(t) - f_r(t)\bigr)\,dt\bigr] = 0.$$
Since this is to hold for all test functions φ, we must have

$$s\,(u_l - u_r) = f_l - f_r, \tag{1.19}$$

where s = x′(t). This equality is called the Rankine–Hugoniot condition, and it expresses conservation of u across jump discontinuities. It is common in the theory of conservation laws to introduce a notation for the jump in
a quantity. Write

$$[\![a]\!] = a_r - a_l \tag{1.20}$$

for the jump in any quantity a. With this notation the Rankine–Hugoniot relation takes the form $s[\![u]\!] = [\![f]\!]$.
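In code, the Rankine–Hugoniot condition gives the speed of a jump directly from the flux. A minimal sketch (our own illustration, not from the text):

```python
def shock_speed(f, ul, ur):
    """Speed of a jump from ul to ur via the Rankine-Hugoniot condition
    s [[u]] = [[f]], i.e., s = (f(ur) - f(ul)) / (ur - ul)."""
    if ul == ur:
        raise ValueError("no jump: ul == ur")
    return (f(ur) - f(ul)) / (ur - ul)

# For Burgers' flux f(u) = u^2/2 this reduces to the mean value (ul + ur)/2.
burgers = lambda u: 0.5 * u * u
print(shock_speed(burgers, 1.0, 0.0))    # 0.5
print(shock_speed(burgers, 0.75, 0.25))  # 0.5
```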
♦ Example 1.5 (Burgers' equation (cont'd.)). For Burgers' equation we see that the shock speed must satisfy

$$s = \frac{[\![u^2/2]\!]}{[\![u]\!]} = \frac{u_r^2 - u_l^2}{2(u_r - u_l)} = \frac{1}{2}(u_l + u_r).$$

Consequently, the left shock in parts a and b in Figure 1.2 above will have greater speed than the right shock. Therefore, solutions of the type a or b cannot be isolated discontinuities moving along two trajectories starting at t = 1. Type c yields a stationary shock. ♦

♦ Example 1.6 (Traffic flow).

I am ill at these numbers.
W. Shakespeare, Hamlet (1603)
Rather than continue to develop the theory, we shall now consider an example of a conservation law in some detail. We will try to motivate how a conservation law can model the flow of cars on a crowded highway. Consider a road consisting of a single lane, with traffic in one direction only. The road is parameterized by a single coordinate x, and we assume that the traffic moves in the direction of increasing x. Suppose we position ourselves at a point x on the road and observe the number of cars N = N(x, t, h) in the interval [x, x + h]. If some car is located at the boundary of this interval, we account for that by allowing N to take any real value. If the traffic is dense, and if h is large compared with the average length of a car, but at the same time small compared with the length of our road, we can introduce the density given by

$$\rho(x,t) = \lim_{h\to 0} \frac{N(x,t,h)}{h}.$$

Then

$$N(x,t,h) = \int_x^{x+h} \rho(y,t)\,dy.$$

Let now the position of some vehicle be given by r(t), and its velocity by v(r(t), t). Considering the interval [a, b], we wish to determine how the number of cars changes in this interval. Since we have assumed that there are no entries or exits on our road, this number can change only as cars are entering the interval from the left endpoint, or leaving the
interval at the right endpoint. The rate of cars passing a point x at some time t is given by v(x, t)ρ(x, t). Consequently,

$$-\bigl(v(b,t)\rho(b,t) - v(a,t)\rho(a,t)\bigr) = \frac{d}{dt}\int_a^b \rho(y,t)\,dy.$$

Comparing this with (1.3) and (1.2), we see that the density satisfies the conservation law

$$\rho_t + (\rho v)_x = 0. \tag{1.21}$$

In the simplest case we assume that the velocity v is given as a function of the density ρ only. This may be a good approximation if the road is uniform and does not contain any sharp bends or similar obstacles that force the cars to slow down. It is also reasonable to assume that there is some maximal speed $v_{max}$ that any car can attain. When traffic is light, a car will drive at this maximum speed, and as the road gets more crowded, the cars will have to slow down, until they come to a complete standstill as the traffic stands bumper to bumper. Hence, we assume that the velocity v is a monotone decreasing function of ρ such that $v(0) = v_{max}$ and $v(\rho_{max}) = 0$. The simplest such function is a linear function, resulting in a flux function given by

$$f(\rho) = v\rho = \rho\, v_{max}\Bigl(1 - \frac{\rho}{\rho_{max}}\Bigr). \tag{1.22}$$

For convenience we normalize by introducing $u = \rho/\rho_{max}$ and $\tilde x = x/v_{max}$. The resulting normalized conservation law reads

$$u_t + \bigl(u(1-u)\bigr)_x = 0. \tag{1.23}$$
Setting $\tilde u = \frac{1}{2} - u$, we recover Burgers' equation, but this time with a new interpretation of the solution. Let us solve an initial value problem explicitly by the method of characteristics described earlier. We wish to solve (1.23), with initial function u₀(x) given by

$$u_0(x) = u(x,0) = \begin{cases} \frac{3}{4} & \text{for } x \le -a, \\ \frac{1}{2} - x/(4a) & \text{for } -a < x < a, \\ \frac{1}{4} & \text{for } a \le x. \end{cases}$$
The characteristics satisfy t′(ξ) = 1 and x′(ξ) = 1 − 2u(x(ξ), t(ξ)). The solution of these equations is given by x = x(t), where

$$x(t) = \begin{cases} x_0 - t/2 & \text{for } x_0 < -a, \\ x_0 + x_0 t/(2a) & \text{for } -a \le x_0 \le a, \\ x_0 + t/2 & \text{for } a < x_0. \end{cases}$$

Inserting this into the solution u(x, t) = u₀(x₀(x, t)), we find that

$$u(x,t) = \begin{cases} \frac{3}{4} & \text{for } x \le -a - t/2, \\ \frac{1}{2} - x/(4a+2t) & \text{for } -a - t/2 < x < a + t/2, \\ \frac{1}{4} & \text{for } a + t/2 \le x. \end{cases}$$

This solution models a situation where the traffic density initially is small for positive x, and high for negative x. If we let a tend to zero, the solution reads

$$u(x,t) = \begin{cases} \frac{3}{4} & \text{for } x \le -t/2, \\ \frac{1}{2} - x/(2t) & \text{for } -t/2 < x < t/2, \\ \frac{1}{4} & \text{for } t/2 \le x. \end{cases}$$

As the reader may check directly, this is also a classical solution everywhere except at x = ±t/2. It takes discontinuous initial values:

$$u(x,0) = \begin{cases} \frac{3}{4} & \text{for } x < 0, \\ \frac{1}{4} & \text{otherwise.} \end{cases} \tag{1.24}$$

This initial function may model the situation when a traffic light turns green at t = 0. Facing the traffic light the density is high, while on the other side of the light there is a small constant density. Initial value problems of the kind (1.24), where the initial function consists of two constant values, are called Riemann problems. We will discuss Riemann problems at great length in this book.

If we simply insert $u_l = \frac{3}{4}$ and $u_r = \frac{1}{4}$ in the Rankine–Hugoniot condition (1.19), we find another weak solution to this initial value problem. These left and right values give s = 0, so the solution found here is simply u₂(x, t) = u₀(x). A priori, this solution is no better or worse than the solution computed earlier. But when we examine the situation the equation is supposed to model, the second solution u₂ is unsatisfactory, since it describes a situation where the traffic light is green, but the density of cars facing the traffic light does not decrease! In the first solution the density decreased.
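The rarefaction fan above can be checked pointwise. The sketch below (illustrative, not from the text; the sample points are arbitrary) evaluates the a → 0 solution and confirms by central differences that u_t + (u(1−u))_x vanishes inside the fan, and that the solution is continuous at the fan edge x = −t/2.

```python
def u_fan(x, t):
    """The a -> 0 solution of the green-light Riemann problem (1.24)."""
    if x <= -t / 2:
        return 0.75
    if x >= t / 2:
        return 0.25
    return 0.5 - x / (2 * t)

flux = lambda u: u * (1 - u)
h = 1e-6
residuals = []
for (x, t) in [(-0.3, 1.0), (0.0, 1.0), (0.3, 1.0), (0.2, 2.0)]:
    u_t = (u_fan(x, t + h) - u_fan(x, t - h)) / (2 * h)
    f_x = (flux(u_fan(x + h, t)) - flux(u_fan(x - h, t))) / (2 * h)
    residuals.append(u_t + f_x)
print(residuals)  # each entry should be close to zero

# Continuity at the fan edge: the two one-sided values agree.
print(u_fan(-0.5, 1.0), u_fan(-0.5 + 1e-9, 1.0))  # both near 3/4
```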
Examining the model a little more closely we find, perhaps from experience of traffic jams, that the allowable discontinuities are those in which the density is increasing. This corresponds to the situation where there is a traffic jam ahead, and we suddenly have to slow down when we approach it.
When we emerge from a traffic jam, we experience a gradual decrease in the density of cars around us, not a sudden jump from a bumper to bumper situation to a relatively empty road. We have now formulated a condition, in addition to the Rankine–Hugoniot condition, that allows us to reduce the number of weak solutions to our conservation law. This condition says: Any weak solution u has to increase across discontinuities. Such conditions are often called entropy conditions. This terminology comes from gas dynamics, where similar conditions state that the physical entropy has to increase across any discontinuity. Let us consider the opposite initial value problem, namely,

    u0(x) = ⎧ 1/4  for x < 0,
            ⎩ 3/4  for x ≥ 0.

Now the characteristics starting at negative x0 are given by x(t) = x0 + t/2, and the characteristics starting on the positive half-line are given by x(t) = x0 − t/2. We see that these characteristics immediately will run into each other, and therefore the solution is multivalued for any positive time t. Thus there is no hope of finding a continuous solution to this initial value problem for any time interval [0, δ⟩, no matter how small δ is. When inserting the initial values ul = 1/4 and ur = 3/4 into the Rankine–Hugoniot condition, we see that the initial function is already a weak solution. This time, the solution increases across the discontinuity, and therefore satisfies our entropy condition. Thus, an admissible solution is given by u(x, t) = u0(x). Now we shall attempt to solve a more complicated problem in some detail. Assume that we have a road with a uniform density of cars initially. At t = 0 a traffic light placed at x = 0 changes from green to red. It remains red for some time interval Δt, then turns green again and stays green thereafter. We assume that the initial uniform density is given by u = 1/2, and we wish to determine the traffic density for t > 0.
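Before solving the traffic-light problem, the admissibility rule just formulated can be recorded for the traffic flux f(u) = u(1 − u): a jump is admissible exactly when the density increases across it. A small sketch (function names are ours):

```python
# Entropy rule for the traffic model f(u) = u(1 - u): a jump from ul to
# ur is admissible iff ul < ur; its speed comes from Rankine-Hugoniot.

def f(u):
    return u * (1 - u)

def shock_speed(ul, ur):
    """Rankine-Hugoniot speed s = [f]/[u]."""
    return (f(ur) - f(ul)) / (ur - ul)

def admissible(ul, ur):
    """Entropy condition for this model: density must increase."""
    return ul < ur

# Green light (3/4 behind the light, 1/4 beyond): not admissible, so the
# correct solution is a rarefaction, not a standing discontinuity.
assert not admissible(0.75, 0.25)
# The reverse jump is an admissible shock, and it is stationary:
assert admissible(0.25, 0.75)
assert shock_speed(0.25, 0.75) == 0.0
```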
When the traffic light initially turns red, the situation for the cars to the left of the traffic light is the same as when the cars stand bumper to bumper to the right of the traffic light. So in order to determine the situation for t in the interval [0, Δt⟩, we must solve the Riemann problem with the initial function

    u0^l(x) = ⎧ 1/2  for x < 0,        (1.25)
              ⎩ 1    for x ≥ 0.

For the cars to the right of the traffic light, the situation is similar to the situation when the traffic abruptly stopped at t = 0 behind the car located at x = 0. Therefore, to determine the density for x > 0 we have
to solve the Riemann problem given by

    u0^r(x) = ⎧ 0    for x < 0,        (1.26)
              ⎩ 1/2  for x ≥ 0.
Returning to (1.25), here u is increasing over the initial discontinuity, so we can try to insert this into the Rankine–Hugoniot condition. This gives
s=
fr − f l = ur − ul
1 4 1 2
−0 1 =− . 2 −1
Therefore, an admissible solution for x < 0 and t in the interval [0, Δt⟩ is given by

    ul(x, t) = ⎧ 1/2  for x < −t/2,
               ⎩ 1    for x ≥ −t/2.
This is indeed close to what we experience when we encounter a traffic light. We see the discontinuity approaching as the brake lights come on in front of us, and the discontinuity has passed us when we have come to a halt. Note that although each car moves only in the positive direction, the discontinuity moves to the left. In general, there are three different speeds when we study conservation laws: the particle speed, in our case the speed of each car; the characteristic speed; and the speed of a discontinuity. These three speeds are not equal if the conservation law is nonlinear. In our case, the speed of each car is nonnegative, but both the characteristic speed and the speed of a discontinuity may take both positive and negative values. Note that the speed of an admissible discontinuity is less than the characteristic speed to the left of the discontinuity, and larger than the characteristic speed to the right. This is a general feature of admissible discontinuities. It remains to determine the density for positive x. The initial function given by (1.26) also has a positive jump discontinuity, so we obtain an admissible solution if we insert it into the Rankine–Hugoniot condition. Then we obtain s = 1/2, so the solution for positive x is

    ur(x, t) = ⎧ 0    for x < t/2,
               ⎩ 1/2  for x ≥ t/2.
Piecing together ul and ur, we find that the density u in the time interval [0, Δt⟩ reads

    u(x, t) = ⎧ 1/2  for x ≤ −t/2,
              ⎨ 1    for −t/2 < x ≤ 0,
              ⎨ 0    for 0 < x ≤ t/2,
              ⎩ 1/2  for t/2 < x,

for t ∈ [0, Δt⟩.
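The pieced-together density can be written as a single function and checked for conservation: the two shocks carry equal and opposite excesses of cars relative to the background 1/2, so the total number of cars is unchanged. A sketch (our own names), using a midpoint-rule integral:

```python
# Density while the light is red, for 0 <= t < dt; two shocks at -t/2, t/2.

def u_red(x, t):
    if x <= -t / 2:
        return 0.5
    if x <= 0:
        return 1.0   # bumper to bumper behind the light
    if x <= t / 2:
        return 0.0   # empty road just beyond the light
    return 0.5

# Conservation: the integral of u - 1/2 over an interval containing
# both shocks stays zero.
t = 0.8
n = 40000
dx = 4.0 / n
mass = sum((u_red(-2.0 + (i + 0.5) * dx, t) - 0.5) * dx for i in range(n))
assert abs(mass) < 1e-6
```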
What happens for t > Δt? To find out, we have to solve the Riemann problem

    u(x, Δt) = ⎧ 1  for x < 0,
               ⎩ 0  for x ≥ 0.

Now the initial discontinuity is not acceptable according to our entropy condition, so we have to look for some other solution. We can try to mimic the example above where we started with a nonincreasing initial function that was linear on some small interval ⟨−a, a⟩. Therefore, let v(x, t) be the solution of the initial value problem vt + (v(1 − v))x = 0 with

    v(x, 0) = v0(x) = ⎧ 1             for x < −a,
                      ⎨ 1/2 − x/(2a)  for −a ≤ x < a,
                      ⎩ 0             for a ≤ x.
As in the above example, we find that the characteristics are not overlapping, and they fill out the positive half-plane exactly. The solution is given by v(x, t) = v0(x0(x, t)):

    v(x, t) = ⎧ 1                  for x < −a − t,
              ⎨ 1/2 − x/(2a + 2t)  for −a − t ≤ x < a + t,
              ⎩ 0                  for a + t ≤ x.

Letting a → 0, we obtain the solution to the Riemann problem with a left value 1 and a right value 0. For simplicity we also denote this function by v(x, t). This type of solution can be depicted as a “fan” of characteristics emanating from the origin, and it is called a centered rarefaction wave, or sometimes just a rarefaction wave. The origin of this terminology lies in gas dynamics. We see that the rarefaction wave, which is centered at (0, Δt), does not immediately influence the solution away from the origin. The leftmost part of the wave moves with a speed −1, and the front of the wave moves with speed 1. So for some time after Δt, the density is obtained by piecing together three solutions, ul(x, t), v(x, t − Δt), and ur(x, t).
The rarefaction wave will of course catch up with the discontinuities in the solutions ul and ur. Since the speeds of the discontinuities are ∓1/2, the speeds of the rear and the front of the rarefaction wave are ∓1, and the rarefaction wave starts at (0, Δt), we conclude that this will happen at (∓Δt, 2Δt). It remains to compute the solution for t > 2Δt. Let us start by examining what happens for positive x. Since the u values that are transported along the characteristics in the rarefaction wave are less than 1/2, we can construct an admissible discontinuity by using the Rankine–Hugoniot condition (1.19). Define a function that has a discontinuity moving along a path x(t). The value to the right of the discontinuity is 1/2, and the value to the left is determined by v(x, t − Δt). Inserting this into (1.19) we get
    x′(t) = s = [ 1/4 − (1/2 + x/(2(t − Δt)))(1/2 − x/(2(t − Δt))) ] / [ 1/2 − (1/2 − x/(2(t − Δt))) ]
              = x/(2(t − Δt)).

Since x(2Δt) = Δt, this differential equation has the solution

    x+(t) = √(Δt(t − Δt)).
The situation is similar for negative x. Here, we use the fact that the u values in the left part of the rarefaction fan are larger than 12 . This gives a discontinuity with a left value 12 and right values taken from the rarefaction wave. The path of this discontinuity is found to be x− (t) = −x+ (t). Now we have indeed found a solution that is valid for all positive time. This function has the property that it is a classical solution at all points x and t where it is differentiable, and it satisfies both the Rankine– Hugoniot condition and the entropy condition at points of discontinuity. We show this weak solution in Figure 1.4, both in the (x, t) plane, where we show characteristics and discontinuities, and u as a function of x for various times. The characteristics are shown as gray lines, and the discontinuities as thicker black lines. This concludes our example. Note that we have been able to find the solution to a complicated initial value problem by piecing together solutions from Riemann problems. This is indeed the main idea behind front tracking, and a theme to which we shall give considerable attention in this book. ♦
Figure 1.4. A traffic light on a single road. To the left we show the solution in (x, t), and to the right the solution u(x, t) at three different times t.
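The shock path x+(t) = √(Δt(t − Δt)) found above can be verified numerically: it must satisfy the differential equation x′ = x/(2(t − Δt)) and match the initial condition x(2Δt) = Δt. A sketch with Δt = 1 (names are ours):

```python
import math

dt = 1.0  # the red-light duration (Delta t)

def x_plus(t):
    """Candidate shock path for t >= 2*dt."""
    return math.sqrt(dt * (t - dt))

# Initial condition at t = 2*dt:
assert abs(x_plus(2 * dt) - dt) < 1e-12

# Compare a central finite difference of x_plus with the right-hand
# side of the ODE at a few times t > 2*dt.
h = 1e-6
for t in [2.5, 3.0, 4.0]:
    lhs = (x_plus(t + h) - x_plus(t - h)) / (2 * h)
    rhs = x_plus(t) / (2 * (t - dt))
    assert abs(lhs - rhs) < 1e-6
```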
1.1 Notes

Never any knowledge was delivered in the same order it was invented.³
Sir Francis Bacon (1561–1626)
The simplest nontrivial conservation law, the inviscid Burgers' equation, has been extensively analyzed. Burgers introduced the “nonlinear diffusion equation”

    ut + (u²/2)x = uxx,        (1.27)

which is currently called (the viscous) Burgers' equation, in 1940 [25] (see also [26]) as a model of turbulence. Burgers' equation is linearized, and thereby solved, by the Cole–Hopf transformation [33], [73]. Both the equation and the Cole–Hopf transformation were, however, known already in 1906; see Forsyth [48, p. 100]. See also Bateman [10]. The most common elementary example of application of scalar conservation laws is the model of traffic flow called “traffic hydrodynamics” that was

³ In Valerius Terminus: Of the Interpretation of Nature, c. 1603.
introduced independently by Lighthill and Whitham [101] and Richards [118]. A modern treatment can be found in Haberman [61]. The jump condition, or the Rankine–Hugoniot condition, was derived heuristically from the conservation principle independently by Rankine in 1870 [116] and Hugoniot in 1886 [76, 77, 78]. Our presentation of the Rankine–Hugoniot condition is taken from Smoller [130]. The notion of “Riemann problem” is fundamental in the theory of conservation laws. It was introduced by Riemann in 1859 [119, 120] in the context of gas dynamics. He studied the situation where one initially has two gases with different (constant) pressures and densities separated by a thin membrane in a one-dimensional cylindrical tube. See [72] for a historical discussion. There are by now several books on various aspects of hyperbolic conservation laws, starting with the classical book by Courant and Friedrichs [37]. Nice treatments with emphasis on the mathematical theory can be found in books by Lax [96], Chorin and Marsden [29], Roždestvenskiĭ and Janenko [126], Smoller [130], Málek et al. [107], Hörmander [74], Liu [104], Serre [128, 129], Bressan [18], and Dafermos [42]. The books by Godlewski and Raviart [58, 59], LeVeque [98], Kröner [86], Toro [140], and Thomas [139] focus more on the numerical theory.
Exercises

1.1 Determine characteristics for the following quasilinear equations:

    ut + sin(x)ux = u,
    sin(t)ut + cos(x)ux = 0,
    ut + sin(u)ux = u,
    sin(u)ut + cos(u)ux = 0.

1.2 Use characteristics to solve the following initial value problems:

    a. u ux + x uy = 0,  u(0, s) = 2s for s > 0.
    b. e^y ux + u uy + u² = 0,  u(x, 0) = 1/x for x > 0.
    c. x uy − y ux = u,  u(x, 0) = h(x) for x > 0.
    d. (x + 1)² ux + (y − 1)² uy = (x + y)u,  u(x, 0) = −1 − x.
    e. ux + 2x uy = x + xu,  u(1, y) = e^y − 1.
    f. ux + 2x uy = x + xu,  u(0, y) = y² − 1.
    g. x u ux + uy = 2y,  u(x, 0) = x.
1.3 Find the shock condition (i.e., the Rankine–Hugoniot condition) for one-dimensional systems, i.e., the unknown u is a vector u = (u1, . . . , un) for some n > 1, and also f(u) = (f1(u), . . . , fn(u)).

1.4 Consider a scalar conservation law in two space dimensions,

    ut + ∂f(u)/∂x + ∂g(u)/∂y = 0,

where the flux functions f and g are continuously differentiable. Now the unknown u is a function of x, y, and t. Determine the Rankine–Hugoniot condition across a jump discontinuity in u, assuming that u jumps across a regular surface in (x, y, t). Try to generalize your answer to a conservation law in n space dimensions.

1.5 We shall consider a linearization of Burgers' equation. Let

    u0(x) = ⎧ 1   for x < −1,
            ⎨ −x  for −1 ≤ x ≤ 1,
            ⎩ −1  for 1 < x.
    a. First determine the maximum time that the solution of the initial value problem

        ut + (u²/2)x = 0,  u(x, 0) = u0(x),

    will remain continuous. Find the solution for t less than this time.

    b. Then find the solution v of the linearized problem

        vt + u0(x)vx = 0,  v(x, 0) = u0(x).

    Determine the solution also in the case where v(x, 0) = u0(αx), where α is nonnegative.

    c. Next, we shall determine a procedure for finding u by solving a sequence of linearized equations. Fix n ∈ N. For t in the interval ⟨m/n, (m + 1)/n] and m ≥ 0, let vn solve

        (vn)t + vn(x, m/n) (vn)x = 0,
    and set vn(x, 0) = u0(x). Then show that

        vn(x, m/n) = u0(αm,n x),

    and find a recursive relation (in m) satisfied by αm,n.

    d. Assume that

        lim_{n→∞} αm,n = ᾱ(t),

    for some continuously differentiable ᾱ(t), where t = m/n < 1. Show that ᾱ(t) = 1/(1 − t), and thus vn(x) → u(x) for t < 1. What happens for t ≥ 1?

1.6
    a. Solve the initial value problem for Burgers' equation

        ut + (u²/2)x = 0,  u(x, 0) = ⎧ −1  for x < 0,        (1.28)
                                     ⎩ 1   for x ≥ 0.

    b. Then find the solution where the initial data are

        u(x, 0) = ⎧ 1   for x < 0,
                  ⎩ −1  for x ≥ 0.

    c. If we multiply Burgers' equation by u, we formally find that u satisfies

        (u²/2)t + (u³/3)x = 0.        (1.29)

    Are the solutions to (1.28) you found in parts a and b weak solutions to (1.29)? If not, then find the corresponding weak solutions to (1.29). Warning: This shows that manipulations valid for smooth solutions are not necessarily true for weak solutions.

1.7 ([130, p. 250]) Show that

    u(x, t) = ⎧ 1   for x ≤ (1 − α)t/2,
              ⎨ −α  for (1 − α)t/2 < x ≤ 0,
              ⎨ α   for 0 < x ≤ (α − 1)t/2,
              ⎩ −1  for x ≥ (α − 1)t/2

is a weak solution of

    ut + (u²/2)x = 0,  u(x, 0) = ⎧ 1   for x ≤ 0,
                                 ⎩ −1  for x > 0,

for all α ≥ 1. Warning: Thus we see that weak solutions are not necessarily unique.
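As a hint for Exercise 1.7, the Rankine–Hugoniot condition can be checked mechanically at each of the three discontinuities; the sketch below (our own names) does so for several values of α:

```python
# Rankine-Hugoniot check for the family of weak solutions in
# Exercise 1.7, with f(u) = u**2 / 2 and any alpha >= 1.

def f(u):
    return 0.5 * u * u

def rh_ok(ul, ur, s, tol=1e-12):
    """Does s*(ur - ul) = f(ur) - f(ul) hold?"""
    return abs(s * (ur - ul) - (f(ur) - f(ul))) < tol

for alpha in [1.0, 2.0, 5.0]:
    # the three discontinuities: (left value, right value, speed)
    jumps = [
        (1.0, -alpha, (1 - alpha) / 2),   # along x = (1 - alpha) t / 2
        (-alpha, alpha, 0.0),             # along x = 0
        (alpha, -1.0, (alpha - 1) / 2),   # along x = (alpha - 1) t / 2
    ]
    assert all(rh_ok(ul, ur, s) for ul, ur, s in jumps)
```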
2 Scalar Conservation Laws

It is a capital mistake to theorise before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.
Sherlock Holmes, A Scandal in Bohemia (1891)
In this chapter we consider the Cauchy problem for a scalar conservation law. Our goal is to show that, subject to certain conditions, there exists a unique solution to the general initial value problem. Our method will be completely constructive, and we shall exhibit a procedure by which this solution can be constructed. This procedure is, of course, front tracking. The basic ingredient in the front-tracking algorithm is the solution of the Riemann problem. Already in the example on traffic flow, we observed that conservation laws may have several weak solutions, and that some principle is needed to pick out the correct ones. The problem of lack of uniqueness for weak solutions is intrinsic in the theory of conservation laws. There are by now several different approaches to this problem, and they are commonly referred to as “entropy conditions.” Thus the solution of Riemann problems requires some mechanism to choose one of possibly several weak solutions. Therefore, before we turn to front tracking, we will discuss entropy conditions.
2.1 Entropy Conditions

We study the conservation law¹

    ut + f(u)x = 0,        (2.1)
whose solutions u = u(x, t) are to be understood in the distributional sense; see (1.16). We will not state any continuity properties of f, but tacitly assume that f is sufficiently smooth for all subsequent formulas to make sense. One of the most common entropy conditions is so-called viscous regularization, where the scalar conservation law ut + f(u)x = 0 is replaced by ut + f(u)x = εuxx. The idea is that the physical problem has some diffusion, and that the conservation law represents a limit model when the diffusion is small. Based on this, one is looking for solutions of the conservation law that are limits of the regularized equation when ε → 0. Therefore, we are interested in the viscous regularization of the conservation law (2.1),

    uεt + f(uε)x = εuεxx,        (2.2)

as ε → 0. In order for this equation to be well posed, ε must be nonnegative. Equations such as (2.2) are called viscous, since the right-hand side εuεxx models the effect of viscosity or diffusion. We then demand that the distributional solutions of (2.1) should be limits of solutions of the more fundamental equation (2.2) as the viscous term disappears. This has some interesting consequences. Assume that (2.1) has a solution consisting of constant states on each side of a discontinuity moving with a speed s, i.e.,

    u(x, t) = ⎧ ul  for x < st,        (2.3)
              ⎩ ur  for x ≥ st.

We say that u(x, t) satisfies a traveling wave entropy condition if u(x, t) is the pointwise limit almost everywhere of some uε(x, t) = U((x − st)/ε) as ε → 0, where uε solves (2.2) in the classical sense. Inserting U((x − st)/ε) into (2.2), we obtain

    −sU̇ + df(U)/dξ = Ü.        (2.4)

Here U = U(ξ), ξ = (x − st)/ε, and U̇ denotes the derivative of U with respect to ξ. This equation can be integrated once, yielding

    U̇ = −sU + f(U) + A,        (2.5)
¹ The analysis up to and including (2.7) could have been carried out for systems on the line as well.
where A is a constant of integration. We see that as ε → 0, ξ tends to plus or minus infinity, depending on whether x − st is positive or negative. If u should be the limit of uε, we must have that

    lim_{ε→0} uε = lim_{ε→0} U((x − st)/ε) = ⎧ lim_{ξ→−∞} U(ξ) = ul  for x < st,
                                             ⎩ lim_{ξ→+∞} U(ξ) = ur  for x > st.

It follows that

    lim_{ξ→±∞} U̇(ξ) = 0.

Inserting this into (2.5), we obtain (recall that fr = f(ur), etc.)

    A = sul − fl = sur − fr,        (2.6)
which again gives us the Rankine–Hugoniot condition s(ul − ur) = fl − fr. Summing up, the traveling wave U must satisfy the following boundary value problem:

    U̇ = −s(U − ul) + (f(U) − fl),  U(−∞) = ul,  U(+∞) = ur.        (2.7)

Using the Rankine–Hugoniot condition, we see that both ul and ur are fixed points for this equation. What we want is an orbit of (2.7) going from ul to ur. If the triplet (s, ul, ur) has such an orbit, we say that the discontinuous solution (2.3) satisfies a traveling wave entropy condition, or that the discontinuity has a viscous profile. (For the analysis so far in this section we were not restricted to the scalar case, and could as well have worked with the case of systems where u is a vector in Rn and f(u) is some function Rn → Rn.) From now on we say that any isolated discontinuity satisfies the traveling wave entropy condition if (2.7) holds locally across the discontinuity. Let us examine this in more detail. First we assume that ul < ur. Observe that U̇ can never be zero. Assuming otherwise, namely that U̇(ξ0) = 0 for some ξ0, the constant U(ξ0) would be the unique solution, which contradicts that U(−∞) = ul < ur = U(∞). Thus U̇(ξ) > 0 for all ξ, and hence

    fl + s(u − ul) < f(u)        (2.8)

for all u ∈ ⟨ul, ur⟩. Remembering that according to the Rankine–Hugoniot condition s = (fl − fr)/(ul − ur), this means that the graph of f(u) has to lie above the straight line segment joining the points (ul, fl) and (ur, fr). On the other hand, if the graph of f(u) is above the straight line, then (2.8) is satisfied, and we can find a solution of (2.7). Similarly, if ul > ur, U̇ must be negative in the whole interval ⟨ur, ul⟩. Consequently, the graph of f(u) must be below the straight line.
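The profile equation (2.7) can be integrated numerically. For Burgers' flux f(u) = u²/2 with ul = 1 > ur = −1 the shock speed is s = 0, and the orbit should run monotonically from ul down to ur. A sketch with a simple explicit Euler scheme (our own names):

```python
# Numerical sketch of the profile equation (2.7) for Burgers' flux.

def f(u):
    return 0.5 * u * u

ul, ur, s = 1.0, -1.0, 0.0

def rhs(U):
    """Right-hand side of (2.7): -s*(U - ul) + (f(U) - f(ul))."""
    return -s * (U - ul) + (f(U) - f(ul))

# Explicit Euler, starting slightly below the fixed point ul.
U, h = ul - 1e-3, 0.01
values = [U]
for _ in range(5000):
    U += h * rhs(U)
    values.append(U)

assert all(b <= a for a, b in zip(values, values[1:]))  # monotone decrease
assert abs(values[-1] - ur) < 1e-2                      # orbit approaches ur
```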
By combining the two cases we conclude that the viscous profile or traveling wave entropy condition is equivalent to

    s|k − ul| < sign(k − ul)(f(k) − f(ul))        (2.9)

for all k strictly between ul and ur. Note that an identical inequality holds with ul replaced by ur. The inequality (2.9) motivates another entropy condition, the Kružkov entropy condition. This condition is often more convenient to work with in the sense that it combines the definition of a weak solution with that of the entropy condition. Choose a smooth convex function η = η(u) and a nonnegative test function φ in C0∞(R × ⟨0, ∞⟩). (Such a test function will be supported away from the x-axis, and thus we get no contribution from the initial data.) Then we find

    0 = ∬ (ut + f(u)x − εuxx) η′(u)φ dx dt
      = ∬ η(u)t φ dx dt + ∬ q′(u)ux φ dx dt − ε ∬ (η(u)xx − η″(u)(ux)²) φ dx dt
      = −∬ η(u)φt dx dt − ∬ q(u)φx dx dt − ε ∬ η(u)φxx dx dt + ε ∬ η″(u)(ux)² φ dx dt
      ≥ −∬ (η(u)φt + q(u)φx + εη(u)φxx) dx dt,        (2.10)

where we first introduced q such that

    q′(u) = f′(u)η′(u)        (2.11)

and subsequently used the convexity of η, i.e., η″ ≥ 0. Interpreted in a distributional sense we may write this as

    (∂/∂t) η + (∂/∂x) q ≤ εηxx.

If this is to hold as ε → 0, we need

    (∂/∂t) η + (∂/∂x) q ≤ 0.

Consider now the case with

    η(u) = ((u − k)² + δ²)^(1/2),  δ > 0,        (2.12)

for some constant k. By taking δ → 0 we can extend the analysis to the case where

    η(u) = |u − k|.        (2.13)
In this case we find that

    q(u) = sign(u − k)(f(u) − f(k)).

Remark 2.1. Consider a fixed bounded weak solution u and a nonnegative test function φ, and define the linear functional

    Λ(η) = ∬ (η(u)φt + q(u)φx) dx dt.        (2.14)

(The function q depends linearly on η; cf. (2.11).) Assume that Λ(η) ≥ 0 when η is convex. Introduce

    ηi(u) = αi|u − ki|,  ki ∈ R,  αi ≥ 0.

Clearly, Λ(Σi ηi) ≥ 0. Since u is a weak solution, we have

    Λ(αu + β) = 0,  α, β ∈ R,

and hence the convex piecewise linear function

    η(u) = αu + β + Σi ηi(u)        (2.15)

satisfies Λ(η) ≥ 0. On the other hand, any convex piecewise linear function η can be written in the form (2.15). This can be proved by induction on the number of breakpoints for η, where by breakpoints we mean those points where η′ is discontinuous. The induction step goes as follows. Consider a breakpoint for η, which we without loss of generality can assume is at the origin. Near the origin we may write η as

    η(u) = ⎧ σ1 u  for u ≤ 0,
           ⎩ σ2 u  for u > 0,

for |u| small. Since η is convex, σ1 < σ2. Then the function

    η̃(u) = η(u) − (1/2)(σ2 − σ1)|u| − (1/2)(σ1 + σ2)u        (2.16)

is a convex piecewise linear function with one breakpoint fewer than η, for which one can use the induction hypothesis. Hence we infer that Λ(η) ≥ 0 for all convex, piecewise linear functions η. Consider now any convex function η. By sampling points, we can approximate η with convex, piecewise linear functions ηj such that ηj → η in L∞. Thus we find that Λ(η) ≥ 0. We conclude that if Λ(η) ≥ 0 for the Kružkov function η(u) = |u − k| for all k ∈ R, then this inequality holds for all convex functions.
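The induction step (2.16) can be checked directly: subtracting (1/2)(σ2 − σ1)|u| + (1/2)(σ1 + σ2)u from a function equal to σ1 u for u ≤ 0 and σ2 u for u > 0 leaves zero near the origin, removing the breakpoint there. A sketch (names are ours):

```python
# Breakpoint removal as in (2.16), verified at sample points.

def eta(u, s1, s2):
    """Piecewise linear kink at the origin: s1*u for u <= 0, s2*u for u > 0."""
    return s1 * u if u <= 0 else s2 * u

def eta_tilde(u, s1, s2):
    """The function (2.16); it should vanish identically near the origin."""
    return eta(u, s1, s2) - 0.5 * (s2 - s1) * abs(u) - 0.5 * (s1 + s2) * u

s1, s2 = -2.0, 3.0   # convexity requires s1 < s2
for u in [-1.0, -0.3, 0.0, 0.7, 1.5]:
    assert abs(eta_tilde(u, s1, s2)) < 1e-12
```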
We say that a function u is a Kružkov entropy solution to (2.1) if the inequality

    (∂/∂t)|u − k| + (∂/∂x)[ sign(u − k)(f(u) − f(k)) ] ≤ 0        (2.17)

holds in the sense of distributions for all constants k. Thus, in integral form, (2.17) appears as

    ∬ ( |u − k| φt + sign(u − k)(f(u) − f(k)) φx ) dx dt ≥ 0,        (2.18)

which is to hold for all constants k and all nonnegative test functions φ in C0∞(R × ⟨0, ∞⟩). If we consider solutions on a time interval [0, T], and thus use nonnegative test functions φ ∈ C0∞(R × [0, T]), we find that

    ∫0^T ∫ [ |u − k| φt + sign(u − k)(f(u) − f(k)) φx ] dx dt
      − ∫ |u(x, T) − k| φ(x, T) dx + ∫ |u0(x) − k| φ(x, 0) dx ≥ 0        (2.19)

should hold for all k ∈ R and for all nonnegative test functions φ in C0∞(R × [0, T]). If we assume that u is bounded, and set k ≤ −‖u‖∞, (2.18) gives

    0 ≤ ∬ ((u − k)φt + (f(u) − f(k))φx) dx dt = ∬ (uφt + f(u)φx) dx dt.

Similarly, setting k ≥ ‖u‖∞ gives

    0 ≥ ∬ (uφt + f(u)φx) dx dt.

These two inequalities now imply that

    ∬ (uφt + f(u)φx) dx dt = 0        (2.20)

for all nonnegative φ. By considering test functions φ in C0∞(R × ⟨0, ∞⟩) of the form φ+ − φ−, with φ± ∈ C0∞(R × ⟨0, ∞⟩) nonnegative, equation (2.20) implies the usual definition (1.16) of a weak solution. For sufficiently regular weak solutions we can show that the traveling wave entropy condition and the Kružkov entropy condition are equivalent. Assume that u is a classical solution away from isolated jump discontinuities along piecewise smooth curves, and that it satisfies the Kružkov entropy condition (2.18). We may apply the argument used to derive the Rankine–Hugoniot condition to (2.18). This gives us the following inequality to be satisfied for all k:

    s[[ |u − k| ]] ≥ [[ sign(u − k)(f(u) − f(k)) ]]        (2.21)
(recall that [[a]] = ar − al for any quantity a). Conversely, if (2.21) holds, we can multiply by a nonnegative test function φ with support in a neighborhood of the discontinuity and integrate to obtain (2.18). If we assume that ul < ur, and choose k to be between ul and ur, we obtain

    s(−(ul − k) − (ur − k)) ≤ −(fr − f(k)) − (fl − f(k)),

or

    f(k) − sk ≥ f̄ − sū.        (2.22)

Here, f̄ denotes (fl + fr)/2, and similarly, ū = (ul + ur)/2. In other words, the graph of f(u) must lie above the straight line segment between (ul, fl) and (ur, fr). Similarly, if ur < ul, we find that the graph has to lie below the line segment. Hence the Kružkov entropy condition implies (2.9). Assume now that u is a piecewise continuous weak solution whose discontinuities satisfy (2.9), and that u is differentiable, and consequently a classical solution of (2.1), in some neighborhood of a point (x, t). In this neighborhood we have

    0 = ut + f(u)x = (u − k)t + (f(u) − f(k))x,

for any constant k. If we choose a nonnegative test function φ with support in a small neighborhood of (x, t), and a constant k such that u > k where φ ≠ 0, we have

    ∬ ( |u − k| φt + sign(u − k)(f(u) − f(k)) φx ) dx dt = 0.        (2.23)

If k > u where φ ≠ 0, we obtain the same equality. Therefore, since u is continuous around (x, t) and φ can be chosen to have arbitrarily small support, (2.23) holds for all nonnegative φ having support where u is a classical solution. If u has an isolated discontinuity, with limits ul and ur that are such that the Rankine–Hugoniot equality holds, then

    s[[ |u − k| ]] = [[ sign(u − k)(f(u) − f(k)) ]]        (2.24)

for any constant k not between ul and ur. For constants k between ul and ur, we have seen that if f(k) − sk ≥ f̄ − sū, i.e., the viscous profile entropy condition holds, then

    s[[ |u − k| ]] ≥ [[ sign(u − k)(f(u) − f(k)) ]],        (2.25)

and thus the last two equations imply that the Kružkov entropy condition will be satisfied. Hence, for sufficiently regular solutions, these two entropy conditions are equivalent. We shall later see that Kružkov's entropy condition implies that for sufficiently regular initial data, the solution does indeed possess the necessary regularity; consequently, these two entropy conditions “pick”
the same solutions. We will therefore in the following use whichever entropy condition is more convenient to work with.
2.2 The Riemann Problem

With my two algorithms one can solve all problems—without error, if God will!
Al-Khwarizmi (c. 780–850)
For conservation laws, the Riemann problem is the initial value problem

    ut + f(u)x = 0,  u(x, 0) = ⎧ ul  for x < 0,        (2.26)
                               ⎩ ur  for x ≥ 0.

Assume temporarily that f ∈ C² with finitely many inflection points. We have seen examples of Riemann problems and their solutions in the previous chapter, in the context of traffic flow. Since both the equation and the initial data are invariant under the transformation x → kx and t → kt, it is reasonable to look for solutions of the form u = u(x, t) = w(x/t). We set z = x/t and insert this into (2.26) to obtain

    −(x/t²)w′ + (1/t)f′(w)w′ = 0,  or  z = f′(w).        (2.27)

If f′ is strictly monotone, we can simply invert this relation to obtain the solution w = (f′)⁻¹(z). In the general case we have to replace f′ by a monotone function on the interval between ul and ur. In the example of traffic flow, we saw that it was important whether ul < ur or vice versa. Assume first that ul < ur. Now we claim that the solution of (2.27) is given by

    u(x, t) = w(z) = ⎧ ul             for x ≤ f⌣′(ul)t,        (2.28)
                     ⎨ (f⌣′)⁻¹(x/t)  for f⌣′(ul)t ≤ x ≤ f⌣′(ur)t,
                     ⎩ ur             for x ≥ f⌣′(ur)t,

for ul < ur. Here f⌣ denotes the lower convex envelope of f in the interval [ul, ur], and ((f⌣)′)⁻¹, or, to be less pedantic, (f⌣′)⁻¹, denotes the inverse of its derivative. The lower convex envelope is defined to be the largest convex function that is smaller than or equal to f in the interval [ul, ur], i.e.,

    f⌣(u) = sup { g(u) | g ≤ f and g convex on [ul, ur] }.        (2.29)

To picture the envelope of f, we can imagine the graph of f cut out from a board, so that the lower boundary of the board has the shape of the graph. An elastic rubber band stretched from (ul, f(ul)) to (ur, f(ur)) will then
follow the graph of f⌣. Note that f⌣ depends on the interval [ul, ur], and thus is a highly nonlocal function of f. Since f⌣″ ≥ 0, we have that f⌣′ is nondecreasing, and hence we can form its inverse, denoted by (f⌣′)⁻¹, permitting jump discontinuities where f⌣′ is constant. Hence formula (2.28) at least makes sense. In Figure 2.1 we see a flux function and the envelope between two points ul and ur.
Figure 2.1. In a series of figures we will illustrate the solution of an explicit Riemann problem. We start by giving the flux function f, two states ul and ur (with ul < ur), and the convex envelope of f relative to the interval [ul, ur].
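The lower convex envelope (2.29) can be computed on a grid as the lower convex hull of the sampled graph. The sketch below is our own implementation, not from the text; it uses a monotone-chain scan and then interpolates the hull back onto the grid:

```python
# Lower convex envelope of sampled data (us strictly increasing).

def lower_convex_envelope(us, fs):
    """Return envelope values at the grid points us."""
    def cross(o, a, b):
        return (us[a] - us[o]) * (fs[b] - fs[o]) - (fs[a] - fs[o]) * (us[b] - us[o])

    hull = []                       # indices of lower-hull vertices
    for i in range(len(us)):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], i) <= 0:
            hull.pop()
        hull.append(i)

    env, j = [], 0
    for i in range(len(us)):
        while us[hull[j + 1]] < us[i]:   # hull segment containing us[i]
            j += 1
        a, b = hull[j], hull[j + 1]
        t = (us[i] - us[a]) / (us[b] - us[a])
        env.append(fs[a] + t * (fs[b] - fs[a]))
    return env

# f(u) = u**3 on [-1, 1]: concave left of 0, convex right of 0, so the
# envelope is the chord from (-1, -1) tangent at u = 1/2, then f itself.
us = [i / 500 - 1 for i in range(1001)]
fs = [u ** 3 for u in us]
env = lower_convex_envelope(us, fs)
assert all(e <= f_val + 1e-9 for e, f_val in zip(env, fs))
assert abs(env[500] + 0.25) < 1e-9      # chord value at u = 0
assert abs(env[900] - fs[900]) < 1e-9   # envelope equals f for u > 1/2
```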
If f ∈ C² with finitely many inflection points, there will be a finite number of intervals with endpoints ul = u1 < u2 < · · · < un = ur such that f⌣ = f on every other interval. That is, if f⌣(u) = f(u) for u ∈ [ui, ui+1], then f⌣(u) < f(u) for u ∈ ⟨ui+1, ui+2⟩ ∪ ⟨ui−1, ui⟩. In this case the solution u( · , t) consists of finitely many intervals where u is a regular solution given by u(x, t) = (f⌣′)⁻¹(x/t), separated by jump discontinuities at points x such that

    x = f⌣′(uj)t = t(f(uj+1) − f(uj))/(uj+1 − uj) = f⌣′(uj+1)t,

which clearly satisfy the Rankine–Hugoniot relation. In Figure 2.1 we have three intervals, where f⌣ < f on the middle interval. To show that (2.28) defines an entropy solution, we shall need some notation. For i = 1, . . . , n set σi = f⌣′(ui) and define σ0 = −∞, σn+1 = ∞. By discarding identical σi's and relabeling if necessary, we can and will assume that σ0 < σ1 < · · · < σn+1. Then for i = 2, . . . , n define

    vi(x, t) = (f⌣′)⁻¹(x/t),  σi−1 ≤ x/t ≤ σi,

and set v1(x, t) = ul for x ≤ σ1 t and vn+1(x, t) = ur for x ≥ σn t. Let Ωi denote the set

    Ωi = { (x, t) | 0 ≤ t ≤ T, σi−1 t < x < σi t }

for i = 1, . . . , n + 1. Using this notation, u defined by (2.28) can be written

    u(x, t) = Σ_{i=1}^{n+1} χΩi(x, t) vi(x, t),        (2.30)

where χΩi denotes the characteristic function of the set Ωi. For i = 1, . . . , n we then define

    u̲i = lim_{x→σi t−} u(x, t)  and  ūi = lim_{x→σi t+} u(x, t).
The values u̲i and ūi are the left and right limits of u at the discontinuities. With this notation at hand, we show that u defined by (2.30) is an entropy solution in the sense of (2.19) of the initial value problem (2.26). Note that u is also continuously differentiable in each Ωi. First we use Green's theorem (similarly as in proving the Rankine–Hugoniot relation (1.18)) on each Ωi to show that

    ∫0^T ∫ (ηφt + qφx) dx dt = Σ_{i=1}^{n+1} ∬_{Ωi} (ηi φt + qi φx) dx dt
      = Σ_{i=1}^{n+1} ∬_{Ωi} ((ηi φ)t + (qi φ)x) dx dt
      = Σ_{i=1}^{n+1} ∮_{∂Ωi} φ (−ηi dx + qi dt)
      = ∫ ( η(x, T)φ(x, T) − η(x, 0)φ(x, 0) ) dx
        + Σ_{i=1}^{n} ∫0^T φ(σi t, t) ( σi (η̄i − η̲i) − (q̄i − q̲i) ) dt.

Here

    η = η(u, k) = |u − k|,  ηi = η(vi(x, t), k),  η̄i = η(ūi, k),  η̲i = η(u̲i, k),
    q = q(u, k) = sign(u − k)(f(u) − f(k)),  qi = q(vi(x, t), k),  q̄i = q(ūi, k),  q̲i = q(u̲i, k).

Now we observe that by (2.24) and (2.25) we have

    σi (η̄i − η̲i) − (q̄i − q̲i) ≥ 0

for all constants k. Hence

    ∫0^T ∫ (ηφt + qφx) dx dt + ∫ ( η(x, 0)φ(x, 0) − η(x, T)φ(x, T) ) dx ≥ 0,
i.e., $u$ satisfies (2.19). Now we have found an entropy-satisfying solution to the Riemann problem if $u_l < u_r$. If $u_l > u_r$, we can transform the problem to the case discussed above by sending $x \to -x$. Then we obtain the Riemann problem
\[ u_t - f(u)_x = 0, \qquad u(x,0) = \begin{cases} u_r & \text{for } x < 0, \\ u_l & \text{for } x \ge 0. \end{cases} \]
In order to solve this we have to take the lower convex envelope of $-f$ from $u_r$ to $u_l$. But this envelope is exactly the negative of the upper concave envelope from $u_l$ to $u_r$. The upper concave envelope is defined to be
\[ f^\frown(u) = \inf\bigl\{ g(u) \bigm| g \ge f \text{ and } g \text{ concave on } [u_r, u_l] \bigr\}. \tag{2.31} \]
In this case the weak solution is given by
\[ u(x,t) = w(z) = \begin{cases} u_l & \text{for } x \le (f^\frown)'(u_l)\,t, \\ \bigl((f^\frown)'\bigr)^{-1}(z) & \text{for } (f^\frown)'(u_l)\,t \le x \le (f^\frown)'(u_r)\,t, \\ u_r & \text{for } x \ge (f^\frown)'(u_r)\,t, \end{cases} \tag{2.32} \]
for $u_l > u_r$, where $z = x/t$.

Figure 2.2. The function $f'$ (left) and its inverse (right).
This construction of the solution is valid as long as the envelope consists of a finite number of intervals where the envelope $f_\smile$ differs from $f$, alternating with intervals where the envelope and the function coincide. We will later extend the solution to the case where $f$ is only Lipschitz continuous. We have now proved a theorem about the solution of the Riemann problem for scalar conservation laws.

Theorem 2.2. The initial value problem
\[ u_t + f(u)_x = 0, \qquad u(x,0) = \begin{cases} u_l & \text{for } x < 0, \\ u_r & \text{for } x \ge 0, \end{cases} \]
2. Scalar Conservation Laws
with a flux function $f(u)$ such that the relevant envelope differs from $f$ on finitely many intervals, alternating with intervals where they coincide, has a weak solution given by equation (2.28) if $u_l < u_r$, or by (2.32) if $u_r < u_l$. This solution satisfies the Kružkov entropy condition (2.19).

The solution $u(x,t)$ given by (2.28) and (2.32) consists of a finite number of discontinuities separated by "wedges" (i.e., intervals $(z_i, z_{i+1})$) inside which $u$ is a classical solution. A discontinuity that satisfies the entropy condition is called a shock wave, or simply a shock, and the continuous parts of the solution of the Riemann problem are called rarefaction waves. This terminology, as well as the term "entropy condition," comes from gas dynamics. Thus we may say that the solution of a Riemann problem consists of a finite sequence of rarefaction waves alternating with shocks.

♦ Example 2.3 (Traffic flow (cont'd.)). In the conservation law model of traffic flow, we saw that the flux function was given as $f(u) = u(1-u)$. This is a concave function. Consequently, any upper concave envelope will be the function $f$ itself, whereas any lower convex envelope will be the straight line segment between its endpoints. Any Riemann problem with $u_l > u_r$ will be solved by a rarefaction wave, and if $u_l < u_r$, the solution will consist of a single shock. This is, of course, in accordance with our earlier results, and perhaps also with our experience. ♦

The solution of a Riemann problem is frequently depicted in $(x,t)$ space as a collection of rays emanating from the origin. The slope of these rays is the reciprocal of $f'(u)$ for rarefaction waves, and, if the ray illustrates a shock, the reciprocal of the shock speed $[\![f]\!]/[\![u]\!]$. In Figure 2.3 we illustrate the solution of the previous Riemann problem in this way; broken lines indicate rarefaction waves, and the solid line the shock. Note that Theorem 2.2 does not require the flux function $f$ to be differentiable.
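The envelope construction behind Theorem 2.2 is easy to carry out numerically. The following Python sketch is our own illustration (the function names and the sampling are not from the text): it finds the breakpoints of the lower convex envelope of a sampled flux with the standard monotone-chain lower-hull construction, and obtains the upper concave envelope from the identity $f^\frown = -\,(-f)_\smile$.

```python
def lower_convex_envelope(u, f):
    """Indices of the breakpoints of the lower convex envelope of the
    sampled flux {(u[i], f[i])} on [u[0], u[-1]]; u must be increasing.
    Standard monotone-chain lower-hull construction."""
    hull = []
    for i in range(len(u)):
        # pop hull[-1] while it does not lie strictly below the chord
        # from hull[-2] to the incoming point i
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            if (u[k] - u[j]) * (f[i] - f[j]) <= (f[k] - f[j]) * (u[i] - u[j]):
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

def upper_concave_envelope(u, f):
    """Upper concave envelope via the identity f^frown = -((-f)_smile)."""
    return lower_convex_envelope(u, [-y for y in f])
```

For the traffic-flow flux $f(u) = u(1-u)$ of Example 2.3 the lower convex envelope reduces to the chord between the endpoints, while the upper concave envelope keeps every sample point, in agreement with the discussion above.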
Assume now that the flux function is continuous and piecewise linear on a finite number of intervals. Thus $f'$ will be a step function taking a finite number of values. The discontinuity points of $f'$ will hereafter be referred to as breakpoints. Making this approximation is reasonable in many applications, since the precise form of the flux function is often the result of some measurements. These measurements are taken for a discrete set of $u$ values, and a piecewise linear flux function is the result of a linear interpolation between these values. Both the upper concave and the lower convex envelopes will also be piecewise linear functions with a finite number of breakpoints. This means that the derivatives $(f_\smile)'$ and $(f^\frown)'$ will be step functions, as will their inverses. Furthermore, the inverses of the derivatives will take their values among the breakpoints of $f_\smile$
Figure 2.3. The solution of a Riemann problem, shown in $(x,t)$ space.
(or $f^\frown$), and therefore also of $f$. If the initial states in a Riemann problem are breakpoints, then the entire solution will take values in the set of breakpoints. If we assume that $u_l < u_r$, and label the breakpoints $u_l = u_0 < u_1 < \cdots < u_n = u_r$, then $f_\smile$ will have breakpoints in some subset of this, say $u_l < u_{i_1} < \cdots < u_{i_k} < u_r$. The solution will be a step function in $z = x/t$, monotonically nondecreasing between $u_l$ and $u_r$. The discontinuities will be located at $z_{i_k}$, given by
\[ z_{i_k} = \frac{f(u_{i_{k-1}}) - f(u_{i_k})}{u_{i_{k-1}} - u_{i_k}}. \]
Thus the following corollary of Theorem 2.2 holds.

Corollary 2.4. Assume that $f$ is a continuous piecewise linear function $f: [-K, K] \to \mathbb{R}$ for some constant $K$. Denote the breakpoints of $f$ by $-K = u_0 < u_1 < \cdots < u_{n-1} < u_n = K$. Then the Riemann problem
\[ u_t + f(u)_x = 0, \qquad u(x,0) = \begin{cases} u_j & \text{for } x < 0, \\ u_k & \text{for } x \ge 0, \end{cases} \tag{2.33} \]
has a piecewise constant (in $z = x/t$) solution. If $u_j < u_k$, let $u_j = v_1 < \cdots < v_m = u_k$ denote the breakpoints of the lower convex envelope $f_\smile$, and if $u_j > u_k$, let $u_k = v_m < \cdots < v_1 = u_j$ denote the breakpoints of the upper concave envelope $f^\frown$. The weak solution of the
Riemann problem is then given by
\[ u(x,t) = \begin{cases} v_1 & \text{for } x \le s_1 t, \\ v_2 & \text{for } s_1 t < x \le s_2 t, \\ \;\vdots & \\ v_i & \text{for } s_{i-1} t < x \le s_i t, \\ \;\vdots & \\ v_m & \text{for } s_{m-1} t < x. \end{cases} \tag{2.34} \]
Here, the speeds $s_i$ are computed from the derivative of the envelope, that is,
\[ s_i = \frac{f(v_{i+1}) - f(v_i)}{v_{i+1} - v_i}. \]
Note that this solution is an admissible solution in the sense that it satisfies the Kružkov entropy condition. The viscous profile entropy condition is somewhat degenerate in this case. Across discontinuities over which $f(u)$ differs from the envelope, it is satisfied. But across those discontinuities over which the envelope and the flux function coincide, the right-hand side of the defining ordinary differential equation (2.7) collapses to zero. The conservation law is called linearly degenerate in each such interval $[v_i, v_{i+1}]$. Nevertheless, these discontinuities are also limits of the viscous regularization, as can be seen by changing to Lagrangian coordinates $x \to x - s_i t$; see Exercise 2.3. With this we conclude our discussion of the Riemann problem, and in the next section we shall see how the solutions of Riemann problems may be used as a building block to solve more general initial value problems.
2.3 Front Tracking

This algorithm is admittedly complicated, but no simpler mechanism seems to do nearly as much. — D. E. Knuth, The TeXbook (1984)
We begin this section with an example that illustrates the ideas of front tracking for scalar conservation laws, as well as some of the properties of solutions.

♦ Example 2.5. In this example we shall study a piecewise linear approximation of Burgers' equation, $u_t + (u^2/2)_x = 0$. This means that we study a conservation law with a flux function that is piecewise linear and agrees with $u^2/2$
at its breakpoints. To be specific, we choose intervals of unit length. We shall be interested in the flux function only in the interval $[-1, 2]$, where we define it to be
\[ f(u) = \begin{cases} -u/2 & \text{for } u \in [-1, 0], \\ u/2 & \text{for } u \in [0, 1], \\ 3u/2 - 1 & \text{for } u \in [1, 2]. \end{cases} \tag{2.35} \]
This flux function has two breakpoints, and is convex. We wish to solve the initial value problem
\[ u_t + f(u)_x = 0, \qquad u_0(x) = \begin{cases} 2 & \text{for } x \le x_1, \\ -1 & \text{for } x_1 < x \le x_2, \\ 1 & \text{for } x_2 < x, \end{cases} \tag{2.36} \]
with $f$ given by (2.35). Initially, the solution must consist of the solutions of the two Riemann problems located at $x_1$ and $x_2$. This is so, since the waves from these solutions move with a finite speed, and will not interact until some positive time. This feature, sometimes called finite speed of propagation, characterizes hyperbolic, as opposed to elliptic or parabolic, partial differential equations. It implies that if we change the initial condition locally around some point, it will not immediately influence the solution "far away." Recalling the almost universally accepted assumption that nothing moves faster than the speed of light, one can say that hyperbolic equations are more fundamental than the other types of partial differential equations. Returning to our example, we must then solve the two initial Riemann problems. We commence with the one at $x_1$. Since $f$ is convex, and $u_l = 2 > -1 = u_r$, the solution will consist of a single shock wave with speed $s_1 = \frac12$ given by the Rankine–Hugoniot condition of this Riemann problem. For small $t$ and $x$ near $x_1$ the solution reads
\[ u(x,t) = \begin{cases} 2 & \text{for } x < s_1 t + x_1, \\ -1 & \text{for } x \ge s_1 t + x_1. \end{cases} \tag{2.37} \]
The other Riemann problem has $u_l = -1$ and $u_r = 1$, so we must use the lower convex envelope, which in this case coincides with the flux function $f$. The flux function has two linear segments and one breakpoint $u = 0$ in the interval $[-1, 1]$. Hence, the solution will consist of two discontinuities moving apart. The speeds of the discontinuities are computed from $f'(u)$, or equivalently from the Rankine–Hugoniot condition, since $f$ is linearly degenerate over each discontinuity. This gives $s_2 = -\frac12$ and $s_3 = \frac12$. The
solution equals
\[ u(x,t) = \begin{cases} -1 & \text{for } x < s_2 t + x_2, \\ 0 & \text{for } s_2 t + x_2 \le x < s_3 t + x_2, \\ 1 & \text{for } s_3 t + x_2 \le x, \end{cases} \tag{2.38} \]
for small $t$ and $x$ near $x_2$. It remains to connect the two solutions (2.37) and (2.38). This is easily done for sufficiently small $t$:
\[ u(x,t) = \begin{cases} 2 & \text{for } x < x_1 + s_1 t, \\ -1 & \text{for } x_1 + s_1 t \le x \le x_2 + s_2 t, \\ 0 & \text{for } x_2 + s_2 t \le x < x_2 + s_3 t, \\ 1 & \text{for } x_2 + s_3 t \le x. \end{cases} \tag{2.39} \]
The problem now is that the shock wave located at $x_1(t) = x_1 + t/2$ will collide with the discontinuity $x_2(t) = x_2 - t/2$. Then equation (2.39) is no longer valid, since the middle interval has collapsed. This will happen at time $t = t_1 = x_2 - x_1$ and position $x = x_4 = (x_1 + x_2)/2$. To continue the solution we must solve the interaction between the shock and the discontinuity. Again, using finite speed of propagation, we have that the solution away from $(x_4, t_1)$ will not be directly influenced by the behavior here. Consider now the solution at time $t_1$ and in a vicinity of $x_4$. Here $u$ takes two constant values, $2$ for $x < x_4$ and $0$ for $x > x_4$. Therefore, the interaction of the shock wave $x_1(t)$ and the discontinuity $x_2(t)$ is determined by solving the Riemann problem with $u_l = 2$ and $u_r = 0$. Again, this Riemann problem is solved by a single shock, since the flux function is convex and $u_l > u_r$. The speed of this shock is $s_4 = 1$. Thus, for $t$ larger than $x_2 - x_1$, the solution consists of a shock located at $x_4(t)$ and a discontinuity located at $x_3(t)$. The locations are given by
\[ x_4(t) = \frac{x_1 + x_2}{2} + 1 \cdot \bigl( t - (x_2 - x_1) \bigr) = t + \frac12 (3x_1 - x_2), \qquad x_3(t) = x_2 + \frac{t}{2}. \]
We can then write the solution $u(x,t)$ as
\[ u(x,t) = 2 + [\![u(x_4(t))]\!]\, H(x - x_4(t)) + [\![u(x_3(t))]\!]\, H(x - x_3(t)), \tag{2.40} \]
where $H$ is the Heaviside function. Indeed, any function $u(x,t)$ that is piecewise constant in $x$ with discontinuities located at $x_j(t)$ can be written in the form
\[ u(x,t) = u_l + \sum_j [\![u(x_j(t))]\!]\, H(x - x_j(t)), \tag{2.41} \]
where $u_l$ now denotes the value of $u$ to the left of the leftmost discontinuity. Since the speed of $x_4(t)$ is greater than the speed of $x_3(t)$, these two discontinuities will collide. This will happen at $t = t_2 = 3(x_2 - x_1)$ and $x = x_5 = (5x_2 - 3x_1)/2$. In order to resolve the interaction of these two discontinuities, we have to solve the Riemann problem with $u_l = 2$ and $u_r = 1$. In the interval $[1,2]$, $f(u)$ is linear, and hence the solution of the Riemann problem will consist of a single discontinuity moving with speed $s_5 = \frac32$. Therefore, for $t > t_2$ the solution is defined as
\[ u(x,t) = \begin{cases} 2 & \text{for } x < 3t/2 + 3x_1 - 2x_2, \\ 1 & \text{for } x \ge 3t/2 + 3x_1 - 2x_2. \end{cases} \tag{2.42} \]
Since the solution now consists of a single moving discontinuity, there will be no further interactions, and we have found the solution for all positive $t$. Figure 2.4 depicts this solution in the $(x,t)$ plane; the discontinuities are shown as solid lines. We call the method that we have used to obtain the solution front tracking. Front tracking consists in tracking all discontinuities in the solution, whether they represent shocks or
Figure 2.4. The solution of (2.36) with the piecewise linear continuous flux function (2.35).
not. Hereafter, if the flux function is continuous and piecewise linear, all discontinuities in the solution will be referred to as fronts. Notice that if the flux function is continuous and piecewise linear, the Rankine–Hugoniot condition can be used to calculate the speed of any front. So from a computational point of view, all discontinuities are equivalent. ♦

With this example in mind we can define a general front-tracking algorithm for scalar conservation laws. Loosely speaking, front tracking consists in making a step function approximation to the initial data, and a piecewise linear approximation to the flux function. The approximate initial function will define a series of Riemann problems, one at each step. One can solve each Riemann problem, and since the solutions have finite speed of propagation, they will be independent of each other until waves from neighboring solutions interact. Front tracking should then resolve this interaction in order to propagate the solution to larger times. By considering flux functions that are continuous and piecewise linear we are providing a method for resolving interactions.

(i) We are given a scalar one-dimensional conservation law
\[ u_t + f(u)_x = 0, \qquad u|_{t=0} = u_0. \tag{2.43} \]

(ii) Approximate $f$ by a continuous, piecewise linear flux function $f^\delta$.

(iii) Approximate the initial data $u_0$ by a piecewise constant function $u_0^\eta$.

(iv) Solve the initial value problem
\[ u_t + f^\delta(u)_x = 0, \qquad u|_{t=0} = u_0^\eta \]
exactly. Denote the solution by $u^{\delta,\eta}$.

(v) As $f^\delta$ and $u_0^\eta$ approach $f$ and $u_0$, respectively, the approximate solution $u^{\delta,\eta}$ will converge to $u$, the solution of (2.43).

Front tracking in a box (scalar case). We have seen that the solution of a Riemann problem always is a monotone function taking values between $u_l$ and $u_r$. Another way of stating this is to say that the solution of a Riemann problem obeys a maximum principle. This means that if we solve a collection of Riemann problems, the solutions (all of them) will remain between the minimum and the maximum of the left and right states. Therefore, fix a large positive number $M$ and let $u_i = i\delta$, for $-M \le i\delta \le M$. In this section we hereafter assume, unless otherwise stated, that the flux function $f(u)$ is continuous and piecewise linear, with breakpoints $u_i$. We assume that $u_0$ is some piecewise constant function taking values in the set $\{u_i\}$ with a finite number of discontinuities, and we wish to solve
the initial value problem
\[ u_t + f(u)_x = 0, \qquad u(x,0) = u_0(x). \tag{2.44} \]
As remarked above, the solution will initially consist of a number of noninteracting solutions of Riemann problems. Each solution will be a piecewise constant function with discontinuities traveling at constant speed. Hence, at some later time $t_1 > 0$, two discontinuities from neighboring Riemann problems will interact. At $t = t_1$ we can proceed by considering the initial value problem with solution $v(x,t)$:
\[ v_t + f(v)_x = 0, \qquad v(x, t_1) = u(x, t_1). \]
Since the solutions of the initial Riemann problems will take values among the breakpoints of $f$, i.e., $\{u_i\}$, the initial data $u(x,t_1)$ is the same type of function as $u_0(x)$. Consequently, we can proceed as we did initially, by solving the Riemann problems at the discontinuities of $u(x,t_1)$. However, except for the Riemann problem at the interaction point, these Riemann problems have all been solved initially, and their solution merely consists in continuing the discontinuities at their present speed. The Riemann problem at the interaction point has to be solved, giving a new fan of discontinuities. In this fashion the solution can be calculated up to the next interaction at $t_2$, say. Note that what we calculate in this way is not an approximation to the entropy weak solution of (2.44), but the exact solution. It is clear that we can continue this process for any number of interactions occurring at times $t_n$, where $0 < t_1 \le t_2 \le t_3 \le \cdots \le t_n \le \cdots$. However, we cannot a priori be sure that $\lim t_n = \infty$, or in other words, that we can calculate the solution up to any predetermined time. One might envisage that the number of discontinuities grows with each interaction, and that their number increases without bound at some finite time. The next lemma assures us that this does not happen.

Lemma 2.6. For each fixed $\delta$, and for each piecewise constant function $u_0$ taking values in the set $\{u_i\}$, there is only a finite number of interactions between discontinuities of the weak solution to (2.44) for $t$ in the interval $[0, \infty)$.

Remark 2.7. In particular, this means that we can calculate the solution by front tracking up to infinite time using only a finite number of operations. In connection with front tracking used as a numerical method, this property is called hyperfast. In the rest of this book we call a discontinuity in a front-tracking solution a front. Thus a front can represent either a shock or a discontinuity over which the flux function is linearly degenerate.
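The procedure just described — solve the initial Riemann problems, advance all fronts to the first collision, solve the Riemann problem there, and repeat — is a genuine event-driven algorithm. The following Python sketch is our own illustration (names and data layout are not from the text); it implements the loop for a piecewise linear flux, assuming the generic case of pairwise collisions.

```python
import numpy as np

def riemann_fan(u_pts, f_pts, ul, ur):
    """(states, speeds) of the Riemann problem (ul, ur), via the envelope
    construction of Corollary 2.4; ul and ur must occur in u_pts."""
    u_pts = np.asarray(u_pts, dtype=float)
    f_pts = np.asarray(f_pts, dtype=float)
    i0 = int(np.searchsorted(u_pts, min(ul, ur)))
    i1 = int(np.searchsorted(u_pts, max(ul, ur)))
    u, f = u_pts[i0:i1 + 1], f_pts[i0:i1 + 1]
    if ul > ur:
        f = -f                     # concave envelope via the convex one of -f
    hull = []                      # monotone-chain lower convex hull
    for i in range(len(u)):
        while len(hull) >= 2 and \
                (u[hull[-1]] - u[hull[-2]]) * (f[i] - f[hull[-2]]) \
                <= (f[hull[-1]] - f[hull[-2]]) * (u[i] - u[hull[-2]]):
            hull.pop()
        hull.append(i)
    states = u[hull]
    speeds = np.diff(f[hull]) / np.diff(states)
    if ul > ur:
        states, speeds = states[::-1], -speeds[::-1]
    return states, speeds

def front_track(u_pts, f_pts, xs, vals):
    """Event-driven front tracking: xs are the initial jump positions and
    vals the len(xs)+1 constant states (all breakpoints of the flux).
    Returns (collision_times, fronts); each front is [x, speed, left, right],
    with x its position at the last collision time."""
    fronts = []
    for x, a, b in zip(xs, vals[:-1], vals[1:]):
        st, sp = riemann_fan(u_pts, f_pts, a, b)
        fronts += [[x, s, st[i], st[i + 1]] for i, s in enumerate(sp)]
    t, collisions = 0.0, []
    while True:
        # earliest collision among adjacent fronts (generic, pairwise case)
        tc, idx = np.inf, None
        for i in range(len(fronts) - 1):
            s1, s2 = fronts[i][1], fronts[i + 1][1]
            if s1 > s2:
                dt = (fronts[i + 1][0] - fronts[i][0]) / (s1 - s2)
                if t + dt < tc:
                    tc, idx = t + dt, i
        if idx is None:
            return collisions, fronts   # no further interactions
        for fr in fronts:               # advance every front to time tc
            fr[0] += fr[1] * (tc - t)
        t = tc
        collisions.append(t)
        left, right = fronts[idx][2], fronts[idx + 1][3]
        xcol = fronts[idx][0]
        st, sp = riemann_fan(u_pts, f_pts, left, right)
        fronts[idx:idx + 2] = [[xcol, s, st[i], st[i + 1]]
                               for i, s in enumerate(sp)]
```

Run on Example 2.5 (the flux (2.35), states $2, -1, 1$ with jumps at $x_1 = 0$ and $x_2 = 1$), the sketch finds the collision times $t_1 = x_2 - x_1$ and $t_2 = 3(x_2 - x_1)$ and ends with a single front of speed $\frac32$, as in (2.42); Lemma 2.6 is what guarantees that the while-loop terminates.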
Proof of Lemma 2.6. Let $N(t)$ denote the total number of fronts in the front-tracking solution $u(x,t)$ at time $t$. If a front represents a jump from $u_l$ to $u_r$, we say that the front contains $\ell$ linear segments if the flux function has $\ell - 1$ breakpoints between $u_l$ and $u_r$. We use the notation $[\![u]\!]$ to denote the jump in $u$ across a front. In this notation $\ell = |[\![u]\!]|/\delta$. Let $L(t)$ be the total number of linear segments present in all fronts of $u(x,t)$ at time $t$. Thus, if we number the fronts from left to right, and the $i$th front contains $\ell_i$ linear segments, then
\[ L(t) = \sum_i \ell_i = \frac{1}{\delta} \sum_i |[\![u]\!]_i|. \]
Let $Q$ denote the number of linear segments in the piecewise linear flux function $f(u)$ for $u$ in the interval $[-M, M]$. Now we claim that the functional
\[ T(t) = Q L(t) + N(t) \]
is strictly decreasing for each collision of fronts. Since $T(t)$ takes only integer values, this means that we can have at most $T(0)$ collisions. It remains to prove that $T(t)$ is strictly decreasing for each collision. Assume that a front separating the values $u_l$ and $u_m$ collides from the left with a front separating $u_m$ and $u_r$. We will first show that $T$ is decreasing if $u_m$ is between $u_l$ and $u_r$.

Figure 2.5. An interaction of fronts where $u_l < u_m < u_r$.
We assume that $u_l < u_m < u_r$. If $u_r < u_m < u_l$, the situation is analogous, and the statement can be proved with the same arguments. The situation is as depicted in Figure 2.5. Since a single front connects $u_l$ with $u_m$, the graph of the flux function cannot cross the straight line segment connecting the points $(u_l, f(u_l))$ and $(u_m, f(u_m))$. The entropy condition also implies that the graph of the flux function must be above this segment. The same holds for the front on the right separating $u_m$ and $u_r$. As the two fronts are colliding, the speed of the left front must be larger than the speed of the right front. This means that the slope of the segment from $(u_l, f(u_l))$ to $(u_m, f(u_m))$ is greater than the slope of the segment from $(u_m, f(u_m))$ to $(u_r, f(u_r))$. Therefore, the lower convex envelope from $u_l$
to $u_r$ consists of the line from $(u_l, f(u_l))$ to $(u_r, f(u_r))$. Accordingly, the solution of the Riemann problem consists of a single front separating $u_l$ and $u_r$; see Figure 2.5. Consequently, $L$ does not change at the interaction, and $N$ decreases by one. Thus, in the case where $u_m$ is between $u_r$ and $u_l$, $T$ decreases. It remains to show that $T$ also decreases if $u_m$ is not between $u_l$ and $u_r$. We will show this for the case $u_m < u_l < u_r$. The other cases are similar, and can be proved by analogous arguments. Since the Riemann problem with a left state $u_l$ and right state $u_m$ is solved by a single discontinuity, the graph of the flux function cannot lie above the straight line segment connecting the points $(u_l, f(u_l))$ and $(u_m, f(u_m))$. Similarly, the graph of the flux function must lie entirely above the straight line segment connecting $(u_m, f(u_m))$ and $(u_r, f(u_r))$. Also, the slope of the latter segment must be smaller than that of the former, since the fronts are colliding. This means that the Riemann problem with left state $u_l$ and right state $u_r$ defined at the collision of the fronts will have a solution consisting of fronts with speed smaller than or equal to the speed of the right colliding front. See Figure 2.6.
Figure 2.6. An interaction of fronts where $u_m < u_l < u_r$.
The maximal number of fronts resulting from the collision is $|u_l - u_r|/\delta$. This is strictly less than $Q$. Hence $N$ increases by at most $Q - 1$. At the same time, $L$ decreases by at least one. Consequently, $T$ must decrease by at least one. This concludes the proof of Lemma 2.6.

As a corollary of Lemma 2.6, we know that for a piecewise constant initial function with a finite number of discontinuities, and for a continuous and piecewise linear flux function with a finite number of breakpoints, the initial value problem has a weak solution satisfying both the Kružkov entropy condition (2.17) and the viscous entropy condition for every discontinuity. It is not difficult to prove the following slight generalization of what we have already shown:
Corollary 2.8. Let $f(u)$ be a continuous and piecewise linear function with a finite number of breakpoints for $u$ in the interval $[-M, M]$, where $M$ is some constant. Assume that $u_0$ is a piecewise constant function with a finite number of discontinuities, $u_0: \mathbb{R} \to [-M, M]$. Then the initial value problem
\[ u_t + f(u)_x = 0, \qquad u|_{t=0} = u_0 \tag{2.45} \]
has a weak solution $u(x,t)$. The function $u(x,t)$ is a piecewise constant function of $x$ for each $t$, and $u(x,t)$ takes values in the finite set $\{u_0(x)\} \cup \{\text{the breakpoints of } f\}$. Furthermore, there are only a finite number of interactions between the fronts of $u$. The distribution $u$ also satisfies the Kružkov entropy condition (2.19).

This is all well and fine, but we could wish for more. For instance, is this solution the only one? And piecewise linear flux functions and piecewise constant initial functions seem more like an approximation than what we would expect to see in "real life." So what happens when the piecewise constant initial function and the piecewise linear flux function converge to general initial data and flux functions, respectively? It turns out that these two questions are connected and can be answered by elegant, but indirect, analysis starting from the Kružkov formulation (2.18).
2.4 Existence and Uniqueness

Det var en ustyrtelig mængde lag! Kommer ikke kærnen snart for en dag?² — Henrik Ibsen, Peer Gynt (1867)
By a clever choice of the test function $\phi$, we shall use the Kružkov formulation to show stability with respect to the initial value function, and thereby uniqueness. The approach used in this section is also very useful in estimating the error in numerical methods. We shall return to this in a later chapter. Let therefore $u = u(x,t)$ and $v = v(x,t)$ be two weak solutions to $u_t + f(u)_x = 0$, with initial data
\[ u|_{t=0} = u_0, \qquad v|_{t=0} = v_0, \]

² What an enormous number of swathings! Isn't the kernel soon coming to light?
respectively, satisfying the Kružkov entropy condition. Equivalently,
\[ \iint \bigl( |u - k|\,\phi_t + \operatorname{sign}(u - k)(f(u) - f(k))\,\phi_x \bigr)\, dx\, dt + \int |u_0 - k|\, \phi|_{t=0}\, dx \ge 0 \tag{2.46} \]
for every nonnegative test function $\phi$ with compact support (and similarly for the function $v$). Throughout the calculations we will assume that both $u$ and $v$ are of bounded variation in $x$ for each fixed $t$, i.e.,
\[ \mathrm{T.V.}(u(\,\cdot\,,t)),\ \mathrm{T.V.}(v(\,\cdot\,,t)) < \infty. \tag{2.47} \]
We assume that $f$ is Lipschitz continuous, that is, that there is a constant $L$ such that
\[ \|f\|_{\mathrm{Lip}} := \sup_{u \ne v} \left| \frac{f(u) - f(v)}{u - v} \right| \le L, \tag{2.48} \]
and we denote by $\|f\|_{\mathrm{Lip}}$ the Lipschitz constant, or seminorm³, of $f$. If $\phi$ is compactly supported in $t > 0$, then (2.46) reads
\[ \iint \bigl( |u - k|\,\phi_t + \operatorname{sign}(u - k)(f(u) - f(k))\,\phi_x \bigr)\, dx\, dt \ge 0. \tag{2.49} \]
For simplicity we shall in this section use the notation $q(u,k) = \operatorname{sign}(u-k)(f(u) - f(k))$. For functions of two variables we define the Lipschitz constant by
\[ \|q\|_{\mathrm{Lip}} = \sup_{(u_1,v_1) \ne (u_2,v_2)} \frac{|q(u_1,v_1) - q(u_2,v_2)|}{|u_1 - u_2| + |v_1 - v_2|}. \]
Since $q_u(u,k) = \operatorname{sign}(u-k)\,f'(u)$ and $q_k(u,k) = -\operatorname{sign}(u-k)\,f'(k)$, it follows that if $\|f\|_{\mathrm{Lip}} \le L$, then also $\|q\|_{\mathrm{Lip}} \le L$. Now let $\phi = \phi(x,t,y,s)$ be a nonnegative test function both in $(x,t)$ and $(y,s)$ with compact support in $t > 0$ and $s > 0$. Using that both $u$ and $v$ satisfy (2.49), we can set $k = v(y,s)$ in the equation for $u$, and set $k = u(x,t)$ in the equation for $v = v(y,s)$. We integrate the equation for $u(x,t)$ with respect to $y$ and $s$, and the equation for $v(y,s)$ with respect to $x$ and $t$, and add the two resulting equations. We then obtain
\[ \iiiint \bigl( |u(x,t) - v(y,s)|\,(\phi_t + \phi_s) + q(u,v)\,(\phi_x + \phi_y) \bigr)\, dx\, dt\, dy\, ds \ge 0. \tag{2.50} \]
Now we temporarily leave the topic of conservation laws in order to establish some facts about "approximate $\delta$ distributions," or mollifiers. This is a sequence of smooth functions $\omega_\varepsilon$ such that the corresponding distributions

³ $\|f\|_{\mathrm{Lip}}$ is not a norm; after all, constants $k$ have $\|k\|_{\mathrm{Lip}} = 0$.
tend to the $\delta_0$ distribution, i.e., $\omega_\varepsilon \to \delta_0$ as $\varepsilon \to 0$. There are several ways of defining these distributions. We shall use the following: Let $\omega(\sigma)$ be a $C^\infty$ function such that
\[ 0 \le \omega(\sigma) \le 1, \qquad \operatorname{supp}\omega \subseteq [-1, 1], \qquad \omega(-\sigma) = \omega(\sigma), \qquad \int_{-1}^{1} \omega(\sigma)\, d\sigma = 1. \]
Now define
\[ \omega_\varepsilon(\sigma) = \frac{1}{\varepsilon}\, \omega\!\left( \frac{\sigma}{\varepsilon} \right). \tag{2.51} \]
It is not hard to verify that $\omega_\varepsilon$ has the necessary properties such that $\lim_{\varepsilon \to 0} \omega_\varepsilon = \delta_0$ as a distribution. We will need the following result:
Lemma 2.9. Let $h \in L^\infty(\mathbb{R}^2)$ with compact support. Assume that for almost all $x_0 \in \mathbb{R}$ the function $h(x,y)$ is continuous at $(x_0, x_0)$. Then
\[ \lim_{\varepsilon \to 0} \iint h(x,y)\, \omega_\varepsilon(x - y)\, dy\, dx = \int h(x,x)\, dx. \tag{2.52} \]

Proof. Observe first that
\begin{align*}
\left| \int h(x,y)\, \omega_\varepsilon(x-y)\, dy - h(x,x) \right|
&= \left| \int \bigl( h(x,y) - h(x,x) \bigr)\, \omega_\varepsilon(x-y)\, dy \right| \\
&= \left| \int_{|z| \le 1} \bigl( h(x, x + \varepsilon z) - h(x,x) \bigr)\, \omega(z)\, dz \right| \to 0 \quad \text{as } \varepsilon \to 0,
\end{align*}
for almost all $x$, using the continuity of $h$. Furthermore,
\[ \left| \int h(x,y)\, \omega_\varepsilon(x-y)\, dy \right| \le \|h\|_\infty, \]
and hence we can use Lebesgue's bounded convergence theorem to obtain the result.
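The mollifier (2.51) and its $\delta_0$-like action are easy to observe numerically. The following Python sketch is our own illustration; the particular bump function is a standard choice, since the text only lists the required properties of $\omega$.

```python
import numpy as np

def omega(s):
    """A C-infinity bump supported on [-1, 1]: the standard
    exp(-1/(1 - s^2)) bump (our choice; the text only states the
    properties omega must satisfy)."""
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    inside = np.abs(s) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - s[inside] ** 2))
    return out

# normalize so that the integral of omega over [-1, 1] equals 1
_s = np.linspace(-1.0, 1.0, 200001)
_C = 1.0 / float(np.sum(omega(_s)) * (_s[1] - _s[0]))

def omega_eps(s, eps):
    """The mollifier (2.51): omega_eps(s) = omega(s / eps) / eps."""
    return _C * omega(np.asarray(s, dtype=float) / eps) / eps

def integrate(y, x):
    """Trapezoid rule on a grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))
```

For a smooth $g$ one finds $\int g(x)\,\omega_\varepsilon(x)\,dx \to g(0)$ at rate $O(\varepsilon^2)$; this pointwise concentration is the mechanism exploited in Lemma 2.9.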
for almost all x, using the continuity of h. Furthermore, h(x, y)ωε (x − y) dy ≤ h ∞ , and hence we can use Lebesgue’s bounded convergence theorem to obtain the result. Returning now to conservation laws and (2.50), we must make a smart choice of a test function φ(x, t, y, s). Let ψ(x, t) be a test function that has support in t > 0 . We then define x+y t+s φ(x, t, y, s) = ψ , ωε0 (t − s)ωε (x − y), 2 2 where ε0 and ε are (small) positive numbers. In this case4 ∂ψ x + y t + s φt + φs = , ωε0 (t − s)ωε (x − y), ∂t 2 2 4 Beware!
Here
∂ψ ∂t
“
x+y t+s , 2 2
”
means the partial derivative of ψ with respect to the “ ” second variable, and this derivative is evaluated at x+y , t+s . 2 2
2.4. Existence and Uniqueness
and5 φx + φy =
∂ψ ∂x
x+y t+s , 2 2
47
ωε0 (t − s)ωε (x − y).
Now apply Lemma 2.9 to h(x, y) = |u(x) − v(y)| ωε0 ∂ψ/∂t and h(x, y) = q(x, y)ωε0 ∂ψ/∂x. This is justified since we assume that both u and v have finite total variation, cf. (2.47), and hence are continuous almost everywhere. Then, letting ε0 and ε tend to zero, (2.50) and Lemma 2.9 give (|u(x, t) − v(x, t)| ψt + q(u, v)ψx ) dt dx ≥ 0 (2.53) for any two weak solutions u and v and any nonnegative test function ψ with support in t > . If we considered (2.19) in the strip t ∈ [0, T ] and test functions whose support included 0 and T , the Kruzkov ˇ formulation would imply (|u(x, t) − v(y, s)| (φt + φs ) + q(u, v) (φx + φy )) dx dt dy ds − |u(x, T ) − v(y, s)| φ(x, T, y, s) dx dy ds − |u(x, t) − v(y, T )| φ(x, t, y, T ) dx dy dt + |u0 (x) − v(y, s)| φ(x, 0, y, s) dx dy ds + |u(x, t) − v0 (y)| φ(x, t, y, 0) dx dy dt ≥ 0. We can make the same choice of test function as before. Since we are integrating over only half the support of the test functions, we get a factor 1 2 in front of each of the boundary terms for t = 0 and t = T . Thus we end up with (|u(x, t) − v(x, t)| ψt + q(u, v)ψx ) dt dx − |u(x, T ) − v(x, T )| ψ(x, T ) dx (2.54) + |u0 (x) − v0 (x)| ψ(x, 0) dx ≥ 0. In order to exploit (2.54) we define ψ as
ψ(x, t) = χ[−M +Lt+ε,M −Lt−ε] ∗ ωε (x), 5 As
in the previous equation,
∂ψ ∂x
“
x+y t+s , 2 2
(2.55)
”
means the partial derivative of ψ with “ ” respect to the first variable, and this derivative is evaluated at x+y , t+s . 2 2
for $t \in [0,T]$. Here $L$ denotes the Lipschitz constant of $f$, $\chi_{[a,b]}$ the characteristic function of the interval $[a,b]$, and $*$ the convolution product. We make the constant $M$ so large that $M - Lt - \varepsilon > -M + Lt + 3\varepsilon$ for $t < T$. In order to make $\psi$ an admissible test function, we modify it to go smoothly to zero for $t > T$. We can compute for $t < T$,
\[ \psi_t = \frac{d}{dt} \int_{-M+Lt+\varepsilon}^{M-Lt-\varepsilon} \omega_\varepsilon(x-y)\, dy = -L \bigl( \omega_\varepsilon(x - M + Lt + \varepsilon) + \omega_\varepsilon(x + M - Lt - \varepsilon) \bigr) \le 0, \tag{2.56} \]
and
\[ \psi_x = -\bigl( \omega_\varepsilon(x - M + Lt + \varepsilon) - \omega_\varepsilon(x + M - Lt - \varepsilon) \bigr). \tag{2.57} \]
With our choice of $M$, the two functions on the right-hand side of (2.57) have nonoverlapping support. Therefore,
\[ 0 = \psi_t + L\,|\psi_x| \ge \psi_t + \frac{q(u,v)}{|u-v|}\, \psi_x, \]
and hence $|u-v|\,\psi_t + q(u,v)\,\psi_x \le 0$. Using this in (2.54), and letting $\varepsilon$ go to zero, we find that
\[ \int_{-M+Lt}^{M-Lt} |u(x,t) - v(x,t)|\, dx \le \int_{-M}^{M} |u_0(x) - v_0(x)|\, dx. \tag{2.58} \]
If we assume that $u_0(x) = v_0(x)$ for $|x|$ sufficiently large (or $u_0 - v_0 \in L^1$), we have that also $u(x,t) = v(x,t)$ for large $|x|$. Consequently,
\[ \|u(\,\cdot\,,t) - v(\,\cdot\,,t)\|_1 \le \|u_0 - v_0\|_1 \tag{2.59} \]
in this case. Thus we have proved the following result.

Proposition 2.10. Assume that $f$ is Lipschitz continuous, and let $u$ and $v$ be two weak solutions of the initial value problems
\[ u_t + f(u)_x = 0, \qquad u|_{t=0} = u_0, \]
\[ v_t + f(v)_x = 0, \qquad v|_{t=0} = v_0, \]
respectively, satisfying the Kružkov entropy condition. Assume that $u_0 - v_0$ is integrable and that both $u$ and $v$ have finite total variation in $x$ for each time, cf. (2.47). Then
\[ \|u(\,\cdot\,,t) - v(\,\cdot\,,t)\|_1 \le \|u_0 - v_0\|_1. \tag{2.60} \]
In particular, if $u_0 = v_0$, then $u = v$. In other words, we have shown, starting from the Kružkov formulation of the entropy condition, that the initial value problem is stable in $L^1$, assuming existence of solutions.
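The $L^1$-contractivity of Proposition 2.10 has a discrete counterpart for monotone difference schemes (one use of the Crandall–Tartar lemma appearing later in this section). As a numerical illustration — our own sketch, not from the text — the first-order upwind scheme for Burgers' equation with nonnegative data is monotone under the CFL condition $\lambda \max u \le 1$, and the discrete $L^1$ distance between two numerical solutions never increases:

```python
import numpy as np

def upwind_step(u, lam):
    """One upwind step for Burgers' flux f(u) = u^2/2 with u >= 0 on a
    periodic grid; lam = dt/dx, monotone provided lam * max(u) <= 1."""
    f = 0.5 * u * u
    return u - lam * (f - np.roll(f, 1))

rng = np.random.default_rng(0)
n = 200
u = rng.uniform(0.0, 1.0, n)                       # two nearby initial data
v = np.clip(u + rng.uniform(-0.2, 0.2, n), 0.0, 1.0)

lam = 0.5                                          # CFL: lam * max(u) <= 0.5
dists = [float(np.abs(u - v).sum()) / n]           # discrete L1 distance
for _ in range(400):
    u = upwind_step(u, lam)
    v = upwind_step(v, lam)
    dists.append(float(np.abs(u - v).sum()) / n)
```

Since a monotone scheme also obeys a discrete maximum principle, the data stay in $[0,1]$ and the CFL condition is preserved for all steps, so the recorded distances form a nonincreasing sequence.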
The idea is now to obtain existence of solutions using front tracking; for Riemann initial data and continuous, piecewise linear flux functions we already have existence from Corollary 2.8. For given initial data and flux function we show that the solution can be obtained by approximating with front-tracking solutions. First we consider general initial data $u_0 \in L^1(\mathbb{R})$ but with a continuous, piecewise linear flux function $f$. Equation (2.59) shows that if $u_0^i(x)$ is a sequence of step functions converging in $L^1$ to some $u_0(x)$, then the corresponding front-tracking solutions $u^i(x,t)$ will also converge in $L^1$ to some function $u(x,t)$. What is the equation satisfied by $u(x,t)$? To answer this question, let $\phi(x,t)$ be a fixed test function, and let $C$ be a constant such that $C > \max\{\|\phi\|_\infty, \|\phi_t\|_\infty, \|\phi_x\|_\infty\}$. Let also $T$ be such that $\phi(x,t) = 0$ for all $t \ge T$. Since the $u^i(x,t)$ all are weak solutions, we compute
\begin{align*}
\Bigl| \int_0^\infty \!\! \int & \bigl( u(x,t)\phi_t + f(u(x,t))\phi_x \bigr)\, dt\, dx + \int \phi(x,0)\,u_0(x)\, dx \Bigr| \\
&= \Bigl| \int_0^\infty \!\! \int \bigl( (u - u^i)\phi_t + \bigl( f(u) - f(u^i) \bigr)\phi_x \bigr)\, dt\, dx + \int \phi(x,0)\bigl( u_0 - u_0^i \bigr)\, dx \Bigr| \\
&\le \int_0^T \bigl( C\, \|u(\,\cdot\,,t) - u^i(\,\cdot\,,t)\|_1 + LC\, \|u(\,\cdot\,,t) - u^i(\,\cdot\,,t)\|_1 \bigr)\, dt + C\, \|u_0 - u_0^i\|_1. \tag{2.61}
\end{align*}
Here $L$ is a constant given by the Lipschitz condition (2.48). This implies
\[ \Bigl| \int_{-\infty}^{\infty} \! \int_0^\infty \bigl( u(x,t)\phi_t + f(u(x,t))\phi_x \bigr)\, dt\, dx + \int_{-\infty}^{\infty} \phi(x,0)\,u_0(x)\, dx \Bigr| \le C \bigl( (1+L)T + 1 \bigr)\, \|u_0 - u_0^i\|_1. \tag{2.62} \]
Since the right-hand side of (2.62) can be made arbitrarily small, $u(x,t)$ is a weak solution to the initial value problem
\[ u_t + f(u)_x = 0, \qquad u(x,0) = u_0(x), \]
where $f$ is piecewise linear and continuous. Now, since we have eliminated the restriction of using step functions as initial functions, we would like to be able to consider conservation laws with more general flux functions. That is, if $f^i$ is some sequence of piecewise linear continuous functions converging to $f$ in the Lipschitz seminorm, we would like to conclude that the corresponding front-tracking solutions converge as well.
That is, if $f^i$ is some sequence of piecewise linear continuous functions such that $f^i \to f$ in the Lipschitz seminorm, and $u_0(x)$ is a function with finite total variation,⁶ that is, $\mathrm{T.V.}(u_0)$ is bounded, then the corresponding solutions $u^i(x,t)$ converge in $L^1$ to some function $u(x,t)$. To this end we need to consider stability with respect to the flux function. We start by studying two Riemann problems with the same initial data, but with different flux functions. Let $u$ and $v$ be the weak solutions of
\[ u_t + f(u)_x = 0, \qquad v_t + g(v)_x = 0, \tag{2.63} \]
with initial data
\[ u(x,0) = v(x,0) = \begin{cases} u_l & \text{for } x < 0, \\ u_r & \text{for } x > 0. \end{cases} \]
We assume that both $f$ and $g$ are continuous and piecewise linear with a finite number of breakpoints. The solutions $u$ and $v$ of (2.63) will be piecewise constant functions of $x/t$ that are equal outside a finite interval in $x/t$. We need to estimate the difference in $L^1$ between the two solutions.

Lemma 2.11. The following inequality holds:
$$\frac{d}{dt}\,\| u - v \|_1 \le \sup_u |f'(u) - g'(u)|\; |u_l - u_r|, \tag{2.64}$$
where the supremum is over all $u$ between $u_l$ and $u_r$.

Proof. Assume that $u_l \le u_r$; the case $u_l \ge u_r$ is similar. Consider first the case where $f$ and $g$ are both convex. Without loss of generality we may assume that $f$ and $g$ have common breakpoints $u_l = w_1 < w_2 < \cdots < w_n = u_r$, and let the speeds be denoted by $f'|_{(w_j, w_{j+1})} = s_j$ and $g'|_{(w_j, w_{j+1})} = \tilde{s}_j$. Then
$$\int_{u_l}^{u_r} |f'(u) - g'(u)|\,du = \sum_{j=1}^{n-1} |s_j - \tilde{s}_j|\,(w_{j+1} - w_j).$$
Let $\sigma_j$ be an ordering, that is, $\sigma_j < \sigma_{j+1}$, of all the speeds $\{s_j, \tilde{s}_j\}$. Then we may write
$$u(x,t)\big|_{x \in (\sigma_j t,\, \sigma_{j+1} t)} = u_{j+1}, \qquad v(x,t)\big|_{x \in (\sigma_j t,\, \sigma_{j+1} t)} = v_{j+1},$$
6 See Appendix A for definitions and properties of the total variation T.V. (u) of a function u.
2.4. Existence and Uniqueness
51
where both $u_{j+1}$ and $v_{j+1}$ are from the set of all possible breakpoints, namely $\{w_1, w_2, \ldots, w_n\}$, and $u_j \le u_{j+1}$ and $v_j \le v_{j+1}$. Thus
$$\| u(\,\cdot\,,t) - v(\,\cdot\,,t) \|_1 = t \sum_{j=1}^{m} |u_{j+1} - v_{j+1}|\,(\sigma_{j+1} - \sigma_j).$$
We easily see that
$$\frac{d}{dt}\,\| u(\,\cdot\,,t) - v(\,\cdot\,,t) \|_1 = \int_{u_l}^{u_r} |f'(u) - g'(u)|\,du \le \sup_u |f'(u) - g'(u)|\;|u_l - u_r|.$$
The case where $f$ and $g$ are not necessarily convex is more involved. We will show that
$$\int_{u_l}^{u_r} \bigl| f_c'(u) - g_c'(u) \bigr|\,du \le \int_{u_l}^{u_r} |f'(u) - g'(u)|\,du, \tag{2.65}$$
where $f_c$ and $g_c$ denote the lower convex envelopes of $f$ and $g$,
when the convex envelopes are taken on the interval $[u_l, u_r]$. To this end we use the following general lemma:

Lemma 2.12 (Crandall–Tartar). Let $D$ be a subset of $L^1(\Omega)$, where $\Omega$ is some measure space. Assume that if $\phi$ and $\psi$ are in $D$, then also $\phi \vee \psi = \max\{\phi, \psi\}$ is in $D$. Assume furthermore that there is a map $T \colon D \to L^1(\Omega)$ such that
$$\int_\Omega T(\phi) = \int_\Omega \phi, \qquad \phi \in D.$$
Then the following statements, valid for all $\phi, \psi \in D$, are equivalent:
(i) If $\phi \le \psi$, then $T(\phi) \le T(\psi)$.
(ii) $\int_\Omega \bigl( T(\phi) - T(\psi) \bigr)^+ \le \int_\Omega (\phi - \psi)^+$, where $\phi^+ = \phi \vee 0$.
(iii) $\int_\Omega | T(\phi) - T(\psi) | \le \int_\Omega |\phi - \psi|$.

Proof of Lemma 2.12. For completeness we include a proof of the lemma. Assume (i). Then $T(\phi \vee \psi) - T(\phi) \ge 0$, which trivially implies $T(\phi) - T(\psi) \le T(\phi \vee \psi) - T(\psi)$, and thus $\bigl( T(\phi) - T(\psi) \bigr)^+ \le T(\phi \vee \psi) - T(\psi)$. Furthermore,
$$\int_\Omega \bigl( T(\phi) - T(\psi) \bigr)^+ \le \int_\Omega \bigl( T(\phi \vee \psi) - T(\psi) \bigr) = \int_\Omega (\phi \vee \psi - \psi) = \int_\Omega (\phi - \psi)^+,$$
proving (ii). Assume now (ii). Then
$$\int_\Omega | T(\phi) - T(\psi) | = \int_\Omega \bigl( T(\phi) - T(\psi) \bigr)^+ + \int_\Omega \bigl( T(\psi) - T(\phi) \bigr)^+ \le \int_\Omega (\phi - \psi)^+ + \int_\Omega (\psi - \phi)^+ = \int_\Omega |\phi - \psi|,$$
which is (iii). It remains to prove that (iii) implies (i). Let $\phi \le \psi$. For real numbers we have $x^+ = (|x| + x)/2$. This implies
$$\int_\Omega \bigl( T(\phi) - T(\psi) \bigr)^+ = \frac{1}{2} \int_\Omega | T(\phi) - T(\psi) | + \frac{1}{2} \int_\Omega \bigl( T(\phi) - T(\psi) \bigr) \le \frac{1}{2} \int_\Omega |\phi - \psi| + \frac{1}{2} \int_\Omega (\phi - \psi) = 0.$$
To apply this lemma in our context, we let $D$ be the set of all piecewise constant functions on $[u_l, u_r]$. For any piecewise linear and continuous flux function $f$, its derivative $f'$ is in $D$, and we define $T(f') = (f_c)'$, where $f_c$ is the lower convex envelope of $f$ taken on the full interval $[u_l, u_r]$. Then, since the envelope coincides with $f$ at the endpoints,
$$\int_{u_l}^{u_r} T(f')\,du = \int_{u_l}^{u_r} (f_c)'(u)\,du = f_c(u_r) - f_c(u_l) = f(u_r) - f(u_l) = \int_{u_l}^{u_r} f'(u)\,du.$$
To prove (2.65), it suffices to prove that (i) holds, that is, that $f' \le g'$ implies $T(f') \le T(g')$ for another piecewise linear and continuous flux function $g$. Assume otherwise, and let $[u_1, u_2]$ be the interval with left endpoint $u_1$ nearest $u_l$ such that $(f_c)' > (g_c)'$, and assume furthermore that $u_2$ is chosen maximal. By construction the point $u_1$ has to be a breakpoint of $f_c$, so $f_c(u_1) = f(u_1)$, while $u_2$ has to be a breakpoint of $g_c$, so $g_c(u_2) = g(u_2)$. Using this and the fact that the lower convex envelope never exceeds the function, we obtain
$$g(u_2) - g(u_1) \le g_c(u_2) - g_c(u_1) < f_c(u_2) - f_c(u_1) \le f(u_2) - f(u_1),$$
which is a contradiction. Hence we conclude that property (i) holds, which implies (iii) and subsequently (2.65).

If we then let $u_0(x)$ be an arbitrary piecewise constant function with a finite number of discontinuities, and let $u$ and $v$ be the solutions of the initial value problems (2.63), but with $u(x,0) = v(x,0) = u_0(x)$, then the following inequality holds for all $t$ until the first front collision:
$$\frac{d}{dt}\,\| u(t) - v(t) \|_1 \le \| f - g \|_{\mathrm{Lip}} \sum_{i=1}^{n} \bigl| [\![ u_0 ]\!]_i \bigr|, \tag{2.66}$$
where we used the Lipschitz seminorm $\|\cdot\|_{\mathrm{Lip}}$ defined in (2.48), and $[\![ u_0 ]\!]_i$ denotes the $i$th jump of $u_0$. At this point, it is also convenient to consider the total variation of $u_0$.
In this notation, (2.66) gives
$$\| u(t) - v(t) \|_1 \le t\,\| f - g \|_{\mathrm{Lip}}\,\mathrm{T.V.}(u_0). \tag{2.67}$$
This estimate holds until the first interaction of fronts for either $u$ or $v$. Let $t_1$ be this first collision time, and let $w$ be the weak solution constructed by front tracking of
$$w_t + f(w)_x = 0, \qquad w(x, t_1) = v(x, t_1).$$
Then for $t_1 < t < t_2$, with $t_2$ denoting the next time two fronts of either $v$ or $u$ interact,
$$\| u(t) - v(t) \|_1 \le \| u(t) - w(t) \|_1 + \| w(t) - v(t) \|_1 \le \| u(t_1) - w(t_1) \|_1 + (t - t_1)\,\| f - g \|_{\mathrm{Lip}}\,\mathrm{T.V.}\bigl( v(t_1) \bigr). \tag{2.68}$$
However,
$$\| u(t_1) - w(t_1) \|_1 = \| u(t_1) - v(t_1) \|_1 \le t_1\,\| f - g \|_{\mathrm{Lip}}\,\mathrm{T.V.}(u_0). \tag{2.69}$$
Observe furthermore that front-tracking solutions have the property that the total variation of the initial function, $\mathrm{T.V.}(u_0)$, is larger than or equal to the total variation of the front-tracking solution, $\mathrm{T.V.}(u(\,\cdot\,,t))$. This holds because as two fronts interact, the solution of the resulting Riemann problem is always a monotone function; thus, no new extrema are created. When this and (2.69) are used in (2.68), we obtain
$$\| u(t) - v(t) \|_1 \le t\,\| f - g \|_{\mathrm{Lip}}\,\mathrm{T.V.}(u_0). \tag{2.70}$$
This now holds for $t_1 < t < t_2$, but we can repeat the above argument inductively for every collision time $t_i$. Consequently, (2.70) holds for all positive $t$.

Consider now a Lipschitz continuous function $f$, and let $f^i$ be a sequence of continuous, piecewise linear functions converging to $f$ in the Lipschitz seminorm. Then (2.70) shows that the corresponding solutions $u^i$ converge to some function $u$. Next, we show that the limit $u(x,t)$ satisfies the Kružkov entropy condition (2.18), thereby showing that it is indeed a weak solution that satisfies the entropy condition. By the Lebesgue bounded convergence theorem, both
$$| u^i - k | \to | u - k |$$
and
$$\mathrm{sign}\bigl( u^i - k \bigr)\,\bigl( f^i(u^i) - f^i(k) \bigr) \to \mathrm{sign}( u - k )\,\bigl( f(u) - f(k) \bigr)$$
in $L^1_{\mathrm{loc}}(\mathbb{R} \times [0,\infty))$. Hence, the limit $u(x,t)$ satisfies (2.18), and we have shown that the front-tracking method indeed converges to the unique solution of the conservation law.
Let $u$ be the limit obtained by front tracking by letting $f^i \to f$ in the Lipschitz seminorm, and similarly let $v$ be the limit obtained by letting $g^i \to g$. Assume that $u_0 = v_0$. Using (2.70) we then obtain
$$\| u(t) - v(t) \|_1 \le t\,\| f - g \|_{\mathrm{Lip}}\,\mathrm{T.V.}(u_0). \tag{2.71}$$
We can combine (2.59) and the above equation (2.71) to show a comparison result. Let $u$, $v$, and $w$ be the solutions of
$$\begin{aligned}
u_t + f(u)_x &= 0, & u(x,0) &= u_0(x), \\
v_t + g(v)_x &= 0, & v(x,0) &= v_0(x), \\
w_t + f(w)_x &= 0, & w(x,0) &= v_0(x).
\end{aligned}$$
Then, by (2.59) and (2.71),
$$\| u(\,\cdot\,,t) - v(\,\cdot\,,t) \|_1 \le \| u(\,\cdot\,,t) - w(\,\cdot\,,t) \|_1 + \| w(\,\cdot\,,t) - v(\,\cdot\,,t) \|_1 \le \| u_0 - w(\,\cdot\,,0) \|_1 + t\,\mathrm{T.V.}(v_0)\,\| f - g \|_{\mathrm{Lip}}.$$
If we had instead defined $w$ to be the solution of
$$w_t + g(w)_x = 0, \qquad w(x,0) = u_0(x),$$
we would have obtained
$$\| u(\,\cdot\,,t) - v(\,\cdot\,,t) \|_1 \le \| v_0 - w(\,\cdot\,,0) \|_1 + t\,\mathrm{T.V.}(u_0)\,\| f - g \|_{\mathrm{Lip}}.$$
Thus we have proved the following theorem.

Theorem 2.13. Let $u_0$ be a function of bounded variation that is also in $L^1$, and let $f(u)$ be a Lipschitz continuous function. Then there exists a unique weak solution $u = u(x,t)$ to the initial value problem
$$u_t + f(u)_x = 0, \qquad u(x,0) = u_0(x),$$
which also satisfies the Kružkov entropy condition (2.46). Furthermore, if $v_0$ is another function in $BV \cap L^1$, $g(v)$ is Lipschitz continuous, and $v$ is the unique weak Kružkov entropy solution to
$$v_t + g(v)_x = 0, \qquad v(x,0) = v_0(x),$$
then
$$\| u(\,\cdot\,,t) - v(\,\cdot\,,t) \|_1 \le \| u_0 - v_0 \|_1 + t\,\min\{ \mathrm{T.V.}(u_0),\, \mathrm{T.V.}(v_0) \}\,\| f - g \|_{\mathrm{Lip}}. \tag{2.72}$$

In the special case where $g = f_\delta$, the piecewise linear and continuous approximation to $f$ obtained by taking linear interpolation between points, $f_\delta(j\delta) = f(j\delta)$, we have, assuming that $f$ is piecewise $C^2$ (see Exercise 2.11),
$$\| f - f_\delta \|_{\mathrm{Lip}} \le \delta\,\| f'' \|_\infty,$$
and hence we obtain, with $u_\delta$ denoting the solution with flux $f_\delta$,
$$\| u(\,\cdot\,,t) - u_\delta(\,\cdot\,,t) \|_1 \le \| u_0 - u_{0,\delta} \|_1 + t\,\delta\,\mathrm{T.V.}(u_0)\,\| f'' \|_\infty, \tag{2.73}$$
showing that the convergence is of first order in the approximation parameter $\delta$.

We end this section by summarizing some of the fundamental properties of solutions of scalar conservation laws in one dimension.

Theorem 2.14. Let $u_0$ be an integrable function of bounded variation, and let $f(u)$ be a Lipschitz continuous function. Then the unique weak entropy solution $u = u(x,t)$ to the initial value problem
$$u_t + f(u)_x = 0, \qquad u(x,0) = u_0(x),$$
satisfies the following properties for all $t \in [0,\infty)$:
(i) Maximum principle: $\| u(\,\cdot\,,t) \|_\infty \le \| u_0 \|_\infty$.
(ii) Total variation diminishing (TVD): $\mathrm{T.V.}(u(\,\cdot\,,t)) \le \mathrm{T.V.}(u_0)$.
(iii) $L^1$-contractivity: If $v_0$ is a function in $BV \cap L^1$ and $v = v(x,t)$ denotes the entropy solution with $v_0$ as initial data, then $\| u(\,\cdot\,,t) - v(\,\cdot\,,t) \|_1 \le \| u_0 - v_0 \|_1$.
(iv) Monotonicity preservation: $u_0$ monotone implies $u(\,\cdot\,,t)$ monotone.
(v) Monotonicity: Let $v_0$ be a function in $BV \cap L^1$, and let $v = v(x,t)$ denote the entropy solution with $v_0$ as initial data. Then $u_0 \le v_0$ implies $u(\,\cdot\,,t) \le v(\,\cdot\,,t)$.
(vi) Lipschitz continuity in time: $\| u(\,\cdot\,,t) - u(\,\cdot\,,s) \|_1 \le \| f \|_{\mathrm{Lip}}\,\mathrm{T.V.}(u_0)\,| t - s |$ for all $s, t \in [0,\infty)$.

Proof. The maximum principle and the monotonicity preservation property are easily seen to hold for the front-tracking approximation by checking the solution of isolated Riemann problems, and the properties carry over in the limit. Monotonicity holds by the Crandall–Tartar lemma, Lemma 2.12, applied with the solution operator $u_0 \mapsto u(x,t)$ as $T$, together with the $L^1$-contraction property.
The fact that the total variation is nonincreasing follows using Theorem 2.13 (with $g = f$ and $v_0 = u_0(\,\cdot\, + h)$) and
$$\mathrm{T.V.}(u(\,\cdot\,,t)) = \lim_{h \to 0} \frac{1}{h} \int | u(x+h,t) - u(x,t) |\,dx \le \lim_{h \to 0} \frac{1}{h} \int | u_0(x+h) - u_0(x) |\,dx = \mathrm{T.V.}(u_0).$$
The $L^1$-contractivity is a special case of (2.72). Finally, to prove the Lipschitz continuity in time of the spatial $L^1$-norm, we first consider the solution of a single Riemann problem for a continuous, piecewise linear flux function $f$. From Corollary 2.4 we see that
$$\| u(\,\cdot\,,t) - u(\,\cdot\,,s) \|_1 \le \bigl( |s_1|\,|v_2 - v_1| + \cdots + |s_{m-1}|\,|v_m - v_{m-1}| \bigr)\,| t - s | \le \| f \|_{\mathrm{Lip}}\,\mathrm{T.V.}(u_0)\,| t - s |.$$
This carries over to the general case by taking appropriate approximations (of the initial data and the flux function). For an alternative argument, see Theorem 7.10.
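The interpolation bound behind (2.73) is easy to check numerically. The sketch below is not from the book; the function names and the sampling-based estimate of the Lipschitz seminorm are our own. It verifies $\|f - f_\delta\|_{\mathrm{Lip}} \le \delta\,\|f''\|_\infty$ for the Burgers flux $f(u) = u^2/2$, for which $\|f''\|_\infty = 1$:

```python
# Sketch (not from the book): check ||f - f_delta||_Lip <= delta * ||f''||_inf
# for f(u) = u^2/2, where f_delta interpolates f at the points j * delta.
import math

def f(u):
    return 0.5 * u * u

def f_delta(u, delta):
    """Continuous, piecewise linear interpolation with f_delta(j*delta) = f(j*delta)."""
    j = math.floor(u / delta)
    ul, ur = j * delta, (j + 1) * delta
    theta = (u - ul) / delta
    return (1 - theta) * f(ul) + theta * f(ur)

def lip_seminorm_of_difference(delta, a=-1.0, b=1.0, n=20000):
    """Estimate sup |(f - f_delta)(u) - (f - f_delta)(v)| / |u - v|
    by sampling difference quotients on a fine grid."""
    h = (b - a) / n
    worst = 0.0
    for i in range(n):
        u, v = a + i * h, a + (i + 1) * h
        q = abs((f(u) - f_delta(u, delta)) - (f(v) - f_delta(v, delta))) / h
        worst = max(worst, q)
    return worst

delta = 0.1
err = lip_seminorm_of_difference(delta)
# For this f, the error slope on each cell is at most delta/2, so the
# estimate stays well below the bound delta of Exercise 2.11.
assert err <= delta
```

For a convex quadratic flux, the exact value of this seminorm is $\delta/2$, so the bound is attained up to a factor of two.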
2.5 Notes

Ofte er det jo sådan, at når man kigger det nye efter i sømmene, så er det bare sømmene, der er nye. [Often, when you examine the new at the seams, it is only the seams that are new.]
Kaj Munk, En Digters Vej og andre Artikler (1948)
The “viscous regularization” as well as the weak formulation of the scalar conservation law were studied in detail by Hopf [73] in the case of Burgers' equation, where $f(u) = u^2/2$. Hopf's paper initiated the rigorous analysis of conservation laws. Oleĭnik, [110], [111], gave a systematic analysis of the scalar case, proving existence and uniqueness of solutions using finite differences. See also the survey by Gel'fand [50]. Kružkov's approach, which combines the notion of weak solution and uniqueness into one equation, (2.23), was introduced in [88], in which he studied general scalar conservation laws in many dimensions with explicit time and space dependence in flux functions and a source term. The solution of the Riemann problem in the case where the flux function $f$ has one or more inflection points was given by Gel'fand [50], Cho-Chun [28], and Ballou [7]. It is quite natural to approximate any flux function by a continuous and piecewise linear function. This method is frequently referred to as “Dafermos' method” [41]. Dafermos used it to derive existence of solutions of scalar conservation laws. Prior to that, a similar approach was studied numerically by Barker [9]. Further numerical work based on Dafermos' paper can be found in Hedstrom [63, 64] and Swartz and Wendroff [133].
Applications of front tracking to hyperbolic conservation laws on a half-line appeared in [136]. Unaware of this earlier development, Holden, Holden, and Høegh-Krohn rediscovered the method [67], [66] and proved $L^1$-stability and that the method can in fact be used as a numerical method. We here use the name “front-tracking method” as a common name for this approach and an analogous method that works for systems of hyperbolic conservation laws. We combine the front-tracking method with Kružkov's ingenious method of “doubling the variables”: Kružkov's method shows stability (and thereby uniqueness) of the solution, and we use front tracking to construct the solution. The original argument in [66] followed a direct but more cumbersome analysis. An alternative approach to show convergence of the front-tracking approximation is first to establish boundedness of the approximation both in $L^\infty$ and in total variation, and then to use Helly's theorem to deduce convergence. Subsequently one has to show that the limit is a Kružkov entropy solution, and finally invoke Kuznetsov's theory to conclude stability in the sense of Theorem 2.13. We will use this argument in Chapters 3 and 4. Lemma 2.12 is due to Crandall and Tartar; see [40] and [38]. The $L^1$-contractivity of solutions of scalar conservation laws is due to Vol'pert [143]; see also Keyfitz [115, 62]. We simplify the presentation by assuming solutions of bounded variation. The uniqueness result, Theorem 2.13, was first proved by Lucier [106], using an approach due to Kuznetsov [90]. Our presentation here is different in that we avoid Kuznetsov's theory; see Section 3.2. For an alternative proof of (2.59) we refer to Málek et al. [107, pp. 92 ff]. The term “front tracking” is also used to denote other approaches to hyperbolic conservation laws. Glimm and coworkers [55, 56, 53, 54] have used a front-tracking method as a computational tool.
In their approach the discontinuities, or shocks, are introduced as independent computational objects and moved separately according to their own dynamics. Away from the shocks, traditional numerical methods can be employed. This method yields sharp fronts. The name “front tracking” is also used in connection with level set methods, in particular in connection with Hamilton–Jacobi equations; see, e.g., [114]. Here one considers the dynamics of interfaces or fronts. These methods are distinct from those treated in this book.
Exercises

Our problems are manmade; therefore they may be solved by man.
John F. Kennedy (1963)

2.1 Let
$$f(u) = \frac{u^2}{u^2 + (1-u)^2}.$$
Find the solution of the Riemann problem for the scalar conservation law $u_t + f(u)_x = 0$ where $u_l = 0$ and $u_r = 1$. This equation is an example of the so-called Buckley–Leverett equation and represents a simple model of two-phase fluid flow in a porous medium. In this case $u$ is a number between $0$ and $1$ and denotes the saturation of one of the phases.

2.2 Consider the initial value problem for Burgers' equation,
$$u_t + \Bigl( \tfrac{1}{2} u^2 \Bigr)_x = 0, \qquad u(x,0) = u_0(x) = \begin{cases} -1 & \text{for } x < 0, \\ 1 & \text{for } x \ge 0. \end{cases}$$
a. Show that $u(x,t) = u(x,0)$ is a weak solution.
b. Let
$$u_0^\varepsilon(x) = \begin{cases} -1 & \text{for } x < -\varepsilon, \\ x/\varepsilon & \text{for } -\varepsilon \le x \le \varepsilon, \\ 1 & \text{for } \varepsilon < x. \end{cases}$$
Find the solution, $u^\varepsilon(x,t)$, of Burgers' equation if $u(x,0) = u_0^\varepsilon(x)$.
c. Find $\bar{u}(x,t) = \lim_{\varepsilon \downarrow 0} u^\varepsilon(x,t)$.
d. Since $\bar{u}(x,0) = u_0(x)$, why do we not have $\bar{u} = u$?

2.3 For $\varepsilon > 0$, consider the linear viscous regularization
$$u^\varepsilon_t + a u^\varepsilon_x = \varepsilon u^\varepsilon_{xx}, \qquad u^\varepsilon(x,0) = u_0(x) = \begin{cases} u_l & \text{for } x \le 0, \\ u_r & \text{for } x > 0, \end{cases}$$
where $a$ is a constant. Show that
$$\lim_{\varepsilon \downarrow 0} u^\varepsilon(x,t) = \begin{cases} u_l & \text{for } x < at, \\ u_r & \text{for } x > at, \end{cases}$$
and thus that $u^\varepsilon \to u_0(x - at)$ in $L^1(\mathbb{R} \times [0,T])$.

2.4 This exercise outlines another way to prove monotonicity. If $u$ and $v$ are entropy solutions, then we have
$$\int \int_0^T \bigl[ (u - v)\,\psi_t + \bigl( f(u) - f(v) \bigr)\,\psi_x \bigr]\,dt\,dx - \int (u - v)\,\psi\big|_{t=0}\,dx = 0.$$
Set $\Phi(\sigma) = |\sigma| + \sigma$, and use (2.54) to conclude that
$$\int \int_0^T \bigl[ \Phi(u - v)\,\psi_t + \Psi(u,v)\,\psi_x \bigr]\,dx\,dt - \int \Phi(u - v)\,\psi\big|_{t=0}\,dx \ge 0 \tag{2.74}$$
for a Lipschitz continuous $\Psi$. Choose a suitable test function $\psi$ to show that (2.74) implies the monotonicity property.

2.5 Let $c(x)$ be a continuous and locally bounded function. Consider the conservation law with “coefficient” $c$,
$$u_t + c(x)\,f(u)_x = 0, \qquad u(x,0) = u_0(x). \tag{2.75}$$
a. Define the characteristics for (2.75).
b. What is the Rankine–Hugoniot condition in this case?
c. Set $f(u) = u^2/2$, $c(x) = 1 + x^2$, and
$$u_0 = \begin{cases} -1 & \text{for } x < 0, \\ 1 & \text{for } x \ge 0. \end{cases}$$
Find the solution of (2.75) in this case.
d. Formulate a front-tracking algorithm for the general case of (2.75).
e. What is the entropy condition for (2.75)?

2.6 Consider the conservation law where the $x$ dependence is “inside the derivative,”
$$u_t + \bigl( c(x)\,f(u) \bigr)_x = 0. \tag{2.76}$$
The coefficient $c$ is assumed to be continuously differentiable.
a. Define the characteristics for (2.76).
b. What is the entropy condition for this problem?
c. Modify the proof of Proposition 2.10 to show that if $u$ and $v$ are entropy solutions of (2.76) with initial data $u_0$ and $v_0$, respectively, then
$$\| u(\,\cdot\,,t) - v(\,\cdot\,,t) \|_{L^1(\mathbb{R})} \le \| u_0 - v_0 \|_{L^1(\mathbb{R})}.$$

2.7 Let $\eta$ and $q$ be an entropy/entropy-flux pair as in (2.12). Assume that $u$ is a piecewise continuous solution (in the distributional sense) of $\eta(u)_t + q(u)_x \le 0$. Show that across any discontinuity $u$ satisfies
$$\sigma\,( \eta_l - \eta_r ) - ( q_l - q_r ) \ge 0,$$
where $\sigma$ is the speed of the discontinuity, and $q_{l,r}$ and $\eta_{l,r}$ are the values to the left and right of the discontinuity.
2.8 Consider the initial value problem for (the inviscid) Burgers' equation
$$u_t + \Bigl( \tfrac{1}{2} u^2 \Bigr)_x = 0, \qquad u(x,0) = u_0(x),$$
and assume that the entropy solution is bounded. Set $\eta = \frac{1}{2} u^2$, and find the corresponding entropy flux $q(u)$. Then choose a test function $\psi(x,t)$ to show that
$$\| u(\,\cdot\,,t) \|_{L^2(\mathbb{R})} \le \| u_0 \|_{L^2(\mathbb{R})}.$$
If $v$ is another bounded entropy solution of Burgers' equation with initial data $v_0$, is $\| u - v \|_{L^2(\mathbb{R})} \le \| u_0 - v_0 \|_{L^2(\mathbb{R})}$?

2.9 Consider the scalar conservation law with a zeroth-order term
$$u_t + f(u)_x = g(u), \tag{2.77}$$
where $g(u)$ is a locally bounded and Lipschitz continuous function.
a. Determine the Rankine–Hugoniot relation for (2.77).
b. Find the entropy condition for (2.77).

2.10 The initial value problem
$$v_t + H(v_x) = 0, \qquad v(x,0) = v_0(x), \tag{2.78}$$
is called a Hamilton–Jacobi equation. One is interested in solving (2.78) for $t > 0$, and the initial function $v_0$ is assumed to be bounded and uniformly continuous. Since the differentiation is inside the nonlinearity, we cannot define solutions in the distributional sense as for conservation laws. A viscosity solution of (2.78) is a bounded and uniformly continuous function $v$ such that for all test functions $\varphi$, the following hold:
if $v - \varphi$ has a local maximum at $(x,t)$, then $\varphi_t(x,t) + H\bigl( \varphi_x(x,t) \bigr) \le 0$ (subsolution);
if $v - \varphi$ has a local minimum at $(x,t)$, then $\varphi_t(x,t) + H\bigl( \varphi_x(x,t) \bigr) \ge 0$ (supersolution).
If we set $p = v_x$, then formally $p$ satisfies the conservation law
$$p_t + H(p)_x = 0, \qquad p(x,0) = \partial_x v_0(x). \tag{2.79}$$
Assume that
$$v_0(x) = v_0(0) + \begin{cases} p_l\,x & \text{for } x \le 0, \\ p_r\,x & \text{for } x > 0, \end{cases}$$
where $p_l$ and $p_r$ are constants. Let $p$ be an entropy solution of (2.79) and set
$$v(x,t) = v_0(0) + x\,p(x,t) - t\,H\bigl( p(x,t) \bigr).$$
Show that $v$ defined in this way is a viscosity solution of (2.78).

2.11 Let $f$ be piecewise $C^2$. Show that if we define the continuous, piecewise linear interpolation $f_\delta$ by $f_\delta(j\delta) = f(j\delta)$, then we have
$$\| f - f_\delta \|_{\mathrm{Lip}} \le \delta\,\| f'' \|_\infty;$$
cf. (2.73).

2.12 ([113]) Consider the Riemann problem
$$u_t + f(u)_x = 0, \qquad u|_{t=0} = u_0 = \begin{cases} u_l & \text{for } x < 0, \\ u_r & \text{for } x > 0. \end{cases}$$
Show that the solution can be written as
$$u(x,t) = \begin{cases} -\dfrac{d}{d\xi}\, \min_{u \in [u_l, u_r]} \bigl( f(u) - \xi u \bigr) \Big|_{\xi = x/t} & \text{for } u_l < u_r, \\[1ex] -\dfrac{d}{d\xi}\, \max_{u \in [u_r, u_l]} \bigl( f(u) - \xi u \bigr) \Big|_{\xi = x/t} & \text{for } u_r < u_l. \end{cases}$$
In particular, the formula is valid also for nonconvex flux functions.

2.13 Find the unique weak entropy solution of the initial value problem (cf. Exercise 2.9)
$$u_t + \Bigl( \tfrac{1}{2} u^2 \Bigr)_x = -u, \qquad u|_{t=0} = \begin{cases} 1 & \text{for } x \le -\tfrac{1}{2}, \\ -2x & \text{for } -\tfrac{1}{2} < x < 0, \\ 0 & \text{for } x \ge 0. \end{cases}$$

2.14 Find the weak entropy solution of the initial value problem
$$u_t + ( e^u )_x = 0, \qquad u(x,0) = \begin{cases} 2 & \text{for } x < 0, \\ 0 & \text{for } x \ge 0. \end{cases}$$

2.15 Find the weak entropy solution of the initial value problem
$$u_t + \bigl( u^3 \bigr)_x = 0$$
with initial data
a.
$$u(x,0) = \begin{cases} 1 & \text{for } x < 2, \\ 0 & \text{for } x \ge 2, \end{cases}$$
b.
$$u(x,0) = \begin{cases} 0 & \text{for } x < 2, \\ 1 & \text{for } x \ge 2. \end{cases}$$
2.16 Find the weak entropy solution of the initial value problem
$$u_t + \Bigl( \tfrac{1}{2} u^2 \Bigr)_x = 0$$
with initial data
$$u(x,0) = \begin{cases} 1 & \text{for } 0 < x < 1, \\ 0 & \text{otherwise.} \end{cases}$$

2.17 Redo Example 2.5 with the same flux function but initial data
$$u_0(x) = \begin{cases} -1 & \text{for } x \le x_1, \\ 1 & \text{for } x_1 < x < x_2, \\ -1 & \text{for } x \ge x_2. \end{cases}$$
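The min/max representation of Exercise 2.12 can be evaluated numerically. The sketch below is our own illustration; it uses the envelope theorem (the $\xi$-derivative of the minimum is minus the minimizer), so the formula reduces to an arg-min, respectively arg-max, over a grid of states:

```python
# Sketch (our own code): evaluate the Riemann-solution formula of Exercise 2.12.
# Assumption: by the envelope theorem, -d/dxi min_u (f(u) - xi*u) equals the
# minimizing u, so it suffices to locate the extremizer on a fine grid.

def riemann_solution(f, ul, ur, x, t, n=2001):
    """Entropy solution u(x, t) of a Riemann problem via the min/max formula."""
    xi = x / t
    lo, hi = (ul, ur) if ul < ur else (ur, ul)
    us = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    if ul < ur:
        return min(us, key=lambda u: f(u) - xi * u)   # rarefaction branch
    return max(us, key=lambda u: f(u) - xi * u)       # shock branch

burgers = lambda u: 0.5 * u * u
# Rarefaction for Burgers with ul = -1 < ur = 1: u(x, t) = x/t for |x| < t.
u_mid = riemann_solution(burgers, -1.0, 1.0, 0.5, 1.0)
assert abs(u_mid - 0.5) < 1e-2
```

For $u_l = 1 > u_r = -1$ the same routine returns the stationary Burgers shock, including the correct left and right states on either side of $x/t = 0$.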
3 A Short Course in Difference Methods
Computation will cure what ails you. Clifford Truesdell, The Computer, Ruin of Science and Threat to Mankind, 1980/1982
Although front tracking can be thought of as a numerical method, and has indeed been shown to be excellent for one-dimensional conservation laws, it is not part of the standard repertoire of numerical methods for conservation laws. Traditionally, difference methods have been central to the development of the theory of conservation laws, and the study of such methods is very important in applications. This chapter is intended to give a brief introduction to difference methods for conservation laws. The emphasis throughout will be on methods and general results rather than on particular examples. Although difference methods and the concepts we discuss can be formulated for systems, we will exclusively concentrate on scalar equations. This is partly because we want to keep this chapter introductory, and partly due to the lack of general results for difference methods applied to systems of conservation laws.
3.1 Conservative Methods

We are interested in numerical methods for the scalar conservation law in one dimension. (We will study multidimensional problems in Chapter 4.)
Thus we consider
$$u_t + f(u)_x = 0, \qquad u|_{t=0} = u_0. \tag{3.1}$$
A difference method is created by replacing the derivatives by finite differences, e.g.,
$$\frac{\Delta u}{\Delta t} + \frac{\Delta f(u)}{\Delta x} = 0. \tag{3.2}$$
Here $\Delta t$ and $\Delta x$ are small positive numbers. We shall use the notation
$$U_j^n = u( j\Delta x, n\Delta t ) \quad \text{and} \quad U^n = \bigl( U_{-K}^n, \ldots, U_j^n, \ldots, U_K^n \bigr),$$
where $u$ now is our numerical approximation to the solution of (3.1). Normally, since we are interested in the initial value problem (3.1), we know the initial approximation
$$U_j^0, \qquad -K \le j \le K,$$
and we want to use (3.2) to calculate $U^n$ for $n \in \mathbb{N}$. We will not say much about boundary conditions in this book. Often one assumes that the initial data is periodic, i.e.,
$$U_{-K+j}^0 = U_{K+j}^0 \qquad \text{for } 0 \le j \le 2K,$$
which gives $U_{-K+j}^n = U_{K+j}^n$. Another commonly used device is to assume that $\partial_x f(u) = 0$ at the boundary of the computational domain. For a numerical scheme this means that
$$f\bigl( U_{-K-j}^n \bigr) = f\bigl( U_{-K}^n \bigr) \quad \text{and} \quad f\bigl( U_{K+j}^n \bigr) = f\bigl( U_K^n \bigr) \qquad \text{for } j > 0.$$
For nonlinear equations, explicit methods are most common. These can be written
$$U^{n+1} = G\bigl( U^n, \ldots, U^{n-l} \bigr) \tag{3.3}$$
for some function $G$.

♦ Example 3.1 (A nonconservative method). If $f(u) = u^2/2$, then we can define an explicit method
$$U_j^{n+1} = U_j^n - \frac{\Delta t}{\Delta x}\,U_j^n\,\bigl( U_{j+1}^n - U_j^n \bigr). \tag{3.4}$$
If $U^0$ is given by
$$U_j^0 = \begin{cases} 0 & \text{for } j \le 0, \\ 1 & \text{for } j > 0, \end{cases}$$
then $U^n = U^0$ for all $n$. So the method produces a nicely converging sequence, but the limit is not a solution to the original problem. The difference method (3.4) is based on a nonconservative formulation. Henceforth, we will not discuss nonconservative schemes. ♦
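Example 3.1 is easy to reproduce. The sketch below is our own code; it applies the nonconservative scheme (3.4) to the step data, for which $U_j^n (U_{j+1}^n - U_j^n)$ vanishes at every $j$, so the numerical solution never moves, although the exact weak solution is a shock travelling with speed $1/2$:

```python
# Sketch (our own code) of Example 3.1: the nonconservative scheme (3.4)
# for f(u) = u^2/2 leaves the step data completely unchanged.

def nonconservative_step(U, lam):
    """U_j^{n+1} = U_j^n - lam * U_j^n * (U_{j+1}^n - U_j^n); last cell copied."""
    V = U[:]
    for j in range(len(U) - 1):
        V[j] = U[j] - lam * U[j] * (U[j + 1] - U[j])
    return V

U0 = [0.0] * 10 + [1.0] * 10      # U_j = 0 for j <= 0, 1 for j > 0
U = U0[:]
for _ in range(50):
    U = nonconservative_step(U, lam=0.5)
assert U == U0                    # the step data is a fixed point of the scheme
```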
We call a difference method conservative if it can be written in the form
$$U_j^{n+1} = G\bigl( U_{j-1-p}^n, \ldots, U_{j+q}^n \bigr) = U_j^n - \lambda \Bigl( F\bigl( U_{j-p}^n, \ldots, U_{j+q}^n \bigr) - F\bigl( U_{j-1-p}^n, \ldots, U_{j-1+q}^n \bigr) \Bigr), \tag{3.5}$$
where
$$\lambda = \frac{\Delta t}{\Delta x}.$$
The function $F$ is referred to as the numerical flux. For brevity, we shall often use the notation
$$G(U; j) = G\bigl( U_{j-1-p}, \ldots, U_{j+q} \bigr), \qquad F(U; j) = F\bigl( U_{j-p}, \ldots, U_{j+q} \bigr),$$
so that (3.5) reads
$$U_j^{n+1} = G(U^n; j) = U_j^n - \lambda\,\bigl( F(U^n; j) - F(U^n; j-1) \bigr).$$
Conservative methods have the property that $U$ is conserved, since
$$\sum_{j=-K}^{K} U_j^{n+1}\,\Delta x = \sum_{j=-K}^{K} U_j^n\,\Delta x - \Delta t\,\bigl( F(U^n; K) - F(U^n; -K-1) \bigr).$$
If we set $U_j^0$ equal to the average of $u_0$ over the $j$th grid cell, i.e.,
$$U_j^0 = \frac{1}{\Delta x} \int_{j\Delta x}^{(j+1)\Delta x} u_0(x)\,dx,$$
and for the moment assume that $F(U^n; K) = F(U^n; -K-1)$, then
$$\int U^n(x)\,dx = \int u_0(x)\,dx. \tag{3.6}$$
A conservative method is said to be consistent if
$$F(u, \ldots, u) = f(u). \tag{3.7}$$
(3.8)
Another common method is the Lax–Friedrichs scheme, usually written 1
1 n Ujn+1 = Uj +1 + Ujn−1 − λ f Ujn+1 − f Ujn−1 . (3.9) 2 2 In conservation form, this reads 1 n
1 n F (U n ; j) = U − Ujn+1 + f Uj + f Ujn+1 . 2λ j 2
Also, two-step methods are used. One is the Richtmyer two-step Lax–Wendroff scheme,
$$F(U^n; j) = f\Bigl( \frac{1}{2}\bigl( U_{j+1}^n + U_j^n \bigr) - \frac{\lambda}{2}\bigl( f( U_{j+1}^n ) - f( U_j^n ) \bigr) \Bigr). \tag{3.10}$$
Another two-step method is the MacCormack scheme,
$$F(U^n; j) = \frac{1}{2}\Bigl( f\bigl( U_j^n - \lambda\,( f( U_{j+1}^n ) - f( U_j^n ) ) \bigr) + f( U_j^n ) \Bigr). \tag{3.11}$$
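Both second-order fluxes can be written directly as functions of the two neighboring states. A sketch follows (our own code), with the consistency requirement (3.7) checked on constant states:

```python
# Sketch (our own code) of the two second-order fluxes of Example 3.2:
# the Richtmyer two-step Lax-Wendroff flux (3.10) and the MacCormack
# flux (3.11), for a given flux function f.

def lax_wendroff_flux(f, ul, ur, lam):
    # F(U; j) = f( (U_j + U_{j+1})/2 - (lam/2) * (f(U_{j+1}) - f(U_j)) )
    return f(0.5 * (ul + ur) - 0.5 * lam * (f(ur) - f(ul)))

def maccormack_flux(f, ul, ur, lam):
    # F(U; j) = ( f(U_j - lam * (f(U_{j+1}) - f(U_j))) + f(U_j) ) / 2
    predictor = ul - lam * (f(ur) - f(ul))
    return 0.5 * (f(predictor) + f(ul))

f = lambda u: 0.5 * u * u
# Consistency (3.7): both fluxes reduce to f(u) on constant states.
for u in (-1.0, 0.0, 0.7):
    assert abs(lax_wendroff_flux(f, u, u, 0.5) - f(u)) < 1e-14
    assert abs(maccormack_flux(f, u, u, 0.5) - f(u)) < 1e-14
```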
The Godunov scheme is a generalization of the upwind method. Let $\tilde{u}_j$ be the solution of the Riemann problem with initial data
$$\tilde{u}_j(x,0) = \begin{cases} U_j^n & \text{for } x \le 0, \\ U_{j+1}^n & \text{for } x > 0. \end{cases}$$
The numerical flux is given by
$$F(U; j) = f\bigl( \tilde{u}_j(0, \Delta t) \bigr). \tag{3.12}$$
To avoid waves from neighboring grid cells starting to interact before the next time step, we cannot take too long time steps $\Delta t$. Since the maximum speed is bounded by $\max |f'(u)|$, we need to enforce the requirement that
$$\lambda\,| f'(u) | < 1. \tag{3.13}$$
The condition (3.13) is called the Courant–Friedrichs–Lewy (CFL) condition. If all characteristic speeds are nonnegative (nonpositive), Godunov's method reduces to the upwind (downwind) method.

The Lax–Friedrichs and Godunov schemes are both of first order in the sense that the local truncation error is of order one. (We shall return to this concept below.) However, both the Lax–Wendroff and MacCormack methods are of second order. In general, higher-order methods are good for smooth solutions, but they also produce solutions that oscillate in the vicinity of discontinuities. On the other hand, lower-order methods have “enough diffusion” to prevent oscillations. Therefore, one often uses hybrid methods, usually consisting of a linear combination of a lower- and a higher-order method. The numerical flux is then given by
$$F(U; j) = \theta(U; j)\,F_L(U; j) + \bigl( 1 - \theta(U; j) \bigr)\,F_H(U; j), \tag{3.14}$$
where $F_L$ denotes a lower-order numerical flux, and $F_H$ a higher-order numerical flux. The function $\theta(U; j)$ is close to zero where $U$ is smooth and close to one near discontinuities. Needless to say, choosing appropriate $\theta$'s is a discipline in its own right. We have implemented a method (called fluxlim in Figure 3.1) that is a combination of the (second-order) MacCormack method and the (first-order) Lax–Friedrichs scheme, and
this scheme is compared with the “pure” methods in that figure. We somewhat arbitrarily used
$$\theta(U; j) = 1 - \frac{1}{1 + | \Delta_{j,\Delta x} U |},$$
where $\Delta_{j,\Delta x} U$ is an approximation to the second derivative of $U$ with respect to $x$,
$$\Delta_{j,\Delta x} U = \frac{U_{j+1} - 2 U_j + U_{j-1}}{\Delta x^2}.$$
1 1 (gj + gj+1 ) − ΔU Ujn . 2 2λ
Here ΔU Ujn = Ujn+1 − Ujn , and n+1/2
gj = f (uj where
)+
1 u , 2λ j
uj = MinMod ΔU Ujn−1 , ΔU Ujn , λ n+1/2 uj = Ujn − f Ujn uj , 2
and 1 (sign (a) + sign (b)) min (|a| , |b|) . 2 This method is labeled slopelim in the figures. Now we show how these methods perform on two test examples. In both examples the flux function is given by MinMod(a, b) :=
f (u) =
u2 . u2 + (1 − u)2
(3.15)
The example is motivated by applications in oil recovery, where one often encounters flux functions that have a shape similar to that of f ; that is, f ≥ 0 and f (u) = 0 at a single point u. The model is called the Buckley–Leverett equation. The first example uses initial data 1 for x ≤ 0, u0 (x) = (3.16) 0 for x > 0.
68
3. A Short Course in Difference Methods Upwind method
Lax–Friedrichs method 1
1
u(x ( 1)
0.9
0.8
0.7
0.7
0.6
0.6
0.5
0.5
0.4
0.4
0.3
0.3
0.2
0.2
0.1
0.1
0
0
0.2
0.4
0.6
0.8
x
1
1.2
1.4
u(x ( 1)
0.9
0.8
1.6
0
0
0.2
0.8
x
1
1.2
1.4
1.6
1
u(x ( 1)
0.9
0.8
0.7
0.7
0.6
0.6
0.5
0.5
0.4
0.4
0.3
0.3
0.2
0.2
0.1
0.1 0
0.2
0.4
0.6
0.8
x
1
1.2
1.4
u(x ( 1)
0.9
0.8
1.6
0
0
0.2
0.4
Fluxlim method
0.6
0.8
x
1
1.2
1.4
1.6
Slopelim method
1
1
u(x ( 1)
0.9
0.8
0.7
0.7
0.6
0.6
0.5
0.5
0.4
0.4
0.3
0.3
0.2
0.2
0.1
0.1 0
0.2
0.4
0.6
0.8
x
1
1.2
1.4
u(x ( 1)
0.9
0.8
0
0.6
MacCormack method
Lax–Wendroff method 1
0
0.4
1.6
0
0
0.2
0.4
0.6
0.8
x
1
1.2
1.4
1.6
Figure 3.1. Computed solutions at time t = 1 for flux function (3.15) and initial data (3.16).
In Figure 3.1 we show the computed solution at time $t = 1$ for all methods, using 30 grid points in the interval $[-0.1, 1.6]$, with $\Delta x = 1.7/29$ and $\Delta t = 0.5\,\Delta x$. The second example uses initial data
$$u_0(x) = \begin{cases} 1 & \text{for } x \in [0,1], \\ 0 & \text{otherwise,} \end{cases} \tag{3.17}$$
and 30 grid points in the interval $[-0.1, 2.6]$, with $\Delta x = 2.7/29$ and $\Delta t = 0.5\,\Delta x$. In Figure 3.2 we also show a reference solution computed by the upwind method using 500 grid points. The most notable feature of the plots in Figure 3.2 is the oscillatory solutions computed by the second-order methods. We shall show that if a sequence of solutions produced by a consistent, conservative method converges, then the limit is a weak solution. The exact solution to both these problems can be calculated by the method of characteristics. ♦

The local truncation error $L_{\Delta t}$ of a numerical method is defined (formally) as
$$L_{\Delta t}(x) = \frac{1}{\Delta t}\bigl( S(\Delta t)u - S_N(\Delta t)u \bigr)(x), \tag{3.18}$$
where $S(t)$ is the solution operator associated with (3.1); that is, $u = S(t)u_0$ denotes the solution at time $t$, and $S_N(t)$ is the formal solution operator associated with the numerical method, i.e.,
$$S_N(\Delta t)u(x) = u(x) - \lambda\,\bigl( F(u; j) - F(u; j-1) \bigr).$$
To make matters more concrete, assume that we are studying the upwind method. Then
$$S_N(\Delta t)u(x) = u(x) - \frac{\Delta t}{\Delta x}\,\bigl( f( u(x) ) - f( u(x - \Delta x) ) \bigr).$$
We say that the method is of $k$th order if for all smooth solutions $u(x,t)$,
$$| L_{\Delta t}(x) | = O\bigl( \Delta t^k \bigr) \qquad \text{as } \Delta t \to 0.$$
That a method is of high order, $k \ge 2$, usually implies that it is “good” for computing smooth solutions.

♦ Example 3.3 (Local truncation error). We verify that the upwind method is of first order:
$$\begin{aligned}
L_{\Delta t}(x) &= \frac{1}{\Delta t}\Bigl[ u(x, t+\Delta t) - u(x,t) + \frac{\Delta t}{\Delta x}\bigl( f( u(x,t) ) - f( u(x-\Delta x, t) ) \bigr) \Bigr] \\
&= \frac{1}{\Delta t}\Bigl[ u + \Delta t\,u_t + \frac{(\Delta t)^2}{2}\,u_{tt} + \cdots - u
  - \lambda\Bigl( f'(u)\Bigl( -u_x\,\Delta x + \frac{(\Delta x)^2}{2}\,u_{xx} + \cdots \Bigr) + \frac{1}{2}\,f''(u)\,\bigl( -u_x\,\Delta x + \cdots \bigr)^2 \Bigr) \Bigr] \\
&= \frac{1}{\Delta t}\Bigl[ \Delta t\,\bigl( u_t + f(u)_x \bigr) + \frac{(\Delta t)^2}{2}\,u_{tt} - \frac{\Delta t\,\Delta x}{2}\,\bigl( u_{xx}\,f'(u) + f''(u)\,u_x^2 \bigr) + \cdots \Bigr]
\end{aligned}$$
[Figure: six panels (Lax–Friedrichs, upwind, Lax–Wendroff, MacCormack, fluxlim, and slopelim), each plotting the computed $u(x,1)$ against $x$, together with a fine-grid reference solution.]

Figure 3.2. Computed solutions at time t = 1 for flux function (3.15) and initial data (3.17).
$$\begin{aligned}
&= u_t + f(u)_x + \frac{1}{2}\,\bigl( \Delta t\,u_{tt} - \Delta x\,\bigl( f'(u)\,u_x \bigr)_x \bigr) + O\bigl( (\Delta t)^2 \bigr) \\
&= u_t + f(u)_x + \frac{\Delta x}{2}\,\bigl( \lambda\,u_{tt} - \bigl( f'(u)\,u_x \bigr)_x \bigr) + O\bigl( (\Delta t)^2 \bigr).
\end{aligned}$$
Assuming that u is a smooth solution of (3.1), we find that
$$u_{tt} = \Bigl( \bigl( f'(u) \bigr)^2\,u_x \Bigr)_x,$$
3.1. Conservative Methods
and inserting this into the previous equation we obtain
$$ L_{\Delta t} = \frac{\Delta t}{2\lambda}\,\frac{\partial}{\partial x}\bigl( f'(u)\,( \lambda f'(u) - 1 )\, u_x \bigr) + \mathcal{O}\bigl( (\Delta t)^2 \bigr). \qquad (3.19) $$
Hence, the upwind method is of first order. The above computations were purely formal, assuming sufficient smoothness for the Taylor expansions to be valid. This means that Godunov's scheme is also of first order. Similarly, computations based on the Lax–Friedrichs scheme yield
$$ L_{\Delta t} = \frac{\Delta t}{2\lambda^2}\,\frac{\partial}{\partial x}\Bigl( \bigl( ( \lambda f'(u) )^2 - 1 \bigr)\, u_x \Bigr) + \mathcal{O}\bigl( \Delta t^2 \bigr). \qquad (3.20) $$
Consequently, the Lax–Friedrichs scheme is also of first order. From the above computations it also emerges that the Lax–Friedrichs scheme is second-order accurate on the equation
$$ u_t + f(u)_x = \frac{\Delta t}{2\lambda^2}\,\Bigl( \bigl( 1 - ( \lambda f'(u) )^2 \bigr)\, u_x \Bigr)_x. \qquad (3.21) $$
This is called the model equation for the Lax–Friedrichs scheme. In order for this to be well posed, the coefficient of $u_{xx}$ on the right-hand side must be nonnegative. Hence
$$ | \lambda f'(u) | \le 1. \qquad (3.22) $$
This is a stability restriction on $\lambda$, and is the Courant–Friedrichs–Lewy (CFL) condition that we encountered in (3.13). The model equation for the upwind method is
$$ u_t + f(u)_x = \frac{\Delta t}{2\lambda}\,\bigl( f'(u)\,( 1 - \lambda f'(u) )\, u_x \bigr)_x. \qquad (3.23) $$
In order for this equation to be well posed, we must have $f'(u) \ge 0$ and $\lambda f'(u) \le 1$. ♦
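The first-order behavior derived in Example 3.3 is easy to observe numerically. The following sketch is an illustration, not taken from the text: it assumes the linear flux $f(u) = au$, for which $S(\Delta t)u(x) = u(x - a\,\Delta t)$ is exact, and estimates $\max_x |L_{\Delta t}(x)|$ from (3.18) for two time steps at fixed $\lambda = \Delta t/\Delta x$.

```python
import numpy as np

a = 1.0      # advection speed; assumed linear flux f(u) = a*u, so S(dt) is a shift
lam = 0.8    # fixed mesh ratio dt/dx, satisfying the CFL condition lam*a <= 1

def truncation_error(dt):
    dx = dt / lam
    n = int(round(1.0 / dx))
    x = np.arange(n) * dx
    u = np.sin(2 * np.pi * x)                     # smooth 1-periodic profile
    exact = np.sin(2 * np.pi * (x - a * dt))      # S(dt)u: the exact shift
    upwind = u - lam * a * (u - np.roll(u, 1))    # S_N(dt)u: one upwind step
    return np.abs(exact - upwind).max() / dt      # sup-norm of L_dt

e1, e2 = truncation_error(1e-3), truncation_error(5e-4)
ratio = e1 / e2
print(ratio)    # close to 2: halving dt halves L_dt, i.e. first order
```

Halving $\Delta t$ (at fixed $\lambda$) roughly halves the truncation error, in agreement with (3.19).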
From the above examples, we see that first-order methods have model equations with a diffusive term. Similarly, one finds that second-order methods have model equations with a dispersive right-hand side. Therefore, the oscillations observed in the computations were to be expected.

From now on we let the function $u_{\Delta t}$ be defined by
$$ u_{\Delta t}(x,t) = U_j^n, \qquad (x,t) \in [j\Delta x, (j+1)\Delta x) \times [n\Delta t, (n+1)\Delta t). \qquad (3.24) $$
Observe that
$$ \int_{\mathbb{R}} u_{\Delta t}(x,t)\,dx = \Delta x \sum_j U_j^n, \qquad \text{for } n\Delta t \le t < (n+1)\Delta t. $$
We briefly mentioned in Example 3.2 the fact that if $u_{\Delta t}$ converges, then the limit is a weak solution. Precisely, we have the well-known Lax–Wendroff theorem.

Theorem 3.4 (Lax–Wendroff theorem). Let $u_{\Delta t}$ be computed from a conservative and consistent method. Assume that $\mathrm{T.V.}_x(u_{\Delta t})$ is uniformly bounded in $\Delta t$. Consider a subsequence $u_{\Delta t_k}$ such that $\Delta t_k \to 0$, and assume that $u_{\Delta t_k}$ converges in $L^1_{\mathrm{loc}}$ as $\Delta t_k \to 0$. Then the limit is a weak solution to (3.1).

Proof. The proof uses summation by parts. Let $\varphi(x,t)$ be a test function. By the definition of $U_j^{n+1}$,
$$ \sum_{n=0}^{N} \sum_{j=-\infty}^{\infty} \varphi(x_j, t_n)\bigl( U_j^{n+1} - U_j^n \bigr) = -\frac{\Delta t}{\Delta x} \sum_{n=0}^{N} \sum_{j=-\infty}^{\infty} \varphi(x_j, t_n)\bigl( F(U^n; j) - F(U^n; j-1) \bigr), $$
where $x_j = j\Delta x$ and $t_n = n\Delta t$, and we choose $T = N\Delta t$ such that $\varphi = 0$ for $t \ge T$. After a summation by parts we get
$$ -\sum_{j=-\infty}^{\infty} \varphi(x_j, 0)\,U_j^0 - \sum_{j=-\infty}^{\infty} \sum_{n=1}^{N} \bigl( \varphi(x_j, t_n) - \varphi(x_j, t_{n-1}) \bigr) U_j^n - \frac{\Delta t}{\Delta x} \sum_{n=0}^{N} \sum_{j=-\infty}^{\infty} \bigl( \varphi(x_{j+1}, t_n) - \varphi(x_j, t_n) \bigr) F(U^n; j) = 0. $$
Rearranging, we find that
$$ \Delta t\,\Delta x \sum_{n=1}^{N} \sum_{j=-\infty}^{\infty} \Bigl( \frac{\varphi(x_j, t_n) - \varphi(x_j, t_{n-1})}{\Delta t}\, U_j^n + \frac{\varphi(x_{j+1}, t_n) - \varphi(x_j, t_n)}{\Delta x}\, F(U^n; j) \Bigr) = -\Delta x \sum_{j=-\infty}^{\infty} \varphi(x_j, 0)\, U_j^0. \qquad (3.25) $$
This almost looks like a Riemann sum for the weak formulation of (3.1), were it not for $F$. To conclude that the limit is a weak solution we must show that
$$ \Delta t\,\Delta x \sum_{n=1}^{N} \sum_{j=-\infty}^{\infty} \bigl| F(U^n; j) - f(U_j^n) \bigr| \qquad (3.26) $$
tends to zero as $\Delta t \to 0$. Using consistency, we find that (3.26) equals
$$ \Delta t\,\Delta x \sum_{n=1}^{N} \sum_{j=-\infty}^{\infty} \bigl| F\bigl( U_{j-p}^n, \dots, U_{j+q}^n \bigr) - F\bigl( U_j^n, \dots, U_j^n \bigr) \bigr|, $$
which by the Lipschitz continuity of $F$ is less than
$$ \Delta t\,\Delta x\, M \sum_{n=1}^{N} \sum_{j=-\infty}^{\infty} \sum_{k=-p}^{q} \bigl| U_{j+k}^n - U_j^n \bigr| \le \frac12\,\bigl( q(q+1) + p(p+1) \bigr)\,\Delta t\,\Delta x\, M \sum_{n=1}^{N} \sum_{j=-\infty}^{\infty} \bigl| U_{j+1}^n - U_j^n \bigr| \le (q^2 + p^2)\,\Delta x\, M\,\mathrm{T.V.}(u_{\Delta t})\, T, $$
where $M$ is larger than the Lipschitz constant of $F$. Therefore, (3.26) is small for small $\Delta x$, and the limit is a weak solution.

We proved in Theorem 2.14 that the solution of a scalar conservation law in one dimension possesses several properties. The corresponding properties for conservative and consistent numerical schemes read as follows:

Definition 3.5. Let $u_{\Delta t}$ be computed from a conservative and consistent method.
• A method is said to be total variation stable¹ if the total variation of $U^n$ is uniformly bounded, independently of $\Delta x$ and $\Delta t$.
• We say that a numerical method is total variation diminishing (TVD) if $\mathrm{T.V.}(U^{n+1}) \le \mathrm{T.V.}(U^n)$ for all $n \in \mathbb{N}_0$.
• A method is called monotonicity preserving if monotone initial data implies that $U^n$ is monotone for all $n \in \mathbb{N}$.
• A numerical method is called $L^1$-contractive if, whenever $v_{\Delta t}$ is another numerical solution with initial data $v_0$, we have $\| u_{\Delta t}(t) - v_{\Delta t}(t) \|_1 \le \| u_{\Delta t}(0) - v_{\Delta t}(0) \|_1$ for all $t \ge 0$. Alternatively, we can of course write this as
$$ \sum_j \bigl| U_j^{n+1} - V_j^{n+1} \bigr| \le \sum_j \bigl| U_j^n - V_j^n \bigr|, \qquad n \in \mathbb{N}_0. $$
• A method is said to be monotone if for initial data $U^0$ and $V^0$ we have
$$ U_j^0 \le V_j^0, \quad j \in \mathbb{Z} \quad \Longrightarrow \quad U_j^n \le V_j^n, \quad j \in \mathbb{Z},\ n \in \mathbb{N}. $$
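The conservation and TVD properties above can be checked directly in a computation. A minimal sketch, assuming Burgers' flux $f(u) = u^2/2$, a periodic grid, and the Lax–Friedrichs scheme under the CFL condition:

```python
import numpy as np

# Conservation and TVD check for Lax-Friedrichs on Burgers' equation
# (illustrative setup: f(u) = u^2/2, periodic grid, CFL satisfied).
N, lam = 400, 0.4                      # lam = dt/dx, and |f'(u)| <= 1 on this data
x = np.linspace(-1.0, 1.0, N, endpoint=False)
u = np.where(np.abs(x) < 0.5, 1.0, 0.0)   # square pulse initial data

f = lambda v: 0.5 * v * v

def step(u):
    up, um = np.roll(u, -1), np.roll(u, 1)   # U_{j+1} and U_{j-1}
    return 0.5 * (up + um) - 0.5 * lam * (f(up) - f(um))

def tv(u):                                   # total variation on the periodic grid
    return np.abs(np.diff(np.append(u, u[0]))).sum()

mass0, tv_prev, tvd = u.sum(), tv(u), True
for _ in range(200):
    u = step(u)
    tvd = tvd and tv(u) <= tv_prev + 1e-12
    tv_prev = tv(u)

conserved = abs(u.sum() - mass0) < 1e-8
print(conserved, tvd)                        # both True
```

The flux differences telescope over a periodic grid, so $\sum_j U_j^n$ is conserved to rounding error, and the total variation never increases from one step to the next.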
The above notions are strongly interrelated, as the next theorem shows.

Theorem 3.6. For conservative and consistent methods the following hold:
(i) Any monotone method is $L^1$-contractive, assuming $u_{\Delta t}(0) - v_{\Delta t}(0) \in L^1(\mathbb{R})$.
(ii) Any $L^1$-contractive method is TVD, assuming that $\mathrm{T.V.}(u_0)$ is finite.
(iii) Any TVD method is monotonicity preserving.

¹This definition is slightly different from the standard definition of T.V. stable methods.

Proof. (i) We apply the Crandall–Tartar lemma, Lemma 2.12, with $\Omega = \mathbb{R}$, and $D$ equal to the set of all functions in $L^1$ that are piecewise constant on the grid $\Delta x\,\mathbb{Z}$, and finally we let $T(U^0) = U^n$. Since the method is conservative (cf. (3.6)), we have that $\sum_j U_j^n = \sum_j U_j^0$, or
$$ \int T(U^0) = \int U^n = \int U^0. $$
Lemma 2.12 immediately implies that
$$ \| u_{\Delta t} - v_{\Delta t} \|_1 = \Delta x \sum_j \bigl| U_j^n - V_j^n \bigr| \le \Delta x \sum_j \bigl| U_j^0 - V_j^0 \bigr| = \| u_{\Delta t}(0) - v_{\Delta t}(0) \|_1. $$
(ii) Assume now that the method is $L^1$-contractive, i.e.,
$$ \sum_j \bigl| U_j^{n+1} - V_j^{n+1} \bigr| \le \sum_j \bigl| U_j^n - V_j^n \bigr|. $$
Let $V^n$ be the numerical solution with initial data $V_i^0 = U_{i+1}^0$. Then, by the translation invariance implied by (3.5), $V_i^n = U_{i+1}^n$ for all $n$. Furthermore,
$$ \mathrm{T.V.}\bigl( U^{n+1} \bigr) = \sum_{j=-\infty}^{\infty} \bigl| U_{j+1}^{n+1} - U_j^{n+1} \bigr| = \sum_j \bigl| U_j^{n+1} - V_j^{n+1} \bigr| \le \sum_j \bigl| U_j^n - V_j^n \bigr| = \mathrm{T.V.}(U^n). $$
(iii) Consider now a TVD method, and assume that we have monotone initial data. Since $\mathrm{T.V.}(U^0)$ is finite, the limits
$$ U_L = \lim_{j \to -\infty} U_j^0 \quad \text{and} \quad U_R = \lim_{j \to \infty} U_j^0 $$
exist. Then $\mathrm{T.V.}(U^0) = | U_R - U_L |$. If $U^1$ were not monotone, then $\mathrm{T.V.}(U^1) > | U_R - U_L | = \mathrm{T.V.}(U^0)$, which is a contradiction.

We can summarize the above theorem as follows:
monotone ⇒ $L^1$-contractive ⇒ TVD ⇒ monotonicity preserving.
Monotonicity is relatively easy to check for explicit methods, e.g., by calculating the partial derivatives $\partial G / \partial U_i$ in (3.3).

♦ Example 3.7 (Lax–Friedrichs scheme).
Recall from Example 3.2 that the Lax–Friedrichs scheme is given by
$$ U_j^{n+1} = \frac12\bigl( U_{j+1}^n + U_{j-1}^n \bigr) - \frac{\lambda}{2}\bigl( f(U_{j+1}^n) - f(U_{j-1}^n) \bigr). $$
Computing partial derivatives we obtain
$$ \frac{\partial U_j^{n+1}}{\partial U_k^n} = \begin{cases} \bigl( 1 - \lambda f'(U_k^n) \bigr)/2 & \text{for } k = j+1, \\ \bigl( 1 + \lambda f'(U_k^n) \bigr)/2 & \text{for } k = j-1, \\ 0 & \text{otherwise,} \end{cases} $$
and hence we see that the Lax–Friedrichs scheme is monotone as long as the CFL condition
$$ \lambda\,| f'(u) | \le 1 $$
is fulfilled. ♦
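A quick way to test the monotonicity criterion of Example 3.7 is to raise a single cell of the data and verify that no cell of the update decreases. A sketch under assumed Burgers' flux with $\lambda \max |f'(u)| < 1$ on the data used:

```python
import numpy as np

# Spot-check of monotonicity for Lax-Friedrichs (assumed flux: Burgers, f'(u) = u).
rng = np.random.default_rng(0)
lam = 0.9                        # CFL: lam * max|f'(u)| < 1 on the data below

f = lambda v: 0.5 * v * v

def lxf(u):
    up, um = np.roll(u, -1), np.roll(u, 1)
    return 0.5 * (up + um) - 0.5 * lam * (f(up) - f(um))

u = rng.uniform(-1.0, 1.0, 50)
v = u.copy()
v[20] += 0.05                    # v >= u componentwise

monotone = bool(np.all(lxf(v) >= lxf(u) - 1e-14))
print(monotone)                  # True: each dU^{n+1}_j/dU^n_k is nonnegative
```

Since only the neighbors of the perturbed cell are affected, this exercises exactly the two nonzero partial derivatives computed above.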
Theorem 3.8. Let $u_0 \in L^1(\mathbb{R})$ have bounded variation. Assume that $u_{\Delta t}$ is computed with a method that is conservative, consistent, total variation stable, and uniformly bounded; that is,
$$ \mathrm{T.V.}(u_{\Delta t}) \le M \quad \text{and} \quad \| u_{\Delta t} \|_\infty \le M, $$
where $M$ is independent of $\Delta x$ and $\Delta t$. Let $T > 0$. Then $\{ u_{\Delta t}(t) \}$ has a subsequence that converges for all $t \in [0,T]$ to a weak solution $u(t)$ in $L^1_{\mathrm{loc}}(\mathbb{R})$. Furthermore, the limit is in $C\bigl( [0,T]; L^1_{\mathrm{loc}}(\mathbb{R}) \bigr)$.

Proof. We intend to apply Theorem A.8. It remains to show that
$$ \int_a^b | u_{\Delta t}(x,t) - u_{\Delta t}(x,s) |\,dx \le C\,| t - s | + o(1), \quad \text{as } \Delta t \to 0, \quad s, t \in [0,T]. $$
Lipschitz continuity of the flux function implies, for any fixed $\Delta t$,
$$\begin{aligned}
\bigl| U_j^{n+1} - U_j^n \bigr| &= \lambda\,\bigl| F(U^n; j) - F(U^n; j-1) \bigr| = \lambda\,\bigl| F\bigl( U_{j-p}^n, \dots, U_{j+q}^n \bigr) - F\bigl( U_{j-p-1}^n, \dots, U_{j+q-1}^n \bigr) \bigr| \\
&\le \lambda L\,\bigl( \bigl| U_{j-p}^n - U_{j-p-1}^n \bigr| + \cdots + \bigl| U_{j+q}^n - U_{j+q-1}^n \bigr| \bigr),
\end{aligned}$$
from which we conclude that
$$ \| u_{\Delta t}(\,\cdot\,, t_{n+1}) - u_{\Delta t}(\,\cdot\,, t_n) \|_1 = \Delta x \sum_{j=-\infty}^{\infty} \bigl| U_j^{n+1} - U_j^n \bigr| \le L\,(p+q+1)\,\mathrm{T.V.}(U^n)\,\Delta t \le L\,(p+q+1)\,M\,\Delta t, $$
where $L$ is the Lipschitz constant of $F$. More generally,
$$ \| u_{\Delta t}(\,\cdot\,, t_m) - u_{\Delta t}(\,\cdot\,, t_n) \|_1 \le L\,(p+q+1)\,M\,| n - m |\,\Delta t. $$
Now let $\tau_1, \tau_2 \in [0,T]$, and choose $\tilde t_1, \tilde t_2 \in \{ n\Delta t \mid 0 \le n \le T/\Delta t \}$ such that $0 \le \tau_j - \tilde t_j < \Delta t$ for $j = 1, 2$. By construction $u_{\Delta t}(\tau_j) = u_{\Delta t}(\tilde t_j)$, and hence
$$\begin{aligned}
\| u_{\Delta t}(\,\cdot\,, \tau_1) - u_{\Delta t}(\,\cdot\,, \tau_2) \|_1 &\le \| u_{\Delta t}(\,\cdot\,, \tau_1) - u_{\Delta t}(\,\cdot\,, \tilde t_1) \|_1 + \| u_{\Delta t}(\,\cdot\,, \tilde t_1) - u_{\Delta t}(\,\cdot\,, \tilde t_2) \|_1 + \| u_{\Delta t}(\,\cdot\,, \tilde t_2) - u_{\Delta t}(\,\cdot\,, \tau_2) \|_1 \\
&\le (p+q+1)\,L\,M\,\bigl| \tilde t_1 - \tilde t_2 \bigr| \le (p+q+1)\,L\,M\,| \tau_1 - \tau_2 | + \mathcal{O}(\Delta t).
\end{aligned}$$
Observe that this estimate is uniform in $\tau_1, \tau_2 \in [0,T]$. We conclude that $u_{\Delta t} \to u$ in $C([0,T]; L^1([a,b]))$ for a sequence $\Delta t \to 0$. The Lax–Wendroff theorem then says that this limit is a weak solution.

At this point it is convenient to introduce the concept of entropy pairs, or entropy/entropy-flux pairs.² Recall that a pair of functions $(\eta(u), q(u))$ with $\eta$ convex is called an entropy pair if
$$ q'(u) = f'(u)\,\eta'(u). \qquad (3.27) $$
The reason for introducing this concept is that the entropy condition can now be reformulated using $(\eta, q)$. To see this, assume that $u$ is a solution of the viscous conservation law
$$ u_t + f(u)_x = \varepsilon u_{xx}. \qquad (3.28) $$
Assume, or consult Appendix B, that this equation has a unique twice-differentiable solution. Hence, multiplying by $\eta'(u)$ yields (cf. (2.10))
$$ \eta(u)_t + q(u)_x = \varepsilon\,\eta'(u)\,u_{xx} = \varepsilon\,\bigl( \eta'(u)\,u_x \bigr)_x - \varepsilon\,\eta''(u)\,(u_x)^2. $$
If $\eta'$ is bounded and $\eta'' > 0$, then the first term on the right of the above equation tends to zero as a distribution as $\varepsilon \to 0$, while the second term is nonpositive. Consequently, if the solution of (3.1) is to be the limit of the solutions of (3.28) as $\varepsilon \to 0$, the solution of (3.1) must satisfy (cf. (2.12))
$$ \eta(u)_t + q(u)_x \le 0 \qquad (3.29) $$
as a distribution. Choosing $\eta(u) = |u - k|$ we recover the Kružkov entropy condition; see (2.17). We have demonstrated that if a function satisfies (2.46) for all $k$, then it satisfies (3.29) for all convex $\eta$, and vice versa; see Remark 2.1. Hence, the Kružkov entropy condition is equivalent to demanding (3.29) for all convex $\eta$.

The analogue of an entropy pair for difference schemes reads as follows. Write
$$ a \vee b = \max(a, b) \quad \text{and} \quad a \wedge b = \min(a, b), $$
and observe the trivial identity $| a - b | = a \vee b - a \wedge b$. Then we define the numerical entropy flux $Q$ by
$$ Q(U; j) = F(U \vee k; j) - F(U \wedge k; j), $$
or explicitly,
$$ Q\bigl( U_{j-p}, \dots, U_{j+p} \bigr) = F\bigl( U_{j-p} \vee k, \dots, U_{j+p} \vee k \bigr) - F\bigl( U_{j-p} \wedge k, \dots, U_{j+p} \wedge k \bigr). $$
We have that $Q$ is consistent with the usual entropy flux, i.e., $Q(u, \dots, u) = \mathrm{sign}(u - k)\,( f(u) - f(k) )$.

²We have already encountered an entropy pair with $\eta(u) = |u - k|$ and $q(u) = \mathrm{sign}(u - k)\,( f(u) - f(k) )$ when we introduced the Kružkov entropy condition in Chapter 2.

Returning to monotone difference schemes, we have the following result.

Theorem 3.9. Under the assumptions of Theorem 3.8, the approximate solutions computed by a conservative, consistent, and monotone difference method converge to the entropy solution as $\Delta t \to 0$.

Proof. Theorem 3.8 allows us to conclude that $u_{\Delta t}$ has a subsequence that converges in $C([0,T]; L^1([a,b]))$ to a weak solution. It remains to show that the limit satisfies a discrete Kružkov form. By a direct calculation we find that
$$ \bigl| U_j^n - k \bigr| - \lambda\,\bigl( Q(U^n; j) - Q(U^n; j-1) \bigr) = G(U^n \vee k; j) - G(U^n \wedge k; j). $$
Using that $U_j^{n+1} = G(U^n; j)$ and that $k = G(k; j)$, the monotonicity of the scheme implies that
$$ G(U^n \vee k; j) \ge G(U^n; j) \vee G(k; j) = G(U^n; j) \vee k, $$
$$ -G(U^n \wedge k; j) \ge -G(U^n; j) \wedge G(k; j) = -G(U^n; j) \wedge k. $$
Therefore,
$$ \bigl| U_j^{n+1} - k \bigr| - \bigl| U_j^n - k \bigr| + \lambda\,\bigl( Q(U^n; j) - Q(U^n; j-1) \bigr) \le 0. \qquad (3.30) $$
Applying the technique used in proving the Lax–Wendroff theorem to (3.30) gives that the limit $u$ satisfies
$$ \iint \bigl( | u - k |\,\varphi_t + \mathrm{sign}(u - k)\,( f(u) - f(k) )\,\varphi_x \bigr)\,dx\,dt \ge 0. $$
Note that we can also use the above theorem to conclude the existence of weak entropy solutions to scalar conservation laws.

Now we shall examine the local truncation error of a general conservative, consistent, and monotone method. Since this can be written
$$ U_j^{n+1} = G(U^n; j) = G\bigl( U_{j-p-1}^n, \dots, U_{j+q}^n \bigr) = U_j^n - \lambda\,\Bigl( F\bigl( U_{j-p}^n, \dots, U_{j+q}^n \bigr) - F\bigl( U_{j-p-1}^n, \dots, U_{j+q-1}^n \bigr) \Bigr), $$
3. A Short Course in Difference Methods
we write G = G(u0 , . . . , up+q+1 ) and
F = F (u1 , . . . , up+q+1 ).
We assume that F , and hence G, is three times continuously differentiable with respect to all arguments, and write the derivatives with respect to the ith argument as ∂i G(u0 , . . . , up+q+1 ) and
∂i F (u1 , . . . , up+q+1 ).
We set ∂i F = 0 if i = 0. Throughout this calculation, we assume that the jth slot of G contains Ujn , so that G(u0 , . . . , up+q+1 ) = uj − λ(. . . ). By consistency we have that G(u, . . . , u) = u
and F (u, . . . , u) = f (u).
Using this we find that
p+q+1
∂i F (u, . . . , u) = f (u),
(3.31)
i=1
∂i G = δi,j − λ (∂ ∂i−1 F − ∂i F ) ,
(3.32)
2 2 ∂i,k G = −λ ∂i2−1,k−1 F − ∂i,k F .
(3.33)
and
Therefore, p+q+1
∂i G(u, . . . , u) =
i=0
p+q+1
δi,j = 1.
(3.34)
i=0
Furthermore,
p+q+1
p+q+1
(i − j)∂ ∂i G(u, . . . , u) =
i=0
(i − j)δi,j
i=0
− λ(i − j) (∂ ∂i−1 F (u, . . . , u) − ∂i F (u, . . . , u)) = −λ
p+q+1 i=0
= −λf (u).
((i + 1) − i))∂ ∂i F (u, . . . , u) (3.35)
We also find that p+q+1
2 (i − k)2 ∂i,k G(u, . . . , u)
i,k=0
= −λ
p+q+1 i,k=0
2 (i − k)2 ∂i2−1,k−1 F (u, . . . , u) − ∂i,k F (u, . . . , u)
3.1. Conservative Methods
79
2 ((i + 1) − (k + 1))2 − (i − k)2 ∂i,k F (u, . . . , u)
p+q+1
= −λ
i,k=0
= 0.
(3.36)
Having established this, we now let $u = u(x,t)$ be a smooth solution of the conservation law (3.1). We are interested in applying $G$ to $u(x,t)$, i.e., in calculating
$$ G\bigl( u(x - (p+1)\Delta x, t), \dots, u(x, t), \dots, u(x + q\Delta x, t) \bigr). $$
Set $u_i = u(x + (i - (p+1))\Delta x, t)$ for $i = 0, \dots, p+q+1$. Then we find that
$$\begin{aligned}
G(u_0, \dots, u_{p+q+1}) &= G(u_j, \dots, u_j) + \sum_{i=0}^{p+q+1} \partial_i G(u_j, \dots, u_j)\,(u_i - u_j) \\
&\quad + \frac12 \sum_{i,k=0}^{p+q+1} \partial^2_{i,k} G(u_j, \dots, u_j)\,(u_i - u_j)(u_k - u_j) + \mathcal{O}\bigl( \Delta x^3 \bigr) \\
&= u(x,t) + u_x(x,t)\,\Delta x \sum_{i=0}^{p+q+1} (i - j)\,\partial_i G(u_j, \dots, u_j) + \frac12\,u_{xx}(x,t)\,\Delta x^2 \sum_{i=0}^{p+q+1} (i - j)^2\,\partial_i G(u_j, \dots, u_j) \\
&\quad + \frac12\,u_x^2(x,t)\,\Delta x^2 \sum_{i,k=0}^{p+q+1} (i - j)(k - j)\,\partial^2_{i,k} G(u_j, \dots, u_j) + \mathcal{O}\bigl( \Delta x^3 \bigr) \\
&= u(x,t) + u_x(x,t)\,\Delta x \sum_{i=0}^{p+q+1} (i - j)\,\partial_i G(u_j, \dots, u_j) + \frac12\,\Delta x^2 \sum_{i=0}^{p+q+1} (i - j)^2\,\bigl[ \partial_i G(u_j, \dots, u_j)\,u_x(x,t) \bigr]_x \\
&\quad - \frac12\,\Delta x^2\,u_x^2(x,t) \sum_{i,k} \bigl( (i-j)^2 - (i-j)(k-j) \bigr)\,\partial^2_{i,k} G(u_j, \dots, u_j) + \mathcal{O}\bigl( \Delta x^3 \bigr).
\end{aligned}$$
Next we observe, since $\partial^2_{i,k} G = \partial^2_{k,i} G$ and using (3.36), that
$$\begin{aligned}
0 = \sum_{i,k} (i - k)^2\,\partial^2_{i,k} G &= \sum_{i,k} \bigl( (i - j) - (k - j) \bigr)^2\,\partial^2_{i,k} G \\
&= \sum_{i,k} \bigl( (i - j)^2 - 2(i - j)(k - j) \bigr)\,\partial^2_{i,k} G + \sum_{i,k} (k - j)^2\,\partial^2_{k,i} G \\
&= 2 \sum_{i,k} \bigl( (i - j)^2 - (i - j)(k - j) \bigr)\,\partial^2_{i,k} G.
\end{aligned}$$
Consequently, the penultimate term in the Taylor expansion of $G$ above is zero, and we have that
$$ G\bigl( u(x - (p+1)\Delta x, t), \dots, u(x + q\Delta x, t) \bigr) = u(x,t) - \Delta t\,f(u(x,t))_x + \frac{\Delta x^2}{2} \sum_i (i - j)^2\,\bigl[ \partial_i G(u(x,t), \dots, u(x,t))\,u_x \bigr]_x + \mathcal{O}\bigl( \Delta x^3 \bigr). \qquad (3.37) $$
Since $u$ is a smooth solution of (3.1), we have already established that
$$ u(x, t + \Delta t) = u(x,t) - \Delta t\,f(u)_x + \frac{\Delta t^2}{2}\,\bigl( ( f'(u) )^2\,u_x \bigr)_x + \mathcal{O}\bigl( \Delta t^3 \bigr). $$
Hence, we compute the local truncation error as
$$ L_{\Delta t} = -\frac{\Delta t}{2\lambda^2} \Biggl[ \Biggl( \sum_{i=1}^{p+q+1} (i - j)^2\,\partial_i G(u, \dots, u) - \lambda^2 ( f'(u) )^2 \Biggr) u_x \Biggr]_x + \mathcal{O}\bigl( \Delta t^2 \bigr) =: -\frac{\Delta t}{2\lambda^2}\,\bigl[ \beta(u)\,u_x \bigr]_x + \mathcal{O}\bigl( \Delta t^2 \bigr). \qquad (3.38) $$
Thus if $\beta > 0$, then the method is of first order. What we have done so far is valid for any conservative and consistent method where the numerical flux function is three times continuously differentiable. Next, we utilize the fact that $\partial_i G \ge 0$, so that $\sqrt{\partial_i G}$ is well-defined. This means that
$$ \lambda\,| f'(u) | = \Biggl| \sum_{i=0}^{p+q+1} (i - j)\,\partial_i G(u, \dots, u) \Biggr| \le \sum_{i=0}^{p+q+1} | i - j |\,\sqrt{\partial_i G(u, \dots, u)}\,\sqrt{\partial_i G(u, \dots, u)}.
$$
Using the Cauchy–Schwarz inequality and (3.34) we find that
$$ \lambda^2 ( f'(u) )^2 \le \Biggl( \sum_{i=0}^{p+q+1} (i - j)^2\,\partial_i G(u, \dots, u) \Biggr) \Biggl( \sum_{i=0}^{p+q+1} \partial_i G(u, \dots, u) \Biggr) = \sum_{i=0}^{p+q+1} (i - j)^2\,\partial_i G(u, \dots, u). $$
Thus, $\beta(u) \ge 0$. Furthermore, the inequality is strict if more than one term in the right-hand sum is different from zero. If $\partial_i G(u, \dots, u) = 0$ except for $i = k$ for some $k$, then $G(u_0, \dots, u_{p+q+1}) = u_k$ by (3.34). Hence the scheme is a linear translation, and by consistency $f(u) = cu$, where $c = (j - k)/\lambda$. Therefore, monotone methods for nonlinear conservation laws are at most first-order accurate. This is indeed their main drawback. To recapitulate, we have proved the following theorem:

Theorem 3.10. Assume that the numerical flux $F$ is three times continuously differentiable, and that the corresponding scheme is monotone. Then the method is at most first-order accurate.
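Theorem 3.10 can be observed in a computation: a monotone scheme saturates at first order, while a non-monotone scheme such as Lax–Wendroff can do better on smooth solutions. The sketch below is illustrative; it assumes linear advection $f(u) = au$ with a smooth periodic profile, so the exact solution is a shift, and estimates $L^1$ convergence orders under grid refinement.

```python
import numpy as np

a, lam = 1.0, 0.5    # linear advection f(u) = a*u; lam = dt/dx obeys the CFL condition

def l1_error(step, N, T=1.0):
    dx = 1.0 / N
    x = np.arange(N) * dx
    u = np.sin(2 * np.pi * x)
    for _ in range(int(round(T / (lam * dx)))):
        u = step(u)
    return dx * np.abs(u - np.sin(2 * np.pi * (x - a * T))).sum()

def lxf(u):          # Lax-Friedrichs: monotone, hence at most first order
    return 0.5 * (np.roll(u, -1) + np.roll(u, 1)) - 0.5 * lam * a * (np.roll(u, -1) - np.roll(u, 1))

def lw(u):           # Lax-Wendroff: second order, but not monotone
    return (u - 0.5 * lam * a * (np.roll(u, -1) - np.roll(u, 1))
              + 0.5 * (lam * a) ** 2 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)))

orders = {name: np.log2(l1_error(step, 100) / l1_error(step, 200))
          for name, step in [("LxF", lxf), ("LW", lw)]}
print(orders)        # observed orders: about 1 for Lax-Friedrichs, about 2 for Lax-Wendroff
```

On nonsmooth solutions the second-order scheme loses this advantage and produces the oscillations seen in Figure 3.2; the monotone scheme does not.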
3.2 Error Estimates

    Let others bring order to chaos. I would bring chaos to order instead.
    — Kurt Vonnegut, Breakfast of Champions (1973)

The concept of local error estimates is based on formal computations, and indicates how the method performs in regions where the solution is smooth. Since the convergence of the methods discussed was in $L^1$, it is reasonable to ask how far the approximate solution is from the true solution in this space.

In this section we will consider functions $u$ that are maps $t \mapsto u(t)$ from $[0, \infty)$ to $L^1_{\mathrm{loc}} \cap BV(\mathbb{R})$ such that the one-sided limits $u(t\pm)$ exist in $L^1_{\mathrm{loc}}$, and for definiteness we assume that this map is right continuous. Furthermore, we assume that
$$ \| u(t) \|_\infty \le \| u(0) \|_\infty, \qquad \mathrm{T.V.}(u(t)) \le \mathrm{T.V.}(u(0)). $$
We denote this class of functions by $\mathcal{K}$. From Theorem 2.14 we know that solutions of scalar conservation laws are in the class $\mathcal{K}$. It is convenient to introduce moduli of continuity in time (see Appendix A):
$$ \nu_t(u, \sigma) = \sup_{|\tau| \le \sigma} \| u(t + \tau) - u(t) \|_1, \qquad \sigma > 0, \qquad \nu(u, \sigma) = \sup_{0 \le t \le T} \nu_t(u, \sigma). $$
From Theorem 2.14 we have that
$$ \nu(u, \sigma) \le |\sigma|\,\| f \|_{\mathrm{Lip}}\,\mathrm{T.V.}(u_0) \qquad (3.39) $$
for weak solutions of conservation laws.

Now let $u(x,t)$ be any function in $\mathcal{K}$, not necessarily a solution of (3.1). In order to measure how far $u$ is from being a solution of (3.1), we insert $u$ in the Kružkov form (cf. (2.19)):
$$ \Lambda_T(u, \phi, k) = \int_0^T \int \bigl( | u - k |\,\phi_t + q(u, k)\,\phi_x \bigr)\,dx\,ds - \int | u(x, T) - k |\,\phi(x, T)\,dx + \int | u_0(x) - k |\,\phi(x, 0)\,dx. \qquad (3.40) $$
If $u$ is a solution, then $\Lambda_T \ge 0$ for all constants $k$ and all nonnegative test functions $\phi$. We shall now use the special test function
$$ \Omega(x, x', s, s') = \omega_{\varepsilon_0}(s - s')\,\omega_\varepsilon(x - x'), \qquad \text{where } \omega_\varepsilon(x) = \frac{1}{\varepsilon}\,\omega\Bigl( \frac{x}{\varepsilon} \Bigr) $$
and $\omega(x)$ is an even $C^\infty$ function satisfying
$$ 0 \le \omega \le 1, \qquad \omega(x) = 0 \ \text{for } |x| > 1, \qquad \int \omega(x)\,dx = 1. $$
Let $v(x', s')$ be the unique weak solution of (3.1), and define
$$ \Lambda_{\varepsilon, \varepsilon_0}(u, v) = \int_0^T \int \Lambda_T\bigl( u, \Omega(\,\cdot\,, x', \,\cdot\,, s'), v(x', s') \bigr)\,dx'\,ds'. $$
The comparison result reads as follows.

Theorem 3.11 (Kuznetsov's lemma). Let $u(\,\cdot\,, t)$ be a function in $\mathcal{K}$, and let $v$ be a solution of (3.1). If $0 < \varepsilon_0 < T$ and $\varepsilon > 0$, then
$$ \| u(\,\cdot\,, T-) - v(\,\cdot\,, T) \|_1 \le \| u_0 - v_0 \|_1 + \mathrm{T.V.}(v_0)\,\bigl( 2\varepsilon + \varepsilon_0\,\| f \|_{\mathrm{Lip}} \bigr) + \nu(u, \varepsilon_0) - \Lambda_{\varepsilon, \varepsilon_0}(u, v), \qquad (3.41) $$
where $u_0 = u(\,\cdot\,, 0)$ and $v_0 = v(\,\cdot\,, 0)$.

Proof. We use special properties of the test function $\Omega$, namely that
$$ \Omega(x, x', s, s') = \Omega(x', x, s, s') = \Omega(x, x', s', s) = \Omega(x', x, s', s) \qquad (3.42) $$
and
$$ \Omega_x = -\Omega_{x'}, \qquad \Omega_s = -\Omega_{s'}. \qquad (3.43) $$
Using (3.42) and (3.43), we find that
$$\begin{aligned}
\Lambda_{\varepsilon, \varepsilon_0}(u, v) = -\Lambda_{\varepsilon, \varepsilon_0}(v, u) &- \int_0^T \iint \Omega(x, x', s, T)\,\bigl( | u(x,T) - v(x', s) | + | v(x', T) - u(x, s) | \bigr)\,dx\,dx'\,ds \\
&+ \int_0^T \iint \Omega(x, x', s, 0)\,\bigl( | v_0(x') - u(x, s) | + | u_0(x) - v(x', s) | \bigr)\,dx\,dx'\,ds \\
=: -\Lambda_{\varepsilon, \varepsilon_0}(v, u) &- A + B.
\end{aligned}$$
Since $v$ is a weak solution, $\Lambda_{\varepsilon, \varepsilon_0}(v, u) \ge 0$, and hence $A \le B - \Lambda_{\varepsilon, \varepsilon_0}(u, v)$. Therefore, we would like to obtain a lower bound on $A$ and an upper bound on $B$, the lower bound on $A$ involving $\| u(T) - v(T) \|_1$ and the upper bound on $B$ involving $\| u_0 - v_0 \|_1$.

We start with the lower bound on $A$. Let $\rho_\varepsilon$ be defined by
$$ \rho_\varepsilon(u, v) = \iint \omega_\varepsilon(x - x')\,| u(x) - v(x') |\,dx\,dx'. \qquad (3.44) $$
Then
$$ A = \int_0^T \omega_{\varepsilon_0}(T - s)\,\bigl( \rho_\varepsilon(u(T), v(s)) + \rho_\varepsilon(u(s), v(T)) \bigr)\,ds. $$
Now, by a use of the triangle inequality,
$$ | u(x,T) - v(x', s) | + | u(x, s) - v(x', T) | \ge 2\,| u(x,T) - v(x,T) | - 2\,| v(x,T) - v(x',T) | - | u(x,T) - u(x,s) | - | v(x',T) - v(x',s) |. $$
Hence
$$ \rho_\varepsilon(u(T), v(s)) + \rho_\varepsilon(u(s), v(T)) \ge 2\,\| u(T) - v(T) \|_1 - 2\rho_\varepsilon(v(T), v(T)) - \| u(T) - u(s) \|_1 - \| v(T) - v(s) \|_1. $$
Regarding the upper estimate on $B$, we similarly have that
$$ B = \int_0^T \omega_{\varepsilon_0}(s)\,\bigl[ \rho_\varepsilon(u_0, v(s)) + \rho_\varepsilon(u(s), v_0) \bigr]\,ds, $$
and we also obtain
$$ \rho_\varepsilon(u_0, v(s)) + \rho_\varepsilon(u(s), v_0) \le 2\,\| u_0 - v_0 \|_1 + 2\rho_\varepsilon(v_0, v_0) + \| u_0 - u(s) \|_1 + \| v_0 - v(s) \|_1. $$
Since $v$ is a solution, it satisfies the TVD property, and hence
$$ \rho_\varepsilon(v(T), v(T)) = \int \int_{-\varepsilon}^{\varepsilon} \omega_\varepsilon(z)\,| v(x + z, T) - v(x, T) |\,dz\,dx \le |\varepsilon| \int_{-\varepsilon}^{\varepsilon} \omega_\varepsilon(z)\,\mathrm{T.V.}(v(T))\,dz \le |\varepsilon|\,\mathrm{T.V.}(v_0), $$
using (A.10). By the properties of $\omega$,
$$ \int_0^T \omega_{\varepsilon_0}(T - s)\,ds = \int_0^T \omega_{\varepsilon_0}(s)\,ds = \frac12. $$
Applying (3.39) we obtain (recall that $\varepsilon_0 < T$)
$$ \int_0^T \omega_{\varepsilon_0}(T - s)\,\| v(T) - v(s) \|_1\,ds \le \int_0^T \omega_{\varepsilon_0}(T - s)\,(T - s)\,\| f \|_{\mathrm{Lip}}\,\mathrm{T.V.}(v_0)\,ds \le \frac12\,\varepsilon_0\,\| f \|_{\mathrm{Lip}}\,\mathrm{T.V.}(v_0) $$
and
$$ \int_0^T \omega_{\varepsilon_0}(s)\,\| v_0 - v(s) \|_1\,ds \le \frac12\,\varepsilon_0\,\| f \|_{\mathrm{Lip}}\,\mathrm{T.V.}(v_0). $$
Similarly,
$$ \int_0^T \omega_{\varepsilon_0}(T - s)\,\| u(T) - u(s) \|_1\,ds \le \frac12\,\nu(u, \varepsilon_0) \quad \text{and} \quad \int_0^T \omega_{\varepsilon_0}(s)\,\| u_0 - u(s) \|_1\,ds \le \frac12\,\nu(u, \varepsilon_0). $$
Collecting the above bounds yields the statement of the theorem.

Observe that in the special case where $u$ is a solution of the conservation law (3.1), we know that $\Lambda_{\varepsilon, \varepsilon_0}(u, v) \ge 0$, and hence we obtain, when $\varepsilon, \varepsilon_0 \to 0$, the familiar stability result
$$ \| u(\,\cdot\,, T) - v(\,\cdot\,, T) \|_1 \le \| u_0 - v_0 \|_1. $$
We shall now show in three cases how Kuznetsov's lemma can be used to give estimates on how fast a method converges to the entropy solution of (3.1).

♦ Example 3.12 (The smoothing method).
While not a proper numerical method, the smoothing method provides an example of how the result of Kuznetsov may be used. The smoothing method is a (semi)numerical method approximating the solution of (3.1) as follows: Let $\omega_\delta(x)$ be a standard mollifier with support in $[-\delta, \delta]$, and let $t_n = n\Delta t$. Set $u^0 = u_0 * \omega_\delta$. For $0 \le t < \Delta t$ define $u^1$ to be the solution of (3.1) with initial data $u^0$. If $\Delta t$ is small enough, $u^1$ remains differentiable for $t < \Delta t$. In the interval $[(n-1)\Delta t, n\Delta t)$, we define $u^n$ to be the solution of (3.1), with
$$ u^n(x, (n-1)\Delta t) = u^{n-1}(\,\cdot\,, t_{n-1}-) * \omega_\delta. $$
The advantage of doing this is that $u^n$ will remain differentiable in $x$ for all times, and the solution in the strips $[t_n, t_{n+1})$ can be found by, e.g., the method of characteristics. To show that $u^n$ is differentiable, we calculate
$$ | u^n_x(x, t_{n-1}) | = \Bigl| \int u^{n-1}(y, t_{n-1})\,\omega_\delta'(x - y)\,dy \Bigr| \le \frac{1}{\delta}\,\mathrm{T.V.}\bigl( u^{n-1}(t_{n-1}) \bigr) \le \frac{\mathrm{T.V.}(u_0)}{\delta}. $$
Let $\mu(t) = \max_x | u_x(x,t) |$. Using that $u$ is a classical solution of (3.1), we find by differentiating (3.1) with respect to $x$ that
$$ u_{xt} + f'(u)\,u_{xx} + f''(u)\,u_x^2 = 0. $$
Write $\mu(t) = u_x(x_0(t), t)$, where $x_0(t)$ is the location of the maximum of $| u_x |$. Then
$$ \mu'(t) = u_{xx}(x_0(t), t)\,x_0'(t) + u_{xt}(x_0(t), t) \le u_{xt}(x_0(t), t) = -f''(u)\,u_x^2(x_0(t), t) \le c\,\mu(t)^2, $$
since $u_{xx} = 0$ at an extremum of $u_x$. Thus
$$ \mu'(t) \le c\,\mu^2(t), \qquad (3.45) $$
where $c = \| f'' \|_\infty$. The idea is now that (3.45) has a blowup at some finite time, and we choose $\Delta t$ less than this time. We shall need a precise relation between $\Delta t$ and $\delta$, and must therefore investigate (3.45) further. Solving (3.45) we obtain
$$ \mu(t) \le \frac{\mu(t_n)}{1 - c\,\mu(t_n)\,(t - t_n)} \le \frac{\mathrm{T.V.}(u_0)}{\delta - c\,\mathrm{T.V.}(u_0)\,\Delta t}. $$
So if
$$ \Delta t < \frac{\delta}{c\,\mathrm{T.V.}(u_0)}, \qquad (3.46) $$
the method is well-defined. Choosing $\Delta t = \delta / (2c\,\mathrm{T.V.}(u_0))$ will do.

Since $u$ is an exact solution in the strips $[t_n, t_{n+1})$, we have
$$ \int_{t_n}^{t_{n+1}} \int \bigl( | u - k |\,\phi_t + q(u, k)\,\phi_x \bigr)\,dx\,dt + \int \bigl( | u(x, t_n+) - k |\,\phi(x, t_n) - | u(x, t_{n+1}-) - k |\,\phi(x, t_{n+1}) \bigr)\,dx \ge 0. $$
Summing these inequalities, and setting $k = v(y, s)$, where $v$ is an exact solution of (3.1), we obtain
$$ \Lambda_T(u, \Omega, v(y, s)) \ge -\sum_{n=0}^{N-1} \int \Omega(x, y, t_n, s)\,\bigl( | u(x, t_n+) - v(y, s) | - | u(x, t_n-) - v(y, s) | \bigr)\,dx, $$
where we use the test function $\Omega(x, y, t, s) = \omega_{\varepsilon_0}(t - s)\,\omega_\varepsilon(x - y)$. Integrating this over $y$ and $s$, and letting $\varepsilon_0$ tend to zero, we get
$$ \liminf_{\varepsilon_0 \to 0} \Lambda_{\varepsilon, \varepsilon_0}(u, v) \ge -\sum_{n=0}^{N-1} \bigl( \rho_\varepsilon(u(t_n+), v(t_n)) - \rho_\varepsilon(u(t_n-), v(t_n)) \bigr). $$
Using this in Kuznetsov's lemma, and letting $\varepsilon_0 \to 0$, we obtain
$$ \| u(T) - v(T) \|_1 \le \| u^0 - u_0 \|_1 + 2\varepsilon\,\mathrm{T.V.}(u_0) + \sum_{n=0}^{N-1} \bigl( \rho_\varepsilon(u(t_n+), v(t_n)) - \rho_\varepsilon(u(t_n-), v(t_n)) \bigr), \qquad (3.47) $$
where we have used that $\lim_{\varepsilon_0 \to 0} \nu_t(u, \varepsilon_0) = 0$, which holds because $u$ is a solution of the conservation law in each strip $[t_n, t_{n+1})$.

To obtain a more explicit bound on the difference of $u$ and $v$, we investigate $\rho_\varepsilon(\omega_\delta * u, v) - \rho_\varepsilon(u, v)$, where $\rho_\varepsilon$ is defined by (3.44):
$$\begin{aligned}
\rho_\varepsilon(u * \omega_\delta, v) - \rho_\varepsilon(u, v) &\le \iiint_{|z| \le 1} \omega_\varepsilon(x - y)\,\omega(z)\,\bigl( | u(x + \delta z) - v(y) | - | u(x) - v(y) | \bigr)\,dx\,dy\,dz \\
&= \frac12 \iiint_{|z| \le 1} \bigl( \omega_\varepsilon(x - y) - \omega_\varepsilon(x + \delta z - y) \bigr)\,\omega(z)\,\bigl( | u(x + \delta z) - v(y) | - | u(x) - v(y) | \bigr)\,dx\,dy\,dz,
\end{aligned}$$
which follows after writing $\int = \frac12 \int + \frac12 \int$ and making the substitution $x \to x - \delta z$, $z \to -z$ in one of these integrals. Therefore,
$$ \rho_\varepsilon(u * \omega_\delta, v) - \rho_\varepsilon(u, v) \le \frac12 \iiint_{|z| \le 1} | \omega_\varepsilon(y + \delta z) - \omega_\varepsilon(y) |\,\omega(z)\,| u(x + \delta z) - u(x) |\,dx\,dy\,dz \le \frac12\,\mathrm{T.V.}(\omega_\varepsilon)\,\mathrm{T.V.}(u)\,\delta^2 \le \frac{\delta^2}{\varepsilon}\,\mathrm{T.V.}(u), $$
by the triangle inequality and a further substitution $y \to x - y$. Since $N = T/\Delta t$, the last term in (3.47) is less than
$$ N\,\mathrm{T.V.}(u_0)\,\frac{\delta^2}{\varepsilon} \le \bigl( \mathrm{T.V.}(u_0) \bigr)^2\,2cT\,\frac{\delta}{\varepsilon}, $$
using (3.46). Furthermore, we have that
$$ \| u^0 - u_0 \|_1 \le \delta\,\mathrm{T.V.}(u_0). $$
Letting $K = \mathrm{T.V.}(u_0)\,c$, we find that
$$ \| u(T) - v(T) \|_1 \le 2\,\mathrm{T.V.}(u_0)\,\Bigl( \delta + \varepsilon + \frac{KT\delta}{\varepsilon} \Bigr), $$
using (3.47). Minimizing with respect to $\varepsilon$, we find that
$$ \| u(T) - v(T) \|_1 \le 2\,\mathrm{T.V.}(u_0)\,\bigl( \delta + 2\sqrt{KT\delta} \bigr). \qquad (3.48) $$
So we have shown that the smoothing method is of order $\frac12$ in the smoothing coefficient $\delta$. ♦
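The key estimate above, $|u^n_x| \le \mathrm{T.V.}(u_0)/\delta$ after mollification, is easy to check on a grid. A small sketch, with an assumed hat-shaped mollifier $\omega_\delta$ (supported on $[-\delta, \delta]$, with $0 \le \omega \le 1$ before the $1/\delta$ scaling) and a unit step as initial data, so $\mathrm{T.V.}(u_0) = 1$:

```python
import numpy as np

dx = 1e-3
delta = 0.05
x = np.arange(-1.0, 1.0, dx)
u = np.where(x < 0.0, 0.0, 1.0)             # unit step: T.V.(u) = 1

k = np.arange(-50, 51)                      # 2*delta/dx + 1 points on [-delta, delta]
w = (1.0 - np.abs(k) / 50.0) / delta        # hat-shaped mollifier omega_delta
w /= w.sum() * dx                           # normalize so that sum(w)*dx = 1

us = np.convolve(u, w, mode="same") * dx    # u * omega_delta on the grid
slope = np.abs(np.diff(us)).max() / dx

print(slope <= 1.0 / delta + 1e-6)          # max|u_x| <= T.V.(u)/delta: True
```

For a single step the smoothed derivative is exactly $\omega_\delta$ shifted to the jump, so the bound is attained: the maximum slope equals $1/\delta$ up to rounding.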
♦ Example 3.13 (The method of vanishing viscosity).
Another (semi)numerical method for (3.1) is the method of vanishing viscosity. Here we approximate the solution of (3.1) by the solution of
$$ u_t + f(u)_x = \delta u_{xx}, \qquad \delta > 0, \qquad (3.49) $$
using the same initial data. Let $u^\delta$ denote the solution of (3.49). Due to the dissipative term on the right-hand side, the solution of (3.49) remains a classical (twice differentiable) solution for all $t > 0$. Furthermore, the solution operator for (3.49) is TVD. Hence a numerical method for (3.49) will (presumably) not experience the same difficulties as a numerical method for (3.1). If $(\eta, q)$ is a convex entropy pair, we have, using the differentiability of the solution, that
$$ \eta(u)_t + q(u)_x = \delta\,\eta'(u)\,u_{xx} = \delta\,\bigl( \eta(u)_{xx} - \eta''(u)\,u_x^2 \bigr). $$
Multiplying by a nonnegative test function $\varphi$ and integrating by parts, we get
$$ \iint \bigl( \eta(u)\,\varphi_t + q(u)\,\varphi_x \bigr)\,dx\,dt \ge \delta \iint \eta(u)_x\,\varphi_x\,dx\,dt, $$
where we have used the convexity of $\eta$. Applying this with $\eta = | u^\delta - u |$ and the corresponding entropy flux $q(u^\delta, u)$, we can bound $\lim_{\varepsilon_0 \to 0} \Lambda_{\varepsilon, \varepsilon_0}(u^\delta, u)$ as follows:
$$ -\lim_{\varepsilon_0 \to 0} \Lambda_{\varepsilon, \varepsilon_0}(u^\delta, u) \le \delta \int_0^T \iint \Bigl| \frac{\partial \omega_\varepsilon(x - y)}{\partial x} \Bigr|\,\Bigl| \frac{\partial u^\delta(x,t)}{\partial x} \Bigr|\,dx\,dy\,dt \le 2\,\mathrm{T.V.}\bigl( u^\delta \bigr)\,\frac{\delta}{\varepsilon}\,T \le 2T\,\frac{\delta}{\varepsilon}\,\mathrm{T.V.}(u_0). $$
Now letting $\varepsilon_0 \to 0$ in (3.41) we obtain
$$ \| u^\delta(T) - u(T) \|_1 \le \min_\varepsilon \Bigl( 2\varepsilon + 2T\,\frac{\delta}{\varepsilon} \Bigr)\,\mathrm{T.V.}(u_0) = 4\,\mathrm{T.V.}(u_0)\,\sqrt{T\delta}. $$
So the method of vanishing viscosity also has order $\frac12$. ♦
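For a single shock the distance between the viscous and the entropy solution can even be computed explicitly. For Burgers' equation, $f(u) = u^2/2$, with states $u_l = 1$ and $u_r = 0$, the traveling-wave solution of (3.49) is $u^\delta(x,t) = \frac12\bigl(1 - \tanh((x - t/2)/(4\delta))\bigr)$, and the $L^1$ gap is $4\delta \ln 2$, i.e. of order one in $\delta$, comfortably inside the general $\mathcal{O}(\sqrt{\delta})$ bound. A sketch checking this scaling (it assumes Burgers' flux; the general estimate of the example is not used here):

```python
import numpy as np

def l1_gap(delta):
    x = np.linspace(-2.0, 2.0, 200001)
    visc = 0.5 * (1.0 - np.tanh(x / (4.0 * delta)))   # viscous traveling wave
    entropy = np.where(x < 0.0, 1.0, 0.0)             # entropy solution: sharp shock
    return np.abs(visc - entropy).sum() * (x[1] - x[0])

g1, g2 = l1_gap(0.02), l1_gap(0.01)
print(g1 / g2)   # close to 2: the gap scales linearly in delta for a single shock
```

The square-root rate of the theorem is a worst-case bound over all $BV$ data; structured solutions such as an isolated shock can do better.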
♦ Example 3.14 (Monotone schemes).
We will here show that monotone schemes converge in $L^1$ to the solution of (3.1) at a rate of $(\Delta t)^{1/2}$. In particular, this applies to the Lax–Friedrichs scheme. Let $u_{\Delta t}$ be defined by (3.24), where $U_j^n$ is defined by (3.5), that is,
$$ U_j^{n+1} = U_j^n - \lambda\,\Bigl( F\bigl( U_{j-p}^n, \dots, U_{j+p}^n \bigr) - F\bigl( U_{j-1-p}^n, \dots, U_{j-1+p}^n \bigr) \Bigr), \qquad (3.50) $$
for a scheme that is assumed to be monotone; cf. Definition 3.5. In the following we use the notation
$$ \eta_j^n = \bigl| U_j^n - k \bigr|, \qquad q_j^n = f\bigl( U_j^n \vee k \bigr) - f\bigl( U_j^n \wedge k \bigr). $$
We find that
$$\begin{aligned}
-\Lambda_T(u_{\Delta t}, \phi, k) =\ & -\sum_j \sum_{n=0}^{N-1} \int_{x_j}^{x_{j+1}} \int_{t_n}^{t_{n+1}} \bigl( \eta_j^n\,\phi_t(x, s) + q_j^n\,\phi_x(x, s) \bigr)\,ds\,dx \\
& - \sum_j \int_{x_j}^{x_{j+1}} \eta_j^0\,\phi(x, 0)\,dx + \sum_j \int_{x_j}^{x_{j+1}} \eta_j^N\,\phi(x, T)\,dx \\
=\ & -\sum_j \sum_{n=0}^{N-1} \Bigl( \eta_j^n \int_{x_j}^{x_{j+1}} \bigl( \phi(x, t_{n+1}) - \phi(x, t_n) \bigr)\,dx + q_j^n \int_{t_n}^{t_{n+1}} \bigl( \phi(x_{j+1}, s) - \phi(x_j, s) \bigr)\,ds \Bigr) \\
& - \sum_j \int_{x_j}^{x_{j+1}} \eta_j^0\,\phi(x, 0)\,dx + \sum_j \int_{x_j}^{x_{j+1}} \eta_j^N\,\phi(x, T)\,dx \\
=\ & \sum_j \sum_{n=0}^{N-1} \Bigl( \bigl( \eta_j^{n+1} - \eta_j^n \bigr) \int_{x_j}^{x_{j+1}} \phi(x, t_{n+1})\,dx + \bigl( q_j^n - q_{j-1}^n \bigr) \int_{t_n}^{t_{n+1}} \phi(x_j, s)\,ds \Bigr)
\end{aligned}$$
by a summation by parts. Recall that we define the numerical entropy flux by
$$ Q_j^n = Q(U^n; j) = F(U^n \vee k; j) - F(U^n \wedge k; j). $$
Monotonicity of the scheme implies, cf. (3.30), that
$$ \eta_j^{n+1} - \eta_j^n + \lambda\,\bigl( Q_j^n - Q_{j-1}^n \bigr) \le 0. $$
For a nonnegative test function $\phi$ we obtain
$$\begin{aligned}
-\Lambda_T(u_{\Delta t}, \phi, k) \le\ & \sum_j \sum_{n=0}^{N-1} \Bigl( -\lambda\,\bigl( Q_j^n - Q_{j-1}^n \bigr) \int_{x_j}^{x_{j+1}} \phi(x, t_{n+1})\,dx + \bigl( q_j^n - q_{j-1}^n \bigr) \int_{t_n}^{t_{n+1}} \phi(x_j, s)\,ds \Bigr) \\
=\ & \sum_j \sum_{n=0}^{N-1} \biggl( \lambda\,\bigl( Q_j^n - q_j^n \bigr) \Bigl( \int_{x_{j+1}}^{x_{j+2}} \phi(x, t_{n+1})\,dx - \int_{x_j}^{x_{j+1}} \phi(x, t_{n+1})\,dx \Bigr) \\
& \qquad + \bigl( q_j^n - q_{j-1}^n \bigr) \Bigl( \int_{t_n}^{t_{n+1}} \phi(x_j, s)\,ds - \lambda \int_{x_j}^{x_{j+1}} \phi(x, t_{n+1})\,dx \Bigr) \biggr).
\end{aligned}$$
We also have that
$$ \bigl| Q_j^n - q_j^n \bigr| \le 2\,\| f \|_{\mathrm{Lip}} \sum_{m=-p}^{p} \bigl| U_{j+m}^n - U_j^n \bigr| $$
and
$$ \bigl| q_j^n - q_{j-1}^n \bigr| \le 2\,\| f \|_{\mathrm{Lip}}\,\bigl| U_j^n - U_{j-1}^n \bigr|, $$
which implies that
$$\begin{aligned}
-\Lambda_T(u_{\Delta t}, \phi, k) \le 2\,\| f \|_{\mathrm{Lip}} \sum_j \sum_{n=0}^{N-1} \biggl( & \sum_{m=-p}^{p} \bigl| U_{j+m}^n - U_j^n \bigr| \int_{x_j}^{x_{j+1}} \bigl| \phi(x + \Delta x, t_{n+1}) - \phi(x, t_{n+1}) \bigr|\,dx \\
& + \bigl| U_j^n - U_{j-1}^n \bigr|\,\Bigl| \int_{t_n}^{t_{n+1}} \phi(x_j, s)\,ds - \lambda \int_{x_j}^{x_{j+1}} \phi(x, t_{n+1})\,dx \Bigr| \biggr).
\end{aligned}$$
Next, we subtract $\phi(x_j, t_n)$ from the integrand in each of the latter two integrals. Since $\Delta t = \lambda\,\Delta x$, the extra terms cancel, and we obtain
$$\begin{aligned}
-\Lambda_T(u_{\Delta t}, \phi, k) \le 2\,\| f \|_{\mathrm{Lip}} \sum_j \sum_{n=0}^{N-1} \biggl( & \sum_{m=-p}^{p} \bigl| U_{j+m}^n - U_j^n \bigr| \int_{x_j}^{x_{j+1}} \bigl| \phi(x + \Delta x, t_{n+1}) - \phi(x, t_{n+1}) \bigr|\,dx \\
& + \bigl| U_j^n - U_{j-1}^n \bigr| \Bigl( \int_{t_n}^{t_{n+1}} \bigl| \phi(x_j, s) - \phi(x_j, t_n) \bigr|\,ds + \lambda \int_{x_j}^{x_{j+1}} \bigl| \phi(x, t_{n+1}) - \phi(x_j, t_n) \bigr|\,dx \Bigr) \biggr). \qquad (3.51)
\end{aligned}$$
Let $v = v(x,t)$ denote the unique weak solution of (3.1), and let $k = v(x', s')$. Choose the test function as $\phi(x, s) = \omega_\varepsilon(x - x')\,\omega_{\varepsilon_0}(s - s')$, and observe that
$$ \int_0^T \int_{\mathbb{R}} \bigl| \omega_\varepsilon(x + \Delta x - x') - \omega_\varepsilon(x - x') \bigr|\,\omega_{\varepsilon_0}(t_n - s')\,dx'\,ds' \le \Delta x\,\mathrm{T.V.}(\omega_\varepsilon) \le 2\,\frac{\Delta x}{\varepsilon}. $$
Similarly,
$$ \int_0^T \int_{\mathbb{R}} \omega_\varepsilon(x_j - x')\,\bigl| \omega_{\varepsilon_0}(s - s') - \omega_{\varepsilon_0}(t_n - s') \bigr|\,dx'\,ds' \le 2\,\frac{\Delta t}{\varepsilon_0} $$
whenever $| s - t_n | \le \Delta t$, and
$$ \int_0^T \int_{\mathbb{R}} \bigl| \omega_\varepsilon(x - x')\,\omega_{\varepsilon_0}(t_{n+1} - s') - \omega_\varepsilon(x_j - x')\,\omega_{\varepsilon_0}(t_n - s') \bigr|\,dx'\,ds' \le 2\,\Bigl( \frac{\Delta t}{\varepsilon_0} + \frac{\Delta x}{\varepsilon} \Bigr). $$
Integrating (3.51) over $(x', s')$ with $0 \le s' \le T$ we obtain
$$\begin{aligned}
-\Lambda_{\varepsilon, \varepsilon_0}(u_{\Delta t}, v) &\le 4\,\| f \|_{\mathrm{Lip}} \sum_{n=0}^{N-1} \biggl( \sum_j \sum_{m=-p}^{p} \bigl| U_{j+m}^n - U_j^n \bigr|\,\Delta x\,\frac{\Delta x}{\varepsilon} + \sum_j \bigl| U_j^n - U_{j-1}^n \bigr| \Bigl( \Delta t\,\frac{\Delta t}{\varepsilon_0} + \lambda\,\Delta x\,\Bigl( \frac{\Delta t}{\varepsilon_0} + \frac{\Delta x}{\varepsilon} \Bigr) \Bigr) \biggr) \\
&\le 4\,\| f \|_{\mathrm{Lip}}\,\mathrm{T.V.}(u_{\Delta t}(0)) \sum_{n=0}^{N-1} \Bigl( \frac12\,(p + p + 1)^2\,\frac{(\Delta x)^2}{\varepsilon} + \frac{(\Delta t)^2}{\varepsilon_0} + \lambda\,\Bigl( \frac{(\Delta x)^2}{\varepsilon} + \frac{\Delta x\,\Delta t}{\varepsilon_0} \Bigr) \Bigr) \\
&\le K\,T\,\mathrm{T.V.}(u_{\Delta t}(0))\,\Bigl( \frac{1}{\varepsilon} + \frac{1}{\varepsilon_0} \Bigr)\,\Delta t
\end{aligned}$$
for some constant $K$, by using the estimate
$$ \sum_j \sum_{m=-p}^{p} \bigl| U_{j+m}^n - U_j^n \bigr| \le \frac12\,(p + p + 1)^2\,\mathrm{T.V.}(U^n). $$
Recalling Kuznetsov's lemma, we have
$$ \| u_{\Delta t}(T) - v(T) \|_1 \le \| u_{\Delta t}(0) - v_0 \|_1 + \mathrm{T.V.}(v_0)\,\bigl( 2\varepsilon + \varepsilon_0\,\| f \|_{\mathrm{Lip}} \bigr) + \frac12\,\bigl( \nu_T(u_{\Delta t}, \varepsilon_0) + \nu_0(u_{\Delta t}, \varepsilon_0) \bigr) - \Lambda_{\varepsilon, \varepsilon_0}. $$
We have that $\mathrm{T.V.}(u_{\Delta t}(\,\cdot\,, t)) \le \mathrm{T.V.}(u_{\Delta t}(\,\cdot\,, 0))$ and
$$ \nu_t(u_{\Delta t}, \varepsilon_0) \le K_1\,(\Delta t + \varepsilon_0)\,\mathrm{T.V.}(u_{\Delta t}(\,\cdot\,, 0)). $$
Choose the initial approximation such that
$$ \| u_{\Delta t}(0) - v_0 \|_1 \le \Delta x\,\mathrm{T.V.}(v_0). $$
This implies
$$\begin{aligned}
\| u_{\Delta t}(T) - v(T) \|_1 &\le \mathrm{T.V.}(v_0)\,\bigl( \Delta x + 2\varepsilon + \varepsilon_0\,\| f \|_{\mathrm{Lip}} \bigr) + \mathrm{T.V.}(u_{\Delta t}(\,\cdot\,, 0))\,\Bigl( K_1\,(\Delta t + \varepsilon_0) + K\,T\,\Delta t\,\Bigl( \frac{1}{\varepsilon} + \frac{1}{\varepsilon_0} \Bigr) \Bigr) \\
&\le K_2\,\mathrm{T.V.}(v_0)\,\Bigl( \Delta t + \varepsilon + \frac{T\,\Delta t}{\varepsilon} + \varepsilon_0 + \frac{T\,\Delta t}{\varepsilon_0} \Bigr).
\end{aligned}$$
Minimizing with respect to $\varepsilon_0$ and $\varepsilon$, we obtain the final bound
$$ \| u_{\Delta t}(T) - v(T) \|_1 \le K_4\,\mathrm{T.V.}(v_0)\,\bigl( \Delta t + 4\sqrt{T\,\Delta t} \bigr). \qquad (3.52) $$
Thus, as promised, we have shown that monotone schemes are of order $(\Delta t)^{1/2}$. ♦

If one uses Kuznetsov's lemma to estimate the error of a scheme, one must estimate the modulus of continuity $\nu_t(u, \varepsilon_0)$ and the term $\Lambda_{\varepsilon, \varepsilon_0}(u, v)$. In other words, one must obtain regularity estimates on the approximation $u$. Therefore, this approach gives a posteriori error estimates, and perhaps the proper use of this approach would be in adaptive methods, where it would provide error control and govern mesh refinement. However, despite this weakness, Kuznetsov's theory is still actively used.
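The $(\Delta t)^{1/2}$ rate of Example 3.14 is a worst-case guarantee; measured rates are often better. The sketch below is an illustration, assuming Burgers' flux and Riemann data producing a single shock of speed $1/2$; it measures the $L^1$ error of the Lax–Friedrichs scheme under grid refinement at fixed $\lambda$.

```python
import numpy as np

def lxf_error(N, T=0.5, lam=0.8):
    dx = 3.0 / N
    x = -1.0 + (np.arange(N) + 0.5) * dx
    u = np.where(x < 0.0, 1.0, 0.0)          # Riemann data: shock of speed 1/2
    f = lambda v: 0.5 * v * v
    t = 0.0
    while t < T - 1e-12:
        up = np.append(u[1:], 0.0)           # ghost cells carry the constant states
        um = np.append(1.0, u[:-1])
        u = 0.5 * (up + um) - 0.5 * lam * (f(up) - f(um))
        t += lam * dx
    exact = np.where(x < 0.5 * t, 1.0, 0.0)
    return dx * np.abs(u - exact).sum()

e1, e2 = lxf_error(300), lxf_error(1200)
order = np.log(e1 / e2) / np.log(4.0)
print(order)   # empirically between the guaranteed 1/2 and roughly 1
```

For an isolated shock of a convex flux, the smeared numerical shock layer occupies a bounded number of cells, so the observed $L^1$ rate is typically close to first order, well above the general $(\Delta t)^{1/2}$ guarantee.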
3.3 A Priori Error Estimates

We shall now describe an application of a variation of Kuznetsov's approach, in which we obtain an error estimate for the method of vanishing viscosity without using the regularity properties of the viscous approximation. Of course, this application only motivates the approach, since regularity of the solutions of parabolic equations is not difficult to obtain elsewhere. Nevertheless, it is interesting in its own right, since many difference methods have (3.53) as their model equation. We first state the result.

Theorem 3.15. Let $v(x,t)$ be a solution of (3.1) with initial value $v_0$, and let $u$ solve the equation
$$ u_t + f(u)_x = \bigl( \delta(u)\,u_x \bigr)_x, \qquad u(x, 0) = u_0(x), \qquad (3.53) $$
in the classical sense, with $\delta(u) > 0$. Then
$$ \| u(T) - v(T) \|_1 \le 2\,\| u_0 - v_0 \|_1 + 4\,\mathrm{T.V.}(v_0)\,\sqrt{8T\,\| \delta \|_v}, $$
where
$$ \| \delta \|_v = \sup_{\substack{t \in [0,T] \\ x \in \mathbb{R}}} \tilde\delta\bigl( v(x-, t), v(x+, t) \bigr) \quad \text{and} \quad \tilde\delta(a, b) = \frac{1}{b - a} \int_a^b \delta(c)\,dc. $$
This result is not surprising, and in some sense is weaker than the corresponding result found by using Kuznetsov’s lemma. The new element here is that the proof does not rely on any smoothness properties of the function
3.3. A Priori Error Estimates
93
u, and is therefore also considerably more complicated than the proof using Kuznetsov's lemma.

Proof. The proof consists in choosing new Λ's and using a special form of the test function ϕ. Let $\omega^\infty$ be defined as
$$\omega^\infty(x) = \begin{cases} \tfrac12 & \text{for } |x| \le 1,\\ 0 & \text{otherwise.}\end{cases}$$
We will consider a family of smooth functions ω such that ω → ω∞. To keep the notation simple we will not add another parameter to the functions ω, but rather write ω → ω∞ when we approach the limit. Let $\varphi(x,y,t,s) = \omega_\epsilon(x-y)\,\omega_{\epsilon_0}(t-s)$ with $\omega_\alpha(x) = (1/\alpha)\,\omega(x/\alpha)$ as usual. In this notation
$$\omega_\epsilon^\infty(x) = \begin{cases} 1/(2\epsilon) & \text{for } |x| \le \epsilon,\\ 0 & \text{otherwise.}\end{cases}$$
In the following we will use the entropy pair
$$\eta(u,k) = |u-k| \quad\text{and}\quad q(u,k) = \operatorname{sign}(u-k)\bigl(f(u)-f(k)\bigr),$$
and except where explicitly stated, we always let $u = u(y,s)$ and $v = v(x,t)$. Let $\eta_\sigma(u,k)$ and $q_\sigma(u,k)$ be smooth approximations to η and q such that $\eta_\sigma(u)\to\eta(u)$ as σ → 0 and
$$q_\sigma(u,k) = \int^u \eta_\sigma'(z-k)\,f'(z)\,dz.$$
For a test function ϕ define
$$\Lambda_T^\sigma(u,k) = \int_0^T\!\!\int \eta_\sigma'(u-k)\bigl(u_s + f(u)_y - (\delta(u)u_y)_y\bigr)\,\varphi\,dy\,ds$$
(which is clearly zero because of (3.53)) and
$$\Lambda_{\epsilon,\epsilon_0}^\sigma(u,v) = \int_0^T\!\!\int \Lambda_T^\sigma\bigl(u, v(x,t)\bigr)\,dx\,dt.$$
Note that since u satisfies (3.53), $\Lambda_{\epsilon,\epsilon_0}^\sigma = 0$ for every v. We now split $\Lambda_{\epsilon,\epsilon_0}^\sigma$ into two parts. Writing (cf. (2.10))
$$\begin{aligned}
\eta_\sigma'(u-k)\bigl(u_s + f(u)_y - (\delta(u)u_y)_y\bigr)
&= \eta_\sigma(u-k)_s + f'(u)\,\eta_\sigma'(u-k)\,u_y - (\delta(u)u_y)_y\,\eta_\sigma'(u-k)\\
&= \eta_\sigma(u-k)_s + q_\sigma(u,k)_u\,u_y - (\delta(u)u_y)_y\,\eta_\sigma'(u-k)\\
&= \eta_\sigma(u-k)_s + \bigl(q_\sigma(u,k) - \delta(u)\,\eta_\sigma(u-k)_y\bigr)_y + \eta_\sigma''(u-k)\,\delta(u)\,(u_y)^2,
\end{aligned}$$
we may introduce
$$\Lambda_1^\sigma(u,v) = \int_0^T\!\!\int\!\!\int_0^T\!\!\int \eta_\sigma''(u-v)\,\delta(u)\,(u_y)^2\,\varphi\,dy\,ds\,dx\,dt,$$
$$\Lambda_2^\sigma(u,v) = \int_0^T\!\!\int\!\!\int_0^T\!\!\int \Bigl(\eta_\sigma(u-v)_s + \bigl(q_\sigma(u,v) - \delta(u)\,\eta_\sigma(u-v)_y\bigr)_y\Bigr)\,\varphi\,dy\,ds\,dx\,dt,$$
such that $\Lambda_{\epsilon,\epsilon_0}^\sigma = \Lambda_1^\sigma + \Lambda_2^\sigma$. Note that if δ(u) > 0, we always have $\Lambda_1^\sigma \ge 0$, and hence $\Lambda_2^\sigma \le 0$. Then we have that
$$\Lambda_2 := \limsup_{\sigma\to 0}\,\Lambda_2^\sigma \le 0.$$
To estimate Λ₂, we integrate by parts:
$$\begin{aligned}
\Lambda_2(u,v) = {} & \int_0^T\!\!\int\!\!\int_0^T\!\!\int \bigl(-\eta(u-v)\varphi_s - q(u,v)\varphi_y + V(u,v)\varphi_{yy}\bigr)\,dy\,ds\,dx\,dt\\
& + \int_0^T\!\!\int\!\!\int \eta\bigl(u(T)-v\bigr)\,\varphi\big|_{s=T}\,dy\,dx\,dt - \int_0^T\!\!\int\!\!\int \eta(u_0-v)\,\varphi\big|_{s=0}\,dy\,dx\,dt\\
= {} & \int_0^T\!\!\int\!\!\int_0^T\!\!\int \bigl(\eta(u-v)\varphi_t + q(u,v)\varphi_x - V(u,v)\varphi_{xy}\bigr)\,dy\,ds\,dx\,dt\\
& + \int_0^T\!\!\int\!\!\int \eta\bigl(u(T)-v\bigr)\,\varphi\big|_{s=T}\,dy\,dx\,dt - \int_0^T\!\!\int\!\!\int \eta(u_0-v)\,\varphi\big|_{s=0}\,dy\,dx\,dt,
\end{aligned}$$
where
$$V(u,v) = \int_u^v \delta(s)\,\eta'(s-v)\,ds.$$
Now define (the "dual of Λ₂")
$$\Lambda_2^* := -\int_0^T\!\!\int\!\!\int_0^T\!\!\int \bigl(\eta(u-v)\varphi_t + q(u,v)\varphi_x - V(u,v)\varphi_{xy}\bigr)\,dy\,ds\,dx\,dt - \int_0^T\!\!\int\!\!\int \Bigl[\eta\bigl(u-v(t)\bigr)\varphi\Bigr]_{t=0}^{t=T}\,dx\,dy\,ds.$$
Then we can write
$$\begin{aligned}
\Lambda_2 = -\Lambda_2^* & + \underbrace{\int_0^T\!\!\int\!\!\int \eta\bigl(u(T)-v\bigr)\,\varphi\big|_{s=T}\,dy\,dx\,dt}_{\Phi_1}
- \underbrace{\int_0^T\!\!\int\!\!\int \eta(u_0-v)\,\varphi\big|_{s=0}\,dy\,dx\,dt}_{\Phi_2}\\
& + \underbrace{\int_0^T\!\!\int\!\!\int \eta\bigl(u-v(T)\bigr)\,\varphi\big|_{t=T}\,dx\,dy\,ds}_{\Phi_3}
- \underbrace{\int_0^T\!\!\int\!\!\int \eta(u-v_0)\,\varphi\big|_{t=0}\,dx\,dy\,ds}_{\Phi_4}\\
=: -\Lambda_2^* & + \Phi.
\end{aligned}$$
We will need later that
$$\Phi = \Lambda_2^* + \Lambda_2 \le \Lambda_2^*. \tag{3.54}$$
Let
$$\Omega_{\epsilon_0}(t) = \int_0^t \omega_{\epsilon_0}(s)\,ds
\quad\text{and}\quad
e(t) = \|u(t)-v(t)\|_1 = \int \eta\bigl(u(x,t)-v(x,t)\bigr)\,dx.$$
To continue estimating, we need the following proposition.

Proposition 3.16.
$$\begin{aligned}
\Phi \ge {} & \Omega_{\epsilon_0}(T)\,e(T) - \Omega_{\epsilon_0}(T)\,e(0) + \int_0^T \omega_{\epsilon_0}(T-t)\,e(t)\,dt - \int_0^T \omega_{\epsilon_0}(t)\,e(t)\,dt\\
& - 4\,\Omega_{\epsilon_0}(T)\bigl(\epsilon_0\|f\|_{\mathrm{Lip}} + \epsilon\bigr)\,\mathrm{T.V.}(v_0).
\end{aligned}$$

Proof (of Proposition 3.16). We start by estimating Φ₁. First note that
$$\begin{aligned}
\eta\bigl(u(y,T)-v(x,t)\bigr) &= |u(y,T)-v(x,t)|\\
&\ge |u(y,T)-v(y,T)| - |v(y,T)-v(y,t)| - |v(y,t)-v(x,t)|\\
&= \eta\bigl(u(y,T)-v(y,T)\bigr) - |v(y,T)-v(y,t)| - |v(y,t)-v(x,t)|.
\end{aligned}$$
Thus
$$\begin{aligned}
\Phi_1 \ge {} & \int_0^T\!\!\int\!\!\int \eta\bigl(u(y,T)-v(y,T)\bigr)\,\varphi\big|_{s=T}\,dy\,dx\,dt\\
& - \int_0^T\!\!\int\!\!\int |v(y,T)-v(y,t)|\,\varphi\big|_{s=T}\,dy\,dx\,dt
- \int_0^T\!\!\int\!\!\int |v(y,t)-v(x,t)|\,\varphi\big|_{s=T}\,dy\,dx\,dt\\
\ge {} & \Omega_{\epsilon_0}(T)\,e(T) - \Omega_{\epsilon_0}(T)\bigl(\epsilon_0\|f\|_{\mathrm{Lip}} + \epsilon\bigr)\,\mathrm{T.V.}(v_0).
\end{aligned}$$
Here we have used that v is an exact solution. The estimate for Φ₂ is similar, yielding
$$\Phi_2 \ge -\Omega_{\epsilon_0}(T)\,e(0) - \Omega_{\epsilon_0}(T)\bigl(\epsilon_0\|f\|_{\mathrm{Lip}}+\epsilon\bigr)\,\mathrm{T.V.}(v_0).$$
To estimate Φ₃ we proceed in the same manner:
$$\eta\bigl(u(y,s)-v(x,T)\bigr) \ge \eta\bigl(u(y,s)-v(y,s)\bigr) - |v(y,s)-v(x,s)| - |v(x,s)-v(x,T)|.$$
This gives
$$\Phi_3 \ge \int_0^T \omega_{\epsilon_0}(T-t)\,e(t)\,dt - \Omega_{\epsilon_0}(T)\bigl(\epsilon_0\|f\|_{\mathrm{Lip}}+\epsilon\bigr)\,\mathrm{T.V.}(v_0),$$
while by the same reasoning, the estimate for Φ₄ reads
$$\Phi_4 \ge -\int_0^T \omega_{\epsilon_0}(t)\,e(t)\,dt - \Omega_{\epsilon_0}(T)\bigl(\epsilon_0\|f\|_{\mathrm{Lip}}+\epsilon\bigr)\,\mathrm{T.V.}(v_0).$$
The proof of Proposition 3.16 is complete.

To proceed further, we shall need the following Gronwall-type lemma.

Lemma 3.17. Let θ be a nonnegative function that satisfies
$$\Omega_{\epsilon_0}^\infty(\tau)\,\theta(\tau) + \int_0^\tau \omega_{\epsilon_0}^\infty(\tau-t)\,\theta(t)\,dt \le C\,\Omega_{\epsilon_0}^\infty(\tau) + \int_0^\tau \omega_{\epsilon_0}^\infty(t)\,\theta(t)\,dt \tag{3.55}$$
for all τ ∈ [0, T] and some constant C. Then θ(τ) ≤ 2C.

Proof (of Lemma 3.17). If τ ≤ ε₀, then for t ∈ [0, τ] we have $\omega_{\epsilon_0}^\infty(t) = \omega_{\epsilon_0}^\infty(\tau-t) = 1/(2\epsilon_0)$. In this case (3.55) immediately simplifies to θ(τ) ≤ C. For τ > ε₀, we can write (3.55) as
$$\theta(\tau) \le C + \frac{1}{\Omega_{\epsilon_0}^\infty(\tau)}\int_0^{\epsilon_0}\bigl(\omega_{\epsilon_0}^\infty(t) - \omega_{\epsilon_0}^\infty(\tau-t)\bigr)\,\theta(t)\,dt.$$
For t ∈ [0, ε₀] we have θ(t) ≤ C, and this implies
$$\theta(\tau) \le C\biggl(1 + \frac{1}{\Omega_{\epsilon_0}^\infty(\tau)}\int_0^{\epsilon_0}\bigl(\omega_{\epsilon_0}^\infty(t) - \omega_{\epsilon_0}^\infty(\tau-t)\bigr)\,dt\biggr) \le 2C.$$
This concludes the proof of the lemma.

Now we can continue the estimate of e(T).

Proposition 3.18. We have that
$$e(T) \le 2e(0) + 8\bigl(\epsilon + \epsilon_0\|f\|_{\mathrm{Lip}}\bigr)\,\mathrm{T.V.}(v_0) + 2\lim_{\omega\to\omega^\infty}\,\sup_{t\in[0,T]}\frac{\Lambda_2^*(u,v)}{\Omega_{\epsilon_0}^\infty(t)}.$$
Proof (of Proposition 3.18). Starting with the inequality (3.54) and using the estimate for Φ from Proposition 3.16, we have, after passing to the limit ω → ω∞, that
$$\begin{aligned}
\Omega_{\epsilon_0}^\infty(T)\,e(T) + \int_0^T \omega_{\epsilon_0}^\infty(T-t)\,e(t)\,dt \le {} & \Omega_{\epsilon_0}^\infty(T)\,e(0) + \int_0^T \omega_{\epsilon_0}^\infty(t)\,e(t)\,dt\\
& + 4\,\Omega_{\epsilon_0}^\infty(T)\bigl(\epsilon + \epsilon_0\|f\|_{\mathrm{Lip}}\bigr)\,\mathrm{T.V.}(v_0)\\
& + \Omega_{\epsilon_0}^\infty(T)\lim_{\omega\to\omega^\infty}\,\sup_{t\in[0,T]}\frac{\Lambda_2^*(u,v)}{\Omega_{\epsilon_0}^\infty(t)}.
\end{aligned}$$
We apply Lemma 3.17 with
$$C = 4\bigl(\epsilon + \epsilon_0\|f\|_{\mathrm{Lip}}\bigr)\,\mathrm{T.V.}(v_0) + \lim_{\omega\to\omega^\infty}\,\sup_{t\in[0,T]}\frac{\Lambda_2^*(u,v)}{\Omega_{\epsilon_0}^\infty(t)} + e(0)$$
to complete the proof.

To finish the proof of the theorem, it remains only to estimate
$$\lim_{\omega\to\omega^\infty}\,\sup_{t\in[0,T]}\frac{\Lambda_2^*(u,v)}{\Omega(t)}.$$
We will use the following inequality:
$$\bigl|V(u,v_+) - V(u,v_-)\bigr| \le |v_+ - v_-|\,\biggl|\frac{1}{v_+-v_-}\int_{v_-}^{v_+}\delta(s)\,ds\biggr|. \tag{3.56}$$
Since v is an entropy solution to (3.1), we have that
$$\Lambda_2^* \le -\int_0^T\!\!\int\!\!\int_0^T\!\!\int V(u,v)\,\varphi_{xy}\,dy\,ds\,dx\,dt. \tag{3.57}$$
Since v is of bounded variation, it suffices to study the case where v is differentiable except on a countable number of curves x = x(t). We shall bound Λ₂* in the case that we have one such curve; the generalization to more than one is straightforward. Integrating (3.57) by parts, we obtain
$$\Lambda_2^* \le \int_0^T\!\!\int \Psi(y,s)\,dy\,ds, \tag{3.58}$$
where Ψ is given by
$$\Psi(y,s) = \int_0^T\biggl(\int_{-\infty}^{x(t)} V(u,v)_v\,v_x\,\varphi_y\,dx + \frac{[\![V]\!]}{[\![v]\!]}\,[\![v]\!]\,\varphi_y\Big|_{x=x(t)} + \int_{x(t)}^{\infty} V(u,v)_v\,v_x\,\varphi_y\,dx\biggr)\,dt.$$
As before, $[\![a]\!]$ denotes the jump in a, i.e., $[\![a]\!] = a(x(t)+,t) - a(x(t)-,t)$. Using (3.56), we obtain
$$|\Psi(y,s)| \le \|\delta\|_v \int_0^T\biggl(\int_{-\infty}^{x(t)} |v_x|\,|\varphi_y|\,dx + \bigl|[\![v]\!]\bigr|\,|\varphi_y|\Big|_{x=x(t)} + \int_{x(t)}^{\infty} |v_x|\,|\varphi_y|\,dx\biggr)\,dt. \tag{3.59}$$
Let D be given by
$$D(x,t) = \int_0^T\!\!\int |\varphi_y|\,dy\,ds.$$
A simple calculation shows that
$$D(x,t) = \frac{1}{\epsilon}\int_0^T \omega_{\epsilon_0}(t-s)\,ds \int |\omega'(y)|\,dy \le \frac{1}{\epsilon}\int_0^T \omega_{\epsilon_0}(t-s)\,ds.$$
Consequently,
$$\int_0^T \sup_x D(x,t)\,dt \le \frac{1}{\epsilon}\int_0^T\!\!\int_0^T \omega_{\epsilon_0}(t-s)\,ds\,dt = \frac{2}{\epsilon}\int_0^T (T-t)\,\omega_{\epsilon_0}(t)\,dt \le \frac{2T\,\Omega(T)}{\epsilon}.$$
Inserting this in (3.59), and the result in (3.58), we find that
$$\Lambda_2^*(u,v,T) \le \frac{2}{\epsilon}\,T\,\mathrm{T.V.}(v_0)\,\|\delta\|_v\,\Omega(T).$$
Summing up, we have now shown that
$$e(T) \le 2e(0) + 8\bigl(\epsilon + \epsilon_0\|f\|_{\mathrm{Lip}}\bigr)\,\mathrm{T.V.}(v_0) + \frac{4}{\epsilon}\,T\,\mathrm{T.V.}(v_0)\,\|\delta\|_v.$$
We can set ε₀ to zero, and minimize over ε, obtaining
$$\|u(T) - v(T)\|_1 \le 2\|u_0 - v_0\|_1 + 4\,\mathrm{T.V.}(v_0)\sqrt{8T\,\|\delta\|_v}.$$
The theorem is proved.

The main idea behind this approach to obtaining a priori error estimates is to choose the "Kuznetsov-type" form $\Lambda_{\epsilon,\epsilon_0}$ such that $\Lambda_{\epsilon,\epsilon_0}(u,v) = 0$ for every function v, and then to write $\Lambda_{\epsilon,\epsilon_0}$ as a sum of a nonnegative and a nonpositive part. Given a numerical scheme, the task is then to prove a discrete analogue of the previous theorem.
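The $\sqrt{T\,\|\delta\|_v}$ scaling in Theorem 3.15 can be made concrete in a degenerate special case (my own illustration, not from the book): for the trivial flux $f \equiv 0$ and constant viscosity δ, equation (3.53) is the heat equation, whose solution with a unit step as initial data is an error-function profile. The $L^1$ distance to the inviscid solution (the unmoved step) is then exactly $2\sqrt{\delta T/\pi}$, exhibiting the square-root dependence on $T\delta$:

```python
# Check that || u(.,T) - u0 ||_1 = 2*sqrt(delta*T/pi) for the heat equation
# u_t = delta u_xx with a unit step u0 as initial data (f = 0 in (3.53)).
import math

def l1_error(delta, T, h=1e-3, L=10.0):
    # midpoint quadrature of |u(x,T) - u0(x)| over [-L, L]
    n = int(2 * L / h)
    total = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        u = 0.5 * (1.0 + math.erf(x / math.sqrt(4.0 * delta * T)))
        u0 = 1.0 if x >= 0.0 else 0.0
        total += abs(u - u0) * h
    return total

T = 1.0
for delta in (0.1, 0.01):
    print(delta, l1_error(delta, T), 2.0 * math.sqrt(delta * T / math.pi))
```

Halving δ by a factor 10 reduces the error by a factor $\sqrt{10}$, in agreement with the square-root rate (the constant here is specific to this example, not the one in the theorem).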
3.4 Measure-Valued Solutions

You try so hard, but you don't understand . . .
Bob Dylan, Ballad of a Thin Man (1965)
Monotone methods are at most first-order accurate. Consequently, one must work harder to show that higher-order methods converge to the entropy solution. While this is possible in one space dimension, i.e., in the above setting, it is much more difficult in several space dimensions. One useful tool in the analysis of higher-order methods is the concept of measure-valued solutions. This is a rather complicated concept, which requires a background in analysis beyond this book. Therefore, the presentation in this section is brief, and is intended to give the reader a first flavor and an idea of what this method can accomplish.

Consider the case where a numerical scheme gives a sequence $U_j^n$ that is uniformly bounded in $L^\infty(\mathbb{R}\times[0,\infty))$, with the $L^1$-norm Lipschitz continuous in time, but such that there is no bound on the total variation. We can still infer the existence of a weak limit
$$u_{\Delta t} \overset{*}{\rightharpoonup} u,$$
but the problem is to show that
$$f(u_{\Delta t}) \overset{*}{\rightharpoonup} f(u).$$
Here we have introduced the concept of weak-∗ $L^\infty$ convergence. A sequence $\{u_n\}$ that is bounded in $L^\infty$ is said to converge weakly-∗ to u if for all $v\in L^1$,
$$\int u_n v\,dx \to \int u v\,dx \quad\text{as } n\to\infty.$$
Since $u_{\Delta t}$ is bounded, $f(u_{\Delta t})$ is also bounded and converges weakly, and thus $f(u_{\Delta t}) \overset{*}{\rightharpoonup} \bar f$, but $\bar f$ is in general not equal to $f(u)$. We provide a simple example of the problem.

♦ Example 3.19. Let $u_n = \sin(nx)$ and $f(u) = u^2$. Then
$$\biggl|\int \sin(nx)\,\varphi(x)\,dx\biggr| = \frac{1}{n}\biggl|\int \cos(nx)\,\varphi'(x)\,dx\biggr| \le \frac{C}{n} \to 0 \quad\text{as } n\to\infty.$$
On the other hand, $f(u_n) = \sin^2(nx) = (1-\cos(2nx))/2$, and hence a similar estimate shows that
$$\biggl|\int \Bigl(f(u_n) - \tfrac12\Bigr)\,\varphi(x)\,dx\biggr| \le \frac{C}{n} \to 0 \quad\text{as } n\to\infty.$$
Thus we conclude that
$$u_n \overset{*}{\rightharpoonup} 0, \qquad f(u_n) \overset{*}{\rightharpoonup} \tfrac12 \ne 0 = f(0). \qquad\diamondsuit$$
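Example 3.19 is easy to verify numerically (my own sketch, not from the book): against a fixed test function, integrals of $\sin(nx)$ tend to 0 while integrals of $\sin^2(nx)$ tend to the integral of 1/2, so the weak-∗ limit of $f(u_n)$ is not f of the weak-∗ limit:

```python
# Weak-* convergence of sin(nx) and sin^2(nx) against a fixed test function.
import math

def integrate(g, a=0.0, b=2.0 * math.pi, n=50000):
    # simple midpoint rule
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

phi = lambda x: math.exp(-(x - math.pi) ** 2)   # a fixed test function

for k in (1, 10, 100):
    i1 = integrate(lambda x, k=k: math.sin(k * x) * phi(x))
    i2 = integrate(lambda x, k=k: math.sin(k * x) ** 2 * phi(x))
    print(k, i1, i2)

half = 0.5 * integrate(phi)
# as k grows, i1 -> 0 while i2 -> half
```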
To be able to treat this situation, we will further weaken the requirements on solutions of conservation laws. Let $\mathcal M(\mathbb{R})$ denote the space of bounded Radon measures on $\mathbb{R}$ and
$$C_0(\mathbb{R}) = \Bigl\{g\in C(\mathbb{R}) \Bigm| \lim_{|\lambda|\to\infty} g(\lambda) = 0\Bigr\}.$$
If $\mu\in\mathcal M(\mathbb{R})$, then we write
$$\langle \mu, g\rangle = \int_{\mathbb{R}} g(\lambda)\,d\mu(\lambda) \quad\text{for all } g\in C_0(\mathbb{R}).$$
Now we have that $\mu\in\mathcal M(\mathbb{R})$ if and only if $|\langle\mu,g\rangle| \le C\,\|g\|_{L^\infty(\mathbb{R})}$ for all $g\in C_0(\mathbb{R})$. We can then define a norm on $\mathcal M(\mathbb{R})$ by
$$\|\mu\|_{\mathcal M(\mathbb{R})} = \sup\bigl\{\,|\langle\mu,\psi\rangle| \bigm| \psi\in\mathcal D(\mathbb{R}),\ \|\psi\|_{L^\infty(\mathbb{R})}\le 1\bigr\}.$$
The space $\mathcal M(\mathbb{R})$ equipped with the norm $\|\cdot\|_{\mathcal M(\mathbb{R})}$ is a Banach space, and it is isometrically isomorphic to the dual space of $C_0(\mathbb{R})$ with the norm $\|\cdot\|_{L^\infty(\mathbb{R})}$; see, e.g., [107, p. 149]. Next we define the space of probability measures $\mathrm{Prob}(\mathbb{R})$ as
$$\mathrm{Prob}(\mathbb{R}) = \bigl\{\mu\in\mathcal M(\mathbb{R}) \bigm| \mu \text{ is nonnegative and } \|\mu\|_{\mathcal M(\mathbb{R})} = 1\bigr\}.$$
Then we can state the fundamental theorem in the theory of compensated compactness.

Theorem 3.20 (Young's theorem). Let $K\subset\mathbb{R}$ be a bounded open set and $\{u^\epsilon\}$ a sequence of functions $u^\epsilon\colon \mathbb{R}\times[0,T]\to K$. Then there exists a family of probability measures $\{\nu_{(x,t)}(\lambda)\in\mathrm{Prob}(\mathbb{R})\}_{(x,t)\in\mathbb{R}\times[0,T]}$ (depending weak-∗ measurably on (x, t)) such that for any continuous function $g\colon K\to\mathbb{R}$ we have, for a subsequence,
$$g(u^\epsilon) \overset{*}{\rightharpoonup} \bar g \quad\text{in } L^\infty(\mathbb{R}\times[0,T]) \text{ as } \epsilon\to 0,$$
where (the exceptional set depending possibly on g)
$$\bar g(x,t) := \langle\nu_{(x,t)}, g\rangle = \int_{\mathbb{R}} g(\lambda)\,d\nu_{(x,t)}(\lambda) \quad\text{for a.e. } (x,t)\in\mathbb{R}\times[0,T].$$
Furthermore,
$$\operatorname{supp}\nu_{(x,t)} \subset \bar K \quad\text{for a.e. } (x,t)\in\mathbb{R}\times[0,T].$$
This theorem is indeed the main reason why measure-valued solutions are easier to obtain than weak solutions, since for any bounded sequence
of approximations to a solution of a conservation law we can associate (at least) one probability measure $\nu_{(x,t)}$ representing the weak-star limits of the sequence. Thus we avoid having to show that the method is TVD stable and use Helly's theorem to be able to work with the limit of the sequence. The measures associated with weakly convergent sequences are frequently called Young measures. Intuitively, when we have no knowledge of eventual oscillations in $u^\epsilon$ as ε → 0, the Young measure $\nu_{(x,t)}(E)$ can be thought of as the probability that the "limit" at the point (x, t) takes a value in the set E. To be a bit more precise, define
$$\nu_{(x,t)}^{\epsilon,r}(E) = \frac{1}{r^2}\operatorname{meas}\bigl\{(y,s)\bigm| |x-y|,\,|t-s|\le r \text{ and } u^\epsilon(y,s)\in E\bigr\}.$$
Then for small r, $\nu_{(x,t)}^{\epsilon,r}(E)$ is the probability that $u^\epsilon$ takes values in E near x. It can be shown that
$$\nu_{(x,t)} = \lim_{r\to 0}\,\lim_{\epsilon\to 0}\,\nu_{(x,t)}^{\epsilon,r};$$
see [6]. We also have the following corollary of Young's theorem.

Corollary 3.21. The sequence $u^\epsilon$ converges strongly to u if and only if the measure $\nu_{(x,t)}$ reduces to a Dirac measure located at u(x, t), i.e., $\nu_{(x,t)} = \delta_{u(x,t)}$.

Now we can define measure-valued solutions. A probability measure $\nu_{(x,t)}$ is a measure-valued solution to (3.1) if
$$\partial_t\bigl\langle\nu_{(x,t)}, \mathrm{Id}\bigr\rangle + \partial_x\bigl\langle\nu_{(x,t)}, f\bigr\rangle = 0$$
in the distributional sense, where Id is the identity map, Id(λ) = λ. As with weak solutions, we call a measure-valued solution compatible with the entropy pair (η, q) if
$$\partial_t\bigl\langle\nu_{(x,t)}, \eta\bigr\rangle + \partial_x\bigl\langle\nu_{(x,t)}, q\bigr\rangle \le 0 \tag{3.60}$$
in the distributional sense. If (3.60) holds for all convex η, we call $\nu_{(x,t)}$ a measure-valued entropy solution. Clearly, weak solutions are also measure-valued solutions, as we can see by setting $\nu_{(x,t)} = \delta_{u(x,t)}$ for a weak entropy solution u. But measure-valued solutions are more general than weak solutions, since for any two measure-valued solutions $\nu_{(x,t)}$ and $\mu_{(x,t)}$, and for any θ ∈ [0, 1], the convex combination
$$\theta\,\nu_{(x,t)} + (1-\theta)\,\mu_{(x,t)} \tag{3.61}$$
is also a measure-valued solution. It is not clear, however, what initial data are satisfied by the measure-valued solution defined by (3.61). We
would like our measure-valued solutions initially to be Dirac masses, i.e., $\nu_{(x,0)} = \delta_{u_0(x)}$. Concretely, we shall assume the following:
$$\lim_{T\downarrow 0}\frac{1}{T}\int_0^T\!\!\int_{-A}^{A}\bigl\langle\nu_{(x,t)}, |\mathrm{Id}-u_0(x)|\bigr\rangle\,dx\,dt = 0 \tag{3.62}$$
for every A. For any Young measure $\nu_{(x,t)}$ we have the following lemma.

Lemma 3.22. Let $\nu_{(x,t)}$ be a Young measure (with support in [−K, K]), and let $\omega_\epsilon$ be a standard mollifier in x and t. Then:

(i) There exists a Young measure $\nu_{(x,t)}^\epsilon$ defined by
$$\bigl\langle\nu_{(x,t)}^\epsilon, g\bigr\rangle = \bigl\langle\nu_{(x,t)}, g\bigr\rangle * \omega_\epsilon = \int\!\!\int \omega_\epsilon(x-y)\,\omega_\epsilon(t-s)\,\bigl\langle\nu_{(y,s)}, g\bigr\rangle\,dy\,ds. \tag{3.63}$$

(ii) For all (x, t) ∈ R × [0, T] there exist bounded measures $\partial_x\nu_{(x,t)}^\epsilon$ and $\partial_t\nu_{(x,t)}^\epsilon$, defined by
$$\bigl\langle\partial_t\nu_{(x,t)}^\epsilon, g\bigr\rangle = \partial_t\bigl\langle\nu_{(x,t)}^\epsilon, g\bigr\rangle, \qquad
\bigl\langle\partial_x\nu_{(x,t)}^\epsilon, g\bigr\rangle = \partial_x\bigl\langle\nu_{(x,t)}^\epsilon, g\bigr\rangle. \tag{3.64}$$

Proof. Clearly, the right-hand side of (3.63) is a bounded linear functional on $C_0(\mathbb{R})$, and hence the Riesz representation theorem guarantees the existence of $\nu_{(x,t)}^\epsilon$. To show that $\|\nu_{(x,t)}^\epsilon\|_{\mathcal M(\mathbb{R})} = 1$, let $\{\psi_n\}$ be a sequence of test functions such that
$$\bigl\langle\nu_{(x,t)}, \psi_n\bigr\rangle \to 1 \quad\text{as } n\to\infty.$$
Then for any 1 > κ > 0 we can find an N such that
$$\bigl\langle\nu_{(x,t)}, \psi_n\bigr\rangle > 1-\kappa \quad\text{for } n\ge N.$$
Thus, for such n,
$$\bigl\langle\nu_{(x,t)}^\epsilon, \psi_n\bigr\rangle \ge 1-\kappa,$$
and therefore $\|\nu_{(x,t)}^\epsilon\|_{\mathcal M(\mathbb{R})} \ge 1$. The opposite inequality is immediate, since
$$\bigl|\bigl\langle\nu_{(x,t)}^\epsilon, \psi\bigr\rangle\bigr| \le \sup_{(y,s)}\bigl|\bigl\langle\nu_{(y,s)}, \psi\bigr\rangle\bigr|$$
for all test functions ψ. Therefore, $\nu_{(x,t)}^\epsilon$ is a probability measure. Similarly, the existence of $\partial_x\nu_{(x,t)}^\epsilon$ and $\partial_t\nu_{(x,t)}^\epsilon$ follows by the Riesz representation theorem. Since $\nu_{(x,t)}^\epsilon$ is bounded, the boundedness of $\partial_x\nu_{(x,t)}^\epsilon$ and $\partial_t\nu_{(x,t)}^\epsilon$ follows for each fixed ε > 0.
Now that we have established the existence of the “smooth approximation” to a Young measure, we can use this to prove the following lemma.
Lemma 3.23. Assume that f is a Lipschitz continuous function and that $\nu_{(x,t)}(\lambda)$ and $\sigma_{(x,t)}(\mu)$ are measure-valued solutions with support in [−K, K]. Then
$$\partial_t\bigl\langle\nu_{(x,t)}\otimes\sigma_{(x,t)}, |\lambda-\mu|\bigr\rangle + \partial_x\bigl\langle\nu_{(x,t)}\otimes\sigma_{(x,t)}, q(\lambda,\mu)\bigr\rangle \le 0 \tag{3.65}$$
in the distributional sense, where $q(\lambda,\mu) = \operatorname{sign}(\lambda-\mu)\bigl(f(\lambda)-f(\mu)\bigr)$, and $\nu_{(x,t)}\otimes\sigma_{(x,t)}$ denotes the product measure $d\nu_{(x,t)}\,d\sigma_{(x,t)}$ on R × R.

Proof. If $\nu_{(x,t)}^\epsilon$ and $\sigma_{(x,t)}^\epsilon$ are defined by (3.63), and $\varphi\in C_0^\infty(\mathbb{R}\times[0,T])$, then we have that
$$\int_{\mathbb{R}\times[0,T]}\bigl\langle\nu_{(x,t)}, g\bigr\rangle\,\partial_t(\varphi*\omega_\epsilon)\,dx\,dt = \int_{\mathbb{R}\times[0,T]}\bigl\langle\nu_{(x,t)}^\epsilon, g\bigr\rangle\,\partial_t\varphi\,dx\,dt = -\int_{\mathbb{R}\times[0,T]}\bigl\langle\partial_t\nu_{(x,t)}^\epsilon, g\bigr\rangle\,\varphi\,dx\,dt,$$
and similarly,
$$\int_{\mathbb{R}\times[0,T]}\bigl\langle\nu_{(x,t)}, g\bigr\rangle\,\partial_x(\varphi*\omega_\epsilon)\,dx\,dt = -\int_{\mathbb{R}\times[0,T]}\bigl\langle\partial_x\nu_{(x,t)}^\epsilon, g\bigr\rangle\,\varphi\,dx\,dt,$$
and analogous identities also hold for $\sigma_{(x,t)}$. Therefore,
$$\bigl\langle\partial_t\nu_{(x,t)}^\epsilon, |\lambda-\mu|\bigr\rangle + \bigl\langle\partial_x\nu_{(x,t)}^\epsilon, q(\lambda,\mu)\bigr\rangle \le 0, \tag{3.66}$$
$$\bigl\langle\partial_t\sigma_{(x,t)}^\epsilon, |\lambda-\mu|\bigr\rangle + \bigl\langle\partial_x\sigma_{(x,t)}^\epsilon, q(\lambda,\mu)\bigr\rangle \le 0. \tag{3.67}$$
Next, we observe that for any continuous function g,
$$\partial_t\bigl\langle\nu_{(x,t)}^\epsilon\otimes\sigma_{(x,t)}^\epsilon, g(\lambda,\mu)\bigr\rangle
= \int_{\mathbb{R}}\bigl\langle\partial_t\nu_{(x,t)}^\epsilon, g(\lambda,\mu)\bigr\rangle\,d\sigma_{(x,t)}^\epsilon(\mu)
+ \int_{\mathbb{R}}\bigl\langle\partial_t\sigma_{(x,t)}^\epsilon, g(\lambda,\mu)\bigr\rangle\,d\nu_{(x,t)}^\epsilon(\lambda),$$
and an analogous equality holds for $\partial_x\langle\nu_{(x,t)}^\epsilon\otimes\sigma_{(x,t)}^\epsilon, g(\lambda,\mu)\rangle$. Therefore, we find that
$$\begin{aligned}
\int_{\mathbb{R}\times[0,T]}&\Bigl(\bigl\langle\nu_{(x,t)}^{\epsilon_1}\otimes\sigma_{(x,t)}^{\epsilon_2}, |\lambda-\mu|\bigr\rangle\,\varphi_t + \bigl\langle\nu_{(x,t)}^{\epsilon_1}\otimes\sigma_{(x,t)}^{\epsilon_2}, q(\lambda,\mu)\bigr\rangle\,\varphi_x\Bigr)\,dx\,dt\\
&= -\int_{\mathbb{R}\times[0,T]}\int_{\mathbb{R}}\Bigl(\bigl\langle\partial_t\nu^{\epsilon_1}, |\lambda-\mu|\bigr\rangle + \bigl\langle\partial_x\nu^{\epsilon_1}, q(\lambda,\mu)\bigr\rangle\Bigr)\,d\sigma^{\epsilon_2}(\mu)\,\varphi\,dx\,dt\\
&\quad - \int_{\mathbb{R}\times[0,T]}\int_{\mathbb{R}}\Bigl(\bigl\langle\partial_t\sigma^{\epsilon_2}, |\lambda-\mu|\bigr\rangle + \bigl\langle\partial_x\sigma^{\epsilon_2}, q(\lambda,\mu)\bigr\rangle\Bigr)\,d\nu^{\epsilon_1}(\lambda)\,\varphi\,dx\,dt\\
&\ge 0
\end{aligned}$$
for any nonnegative test function ϕ, by (3.66) and (3.67). Now we would like to conclude the proof by sending ε₁ and ε₂ to zero. Consider the second term,
$$I^{\epsilon_1,\epsilon_2} = \int_{\mathbb{R}\times[0,T]}\bigl\langle\nu^{\epsilon_1}\otimes\sigma^{\epsilon_2}, q(\lambda,\mu)\bigr\rangle\,\varphi_x(x,t)\,dx\,dt
= \int_{\mathbb{R}\times[0,T]}\int\!\!\int\bigl\langle\sigma^{\epsilon_2}, q(\lambda,\mu)\bigr\rangle\,d\nu_{(y,s)}\;\omega_{\epsilon_1}(x-y)\,\omega_{\epsilon_1}(t-s)\,\varphi_x(x,t)\,dy\,ds\,dx\,dt.$$
Since
$$\int\!\!\int\bigl\langle\sigma^{\epsilon_2}, q(\lambda,\mu)\bigr\rangle\,d\nu_{(y,s)}\,\omega_{\epsilon_1}(x-y)\,\omega_{\epsilon_1}(t-s)\,\varphi_x(x,t)\,dy\,ds
\to \int\bigl\langle\sigma^{\epsilon_2}, q(\lambda,\mu)\bigr\rangle\,d\nu_{(x,t)}\,\varphi_x(x,t) < \infty$$
for almost all (x, t) as ε₁ → 0, we can use the Lebesgue bounded convergence theorem to conclude that
$$\lim_{\epsilon_1\to 0} I^{\epsilon_1,\epsilon_2} = \int_{\mathbb{R}\times[0,T]}\bigl\langle\nu_{(x,t)}\otimes\sigma_{(x,t)}^{\epsilon_2}, q(\lambda,\mu)\bigr\rangle\,\varphi_x(x,t)\,dx\,dt.$$
We can apply this argument once more for ε₂, obtaining
$$\lim_{\epsilon_2\to 0}\lim_{\epsilon_1\to 0} I^{\epsilon_1,\epsilon_2} = \int_{\mathbb{R}\times[0,T]}\bigl\langle\nu_{(x,t)}\otimes\sigma_{(x,t)}, q(\lambda,\mu)\bigr\rangle\,\varphi_x(x,t)\,dx\,dt. \tag{3.68}$$
Similarly, we obtain
$$\lim_{\epsilon_2\to 0}\lim_{\epsilon_1\to 0}\int_{\mathbb{R}\times[0,T]}\bigl\langle\nu_{(x,t)}^{\epsilon_1}\otimes\sigma_{(x,t)}^{\epsilon_2}, |\lambda-\mu|\bigr\rangle\,\varphi_t(x,t)\,dx\,dt
= \int_{\mathbb{R}\times[0,T]}\bigl\langle\nu_{(x,t)}\otimes\sigma_{(x,t)}, |\lambda-\mu|\bigr\rangle\,\varphi_t(x,t)\,dx\,dt. \tag{3.69}$$
This concludes the proof of the lemma.

Let $\{u^\epsilon\}$ and $\{v^\epsilon\}$ be the sequences associated with $\nu_{(x,t)}$ and $\sigma_{(x,t)}$, and assume that for t ≤ T the supports of $u^\epsilon(\cdot,t)$ and $v^\epsilon(\cdot,t)$ are contained in a finite interval I. Then both $u^\epsilon(\cdot,t)$ and $v^\epsilon(\cdot,t)$ are in $L^1(\mathbb{R})$ uniformly in ε. This means that both
$$\bigl\langle\nu_{(x,t)}, |\lambda|\bigr\rangle \quad\text{and}\quad \bigl\langle\sigma_{(x,t)}, |\lambda|\bigr\rangle$$
are in $L^1(\mathbb{R})$ for almost all t. Using this observation and the preceding lemma, Lemma 3.23, we can continue. Define
$$\phi_\epsilon(t) = \int_0^t\bigl(\omega_\epsilon(s-t_1) - \omega_\epsilon(s-t_2)\bigr)\,ds,$$
where t₂ > t₁ > 0 and $\omega_\epsilon$ is the usual mollifier. Also define
$$\psi_n(x) = \begin{cases} 1 & \text{for } |x|\le n,\\ 2\bigl(1-|x|/(2n)\bigr) & \text{for } n<|x|\le 2n,\\ 0 & \text{otherwise,}\end{cases}$$
and set $\psi_{\epsilon,n} = \psi_n * \omega_\epsilon(x)$. Hence $\varphi(x,t) = \phi_\epsilon(t)\,\psi_{\epsilon,n}(x)$ is an admissible test function. Furthermore, $|\psi_{\epsilon,n}'| \le 1/n$, and $\phi_\epsilon(t)$ tends to the characteristic function of the interval [t₁, t₂] as ε → 0. Therefore,
$$-\lim_{\epsilon\to 0}\int_{\mathbb{R}\times[0,T]}\Bigl(\bigl\langle\nu_{(x,t)}\otimes\sigma_{(x,t)}, |\lambda-\mu|\bigr\rangle\,\varphi_t + \bigl\langle\nu_{(x,t)}\otimes\sigma_{(x,t)}, q(\lambda,\mu)\bigr\rangle\,\varphi_x\Bigr)\,dx\,dt \le 0.$$
Set
$$A_n(t) = \int_{\mathbb{R}}\bigl\langle\nu_{(x,t)}\otimes\sigma_{(x,t)}, |\lambda-\mu|\bigr\rangle\,\psi_n(x)\,dx.$$
Using this definition we find that
$$A_n(t_2) - A_n(t_1) \le \int_{t_1}^{t_2}\!\!\int_{\mathbb{R}}\bigl\langle\nu_{(x,t)}\otimes\sigma_{(x,t)}, \bigl|q(\lambda,\mu)\bigr|\bigr\rangle\,\bigl|\psi_n'(x)\bigr|\,dx\,dt. \tag{3.70}$$
The right-hand side of this is bounded by
$$\|f\|_{\mathrm{Lip}}\,\frac{1}{n}\Bigl(\bigl\|\langle\nu_{(x,t)},|\lambda|\rangle\bigr\|_{L^1(\mathbb{R})} + \bigl\|\langle\sigma_{(x,t)},|\mu|\rangle\bigr\|_{L^1(\mathbb{R})}\Bigr) \to 0$$
as n → ∞. Since for almost all t the set of x where $\nu_{(x,t)}$ or $\sigma_{(x,t)}$ fails to be a probability measure has zero Lebesgue measure, we have for almost all t,
$$A_n(t) \le \int_{\mathbb{R}}\bigl\langle\nu_{(x,t)}\otimes\sigma_{(x,t)}, |\lambda-u_0(x)|+|\mu-u_0(x)|\bigr\rangle\,dx
= \int_{\mathbb{R}}\bigl\langle\nu_{(x,t)}, |\lambda-u_0(x)|\bigr\rangle\,dx + \int_{\mathbb{R}}\bigl\langle\sigma_{(x,t)}, |\mu-u_0(x)|\bigr\rangle\,dx.$$
Integrating (3.70) with respect to t₁ from 0 to T, then dividing by T and sending T to 0, using (3.62), and finally sending n → ∞, we find that
$$\int_{\mathbb{R}\times\mathbb{R}} |\lambda-\mu|\,d\nu_{(x,t)}\,d\sigma_{(x,t)} = 0 \quad\text{for } (x,t)\notin E, \tag{3.71}$$
where the Lebesgue measure of the (exceptional) set E is zero. Suppose now that for (x, t) ∉ E there is a $\bar\lambda$ in the support of $\nu_{(x,t)}$ and a $\bar\mu$ in the support of $\sigma_{(x,t)}$ with $\bar\lambda \ne \bar\mu$. Then we can find nonnegative functions g and h such that
$$0\le g\le 1, \qquad 0\le h\le 1,$$
and supp(g) ∩ supp(h) = ∅. Furthermore,
$$\bigl\langle\nu_{(x,t)}, g\bigr\rangle > 0 \quad\text{and}\quad \bigl\langle\sigma_{(x,t)}, h\bigr\rangle > 0.$$
Thus
$$0 < \int_{\mathbb{R}\times\mathbb{R}} g(\lambda)\,h(\mu)\,d\nu_{(x,t)}\,d\sigma_{(x,t)} \le \sup_{\lambda,\mu}\biggl|\frac{g(\lambda)h(\mu)}{\lambda-\mu}\biggr| \int_{\mathbb{R}\times\mathbb{R}} |\lambda-\mu|\,d\nu_{(x,t)}\,d\sigma_{(x,t)} = 0.$$
This contradiction shows that both $\nu_{(x,t)}$ and $\sigma_{(x,t)}$ are unit point measures with support at a common point. Precisely, we have proved the following theorem.

Theorem 3.24. Suppose that $\nu_{(x,t)}$ and $\sigma_{(x,t)}$ are measure-valued entropy solutions to the conservation law $u_t + f(u)_x = 0$, that both $\nu_{(x,t)}$ and $\sigma_{(x,t)}$ satisfy the initial condition (3.62), and that $\langle\nu_{(x,t)}, |\lambda|\rangle$ and $\langle\sigma_{(x,t)}, |\mu|\rangle$ are in $L^\infty([0,T]; L^1(\mathbb{R}))$. Then there exists a function $u\in L^\infty([0,T]; L^1(\mathbb{R}))\cap L^\infty(\mathbb{R}\times[0,T])$ such that
$$\nu_{(x,t)} = \sigma_{(x,t)} = \delta_{u(x,t)} \quad\text{for almost all } (x,t).$$
In order to avoid checking (3.62) directly, we can use the following lemma.

Lemma 3.25. Let $\nu_{(x,t)}$ be a probability measure, and assume that for all test functions ϕ(x) we have
$$\lim_{\tau\to 0^+}\frac{1}{\tau}\int_0^\tau\!\!\int \bigl\langle\nu_{(x,t)}, \mathrm{Id}\bigr\rangle\,\varphi(x)\,dx\,dt = \int u_0(x)\,\varphi(x)\,dx, \tag{3.72}$$
and that for all nonnegative ϕ(x) and for at least one strictly convex continuous function η,
$$\limsup_{\tau\to 0^+}\frac{1}{\tau}\int_0^\tau\!\!\int \bigl\langle\nu_{(x,t)}, \eta\bigr\rangle\,\varphi(x)\,dx\,dt \le \int \eta\bigl(u_0(x)\bigr)\,\varphi(x)\,dx. \tag{3.73}$$
Then (3.62) holds.
Proof. We shall prove
$$\lim_{\tau\to 0^+}\frac{1}{\tau}\int_0^\tau\!\!\int_{-A}^{A}\bigl\langle\nu_{(x,t)}, \bigl(\mathrm{Id}-u_0(x)\bigr)^+\bigr\rangle\,dx\,dt = 0, \tag{3.74}$$
from which the desired result will follow from (3.72) and the identity
$$\bigl|\lambda - u_0(x)\bigr| = 2\bigl(\lambda-u_0(x)\bigr)^+ - \bigl(\lambda-u_0(x)\bigr),$$
where $a^+ = \max\{a, 0\}$ denotes the positive part of a. To get started, we write $\eta_+'$ for the right-hand derivative of η. It exists by virtue of the convexity of η; moreover,
$$\eta(\lambda) \ge \eta(y) + \eta_+'(y)(\lambda-y)$$
for all λ. Whenever ε > 0, write
$$\zeta(y,\epsilon) = \frac{\eta(y+\epsilon)-\eta(y)}{\epsilon} - \eta_+'(y).$$
Since η is strictly convex, ζ(y, ε) > 0, and this quantity is an increasing function of ε. In particular, if λ > y + ε, then ζ(y, λ − y) > ζ(y, ε), or
$$\eta(\lambda) > \eta(y) + \eta_+'(y)(\lambda-y) + \zeta(y,\epsilon)(\lambda-y).$$
In every case, then,
$$\eta(\lambda) > \eta(y) + \eta_+'(y)(\lambda-y) + \zeta(y,\epsilon)\bigl((\lambda-y)^+ - \epsilon\bigr). \tag{3.75}$$
On the other hand, whenever y ≤ λ < y + ε, then ζ(y, λ − y) < ζ(y, ε), so
$$\eta(\lambda) < \eta(y) + \eta_+'(y)(\lambda-y) + \epsilon\,\zeta(y,\epsilon) \qquad (y\le\lambda<y+\epsilon). \tag{3.76}$$
Let us now assume that ϕ ≥ 0 is such that
$$\varphi(x) \ne 0 \implies y \le u_0(x) < y+\epsilon. \tag{3.77}$$
We use (3.75) on the left-hand side and (3.76) on the right-hand side of (3.73), and get
$$\begin{aligned}
\limsup_{\tau\to 0^+}\frac{1}{\tau}\int_0^\tau\!\!\int_{\mathbb{R}} \Bigl\langle\nu_{(x,t)},\; &\eta(y) + \eta_+'(y)(\mathrm{Id}-y) + \zeta(y,\epsilon)\bigl((\mathrm{Id}-y)^+-\epsilon\bigr)\Bigr\rangle\,\varphi(x)\,dx\,dt\\
&\le \int_{\mathbb{R}}\Bigl(\eta(y) + \eta_+'(y)\bigl(u_0(x)-y\bigr) + \epsilon\,\zeta(y,\epsilon)\Bigr)\,\varphi(x)\,dx.
\end{aligned}$$
Here, thanks to (3.72) and the fact that $\nu_{(x,t)}$ is a probability measure, all the terms not involving ζ(y, ε) cancel, and then we can divide by ζ(y, ε) ≠ 0 to arrive at
$$\limsup_{\tau\to 0^+}\frac{1}{\tau}\int_0^\tau\!\!\int_{\mathbb{R}}\bigl\langle\nu_{(x,t)}, (\mathrm{Id}-y)^+\bigr\rangle\,\varphi(x)\,dx\,dt \le 2\epsilon\int_{\mathbb{R}}\varphi(x)\,dx.$$
Now, remembering (3.77), we see that whenever ϕ(x) ≠ 0 we have $(\lambda-y)^+ \le (\lambda-u_0(x))^+ + \epsilon$, so the above implies
$$\limsup_{\tau\to 0^+}\frac{1}{\tau}\int_0^\tau\!\!\int_{\mathbb{R}}\bigl\langle\nu_{(x,t)}, \bigl(\mathrm{Id}-u_0(x)\bigr)^+\bigr\rangle\,\varphi(x)\,dx\,dt \le 3\epsilon\int_{\mathbb{R}}\varphi(x)\,dx$$
whenever (3.77) holds. It remains only to divide up the common support [−M, M] of all the measures $\nu_{(x,t)}$, writing $y_i = -M + i\epsilon$ for i = 0, 1, . . . , N − 1, where ε = 2M/N; let $\varphi_i$ be the characteristic function of $[-A,A]\cap u_0^{-1}\bigl([y_i, y_i+\epsilon)\bigr)$; and add together the above inequality for each i to arrive at
$$\limsup_{\tau\to 0^+}\frac{1}{\tau}\int_0^\tau\!\!\int_{-A}^{A}\bigl\langle\nu_{(x,t)}, \bigl(\mathrm{Id}-u_0(x)\bigr)^+\bigr\rangle\,dx\,dt \le 3\epsilon\int_{-A}^{A}dx.$$
Since ε can be made arbitrarily small, (3.74) follows, and the proof is complete.

Remark 3.26. We cannot conclude that³
$$\lim_{\tau\to 0^+}\frac{1}{\tau}\int_0^\tau\!\!\int_{\mathbb{R}}\bigl\langle\nu_{(x,t)}, |\mathrm{Id}-u_0(x)|\bigr\rangle\,dx\,dt = 0 \tag{3.78}$$
from the present assumptions. Here is an example to show this. Let $\nu_{(x,t)} = \mu_{\gamma(x,t)}$, where $\mu_\beta = \tfrac12(\delta_{-\beta} + \delta_\beta)$ and γ is a continuous, nonnegative function with γ(x, 0) = 0. Let u₀(x) = 0 and η(y) = y². Then (3.72) holds trivially, and (3.73) becomes
$$\limsup_{\tau\to 0^+}\frac{1}{\tau}\int_0^\tau\!\!\int_{\mathbb{R}} \gamma(x,t)^2\,\varphi(x)\,dx\,dt = 0,$$
which is also true due to the stated assumptions on γ. The desired conclusion (3.78), however, is now
$$\limsup_{\tau\to 0^+}\frac{1}{\tau}\int_0^\tau\!\!\int_{\mathbb{R}} \gamma(x,t)\,dx\,dt = 0.$$
But the simple choice
$$\gamma(x,t) = t\,e^{-(xt)^2}$$
yields
$$\limsup_{\tau\to 0^+}\frac{1}{\tau}\int_0^\tau\!\!\int_{\mathbb{R}} \gamma(x,t)\,dx\,dt = \sqrt{\pi}.$$
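The value $\sqrt{\pi}$ in the counterexample follows from the substitution y = xt, which gives $\int_{\mathbb{R}} t\,e^{-(xt)^2}\,dx = \int_{\mathbb{R}} e^{-y^2}\,dy = \sqrt{\pi}$ for every t > 0, so the time average cannot vanish as τ → 0. A quick numerical check (my own, not from the book):

```python
# Verify that the x-integral of gamma(x,t) = t*exp(-(x t)^2) equals sqrt(pi)
# for each fixed t > 0, independently of t.
import math

def gamma_integral(t, n=100000):
    L = 10.0 / t                     # the integrand is negligible for |x t| > 10
    h = 2.0 * L / n
    return sum(t * math.exp(-(((-L + (i + 0.5) * h)) * t) ** 2)
               for i in range(n)) * h

for t in (0.5, 1.0, 2.0):
    print(t, gamma_integral(t), math.sqrt(math.pi))
```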
³ Here the integral over the compact interval [−A, A] in (3.62) has been replaced by an integral over the entire real line.

We shall now describe a framework that allows one to prove convergence of a sequence of approximations without proving that the method is TV stable. Unfortunately, the application of this method to concrete examples,
while not very difficult, involves quite large calculations, and will be omitted here. Readers are encouraged to try their hands at it themselves. We give one application of these concepts.

The setting is as follows. Let $U^n$ be computed from a conservative and consistent scheme, and assume uniform boundedness of $U^n$. Young's theorem states that there exists a family of probability measures $\nu_{(x,t)}$ such that $g(U^n) \overset{*}{\rightharpoonup} \langle\nu_{(x,t)}, g\rangle$ for Lipschitz continuous functions g. We assume that the CFL condition $\lambda\,\sup_u|f'(u)| \le 1$ is satisfied. The next theorem states conditions, strictly weaker than TVD, under which we prove that the limit measure $\nu_{(x,t)}$ is a measure-valued solution of the scalar conservation law.

Theorem 3.27. Assume that the sequence $\{U^n\}$ is the result of a conservative, consistent method, and define $u_{\Delta t}$ as in (3.24). Assume that $u_{\Delta t}$ is uniformly bounded in $L^\infty(\mathbb{R}\times[0,T])$, T = NΔt. Let $\nu_{(x,t)}$ be the Young measure associated with $u_{\Delta t}$, and assume that $U_j^n$ satisfies the estimate
$$(\Delta x)^\beta \sum_{n=0}^{N}\sum_j \bigl|U_{j+1}^n - U_j^n\bigr|\,\Delta t \le C(T) \tag{3.79}$$
for some β ∈ [0, 1) and some constant C(T). Then $\nu_{(x,t)}$ is a measure-valued solution to (3.1).

Furthermore, let (η, q) be a strictly convex entropy pair, and let Q be a numerical entropy flux consistent with q. Write $\eta_j^n = \eta(U_j^n)$ and $Q_j^n = Q(U_j^n)$. Assume that
$$\frac{1}{\Delta t}\bigl(\eta_j^{n+1} - \eta_j^n\bigr) + \frac{1}{\Delta x}\bigl(Q_j^n - Q_{j-1}^n\bigr) \le R_j^n \tag{3.80}$$
for all n and j, where $R_j^n$ satisfies
$$\lim_{\Delta t\to 0}\sum_{n=0}^{N}\sum_j \varphi(j\Delta x, n\Delta t)\,R_j^n\,\Delta x\,\Delta t = 0 \tag{3.81}$$
for all nonnegative $\varphi\in C_0^1$. Then $\nu_{(x,t)}$ is a measure-valued solution compatible with (η, q), and the initial data are assumed in the sense of (3.72), (3.73). If (3.80) and (3.81) hold for all entropy pairs (η, q), then $\nu_{(x,t)}$ is a measure-valued entropy solution to (3.1).

Remark 3.28. For β = 0, (3.79) is the standard TV estimate, while for β > 0, (3.79) is genuinely weaker than a TV estimate.

Proof. We start by proving the first statement in the theorem, assuming (3.79). As before, we obtain (3.25) by rearranging. For simplicity,
we now write $F_j^n = F(U^n; j)$, $f_j^n = f(U_j^n)$, and observe that $F_j^n = f_j^n + (F_j^n - f_j^n)$, getting
$$\int\!\!\int\Bigl(u_{\Delta t}\,\Delta_t^{n,j}\varphi + f(u_{\Delta t})\,\Delta_x^{n,j}\varphi\Bigr)\,dx\,dt = \sum_{j,n}\Delta_x^{n,j}\varphi\,\bigl(F_j^n - f_j^n\bigr)\,\Delta t\,\Delta x. \tag{3.82}$$
Here we use the notation
$$u_{\Delta t} = U_j^n \quad\text{for } (x,t)\in[j\Delta x,(j+1)\Delta x)\times[n\Delta t,(n+1)\Delta t),$$
and
$$\Delta_t^{n,j}\varphi = \frac{1}{\Delta t}\bigl(\varphi(j\Delta x,(n+1)\Delta t) - \varphi(j\Delta x,n\Delta t)\bigr), \qquad
\Delta_x^{n,j}\varphi = \frac{1}{\Delta x}\bigl(\varphi((j+1)\Delta x,n\Delta t) - \varphi(j\Delta x,n\Delta t)\bigr).$$
The first term on the left-hand side in (3.82) reads
$$\int\!\!\int u_{\Delta t}\,\Delta_t^{n,j}\varphi\,dx\,dt
= \int\!\!\int \bigl\langle\nu_{(x,t)},\mathrm{Id}\bigr\rangle\,\varphi_t\,dx\,dt
+ \int\!\!\int \Bigl(u_{\Delta t} - \bigl\langle\nu_{(x,t)},\mathrm{Id}\bigr\rangle\Bigr)\varphi_t\,dx\,dt
+ \int\!\!\int u_{\Delta t}\bigl(\Delta_t^{n,j}\varphi - \varphi_t\bigr)\,dx\,dt. \tag{3.83}$$
The third term on the right-hand side of (3.83) clearly tends to zero as Δt goes to zero. Furthermore, by the definition of the Young measure $\nu_{(x,t)}$, the second term tends to zero as well. Thus the left-hand side of (3.83) approaches $\int\!\!\int\langle\nu_{(x,t)},\mathrm{Id}\rangle\,\varphi_t\,dx\,dt$. One can use a similar argument for the second term on the left-hand side of (3.82) to show that the (whole) left-hand side of (3.82) tends to
$$\int\!\!\int\Bigl(\bigl\langle\nu_{(x,t)},\mathrm{Id}\bigr\rangle\,\varphi_t + \bigl\langle\nu_{(x,t)},f\bigr\rangle\,\varphi_x\Bigr)\,dx\,dt \tag{3.84}$$
as Δt → 0. We now study the right-hand side of (3.82). Mimicking the proof of the Lax–Wendroff theorem, we have
$$\bigl|F_j^n - f_j^n\bigr| \le C\sum_{k=-p}^{q}\bigl|U_{j+k}^n - U_j^n\bigr|.$$
Therefore,
$$\biggl|\sum_{j,n}\Delta_x^{n,j}\varphi\,\bigl(F_j^n - f_j^n\bigr)\,\Delta t\,\Delta x\biggr|
\le C\,\|\varphi\|_{\mathrm{Lip}}\,(p+q+1)\sum_{n=0}^{N}\sum_j\bigl|U_{j+1}^n - U_j^n\bigr|\,\Delta t\,\Delta x
\le C\,\|\varphi\|_{\mathrm{Lip}}\,(p+q+1)\,(\Delta x)^{1-\beta}, \tag{3.85}$$
using the assumption (3.79). Thus the right-hand side of (3.85), and hence also of (3.82), tends to zero. Since the left-hand side of (3.82) tends to (3.84), we conclude that $\nu_{(x,t)}$ is a measure-valued solution. Using similar calculations and (3.81), one shows that $\nu_{(x,t)}$ is also a measure-valued entropy solution.
It remains to show consistency with the initial condition, i.e., (3.72) and (3.73). Let ϕ(x) be a test function, and write $\varphi_j = \varphi(j\Delta x)$. From the definition of $U_j^{n+1}$, after a summation by parts, we have that
$$\biggl|\sum_j \varphi_j\bigl(U_j^{n+1}-U_j^n\bigr)\,\Delta x\biggr| = \biggl|\Delta t\sum_j F_j^n\,\Delta_x^{n,j}\varphi_{j+1}\,\Delta x\biggr| \le \mathcal{O}(1)\,\Delta t,$$
since $U_j^n$ is bounded. Remembering that ϕ = ϕ(x), we get
$$\biggl|\sum_j \varphi_j\bigl(U_j^n - U_j^0\bigr)\,\Delta x\biggr| \le \mathcal{O}(1)\,n\,\Delta t. \tag{3.86}$$
Let $t_1 = n_1\Delta t$ and $t_2 = n_2\Delta t$. Then (3.86) yields
$$\biggl|\frac{1}{(n_2+1-n_1)\,\Delta t}\sum_{n=n_1}^{n_2}\sum_j \varphi_j\bigl(U_j^n - U_j^0\bigr)\,\Delta x\,\Delta t\biggr| \le \mathcal{O}(1)\,t_2,$$
which implies that the Young measure $\nu_{(x,t)}$ satisfies
$$\biggl|\frac{1}{t_2-t_1}\int_{t_1}^{t_2}\!\!\int \varphi(x)\,\bigl\langle\nu_{(x,t)},\mathrm{Id}\bigr\rangle\,dx\,dt - \int\varphi(x)\,u_0(x)\,dx\biggr| \le \mathcal{O}(1)\,t_2. \tag{3.87}$$
We let t₁ → 0 and set t₂ = τ in (3.87), obtaining
$$\biggl|\frac{1}{\tau}\int_0^\tau\!\!\int \varphi(x)\,\bigl\langle\nu_{(x,t)},\mathrm{Id}\bigr\rangle\,dx\,dt - \int\varphi(x)\,u_0(x)\,dx\biggr| \le \mathcal{O}(1)\,\tau, \tag{3.88}$$
which proves (3.72).

Now for (3.73). There exists a strictly convex entropy η for which (3.80) holds. Let ϕ(x) be a nonnegative test function. Using (3.80) and proceeding as before, we obtain
$$\biggl|\sum_j \varphi_j\bigl(\eta_j^n - \eta_j^0\bigr)\,\Delta x\biggr| \le \mathcal{O}(1)\,n\,\Delta t + \sum_{n}\sum_j R_j^n\,\varphi_j\,\Delta t\,\Delta x.$$
By using this estimate and the assumption (3.81) on $R_j^n$, we can use the same arguments as in proving (3.88) to prove (3.73). The proof of the theorem is complete.

A trivial application of this approach is found by considering monotone schemes. Here we have seen that (3.79) holds with β = 0, and (3.80) with $R_j^n = 0$. The theorem then gives the convergence of these schemes without using Helly's theorem. However, in this case the application does not give the existence of a solution, since we must have existence in order to use DiPerna's theorem. The main usefulness of the method is for schemes in several space dimensions, where TV bounds are more difficult to obtain.
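The weak-BV quantity in (3.79) is easy to monitor in practice. The following sketch (my own, not from the book) computes, for the monotone Lax–Friedrichs scheme applied to Burgers' equation, the β = 0 quantity $\sum_{n}\sum_j |U_{j+1}^n - U_j^n|\,\Delta t$, which stays bounded under grid refinement because the scheme is TVD on a fixed time horizon:

```python
# Weak-BV sum (3.79) with beta = 0 for Lax-Friedrichs on Burgers' equation,
# periodic step data.  TVD + fixed horizon T gives a bound roughly
# T.V.(u0) * T, independent of the grid.
import numpy as np

def weak_bv_sum(nx, T=0.5, cfl=0.8):
    x = np.linspace(-1.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    u = np.where(x < 0.0, 1.0, -1.0) * 0.8 + 0.1   # values 0.9 and -0.7
    dt = cfl * dx / np.max(np.abs(u))              # CFL: lambda*max|f'| <= 0.8
    f = lambda w: 0.5 * w * w
    total, t = 0.0, 0.0
    while t < T:
        total += np.sum(np.abs(np.roll(u, -1) - u)) * dt   # TV at time level n
        up, um = np.roll(u, -1), np.roll(u, 1)
        u = 0.5 * (up + um) - 0.5 * (dt / dx) * (f(up) - f(um))
        t += dt
    return total

for nx in (100, 200, 400):
    print(nx, weak_bv_sum(nx))
```

Each printed value is close to T.V.(u₀)·T regardless of nx, in contrast to the (Δx)^{−β}-growth that (3.79) would tolerate for β > 0.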
3.5 Notes

The Lax–Friedrichs scheme was introduced by Lax in 1954; see [94]. Godunov discussed what later became the Godunov scheme in 1959 as a method to study gas dynamics; see [60]. The Lax–Wendroff theorem, Theorem 3.4, was first proved in [97]. Theorem 3.8 was proved by Oleĭnik in her fundamental paper [110]; see also [130]. Several of the fundamental results concerning monotone schemes are due to Crandall and Majda [39], [38]. Theorem 3.10 is due to Harten, Hyman, and Lax; see [62]. The error analysis is based on the fundamental analysis by Kuznetsov [89], where one can also find a short discussion of the examples we have analyzed, namely the smoothing method, the method of vanishing viscosity, as well as monotone schemes. Our presentation of the a priori estimates follows the approach due to Cockburn and Gremaud; see [31] and [32], where applications to numerical methods are also given. The concept of measure-valued solutions is due to DiPerna, and the key result, Corollary 3.21, can be found in [45], while Lemma 3.25 is to be found in [44]. The proof of Lemma 3.25 and Remark 3.26 are due to H. Hanche-Olsen. Our presentation of the uniqueness of measure-valued solutions, Theorem 3.24, is taken mainly from Szepessy [134]. Theorem 3.27 is due to Coquel and LeFloch [35]; see also [36], where several extensions are discussed.
Exercises

3.1 Show that the Lax–Wendroff and the MacCormack methods are of second order.

3.2 The Engquist–Osher (or generalized upwind) method, see [46], is a conservative difference scheme with a numerical flux defined as follows:
$$F(U;j) = f^{EO}\bigl(U_j, U_{j+1}\bigr), \quad\text{where}\quad
f^{EO}(u,v) = \int_0^u \max\bigl(f'(s),0\bigr)\,ds + \int_0^v \min\bigl(f'(s),0\bigr)\,ds + f(0).$$

a. Show that this method is consistent and monotone.

b. Find the order of the scheme.

c. Show that the Engquist–Osher flux $f^{EO}$ can be written
$$f^{EO}(u,v) = \frac12\Bigl(f(u) + f(v) - \int_u^v |f'(s)|\,ds\Bigr).$$

d. If $f(u) = u^2/2$, show that the numerical flux can be written
$$f^{EO}(u,v) = \frac12\bigl(\max(u,0)^2 + \min(v,0)^2\bigr).$$
Generalize this simple expression to the case where $f''(u) > 0$ and $\lim_{|u|\to\infty}|f'(u)| = \infty$.

3.3 Why does the method
$$U_j^{n+1} = U_j^n - \frac{\Delta t}{2\Delta x}\Bigl(f\bigl(U_{j+1}^n\bigr) - f\bigl(U_{j-1}^n\bigr)\Bigr)$$
not give a viable difference scheme?
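The closed form in Exercise 3.2(d) can be checked against the defining integral. The following sketch (mine, not part of the exercise text) evaluates the Engquist–Osher flux for Burgers' flux $f(u) = u^2/2$ by quadrature and compares it with the closed form:

```python
# Engquist-Osher flux for Burgers' equation: defining integral vs. the
# closed form (1/2)(max(u,0)^2 + min(v,0)^2) from Exercise 3.2(d).
import math

def f_eo_quadrature(u, v, n=20000):
    # f^EO(u,v) = int_0^u max(f'(s),0) ds + int_0^v min(f'(s),0) ds + f(0),
    # with f'(s) = s for Burgers, so f(0) = 0.
    def integrate(a, b, g):
        h = (b - a) / n          # midpoint rule; handles b < a via negative h
        return sum(g(a + (i + 0.5) * h) for i in range(n)) * h
    return (integrate(0.0, u, lambda s: max(s, 0.0))
            + integrate(0.0, v, lambda s: min(s, 0.0)))

def f_eo_closed(u, v):
    return 0.5 * (max(u, 0.0) ** 2 + min(v, 0.0) ** 2)

for (u, v) in ((1.0, -2.0), (-1.0, 0.5), (2.0, 3.0)):
    print(u, v, f_eo_quadrature(u, v), f_eo_closed(u, v))
```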
3.4 We study a nonconservative method for Burgers' equation. Assume that $U_j^0 \in [0,1]$ for all j. Then the characteristic speed is nonnegative, and we define
$$U_j^{n+1} = U_j^n - \lambda\,U_j^{n+1}\bigl(U_j^n - U_{j-1}^n\bigr), \qquad n\ge 0, \tag{3.89}$$
where λ = Δt/Δx.

a. Show that this yields a monotone method, provided that a CFL condition holds.

b. Show that this method is consistent and determine the truncation error.

3.5 Assume that $f'(u) > 0$ and that $f''(u) \ge 2c > 0$ for all u in the range of u₀. We use the upwind method to generate approximate solutions to
$$u_t + f(u)_x = 0, \qquad u(x,0) = u_0(x); \tag{3.90}$$
i.e., we set
$$U_j^{n+1} = U_j^n - \lambda\Bigl(f\bigl(U_j^n\bigr) - f\bigl(U_{j-1}^n\bigr)\Bigr).$$
Set
$$V_j^n = \frac{U_j^n - U_{j-1}^n}{\Delta x}.$$

a. Show that
$$V_j^{n+1} = \bigl(1 - \lambda f'(U_{j-1}^n)\bigr)V_j^n + \lambda f'(U_{j-1}^n)\,V_{j-1}^n - \frac{\Delta t}{2}\Bigl(f''(\eta_{j-1/2})\bigl(V_j^n\bigr)^2 + f''(\eta_{j-3/2})\bigl(V_{j-1}^n\bigr)^2\Bigr),$$
where $\eta_{j-1/2}$ is between $U_j^n$ and $U_{j-1}^n$.

b. Next, assume inductively that
$$V_j^n \le \frac{1}{(n+2)\,c\,\Delta t} \quad\text{for all } j,$$
and set $\hat V^n = \max\bigl(\max_j V_j^n, 0\bigr)$. Then show that
$$\hat V^{n+1} \le \hat V^n - c\,\Delta t\,\bigl(\hat V^n\bigr)^2.$$
c. Use this to show that
$$\hat V^n \le \frac{\hat V^0}{1 + \hat V^0\,c\,n\,\Delta t}.$$

d. Show that this implies that
$$U_i^n - U_j^n \le \Delta x\,(i-j)\,\frac{\hat V^0}{1 + \hat V^0\,c\,n\,\Delta t} \quad\text{for } i\ge j.$$

e. Let u be the entropy solution of (3.90), and assume that $0 \le \max_x u_0'(x) = M < \infty$. Show that for almost every x, y, and t we have
$$\frac{u(x,t) - u(y,t)}{x-y} \le \frac{M}{1 + cMt}. \tag{3.91}$$
This is the Oleĭnik entropy condition for convex scalar conservation laws.

3.6 Assume that f is as in the previous exercise, and that u₀ is periodic with period p.

a. Use uniqueness of the entropy solution to (3.90) to show that the entropy solution u(x, t) is also periodic in x with period p.

b. Then use the Oleĭnik entropy condition (3.91) to deduce that
$$\sup_x u(x,t) - \inf_x u(x,t) \le \frac{Mp}{1 + cMt}.$$
Thus $\lim_{t\to\infty} u(x,t) = \bar u$ for some constant $\bar u$.

c. Use conservation to show that
$$\bar u = \frac{1}{p}\int_0^p u_0(x)\,dx.$$

3.7 Assume that g(x) is a continuously differentiable function with period 2π. Then the Fourier representation
$$g(x) = \frac{a_0}{2} + \sum_{k=1}^{\infty}\bigl(a_k\cos(kx) + b_k\sin(kx)\bigr)$$
holds pointwise, where
$$a_k = \frac{1}{\pi}\int_0^{2\pi} g(x)\cos(kx)\,dx \quad (k\ge 0) \qquad\text{and}\qquad b_k = \frac{1}{\pi}\int_0^{2\pi} g(x)\sin(kx)\,dx \quad (k\ge 1).$$

a. Use this to show that
$$g(nx) \overset{*}{\rightharpoonup} \frac{a_0}{2}.$$
3.5. Notes
b. Find a regular measure $\nu$ such that for any continuously differentiable $h$,
\[ h(\sin(nx)) \overset{*}{\rightharpoonup} \int h(\lambda)\,d\nu(\lambda). \]
Thus we have found an explicit form of the Young measure associated with the sequence $\{\sin(nx)\}$.

3.8 We shall consider a scalar conservation law with a "fractal" function as the initial data. Define the set of piecewise linear functions
\[ \mathcal D = \{ \varphi(x) = Ax + B \mid x \in [a,b],\ A, B \in \mathbb R \}, \]
and the map
\[ F(\varphi)(x) = \begin{cases} 2D(x-a) + \varphi(a) & \text{for } x \in [a,\,a + L/3], \\ -D(x-b) + \varphi(a) & \text{for } x \in [a + L/3,\,a + 2L/3], \\ 2D(x-b) + \varphi(b) & \text{for } x \in [a + 2L/3,\,b], \end{cases} \]
for $\varphi \in \mathcal D$, where $L = b - a$ and $D = (\varphi(b) - \varphi(a))/L$. For a nonnegative integer $k$ introduce $\chi_{j,k}$ as the characteristic function of the interval $I_{j,k} = [j/3^k, (j+1)/3^k]$, $j = 0, \dots, 3^{k+1}-1$. We define functions $\{v_k\}$ recursively as follows. Let
\[ v_0(x) = \begin{cases} 0 & \text{for } x \le 0, \\ x & \text{for } 0 \le x \le 1, \\ 1 & \text{for } 1 \le x \le 2, \\ 3 - x & \text{for } 2 \le x \le 3, \\ 0 & \text{for } 3 \le x. \end{cases} \]
Assume that $v_{j,k}$ is linear on $I_{j,k}$, and let
\[ v_k = \sum_{j=-3^k}^{3^{k+1}-1} v_{j,k}\,\chi_{j,k}, \tag{3.92} \]
and define the next function $v_{k+1}$ by
\[ v_{k+1} = \sum_{j=0}^{3^{k+1}-1} F(v_{j,k})\,\chi_{j,k} = \sum_{j=0}^{3^{k+2}-1} v_{j,k+1}\,\chi_{j,k+1}. \tag{3.93} \]
In the left part of Figure 3.3 we show the effect of the map $F$, and on the right we show $v_5(x)$ (which is piecewise linear on $3^6 = 729$ segments).
a. Show that the sequence $\{v_k\}_{k\ge 1}$ is a Cauchy sequence in the supremum norm, and hence we can define a continuous function $v$ by setting
\[ v(x) = \lim_{k\to\infty} v_k(x). \]
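Part (a) can also be explored numerically. The sketch below encodes one application of $F$ per linear piece as a refinement of nodal values; the slope $-D$ of the middle branch follows from continuity of $F$ as written above, so treat the refinement rule (and the resulting factor-$5/3$ growth of the total variation per level, which is the mechanism behind part (b)) as an illustrative reconstruction rather than the text's own computation.

```python
def refine(vals):
    """One application of the map F on each linear piece: nodal values on a
    grid of spacing h become nodal values on a grid of spacing h/3 (cf. (3.93))."""
    out = []
    for va, vb in zip(vals, vals[1:]):
        d = vb - va
        out.extend([va, va + 2 * d / 3, va + d / 3])  # slopes 2D, -D, 2D
    out.append(vals[-1])
    return out

def tv(vals):
    """Total variation of a piecewise linear function = variation of its nodes."""
    return sum(abs(b - a) for a, b in zip(vals, vals[1:]))

v = [0.0, 1.0, 1.0, 0.0]          # nodal values of v0 at x = 0, 1, 2, 3
for k in range(1, 6):
    v = refine(v)
    # each nonflat piece with increment d splits into 2d/3, -d/3, 2d/3,
    # so T.V.(v_k) = (5/3)**k * T.V.(v_0) -> unbounded as k grows (part b)
    assert abs(tv(v) - 2.0 * (5 / 3) ** k) < 1e-9
```

After five refinements, $v_5$ is piecewise linear on $3^6 = 729$ segments (730 nodes), consistent with the figure.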
Figure 3.3. Left: the construction of $F(\varphi)$ from $\varphi$. Right: $v_5(x)$.
b. Show that $v$ is not of bounded variation, and determine the total variation of $v_k$.
c. Show that $v(j/3^k) = v_k(j/3^k)$ for all integers $j = 0, \dots, 3^{k+1}$, $k \in \mathbb N$.
d. Assume that $f$ is a $C^1$ function on $[0,1]$ with $0 \le f'(u) \le 1$. We are interested in solving the conservation law
\[ u_t + f(u)_x = 0, \qquad u_0(x) = v(x). \]
To this end we shall use the upwind scheme defined by (3.8), with $\Delta t = \Delta x = 1/3^k$ and $U_j^0 = v(j\Delta x)$. Show that $u_{\Delta t}(x,t)$ converges to an entropy solution of the conservation law above.
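The upwind scheme that Exercises 3.5 and 3.8 build on is a one-line update. The following minimal sketch (the Burgers flux $f(u)=u^2/2$, the periodic grid, and all grid parameters are illustrative assumptions, not taken from the text) checks the two properties the exercises exploit: the maximum principle and the TVD property.

```python
def upwind_step(U, lam, f):
    """One upwind step U_j^{n+1} = U_j^n - lam*(f(U_j^n) - f(U_{j-1}^n)),
    valid when f' >= 0; lam = dt/dx must satisfy the CFL condition.
    Negative indexing makes the grid periodic."""
    return [U[j] - lam * (f(U[j]) - f(U[j - 1])) for j in range(len(U))]

def total_variation(U):
    """Periodic total variation, including the wrap-around jump."""
    return sum(abs(U[j] - U[j - 1]) for j in range(len(U)))

f = lambda u: 0.5 * u * u          # Burgers flux, f'(u) = u >= 0 on [0, 1]
U = [0.0] * 10 + [1.0] * 10        # Riemann-type data with values in [0, 1]
lam = 0.5                          # CFL: lam * max|f'| = 0.5 <= 1

tv0 = total_variation(U)           # equals 2.0 for this data
for _ in range(20):
    U = upwind_step(U, lam, f)

assert -1e-12 <= min(U) and max(U) <= 1.0 + 1e-12   # maximum principle
assert total_variation(U) <= tv0 + 1e-12            # TVD
assert abs(sum(U) - 10.0) < 1e-9                    # conservation (periodic)
```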
4 Multidimensional Scalar Conservation Laws
Just send me the theorems, then I shall find the proofs.1 Chrysippus told Cleanthes, 3rd century b.c.
Our analysis has so far been confined to scalar conservation laws in one dimension. Clearly, the multidimensional case is considerably more important. Luckily, the analysis in one dimension can be carried over to higher dimensions by essentially treating each dimension separately. This technique is called dimensional splitting. The final results are very much the natural generalizations one would expect. The same technique of splitting complicated differential equations into several simpler parts can in fact be used to handle other problems as well. Such methods are generally called operator splitting or fractional steps methods.
4.1 Dimensional Splitting Methods

We will in this section show how one can analyze scalar multidimensional conservation laws by dimensional splitting, which amounts to solving one

¹ Lucky guy! Paraphrased from Diogenes Laertius, Lives of Eminent Philosophers, c. A.D. 200.
H. Holden and N.H. Risebro, Front Tracking for Hyperbolic Conservation Laws, Applied Mathematical Sciences 152, DOI 10.1007/978-3-642-23911-3_4, © Springer-Verlag Berlin Heidelberg 2011
space direction at a time. To be more concrete, let us consider the two-dimensional conservation law
\[ u_t + f(u)_x + g(u)_y = 0, \qquad u(x,y,0) = u_0(x,y). \tag{4.1} \]
If we let $S_t^{f,x} u_0$ denote the solution of
\[ v_t + f(v)_x = 0, \qquad v(x,y,0) = u_0(x,y) \]
(where $y$ is a passive parameter), and similarly let $S_t^{g,y} u_0$ denote the solution of
\[ w_t + g(w)_y = 0, \qquad w(x,y,0) = u_0(x,y) \]
($x$ is a parameter), then the idea of dimensional splitting is to approximate the solution of (4.1) as follows:
\[ u(x,y,n\Delta t) \approx \bigl[ S_{\Delta t}^{g,y} \circ S_{\Delta t}^{f,x} \bigr]^n u_0. \tag{4.2} \]

♦ Example 4.1 (A single discontinuity).
We first show how this works on a concrete example. Let
\[ f(u) = g(u) = \tfrac12 u^2 \]
and
\[ u_0(x,y) = \begin{cases} u_l & \text{for } x < y, \\ u_r & \text{for } x \ge y, \end{cases} \]
with $u_r > u_l$. The solution in the $x$-direction for fixed $y$ gives a rarefaction wave, the left and right parts moving with speeds $u_l$ and $u_r$, respectively. With a quadratic flux, the rarefaction wave is a linear interpolation between the left and right states. Thus
\[ u^{1/2} := S_{\Delta t}^{f,x} u_0 = \begin{cases} u_l & \text{for } x < y + u_l\Delta t, \\ (x-y)/\Delta t & \text{for } y + u_l\Delta t < x < y + u_r\Delta t, \\ u_r & \text{for } x > y + u_r\Delta t. \end{cases} \]
The solution in the $y$-direction for fixed $x$ with initial state $u^{1/2}$ will exhibit a focusing of characteristics. The left state, which now equals $u_r$, will move with speed given by the derivative of the flux function, in this case $u_r$, and hence overtake the right state, given by $u_l$, which moves with the smaller speed $u_l$. The characteristics interact at a time $t$ given by $u_r t + x - u_r\Delta t = u_l t + x - u_l\Delta t$, or $t = \Delta t$. At that time we are back to the original Riemann problem between the states $u_l$ and $u_r$ at the point $x = y$. Thus
\[ u^1 := S_{\Delta t}^{g,y} u^{1/2} = u_0. \]
Another application of $S_{\Delta t}^{f,x}$ will of course give
\[ u^{3/2} := S_{\Delta t}^{f,x} u^1 = u^{1/2}. \]
So we have that $u^n = u_0$ for all $n \in \mathbb N$. Introducing the coordinates
\[ \xi = \frac{1}{\sqrt 2}(x + y), \qquad \eta = \frac{1}{\sqrt 2}(x - y), \]
the equation transforms into
\[ u_t + \frac{1}{\sqrt 2}\bigl( u^2 \bigr)_\xi = 0, \qquad u(\xi,\eta,0) = \begin{cases} u_l & \text{for } \eta \le 0, \\ u_r & \text{for } \eta > 0. \end{cases} \]
We see that $u(x,y,t) = u_0(x,y)$, and consequently $\lim_{\Delta t\to 0} u^n = u_0$ (where we keep $n\Delta t = t$ fixed). Thus the dimensional splitting procedure produces approximate solutions converging to the right solution in this case. ♦

We will state all results for the general case of arbitrary dimension, while proofs will be carried out in two dimensions only, to keep the notation simple. We first need to define precisely what is meant by a weak entropy solution of the initial value problem
\[ u_t + \sum_{j=1}^m f_j(u)_{x_j} = 0, \qquad u(x_1,\dots,x_m,0) = u_0(x_1,\dots,x_m). \tag{4.3} \]
Here we adopt the Kružkov entropy condition from Chapter 2, and say that $u$ is a (weak) Kružkov entropy solution of (4.3) on $[0,T]$ if $u$ is a bounded function that satisfies²
\[ \int_0^T \int_{\mathbb R^m} \Bigl( |u-k|\,\varphi_t + \operatorname{sign}(u-k) \sum_{j=1}^m \bigl( f_j(u) - f_j(k) \bigr)\varphi_{x_j} \Bigr)\,dx_1\cdots dx_m\,dt + \int_{\mathbb R^m} \Bigl( \varphi|_{t=0}\,|u_0-k| - \bigl(|u-k|\varphi\bigr)\big|_{t=T} \Bigr)\,dx_1\cdots dx_m \ge 0, \tag{4.4} \]
for all constants $k \in \mathbb R$ and all nonnegative test functions $\varphi \in C_0^\infty(\mathbb R^m \times [0,T])$. It certainly follows, as in the one-dimensional case, that $u$ is a weak solution, i.e.,
\[ \int_0^\infty \int_{\mathbb R^m} \Bigl( u\varphi_t + \sum_{j=1}^m f_j(u)\varphi_{x_j} \Bigr)\,dx_1\cdots dx_m\,dt + \int_{\mathbb R^m} \varphi|_{t=0}\,u_0\,dx_1\cdots dx_m = 0, \tag{4.5} \]
for all test functions $\varphi \in C_0^\infty(\mathbb R^m \times [0,\infty))$.

² If we want a solution for all time, we disregard the last term in (4.4) and integrate $t$ over $[0,\infty)$.
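Before the analysis, the splitting (4.2) is easy to sketch numerically. In the snippet below, the exact solution operators $S^{f,x}_{\Delta t}$ and $S^{g,y}_{\Delta t}$ are replaced by first-order upwind sweeps on a periodic grid — an assumption made purely for illustration (the text uses exact one-dimensional solvers) — with $f = g = u^2/2$ as in Example 4.1.

```python
# Dimensional splitting on a periodic N x N grid: alternate conservative
# upwind sweeps in x and in y (stand-ins for the exact solvers S^{f,x}, S^{g,y}).
N, lam = 16, 0.4                     # lam = dt/dx; CFL: lam * max|f'| <= 1
f = lambda u: 0.5 * u * u            # f = g = u^2/2, so f'(u) = u >= 0 here

u = [[1.0 if i < j else 0.0 for j in range(N)] for i in range(N)]

def sweep_x(u):
    """One upwind step in the x-direction, y a passive parameter."""
    return [[u[i][j] - lam * (f(u[i][j]) - f(u[i - 1][j])) for j in range(N)]
            for i in range(N)]

def sweep_y(u):
    """One upwind step in the y-direction, x a passive parameter."""
    return [[u[i][j] - lam * (f(u[i][j]) - f(u[i][j - 1])) for j in range(N)]
            for i in range(N)]

mass0 = sum(map(sum, u))
for _ in range(10):                  # u^{n+1} = S^{g,y} S^{f,x} u^n, cf. (4.2)
    u = sweep_y(sweep_x(u))

flat = [val for row in u for val in row]
assert -1e-12 <= min(flat) and max(flat) <= 1.0 + 1e-12   # maximum principle
assert abs(sum(flat) - mass0) < 1e-9                      # conservation
```

The two asserted properties are exactly the uniform bound (4.8) and conservation of mass, which survive the alternation of the two sweeps.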
Our analysis aims at two different goals. We first show that the dimensional splitting indeed produces a sequence of functions that converges to a solution of the multidimensional equation (4.3). Our discussion will here be based on the more or less standard argument using Kolmogorov’s compactness theorem. The argument is fairly short. To obtain stability in the multidimensional case in the sense of Theorem 2.13, we show that dimensional splitting preserves this stability. Furthermore, we show how one can use front tracking as our solution operator in one dimension in combination with dimensional splitting. Finally, we determine the appropriate convergence rate of this procedure. This analysis strongly uses Kuznetsov’s theory from Section 3.2, but matters are more complicated and technical than in one dimension. We shall now show that dimensional splitting produces a sequence that converges to the entropy solution u of (4.3); that is, the limit u should satisfy (4.4). As promised, our analysis will be carried out in the two-dimensional case only, i.e., for equation (4.1). Assume that u0 is a compactly supported function in L∞ (R2 ) ∩ BV (R2 ) (consult Appendix A for
a definition of $BV(\mathbb R^2)$). Let $t_n = n\Delta t$ and $t_{n+1/2} = (n + \frac12)\Delta t$. Define $u^0 = u_0$ and
\[ u^{n+1/2} = S^{f,x}_{\Delta t} u^n, \qquad u^{n+1} = S^{g,y}_{\Delta t} u^{n+1/2}, \tag{4.6} \]
for $n \in \mathbb N_0$. We shall also be needing an approximate solution for $t \ne t_n$. We want the approximation to be an exact solution of a one-dimensional conservation law in each interval $[t_j, t_{j+1/2}]$, $j = k/2$, $k \in \mathbb N_0$. The way to do this is to make "time go twice as fast" in each such interval; i.e., let $u_{\Delta t}$ be defined by³
\[ u_{\Delta t}(x,y,t) = \begin{cases} S^{f,x}_{2(t-t_n)} u^n & \text{for } t_n \le t \le t_{n+1/2}, \\ S^{g,y}_{2(t-t_{n+1/2})} u^{n+1/2} & \text{for } t_{n+1/2} \le t \le t_{n+1}. \end{cases} \tag{4.7} \]
We first show that the sequence $\{u_{\Delta t}\}$ is compact. Since both operators $S^{f,x}$ and $S^{g,y}$ take $L^\infty$ into itself, $u_{\Delta t}$ will be uniformly bounded, i.e.,
\[ \| u_{\Delta t} \|_\infty \le M \tag{4.8} \]
for some constant $M$ determined by the initial data and not depending on $\Delta t$. For the solution constructed from dimensional splitting we have
\[
\begin{aligned}
\mathrm{T.V.}_{x,y}\bigl( u^{n+1/2} \bigr) &= \int \mathrm{T.V.}_x\bigl( S^{f,x}_{\Delta t} u^n \bigr)\,dy + \int \mathrm{T.V.}_y\bigl( S^{f,x}_{\Delta t} u^n \bigr)\,dx \\
&\le \int \mathrm{T.V.}_x(u^n)\,dy + \int \lim_{h\to 0} \frac1h \int \bigl| u^{n+1/2}(x,y+h) - u^{n+1/2}(x,y) \bigr|\,dy\,dx \\
&= \int \mathrm{T.V.}_x(u^n)\,dy + \lim_{h\to 0} \frac1h \int\!\!\int \bigl| u^{n+1/2}(x,y+h) - u^{n+1/2}(x,y) \bigr|\,dx\,dy \\
&\le \int \mathrm{T.V.}_x(u^n)\,dy + \lim_{h\to 0} \frac1h \int\!\!\int \bigl| u^n(x,y+h) - u^n(x,y) \bigr|\,dx\,dy \\
&= \int \mathrm{T.V.}_x(u^n)\,dy + \lim_{h\to 0} \frac1h \int\!\!\int \bigl| u^n(x,y+h) - u^n(x,y) \bigr|\,dy\,dx \\
&= \mathrm{T.V.}_{x,y}(u^n),
\end{aligned} \tag{4.9}
\]
using first the TVD property of $S^{f,x}$ and Lemma A.1, and subsequently the $L^1$-stability in the $x$-direction. The interchange of integrals and limits is justified by Lebesgue's dominated convergence theorem. Similarly, $\mathrm{T.V.}_{x,y}(u^{n+1}) \le \mathrm{T.V.}_{x,y}(u^{n+1/2})$, and thus $\mathrm{T.V.}_{x,y}(u^n) \le \mathrm{T.V.}_{x,y}(u_0)$ follows by induction. This extends to
\[ \mathrm{T.V.}_{x,y}(u_{\Delta t}) \le \mathrm{T.V.}_{x,y}(u_0). \tag{4.10} \]
We now want to establish Lipschitz continuity in time of the $L^1$-norm, i.e.,
\[ \| u_{\Delta t}(t) - u_{\Delta t}(s) \|_1 \le C\,|t-s| \tag{4.11} \]
for some constant $C$. By repeated use of the triangle inequality it suffices to show that
\[
\begin{aligned}
\| u_{\Delta t}((n+1)\Delta t) - u_{\Delta t}(n\Delta t) \|_1 &\le \bigl\| u^{n+1} - u^{n+1/2} \bigr\|_1 + \bigl\| u^{n+1/2} - u^n \bigr\|_1 \\
&= \bigl\| S^{f,x}_{\Delta t} u^n - u^n \bigr\|_1 + \bigl\| S^{g,y}_{\Delta t} u^{n+1/2} - u^{n+1/2} \bigr\|_1 \\
&\le C\Delta t.
\end{aligned} \tag{4.12}
\]
Using Theorem 2.14 (vi), we conclude that the first term in (4.12) is bounded by $\|f\|_{\mathrm{Lip}}\Delta t\,\mathrm{T.V.}_{x,y}(u^n)$. For the second term we find, using in addition (4.9), the bound $\|g\|_{\mathrm{Lip}}\Delta t\,\mathrm{T.V.}_{x,y}(u^n)$. This shows (4.12). Using Theorem A.8, we conclude the existence of a convergent subsequence, also labeled $\{u_{\Delta t}\}$, and set $u = \lim_{\Delta t\to 0} u_{\Delta t}$. Let $\phi = \phi(x,y,t)$ be a nonnegative test function, and define $\varphi$ by $\varphi(x,y,\tau) = \phi(x,y,\tau/2 + t_n)$. By defining $\tau = 2(t - n\Delta t)$, we have that for each $y$, the function $u_{\Delta t}$ is a weak solution in $x$ on the strip $t \in [t_n, t_{n+1/2}]$, satisfying the inequality

³ We will keep the ratio $\lambda = \Delta t/\Delta x$ fixed, and thus we index with only $\Delta t$.
|uΔt − k| ϕτ + q f (uΔt , k)ϕx dx dτ 0 (4.13) ≥ un+1/2 − k ϕ(Δt) dx − |un − k| ϕ(0) dx,
122
4. Multidimensional Scalar Conservation Laws
for all constants k. Here q f (u, k) = sign (u − k) (f (u) − f (k)). Changing back to the t variable, we find that tn+1/2 1 f 2 |uΔt − k| φt + q (uΔt , k)φx dx dt 2 tn n+1/2 ≥ u − k φ(tn+1/2 ) dx − |un − k| φ(tn ) dx. (4.14) Similarly, tn+1 1 2 |uΔt − k| φt + q g (uΔt , k)φy dy dt 2 tn+1/2 n+1 ≥ u − k φ(tn+1 ) dy − un+1/2 − k φ(tn+1/2 ) dy.
(4.15)
Here q g is defined similarly to q f , using g instead of f . Integrating (4.14) over y and (4.15) over x and adding the two results and summing over n, we obtain T 1 2 |uΔt − k| φt + χn q f (uΔt , k)φx 2 0 n g + χ ˜n q (uΔt , k)φy dx dy dt n
≥
(|uΔt − k| φ)|t=T dx dy −
|u0 − k| φ(0) dx dy,
where χn and χ ˜n denote the characteristic functions of the strips tn ≤ t ≤ tn+1/2 and tn+1/2 ≤ t ≤ tn+1 , respectively. As Δt tends to zero, it follows that ∗ 1 ∗ 1 χn , χ ˜n . 2 2 n n Specifically, for continuous functions ψ of compact support we see that ∞ tn+1/2 χn ψ dt = ψ dt n
0
n
=
n
tn
ψ(t∗n )
Δt 2
1 = ψ(t∗n )Δt 2 n 1 → ψ dt as Δt → 0 2 (where t∗n is in [tn , tn+1/2 ]), by definition of the Riemann integral. The general case follows by approximation.
4.1. Dimensional Splitting Methods
123
Letting Δt → 0, we thus obtain T |u − k|φt + q f (u, k)φx + q g (u, k)φy dx dy dt 0 + |u0 − k| φ|t=0 dx dy ≥ (|u(T ) − k| φ)|t=T dx dy,
which proves that u(x, y, t) is a solution to (4.1) satisfying the Kruˇ ˇzkov entropy condition. Next, we want to prove uniqueness of solutions of multidimensional conservation laws. Let u and v be two Kruzkov ˇ entropy solutions of the conservation law ut + f (u)x + g(u)y = 0
(4.16)
with initial data u0 and v0 , respectively. The argument in Section 2.4 leads, with no fundamental changes in the multidimensional case, to the same result (2.59), namely,
u(t) − v(t) 1 ≤ u0 − v0 1 ,
(4.17)
thereby proving uniqueness. Observe that we do not need compact support of u0 and v0 , but only that u0 − v0 is integrable. Using the fact that if every subsequence of a sequence has a further subsequence converging to the same limit, the whole sequence converges to that (unique) limit, we find that the whole sequence {uΔt } converges, not just a subsequence. We have proved the following result. Theorem 4.2. Let fj be Lipschitz continuous functions, and furthermore, let u0 be a bounded function in BV (Rm ). Define the sequence of functions {un } by u0 = u0 and f ,x
j j n+(j−1)/m un+j/m = SΔt u ,
j = 1, . . . , m,
n ∈ N0 .
Introduce the function (where tr = rΔt for any rational number r) uΔt = uΔt (x1 , . . . , xm , t) by f ,x
j j uΔt(x1 , . . . , xm , t) = Sm(t−t un+(j−1)/m , n+(j−1)/m )
for t ∈ [tn+(j−1)/m , tn+j/m ]. Fix T > 0, then for any sequence {Δt} such that Δt → 0, for all t ∈ [0, T ] the function uΔt (t) converges to the unique weak solution u(t) of (4.3) satisfying the Kruˇ ˇzkov entropy condition (4.4). The limit is in C([0, T ]; L1loc(Rm )). To prove stability of the solution with respect to flux functions, we will show that the one-dimensional stability result (2.72) in Section 2.4 remains valid with obvious modifications in several dimensions. Let u and v denote the unique solutions of ut + f (u)x + g(u)y = 0,
u|t=0 = u0 ,
124
4. Multidimensional Scalar Conservation Laws
and vt + f˜(v)x + g˜(v)y = 0,
v|t=0 = v0 ,
respectively, that satisfy the Kruˇ ˇzkov entropy condition. Here we do not assume compact support of u0 and v0 , but only that u0 −v0 is integrable. We want to estimate the L1 -norm of the difference between the two solutions. To this end we first consider n+1/2 n+1/2 n+1/2 −v − v n+1/2 dx dy u = u 1 ≤ |un − v n | dx + Δt min{T.V.x (un ) , T.V.x (v n )} f − f˜ Lip dy = un − v n 1 + Δt f − f˜ Lip
min{T.V.x (un) , T.V.x (vn )} dy.
Next we employ the trivial, but useful, inequality a ∧ b + c ∧ d ≤ (a + c) ∧ (b + d),
a, b, c, d ∈ R.
Thus
n+1 n+1 n+1 n+1 u u −v = − v dx dy 1 n+1/2 ≤ − vn+1/2 dy u + Δt min T.V.y un+1/2 , T.V.y v n+1/2 g − g˜ Lip dx ≤ un+1/2 − v n+1/2 1 + Δt g − g˜ Lip min T.V.y un+1/2 , T.V.y v n+1/2 dx ≤ un − v n 1 + Δt max f − f˜ Lip , g − g˜ Lip × min T.V.x (un ) dy, T.V.x (vn ) dy n n + min T.V.y (u ) dx, T.V.y (v ) dx ≤ un − v n 1 + Δt max{ f − f˜ Lip , g − g˜ Lip } ) * T.V.x (un ) dy + T.V.y (un ) dx, × min T.V.x (vn ) dy + T.V.y (v n ) dx
4.1. Dimensional Splitting Methods
125
= un − v n 1
+ Δt max{ f − f˜ Lip , g − g˜ Lip } min T.V. (un ) , T.V. (v n ) ,
which implies
un − vn 1 ≤ u0 − v0 1 + nΔt max{ f − f˜ Lip , g − g˜ Lip } min{T.V. (u0 ) , T.V. (v0 )}.
(4.18)
Consider next t ∈ [tn , tn+1/2 . Then the continuous interpolants defined by (4.7) satisfy f,x f˜,x n n
uΔt (t) − vΔt (t) L1 (R2 ) = S2(t−t u − S v 1 2 2(t−tn ) n) L (R ) ≤ |un − v n | dx + 2(t − tn ) min{T.V.x (un ) , T.V.x (v n )} f − f˜ Lip dy = un − vn L1 (R2 )
(4.19)
+ 2(t − tn ) f − f˜ Lip
min{T.V.x (un ) , T.V.x (v n )} dy
≤ u0 − v0 1 + tn max{ f − f˜ Lip , g − g˜ Lip } min{T.V. (u0 ) , T.V. (v0 )} + 2(t − tn ) min{T.V. (u0 ) , T.V. (v0 )} max{ f − f˜ Lip , g − g˜ Lip } ≤ u0 − v0 1 + (t + Δt) min{T.V. (u0 ) , T.V. (v0 )} max{ f − f˜ Lip , g − g˜ Lip }. Observe that the above argument also holds mutatis mutandis in the general case of a scalar conservation law in any dimension. We summarize our results in the following theorem. Theorem 4.3. Let u0 be in L1 (Rm ) ∩ L∞ (Rm ) ∩ BV (Rm ), and let fj be Lipschitz continuous functions for j = 1, . . . , m. Then there exists a unique solution u(x1 , . . . , xm , t) of the initial value problem ut +
m
fj (u)xj = 0,
u(x1 , . . . , xm , 0) = u0 (x1 , . . . , xm ),
(4.20)
j=1
that satisfies the Kruˇ ˇzkov entropy condition (4.4). The solution satisfies T.V. (u(t)) ≤ T.V. (u0 ) .
(4.21)
Furthermore, if v0 and g share the same properties as u0 and f , respectively, then the unique weak Kruzkov ˇ entropy solution of vt +
m j=1
gj (v)xj = 0,
v(x1 , . . . , xm , 0) = v0 (x1 , . . . , xm ),
(4.22)
126
4. Multidimensional Scalar Conservation Laws
satisfies
u( · , t) − v( · , t) 1 ≤ u0 − v0 1 (4.23) + t min{T.V. (u0 ) , T.V. (v0 )} max{ ffj − gj Lip }. j
Proof. It remains to consider the case where u0 no longer is assumed to have compact support. Observe that we only used this assumption to prove (4.11). In particular, the estimate (4.21) carries over with no changes.
4.2 Dimensional Splitting and Front Tracking It doesn’t matter if the cat is black or white. As long as it catches rats, it’s a good cat. Deng Xiaoping (1904–97)
In this section we will study the case where we use front tracking to solve the one-dimensional conservation laws. More precisely, we replace the flux functions f and g (in the two-dimensional case) by piecewise linear continuous interpolations fδ and gδ , with the interpolation points spaced a distance δ apart. The aim is to determine the convergence rate towards the solution of the full two-dimensional conservation law as δ → 0 and Δt → 0. With the front-tracking approximation the one-dimensional solutions will be piecewise constant if the initial condition is piecewise constant. In order to prevent the number of discontinuities from growing without bound, we will project the one-dimensional solution S fδ ,x u onto a fixed grid in the x, y plane before applying the operator S gδ ,y . To be more concrete, let the grid spacing in the x and y directions be given by Δx and Δy, respectively, and let Iij denote the grid cell Iij = {(x, y) | iΔx ≤ x < (i + 1)Δx, jΔy ≤ y < (j + 1)Δy}. The projection operator π is defined by πu(x, y) =
1 ΔxΔy
u dx dy for (x, y) ∈ Iij . Iij
Let the approximate solution at the discrete times t be defined as fδ ,x n gδ ,y n+1/2 un+1/2 = π ◦ SΔt u and un+1 = π ◦ SΔt u ,
4.2. Dimensional Splitting and Front Tracking
127
for n = 0, 1, 2, . . . , with u0 = πu0 . We collect the discretization parameters in η = (δ, Δx, Δy, Δt). In analogy to (4.7), we define uη as ⎧ fδ ,x n ⎪ for tn ≤ t < tn+1/2 , ⎪S2(t−tn ) u ⎪ ⎪ ⎨un+1/2 for t = tn+1/2 uη (t) = (4.24) gδ ,y n+1/2 ⎪ S u for tn+1/2 ≤ t < tn+1 , ⎪ 2(t−tn+1/2 ) ⎪ ⎪ n+1 ⎩ u for t = tn+1 . In Figure 4.1 we illustrate how this works. Starting in the upper left corner, fδ ,x the operator SΔt takes us to the upper right corner; then we apply π gδ ,y and move to the lower right corner. Next, SΔt takes us to the lower left corner, and finally π takes us back to the upper left corner, this time with n incremented by 1. y
y fδ ,x SΔt
un (0)
un (Δt)
x π
x π
n→n + 1
y
y gδ ,y SΔt
un+1/2(Δt)
un+1/2 (0)
x
x
Figure 4.1. Front tracking and dimensional splitting on a 3 × 3 grid.
To prove that uη converges to the unique solution u as η → 0, we essentially mimic the approach we just used to prove Theorem 4.2. First of all we observe that
uη (t) ∞ ≤ u0 ∞ , (4.25) since S fδ ,x , S gδ ,y , and π all obey a maximum principle. On each rectangle Iij the function uη is constant for t = Δt. In a desperate attempt to simplify
128
4. Multidimensional Scalar Conservation Laws
the notation we write unij = uη (x, y, nΔt) for (x, y) ∈ Iij . Next we go carefully through one full-time step in this construction, starting with unij . At each step we define a shorthand notation that we will use in the estimates. When we consider unij as a function of x only, we write unj (0) = unij = uη ( · , jΔy, nΔt). (The argument “0” on the left-hand side indicates the start of the time fδ ,x variable before we advance time an interval Δt using SΔt .) Advancing the solution in time by Δt by applying front tracking in the x-variable produces fδ ,x n unj (Δt) = SΔt uj (x). (The x-dependence is suppressed in the notation on the left-hand side.) We now apply the projection π, which yields n+1/2
uij
= πunj (Δt).
After this sweep in the x-variable, it is time to do the y-direction. n+1/2 Considering uij as a function of y we write
1 n+1/2 n+1/2 ui (0) = uij = uη iΔx, · , n + Δt , 2 to which we apply the front-tracking solution operator in the y-direction n+1/2 gδ ,y n+1/2 ui (Δt) = SΔt ui (y). (The y-dependence is suppressed in the notation on the left-hand side.) One full time step is completed by a final projection n+1/2
un+1 = πui ij
(Δt).
Using this notation we first want to prove the analogue of Lipschitz continuity in time of the spatial L1 -norm as expressed in (4.11). In this context the result reads um − un ΔxΔy
uη (tm ) − uη (tn ) 1 = ij ij i,j
≤ max{ ffδ Lip , gδ Lip }Δt + (Δx + Δy)
× T.V. u0 |m − n| . (4.26) To prove (4.26) it suffices to show that un+1 − unij ΔxΔy ij i,j
≤ max{ ffδ Lip , gδ Lip }Δt + (Δx + Δy) T.V. u0 .
(4.27)
4.2. Dimensional Splitting and Front Tracking
129
We start by writing n+1 n+1 n+1/2 n+1/2 n n u − u ≤ u − u (Δt) + u − u (Δt) ij j ij ij i ij n+1/2 n+1/2 + ui (Δt) − ui (0) + unj (Δt) − unj (0) n+1/2 n+1/2 = πui (Δt) − ui (Δt) + πunj (Δt) − unj (Δt) n+1/2 n+1/2 + ui (Δt) − ui (0) + unj (Δt) − unj (0) . Integrating this inequality over un+1 − unij ΔxΔy ≤ ij i,j
R2 gives n+1/2 n+1/2 (Δt) − ui (Δt) dx dy πui
n πu (Δt) − un (Δt) dx dy j j n+1/2 n+1/2 + (Δt) − ui (0) dx dy ui n u (Δt) − un(0) dx dy. + j j +
(4.28)
We see that two terms involve the projection operator π. For these terms we prove the estimate |πψ − ψ| dx dy ≤ (Δx + Δy) T.V. (ψ) = O (max{Δx, Δy}) . (4.29) We will prove (4.29) in the one-dimensional case only. Consider (where Ii = iΔx, (i + 1)Δx) |πψ − ψ| dx = |πψ(x) − ψ(x)| dx i
Ii
1 = ψ(y) dy − ψ(x) dx Δx Ii Ii i 1 = (ψ(y) − ψ(x)) dy dx Δx i Ii Ii 1 ≤ |ψ(y) − ψ(x)| dy dx Δx i Ii Ii 1 = |ψ(x + ξ) − ψ(x)| dξ dx Δx i Ii −x+Ii Δx 1 ≤ |ψ(x + ξ) − ψ(x)| dξ dx Δx i Ii −Δx Δx 1 = |ψ(x + ξ) − ψ(x)| dx dξ Δx −Δx R
130
4. Multidimensional Scalar Conservation Laws
≤
1 Δx
Δx
|ξ| T.V. (ψ) dξ
−Δx
= Δx T.V. (ψ) .
(4.30)
For the two remaining terms in (4.28) we find, using the Lipschitz continuity in time in the L1 norm in the x-variable (see Theorem 2.14) that n
u (Δt) − un (0) dx dy ≤ Δt ffδ Lip T.V.x un (0) dy j j j ≤ Δt ffδ Lip T.V. (un ) .
(4.31)
Combining this result with (4.29) we conclude that (4.27), and hence also (4.26), holds. Finally, we want to show that the total variation is bounded in the sense that T.V. (un ) ≤ T.V. (u0 ) .
(4.32)
We will show that
T.V. un+1/2 ≤ T.V. (un ) ; (4.33)
n+1
n+1/2 an analogous argument gives T.V. u ≤ T.V. u , from which we conclude that
T.V. un+1 ≤ T.V. (un ) , and (4.32) follows by induction. By definition n+1/2 n+1/2 n+1/2 n+1/2 T.V. un+1/2 = ui+1,j − ui,j Δy + ui,j+1 − ui,j Δx , i,j
(4.34) while T.V. (un ) =
uni+1,j − uni,j Δy + uni,j+1 − uni,j Δx .
(4.35)
i,j
We first consider n+1/2
n+1/2 ui+1,j − ui,j = T.V.x πunj (Δt) i
≤ T.V.x unj (Δt) ≤ T.V.x unj (0) uni+1,j − uni,j , =
(4.36)
i
where we first used that T.V. (πφ) ≤ T.V. (φ) for step functions φ, and then that T.V. (v) ≤ T.V. (v0 ) for solutions v of one-dimensional conservation laws with initial data v0 . For the second term in the definition of T.V. un+1/2 we obtain (cf. (4.9)) n+1/2 n+1/2 n+1/2 n+1/2 ui,j+1 − ui,j ΔxΔy = ui,j+1 − ui,j dx dy i,j
i,j
Iij
4.2. Dimensional Splitting and Front Tracking
=
i,j
≤ =
=
j
(i+1)Δx
Δy
iΔx
Δy
j
≤
n u (Δt) − un (Δt) dx dy j+1 j
Iij
i,j
=
π unj+1 (Δt) − unj (Δt) dx dy
Iij
i,j
n π uj+1 (Δt) − unj (Δt) dx dy
Iij
i,j
131
R
Δy R
n u (Δt) − un (Δt) dx j+1 j
n u (x, Δt) − un (x, Δt) dx j+1 j n uj+1 (x, 0) − unj (x, 0) dx
uni,j+1 − uni,j ΔxΔy. =
(4.37)
i,j
The first inequality follows from |πφ| ≤ π |φ|; thereafter, we use Iij πφ = φ, and finally we use the L1 -contractivity, v − w 1 ≤ v0 − w0 1 , of Iij solutions of one-dimensional conservation laws. Multiplying (4.36) by Δy, summing over j, dividing (4.37) by Δx, and finally adding the results, gives (4.33). So far we have obtained the following estimates: (i) Uniform boundedness,
uη (t) ∞ ≤ u0 ∞ .
(ii) Uniform bound on the total variation, T.V. (un ) ≤ T.V. (u0 ) . (iii) Lipschitz continuity in time,
uη (tm ) − uη (tn ) 1 (4.38) 1 ≤ C +O max{Δx, Δy} T.V. (u0 ) |tm − tn | . Δt From Theorem A.8 we conclude that the sequence {uη } has a convergent subsequence as η → 0, provided that the ratio max{Δx, Δy}/Δt remains bounded. We let u denote its limit. Furthermore, this sequence converges in C([0, T ]; L1loc(R2 )) for any positive T . It remains to prove that the limit is indeed an entropy solution of the full two-dimensional conservation law. We first use that unj (x, t) (suppressing the y-dependence) is a solution of the one-dimensional conservation law in
132
4. Multidimensional Scalar Conservation Laws
the time interval [tn , tn+1/2 ]. Hence we know that R
tn+1/2 tn
1 n uj (x, t) − k φt + q fδ (unj (x, t), k)φx dt dx 2 n 1 uj (x, tn+1/2 −) − k φ(x, tn+1/2 ) dx − 2 R n 1 u (x, tn +) − k φ(x, tn ) dx ≥ 0. + 2 R j
Similarly, we obtain for the y-direction 1 n+1/2 n+1/2 gδ u (y, t) − k φ + q (u (y, t), k)φ dt dy t y i 2 i tn+1/2 1 n+1/2 − (y, tn+1 −) − k φ(y, tn+1 ) dy ui 2 R 1 n+1/2 + (y, tn+1/2 +) − k φ(y, tn+1/2 ) dy ≥ 0. ui 2 R
R
tn+1
Integrating the first inequality over y and the second over x and adding the results as well as adding over n gives, where T = N Δt,
R2
T 0
1 |uη − k| φt + χn q fδ (uη , k)φx 2 n + χ ˜n q gδ (uη , k)φy dx dy dt 1 − 2
≥
1 2
R2
2N −1 n=1
R2
n
|uη (x, y, T ) − k| φ(x, y, T ) dx dy − |uη (x, y, 0) − k| φ(x, y, 0) dx dy R2
uη (x, y, tn/2 +) − k − uη (x, y, tn/2 −) − k × φ(x, y, tn/2 ) dx dy
=: I, and χn and χ ˜n as before denote the characteristic functions on {(x, y, t) | t ∈ [tn , tn+1/2 ]} and {(x, y, t) | t ∈ [tn+1/2 , tn+1 ]}, respectively. Observe that we have obtained the right-hand side, denoted by I, by using a projection at each time step. As n → ∞ and Δt → 0 while keeping T fixed,
4.2. Dimensional Splitting and Front Tracking
133
/ ∗ we have that n χn 12 . To estimate the term I we write uη (x, y, tn/2 +) − uη (x, y, tn/2 −) φ(x, y, tn/2 ) dx dy |I| ≤ n
≤ φ ∞
πuη (x, y, tn/2 −) − uη (x, y, tn/2 −) dx dy n
≤ O (max{Δx, Δy}/Δt) , using (4.29). In order to conclude that u is an entropy solution, we need that I → 0; that is, we need to assume that max{Δx, Δy}/Δt → 0 as η → 0. Under this assumption T
|u − k| φt + q f (u, k)φx + q g (u, k)φy dt dx dy R2 0 − |u(x, y, T ) − k| φ(x, y, T ) dx dy 2 R + |u(x, y, 0) − k| φ(x, y, 0) dx dy ≥ 0, R2
which shows that u indeed satisfies the Kruˇ ˇzkov entropy condition. We summarize the result. Theorem 4.4. Let u0 be a compactly supported function in L∞(Rm ) ∩ BV (Rm ), and let fj be Lipschitz continuous functions for j = 1, . . . , m. Construct an approximate solution uη using front tracking by defining u0 = πu0 , and uη (x, t) =
f
j,δ un+j/m = π ◦ SΔt
,xj n+(j−1)/m
u
f ,x j j,δ Sm(t−t un+(j−1)/m , n+(j−1)/m )
,
j = 1, . . . , m,
n ∈ N,
for t ∈ [tn+(j−1)/m , tn+j/m ,
un+j/m
for t = tn+j/m ,
where x = (x1 , . . . , xm ). For any sequence {η}, with η = (Δx1 , . . . , Δxm , Δt, δ) where η → 0 and max {Δxj } /Δt → 0, j
we have that {uη } converges to the unique solution u = u(x, t) of the initial value problem ut +
m
fj (u)xj = 0,
u(x, 0) = u0 (x),
j=1
that satisfies the Kruˇ ˇzkov entropy condition.
(4.39)
134
4. Multidimensional Scalar Conservation Laws
4.3 Convergence Rates Now I think I’m wrong on account of those damn partial integrations. I oscillate between right and wrong. Letter from Feynman to Walton (1936 )
In this section we show how fast front tracking plus dimensional splitting converges to the exact solution. The analysis is based on Kuznetsov’s lemma. We start by generalizing Kuznetsov’s lemma, Theorem 3.11, to the present multidimensional setting. Although the argument carries over, we will present the relevant definitions in arbitrary dimension. Let the class K consist of maps u : [0, ∞ → L1 (Rm )∩BV (Rm )∩L∞ (Rm ) such that: (i) The limits u(t±) exist. (ii) The function u is right continuous, i.e., u(t+) = u(t). (iii) u(t) ∞ ≤ u(0) ∞ . (iv) T.V. (u(t)) ≤ T.V. (u(0)). Recall the following definition of moduli of continuity in time (cf. (3.2)): νt (u, σ) = sup u(t + τ ) − u(t) 1 ,
σ > 0,
|τ |≤σ
ν(u, σ) = sup νt (u, σ). 0≤t≤T
The estimate (3.39) is replaced by ν(u, σ) ≤ |σ| T.V. (u0 ) max{ ffj Lip }, j
for a solution u of (4.20). In several space dimensions, the Kruˇ ˇzkov form reads
ΛT (u, φ, k) = |u − k| φt + q fj (u, k)φxj dx1 · · · dxm Rm ×[0,T ]
j
−
m
R
+ Rm
|u(x, T ) − k| φ(x, T ) dx1 . . . dxm dt |u0 (x) − k| φ(x, 0) dx1 · · · dxm . (4.40)
In this case we use the test function Ω(x, x , s, s ) = ωε0 (s − s )ωε (x1 − x1 ) · · · ωε (xm − xm ), x = (x1 , . . . , xm ), x = (x1 , . . . , xm ).
(4.41)
4.3. Convergence Rates
Here ωε is the standard mollifier defined by 1 xj ωε (xj ) = ω ε ε with 0 ≤ ω ≤ 1,
supp ω ⊆ [−1, 1],
ω(−xj ) = ω(xj ),
135
1
ω(z) dz = 1. −1
When v is the unique solution of the conservation law (4.22), we introduce T Λε,ε0 (u, v) = ΛT (u, Ω( · , x , · , s ), v(x , s )) dx ds . 0
Rm
Kuznetsov’s lemma can be formulated f as follows. Theorem 4.5. Let u be a function in K, and v be an entropy solution of (4.22). If 0 < ε0 < T and ε > 0, then
u( · , T −) − v( · , T ) 1 ≤ u0 − v0 1
+ T.V. (v0 ) 2ε + ε0 max{ ffj Lip } j
+ ν(u, ε0 ) − Λε,ε0 (u, v),
(4.42)
where u0 = u( · , 0) and v0 = v( · , 0). The proof of Theorem 3.11 carries over to this setting verbatim. We want to estimate
S(T )u0 − uη 1 ≤ S(T )u0 − Sδ (T )u0 1 + Sδ (T )u0 − uη 1 ,
(4.43)
where u = S(T )u0 and Sδ (T )u0 denote the exact solutions of the multidimensional conservation law with flux functions f replaced by their piecewise linear and continuous approximations fδ . The first term can be estimated by
S(T )u0 − Sδ (T )u0 1 ≤ T max { ffj − fj,δ Lip } T.V. (u0 ) , j
(4.44)
while we apply Kuznetsov’s lemma, Theorem 4.5, for the second term. For the function u we choose uη , the approximate solution by using front tracking along each dimension and dimensional splitting, while for v we use the exact solution with piecewise linear continuous flux functions fδ and gδ and u0 as initial data, that is, v = vδ = Sδ (T )u0 . Thus we find, using (4.38), that 1 ν(uη , ε0 ) ≤ ε0 C + O max {Δxj } T.V. (u0 ) . Δt j Kuznetsov’s lemma then reads
Sδ (T )u0 − uη 1 ≤ u0 − u0 1 + 2ε + max { ffj,δ Lip } ε0 j
136
4. Multidimensional Scalar Conservation Laws
+ ε0
max {Δx } j C +O T.V. (u0 ) Δt
− Λε,ε0 (uη , vδ ),
(4.45)
and the name of the game is to estimate Λε,ε0 . To make the estimates more transparent, we start by rewriting ΛT (uη , φ, k). Since all the complications of several space dimensions are present in two dimensions, we present the argument in two dimensions only, that is, with m = 2, and denote the spatial variables by (x, y). All arguments carry over to arbitrary dimensions without any change. By definition we have (in obvious notation, q fδ (u) = sign (u − k) (ffδ (u) − fδ (k)) and similarly for q gδ ) T
ΛT (uη , φ, k) = |uη − k| φt + q fδ (uη , k)φx + q gδ (uη , k)φy dt dx dy 0 + |uη − k| φ|t=0+ dx dy − |uη − k| φ|t=T − dx dy =
N −1
tn+1/2
tn+1
+ tn
n=0
|uη − k| φt
tn+1/2
+ q fδ (uη , k)φx + q gδ (uη , k)φy dt dx dy + |uη − k| φ|t=0+ dx dy − |uη − k| φ|t=T − dx dy =
N −1
n
+
N −1 n=0
+
N −1 n=0
+
|uη − k| φt + 2q fδ (uη , k)φx dt dx dy
tn
n=0
+
tn+1/2
tn+1
(|uη − k| φt + 2q gδ (uη , k)φy ) dt dx dy
tn+1/2
tn+1
−
tn+1/2
tn+1/2
tn
tn+1/2
q fδ (uη , k)φx dt dx dy
tn
−
q gδ (uη , k)φy dt dx dy
tn+1
tn+1/2
|uη − k| φ|t=0+ dx dy −
|uη − k| φ|t=T − dx dy.
We now use that $u_\eta$ is an exact solution in the $x$-direction and the $y$-direction on each strip $[t_n,t_{n+1/2}]$ and $[t_{n+1/2},t_{n+1}]$, respectively. Thus we can invoke inequalities (4.14) and (4.15), and we conclude that
$$\begin{aligned}
\Lambda_T(u_\eta,\phi,k) \ge{}& \sum_{n=0}^{N-1}\iint \big(|u_\eta-k|\,\big|_{t=t_{n+1/2}-}\,\phi(t_{n+1/2}) - |u_\eta-k|\,\big|_{t=t_n+}\,\phi(t_n)\big)\,dx\,dy\\
&+ \sum_{n=0}^{N-1}\iint \big(|u_\eta-k|\,\big|_{t=t_{n+1}-}\,\phi(t_{n+1}) - |u_\eta-k|\,\big|_{t=t_{n+1/2}+}\,\phi(t_{n+1/2})\big)\,dx\,dy\\
&+ \sum_{n=0}^{N-1}\Big(\int_{t_{n+1/2}}^{t_{n+1}} - \int_{t_n}^{t_{n+1/2}}\Big)\iint q^{f_\delta}(u_\eta,k)\,\phi_x\,dt\,dx\,dy\\
&+ \sum_{n=0}^{N-1}\Big(\int_{t_n}^{t_{n+1/2}} - \int_{t_{n+1/2}}^{t_{n+1}}\Big)\iint q^{g_\delta}(u_\eta,k)\,\phi_y\,dt\,dx\,dy\\
&+ \iint |u_\eta-k|\,\phi\big|_{t=0+}\,dx\,dy - \iint |u_\eta-k|\,\phi\big|_{t=T-}\,dx\,dy\\
={}& -2\sum_{n=0}^{N-1}\int_{t_n}^{t_{n+1/2}}\iint q^{f_\delta}(u_\eta,k)\,\phi_x\,dt\,dx\,dy + \int_0^T\!\!\iint q^{f_\delta}(u_\eta,k)\,\phi_x\,dt\,dx\,dy\\
&- 2\sum_{n=0}^{N-1}\int_{t_{n+1/2}}^{t_{n+1}}\iint q^{g_\delta}(u_\eta,k)\,\phi_y\,dt\,dx\,dy + \int_0^T\!\!\iint q^{g_\delta}(u_\eta,k)\,\phi_y\,dt\,dx\,dy\\
&+ \sum_{n=0}^{N-1}\iint \big(|u_\eta-k|\,\big|_{t=t_{n+1/2}-} - |u_\eta-k|\,\big|_{t=t_{n+1/2}+}\big)\,\phi(t_{n+1/2})\,dx\,dy\\
&+ \sum_{n=1}^{N-1}\iint \big(|u_\eta-k|\,\big|_{t=t_n-} - |u_\eta-k|\,\big|_{t=t_n+}\big)\,\phi(t_n)\,dx\,dy\\
:={}& -I_1(u_\eta,k) - I_2(u_\eta,k) - I_3(u_\eta,k) - I_4(u_\eta,k).
\end{aligned}\tag{4.46}$$
Observe that because we employ the projection operator $\pi$ between each pair of consecutive one-dimensional solves, $u^{n+1/2}$ and $u^n$ are in general discontinuous across $t_{n+1/2}$ and $t_n$, respectively. The terms $I_1$ and $I_2$ are due to the dimensional splitting, while $I_3$ and $I_4$ come from the projections. Now choose for the constant $k$ the function $v_\delta(x',y',s')$, and for $\phi$ use $\Omega$ given by (4.41). Integrating over the new variables we obtain
$$\begin{aligned}
\Lambda_{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) &= \int_0^T\!\!\iint \Lambda_T\big(u_\eta,\Omega(\,\cdot\,,x',\,\cdot\,,y',\,\cdot\,,s'),v_\delta(x',y',s')\big)\,ds'\,dx'\,dy'\\
&\ge -I_1^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) - I_2^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) - I_3^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) - I_4^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta),
\end{aligned}$$
where the $I_j^{\varepsilon,\varepsilon_0}$ are given by
$$\begin{aligned}
I_1^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) &= \int_0^T\!\!\iint\Big(2\sum_{n=0}^{N-1}\int_{t_n}^{t_{n+1/2}}\iint q^{f_\delta}(u_\eta,v_\delta)\,\Omega_x\,ds\,dx\,dy - \int_0^T\!\!\iint q^{f_\delta}(u_\eta,v_\delta)\,\Omega_x\,ds\,dx\,dy\Big)\,ds'\,dx'\,dy',\\
I_2^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) &= \int_0^T\!\!\iint\Big(2\sum_{n=0}^{N-1}\int_{t_{n+1/2}}^{t_{n+1}}\iint q^{g_\delta}(u_\eta,v_\delta)\,\Omega_y\,ds\,dx\,dy - \int_0^T\!\!\iint q^{g_\delta}(u_\eta,v_\delta)\,\Omega_y\,ds\,dx\,dy\Big)\,ds'\,dx'\,dy',\\
I_3^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) &= \int_0^T\!\!\iint \sum_{n=1}^{N-1}\iint \big(|u_\eta-v_\delta|\,\big|_{s=t_n+} - |u_\eta-v_\delta|\,\big|_{s=t_n-}\big)\,\Omega\,dx\,dy\,ds'\,dx'\,dy',\\
I_4^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) &= \int_0^T\!\!\iint \sum_{n=0}^{N-1}\iint \big(|u_\eta-v_\delta|\,\big|_{s=t_{n+1/2}+} - |u_\eta-v_\delta|\,\big|_{s=t_{n+1/2}-}\big)\,\Omega\,dx\,dy\,ds'\,dx'\,dy'.
\end{aligned}$$
We will start by estimating $I_1^{\varepsilon,\varepsilon_0}$ and $I_2^{\varepsilon,\varepsilon_0}$.

Lemma 4.6. We have the following estimate:
$$|I_1^{\varepsilon,\varepsilon_0}| + |I_2^{\varepsilon,\varepsilon_0}| \le T\,\max\{\|f\|_{\mathrm{Lip}},\|g\|_{\mathrm{Lip}}\}\,\mathrm{T.V.}(u_0)\Big(\frac{\Delta t}{\varepsilon_0} + \frac{1}{\varepsilon}\big(\{\|f\|_{\mathrm{Lip}}+\|g\|_{\mathrm{Lip}}\}\Delta t + \Delta x + \Delta y\big)\Big).\tag{4.47}$$

Proof. We will detail the estimate for $|I_1^{\varepsilon,\varepsilon_0}|$. Writing
$$q^{f_\delta}(u_\eta(s),v_\delta(s')) = q^{f_\delta}(u_\eta(t_{n+1/2}),v_\delta(s')) + \big(q^{f_\delta}(u_\eta(s),v_\delta(s')) - q^{f_\delta}(u_\eta(t_{n+1/2}),v_\delta(s'))\big),$$
we rewrite $I_1^{\varepsilon,\varepsilon_0}$ as
$$I_1^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) = \sum_{n=0}^{N-1}\big(J_1(t_n,t_{n+1/2}) - J_1(t_{n+1/2},t_{n+1}) + J_2(t_n,t_{n+1/2}) - J_2(t_{n+1/2},t_{n+1})\big),\tag{4.48}$$
with
$$J_1(\tau_1,\tau_2) = \int_0^T\!\!\iint\int_{\tau_1}^{\tau_2}\iint q^{f_\delta}\big(u_\eta(x,y,t_{n+1/2}),v_\delta(x',y',s')\big)\,\Omega_x(x,x',y,y',s,s')\,ds\,dx\,dy\,ds'\,dx'\,dy',$$
$$\begin{aligned}
J_2(\tau_1,\tau_2) = \int_0^T\!\!\iint\int_{\tau_1}^{\tau_2}\iint \big(&q^{f_\delta}(u_\eta(x,y,s),v_\delta(x',y',s'))\\
- {}&q^{f_\delta}(u_\eta(x,y,t_{n+1/2}),v_\delta(x',y',s'))\big)\,\Omega_x(x,x',y,y',s,s')\,ds\,dx\,dy\,ds'\,dx'\,dy'.
\end{aligned}$$
Here we have written out all the variables explicitly; in the following we display only the relevant variables. All spatial integrals are over the real line unless specified otherwise. Rewriting
$$\omega_{\varepsilon_0}(s-s') = \omega_{\varepsilon_0}(t_{n+1/2}-s') + \int_{t_{n+1/2}}^{s}\omega_{\varepsilon_0}'(\bar s - s')\,d\bar s,$$
we obtain
$$\begin{aligned}
J_1(t_n,t_{n+1/2}) &= \int_0^T\!\!\iint\int_{t_n}^{t_{n+1/2}}\iint q^{f_\delta}(u_\eta(t_{n+1/2}),v_\delta(s'))\,\Omega^\varepsilon_x\Big(\omega_{\varepsilon_0}(t_{n+1/2}-s') + \int_{t_{n+1/2}}^{s}\omega_{\varepsilon_0}'(\bar s-s')\,d\bar s\Big)\,ds\,dx\,dy\,ds'\,dx'\,dy'\\
&= \int_0^T\!\!\iint\iint q^{f_\delta}(u_\eta(t_{n+1/2}),v_\delta(s'))\,\Omega^\varepsilon_x\Big(\frac{\Delta t}{2}\,\omega_{\varepsilon_0}(t_{n+1/2}-s') + \int_{t_n}^{t_{n+1/2}}\!\!\int_{t_{n+1/2}}^{s}\omega_{\varepsilon_0}'(\bar s-s')\,d\bar s\,ds\Big)\,dx\,dy\,ds'\,dx'\,dy',
\end{aligned}$$
where $\Omega^\varepsilon = \omega_\varepsilon(x-x')\,\omega_\varepsilon(y-y')$ denotes the spatial part of $\Omega$. If we rewrite $J_1(t_{n+1/2},t_{n+1})$ in the same way, we obtain
$$J_1(t_{n+1/2},t_{n+1}) = \int_0^T\!\!\iint\iint q^{f_\delta}(u_\eta(t_{n+1/2}),v_\delta(s'))\,\Omega^\varepsilon_x\Big(\frac{\Delta t}{2}\,\omega_{\varepsilon_0}(t_{n+1/2}-s') + \int_{t_{n+1/2}}^{t_{n+1}}\!\!\int_{t_{n+1/2}}^{s}\omega_{\varepsilon_0}'(\bar s-s')\,d\bar s\,ds\Big)\,dx\,dy\,ds'\,dx'\,dy',$$
and hence
$$\begin{aligned}
J_1(t_n,t_{n+1/2}) - J_1(t_{n+1/2},t_{n+1}) = \int_0^T\!\!\iint\iint q^{f_\delta}&(u_\eta(t_{n+1/2}),v_\delta(s'))\,\Omega^\varepsilon_x\Big(\int_{t_n}^{t_{n+1/2}}\!\!\int_{t_{n+1/2}}^{s}\omega_{\varepsilon_0}'(\bar s-s')\,d\bar s\,ds\\
&- \int_{t_{n+1/2}}^{t_{n+1}}\!\!\int_{t_{n+1/2}}^{s}\omega_{\varepsilon_0}'(\bar s-s')\,d\bar s\,ds\Big)\,dx\,dy\,ds'\,dx'\,dy'.
\end{aligned}\tag{4.49}$$
Now, using the Lipschitz continuity of $q^{f_\delta}$ we can replace variation in $q^{f_\delta}$ by variation in $u$, and obtain, using $\iint \omega_\varepsilon'(x-x')\,dx\,dx' = 0$, that
$$\begin{aligned}
\Big|\iint q^{f_\delta}(u_\eta(x,y,t_{n+1/2}),v_\delta(s'))\,&\omega_\varepsilon'(x-x')\,dx\,dx'\Big|\\
&= \Big|\iint \omega_\varepsilon'(x-x')\big(q^{f_\delta}(u_\eta(x,y,t_{n+1/2}),v_\delta(s')) - q^{f_\delta}(u_\eta(x',y,t_{n+1/2}),v_\delta(s'))\big)\,dx\,dx'\Big|\\
&\le \|f_\delta\|_{\mathrm{Lip}}\iint |\omega_\varepsilon'(x-x')|\,\big|u_\eta(x,y,t_{n+1/2}) - u_\eta(x',y,t_{n+1/2})\big|\,dx\,dx'\\
&= \|f_\delta\|_{\mathrm{Lip}}\iint \big|u_\eta(x'+z,y,t_{n+1/2}) - u_\eta(x',y,t_{n+1/2})\big|\,|\omega_\varepsilon'(z)|\,dx'\,dz\\
&\le \|f_\delta\|_{\mathrm{Lip}}\int \frac{1}{|z|}\Big(\int \big|u_\eta(x'+z,y,t_{n+1/2}) - u_\eta(x',y,t_{n+1/2})\big|\,dx'\Big)\,|z\,\omega_\varepsilon'(z)|\,dz\\
&\le \|f_\delta\|_{\mathrm{Lip}}\,\mathrm{T.V.}_x\big(u_\eta(t_{n+1/2})\big)\int |z\,\omega_\varepsilon'(z)|\,dz
\le \|f_\delta\|_{\mathrm{Lip}}\,\mathrm{T.V.}_x\big(u_\eta(t_{n+1/2})\big),
\end{aligned}$$
using that $\int |z\,\omega_\varepsilon'(z)|\,dz = 1$. We combine this with (4.49) to get
$$\begin{aligned}
\big|J_1(t_n,t_{n+1/2}) - J_1(t_{n+1/2},t_{n+1})\big| \le \|f_\delta\|_{\mathrm{Lip}}\iint \mathrm{T.V.}_x&\big(u_\eta(t_{n+1/2})\big)\,\omega_\varepsilon(y-y')\\
\times\Big(\int_0^T\!\!\int_{t_n}^{t_{n+1/2}}\Big|\int_{t_{n+1/2}}^{s}\omega_{\varepsilon_0}'(\bar s-s')\,d\bar s\Big|\,ds\,ds' &+ \int_0^T\!\!\int_{t_{n+1/2}}^{t_{n+1}}\Big|\int_{t_{n+1/2}}^{s}\omega_{\varepsilon_0}'(\bar s-s')\,d\bar s\Big|\,ds\,ds'\Big)\,dy'\,dy.
\end{aligned}$$
Inserting the estimate
$$\int_0^T \big|\omega_{\varepsilon_0}'(\bar s - s')\big|\,ds' \le \frac{1}{\varepsilon_0}\int |\omega'(z)|\,dz \le \frac{2}{\varepsilon_0},$$
we obtain
$$\big|J_1(t_n,t_{n+1/2}) - J_1(t_{n+1/2},t_{n+1})\big| \le \frac{(\Delta t)^2}{2\varepsilon_0}\,\|f_\delta\|_{\mathrm{Lip}}\,\mathrm{T.V.}\big(u_\eta(t_{n+1/2})\big).\tag{4.50}$$
Next we consider the term $J_2$. We first use the Lipschitz continuity of $q^{f_\delta}$, which yields
$$\begin{aligned}
|J_2(t_n,t_{n+1/2})| &\le \|f_\delta\|_{\mathrm{Lip}}\int_0^T\!\!\iint\int_{t_n}^{t_{n+1/2}}\iint \big|u_\eta(x,y,s) - u_\eta(x,y,t_{n+1/2})\big|\,|\Omega_x|\,ds\,dx\,dy\,ds'\,dx'\,dy'\\
&\le \frac{\|f_\delta\|_{\mathrm{Lip}}}{\varepsilon}\int_{t_n}^{t_{n+1/2}}\iint \big|u_\eta(x,y,s) - u_\eta(x,y,t_{n+1/2})\big|\,ds\,dx\,dy\\
&\le \frac{\|f_\delta\|_{\mathrm{Lip}}}{\varepsilon}\int_{t_n}^{t_{n+1/2}}\iint \big|u_\eta(x,y,s) - u_\eta(x,y,t_{n+1/2}-)\big|\,ds\,dx\,dy\\
&\qquad + \frac{\|f_\delta\|_{\mathrm{Lip}}\,\Delta t}{2\varepsilon}\iint \big|u_\eta(x,y,t_{n+1/2}-) - u_\eta(x,y,t_{n+1/2})\big|\,dx\,dy\\
&\le \frac{\|f_\delta\|_{\mathrm{Lip}}\,\Delta t}{\varepsilon}\big(\|f_\delta\|_{\mathrm{Lip}}\,\Delta t + \Delta x\big)\,\mathrm{T.V.}\big(u_\eta(t_{n+1/2})\big).
\end{aligned}$$
Here we integrated to unity in the variables $s'$ and $y'$, and estimated $\int|\omega_\varepsilon'(x-x')|\,dx'$ by $2/\varepsilon$. Finally, we used the continuity in time of the $L^1$-norm in the $x$-direction and estimated the error due to the projection. A similar bound can be obtained for $J_2(t_{n+1/2},t_{n+1})$, and hence
$$\big|J_2(t_n,t_{n+1/2}) - J_2(t_{n+1/2},t_{n+1})\big| \le |J_2(t_n,t_{n+1/2})| + |J_2(t_{n+1/2},t_{n+1})| \le \frac{\|f\|_{\mathrm{Lip}}\,\Delta t}{\varepsilon}\big(2\|f\|_{\mathrm{Lip}}\,\Delta t + \Delta x + \Delta y\big)\,\mathrm{T.V.}\big(u_\eta(t_n)\big),\tag{4.51}$$
where we used that $\mathrm{T.V.}(u_\eta(t_{n+1/2})) \le \mathrm{T.V.}(u_\eta(t_n))$. Inserting estimates (4.50) and (4.51) into (4.48) yields
$$\begin{aligned}
|I_1^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta)| &\le \|f_\delta\|_{\mathrm{Lip}}\,\mathrm{T.V.}(u_\eta(0))\sum_{n=0}^{N-1}\Big(\frac{(\Delta t)^2}{2\varepsilon_0} + \frac{\Delta t}{2\varepsilon}\big(2\|f_\delta\|_{\mathrm{Lip}}\Delta t + \Delta x + \Delta y\big)\Big)\\
&\le T\,\|f_\delta\|_{\mathrm{Lip}}\,\mathrm{T.V.}(u_\eta(0))\Big(\frac{\Delta t}{2\varepsilon_0} + \frac{1}{2\varepsilon}\big(2\|f_\delta\|_{\mathrm{Lip}}\Delta t + \Delta x + \Delta y\big)\Big),
\end{aligned}$$
where we again used that $\mathrm{T.V.}(u_\eta)$ is nonincreasing. An analogous argument gives the same estimate for $I_2^{\varepsilon,\varepsilon_0}$. Adding the two inequalities, we conclude that (4.47) holds.

It remains to estimate $I_3^{\varepsilon,\varepsilon_0}$ and $I_4^{\varepsilon,\varepsilon_0}$. We aim at the following result.
Lemma 4.7. The following estimate holds:
$$|I_3^{\varepsilon,\varepsilon_0}| + |I_4^{\varepsilon,\varepsilon_0}| \le \frac{T}{\Delta t}\,\frac{(\Delta x+\Delta y)^2}{\varepsilon}\,\mathrm{T.V.}(u_0).$$

Proof. We discuss the term $I_3^{\varepsilon,\varepsilon_0}$ only. Recall that
$$\begin{aligned}
I_3^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) = \sum_{n=1}^{N-1}\int_0^T\!\!\iint\iint \big(&|u_\eta(x,y,t_n) - v_\delta(x',y',s')|\\
-{}&|u_\eta(x,y,t_n-) - v_\delta(x',y',s')|\big)\,\Omega(x,x',y,y',t_n,s')\,dx\,dy\,ds'\,dx'\,dy'.
\end{aligned}$$
The function $u_\eta(x,y,t_n+)$ is the projection of $u_\eta(x,y,t_n-)$, that is, for $(x,y)\in I_{ij}$,
$$u_\eta(x,y,t_n+) = \frac{1}{\Delta x\,\Delta y}\iint_{I_{ij}} u_\eta(\bar x,\bar y,t_n-)\,d\bar x\,d\bar y.\tag{4.52}$$
If we replace $\mathbb{R}^2$ by $\bigcup_{i,j} I_{ij}$ and use (4.52), we obtain
$$\begin{aligned}
I_3^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) ={}& \sum_{n=1}^{N-1}\int_0^T\!\!\iint\sum_{i,j}\iint_{I_{ij}}\Big(\Big|\frac{1}{\Delta x\,\Delta y}\iint_{I_{ij}} u_\eta(\bar x,\bar y,t_n-)\,d\bar x\,d\bar y - v_\delta(x',y',s')\Big|\\
&\hspace{6em} - \big|u_\eta(x,y,t_n-) - v_\delta(x',y',s')\big|\Big)\,\Omega(x,x',y,y',t_n,s')\,dx\,dy\,ds'\,dx'\,dy'\\
={}& \sum_{n=1}^{N-1}\int_0^T\!\!\iint\frac{1}{\Delta x\,\Delta y}\sum_{i,j}\iint_{I_{ij}}\Omega(x,x',y,y',t_n,s')\iint_{I_{ij}}\big(|u_\eta(\bar x,\bar y,t_n-) - v_\delta(x',y',s')|\\
&\hspace{6em} - |u_\eta(x,y,t_n-) - v_\delta(x',y',s')|\big)\,d\bar x\,d\bar y\,dx\,dy\,ds'\,dx'\,dy'.
\end{aligned}$$
Interchanging the roles of $(x,y)$ and $(\bar x,\bar y)$ in half of this expression (both pairs run over the same cell $I_{ij}$), we may symmetrize and write
$$\begin{aligned}
I_3^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) ={}& \sum_{n=1}^{N-1}\int_0^T\!\!\iint\frac{1}{2\Delta x\,\Delta y}\sum_{i,j}\iint_{I_{ij}}\iint_{I_{ij}}\big(\Omega(x,x',y,y',t_n,s') - \Omega(\bar x,x',\bar y,y',t_n,s')\big)\\
&\hspace{3em}\times\big(|u_\eta(\bar x,\bar y,t_n-) - v_\delta(x',y',s')| - |u_\eta(x,y,t_n-) - v_\delta(x',y',s')|\big)\,d\bar x\,d\bar y\,dx\,dy\,ds'\,dx'\,dy'.
\end{aligned}$$
Estimating $I_3^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta)$ by means of the inverse triangle inequality, we obtain
$$\begin{aligned}
|I_3^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta)| \le \sum_{n=1}^{N-1}\int_0^T\!\!\iint\frac{1}{2\Delta x\,\Delta y}\sum_{i,j}\iint_{I_{ij}}\iint_{I_{ij}}&\big|u_\eta(\bar x,\bar y,t_n-) - u_\eta(x,y,t_n-)\big|\\
\times\big|\Omega(x,x',y,y',t_n,s') - \Omega(\bar x,x',\bar y,y',t_n,s')\big|&\,d\bar x\,d\bar y\,dx\,dy\,ds'\,dx'\,dy'.
\end{aligned}\tag{4.53}$$
The next step is to bound the test functions in (4.53) from above. To this end we first consider, for $x,\bar x\in[i\Delta x,(i+1)\Delta x]$,
$$\begin{aligned}
\int \big|\omega_\varepsilon(x-x') - \omega_\varepsilon(\bar x-x')\big|\,dx' &= \int \big|\omega(z) - \omega(z+(\bar x-x)/\varepsilon)\big|\,dz
= \int\Big|\int_z^{z+(\bar x-x)/\varepsilon}\omega'(\xi)\,d\xi\Big|\,dz\\
&\le \int\int_z^{z+(\bar x-x)/\varepsilon}|\omega'(\xi)|\,d\xi\,dz
\le \int_0^{\Delta x/\varepsilon}\int |\omega'(\alpha+\beta)|\,d\alpha\,d\beta = \frac{2\Delta x}{\varepsilon}.
\end{aligned}$$
Integrating the time variable to unity we easily see (really, this is easy!) that
$$\begin{aligned}
\int_0^T\!\!\iint\big|\Omega(x,x',y,y',t_n,s') &- \Omega(\bar x,x',\bar y,y',t_n,s')\big|\,ds'\,dx'\,dy'\\
&= \int_0^T\omega_{\varepsilon_0}(t_n-s')\,ds'\iint\big|\omega_\varepsilon(x-x')\,\omega_\varepsilon(y-y') - \omega_\varepsilon(\bar x-x')\,\omega_\varepsilon(\bar y-y')\big|\,dx'\,dy'\\
&\le \iint\big(|\omega_\varepsilon(x-x') - \omega_\varepsilon(\bar x-x')|\,\omega_\varepsilon(y-y') + |\omega_\varepsilon(y-y') - \omega_\varepsilon(\bar y-y')|\,\omega_\varepsilon(\bar x-x')\big)\,dx'\,dy'\\
&\le \int|\omega_\varepsilon(x-x') - \omega_\varepsilon(\bar x-x')|\,dx' + \int|\omega_\varepsilon(y-y') - \omega_\varepsilon(\bar y-y')|\,dy'
\le \frac{2}{\varepsilon}\,(\Delta x+\Delta y).
\end{aligned}\tag{4.54}$$
Furthermore, since $u_\eta(\,\cdot\,,\,\cdot\,,t_n-)$ is constant in $x$ on each interval $[i\Delta x,(i+1)\Delta x)$ (the $y$-direction evolution preserves data that is piecewise constant in $x$),
$$\big|u_\eta(\bar x,\bar y,t_n-) - u_\eta(x,y,t_n-)\big| = \big|u_\eta(x,\bar y,t_n-) - u_\eta(x,y,t_n-)\big| \le \mathrm{T.V.}_{[j\Delta y,(j+1)\Delta y]}\big(u_\eta(x,\,\cdot\,,t_n-)\big).\tag{4.55}$$
Inserting (4.54) and (4.55) into (4.53) yields
$$\begin{aligned}
|I_3^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta)| &\le \frac{1}{2\Delta x\,\Delta y}\,\frac{2(\Delta x+\Delta y)}{\varepsilon}\sum_{n=1}^{N-1}\sum_{i,j}\iint_{I_{ij}}\iint_{I_{ij}}\mathrm{T.V.}_{[j\Delta y,(j+1)\Delta y]}\big(u_\eta(x,\,\cdot\,,t_n-)\big)\,d\bar x\,d\bar y\,dx\,dy\\
&\le \frac{(\Delta x+\Delta y)\,\Delta y}{\varepsilon}\sum_{n=1}^{N-1}\sum_{i,j}\int_{i\Delta x}^{(i+1)\Delta x}\mathrm{T.V.}_{[j\Delta y,(j+1)\Delta y]}\big(u_\eta(x,\,\cdot\,,t_n-)\big)\,dx\\
&\le \frac{(\Delta x+\Delta y)\,\Delta y}{\varepsilon}\sum_{n=1}^{N-1}\mathrm{T.V.}\big(u_\eta(t_n-)\big)
\le \frac{(\Delta x+\Delta y)}{\varepsilon}\,\Delta y\,\frac{T}{\Delta t}\,\mathrm{T.V.}\big(u_\eta(0)\big),
\end{aligned}\tag{4.56}$$
where in the final step we used that $\mathrm{T.V.}(u_\eta(t_n-)) \le \mathrm{T.V.}(u_\eta(0))$. The same analysis provides the following estimate for $I_4^{\varepsilon,\varepsilon_0}$:
$$|I_4^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta)| \le \frac{(\Delta x+\Delta y)}{\varepsilon}\,\Delta x\,\frac{T}{\Delta t}\,\mathrm{T.V.}\big(u_\eta(0)\big).\tag{4.57}$$
Adding (4.56) and (4.57) proves the lemma.

We now return to the estimate of $\Lambda_{\varepsilon,\varepsilon_0}(u_\eta,v_\delta)$. Combining Lemma 4.6 and Lemma 4.7 we obtain
$$\begin{aligned}
-\Lambda_{\varepsilon,\varepsilon_0}(u_\eta,v_\delta) &\le |I_1^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta)| + |I_2^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta)| + |I_3^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta)| + |I_4^{\varepsilon,\varepsilon_0}(u_\eta,v_\delta)|\\
&\le T\Big(\Big(\frac{\Delta t}{\varepsilon_0} + \frac{1}{\varepsilon}\big(\{\|f_\delta\|_{\mathrm{Lip}}+\|g_\delta\|_{\mathrm{Lip}}\}\Delta t + \Delta x + \Delta y\big)\Big)\max\{\|f_\delta\|_{\mathrm{Lip}},\|g_\delta\|_{\mathrm{Lip}}\} + \frac{(\Delta x+\Delta y)^2}{\Delta t\,\varepsilon}\Big)\,\mathrm{T.V.}(u_0)\\
&=: T\,\mathrm{T.V.}(u_0)\,\Lambda(\varepsilon,\varepsilon_0,\eta).
\end{aligned}\tag{4.58}$$
Returning to (4.43), we combine (4.44), (4.45), and (4.58) to obtain
$$\begin{aligned}
\|S(T)u_0 - u_\eta(T)\|_1 &\le \|S(T)u_0 - S_\delta(T)u_0\|_1 + \|S_\delta(T)u_0 - u_\eta(T)\|_1\\
&\le T\max\{\|f-f_\delta\|_{\mathrm{Lip}},\|g-g_\delta\|_{\mathrm{Lip}}\}\,\mathrm{T.V.}(u_0) + \|u_0 - u_0^\eta\|_1\\
&\quad + 2\varepsilon + \max\{\|f_\delta\|_{\mathrm{Lip}},\|g_\delta\|_{\mathrm{Lip}}\}\,\varepsilon_0 + \varepsilon_0\Big(C + O\Big(\frac{\max\{\Delta x,\Delta y\}}{\Delta t}\Big)\Big)\,\mathrm{T.V.}(u_0)\\
&\quad + T\,\Lambda(\varepsilon,\varepsilon_0,\eta)\,\mathrm{T.V.}(u_0).
\end{aligned}\tag{4.59}$$
Next we take the minimum over $\varepsilon$ and $\varepsilon_0$ on the right-hand side of (4.59). This minimum has the form
$$\min_{\varepsilon,\varepsilon_0}\Big(a\varepsilon + \frac{b}{\varepsilon} + c\varepsilon_0 + \frac{d}{\varepsilon_0}\Big) = 2\sqrt{ab} + 2\sqrt{cd},$$
attained for $\varepsilon = \sqrt{b/a}$ and $\varepsilon_0 = \sqrt{d/c}$. We obtain
$$\|S(T)u_0 - u_\eta(T)\|_1 \le T\max\{\|f-f_\delta\|_{\mathrm{Lip}},\|g-g_\delta\|_{\mathrm{Lip}}\}\,\mathrm{T.V.}(u_0) + \|u_0 - u_0^\eta\|_1 + O\Big(\Big(\Delta x+\Delta y + \Delta t + \frac{(\Delta x+\Delta y)^2}{\Delta t}\Big)^{1/2}\Big)\,\mathrm{T.V.}(u_0).\tag{4.60}$$
We may choose the approximation $u_0^\eta$ of the initial data such that $\|u_0 - u_0^\eta\|_1 = O(\Delta x+\Delta y)\,\mathrm{T.V.}(u_0)$. Furthermore, if the flux functions $f$ and $g$ are piecewise $C^2$ and Lipschitz continuous, then
$$\|f - f_\delta\|_{\mathrm{Lip}} \le \delta\,\|f''\|_\infty.$$
We state the final result in the general case.

Theorem 4.8. Let $u_0$ be a function in $L^1(\mathbb{R}^m)\cap L^\infty(\mathbb{R}^m)$ with bounded total variation, and let $f_j$ for $j=1,\dots,m$ be piecewise $C^2$ functions that in addition are Lipschitz continuous. Then
$$\|u(T) - u_\eta(T)\|_1 \le O\big(\delta + (\Delta x+\Delta y)^{1/2}\big)$$
as $\eta\to 0$ when $\Delta x = K_1\,\Delta y = K_2\,\Delta t$ for constants $K_1$ and $K_2$.

It is worthwhile to analyze the error terms in this estimate. We are making three approximations in the front-tracking method combined with dimensional splitting. First, we approximate the initial data by step functions, which gives an error of order $\Delta x$. Second, we approximate the flux functions by piecewise linear and continuous functions; in this case the error is of order $\delta$. A third source is the intrinsic error in the dimensional splitting, which is of order $(\Delta t)^{1/2}$; finally, the projection onto the grid gives an error of order $(\Delta x)^{1/2}$. The advantage of this method over difference methods is that the time step $\Delta t$ is not bounded by a CFL condition expressed in terms of $\Delta x$ and $\Delta y$; the only relation that must be satisfied is (4.24), which allows large time steps. In practice one observes that CFL numbers⁴ as high as 10–15 can be used without loss of accuracy, which makes this a very fast method.
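Front tracking itself takes considerable machinery to implement, but the dimensional-splitting skeleton $u^{n+1} = S_2(\Delta t)\,S_1(\Delta t)\,u^n$ is easy to sketch. The following Python sketch is our own illustration, not the book's method: it replaces the one-dimensional front-tracking solver by a simple conservative finite-volume sweep with a Lax–Friedrichs-type flux on a periodic grid, so (unlike front tracking) it is subject to the sweep's CFL restriction; all names are ours.

```python
import numpy as np

def lxf_sweep(u, flux, dflux, dt_dx, axis):
    """One conservative finite-volume sweep for u_t + flux(u)_{x_i} = 0 along
    `axis`, using a Lax-Friedrichs-type interface flux with a global bound on
    the wave speed, and periodic boundaries."""
    a = np.abs(dflux(u)).max()                    # global wave-speed bound
    up = np.roll(u, -1, axis=axis)                # neighbor cell to the right
    fh = 0.5 * (flux(u) + flux(up)) - 0.5 * a * (up - u)   # flux at i + 1/2
    return u - dt_dx * (fh - np.roll(fh, 1, axis=axis))

def dimensional_splitting(u0, flux, dflux, dx, dy, dt, n_steps):
    """u^{n+1} = S2(dt) S1(dt) u^n: sweep in the x-direction, then in y."""
    u = u0.copy()
    for _ in range(n_steps):
        u = lxf_sweep(u, flux, dflux, dt / dx, axis=0)   # x-direction
        u = lxf_sweep(u, flux, dflux, dt / dy, axis=1)   # y-direction
    return u
```

For the two-dimensional Burgers equation one takes `flux = lambda u: 0.5*u**2`. Under the sweep's CFL condition the scheme is monotone, so it inherits the maximum principle and the conservation property used throughout this chapter.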
4.4 Operator Splitting: Diffusion

The answer, my friend, is blowin' in the wind,
the answer is blowin' in the wind.
— Bob Dylan, Blowin' in the Wind (1963)
We show how to use the concept of operator splitting to derive a (weak) solution of the parabolic problem⁵
$$u_t + \sum_{j=1}^m f_j(u)_{x_j} = \mu\sum_{j=1}^m u_{x_jx_j}\tag{4.61}$$
by solving
$$u_t + f_j(u)_{x_j} = 0,\qquad j = 1,\dots,m,\tag{4.62}$$
and
$$u_t = \mu\,\Delta u,\tag{4.63}$$
where we employ the notation $\Delta u = \sum_j u_{x_jx_j}$. To this end, let $S_j(t)u_0$ and $H(t)u_0$ denote the solutions of (4.62) and (4.63), respectively, with initial data $u_0$. Introducing the heat kernel, we may write
$$u(x,t) = \big(H(t)u_0\big)(x) = \int_{\mathbb{R}^m} K(x-y,t)\,u_0(y)\,dy = \frac{1}{(4\pi\mu t)^{m/2}}\int_{\mathbb{R}^m}\exp\Big(-\frac{|x-y|^2}{4\mu t}\Big)\,u_0(y)\,dy.$$

⁴In several dimensions the CFL number is defined as $\max_i\big(\|f_i'\|_\infty\,\Delta t/\Delta x_i\big)$.

⁵Although we have used the parabolic regularization to motivate the appropriate entropy condition, we have constructed the solution of the multidimensional conservation law per se, and hence it is logically consistent to use the solution of the conservation law in combination with operator splitting to derive the solution of the parabolic problem. A different approach, where we start with a solution of the parabolic equation and subsequently show that in the limit of vanishing viscosity the solution converges to the solution of the conservation law, is discussed in Appendix B.
Let $\Delta t$ be positive and $t_n = n\Delta t$. Define
$$u^0 = u_0,\qquad u^{n+1} = \big(H(\Delta t)\,S_m(\Delta t)\cdots S_1(\Delta t)\big)u^n,\tag{4.64}$$
with the idea that $u^n$ approximates $u(x,t_n)$. We will show that $u^n$ converges to the solution of (4.61) as $\Delta t\to 0$.

Lemma 4.9. The following estimates hold:
$$\|u^n\|_\infty \le \|u_0\|_\infty,\tag{4.65}$$
$$\mathrm{T.V.}(u^n) \le \mathrm{T.V.}(u_0),\tag{4.66}$$
$$\|u^{n_1} - u^{n_2}\|_{L^1_{\mathrm{loc}}(\mathbb{R}^m)} \le C\big(|n_1-n_2|\,\Delta t\big)^{1/(m+1)}.\tag{4.67}$$

Proof. Equation (4.65) is obvious, since both the heat equation and the conservation law obey the maximum principle. We know that the solution of the conservation law has the TVD property (4.66); see (4.21). Thus it remains to show that this property is shared by the solution of the heat equation. To this end, we have
$$\begin{aligned}
\int_{\mathbb{R}^m}\big|H(t)u(x+h) - H(t)u(x)\big|\,dx &\le \int_{\mathbb{R}^m}\int_{\mathbb{R}^m}\big|K(x+h-y,t)u(y) - K(x-y,t)u(y)\big|\,dy\,dx\\
&= \int_{\mathbb{R}^m}\int_{\mathbb{R}^m}\big|K(y,t)u(x+h-y) - K(y,t)u(x-y)\big|\,dy\,dx\\
&= \int_{\mathbb{R}^m}K(y,t)\,dy\int_{\mathbb{R}^m}\big|u(x+h) - u(x)\big|\,dx
= \int_{\mathbb{R}^m}\big|u(x+h) - u(x)\big|\,dx.
\end{aligned}$$
Dividing by $|h|$ and letting $h\to 0$ we conclude that $\mathrm{T.V.}(H(t)u) \le \mathrm{T.V.}(u)$, which proves (4.66).

Finally, we consider (4.67). We will first show that the approximate solution obtained by splitting is weakly Lipschitz continuous in time. More precisely, for each ball $B_r = \{x \mid |x|\le r\}$, we will show that
$$\Big|\int_{B_r}(u^{n_1} - u^{n_2})\,\phi\,dx\Big| \le C_r\,|n_1-n_2|\,\Delta t\,\Big(\|\phi\|_\infty + \max_j\|\phi_{x_j}\|_\infty\Big),\tag{4.68}$$
for smooth test functions $\phi = \phi(x)$, where $C_r$ is a constant depending on $r$. It is enough to study the case $n_2 = n_1+1$, and we set $n_1 = n$. Furthermore, we can write
$$\Big|\int (u^{n+1} - u^n)\,\phi\,dx\Big| \le \Big|\int \big(H(\Delta t)\tilde u^n - \tilde u^n\big)\phi\,dx\Big| + \Big|\int (\tilde u^n - u^n)\,\phi\,dx\Big|,\tag{4.69}$$
where $\tilde u^n = (S_m(\Delta t)\cdots S_1(\Delta t))u^n$. This shows that it suffices to prove the property for the solutions of the conservation law and of the heat equation separately. From Theorem 4.3 we know that the solution of the one-dimensional conservation law satisfies the stronger estimate
$$\|S(t)u - u\|_1 \le C\,|t|.$$
This implies that (for simplicity with $m=2$)
$$\|S_2(t)S_1(t)u - u\|_1 \le \|S_2(t)S_1(t)u - S_1(t)u\|_1 + \|S_1(t)u - u\|_1 \le C\,|t|,$$
and hence we infer that the last term of (4.69) is of order $\Delta t$, that is,
$$\Big|\int(\tilde u^n - u^n)\,\phi\,dx\Big| \le C\,\|\phi\|_\infty\,|\Delta t|.$$
The first term can be estimated as follows (for simplicity of notation we assume $m=1$). Consider
$$\begin{aligned}
\Big|\int\big(H(t)u_0 - u_0\big)\phi\,dx\Big| &= \Big|\int\Big(\int_0^t u_t\,dt\Big)\phi\,dx\Big| = \Big|\int\Big(\int_0^t u_{xx}\,dt\Big)\phi\,dx\Big|\\
&\le \int_0^t\!\!\int |u_x\,\phi_x|\,dx\,dt \le \|\phi_x\|_\infty\int_0^t\!\!\int|u_x|\,dx\,dt\\
&\le \|\phi_x\|_\infty\int_0^t \mathrm{T.V.}(u)\,dt \le \|\phi_x\|_\infty\,\mathrm{T.V.}(u_0)\,t.
\end{aligned}$$
Thus we conclude that (4.68) holds. From the TVD property (4.66) we have that
$$\sup_{|\xi|\le\rho}\int\big|u^n(x+\xi) - u^n(x)\big|\,dx \le \rho\,\mathrm{T.V.}(u^n).\tag{4.70}$$
Using Kružkov's interpolation lemma (stated and proved right after this proof) we can infer, using (4.68) and (4.70), that
$$\int_{B_r}\big|u^{n_1}(x) - u^{n_2}(x)\big|\,dx \le C_r\Big(\varepsilon + \frac{|n_1-n_2|\,\Delta t}{\varepsilon^m}\Big)$$
for all $\varepsilon\le\rho$. Choosing $\varepsilon = (|n_1-n_2|\,\Delta t)^{1/(m+1)}$ proves the result.

We next state and prove Kružkov's interpolation lemma. To do this we need multi-index notation. A vector of the form $\alpha = (\alpha_1,\dots,\alpha_m)$, where each component is a nonnegative integer, is called a multi-index of order $|\alpha| = \alpha_1+\cdots+\alpha_m$. Given a multi-index $\alpha$ we define
$$D^\alpha u(x) = \frac{\partial^{|\alpha|}u(x)}{\partial x_1^{\alpha_1}\cdots\partial x_m^{\alpha_m}}.$$
Lemma 4.10 (Kružkov interpolation lemma). Let $u(x,t)$ be a bounded measurable function defined in the cylinder $B_{r+\hat r}\times[0,T]$, $\hat r\ge 0$. For $t\in[0,T]$ and $|\rho|\le\hat r$, assume that $u$ possesses a spatial modulus of continuity
$$\sup_{|\xi|\le|\rho|}\int_{B_r}\big|u(x+\xi,t) - u(x,t)\big|\,dx \le \nu_{r,T,\hat r}(|\rho|;u),\tag{4.71}$$
where $\nu_{r,T,\hat r}$ does not depend on $t$. Suppose that for any $\phi\in C_0^\infty(B_r)$ and any $t_1,t_2\in[0,T]$,
$$\Big|\int_{B_r}\big(u(x,t_2) - u(x,t_1)\big)\,\phi(x)\,dx\Big| \le \mathrm{Const}_{r,T}\sum_{|\alpha|\le m}\|D^\alpha\phi\|_{L^\infty(B_r)}\,|t_2-t_1|,\tag{4.72}$$
where $\alpha$ denotes a multi-index. Then for $t$ and $t+\tau\in[0,T]$ and for all $\varepsilon\in(0,\hat r]$,
$$\int_{B_r}\big|u(x,t+\tau) - u(x,t)\big|\,dx \le \mathrm{Const}_{r,T}\Big(\varepsilon + \nu_{r,T,\hat r}(\varepsilon;u) + \frac{|\tau|}{\varepsilon^m}\Big).\tag{4.73}$$

Proof. Let $\delta\in C_0^\infty$ be a function such that
$$0\le\delta(x)\le 1,\qquad \mathrm{supp}\,\delta\subseteq B_1,\qquad \int\delta(x)\,dx = 1,$$
and define
$$\delta_\varepsilon(x) = \frac{1}{\varepsilon^m}\,\delta\Big(\frac{x}{\varepsilon}\Big).$$
Furthermore, write $f(x) = u(x,t+\tau) - u(x,t)$ (suppressing the time dependence in the notation for $f$), let $\sigma(x) = \mathrm{sign}(f(x))$ for $|x|\le r-\varepsilon$ and $\sigma(x) = 0$ otherwise, and set
$$\sigma_\varepsilon(x) = (\sigma*\delta_\varepsilon)(x) = \int\sigma(x-y)\,\delta_\varepsilon(y)\,dy.$$
By construction, $\sigma_\varepsilon\in C_0^\infty(\mathbb{R}^m)$ and $\mathrm{supp}\,\sigma_\varepsilon\subseteq B_r$. Furthermore, $|\sigma_\varepsilon|\le 1$ and
$$\Big|\frac{\partial}{\partial x_j}\sigma_\varepsilon\Big| \le \frac{1}{\varepsilon^m}\int\Big|\frac{\partial}{\partial x_j}\delta\Big(\frac{x-y}{\varepsilon}\Big)\Big|\,|\sigma(y)|\,dy \le \frac{1}{\varepsilon^{m+1}}\int\Big|\delta_{x_j}\Big(\frac{x-y}{\varepsilon}\Big)\Big|\,|\sigma(y)|\,dy \le \frac{C}{\varepsilon}.$$
This easily generalizes to
$$\|D^\alpha\sigma_\varepsilon\|_\infty \le \frac{C}{\varepsilon^{|\alpha|}}.$$
Next we have the elementary but important inequality
$$\begin{aligned}
\int_{B_r}|f(x)|\,dx &= \int_{B_r}\big(|f(x)| - \sigma_\varepsilon(x)f(x) + \sigma_\varepsilon(x)f(x)\big)\,dx\\
&\le \int_{B_r}\big||f(x)| - \sigma_\varepsilon(x)f(x)\big|\,dx + \Big|\int_{B_r}\sigma_\varepsilon(x)f(x)\,dx\Big|
:= I_1 + I_2.
\end{aligned}$$
We estimate $I_1$ and $I_2$ separately. Starting with $I_1$ we obtain
$$\begin{aligned}
I_1 &= \int_{B_r}\big||f(x)| - \sigma_\varepsilon(x)f(x)\big|\,dx\\
&= \int_{B_r}\Big||f(x)|\,\frac{1}{\varepsilon^m}\int\delta\Big(\frac{x-y}{\varepsilon}\Big)\,dy - \frac{1}{\varepsilon^m}\int\delta\Big(\frac{x-y}{\varepsilon}\Big)\sigma(y)\,dy\;f(x)\Big|\,dx\\
&\le \frac{1}{\varepsilon^m}\int_{B_r}\int\delta\Big(\frac{x-y}{\varepsilon}\Big)\,\big||f(x)| - \sigma(y)f(x)\big|\,dy\,dx.
\end{aligned}$$
The integrand is integrated over the domain $\{(x,y)\mid |x|\le r,\ |x-y|\le\varepsilon\}$; see Figure 4.2.
[Figure 4.2. The integration domain: the set $\{(x,y)\mid |x|\le r,\ |x-y|\le\varepsilon\}$, divided into the regions (i) $|y|\ge r-\varepsilon$ and (ii) $|y|\le r-\varepsilon$.]
We further divide this set into two parts: (i) $|y|\ge r-\varepsilon$, and (ii) $|y|\le r-\varepsilon$; see Figure 4.2. In case (i) we have
$$\big||f(x)| - \sigma(y)f(x)\big| = |f(x)|,$$
since $\sigma(y) = 0$ whenever $|y|\ge r-\varepsilon$. In case (ii),
$$\big||f(x)| - \sigma(y)f(x)\big| = \big||f(x)| - \mathrm{sign}(f(y))\,f(x)\big| \le 2\,|f(x)-f(y)|,$$
using the elementary inequality
$$\big||a| - \mathrm{sign}(b)\,a\big| = \big||a| - |b| + \mathrm{sign}(b)(b-a)\big| \le \big||a|-|b|\big| + \big|\mathrm{sign}(b)(b-a)\big| \le 2\,|a-b|.$$
Thus
$$\begin{aligned}
I_1 &\le \frac{2}{\varepsilon^m}\int_{B_r}\int_{B_{r-\varepsilon}}\delta\Big(\frac{x-y}{\varepsilon}\Big)\,|f(x)-f(y)|\,dy\,dx + \frac{1}{\varepsilon^m}\int_{B_r}\int_{|y|\ge r-\varepsilon}\delta\Big(\frac{x-y}{\varepsilon}\Big)\,|f(x)|\,dy\,dx\\
&\le 2\int_{B_r}\int_{B_1}\delta(z)\,\big|f(x) - f(x-\varepsilon z)\big|\,dz\,dx + \|f\|_\infty\,\frac{1}{\varepsilon^m}\int_{B_{r+\varepsilon}\setminus B_{r-\varepsilon}}\int_{B_r}\delta\Big(\frac{x-y}{\varepsilon}\Big)\,dx\,dy\\
&\le 2\int_{B_1}\delta(z)\,\sup_{|\xi|\le\varepsilon}\int_{B_r}\big|f(x) - f(x+\xi)\big|\,dx\,dz + \|f\|_\infty\,\mathrm{vol}\big(B_{r+\varepsilon}\setminus B_{r-\varepsilon}\big)\\
&\le 2\nu(\varepsilon;f) + \|f\|_\infty\,C_r\,\varepsilon.
\end{aligned}$$
Furthermore, $\nu(\varepsilon;f)\le 2\nu(\varepsilon;u)$. The second term $I_2$ is estimated by the assumptions of the lemma, namely,
$$I_2 = \Big|\int_{B_r}\sigma_\varepsilon(x)f(x)\,dx\Big| \le \mathrm{Const}_{r,T}\sum_{|\alpha|\le m}\|D^\alpha\sigma_\varepsilon\|_{L^\infty(B_r)}\,|\tau| \le C\,\frac{|\tau|}{\varepsilon^m}.$$
Combining the two estimates we conclude that
$$\int_{B_r}\big|u(x,t+\tau) - u(x,t)\big|\,dx \le C_r\Big(\varepsilon + \nu_{r,T,\hat r}(\varepsilon;u) + \frac{|\tau|}{\varepsilon^m}\Big).$$
Next we need to extend the function $u^n$ to all times. First, define
$$u^{n+j/(m+1)} = S_j(\Delta t)\,u^{n+(j-1)/(m+1)},\qquad j = 1,\dots,m.$$
Now let
$$u_{\Delta t}(x,t) = \begin{cases} S_j\big((m+1)(t - t_{n+(j-1)/(m+1)})\big)\,u^{n+(j-1)/(m+1)} & \text{for } t\in[t_{n+(j-1)/(m+1)},\,t_{n+j/(m+1)}),\\[2pt] H\big((m+1)(t - t_{n+m/(m+1)})\big)\,u^{n+m/(m+1)} & \text{for } t\in[t_{n+m/(m+1)},\,t_{n+1}). \end{cases}\tag{4.74}$$
The estimates in Lemma 4.9 carry over to the function $u_{\Delta t}$. Fix $T>0$. Applying Theorem A.8 we conclude that there exists a sequence of $\Delta t\to 0$ such that for each $t\in[0,T]$ the function $u_{\Delta t}(t)$ converges to a function $u(t)$, and the convergence is in $C([0,T];L^1_{\mathrm{loc}}(\mathbb{R}^m))$. It remains to show that $u$ is a weak solution of (4.61), or
$$\int_{\mathbb{R}^m}\int_0^t\big(u\,\phi_t + f(u)\cdot\nabla\phi + \mu\,u\,\Delta\phi\big)\,dt\,dx = 0\tag{4.75}$$
for all smooth and compactly supported test functions $\phi$. Using that $u^{n+(j-1)/(m+1)}$ is a solution of the conservation law on the strip $t\in[t_{n+(j-1)/(m+1)},\,t_{n+j/(m+1)})$, with time running $(m+1)$ times as fast, we have
$$\int_{\mathbb{R}^m}\int_{t_{n+(j-1)/(m+1)}}^{t_{n+j/(m+1)}}\Big(\frac{1}{m+1}\,u_{\Delta t}\,\phi_t + f(u_{\Delta t})\cdot\nabla\phi\Big)\,dt\,dx = \frac{1}{m+1}\int_{\mathbb{R}^m}\big(u_{\Delta t}\,\phi\big)\Big|_{t=t_{n+(j-1)/(m+1)}}^{t=t_{n+j/(m+1)}}\,dx,\tag{4.76}$$
for $j=1,\dots,m$. Similarly, we find for the solution of the heat equation that
$$\int_{\mathbb{R}^m}\int_{t_{n+m/(m+1)}}^{t_{n+1}}\Big(\frac{1}{m+1}\,u_{\Delta t}\,\phi_t + \mu\,u_{\Delta t}\,\Delta\phi\Big)\,dt\,dx = \frac{1}{m+1}\int_{\mathbb{R}^m}\big(u_{\Delta t}\,\phi\big)\Big|_{t=t_{n+m/(m+1)}}^{t=t_{n+1}}\,dx.\tag{4.77}$$
Summing (4.76) for $j=1,\dots,m$, adding the result to (4.77), and summing over $n$ (the boundary terms telescope and vanish for compactly supported $\phi$), we obtain
$$\int_{\mathbb{R}^m}\int_0^t\Big(\frac{1}{m+1}\,u_{\Delta t}\,\phi_t + f_{\Delta t}(u_{\Delta t})\cdot\nabla\phi + \mu\,\chi_{m+1}\,u_{\Delta t}\,\Delta\phi\Big)\,dt\,dx = 0,\tag{4.78}$$
where $f_{\Delta t} = (\chi_1 f_1,\dots,\chi_m f_m)$ and
$$\chi_j(t) = \begin{cases} 1 & \text{for } t\in\bigcup_n\,[t_{n+(j-1)/(m+1)},\,t_{n+j/(m+1)}),\\ 0 & \text{otherwise.} \end{cases}$$
As $\Delta t\to 0$, we have $\chi_j \overset{*}{\rightharpoonup} 1/(m+1)$, which proves (4.75). We summarize the result as follows.

Theorem 4.11. Let $u_0$ be a function in $L^\infty(\mathbb{R}^m)\cap L^1(\mathbb{R}^m)\cap BV(\mathbb{R}^m)$, and assume that $f_j$ are Lipschitz continuous functions for $j=1,\dots,m$. Define the family of functions $\{u_{\Delta t}\}$ by (4.64) and (4.74). Fix $T>0$. Then there exists a sequence of $\Delta t\to 0$ such that $\{u_{\Delta t}(t)\}$ converges to a weak solution $u$ of (4.61). The convergence is in $C([0,T];L^1_{\mathrm{loc}}(\mathbb{R}^m))$.

One can prove that a weak solution of (4.61) is indeed a classical solution; see [112]. Hence, by uniqueness of classical solutions, the sequence $\{u_{\Delta t}\}$ converges for any sequence $\{\Delta t\}$ tending to zero.
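As an illustration of the splitting $u^{n+1} = H(\Delta t)S(\Delta t)u^n$ behind Theorem 4.11, here is a one-dimensional sketch of our own in which a Lax–Friedrichs-type sweep stands in for the conservation-law solver and the heat step is applied exactly in Fourier space on a periodic grid; the names and discretization choices are ours, not the book's.

```python
import numpy as np

def split_step_viscous(u0, flux, dflux, mu, dx, dt, n_steps):
    """Approximate u_t + flux(u)_x = mu * u_xx on a periodic grid with the
    splitting u^{n+1} = H(dt) S(dt) u^n: a conservative sweep for the
    conservation law, then the exact heat semigroup applied in Fourier space."""
    n = len(u0)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)     # wavenumbers
    heat = np.exp(-mu * k**2 * dt)                # Fourier symbol of H(dt)
    u = u0.copy()
    for _ in range(n_steps):
        a = np.abs(dflux(u)).max()                # S(dt): Lax-Friedrichs-type sweep
        up = np.roll(u, -1)
        fh = 0.5 * (flux(u) + flux(up)) - 0.5 * a * (up - u)
        u = u - (dt / dx) * (fh - np.roll(fh, 1))
        u = np.fft.ifft(heat * np.fft.fft(u)).real   # H(dt): heat step
    return u
```

Both substeps conserve the mean value and obey (up to round-off) the maximum principle, in agreement with Lemma 4.9.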
4.5 Operator Splitting: Source

Experience must be our only guide;
Reason may mislead us.
— J. Dickinson, at the Constitutional Convention (1787)

We will use operator splitting to study the inhomogeneous conservation law
$$u_t + \sum_{j=1}^m f_j(u)_{x_j} = g(x,t,u),\qquad u|_{t=0} = u_0,\tag{4.79}$$
where the source term $g$ is assumed to be continuous in $(x,t)$ and Lipschitz continuous in $u$. In this case the Kružkov entropy condition reads as follows. The bounded function $u$ is a weak entropy solution on $[0,T]$ if it satisfies
$$\begin{aligned}
\int_0^T\!\!\int_{\mathbb{R}^m}\Big(|u-k|\,\varphi_t &+ \mathrm{sign}(u-k)\sum_{j=1}^m\big(f_j(u)-f_j(k)\big)\,\varphi_{x_j}\Big)\,dx_1\cdots dx_m\,dt\\
&+ \int_{\mathbb{R}^m}|u_0-k|\,\varphi\big|_{t=0}\,dx_1\cdots dx_m - \int_{\mathbb{R}^m}\big(|u-k|\,\varphi\big)\big|_{t=T}\,dx_1\cdots dx_m\\
&\ge -\int_0^T\!\!\int_{\mathbb{R}^m}\mathrm{sign}(u-k)\,\varphi\,g(x,t,u)\,dx_1\cdots dx_m\,dt,
\end{aligned}\tag{4.80}$$
for all constants $k\in\mathbb{R}$ and all nonnegative test functions $\varphi\in C_0^\infty(\mathbb{R}^m\times[0,T])$.
To simplify the presentation we consider only the case with $m=1$ and $g = g(u)$. Thus
$$u_t + f(u)_x = g(u).\tag{4.81}$$
The case where $g$ also depends on $(x,t)$ is treated in Exercise 4.7. Let $S(t)u_0$ and $R(t)u_0$ denote the solutions of
$$u_t + f(u)_x = 0,\qquad u|_{t=0} = u_0,\tag{4.82}$$
and
$$u_t = g(u),\qquad u|_{t=0} = u_0,\tag{4.83}$$
respectively. Define the sequence $\{u^n\}$ by (we still use $t_n = n\Delta t$)
$$u^0 = u_0,\qquad u^{n+1} = \big(S(\Delta t)R(\Delta t)\big)u^n$$
for some positive $\Delta t$. Furthermore, we need the extension to all times, defined by⁶
$$u_{\Delta t}(x,t) = \begin{cases} S\big(2(t-t_n)\big)u^n & \text{for } t\in[t_n,\,t_{n+1/2}),\\[2pt] R\big(2(t-t_{n+1/2})\big)u^{n+1/2} & \text{for } t\in[t_{n+1/2},\,t_{n+1}), \end{cases}\tag{4.84}$$
with
$$u^{n+1/2} = S(\Delta t)u^n,\qquad t_{n+1/2} = \Big(n+\tfrac12\Big)\Delta t.$$
For this procedure to be well defined, we must be sure that the ordinary differential equation (4.83) is well posed. This is the case if $g$ is uniformly Lipschitz continuous in $u$, i.e.,
$$|g(u) - g(v)| \le \|g\|_{\mathrm{Lip}}\,|u-v|.\tag{4.85}$$
For convenience, we set $\gamma = \|g\|_{\mathrm{Lip}}$. This assumption also implies that the solution of (4.83) does not blow up in finite time, since
$$|g(u)| \le |g(0)| + \gamma|u| \le C_g(1+|u|)\tag{4.86}$$
for some constant $C_g$. Under this assumption on $g$ we have the following lemma.

Lemma 4.12. Assume that $u_0$ is a function in $L^1_{\mathrm{loc}}(\mathbb{R})$, and that $u_0$ is of bounded variation. Then for $n\Delta t\le T$, the following estimates hold:

⁶Essentially, we replace the operator $H$ used in the operator splitting with respect to diffusion by $R$ in the case of a source.
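The splitting (4.84) is straightforward to realize numerically. Here is a minimal sketch of our own for the linear transport equation $u_t + a\,u_x = g(u)$, in which $S(\Delta t)$ is an exact grid shift (valid when $a\,\Delta t/\Delta x$ is an integer) and $R(\Delta t)$ is a single explicit Euler step for the ODE; both choices are illustrative simplifications, not the front-tracking and exact ODE solvers of the text.

```python
import numpy as np

def source_splitting(u0, a, g, dx, dt, n_steps):
    """u^{n+1} = S(dt) R(dt) u^n for u_t + a*u_x = g(u) on a periodic grid.
    S(dt) is an exact shift (this demo assumes a*dt/dx is an integer) and
    R(dt) is one explicit Euler step for u_t = g(u)."""
    shift = a * dt / dx
    assert abs(shift - round(shift)) < 1e-12, "demo assumes an integer CFL number"
    u = u0.copy()
    for _ in range(n_steps):
        u = u + dt * g(u)                  # R(dt): Euler step for the source
        u = np.roll(u, int(round(shift)))  # S(dt): exact transport
    return u
```

With $g(u) = -u$ the exact solution is $u(x,t) = e^{-t}u_0(x-at)$, and the splitting reproduces it up to the $O(\Delta t)$ error of the Euler substep.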
(i) There is a constant $M_1$, independent of $n$ and $\Delta t$, such that
$$\|u^n\|_\infty \le M_1.\tag{4.87}$$

(ii) There is a constant $M_2$, independent of $n$ and $\Delta t$, such that
$$\mathrm{T.V.}(u^n) \le M_2.\tag{4.88}$$

(iii) There is a constant $M_3$, independent of $n$ and $\Delta t$, such that for $t_1$ and $t_2$ with $0\le t_1\le t_2\le T$, and for each bounded interval $B\subset\mathbb{R}$,
$$\int_B\big|u_{\Delta t}(x,t_1) - u_{\Delta t}(x,t_2)\big|\,dx \le M_3\,|t_1-t_2|.\tag{4.89}$$
Proof. We start by proving (i). The solution operator $S_t$ obeys a maximum principle, so that $\|u^{n+1/2}\|_\infty\le\|u^n\|_\infty$. Multiplying (4.83) by $\mathrm{sign}(u)$, we find that
$$|u|_t = \mathrm{sign}(u)\,g(u) \le |g(u)| \le C_g(1+|u|),$$
where we have used (4.86). By Gronwall's inequality (see Exercise 4.5), a solution of (4.83) satisfies
$$|u(t)| \le e^{C_g t}\big(1+|u_0|\big) - 1.$$
This means that
$$\|u^{n+1}\|_\infty \le e^{C_g\Delta t}\big(1+\|u^{n+1/2}\|_\infty\big) - 1 \le e^{C_g\Delta t}\big(1+\|u^n\|_\infty\big) - 1,$$
which by induction implies
$$\|u^n\|_\infty \le e^{C_g t_n}\big(1+\|u_0\|_\infty\big) - 1.$$
Setting $M_1 = e^{C_g T}(1+\|u_0\|_\infty) - 1$ proves (i).

Next, we prove (ii). Since $S_t$ is TVD, $\mathrm{T.V.}(u^{n+1/2}) \le \mathrm{T.V.}(u^n)$. As before, let $u$ be a solution of (4.83) and let $v$ be another solution with initial data $v_0$. Then we have
$$(u-v)_t = g(u) - g(v).$$
Setting $w = u-v$ and multiplying by $\mathrm{sign}(w)$, we find that
$$|w|_t = \mathrm{sign}(w)\big(g(u)-g(v)\big) \le \gamma|w|.$$
Then, by Gronwall's inequality, $|w(t)| \le e^{\gamma t}|w(0)|$. Hence
$$\big|u^{n+1}(x) - u^{n+1}(y)\big| \le e^{\gamma\Delta t}\big|u^{n+1/2}(x) - u^{n+1/2}(y)\big|.$$
This implies that
$$\mathrm{T.V.}(u^{n+1}) \le e^{\gamma\Delta t}\,\mathrm{T.V.}(u^{n+1/2}) \le e^{\gamma\Delta t}\,\mathrm{T.V.}(u^n).$$
Inductively, we then have that
$$\mathrm{T.V.}(u^n) \le e^{\gamma t_n}\,\mathrm{T.V.}(u_0),$$
and setting $M_2 = e^{\gamma T}\,\mathrm{T.V.}(u_0)$ concludes the proof of (ii).

Regarding (iii), we know that
$$\int_B\big|u^{n+1/2}(x) - u^n(x)\big|\,dx \le C\,\Delta t.$$
We also have that
$$\begin{aligned}
\int_B\big|u^{n+1}(x) - u^{n+1/2}(x)\big|\,dx &= \int_B\Big|\int_0^{\Delta t} g\big(u_{\Delta t}(x,t-t_n)\big)\,dt\Big|\,dx
\le \int_B\int_0^{\Delta t}\big|g\big(u_{\Delta t}(x,t-t_n)\big)\big|\,dt\,dx\\
&\le C_g\int_0^{\Delta t}\!\!\int_B(1+M_1)\,dx\,dt = |B|\,C_g(1+M_1)\,\Delta t,
\end{aligned}$$
where $|B|$ denotes the length of $B$. Setting $M_3 = C + |B|\,C_g(1+M_1)$ shows that
$$\int_B\big|u^{n+1}(x) - u^n(x)\big|\,dx \le M_3\,\Delta t,$$
which implies (iii).

Fix $T>0$. Theorem A.8 implies the existence of a sequence $\Delta t\to 0$ such that for each $t\in[0,T]$ the function $u_{\Delta t}(t)$ converges in $L^1_{\mathrm{loc}}(\mathbb{R})$ to a bounded function $u(t)$ of bounded variation. The convergence is in $C([0,T];L^1_{\mathrm{loc}}(\mathbb{R}))$. It remains to show that $u$ solves (4.81) in the sense of (4.80). Using that $u_{\Delta t}$ is an entropy solution of the conservation law without source term (4.82) on the interval $[t_n,t_{n+1/2}]$, we obtain⁷
$$\int_{t_n}^{t_{n+1/2}}\!\!\int\Big(\frac12\,|u_{\Delta t}-k|\,\varphi_t + \mathrm{sign}(u_{\Delta t}-k)\big(f(u_{\Delta t})-f(k)\big)\varphi_x\Big)\,dx\,dt + \int\big(|u_{\Delta t}-k|\,\varphi\big)\Big|_{t=t_n}\,dx - \int\big(|u_{\Delta t}-k|\,\varphi\big)\Big|_{t=t_{n+1/2}}\,dx \ge 0.\tag{4.90}$$
Regarding solutions of (4.83), since $k_t = 0$ for any constant $k$, we find that
$$|u-k|_t = \mathrm{sign}(u-k)\,(u-k)_t = \mathrm{sign}(u-k)\,g(u).$$
Multiplying this by a test function $\phi(t)$ and integrating over $s\in[0,t]$, we find after a partial integration that
$$\int_0^t\big(|u-k|\,\phi_s + \mathrm{sign}(u-k)\,g(u)\,\phi\big)\,ds + \big(|u-k|\,\phi\big)\big|_{s=0} - \big(|u-k|\,\phi\big)\big|_{s=t} = 0.$$
Since $u_{\Delta t}$ is a solution of the ordinary differential equation (4.83) on the interval $[t_{n+1/2},t_{n+1}]$ (with time running "twice as fast"; see (4.84)), we find that
$$\int_{t_{n+1/2}}^{t_{n+1}}\!\!\int\Big(\frac12\,|u_{\Delta t}-k|\,\varphi_t + \mathrm{sign}(u_{\Delta t}-k)\,g(u_{\Delta t})\,\varphi\Big)\,dx\,dt + \int\big(|u_{\Delta t}-k|\,\varphi\big)\Big|_{t=t_{n+1/2}}\,dx - \int\big(|u_{\Delta t}-k|\,\varphi\big)\Big|_{t=t_{n+1}}\,dx = 0.$$
Adding this and (4.90), and summing over $n$, we obtain
$$\begin{aligned}
\int_0^T\!\!\int\Big(\frac12\,|u_{\Delta t}-k|\,\varphi_t &+ \chi_{\Delta t}\,\mathrm{sign}(u_{\Delta t}-k)\big(f(u_{\Delta t})-f(k)\big)\varphi_x + \tilde\chi_{\Delta t}\,\mathrm{sign}(u_{\Delta t}-k)\,g(u_{\Delta t})\,\varphi\Big)\,dx\,dt\\
&+ \int\big(|u_{\Delta t}-k|\,\varphi\big)\big|_{t=0}\,dx - \int\big(|u_{\Delta t}-k|\,\varphi\big)\big|_{t=T}\,dx \ge 0,
\end{aligned}$$
where $\chi_{\Delta t}$ and $\tilde\chi_{\Delta t}$ denote the characteristic functions of the sets $\bigcup_n[t_n,t_{n+1/2})$ and $\bigcup_n[t_{n+1/2},t_{n+1})$, respectively. We have $\chi_{\Delta t}\overset{*}{\rightharpoonup}\tfrac12$ and $\tilde\chi_{\Delta t}\overset{*}{\rightharpoonup}\tfrac12$, and hence we conclude that (4.80) holds in the limit as $\Delta t\to 0$.

Theorem 4.13. Let $f(u)$ be Lipschitz continuous, and assume that $g = g(u)$ satisfies the bound (4.85). Let $u_0$ be a bounded function of bounded variation. Then the initial value problem
$$u_t + f(u)_x = g(u),\qquad u(x,0) = u_0(x)\tag{4.91}$$
has a weak entropy solution, which can be constructed as the limit of the sequence $\{u_{\Delta t}\}$ defined by (4.84).

⁷The constants $2$ and $\tfrac12$ come from the fact that time runs "twice as fast" in the solution operators $S$ and $R$ in (4.84); cf. also (4.13)–(4.14).
4.6 Notes

Dimensional splitting for hyperbolic equations was first introduced by Bagrinovskiĭ and Godunov [3] in 1957. Crandall and Majda made a comprehensive and systematic study of dimensional splitting (or the fractional steps method) in [38]. In [39] they used dimensional splitting to prove convergence of monotone schemes as well as of the Lax–Wendroff scheme and the Glimm scheme, i.e., the random choice method.

There are also methods for multidimensional conservation laws that are intrinsically multidimensional. However, we have here decided to use dimensional splitting as our technique because it is conceptually simple and allows us to take advantage of the one-dimensional analysis.

Another natural approach to the study of multidimensional equations based on the front-tracking concept is first to make the standard front-tracking approximation: replace the initial data by a piecewise constant function, and replace the flux functions by piecewise linear and continuous functions. That gives rise to truly two-dimensional Riemann problems at each grid point $(i\Delta x,j\Delta y)$. However, that approach has turned out to be rather cumbersome even for a single Riemann problem and piecewise linear and continuous flux functions $f$ and $g$; see Risebro [122].

The one-dimensional front-tracking approach combined with dimensional splitting was first introduced in Holden and Risebro [69]. The theorem on the convergence rate of dimensional splitting was proved independently by Teng [138] and Karlsen [80, 81]. Our presentation here follows Haugse, Lie, and Karlsen [100]. Section 4.4, using operator splitting to solve the parabolic regularization, is taken from Karlsen and Risebro [82]. The Kružkov interpolation lemma, Lemma 4.10, is taken from [87]; see also [82]. The presentation in Section 4.5 can be found in Holden and Risebro [70], where also the case with a stochastic source is treated. The convergence rate in the case of operator splitting applied to a conservation law with a source term is discussed in Langseth, Tveito, and Winther [93].
Exercises

4.1 Consider the initial value problem
$$u_t + f(u)_x + g(u)_y = 0,\qquad u|_{t=0} = u_0,$$
where $f$, $g$ are Lipschitz continuous functions, and $u_0$ is a bounded, integrable function with finite total variation.

a. Show that the solution $u$ is Lipschitz continuous in time; that is,
$$\|u(t) - u(s)\|_1 \le C\,\mathrm{T.V.}(u_0)\,|t-s|.$$

b. Let $v_0$ be another function with the same properties as $u_0$. Show that if $u_0\le v_0$, then also $u\le v$ almost everywhere, where $v$ is the solution with initial data $v_0$.

4.2 Consider the initial value problem
$$u_t + f(u)_x = 0,\qquad u|_{t=0} = u_0,\tag{4.92}$$
where $f$ is a Lipschitz continuous function and $u_0$ is a bounded, integrable function with finite total variation. Write $f = f_1 + f_2$
and let $S_j(t)u_0$ denote the solution of
$$u_t + f_j(u)_x = 0,\qquad u|_{t=0} = u_0.$$
Prove that operator splitting converges to the solution of (4.92). Determine the convergence rate.

4.3 Consider the heat equation in $\mathbb{R}^m$,
$$u_t = \sum_{i=1}^m\frac{\partial^2 u}{\partial x_i^2},\qquad u(x,0) = u_0(x).\tag{4.93}$$
∂2u , ∂x2i
u(x, 0) = u0 (x),
m 1 n un (x) = HΔ u0 (x), t ◦ · · · ◦ HΔ t
j j−1 1 n un+j/m (x) = HΔ t ◦ HΔt ◦ · · · ◦ HΔt u (x),
for j = 1, . . . , m, and n ≥ 0. For t in the interval [tn + ((j − 1)/m)Δt, tn + (j/m)Δt] define j uΔt(x, t) = Hm(t−t un+(j−1)/m (x). n+(j−1)/m )
If the initial function u0 (x) is bounded and of bounded variation, show that {uΔt } converges in C([0, T ]; L1loc (Rm )) to a weak solution of (4.93). 4.4 We consider the viscous conservation law in one space dimension ut + f (u)x = uxx ,
u(x, 0) = u0 (x),
(4.94)
where f satisfies the “usual” assumptions and u0 is in L1 ∩ BV . Consider the following scheme based on operator splitting:
  $U_j^{n+1/2} = \frac{1}{2}\left(U_{j+1}^n + U_{j-1}^n\right) - \frac{\lambda}{2}\left(f(U_{j+1}^n) - f(U_{j-1}^n)\right),$
  $U_j^{n+1} = U_j^{n+1/2} + \mu\left(U_{j+1}^{n+1/2} - 2U_j^{n+1/2} + U_{j-1}^{n+1/2}\right),$
for n ≥ 0, where $\lambda = \Delta t/\Delta x$ and $\mu = \Delta t/\Delta x^2$. Set
  $U_j^0 = \frac{1}{\Delta x}\int_{(j-1/2)\Delta x}^{(j+1/2)\Delta x} u_0(x)\,dx.$
We see that we use the Lax–Friedrichs scheme for the conservation law and an explicit difference scheme for the heat equation. Let
  $u_{\Delta t}(x,t) = U_j^n \quad\text{for } (j - \tfrac12)\Delta x \le x < (j + \tfrac12)\Delta x \text{ and } n\Delta t < t \le (n+1)\Delta t.$
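The two half-steps of this splitting can be sketched in a few lines. The following is a minimal illustration only, with Burgers' flux $f(u) = u^2/2$ and periodic boundaries chosen as assumptions for concreteness; the exercise itself allows any flux satisfying the usual hypotheses.

```python
import numpy as np

# Sketch of the splitting scheme of Exercise 4.4: a Lax-Friedrichs step for
# u_t + f(u)_x = 0 followed by an explicit step for u_t = u_xx.
def split_step(U, dx, dt, f):
    lam, mu = dt / dx, dt / dx**2
    Up = np.roll(U, -1)   # U_{j+1} (periodic boundary, an assumption here)
    Um = np.roll(U, 1)    # U_{j-1}
    V = 0.5 * (Up + Um) - 0.5 * lam * (f(Up) - f(Um))   # Lax-Friedrichs
    return V + mu * (np.roll(V, -1) - 2 * V + np.roll(V, 1))  # heat step

N = 200
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
dt = 0.25 * dx**2            # mu = 1/4 <= 1/2: CFL for the explicit heat step
f = lambda u: 0.5 * u**2     # illustrative flux (Burgers), not fixed by the text
U = np.sin(2 * np.pi * x)    # smooth periodic initial data
for _ in range(200):
    U = split_step(U, dx, dt, f)
```

Both half-steps conserve the discrete integral of U under periodic boundaries, and under the CFL condition the combined step is monotone, which is what parts (a) and (b) of the exercise ask you to verify.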
a. Show that this gives a monotone and consistent scheme, provided that a CFL condition holds.
b. Show that there is a sequence of Δt's such that $u_{\Delta t}$ converges to a weak solution of (4.94) as $\Delta t \to 0$.
4.5 We outline a proof of some Gronwall inequalities.
a. Assume that u satisfies $u'(t) \le \gamma u(t)$. Show that $u(t) \le e^{\gamma t}u(0)$.
b. Assume now that u satisfies $u'(t) \le C(1 + u(t))$. Show that $u(t) \le e^{Ct}(1 + u(0)) - 1$.
c. Assume that u satisfies $u'(t) \le c(t)u(t) + d(t)$ for $0 \le t \le T$, where c(t) and d(t) are in $L^1([0,T])$. Show that
  $u(t) \le \left(u(0) + \int_0^t d(s)\exp\left(-\int_0^s c(\tilde s)\,d\tilde s\right)ds\right)\exp\left(\int_0^t c(s)\,ds\right)$
for $t \le T$.
d. Assume that u is in $L^1([0,T])$ and that for $t \in [0,T]$,
  $u(t) \le C_1\int_0^t u(s)\,ds + C_2.$
Show that $u(t) \le C_2 e^{C_1 t}$.
e. Assume that u, f, and g are in $L^1([0,T])$, and that g is nonnegative, while f is strictly positive and nondecreasing. Assume that
  $u(t) \le f(t) + \int_0^t g(s)u(s)\,ds, \quad t \in [0,T].$
Show that
  $u(t) \le f(t)\exp\left(\int_0^t g(s)\,ds\right), \quad t \in [0,T].$
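The basic mechanism behind all the inequalities in this exercise is the integrating-factor argument; for part (a) it can be sketched as follows (a standard computation, not spelled out in the text):

```latex
u'(t) \le \gamma u(t)
\;\Longrightarrow\;
\frac{d}{dt}\left(e^{-\gamma t}u(t)\right)
  = e^{-\gamma t}\bigl(u'(t) - \gamma u(t)\bigr) \le 0
\;\Longrightarrow\;
e^{-\gamma t}u(t) \le u(0)
\;\Longrightarrow\;
u(t) \le e^{\gamma t}u(0).
```

Parts (b)–(e) follow the same pattern, with the exponential of the appropriate integral of the coefficient playing the role of $e^{-\gamma t}$.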
4.6 Assume that u and v are entropy solutions of
  $u_t + f(u)_x = g(u), \quad u(x,0) = u_0(x),$
  $v_t + f(v)_x = g(v), \quad v(x,0) = v_0(x),$
where u0 and v0 are in $L^1(\mathbb{R}) \cap BV(\mathbb{R})$, and f and g satisfy the assumptions of Theorem 4.13.
a. Use the entropy formulation (4.80) and mimic the arguments used to prove (2.54) to show that for any nonnegative test function ψ,
  $\iint \left(|u - v|\,\psi_t + q(u,v)\,\psi_x\right)dt\,dx - \int |u(x,T) - v(x,T)|\,\psi(x,T)\,dx + \int |u_0(x) - v_0(x)|\,\psi(x,0)\,dx \ge \iint \mathrm{sign}(u - v)\,(g(u) - g(v))\,\psi\,dt\,dx.$
b. Define ψ(x, t) by (2.55), and set
  $h(t) = \int |u(x,t) - v(x,t)|\,\psi(x,t)\,dx.$
Show that
  $h(T) \le h(0) + \gamma\int_0^T h(t)\,dt,$
where γ denotes the Lipschitz constant of g. Use the previous exercise to conclude that
  $h(T) \le h(0)\left(1 + \gamma T e^{\gamma T}\right).$
c. Show that
  $\|u(\,\cdot\,,t) - v(\,\cdot\,,t)\|_1 \le \|u_0 - v_0\|_1\left(1 + \gamma t e^{\gamma t}\right),$
and hence that entropy solutions of (4.91) are unique. Note that this implies that $\{u_{\Delta t}\}$ defined by (4.84) converges to the entropy solution for any sequence $\{\Delta t\}$ such that $\Delta t \to 0$.
4.7 We consider the case where the source depends on (x, t). For $u_0 \in L^1_{\mathrm{loc}} \cap BV$, let u be an entropy solution of
  $u_t + f(u)_x = g(x,t,u), \quad u(x,0) = u_0(x),$  (4.95)
where g is bounded for each fixed u, continuous in t, and satisfies
  $|g(x,t,u) - g(x,t,v)| \le \gamma\,|u - v|, \qquad \mathrm{T.V.}(g(\,\cdot\,,t,u)) \le b(t),$
where the constant γ is independent of x and t, for all u and v, and b(t) is a bounded function in $L^1([0,T])$. We let $S_t$ be as before, and let $R(x,t,s)u_0$ denote the solution of
  $u'(t) = g(x,t,u) \text{ for } t > s, \quad u(s) = u_0.$
a. Define an operator splitting approximation $u_{\Delta t}$ using $S_t$ and $R(x,t,s)$.
b. Show that there is a sequence of Δt's such that $u_{\Delta t}$ converges in $C([0,T]; L^1_{\mathrm{loc}}(\mathbb{R}))$ to a function of bounded variation u.
c. Show that u is an entropy solution of (4.95).
4.8 Show that if the initial data u0 of the heat equation $u_t = \Delta u$ is smooth, that is, $u_0 \in C_0^\infty$, then
  $\|u(t) - u_0\|_1 \le C\,t.$
Compare this result with (4.4).
5 The Riemann Problem for Systems
Diese Untersuchung macht nicht darauf Anspruch, der experimentellen Forschung nützliche Ergebnisse zu liefern; der Verfasser wünscht sie nur als einen Beitrag zur Theorie der nicht linearen partiellen Differentialgleichungen betrachtet zu sehen.¹
G. F. B. Riemann [119]
We return to the conservation law (1.2), but now study the case of systems, i.e.,
  $u_t + f(u)_x = 0,$  (5.1)
where $u = u(x,t) = (u_1,\dots,u_n)$ and $f = f(u) = (f_1,\dots,f_n) \in C^2$ are vectors in $\mathbb{R}^n$. (We will not distinguish between row and column vectors, and use whatever is more convenient.) Furthermore, in this chapter we will consider only systems on the line; i.e., the dimension of the underlying physical space is still one. In Chapter 2 we proved existence, uniqueness, and stability of the Cauchy problem for the scalar conservation law in one space dimension, i.e., well-posedness in the sense of Hadamard. However, this is a more subtle question in the case of systems of hyperbolic conservation laws. We will here first discuss the basic concepts for systems:
¹ The present work does not claim to lead to results in experimental research; the author asks only that it be considered as a contribution to the theory of nonlinear partial differential equations.
H. Holden and N.H. Risebro, Front Tracking for Hyperbolic Conservation Laws, Applied Mathematical Sciences 152, DOI 10.1007/978-3-642-23911-3_5, © Springer-Verlag Berlin Heidelberg 2011
fundamental properties of shock waves and rarefaction waves. In particular, we will discuss various entropy conditions to select the right solutions of the Rankine–Hugoniot relations. Using these results we will eventually be able to prove well-posedness of the Cauchy problem for systems of hyperbolic conservation laws with small variation in the initial data.
5.1 Hyperbolicity and Some Examples

Before we start to define the basic properties of systems of hyperbolic conservation laws we discuss some important and interesting examples. The first example is a model for shallow-water waves and will be used throughout this chapter both as a motivation and an example in which all the basic quantities will be explicitly computed.
♦ Example 5.1 (Shallow water).
Water shapes its course according to the nature of the ground over which it flows.
Sun Tzu, The Art of War (6th–5th century B.C.)
We will now give a brief derivation of the equations governing shallow-water waves in one space dimension, or, if we want, the long-wave approximation. Consider a one-dimensional channel along the x-axis with a perfect, inviscid fluid with constant density ρ, and assume that the bottom of the channel is horizontal.

Figure 5.1. A shallow channel.
In the long-wave or shallow-water approximation we assume that the fluid velocity v is a function only of time and the position along the channel measured along the x-axis. Thus we assume that there is no vertical motion in the fluid. The distance of the surface of the fluid from the bottom is denoted by h = h(x, t). The fluid flow is governed by conservation of mass and conservation of momentum.
Consider first the conservation of mass of the system. Let x1 < x2 be two points along the channel. The change of mass of fluid between these points is given by
  $\frac{d}{dt}\int_{x_1}^{x_2}\!\int_0^{h(x,t)} \rho\,dy\,dx = -\int_0^{h(x_2,t)} \rho v(x_2,t)\,dy + \int_0^{h(x_1,t)} \rho v(x_1,t)\,dy.$
Assuming smoothness of the functions and domains involved, we may rewrite the right-hand side as an integral of the derivative of ρvh. We obtain
  $\frac{d}{dt}\int_{x_1}^{x_2}\!\int_0^{h(x,t)} \rho\,dy\,dx = -\int_{x_1}^{x_2} \frac{\partial}{\partial x}\left(\rho v(x,t)h(x,t)\right)dx,$
or
  $\int_{x_1}^{x_2}\left(\frac{\partial}{\partial t}\left(\rho h(x,t)\right) + \frac{\partial}{\partial x}\left(\rho v(x,t)h(x,t)\right)\right)dx = 0.$
Dividing by $(x_2 - x_1)\rho$ and letting $x_2 - x_1 \to 0$, we obtain the familiar
  $h_t + (vh)_x = 0.$  (5.2)
Observe the similarity in the derivation of (5.2) and (1.22). In fact, in the derivation of (1.22) we started by considering individual cars before we made the continuum assumption corresponding to high traffic densities, thereby obtaining (1.22), while in the derivation of (5.2) we simply assumed a priori that the fluid constituted a continuum, and formulated mass conservation directly in the continuum variables.
For the derivation of the equation describing the conservation of momentum we have to assume that the fluid is in hydrostatic balance. For that we introduce the pressure P = P(x, y, t) and consider a small element of the fluid $[x_1, x_2]\times[y, y+\Delta y]$. Hydrostatic balance means that the pressure exactly balances the effect of gravity, or
  $\left(P(\tilde x, y+\Delta y, t) - P(\tilde x, y, t)\right)(x_2 - x_1) = -(x_2 - x_1)\rho g\,\Delta y$
for some $\tilde x \in [x_1, x_2]$, where g is the acceleration due to gravity. Dividing by $(x_2 - x_1)\Delta y$ and taking $x_1, x_2 \to x$, $\Delta y \to 0$, we find that
  $\frac{\partial P}{\partial y}(x,y,t) = -\rho g.$
Integrating and normalizing the pressure to be zero at the fluid surface, we conclude that
  $P(x,y,t) = \rho g\left(h(x,t) - y\right).$  (5.3)
If we now again study the fluid between two points x1 < x2 along the channel, and compute the change of momentum for this part of the fluid, we obtain
  $\frac{\partial}{\partial t}\int_{x_1}^{x_2}\!\int_0^{h(x,t)} \rho v(x,t)\,dy\,dx = -\int_0^{h(x_2,t)} P(x_2,y,t)\,dy + \int_0^{h(x_1,t)} P(x_1,y,t)\,dy - \int_0^{h(x_2,t)} \rho v(x_2,t)^2\,dy + \int_0^{h(x_1,t)} \rho v(x_1,t)^2\,dy.$
In analogy with the derivation of the equation for conservation of mass we may rewrite this, using (5.3), as
  $\frac{\partial}{\partial t}\int_{x_1}^{x_2} \rho v h\,dx = -\rho g\left(\frac{1}{2}h(x_2,t)^2 - \frac{1}{2}h(x_1,t)^2\right) - \int_{x_1}^{x_2}\frac{\partial}{\partial x}\left(\rho h v^2\right)dx = -\rho g\int_{x_1}^{x_2}\frac{\partial}{\partial x}\,\frac{1}{2}h^2\,dx - \int_{x_1}^{x_2}\frac{\partial}{\partial x}\left(\rho v^2 h\right)dx.$
Dividing again by $(x_2 - x_1)\rho$ and letting $x_2 - x_1 \to 0$, scaling g to unity, we obtain
  $(vh)_t + \left(v^2 h + \frac{1}{2}h^2\right)_x = 0.$  (5.4)
To summarize, we have the following system of conservation laws:
  $h_t + (vh)_x = 0, \qquad (vh)_t + \left(v^2 h + \frac{1}{2}h^2\right)_x = 0,$  (5.5)
where h and v denote the height (depth) and velocity of the fluid, respectively. Introducing the variable q defined by
  $q = vh,$  (5.6)
we may rewrite the shallow-water equations as
  $\begin{pmatrix} h \\ q \end{pmatrix}_t + \begin{pmatrix} q \\ \dfrac{q^2}{h} + \dfrac{h^2}{2} \end{pmatrix}_x = 0,$  (5.7)
which is the form we will study in detail later on in this chapter.
♦
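The conservative form (5.7) is what a finite-volume method discretizes directly. As a quick illustration (and only a sketch; the book's own method is front tracking, and the Lax–Friedrichs scheme, grid sizes, and dam-break data below are assumptions chosen for simplicity):

```python
import numpy as np

# Minimal Lax-Friedrichs solver for the shallow-water system (5.7),
# state U = (h, q), flux F(U) = (q, q^2/h + h^2/2), periodic boundaries.
def flux(U):
    h, q = U
    return np.array([q, q**2 / h + h**2 / 2])

N, T = 400, 0.2
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
U = np.where(x < 0, [[1.0], [0.0]], [[0.5], [0.0]])  # dam-break initial data
mass0 = U[0].sum()                                   # discrete water mass
t = 0.0
while t < T:
    h, q = U
    cmax = np.max(np.abs(q / h) + np.sqrt(h))        # max |v| + sqrt(h)
    dt = 0.4 * dx / cmax                             # CFL time step
    F = flux(U)
    Up, Um = np.roll(U, -1, axis=1), np.roll(U, 1, axis=1)
    Fp, Fm = np.roll(F, -1, axis=1), np.roll(F, 1, axis=1)
    U = 0.5 * (Up + Um) - 0.5 * (dt / dx) * (Fp - Fm)
    t += dt
```

The time step is limited by the largest characteristic speed $|v| + \sqrt{h}$, anticipating the eigenvalues computed in Example 5.5; the scheme conserves the discrete totals of h and q exactly.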
♦ Example 5.2 (The wave equation).
Let φ = φ(x, t) denote the transverse position away from equilibrium of a one-dimensional string. If we assume that the amplitude of the transversal waves is small, we obtain the wave equation
  $\phi_{tt} = \left(c^2\phi_x\right)_x,$  (5.8)
where c denotes the wave speed. Introducing new variables u = φx and v = φt, we find that (5.8) may be written as the system
  $\begin{pmatrix} u \\ v \end{pmatrix}_t - \begin{pmatrix} v \\ c^2 u \end{pmatrix}_x = 0.$  (5.9)
If c does not depend on φ, we recover the classical linear wave equation $\phi_{tt} = c^2\phi_{xx}$. ♦
♦ Example 5.3 (The p-system).
The p-system is a classical model of an isentropic gas, where one has conservation of mass and momentum, but not of energy. In Lagrangian coordinates it is described by
  $\begin{pmatrix} v \\ u \end{pmatrix}_t + \begin{pmatrix} -u \\ p(v) \end{pmatrix}_x = 0.$  (5.10)
Here v denotes specific volume, that is, the inverse of the density; u is the velocity; and p denotes the pressure. ♦
♦ Example 5.4 (The Euler equations).
The Euler equations are commonly used to model gas dynamics. They can be written in several forms, depending on the physical assumptions used and variables selected to describe them. Let it suffice here to describe the case where ρ denotes the density, v velocity, p pressure, and E the energy. Conservation of mass and momentum give $\rho_t + (\rho v)_x = 0$ and $(\rho v)_t + (\rho v^2 + p)_x = 0$, respectively. The total energy can be written as $E = \frac{1}{2}\rho v^2 + \rho e$, where e denotes the specific internal energy. Furthermore, we assume that there is a relation between this quantity and the density and pressure, viz., e = e(ρ, p). Conservation of energy now reads $E_t + (v(E+p))_x = 0$, yielding finally the system
  $\begin{pmatrix} \rho \\ \rho v \\ E \end{pmatrix}_t + \begin{pmatrix} \rho v \\ \rho v^2 + p \\ v(E+p) \end{pmatrix}_x = 0.$  (5.11)
♦
We will have to make assumptions on the (vector-valued) function f so that many of the properties of the scalar case carry over to the case of systems. In order to have finite speed of propagation, which characterizes hyperbolic equations, we have to assume that the Jacobian of f, denoted by df, has n real eigenvalues:
  $df(u)\,r_j(u) = \lambda_j(u)\,r_j(u), \quad \lambda_j(u) \in \mathbb{R}, \quad j = 1,\dots,n.$  (5.12)
(We will later normalize the eigenvectors rj(u).) Furthermore, we order the eigenvalues
  $\lambda_1(u) \le \lambda_2(u) \le \cdots \le \lambda_n(u).$  (5.13)
A system with a full set of eigenvectors with real eigenvalues is called hyperbolic, and if all the eigenvalues are distinct, we say that the system is strictly hyperbolic. Let us look at the shallow-water model to see whether that system is hyperbolic.
♦ Example 5.5 (Shallow water (cont'd.)).
In the case of the shallow-water equations (5.7) we easily find that
  $\lambda_1(u) = \frac{q}{h} - \sqrt{h} < \frac{q}{h} + \sqrt{h} = \lambda_2(u),$  (5.14)
with corresponding eigenvectors
  $r_j(u) = \begin{pmatrix} 1 \\ \lambda_j(u) \end{pmatrix},$  (5.15)
and thus the shallow-water equations are strictly hyperbolic away from h = 0. ♦
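The eigenvalues and eigenvectors claimed in (5.14)–(5.15) can be verified symbolically. The following check uses SymPy (the use of SymPy is an assumption of this sketch, not part of the text):

```python
import sympy as sp

h, q, x = sp.symbols('h q x', positive=True)
f = sp.Matrix([q, q**2 / h + h**2 / 2])    # flux vector of (5.7)
df = f.jacobian(sp.Matrix([h, q]))         # Jacobian df(u)

lam1 = q / h - sp.sqrt(h)                  # claimed eigenvalues (5.14)
lam2 = q / h + sp.sqrt(h)
char = (df - x * sp.eye(2)).det()          # characteristic polynomial in x
assert sp.simplify(char.subs(x, lam1)) == 0
assert sp.simplify(char.subs(x, lam2)) == 0

r1 = sp.Matrix([1, lam1])                  # eigenvector (1, lambda_1)^T, cf. (5.15)
assert all(sp.simplify(e) == 0 for e in (df * r1 - lam1 * r1))
```

Since $\lambda_1 \ne \lambda_2$ whenever h > 0, this confirms strict hyperbolicity away from h = 0.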
5.2 Rarefaction Waves

Natura non facit saltus.²
Carl Linnaeus, Philosophia Botanica (1751)
Let us consider smooth solutions for the initial value problem
  $u_t + f(u)_x = 0,$  (5.16)
with Riemann initial data
  $u(x,0) = \begin{cases} u_l & \text{for } x < 0, \\ u_r & \text{for } x \ge 0. \end{cases}$  (5.17)
First we observe that since both the initial data and the equation are scale-invariant or self-similar, i.e., invariant under the map $x \mapsto kx$ and $t \mapsto kt$, the solution should also have that property. Let us therefore search for solutions of the form
  $u(x,t) = w(x/t) = w(\xi), \quad \xi = x/t.$  (5.18)
² Nature does not make jumps.
Inserting this into the differential equation (5.16) we find that
  $-\frac{x}{t^2}\,\dot w + \frac{1}{t}\,df(w)\,\dot w = 0,$  (5.19)
or
  $df(w)\,\dot w = \xi\,\dot w,$  (5.20)
where $\dot w$ denotes the derivative of w with respect to the one variable ξ. Hence we observe that $\dot w$ is an eigenvector for the Jacobian df(w) with eigenvalue ξ. From our assumptions on the flux function we know that df(w) has n eigenvectors given by $r_1,\dots,r_n$, with corresponding eigenvalues $\lambda_1,\dots,\lambda_n$. This implies
  $\dot w(\xi) = r_j(w(\xi)), \quad \lambda_j(w(\xi)) = \xi,$  (5.21)
for a value of j. Assume in addition that
  $w(\lambda_j(u_l)) = u_l, \quad w(\lambda_j(u_r)) = u_r.$  (5.22)
Thus for a fixed time t, the function w(x/t) will continuously connect the given left state $u_l$ to the given right state $u_r$. This means that ξ is increasing, and hence $\lambda_j(w(x/t))$ has to be increasing. If this is the case, we have a solution of the form
  $u(x,t) = \begin{cases} u_l & \text{for } x \le \lambda_j(u_l)t, \\ w(x/t) & \text{for } \lambda_j(u_l)t \le x \le \lambda_j(u_r)t, \\ u_r & \text{for } x \ge \lambda_j(u_r)t, \end{cases}$  (5.23)
where w(ξ) satisfies (5.21) and (5.22). We call these solutions rarefaction waves, a name that comes from applications to gas dynamics. Furthermore, we observe that the normalization of the eigenvector $r_j(u)$ is also determined from (5.21), namely,
  $\nabla\lambda_j(u)\cdot r_j(u) = 1,$  (5.24)
which follows by taking the derivative with respect to ξ. But this also imposes an extra condition on the eigenvector fields, since we clearly have to have a nonvanishing scalar product between rj (u) and ∇λj (u) to be able to normalize the eigenvector properly. It so happens that in most applications this can be done. However, the Euler equations of gas dynamics have the property that in one of the eigenvector families the eigenvector and the gradient of the corresponding eigenvalue are orthogonal. We say that the jth family is genuinely nonlinear if ∇λj (u) · rj (u) = 0 and linearly degenerate if ∇λj (u) · rj (u) ≡ 0 for all u under consideration. We will not discuss mixed cases where a wave family is linearly degenerate only in certain regions in phase space, e.g., along curves or at isolated points. Before we discuss these two cases separately, we will make a slight but important change in point of view. Instead of considering given left and right states as in (5.17), we will assume only that ul is given, and consider
those states ur for which we have a rarefaction wave solution. From (5.21) and (5.23) we see that for each point ul in phase space there are n curves emanating from ul on which ur can lie allowing a solution of the form (5.23). Each of these curves is given as integral curves of the vector fields of eigenvectors of the Jacobian df(u). Thus our phase space is now the ur space. We may sum up the above discussion in the genuinely nonlinear case by the following theorem.
Theorem 5.6. Let D be a domain in $\mathbb{R}^n$. Consider the strictly hyperbolic equation $u_t + f(u)_x = 0$ in D and assume that the equation is genuinely nonlinear in the jth wave family in D. Let the jth eigenvector $r_j(u)$ of df(u) with corresponding eigenvalue $\lambda_j(u)$ be normalized so that $\nabla\lambda_j(u)\cdot r_j(u) = 1$ in D. Let $u_l \in D$. Then there exists a curve $R_j(u_l)$ in D, emanating from $u_l$, such that for each $u_r \in R_j(u_l)$ the initial value problem (5.16), (5.17) has the weak solution
  $u(x,t) = \begin{cases} u_l & \text{for } x \le \lambda_j(u_l)t, \\ w(x/t) & \text{for } \lambda_j(u_l)t \le x \le \lambda_j(u_r)t, \\ u_r & \text{for } x \ge \lambda_j(u_r)t, \end{cases}$  (5.25)
where w satisfies $\dot w(\xi) = r_j(w(\xi))$, $\lambda_j(w(\xi)) = \xi$, $w(\lambda_j(u_l)) = u_l$, and $w(\lambda_j(u_r)) = u_r$.
Proof. The discussion preceding the theorem gives the key computation and the necessary motivation behind the following argument. Assume that we have a strictly hyperbolic, genuinely nonlinear conservation law with an appropriately normalized jth eigenvector. Due to the assumptions on f, the system of ordinary differential equations
  $\dot w(\xi) = r_j(w(\xi)), \quad w(\lambda_j(u_l)) = u_l$  (5.26)
has a solution for all $\xi \in [\lambda_j(u_l),\, \lambda_j(u_l) + \eta)$ for some η > 0. For this solution we have
  $\frac{d}{d\xi}\,\lambda_j(w(\xi)) = \nabla\lambda_j(w(\xi))\cdot\dot w(\xi) = 1,$  (5.27)
proving the second half of (5.21). We denote the orbit of (5.26) by $R_j(u_l)$. If we define u(x, t) by (5.25), a straightforward calculation shows that u indeed satisfies both the equation and the initial data. Observe that we can also solve (5.26) for ξ less than $\lambda_j(u_l)$. However, in that case $\lambda_j(u)$ will be decreasing. We remark that the solution u in (5.25) is continuous, but not necessarily differentiable, and hence is not necessarily a regular, but rather a weak, solution.
We will now introduce a different parameterization of the rarefaction curve $R_j(u_l)$, which will be convenient in Section 5.5 when we construct
the wave curves for the solution of the Riemann problem. From (5.27) we see that $\lambda_j(u)$ is increasing along $R_j(u_l)$, and hence we may define the positive parameter ε by $\epsilon := \xi - \xi_l = \lambda_j(u) - \lambda_j(u_l) > 0$. We denote the corresponding u by $u_{j,\epsilon}$, that is, $u_{j,\epsilon} = w(\xi) = w(\epsilon + \lambda_j(u_l))$. Clearly,
  $\left.\frac{du_{j,\epsilon}}{d\epsilon}\right|_{\epsilon=0} = r_j(u_l).$  (5.28)
Assume now that the system is linearly degenerate in the family j, i.e., $\nabla\lambda_j(u)\cdot r_j(u) \equiv 0$. Consider the system of ordinary differential equations
  $\frac{du}{d\epsilon} = r_j(u), \quad u|_{\epsilon=0} = u_l,$  (5.29)
with solution $u = u_{j,\epsilon}$ for $\epsilon \in (-\eta, \eta)$ for some η > 0. We denote this orbit by $C_j(u_l)$, along which $\lambda_j(u_{j,\epsilon})$ is constant, since
  $\frac{d}{d\epsilon}\,\lambda_j(u_{j,\epsilon}) = \nabla\lambda_j(u_{j,\epsilon})\cdot r_j(u_{j,\epsilon}) = 0.$
Furthermore, the Rankine–Hugoniot condition is satisfied on $C_j(u_l)$ with speed $\lambda_j(u_l)$, because
  $\frac{d}{d\epsilon}\left(f(u_{j,\epsilon}) - \lambda_j(u_l)u_{j,\epsilon}\right) = \left(df(u_{j,\epsilon}) - \lambda_j(u_l)\right)\frac{du_{j,\epsilon}}{d\epsilon} = \left(df(u_{j,\epsilon}) - \lambda_j(u_l)\right)r_j(u_{j,\epsilon}) = \left(df(u_{j,\epsilon}) - \lambda_j(u_{j,\epsilon})\right)r_j(u_{j,\epsilon}) = 0,$
which implies that $f(u_{j,\epsilon}) - \lambda_j(u_l)u_{j,\epsilon} = f(u_l) - \lambda_j(u_l)u_l$. Let $u_r \in C_j(u_l)$, i.e., $u_r = u_{j,\epsilon_0}$ for some $\epsilon_0$. It follows that
  $u(x,t) = \begin{cases} u_l & \text{for } x < \lambda_j(u_l)t, \\ u_r & \text{for } x \ge \lambda_j(u_l)t, \end{cases}$
is a weak solution of the Riemann problem (5.16), (5.17). We call this solution a contact discontinuity. We sum up the above discussion concerning linearly degenerate waves in the following theorem.
Theorem 5.7. Let D be a domain in $\mathbb{R}^n$. Consider the strictly hyperbolic equation $u_t + f(u)_x = 0$ in D. Assume that the equation is linearly degenerate in the jth wave family in D, i.e., $\nabla\lambda_j(u)\cdot r_j(u) \equiv 0$ in D, where $r_j(u)$ is the jth eigenvector of df(u) with corresponding eigenvalue $\lambda_j(u)$. Let $u_l \in D$. Then there exists a curve $C_j(u_l)$ in D, passing through $u_l$, such that for each $u_r \in C_j(u_l)$ the initial value problem (5.16), (5.17) has the solution
  $u(x,t) = \begin{cases} u_l & \text{for } x < \lambda_j(u_l)t, \\ u_r & \text{for } x \ge \lambda_j(u_l)t, \end{cases}$  (5.30)
where ur is determined as follows: Consider the function $\epsilon \mapsto u_\epsilon$ determined by $\frac{du_\epsilon}{d\epsilon} = r_j(u_\epsilon)$, $u|_{\epsilon=0} = u_l$. Then $u_r = u_{\epsilon_0}$ for some $\epsilon_0$.
♦ Example 5.8 (Shallow water (cont'd.)).
Let us now consider the actual computation of rarefaction waves in the case of shallow-water waves. Recall that
  $u = \begin{pmatrix} h \\ q \end{pmatrix}, \quad f(u) = \begin{pmatrix} q \\ \dfrac{q^2}{h} + \dfrac{h^2}{2} \end{pmatrix},$
with eigenvalues $\lambda_j = \frac{q}{h} + (-1)^j\sqrt{h}$, and corresponding eigenvectors $r_j(u) = \begin{pmatrix} 1 \\ \lambda_j(u) \end{pmatrix}$. With this normalization of rj, we obtain
  $\nabla\lambda_j(u)\cdot r_j(u) = \frac{3(-1)^j}{2\sqrt{h}},$  (5.31)
and hence we see that the shallow-water equations are genuinely nonlinear in both wave families. From now on we will renormalize the eigenvectors to satisfy (5.24), viz.,
  $r_j(u) = (-1)^j\,\frac{2}{3}\sqrt{h}\begin{pmatrix} 1 \\ \lambda_j(u) \end{pmatrix}.$  (5.32)
For the 1-family we have that
  $\dot h = -\frac{2}{3}\sqrt{h}, \qquad \dot q = -\frac{2}{3}\sqrt{h}\left(\frac{q}{h} - \sqrt{h}\right),$  (5.33)
implying that
  $\frac{dq}{dh} = \lambda_1 = \frac{q}{h} - \sqrt{h},$
which can be integrated to yield
  $q = q(h) = q_l\,\frac{h}{h_l} - 2h\left(\sqrt{h} - \sqrt{h_l}\right).$  (5.34)
Since $\lambda_1(u)$ has to increase along the rarefaction wave, we see from (5.14) (inserting the expression (5.34) for q) that we have to use h ≤ hl in (5.34). For the second family we again obtain
  $\frac{dq}{dh} = \lambda_2 = \frac{q}{h} + \sqrt{h},$
yielding
  $q = q(h) = q_l\,\frac{h}{h_l} + 2h\left(\sqrt{h} - \sqrt{h_l}\right).$  (5.35)
In this case we see that we have to use h ≥ hl. Observe that (5.34) and (5.35) would follow for any normalization of the eigenvector rj(u).
Summing up, we obtain the following rarefaction waves expressed in terms of h:
  $R_1:\quad q = R_1(h; u_l) := q_l\,\frac{h}{h_l} - 2h\left(\sqrt{h} - \sqrt{h_l}\right), \quad h \in (0, h_l],$  (5.36)
  $R_2:\quad q = R_2(h; u_l) := q_l\,\frac{h}{h_l} + 2h\left(\sqrt{h} - \sqrt{h_l}\right), \quad h \ge h_l.$  (5.37)
Alternatively, in the (h, v) variables (with v = q/h) we have the following:
  $R_1:\quad v = R_1(h; u_l) := v_l - 2\left(\sqrt{h} - \sqrt{h_l}\right), \quad h \in (0, h_l],$  (5.38)
  $R_2:\quad v = R_2(h; u_l) := v_l + 2\left(\sqrt{h} - \sqrt{h_l}\right), \quad h \ge h_l.$  (5.39)
Figure 5.2. Rarefaction curves in the (h, v) and (h, q) planes. We have illustrated the full solution of (5.26) for the shallow-water equations. Only the part given by (5.36) and (5.37) will be actual rarefaction curves.
However, if we want to compute the rarefaction curves in terms of the parameter ξ or ε, we have to use the proper normalization of the eigenvectors given by (5.32). Consider first the 1-family. We obtain
  $\dot h = -\frac{2}{3}\sqrt{h}, \qquad \dot q = \frac{2}{3}\left(-\frac{q}{\sqrt{h}} + h\right).$  (5.40)
Integrating the first equation directly and inserting the result into the second equation, we obtain
  $w_1(\xi) = \begin{pmatrix} h_1 \\ q_1 \end{pmatrix}(\xi) = R_1(\xi; u_l) := \begin{pmatrix} \frac{1}{9}\left(v_l + 2\sqrt{h_l} - \xi\right)^2 \\[2pt] \frac{1}{27}\left(v_l + 2\sqrt{h_l} + 2\xi\right)\left(v_l + 2\sqrt{h_l} - \xi\right)^2 \end{pmatrix},$  (5.41)
for $\xi \in \left[v_l - \sqrt{h_l},\, v_l + 2\sqrt{h_l}\right]$.
Similarly, for the second family we obtain
  $w_2(\xi) = \begin{pmatrix} h_2 \\ q_2 \end{pmatrix}(\xi) = R_2(\xi; u_l) := \begin{pmatrix} \frac{1}{9}\left(-v_l + 2\sqrt{h_l} + \xi\right)^2 \\[2pt] \frac{1}{27}\left(v_l - 2\sqrt{h_l} + 2\xi\right)\left(-v_l + 2\sqrt{h_l} + \xi\right)^2 \end{pmatrix},$  (5.42)
for $\xi \in \left[\lambda_2(u_l), \infty\right)$. Hence the actual solution reads
  $u(x,t) = \begin{cases} u_l & \text{for } x \le \lambda_j(u_l)t, \\ R_j(x/t; u_l) & \text{for } \lambda_j(u_l)t \le x \le \lambda_j(u_r)t, \\ u_r & \text{for } x \ge \lambda_j(u_r)t. \end{cases}$  (5.43)
In the (h, v) variables (with v = q/h) we obtain
  $v_1(\xi) = \frac{1}{3}\left(v_l + 2\sqrt{h_l} + 2\xi\right)$  (5.44)
and
  $v_2(\xi) = \frac{1}{3}\left(v_l - 2\sqrt{h_l} + 2\xi\right),$  (5.45)
for the first and the second families, respectively.
In terms of the parameter ε we may write (5.41) as
  $u_{1,\epsilon} = \begin{pmatrix} h_{1,\epsilon} \\ q_{1,\epsilon} \end{pmatrix} = R_{1,\epsilon}(u_l) := \begin{pmatrix} \left(\sqrt{h_l} - \frac{\epsilon}{3}\right)^2 \\[2pt] \left(v_l + \frac{2\epsilon}{3}\right)\left(\sqrt{h_l} - \frac{\epsilon}{3}\right)^2 \end{pmatrix}, \quad \epsilon \in \left[0,\, 3\sqrt{h_l}\right],$  (5.46)
and (5.42) as
  $u_{2,\epsilon} = \begin{pmatrix} h_{2,\epsilon} \\ q_{2,\epsilon} \end{pmatrix} = R_{2,\epsilon}(u_l) := \begin{pmatrix} \left(\sqrt{h_l} + \frac{\epsilon}{3}\right)^2 \\[2pt] \left(v_l + \frac{2\epsilon}{3}\right)\left(\sqrt{h_l} + \frac{\epsilon}{3}\right)^2 \end{pmatrix}, \quad \epsilon \in [0, \infty).$  (5.47)
♦
The discussion in Chapter 1 concerning weak solutions, and in particular the Rankine–Hugoniot condition (1.23), carries over to the case of systems without restrictions. However, the concept of entropy is considerably more difficult for systems and is still an area of research. Our main concern in this section is the characterization of solutions of the Rankine–Hugoniot relation. Again, we will take the point of view introduced in the previous
5.3. The Hugoniot Locus: The Shock Curves
175
section, assuming the left state ul to be fixed, and consider possible right states u that satisfy the Rankine–Hugoniot condition s(u − ul ) = f (u) − f (ul ),
(5.48)
for some speed s. We introduce the jump in a quantity φ as [[φ]] = φr − φl , and hence (5.48) takes the familiar form s[[u]] = [[f (u)]]. The solutions of (5.48), for a given left state ul , form a set, which we call the Hugoniot locus and write H(ul ), i.e., H(ul ) := {u | ∃s ∈ R such that s[[u]] = [[f (u)]]} .
(5.49)
We start by computing the Hugoniot locus for the shallow-water equations. ♦ Example 5.9 (Shallow water (cont’d.)). The Rankine–Hugoniot condition reads s(h − hl ) = q − ql , 2 2 q h2 ql h2 s(q − ql ) = + − + l , h 2 hl 2
(5.50)
where s as usual denotes the shock speed between the left state ul = hqll
and right state u = hq , viz., hl for x < st, h (x, t) = hql (5.51) q for x ≥ st. q In the context of the shallow-water equations such solutions are called bores. Eliminating s in (5.50), we obtain the equation 1 2 q2 [[h]] + [[h ]] = [[q]]2 . (5.52) h 2 Introducing the variable v, given by q = vh, equation (5.52) becomes 1 2 2 2 [[h]] [[hv ]] + [[h ]] = [[vh]] , 2 with solution
$ 1 v = vl ± √ (h − hl ) h−1 + h−1 l , 2
or, alternatively, q = vh = ql
$ h h ± √ (h − hl ) h−1 + h−1 l . hl 2
(5.53)
(5.54)
176
5. The Riemann Problem for Systems
For later use, we will also obtain formulas for the corresponding shock speeds. We find that [[vh]] v(h − hl ) + (v − vl )hl = [[h]] h − hl $ [[v]] hl =v+ hl = v ± √ h−1 + h−1 l , [[h]] 2
s=
or s=v+
[[v]] [[v]] h hl = vl + [[v]] + hl = vl ± √ [[h]] [[h]] 2
(5.55)
$ h−1 + h−1 l .
(5.56)
When we want to indicate the wave family, we write $ h sj = sj (h; vl ) = vl + (−1)j √ h−1 + h−1 l 2 $ hl = v + (−1)j √ h−1 + h−1 l . 2
(5.57)
Thus we see that through a given left state ul there are two curves on which the Rankine–Hugoniot relation holds, namely, ! " 4 h $ H1 (ul ) := (5.58) h>0 , ql hhl − √h2 (h − hl ) h−1 + h−1 l and
! H2 (ul ) :=
ql hhl
h $ h + √2 (h − hl ) h−1 + h−1 l
" 4 h>0 .
(5.59)
The Hugoniot locus now reads H(ul ) = { u | ∃s ∈ R such that s[[u]] = [[f (u)]] } = H1 (ul ) ∪ H2 (ul ). ♦ v
q
L
L
h
h S2
S1
S2
S1
Figure 5.3. Shock curves in the (h, v) and (h, q) planes. Slow (S1 ) and fast (S2 ) shocks indicated; see Section 5.4.
5.3. The Hugoniot Locus: The Shock Curves
177
We will soon see that the basic features of the Hugoniot locus of the shallow-water equations carry over to the general case of strictly hyperbolic systems at least for small shocks where u is near ul . The problem to be considered is to solve implicitly the system of n equations H(s, u; ul ) := s(u − ul ) − (f (u) − f (ul )) = 0,
(5.60)
for the n + 1 unknowns u1 , . . . , un and s for u close to the given ul . The major problem comes from the fact that we have one equation fewer than the number of unknowns, and that H(s, ul ; ul ) = 0 for all values of s. Hence the implicit function theorem cannot be used without first removing the singularity at u = ul . Let us first state the relevant version of the implicit function theorem that we will use. Theorem 5.10 (Implicit function theorem). Let the function Φ = (Φ1 , . . . , Φp ) : Rq × Rp → Rp
(5.61)
be C 1 in a neighborhood of a point (x0 , y0 ), x0 ∈ Rq , y0 ∈ Rp with Φ(x0 , y0 ) = 0. Assume that the p × p matrix ⎛ ∂Φ ⎞ 1 1 . . . ∂Φ ∂y1 ∂yp ∂Φ ⎜ .. .. ⎟ .. ⎟ =⎜ (5.62) . . . ⎠ ⎝ ∂y ∂Φp ∂Φp . . . ∂yp ∂y1 is nonsingular at the point (x0 , y0 ). Then there exist a neighborhood N of x0 and a unique differentiable function φ : N → Rp such that Φ(x, φ(x)) = 0,
φ(x0 ) = y0 .
(5.63)
We will rewrite equation (5.60) into an eigenvalue problem that we can study locally around each eigenvalue λj (ul ). This removes the singularity, and hence we can apply the implicit function theorem. Theorem 5.11. Let D be a domain in Rn . Consider the strictly hyperbolic equation ut + f (u)x = 0 in D. Let ul ∈ D. Then there exist n smooth curves H1 (ul ), . . . , Hn (ul ) locally through ul on which the Rankine–Hugoniot relation is satisfied. Proof. Writing f (u) − f (ul ) =
1 0
= 0
1
∂ f ((1 − α)ul + αu) dα ∂α df ((1 − α)ul + αu)(u − ul ) dα
= M (u, ul )(u − ul ),
(5.64)
178
5. The Riemann Problem for Systems
where M (u, ul ) is the averaged Jacobian 1 M (u, ul ) = df ((1 − α)ul + αu) dα, 0
we see that (5.60) takes the form H(s, u, ul ) = (s − M (u, ul ))(u − ul ) = 0.
(5.65)
Here u−ul is an eigenvector of the matrix M with eigenvalue s. The matrix M (ul , ul ) = df (ul ) has n distinct eigenvalues λ1 (ul ), . . . , λn (ul ), and hence we know that there exists an open set N such that the matrix M (u, ul ) has twice-differentiable eigenvectors and distinct eigenvalues, namely, (μj (u, ul ) − M (u, ul )) vj (u, ul ) = 0,
(5.66)
for all u, ul ∈ N .3 Let wj (u, ul ) denote the corresponding left eigenvectors normalized so that wk (u, ul ) · vj (u, ul ) = δjk .
(5.67)
In this terminology u and ul satisfy the Rankine–Hugoniot relation with speed s if and only if there exists a j such that wk (u, ul ) · (u − ul ) = 0 for all k = j,
s = μj (u, ul ),
(5.68)
and wj (u, ul ) · (u − ul ) is nonzero. We define functions Fj : R × R → Rn by
Fj (u, ) = w1 (u, ul ) · (u − ul ) − δ1j , . . . , wn (u, ul ) · (u − ul ) − δnj . (5.69) n
The Rankine–Hugoniot relation is satisfied if and only if Fj (u, ) = 0 for some and j. Furthermore, Fj (ul , 0) = 0. A straightforward computation shows that ⎛ ⎞ 1 (ul ) ∂F Fj ⎜ ⎟ (ul , 0) = ⎝ ... ⎠ , ∂u n (ul ) which is nonsingular. Hence the implicit function theorem implies the existence of a unique solution uj () of Fj (uj (), ) = 0
(5.70)
for small. Occasionally, in particular in Chapter 7, we will use the notation Hj ()ul = uj (). 3 The properties of the eigenvalues follow from the implicit function theorem used on the determinant of μI − M (u, R ul ), and for the eigenvectors by considering the onedimensional eigenprojections (M (u, ul ) − μ)−1 dμ integrated around a small curve enclosing each eigenvalue λj (ul ).
5.3. The Hugoniot Locus: The Shock Curves
179
We will have the opportunity later to study in detail properties of the parameterization of the Hugoniot locus. Let it suffice here to observe that by differentiating each component of Fj (uj (), ) = 0 at = 0, we find that k (ul ) · uj (0) = δjk
(5.71)
for all k = 1, . . . , n, showing that indeed uj (0) = rj (ul ).
(5.72)
From the definition of M we see that M (u, ul ) = M (ul, u), and this symmetry implies that μj (u, ul ) = μj (ul , u), vj (u, ul ) = vj (ul , u),
μj (ul , ul ) = λj (ul ), vj (ul , ul ) = rj (ul ),
wj (u, ul ) = wj (ul , u),
wj (ul , ul ) = j (ul ).
(5.73)
Let ∇k h(u1 , u2 ) denote the gradient of a function h : Rn × Rn → R with respect to the kth variable uk ∈ Rn , k = 1, 2. Then the symmetries (5.73) imply that ∇1 μj (u, ul ) = ∇2 μj (u, ul ).
(5.74)
Hence ∇λj (ul ) = ∇1 μj (ul , ul ) + ∇2 μj (ul , ul ) = 2∇1 μj (ul , ul ).
(5.75)
For a vector-valued function φ(u) = (φ1 (u), . . . , φn (u)) we let ∇φ(u) denote the Jacobian matrix, viz., ⎛
⎞ ∇φ1 ⎜ ⎟ ∇φ(u) = ⎝ ... ⎠ .
(5.76)
∇φn Now the symmetries (5.73) imply that ∇k (ul ) = 2∇1 wk (ul , ul ) in obvious notation.
(5.77)
5.4 The Entropy Condition

. . . and now remains
That we find out the cause of this effect,
Or rather say, the cause of this defect . . .
W. Shakespeare, Hamlet (1603)
Having derived the Hugoniot loci for a general class of conservation laws in the previous section, we will have to select the parts of these curves that give admissible shocks, i.e., shocks satisfying an entropy condition. This will be considerably more complicated in the case of systems than in the scalar case. To guide our intuition we will return to the example of shallow-water waves.
♦ Example 5.12 (Shallow water (cont'd.)).
Let us first study the points on H1(ul); a similar analysis will apply to H2(ul). We will work with the variables h, v rather than h, q. Consider the Riemann problem where we have a high-water bank at rest to the left of the origin, and a lower-water bank to the right of the origin, with a positive velocity; in other words, the fluid from the lower-water bank moves away from the high-water bank. More precisely, for $h_l > h_r$ we let
  $\begin{pmatrix} h \\ v \end{pmatrix}(x, 0) = \begin{cases} \binom{h_l}{0} & \text{for } x < 0, \\[4pt] \begin{pmatrix} h_r \\ \frac{h_l - h_r}{\sqrt{2}}\sqrt{h_r^{-1} + h_l^{-1}} \end{pmatrix} & \text{for } x \ge 0, \end{cases}$
where we have chosen initial data so that the right state is on H1(ul), i.e., the Rankine–Hugoniot condition is already satisfied for a certain speed s. This implies that
  $\begin{pmatrix} h \\ v \end{pmatrix}(x, t) = \begin{cases} \binom{h_l}{0} & \text{for } x < st, \\[4pt] \begin{pmatrix} h_r \\ \frac{h_l - h_r}{\sqrt{2}}\sqrt{h_r^{-1} + h_l^{-1}} \end{pmatrix} & \text{for } x \ge st, \end{cases}$
for $h_l > h_r$, with the negative shock speed s given by
  $s = -\frac{h_r}{\sqrt{2}}\sqrt{h_r^{-1} + h_l^{-1}},$
is a perfectly legitimate weak solution of the initial value problem. However, we see that this is not at all a reasonable solution, since the solution predicts a high-water bank being pushed by a lower one!
Figure 5.4. Unphysical solution.

Figure 5.5. Reasonable solution.
If we change the initial conditions so that the right state is on the other branch of H1(ul), i.e., we consider a high-water bank moving into a lower-water bank at rest, or
  $\begin{pmatrix} h \\ v \end{pmatrix}(x, 0) = \begin{cases} \binom{h_l}{0} & \text{for } x < 0, \\[4pt] \begin{pmatrix} h_r \\ \frac{h_l - h_r}{\sqrt{2}}\sqrt{h_r^{-1} + h_l^{-1}} \end{pmatrix} & \text{for } x \ge 0, \end{cases}$
for $h_l < h_r$, we see that the weak solution
  $\begin{pmatrix} h \\ v \end{pmatrix}(x, t) = \begin{cases} \binom{h_l}{0} & \text{for } x < st, \\[4pt] \begin{pmatrix} h_r \\ \frac{h_l - h_r}{\sqrt{2}}\sqrt{h_r^{-1} + h_l^{-1}} \end{pmatrix} & \text{for } x \ge st, \end{cases}$
$ √ −1 for hl < hr and with speed s = −hr h−1 r + hl / 2, is reasonable physically since the high-water bank now is pushing the lower one. If you are worried about the fact that the shock is preserved, i.e., that there is no deformation of the shock profile, this is due to the fact that the right state is carefully selected. In general we will have both a shock wave and a rarefaction wave in the solution. This will be clear when we solve the full Riemann problem. Let us also consider the above examples with energy conservation in mind. In our derivation of the shallow-water equations we used conservation of mass and momentum only. For smooth solutions of these
equations, conservation of mechanical energy will follow. Indeed, the kinetic energy of a vertical section of the shallow-water system at a point $x$ is given by $h(x,t)v(x,t)^2/2$ in dimensionless variables, and the potential energy of the same section is given by $h(x,t)^2/2$, and hence the total mechanical energy reads $(h(x,t)v(x,t)^2+h(x,t)^2)/2$. Consider now a section of the channel between points $x_1<x_2$ and assume that we have a smooth (classical) solution of the shallow-water equations. The rate of change of mechanical energy is given by the energy flow across $x_1$ and $x_2$ plus the work done by the pressure. Energy conservation yields
$$\begin{aligned}
0&=\frac{d}{dt}\int_{x_1}^{x_2}\Bigl(\frac12hv^2+\frac12h^2\Bigr)\,dx+\int_{x_1}^{x_2}\frac{\partial}{\partial x}\Bigl(\frac12hv^3+\frac12h^2v\Bigr)\,dx\\
&\qquad+\int_0^{h(x_2,t)}P(x_2,y,t)v(x_2,t)\,dy-\int_0^{h(x_1,t)}P(x_1,y,t)v(x_1,t)\,dy\\
&=\frac{d}{dt}\int_{x_1}^{x_2}\Bigl(\frac12hv^2+\frac12h^2\Bigr)\,dx+\int_{x_1}^{x_2}\frac{\partial}{\partial x}\Bigl(\frac12hv^3+\frac12h^2v\Bigr)\,dx+\int_{x_1}^{x_2}\frac{\partial}{\partial x}\Bigl(\frac12h^2v\Bigr)\,dx\\
&=\int_{x_1}^{x_2}\frac{\partial}{\partial t}\Bigl(\frac12hv^2+\frac12h^2\Bigr)\,dx+\int_{x_1}^{x_2}\frac{\partial}{\partial x}\Bigl(\frac12hv^3+h^2v\Bigr)\,dx,
\end{aligned}$$
using that $P(x,y,t)=h(x,t)-y$ in dimensionless variables. Hence we conclude that
$$\Bigl(\frac12hv^2+\frac12h^2\Bigr)_t+\Bigl(\frac12hv^3+h^2v\Bigr)_x=0.$$
This equation follows easily directly from (5.5) for smooth solutions. However, for weak solutions, mechanical energy will in general not be conserved. Due to dissipation we expect an energy loss across a bore. Let us compute this change in energy $\Delta E$ across the bore in the two examples above, for a time $t$ such that $x_1<st<x_2$. We obtain
$$\begin{aligned}
\Delta E&=\frac{d}{dt}\int_{x_1}^{x_2}\Bigl(\frac12hv^2+\frac12h^2\Bigr)\,dx+\Bigl[\frac12hv^3+h^2v\Bigr]_{x_1}^{x_2}\\
&=-s\Bigl[\Bigl[\frac12hv^2+\frac12h^2\Bigr]\Bigr]+\Bigl[\Bigl[\frac12hv^3+h^2v\Bigr]\Bigr]\\
&=\frac12h_r\delta\bigl([[h]]^2\delta^2h_r+h_r^2-h_l^2\bigr)+\Bigl(-\frac12[[h]]^3\delta^3h_r-[[h]]\delta h_r^2\Bigr)\\
&=-\frac14[[h]]^3\delta,
\end{aligned}\tag{5.78}$$
where we have introduced
$$\delta:=\frac{\sqrt{h_r^{-1}+h_l^{-1}}}{\sqrt2}.\tag{5.79}$$
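All the quantities entering (5.78)–(5.79) are explicit, so the computation is easy to check numerically. The following Python sketch (the function and variable names are ours, not the book's) verifies the Rankine–Hugoniot condition for the state constructed above and the energy balance $\Delta E=-\frac14[[h]]^3\delta$:

```python
import math

def shock_data(hl, hr):
    """Right state on H1(ul) for ul = (hl, 0), cf. Example 5.12."""
    delta = math.sqrt(1.0 / hr + 1.0 / hl) / math.sqrt(2.0)
    vr = (hl - hr) * delta          # v_r = -[[h]] delta
    s = -hr * delta                 # shock speed
    return delta, vr, s

def flux(h, q):
    """Shallow-water flux f(u) = (q, q^2/h + h^2/2)."""
    return q, q * q / h + 0.5 * h * h

hl, hr = 2.0, 1.0                   # the unphysical case h_l > h_r
delta, vr, s = shock_data(hl, hr)
ql, qr = 0.0, hr * vr

# Rankine-Hugoniot: s [[u]] = [[f(u)]] in both components.
f1l, f2l = flux(hl, ql)
f1r, f2r = flux(hr, qr)
rh1 = s * (hr - hl) - (f1r - f1l)
rh2 = s * (qr - ql) - (f2r - f2l)

# Energy change across the bore: Delta E = -s [[E]] + [[F]],
# with E = (h v^2 + h^2)/2 and F = h v^3/2 + h^2 v.
E = lambda h, v: 0.5 * (h * v * v + h * h)
F = lambda h, v: 0.5 * h * v ** 3 + h * h * v
dE = -s * (E(hr, vr) - E(hl, 0.0)) + (F(hr, vr) - F(hl, 0.0))
dE_formula = -0.25 * (hr - hl) ** 3 * delta

print(rh1, rh2, dE, dE_formula)
```

With $h_l=2>h_r=1$ one finds $\Delta E=+\delta/4>0$, confirming that the first example produces energy and is unphysical; swapping $h_l$ and $h_r$ gives $\Delta E<0$.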
(Recall that $v_l=0$ and $v_r=[[v]]=-[[h]]\delta$ from the Rankine–Hugoniot condition.) Here we have used that we have a smooth solution with energy conservation on each interval $[x_1,st]$ and $[st,x_2]$. In the first case, where we had a low-water bank pushing a high-water bank with $h_r<h_l$, we find indeed that $\Delta E>0$, while in the other case we obtain the more reasonable $\Delta E<0$. From these two simple examples we get a hint that only one branch of $H_1(u_l)$ is physically acceptable. We will now translate this into conditions on existence of viscous profiles, and conditions on the eigenvalues of $df(u)$ at $u=u_l$ and $u=u_r$, conditions we will use in cases where our intuition will be more blurred. In Chapter 2 we discussed the notion of traveling waves. Recall from (2.7) that a shock between two fixed states $u_l$ and $u_r$ with speed $s$, viz.,
$$u(x,t)=\begin{cases}u_l & \text{for } x<st,\\ u_r & \text{for } x\ge st,\end{cases}\tag{5.80}$$
admits a viscous profile if $u(x,t)$ is the limit as $\epsilon\to0$ of $u^\epsilon(x,t)=U((x-st)/\epsilon)=U(\xi)$ with $\xi=(x-st)/\epsilon$, which satisfies $u^\epsilon_t+f(u^\epsilon)_x=\epsilon u^\epsilon_{xx}$. Integrating this equation, using $\lim_{\xi\to-\infty}U(\xi)=u_l$, we obtain
$$\dot U=A(h,q):=f(U)-f(u_l)-s(U-u_l),\tag{5.81}$$
where the differentiation is with respect to $\xi$. We will see that it is possible to connect the left state with a viscous profile to a right state only for the branch with $h_r>h_l$ of $H_1(u_l)$, i.e., the physically correct solution. Computationally it will be simpler to work with viscous profiles in the $(h,v)$ variables rather than $(h,q)$. Using $\dot q=\dot vh+v\dot h$ and (5.81), we find that there is a viscous profile in $(h,q)$ if and only if $(h,v)$ satisfies
$$\begin{pmatrix}\dot h\\ \dot v\end{pmatrix}=B(h,v):=\begin{pmatrix}vh-v_lh_l-s(h-h_l)\\[2pt] (v-v_l)(v_l-s)\frac{h_l}{h}+\frac{h^2-h_l^2}{2h}\end{pmatrix}
=\begin{pmatrix}vh-v_lh_l-s(h-h_l)\\[2pt] (v-v_l)\frac{h_lh_r\delta}{h}+\frac{h^2-h_l^2}{2h}\end{pmatrix}.\tag{5.82}$$
We will analyze the vector field $B(h,v)$ carefully. The Jacobian of $B$ reads
$$dB(h,v)=\begin{pmatrix}v-s & h\\[2pt] \frac{h^2+h_l^2}{2h^2}-(v-v_l)\frac{h_lh_r\delta}{h^2} & \frac{h_lh_r\delta}{h}\end{pmatrix}.\tag{5.83}$$
At the left state $u_l$ we obtain
$$dB(h_l,v_l)=\begin{pmatrix}v_l-s & h_l\\ 1 & h_r\delta\end{pmatrix}=\begin{pmatrix}h_r\delta & h_l\\ 1 & h_r\delta\end{pmatrix},\tag{5.84}$$
using the value of the shock speed $s$, equation (5.56). The eigenvalues of $dB(h_l,v_l)$ are $h_r\delta\pm\sqrt{h_l}$, which both are easily seen to be positive when $h_r>h_l$; thus $(h_l,v_l)$ is a source. Similarly, we obtain
$$dB(h_r,v_r)=\begin{pmatrix}h_l\delta & h_r\\ 1 & h_l\delta\end{pmatrix},\tag{5.85}$$
with eigenvalues $h_l\delta\pm\sqrt{h_r}$. In this case, one eigenvalue is positive and one negative; thus $(h_r,v_r)$ is a saddle point. However, we still have to establish an orbit connecting the two states. To this end we construct a region $K$ with $(h_l,v_l)$ and $(h_r,v_r)$ on the boundary of $K$ such that a connecting orbit has to connect the two points within $K$. The region $K$ will have two curves as boundaries where the first and second component of $B$ vanish, respectively. The first curve, denoted by $C_h$, is defined by the first component being zero, viz.,
$$vh-v_lh_l-s(h-h_l)=0,\qquad h\in[h_l,h_r],$$
which can be simplified to yield
$$v=v_l-(h-h_l)\frac{h_r\delta}{h},\qquad h\in[h_l,h_r].\tag{5.86}$$
For the second curve, $C_v$, we have
$$(v-v_l)(v_l-s)\frac{h_l}{h}+\frac{h^2-h_l^2}{2h}=0,\qquad h\in[h_l,h_r],$$
which can be rewritten as
$$v=v_l-\frac{h^2-h_l^2}{2h_lh_r\delta},\qquad h\in[h_l,h_r].\tag{5.87}$$
Let us now study the behavior of the second component of $B$ along the curve $C_h$ where the first component vanishes, i.e.,
$$\Bigl((v-v_l)\frac{h_lh_r\delta}{h}+\frac{h^2-h_l^2}{2h}\Bigr)\Big|_{C_h}=-\frac{h_l}{2h^2}(h_r-h)(h-h_l)\Bigl(1+\frac{h+h_r}{h_l}\Bigr)<0.\tag{5.88}$$
Similarly, for the first component of $B$ along $C_v$, we obtain
$$\bigl(vh-v_lh_l-s(h-h_l)\bigr)\Big|_{C_v}=\frac{h-h_l}{2h_rh_l\delta}\bigl(h_r(h_l+h_r)-h(h+h_l)\bigr)>0,\tag{5.89}$$
which is illustrated in Figure 5.6. The flow of the vector field is leaving the region $K$ along the curves $C_h$ and $C_v$. Locally, around $(h_r,v_r)$
Figure 5.6. The vector field B.
there has to be an orbit entering $K$ as $\xi$ decreases from $\infty$. This curve cannot escape $K$ and has to connect to a curve coming from $(h_l,v_l)$. Consequently, we have proved existence of a viscous profile. We saw that the relative values of the shock speed and the eigenvalues of the Jacobian of $B$, and hence of $A$, at the left and right states were crucial for this analysis to hold. Let us now translate these assumptions into assumptions on the eigenvalues of $dA$. The Jacobian of $A$ reads
$$dA(h,q)=\begin{pmatrix}-s & 1\\ h-\frac{q^2}{h^2} & \frac{2q}{h}-s\end{pmatrix}.$$
Hence the eigenvalues are $-s+\frac{q}{h}\pm\sqrt h=-s+\lambda(u)$. At the left state both eigenvalues are positive, and thus $u_l$ is a source, while at $u_r$ one is positive and one negative, and hence $u_r$ is a saddle. We may write this as
$$\lambda_1(u_r)<s<\lambda_1(u_l),\qquad s<\lambda_2(u_r).\tag{5.90}$$
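Both the inequalities (5.90) and the existence of the connecting orbit can be probed numerically. The Python sketch below (our own construction, not taken from the book) checks $\lambda_1(u_r)<s<\lambda_1(u_l)$ and $s<\lambda_2(u_r)$ for the physical configuration $h_r>h_l$, and then integrates the profile equation (5.82) backwards in $\xi$, starting from a point displaced from the saddle $(h_r,v_r)$ along its stable eigenvector; the orbit should run into the source $(h_l,v_l)$:

```python
import math

hl, hr = 1.0, 2.0                       # physical case: h_r > h_l
delta = math.sqrt(1.0 / hr + 1.0 / hl) / math.sqrt(2.0)
vl, vr = 0.0, (hl - hr) * delta
s = -hr * delta

lam1 = lambda h, v: v - math.sqrt(h)    # eigenvalues of df(u)
lam2 = lambda h, v: v + math.sqrt(h)

# Lax 1-shock inequalities (5.90)
lax_ok = lam1(hr, vr) < s < lam1(hl, vl) and s < lam2(hr, vr)

def B(h, v):
    """Vector field (5.82) for the viscous profile (v_l = 0)."""
    return (v * h - vl * hl - s * (h - hl),
            (v - vl) * hl * hr * delta / h + (h * h - hl * hl) / (2.0 * h))

# Start near the saddle (hr, vr), displaced along its stable
# eigenvector (-sqrt(hr), 1) of dB from (5.85), and integrate
# dU/dxi = -B(U) (backward xi) with a simple RK4 scheme.
eta = 1e-3
h, v = hr - eta * math.sqrt(hr), vr + eta
dt = 0.01
for _ in range(5000):
    k1 = B(h, v)
    k2 = B(h - 0.5 * dt * k1[0], v - 0.5 * dt * k1[1])
    k3 = B(h - 0.5 * dt * k2[0], v - 0.5 * dt * k2[1])
    k4 = B(h - dt * k3[0], v - dt * k3[1])
    h -= dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
    v -= dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0

print(lax_ok, h, v)   # the orbit ends near the source (hl, vl)
```

In backward $\xi$ the source becomes attracting, so the orbit inside $K$ settles at $(h_l,v_l)$; trying the same with $h_r<h_l$ destroys the saddle–source configuration, and no connection is found.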
We call these the Lax inequalities, and say that a shock satisfying these inequalities is a Lax 1-shock or a slow Lax shock. We have proved that for the shallow-water equations with hr > hl there exists a viscous profile, and that the Lax shock conditions are satisfied. Let us now return to the unphysical shock “solution.” In this case we had hr < hl with the eigenvalues at the left state (hl , vl ) of different sign. Thus (hl , vl ) is a saddle, both eigenvalues are positive for the right state, and hence that point is a source. Accordingly, there cannot be any orbit connecting the left state with the right state. A similar analysis can be performed for H2 (ul ), giving that there exists a viscous profile for a shock satisfying the Rankine–Hugoniot relation if
and only if the following Lax entropy conditions are satisfied:
$$\lambda_2(u_r)<s<\lambda_2(u_l),\qquad s>\lambda_1(u_l).\tag{5.91}$$
In that case we have a fast Lax shock or a Lax 2-shock. We may sum up the above argument as follows. A shock has a viscous profile if and only if the Lax shock conditions are satisfied. We call such shocks admissible and denote the part of the Hugoniot locus where the Lax $j$ conditions are satisfied by $S_j$. In the case of the shallow-water equations we obtain
$$S_1(u_l):=\Bigl\{\begin{pmatrix}h\\ q_l\frac{h}{h_l}-\frac{h}{\sqrt2}(h-h_l)\sqrt{h^{-1}+h_l^{-1}}\end{pmatrix}\;\Big|\;h\ge h_l\Bigr\},\tag{5.92}$$
$$S_2(u_l):=\Bigl\{\begin{pmatrix}h\\ q_l\frac{h}{h_l}+\frac{h}{\sqrt2}(h-h_l)\sqrt{h^{-1}+h_l^{-1}}\end{pmatrix}\;\Big|\;h\le h_l\Bigr\}.\tag{5.93}$$
(These curves are depicted in Section 5.3.) We may also want to parameterize the admissible shocks differently. For the slow Lax shocks let
$$h_{1,\epsilon}:=h_l-\frac23h_l\epsilon,\qquad \epsilon<0.\tag{5.94}$$
This gives
$$q_{1,\epsilon}:=q_l\Bigl(1-\frac23\epsilon\Bigr)+\frac23\epsilon\,h_l^{3/2}\sqrt{1-\epsilon+\frac29\epsilon^2},\tag{5.95}$$
such that
$$\frac{d}{d\epsilon}\begin{pmatrix}h_{1,\epsilon}\\ q_{1,\epsilon}\end{pmatrix}\Big|_{\epsilon=0}=r_1(u_l),\tag{5.96}$$
where $r_1(u_l)$ is given by (5.32). Similarly, for the fast Lax shocks let
$$h_{2,\epsilon}:=h_l+\frac23h_l\epsilon,\qquad \epsilon<0.\tag{5.97}$$
Then
$$q_{2,\epsilon}:=q_l\Bigl(1+\frac23\epsilon\Bigr)+\frac23\epsilon\,h_l^{3/2}\sqrt{1+\epsilon+\frac29\epsilon^2},\tag{5.98}$$
such that
$$\frac{d}{d\epsilon}\begin{pmatrix}h_{2,\epsilon}\\ q_{2,\epsilon}\end{pmatrix}\Big|_{\epsilon=0}=r_2(u_l),\tag{5.99}$$
where $r_2(u_l)$ is given by (5.32). ♦
In the above example we have seen the equivalence between the existence of a viscous profile and the Lax entropy conditions for the shallow-water equations. This analysis has yet to be carried out for general systems. We
will use the above example as a motivation for the following definition, stated for general systems.

Definition 5.13. We say that a shock
$$u(x,t)=\begin{cases}u_l & \text{for } x<st,\\ u_r & \text{for } x\ge st,\end{cases}\tag{5.100}$$
is a Lax $j$-shock if the shock speed $s$ satisfies
$$\lambda_{j-1}(u_l)<s<\lambda_j(u_l),\qquad \lambda_j(u_r)<s<\lambda_{j+1}(u_r).\tag{5.101}$$
(Here $\lambda_0=-\infty$ and $\lambda_{n+1}=\infty$.)

Observe that for strictly hyperbolic systems, where the eigenvalues are distinct, it suffices to check the inequalities $\lambda_j(u_r)<s<\lambda_j(u_l)$ for small Lax $j$-shocks if the eigenvalues are continuous in $u$. The following result follows from Theorem 5.11.

Theorem 5.14. Consider the strictly hyperbolic equation $u_t+f(u)_x=0$ in a domain $D$ in $\mathbb{R}^n$. Assume that $\nabla\lambda_j\cdot r_j=1$. Let $u_l\in D$. A state $u_{j,\epsilon}\in H_j(u_l)$ is a Lax $j$-shock near $u_l$ if $|\epsilon|$ is sufficiently small and $\epsilon$ is negative. If $\epsilon$ is positive, the shock is not a Lax $j$-shock.

Proof. Using the parameterization of the Hugoniot locus we see that the shock is a Lax $j$-shock if and only if
$$\lambda_{j-1}(0)<s(\epsilon)<\lambda_j(0),\qquad \lambda_j(\epsilon)<s(\epsilon)<\lambda_{j+1}(\epsilon),\tag{5.102}$$
where for simplicity we write $u(\epsilon)=u_{j,\epsilon}$, $s(\epsilon)=s_{j,\epsilon}$, and $\lambda_k(\epsilon)=\lambda_k(u_{j,\epsilon})$. The observation following the definition of Lax shocks shows that it suffices to check the inequalities
$$\lambda_j(\epsilon)<s(\epsilon)<\lambda_j(0).\tag{5.103}$$
Assume first that $u(\epsilon)\in H_j(u_l)$ and that $\epsilon$ is negative. We know from the implicit function theorem that $s(\epsilon)$ tends to $\lambda_j(0)$ as $\epsilon$ tends to zero. From the fact that also $\lambda_j(\epsilon)\to\lambda_j(0)$ as $\epsilon\to0$, and
$$\frac{d\lambda_j(\epsilon)}{d\epsilon}\Big|_{\epsilon=0}=\nabla\lambda_j(0)\cdot r_j(u_l)=1,$$
it suffices to prove that $0<s'(0)<1$. We will in fact prove that $s'(0)=\frac12$. Recall from (5.68) that $s$ is an eigenvalue of the matrix $M(u,u_l)$, i.e., $s(\epsilon)=\mu_j(u(\epsilon),u_l)$. Then
$$s'(0)=\nabla_1\mu_j(u_l,u_l)\cdot u'(0)=\frac12\nabla\lambda_j(u_l)\cdot r_j(u_l)=\frac12,\tag{5.104}$$
using the symmetry (5.75) and the normalization of the right eigenvector. If $\epsilon>0$, we immediately see that $s(\epsilon)>s(0)=\lambda_j(0)$, and in this case we cannot have a Lax $j$-shock.
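A useful way to read $s'(0)=\frac12$ is that a weak shock travels, to second order in its strength, at the average of the characteristic speeds on its two sides. For the shallow-water curve $S_1(u_l)$ this can be checked directly; the sketch below (our own check, with the shock speed computed from the first Rankine–Hugoniot component $s=[[q]]/[[h]]$) exhibits the quadratic smallness of the defect:

```python
import math

hl, vl = 1.0, 0.0
ql = hl * vl

def state_on_S1(h):
    """State (h, v) on S_1(u_l) for h >= h_l, and the shock speed s = [[q]]/[[h]]."""
    v = vl - (h - hl) / math.sqrt(2.0) * math.sqrt(1.0 / h + 1.0 / hl)
    s = (h * v - ql) / (h - hl)
    return v, s

lam1 = lambda h, v: v - math.sqrt(h)   # slow characteristic speed

def defect(dh):
    """|s - (lambda_1(u_l) + lambda_1(u_r))/2| for a shock of strength dh."""
    v, s = state_on_S1(hl + dh)
    return abs(s - 0.5 * (lam1(hl, vl) + lam1(hl + dh, v)))

d1, d2 = defect(0.1), defect(0.01)
print(d1, d2)   # d2/d1 is roughly 1/100: the defect is O(dh^2)
```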
5.5 The Solution of the Riemann Problem

Wie für die Integration der linearen partiellen Differentialgleichungen die fruchtbarsten Methoden nicht durch Entwicklung des allgemeinen Begriffs dieser Aufgabe gefunden worden, sondern vielmehr aus der Behandlung specieller physikalischer Probleme hervorgegangen sind, so scheint auch die Theorie der nichtlinearen partiellen Differentialgleichungen durch eine eingehende, alle Nebenbedingungen berücksichtigende, Behandlung specieller physikalischer Probleme am meisten gefördert zu werden, und in der That hat die Lösung der ganz speciellen Aufgabe, welche den Gegenstand dieser Abhandlung bildet, neue Methoden und Auffassungen erfordert, und zu Ergebnissen geführt, welche wahrscheinlich auch bei allgemeineren Aufgaben eine Rolle spielen werden.⁴

G. F. B. Riemann [119]
In this section we will combine the properties of the rarefaction waves and shock waves from the previous sections to derive the unique solution of the Riemann problem for small initial data. Our approach will be the following. Assume that the left state $u_l$ is given, and consider the space of all right states $u_r$. For each right state we want to describe the solution of the corresponding Riemann problem. (We could, of course, reverse the picture and consider the right state as fixed and construct the solution for all possible left states.) To this end we start by defining wave curves. If the $j$th wave family is genuinely nonlinear, we define
$$W_j(u_l):=R_j(u_l)\cup S_j(u_l),\tag{5.105}$$
and if the $j$th family is linearly degenerate, we let
$$W_j(u_l):=C_j(u_l).\tag{5.106}$$
Recall that we have parameterized the shock and rarefaction curves separately with a parameter $\epsilon$ such that positive (negative) $\epsilon$ corresponds to a rarefaction (shock) wave solution in the case of a genuinely nonlinear wave family. The important fact about the wave curves is that they almost form
Recall that we have parameterized the shock and rarefaction curves separately with a parameter such that positive (negative) corresponds to a rarefaction (shock) wave solution in the case of a genuinely nonlinear wave family. The important fact about the wave curves is that they almost form 4 The theory of nonlinear equations can, it seems, achieve the most success if our attention is directed to special problems of physical content with thoroughness and with a consideration of all auxiliary conditions. In fact, the solution of the very special problem that is the topic of the current paper requires new methods and concepts and leads to results which probably will also play a role in more general problems.
a local coordinate system around $u_l$, and this will make it possible to prove existence of solutions of the Riemann problem for $u_r$ close to $u_l$. We will commence from the left state $u_l$ and connect it to a nearby intermediate state $u_{m_1}=u_{1,\epsilon_1}\in W_1(u_l)$ using either a rarefaction wave solution ($\epsilon_1>0$) or a shock wave solution ($\epsilon_1<0$) if the first family is genuinely nonlinear. If the first family is linearly degenerate, we use a contact discontinuity for all $\epsilon_1$. From this state we find another intermediate state $u_{m_2}=u_{2,\epsilon_2}\in W_2(u_{m_1})$. We continue in this way until we have reached an intermediate state $u_{m_{n-1}}$ such that $u_r=u_{n,\epsilon_n}\in W_n(u_{m_{n-1}})$. The problem is to show existence of a unique $n$-tuple $(\epsilon_1,\dots,\epsilon_n)$ such that we "hit" $u_r$ starting from $u_l$ using this construction. As usual, we will start by illustrating the above discussion for the shallow-water equations. This example will contain the fundamental description of the solution that in principle will carry over to the general case.

♦ Example 5.15 (Shallow water (cont'd.)) Fix $u_l$. For each right state $u_r$ we have to determine one middle state $u_m$ on the first-wave curve through $u_l$ such that $u_r$ is on the second-wave curve with left state $u_m$, i.e., $u_m\in W_1(u_l)$ and $u_r\in W_2(u_m)$. (In the special case that $u_r\in W_1(u_l)\cup W_2(u_l)$ no middle state $u_m$ is required.) For $2\times2$ systems of conservation laws it is easier to consider the "backward" second-wave curve $W_2^-(u_r)$ consisting of states $u_m$ that can be connected to $u_r$ on the right with a fast wave. The Riemann problem with left state $u_l$ and right state $u_r$ has a unique solution if and only if $W_1(u_l)$ and $W_2^-(u_r)$ have a unique intersection. In that case, clearly the intersection will be the middle state $u_m$. The curve $W_1(u_l)$ is given by
$$v=v(h)=\begin{cases}v_l-2\bigl(\sqrt h-\sqrt{h_l}\bigr) & \text{for } h\in[0,h_l],\\[2pt] v_l-\frac{h-h_l}{\sqrt2}\sqrt{h^{-1}+h_l^{-1}} & \text{for } h\ge h_l,\end{cases}\tag{5.107}$$
and we easily see that $W_1(u_l)$ is strictly decreasing, unbounded, and starting at $v_l+2\sqrt{h_l}$. Using (5.37) and (5.93) we find that $W_2^-(u_r)$ reads
$$v=v(h)=\begin{cases}v_r+2\bigl(\sqrt h-\sqrt{h_r}\bigr) & \text{for } h\in[0,h_r],\\[2pt] v_r+\frac{h-h_r}{\sqrt2}\sqrt{h^{-1}+h_r^{-1}} & \text{for } h\ge h_r,\end{cases}\tag{5.108}$$
which is strictly increasing, unbounded, with minimum $v_r-2\sqrt{h_r}$. Thus we conclude that the Riemann problem for shallow water has a unique solution in the region where
$$v_l+2\sqrt{h_l}\ge v_r-2\sqrt{h_r}.\tag{5.109}$$
To obtain explicit equations for the middle state $u_m$ we have to make case distinctions, depending on the type of wave curves that intersect,
i.e., rarefaction waves or shock curves. This gives rise to four regions, denoted by I, . . . , IV. For completeness we give the equations for the middle state um in all cases.
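Numerically, the middle state can be found without any case distinctions: $W_1(u_l)$ from (5.107) is strictly decreasing in $h$ and $W_2^-(u_r)$ from (5.108) strictly increasing, so their intersection can be located by bisection. A Python sketch (our own, assuming both states are wet, $h>0$, and that (5.109) holds):

```python
import math

def w1(h, hl, vl):
    """v along W_1(u_l), equation (5.107)."""
    if h <= hl:
        return vl - 2.0 * (math.sqrt(h) - math.sqrt(hl))
    return vl - (h - hl) / math.sqrt(2.0) * math.sqrt(1.0 / h + 1.0 / hl)

def w2_back(h, hr, vr):
    """v along the backward curve W_2^-(u_r), equation (5.108)."""
    if h <= hr:
        return vr + 2.0 * (math.sqrt(h) - math.sqrt(hr))
    return vr + (h - hr) / math.sqrt(2.0) * math.sqrt(1.0 / h + 1.0 / hr)

def middle_state(hl, vl, hr, vr, tol=1e-12):
    """Intersect W_1(u_l) with W_2^-(u_r) by bisection."""
    phi = lambda h: w1(h, hl, vl) - w2_back(h, hr, vr)   # strictly decreasing in h
    a, b = 1e-14, max(hl, hr)
    while phi(b) > 0.0:          # enlarge the bracket if needed
        b *= 2.0
    while b - a > tol * max(1.0, b):
        m = 0.5 * (a + b)
        if phi(m) > 0.0:
            a = m
        else:
            b = m
    hm = 0.5 * (a + b)
    return hm, w1(hm, hl, vl)

hm, vm = middle_state(2.0, 0.0, 1.0, 0.0)
print(hm, vm)    # both states at rest, h_l > h_r: middle height between h_r and h_l
```

If (5.109) fails, $\phi$ is negative already at $h=0+$ and the bisection collapses to the vacuum, consistent with the region $V$ treated later in this section.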
Figure 5.7. The partition of the (h, v) plane; see (5.114) and (5.133).
Assume first that $u_r\in I$. We will determine a unique intermediate state $u_m\in S_1(u_l)$ such that $u_r\in R_2(u_m)$. These requirements give the following equations to be solved for $h_m$, $v_m$ such that $u_m=(h_m,q_m)=(h_m,h_mv_m)$:
$$v_m=v_l-\frac{1}{\sqrt2}(h_m-h_l)\sqrt{\frac{1}{h_m}+\frac{1}{h_l}},\qquad v_r=v_m+2\bigl(\sqrt{h_r}-\sqrt{h_m}\bigr).\tag{I}$$
Summing these equations we obtain the equation
$$\sqrt2\,[[v]]=2\sqrt2\bigl(\sqrt{h_r}-\sqrt{h_m}\bigr)-(h_m-h_l)\sqrt{\frac{1}{h_m}+\frac{1}{h_l}}\tag{5.110}$$
to determine $h_m$. Consider next the case with $u_r\in III$. Here $u_m\in R_1(u_l)$ and $u_r\in S_2(u_m)$, and in this case we obtain
$$\sqrt2\,[[v]]=(h_r-h_m)\sqrt{\frac{1}{h_r}+\frac{1}{h_m}}-2\sqrt2\bigl(\sqrt{h_m}-\sqrt{h_l}\bigr),\tag{III}\ (5.111)$$
while in the case $u_r\in IV$ we obtain (here $u_m\in S_1(u_l)$ and $u_r\in S_2(u_m)$)
$$\sqrt2\,[[v]]=(h_r-h_m)\sqrt{\frac{1}{h_r}+\frac{1}{h_m}}-(h_m-h_l)\sqrt{\frac{1}{h_m}+\frac{1}{h_l}}.\tag{IV}\ (5.112)$$
The case $u_r\in II$ is special. Here $u_m\in R_1(u_l)$ and $u_r\in R_2(u_m)$. The intermediate state $u_m$ is given by
$$v_m=v_l-2\bigl(\sqrt{h_m}-\sqrt{h_l}\bigr),\qquad v_r=v_m+2\bigl(\sqrt{h_r}-\sqrt{h_m}\bigr),$$
which can easily be solved for $h_m$ to yield
$$\sqrt{h_m}=\frac{2\bigl(\sqrt{h_r}+\sqrt{h_l}\bigr)-[[v]]}{4}.\tag{II}\ (5.113)$$
This equation is solvable only for right states such that the right-hand side of (5.113) is nonnegative. Observe that this is consistent with what we found above in (5.109). Thus we find that for
$$u_r\in\bigl\{u\in[0,\infty)\times\mathbb{R}\;\big|\;2\bigl(\sqrt h+\sqrt{h_l}\bigr)\ge[[v]]\bigr\}\tag{5.114}$$
the Riemann problem has a unique solution consisting of a slow wave followed by a fast wave. Let us summarize the solution of the Riemann problem for the shallow-water equations. First of all, we were not able to solve the problem globally, but only locally around the left state. Secondly, the general solution consists of a composition of elementary waves. More precisely, let $u_r\in\{u\in[0,\infty)\times\mathbb{R}\mid 2(\sqrt h+\sqrt{h_l})\ge[[v]]\}$. Let $w_j(x/t;h_m,h_l)$ denote the solution of the Riemann problem for $u_m\in W_j(u_l)$; here, as in most of our calculations on the shallow-water equations, we use $h$ rather than $\epsilon$ as the parameter. We will introduce the notation $\sigma_j^\pm$ for the slowest and fastest wave speed in each family to simplify the description of the full solution. Thus we have that for $j=1$ ($j=2$) and $h_r<h_l$ ($h_r>h_l$), $w_j$ is a rarefaction-wave solution
Figure 5.8. The solution of the Riemann problem in phase space (left) and in (x, t) space (right).
with slowest speed $\sigma_j^-=\lambda_j(u_l)$ and fastest speed $\sigma_j^+=\lambda_j(u_r)$. If $j=1$ ($j=2$) and $h_r>h_l$ ($h_r<h_l$), then $w_j$ is a shock-wave solution with speed $\sigma_j^-=\sigma_j^+=s_j(h_r,h_l)$. The solution of the Riemann problem reads
$$u(x,t)=\begin{cases}u_l & \text{for } x\le\sigma_1^-t,\\ w_1(x/t;u_m,u_l) & \text{for }\sigma_1^-t\le x\le\sigma_1^+t,\\ u_m & \text{for }\sigma_1^+t\le x\le\sigma_2^-t,\\ w_2(x/t;u_r,u_m) & \text{for }\sigma_2^-t\le x\le\sigma_2^+t,\\ u_r & \text{for } x\ge\sigma_2^+t.\end{cases}\tag{5.115}$$
We will show later in this chapter how to solve the Riemann problem globally for the shallow-water equations. ♦

Before we turn to the existence and uniqueness theorem for solutions of the Riemann problem we will need a certain property of the wave curves that we can explicitly verify for the shallow-water equations. Recall from (5.72) and (5.28) that $\frac{du_{j,\epsilon}}{d\epsilon}\big|_{\epsilon=0}=r_j(u_l)$; thus $W_j(u_l)$ is at least differentiable at $u_l$. In fact, one can prove that $W_j(u_l)$ has a continuous second derivative across $u_l$. We introduce the following notation for the directional derivative of a quantity $h(u)$ in the direction $r$ (not necessarily normalized) at the point $u$:
$$D_rh(u):=\lim_{\epsilon\to0}\frac1\epsilon\bigl(h(u+\epsilon r)-h(u)\bigr)=(\nabla h\cdot r)(u).\tag{5.116}$$
(When $h$ is a vector, $\nabla h$ denotes the Jacobian.)
Theorem 5.16. The wave curve $W_j(u_l)$ has a continuous second derivative across $u_l$. In particular,
$$u_{j,\epsilon}=u_l+\epsilon r_j(u_l)+\frac{\epsilon^2}{2}D_{r_j}r_j(u_l)+O\bigl(\epsilon^3\bigr).$$

Proof. In our proof of the admissibility of parts of the Hugoniot loci, Theorem 5.14, we derived most of the ingredients required for the proof of this theorem. The rarefaction curve $R_j(u_l)$ is the integral curve of the right eigenvector $r_j(u)$ passing through $u_l$, and thus we have (when for simplicity we have suppressed the $j$ dependence in the notation for $u$, and write $u(\epsilon)=u_{j,\epsilon}$, etc.)
$$u(0+)=u_l,\qquad u'(0+)=r_j(u_l),\qquad u''(0+)=\nabla r_j(u_l)r_j(u_l).\tag{5.117}$$
(Here $\nabla r_j(u_l)r_j(u_l)$ denotes the product of the $n\times n$ matrix $\nabla r_j(u_l)$, cf. (5.76), with the (column) vector $r_j(u_l)$.) Recall that the Hugoniot locus is determined by the relation (5.70), i.e.,
$$w_k(u(\epsilon),u_l)\cdot\bigl(u(\epsilon)-u_l\bigr)=\epsilon\delta_{jk},\qquad k=1,\dots,n.\tag{5.118}$$
We know already from (5.72) that $u'(0-)=r_j(u_l)$. To find the second derivative of $u(\epsilon)$ at $\epsilon=0$, we have to compute the second derivative of
(5.118). Here we find that⁵
$$2r_j(u_l)\nabla_1w_k(u_l,u_l)r_j(u_l)+w_k(u_l,u_l)\cdot u''(0-)=0,\qquad k=1,\dots,n.\tag{5.119}$$
(A careful differentiation of each component may be helpful here; at least we thought so. In the first term, the matrix $\nabla_1w_k(u_l,u_l)$ is multiplied from the right by the (column) vector $r_j(u_l)$ and by the (row) vector $r_j(u_l)$ from the left.) Using (5.77), i.e., $\nabla_1w_k(u_l,u_l)=\frac12\nabla\ell_k(u_l)$, we find that
$$r_j(u_l)\cdot\nabla\ell_k(u_l)r_j(u_l)+\ell_k(u_l)\cdot u''(0-)=0.\tag{5.120}$$
The orthogonality of the left and the right eigenvectors, $\ell_k(u_l)\cdot r_j(u_l)=\delta_{jk}$, shows that
$$r_j(u_l)\nabla\ell_k(u_l)=-\ell_k(u_l)\nabla r_j(u_l).\tag{5.121}$$
Inserting this into (5.120) we obtain $\ell_k(u_l)\cdot u''(0-)=\ell_k(u_l)\nabla r_j(u_l)r_j(u_l)$ for all $k=1,\dots,n$. From this we conclude that also $u''(0-)=\nabla r_j(u_l)r_j(u_l)$, thereby proving the theorem.

We will now turn to the proof of the classical Lax theorem about existence of a unique entropy solution of the Riemann problem for small initial data. The assumption of strict hyperbolicity of the system implies the existence of a full set of linearly independent eigenvectors. Furthermore, we have proved that the wave curves are $C^2$, and hence intersect transversally at the left state. This shows, in a heuristic way, that it is possible to solve the Riemann problem locally. Indeed, we saw that we could write the solution of the corresponding problem for the shallow-water equations as a composition of individual elementary waves that do not interact, in the sense that the fastest wave of one family is slower than the slowest wave of the next family. This will enable us to write the solution in the same form in the general case. In order to do this, we introduce some notation. Let $u_{j,\epsilon_j}=u_{j,\epsilon_j}(x/t;u_r,u_l)$ denote the unique solution of the Riemann problem with left state $u_l$ and right state $u_r$ that consists of a single elementary wave (i.e., shock wave, rarefaction wave, or contact discontinuity) of family $j$ with strength $\epsilon_j$. Furthermore, we need to define notation for speeds corresponding to the fastest and slowest waves of a fixed family. Let
$$\sigma_j^+=\sigma_j^-=s_{j,\epsilon_j}\quad\text{if }\epsilon_j<0,\qquad
\begin{aligned}\sigma_j^-&=\lambda_j(u_{j-1,\epsilon_{j-1}})=\lambda_j(u_{m_{j-1}}),\\ \sigma_j^+&=\lambda_j(u_{j,\epsilon_j})=\lambda_j(u_{m_j})\end{aligned}\quad\text{if }\epsilon_j>0,\tag{5.122}$$

⁵ Lo and behold; the second derivative of $w_k(u(\epsilon),u_l)$ is immaterial, since it is multiplied by $u(\epsilon)-u_l$, which vanishes at $\epsilon=0$.
if the $j$th wave family is genuinely nonlinear, and
$$\sigma_j^+=\sigma_j^-=\lambda_j(u_{j,\epsilon_j})=\lambda_j(u_{m_j})\tag{5.123}$$
if the $j$th wave family is linearly degenerate. With these definitions we are ready to write the solution of the Riemann problem as
$$u(x,t)=\begin{cases}u_l & \text{for } x\le\sigma_1^-t,\\ u_{1,\epsilon_1}(x/t;u_{m_1},u_l) & \text{for }\sigma_1^-t\le x\le\sigma_1^+t,\\ u_{m_1} & \text{for }\sigma_1^+t\le x\le\sigma_2^-t,\\ u_{2,\epsilon_2}(x/t;u_{m_2},u_{m_1}) & \text{for }\sigma_2^-t\le x\le\sigma_2^+t,\\ u_{m_2} & \text{for }\sigma_2^+t\le x\le\sigma_3^-t,\\ \quad\vdots & \\ u_{n,\epsilon_n}(x/t;u_r,u_{m_{n-1}}) & \text{for }\sigma_n^-t\le x\le\sigma_n^+t,\\ u_r & \text{for } x\ge\sigma_n^+t.\end{cases}\tag{5.124}$$

Theorem 5.17 (Lax's theorem). Assume that $f_j\in C^2(\mathbb{R}^n)$, $j=1,\dots,n$. Let $D$ be a domain in $\mathbb{R}^n$ and consider the strictly hyperbolic equation $u_t+f(u)_x=0$ in $D$. Assume that each wave family is either genuinely nonlinear or linearly degenerate. Then for $u_l\in D$ there exists a neighborhood $\tilde D\subset D$ of $u_l$ such that for all $u_r\in\tilde D$ the Riemann problem
$$u(x,0)=\begin{cases}u_l & \text{for } x<0,\\ u_r & \text{for } x\ge0,\end{cases}\tag{5.125}$$
has a unique solution in $\tilde D$ consisting of up to $n$ elementary waves, i.e., rarefaction waves, shock solutions satisfying the Lax entropy condition, or contact discontinuities. The solution is given by (5.124).

Proof. Consider the map $W_{j,\epsilon}\colon u\mapsto u_{j,\epsilon}\in W_j(u)$. We may then write the solution of the Riemann problem using the composition
$$W_{(\epsilon_1,\dots,\epsilon_n)}=W_{n,\epsilon_n}\circ\cdots\circ W_{1,\epsilon_1}\tag{5.126}$$
as
$$W_{(\epsilon_1,\dots,\epsilon_n)}u_l=u_r,\tag{5.127}$$
and we want to prove the existence of a unique $(\epsilon_1,\dots,\epsilon_n)$ (near the origin) such that (5.127) is satisfied for $|u_l-u_r|$ small. In our proof we will need the two leading terms in the Taylor expansion for $W$. We summarize this in the following lemma.
Lemma 5.18. We have
$$W_{(\epsilon_1,\dots,\epsilon_n)}(u_l)=u_l+\sum_{i=1}^{n}\epsilon_ir_i(u_l)+\frac12\sum_{i=1}^{n}\epsilon_i^2D_{r_i}r_i(u_l)+\sum_{\substack{i,j=1\\ i<j}}^{n}\epsilon_i\epsilon_jD_{r_i}r_j(u_l)+O\bigl(|\epsilon|^3\bigr).\tag{5.128}$$

Proof (of Lemma 5.18). We shall show that for $k=0,\dots,n$,
$$W_{(\epsilon_1,\dots,\epsilon_k,0,\dots,0)}(u_l)=u_l+\sum_{i=1}^{k}\epsilon_ir_i(u_l)+\frac12\sum_{i=1}^{k}\epsilon_i^2D_{r_i}r_i(u_l)+\sum_{\substack{i,j=1\\ i<j}}^{k}\epsilon_i\epsilon_jD_{r_i}r_j(u_l)+O\bigl(|\epsilon|^3\bigr)\tag{5.129}$$
by induction on $k$. It is clearly true for $k=0$. Assume (5.129). Now,
$$\begin{aligned}
W_{(\epsilon_1,\dots,\epsilon_{k+1},0,\dots,0)}(u_l)
&=W_{k+1,\epsilon_{k+1}}\bigl(W_{(\epsilon_1,\dots,\epsilon_k,0,\dots,0)}(u_l)\bigr)\\
&=u_l+\sum_{i=1}^{k}\epsilon_ir_i(u_l)+\frac12\sum_{i=1}^{k}\epsilon_i^2D_{r_i}r_i(u_l)+\sum_{\substack{i,j=1\\ i<j}}^{k}\epsilon_i\epsilon_jD_{r_i}r_j(u_l)\\
&\qquad+\epsilon_{k+1}r_{k+1}\bigl(W_{(\epsilon_1,\dots,\epsilon_k,0,\dots,0)}(u_l)\bigr)+\frac12\epsilon_{k+1}^2D_{r_{k+1}}r_{k+1}\bigl(W_{(\epsilon_1,\dots,\epsilon_k,0,\dots,0)}(u_l)\bigr)+O\bigl(|\epsilon|^3\bigr)\\
&=u_l+\sum_{i=1}^{k+1}\epsilon_ir_i(u_l)+\frac12\sum_{i=1}^{k+1}\epsilon_i^2D_{r_i}r_i(u_l)+\sum_{\substack{i,j=1\\ i<j}}^{k+1}\epsilon_i\epsilon_jD_{r_i}r_j(u_l)+O\bigl(|\epsilon|^3\bigr)
\end{aligned}$$
by Theorem 5.16.

Let $u_l\in D$ and define the map
$$L(\epsilon_1,\dots,\epsilon_n,u)=W_{(\epsilon_1,\dots,\epsilon_n)}u_l-u.\tag{5.130}$$
This map $L$ satisfies
$$L(0,\dots,0,u_l)=0,\qquad \nabla_\epsilon L(0,\dots,0,u_l)=\bigl(r_1(u_l),\dots,r_n(u_l)\bigr),$$
where the matrix $\nabla_\epsilon L$ has the right eigenvectors $r_j$ evaluated at $u_l$ as columns. This matrix is nonsingular by the strict hyperbolicity assumption.
The implicit function theorem then implies the existence of a neighborhood $N$ around $u_l$ and a unique differentiable function $(\epsilon_1,\dots,\epsilon_n)=(\epsilon_1(u),\dots,\epsilon_n(u))$ such that $L(\epsilon_1,\dots,\epsilon_n,u)=0$. If $u_r\in N$, then there exists a unique $(\epsilon_1,\dots,\epsilon_n)$ with $W_{(\epsilon_1,\dots,\epsilon_n)}u_l=u_r$, which proves the theorem.

Observe that we could rephrase the Lax theorem as saying that we may use $(\epsilon_1,\dots,\epsilon_n)$ to measure distances in phase space, and that we indeed have
$$A\,|u_r-u_l|\le\sum_{j=1}^{n}|\epsilon_j|\le B\,|u_r-u_l|\tag{5.131}$$
for constants $A$ and $B$. Let us now return to the shallow-water equations and prove existence of a global solution of the Riemann problem.

♦ Example 5.19 (Shallow water (cont'd.)) We will construct a global solution of the Riemann problem for the shallow-water equations for all left and right states in $D=\{(h,v)\mid h\in[0,\infty),\ v\in\mathbb{R}\}$. Of course, we will maintain the same solution in the region where we already have constructed a solution, so it remains to construct a solution in the region
$$u_r\in V:=\bigl\{u_r\in D\mid 2\bigl(\sqrt{h_r}+\sqrt{h_l}\bigr)<[[v]]\bigr\}\cup\{h=0\}.\tag{5.132}$$
We will work in the $(h,v)$ variables rather than $(h,q)$. Assume first that $u_r=(h_r,v_r)\in V$ with $h_r$ positive. We first connect $u_l$, using a slow rarefaction wave, with a state $u_m$ on the "vacuum line" $h=0$. This state is given by
$$v_m=v_l+2\sqrt{h_l},\tag{5.133}$$
using (5.38). From this state we jump to the unique point $v^*$ on $h=0$ such that the fast rarefaction starting at $h^*=0$ and $v^*$ hits $u_r$. Thus we see from (5.39) that $v^*=v_r-2\sqrt{h_r}$, which gives the following solution:
$$u(x,t)=\begin{cases}\begin{pmatrix}h_l\\ v_l\end{pmatrix} & \text{for } x<\lambda_1(u_l)t,\\ R_1(x/t;u_l) & \text{for }\lambda_1(u_l)t<x<\bigl(v_l+2\sqrt{h_l}\bigr)t,\\ \begin{pmatrix}0\\ \tilde v(x,t)\end{pmatrix} & \text{for }\bigl(v_l+2\sqrt{h_l}\bigr)t<x<v^*t,\\ R_2(x/t;(0,v^*)) & \text{for } v^*t<x<\lambda_2(u_r)t,\\ \begin{pmatrix}h_r\\ v_r\end{pmatrix} & \text{for } x>\lambda_2(u_r)t.\end{cases}\tag{5.134}$$
Physically, it does not make sense to give a value of the speed $v$ of the water when there is no water, i.e., $h=0$, and mathematically we see that any $v$ will satisfy the equations when $h=0$. Thus we do not have to associate any value with $\tilde v(x,t)$.
If $u_r$ is on the vacuum line $h=0$, we still connect to a state $u_m$ on $h=0$ using a slow rarefaction, and subsequently we connect to $u_r$ along the vacuum line. By considering a nearby state $\tilde u_r$ with $\tilde h>0$ we see that with this construction we have continuity in data. Finally, we have to solve the Riemann problem with the left state on the vacuum line $h=0$. Now let $u_l=(0,v_l)$, and let $u_r=(h_r,v_r)$ with $h_r>0$. We now connect $u_l$ to an intermediate state $u_m$ on the vacuum line given by $v_m=v_r-2\sqrt{h_r}$ and continue with a fast rarefaction to the right state $u_r$. ♦

We will apply the above theory to one old and two ancient problems:

♦ Example 5.20 (Dam breaking) For this problem we consider Riemann initial data of the form (in $(h,v)$ variables)
$$u(x,0)=\begin{pmatrix}h(x,0)\\ v(x,0)\end{pmatrix}=\begin{cases}\begin{pmatrix}h_l\\ 0\end{pmatrix} & \text{for } x<0,\\ \begin{pmatrix}0\\ 0\end{pmatrix} & \text{for } x\ge0.\end{cases}$$
From the above discussion we know that the solution consists of a slow rarefaction; thus
$$u(x,t)=\begin{pmatrix}h(x,t)\\ v(x,t)\end{pmatrix}=\begin{cases}\begin{pmatrix}h_l\\ 0\end{pmatrix} & \text{for } x<-\sqrt{h_l}\,t,\\[4pt] \begin{pmatrix}\frac19\bigl(2\sqrt{h_l}-\frac{x}{t}\bigr)^2\\ \frac23\bigl(\sqrt{h_l}+\frac{x}{t}\bigr)\end{pmatrix} & \text{for } -\sqrt{h_l}\,t<x<2\sqrt{h_l}\,t,\\[4pt] \begin{pmatrix}0\\ 0\end{pmatrix} & \text{for } x>2\sqrt{h_l}\,t.\end{cases}$$
♦
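The explicit dam-break solution is easy to tabulate; the following sketch (function and variable names are ours) evaluates it and checks that the height is continuous at both edges of the rarefaction fan:

```python
import math

def dam_break(x, t, hl):
    """Height and velocity for the dam breaking problem, t > 0."""
    xi = x / t
    if xi < -math.sqrt(hl):
        return hl, 0.0
    if xi < 2.0 * math.sqrt(hl):
        return (2.0 * math.sqrt(hl) - xi) ** 2 / 9.0, 2.0 * (math.sqrt(hl) + xi) / 3.0
    return 0.0, 0.0

hl, t = 1.0, 1.0
left = dam_break(-math.sqrt(hl) * t * (1 + 1e-9), t, hl)     # just left of the fan
fan = dam_break(-math.sqrt(hl) * t * (1 - 1e-9), t, hl)      # just inside the fan
front = dam_break(2.0 * math.sqrt(hl) * t * (1 - 1e-9), t, hl)  # at the wet front
print(left, fan, front)   # h matches h_l at the back edge and 0 at the front
```

Note that the front of the water advances with speed $2\sqrt{h_l}$, twice the speed with which the depression wave propagates into the reservoir.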
Figure 5.9. The solution of the dam breaking problem in (x, t) space (left), and the h component (right).
We shall call the two ancient problems Moses' first and second problems.

And Moses stretched out his hand over the sea; and the Lord caused the sea to go back by a strong east wind all that night, and made the sea dry land, and the waters were divided. And the children of Israel went into the midst of the sea upon the dry ground: and the waters were a wall unto them on their right hand, and on their left.

Exodus (14:21–22)
♦ Example 5.21 (Moses' first problem) For the first problem we consider initial data of the form (in $(h,v)$ variables)
$$u(x,0)=\begin{cases}\begin{pmatrix}h_0\\ -v_0\end{pmatrix} & \text{for } x<0,\\ \begin{pmatrix}h_0\\ v_0\end{pmatrix} & \text{for } x\ge0,\end{cases}$$
for a positive speed $v_0$. By applying the above analysis we find that in this case we connect to an intermediate state $u_1$ on the vacuum line using a slow rarefaction. This state is connected to another state $u_2$, also on the vacuum line, which subsequently is connected to the right state using a fast rarefaction wave. More precisely, the state $u_1$ is determined by $v_1=v(x_1,t_1)$, where $h(x,t)=\frac19\bigl(-v_0+2\sqrt{h_0}-\frac{x}{t}\bigr)^2$ along the slow rarefaction wave (cf. (5.41)) and $h(x_1,t_1)=0$. We find that $x_1=\bigl(2\sqrt{h_0}-v_0\bigr)t_1$ and thus $v_1=2\sqrt{h_0}-v_0$. The second intermediate state $u_2$ is such that a fast rarefaction wave with left state $u_2$ hits $u_r$. This implies that $v_0=v_2+2\sqrt{h_0}$ from (5.39), or $v_2=v_0-2\sqrt{h_0}$. In order for this construction to be feasible, we will have to assume that $v_2>v_1$, or $v_0\ge2\sqrt{h_0}$. If this condition does not hold, we will not get a region without water, and thus the original problem of Moses will not be solved. Combining the above waves in one solution we obtain
$$h(x,t)=\begin{cases}h_0 & \text{for } x<-\bigl(v_0+\sqrt{h_0}\bigr)t,\\[2pt] \frac19\bigl(-v_0+2\sqrt{h_0}-\frac{x}{t}\bigr)^2 & \text{for } -\bigl(v_0+\sqrt{h_0}\bigr)t<x<\bigl(2\sqrt{h_0}-v_0\bigr)t,\\[2pt] 0 & \text{for } \bigl(2\sqrt{h_0}-v_0\bigr)t<x<\bigl(v_0-2\sqrt{h_0}\bigr)t,\\[2pt] \frac19\bigl(v_0-2\sqrt{h_0}-\frac{x}{t}\bigr)^2 & \text{for } \bigl(v_0-2\sqrt{h_0}\bigr)t<x<\bigl(v_0+\sqrt{h_0}\bigr)t,\\[2pt] h_0 & \text{for } x>\bigl(v_0+\sqrt{h_0}\bigr)t,\end{cases}$$
$$v(x,t)=\begin{cases}-v_0 & \text{for } x<-\bigl(v_0+\sqrt{h_0}\bigr)t,\\[2pt] \frac13\bigl(-v_0+2\sqrt{h_0}+\frac{2x}{t}\bigr) & \text{for } -\bigl(v_0+\sqrt{h_0}\bigr)t<x<\bigl(2\sqrt{h_0}-v_0\bigr)t,\\[2pt] 0 & \text{for } \bigl(2\sqrt{h_0}-v_0\bigr)t<x<\bigl(v_0-2\sqrt{h_0}\bigr)t,\\[2pt] \frac13\bigl(v_0-2\sqrt{h_0}+\frac{2x}{t}\bigr) & \text{for } \bigl(v_0-2\sqrt{h_0}\bigr)t<x<\bigl(v_0+\sqrt{h_0}\bigr)t,\\[2pt] v_0 & \text{for } x>\bigl(v_0+\sqrt{h_0}\bigr)t.\end{cases}$$
♦
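The formulas above can be checked for consistency: the two fans must meet the constant outer states continuously, and the dry region is nonempty exactly when $v_0\ge2\sqrt{h_0}$. A short sketch (names are ours, and the feasibility condition is assumed):

```python
import math

def moses1(x, t, h0, v0):
    """Height and velocity for Moses' first problem, assuming v0 >= 2*sqrt(h0)."""
    xi = x / t
    c = math.sqrt(h0)
    if xi < -(v0 + c):
        return h0, -v0
    if xi < 2.0 * c - v0:                 # slow rarefaction fan
        return (-v0 + 2.0 * c - xi) ** 2 / 9.0, (-v0 + 2.0 * c + 2.0 * xi) / 3.0
    if xi < v0 - 2.0 * c:                 # dry land
        return 0.0, 0.0
    if xi < v0 + c:                       # fast rarefaction fan
        return (v0 - 2.0 * c - xi) ** 2 / 9.0, (v0 - 2.0 * c + 2.0 * xi) / 3.0
    return h0, v0

h0, v0, t = 1.0, 3.0, 1.0        # v0 >= 2 sqrt(h0): a dry region opens up
eps = 1e-9
back = moses1(-(v0 + math.sqrt(h0)) * (1 - eps) * t, t, h0, v0)   # inside slow fan
edge = moses1((2.0 * math.sqrt(h0) - v0) * (1 + eps) * t, t, h0, v0)  # at dry edge
print(back, edge)   # h -> h0 at the outer edge, h -> 0 at the dry edge
```

By symmetry of the data, $x=0$ lies in the dry region for all $t>0$, which is exactly what the construction requires.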
Figure 5.10. The solution of Moses' first problem in (x, t) space (left), and the h component (right).
♦ Example 5.22 (Moses’ second problem). And Moses stretched forth his hand over the sea, and the sea returned to his strength when the morning appeared; and the Egyptians fled against it; and the Lord overthrew the Egyptians in the midst of the sea. Exodus (14:27 )
Here we study the multiple Riemann problem given by (in (h, v) variables)

u(x, 0) =
  (h0, 0)   for x < 0,
  (0, 0)    for 0 < x < L,
  (h0, 0)   for x > L.

For small times t, the solution of this problem is found by patching together the solutions of two dam breaking problems. The left problem is solved by a fast rarefaction wave, and the right by a slow rarefaction. At some positive time, these rarefactions will interact, and thereafter explicit computations become harder.
In place of explicit computation we therefore present the numerical solution constructed by front tracking. This method is a generalization of the front-tracking method presented in Chapter 2, and will be the subject of the next chapter. In the left part of Figure 5.11 we see the fronts in (x, t) space. These fronts are similar to the fronts for the scalar front tracking, and the approximate solution is discontinuous across the lines shown in the figure. Looking at the figure, it is not hard to see why explicit computations become difficult as the two rarefaction waves interact. The right part of the figure shows the water level as it engulfs the Egyptians. The lower figure shows the water level before the two rarefaction waves interact, and the two upper ones show that two shock waves result from the interaction of the two rarefaction waves. ♦
Figure 5.11. The solution of Moses’ second problem in (x, t) space (left), and the h component (right).
5.6 Notes The fundamentals of the Riemann problem for systems of conservation laws were presented in the seminal paper by Lax [95], where also the Lax entropy condition was introduced. We refer to Smoller [130] as a general reference for this chapter. Our proof of Theorem 5.11 follows Schatzman [127]. This also simplifies the proof of the classical result that ṡ(0) = 1/2 in Theorem 5.14. The parameterization of the Hugoniot locus introduced in Theorem 5.11 makes the proof of the smoothness of the wave curves, Theorem 5.16, quite simple. We have used the shallow-water equations as our prime example in this chapter. This model can be found in many sources; a good presentation is in Kevorkian [83]. See also the paper by Gerbeau and Perthame [51] for a
more rigorous derivation of the model. Our treatment of the vacuum for these equations can be found in Liu and Smoller [105]. There is extensive literature on the Euler equations; see, e.g., [37], [130], and [29]. Our version of the implicit function theorem, Theorem 5.10, was taken from Renardy and Rogers [117]. See Exercise 5.11 for a proof.
Exercises

5.1 What assumption on p is necessary for the p-system to be hyperbolic?

5.2 Solve the Riemann problem for the p-system in the case where p(v) = 1/v. For what left and right states does this Riemann problem have a solution?

5.3 Repeat Exercise 5.2 in the general case where p = p(v) is such that p′ is negative and p′′ is positive.

5.4 Solve the following Riemann problem for the shallow-water equations:

u(x, 0) = (h(x, 0), v(x, 0)) =
  (h_l, 0)   for x < 0,
  (h_r, 0)   for x ≥ 0,

with h_l > h_r > 0.

5.5 Let w = (u, v) and let ϕ(w) be a smooth scalar function. Consider the system of conservation laws

w_t + (ϕ(w) w)_x = 0.   (5.135)

a. Find the characteristic speeds λ1 and λ2 and the associated eigenvectors r1 and r2 for the system (5.135).
b. Let ϕ(w) = |w|²/2. Then find the solution of the Riemann problem for (5.135).
c. Now let ϕ(w) = 1/(1 + u + v), and assume that u and v are positive. Find the solution of the Riemann problem of (5.135) in this case.

5.6 Let us consider the Lax–Friedrichs scheme for systems of conservation laws. As in Chapter 3 we write this as
U_j^{n+1} = (1/2)(U_{j−1}^n + U_{j+1}^n) − (λ/2)(f(U_{j+1}^n) − f(U_{j−1}^n)),

where λ = Δt/Δx, and we assume that the CFL condition

λ max_k |λ_k| ≤ 1

holds. Let u_j^n(x, t) denote the solution of the Riemann problem with initial data U_{j−1}^n for x < jΔx, and U_{j+1}^n for x ≥ jΔx. Show that

U_j^{n+1} = (1/(2Δx)) ∫_{(j−1)Δx}^{(j+1)Δx} u_j^n(x, Δt) dx.
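The scheme of Exercise 5.6 is easy to implement; below is a minimal sketch for the shallow-water system with g = 1 (the function names are ours, and periodic padding is used only to keep the sketch short):

```python
import numpy as np

def lax_friedrichs(U, flux, dt, dx):
    """One Lax-Friedrichs step for U_t + f(U)_x = 0.

    U has shape (m, n): m cells, n conserved components.
    """
    F = flux(U)
    Up, Um = np.roll(U, -1, axis=0), np.roll(U, 1, axis=0)   # U_{j+1}, U_{j-1}
    Fp, Fm = np.roll(F, -1, axis=0), np.roll(F, 1, axis=0)
    return 0.5 * (Up + Um) - 0.5 * (dt / dx) * (Fp - Fm)

def shallow_water_flux(U):
    # U = (h, q) with q = h*v; f(U) = (q, q^2/h + h^2/2), taking g = 1
    h, q = U[:, 0], U[:, 1]
    return np.column_stack((q, q**2 / np.maximum(h, 1e-12) + 0.5 * h**2))
```

With periodic boundaries the step conserves total mass exactly, which is a convenient sanity check; the CFL restriction λ max|λ_k| ≤ 1 must be respected when choosing dt.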
5.7 A smooth function w : Rⁿ → R is called a k-Riemann invariant if ∇w(u) · r_k(u) = 0, where r_k is the kth right eigenvector of the Jacobian matrix df, which is assumed to be strictly hyperbolic.
a. Show that there locally exist precisely (n − 1) k-Riemann invariants whose gradients are linearly independent.
b. Let R_k(u_l) denote the kth rarefaction curve through a point u_l. Then show that all (n − 1) k-Riemann invariants are constant on R_k(u_l). This gives an alternative definition of the rarefaction curves.
c. We say that we have a coordinate system of Riemann invariants if there exist n scalar-valued functions w1, ..., wn such that wj is a k-Riemann invariant for j, k = 1, ..., n, j ≠ k, and

∇w_j(u) · r_k(u) = γ_j(u) δ_{j,k},   (5.136)

for some nonzero function γ_j. Why cannot we expect to find such a coordinate system if n > 2?
d. Find the Riemann invariants for the shallow-water system, and verify parts b and c in this case.

5.8 We study the p-system with p(v) = 1/v as in Exercise 5.2.
a. Find the two Riemann invariants w1 and w2 in this case.
b. Introduce coordinates μ = w1(v, u) and τ = w2(v, u), and find the wave curves in (μ, τ) coordinates.
c. Find the solution of the Riemann problem in (μ, τ) coordinates.
d. Show that the wave curves W1 and W2 are stiff in the sense that if a point (μ, τ) is on a wave curve through (μ_l, τ_l), then the point (μ + Δμ, τ + Δτ) is on a wave curve through (μ_l + Δμ, τ_l + Δτ). Hence the solution of the Riemann problem can be said to be translation invariant in (μ, τ) coordinates.
e. Show that the 2-shock curve through a point (μ_l, τ_l) is the reflection about the line μ − μ_l = τ − τ_l of the 1-shock curve through (μ_l, τ_l).
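For part d of Exercise 5.7, one can check numerically that w = v ± 2√h are the shallow-water Riemann invariants (with the book's normalization g = 1); a quick sketch:

```python
import math

# Shallow-water Riemann invariants: w1 = v + 2*sqrt(h) is constant along
# 1-rarefaction curves, w2 = v - 2*sqrt(h) along 2-rarefaction curves.
def w1(h, v):
    return v + 2.0 * math.sqrt(h)

def w2(h, v):
    return v - 2.0 * math.sqrt(h)

# States on the 1-rarefaction curve through (h, v) = (1, 0): v = 2*(1 - sqrt(h))
curve1 = [(h, 2.0 * (1.0 - math.sqrt(h))) for h in (1.0, 0.81, 0.64, 0.49)]
vals = [w1(h, v) for h, v in curve1]
assert max(vals) - min(vals) < 1e-12   # w1 is constant on the curve
```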
5.9 As for scalar equations, we define an entropy/entropy flux pair (η, q) as scalar functions of u such that for smooth solutions,

u_t + f(u)_x = 0  ⇒  η_t + q_x = 0,

and η is supposed to be a convex function.
a. Show that η and q are related by

∇_u q = ∇_u η df.   (5.137)
b. Why cannot we expect to find entropy/entropy flux pairs if n > 2?
c. Find an entropy/entropy flux pair for the p-system if p(v) = 1/v.
d. Find an entropy/entropy flux pair for the shallow-water equations.

5.10 Let A be a constant n × n matrix with real and distinct eigenvalues λ1 < λ2 < ··· < λn. Consider the Riemann problem for the linear system of equations

u_t + Au_x = 0,   u(x, 0) = u_l for x < 0, u_r for x ≥ 0.

a. Find the solution of this Riemann problem.
b. Extend this solution to the general Cauchy problem.
c. What do you obtain for the linear wave equation φ_tt = c²φ_xx with initial data φ(x, 0) = f(x) and φ_t(x, 0) = g(x) (cf. Example 5.2)?

5.11 This exercise outlines a proof of the implicit function theorem, Theorem 5.10.
a. Define T to be a mapping Rᵖ → Rᵖ such that for all y1 and y2,

|T(y1) − T(y2)| ≤ c |y1 − y2|, for some constant c < 1.

Such mappings are called contractions. Show that there exists a unique y such that T(y) = y.
b. Let u : Rᵖ → Rᵖ, and assume that u is C¹ in some neighborhood of a point y0, and that du(y0) is nonsingular. We are interested in solving the equation

u(y) = u(y0) + v   (5.138)

for some v where |v| is sufficiently small. Define

T(y) = y − du(y0)⁻¹ (u(y) − u(y0) − v).

Show that T is a contraction in a neighborhood of y0, and consequently that (5.138) has a unique solution y = ϕ(v) for small v, and that ϕ(0) = y0.
c. Now let Φ(x, y) be as in Theorem 5.10. Show that for x close to x0 we can find ϕ(x, v) such that Φ(x, ϕ(x, v)) = Φ(x, y0 ) + v for small v. d. Choose a suitable v = v(x) to conclude the proof of the theorem.
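The contraction argument of Exercise 5.11 can be tried numerically. The sketch below (hypothetical names, scalar case p = 1) iterates T to solve u(y) = u(y0) + v:

```python
def solve_fixed_point(T, y0, tol=1e-12, max_iter=1000):
    """Iterate y <- T(y) from y0; converges when T is a contraction
    (the Banach fixed-point theorem of Exercise 5.11a)."""
    y = y0
    for _ in range(max_iter):
        y_next = T(y)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    raise RuntimeError("no convergence")

def implicit_solve(u, du0, y0, v):
    """Exercise 5.11b for p = 1: T(y) = y - du(y0)^{-1} (u(y) - u(y0) - v)."""
    return solve_fixed_point(lambda y: y - (u(y) - u(y0) - v) / du0, y0)
```

For example, with u(y) = y + 0.1 y², y0 = 0, du(y0) = 1 and small v, the iteration map has derivative of modulus well below 1 near y0, so it converges to the unique nearby solution.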
6 Existence of Solutions of the Cauchy Problem for Systems
Faith is an island in the setting sun. But proof, yes. Proof is the bottom line for everyone. Paul Simon, Proof (1990)
In this chapter we study the generalization of the front-tracking algorithm to systems of conservation laws, and how this generalization generates a convergent sequence of approximate weak solutions. We shall then proceed to show that the limit is a weak solution. Thus we shall study the initial value problem

u_t + f(u)_x = 0,   u|_{t=0} = u0,   (6.1)

where f : Rⁿ → Rⁿ and u0 is a function in L¹(R). In doing this, we are in the setting of Lax’s theorem (Theorem 5.17); we have a system of strictly hyperbolic conservation laws where each characteristic field is either genuinely nonlinear or linearly degenerate, and the initial data are close to a constant. This restriction is necessary, since the Riemann problem may fail to have a solution for initial states far apart, which is analogous to the appearance of a “vacuum” in the solution of the shallow-water equations. The convergence part of the argument follows the traditional method of proving compactness in the context of conservation laws, namely, via Kolmogorov’s compactness theorem or Helly’s theorem. Again, the basic ingredient in front tracking is the solution of Riemann problems, or in this case, the approximate solution of Riemann problems. Therefore, we start by defining these approximations.
H. Holden and N.H. Risebro, Front Tracking for Hyperbolic Conservation Laws, Applied Mathematical Sciences 152, DOI 10.1007/978-3-642-23911-3_6, © Springer-Verlag Berlin Heidelberg 2011
6.1 Front Tracking for Systems Nisi credideritis, non intelligetis.¹ Saint Augustine, De Libero Arbitrio (387/9)
In order for us to define front tracking in the scalar case, the solution of the Riemann problem had to be a piecewise constant function. For systems, this is possible only if all waves are shock waves or contact discontinuities. Consequently, we need to approximate the continuous parts of the solution, the rarefaction waves, by piecewise constant functions. There are, of course, several ways to make this approximation. We use the following: Let δ be a small parameter. For the rest of this chapter, δ will always denote a parameter that controls the accuracy of the approximation. We start with the system of conservation laws (6.1) and the Riemann problem

u(x, 0) = u_l for x < 0, u_r for x ≥ 0.   (6.2)

We have seen (Theorem 5.17) that the solution of this Riemann problem consists of at most n + 1 constant states, separated by either shock waves, contact discontinuities, or rarefaction waves. We wish to approximate this solution by a piecewise constant function in x/t. When the solution has shocks or contact discontinuities, it is already a step function for some range of x/t, and we set the approximation equal to the exact solution u for such x and t. Thus, if the jth wave is a shock or a contact discontinuity, we let

u^δ_{j,ε_j}(x, t) = u_{j,ε_j}(x, t),   for tσ_j^+ < x < tσ_{j+1}^−,

where the right-hand side is given by (5.124). A rarefaction wave is a smooth transition between two constant states, and we will replace this by a step function whose “steps” are no further apart than δ and lie on the correct rarefaction curve R_j. The discontinuity between two steps is defined to move with a speed equal to the characteristic speed of the left state. More precisely, let the solution to (6.2) be given by (5.124). Assume that the jth wave is a rarefaction wave; that is, the states u_{m_{j−1}} and u_{m_j} lie on the jth rarefaction curve R_j(u_{m_{j−1}}) through u_{m_{j−1}}, or

u(x, t) = u_{j,ε_j}(x, t; u_{m_j}, u_{m_{j−1}}),   for tσ_j^− ≤ x ≤ tσ_j^+.

Let k = rnd(ε_j/δ) for the moment, where rnd(z) denotes the integer closest² to z, and let δ̂ = ε_j/k. The step values of the approximation are now defined as

u_{j,l} = R_j(l δ̂; u_{m_{j−1}}),   for l = 0, ..., k.   (6.3)

We have that u_{j,0} = u_{m_{j−1}} and u_{j,k} = u_{m_j}. We set the speed of the steps equal to the characteristic speed to the left, and hence the piecewise constant approximation we make is the following:

u^δ_{j,ε_j}(x, t) := u_{j,0} + Σ_{l=1}^{k} (u_{j,l} − u_{j,l−1}) H(x − λ_j(u_{j,l−1}) t),   (6.4)

where H now denotes the Heaviside function. Equation (6.4) is to hold for tσ_j^+ < x < tσ_{j+1}^−. Loosely speaking, we step along the rarefaction curve with steps of size at most δ. Observe that the discontinuities that occur as a result of the approximation of the rarefaction wave will not satisfy the Rankine–Hugoniot condition, and hence the function will not be a weak solution. However, we will prove that u^δ converges to a weak solution as δ → 0. Figure 6.1 illustrates this in phase space and in (x, t) space.

¹ Soft on Latin? It says, “If you don’t believe it, you won’t understand it.”
² Such that z − 1/2 ≤ rnd(z) < z + 1/2.
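The sampling (6.3) is straightforward to code. Below is a sketch in which a hypothetical `curve` callback stands in for R_j( · ; u_{m_{j−1}}); note that δ̂ = ε_j/k can slightly exceed δ, since k is obtained by rounding:

```python
import math

def rnd(z):
    """Integer closest to z, with z - 1/2 <= rnd(z) < z + 1/2."""
    return math.floor(z + 0.5)

def sample_rarefaction(curve, eps, delta):
    """Split a rarefaction wave of strength eps into k fronts of equal
    strength dhat = eps/k, sampling states on the rarefaction curve;
    this mirrors (6.3)-(6.4). `curve(s)` returns the state at parameter s
    along the curve, with curve(0) the left state."""
    k = max(rnd(eps / delta), 1)
    dhat = eps / k
    return [curve(l * dhat) for l in range(k + 1)]

# Hypothetical shallow-water 1-rarefaction curve through (h, v) = (1, 0):
# v = 2*(1 - sqrt(h)); parameterized by s, the decrease in sqrt(h).
curve = lambda s: (max(1.0 - s, 0.0) ** 2, 2.0 * s)
states = sample_rarefaction(curve, 0.5, 0.06)
```

Each consecutive pair in `states` becomes one front, whose speed is the characteristic speed λ_j of the state to its left.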
Figure 6.1. An approximated rarefaction wave in phase space and in (x, t) space.
The approximate solution to the Riemann problem is then found by inserting a superscript δ at the appropriate places in equation (5.124), resulting in

u^δ(x, t) =
  u_l                                    for x ≤ σ_1^− t,
  u^δ_{1,ε_1}(x/t; u_{m_1}, u_l)         for σ_1^− t ≤ x ≤ σ_1^+ t,
  u_{m_1}                                for σ_1^+ t ≤ x ≤ σ_2^− t,
  u^δ_{2,ε_2}(x/t; u_{m_2}, u_{m_1})     for σ_2^− t ≤ x ≤ σ_2^+ t,
  u_{m_2}                                for σ_2^+ t ≤ x ≤ σ_3^− t,
  ...
  u^δ_{n,ε_n}(x/t; u_r, u_{m_{n−1}})     for σ_n^− t ≤ x ≤ σ_n^+ t,
  u_r                                    for x ≥ σ_n^+ t.   (6.5)
It is clear that u^δ converges to the exact solution given by (5.124) pointwise. Indeed,

|u^δ(x, t) − u(x, t)| = O(δ).

Therefore, we also have that ‖u^δ(t) − u(t)‖₁ = O(δ). Now we are ready to define the front-tracking procedure to (approximately) solve the initial value problem (6.1). Our first step is to approximate the initial function u0 by a piecewise constant function u0^δ (we let δ denote this approximation parameter as well) such that

lim_{δ→0} ‖u0^δ − u0‖₁ = 0.   (6.6)
We then generate approximations, given by (6.5), to the solutions of the Riemann problems defined by the discontinuities of u0^δ. Already here we see one reason why we must assume T.V.(u0) to be small: The initial Riemann problems must be solvable. Therefore, we assume our initial data u0, as well as the approximation u0^δ, to be in some small neighborhood D of a constant ū. Without loss of generality, ū can be chosen to be zero. As the initial discontinuities interact at some later time, we can solve the Riemann problems defined by the states immediately to the left and right of the collisions. These solutions are then replaced by approximations, and we may continue to propagate the front-tracking construction until the next interaction. However, as in the scalar case, it is not obvious that this procedure will take us up to any predetermined time. A priori, it is not even clear whether the number of discontinuities will blow up at some finite time, that is, that the collision times will converge to some finite time. This problem is much more severe in the case of a system of conservation laws than in the scalar case, since a collision of two discontinuities generically will result in at least n − 2 new discontinuities. So for n > 2, the number of discontinuities seems to be growing without bound as t increases. As in the scalar case, the key to the solution of these problems lies in the study of interactions of discontinuities. To keep the number of waves finite we shall eliminate small waves emanating from Riemann problems. However, there is a trade-off: The more waves we eliminate, the easier it is to prove convergence, but the less likely it is that the limit is a solution of the differential equation. From now on, we shall call all discontinuities in the front-tracking construction fronts. We distinguish between waves and fronts. In general, a solution of a Riemann problem consists of exactly n waves coming from distinct wave families.
Each approximation of a rarefaction wave is one wave, but will consist of several fronts. Hence a front is an object with a left and a right state, labeled L and R respectively, and an associated family. The family of a front separating states L and R is the (unique) number
j such that R ∈ W_j(L), where, as in Chapter 5, W_j(u) denotes the jth wave curve through the point u. These are parameterized as in Theorem 5.16. (Observe that we still have this relation for fronts approximating a rarefaction wave.) The strength of a front is defined to be |ε|, where R = u_{j,ε}(L). The strength of a front coming from an approximate rarefaction wave of the jth family is δ̂ = ε_j/k in the terminology of equations (6.3) and (6.4). We wish to estimate the strengths of the fronts resulting from a collision in terms of the strengths of the colliding fronts. With some abuse of notation we shall refer to both the front itself and its strength by the same symbol. Consider therefore N fronts γ_N, ..., γ_1 interacting at a single point as in Figure 6.2. We will have to keep track of the associated family of each front. We denote by î the family of the front γ_i. Thus if γ_1, ..., γ_4 all come from the first family, we have 1̂ = ··· = 4̂ = 1. Since the speed of γ_j is greater than the speed of γ_i for j > i, we have ĵ ≥ î. We label the waves resulting from the collision β_1, ..., β_n. Let β denote the vector of waves in the front-tracking approximation resulting from the solution of the Riemann problem defined by the collision of γ_1, ..., γ_N, i.e., β = (β_1, ..., β_n), and let

α = ( Σ_{î=1} γ_i, Σ_{î=2} γ_i, ..., Σ_{î=n} γ_i ).
Figure 6.2. A collision of N fronts.
For simplicity, also set γ = (γ_1, ..., γ_N). Note that β is a function of γ, that is, β = β(γ). For i < j we define

β_{i,j}(σ, τ) := ∂²β/∂γ_i ∂γ_j (γ_1, ..., γ_{i−1}, σγ_i, 0, ..., 0, τγ_j, 0, ..., 0).   (6.7)

Then

γ_i γ_j ∫₀¹ ∫₀¹ β_{i,j}(σ, τ) dσ dτ
  = β(γ_1, ..., γ_i, 0, ..., 0, γ_j, 0, ..., 0) + β(γ_1, ..., γ_{i−1}, 0, ..., 0)
    − β(γ_1, ..., γ_i, 0, ..., 0) − β(γ_1, ..., γ_{i−1}, 0, ..., 0, γ_j, 0, ..., 0).

Furthermore,

β(0, ..., 0, γ_k, 0, ..., 0) = (0, ..., 0, γ_k, 0, ..., 0),   (6.8)

where γ_k on the right is at the k̂th place, since in this case we have no collision. Summing (6.7) for all i < j, we obtain

Σ_{i<j}^N γ_i γ_j ∫₀¹ ∫₀¹ β_{i,j}(σ, τ) dσ dτ = β(γ_1, ..., γ_N) − Σ_{i=1}^N β(0, ..., 0, γ_i, 0, ..., 0) = β − α.   (6.9)

By the solution of the general Riemann problem, we have that β_{i,j} is bounded; hence

|β − α| ≤ O(1) Σ_{i<j}^N |γ_i γ_j|,   (6.10)

or

β = α + O(1) Σ_{i<j}^N |γ_i γ_j|.   (6.11)
Note that if the incoming fronts γ_k are small, then the fronts resulting from the collision will be very small for those families that are not among the incoming fronts.

♦ Example 6.1 (Higher-order estimates). The estimate (6.10) is enough for our purposes, but we can extract some more information from (6.9) by considering higher-order terms. Firstly, note that

β = α + Σ_{i<j} γ_i γ_j β_{i,j}(0, 0) + O(1) Σ_{i<j} |γ_i γ_j| |γ|.   (6.12)

Therefore, we evaluate β_{i,j}(0, 0). To do this, observe that

u_r = R_{β(γ)} u_l = R_{γ_1} ∘ R_{γ_2} ∘ ··· ∘ R_{γ_N} u_l,   (6.13)

where R_β is defined as in (5.128), and u_l and u_r are the states to the left and right of the collision, respectively. Introducing

β_{γ_j} := ∂β/∂γ_j,

equation (6.8) implies β_{γ_j}(0, ..., 0) = e_ĵ, where e_k denotes the kth standard basis vector in Rⁿ. Also note that

∂/∂γ_i R_{β(γ)} = ∇_β R_β · β_{γ_i}.

Furthermore, from Lemma 5.18 and (5.128), we have that

∇_β R_β = ( ..., r_k + Σ_{j=1}^n β_j Dr_{min(j,k)} r_{max(j,k)}, ... ) + O(|β|²).

Here the first term on the right-hand side is the n × n matrix whose kth column equals r_k + Σ_{j=1}^n β_j Dr_{min(j,k)} r_{max(j,k)}. Consequently,³

∂/∂γ_j ∇_β R |_{β=(0,...,0)} = ( Dr_1 r_ĵ, Dr_2 r_ĵ, ..., Dr_ĵ r_ĵ, Dr_ĵ r_{ĵ+1}, ..., Dr_ĵ r_n ),

evaluated at u_l. Differentiating (6.13) with respect to γ_i we find

(∇_β R_β · β_{γ_i}) |_{γ=(0,...,0,γ_j,0,...,0)} (u_l) = r_î (R_{γ_j} u_l)   for j > i.

Differentiating this with respect to γ_j we obtain

(∂/∂γ_j ∇_β R_β) |_{γ=(0,...,0)} e_î + ∇_β R_{(0,...,0)} β_{i,j}(0, 0) = Dr_ĵ r_î (u_l).

Inserting this in (6.12), we finally obtain

β = α + Σ_{i<j}^N γ_i γ_j (∇_β R_β)^{−1} ( Dr_ĵ r_î − Dr_î r_ĵ ) + O(1) Σ_{i<j} |γ_i γ_j| |γ|,   (6.14)

which we call the interaction estimate. One can also use (6.9) to obtain estimates of higher order. In passing, we note that if the integral curves of the eigenvectors form a coordinate system near M, then

Dr_j r_i − Dr_i r_j = 0

³ The right-hand side denotes the n × n matrix where the first ĵ columns equal Dr_k r_ĵ, k = 1, ..., ĵ, and the remaining (n − ĵ) columns equal Dr_ĵ r_k, k = ĵ + 1, ..., n.
212
6. Existence of Solutions of the Cauchy Problem
for all i and j, and we obtain a third-order estimate. The estimate (6.10) will prove to be the key ingredient in our analysis of front tracking. For the reader with knowledge of differential geometry, the estimate (6.14) is no surprise. Assume that only two fronts collide, ε_l and ε_r, separating states L, M, and R. Let the families of the two fronts be l and r, respectively. The states L, M, and R are almost connected by the integral curves of r_l and r_r, respectively. If we follow the integral curve of r_l a (parameter) distance −ε_l from R, and then follow the integral curve of r_r a distance −ε_r, we end up, up to third order in ε_l and ε_r, half the Lie bracket of ε_l r_l and ε_r r_r away from L. This Lie bracket is given by

[ε_l r_l, ε_r r_r] := ε_l ε_r (Dr_l r_r − Dr_r r_l).

This means that if we start from L and follow r_r a distance ε_r, and then r_l a distance ε_l, we finish a distance O([ε_l r_l, ε_r r_r]) away from R. Consequently, up to O([ε_l r_l, ε_r r_r]), the solution of the Riemann problem with right state R and left state L is given by a wave of family r of strength ε_r, followed by a wave of family l of strength ε_l. While not a formal proof, these remarks illuminate the mechanism behind the calculation leading up to (6.14). See Figure 6.3. ♦
Figure 6.3. An interaction in (x, t) space and in phase space.
Before we proceed further, we introduce some notation. Front tracking will produce a piecewise constant function labeled u^δ(x, t) that has, at least initially, some finite number N of fronts. These fronts have strengths ε_i, i = 1, ..., N. We will refer to the ith front by its strength ε_i, and label the left and right states L_i and R_i, respectively. The position of ε_i is denoted by x_i(t), and with a slight abuse of notation we have that

x_i(t) = x_i + s_i (t − t_i),   (6.15)

where s_i is the speed of the front, and (x_i, t_i) is the position and time at which it originated. In this notation, u^δ can be written

u^δ(x, t) = L_1 + Σ_{i=1}^N (R_i − L_i) H(x − x_i(t)).   (6.16)
The interaction estimate (6.10) shows that the “amount of change” produced by a collision is proportional to the product of the strengths of the colliding fronts. Therefore, in order to obtain some estimate of what will happen as fronts collide, we define the interaction potential Q. The idea is that Q should (over)estimate the amount of change in u^δ caused by all future collisions. Hence, by (6.10), Q should involve terms of the type |ε_l ε_r|. We say that two fronts are approaching if the front to the left has a larger family than the front to the right, or if both fronts are of the same family and at least one of the fronts is a shock wave. We collect all pairs of approaching fronts in the approaching set A, that is,

A := {(ε_i, ε_j) such that ε_i and ε_j are approaching}.   (6.17)

The set A will, of course, depend on time. All future collisions will now involve two fronts from A due to the hyperbolicity of the equation. Observe that two approximate rarefaction waves of the same family never collide unless there is another front between them, all colliding at the same point (x, t). Therefore, we define Q as

Q := Σ_A |ε_i ε_j|.   (6.18)
For scalar equations we saw that the total variation of the solution of the conservation law was not greater than the total variation of the initial data. From the solution of the Riemann problem, we know that this is not true for systems. Nevertheless, we shall see that if the initial total variation is small enough, the total variation of the solution is bounded. To measure the total variation we use another time-dependent functional T defined by

T := Σ_{i=1}^N |ε_i|,   (6.19)
where N is the number of fronts. Lax’s theorem (Theorem 5.17) implies that T is equivalent to the total variation as long as the total variation is small. Let t1 denote the first time two fronts collide. At this time we will have another Riemann problem, which can be solved up to the next collision time t2 , etc. In this way we obtain an increasing sequence of collision times ti , i ∈ N. To show that front tracking is well-defined, we need to show that the sequence {ti } is finite, or if infinite, not convergent. In the scalar case we saw that indeed this sequence is finite.
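The functionals T and Q of (6.17)–(6.19) can be computed directly from a list of fronts. Below is a simplified sketch (the tuple encoding of a front is ours, not the book's):

```python
def glimm_functionals(fronts):
    """Compute T = sum |eps_i| and the interaction potential
    Q = sum over approaching pairs |eps_i * eps_j|, cf. (6.17)-(6.19).

    `fronts` is ordered left to right; each entry is a tuple
    (family, strength, is_shock).
    """
    T = sum(abs(s) for _, s, _ in fronts)
    Q = 0.0
    for i in range(len(fronts)):
        for j in range(i + 1, len(fronts)):
            fam_l, eps_l, shock_l = fronts[i]
            fam_r, eps_r, shock_r = fronts[j]
            # approaching: left family larger, or same family with a shock
            if fam_l > fam_r or (fam_l == fam_r and (shock_l or shock_r)):
                Q += abs(eps_l * eps_r)
    return T, Q
```

By Lemma 6.2 below, each collision decreases Q by a definite amount, which is why G = T + cQ can serve as a Lyapunov-type functional for the scheme.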
Figure 6.4. A collision of N fronts.
We will analyze more closely the changes in Q and T when fronts collide. Clearly, they change only at collisions. Let t_c be some fixed collision time. Assume then that the situation is as in Figure 6.4: N fronts ε_1, ..., ε_N colliding at some point (x_c, t_c), giving n waves ε̃_1, ..., ε̃_n in the exact solution. Let I be a small interval containing x_c, and let J be the complement of I. Then we may write

Q = Q(I) + Q(J) + Q(I, J),

where Q(I) and Q(J) indicate that the summation is restricted to those pairs of fronts that both lie in I and J, respectively. Similarly, Q(I, J) means that the summation is over those pairs where one front is in I and the other in J. Let τ_1 < t_c < τ_2 be two times, chosen such that no other collisions occur in the interval [τ_1, τ_2], such that no fronts other than ε_1, ..., ε_N are crossing the interval I at time τ_1, and such that only waves emanating from the collision at t_c, i.e., the waves denoted by ε̃_1, ..., ε̃_n, cross I at time τ_2. Let Q_i and T_i denote the values of Q and T at time τ_i. By construction Q_2(I) = 0 and Q_2(J) = Q_1(J), and hence

Q_2 − Q_1 = Q_2(I, J) − Q_1(I, J) − Q_1(I).   (6.20)

We now want to bound the increase in Q(I, J) from time τ_1 to τ_2. More precisely, we want to prove that

Q_2(I, J) ≤ Q_1(I, J) + O(Q_1(I) T_1(J)).   (6.21)

Let |ε ε̃_i| be a term in Q_2(I, J), i.e., (ε, ε̃_i) ∈ A at time τ_2. Note that by the interaction estimate (6.10) we have

ε̃_i = Σ_{ĵ=i} ε_j + O(Q_1(I)).

If the family of ε equals i, then (ε, ε_j) ∈ A if and only if sign(ε_j) = sign(ε̃_i). If the family of ε is different from i, then (ε, ε_j) ∈ A independently of the sign of ε_j. Writing

|ε̃_i| = Σ_{ĵ=i, sign(ε_j)=sign(ε̃_i)} |ε_j| − Σ_{ĵ=i, sign(ε_j)≠sign(ε̃_i)} |ε_j| + O(Q_1(I))

if the family of ε is i, and

|ε̃_i| ≤ Σ_{ĵ=i} |ε_j| + O(Q_1(I))
if the family of ε is different from i, we see that this implies (6.21). Inserting (6.21) into (6.20), and using the constant K to replace the order symbol, we obtain

Q_2 − Q_1 ≤ K Q_1(I) T_1 − Q_1(I) = Q_1(I) (K T_1 − 1) ≤ −(1/2) Q_1(I)   (6.22)

if T_1 is smaller than 1/(2K). We summarize the above discussion in the following lemma.

Lemma 6.2. Assume that T_1 ≤ 1/(2K). Then

Q_2 − Q_1 ≤ −(1/2) Q_1(I).

We will use this lemma to deduce that the total variation remains bounded if it initially is sufficiently small, or, in other words, if the initial data are sufficiently close to a constant state.

Lemma 6.3. If T is sufficiently small at t = 0, then there is some constant c independent of δ such that G = T + cQ is nonincreasing. We call G the Glimm functional. Consequently, T and T.V.(u^δ) are bounded independently of δ.

Proof. Let T_n and Q_n denote the values of T and Q, respectively, before the nth collision of fronts at t_n, with 0 < t_1 < t_2 < ···. Using the interaction estimate (6.10) we first infer that

T_{n+1} = Σ_j |ε̃_j| ≤ T_n + K Q_n(I).   (6.23)

Let c ≥ 2K and assume that T_1 + cT_1² ≤ 1/(2K). Assume furthermore that T + cQ is nonincreasing for all t less than t_n, and that T_n ≤ 1/(2K). Lemma 6.2 and (6.23) imply that

T_{n+1} + cQ_{n+1} ≤ T_n + K Q_n(I) + c (Q_n − (1/2) Q_n(I)) = T_n + cQ_n + (K − c/2) Q_n(I) ≤ T_n + cQ_n,
since K − c/2 ≤ 0. Consequently,

T_{n+1} ≤ T_{n+1} + cQ_{n+1} ≤ ··· ≤ T_1 + cQ_1 ≤ T_1 + cT_1² ≤ 1/(2K),
More precisely, if two fronts of generation n and m collide, at most two waves will retain their generation. If n + m is greater than N , then the remaining waves will be removed, but if n + m ≤ N , we use the full original solution. When we remove fronts, we let the function uδ be equal to the value it has to the left of the removed fronts, provided that the removed fronts are not the rightmost fronts in the solution of the Riemann problem. If the rightmost fronts are removed, then uδ is set equal to the value immediately to the right of the removed fronts. We will now show that there is only a finite number of fronts of generation less than or equal to N , and that for a fixed δ there is only a finite number of collisions, hence that the method will be what we called hyperfast. To see this, we first assume that T ≤ 1/(4K), such that the strength of each individual front is also bounded by 1/(4K). For later reference we also note that by (6.24) δ ≥ (4KT )N+1 .
(6.25)
6.1. Front Tracking for Systems
217
First we consider the number of fronts of first generation. This number can increase when first-generation rarefaction fronts split up into several rarefaction fronts. By the term rarefaction front we mean a front approximating a rarefaction wave. Note that by the construction of the approximate Riemann problem, the strength of each split rarefaction front is at least 34 δ. Remembering that T is uniformly bounded, we have that #(first-generation fronts) ≤ #(initial fronts) +
4T . 3δ
(6.26)
Thus the number of first-generation fronts is finite. This also means that there will be only a finite number of collisions between first-generation fronts. To see this, note that due to the strict hyperbolicity of the conservation law, each wave family will have speeds that are distinct; cf. (5.122)–(5.124). Hence each first-generation front will remain in a wedge in the (x, t) plane determined by the slowest and fastest speeds of that family. Eventually, all first-generation fronts will have interacted, and we can conclude that there can be only a finite number of collisions between first-generation fronts globally. Assuming now that for some m there will be only a finite number of fronts of generation i, for all i < m, and that there will only be a finite number of interactions between fronts of generation less than m, then, in analogy to (6.26), we find that

#(mth-generation fronts) ≤ #(collisions of jth- and ith-generation fronts with i + j = m) + 4T/(3δ).   (6.27)
Consequently, the number of fronts of generation less than or equal to $m$ is finite. We can now repeat the argument above showing that there are only finitely many collisions between first-generation fronts, replacing "first generation" by "generation less than or equal to $m$," and show that there are only finitely many collisions producing fronts of generation $m+1$. Thus we can conclude that there are only finitely many fronts of generation less than $N+1$, and that these interact only a finite number of times. Summing up our results so far, we have proved the following result.

Theorem 6.4. Let $f_j \in C^2(\mathbb{R}^n)$, $j = 1, \dots, n$. Let $D$ be a domain in $\mathbb{R}^n$ and consider the strictly hyperbolic equation $u_t + f(u)_x = 0$ in $D$. Assume that $f$ is such that each wave family is either genuinely nonlinear or linearly degenerate. Assume also that the function $u_0(x)$ has sufficiently small total variation. Then the front-tracking approximation $u^\delta(x,t)$, defined by (6.5), (6.6) and constructed by the front-tracking procedure described above, is well-defined. Furthermore, the method is hyperfast, i.e., it requires only a finite number of computations to define $u^\delta(x,t)$ for all $t$. The total variation of
$u^\delta$ is uniformly bounded, and there is a finite constant $C$ such that
$$\mathrm{T.V.}\left(u^\delta(\,\cdot\,,t)\right) \le C \quad \text{for all } t > 0 \text{ and all } \delta > 0.$$
(i) Given a one-dimensional strictly hyperbolic system of conservation laws,
$$u_t + f(u)_x = 0, \qquad u|_{t=0} = u_0, \eqno(6.28)$$
where $u_0$ has small total variation.
(ii) Approximate the initial data $u_0$ by a piecewise constant function $u_0^\delta$.
(iii) Approximate the solution of each Riemann problem by a piecewise constant function, sampling points a distance $\delta$ apart on the rarefaction curves and using the exact shocks and contact discontinuities.
(iv) Track the discontinuities ("fronts").
(v) Solve new Riemann problems as in (iii).
(vi) Eliminate weak waves ("high generation").
(vii) Continue to solve Riemann problems approximately as in (iii), eliminating weak waves. Denote the approximate solution by $u^\delta$.
(viii) As $\delta \to 0$, the approximate solution $u^\delta$ converges to $u$, the solution of (6.28).
Front tracking in a box (systems).
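The event-driven bookkeeping of steps (iv)–(vii) can be illustrated by an informal sketch (not the book's algorithm). We use a scalar model flux $f(u) = u^2/2$ with decreasing data, so that every Riemann problem is solved by a single shock with the Rankine–Hugoniot speed $(u_l + u_r)/2$; a systems code would use the same loop with the approximate Riemann solver of step (iii) and generation-based removal of weak fronts.

```python
# Schematic front tracking for the scalar flux f(u) = u^2/2 with decreasing
# data: every Riemann problem is a single shock moving with speed (ul+ur)/2.
# The loop mirrors steps (iv)-(vii): find the next collision, solve the
# Riemann problem it defines (here: merge two shocks into one), continue.

def rh_speed(ul, ur):
    return 0.5 * (ul + ur)          # Rankine-Hugoniot speed for f(u) = u^2/2

def front_track(states, positions, t_end):
    """states: constant values u_0,...,u_M (decreasing); positions: x_1<...<x_M."""
    t = 0.0
    fronts = [(x, rh_speed(states[i], states[i + 1]))
              for i, x in enumerate(positions)]
    while True:
        # locate the earliest collision among neighbouring fronts
        coll = None
        for i in range(len(fronts) - 1):
            (x1, s1), (x2, s2) = fronts[i], fronts[i + 1]
            if s1 > s2:                          # the two fronts approach
                tc = t + (x2 - x1) / (s1 - s2)
                if tc <= t_end and (coll is None or tc < coll[1]):
                    coll = (i, tc)
        if coll is None:
            break
        i, tc = coll
        fronts = [(x + s * (tc - t), s) for (x, s) in fronts]  # advance to tc
        t = tc
        states.pop(i + 1)                        # merge the two shocks
        fronts[i:i + 2] = [(fronts[i][0], rh_speed(states[i], states[i + 1]))]
    return states, [(x + s * (t_end - t), s) for (x, s) in fronts]

states, fronts = front_track([2.0, 1.0, 0.0], [0.0, 0.5], t_end=2.0)
```

Here the two shocks collide at $t = 0.5$ and merge into a single shock of speed $1$, after which no further collisions occur, illustrating why the method terminates after finitely many events.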
6.2 Convergence

The Devil is in the details.
English proverb
At this point we could proceed as in the scalar case, by showing that front tracking is stable with respect to $L^1$ perturbations of the initial data. This would then imply that the sequence of approximations $\{u^\delta\}$ has a unique limit as $\delta \to 0$. For systems, however, this analysis is rather complicated. In this section we shall instead prove that the sequence $\{u^\delta\}$ is compact and that any limit (there is really only one) is a weak solution. The reader willing to accept this, or primarily interested in front tracking, may skip ahead to the next chapter.
To show that a subsequence of the sequence $\{u^\delta\}_{\delta>0}$ converges in $L^1_{\mathrm{loc}}(\mathbb{R}\times[0,T])$ we use Theorem A.8 from Appendix A. We have already shown that $u^\delta(x,t)$ is bounded, and we have that
$$\int_{\mathbb{R}} \left|u^\delta(x+\rho,t) - u^\delta(x,t)\right| dx \le \rho\, \mathrm{T.V.}\left(u^\delta(\,\cdot\,,t)\right) \le C\rho,$$
for some $C$ independent of $\delta$. Hence, by Theorem A.8, to conclude that a subsequence of $\{u^\delta\}$ converges, we must show that
$$\int_{-R}^{R} \left|u^\delta(x,t) - u^\delta(x,s)\right| dx \le C(t-s),$$
where $t \ge s \ge 0$, for any $R>0$, and for some $C$ independent of $\delta$. Since $u^\delta$ is bounded, we have that (recall that $\lambda_1 < \cdots < \lambda_n$)
$$L := \max_{|u| \le \sup|u^\delta|} \left\{ |\lambda_1(u)|, |\lambda_n(u)| \right\}$$
is bounded. Let $t_i$ and $t_{i+1}$ be two consecutive collision times. For $t \in [t_i, t_{i+1}]$ we write $u^\delta$ in the form
$$u^\delta(x,t) = \sum_{k=1}^{N_i} \left(u^i_{k-1} - u^i_k\right) H\!\left(x - x^i_k(t)\right) + u^i_{N_i}, \eqno(6.29)$$
where $x^i_k(t)$ denotes the position of the $k$th front from the left, and $H$ the Heaviside function. Here $u^\delta(x,t) = u^i_k$ for $x$ between $x^i_k$ and $x^i_{k+1}$. Assume now that $t \in [t_i, t_{i+1}]$ and $s \in [t_j, t_{j+1}]$, where $j \le i$ and $s \le t$. Then
$$\begin{aligned}
\int_{\mathbb{R}} \left|u^\delta(x,t) - u^\delta(x,t_i)\right| dx
&= \int_{\mathbb{R}} \left| \int_{t_i}^{t} \frac{d}{d\tau}\, u^\delta(x,\tau)\, d\tau \right| dx \\
&\le \int_{t_i}^{t} \sum_{k=1}^{N_i} \left|u^i_{k-1} - u^i_k\right| \left|\dot x^i_k(\tau)\right| d\tau \\
&\le L\, (t - t_i)\, \mathrm{T.V.}\left(u^\delta(\,\cdot\,,t)\right) \\
&\le LC\, (t - t_i),
\end{aligned}$$
since $|\dot x^i_k| \le L$. Similarly, we show that
$$\int_{\mathbb{R}} \left|u^\delta(x,t_i) - u^\delta(x,t_{j+1})\right| dx \le LC\,(t_i - t_{j+1}) \quad \text{if } j+1 < i,$$
and
$$\int_{\mathbb{R}} \left|u^\delta(x,t_{j+1}) - u^\delta(x,s)\right| dx \le LC\,(t_{j+1} - s).$$
Therefore,
$$\left\|u^\delta(\,\cdot\,,t) - u^\delta(\,\cdot\,,s)\right\|_1 \le C\,|t-s|,$$
for some constant $C$ independent of $t$ and $\delta$. Hence, we can use Theorem A.8 to conclude that there exists a function $u(x,t)$ and a subsequence $\{\delta_j\} \subset \{\delta\}$ such that $u^{\delta_j} \to u(x,t)$ in $L^1_{\mathrm{loc}}$ as $j \to \infty$.
As in the scalar case, it is by no means obvious that the limit function $u(x,t)$ is a weak solution of the original initial value problem (6.1). For a single conservation law this was not difficult to show, using the fact that the approximations were weak solutions of approximate problems. This is not so in the case of systems, so we must analyze how close the approximations are to being weak solutions.
There are three sources of error in the front-tracking approximation. Firstly, the initial data are approximated by a step function. Secondly, there is the approximation of rarefaction waves by step functions, and finally, weak waves are ignored in the solution of some Riemann problems. We start the error analysis by estimating how much we "throw away" by ignoring fronts with a generation larger than $N$. To do this, we estimate the total variation of the fronts belonging to a given generation. Let $G_m$ denote the set of all fronts of generation $m$, and let $T_m$ denote the sum of the strengths of the fronts of generation $m$. Thus
$$T_m = \sum_{\epsilon_j \in G_m} |\epsilon_j|.$$
Clearly,
$$T = \sum_{m=1}^{N} T_m.$$
Lemma 6.5. We have that
$$T_m \le 2\,\frac{(2m-3)!}{(m-2)!\,m!}\, K^{m-1} T^m \le C\,(4KT)^m$$
for some constant $C$. In particular, for $m = N+1$, $T_{N+1} \le C\delta$.

Proof. Clearly,
$$T_{m+1} = \sum_{j=1}^{m} \sum_{\epsilon_l \in G_{m+1-j}} \sum_{\epsilon_r \in G_j} \mathcal{O}\left(|\epsilon_l|\,|\epsilon_r|\right)
\le K \sum_{j=1}^{m} \sum_{\epsilon_l \in G_{m+1-j}} \sum_{\epsilon_r \in G_j} |\epsilon_l|\,|\epsilon_r|
= K \sum_{j=1}^{m} T_{m+1-j}\, T_j.$$
By introducing $\tilde T_m = T_m/(T^m K^{m-1})$, we see that $\tilde T_m$ satisfies
$$\tilde T_{m+1} \le \sum_{j=1}^{m} \tilde T_{m+1-j}\, \tilde T_j.$$
By choosing $T$ sufficiently small, we may assume that $\tilde T_1 \le 1$. Furthermore, we see that $\tilde T_m \le a_m$, where
$$a_1 = 1, \qquad a_m = \sum_{j=1}^{m-1} a_{m-j}\, a_j, \quad m = 2, 3, \dots. \eqno(6.30)$$
If we define the function
$$y(x) = \sum_{m=1}^{\infty} a_m x^m,$$
then, using (6.30),
$$y^2 = \sum_{m=2}^{\infty} \Big( \sum_{j=1}^{m-1} a_{m-j}\, a_j \Big) x^m = y - x,$$
and we infer that (recall that $y(0) = 0$)
$$y(x) = \frac12\left(1 - \sqrt{1 - 4x}\right) = \sum_{m=1}^{\infty} (-1)^{m+1} \binom{1/2}{m} 2^{2m-1} x^m,$$
which implies
$$a_m = (-1)^{m+1} \binom{1/2}{m} 2^{2m-1}.$$
We may rewrite this as
$$a_m = (-1)^{m+1}\, 2^{2m-1}\, \frac{\frac12\left(\frac12 - 1\right)\cdots\left(\frac12 - m + 1\right)}{m!} = 2\, \frac{(2m-3)!}{m!\,(m-2)!}.$$
To estimate $a_m$ as $m \to \infty$, we apply Stirling's formula [144, p. 253],
$$n! = \sqrt{2\pi}\, \exp\Big( \big(n + \tfrac12\big)\ln(n+1) - (n+1) + \frac{\theta}{12(n+1)} \Big), \qquad 0 \le \theta \le 1.$$
We obtain
$$a_m = 2\, \frac{(2m-3)!}{m!\,(m-2)!} = \mathcal{O}(1)\, 4^m m^{-3/2}.$$
Using $T_m = \tilde T_m T^m K^{m-1} \le a_m T^m K^{m-1}$, the lemma follows.
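As a quick numerical sanity check (not part of the text), the recursion (6.30) can be compared with the closed form $a_m = 2(2m-3)!/(m!\,(m-2)!)$ derived in the proof; the $a_m$ are in fact the Catalan numbers.

```python
# Check the recursion (6.30), a_1 = 1, a_m = sum_{j=1}^{m-1} a_{m-j} a_j,
# against the closed form a_m = 2 (2m-3)! / (m! (m-2)!) for m >= 2.
from math import factorial

def a_recursive(M):
    a = [0, 1]                                   # a[1] = 1; index 0 unused
    for m in range(2, M + 1):
        a.append(sum(a[m - j] * a[j] for j in range(1, m)))
    return a

def a_closed(m):
    if m == 1:
        return 1
    return 2 * factorial(2 * m - 3) // (factorial(m) * factorial(m - 2))

a = a_recursive(12)
assert [a[m] for m in range(1, 13)] == [a_closed(m) for m in range(1, 13)]
```

The sequence starts $1, 1, 2, 5, 14, 42, \dots$, consistent with the growth rate $\mathcal{O}(4^m m^{-3/2})$ used in the lemma.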
Equipped with this lemma, we can start to estimate how well $u^\delta$ approximates a weak solution of (6.1). First we shall determine how far the front-tracking approximation is from being a weak solution between two fixed consecutive collision times $t_j$ and $t_{j+1}$. For convenience we temporarily call these $t_1$ and $t_2$. Recall that $u$ is a weak solution in a strip $[t,s]$ if
$$I_t^s(u) := \int_t^s \!\!\int \left( u\varphi_t + f(u)\varphi_x \right) dx\, dt - \int u(x,s)\varphi(x,s)\, dx + \int u(x,t)\varphi(x,t)\, dx = 0 \eqno(6.31)$$
for every test function $\varphi$. In the following we shall assume that all bounded quantities are bounded by the (universal) constant $M$. Thus $\varphi$, $\varphi_x$, $\varphi_t$, $u^\delta$, etc., are all bounded in absolute value by $M$. Let $s_1 = t_1$, and let $v_1(x,s)$ denote the weak solution of
$$v_s + f(v)_x = 0, \qquad v(x, s_i) = u^\delta(x, s_i), \eqno(6.32)$$
with $i = 1$. For $(s - s_1)$ small, $v_1$ can be calculated by solving the Riemann problems defined at the jumps of $u^\delta$. The function $v_1$ can be defined until these interact at some $s_2 > s_1$. If these waves do not interact until $s = t_2$, $v_1$ can be defined in the whole strip $[t_1, t_2]$, and we set $s_2 = t_2$. If waves in $v_1$ interact before $t_2$, then for $s > s_2$ we let $v_2$ be the solution of (6.32) with $i = 2$. Observe that we use the function $u^\delta$ as initial data at time $s_2$. Continuing in this way, we fill the whole strip $[t_1, t_2]$ with smaller strips $[s_i, s_{i+1})$, in which we have defined $v_i$. Let $v$ denote the function that equals $v_i$ in the interval $[s_i, s_{i+1})$. We have that $u^\delta(x, s_i) = v(x, s_i)$ for all $i$. Now
Figure 6.5. Waves of $v$ (dotted lines) and fronts of $u^\delta$ (thick lines).
let $v^\delta$ be the approximation to $v$ obtained by front tracking in the strip $[s_i, s_{i+1})$ with data as in (6.32). Since no fronts of $v^\delta$ will collide in this strip, the fronts in $v^\delta$ approximate all waves in the solution of the Riemann problems at $s_i$. Furthermore, we have that $v(x,t) = v^\delta(x,t)$ except when $(x,t)$ is in a rarefaction fan. For such $(x,t)$, $|v - v^\delta| = \mathcal{O}(\delta)$ by the construction of $v^\delta$. We wish to estimate
$$\int \left| v(x,t) - v^\delta(x,t) \right| dx \eqno(6.33)$$
for $t$ between $s_i$ and $s_{i+1}$. This integral will be a sum of integrals across the rarefaction fans of $v$. If we have a rarefaction fan with a left state $v_l$ and right state $v_r$, then the integral across this fan will be a sum of integrals across each step of $v^\delta$. The number of such steps is
$$\frac{|v_l - v_r|}{\mathcal{O}(\delta)},$$
and the width of each step is
$$(t - s_i)\, \Delta\lambda = (t - s_i)\, \mathcal{O}(\delta),$$
where $\Delta\lambda$ is the difference in characteristic speed on both sides of a front in $v^\delta$ approximating the rarefaction wave in $v$. Then the integral (6.33) can be estimated as follows:
$$\int \left| v(x,t) - v^\delta(x,t) \right| dx = \sum_j \frac{\left|v_l^j - v_r^j\right|}{\mathcal{O}(\delta)}\, (t - s_i)\, \mathcal{O}(\delta)\, \mathcal{O}(\delta) \le (t - s_i)\, \mathrm{T.V.}\left(u^\delta\right) \mathcal{O}(\delta) = (t - s_i)\, \mathcal{O}(\delta), \eqno(6.34)$$
since $\sum_j |v_l^j - v_r^j| \le \mathrm{T.V.}(u^\delta)$. We also wish to compare $u^\delta$ and $v^\delta$ in this strip. The difference between these is that some fronts that are tracked in $v^\delta$ are ignored in $u^\delta$, since all fronts in $v^\delta$ will be first-generation fronts. Consequently, $v^\delta$ is different from $u^\delta$ in a finite number of wedges emanating from the discontinuities in $u^\delta(\,\cdot\,, s_i)$. Then, using Lemma 6.5,
$$\int \left| v^\delta(x,t) - u^\delta(x,t) \right| dx \le C\,(t - s_i) \sum_{\epsilon_j \in G_{N+1}} |\epsilon_j| \le C\,(t - s_i)\, T_{N+1} = (t - s_i)\, \mathcal{O}(\delta), \eqno(6.35)$$
when $T \le 1/(4K)$ and $N$ is chosen as in (6.24). Using (6.34) and (6.35), we obtain
$$\int \left| u^\delta - v \right| dx = (t - s_i)\, \mathcal{O}(\delta). \eqno(6.36)$$
We can use the result (6.36) to estimate $I_{s_i}^{s_{i+1}}(u^\delta)$. Since $I_{s_i}^{s_{i+1}}(v) = 0$, we have that
$$\begin{aligned}
I_{s_i}^{s_{i+1}}(u^\delta) &= I_{s_i}^{s_{i+1}}(u^\delta) - I_{s_i}^{s_{i+1}}(v) \\
&= \int_{s_i}^{s_{i+1}}\!\!\int \left( (u^\delta - v)\varphi_t + \left(f(u^\delta) - f(v)\right)\varphi_x \right) dx\, dt - \int \left( u^\delta(x,s_{i+1}) - v(x,s_{i+1}) \right)\varphi(x,s_{i+1})\, dx \\
&\le M \left( \int_{s_i}^{s_{i+1}}\!\!\int \left( \left|u^\delta - v\right| + \left|f(u^\delta) - f(v)\right| \right) dx\, dt + \int \left|u^\delta(x,s_{i+1}) - v(x,s_{i+1})\right| dx \right) \\
&\le M \left( \int_{s_i}^{s_{i+1}}\!\!\int \left( \left|u^\delta - v\right| + \mathcal{O}\!\left(\left|u^\delta - v\right|\right) \right) dx\, dt + \int \left|u^\delta(x,s_{i+1}) - v(x,s_{i+1})\right| dx \right) \\
&\le \mathcal{O}(\delta)\left( (s_{i+1} - s_i)^2 + (s_{i+1} - s_i) \right).
\end{aligned}$$
Using this, it is not difficult to estimate $I_{t_j}^{t_{j+1}}(u^\delta)$, where $t_j$ and $t_{j+1}$ are consecutive collision times of $u^\delta$, because
$$\left| I_{t_j}^{t_{j+1}}(u^\delta) \right| \le \sum_{i=1}^{\infty} \left| I_{s_i}^{s_{i+1}}(u^\delta) \right| = \sum_{i=1}^{\infty} \mathcal{O}(\delta)\left( (s_{i+1} - s_i) + (s_{i+1} - s_i)^2 \right). \eqno(6.37)$$
Since $t_{j+1} - t_j = \sum_{i=1}^{\infty} (s_{i+1} - s_i)$, the second term on the rightmost part of (6.37) is less than $(t_{j+1} - t_j)^2$, and we get
$$\left| I_{t_j}^{t_{j+1}}(u^\delta) \right| = \mathcal{O}(\delta)\left( (t_{j+1} - t_j) + (t_{j+1} - t_j)^2 \right). \eqno(6.38)$$
Finally, to show that the limit $u$ is a weak solution, we observe that since $u^\delta$ is bounded and $u^\delta \to u$ in $L^1_{\mathrm{loc}}$, $f(u^\delta)$ will converge to $f(u)$ in $L^1_{\mathrm{loc}}$. Thus
$$\lim_{\delta \to 0} I_0^T\left(u^\delta\right) = I_0^T(u).$$
By (6.38) we get
$$\left| I_0^T\left(u^\delta\right) \right| = \sum_j \mathcal{O}(\delta)\left( (t_{j+1} - t_j) + (t_{j+1} - t_j)^2 \right) \le \mathcal{O}(\delta)\left(T + T^2\right),$$
which shows that $I_0^T(u) = 0$, and accordingly that $u$ is a weak solution.
We can actually extract some more information about the limit $u$ by examining the approximate solutions $u^\delta$. More precisely, we would like to show that isolated jump discontinuities of $u$ satisfy the Lax entropy condition
$$\lambda_m(u_l) \ge \sigma \ge \lambda_m(u_r) \eqno(6.39)$$
for some $m$ between $1$ and $n$, where $\sigma$ is the speed of the discontinuity, and
$$u_l = \lim_{y \to x-} u(y,t) \quad \text{and} \quad u_r = \lim_{y \to x+} u(y,t).$$
To show this we assume that $u$ has an isolated discontinuity at $(x,t)$, with left and right limits $u_l$ and $u_r$. We can enclose $(x,t)$ by a trapezoid $E_\delta$ with corners defined as follows. Start by finding points
$$x^k_{\delta,l} \to x-, \qquad x^k_{\delta,r} \to x+, \qquad t^1_\delta \uparrow t, \qquad \text{and} \qquad t^2_\delta \downarrow t,$$
for $k = 1,2$ as $\delta \to 0$. We let $E_\delta$ denote the trapezoid with corners $(x^1_{\delta,l}, t^1_\delta)$, $(x^1_{\delta,r}, t^1_\delta)$, $(x^2_{\delta,r}, t^2_\delta)$, and $(x^2_{\delta,l}, t^2_\delta)$. Recall that convergence in $L^1_{\mathrm{loc}}$ implies pointwise convergence almost everywhere, so we choose these points such that
$$u^\delta(x^1_{\delta,l}, t^1_\delta),\ u^\delta(x^2_{\delta,l}, t^2_\delta) \to u_l \quad \text{and} \quad u^\delta(x^1_{\delta,r}, t^1_\delta),\ u^\delta(x^2_{\delta,r}, t^2_\delta) \to u_r$$
as $\delta \to 0$. We can also choose points such that the diagonals of $E_\delta$ have slopes not too different from $\sigma$; precisely,
$$\left| \frac{x^1_{\delta,l} - x^2_{\delta,r}}{t^1_\delta - t^2_\delta} - \sigma \right| \le \varepsilon(\delta) \quad \text{and} \quad \left| \frac{x^1_{\delta,r} - x^2_{\delta,l}}{t^1_\delta - t^2_\delta} - \sigma \right| \le \varepsilon(\delta), \eqno(6.40)$$
where $\varepsilon(\delta) \to 0$ as $\delta \to 0$. Next, for $k = 1,2$ set
$$M^k_\delta = \frac{\sum |\epsilon_i|}{x^k_{\delta,r} - x^k_{\delta,l}},$$
where the sum is over all rarefaction fronts in the interval $[x^k_{\delta,l}, x^k_{\delta,r}]$. If $M^k_\delta$ is unbounded as $\delta \to 0$, then $u$ contains a centered rarefaction wave at $(x,t)$, i.e., a rarefaction wave starting at $(x,t)$. In this case the discontinuity would not be isolated; hence $M^k_\delta$ remains bounded as $\delta \to 0$. Next observe that
$$\frac{\left| u^\delta(x^k_{\delta,l}, t^k_\delta) - u^\delta(x^k_{\delta,r}, t^k_\delta) \right|}{x^k_{\delta,r} - x^k_{\delta,l}} \le C\, \frac{\sum |\text{rarefaction fronts}| + \sum |\text{shock fronts}|}{x^k_{\delta,r} - x^k_{\delta,l}} = C M^k_\delta + C\, \frac{\sum |\text{shock fronts}|}{x^k_{\delta,r} - x^k_{\delta,l}}.$$
Here the sums are over fronts crossing the interval $[x^k_{\delta,l}, x^k_{\delta,r}]$. Since the fraction on the left is unbounded as $\delta \to 0$ (its numerator tends to $|u_l - u_r| > 0$ while the interval shrinks), there must be shock fronts crossing the top and bottom of $E_\delta$ for all $\delta > 0$. Furthermore, since the discontinuity is isolated, the total strength of all fronts crossing the left and right sides of $E_\delta$ must tend to zero as $\delta \to 0$. Next we define a shock line as a sequence of shock fronts of the same family in $u^\delta$. Assume that a shock line has been defined for $t < t_n$, where $t_n$ is a collision time, and that in the interval $[t_{n-1}, t_n)$ it consists of the shock
front $\alpha$ (the front's label was lost in the original scan; we call it $\alpha$). In the interval $[t_n, t_{n+1})$, this shock line continues as the front $\alpha$ if $\alpha$ does not collide at $t_n$. If $\alpha$ collides at $t_n$, and the approximate solution of the Riemann problem determined by this collision contains an approximate shock front of the same family as $\alpha$, then the shock line continues as this front. Otherwise, it stops at $t_n$. Note that we can associate a unique family to each shock line.
From the above reasoning it follows that for all $\delta$ there must be shock lines entering $E_\delta$ through the bottom that do not exit $E_\delta$ through the sides; hence such shock lines must exit $E_\delta$ through the top. Assume that the leftmost of these shock lines enters $E_\delta$ at $y^1_{\delta,l}$ and leaves $E_\delta$ at $y^2_{\delta,l}$. Similarly, the rightmost of the shock lines enters $E_\delta$ at $y^1_{\delta,r}$ and leaves $E_\delta$ at $y^2_{\delta,r}$. Set
$$v^k_{\delta,l} = u^\delta\left(y^k_{\delta,l}-,\, t^k_\delta\right) \quad \text{and} \quad v^k_{\delta,r} = u^\delta\left(y^k_{\delta,r}+,\, t^k_\delta\right).$$
Between $y^k_{\delta,l}$ and $x^k_{\delta,l}$, the function $u^\delta$ varies over rarefaction fronts or over shock lines that must enter or leave $E_\delta$ through the left or right side. Since the discontinuity is isolated, the total strength of such waves must tend to zero as $\delta \to 0$. Because $u^\delta(x^k_{\delta,l}, t^k_\delta) \to u_l$ as $\delta \to 0$, we have that $v^k_{\delta,l} \to u_l$ as $\delta \to 0$. Similarly, $v^k_{\delta,r} \to u_r$. Since $\varepsilon(\delta) \to 0$, by strict hyperbolicity, the family of all shock lines not crossing the left or right side of $E_\delta$ must be the same, say $m$. The speed $\tilde\sigma$ of an approximate $m$-shock front with left state $v^k_{\delta,l}$ satisfies
$$\lambda_{m-1}\left(v^k_{\delta,l}\right) < \tilde\sigma + \mathcal{O}(\delta) < \lambda_m\left(v^k_{\delta,l}\right). \eqno(6.41)$$
Similarly, an approximate $m$-shock front with right state $v^k_{\delta,r}$ and speed $\hat\sigma$ satisfies
$$\lambda_m\left(v^k_{\delta,r}\right) < \hat\sigma + \mathcal{O}(\delta) < \lambda_{m+1}\left(v^k_{\delta,r}\right). \eqno(6.42)$$
Then (6.39) follows by noting that $\tilde\sigma$ and $\hat\sigma$ both tend to $\sigma$ as $\delta \to 0$, and then letting $\delta \to 0$ in (6.41) and (6.42).
To summarize the results of this chapter we have the following theorem:

Theorem 6.6. Consider the strictly hyperbolic system of equations
$$u_t + f(u)_x = 0, \qquad u(x,0) = u_0(x),$$
and assume that f ∈ C 2 is such that each characteristic wave family is either linearly degenerate or genuinely nonlinear. If T.V. (u0 ) is sufficiently small, there exists a global weak solution u(x, t) to this initial value problem. This solution may be constructed by the front-tracking algorithm described in Section 6.1. Furthermore, if u has an isolated jump discontinuity at a point (x, t), then the Lax entropy condition (6.39) holds for some m between 1 and n. We have seen that for each δ > 0 there are only a finite number of collisions between the fronts in uδ for all t > 0. Hence there exists a finite
time $T_\delta$ such that for $t > T_\delta$, the fronts in $u^\delta$ will move apart and not interact. This has some similarity to the solution of the Riemann problem. One can intuitively make the change of variables $t \to t/\varepsilon$, $x \to x/\varepsilon$ without changing the equation, but the initial data are changed to $u_0(x/\varepsilon)$. Sending $\varepsilon \to 0$, or alternatively $t \to \infty$, we see that $u$ solves the Riemann problem
$$u_t + f(u)_x = 0, \qquad u(x,0) = \begin{cases} u_L & \text{for } x < 0, \\ u_R & \text{for } x \ge 0, \end{cases} \eqno(6.43)$$
where $u_L = \lim_{x\to-\infty} u_0(x)$ and $u_R = \lim_{x\to\infty} u_0(x)$. Thus, in some sense, for very large times $u$ should solve this Riemann problem. Next, we shall show that this (very imprecise) statement is true, but first we need some more information about $u^\delta$.
For $t > T_\delta$, the function $u^\delta$ will consist of a finite number of constant states, say $u^\delta_i$, for $i = 0, \dots, M$. If $u^\delta_{i-1}$ is connected with $u^\delta_i$ by a wave of a different family from the one connecting $u^\delta_i$ to $u^\delta_{i+1}$, we call $u^\delta_i$ a real state, and we let $\{\bar u_i\}_{i=0}^{N}$ be the set of real states of $u^\delta$. Since the discontinuities of $u^\delta$ are moving apart, we must have
$$N \le n, \eqno(6.44)$$
by strict hyperbolicity. Furthermore, to each pair $(\bar u_{i-1}, \bar u_i)$ we can associate a family $k_i$ such that $1 \le k_i < k_{i+1} \le n$, and we define $k_0 = 0$ and $k_{N+1} = n$. We write the solution of the Riemann problem with left and right data $\bar u_0$ and $\bar u_N$, respectively, as $u$, and define $\epsilon_j$, $j = 1, \dots, n$, by
$$\bar u_N = W_n(\epsilon_n) W_{n-1}(\epsilon_{n-1}) \cdots W_1(\epsilon_1)\, \bar u_0,$$
and define the intermediate states
$$u_0 = \bar u_0 \qquad \text{and} \qquad u_j = W_j(\epsilon_j)\, u_{j-1} \quad \text{for } j = 1, \dots, n.$$
Now we claim that
$$|u_j - \bar u_i| \le \mathcal{O}(\delta), \qquad \text{for } k_i \le j < k_{i+1}. \eqno(6.45)$$
If $N = 1$, this clearly holds, since in this case $u^\delta$ consists of two states for $t > T_\delta$, and by construction of $u^\delta$, the pair $(\bar u_0, \bar u_1)$ is the solution of the same Riemann problem as $u$ is, but possibly with waves of a high generation ignored. Now assume that (6.45) holds for some $N \ge 1$. We shall show that it holds for $N+1$ as well. Let $v$ be the solution of the Riemann problem with initial data given by
$$v(x,0) = \begin{cases} \bar u_0 & \text{for } x < 0, \\ \bar u_N & \text{for } x \ge 0, \end{cases}$$
and let $w$ be the solution of the Riemann problem with initial data
$$w(x,0) = \begin{cases} \bar u_N & \text{for } x < 0, \\ \bar u_{N+1} & \text{for } x \ge 0. \end{cases}$$
We denote the waves in $v$ and $w$ by $\epsilon^v_j$ and $\epsilon^w_j$, respectively. Then by the induction assumption,
$$\left|\bar\epsilon_i - \epsilon^v_{k_i}\right| \le \mathcal{O}(\delta), \quad \left|\bar\epsilon_{N+1} - \epsilon^w_{k_{N+1}}\right| \le \mathcal{O}(\delta), \quad \sum_{i \notin \{k_1, \dots, k_N\}} \left|\epsilon^v_i\right| \le \mathcal{O}(\delta), \quad \text{and} \quad \sum_{i \ne k_{N+1}} \left|\epsilon^w_i\right| \le \mathcal{O}(\delta),$$
where $\bar\epsilon_i$ denotes the strength of the wave separating $\bar u_{i-1}$ and $\bar u_i$. Notice now that $u$ can be viewed as the interaction of $v$ and $w$; hence, by the interaction estimate,
$$\left|\epsilon_i - \epsilon^v_i\right| \le \mathcal{O}(\delta) \ \text{ for } i \le k_N, \qquad \text{and} \qquad \left|\epsilon_{k_{N+1}} - \epsilon^w_{k_{N+1}}\right| \le \mathcal{O}(\delta).$$
Thus (6.45) holds for $N+1$ real states, and therefore for any $N \le n$. Now we can conclude that for $u = \lim_{\delta\to 0} u^\delta$ the following result holds.

Theorem 6.7. Assume that $u_L = \lim_{x\to-\infty} u_0(x)$ and $u_R = \lim_{x\to\infty} u_0(x)$ exist. Then as $t \to \infty$, $u$ will consist of a finite number of states $\{u_i\}_{i=0}^{N}$, where $N \le n$. These states are the intermediate states in the solution of the Riemann problem (6.43), and they will be separated by the same waves as the corresponding states in the solution of the Riemann problem.

Proof. By the calculations preceding the theorem, for $t > T_\delta$ we can define a function $\bar u^\delta$ that consists of a number of constant states separated by elementary waves (shocks, rarefactions, or contact discontinuities) such that these constant states are the intermediate states in the solution of the Riemann problem defined by $\lim_{x\to-\infty} u^\delta(x,t)$ and $\lim_{x\to\infty} u^\delta(x,t)$, and such that for any bounded interval $I$,
$$\left\| \bar u^\delta(\,\cdot\,,t) - u^\delta(\,\cdot\,,t) \right\|_{L^1(I)} \to 0 \quad \text{as } \delta \to 0.$$
Then for $t > T_\delta$,
$$\left\| u(\,\cdot\,,t) - \bar u^\delta(\,\cdot\,,t) \right\|_{L^1(I)} \le \left\| u(\,\cdot\,,t) - u^\delta(\,\cdot\,,t) \right\|_{L^1(I)} + \left\| \bar u^\delta(\,\cdot\,,t) - u^\delta(\,\cdot\,,t) \right\|_{L^1(I)}.$$
Set $t = T_\delta + 1$, and let $\delta \to 0$. Then both terms on the right tend to zero, and $\bar u_0 \to u_L$ and $\bar u_N \to u_R$. Hence the theorem holds. Note, however, that $u$ does not necessarily equal some $\bar u^\delta$ in finite time.

Remark 6.8. Here is another way to interpret heuristically the asymptotic result for large times. Consider the set $\{u^\delta(x,t) \mid x \in \mathbb{R}\}$ in phase space. There is a certain ordering of that set given by the ordering of $x$. As $\delta \to 0$, this set will approach some set $\{u(x,t) \mid x \in \mathbb{R}\}$.
Theorem 6.7 states that as $t \to \infty$ this set approaches the set consisting of the states in the solution of the Riemann problem (6.43), in the same order. No statement is made as to how fast this limit is attained. In particular, if $u_L = u_R = 0$, then $u(x,t) \to 0$ for almost all $x$ as $t \to \infty$.
6.3 Notes

The fundamental result concerning existence of solutions of the general Cauchy problem is due to Glimm [52], where the fundamental approach was given, and where all the basic estimates can be found. Glimm's result for small initial data uses the random choice method. The random element is not really essential to the random choice method, as was shown by Liu in [102]. The existence result has been extended to some 2 × 2 systems, allowing for initial data with large total variation; see [109, 131]. These systems have the rather special property that the solution of the Riemann problem is translation-invariant in phase space. Our proof of the interaction estimate (6.10) is a modified version of Yong's argument [146].
Front tracking for systems was first used by DiPerna in [43]. In this work a front-tracking process was presented for 2 × 2 systems, and shown to be well-defined and to converge to a weak solution. Although DiPerna states that "the method is adaptable for numerical calculation," numerical examples of front tracking were first presented by Swartz and Wendroff in [133], in which front tracking was used as a component in a numerical code for solving problems of gas dynamics.
The front tracking presented here contains elements from the front-tracking methods of Bressan [15] and, in particular, of Risebro [123]. In [123] the generation concept was not used. Instead, one "looked ahead" to see whether a buildup of collision times was about to occur. In [5] Baiti and Jenssen showed that one does not really need to use the generation concept, nor to look ahead, in order to decide which fronts to ignore.
The large-time behavior of $u$ was shown to hold for the limit of the Glimm scheme by Liu in [103]. The front-tracking method presented in [123] has been used as a numerical method; see Risebro and Tveito [124, 125] and Langseth [91, 92] for examples of problems in one space dimension.
In several space dimensions, front tracking has also been used in conjunction with dimensional splitting with some success for systems; see [68] and [99].
Exercises

6.1 Assume that $f\colon \mathbb{R}^n \to \mathbb{R}^n$ is three times differentiable, with bounded derivatives. We study the solution of the system of ordinary differential equations
$$\frac{dx}{dt} = f(x), \qquad x(0) = x_0.$$
We write the unique solution as $x(t) = \exp(tf)\,x_0$.
a. Show that
$$\exp(\varepsilon f)\,x_0 = x_0 + \varepsilon f(x_0) + \frac{\varepsilon^2}{2}\, df(x_0)\, f(x_0) + \mathcal{O}\left(\varepsilon^3\right).$$
b. If $g$ is another vector field with the same properties as $f$, show that
$$\exp(\varepsilon g)\exp(\varepsilon f)\,x_0 = x_0 + \varepsilon\left(f(x_0) + g(x_0)\right) + \frac{\varepsilon^2}{2}\left(df(x_0)\, f(x_0) + dg(x_0)\, g(x_0)\right) + \varepsilon^2\, dg(x_0)\, f(x_0) + \mathcal{O}\left(\varepsilon^3\right).$$
c. The Lie bracket of $f$ and $g$ is defined as $[f,g](x) = dg(x)f(x) - df(x)g(x)$. Show that
$$[f,g](x_0) = \lim_{\varepsilon\to 0} \frac{1}{\varepsilon^2}\left( \exp(\varepsilon g)\exp(\varepsilon f)\,x_0 - \exp(\varepsilon f)\exp(\varepsilon g)\,x_0 \right).$$
d. Indicate how this can be used to give an alternative proof of the interaction estimate (6.10).

6.2 We study the $p$-system with $p(u_1)$ as in Exercise 5.2, and we use the results of Exercise 5.8. Define a front-tracking scheme by introducing a grid in the $(\mu, \tau)$ plane. We approximate rarefaction waves by choosing intermediate states that are not further apart than $\delta$ in $(\mu, \tau)$. If $\epsilon$ is a front with left state $(\mu_l, \tau_l)$ and right state $(\mu_r, \tau_r)$, define
$$T(\epsilon) = \begin{cases} |[[\mu]]| - |[[\tau]]| & \text{if } \epsilon \text{ is a 1-wave}, \\ |[[\tau]]| - |[[\mu]]| & \text{if } \epsilon \text{ is a 2-wave}, \end{cases}$$
and define $T$ additively for a sequence of fronts.
a. Define a front-tracking algorithm based on this, and show that $T_{n+1} \le T_n$, where $T_n$ denotes the $T$ value of the front-tracking approximation between collision times $t_n$ and $t_{n+1}$.
b. Find a suitable condition on the initial data so that the front-tracking algorithm produces a convergent subsequence.
c. Show that the limit is a weak solution.
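Exercise 6.1(c) can be explored numerically. The sketch below is an illustration (not a solution): it uses the hypothetical vector fields $f(x) = (x_2, 0)$ and $g(x) = (0, x_1 x_2)$ on $\mathbb{R}^2$, approximates the flows $\exp(\varepsilon f)$ and $\exp(\varepsilon g)$ with a crude Euler integrator, and compares the commutator quotient with the Lie bracket $[f,g](x) = dg(x)f(x) - df(x)g(x)$.

```python
# Numerical check of the limit in Exercise 6.1(c) for two hypothetical
# vector fields on R^2 (chosen only for illustration).
import numpy as np

def f(x): return np.array([x[1], 0.0])
def g(x): return np.array([0.0, x[0] * x[1]])

def flow(field, eps, x0, steps=4000):
    """Crude explicit-Euler approximation of exp(eps*field) x0."""
    x = np.array(x0, dtype=float)
    h = eps / steps
    for _ in range(steps):
        x = x + h * field(x)
    return x

def bracket(x):
    df = np.array([[0.0, 1.0], [0.0, 0.0]])    # Jacobian of f
    dg = np.array([[0.0, 0.0], [x[1], x[0]]])  # Jacobian of g
    return dg @ f(x) - df @ g(x)

x0, eps = np.array([1.0, 2.0]), 1e-3
# commutator quotient (exp(eps g)exp(eps f) - exp(eps f)exp(eps g)) / eps^2
lhs = (flow(g, eps, flow(f, eps, x0)) - flow(f, eps, flow(g, eps, x0))) / eps**2
# lhs should agree with [f,g](x0) up to O(eps) plus the integrator error
```

At $x_0 = (1,2)$ the exact bracket is $[f,g](x_0) = (-2, 4)$, and the quotient matches it to within $\mathcal{O}(\varepsilon)$.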
7 Well-Posedness of the Cauchy Problem for Systems
Ma per seguir virtute e conoscenza.¹
Dante Alighieri (1265–1321), La Divina Commedia
The goal of this chapter is to show that the limit found by front tracking, that is, the weak solution of the initial value problem
$$u_t + f(u)_x = 0, \qquad u(x,0) = u_0(x), \eqno(7.1)$$
is stable in $L^1$ with respect to perturbations in the initial data. In other words, if $v = v(x,t)$ is another solution found by front tracking, then
$$\left\| u(\,\cdot\,,t) - v(\,\cdot\,,t) \right\|_1 \le C\, \left\| u_0 - v_0 \right\|_1$$
for some constant $C$. Furthermore, we shall show that under some mild extra entropy conditions, any weak solution coincides with the solution constructed by front tracking.

♦ Example 7.1 (A special system). As an example for this chapter we shall consider the special $2 \times 2$ system
$$u_t + \left(vu^2\right)_x = 0, \qquad v_t + \left(uv^2\right)_x = 0. \eqno(7.2)$$

¹Hard to comprehend? It means "[but to] pursue virtue and knowledge."
H. Holden and N.H. Risebro, Front Tracking for Hyperbolic Conservation Laws, Applied Mathematical Sciences 152, DOI 10.1007/978-3-642-23911-3_7, © Springer-Verlag Berlin Heidelberg 2011
For simplicity assume that $u > 0$ and $v > 0$. The Jacobian matrix reads
$$\begin{pmatrix} 2uv & u^2 \\ v^2 & 2uv \end{pmatrix}, \eqno(7.3)$$
with eigenvalues and eigenvectors
$$\lambda_1 = uv, \quad r_1 = \begin{pmatrix} -u/v \\ 1 \end{pmatrix}, \qquad \lambda_2 = 3uv, \quad r_2 = \begin{pmatrix} u/v \\ 1 \end{pmatrix}. \eqno(7.4)$$
The system is clearly strictly hyperbolic. Observe that $\nabla\lambda_1 \cdot r_1 = 0$, and hence the first family is linearly degenerate. The corresponding wave curve $W_1(u_l, v_l) = C_1(u_l, v_l)$ is given by (cf. Theorem 5.7)
$$\frac{du}{dv} = -\frac{u}{v}, \qquad u(v_l) = u_l,$$
or
$$W_1(u_l, v_l) = C_1(u_l, v_l) = \left\{ (u,v) \mid uv = u_l v_l \right\}.$$
The corresponding eigenvalue $\lambda_1$ is constant along each hyperbola. With the chosen normalization of $r_2$ we find that $\nabla\lambda_2 \cdot r_2 = 6u$, and hence the second wave family is genuinely nonlinear. The rarefaction curves of the second family are solutions of
$$\frac{du}{dv} = \frac{u}{v}, \qquad u(v_l) = u_l,$$
and thus
$$\frac{u}{v} = \frac{u_l}{v_l}.$$
We see that these are straight lines emanating from the origin, and $\lambda_2$ increases as $u$ increases. Consequently, $R_2$ consists of the ray
$$v = \frac{v_l}{u_l}\, u, \qquad u \ge u_l.$$
The rarefaction speed is given by
$$\lambda_2(u; u_l, v_l) = 3u^2\, \frac{v_l}{u_l}.$$
To find the shocks in the second family, we use the Rankine–Hugoniot relation
$$s\,(u - u_l) = vu^2 - v_l u_l^2, \qquad s\,(v - v_l) = v^2 u - v_l^2 u_l,$$
which implies
$$\frac{u}{u_l} = \frac12 \left( \frac{v}{v_l} + \frac{v_l}{v} \pm \left| \frac{v}{v_l} - \frac{v_l}{v} \right| \right) = \begin{cases} v_l/v, \\ v/v_l. \end{cases}$$
(Observe that the solution with $u/u_l = v_l/v$ coincides with the wave curve of the linearly degenerate first family.) The shock part of this curve, $S_2$, consists of the line
$$S_2(u_l, v_l) = \left\{ (u,v) \,\Big|\, v = \frac{v_l}{u_l}\, u,\ 0 < u \le u_l \right\}.$$
The shock speed is given by
$$s := \mu_2(u; u_l, v_l) = \frac{v_l}{u_l} \left( u^2 + u u_l + u_l^2 \right).$$
Hence the Hugoniot locus and rarefaction curves coincide for this system. Systems with this property are called Temple class systems after Temple [137]. Furthermore, the system is linearly degenerate in the first family and genuinely nonlinear in the second.
Summing up, the solution of the Riemann problem for (7.2) is as follows. First, the middle state is given by
$$u_m = \sqrt{\frac{v_l}{v_r}\, u_l u_r}, \qquad v_m = \sqrt{\frac{u_l}{u_r}\, v_l v_r}.$$
Figure 7.1. The curves $W$ in $(u,v)$ coordinates (left) and $(\eta,\xi)$ coordinates (right).
If $u_l/v_l \le u_r/v_r$, the second wave is a rarefaction wave, and the solution can be written as
$$\begin{pmatrix} u \\ v \end{pmatrix}(x,t) = \begin{cases} \begin{pmatrix} u_l \\ v_l \end{pmatrix} & \text{for } x/t \le u_l v_l, \\[4pt] \begin{pmatrix} u_m \\ v_m \end{pmatrix} & \text{for } u_l v_l < x/t \le 3 u_m v_m, \\[4pt] \sqrt{\dfrac{x\, v_m}{3t\, u_m}} \begin{pmatrix} u_m/v_m \\ 1 \end{pmatrix} & \text{for } 3 u_m v_m < x/t \le 3 u_r v_r, \\[4pt] \begin{pmatrix} u_r \\ v_r \end{pmatrix} & \text{for } 3 u_r v_r < x/t. \end{cases} \eqno(7.5)$$
In the shock case, that is, when $u_l/v_l > u_r/v_r$, the solution reads
$$\begin{pmatrix} u \\ v \end{pmatrix}(x,t) = \begin{cases} \begin{pmatrix} u_l \\ v_l \end{pmatrix} & \text{for } x/t \le u_l v_l, \\[4pt] \begin{pmatrix} u_m \\ v_m \end{pmatrix} & \text{for } u_l v_l < x/t \le \mu_2(u_r; u_m, v_m), \\[4pt] \begin{pmatrix} u_r \\ v_r \end{pmatrix} & \text{for } \mu_2(u_r; u_m, v_m) < x/t. \end{cases} \eqno(7.6)$$
If we set
$$\eta = uv, \qquad \xi = \frac{u}{v}, \qquad \text{and thus} \qquad u = \sqrt{\eta\xi}, \qquad v = \sqrt{\eta/\xi},$$
the solution of the Riemann problem will be especially simple in $(\eta,\xi)$ coordinates. Given left and right states $(\eta_l, \xi_l)$ and $(\eta_r, \xi_r)$, the middle state is given by $(\eta_l, \xi_r)$. Consequently, measured in $(\eta,\xi)$ coordinates, the total variation of the solution of the Riemann problem equals the total variation of the initial data. This means that we do not need the Glimm functional to show that a front-tracking approximation to the solution of (7.2) has bounded total variation. With this in mind it is easy to show (using the methods of the previous chapters) that there exists a weak solution to the initial value problem for (7.2) whenever the total variation of the initial data is bounded. We may use these variables to parameterize the wave curves as follows:
$$\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} u_l v_l/\eta \\ \eta \end{pmatrix} \ \text{(first family)}, \qquad \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} u_l \eta / v_l \\ \eta \end{pmatrix} \ \text{(second family)}.$$
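As a small numerical check (not from the text), the middle state of the Riemann problem for system (7.2) must satisfy the two invariants $u_m v_m = u_l v_l$ (across the 1-wave, a contact) and $u_m/v_m = u_r/v_r$ (across the 2-wave), which in $(\eta,\xi) = (uv, u/v)$ coordinates is exactly the statement that the middle state is $(\eta_l, \xi_r)$.

```python
# Middle state of the Riemann problem for u_t + (v u^2)_x = 0,
# v_t + (u v^2)_x = 0, assuming u, v > 0, and a check of the two invariants.
from math import sqrt

def middle_state(ul, vl, ur, vr):
    um = sqrt(vl * ul * ur / vr)
    vm = sqrt(ul * vl * vr / ur)
    return um, vm

ul, vl, ur, vr = 1.0, 2.0, 3.0, 1.0
um, vm = middle_state(ul, vl, ur, vr)
assert abs(um * vm - ul * vl) < 1e-12    # eta preserved across the 1-wave
assert abs(um / vm - ur / vr) < 1e-12    # xi preserved across the 2-wave
```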
For future use we note that the rarefaction and shock speeds, expressed in the variable $\eta = uv$, are as follows:
$$\lambda_1(\eta) = \mu_1(\eta) = \eta, \qquad \lambda_2(\eta) = 3\eta, \qquad \mu_2(\eta_l, \eta_r) = \eta_l + \sqrt{\eta_l \eta_r} + \eta_r. \quad \diamondsuit$$
As a reminder, we now summarize some properties of the front-tracking approximation for a fixed $\delta$:
1. For all positive times $t$, $u^\delta(x,t)$ has finitely many discontinuities, each having position $x_i(t)$. These discontinuities can be of two types: shock fronts or approximate rarefaction fronts. Furthermore, only finitely many interactions between discontinuities occur for $t \ge 0$.
2. Along each shock front, the left and right states
$$u_{l,r} = u^\delta(x_i \mp, t) \eqno(7.7)$$
are related by $u_r = S_{\hat\imath}(\epsilon_i)\, u_l + e_i$, where $\epsilon_i$ is the strength of the shock and $\hat\imath$ is the family of the shock. The "error" $e_i$ is a vector of small magnitude. Furthermore, the speed of the shock, $\dot x$, satisfies
$$\left| \dot x - \mu_{\hat\imath}(u_l, u_r) \right| \le \mathcal{O}(1)\, \delta, \eqno(7.8)$$
where $\mu_{\hat\imath}(u_l, u_r)$ is the $\hat\imath$th eigenvalue of the averaged matrix
$$M(u_l, u_r) = \int_0^1 df\left( (1-\alpha) u_l + \alpha u_r \right) d\alpha;$$
cf. (5.64)–(5.65).
3. Along each rarefaction front, the values $u_l$ and $u_r$ are related by
$$u_r = R_{\hat\imath}(\epsilon_i)\, u_l + e_i. \eqno(7.9)$$
Also,
$$\left| \dot x - \lambda_{\hat\imath}(u_r) \right| \le \mathcal{O}(1)\, \delta \qquad \text{and} \qquad \left| \dot x - \lambda_{\hat\imath}(u_l) \right| \le \mathcal{O}(1)\, \delta, \eqno(7.10)$$
where $\lambda_{\hat\imath}(u)$ is the $\hat\imath$th eigenvalue of $df(u)$.
4. The total magnitude of all errors is small:
$$\sum_i |e_i| \le \delta.$$
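Property 2 can be checked numerically for the example system (7.2); this is an illustrative sketch built only from the formulas above. Since $f(u_r) - f(u_l) = M(u_l, u_r)(u_r - u_l)$ by the definition of the averaged matrix, the Rankine–Hugoniot shock speed is an eigenvalue of $M(u_l, u_r)$ with eigenvector $u_r - u_l$.

```python
# For f(u, v) = (v u^2, u v^2): check that the 2-shock speed
# mu_2(u; u_l, v_l) = (v_l/u_l)(u^2 + u u_l + u_l^2) is an eigenvalue of the
# averaged matrix M(u_l, u_r) = int_0^1 df((1-a) u_l + a u_r) da.
import numpy as np

def jac(u, v):
    # Jacobian (7.3) of f(u, v) = (v u^2, u v^2)
    return np.array([[2*u*v, u**2], [v**2, 2*u*v]])

def averaged_matrix(wl, wr, n=2000):
    # midpoint-rule quadrature of the Jacobian along the segment wl -> wr
    alphas = (np.arange(n) + 0.5) / n
    return sum(jac(*((1 - a) * wl + a * wr)) for a in alphas) / n

flux = lambda w: np.array([w[1] * w[0]**2, w[0] * w[1]**2])

wl = np.array([1.0, 2.0])
wr = np.array([0.5, 1.0])                 # lies on S2 through wl: v = (v_l/u_l) u
s = 2.0 * (0.5**2 + 0.5 * 1.0 + 1.0**2)   # shock speed, here s = 3.5
assert np.allclose(flux(wr) - flux(wl), s * (wr - wl))        # Rankine-Hugoniot
M = averaged_matrix(wl, wr)
assert np.any(np.isclose(np.linalg.eigvals(M), s, atol=1e-4)) # s is an eigenvalue of M
```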
Also, recall that for a suitable constant $C_0$ the Glimm functional
$$G\left(u^\delta(\,\cdot\,,t)\right) = T\left(u^\delta(\,\cdot\,,t)\right) + C_0\, Q\left(u^\delta(\,\cdot\,,t)\right) \eqno(7.11)$$
is nonincreasing for each collision of fronts, where T and Q are defined by (6.19) and (6.18), respectively, and that the interaction potential
$Q\left(u^\delta(\,\cdot\,,t)\right)$ is strictly decreasing for each collision of fronts.
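Since (6.18)–(6.19) are not restated here, the following sketch assumes the standard definitions: $T$ is the total strength of all fronts, and $Q$ sums $|\epsilon_i|\,|\epsilon_j|$ over approaching pairs. The conventions used to classify a pair as approaching (left front of a faster family, or same genuinely nonlinear family with at least one shock, encoded as negative strength) are assumptions for illustration only.

```python
# Schematic Glimm functional G = T + C0 * Q for a list of fronts, with
# ASSUMED standard definitions of T (6.19) and Q (6.18): T is the sum of
# front strengths, Q sums |eps_i||eps_j| over approaching pairs.

def glimm_functional(fronts, c0):
    """fronts: list of (position, family, strength), sorted by position;
    a negative strength encodes a shock."""
    T = sum(abs(s) for (_, _, s) in fronts)
    Q = 0.0
    for i, (xi, fam_i, si) in enumerate(fronts):
        for (xj, fam_j, sj) in fronts[i + 1:]:          # here x_i < x_j
            approaching = fam_i > fam_j or (fam_i == fam_j
                                            and (si < 0 or sj < 0))
            if approaching:
                Q += abs(si) * abs(sj)
    return T + c0 * Q, T, Q

fronts = [(0.0, 2, -0.1), (1.0, 1, 0.05), (2.0, 2, -0.2)]
G, T, Q = glimm_functional(fronts, c0=10.0)
```

For the three fronts above, $T = 0.35$ and $Q = 0.025$ (the 2-shock at $x=0$ approaches both the 1-rarefaction and the other 2-shock), so $G = 0.6$ with $C_0 = 10$.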
7.1 Stability

Details are always vulgar.
Oscar Wilde, The Picture of Dorian Gray (1891)
Now let $v^\delta$ be another front-tracking solution with initial condition $v_0$. To compare $u^\delta$ and $v^\delta$ in the $L^1$-norm, i.e., to estimate $\|u^\delta - v^\delta\|_1$, we introduce the vector $q = q(x,t) = (q_1, \dots, q_n)$ by
$$v^\delta(x,t) = H_n(q_n)\, H_{n-1}(q_{n-1}) \cdots H_1(q_1)\, u^\delta(x,t) \eqno(7.12)$$
and the intermediate states $\omega_i$,
$$\omega_0 = u^\delta(x,t), \qquad \omega_i = H_i(q_i)\, \omega_{i-1}, \quad \text{for } 1 \le i \le n, \eqno(7.13)$$
with velocities
$$\mu_i = \mu_i(\omega_{i-1}, \omega_i). \eqno(7.14)$$
As in Chapter 5, Hk ()u denotes the kth Hugoniot curve through u, parameterized such that d Hk () u=0 = rk (u). d Note that in the definition of q we use both parts of this curve, not only the part where < 0. The vector q represents a “solution” of the Riemann problem with left state uδ and right state vδ using only shocks. (For > 0 these will be weak solutions; that is, they satisfy the Rankine–Hugoniot condition. However, they will not be Lax shocks.) Later in this section we shall use the fact that genuine nonlinearity implies that μk (u, Hk ()u) will be increasing in , i.e., d μk (u, Hk ()u) ≥ c > 0, d for some constant c depending only on f . As our model problem showed, the L1 distance is more difficult to control than the “q-distance.” However, it turns out that even the q-distance
is not quite enough, and we need to introduce a weighted form. We let D(u^δ) and D(v^δ) denote the sets of all discontinuities in u^δ and v^δ, respectively, and
define the functional Φ(u^δ, v^δ) as

  Φ(u^δ, v^δ) = Σ_{k=1}^n ∫_{−∞}^∞ |q_k(x)| W_k(x) dx.   (7.15)
Here the weights W_k are defined as

  W_k = 1 + κ_1 A_k + κ_2 ( Q(u^δ) + Q(v^δ) ),   (7.16)

where Q(u^δ) and Q(v^δ) are the interaction potentials of u^δ and v^δ, respectively; cf. (6.18). The quantity A_k is the total strength of all waves in u^δ or v^δ that approach the k-wave q_k(x). More precisely, if the kth field is linearly degenerate, then

  A_k(x) = Σ_{i: x_i<x, î_i>k} |ε_i| + Σ_{i: x_i>x, î_i<k} |ε_i|.   (7.17)

The summation is over all discontinuities x_i ∈ D(u^δ) ∪ D(v^δ). If the kth field is genuinely nonlinear, we must also account for waves of the same family approaching each other, and define

  A_k(x) = Σ_{i: x_i<x, î_i>k} |ε_i| + Σ_{i: x_i>x, î_i<k} |ε_i|
           + Σ_{i∈D(u^δ): î_i=k, x_i<x} |ε_i| + Σ_{i∈D(v^δ): î_i=k, x_i>x} |ε_i|   if q_k(x) < 0,
  A_k(x) = Σ_{i: x_i<x, î_i>k} |ε_i| + Σ_{i: x_i>x, î_i<k} |ε_i|
           + Σ_{i∈D(v^δ): î_i=k, x_i<x} |ε_i| + Σ_{i∈D(u^δ): î_i=k, x_i>x} |ε_i|   if q_k(x) > 0.   (7.18)

In plain words, a q_k shock is approached by k-waves in u^δ from the left and by k-waves in v^δ from the right. Similarly, a q_k rarefaction wave is approached by k-waves in v^δ from the left and by k-waves in u^δ from the right. Once the values of the constants κ_1 and κ_2 are determined, we will assume that the total variations of u^δ and v^δ are so small that

  1 ≤ W_k(x) ≤ 2.   (7.19)
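To make the bookkeeping in (7.15)–(7.17) concrete, here is a hypothetical sketch (not from the book) that evaluates A_k, W_k, and Φ for piecewise-constant data in the purely linearly degenerate case; the front list, the strengths, and the constants κ_1, κ_2 are invented for the example, and the interaction-potential contribution of (7.16) is frozen to a constant Q_total.

```python
# Hypothetical sketch of the weighted functional Phi of (7.15)-(7.17).
# Fronts are given as (position x_i, family i_hat, strength eps_i);
# q_k is piecewise constant between front positions. Linearly degenerate
# case only, so A_k uses (7.17); all numerical values are illustrative.

def A_k(k, x, fronts):
    """Total strength of waves approaching the k-wave at x, per (7.17)."""
    return sum(abs(eps) for (xi, fam, eps) in fronts if xi < x and fam > k) + \
           sum(abs(eps) for (xi, fam, eps) in fronts if xi > x and fam < k)

def Phi(q_cells, cell_bounds, fronts, kappa1=0.1, kappa2=0.1, Q_total=0.0):
    """Phi = sum_k of the integral of |q_k| W_k over a bounded interval,
    with q_k constant on each cell [cell_bounds[j], cell_bounds[j+1]]."""
    n = len(q_cells[0])           # number of families
    total = 0.0
    for j, q in enumerate(q_cells):
        a, b = cell_bounds[j], cell_bounds[j + 1]
        xmid = 0.5 * (a + b)      # A_k is constant on each cell
        for k in range(1, n + 1):
            W = 1.0 + kappa1 * A_k(k, xmid, fronts) + kappa2 * Q_total
            total += abs(q[k - 1]) * W * (b - a)
    return total

# Two fronts: a 2-wave to the left of a 1-wave, so each approaches the other.
fronts = [(0.3, 2, 0.2), (0.7, 1, -0.1)]
# Three cells on [0,1] separated by the fronts; q = (q_1, q_2) per cell.
q_cells = [(0.05, 0.0), (0.05, 0.2), (-0.05, 0.2)]
bounds = [0.0, 0.3, 0.7, 1.0]
print(Phi(q_cells, bounds, fronts))
```

With κ_2 = 0.1 and Q_total = 0 the weights here stay within the bounds (7.19), so Φ is comparable to the plain L1 norm of q, as in (7.20) below.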
In this case we see that Φ is equivalent to the L1 norm; i.e., there exists a finite constant C_1 such that

  (1/C_1) ‖u^δ − v^δ‖_1 ≤ Φ(u^δ, v^δ) ≤ C_1 ‖u^δ − v^δ‖_1.   (7.20)

We can also define, with obvious modifications, Φ(u^{δ_1}(t), v^{δ_2}(t)) with two different parameters δ_1 and δ_2. Our first goal will be to show that

  Φ(u^{δ_1}(t), v^{δ_2}(t)) − Φ(u^{δ_1}(s), v^{δ_2}(s)) ≤ C_2 (t − s)(δ_1 ∨ δ_2),   (7.21)
for all 0 ≤ s ≤ t. Once this inequality is in place we can show that the sequence of front-tracking approximations is a Cauchy sequence in L1, for

  ‖u^{δ_1}(t) − u^{δ_2}(t)‖_1 ≤ C_1 Φ(u^{δ_1}(t), u^{δ_2}(t))
                              ≤ C_1 Φ(u^{δ_1}(0), u^{δ_2}(0)) + C_1 C_2 t (δ_1 ∨ δ_2)
                              ≤ C_1² ‖u^{δ_1}(0) − u^{δ_2}(0)‖_1 + C_1 C_2 t (δ_1 ∨ δ_2).

Letting δ_1 and δ_2 tend to zero, we obtain convergence of the whole sequence, and not only of a subsequence. The first step in proving (7.21) is to choose κ_2 so large that the weights W_k do not increase when fronts in u^{δ_1} or v^{δ_2} collide. This is possible, since the total variations of both u^{δ_1} and v^{δ_2} are uniformly small; hence the terms κ_1 A_k are uniformly bounded, and by the interaction estimate, Q decreases at every collision. This ensures the inequalities (7.19). Then we must examine how Φ changes between collisions. Observe that Φ(t) is piecewise linear and continuous in t. Let
  D = D(u^{δ_1}) ∪ D(v^{δ_2}).

We differentiate Φ and find that

  (d/dt) Φ(u^{δ_1}, v^{δ_2})
    = Σ_{i∈D} Σ_{k=1}^n { |q_k(x_i−)| W_k(x_i−) − |q_k(x_i+)| W_k(x_i+) } ẋ_i
    = Σ_{i∈D} Σ_{k=1}^n ( |q_k^{i,+}| W_k^{i,+} (μ_k^{i,+} − ẋ_i) − |q_k^{i,−}| W_k^{i,−} (μ_k^{i,−} − ẋ_i) )
    =: Σ_{i∈D} Σ_{k=1}^n E_{i,k},   (7.22)

where

  μ_k^{i,±} = μ_k(x_i ±),  μ_k(x) = μ_k(ω_{k−1}(x), ω_k(x)),  q_k^{i,±} = q_k(x_i ±),  W_k^{i,±} = W_k(x_i ±).

The second equality in (7.22) is obtained by adding the terms

  |q_k^{i,−}| W_k^{i,−} μ_k^{i,−} − |q_k^{(i−1),+}| W_k^{(i−1),+} μ_k^{(i−1),+} = 0,

and observing that there are only a finite number of terms in the sum in (7.22).

Example 7.2 (Example 7.1 (cont'd.)). Let us check how this works for our special system. The two front-tracking approximations are denoted by u and v, and for simplicity we omit the superscript δ. These are made by approximating a rarefaction
wave between η_l = nδ and η_r = mδ, m > n, by a series of discontinuities with speed 3jδ, j = n, …, m − 1. In other words, we use the characteristic speed to the left of the discontinuity. The functions u and v are well-defined by standard techniques. Since we have managed this far without the interaction potential, we define the weights without it as well (it is needed only to bound the weights anyway). Hence for the example we use

  W_k(x) = 1 + κ A_k(x).   (7.23)

Now we shall estimate

  (d/dt) Φ(u, v) = Σ_{i∈D} ( E_{i,1} + E_{i,2} ).   (7.24)
To this end we consider a fixed discontinuity ε located at x (to simplify the notation we do not use a subscript on this discontinuity) in one of the functions, say v. This discontinuity gives a contribution to the right-hand side of (7.24), denoted by E_1 + E_2, where

  E_j = W_j^+ |q_j^+| (μ_j^+ − ẋ) − W_j^− |q_j^−| (μ_j^− − ẋ),  j = 1, 2.

For this 2×2 system we have

  A_1(x) = Σ_{x_i<x, î_i=2} |ε_i|,

  A_2(x) = Σ_{x_i>x, î_i=1} |ε_i|
           + Σ_{x_i∈D(u): î_i=2, x_i<x} |ε_i| + Σ_{x_i∈D(v): î_i=2, x_i>x} |ε_i|   if q_2 < 0,
  A_2(x) = Σ_{x_i>x, î_i=1} |ε_i|
           + Σ_{x_i∈D(v): î_i=2, x_i<x} |ε_i| + Σ_{x_i∈D(u): î_i=2, x_i>x} |ε_i|   if q_2 > 0.

To estimate E_1 + E_2 we study several cases.

Case 1. Assume first that the jump at x is a contact discontinuity, that is, of the first family, in which case

  A_1^+ = A_1^−,

and consequently W_1^+ = W_1^−. Furthermore,

  q_1^+ = q_1^− + ε  and  μ_1^+ = μ_1^− = ẋ − q_2^−.   (7.25)
Then

  E_1 = W_1^+ |q_1^+| (μ_1^+ − ẋ) − W_1^− |q_1^−| (μ_1^− − ẋ)
      = W_1^− ( |q_1^− + ε| − |q_1^−| )( −q_2^− )
      ≤ W_1^− |q_2^−| |ε|.   (7.26)

For the weights of the second family we find that

  A_2^+ = A_2^− − |ε|,  W_2^+ = W_2^− − κ|ε|,  q_2^+ = q_2^−,  μ_2^− = μ_2^+.

To estimate μ_2^− − ẋ we exploit that μ_2^− is the speed of a discontinuity of the second family, while ẋ is the speed of a contact discontinuity of the first family. Thus we can estimate their difference from below by the smallest difference in speeds between waves of the first and second families. We find that μ_2^− − ẋ ≥ c = min_{u,v}{η} > 0. Hence

  E_2 = W_2^+ |q_2^+| (μ_2^+ − ẋ) − W_2^− |q_2^−| (μ_2^− − ẋ)
      = |q_2^−| (μ_2^− − ẋ)( −κ|ε| )
      ≤ −κc |q_2^−| |ε|.   (7.27)

Then

  E_1 + E_2 = |q_2^−| |ε| ( W_1^− − κc ) ≤ 0   (7.28)

if κc ≥ sup_x W_1(x). (Throughout this argument we will choose larger and larger κ.) This inequality, (7.28), is the desired estimate in the case where x is a contact discontinuity.

Case 2. The case where the wave at x is genuinely nonlinear, that is, belongs to the second family, is more complicated. There are two distinct cases: that of an (approximate) rarefaction wave and that of a shock wave. First we treat the term E_1, which is common to the two cases. Here

  A_1^+ = A_1^− + |ε|,  W_1^+ = W_1^− + κ|ε|,  q_1^+ = q_1^−,  μ_1^+ = μ_1^−,

and μ_1^− − ẋ < −c. Consequently,

  E_1 = W_1^+ |q_1^+| (μ_1^+ − ẋ) − W_1^− |q_1^−| (μ_1^− − ẋ)
      = κ|ε| |q_1^−| (μ_1^− − ẋ)
      ≤ −κc |q_1^−| |ε| ≤ 0.   (7.29)

We split the estimate for E_2 into several cases.

Case 2a (rarefaction wave). First we consider the case where x is an approximate rarefaction wave. By the construction of v we have ε = δ > 0 and q_2^+ = q_2^− + ε.
The speeds appearing in E_2 are given by

  μ_2^+ = 2η_u + q_2^− + ε + √( η_u (η_u + q_2^− + ε) ),
  μ_2^− = 2η_u + q_2^− + √( η_u (η_u + q_2^−) ),
  ẋ = 3( η_u + q_2^− ).

We define the auxiliary speed

  μ̃ = μ_2(v^−, v^+) = 2η_u + 2q_2^− + ε + √( (η_u + q_2^−)(η_u + q_2^− + ε) ).

It is easily seen that

  0 ≤ ε ≤ μ̃ − ẋ ≤ 2ε.

We have several subcases. First we assume that q_2^− > 0, in which case q_2^+ > 0 as well; see Figure 7.2.

Figure 7.2. The case q_2^− > 0, with states u = (η_u, ξ_u), v^− = (η^−, ξ^−), v^+ = (η^+, ξ^+); here q_1^+ = q_1^− and q_2^+ = q_2^− + ε.

In this case A_2^+ = A_2^− + |ε|. Hence

  E_2 = W_2^+ q_2^+ (μ_2^+ − ẋ) − ( W_2^+ − κ|ε| ) q_2^− (μ_2^− − ẋ)
      = W_2^+ [ (q_2^− + ε)(μ_2^+ − μ̃) − q_2^− (μ_2^− − μ̃) ]
        + W_2^+ (q_2^+ − q_2^−)(μ̃ − ẋ) + κ|ε| q_2^− (μ_2^− − ẋ).

We need to estimate the term (q_2^− + ε)(μ_2^+ − μ̃) − q_2^− (μ_2^− − μ̃). This estimate is contained in Lemma 7.4 in the general case, and it is verified directly for this model right after the proof of Lemma 7.4. We obtain

  | (q_2^− + ε)(μ_2^+ − μ̃) − q_2^− (μ_2^− − μ̃) | ≤ O(1) |ε| q_2^− ( q_2^− + |ε| ),

and thus

  E_2 ≤ O(1) |ε| q_2^− ( q_2^− + |ε| ) + W_2^+ |ε| |μ̃ − ẋ| + κ|ε| q_2^− (μ_2^− − ẋ)
      ≤ O(1) |ε| q_2^− ( q_2^− + |ε| ) + 2W_2^+ |ε|² + κ|ε| q_2^− (μ_2^− − ẋ).
We estimate μ_2^− − ẋ ≤ −q_2^− ≤ 0, and hence

  E_2 ≤ |ε| (q_2^−)² ( O(1) − κ ) + O(1) |ε|² q_2^− + O(1) |ε|² ≤ M |ε| δ,

for some constant M if we choose κ big enough. We have used that W_2^+ is bounded. Therefore, E_1 + E_2 ≤ M |ε| δ.

Now for the case where q_2^− < 0. Here we have two further subcases, q_2^+ < 0 and q_2^+ > 0. First we assume that q_2^+ < 0, and thus both q_2^− and q_2^+ are negative. Note that

  |q_2^+| = |q_2^−| − |ε|,  0 ≤ −q_2^− ≤ μ_2^− − ẋ ≤ −2q_2^−,  and  A_2^+ = A_2^− − |ε|.

Thus

  E_2 = ( W_2^− − κ|ε| ) |q_2^+| (μ_2^+ − ẋ) − W_2^− |q_2^−| (μ_2^− − ẋ)
      = W_2^− [ |q_2^+| (μ_2^+ − μ̃) − |q_2^−| (μ_2^− − μ̃) ] − W_2^− |ε| (μ̃ − ẋ) − κ|ε| |q_2^+| (μ_2^+ − ẋ)
      ≤ O(1) |ε| |q_2^+| ( |q_2^+| + |ε| ) + O(1) |ε|² − κ|ε| (q_2^+)²
      ≤ |ε| (q_2^−)² ( O(1) − κ ) + O(1) |ε|² ≤ M |ε| δ,

where we have used Lemma 7.4 (with ε and ε′ there replaced by ε and q_2^+) and chosen κ sufficiently large. Thus we conclude that E_1 + E_2 ≤ M |ε| δ in this case as well.

Now for the last case with ε > 0, namely q_2^− < 0 < q_2^+. Since q_2^+ = q_2^− + ε, we have

  |q_2^+| ≤ δ,  |q_2^−| ≤ δ.

Furthermore, A_2^+ = A_2^−, and thus W_2^+ = W_2^−. We see that

  0 ≤ −q_2^− ≤ μ_2^− − ẋ ≤ −2q_2^−,

and hence μ_2^+ − ẋ ≤ 2ε − q_2^−,

  E_2 = W_2^+ [ |q_2^+| (μ_2^+ − ẋ) − |q_2^−| (μ_2^− − ẋ) ]
      ≤ W_2^+ [ q_2^+ ( 2|ε| − q_2^− ) + |q_2^−| · 2|q_2^−| ] ≤ M |ε| δ,

for some constant M.

Case 2b (shock wave). When the wave at x is a shock front, we have ε < 0. In this case

  ẋ = μ̃ = μ_2(v^−, v^+) = 2η_u + 2q_2^− + ε + √( (η_u + q_2^−)(η_u + q_2^− + ε) ).
We first consider the case where q_2^− < 0. Then

  q_2^+ = q_2^− + ε < 0,  |q_2^+| = |q_2^−| + |ε|,  and  A_2^+ = A_2^− − |ε|,

and we obtain

  E_2 = ( W_2^− − κ|ε| ) |q_2^+| (μ_2^+ − ẋ) − W_2^− |q_2^−| (μ_2^− − ẋ)
      = −W_2^− [ (q_2^− + ε)(μ_2^+ − ẋ) − q_2^− (μ_2^− − ẋ) ] − κ|ε| ( |q_2^−| + |ε| )(μ_2^+ − ẋ)
      ≤ O(1) |ε| |q_2^−| ( |q_2^−| + |ε| ) − κ|ε| ( |q_2^−| + |ε| ) |q_2^−|
      ≤ |ε| |q_2^−| ( |q_2^−| + |ε| )( O(1) − κ ) ≤ 0.

Here Lemma 7.4 (with ε and ε′ there replaced by ε and q_2^−) implies

  | (q_2^− + ε)(μ_2^+ − ẋ) − q_2^− (μ_2^− − ẋ) | ≤ O(1) |ε| |q_2^−| ( |q_2^−| + |ε| ).

Furthermore,

  μ_2^+ − ẋ = −q_2^− + √( η_u (η_u + q_2^− + ε) ) − √( (η_u + q_2^−)(η_u + q_2^− + ε) )
            = −q_2^− ( 1 + √(η_u + q_2^+) / ( √η_u + √(η_u + q_2^−) ) )
            ≥ −q_2^− = |q_2^−|.

If q_2^− > 0, then there are two further cases to be considered, depending on the sign of q_2^+. We first consider the case where q_2^+ < 0, and thus q_2^+ < 0 < q_2^−. Now A_2^+ = A_2^−. Furthermore,

  μ_2^− − ẋ = −q_2^+ ( 1 + √(η_u + q_2^−) / ( √η_u + √(η_u + q_2^+) ) ) ≥ −q_2^+ > 0,
  μ_2^+ − ẋ = −q_2^− ( 1 + √(η_u + q_2^+) / ( √η_u + √(η_u + q_2^−) ) ) < −q_2^− < 0.

Thus μ_2^+ < ẋ < μ_2^−, and we easily obtain

  E_2 = W_2^− [ |q_2^+| (μ_2^+ − ẋ) − |q_2^−| (μ_2^− − ẋ) ] < 0.

This leaves the final case where q_2^± > 0. In this case we have A_2^+ = A_2^− + |ε|. We still have

  μ_2^− − ẋ = −q_2^+ ( 1 + √(η_u + q_2^−) / ( √η_u + √(η_u + q_2^+) ) ) ≤ −q_2^+ < 0,
and thus

  ẋ − μ_2^− ≥ q_2^+.

Furthermore, by Lemma 7.4, we have that

  | q_2^+ (μ_2^+ − ẋ) − q_2^− (μ_2^− − ẋ) | ≤ O(1) q_2^+ |ε| ( q_2^+ + |ε| ).

Then we calculate

  E_2 = W_2^+ q_2^+ (μ_2^+ − ẋ) − ( W_2^+ − κ|ε| ) q_2^− (μ_2^− − ẋ)
      = W_2^+ [ q_2^+ (μ_2^+ − ẋ) − q_2^− (μ_2^− − ẋ) ] + κ|ε| q_2^− (μ_2^− − ẋ)
      ≤ O(1) |ε| q_2^− ( q_2^− + |ε| ) − κ|ε| q_2^− q_2^+
      ≤ O(1) |ε|² + |ε| (q_2^−)² ( O(1) − κ ) ≤ M |ε| δ

if κ is sufficiently large. This was the last case. We have now shown that in all cases

  E_1 + E_2 ≤ M |ε| δ.

Summing over all discontinuities in u and v we conclude that

  (d/dt) Φ(u, v) ≤ C δ,

for some finite constant C independent of δ. ♦
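The inequality 0 ≤ ε ≤ μ̃ − ẋ ≤ 2ε invoked in Case 2a can be checked numerically. The following sketch is a hypothetical illustration (not from the book): it uses the averaged speed μ_2(a, b) = a + b + √(ab) read off from the formulas above, writing a = η_u + q_2^− so that μ̃ = μ_2(a, a + ε) and ẋ = 3a; the sample state values are made up.

```python
import math

def mu(a, b):
    """Averaged 2-speed of the model system in eta coordinates: a + b + sqrt(ab)."""
    return a + b + math.sqrt(a * b)

def rarefaction_gap(a, eps):
    """mu_tilde - x_dot for an approximate rarefaction front from eta = a to a + eps,
    tracked with the left characteristic speed x_dot = 3a."""
    return mu(a, a + eps) - 3.0 * a

# Sample a range of (made-up) left states and front strengths.
for a in (0.5, 1.0, 2.0):
    for eps in (0.01, 0.1, 0.5):
        gap = rarefaction_gap(a, eps)
        assert eps <= gap <= 2.0 * eps, (a, eps, gap)
print("0 <= eps <= mu_tilde - x_dot <= 2*eps holds on all samples")
```

The two-sided bound follows analytically from √a ≤ √(a(a + ε))/√a ≤ √a · (1 + ε/(2a)), which is exactly what the samples confirm.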
We shall now show that

  Σ_{k=1}^n E_{i,k} ≤ O(1) |ε_i| (δ_1 ∨ δ_2) + O(1) |e_i|,   (7.30)

and this estimate is easily seen to imply (7.21). To prove (7.30) we shall need some preliminary results:

Lemma 7.3. Assume that the vectors ε = (ε_1, …, ε_n), ε′ = (ε′_1, …, ε′_n), and ε″ = (ε″_1, …, ε″_n) satisfy

  H(ε)u = H(ε′)H(ε″)u

for some vector u, where H(ε) = H_n(ε_n)H_{n−1}(ε_{n−1}) ⋯ H_1(ε_1). Then

  Σ_{k=1}^n | ε_k − ε′_k − ε″_k | = O(1) ( Σ_j |ε′_j ε″_j| ( |ε′_j| + |ε″_j| ) + Σ_{j≠k} |ε′_j ε″_k| ).   (7.31)
If the scalar ε and the vector ε′ = (ε′_1, …, ε′_n) satisfy

  R_ℓ(ε)u = H(ε′)u,

where R_ℓ denotes the ℓth rarefaction curve, then

  | ε − ε′_ℓ | + Σ_{k≠ℓ} |ε′_k| = O(1) |ε| |ε′_ℓ| ( |ε| + |ε′_ℓ| ).   (7.32)

Proof. The proof of this lemma is a straightforward modification of the proof of the interaction estimate (6.14).

Lemma 7.4. Let ω̄ ∈ Ω be sufficiently small, and let ε and ε′ be real numbers. Define

  ω = H_k(ε)ω̄,  ω′ = H_k(ε′)ω,  ω″ = H_k(ε + ε′)ω̄,

and

  μ = μ_k(ω̄, ω),  μ′ = μ_k(ω, ω′),  μ″ = μ_k(ω̄, ω″).

Then one has

  | (ε + ε′)(μ″ − μ′) − ε(μ − μ′) | ≤ O(1) |εε′| ( |ε| + |ε′| ).   (7.33)

Proof. The proof is again in the spirit of the proof of the interaction estimate, equation (6.10). Let the function Ψ be defined as

  Ψ(ε, ε′) = (ε + ε′)μ″ − εμ − ε′μ′.

Then Ψ is at least twice differentiable and satisfies

  Ψ(ε, 0) = Ψ(0, ε′) = 0,  (∂²Ψ/∂ε∂ε′)(0, 0) = 0.

Consequently,

  Ψ(ε, ε′) = ∫_0^ε ∫_0^{ε′} (∂²Ψ/∂ε∂ε′)(r, s) ds dr = O(1) ∫_0^{|ε|} ∫_0^{|ε′|} ( |r| + |s| ) dr ds.
From this the lemma follows. ♦

Example 7.5 (Lemma 7.4 for Example 7.1). If k = 2, let ω̄, ω, and ω′ denote the corresponding η coordinates, since only these influence the speeds. Then a straightforward calculation yields

  | (ε + ε′)(μ″ − μ′) − ε(μ − μ′) | ≤ |ε| |ε′| ( |ε| + |ε′| ) / min{ω̄, ω, ω′},

verifying the lemma in this case. ♦
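The bound of Example 7.5 can likewise be tested numerically. The following is a hypothetical sketch, again using the model's averaged speed μ(a, b) = a + b + √(ab) in the η coordinate and the shock parameterization ω̄ ↦ ω̄ + ε; the sample values are arbitrary.

```python
import math

def mu(a, b):
    # Averaged 2-speed of the model system in eta coordinates.
    return a + b + math.sqrt(a * b)

def lemma_7_4_sides(omega_bar, eps, eps_p):
    """Left- and right-hand sides of the bound in Example 7.5."""
    omega = omega_bar + eps          # H_2(eps) in the eta coordinate
    omega_p = omega + eps_p          # H_2(eps') applied to omega
    mu0 = mu(omega_bar, omega)       # mu
    mu1 = mu(omega, omega_p)         # mu'
    mu2 = mu(omega_bar, omega_p)     # mu''  (omega'' = omega_bar + eps + eps')
    lhs = abs((eps + eps_p) * (mu2 - mu1) - eps * (mu0 - mu1))
    rhs = abs(eps * eps_p) * (abs(eps) + abs(eps_p)) / min(omega_bar, omega, omega_p)
    return lhs, rhs

for eps in (0.3, -0.2, 0.05):
    for eps_p in (0.1, -0.1):
        lhs, rhs = lemma_7_4_sides(1.0, eps, eps_p)
        assert lhs <= rhs + 1e-15, (eps, eps_p, lhs, rhs)
print("cubic interaction bound (7.33) verified on all samples")
```

Note how the left-hand side vanishes to third order as ε or ε′ tends to zero, which is exactly the content of the two boundary conditions on Ψ in the proof above.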
If the kth characteristic field is genuinely nonlinear, then the characteristic speed λ_k(H_k(ε)ω) is increasing in ε, and we can even choose the parameterization such that

  λ_k(H_k(ε)ω) − λ_k(ω) = ε

for all sufficiently small ε and ω. This also implies that μ_k(ω, H_k(ε)ω) is strictly increasing in ε. However, the Hugoniot locus through the point ω does not in general coincide with the Hugoniot locus through the point H_k(q)ω. Therefore, it is not straightforward to compare speeds defined on different Hugoniot loci. When proving (7.30) we shall need to do this, and we repeatedly use the following lemma:

Lemma 7.6. For some state ω define

  Ψ(q) = μ_k(H_k(q)ω, H_k(ε)H_k(q)ω) − μ_k(ω, H_k(ε + q)ω).

Then Ψ is at least twice differentiable for all k = 1, …, n. Furthermore, if the kth characteristic field is genuinely nonlinear, then for sufficiently small |q| and |ε|,

  Ψ′(q) ≥ c > 0,   (7.34)

where c depends only on f, for all sufficiently small |ω|.

Proof. Let the vector ε′ be defined by H(ε′)ω = H_k(ε)H_k(q)ω. Then by Lemma 7.3,

  | ε′_k − (q + ε) | + Σ_{i≠k} |ε′_i| ≤ O(1) |qε| ( |ε| + |q| ).

Consequently,

  H_k(ε + q)ω = H_k(ε)H_k(q)ω + O(1) |qε| ( |ε| + |q| ).

Using this we find that

  ( H_k(ε)H_k(q)ω − H_k(ε)ω ) / q = ( H_k(ε + q)ω − H_k(ε)ω ) / q + O(1) |ε| ( |ε| + |q| ).

Therefore,

  (d/dq) { H_k(ε)H_k(q)ω } |_{q=0} = (d/dε) { H_k(ε)ω } + O(1) ε².   (7.35)

Hence we compute

  Ψ′(0) = ∇_1 μ_k(ω, H_k(ε)ω) · r_k(ω)
          − ∇_2 μ_k(ω, H_k(ε)ω) · ( (d/dε){H_k(ε)ω} − (d/dq){H_k(ε)H_k(q)ω}|_{q=0} )
        = ∇_1 μ_k(ω, H_k(ε)ω) · r_k(ω) + O(1) ε² ≥ c > 0,
for sufficiently small |ε|. The value of the constant c (and its existence) depends on the genuine nonlinearity of the system, and hence on f. Since Ψ′ is continuous for small |q|, the lemma follows.

We shall prove (7.30) in the case where the front at x_i is a front in v^{δ_2}; the case where it is a front in u^{δ_1} is completely analogous. We therefore fix i and study the relation between q_k^− and q_k^+. Since the front is fixed from now on, we drop the subscript i. For simplicity we write δ = δ_2. Assume that the family of the front at x is ℓ and that the front has strength ε. The situation is as in Figure 7.3. A key observation is that we can regard the waves q_k^+ as the result of an interaction between the waves q_k^− and ε; similarly, the waves −q_k^− are the result of an interaction between ε and −q_k^+.
Figure 7.3. The setting in the proof of (7.30). (The waves q^− connect u^{δ_1} to v^{δ_2,−}; the waves −q^+ connect v^{δ_2,+} back to u^{δ_1}.)
Regarding the weights, from (7.16) and (7.18) we find that

  W_k^+ − W_k^− = κ_1 |ε|  if k < ℓ,   W_k^+ − W_k^− = −κ_1 |ε|  if k > ℓ,   (7.36)

while for k = ℓ we obtain

  W_ℓ^+ − W_ℓ^− = κ_1 |ε|   if min{ q_ℓ^−, q_ℓ^+ } > 0,
  W_ℓ^+ − W_ℓ^− = −κ_1 |ε|  if max{ q_ℓ^−, q_ℓ^+ } < 0,
  W_ℓ^+ − W_ℓ^− = O(1) |ε|  if q_ℓ^− q_ℓ^+ < 0.   (7.37)
The proof of (7.30) is a study of cases. We split the estimate into two subgroups, depending on whether the front at x is an approximate rarefaction wave or a shock. Within each subgroup we discuss three subcases, depending on the signs of q_ℓ^±. In all cases we discuss the terms E_k (k ≠ ℓ)
and E_ℓ separately. For k ≠ ℓ we write E_k (remember that we dropped the subscript i) as

  E_k = ( |q_k^+| − |q_k^−| ) W_k^+ (μ_k^+ − ẋ)
        + |q_k^−| ( W_k^+ − W_k^− )(μ_k^+ − ẋ) + |q_k^−| W_k^− (μ_k^+ − μ_k^−).   (7.38)

By the strict hyperbolicity of the system, we have that

  μ_k^+ − ẋ ≤ −c < 0  for k < ℓ,
  μ_k^+ − ẋ ≥ c > 0   for k > ℓ,

where c is some fixed constant depending on the system. Thus we always have that

  ( W_k^+ − W_k^− )(μ_k^+ − ẋ) ≤ −cκ_1 |ε|,  k ≠ ℓ.   (7.39)

We begin with the case where the front at x is an approximate rarefaction wave (ε > 0). In this case
  R_ℓ(ε)v^{δ,−} + e = H(q^+)u^{δ_1} = H(q^+)H(−q^−)v^{δ,−} = H(q̃)v^{δ,−}

for some vector q̃. Hence

  H(−q^−)v^{δ,−} = H(−q^+)H(q̃)v^{δ,−},   (7.40)
  R_ℓ(ε)v^{δ,−} + e = H(q̃)v^{δ,−}.   (7.41)

From (7.31) and (7.40) we obtain

  | q_k^+ − q_k^− − q̃_k | = O(1) ( |q_k^+ q̃_k| ( |q_k^+| + |q̃_k| ) + Σ_{j≠k} |q_k^+ q̃_j| ),   (7.42)

and from (7.32) and (7.41) we obtain

  | q̃_ℓ − ε | + Σ_{k≠ℓ} |q̃_k| = O(1) |ε| |q̃_ℓ| ( |q̃_ℓ| + |ε| ) + O(1)|e|.

This implies that

  | q̃_ℓ − ε | ≤ O(1) |ε|³ + O(1)|e|,  Σ_{k≠ℓ} |q̃_k| ≤ O(1) |ε|³ + O(1)|e|.   (7.43)
Furthermore, since ε is an approximate rarefaction, 0 ≤ ε ≤ δ. Therefore, we can replace q̃_ℓ with ε and q̃_k (k ≠ ℓ) with zero on the right-hand side of (7.42), making an error of O(1)|ε|δ. Indeed,
  | q_ℓ^+ − q_ℓ^− − ε | + Σ_{k≠ℓ} | q_k^+ − q_k^− |
    ≤ | q_ℓ^+ − q_ℓ^− − q̃_ℓ | + | q̃_ℓ − ε | + Σ_{k≠ℓ} ( | q_k^+ − q_k^− − q̃_k | + |q̃_k| )
    ≤ O(1) ( Σ_k |q_k^+ q̃_k| ( |q_k^+| + |q̃_k| ) + Σ_{j≠k} |q_k^+ q̃_j| )
      + O(1) |ε| |q̃_ℓ| ( |q̃_ℓ| + |ε| ) + O(1)|e|.

Using (7.43) and the fact that ε ≤ δ, we conclude that

  | q_ℓ^+ − q_ℓ^− − ε | + Σ_{k≠ℓ} | q_k^+ − q_k^− |
    = O(1) |ε| ( δ + |q_ℓ^+| ( |q_ℓ^+| + |ε| ) + Σ_{k≠ℓ} |q_k^+| ) + O(1)|e|.   (7.44)

Similarly,

  | q_ℓ^+ − q_ℓ^− − ε | + Σ_{k≠ℓ} | q_k^+ − q_k^− |
    = O(1) |ε| ( δ + |q_ℓ^−| ( |q_ℓ^−| + |ε| ) + Σ_{k≠ℓ} |q_k^−| ) + O(1)|e|.   (7.45)
Since in this case 0 ≤ ε ≤ δ and the total variation is small, we can assume that the right-hand sides of (7.44)–(7.45) are smaller than ε + O(1)|e|. Also, the error e is small; cf. (7.11). Then

  0 < q_ℓ^+ − q_ℓ^− < 2ε + O(1)|e| ≤ 2δ + O(1)|e|.   (7.46)

We can also use the estimates (7.44) and (7.45) to make a simplifying assumption throughout the rest of our calculations. Since the total variation of u − v is uniformly bounded, we can assume that the right-hand sides of (7.44) and (7.45) are bounded by

  (1/2)|ε| + O(1)|e|.

In particular, we then find that

  ε − (1/2)|ε| − O(1)|e| ≤ q_ℓ^+ − q_ℓ^− ≤ ε + (1/2)|ε| + O(1)|e|.

Hence if ε > 0, from the left inequality we find that

  q_ℓ^+ > q_ℓ^−  or  |ε| ≤ O(1)|e|,
and if ε < 0, from the right inequality above,

  q_ℓ^+ < q_ℓ^−  or  |ε| ≤ O(1)|e|.

If ε > 0 and q_ℓ^− ≥ q_ℓ^+, or ε < 0 and q_ℓ^+ ≥ q_ℓ^−, then |ε| ≤ O(1)|e|. In this case we find, for k ≠ ℓ, or for k = ℓ and q_ℓ^− q_ℓ^+ > 0, that

  E_k = [ |q_k^−| ( W_k^− − W_k^+ ) + W_k^+ ( |q_k^−| − |q_k^+| ) ] ẋ
      ≤ [ |q_k^−| κ_1 |ε| + W_k^+ ( |ε|/2 + O(1)|e| ) ] |ẋ|
      ≤ O(1)|e|.   (7.47)

If k = ℓ and q_ℓ^− q_ℓ^+ < 0, then since | q_ℓ^+ − q_ℓ^− | ≤ O(1)|e| and q_ℓ^− and q_ℓ^+ have opposite signs, we must have

  |q_ℓ^+| ≤ O(1)|e|  and  |q_ℓ^−| ≤ O(1)|e|.

Then we find that

  E_ℓ = [ |q_ℓ^−| W_ℓ^− − |q_ℓ^+| W_ℓ^+ ] ẋ ≤ O(1)|e|.   (7.48)

These observations imply that if |ε| = O(1)|e|, we have

  Σ_k E_k = O(1)|e|,

which is what we want to show. Thus in the following we can assume that either

  ε > 0 and q_ℓ^+ > q_ℓ^−,  or  ε < 0 and q_ℓ^+ < q_ℓ^−.   (7.49)
Now follows a discussion of several different cases, depending on whether the front is an approximate rarefaction wave or a shock wave, and on the signs of q_ℓ^− and q_ℓ^+.

Case R1: 0 < q_ℓ^− < q_ℓ^+, ε > 0. For k ≠ ℓ we recall from (7.38) that

  E_k = ( |q_k^+| − |q_k^−| ) W_k^+ (μ_k^+ − ẋ)
        + |q_k^−| ( W_k^+ − W_k^− )(μ_k^+ − ẋ) + |q_k^−| W_k^− (μ_k^+ − μ_k^−).   (7.50)
The second term in (7.50) is less than or equal to (cf. (7.39)) −cκ_1 |q_k^−| |ε|. Furthermore, by (7.45),

  | q_k^+ − q_k^− | ≤ O(1) |ε| ( δ + |q_ℓ^−| ( |q_ℓ^−| + |ε| ) + Σ_{k̃≠ℓ} |q_k̃^−| ) + O(1)|e|.

By the continuity of μ_k,

  | μ_k^+ − μ_k^− | = O(1) ( |ε| + |e| ).

Hence from (7.38) we find that

  E_k ≤ O(1) |ε| ( δ + |q_ℓ^−| ( |q_ℓ^−| + |ε| ) + Σ_{k̃≠ℓ} |q_k̃^−| ) + O(1)|e| − cκ_1 |q_k^−| |ε|
      ≤ O(1) |ε| ( δ + Σ_{k̃≠ℓ} |q_k̃^−| ) + O(1)|e| − cκ_1 |ε| |q_k^−| + O(1) |ε| |q_ℓ^−| ( |q_ℓ^−| + |ε| ).   (7.51)
For k = ℓ the situation is more complicated. We define states and speeds

  ω̃ = H_ℓ(q_ℓ^− + ε)ω_{ℓ−1}^−,  μ̃ = μ_ℓ(ω_{ℓ−1}^−, ω̃),   (7.52)
  ω = H_ℓ(ε)ω_ℓ^−,  μ = μ_ℓ(ω_ℓ^−, ω);

see Figure 7.4.

Figure 7.4. The situation for 0 < q_ℓ^− < q_ℓ^+, ε > 0, and k = ℓ.

Remember that

  μ_ℓ^± = μ_ℓ(ω_{ℓ−1}^±, ω_ℓ^±).

Now by Lemma 7.4, with ω̄ = ω_{ℓ−1}^−, and with ε and ε′ there replaced by q_ℓ^− and ε,

  | (q_ℓ^− + ε)(μ̃ − μ) − q_ℓ^− (μ_ℓ^− − μ) | = O(1) |q_ℓ^−| |ε| ( |q_ℓ^−| + |ε| ).   (7.53)
We also find that (cf. (7.10) and the fact that μ_ℓ(u, u) = λ_ℓ(u))

  |μ − ẋ| ≤ | μ_ℓ(ω_ℓ^−, ω) − μ_ℓ(v^{δ,−}, v^{δ,−}) | + O(1)δ
          = | μ_ℓ(ω_ℓ^−, H_ℓ(ε)ω_ℓ^−) − μ_ℓ(ω_n^−, ω_n^−) | + O(1)δ
          ≤ | μ_ℓ(ω_ℓ^−, H_ℓ(ε)ω_ℓ^−) − μ_ℓ(ω_ℓ^−, ω_ℓ^−) | + | μ_ℓ(ω_ℓ^−, ω_ℓ^−) − μ_ℓ(ω_ℓ^−, ω_{ℓ+1}^−) |
            + | μ_ℓ(ω_ℓ^−, ω_{ℓ+1}^−) − μ_ℓ(ω_{ℓ+1}^−, ω_{ℓ+2}^−) | + ⋯ + | μ_ℓ(ω_{n−1}^−, ω_n^−) − μ_ℓ(ω_n^−, ω_n^−) | + O(1)δ
          ≤ O(1) ( |ε| + | ω_ℓ^− − ω_{ℓ+1}^− | + ⋯ + | ω_{n−1}^− − ω_n^− | ) + O(1)δ
          ≤ O(1) ( δ + |q_ℓ^−| + Σ_{k>ℓ} |q_k^−| ).   (7.54)

Furthermore,

  | μ_ℓ^+ − μ̃ | = | μ_ℓ(ω_{ℓ−1}^+, H_ℓ(q_ℓ^+)ω_{ℓ−1}^+) − μ_ℓ(ω_{ℓ−1}^−, H_ℓ(q_ℓ^− + ε)ω_{ℓ−1}^−) |
              ≤ O(1) ( | ω_{ℓ−1}^+ − ω_{ℓ−1}^− | + | H_ℓ(q_ℓ^+)ω_{ℓ−1}^+ − H_ℓ(q_ℓ^− + ε)ω_{ℓ−1}^− | )
              ≤ O(1) ( | ω_{ℓ−1}^+ − ω_{ℓ−1}^− | + | q_ℓ^+ − q_ℓ^− − ε | )
              ≤ O(1) ( | q_1^+ − q_1^− | + ⋯ + | q_{ℓ−1}^+ − q_{ℓ−1}^− | + | q_ℓ^+ − q_ℓ^− − ε | )
              = O(1) |ε| ( δ + |q_ℓ^−| ( |q_ℓ^−| + |ε| ) + Σ_{k≠ℓ} |q_k^−| ) + O(1)|e|.   (7.55)

Since the ℓth field is genuinely nonlinear, Lemma 7.6 yields

  μ − μ̃ ≥ c |q_ℓ^−|   (7.56)

for some constant c > 0 depending only on the system. Remember that in this case

  W_ℓ^+ = W_ℓ^− + κ_1 |ε|.

Moreover, ε, q_ℓ^+, and q_ℓ^− are positive. Using the above inequalities, we compute
  E_ℓ = W_ℓ^+ q_ℓ^+ (μ_ℓ^+ − ẋ) − W_ℓ^− q_ℓ^− (μ_ℓ^− − ẋ)
      = ( W_ℓ^− + κ_1|ε| ) q_ℓ^+ (μ_ℓ^+ − ẋ) − W_ℓ^− q_ℓ^− (μ_ℓ^− − ẋ)
      = κ_1|ε| q_ℓ^+ (μ_ℓ^+ − ẋ) + W_ℓ^− [ q_ℓ^+ (μ_ℓ^+ − ẋ) − q_ℓ^− (μ_ℓ^− − ẋ) ].

For the first term we write

  q_ℓ^+ (μ_ℓ^+ − ẋ) = (q_ℓ^− + ε)(μ̃ − μ) + (q_ℓ^− + ε)( (μ_ℓ^+ − μ̃) + (μ − ẋ) ) + ( q_ℓ^+ − q_ℓ^− − ε )(μ_ℓ^+ − ẋ),

so that, by (7.56), (7.55), (7.54), and (7.44),

  κ_1|ε| q_ℓ^+ (μ_ℓ^+ − ẋ) ≤ −cκ_1 |ε| q_ℓ^− ( q_ℓ^− + ε )
      + O(1) κ_1 |ε| ( δ + q_ℓ^− ( q_ℓ^− + ε ) + Σ_{k≠ℓ} |q_k^−| ) + O(1)|e|.

For the second term, Lemma 7.4 in the form (7.53) together with (7.54) and (7.55) gives

  W_ℓ^− [ q_ℓ^+ (μ_ℓ^+ − ẋ) − q_ℓ^− (μ_ℓ^− − ẋ) ]
      ≤ O(1) |ε| ( δ + q_ℓ^− ( q_ℓ^− + ε ) + Σ_{k≠ℓ} |q_k^−| ) + O(1)|e|.

Altogether,

  E_ℓ ≤ O(1) |ε| ( δ + Σ_{k≠ℓ} |q_k^−| ) + O(1)|e| + |ε| q_ℓ^− ( q_ℓ^− + ε )( O(1) − cκ_1 ).   (7.57)

Adding (7.57) and (7.51) we obtain

  Σ_k E_k = E_ℓ + Σ_{k≠ℓ} E_k
          ≤ O(1) |ε| δ + O(1)|e| + Σ_{k≠ℓ} |ε| |q_k^−| ( O(1) − cκ_1 ) + |ε| q_ℓ^− ( q_ℓ^− + ε )( O(1) − cκ_1 )
          ≤ O(1) |ε| δ + O(1)|e|,   (7.58)

which holds for sufficiently large κ_1. This implies (7.30) in Case R1.
Case R2: q_ℓ^− < q_ℓ^+ < 0, ε > 0. Writing E_k as in (7.38), and using (7.44) (instead of (7.45) as in the previous case), we find for k ≠ ℓ that

  E_k ≤ O(1) |ε| ( δ + |q_ℓ^+| ( |q_ℓ^+| + |ε| ) + Σ_{k̃≠ℓ} |q_k̃^+| ) + O(1)|e| − cκ_1 |q_k^+| |ε|.   (7.59)

For k = ℓ the situation is similar to the previous case. We define auxiliary states and speeds

  ω̃ = H_ℓ(q_ℓ^+ − ε)ω_{ℓ−1}^+,  μ̃ = μ_ℓ(ω_{ℓ−1}^+, ω̃),   (7.60)
  ω = H_ℓ(−ε)ω_ℓ^+,  μ = μ_ℓ(ω_ℓ^+, ω);

see Figure 7.5.

Figure 7.5. The situation for q_ℓ^− < q_ℓ^+ < 0, ε > 0, and k = ℓ.

Remember that

  ω_ℓ^+ = H_ℓ(q_ℓ^+)ω_{ℓ−1}^+  and  μ_ℓ^+ = μ_ℓ(ω_{ℓ−1}^+, ω_ℓ^+).

In this case we use (7.33) with ω̄ = ω_{ℓ−1}^+, and with ε and ε′ there replaced by q_ℓ^+ and −ε. This gives

  | (q_ℓ^+ − ε)(μ̃ − μ) − q_ℓ^+ (μ_ℓ^+ − μ) | = O(1) |q_ℓ^+| |ε| ( |q_ℓ^+| + |ε| ).   (7.61)

As in (7.54), we find that

  |μ − ẋ| ≤ | μ_ℓ(ω_ℓ^+, ω) − μ_ℓ(v^{δ,+}, v^{δ,+}) | + O(1)δ
          = | μ_ℓ(ω_ℓ^+, H_ℓ(−ε)ω_ℓ^+) − μ_ℓ(v^{δ,+}, v^{δ,+}) | + O(1)δ
          ≤ O(1)δ + O(1) | ω_ℓ^+ − ω_n^+ |
          ≤ O(1) ( δ + Σ_{k>ℓ} |q_k^+| ).   (7.62)

We also obtain the analogue of (7.55), namely,

  | μ_ℓ^− − μ̃ | = | μ_ℓ(ω_{ℓ−1}^−, H_ℓ(q_ℓ^−)ω_{ℓ−1}^−) − μ_ℓ(ω_{ℓ−1}^+, H_ℓ(q_ℓ^+ − ε)ω_{ℓ−1}^+) |
              = O(1) ( | ω_{ℓ−1}^− − ω_{ℓ−1}^+ | + | q_ℓ^+ − q_ℓ^− − ε | )
              = O(1) |ε| ( δ + |q_ℓ^+| ( |q_ℓ^+| + |ε| ) + Σ_{k≠ℓ} |q_k^+| ) + O(1)|e|.   (7.63)
k=
By genuine nonlinearity, using Lemma 7.6, we find that μ ˜ − μ > c q + ,
(7.64)
for some constant c. Now W− = W+ + κ1 || . Using the above estimates (7.61)–(7.64), we compute E = W+ q+ μ+ ˙ − W+ + κ1 q− μ− ˙ −x −x -
. = −κ1 q− μ− ˙ − W+ q+ μ+ ˙ − q− μ− ˙ −x −x −x
≤ −κ1 q+ + (˜ μ − μ ) + κ1 q+ + μ− ˜ + |x˙ − μ | −μ -
. + κ1 q+ − q− − μ− ˙ − W+ q+ μ+ ˙ − q− μ− ˙ −x −x −x ≤ −cκ1 q+ q+ + + +
+ + + O (1) κ1 q + δ + q q + + qk + |e| W+
-
− q + μ+ − x˙ − q− + + ≤ −cκ1 q q +
k=
− . μ − x˙
+ q + |e| + O (1) κ1 q+ + δ + q+ q+ + + k k=
q + μ+ − μ − q + + (˜ + μ − μ )
+ + |μ − x| q + μ− − μ + q+ − q− − μ− − x ˙ ˙ + ˜ + q ≤ −cκ1 q+ q+ + + O (1) δ + q+ q+ + + k W+
k=
+ O (1) |e| q + + O (1) |e| + q + q + + (O (1) − cκ1 ) . ≤ O (1) δ + k k=
(7.65) Finally,
k
Ek = E +
Ek
k=
≤ O (1) δ + O (1) |e| +
q + (O (1) − cκ1 ) k k=
+ q+ q+ + (O (1) − cκ1 )
258
7. Well-Posedness of the Cauchy Problem
≤ O (1) δ + O (1) |e|
(7.66)
by choosing κ_1 larger if necessary. Hence (7.30) holds in this case as well.

Case R3: q_ℓ^− < 0 < q_ℓ^+, ε > 0. Since the front at x is a rarefaction front, both estimates (7.51) and (7.59) hold. Moreover, we have that

  q_ℓ^+ − q_ℓ^− = |q_ℓ^+| + |q_ℓ^−| < 2ε ≤ 2δ.

Then, from AD + BC ≤ (A + B)(C + D) for positive A, B, C, and D, we obtain

  E_ℓ = W_ℓ^+ |q_ℓ^+| (μ_ℓ^+ − ẋ) − W_ℓ^− |q_ℓ^−| (μ_ℓ^− − ẋ)
      ≤ O(1) ( |q_ℓ^+| + |q_ℓ^−| )( | μ_ℓ^+ − ẋ | + | μ_ℓ^− − ẋ | )
      ≤ O(1) ε ( | μ_ℓ^+ − ẋ | + | μ_ℓ^− − ẋ | )
      = O(1) ε ( | μ_ℓ(ω_{ℓ−1}^+, ω_ℓ^+) − μ_ℓ(v^{δ,+}, v^{δ,+}) | + | μ_ℓ(ω_{ℓ−1}^−, ω_ℓ^−) − μ_ℓ(v^{δ,−}, v^{δ,−}) | )
      = O(1) ε ( δ + |q_ℓ^+| + |q_ℓ^−| + Σ_{k>ℓ} |q_k^+| + Σ_{k<ℓ} |q_k^−| )
      = O(1) ε ( δ + Σ_{k>ℓ} |q_k^+| + Σ_{k<ℓ} |q_k^−| ).   (7.67)

Using (7.51) for k < ℓ and (7.59) for k > ℓ, and choosing κ_1 sufficiently large, we obtain (7.30).

Now we shall study the cases where the front at x is a shock front. Here, too, we prove (7.30) in three cases depending on the signs of q_ℓ^− and q_ℓ^+. If the front at x is a shock front, then by the construction of the front-tracking approximation we have

  H_ℓ(ε)v^{δ,−} = v^{δ,+} + e,

or

  H_ℓ(ε)H(q^−)u^{δ_1} = H(q^+)u^{δ_1} + e,

where q^± = (q_1^±, …, q_n^±), and e is the error of the front at x. Then we can use (7.31) and the continuity of the mapping H to find that

  | q_ℓ^+ − q_ℓ^− − ε | + Σ_{k≠ℓ} | q_k^+ − q_k^− |
    = O(1) |ε| ( |q_ℓ^−| ( |q_ℓ^−| + |ε| ) + Σ_{k≠ℓ} |q_k^−| ) + O(1)|e|.   (7.68)
We also have that

  u^{δ_1} = H(−q^+)v^{δ,+} = H(−q^+) ( H_ℓ(ε)v^{δ,−} + e ),

or

  H(−q^−)v^{δ,−} = H(−q^+)H_ℓ(ε)v^{δ,−} + O(1)|e|,

by the continuity of H. From this we obtain

  | q_ℓ^+ − q_ℓ^− − ε | + Σ_{k≠ℓ} | q_k^+ − q_k^− |
    = O(1) |ε| ( |q_ℓ^+| ( |q_ℓ^+| + |ε| ) + Σ_{k≠ℓ} |q_k^+| ) + O(1)|e|.   (7.69)
Case S1: 0 < q_ℓ^+ < q_ℓ^−, ε < 0. If k ≠ ℓ, then we can write E_k as in (7.38) and use the arguments leading to (7.51) together with the estimate (7.69) to obtain

  E_k ≤ O(1) |ε| ( |q_ℓ^+| ( |q_ℓ^+| + |ε| ) + Σ_{k̃≠ℓ} |q_k̃^+| ) + O(1)|e| − cκ_1 |q_k^+| |ε|.   (7.70)

Figure 7.6. The situation for 0 < q_ℓ^+ < q_ℓ^−, ε < 0, and k = ℓ.

For k = ℓ we define the auxiliary states and speeds as in (7.60); see Figure 7.6. Then the estimate (7.61) holds. Also, using (7.69) we find that

  | μ_ℓ^− − μ̃ | = O(1) ( | ω_{ℓ−1}^− − ω_{ℓ−1}^+ | + | q_ℓ^+ − q_ℓ^− − ε | )
              = O(1) |ε| ( |q_ℓ^+| ( |q_ℓ^+| + |ε| ) + Σ_{k≠ℓ} |q_k^+| ) + O(1)|e|.   (7.71)

Moreover,
  |μ − ẋ| = | μ_ℓ(ω_ℓ^+, ω) − μ_ℓ(v^{δ,−}, v^{δ,+}) |
          ≤ | μ_ℓ(ω_ℓ^+, H_ℓ(−ε)ω_ℓ^+) − μ_ℓ(v^{δ,+}, H_ℓ(−ε)v^{δ,+}) | + O(1)|e|
          = O(1) | ω_ℓ^+ − ω_n^+ | + O(1)|e|
          = O(1) ( Σ_{k>ℓ} |q_k^+| + |e| ).   (7.72)

By Lemma 7.6, we have

  μ − μ̃ > c q_ℓ^+.   (7.73)

In this case

  W_ℓ^+ = W_ℓ^− + κ_1 |ε|,   (7.74)

and ε < 0 < q_ℓ^+ < q_ℓ^−. We estimate

  E_ℓ = W_ℓ^+ q_ℓ^+ (μ_ℓ^+ − ẋ) − ( W_ℓ^+ − κ_1|ε| ) q_ℓ^− (μ_ℓ^− − ẋ)
      = κ_1|ε| q_ℓ^− (μ_ℓ^− − ẋ) + W_ℓ^+ [ q_ℓ^+ (μ_ℓ^+ − ẋ) − q_ℓ^− (μ_ℓ^− − ẋ) ].

For the first term we write q_ℓ^− = ( q_ℓ^+ + |ε| ) + ( q_ℓ^− − q_ℓ^+ − |ε| ) and decompose

  μ_ℓ^− − ẋ = (μ̃ − μ) + (μ_ℓ^− − μ̃) + (μ − ẋ),

so that, using (7.73), (7.71), and (7.72),

  κ_1|ε| q_ℓ^− (μ_ℓ^− − ẋ) ≤ −cκ_1 |ε| q_ℓ^+ ( q_ℓ^+ + |ε| )
      + O(1) κ_1 |ε| ( |q_ℓ^+| ( |q_ℓ^+| + |ε| ) + Σ_{k≠ℓ} |q_k^+| ) + O(1)|e|.

For the second term, (7.61), (7.71), and (7.72) give

  W_ℓ^+ | q_ℓ^+ (μ_ℓ^+ − ẋ) − q_ℓ^− (μ_ℓ^− − ẋ) |
      ≤ O(1) |ε| q_ℓ^+ ( q_ℓ^+ + |ε| ) + O(1) |ε| Σ_{k≠ℓ} |q_k^+| + O(1)|e|.

Hence
  E_ℓ ≤ O(1) |ε| Σ_{k≠ℓ} |q_k^+| + |ε| q_ℓ^+ ( q_ℓ^+ + |ε| )( O(1) − cκ_1 ) + O(1)|e|.   (7.75)

As before, choosing κ_1 sufficiently large, (7.75) and (7.70) imply

  Σ_k E_k = E_ℓ + Σ_{k≠ℓ} E_k ≤ O(1)|e|,   (7.76)
which is (7.30).

Case S2: q_ℓ^+ < q_ℓ^− < 0, ε < 0. In this case we proceed as in Case S1, but using (7.68) instead of (7.69). For k ≠ ℓ this gives the estimate

  E_k ≤ O(1) |ε| ( |q_ℓ^−| ( |q_ℓ^−| + |ε| ) + Σ_{k̃≠ℓ} |q_k̃^−| ) + O(1)|e| − cκ_1 |q_k^−| |ε|.   (7.77)

We now define the intermediate states ω̃, ω and the speeds μ̃ and μ as in (7.52); see Figure 7.7.

Figure 7.7. The situation for q_ℓ^+ < q_ℓ^− < 0, ε < 0, and k = ℓ.

Then the estimate (7.53) holds. As in Case R1, we compute

  | μ_ℓ^+ − μ̃ | = O(1) ( | ω_{ℓ−1}^+ − ω_{ℓ−1}^− | + | q_ℓ^+ − q_ℓ^− − ε | )
              = O(1) |ε| ( |q_ℓ^−| ( |q_ℓ^−| + |ε| ) + Σ_{k≠ℓ} |q_k^−| ) + O(1)|e|,   (7.78)

and

  |μ − ẋ| ≤ | μ_ℓ(ω_ℓ^−, H_ℓ(ε)ω_ℓ^−) − μ_ℓ(v^{δ,−}, H_ℓ(ε)v^{δ,−}) | + O(1)|e|
          ≤ O(1) Σ_{k≠ℓ} |q_k^−| + O(1)|e|.   (7.79)
In this case, genuine nonlinearity and Lemma 7.6 imply that

  μ̃ − μ > c |q_ℓ^−|,   (7.80)

with c > 0. Moreover, now

  W_ℓ^+ = W_ℓ^− − κ_1 |ε|.

Now we can use the by now familiar technique of estimating E_ℓ:

  E_ℓ = ( W_ℓ^− − κ_1|ε| ) |q_ℓ^+| (μ_ℓ^+ − ẋ) − W_ℓ^− |q_ℓ^−| (μ_ℓ^− − ẋ)
      ≤ −κ_1|ε| ( |q_ℓ^−| + |ε| )(μ̃ − μ)
        + κ_1|ε| ( |q_ℓ^−| + |ε| )( | μ_ℓ^+ − μ̃ | + |μ − ẋ| ) + κ_1|ε| | q_ℓ^+ − q_ℓ^− − ε | | μ_ℓ^+ − ẋ |
        + W_ℓ^− | |q_ℓ^+| (μ_ℓ^+ − ẋ) − |q_ℓ^−| (μ_ℓ^− − ẋ) |
      ≤ −cκ_1 |q_ℓ^−| |ε| ( |q_ℓ^−| + |ε| )
        + O(1) |ε| ( |q_ℓ^−| ( |q_ℓ^−| + |ε| ) + Σ_{k≠ℓ} |q_k^−| ) + O(1)|e|
      ≤ O(1) |ε| Σ_{k≠ℓ} |q_k^−| + |ε| |q_ℓ^−| ( |q_ℓ^−| + |ε| )( O(1) − cκ_1 ) + O(1)|e|.   (7.81)

Combining (7.81) and (7.77) we obtain

  Σ_k E_k = E_ℓ + Σ_{k≠ℓ} E_k ≤ O(1)|e|,   (7.82)
which is (7.30).

Case S3: q_ℓ^+ < 0 < q_ℓ^−, ε < 0. For k ≠ ℓ, the estimate (7.77) remains valid. Next we consider the case k = ℓ. The O(1) that multiplies |ε| in (7.68) (or (7.69)) is proportional to the total variation of the initial data. Hence we can assume that it is arbitrarily small by choosing T.V.(u_0) sufficiently small. Since all terms q_j^± are bounded, we can and will assume that

  | q_ℓ^+ − q_ℓ^− − ε | ≤ (1/2)|ε| + O(1)|e|.   (7.83)

Without loss of generality we may assume that |q_ℓ^+| ≥ |q_ℓ^−|. This implies that

  | q_ℓ^+ − q_ℓ^− − ε | ≥ ( q_ℓ^− − q_ℓ^+ ) − |ε| = q_ℓ^− − q_ℓ^+ + ε ≥ 2q_ℓ^− + ε.   (7.84)

Thus

  2q_ℓ^− + ε ≤ (1/2)|ε| + O(1)|e|,   (7.85)

or

  q_ℓ^− + ε ≤ −(1/4)|ε| + O(1)|e|,   (7.86)

which can be rewritten as

  −( q_ℓ^− + ε ) − O(1)|e| ≥ (1/4)|ε|.   (7.87)

From this we conclude that

  | q_ℓ^− + ε | ≥ (1/4)|ε| − O(1)|e|.   (7.88)

We define the auxiliary states ω̃, ω and the speeds μ̃ and μ as in (7.52); see Figure 7.8. Then estimates (7.78) and (7.79) hold.

Figure 7.8. The situation for q_ℓ^+ < 0 < q_ℓ^−, ε < 0, and k = ℓ.

By Lemma 7.6 we have that

  μ̃ − μ ≤ 0,   (7.89)
  μ_ℓ^− − μ ≥ c | q_ℓ^− + ε |,   (7.90)
for a positive constant c. Recalling that W_ℓ^− ≥ 1, and using (7.89), (7.90), and the estimates (7.78) and (7.79) (which remain valid in this case), we compute

  E_ℓ = W_ℓ^+ |q_ℓ^+| (μ_ℓ^+ − ẋ) − W_ℓ^− |q_ℓ^−| (μ_ℓ^− − ẋ)
      ≤ W_ℓ^+ |q_ℓ^+| (μ̃ − μ) − W_ℓ^− |q_ℓ^−| (μ_ℓ^− − μ)
        + W_ℓ^+ |q_ℓ^+| ( | μ_ℓ^+ − μ̃ | + |μ − ẋ| ) + W_ℓ^− |q_ℓ^−| |μ − ẋ|
      ≤ −c |q_ℓ^−| | q_ℓ^− + ε | + O(1) |ε| ( |q_ℓ^−| ( |q_ℓ^−| + |ε| ) + Σ_{k̃≠ℓ} |q_k̃^−| ) + O(1)|e|
      ≤ −(c/4) |q_ℓ^−| |ε| + O(1) |ε| ( |q_ℓ^−| ( |q_ℓ^−| + |ε| ) + Σ_{k̃≠ℓ} |q_k̃^−| ) + O(1)|e|.   (7.91)

Now (7.77) and (7.91) are used to balance the terms containing the factor Σ_{k̃≠ℓ} |q_k̃^−|. The remaining term,

  |q_ℓ^−| |ε| ( −c/4 + O(1)( |q_ℓ^−| + |ε| ) ),

can be made negative by choosing T.V.(u_0) (and hence the O(1)) sufficiently small. Hence also in this case (7.30) holds. Finally, if q_ℓ^− or q_ℓ^+ is zero, (7.30) follows as a limit of one of the previous cases. Summing up, we have proved the following theorem:

Theorem 7.7. Let u^{δ_1} and v^{δ_2} be front-tracking approximations with accuracies defined by δ_1 and δ_2, such that
  G(u^{δ_1}(t)) < M  and  G(v^{δ_2}(t)) < M  for t ≥ 0.   (7.92)

For sufficiently small M there exist constants κ_1, κ_2, and C_2 such that the functional Φ defined by (7.15) and (7.16) satisfies (7.21). Furthermore, there exists a constant C (independent of δ_1 and δ_2) such that

  ‖ u^{δ_1}(t) − v^{δ_2}(t) ‖_1 ≤ C ‖ u^{δ_1}(0) − v^{δ_2}(0) ‖_1 + Ct ( δ_1 ∨ δ_2 ).   (7.93)

To state the next theorem we need the following definition. Let the domain D be defined as the L1 closure of the set

  D_0 = { u ∈ L1(R; R^n) | u is piecewise constant and G(u) < M };   (7.94)

that is, D = cl(D_0). Since the total variation is small, we will assume that all possible values of u lie in a (small) neighborhood Ω ⊂ R^n.

Theorem 7.8. Let f_j ∈ C²(R^n), j = 1, …, n. Consider the strictly hyperbolic equation u_t + f(u)_x = 0, and assume that each wave family is either genuinely nonlinear or linearly degenerate. For all initial data u_0 in D, defined by (7.94), any sequence of front-tracking approximations u^δ converges to a unique limit u as δ → 0. Furthermore, let u and v denote solutions of u_t + f(u)_x = 0 with initial data u_0 and v_0, respectively, obtained as limits of front-tracking approximations. Then

  ‖ u(t) − v(t) ‖_1 ≤ C ‖ u_0 − v_0 ‖_1.   (7.95)
Proof. First we use (7.93) to conclude that any front-tracking approximation $u^\delta$ has a unique limit $u$ as $\delta \to 0$. Then we take the limit $\delta \to 0$ in (7.93) to conclude that (7.95) holds.

Note that this also gives an error estimate for front tracking for systems. If we denote the limit of the sequence $\{u^\delta\}$ by $u$ and set $v^{\delta_2} = u^\delta$, then by letting $\delta_2 \to 0$ in (7.93),
$$\big\| u^\delta(\,\cdot\,,t) - u(\,\cdot\,,t) \big\|_1 \le C\big( \big\| u^\delta_0 - u_0 \big\|_1 + \delta t \big) = O(1)\,\delta$$
for some finite constant $C$. Hence front tracking for systems is a first-order method.
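The stability estimate (7.93) and the first-order error bound above are statements about the $L^1$ distance between piecewise constant functions. As a small illustrative aside (not from the text; the function name and data layout are our own), this distance can be computed exactly by merging the breakpoints of the two functions:

```python
import bisect

def l1_distance(breaks_u, vals_u, breaks_v, vals_v):
    """Exact L1 distance between two piecewise constant functions.

    breaks_*: sorted breakpoints x_0 < ... < x_m; vals_* has length m - 1,
    with vals_*[i] the value taken on [x_i, x_{i+1}).  Both functions are
    assumed to vanish outside their breakpoint range.
    """
    def value(breaks, vals, x):
        if x < breaks[0] or x >= breaks[-1]:
            return 0.0
        i = bisect.bisect_right(breaks, x) - 1
        return vals[min(i, len(vals) - 1)]

    # Merge the breakpoints so both functions are constant on each cell,
    # then sum |u - v| times the cell length.
    xs = sorted(set(breaks_u) | set(breaks_v))
    total = 0.0
    for a, b in zip(xs, xs[1:]):
        mid = 0.5 * (a + b)
        total += abs(value(breaks_u, vals_u, mid)
                     - value(breaks_v, vals_v, mid)) * (b - a)
    return total
```

For instance, two unit boxes offset by $1/2$ are at $L^1$ distance $1$, matching a direct calculation.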
7.2 Uniqueness

Let $S_t$ denote the map that takes initial data $u_0$ to the solution $u$ of
$$u_t + f(u)_x = 0, \qquad u|_{t=0} = u_0$$
at time $t$; that is, $u = S_t u_0$. In Chapter 6 we showed the existence of the semigroup $S_t$, and in the previous section its stability for initial data in the class $D$ as limits of approximate solutions obtained by front tracking. Thus we know that it satisfies
$$S_0 u = u, \qquad S_t S_s u = S_{t+s} u, \qquad \|S_t u - S_s v\|_1 \le L\big( |t-s| + \|u - v\|_1 \big)$$
for all $t, s \ge 0$ and $u, v$ in $D$. In this section we prove uniqueness of solutions that have initial data in $D$. We want to demonstrate that any other solution coincides with this semigroup. To do this we need essentially three assumptions: first, that $u$ is a weak solution; second, that it satisfies Lax's entropy condition across discontinuities; and third, that it has locally bounded variation along a certain family of curves. Concretely, we define an entropy solution of
$$u_t + f(u)_x = 0, \qquad u|_{t=0} = u_0,$$
to be a bounded measurable function $u = u(x,t)$ of bounded total variation satisfying the following conditions:

A The function $u = u(x,t)$ is a weak solution of the Cauchy problem (7.1) taking values in $D$, i.e.,
$$\int_0^T \int_{\mathbb R} \big( u\varphi_t + f(u)\varphi_x \big)\,dx\,dt + \int_{\mathbb R} \varphi(x,0)\,u_0(x)\,dx = 0 \tag{7.96}$$
for all test functions $\varphi$ whose support is contained in the strip $[0,T\rangle \times \mathbb R$.
B Assume that $u$ has a jump discontinuity at some point $(x,t)$, i.e., there exist states $u_l, u_r \in \Omega$ and a speed $\sigma$ such that if we let
$$U(y,s) = \begin{cases} u_l & \text{for } y < x + \sigma(s-t), \\ u_r & \text{for } y \ge x + \sigma(s-t), \end{cases} \tag{7.97}$$
then
$$\lim_{\rho\to 0} \frac{1}{\rho^2} \int_{t-\rho}^{t+\rho} \int_{x-\rho}^{x+\rho} |u(y,s) - U(y,s)|\,dy\,ds = 0. \tag{7.98}$$
Furthermore, there exists a $k$ such that
$$\lambda_k(u_l) \ge \sigma \ge \lambda_k(u_r). \tag{7.99}$$
C There exists a $\theta > 0$ such that for all Lipschitz continuous functions $\gamma$ with Lipschitz constant not exceeding $\theta$, the total variation of $u(x,\gamma(x))$ is locally bounded.

Remark 7.9. One can prove, see Exercise 7.1, that the front-tracking solution constructed in the previous chapter is an entropy solution of the conservation law.

There is a direct argument showing that any weak solution, whether it is a limit of a front-tracking approximation or not, is Lipschitz continuous in time in the spatial $L^1$-norm, as long as the solution has a uniform bound on the total variation. We present that argument here.

Theorem 7.10. Let $u_0 \in D$, and let $u$ denote any weak solution of (7.1) such that $\mathrm{T.V.}(u(t)) \le C$. Then
$$\|u(\,\cdot\,,t) - u(\,\cdot\,,s)\|_1 \le C\, \|f\|_{\mathrm{Lip}}\, |t-s|, \qquad s, t \ge 0. \tag{7.100}$$
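Before turning to the proof, the estimate (7.100) can be sanity-checked in the simplest possible setting: linear advection $u_t + a u_x = 0$ with $f(u) = au$, so that $\|f\|_{\mathrm{Lip}} = |a|$ and the solution is a pure translation of the initial data. The helper below (our own illustration, not part of the text) computes $\|u(\,\cdot\,,t) - u(\,\cdot\,,s)\|_1$ exactly for the indicator initial datum $u_0 = \chi_{[0,1]}$, which has $\mathrm{T.V.}(u_0) = 2$:

```python
def l1_time_difference(a, t, s):
    """Exact ||u(.,t) - u(.,s)||_1 for u_t + a u_x = 0 with u0 the
    indicator of [0, 1]; the solution u(x, t) = u0(x - a t) is a shift."""
    d = abs(a * (t - s))
    # The two shifted boxes differ on two intervals of length min(d, 1).
    return 2.0 * min(d, 1.0)

# (7.100) with C = T.V.(u0) = 2 and ||f||_Lip = |a| predicts the bound
# 2 |a| |t - s|, which here is attained whenever |a| |t - s| <= 1.
```

The bound is attained with equality for small time differences, showing that the constant in (7.100) cannot be improved in general.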
Proof. Let $0 < s < t < T$, and let $\alpha_h$ be a smooth approximation to the characteristic function of the interval $[s,t]$, so that
$$\lim_{h\to 0} \alpha_h = \chi_{[s,t]}.$$
Furthermore, define $\varphi_h(y,\tau) = \alpha_h(\tau)\phi(y)$, where $\phi$ is any smooth function with compact support. If we insert this into the weak formulation
$$\int_0^T \int_{\mathbb R} \big( u\varphi_{h,t} + f(u)\varphi_{h,x} \big)\,dx\,dt + \int_{\mathbb R} \varphi_h(x,0)\,u(x,0)\,dx = 0, \tag{7.101}$$
and let $h \to 0$, we obtain
$$\int \phi(y)\big( u(y,t) - u(y,s) \big)\,dy + \int_s^t \int \phi_y\, f(u)\,dy\,ds = 0.$$
From this we obtain
$$\begin{aligned}
\|u(\,\cdot\,,t) - u(\,\cdot\,,s)\|_1 &= \sup_{|\phi|\le 1} \int \phi(y)\big( u(y,t) - u(y,s) \big)\,dy\\
&= \sup_{|\phi|\le 1} \Big( {-\int_s^t \int \phi_y(y)\, f(u)\,dy\,ds} \Big)\\
&\le \int_s^t \mathrm{T.V.}\big( f(u) \big)\,ds\\
&\le C\, \|f\|_{\mathrm{Lip}}\, (t-s),
\end{aligned}$$
which proves the claim. Here we first used Exercise A.1 and Theorem A.2, subsequently the definition (A.1) of $\mathrm{T.V.}(f)$, and finally the Lipschitz continuity of $f$ and the bound on the total variation of $u$. □

Remark 7.11. This argument provides an alternative to the proof of the Lipschitz continuity in Theorem 2.14 in the scalar case.

Before we can compare an arbitrary entropy solution with the semigroup solution, we need some preliminary results. First, Theorem 7.10 says that any function $u(\,\cdot\,,t)$ taking values in $D$ and satisfying A is $L^1$ Lipschitz continuous:
$$\|u(\,\cdot\,,t) - u(\,\cdot\,,s)\|_1 \le L(t-s), \qquad \text{for } t \ge s.$$
Furthermore, by the structure theorem for functions of bounded variation [148, Theorem 5.9.6], $u$ is continuous almost everywhere. For the sake of definiteness, we shall assume that all functions in $D$ are right continuous. Also, there exists a set $N$ of zero Lebesgue measure in the interval $[0,T]$ such that for $t \in [0,T]\setminus N$, the function $u(\,\cdot\,,t)$ is either continuous at $x$ or has a jump discontinuity there. Intuitively, the set $N$ can be thought of as the set of times at which collisions of discontinuities occur.

Lemma 7.12. If (7.96)–(7.98) hold, then
$$u_l = \lim_{y\to x-} u(y,t), \qquad u_r = \lim_{y\to x+} u(y,t),$$
and
$$\sigma(u_l - u_r) = f(u_l) - f(u_r).$$
Proof. Let $P_\lambda$ denote the parallelogram
$$P_\lambda = \big\{ (y,s) \mid |t-s| \le \lambda,\ |y - x - \sigma(s-t)| \le \lambda \big\}.$$
Integrating the conservation law over $P_\lambda$, we obtain
$$\begin{aligned}
&\int_{x-\lambda+\lambda\sigma}^{x+\lambda+\lambda\sigma} u(y,t+\lambda)\,dy - \int_{x-\lambda-\lambda\sigma}^{x+\lambda-\lambda\sigma} u(y,t-\lambda)\,dy\\
&\quad + \int_{t-\lambda}^{t+\lambda} (f(u) - \sigma u)\big(x+\lambda+\sigma(s-t),\,s\big)\,ds - \int_{t-\lambda}^{t+\lambda} (f(u) - \sigma u)\big(x-\lambda+\sigma(s-t),\,s\big)\,ds = 0.
\end{aligned}$$
If we furthermore integrate this with respect to $\lambda$ from $\lambda = 0$ to $\lambda = \rho$, and divide by $\rho^2$, we obtain
$$\begin{aligned}
&\frac{1}{\rho^2}\Big( \int_0^\rho \int_{x-\lambda+\lambda\sigma}^{x+\lambda+\lambda\sigma} u(y,t+\lambda)\,dy\,d\lambda - \int_0^\rho \int_{x-\lambda-\lambda\sigma}^{x+\lambda-\lambda\sigma} u(y,t-\lambda)\,dy\,d\lambda \Big)\\
&\quad + \frac{1}{\rho^2}\Big( \int_0^\rho \int_{t-\lambda}^{t+\lambda} (f(u)-\sigma u)\big(x+\lambda+\sigma(s-t),\,s\big)\,ds\,d\lambda - \int_0^\rho \int_{t-\lambda}^{t+\lambda} (f(u)-\sigma u)\big(x-\lambda+\sigma(s-t),\,s\big)\,ds\,d\lambda \Big) = 0.
\end{aligned}$$
Now let $\rho \to 0$. Then
$$\begin{aligned}
\frac{1}{\rho^2} \int_0^\rho \int_{x-\lambda+\lambda\sigma}^{x+\lambda+\lambda\sigma} u(y,t+\lambda)\,dy\,d\lambda &\to \tfrac12(u_l+u_r),\\
\frac{1}{\rho^2} \int_0^\rho \int_{x-\lambda-\lambda\sigma}^{x+\lambda-\lambda\sigma} u(y,t-\lambda)\,dy\,d\lambda &\to \tfrac12(u_l+u_r),\\
\frac{1}{\rho^2} \int_0^\rho \int_{t-\lambda}^{t+\lambda} (f(u)-\sigma u)\big(x+\lambda+\sigma(s-t),\,s\big)\,ds\,d\lambda &\to f(u_r) - \sigma u_r,\\
\frac{1}{\rho^2} \int_0^\rho \int_{t-\lambda}^{t+\lambda} (f(u)-\sigma u)\big(x-\lambda+\sigma(s-t),\,s\big)\,ds\,d\lambda &\to f(u_l) - \sigma u_l.
\end{aligned}$$
Hence
$$\tfrac12(u_l+u_r) - \tfrac12(u_l+u_r) + \big( f(u_r) - \sigma u_r \big) - \big( f(u_l) - \sigma u_l \big) = 0.$$
This concludes the proof of the lemma. □

The next lemma states that if $u$ satisfies C, then the discontinuities cannot cluster too tightly.

Lemma 7.13. Assume that $u\colon [0,T]\to D$ satisfies C. Let $t \in [0,T]$ and $\varepsilon > 0$. Then the set
$$B_{t,\varepsilon} = \Big\{ x \in \mathbb R \;\Big|\; \limsup_{s\to t+,\ y\to x} |u(x,t) - u(y,s)| > \varepsilon \Big\} \tag{7.102}$$
has no limit points.

Proof. Assume that $B_{t,\varepsilon}$ has a limit point $x_0$. Then there is a monotone sequence $\{x_i\}_{i=1}^\infty$ in $B_{t,\varepsilon}$ converging to $x_0$. Without loss of generality we assume that the sequence is decreasing. Since $u(x,t)$ is right continuous, we can find a point $z_i$ in $\langle x_i, x_{i-1}\rangle$ such that
$$|u(z_i,t) - u(x_i,t)| \le \frac{\varepsilon}{2}.$$
Now choose $s_i > t$ and $y_i \in \langle z_{i+1}, z_i\rangle$ such that
$$|u(y_i,s_i) - u(x_i,t)| \ge \varepsilon, \qquad |s_i - t| \le \theta \max\big\{ |y_i - z_i|,\ |y_i - z_{i+1}| \big\}.$$
We define a curve $\gamma(x)$ for $x \in [x_0, x_1]$ passing through all the points $(z_i,t)$ and $(y_i,s_i)$ by
$$\gamma(x) = \begin{cases}
t & \text{for } x = x_0 \text{ or } x \ge z_1,\\
s_i - (x-y_i)\dfrac{s_i-t}{z_i-y_i} & \text{for } x \in [y_i, z_i],\\
t + (x-z_{i+1})\dfrac{s_i-t}{y_i-z_{i+1}} & \text{for } x \in [z_{i+1}, y_i].
\end{cases} \tag{7.103}$$
Then $\gamma$ is Lipschitz continuous with Lipschitz constant $\theta$, and we have that
$$|u(y_i,s_i) - u(z_i,t)| \ge \frac{\varepsilon}{2}$$
for all $i \in \mathbb N$. This means that the total variation of $u(x,\gamma(x))$ is infinite, violating C, which concludes the proof of the lemma. □

In the following, we let $\hat\sigma$ be a number strictly larger than the absolute value of any characteristic speed, and we also demand that $\hat\sigma \ge 1/\theta$, where $\theta$ is the constant in C. The next lemma says that if $u$ satisfies C, then discontinuities cannot propagate faster than $\hat\sigma$. Precisely, we have the following result.

Lemma 7.14. Assume that $u\colon [0,T]\to D$ satisfies C. Then for $(x,t) \in \langle 0,T\rangle \times \mathbb R$,
$$\lim_{\substack{s\to t+,\ y\to x\pm\\ |x-y| > \hat\sigma(s-t)}} u(y,s) = u(x\pm,t). \tag{7.104}$$

Proof. We assume that the lemma does not hold. Then for some $(x_0,t)$ there exist decreasing sequences $s_j \to t$ and $y_j \to x_0$ such that
$$|y_j - x_0| \ge \hat\sigma(s_j - t), \qquad |u(y_j,s_j) - u(x_0,t)| \ge \varepsilon$$
for some $\varepsilon > 0$ and all $j \in \mathbb N$. Now let
$$z_0 = y_1 + \frac{s_1 - t}{\theta},$$
where as before $\theta$ is defined by C. Next we define a subsequence of $\{(y_j,s_j)\}$ as follows. Set $j_1 = 1$, and for $i \ge 1$ define
$$z_i = y_{j_i} - \frac{s_{j_i} - t}{\theta}, \qquad j_{i+1} = \min\big\{ k \mid s_k \le t - \theta(y_k - z_i) \big\}.$$
Then $y_{j_i} \in \langle z_{i+1}, z_i\rangle$ and
$$|s_{j_i} - t| \le \theta \max\big\{ |y_{j_i} - z_i|,\ |y_{j_i} - z_{i+1}| \big\}$$
for all $i$. Let $\gamma$ be the curve defined in (7.103). Since $z_i \to x_0$, we have that
$$|u(z_i,t) - u(x_0,t)| \le \frac{\varepsilon}{2}$$
for sufficiently large $i$. Consequently,
$$|u(z_i,t) - u(y_{j_i},s_{j_i})| \ge \frac{\varepsilon}{2},$$
and the total variation of $u(x,\gamma(x))$ is infinite, contradicting C. □

The next lemma concerns properties of the semigroup $S_t$. We assume that $u$ is a continuous function $u\colon [0,T]\to D$, and wish to estimate $S_T u(0) - u(T)$. Let $h$ be a small number such that $Nh = T$. Then we can calculate
$$\begin{aligned}
\|S_T u(0) - u(T)\|_1 &\le \sum_{i=1}^N \big\| S_{T-(i-1)h}\, u((i-1)h) - S_{T-ih}\, u(ih) \big\|_1\\
&\le L \sum_{i=1}^N \frac{1}{h} \big\| u(ih) - S_h u((i-1)h) \big\|_1\, h.
\end{aligned}$$
Letting $h$ tend to zero we formally obtain the following lemma:

Lemma 7.15. Assume that $u\colon [0,T]\to D$ is Lipschitz continuous in the $L^1$-norm. Then for every interval $[a,b]$ we have
$$\|S_T u(0) - u(T)\|_{L^1([a+\hat\sigma T,\,b-\hat\sigma T];\,\mathbb R^n)} \le O(1) \liminf_{h\to 0+} \int_0^T \frac{1}{h} \big\| S_h u(t) - u(t+h) \big\|_{L^1([a+\hat\sigma(t+h),\,b-\hat\sigma(t+h)];\,\mathbb R^n)}\,dt.$$

Proof. For ease of notation we set $\|\,\cdot\,\| = \|\,\cdot\,\|_{L^1([a+\hat\sigma(t+h),\,b-\hat\sigma(t+h)];\,\mathbb R^n)}$. Observe that by the finite speed of propagation, we can define $u(x,0)$ to be zero outside $[a,b]$, and the Lipschitz continuity of the semigroup written in the norm $\|\,\cdot\,\|$ looks identical to how it looked before. Let
$$\phi(t) = \liminf_{h\to 0+} \frac{1}{h}\, \|u(t+h) - S_h u(t)\|.$$
Note that $\phi$ is measurable, and for all $h > 0$, the function
$$\phi_h(t) = \frac{1}{h}\, \|u(t+h) - S_h u(t)\|$$
is continuous. Hence we have that
$$\phi(t) = \lim_{\varepsilon\to 0+}\ \inf_{h\in\mathbb Q\cap[0,\varepsilon]} \phi_h(t),$$
and therefore $\phi$ is Borel measurable. Define the functions
$$\Psi(t) = \|S_{T-t}\, u(t) - S_T u(0)\|, \qquad \psi(t) = \Psi(t) - L \int_0^t \phi(s)\,ds. \tag{7.105}$$
The function $\psi$ is Lipschitz continuous, and hence
$$\psi(T) = \int_0^T \psi'(s)\,ds. \tag{7.106}$$
Furthermore, Rademacher's theorem² implies that there exists a null set $N_1 \subseteq [0,T]$ such that $\Psi$ and $\psi$ are differentiable outside $N_1$. Furthermore, using that Lebesgue measurable functions are approximately continuous almost everywhere (see [47, p. 47]), we conclude that there exists another null set $N_2$ such that $\phi$ is continuous outside $N_2$. Let $N = N_1 \cup N_2$. Outside $N$ we have
$$\psi'(t) = \lim_{h\to 0} \frac{1}{h}\big( \Psi(t+h) - \Psi(t) \big) - L\phi(t). \tag{7.107}$$
Using properties of the semigroup we infer
$$\begin{aligned}
\Psi(t+h) - \Psi(t) &= \|S_{T-t-h}\, u(t+h) - S_T u(0)\| - \|S_{T-t}\, u(t) - S_T u(0)\|\\
&\le \|S_{T-t-h}\, u(t+h) - S_{T-t}\, u(t)\|\\
&= \|S_{T-t-h}\, u(t+h) - S_{T-t-h}\, S_h u(t)\|\\
&\le L\, \|u(t+h) - S_h u(t)\|,
\end{aligned}$$
which implies
$$\lim_{h\to 0} \frac{1}{h}\big( \Psi(t+h) - \Psi(t) \big) \le L \liminf_{h\to 0} \frac{1}{h}\, \|u(t+h) - S_h u(t)\| = L\phi(t).$$
Thus $\psi' \le 0$ almost everywhere, and we conclude that
$$\psi(T) \le 0, \tag{7.108}$$
which proves the lemma. □
The next two lemmas are technical results valid for functions satisfying (7.97) and (7.98).

Lemma 7.16. Assume that $u\colon [0,T]\to D$ is Lipschitz continuous, and that for some $(x,t)$ equations (7.97) and (7.98) hold. Then for all positive $\alpha$ we have
$$\lim_{\rho\to 0+}\ \sup_{|h|\le\rho} \int_0^\alpha |u(x + \lambda h + \rho y,\, t+h) - u_r|\,dy = 0, \tag{7.109}$$
$$\lim_{\rho\to 0+}\ \sup_{|h|\le\rho} \int_{-\alpha}^0 |u(x + \lambda h + \rho y,\, t+h) - u_l|\,dy = 0. \tag{7.110}$$
2 Rademacher’s theorem states that a Lipschitz function is differentiable almost everywhere; see [47, p. 81].
Proof. We assume that the limit in (7.109) is not zero. Then there exist sequences $\rho_i \to 0$ and $|h_i| < \rho_i$ and a $\delta > 0$ such that
$$\int_0^\alpha |u(x + \lambda h_i + \rho_i y,\, t+h_i) - u_r|\,dy > \delta \tag{7.111}$$
for all $i$. Without loss of generality we assume that $h_i > 0$, and let $v(z,h) = u(x + \lambda h + z,\, t+h)$. Then the map $h \mapsto v(\,\cdot\,,h)$ is Lipschitz continuous with respect to the $L^1$-norm, since
$$\begin{aligned}
\|v(\,\cdot\,,h) - v(\,\cdot\,,\eta)\|_1 &= \int |u(z,t+h) - u(\lambda(\eta-h)+z,\,t+\eta)|\,dz\\
&\le \int |u(z,t+h) - u(z,t+\eta)|\,dz + \int |u(z,t+\eta) - u(\lambda(\eta-h)+z,\,t+\eta)|\,dz\\
&\le M|h-\eta| + \lambda\,|\eta-h|\,\mathrm{T.V.}\big(u(t+\eta)\big)\\
&\le \tilde M\,|\eta-h|.
\end{aligned}$$
From (7.111) we obtain
$$\begin{aligned}
\int_0^{\alpha\rho_i} |u(x+\lambda h+z,\,t+h) - u_r|\,dz &\ge \int_0^{\alpha\rho_i} |u(x+\lambda h_i+z,\,t+h_i) - u_r|\,dz\\
&\quad - \int_0^{\alpha\rho_i} |u(x+\lambda h_i+z,\,t+h_i) - u(x+\lambda h+z,\,t+h)|\,dz\\
&\ge \delta\rho_i - \tilde M\,|h_i - h|.
\end{aligned}$$
We can (safely) assume that $\delta/\tilde M < 1$ (if this is not so, then (7.111) will hold for smaller $\delta$ as well). We integrate the last inequality with respect to $h$, for $h$ in $[-\rho_i,\rho_i]$. Since $\langle h_i - \rho_i\delta/\tilde M,\ h_i\rangle \subset [-\rho_i,\rho_i]$, we obtain
$$\int_{-\rho_i}^{\rho_i} \int_0^{\alpha\rho_i} |u(x+\lambda h+z,\,t+h) - u_r|\,dz\,dh \ge \int_{h_i-\rho_i\delta/\tilde M}^{h_i} \big( \delta\rho_i - \tilde M(h_i-h) \big)\,dh = \frac{\delta^2\rho_i^2}{2\tilde M}.$$
Comparing this with (7.97) and (7.98) yields a contradiction. The limit (7.110) is proved similarly. □
For the next lemma, recall that a (signed) Radon measure is a (signed) regular Borel measure³ that is finite on compact sets.

Lemma 7.17. Assume that $w$ is in $L^1(\langle a,b\rangle;\mathbb R^n)$ such that for some Radon measure $\mu$ we have
$$\Big| \int_{x_1}^{x_2} w(x)\,dx \Big| \le \mu\big([x_1,x_2]\big) \qquad \text{for all } a < x_1 < x_2 < b. \tag{7.112}$$
Then
$$\int_a^b |w(x)|\,dx \le \mu\big(\langle a,b\rangle\big). \tag{7.113}$$

Proof. First observe that the assumptions of the lemma also hold if the closed interval on the right-hand side of (7.112) is replaced by an open interval. Indeed,
$$\Big| \int_{x_1}^{x_2} w(x)\,dx \Big| = \lim_{\varepsilon\to 0} \Big| \int_{x_1+\varepsilon}^{x_2-\varepsilon} w(x)\,dx \Big| \le \lim_{\varepsilon\to 0} \mu\big([x_1+\varepsilon,\,x_2-\varepsilon]\big) = \mu\big(\langle x_1,x_2\rangle\big).$$
Secondly, since $w$ is in $L^1$, it can be approximated by piecewise constant functions. Let $v$ be a piecewise constant function with discontinuities located at $a = x_0 < x_1 < \dots < x_N = b$ such that
$$\int_a^b |w(x) - v(x)|\,dx \le \varepsilon.$$
Then we have
$$\begin{aligned}
\int_a^b |w(x)|\,dx &\le \int_a^b \big( |w(x)-v(x)| + |v(x)| \big)\,dx\\
&\le \varepsilon + \sum_i \int_{x_{i-1}}^{x_i} |v(x)|\,dx\\
&= \varepsilon + \sum_i \Big| \int_{x_{i-1}}^{x_i} v(x)\,dx \Big|\\
&\le \varepsilon + \sum_i \Big| \int_{x_{i-1}}^{x_i} \big( v(x)-w(x) \big)\,dx \Big| + \sum_i \Big| \int_{x_{i-1}}^{x_i} w(x)\,dx \Big|\\
&\le \varepsilon + \int_a^b |v(x)-w(x)|\,dx + \sum_i \mu\big(\langle x_{i-1},x_i\rangle\big)\\
&\le 2\varepsilon + \mu\big(\langle a,b\rangle\big).
\end{aligned}$$

³A Borel measure $\mu$ is regular if it is outer regular on all Borel sets (i.e., $\mu(B) = \inf\{\mu(A) \mid A \supseteq B,\ A \text{ open}\}$ for all Borel sets $B$) and inner regular on all open sets (i.e., $\mu(U) = \sup\{\mu(K) \mid K \subset U,\ K \text{ compact}\}$ for all open sets $U$).
Since $\varepsilon$ can be made arbitrarily small, this proves the lemma. □

Next we need two results that state how well the semigroup is approximated, first by the solution of a Riemann problem with states that are close to the initial state for the semigroup, and second by the solution of the linearized equation. To define this precisely, let $\omega_0$ be a function in $D$, fix a point $x$ on the real line (which will remain fixed throughout the next lemma and its proof), and let $\omega(y,t)$ be the solution of the Riemann problem
$$\omega_t + f(\omega)_y = 0, \qquad \omega(y,0) = \begin{cases} \omega_0(x-) & \text{for } y < 0,\\ \omega_0(x+) & \text{for } y \ge 0. \end{cases}$$
(If $\omega_0$ is continuous at $x$, then $\omega(y,t) = \omega_0(x)$ is constant.) Define $\tilde A = df(\omega_0(x+))$, and let $\tilde u$ be the solution of the linearized equation
$$\tilde u_t + \tilde A\,\tilde u_y = 0, \qquad \tilde u(y,0) = \omega_0(y). \tag{7.114}$$
Furthermore, define $\hat u(y,t)$ by
$$\hat u(y,t) = \begin{cases} \omega(y-x,\,t) & \text{for } |y-x| \le \hat\sigma t,\\ \omega_0(y) & \text{otherwise.} \end{cases} \tag{7.115}$$
Then we can state the following lemma.

Lemma 7.18. Let $\omega_0 \in D$. Then we have
$$\frac{1}{h} \int_{x-\rho+h\hat\sigma}^{x+\rho-h\hat\sigma} |(S_h\omega_0)(y) - \hat u(y,h)|\,dy = O(1)\,\mathrm{T.V.}_{\langle x-\rho,x\rangle \cup \langle x,x+\rho\rangle}(\omega_0), \tag{7.116}$$
$$\frac{1}{h} \int_{x-\rho+h\hat\sigma}^{x+\rho-h\hat\sigma} |(S_h\omega_0)(y) - \tilde u(y,h)|\,dy = O(1)\,\big( \mathrm{T.V.}_{\langle x-\rho,x+\rho\rangle}(\omega_0) \big)^2, \tag{7.117}$$
for all $x$ and all positive $h$ and $\rho$ such that $x - \rho + h\hat\sigma < x + \rho - h\hat\sigma$.

Proof. We first prove (7.117). In the proof we shall need the following general result. Let $\bar v$ be the solution of $\bar v_t + f(\bar v)_y = 0$ with Riemann initial data
$$\bar v(y,0) = \begin{cases} u_l & \text{for } y < 0,\\ u_r & \text{for } y \ge 0, \end{cases}$$
for some states $u_l, u_r \in \Omega$. This Riemann problem is solved by waves separating constant states $u_l = v_0, v_1, \dots, v_n = u_r$. Let $u_c$ be a constant state in $\Omega$ and set $A_c = df(u_c)$. Assume that $u_l$ and $u_r$ satisfy
$$A_c(u_l - u_r) = \lambda_k^c (u_l - u_r);$$
i.e., $\lambda_k^c$ is the $k$th eigenvalue and $u_l - u_r$ is a corresponding $k$th eigenvector of $A_c$. Let $\tilde v$ be defined by
$$\tilde v(y,t) = \begin{cases} u_l & \text{for } y < \lambda_k^c t,\\ u_r & \text{for } y \ge \lambda_k^c t \end{cases}$$
($\tilde v$ solves $u_t + A_c u_y = 0$ with a single jump at $y = 0$ from $u_l$ to $u_r$ as initial data). We wish to estimate
$$I = \frac{1}{t} \int_{-\hat\sigma t}^{\hat\sigma t} |\bar v(y,t) - \tilde v(y,t)|\,dy.$$
Note that since $\bar v$ and $\tilde v$ are equal outside the range of integration, the limits in the integral can be replaced by $\mp\infty$. Due to the hyperbolicity of the system, the vectors $\{r_j(u)\}_{j=1}^n$ form a basis in $\mathbb R^n$, and hence we can find unique numbers $\bar\varepsilon_j^l$ and $\bar\varepsilon_j^r$ such that
$$u_r - u_l = \sum_{j=1}^n \bar\varepsilon_j^l\, r_j(u_l) = \sum_{j=1}^n \bar\varepsilon_j^r\, r_j(u_r). \tag{7.118}$$
From $u_r - u_l = \varepsilon^c\, r_k(u_c)$ for some $\varepsilon^c$ it follows that, for $i \ne k$,
$$\begin{aligned}
\bar\varepsilon_i^l &= l_i(u_l) \cdot \sum_{j=1}^n \bar\varepsilon_j^l\, r_j(u_l) = l_i(u_l) \cdot (u_r - u_l)\\
&= \big( l_i(u_l) - l_i(u_c) \big) \cdot (u_r - u_l) + l_i(u_c) \cdot (u_r - u_l)\\
&= \big( l_i(u_l) - l_i(u_c) \big) \cdot (u_r - u_l) + \varepsilon^c\, l_i(u_c) \cdot r_k(u_c)\\
&= \big( l_i(u_l) - l_i(u_c) \big) \cdot (u_r - u_l).
\end{aligned}$$
Thus we conclude (using an identical argument for the right state) that
$$|\bar\varepsilon_i^l| \le C\,|u_l - u_r|\,|u_l - u_c|, \qquad |\bar\varepsilon_i^r| \le C\,|u_l - u_r|\,|u_r - u_c|, \qquad i \ne k. \tag{7.119}$$
Let $\varepsilon_i$ denote the strength of the $i$th wave in $\bar v$. Then, by construction of the solution of the Riemann problem, for $i < k$ we have that
$$|\varepsilon_i - \bar\varepsilon_i^l| \le C\big( |v_{i-1} - u_l|^2 + |v_i - u_l|^2 \big) \le C\,|u_l - u_r|^2,$$
while for $i > k$ we find that
$$|\varepsilon_i - \bar\varepsilon_i^r| \le C\,|u_l - u_r|^2,$$
for some constant $C$. Assume that the $k$-wave in $\bar v$ moves with speed in the interval $[\lambda_k, \bar\lambda_k]$; i.e., if the $k$-wave is a shock, then $\lambda_k = \bar\lambda_k = \mu_k(v_{k-1},v_k)$, and if the wave is a rarefaction wave, then $\lambda_k = \lambda_k(v_{k-1})$ and $\bar\lambda_k = \lambda_k(v_k)$.
Set $s = \min(\lambda_k, \lambda_k^c)$ and $\bar s = \max(\bar\lambda_k, \lambda_k^c)$. Then we can write $I$ as
$$I = \frac{1}{t} \int_{-\infty}^{st} |u_l - \bar v(y,t)|\,dy + \frac{1}{t} \int_{st}^{\bar s t} |\tilde v(y,t) - \bar v(y,t)|\,dy + \frac{1}{t} \int_{\bar s t}^{\infty} |u_r - \bar v(y,t)|\,dy =: I_1 + I_2 + I_3.$$
Next we note that the first integral above can be estimated as
$$I_1 \le C \sum_{i=1}^{k-1} |v_i - u_l| \le C \sum_{i=1}^{k-1} |\varepsilon_i| \le C\Big( \sum_{i=1}^{k-1} |\bar\varepsilon_i^l| + |u_r - u_l|^2 \Big),$$
and similarly,
$$I_3 \le C\Big( \sum_{i=k+1}^{n} |\bar\varepsilon_i^r| + |u_l - u_r|^2 \Big).$$
Using (7.119) we obtain
$$I_1 + I_3 \le C\,|u_l - u_r|\big( |u_l - u_c| + |u_r - u_c| + |u_l - u_r| \big) \le C\,|u_l - u_r|\big( |u_l - u_c| + |u_r - u_c| \big), \tag{7.120}$$
for some constant $C$. It remains to estimate $I_2$. We first assume that the $k$-wave in $\bar v$ is a shock wave and that $\lambda_k^c > \mu_k(v_{k-1},v_k)$. Then
$$\begin{aligned}
I_2 &= \big( \lambda_k^c - \mu_k(v_{k-1},v_k) \big)\,|u_l - v_k|\\
&\le C\,|u_l - v_k|\big( |u_c - v_{k-1}| + |u_c - v_k| \big)\\
&\le C\,|u_l - u_r|\big( |u_l - u_c| + |u_r - u_c| + |v_k - u_r| + |v_{k-1} - u_l| \big)\\
&\le C\,|u_l - u_r|\big( |u_l - u_c| + |u_r - u_c| \big) \tag{7.121}
\end{aligned}$$
by the above estimates for $|v_k - u_r|$ and $|v_{k-1} - u_l|$. If $\lambda_k^c \le \mu_k(v_{k-1},v_k)$ or if the $k$-wave is a rarefaction wave, we establish (7.121) similarly. Combining this with (7.120) we find that
$$I \le C\,|u_l - u_r|\big( |u_l - u_c| + |u_r - u_c| \big). \tag{7.122}$$
Having established this preliminary estimate, we turn to the proof of (7.117). Let $\bar\omega_0$ be a piecewise constant approximation to $\omega_0$ such that
$$\bar\omega_0(x\pm) = \omega_0(x\pm), \qquad \int_{x-\rho}^{x+\rho} |\bar\omega_0(y) - \omega_0(y)|\,dy \le \epsilon, \qquad \mathrm{T.V.}_{\langle x-\rho,x+\rho\rangle}(\bar\omega_0) \le \mathrm{T.V.}_{\langle x-\rho,x+\rho\rangle}(\omega_0). \tag{7.123}$$
Furthermore, let $v$ be the solution of the linear hyperbolic problem
$$v_t + \tilde A v_y = 0, \qquad v(y,0) = \bar\omega_0(y),$$
where again $\tilde A = df(\omega_0(x+))$. Let the eigenvalues and the right and left eigenvectors of $\tilde A$ be denoted by $\tilde\lambda_k$, $\tilde r_k$, and $\tilde l_k$, respectively, for $k = 1,\dots,n$, normalized so that
$$|\tilde l_k| = 1, \qquad \tilde l_k \cdot \tilde r_j = \begin{cases} 0 & \text{for } j \ne k,\\ 1 & \text{for } j = k. \end{cases} \tag{7.124}$$
Then it is not too difficult to verify (see Exercise 10) that $v(y,t)$ is given by
$$v(y,t) = \sum_k \big( \tilde l_k \cdot \bar\omega_0(y - \tilde\lambda_k t) \big)\, \tilde r_k. \tag{7.125}$$
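Formula (7.125) says that the solution of a constant-coefficient hyperbolic system is obtained by decomposing the data along the left eigenvectors and translating each characteristic component with its own speed. The sketch below is our own illustration (the function name is ours; it assumes, as strict hyperbolicity guarantees, that the matrix has real eigenvalues and a complete set of eigenvectors) and evaluates the formula directly:

```python
import numpy as np

def solve_linear_hyperbolic(A, w0, xs, t):
    """Evaluate u(x, t) = sum_k (l_k . w0(x - lam_k t)) r_k for u_t + A u_x = 0.

    A  : (n, n) matrix with real eigenvalues and a full eigenvector basis,
    w0 : callable mapping a float x to an initial-data vector of length n,
    xs : iterable of points at which to evaluate the solution at time t.
    """
    lam, R = np.linalg.eig(A)   # columns of R are right eigenvectors r_k
    L = np.linalg.inv(R)        # rows of L are left eigenvectors l_k,
                                # normalized so that l_k . r_j = delta_{kj}
    n = A.shape[0]

    def u(x):
        out = np.zeros(n)
        for k in range(n):
            # Each characteristic component is transported with speed lam_k.
            out += (L[k].real @ w0(x - lam[k].real * t)) * R[:, k].real
        return out

    return [u(x) for x in xs]
```

For a diagonal matrix the formula reduces to componentwise translation, which gives a quick check of the implementation.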
We can also construct $v$ by front tracking. Since the eigenvalues are constant and the initial data piecewise constant, front tracking gives the exact solution. Hence $v$ will be piecewise constant with a finite number of jumps located at $x_i(t)$, where
$$\frac{d}{dt}\,x_i(t) = \tilde\lambda_k, \qquad \big( \tilde A - \tilde\lambda_k I \big)\big( v(x_i(t)+,t) - v(x_i(t)-,t) \big) = 0,$$
for all $t$ at which there is no collision of fronts, that is, for all but a finite number of $t$'s. Now we apply the estimate (7.122) to each individual front $x_i$. Writing $v_i^\pm = v(x_i(\tau)\pm,\tau)$, we obtain
$$\begin{aligned}
\int_{x-\rho+\hat\sigma\varepsilon}^{x+\rho-\hat\sigma\varepsilon} |(S_\varepsilon v(\,\cdot\,,\tau))(y) - v(y,\tau+\varepsilon)|\,dy
&\le \varepsilon\,O(1) \sum_i |v_i^+ - v_i^-|\big( |v_i^+ - \omega_0(x+)| + |v_i^- - \omega_0(x+)| \big)\\
&\le \varepsilon\,O(1)\,\mathrm{T.V.}_{\langle x-\rho,x+\rho\rangle}(\bar\omega_0) \sum_i |v_i^+ - v_i^-|\\
&\le \varepsilon\,O(1)\,\big( \mathrm{T.V.}_{\langle x-\rho,x+\rho\rangle}(\omega_0) \big)^2. \tag{7.126}
\end{aligned}$$
u ˜(y, 0) = ω0 (y).
(7.127)
In analogy to formula (7.125) we have that u˜ satisfies ˜ ˜ k t) r˜k . u ˜(y, t) = lk · ω0 (y − λ
(7.128)
k
Regarding the difference between u ˜ and v, we find that x+ρ−σ h |v(y, h) − u ˜(y, h)| dy x−ρ+σ h
=
x+ρ−σ h
x−ρ+σ h
k
˜ ˜ k h) r˜k dy lk · (¯ ω0 − ω0 ) (y − λ
(7.129)
278
7. Well-Posedness of the Cauchy Problem
≤ O (1)
x+ρ x−ρ
|¯ ω0 (y) − ω0 (y)| dy
≤ O (1) .
(7.130)
By the Lipschitz continuity of the semigroup we have that
$$\int_{x-\rho+\hat\sigma h}^{x+\rho-\hat\sigma h} |S_h\bar\omega_0(y) - S_h\omega_0(y)|\,dy \le L \int_{x-\rho}^{x+\rho} |\bar\omega_0(y) - \omega_0(y)|\,dy \le L\epsilon. \tag{7.131}$$
Furthermore, by Lemma 7.15 with $a = x-\rho$, $b = x+\rho$, $T = h$, and $t = 0$, and using (7.126), we obtain
$$\begin{aligned}
\frac{1}{h} \int_{x-\rho+\hat\sigma h}^{x+\rho-\hat\sigma h} |(S_h\bar\omega_0)(y) - v(y,h)|\,dy
&\le \frac{O(1)}{h} \int_0^h \liminf_{\varepsilon\to 0+} \frac{1}{\varepsilon} \int_{x-\rho+\hat\sigma\varepsilon}^{x+\rho-\hat\sigma\varepsilon} |(S_\varepsilon v(\,\cdot\,,\tau))(y) - v(y,\tau+\varepsilon)|\,dy\,d\tau\\
&\le O(1)\,\big( \mathrm{T.V.}_{\langle x-\rho,x+\rho\rangle}(\omega_0) \big)^2. \tag{7.132}
\end{aligned}$$
Consequently, using (7.132), (7.131), and (7.130) we find that
$$\frac{1}{h} \int_{x-\rho+h\hat\sigma}^{x+\rho-h\hat\sigma} |(S_h\omega_0)(y) - \tilde u(y,h)|\,dy \le O(1)\,\big( \mathrm{T.V.}_{\langle x-\rho,x+\rho\rangle}(\omega_0) \big)^2 + \frac{L\epsilon}{h} + \frac{O(1)\,\epsilon}{h}.$$
Since $\epsilon$ is arbitrary, this proves (7.117).

Now we turn to the proof of (7.116). First we define $z$ to be the function
$$z(y,t) = \begin{cases} u_l & \text{for } y < \lambda t,\\ u_r & \text{for } y \ge \lambda t, \end{cases}$$
where $|\lambda| \le \hat\sigma$. Recall that $\bar v(y,t)$ denotes the solution of $\bar v_t + f(\bar v)_y = 0$ with Riemann initial data
$$\bar v(y,0) = \begin{cases} u_l & \text{for } y < 0,\\ u_r & \text{for } y \ge 0. \end{cases}$$
Then trivially we have that
$$\int_{-\hat\sigma t}^{\hat\sigma t} |z(y,t) - \bar v(y,t)|\,dy \le t\,O(1)\,|u_l - u_r|. \tag{7.133}$$
Let $\bar\omega_0$ be as in (7.123), but with the TV bound replaced by
$$\mathrm{T.V.}_{\langle x-\rho,x\rangle \cup \langle x,x+\rho\rangle}(\bar\omega_0) \le \mathrm{T.V.}_{\langle x-\rho,x\rangle \cup \langle x,x+\rho\rangle}(\omega_0).$$
Recall that $\hat u(y,t)$ was defined in (7.115) by
$$\hat u(y,t) = \begin{cases} \omega(y-x,\,t) & \text{for } |y-x| \le \hat\sigma t,\\ \omega_0(y) & \text{otherwise.} \end{cases}$$
Let $J_h$ be the set
$$J_h = \big\{ y \mid h\hat\sigma < |y-x| < \rho - h\hat\sigma \big\},$$
and let $\hat v$ be the function defined by
$$\hat v(y,t) = \begin{cases} \hat u(y,t) & \text{for } |x-y| \le \hat\sigma t,\\ \bar\omega_0(y) & \text{otherwise.} \end{cases}$$
Then we have that
$$\int_{x-\rho+\hat\sigma h}^{x+\rho-\hat\sigma h} |\hat v(y,h) - \hat u(y,h)|\,dy \le \int_{J_h} |\bar\omega_0(y) - \omega_0(y)|\,dy \le \epsilon. \tag{7.134}$$
Note that the bound (7.131) remains valid. We need a replacement for (7.126). In this case we wish to estimate
$$I = \int_{x-\rho+\hat\sigma\varepsilon}^{x+\rho-\hat\sigma\varepsilon} \big| (S_\varepsilon \hat v(\,\cdot\,,\tau))(y) - \hat v(y,\tau+\varepsilon) \big|\,dy.$$
For $|x-y| > \hat\sigma t$, the function $\hat v(y,t)$ is discontinuous across lines located at the jumps $x_i$ of $\bar\omega_0$. In addition, it may be discontinuous across the lines $|x-y| = \hat\sigma t$. Inside the region $|x-y| \le \hat\sigma t$, $\hat v$ is an exact entropy solution, coinciding with the semigroup solution. Using this and (7.133) we find that
$$\begin{aligned}
I &= \Big( \int_{x-\rho+\hat\sigma\varepsilon}^{x-\hat\sigma\tau} + \int_{x+\hat\sigma\tau}^{x+\rho-\hat\sigma\varepsilon} \Big) \big| (S_{\tau+\varepsilon}\bar\omega_0)(y) - \bar\omega_0(y) \big|\,dy + \int_{x-\hat\sigma\tau}^{x+\hat\sigma\tau} \big| (S_\varepsilon \hat u(\,\cdot\,,\tau))(y) - \hat u(y,\tau+\varepsilon) \big|\,dy\\
&\le \varepsilon\,O(1) \sum_{|x_i-x|<\hat\sigma\tau} |\bar\omega_0(x_i+) - \bar\omega_0(x_i-)| + L\Big( \int_{x-2\hat\sigma\tau}^{x} |\bar\omega_0(y) - u_l|\,dy + \int_{x}^{x+2\hat\sigma\tau} |\bar\omega_0(y) - u_r|\,dy \Big)\\
&\le \varepsilon\,O(1)\,\mathrm{T.V.}_{\langle x-\rho,x\rangle \cup \langle x,x+\rho\rangle}(\omega_0). \tag{7.135}
\end{aligned}$$
Now using Lemma 7.15 we find that
$$\begin{aligned}
\frac{1}{h} \int_{x-\rho+\hat\sigma h}^{x+\rho-\hat\sigma h} |(S_h\bar\omega_0)(y) - \hat v(y,h)|\,dy
&\le \frac{O(1)}{h} \int_0^h \liminf_{\varepsilon\to 0+} \frac{1}{\varepsilon} \int_{x-\rho+\hat\sigma\varepsilon}^{x+\rho-\hat\sigma\varepsilon} |(S_\varepsilon \hat v(\,\cdot\,,\tau))(y) - \hat v(y,\tau+\varepsilon)|\,dy\,d\tau\\
&\le O(1)\,\mathrm{T.V.}_{\langle x-\rho,x\rangle \cup \langle x,x+\rho\rangle}(\omega_0). \tag{7.136}
\end{aligned}$$
As before, since $\epsilon$ is arbitrary, (7.131), (7.134), and (7.136) imply (7.116). □

Remark 7.19. Note that if $\omega_0$ is continuous at $x$, then Lemma 7.18 and (7.117) say that the linearized equation gives a good local approximation of the action of the semigroup. If $\omega_0$ has a discontinuity at $x$, then $\mathrm{T.V.}_{\langle x-\rho,x+\rho\rangle}(\omega_0)$ does not become small as $\rho$ tends to zero; hence we must resort to (7.116) in this case. Since the total variation of any function in $D$ is small, (7.117) is a much stronger estimate than (7.116).

Now that the preliminary technicalities are out of the way, we can set about proving that an entropy solution coincides with the semigroup. Let $u$ be an entropy solution. To prove that $u(\,\cdot\,,t) = S_t u_0$, it suffices to show, applying Lemma 7.15, that
$$\liminf_{h\to 0} \frac{1}{h}\, \|S_h u(\,\cdot\,,t) - u(\,\cdot\,,t+h)\|_{L^1([a,b])} = 0 \tag{7.137}$$
for all $a < b$ and for all $t \in [0,T]\setminus N$. Assume therefore that $t \notin N$. Then by the structure theorem, see [148, Theorem 5.9.6], outside a null set of times the function $u$ is either continuous or has a jump discontinuity (as a function of $x$). Therefore, we split the argument into two cases: one where $u$ has a jump discontinuity, and one where $u$ is continuous or has a small jump in the sense that the point does not belong to the set $B_{t,\varepsilon}$.

Consider first a point $(x,t)$ where $u$ has a jump discontinuity.⁴ By condition B there exist $u_l, u_r \in \Omega$ and a speed $\sigma$ such that the limit (7.98) holds when $U$ is
⁴The following argument is valid for any jump discontinuity, but will be applied only to jumps in $B_{t,\varepsilon}$.
defined by (7.97). Using a change of variables, we find that
$$\begin{aligned}
\lim_{h\to 0+} \frac{1}{h} \int_{x-\hat\sigma h}^{x+\hat\sigma h} |u(y,t+h) - U(y,t+h)|\,dy
&= \lim_{h\to 0+} \hat\sigma \Big( \int_{-1-\lambda/\hat\sigma}^{0} |u(x+\lambda h+\hat\sigma h y,\,t+h) - u_l|\,dy\\
&\qquad + \int_{0}^{1-\lambda/\hat\sigma} |u(x+\lambda h+\hat\sigma h y,\,t+h) - u_r|\,dy \Big) = 0,
\end{aligned}$$
by Lemma 7.16. Hence for small positive $h$ we have that
$$\frac{1}{h} \int_{x-\hat\sigma h}^{x+\hat\sigma h} |u(y,t+h) - U(y,t+h)|\,dy \le \tilde\varepsilon \tag{7.138}$$
for some small $\tilde\varepsilon$ to be determined later. By Lemma 7.14 we have $U(y,s) = \hat u(y,s-t)$, where $\hat u$ is defined by (7.115) with $\omega_0(y) = u(y,t)$, and $U$ is defined by (7.97), in some forward neighborhood of $(x,t)$. Then using (7.138) and (7.116) we obtain
$$\begin{aligned}
\frac{1}{h} \int_{x-\hat\sigma h}^{x+\hat\sigma h} |(S_h u(\,\cdot\,,t))(y) - u(y,t+h)|\,dy
&\le \tilde\varepsilon + \frac{1}{h} \int_{x-\hat\sigma h}^{x+\hat\sigma h} |(S_h u(\,\cdot\,,t))(y) - U(y,t+h)|\,dy\\
&\le \tilde\varepsilon + O(1)\,\mathrm{T.V.}_{\langle x-2\hat\sigma h,x\rangle \cup \langle x,x+2\hat\sigma h\rangle}\big(u(\,\cdot\,,t)\big)\\
&\le 2\tilde\varepsilon \tag{7.139}
\end{aligned}$$
for all sufficiently small $h$, since we compute the total variation on a shrinking interval excluding the jump in $u$ at $x$.

Now we consider points $(x,t)$ where $u$ is either continuous or has a small jump discontinuity. Hence we can choose an interval $\langle c,d\rangle$ centered about $x$ such that $B_{t,\varepsilon} \cap \langle c,d\rangle = \emptyset$. Recall that $B_{t,\varepsilon}$, defined in (7.102), is the set of points where $u(\,\cdot\,,t)$ has a jump larger than $\varepsilon$. Let the family of trapezoids $\Gamma_h$ be defined by
$$\Gamma_h = \big\{ (y,s) \mid s \in [t,t+h],\ y \in \langle c+\hat\sigma(s-t),\,d-\hat\sigma(s-t)\rangle \big\}.$$
Now we claim that for $h$ sufficiently small, we have that for all $(y,s) \in \Gamma_h$,
$$|u(y,s) - u(x,t)| \le 2\varepsilon + \mathrm{T.V.}_{\langle c,d\rangle}\big(u(\,\cdot\,,t)\big). \tag{7.140}$$
To prove this, we argue as follows. By Lemma 7.14 discontinuities in $u$ cannot propagate faster than $\hat\sigma$; hence $u(\,\cdot\,,t)$ is continuous in the lower corners of $\Gamma_h$, and the estimate surely holds for $(y,s)$ located there. We must prove (7.140) for $(y,s)$ in a region $[c+h',\,d-h'] \times [t,t+h]$, where $h'$ is given and we are free to choose $h$ small. Now also $[c+h',\,d-h'] \cap B_{t,\varepsilon} = \emptyset$; hence for each $y \in [c+h',\,d-h']$ we can find $\xi_y$ and $h_y$ such that the estimate (7.140) is valid for
$$(y,s) \in \langle y-\xi_y,\,y+\xi_y\rangle \times [t,t+h_y].$$
Now we can cover the compact interval $[c+h',\,d-h']$ with a finite number of intervals of the form $\langle y_i-\xi_{y_i},\,y_i+\xi_{y_i}\rangle$, and choose
$$h = \min_i h_{y_i}.$$
Then we obtain (7.140) for $(y,s)$ in $[c+h',\,d-h'] \times [t,t+h]$.

Now we must compare $u$ and $\tilde u$ near $(x,t)$. The eigenvectors of $\tilde A = df(u(x,t))$ are normalized according to (7.124). Observe that trivially
$$u = \sum_k \big( \tilde l_k \cdot u \big)\, \tilde r_k.$$
Then
$$\int_{c+\hat\sigma h}^{d-\hat\sigma h} |u(y,t+h) - \tilde u(y,t+h)|\,dy \le \sum_k \int_{c+\hat\sigma h}^{d-\hat\sigma h} \big| \tilde l_k \cdot \big( u(y-\tilde\lambda_k h,\,t) - u(y,t+h) \big) \big|\,dy. \tag{7.141}$$
To aid us here we use Lemma 7.17. Let $x_1 < x_2$ lie in the interval $\langle c+\hat\sigma h,\,d-\hat\sigma h\rangle$. We shall estimate
$$E_k = \int_{x_1}^{x_2} \tilde l_k \cdot \big( u(y,t+h) - u(y-\tilde\lambda_k h,\,t) \big)\,dy.$$
If we integrate the conservation law over the parallelogram
$$\big\{ (y,s) \mid y \in [x_1+(s-(t+h))\tilde\lambda_k,\ x_2+(s-(t+h))\tilde\lambda_k],\ s \in [t,t+h] \big\},$$
we find that
$$\begin{aligned}
&\int_{x_1}^{x_2} u(y,t+h)\,dy - \int_{x_1-\tilde\lambda_k h}^{x_2-\tilde\lambda_k h} u(y,t)\,dy\\
&\quad + \int_t^{t+h} (f(u) - \tilde\lambda_k u)\big( x_2+(s-(t+h))\tilde\lambda_k,\,s \big)\,ds\\
&\quad - \int_t^{t+h} (f(u) - \tilde\lambda_k u)\big( x_1+(s-(t+h))\tilde\lambda_k,\,s \big)\,ds = 0.
\end{aligned}$$
Taking the inner product with $\tilde l_k$, we obtain
$$\begin{aligned}
E_k &= \int_t^{t+h} \tilde l_k \cdot (f(u) - \tilde\lambda_k u)\big( x_2+(s-(t+h))\tilde\lambda_k,\,s \big)\,ds\\
&\quad - \int_t^{t+h} \tilde l_k \cdot (f(u) - \tilde\lambda_k u)\big( x_1+(s-(t+h))\tilde\lambda_k,\,s \big)\,ds\\
&= \int_t^{t+h} \tilde l_k \cdot \big( f(u_2) - f(u_1) - \tilde\lambda_k (u_2 - u_1) \big)\,ds, \tag{7.142}
\end{aligned}$$
where we have defined
$$u_1 = u\big( x_1+(s-(t+h))\tilde\lambda_k,\,s \big), \qquad u_2 = u\big( x_2+(s-(t+h))\tilde\lambda_k,\,s \big).$$
Let $A'$ denote the matrix
$$A' = \int_0^1 df\big( w\,u_2 + (1-w)\,u_1 \big)\,dw - \tilde A.$$
Then
$$\begin{aligned}
\tilde l_k \cdot \big( f(u_2) - f(u_1) - \tilde\lambda_k (u_2 - u_1) \big)
&= \tilde l_k \cdot \big( A'(u_2-u_1) + \tilde A(u_2-u_1) - \tilde\lambda_k(u_2-u_1) \big)\\
&= \tilde l_k \cdot A'(u_2-u_1). \tag{7.143}
\end{aligned}$$
≤ O (1) sup |u(y, s) − u(x, t)| (y,s)∈Γh
t+h
×
T.V.x1 −(s−(t+h))λ˜ k ,x2 +(s−(t+h))λ˜ k (u( · , s)) ds.
t
Therefore, x2 (u(y, t + h) − u ˜ (y, t + h)) dy x1 ≤ |Ek | k
≤ O (1) sup
(y,s)∈Γh
t+h
× t
k
|u(y, s) − u(x, t)|
T.V.x1 −(s−(t+h))λ˜ k ,x2 +(s−(t+h))λ˜ k (u( · , s)) ds. (7.144)
Returning to (7.141) and using Lemma 7.17 we find that d−σ h u(y, t + h) − u ˜(y, t + h) dy c+σ h
≤ O (1) sup |u(y, s) − u(x, t)| (y,s)∈Γh
t+h
× t
T.V.[c+σ (s−t),d−σ (s−t)] (u( · , s)) ds.
(7.145)
284
7. Well-Posedness of the Cauchy Problem
Now we use (7.117), (7.145), and (7.140) to obtain
d−σ h
c+σ h
≤
|(Sh u( · , t)) (y) − u(y, t + h)| dy d−σ h
c+σ h
|(Sh u( · , t)) (y) − u ˜(y, t + h)| + |˜ u(y, t + h) − u(y, t + h)| dy
2 ≤ O (1) h T.V. c,d (u( · , t))
+ O (1) 2ε + T.V.[c,d] (u( · , t)) t+h × T.V.[c+σ (s−t),d−σ (s−t)] (u( · , s)) ds.
(7.146)
t
By Lemma 7.13 the set Bt,ε contains only a finite number of points; x1 < x2 < · · · < xN where u( · , t) has a discontinuity larger than ε. We can cover the set [a, b] \ ∪i {xi } by a finite number of open intervals cj , dj , j = 1, . . . , M , such that: (a) xi ∈ / ∪j cj , dj = ∅ for i = 1, . . . , N . (b) T.V. cj ,dj (u ( · , t)) ≤ 2ε for j = 1, . . . , M . (c) Every x ∈ [a, b] is contained in at most two distinct intervals ci , di . We have established that for sufficiently small h, 1 h
xi +σ h xi
−σ h
|(Sh u( · , t)) (y) − u(y, t + h)| ≤
ε , N
by (7.139) choosing ˜ = ε/(2N ). Also,
dj −σ h
cj +σ h
|(Sh u( · , t)) (y) − u(y, t + h)| dy ≤ O (1) ε t
t+h
T.V. cj +σ (s−t),dj −σ (s−t) (u( · , s)) ds
+ O (1) hεT.V. cj ,dj (u( · , t)) for all i, j, and any ε > 0. Combining this we find that 1 h
b
|(Sh u( · , t)) (y) − u(y, t + h)| dy a
≤
1 xi +σ h |(Sh u( · , t)) (y) − u(y, t + h)| dy h xi −σ h i
7.3. Notes
+
j
dj −σ h
cj +σ h
ε ≤ ε + O (1) h ≤ O (1) ε.
285
|(Sh u( · , t)) (y) − u(y, t + h)| dy
t+h
T.V. (u( · , s)) ds + εT.V. (u( · , t)) t
Since ε can be arbitrarily small, (7.137) holds, and we have proved the following theorem: Theorem 7.20. Let fj ∈ C 2 (Rn ), j = 1, . . . , n. Consider the strictly hyperbolic equation ut + f (u)x = 0. Assume that each wave family is either genuinely nonlinear or linearly degenerate. For every u0 ∈ D, defined by (7.94), the initial value problem ut + f (u)x = 0,
u(x, 0) = u0 (x)
has a unique weak entropy solution satisfying conditions A–C, pages 265– 266. Furthermore, this solution can be found by the front-tracking construction.
7.3 Notes The material in Section 7.1 is taken almost entirely from the fundamental result of Bressan, Liu, and Yang [24]; there is really only an O (|e|) difference. Stability of front-tracking approximations to systems of conservation laws was first proved by Bressan and Columbo in [19], in which they used a pseudopolygon technique to “differentiate” the front-tracking approximation with respect to the initial location of the fronts. This approach was since used to prove stability for many special systems; see [34], [4], [1], [2]. The same results as those in Section 7.1 of this chapter have also been obtained by Bressan, Crasta, and Piccoli, using a variant of the pseudopolygon approach [20]. This does lead to many technicalities, and [20] is heavy reading indeed! The material in Section 7.2 is taken from the works of Bressan [17, 18] and coworkers, notably Lewicka [23], Goatin [21], and LeFloch [22]. There are few earlier results on uniqueness of solutions to systems of conservation laws; most notable are those by Bressan [14], where uniqueness and stability are obtained for Temple class system where every characteristic field is linearly degenerate, and in [16] for more general Temple class systems. Continuity in L1 with respect to the initial data was also proved by Hu and LeFloch [75] using a variant of Holmgren’s technique. See also [57].
286
7. Well-Posedness of the Cauchy Problem
Stability for some non-strictly hyperbolic systems of conservation laws (these are really only “quasi-systems”) has been proved by Winther and Tveito [142] and Klingenberg and Risebro [84]. We end this chapter with a suitable quotation: This is really easy: |what you have| ≤ |what you want| + |what you have − what you want| Rinaldo Colombo, private communication
Exercises 7.1 Show that the solution of the Cauchy problem obtained by the fronttracking construction of Chapter 6 is an entropy solution in the sense of conditions A–C on pages 265–266. 7.2 The proof of Theorem 7.8 is detailed only in the genuinely nonlinear case. Do the necessary estimates in the case of a linearly degenerate wave family.
Appendix A Total Variation, Compactness, etc.
I hate T.V. I hate it as much as peanuts. But I can’t stop eating peanuts. Orson Welles, The New York Herald Tribune (1956)
A key concept in the theory of conservation laws is the notion of total variation, T.V. (u), of a function u of one variable. We define T.V. (u) := sup |u (xi ) − u (xi−1 )| . (A.1) i
The supremum in (A.1) is taken over all finite partitions {xi } such that xi−1 < xi . The set of all functions with finite total variation on I we denote by BV (I). Clearly, functions in BV (I) are bounded. We shall omit explicit mention of the interval I if (we think that) this is not important, or if it is clear which interval we are referring to. For any finite partition {xi } we can write |u (xi+1 ) − u (xi )| = max (u (xi+1 ) − u (xi ) , 0) i
i
−
min (u (xi+1 ) − u (xi ) , 0)
i
=: p + n. Then the total variation of u can be written T.V. (u) = P + N := sup p + sup n. H. Holden and N.H. Risebro, Front Tracking for Hyperbolic Conservation Laws, Applied Mathematical Sciences 152, DOI 10.1007/978-3-642-23911-3, © Springer-Verlag Berlin Heidelberg 2011
(A.2) 287
288
Appendix A. Total Variation, Compactness, etc.
We call P the positive, and N the negative variation, of u. If for the moment we consider the finite interval I = [a, x], and partitions with a = x1 < · · · < xn = x, we have that pxa − nxa = u(x) − u(a), where we write pxa and nxa to indicate which interval we are considering. Hence pxa ≤ Nax + u(x) − u(a). Taking the supremum on the left-hand side we obtain Pax − Nax ≤ u(x) − u(a). Similarly, we have that Nax − Pax ≤ u(a) − u(x), and consequently u(x) = Pax − Nax + u(a).
(A.3)
In other words, any function u(x) in BV can be written as a difference between two increasing functions,1 u(x) = u+ (x) − u− (x), u(a) + Pax
(A.4)
Nax .
where u+ (x) = and u− (x) = Let ξj denote the points where u is discontinuous. Then we have that |u(ξξj +) − u(ξξj −)| ≤ T.V. (u) < ∞, j
and hence we see that there can be at most a countable set of points where u(ξ+) = u(ξ−). Equation (A.3) has the very useful consequence that if a function u in BV is also differentiable, then |u (x)| dx = T.V. (u) . (A.5) This equation holds, since d x d x |u (x)| dx = Pa + N dx = P + N = T.V. (u) . dx dx a We can also relate the total variation with the shifted L1 -norm. Define λ(u, ε) = |u(x + ε) − u(x)| dx. (A.6) If λ(u, ε) is a (nonnegative) continuous function in ε with λ(u, 0) = 0, we say that it is a modulus of continuity for u. More generally, we will use the name modulus of continuity for any continuous function λ(u, ε) vanishing at ε = 02 such that λ(u, ε) ≥ u( · + ε) − u p , where · p is the Lp -norm. We 1 This
decomposition is often called the Jordan decomposition of u. is not an exponent, but a footnote! Clearly, λ(u, ε) is a modulus of continuity if and only if λ(u, ε) = o (1) as ε → 0. 2 This
Appendix A. Total Variation, Compactness, etc.
289
will need a convenient characterization of total variation (in one variable), which is described in the following lemma. Lemma A.1. Let u be a function in L1 . If λ(u, ε)/ |ε| is bounded as a function of ε, then u is in BV and T.V. (u) = lim
ε→0
λ(u, ε) . |ε|
(A.7)
Conversely, if u is in BV , then λ(u, ε)/ |ε| is bounded, and thus (A.7) holds. In particular, we shall frequently use λ(u, ε) ≤ |ε| T.V. (u)
(A.8)
if u is in BV . Proof. Assume first that u is a smooth function. Let {xi } be a partition of the interval in question. Then xi xi u(x + ε) − u(x) dx. |u (xi ) − u (xi−1 )| = u (x) dx ≤ lim xi−1 ε→0 xi−1 ε Summing this over i we get T.V. (u) ≤ lim inf ε→0
λ(u, ε) |ε|
for differentiable functions u(x). Let u be an arbitrary bounded function in L1 , and uk be a sequence of smooth functions such that uk (x) → u(x) for almost all x, and uk − u 1 → 0. The triangle inequality shows that |λ (uk , ε) − λ (u, ε)| ≤ 2 uk − u 1 → 0. Let {xi } be a partition of the interval. We can now choose uk such that uk (xi ) = u (xi ) for all i. Then
|u (xi ) − u (xi−1 )| ≤ lim inf ε→0
λ(uk , ε) . |ε|
Therefore, T.V. (u) ≤ lim inf ε→0
λ(u, ε) . |ε|
290
Appendix A. Total Variation, Compactness, etc.
Furthermore, we have |u(x + ε) − u(x)| dx =
(j −1)ε
j
=
=
ε
0
≤
|u(x + ε) − u(x)| dx
ε
0
j
jε
|u(x + jε) − u(x + (j − 1)ε)| dx |u(x + jε) − u(x + (j − 1)ε)| dx
j ε
T.V. (u) 0
= |ε| T.V. (u) . Thus we have proved the inequalities λ(u, ε) λ(u, ε) λ(u, ε) ≤ T.V. (u) ≤ lim inf ≤ lim sup ≤ T.V. (u) , (A.9) ε→0 |ε| |ε| |ε| ε→0 which imply the lemma. Observe that we trivially have ˜ ε) := sup λ(u, σ) ≤ |ε| T.V. (u) . λ(u,
(A.10)
|σ|≤|ε|
For functions in Lp care has to be taken as to which points are used in the supremum, since these functions in general are not defined pointwise. The right choice here is to consider only points xi that are points of approximate continuity 3 of u. Lemma A.1 remains valid. We include a useful characterization of total variation. Theorem A.2. Let u be a function in L1 (I) where I is an interval. Assume u ∈ BV (I). Then T.V. (u) = sup u(x)φx (x) dx. (A.11) φ∈C01 (I), |φ|≤1
I
Conversely, if the right-hand side of (A.11) is finite for an integrable function u, then u ∈ BV (I) and (A.11) holds. 3 A function u is said to be approximately continuous at x if there exists a measurable set A such that limr→0 |[x − r, x + r] ∩ A| / |[x − r, x + r]| = 1 (here |B| denotes the measure of the set B), and u is continuous at x relative to A. (Every Lebesgue point is a point of approximate continuity.) The supremum (A.1) is then called the essential variation of the function. However, in the theory of conservation laws it is customary to use the name total variation in this case, too, and we will follow this custom here.
Appendix A. Total Variation, Compactness, etc.
291
Proof. Assume that u has finite total variation on I. Let ω be a nonnegative function bounded by unity with support in [−1, 1] and unit integral. Define 1 x ωε (x) = ω , ε ε and uε = ωε ∗ u.
(A.12)
Consider points x1 < x2 < · · · < xn in I. Then |uε (xi ) − uε (xi−1 )| i
≤
ε −ε
ωε (x)
|u(xi − x) − u(xi−1 − x)| dx
i
≤ T.V. (u) .
(A.13)
Using (A.5) and (A.13) we obtain |(uε ) (x)| dx = T.V. (uε ) = sup |uε (xi ) − uε (xi−1 )| i
≤ T.V. (u) . Let φ ∈
C01
with |φ| ≤ 1. Then ε u (x)φ (x) dx = − (uε ) (x)φ(x) dx ≤ |(uε ) (x)| dx ≤ T.V. (u) ,
which proves the first part of the theorem. Now let u be such that Du := sup u(x)φx (x) dx < ∞. φ∈C01 |φ|≤1
First we infer that − (uε ) (x)φ(x) dx = uε (x)φ (x) dx = − (ωε ∗ u)(x)φ (x) dx = − u(x)(ωε ∗ φ) (x) dx ≤ Du .
292
Appendix A. Total Variation, Compactness, etc.
Using that (see Exercise A.1)
f 1 = sup
f (x)φ(x) dx,
φ∈C01 , |φ|≤1
we conclude that
|(uε ) (x)| dx ≤ Du .
(A.14)
Next we show that u ∈ L∞ . Choose a sequence uj ∈ BV ∩ C ∞ such that (see, e.g., [47, p. 172]) uj → u a.e., and
uj − u 1 → 0,
u (x) dx → Du , j
For any y, z we have
z
uj (z) = uj (y) +
j → ∞,
j → ∞.
(A.15)
(A.16)
uj (x) dx.
y
Averaging over some bounded interval J ⊆ I we obtain 1 |uj | ≤ |uj (y)| dy + uj (x) dx, |J| J I
(A.17)
which shows that the uj are uniformly bounded, and hence u ∈ L∞ . Thus uε (x) → u(x) as ε → 0 at each point of approximate continuity of u. Using points of approximate continuity x1 < x2 < · · · < xn we conclude that |u(xi ) − u(xi−1 )| = lim |uε (xi ) − uε (xi−1 )| i
ε→0
i
≤ lim sup ε→0
≤ Du .
|(uε ) (x)| dx (A.18)
For a function u of two variables (x, y) the total variation is defined by T.V.x,y (u) = T.V.x (u) (y) dy + T.V.y (u) (x) dx. (A.19) The extension to functions of n variables is obvious. Total variation is used to obtain compactness. The appropriate compactness statement is Kolmogorov’s compactness theorem. We say that a subset M of a complete metric space X is (strongly) compact if any infinite subset
Appendix A. Total Variation, Compactness, etc.
293
of it contains a (strongly) convergent sequence. A set is relatively compact if its closure is compact. A subset of a metric space is called totally bounded if it is contained in a finite union of balls of radius ε for any ε > 0 (we call this finite union an ε-net). Our starting theorem is the following result. Theorem A.3. A subset M of a complete metric space X is relatively compact if and only if it is totally bounded. Proof. Consider first the case where M is relatively compact. Assume that there exists an ε0 for which there is no finite ε0 -net. For any element u1 ∈ M there exists an element u2 ∈ M such that u1 − u2 ≥ ε0 . Since the set {u1 , u2 } is not an ε0 -net, there has to be an u3 ∈ M such that u1 − u3 ≥ ε0 and u2 − u3 ≥ ε0 . Continuing inductively construct a sequence {uj } such that uj − uk ≥ ε0 ,
j = k,
which clearly cannot have a convergent subsequence, which yields a contradiction. Hence we conclude that there has to exist an ε-net for every ε. Assume now that we can find a finite ε-net for M for every ε > 0, and let M1 be an arbitrary infinite subset of M . Construct an ε-net for M1 with - (1) (1) . (j) ε = 12 , say u1 , . . . , uN1 . Now let M1 be the set of those u ∈ M1 such (1)
(1)
(N )
that u−uj ≤ 14 . At least one of M1 , . . . , M1 1 has to be infinite, since M1 is infinite. Denote (one of) this by M2 and the corresponding element u2 . On this set we construct an ε-net with ε = 14 . Continuing inductively we construct a nested sequence of subsets Mk+1 ⊂ Mk for k ∈ N such that - (k) (k) . Mk has an ε-net with ε = 1/2k , say u1 , . . . , uNk . For arbitrary elements u, v of Mk we have u − v ≤ u − uk + uk − v ≤ 1/2k−1 . The sequence {uk } with uk ∈ Mk is convergent, since uk+m − uk ≤
1 , 2k−1
proving that M1 contains a convergent sequence. A result that simplifies our argument is the following. Lemma A.4. Let M be a subset of a metric space X. Assume that for each ε > 0, there is a totally bounded set A such that dist(f, A) < ε for each f ∈ M . Then M is totally bounded. Proof. Let A be such that dist(f, A) < ε for each f ∈ M . Since A is totally bounded, there exist points x1 , . . . , xn in X such that A ⊆ ∪nj=1 Bε (xj ), where Bε (y) = {z ∈ X | z − y ≤ ε}.
294
Appendix A. Total Variation, Compactness, etc.
For any f ∈ M there exists by assumption some a ∈ A such that a − f < ε. Furthermore, a − xj < ε for some j. Thus f − xj < 2ε, which proves M⊆
n 9
B2ε (xj ).
j=1
Hence M is totally bounded. We can state and prove Kolomogorov’s compactness theorem. Theorem A.5 (Kolmogorov’s compactness theorem). Let M be a subset of Lp (Ω), p ∈ [1, ∞ , for some open set Ω ⊆ Rn . Then M is relatively compact if and only if the following three conditions are fulfilled : (i) M is bounded in Lp (Ω), i.e., sup u p < ∞.
u∈M
(ii) We have u( · + ε) − u p ≤ λ(|ε|) for a modulus of continuity λ that is independent of u ∈ M (we let u equal zero outside Ω). (iii)
p
lim
α→∞
{x∈Ω||x|≥α}
|u(x)| dx = 0 uniformly for u ∈ M .
Remark A.6. In the case Ω is bounded, condition (iii) is clearly superfluous. Proof. We start by proving that conditions (i)–(iii) are sufficient to show that M is relatively compact. Let ϕ be a nonnegative and continuous function such that ϕ ≤ 1, ϕ(x) = 1 on |x| ≤ 1, and ϕ(x) = 0 whenever |x| ≥ 2. Write ϕr (x) = ϕ(x/r). From condition (iii) we see that ϕr u − u → 0 as r → ∞. Using Lemma A.4 we see that it suffices to show that Mr = {ϕr u | u ∈ M } is totally bounded. Furthermore, we see that Mr satisfies (i) and (ii). In other words, we need to prove only that (i) and (ii) together with the existence of some R so that u = 0 whenever u ∈ M and |x| ≥ R imply that M is totally bounded. Let ωε be a mollifier, that is, 1 x ∞ ω ∈ C0 , 0 ≤ ω ≤ 1, ω dx = 1, ωε (x) = n ω . ε ε Then
u ∗ ωε − u pp =
|u ∗ ωε (x) − u(x)|p dx
Appendix A. Total Variation, Compactness, etc.
295
p
= u(x − y) − u(x) ωε (y) dy dx B ε p p ≤ |u(x − y) − u(x)| dy ωε q dx Bε p p np/q−p =ε ω q |u(x − y) − u(x)| dx dy Bε p np/q−p ≤ε ω q max λ(|z|) dy =
Bε |z|≤ε p εn+np/q−p ω q |B1 | max λ(|z|), |z|≤ε
where 1/p + 1/q = 1 and Bε = Bε (0) = {z ∈ Rn | z ≤ ε}. Thus u ∗ ωε − u p ≤ εn−1 ω q |B1 |
1/p
max λ(|z|),
|z|≤ε
(A.20)
which together with (ii) proves uniform convergence as ε → 0 for u ∈ M . Using Lemma A.4 we see that it suffices to show that Nε = {u∗ωε | u ∈ M } is totally bounded for any ε > 0. Holder’s ¨ inequality yields |u ∗ ωε (x)| ≤ u p ωε q , so by (i), functions in Nε are uniformly bounded. Another application of Holder’s ¨ inequality implies |u ∗ ωε (x) − u ∗ ωε (y)| = (u(x − z) − u(y − z))ωε (z) dz ≤ u( · + x − y) − u p ωε q , which together with (ii) proves that Nε is equicontinuous. The Arzel`a– Ascoli theorem implies that Nε is relatively compact, and hence totally bounded in C(BR+r ). Since the natural embedding of C(BR+r ) into Lp (Rn ) is bounded, it follows that Nε totally bounded in Lp (Rn ) as well. Thus we have proved that conditions (i)–(iii) imply that M is relatively compact. To prove the converse, we assume that M is relatively compact. Condition (i) is clear. Now let ε > 0. Since M is relatively compact, we can find functions u1 , . . . , um in Lp (Rn ) such that M⊆
m 9
Bε (uj ).
j=1
Furthermore, since C0 (Rn ) is dense in Lp (Rn ), we may as well assume that uj ∈ C0 (Rn ). Clearly, uj ( · + y) − uj p → 0 as y → 0, and so there is
296
Appendix A. Total Variation, Compactness, etc.
some δ > 0 such that uj ( · + y) − uj p ≤ ε whenever |y| < δ. If u ∈ M and |y| < δ, then pick some j such that u − uj p < ε, and obtain u( · + z) − u p ≤ u( · + z) − uj ( · + z) p + uj ( · + z) − uj p + uj − u p = 2 uj − u p + uj ( · + z) − uj p ≤ 3ε, proving (ii). When r is large enough, χBr uj = uj for all j, and then, with the same choice of j as above, we obtain χBr u − u p ≤ χBr (u − uj ) p + u − uj p ≤ 2 u − uj p ≤ 2ε, which proves (iii). Helly’s theorem is a simple corollary of Kolmogorov’s compactness theorem. - . Corollary A.7 (Helly’s theorem). Let hδ be a sequence of functions defined on an interval [a, b], and assume that this sequence satisfies δ
h < M, T.V. hδ < M, and ∞ where M is some constant independent of δ. Then there exists a subsequence hδn that converges almost everywhere to some function h of bounded variation. Proof. It suffices to apply (A.8) (for p = 1) together with the boundedness of the total variation to show that condition (ii) in Kolmogorov’s compactness theorem is satisfied. We remark that one can prove that the convergence in Helly’s theorem is at every point, not only almost everywhere; see Exercise A.2. The application of Kolmogorov’s theorem in the context of conservation laws relies on the following result. Theorem A.8. Let uη : Rn × [0, ∞ → R be a family of functions such that for each positive T , |uη (x, t)| ≤ CT ,
(x, t) ∈ Rn × [0, T ]
for a constant CT independent of η. Assume in addition for all compact B ⊂ Rn and for t ∈ [0, T ] that sup |uη (x + ξ, t) − uη (x, t)| dx ≤ νB,T (|ρ|), |ξ|≤|ρ|
B
Appendix A. Total Variation, Compactness, etc.
297
for a modulus of continuity ν. Furthermore, assume for s and t in [0, T ] that |uη (x, t) − uη (x, s)| dx ≤ ωB,T (|t − s|) as η → 0, B
for some modulus of continuity ωT . Then there exists a sequence ηj → 0 such that for each t ∈ [0, T ] the function {uηj (t)} converges to a function u(t) in L1loc (Rn ). The convergence is in C([0, T ]; L1loc(Rn )). Proof. Kolmogorov’s theorem implies that for each fixed t ∈ [0, T ] and for any sequence ηj → 0 there exists a subsequence (still denoted by ηj ) ηj → 0 such that {uηj (t)} converges to a function u(t) in L1loc (Rn ). Consider now a dense countable subset E of the interval [0, T ]. By possibly taking a further subsequence (which we still denote by {uηj }) we find that uηj (x, t) − u(x, t) dx → 0 as ηj → 0, for t ∈ E. B
˜ ≤ε Now let ε > 0 be given. Then there exists a positive δ such that ωB,T (δ) ˜ for all δ ≤ δ. Fix t ∈ [0, T ]. We can find a tk ∈ E with |tk − t| ≤ δ. Thus |uη˜(x, t) − uη˜(x, tk )| dx ≤ ωB,T (|t − tk |) ≤ ε for η˜ ≤ η B
and
B
uηj (x, tk ) − uηj (x, tk ) dx ≤ ε for ηj1 , ηj2 ≤ η and tk ∈ E. 1 2
The triangle inequality yields uη (x, t) − uη (x, t) dx j1 j2 B uηj (x, tk ) − uηj (x, tk ) dx ≤ uηj1 (x, t) − uηj1 (x, tk ) dx + 1 2 B B uηj (x, tk ) − uηj (x, t) dx + B
2
2
≤ 3ε, proving that for each t ∈ [0, T ] we have that uη (t) → u(t) in L1loc (Rn ). The bounded convergence theorem then shows that sup |uη (x, t) − u(x, t)| dx → 0 as η → 0, t∈[0,T ]
B
thereby proving the theorem.
298
Appendix A. Total Variation, Compactness, etc.
A.1 Notes Extensive discussion about total variation can be found, e.g., in [47] and [148]. The proof of Theorem A.3 is taken from Sobolev [132, pp. 28 ff]. An alternative proof can be found in Yosida [147, p. 13]. The proof of Theorem A.2 is from [47, Theorem 1, p. 217]. Kolmogorov’s compactness theorem, Theorem A.5, was first proved by Kolmogorov in 1931 [85] in the case where Ω is bounded, p > 1, and the translation u(x + ε) of u(x) is replaced by the spherical mean of u over a ball of radius ε in condition (ii). It was extended to the unbounded case by Tamarkin [135] in 1932 and finally extended to the case with p = 1 by Tulajkov [141] in 1933. M. Riesz [121] proved the theorem with translations. See also [49]. For other proofs of Kolmogorov’s theorem, see, e.g., [132, pp. 28 ff], [27, pp. 69 f], [147, pp. 275 f], and [145, pp. 201f].
Exercises A.1 Show that for any f ∈ L1 we have f 1 = sup f (x)φ(x) dx. φ∈C01 |φ|≤1
A.2 Show that in Helly’s theorem, Corollary A.7, one can find a subsequence hδn that converges for all x to some function h of bounded variation.
Appendix B The Method of Vanishing Viscosity
Details are the only things that interest. Oscar Wilde, Lord Arthur Savile’s Crime (1891 )
In this appendix we will give an alternative proof of existence of solutions of scalar multidimensional conservation laws based on the viscous regularization uμt +
m ∂ fj (uμ ) = μΔuμ , ∂x j j=1
where as usual Δu denotes the Laplacian be the following theorem:
/ j
uμ |t=0 = u0 ,
(B.1)
uxj xj . Our starting point will
Theorem B.1. Let u0 ∈ L1 ∩ L∞ ∩ C 2 with bounded derivatives and fj ∈ C 1 with bounded derivative. Then the Cauchy problem (B.1) has a classical solution, denoted by uμ , that satisfies1 uμ ∈ C 2 (Rm × 0, ∞ ) ∩ C(Rm × [0, ∞ ).
(B.2)
Furthermore, the solution satisfies the maximum principle uμ (t) ∞ ≤ u0 ∞ .
(B.3)
1 The existence and regularity result (B.2) is valid for systems of equations in one spatial dimension as well.
299
300
Appendix B. The Method of Vanishing Viscosity
Let v μ be another solution with initial data v0 satisfying the same properties as u0 . Assume in addition that both u0 and v0 have finite total variation and are integrable. Then u( · , t) − v( · , t) L1 (Rm ) ≤ u0 − v0 L1 (Rm ) ,
(B.4)
for all t ≥ 0. Proof. We present the proof in the one-dimensional case only, that is, with m = 1. Let K denote the heat kernel, that is, 1 x2 K(x, t) = √ exp − . 4μt 4μπt
(B.5)
Define functions un recursively as follows: Let u−1 = 0, and define un to be the solution of un |t=0 = u0 ,
unt + f (un−1 )x = μunxx ,
n = 0, 1, 2, . . . .
(B.6)
Then un (t) ∈ C ∞ (R) for t positive. Applying Duhamel’s principle we obtain un (x, t) = K(x − y, t)u0 (y) dy t − K(x − y, t − s)f (un−1 (y, s))y ds dy 0 t ∂ 0 = u (x, t) − K(x − y, t − s)f (un−1 (y, s)) ds dy. (B.7) ∂x 0 Define v n = un − un−1 . Then t
∂ v n+1 (x, t) = − K(x − y, t − s) f (un (y, s)) − f (un−1 (y, s)) ds dy. 0 ∂x Using Lipschitz continuity we obtain t n+1 v (t)∞ ≤ f Lip v n (s) ∞ 0
f Lip ≤ √ πμ
t 0
∂ K(x, t − s) dx ds ∂x
(t − s)−1/2 v n (s) ∞ ds.
Assume that |u0 | ≤ M for some constant M . Then we claim that vn (t) ∞ ≤ M f nLip
tn/2 μn/2 Γ( n+2 ) 2
,
where we have introduced the gamma function defined by ∞ Γ(p) = e−s sp ds. 0
(B.8)
Appendix B. The Method of Vanishing Viscosity
301
We shall use the following properties of the gamma function. Let the beta function B(p, q) be defined as 1 B(p, q) = sp−1 (1 − s)q−1 ds. 0
Then B(p, q) =
Γ(p)Γ(q) . Γ(p + q)
√ After a change of variables the last equality implies that Γ 12 = π. Equation (B.8) is clearly correct for n = 0. Assume it to be correct for n. Then t n+1 1 v (x, t) ≤ M f n+1 (t − s)−1/2 sn/2 ds √ Lip πμ(n+1)/2 Γ( n+2 ) 0 2 1 (n+1)/2 t n+1 = M f Lip √ (n+1)/2 n+2 (1 − s)−1/2 sn/2 ds πμ Γ( 2 ) 0 = M f n+1 Lip Hence we conclude that t ∈ [0, T ], and that
/ n
t(n+1)/2 μ(n+1)/2 Γ( n+3 ) 2 n
.
(B.9)
v converges uniformly on any bounded strip
u = lim un = lim n→∞
n→∞
n
vj
j=0
exists. The convergence is uniform on the strip t ∈ [0, T ]. It remains to show that u is a classical solution of the differential equation. We immediately infer that t ∂ u(x, t) = u0 (x, t) − K(x − y, t − s)f (u(y, s)) ds dy. (B.10) ∂x 0 It remains to show that (B.10) implies that u satisfies the differential equation ut + f (u)x = μuxx ,
u|t=0 = u0 .
(B.11)
Next we want to show that u is differentiable. Define Mn (t) = sup max |unx (x, s)| . x∈R 0≤s≤t
Clearly, |unx (x, t)|
1 ≤ f Lip √ πμ
0
t
(t − s)−1/2 Mn−1 (s) ds + M0 (t).
Choose B such that M 0 ≤ B/2. Then Mn (t) ≤ B exp(Ct/μ)
(B.12)
302
Appendix B. The Method of Vanishing Viscosity
if C is chosen such that 1 f Lip √ πμ
0
∞
s−1/2 e−Cs/μ ds ≤
1 . 2
Inequality (B.12) follows by induction: It clearly holds for n = 0. Assume that it holds for n. Then t n+1 u ≤ f Lip √1 (s, x) (t − s)−1/2 Mn (s) ds + B/2 x πμ 0 t 1 1 Ct/μ −1/2 −Cs/μ ≤ Be f Lip √ s e ds + πμ 0 2 ≤ BeCt/μ . Define Nn (t) = sup max |unxx (x, s)| . x∈R 0≤s≤t
˜ ≥ max{2N 0 , B 2 + 1} and C˜ ≥ C such that Choose B ∞
1 ˜ ˜ f + f √1 2B s−1/2 e−2Cs/μ ≤ . ∞ ∞ πμ 0 2 Then we show inductively that ˜ ˜ 2Ct/μ Nn (t) ≤ Be .
The estimate is valid for n = 0. Assume that it holds for n. Then n+1 0 u xx (x, t) ≤ uxx (x, t) t
∂ 2 + f ∞ Mn (s) + f ∞ Nn (s) K(y, t − s) dy ds ∂x 0 ≤ N0 t
1
+ f ∞ + f ∞ √ Mn (s)2 + Nn (s) (t − s)−1/2 ds πμ 0 ≤ N0 t
1
2 2Cs/μ ˜ + f ∞ + f ∞ √ B e + eCs/μ (t − s)−1/2 ds πμ 0 t
˜ ˜ ˜ 2Ct/μ ˜ f + f √1 ≤ Be 1 + 2B e−2Cs/μ ds ∞ ∞ πμ 0 ˜ ˜ 2Ct/μ ≤ Be .
We have now established that un → u uniformly and that unx and unxx both are uniformly bounded (in (x, t) and n). Lemma B.2 (proved after this theorem) implies that indeed u is differentiable and that ux equals the uniform limit of unx . Performing an integration by parts in (B.10) we find
Appendix B. The Method of Vanishing Viscosity
that the limit u satisfies u(x, t) = u0 (x, t) −
0
303
t
K(x − y, t − s)f (u(y, s))y ds dy.
Applying Lemma B.3 we conclude that u satisfies ut + f (u)x = μuxx ,
u|t=0 = u0 ,
with the required regularity.2 The proof of (B.3) is nothing but the maximum principle. Consider the auxiliary function U (x, t) = u(x, t) − η(t + (ηx)2 /2). Since U → −∞ as |x| → ∞, U obtains a maximum on R × [0, T ], say at the point (x0 , t0 ). We know that U (x0 , t0 ) = u(x0 , t0 ) − η(t0 + (ηx0 )2 /2) ≥ u0 (0). Hence η 3 x20 ≤ 2u(x0 , t0 ) − 2u0 (0) − 2ηt0 ≤ O (1)
(B.13)
independently of η, since u is bounded on R×[0, T ] by construction. Assume that 0 < t0 ≤ T . At the maximum point we have ux (x0 , t0 ) = η 3 x0 ,
ut (x0 , t0 ) ≥ η,
and uxx (x0 , t0 ) ≤ η 3 ,
which implies that ut (x0 , t0 ) + f (u(x0 , t0 ))ux (x0 , t0 ) − μuxx (x0 , t0 ) ≥ η − O (1) η3/2 − μη3 >0 if η is sufficiently small. We have used that f (u) is bounded and (B.13). This contradicts the assumption that the maximum was attained for t positive. Thus u(x, t) − η(t + (ηx)2 /2) ≤ sup U (x, 0) x
= sup u0 (x) − η 3 x2 /2 x
≤ sup u0 (x), x
which implies that u ≤ sup u0 . By considering η negative we find that u ≥ inf u0 , from which we conclude that u ∞ ≤ u0 ∞ . Lemma B.6 implies that any solution u satisfies the property needed for our uniqueness estimate, namely that if u0 is in L1 , then u( · , t) is in L1 . This is so, since we have that u( · , t) 1 − u0 1 ≤ u( · , t) − u0 1 ≤ Ct. 2 The
argument up this equality is valid for one-dimensional systems as well.
304
Appendix B. The Method of Vanishing Viscosity
Furthermore, since u is of bounded variation (which is the case if u0 is of bounded variation), ux is in L1 , and thus lim|x|→∞ ux (x, t) = 0. Hence, if u0 is in L1 ∩ BV , then we have that d u(x, t) dx = − (f (u)x + μuxx ) dx = 0. dt Hence
u(x, t) dx =
u0 (x) dx.
(B.14)
By the Crandall–Tartar lemma, Lemma 2.12, to prove (B.4) it suffices to show that if u0 (x) ≤ v0 (x), then u(x, t) ≤ v(x, t). To this end we first add a constant term to the viscous equation. More precisely, let uδ denote the solution of (for simplicity of notation we we let μ = 1 in this part of the argument) uδt + f (uδ )x = uδxx − δ,
uδ |t=0 = u0 .
In integral form we may write (cf. (B.10)) δ u (x, t) = K(x − y, t)u0 (y) dy t ∂ − K(x − y, t − s)f (uδ (y, s)) ds dy − δt. ∂x 0 Furthermore, δ u (x, t) − u(x, t) t ∂ δ ≤ ∂x K(x − y, t − s) f (u (y, s)) − f (u(y, s)) ds dy + |δ| t 0 t ∂ δ ≤ f Lip ∂x K(x − y, t − s) u (y, s) − u(y, s) ds dy + |δ| t 0 t δ u (s) − u(s) ds ≤ f Lip + |δ| t ∞ π(t − s) 0 t δ u (s) − u(s) dμ(s) + |δ| t ≤ ∞ 0
with the new integrable measure dμ(s) = f Lip / π (t − s). Gronwall’s inequality yields that ! √ " t δ t f Lip u (t) − u(t) ≤ t |δ| exp dμ(s) = t |δ| exp 2 √ , ∞ π 0 which implies that uδ → u in L∞ as δ → 0. Thus it suffices to prove the monotonicity property for uδ and vδ , where δ vtδ + f (vδ )x = vxx + δ,
vδ |t=0 = v0 .
(B.15)
Appendix B. The Method of Vanishing Viscosity
305
Let u0 ≤ v0 . We want to prove that uδ ≤ v δ . Assume to the contrary that uδ (x, t) > vδ (x, t) for some (x, t), and define tˆ = inf{t | uδ (x, t) > v δ (x, t) for some x}. Pick x ˆ such that uδ (ˆ x, tˆ) = v δ (ˆ x, tˆ). At this point we have uδx (ˆ x, tˆ) = vxδ (ˆ x, tˆ),
δ uδxx (ˆ x, tˆ) ≤ vxx (ˆ x, tˆ),
and uδt (ˆ x, tˆ) ≥ vtδ (ˆ x, tˆ).
However, this implies the contradiction δ −δ = uδt + f (uδ )uδx − uδxx ≥ vtδ + f (v δ )vxδ − vxx ≥ δ at the point (ˆ x, tˆ)
whenever δ is positive. Hence u(x, t) ≤ v(x, t) and the solution operator is monotone, and (B.4) holds. In the above proof we needed the following two results. Lemma B.2. Let φn ∈ C 2 (I) on the interval I, and assume that φn → φ uniformly. If φn ∞ and φn ∞ are bounded, then φ is differentiable, and φn → φ uniformly as n → ∞. Proof. The family {φn } is clearly equicontinuous and bounded. The Arzel``a–Ascoli theorem implies that a subsequence {φnk } converges uniformly to some function ψ. Then x x φnk = φnk dx → ψ dx, from which we conclude that φ = ψ. We will show that the sequence {φn } itself converges to ψ. Assume otherwise. Then we have a subsequence {φnj } that does not converge to ψ. The Arzela–Ascoli theorem implies the existence of a further subsequence {φnj } that converges to some element ˜ which is different from ψ. But then we have ψ, x x ψ dx = lim φnk = lim φnj = ψ˜ dx, k→∞
j →∞
˜ which is a contradiction. which shows that ψ = ψ, Lemma B.3. Let F (x, t) be a continuous function such that |F (x, t) − F (y, t)| ≤ M |x − y| uniformly in x, y, t. Define t u(x, t) = K(x − y, t)u0 (y) dy + K(x − y, t − s)F (y, s) ds dy. 0
Then u is in C 2 (Rm × 0, ∞ ) ∩ C(Rm × [0, ∞ ) and satisfies ut = uxx + F (x, t),
u|t=0 = u0 .
306
Appendix B. The Method of Vanishing Viscosity
Proof. To simplify the presentation we assume that u0 = 0. First we observe that t t
u(x, t) = F (x, s) ds + K(x − y, t − s) F (y, s) − F (x, s) ds dy. 0
0
The natural candidate for the time derivative of u is t
∂ ut (x, t) = F (x, t)+ K(x−y, t−s) F (y, s)−F (x, s) ds dy. (B.16) 0 ∂t To show that this is well-defined we first observe that ∂ K(x − y, t − s) ≤ O (1) K(x − y, 2(t − s)). ∂t t−s Thus t ∂ K(x − y, t − s) |F (y, s) − F (x, s)| ds dy ∂t 0 t 1 ≤ M O (1) K(x − y, 2(t − s)) |y − x| dy ds t − s 0 t 1 √ ≤ M O (1) ds ≤ O (1) . t−s 0 Consider now 1 u(x, t + Δt) − u(x, t) − u (x, t) t Δt t+Δt 1 ≤ F (x, s) ds − F (x, t) Δt t t+Δt 1 + K(x − y, t + Δt − s) |F (y, s) − F (x, s)| ds dy Δt t t 1 + Δt K(x − y, t + Δt − s) − K(x − y, t − s) 0 ∂ − K(x − y, t − s) |F (y, s) − F (x, s)| dy ds ∂t t+Δt 1 ≤ F (x, s) ds − F (x, t) Δt t t+Δt 1 +M K(y, t + Δt − s) |y| dy ds Δt t t ∂ ∂ +M ∂t K(y, t + θΔt − s) − ∂t K(y, t − s) |y| ds dy, 0
Appendix B. The Method of Vanishing Viscosity
307
for some θ ∈ [0, 1]. We easily see that the first two terms vanish in the limit when Δt → 0. The last term can be estimated as follows (where δ > 0): t ∂ K(y, t + θΔt − s) − ∂ K(y, t − s) |y| dy ds ∂t ∂t 0 t−δ ∂ ∂ ≤ ∂t K(y, t + θΔt − s) − ∂t K(y, t − s) |y| dy ds 0 t ∂ K(y, t + θΔt − s) + ∂ K(y, t − s) |y| dy ds + ∂t ∂t t−δ t−δ ∂ K(y, t + θΔt − s) − ∂ K(y, t − s) |y| dy ds ≤ ∂t ∂t 0 t 1 + O (1) K(y, 2(t + θΔt − s)) t + θΔt − s t−δ 1 + K(y, 2(t − s)) |y| dy ds. t−s Choosing δ sufficiently small in the second integral we can make that term less then a prescribed . For this fixed δ we choose Δt sufficiently small to make that integral less than . We conclude that indeed (B.16) holds. By using estimates ∂ K(x, t) ≤ O√(1) K(x, 2t), ∂x t 2 ∂ O (1) ∂x2 K(x, t) ≤ t K(x, 2t), we conclude that the spatial derivatives are given by t ∂ ux (x, t) = K(x − y, t − s)F (y, s) ds dy, 0 ∂x t 2 ∂ uxx (x, t) = K(x − y, t − s)F (y, s) ds dy, 2 0 ∂x
(B.17)
from which we conclude that ut (x, t) − uxx (x, t) t ∂ ∂2 = F (x, t) + K(x − y, t − s) − K(x − y, t − s) F (y, s) ds dy ∂x2 0 ∂t = F (x, t). (B.18)
Remark B.4. The lemma is obvious if F is sufficiently differentiable; see, e.g., [108, Theorem 3, p. 144]. Next, we continue by showing directly that as μ → 0, the sequence {uμ } converges to the unique entropy solution of the conservation law (B.30). We
308
Appendix B. The Method of Vanishing Viscosity
remark that this convergence was already established in Chapter 3 when we considered error estimates. In order to establish our estimates we shall need the following technical result. Lemma B.5. Let v : Rm → R such that v ∈ C 1 (Rm ) and |∇v| ∈ L1 (Rm ). Then |∇v| dx → 0 as η → 0. |v |≤η
Proof. By the inverse function theorem, the set x v(x) = 0, ∇v(x) = 0 is a smooth (m − 1)-dimensional manifold of Rm . Thus |∇v| dx = |∇v| dx. |v |≤η
0<|v|≤η
The integrand (the norm of the gradient times the characteristic function of the region where |v| is nonzero and less than η) tends pointwise to zero as η → 0. The lemma follows using Lebesgue’s dominated convergence theorem. The key estimates are contained in the next lemma. Lemma B.6. Assume that u0 ∈ C 2 (Rm ) with bounded derivatives and finite total variation. Let uμ denote the solution of equation (B.1). Then the following estimates hold: T.V. (uμ (t)) ≤ T.V. (uμ (0)) , u (t) − u (s) 1 ≤ C |t − s| . μ
0
ε
μ
i
(B.19) (B.20)
ε
Proof. We set w = ∂u /∂t and w = ∂u /∂xi for i = 1, . . . , m. Then we find that m ∂wi μ i + fj (u )w x = μΔwi (B.21) j ∂t j=1 for i = 0, 1, . . . , m. Define the following sign function: ⎧ ⎪ ⎨1 signη (x) = x/η ⎪ ⎩ −1
continuous approximation to the for x ≥ η, for |x| < η, for x ≤ −η.
Multiply (B.21) by signη (wi ) and integrate over Rm × [0, T ] for some T positive. This yields T T m
μ i ∂wi signη (wi ) dt dx + fj (u )w x signη (wi ) dt dx j ∂t m Rm 0 R 0 j=1
Appendix B. The Method of Vanishing Viscosity
T
μΔwi signη (wi ) dt dx.
= Rm
309
0
(B.22)
The first term in (B.22) can be written T T
i ∂wi signη (wi ) dt dx = w signη (wi ) t dt dx ∂t Rm 0 Rm 0 T − wi signη (wi )wti dt dx. Rm
0
Here we have that T 1 i i i w sign (w )w dt dx = wi wti dt dx η t η m i R 0 |w |≤η t≤T i w dt dx → 0, ≤ t |w i |≤η
t≤T
using Lemma B.5, which implies T T ∂wi ∂ i i (w ) dt dx → w dt dx signη ∂t Rm 0 Rm 0 ∂t = wi (T )1 − wi (0)1 ,
(B.23)
as η → 0. The second term in (B.22) reads T m
μ i I:= fj (u )w x signη (wi ) dt dx m
j=1 R m
=−
j=1
=−
1 η
j
0
Rm
w i |≤η t≤T
0
T
fj (uμ )wi signη (wi )
∂wi dt dx ∂xj
wi f (uμ ) · ∇wi dt dx,
where f = (f1 , . . . , fm ). This can be estimated as follows: ∇wi dt dx → 0, |I| ≤ sup |f (u)| u
|w i |≤η
(B.24)
t≤T
as η → 0. Here the supremum is over |u| ≤ uμ (0) ∞ . Finally, T T ∇wi 2 signη (wi ) dt dx ≤ 0. μ Δwi signη (wi ) dt dx = −μ Rm
Rm
0
0
Using (B.23), (B.24), and (B.25) in (B.22) we obtain, when $\eta \to 0$,

$$\|w^i(T)\|_1 - \|w^i(0)\|_1 \le 0. \tag{B.26}$$

For $i = 0$ this implies

$$\|u^\mu(t) - u^\mu(s)\|_1 = \int_{\mathbb{R}^m}\Bigl|\int_s^t \frac{\partial u^\mu}{\partial t}\,d\tilde t\Bigr|\,dx \le \int_{\mathbb{R}^m}\int_s^t \bigl|w^0(\tilde t\,)\bigr|\,d\tilde t\,dx = \int_s^t \|w^0(\tilde t\,)\|_1\,d\tilde t \le |t-s|\,\|w^0(0)\|_1.$$
For $i \ge 1$ we use (B.26) to prove (B.19). Recalling the results from Appendix A, we define

$$\lambda_i(u,\varepsilon) = \int_{\mathbb{R}^m} |u(x+\varepsilon e_i) - u(x)|\,dx \quad\text{and}\quad \lambda(u,\varepsilon) = \sum_{i=1}^m \lambda_i(u,\varepsilon).$$

Then the inequalities (A.10) hold. We have that

$$\lambda_i(u^\mu(\,\cdot\,,t),\varepsilon) = \int_{\mathbb{R}^m} \bigl|u^\mu(x+\varepsilon e_i,t) - u^\mu(x,t)\bigr|\,dx = \int_{\mathbb{R}^m} \Bigl|\int_0^\varepsilon w^i(x+\alpha e_i,t)\,d\alpha\Bigr|\,dx \le \int_0^{|\varepsilon|} \|w^i(\,\cdot\,,t)\|_1\,d\alpha = |\varepsilon|\,\|w^i(\,\cdot\,,t)\|_1 \le |\varepsilon|\,\|w^i(\,\cdot\,,0)\|_1 = |\varepsilon| \int_{\mathbb{R}^{m-1}} \mathrm{T.V.}_{x_i}(u_0)\,dx_1\cdots dx_{i-1}\,dx_{i+1}\cdots dx_m.$$

Thus we find that

$$\mathrm{T.V.}\,(u^\mu(\,\cdot\,,t)) = \liminf_{\varepsilon\to 0} \frac{\lambda(u^\mu(\,\cdot\,,t),\varepsilon)}{|\varepsilon|} \le \mathrm{T.V.}\,(u_0),$$
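The two estimates of Lemma B.6 are easy to observe numerically. The sketch below (our own illustration — the text contains no code; the scheme, grid, and data are arbitrary choices) integrates the viscous Burgers equation $u_t + (u^2/2)_x = \mu u_{xx}$ with a monotone explicit upwind-plus-diffusion scheme on a periodic grid and records the discrete total variation, which should never increase.

```python
import numpy as np

# Viscous Burgers u_t + (u^2/2)_x = mu * u_xx on a periodic grid.
# Data kept positive so f'(u) = u > 0 and plain upwinding applies.
N, mu = 200, 0.05
x = np.linspace(0.0, 2.0, N, endpoint=False)
dx = x[1] - x[0]
u = 0.7 + 0.5 * np.sin(np.pi * x)          # smooth BV data with values in [0.2, 1.2]

def tv(w):                                  # periodic total variation
    return np.abs(np.diff(np.append(w, w[0]))).sum()

# time step chosen so the scheme is monotone (hence TVD)
dt = 0.9 / (u.max() / dx + 2.0 * mu / dx**2)

tv_hist = [tv(u)]
for _ in range(150):
    f = 0.5 * u**2
    u = (u - dt / dx * (f - np.roll(f, 1))                       # upwind flux
           + mu * dt / dx**2 * (np.roll(u, -1) - 2*u + np.roll(u, 1)))  # diffusion
    tv_hist.append(tv(u))
tv_hist = np.array(tv_hist)
```

Because the scheme is monotone under this time-step restriction, the total variation is nonincreasing step by step, mirroring (B.19).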
which proves (B.19). $\square$

From the estimates in Lemma B.6 we may conclude, using Helly's theorem, Corollary A.7, and Theorem A.8, that there exists a (sub)sequence of $\{u^\mu\}$ that converges in $C([0,T];L^1_{\mathrm{loc}}(\mathbb{R}^m))$ to a function that we denote by $u$. It remains to show that $u$ is an entropy solution of the conservation law. Let $k$ be in $\mathbb{R}$. Then

$$(u^\mu - k)_t + \nabla\cdot\bigl(f(u^\mu) - f(k)\bigr) = \mu\,\Delta(u^\mu - k). \tag{B.27}$$
Multiply (B.27) by $\operatorname{sign}_\eta(u^\mu-k)$ times a nonnegative test function $\phi$ and integrate over $[0,T]\times\mathbb{R}^m$. We find, when we write $U = u^\mu - k$, that

$$\begin{aligned}
0 &= \iint \Bigl( U_t\operatorname{sign}_\eta(U)\phi + \nabla\cdot\bigl(f(u^\mu)-f(k)\bigr)\operatorname{sign}_\eta(U)\phi - \mu\operatorname{sign}_\eta(U)\,\Delta U\,\phi \Bigr)\,dx\,dt \\
&= \iint \Bigl( \bigl(U\operatorname{sign}_\eta(U)\bigr)_t\phi - \bigl(f(u^\mu)-f(k)\bigr)\cdot\bigl(\operatorname{sign}_\eta(U)\nabla\phi + \phi\operatorname{sign}_\eta'(U)\nabla U\bigr) \Bigr)\,dx\,dt \\
&\qquad + \mu\iint \nabla U\cdot\nabla\bigl(\operatorname{sign}_\eta(U)\phi\bigr)\,dx\,dt - \iint U\operatorname{sign}_\eta'(U)\,U_t\,\phi\,dx\,dt \\
&= -\iint \Bigl( U\operatorname{sign}_\eta(U)\phi_t + \operatorname{sign}_\eta(U)\bigl(f(u^\mu)-f(k)\bigr)\cdot\nabla\phi \Bigr)\,dx\,dt \\
&\qquad + \int \bigl(U\operatorname{sign}_\eta(U)\phi\bigr)\big|_{t=T}\,dx - \int \bigl(U\operatorname{sign}_\eta(U)\phi\bigr)\big|_{t=0}\,dx \\
&\qquad - \iint \phi\operatorname{sign}_\eta'(U)\bigl(f(u^\mu)-f(k)\bigr)\cdot\nabla U\,dx\,dt - \iint U\phi\operatorname{sign}_\eta'(U)\,U_t\,dx\,dt \\
&\qquad + \mu\iint \operatorname{sign}_\eta(U)\,\nabla U\cdot\nabla\phi\,dx\,dt + \mu\iint |\nabla U|^2\operatorname{sign}_\eta'(U)\,\phi\,dx\,dt.
\end{aligned}$$

The third and the fourth integrals tend to zero as $\eta \to 0$ (since $f$ is Lipschitz and $x\operatorname{sign}_\eta'(x)$ tends weakly to zero), and the last term is nonnegative. Hence

$$\iint \Bigl( |u^\mu - k|\phi_t + \operatorname{sign}(u^\mu-k)\bigl(f(u^\mu)-f(k)\bigr)\cdot\nabla\phi \Bigr)\,dx\,dt + \int |u^\mu(0)-k|\,\phi\big|_{t=0}\,dx - \int |u^\mu(T)-k|\,\phi\big|_{t=T}\,dx \ge \mu\iint \operatorname{sign}(U)\,\nabla U\cdot\nabla\phi\,dx\,dt. \tag{B.28}$$

Taking $\mu \to 0$ we see that the right-hand side tends to zero, and we conclude that

$$\iint \Bigl( |u-k|\phi_t + \operatorname{sign}(u-k)\bigl(f(u)-f(k)\bigr)\cdot\nabla\phi \Bigr)\,dx\,dt + \int |u_0-k|\,\phi\big|_{t=0}\,dx - \int |u(T)-k|\,\phi\big|_{t=T}\,dx \ge 0, \tag{B.29}$$

which is the Kružkov entropy condition. We have proved the following result.

Theorem B.7. Let $u_0 \in C^2(\mathbb{R}^m)\cap L^\infty(\mathbb{R}^m)$ with bounded derivatives and finite total variation, and let $f_j \in C^1(\mathbb{R})$ with bounded derivative. Let $u^\mu$ be the unique solution of (B.1). Then there exists a subsequence of $\{u^\mu\}$ that converges in $C([0,T];L^1_{\mathrm{loc}}(\mathbb{R}^m))$ to a function $u$ that satisfies the Kružkov entropy condition (B.29), and hence is the unique solution of

$$u_t + \sum_{j=1}^m \frac{\partial}{\partial x_j}f_j(u) = 0, \qquad u\big|_{t=0} = u_0. \tag{B.30}$$
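The convergence rate in Theorem B.7 can be made concrete in the simplest example. For Burgers' equation with Riemann data $u_l = 1$, $u_r = 0$, the viscous solution is the classical traveling-wave profile $u^\mu(x,t) = \frac12\bigl(1 - \tanh((x - t/2)/(4\mu))\bigr)$, while the entropy solution is a shock moving at speed $1/2$. The sketch below (our own illustration, not from the text) measures the $L^1$ distance between the two; analytically it equals $4\mu\ln 2$, so halving $\mu$ halves the error.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 8001)
dx = x[1] - x[0]
t = 1.0

def u_visc(mu):
    # viscous shock profile for Burgers connecting u_l = 1 to u_r = 0
    return 0.5 * (1.0 - np.tanh((x - 0.5 * t) / (4.0 * mu)))

u_shock = np.where(x < 0.5 * t, 1.0, 0.0)   # entropy solution: shock of speed 1/2

mus = [0.1, 0.05, 0.025]
errs = [dx * np.abs(u_visc(m) - u_shock).sum() for m in mus]
# theory: L1 distance = 4 * mu * ln(2), i.e. O(mu) as mu -> 0
```

The errors decay linearly in $\mu$, illustrating the vanishing-viscosity limit in (B.29)–(B.30).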
B.1 Notes

Our proof of Theorem B.1 is taken in part from [74], where a similar result is proved for an equation of the form

$$u_t + \sum_{j=1}^m \psi_j(x,t,u)\,u_{x_j} = \mu\,\Delta u.$$
We are grateful to H. Hanche-Olsen (private communication) for discussions on the proof of this theorem. We have also used [30]. Other proofs can be found; see, e.g., [107]. The conditions of Theorem B.1 can be weakened considerably. Alternative proofs of Theorem B.1 can be obtained using the dimensional splitting construction in Section 4.4. Lemma B.6 is familiar; see, e.g., [107]. Our presentation of Lemma B.6 and Theorem B.7 follows in part Bardos et al. [8]. Recently, Bianchini and Bressan [11, 12, 13] have published results concerning the vanishing viscosity method for general systems. More precisely, consider the solution $u^\varepsilon$ of the system

$$u^\varepsilon_t + A(u^\varepsilon)\,u^\varepsilon_x = \varepsilon\,u^\varepsilon_{xx}, \qquad u^\varepsilon\big|_{t=0} = u_0.$$

They prove that $u^\varepsilon$ converges to $u$, the solution of

$$u_t + A(u)\,u_x = 0, \qquad u\big|_{t=0} = u_0,$$
as ε → 0. Their assumptions are the following: The matrices A(u) are smooth and strictly hyperbolic in the neighborhood of a compact set K, the initial data u0 has sufficiently small total variation, and limx→−∞ u0 (x) ∈ K. The proof uses an ingenious decomposition of the function uεx in terms of gradients of viscous traveling waves selected by a center manifold technique.
Exercises

B.1 Consider the system of parabolic equations

$$u_t + f(u)_x = \mu\,u_{xx}, \qquad u\big|_{t=0} = u_0, \tag{B.31}$$

with $u_0(x) \in \mathbb{R}^n$.

a. Show that there exists a solution $u^\mu$ of (B.31) that satisfies the regularity condition (B.2).

b. Fix temporarily $\mu = 1$, and consider the equation

$$u_t + f(u)_x = u_{xx} - \delta, \qquad u\big|_{t=0} = u_0, \tag{B.32}$$

with solution $u^\delta$. Show that $\|u^\delta(t) - u(t)\|_1 \to 0$, where $u$ solves (B.31) (with $\mu = 1$).
c. Assume that the flux function $f$ satisfies

$$f(u_1,\dots,u_{j-1},u^*,u_{j+1},\dots,u_n) = \mathrm{const}, \qquad u_i \in \mathbb{R},\ i \ne j,$$

for some $j$ and some $u^* \in \mathbb{R}$. Assume that the $j$th component $u_{0,j}$ of $u_0$ satisfies $u_{0,j} \le u^*$. Show that the $j$th component $u_j$ of the solution $u$ of (B.31) satisfies $u_j(x,t) \le u^*$.

d. Assume that there are constants $u_* < u^*$ and $j$ such that

$$f(u_1,\dots,u_{j-1},u_*,u_{j+1},\dots,u_n) = \mathrm{const}, \qquad u_i \in \mathbb{R},\ i \ne j,$$

$$f(u_1,\dots,u_{j-1},u^*,u_{j+1},\dots,u_n) = \mathrm{const}, \qquad u_i \in \mathbb{R},\ i \ne j,$$

and that $u_* \le u_{0,j} \le u^*$. Show that $u_* \le u_j(x,t) \le u^*$, and hence that the region $\{u \in \mathbb{R}^n \mid u_* \le u_j \le u^*\}$ is invariant for the solution of (B.31). Systems with this property appear, e.g., in multiphase flow in porous media and chemical chromatography. For more on invariant regions, see [71] and [65].
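The invariance in part d can be watched numerically in the simplest case $n = 1$. For $f(u) = u(1-u)$ the levels $u_* = 0$ and $u^* = 1$ give constant flux, and $[0,1]$ should be invariant for $u_t + f(u)_x = u_{xx}$. The sketch below (our own illustration; scheme and parameters are arbitrary choices, not from the text) uses an explicit central-difference scheme that is monotone under the stated step restrictions, so the discrete solution never leaves $[0,1]$.

```python
import numpy as np

# Scalar (n = 1) instance of Exercise B.1 d:
# u_t + (u(1-u))_x = u_xx with data in [0, 1], periodic grid.
N = 100
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
u = 0.5 + 0.5 * np.sin(2 * np.pi * x)   # initial data with values in [0, 1]
dt = 0.4 * dx**2                         # explicit diffusion stability (dt <= dx^2/2)

def f(w):
    return w * (1.0 - w)

in_range = True
for _ in range(300):
    u = (u - dt / (2 * dx) * (f(np.roll(u, -1)) - f(np.roll(u, 1)))  # central flux
           + dt / dx**2 * (np.roll(u, -1) - 2*u + np.roll(u, 1)))    # diffusion
    in_range = in_range and (u.min() >= -1e-10) and (u.max() <= 1 + 1e-10)
```

The scheme is also conservative, so the spatial mean (here exactly $1/2$) is preserved while the solution stays inside the invariant region.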
Appendix C Answers and Hints
Health Warning We do not claim any liability for these hints and answers. Use them at your own risk. The Authors
The only way to get rid of a temptation is to yield to it. Resist it, and your soul grows sick with longing for the things it had forbidden itself. Oscar Wilde, The Picture of Dorian Gray (1891)
Chapter 1, pages 19–21.

1.1 The characteristics are given by

$$x = 2\arctan\bigl(x_0 e^\xi\bigr), \quad t = \xi + t_0, \quad z = \xi + z_0,$$

$$x = 2\arctan\Bigl(\frac{x_0 e^\xi - 1}{x_0 e^\xi + 1}\Bigr), \quad t = 2\arctan\bigl(t_0 e^\xi\bigr), \quad z = z_0,$$

$$x = 2\int_0^\xi \sin\bigl(z_0 e^\sigma\bigr)\,d\sigma + x_0, \quad t = \xi + t_0, \quad z = z_0 e^\xi,$$

$$x = \cos(z_0)\,\xi + x_0, \quad t = \sin(z_0)\,\xi + t_0, \quad z = z_0.$$

1.2 a. $u(x,y) = y + \sqrt{y^2 - x^2}$.

b. $u(x,y) = \dfrac{1}{x}\bigl(1 + 2\sinh^2(y/2)\bigr)$.

c. $u(x,y) = h\bigl(\sqrt{x^2+y^2}\,\exp(\arctan(y/x))\bigr)$.

d. $u(x,y) = (x+1)(y-1)$.

e. $u(x,y) = \exp\bigl(y + (1-x^2)/2\bigr) - 1$.

f. $u(x,y) = (y - x^2)^2\exp\bigl(x^2/2\bigr) - 1$.

g. $x = (u - y^2)\exp\bigl(uy - 2y^3/3\bigr)$,
which determines $u$ implicitly in terms of $(x,y)$.

1.3 This is identical to the scalar case; work with each component $f_i$ and $u_i$, etc.

1.4 Set $x = (x,y)$ and $f = (f,g)$. Then the conservation law reads $u_t + \nabla\cdot f = 0$. Assume now that the position on the regular surface is $(x,t)$, and let $B_r = \{(y,\tau) \mid |x-y|^2 + (t-\tau)^2 \le r^2\}$. Let $\Gamma_r$ be the intersection of the surface of discontinuity with $B_r$, and choose a test function $\varphi \in C_0^\infty(B_r)$. Then an application of Green's theorem yields

$$\int_{\Gamma_r} \bigl[(u_1,f_1) - (u_2,f_2)\bigr]\cdot\bigl(\sigma(s),n(s)\bigr)\,ds = 0,$$

where $n(s)$ denotes the normal to the surface, $u_1$ and $u_2$ denote the limits of $u$, and $\sigma$ is the speed of the discontinuity. Since $\varphi$ is arbitrary, we obtain

$$\sigma(u_1 - u_2) = n\cdot(f_1 - f_2).$$

1.5 Observe that we need only consider the extreme characteristics originating at $x = \mp 1$, since before these meet, the solution will be linear between them. These characteristics have speed $\pm 1$, and hence the solution will be continuous until $t = 1$, and is given by $u(x,t) = u_0(x/(t-1))$. The solution of the linearized equation is given by $v(x,t) = u_0(\alpha e^{\alpha t}x)$. From this we find that $v_n(x,(m+1)/n) = v_n\bigl(\alpha_{m,n}e^{\alpha_{m,n}/n}x, m/n\bigr)$, and thus $\alpha_{m+1,n} = \alpha_{m,n}e^{\alpha_{m,n}/n}$. Set $1/n = \Delta t$. Assuming that the limit holds, we have $\alpha(m+1,n) = \bar\alpha(t+\Delta t)$, and thus

$$\frac{\ln(\bar\alpha(t+\Delta t)) - \ln(\bar\alpha(t))}{\Delta t} = \bar\alpha(t).$$

Letting $\Delta t$ tend to zero, we find that $\bar\alpha'(t) = \bar\alpha^2(t)$, which gives the conclusion, since $\bar\alpha(0) = 1$. For $t \ge 1$, $\alpha_{m,n}$ diverges to $+\infty$, which incidentally gives us the correct solution.

1.6 The solutions in parts a and b are, respectively,

$$u(x,t) = \begin{cases} -1 & \text{for } x \le -t, \\ x/t & \text{for } |x| < t, \\ 1 & \text{for } x > t, \end{cases} \qquad\text{and}\qquad u(x,t) = u_0(x).$$

In the first case we directly verify that $u(x,t)$ also solves (1.29). In the second case the Rankine–Hugoniot condition is violated. Set $v = \frac12 u^2$. Then $v_t + \frac23\bigl(v^{3/2}\bigr)_x = 0$, and $v(x,0) = 1$. Hence $v(x,t) = 1$ is the correct solution.

1.7 The jumps satisfy the Rankine–Hugoniot condition. That's all.
Chapter 2, pages 58–62.

2.1 We find that

$$f'(u) = \frac{2u(1-u)}{\bigl(u^2 + (1-u)^2\bigr)^2},$$

and that the graph of $f$ is "S-shaped" in the interval $[0,1]$ with a single inflection point at $u = \frac12$. Hence the solution of the Riemann problem will be a rarefaction wave followed by a shock. The left limit of the shock, $u_1$, will solve the equation $f'(u_1) = (1 - f(u_1))/(1 - u_1)$, which gives $u_1 = 1 - \sqrt2/2$, and the speed of the shock will be $\sigma = (1+\sqrt2)/2$. For $u < u_1$ we must find the inverse of $f'$, and after some manipulation this is found to be

$$(f')^{-1}(\xi) = \frac12\Bigl(1 - \sqrt{\frac{\sqrt{4\xi+1} - 1}{\xi} - 1}\,\Bigr).$$

Hence the solution will be given by

$$u(x,t) = \begin{cases} 0 & \text{for } x \le 0, \\ (f')^{-1}(x/t) & \text{for } 0 \le x \le t(1+\sqrt2)/2, \\ 1 & \text{for } x > t(1+\sqrt2)/2. \end{cases}$$

2.2 To show that the function in part a is a weak solution we check that the Rankine–Hugoniot condition holds. In part b we find that $u^\varepsilon(x,t) = u_0^\varepsilon(x\varepsilon/(t+\varepsilon))$. Then

$$\bar u(x,t) = \begin{cases} -1 & \text{for } x < -t, \\ x/t & \text{for } |x| \le t, \\ 1 & \text{for } x > t. \end{cases}$$

The solution found in part a does not satisfy the entropy condition, whereas $\bar u$ does.

2.3 Introduce coordinates $(\tau,y)$ by $\tau = t$, $y = x - at$. Then the resulting problem reads $u^\varepsilon_\tau = \varepsilon u^\varepsilon_{yy}$. The solution to this is found by convolution with the heat kernel and reads

$$u^\varepsilon(y,\tau) = u_l + \frac{u_r - u_l}{\sqrt{4\pi\varepsilon\tau}}\int_0^\infty \exp\bigl(-(y-z)^2/(4\varepsilon\tau)\bigr)\,dz.$$

The result follows from this formula.

2.4 Add (2.74) and (2.54); then choose $\psi$ as in (2.55).

2.5 a. The characteristics are given by

$$\frac{\partial t}{\partial\xi} = 1, \qquad \frac{\partial x}{\partial\xi} = c(x)f'(z), \qquad \frac{\partial z}{\partial\xi} = 0,$$

or

$$t = \xi + t_0, \qquad \frac{\partial x}{\partial\xi} = c(x)f'(z_0), \qquad z = z_0.$$

b. The Rankine–Hugoniot condition reads $s[[u]] = c\,[[f]]$.
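The closed-form inverse of $f'$ quoted in the answer to Exercise 2.1 is easy to sanity-check. The sketch below (our own check, not part of the text) evaluates the Buckley–Leverett-type flux $f(u) = u^2/(u^2+(1-u)^2)$ and verifies both the shock condition for $u_1 = 1-\sqrt2/2$ and that the stated formula really inverts $f'$ on the rarefaction branch $u \in [0, u_1]$.

```python
import numpy as np

def f(u):                 # flux of Exercise 2.1
    return u**2 / (u**2 + (1 - u)**2)

def df(u):                # f'(u) = 2u(1-u)/(u^2+(1-u)^2)^2
    return 2*u*(1 - u) / (u**2 + (1 - u)**2)**2

def df_inv(xi):           # claimed inverse of f' on the branch u in [0, 1/2)
    return 0.5 * (1 - np.sqrt((np.sqrt(4*xi + 1) - 1) / xi - 1))

u1 = 1 - np.sqrt(2) / 2          # left state of the shock
sigma = (1 + np.sqrt(2)) / 2     # shock speed

shock_eq_gap = abs(df(u1) - (1 - f(u1)) / (1 - u1))   # f'(u1) = (1 - f(u1))/(1 - u1)
speed_gap = abs(df(u1) - sigma)                        # sigma = f'(u1)
inv_gap = max(abs(df_inv(df(u)) - u) for u in np.linspace(0.05, u1, 25))
```

All three residuals vanish to rounding error, confirming the algebra in the hint.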
c. The characteristics are given by

$$t = \xi + t_0, \qquad x = \tan\bigl(z\xi + \arctan(x_0)\bigr), \qquad z = z_0.$$

Using all values of $z_0$ between $-1$ and $1$ for characteristics starting at the origin, and writing $u$ in terms of $(x,t)$, we obtain

$$u(x,t) = \begin{cases} -1 & \text{for } x \le -\tan t, \\ \arctan(x)/t & \text{for } |x| < \tan t, \\ 1 & \text{for } x \ge \tan t. \end{cases}$$

d. One possibility is to approximate $f$ by a continuous, piecewise linear flux function, and keep the function $c$. The characteristics will no longer be straight lines, and one will have to solve the ordinary differential equations that come from the jump condition. Another possibility is to approximate $c$ by piecewise constant or piecewise linear functions.

e. The entropy condition reads

$$|u-k|_t + c(x)\bigl(\operatorname{sign}(u-k)(f(u)-f(k))\bigr)_x \le 0$$

weakly for all $k \in \mathbb{R}$.

2.6 a. The characteristics are given by

$$\frac{\partial t}{\partial\xi} = 1, \qquad \frac{\partial x}{\partial\xi} = c(x)f'(z), \qquad \frac{\partial z}{\partial\xi} = -c'(x)f(z). \tag{C.1}$$

b. The entropy condition reads

$$|u-k|_t + \bigl(q(u,k)c(x)\bigr)_x + \operatorname{sign}(u-k)\,f(k)\,c'(x) \le 0, \tag{C.2}$$

in the distributional sense.

c. Set $\eta(u,v) = |u-v|$ and $q(u,v) = \operatorname{sign}(u-v)(f(u)-f(v))$. Starting from the entropy condition (C.2) we get

$$\iint \bigl( \eta(u,k)\varphi_t + q(u,k)c(x)\varphi_x - \operatorname{sign}(u-k)f(k)c'(x)\varphi \bigr)\,dx\,dt \ge 0,$$

$$\iint \bigl( \eta(v,k)\varphi_s + q(v,k)c(y)\varphi_y - \operatorname{sign}(v-k)f(k)c'(y)\varphi \bigr)\,dy\,ds \ge 0.$$

We set $k = v$ in the first equation, $k = u$ in the second, and then add and integrate, obtaining

$$\iiiint \Bigl( \eta(u,v)(\varphi_t + \varphi_s) + q(u,v)\bigl(c(x)\varphi_x + c(y)\varphi_y\bigr) - \operatorname{sign}(u-v)\bigl(f(u)c'(y) - f(v)c'(x)\bigr)\varphi \Bigr)\,dx\,dt\,dy\,ds + \iiiint \operatorname{sign}(u-v)\Bigl( (f(u)-f(v))(c(y)-c(x))\varphi_y - c'(x)f(v)\varphi + c'(y)f(u)\varphi \Bigr)\,dx\,dt\,dy\,ds \ge 0.$$
Now, $\varphi_y = \psi_y\omega + \psi\omega_y$. Therefore, the first term in the last integrand above can be split into

$$\operatorname{sign}(u-v)\bigl[(f(u)-f(v))(c(y)-c(x))\bigr]\omega_y\psi + \operatorname{sign}(u-v)\bigl[(f(u)-f(v))(c(y)-c(x))\bigr]\omega\psi_y.$$

The integral of the last term will vanish as $\varepsilon_1 \to 0$, since $c$ is continuous. What remains is the integral of

$$\psi\operatorname{sign}(u-v)\Bigl( (f(u)-f(v))(c(y)-c(x))\omega_y - c'(x)f(v)\omega + c'(y)f(u)\omega \Bigr).$$

We have that

$$(c(y)-c(x))\omega_y + c'(y)\omega = \frac{\partial}{\partial y}\bigl((c(y)-c(x))\omega\bigr),$$

$$-(c(y)-c(x))\omega_y - c'(x)\omega = -\Bigl((c(x)-c(y))\omega_x + c'(x)\omega\Bigr) = -\frac{\partial}{\partial x}\bigl((c(x)-c(y))\omega\bigr).$$

Thus the troublesome integrand can be written

$$\psi\operatorname{sign}(u-v)\Bigl( f(u)\frac{\partial}{\partial y}\bigl((c(y)-c(x))\omega\bigr) - f(v)\frac{\partial}{\partial x}\bigl((c(x)-c(y))\omega\bigr) \Bigr).$$

We add and subtract to find that this equals

$$\psi\operatorname{sign}(u-v)(f(u)-f(v))\frac{\partial}{\partial y}\bigl((c(y)-c(x))\omega\bigr) + \psi\operatorname{sign}(u-v)f(v)\bigl[c'(y) - c'(x)\bigr]\omega.$$

Upon integration, the last term will vanish in the limit, since $c'$ is continuous. Thus, after a partial integration we are left with

$$\Bigl|\iiiint \frac{\partial}{\partial y}\bigl(\psi q(u,v)\bigr)(c(x)-c(y))\omega\,dx\,dt\,dy\,ds\Bigr| \le \|c'\|_\infty\,\varepsilon_1 \iiiint \Bigl|\frac{\partial}{\partial y}\bigl(\psi q(u,v)\bigr)\Bigr|\,\omega\,dx\,dt\,dy\,ds \le \mathrm{const}\,\varepsilon_1\bigl(\mathrm{T.V.}\,(v) + \mathrm{T.V.}\,(\psi)\bigr).$$

By sending $\varepsilon_0$ and $\varepsilon_1$ to zero, we find that

$$\iint \bigl( |u-v|\psi_t + \operatorname{sign}(u-v)(f(u)-f(v))c(x)\psi_x \bigr)\,dx\,dt \ge 0.$$

With this we can continue as in the proof of Proposition 2.10.
2.7 Mimic the proof of the Rankine–Hugoniot condition by applying the computation (1.18).

2.8 The function $q$ satisfies $q' = f'\eta'$. Thus $q = u^3/3$. The entropy condition reads

$$\int_{\mathbb{R}}\int_0^T \Bigl( \tfrac12 u^2\phi_t + \tfrac13 u^3\phi_x \Bigr)\,dt\,dx \ge \int_{\mathbb{R}} \bigl(\tfrac12 u^2\phi\bigr)\big|_{t=T}\,dx - \int_{\mathbb{R}} \tfrac12 u_0^2\,\phi\big|_{t=0}\,dx.$$

Choose $\phi$ that approximate the identity function appropriately. Then

$$\int_{\mathbb{R}} u^2\,dx \le \int_{\mathbb{R}} u_0^2\,dx.$$

Solutions of conservation laws are not contractive in the $L^2$-norm, in general, as the following counterexample shows. Let

$$u_0 = \begin{cases} 1 & \text{for } 0 < x < 1, \\ 0 & \text{otherwise}, \end{cases} \qquad v_0 = \begin{cases} \frac12 & \text{for } 0 < x < 1, \\ 0 & \text{otherwise}. \end{cases}$$

We find that

$$\|u(t) - v(t)\|_2^2 = \frac14 + \frac{5}{24}\,t$$

for $t < 2$.

2.9 a. The Rankine–Hugoniot relation is the same as before, viz., $s[[u]] = [[f]]$.

b.

$$\int_{\mathbb{R}}\int_0^T \bigl( |u-k|\phi_t + q(u,k)\phi_x \bigr)\,dt\,dx + \int_{\mathbb{R}} \Bigl( (|u-k|\phi)\big|_{t=0} - (|u-k|\phi)\big|_{t=T} \Bigr)\,dx \ge \int_{\mathbb{R}}\int_0^T \operatorname{sign}(u-k)\,g(u)\,dt\,dx$$
for all $k \in \mathbb{R}$ and all nonnegative test functions $\phi \in C_0^\infty(\mathbb{R}\times[0,T])$. (Recall that $q(u,k) = \operatorname{sign}(u-k)(f(u)-f(k))$.)

2.10 First we note that the Rankine–Hugoniot condition implies that $v$ is locally bounded and uniformly continuous. Assume now that $v - \varphi$ has a local maximum at $(x_0,t_0)$, where $t_0 > 0$. Since $p$ is piecewise differentiable, we can define the following limits:

$$\bar p_l = \lim_{x\to x_0-} p(x,t_0) \ge \varphi_x(x_0,t_0) \ge \bar p_r = \lim_{x\to x_0+} p(x,t_0).$$

The inequalities hold, since $v - \varphi$ has a maximum at $(x_0,t_0)$ and, where $p$ is differentiable,

$$v_x = p + x p_x - tH(p)_x = p + \frac{x}{t}\dot p + t p_t = p + \frac{x}{t}\dot p - \frac{x}{t}\dot p = p.$$

Thus $\hat\varphi_x = \varphi_x(x_0,t_0)$ is between $\bar p_l$ and $\bar p_r$. We also take the upper convex envelope. Thus

$$H_l + \sigma(\hat\varphi_x - \bar p_l) \ge H(\hat\varphi_x), \qquad H_r + \sigma(\hat\varphi_x - \bar p_r) \ge H(\hat\varphi_x),$$

where $H_{l,r} = H(\bar p_{l,r})$ and $\sigma = (H_l - H_r)/(\bar p_l - \bar p_r)$ if $\bar p_l \ne \bar p_r$, and $\sigma = H'(\bar p_{l,r})$ otherwise. We add the two equations to find that

$$\sigma\hat\varphi_x \ge H(\hat\varphi_x) + \frac{\sigma}{2}(\bar p_l + \bar p_r) - \frac12(H_l + H_r). \tag{C.3}$$

Now we find $(x,t)$ close to $(x_0,t_0)$ such that

$$\sigma = \frac{x_0 - x}{t_0 - t}.$$

Since $v - \varphi$ has a local maximum at $(x_0,t_0)$, we have that

$$\frac{v(x_0,t_0) - v(x,t)}{t_0 - t} \ge \frac{\varphi(x_0,t_0) - \varphi(x,t)}{t_0 - t}.$$

If $p$ is assumed to be left continuous, we can now use this to show that $\sigma\bar p_l - H_l \ge \hat\varphi_t + \sigma\hat\varphi_x$. Choosing $(x,t)$ slightly to the right of the line $x = \sigma t$ we can also show that $\sigma\bar p_r - H_r \ge \hat\varphi_t + \sigma\hat\varphi_x$, and therefore

$$\frac{\sigma}{2}(\bar p_l + \bar p_r) - \frac12(H_l + H_r) \ge \hat\varphi_t + \sigma\hat\varphi_x.$$

Using (C.3) we conclude that $\hat\varphi_t + H(\hat\varphi_x) \le 0$, and $v$ is a subsolution. To show that $v$ is also a supersolution, proceed along similar lines.

2.11 We find that

$$\|f - f_\delta\|_{\mathrm{Lip}} = \sup \frac{|(f(p) - f_\delta(p)) - (f(q) - f_\delta(q))|}{|p-q|} \le \sup_{j,p} |f'(p) - f_\delta'(p)| = \sup_j \sup_{j\delta \le p \le (j+1)\delta} \Bigl| f'(p) - \frac{f((j+1)\delta) - f(j\delta)}{\delta} \Bigr| \le \sup_{\substack{p,q \\ |p-q|\le\delta}} |f'(p) - f'(q)| \le \delta\,\|f''\|_\infty.$$
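The bound of Exercise 2.11 says that replacing a smooth flux by its piecewise linear interpolant on a grid of spacing $\delta$ costs at most $\delta\|f''\|_\infty$ in Lipschitz norm. A quick numerical check (our own, with $f = \sin$, so $\|f''\|_\infty = 1$; `np.interp` builds the interpolant $f_\delta$):

```python
import numpy as np

# f(u) = sin(u), piecewise linear interpolant f_delta on a grid of spacing delta
delta = 0.1
nodes = np.arange(0.0, 2 * np.pi + delta, delta)
fine = np.linspace(0.0, 2 * np.pi, 100001)

g = np.sin(fine) - np.interp(fine, nodes, np.sin(nodes))   # f - f_delta
# estimate Lip(f - f_delta) by difference quotients on the fine grid
lip_est = np.max(np.abs(np.diff(g) / np.diff(fine)))
```

The estimate comes out around $\delta/2$, comfortably inside the bound $\delta\,\|f''\|_\infty = 0.1$.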
2.12 See [113].

2.13 We first find the characteristics (parameterized using $t$):

$$x = x(\eta,t) = \begin{cases} \eta + (1 - e^{-t}) & \text{for } \eta \le -\frac12, \\ \eta(2e^{-t} - 1) & \text{for } -\frac12 < \eta < 0, \\ \eta & \text{for } \eta \ge 0, \end{cases} \qquad u = u(\eta,t) = u_0(\eta)e^{-t}.$$

Characteristics with $\eta \in \bigl(-\frac12,0\bigr)$ collide at $t = \ln 2$. At that time a shock forms. The solution reads

$$u(x,t) = \begin{cases} e^{-t} & \text{for } x < \min\bigl(\frac12 - e^{-t},\ \frac14 - \frac12 e^{-t}\bigr), \\ \dfrac{2x}{e^t - 2} & \text{for } \frac12 - e^{-t} \le x \le 0, \\ 0 & \text{for } x \ge \max\bigl(0,\ \frac14 - \frac12 e^{-t}\bigr). \end{cases}$$

2.14 The solution reads

$$u(x,t) = \begin{cases} 2 & \text{for } x < \frac12(e^2-1)t, \\ 0 & \text{for } x \ge \frac12(e^2-1)t. \end{cases}$$

2.15 a.

$$u(x,t) = \begin{cases} 1 & \text{for } x < t+2, \\ 0 & \text{for } x \ge t+2. \end{cases}$$

b.

$$u(x,t) = \begin{cases} 0 & \text{for } x \le 2, \\ 3\Bigl(\dfrac{x-2}{t}\Bigr)^{2} & \text{for } 2 < x < \dfrac{t}{\sqrt3} + 2, \\ 1 & \text{for } x \ge \dfrac{t}{\sqrt3} + 2. \end{cases}$$

2.16 The solution reads

$$u(x,t) = \begin{cases} x/t & \text{for } 0 < x < t, \\ 1 & \text{for } t \le x \le 1 + t/2, \\ 0 & \text{otherwise}, \end{cases}$$

when $t \le 2$, and

$$u(x,t) = \begin{cases} x/t & \text{for } 0 < x < \sqrt{2t}, \\ 0 & \text{otherwise}, \end{cases}$$

when $t > 2$.

2.17 In Figure C.1 you can see how the fronts are supposed to move, but you will have to work out the states yourself.
Appendix C. Answers and Hints
t
x1
x2
x
Figure C.1. The fronts for Exercise 2.17.
Chapter 3, pages 112–116. 3.1 We do the MacCormack method only; the Lax–Wendroff scheme is similar. It simplifies the computation to use repeatedly that
φ(u + aε + bε2 + O ε3 )
ε2 = φ(u) + φ (u)aε + (φ (u)a2 + 2bφ (u)) + O ε3 2 as ε → 0. Consider an exact classical (smooth) solution u of ut + f (u)x = 0, and compute (where SM is the operator defined by the MacCormack scheme) 1 S(Δt)u − SM (Δt) Δt 1 = u(x, t + Δt) − u(x, t) Δt λ + f u(x, t) − λ(f (u(x + Δx, t)) − f (u(x, t))) 2
− f u(x − Δx, t) − λ(f (u(x, t)) − f (u(x − Δx, t))) + f (u(x, t)) − f (u(x − Δx, t)) 1 = (ut + f (u)x )Δt Δt
λ2 + utt − 2f (u)f (u)u2x − f (u)2 uxx Δx2 + O Δx3
2 = O Δx2 ,
LΔt =
where we have used that a smooth solution of ut + f (u)x = 0 satisfies utt − 2f (u)f (u)u2x − f (u)2 uxx = 0 as well.
Appendix C. Answers and Hints
325
3.2 a. If u = Ujn , v = Ujn+1 and w = Ujn−1 , we have that Ujn+1 = g(u, v, w) u v =u−λ f (s) ∨ 0 ds + f (s) ∧ 0 ds 0 0 w u − f (s) ∨ 0 ds − f (s) ∧ 0 ds 0 0 u v w =u−λ |f (s)| ds + f (s) ∧ 0 ds − f (s) ∨ 0 ds . 0
0
0
Computing partial derivatives we find that ∂g = 1 − λ |f (u)| ≥ 0 if λ |f | ≤ 1, ∂u ∂g = −λf (v) ∧ 0 ≥ 0, ∂v ∂g = λf (w) ∨ 0 ≥ 0. ∂w Consistency is easy to show. b. If f ≥ 0, the scheme coincides with the upwind scheme; hence it is of first order. c. For any number a we have ⎧ 1 4 ⎪ ⎨ a ∨ 0 = (a + |a|), |a| = a ∨ 0 − a ∧ 0, 2 ⇒ ⎪ a = a ∨ 0 + a ∧ 0, ⎩ a ∧ 0 = 1 (a − |a|). 2 Using this the form of the scheme easily follows. d. We have that u2 |u| du = sign (u) . 2 From this it follows that 1 u2 v2 v2 u2 f EO (u, v) = + − sign (v) + sign (u) 2 2 2 2 2 2 2 1 u v = (1 + sign (u)) + (1 − sign (v)) , 2 2 2 which is what we want to show. If f is convex with a unique minimum at u¯, then f EO (u, v) = f (u ∨ u ¯) + f (v ∧ u¯) − f (¯ u).
326
Appendix C. Answers and Hints
3.3 The scheme is not monotone, since ∂U Ujn+1 Δt n =∓ f (U Uj ±1 ). ∂U Ujn±1 2Δx 3.4 a. We find that Ujn+1 = Ujn /(1 + λ(U Ujn − Ujn−1 )), provided that the denominator is nonzero. Thus ∂U Ujn+1 = λU Ujn /(1 + λ(U Ujn − Ujn−1 ))2 , ∂U Ujn ∂U Ujn+1 = (1 − λU Ujn )/(1 + λ(U Ujn − Ujn−1 ))2 . ∂U Ujn−1 Assume λ < 1. Considering Ujn+1 as a function of Ujn , Ujn−1 ∈ [0, 1], we see that Ujn+1 takes on its largest value, namely one, when Ujn = Ujn−1 = 1. Thus Ujn+1 ∈ [0, 1] for all n and j. The same computation shows that the scheme is monotone. b. A constant is mapped into the same constant by this scheme, which therefore is consistent. A Taylor expansion around a smooth solution shows that the truncation error is of first order. 3.5 a. With the obvious notation we have that λ n Vjn+1 = Vjn − fj − fjn−1 + fjn−2 − fjn−1 , Δx and by a Taylor expansion about Ujn−1 , 1 n fjn = fjn−1 + (U Ujn − Ujn−1 )f (U Ujn−1 ) + (U U − Ujn−1 )2 f (ηηj −1/2 ), 2 j 1 n fjn−2 = fjn−1 + (U Ujn−2 − Ujn−1 )f (U Ujn−1 ) + (U U − Ujn−1 )2 f (ηηj −3/2 ). 2 j −2 Using this we get the desired result. b. Assuming that Vjn ≥ Vjn−1 ≥ 0 we find that
2 Vjn+1 ≤ Vjn − cΔt Vjn = g(V Vjn ). The function g has a maximum at 1/(2cΔt). Hence Vjn is in an interval where g is increasing. Thus 1 1 n+1 1 Vjn+1 ≤ g = < . (2 + n)cΔt (n + 2)cΔt n + 2 (n + 3)cΔt The case where Vjn−1 > Vjn is similar, and the case where 0 ≥ Vjn ∨ Vjn−1 is trivial. Thus we have completed the induction. Hence for all
Appendix C. Answers and Hints
327
n, Vˆ n will be in an interval where g is increasing, and Vjn+1 ≤ g Vˆ n . Taking the maximum over j and 0 on the left completes the claim. c. Assuming that the claim holds for n = 0, we wish to show that it holds for any n by induction. Since Vˆ n is in an interval where g is increasing, we find that ! " ˆ0 ˆ0 V V n+1 Vˆj ≤ 1− 1 + cnΔtVˆ0 1 + cnΔtVˆ0 =
Vˆ0
1 + cΔtVˆ0 (n − 1) , 1 + cnΔtVˆ0 1 + cΔtVˆ0 n
so if 1 + Vˆ0 cΔt(n − 1) 1 , 2 ≤ ˆ 1 + V0 cΔt(n + 1) 1 + Vˆ0 cΔtn we are ok. Set k = Vˆ0 cΔt. Since (1 + kn)2 − k 2 < (1 + kn)2 , the claim follows. d. Since Vjn ≤ Vˆ n , the claim follows by noting that Uin − Ujn =
i
Vjn .
k=j+1
-
. n
e. Since Uj converges to the entropy solution u for almost every x (and y) and t, we find that the claim holds. 3.6 a. We find that u( · + p, t) is another entropy solution with the same initial condition; hence u( · + p, · ) = u, and u is periodic. b. Taking the infimum over y and the supremum over x we find that this holds. c. Set uε = u ∗ ωε . Then uε is differentiable, and satisfies ∂t uε + ∂x (f (u) ∗ ωε ) = 0. Thus d dt
0
p
uε (x, t) dx = (f (u) ∗ ωε )(0, t) − (f (u) ∗ ωε )(p, t) = 0,
since also f ∗ ωε is periodic with period p. Therefore, p p uε (x, t) dx = u0,ε (x) dx. 0
0
328
Appendix C. Answers and Hints
We know that uε converges to u in L1 ([0, p]); hence p p u(x, t) dx = u0 (x) dx. 0
0
Now, since u(x, t) → u ¯ as t becomes large, p p¯ u= u0 (x) dx. 0
3.7 Using the Fourier representation, we find that a0 g(nx)ϕ(x) dx = ϕ(x) dx 2 + ak cos(nkx)ϕ(x)dx + bk sin(nkx)ϕ(x)dx k ∗
∗
if ϕ is in L1 . Since cos(nkx) 0 and sin(nkx) 0, part a follows. Now we have that h(sin(x)) is a continuously differentiable function ∗ of period 2π, and by part a, h(nx) a0 /2, where 2π a0 1 1 π/2 = h(sin(x)) dx = h(sin(x)) dx. 2 2π 0 π −π/2 Introducing λ = sin(x) we find that a0 1 1 h(λ) √ = dλ. 2 π −1 1 − λ2 Hence for any set E ⊂ R,
ν(E) = E
χ[−1,1] √ dλ. 1 − λ2
3.8 a. Observe first that vk is constant and equal to one on the interval [1, 2]. From the definition of F we see that |F (φ) − φ| ≤
1 |φ(b) − φ(a)| . 3
Thus |vk+1 − vk | =
|F (vj,k ) − vj,k | χj,k
j
and hence the limit v(x) = lim vk (x) k→∞
exists and is continuous.
k 2 ≤ , 3
Appendix C. Answers and Hints
b. Observe that T.V. (vj,k ) =
2 k 3
329
(on [0, 1] and [2, 3]), and that
5 5 |φ(b) − φ(a)| = T.V. (φ) . 3 3
T.V. (F (φ)) = Thus T.V. (vk+1 ) =
T.V. (F (vj,k )) =
j
5 T.V. (vj,k ) 3 j
5 2 k 10 k = · 2 · 3k = 2 . 3 3 3
(C.4)
c. We see that v(j/3k ) = vk (j/3k ) by construction. d. Define the upwind scheme by Ujn+1 = Ujn + λ(ffjn − fjn−1 ),
Uj0 = v(j/3k ) = vk (j/3k ).
(C.5)
From the assumptions on the flux function we know that the scheme is TVD with a CFL number at most one. Thus
10 k T.V. (U n ) ≤ T.V. U 0 ≤ T.V. (vk ) = 2 . 3 We apply Theorem 3.27, and consider U n − U n Δt ≤ T 3−kβ 10 2k . (Δx)β j +1 j 3 n j
For this to be less than a constant C(T ), we need 2/3β ≤ 1, or β ≥ ln 2/ ln 3 ≈ 0.63. For Theorem 3.27 to apply we note that (3.81) is satisfied with right-hand side zero.
Chapter 4, pages 158–162. 4.1 a. Set
αε (t) =
0
t
ωε (s − t1 ) − ωε (s − t2 ) ds,
and set ψ(x, y, t) = αε (t)ϕ(x, y) for some test function ϕ with |ϕ(x, y)| ≤ 1. Then αε will tend to the characteristic function of the interval [t1 , t2 ] as ε → 0. Hence, since u is a weak solution, ϕ(x, y)(u(x, y, t1 )−u(x, y, t2 )) dx dy t2
+ f (u)ϕx + g(u)ϕy dx dy dt = 0. t1
330
Appendix C. Answers and Hints
Then we have that u( · , · , t1 ) − u( · , · , t2 ) L1 (R2 ) = sup ϕ(x, y)(u(x, y, t1 ) − u(x, y, t2 )) dx dy |ϕ|≤1
≤
sup
≤
t2 t1
|ϕ|≤1
t2
f (u)ϕx + g(u)ϕy dx dy dt
T.V.x (f (u( · , y, t))) dy +
T.V.y (g(u(x, · , t))) dx dt
t1
≤ |t1 − t2 | ( f Lip ∨ g Lip ) T.V. (u0 ) . See also Theorem 7.10. b. Let uΔt and vΔt denote the dimensional splitting approximations to u and v, respectively. It is easy to show (using monotonicity for the one-dimensional solution operators) that if u0 ≤ v0 a.e., then uΔt ≤ vΔt a.e. Hence uΔt (x, y, t) − vΔt (x, y, t) ∨ 0 dx dy = 0, and since both uΔt and vΔt converge strongly in L1 to u and v, respectively, it follows that u(x, y, t) − v(x, y, t) ∨ 0 dx dy = 0, and thus u ≤ v a.e. 4.2 Let u = Stj u0 denote the solution of ut + fj (u)x = 0 with initial condition u|t=0 = u0 . Define {un } by u0 = u0 ,
1 un+1/2 = SΔt un ,
2 un+1 = SΔt un+1/2 .
1 Interpolate by defining uΔt = S2(t−t un if tn ≤ t ≤ tn+1/2 and n) 1 uΔt = S2(t−t un+1/2 whenever tn+1/2 ≤ t ≤ tn+1 . (Here n+1/2 ) tn = nΔt.) By mimicking the multidimensional case, one concludes that (i) uΔt ∞ ≤ C; (ii) T.V. (uΔt (t)) ≤ T.V. (u0 ); and (iii) uΔt (t) − uΔt (s) 1 ≤ C |t − s|. Theorem A.8 shows that uΔt has a limit u as Δt → 0. Write the Kruzkov ˇ entropy condition for uΔt for each time interval [tn , tn+1/2 ] (for f1 ) and [tn , tn+1/2 ] (for f2 ), add the results, and let Dt → 0. As in the multidimensional case, the limit is the Kruzkov ˇ entropy condition for u and the original initial value problem (4.92). The analysis in Section 4.3 applies concerning convergence rates.
Appendix C. Answers and Hints
331
4.3 For simplicity, we assume that m = 2. Using the heat kernel we can write (x − z)2 1 n+1/2 u (x, y) = √ exp − un (z, y) dz 4Δt 4πΔt (y − w)2 1 n+1 u (x, y) = √ exp − un+1/2 (x, w) dw 4Δt 4πΔt (x − z)2 + (y − w)2 1 = exp − un (z, w) dz dw. 4πΔt 4Δt From this we see that un+1 (x, y) is the exact solution of the heat equation with initial data un (x, y) after a time Δt. If we let u(x, y, t) denote the exact solution of the original heat equation, we therefore see that u(x, y, tn ) = uΔt (x, y, tn ),
n = 0, 1, 2, . . . .
We have that u and uΔt are L1 continuous in t; hence it follows that uΔt → u in L1 as Δt → 0. If we want a rate of this convergence, we first assume that u( · , · , tn ) is uniformly continuous. For t ∈ tn , tn+1/2 we have that uΔt (x, y, t) − u(x, y, t) (x − z)2 + (y − w)2 1 = exp − 4π(t − tn ) 4(t − tn ) × u(z, y, tn) − u(z, w, tn ) dw dz (x − z)2 1 = exp − 4(t − tn ) 4π(t − tn ) 1 (y − w)2 × exp − 4(t − tn ) 4π(t − tn ) × u(z, y, tn) − u(z, w, tn ) dw dz (x − z)2 1 = exp − η(z, y, t) − η(z, y, tn ) dz, 4(t − tn ) 4π(t − tn ) where η(z, w, t) denotes the solution of ηt = ηww ,
η(z, w, tn ) = un (z, w).
If un (z, w) is uniformly continuous, then √ |η(z, y, t) − η(z, y, tn )| ≤ C Δt, and hence √ |uΔt(x, y, t) − u(x, y, t)| ≤ C Δt,
332
Appendix C. Answers and Hints
and an identical estimate is available if t ∈ tn+1/2 , tn+1 . Hence if u0 (x, y) is uniformly continuous, then √ uΔt ( · , · , t) − u( · , · , t) L∞ (R2 ) ≤ C Δt. If u0 (x, y) is not assumed to be continuous, but merely of bounded variation, we must use Kruzkov’s ˇ interpolation lemma to conclude that √ η(z, · , t) − η(z, · , tn ) L1 (R) ≤ C Δt. loc
Using this we find that uΔt ( · , · , t) − u( · , · , t) L1
2 loc (R )
√ ≤ C Δt.
n+1/2 4.4 a. The scheme will be monotone if Uj (U n ) and Ujn U n+1/2 are monotone in all arguments. This is the case if 1 λ f 1 ≤ 1, and μ ≤ . 2 b. We see that if λ |f | ≤ 1, then n+1/2
min Ujn ≤ Uj j
and if μ ≤
≤ max Ujn , j
1 , 2 n+1/2
min Uj j
n+1/2
≤ Ujn+1 ≤ max Uj j
.
Therefore, the sequence {uΔt} is uniformly bounded. Now let Vjn be another solution with initial data Vj0 . Set Wjn = Ujn − Vjn . Then 1 1 (1 − f (ηηj +1 )) Wjn+1 + (1 + f (ηηj −1 )) Wjn−1 , 2 2 where ηj is between Ujn and Vjn . By the CFL condition, the coefficients of Wjn±1 are positive; hence 1 n+1/2 1 Wj ≤ (1 − f (ηηj +1 )) Wjn+1 + (1 + f (ηηj −1 )) Wjn−1 . 2 2 Summing over j we find that n+1/2 Wjn . Wj ≤ n+1/2
Wj
=
j
j
Similarly, we find that n+1 W ≤ (1 − 2μ) W n+1/2 + μ W n+1/2 + μ W n+1/2 , j j j +1 j −1 so that
n+1/2 n W n+1 ≤ W . Wj ≤ j j j
j
j
Appendix C. Answers and Hints
333
Setting Vjn = Ujn−1 we see that T.V. (uΔt ) is uniformly bounded, and setting Vj0 = 0 we see that uΔt 1 is also uniformly bounded. To apply Theorem A.8 we need to use Kruˇˇzkov’s interpolation lemma, Lemma 4.10, to find a temporal modulus of continuity. Now we find that
φ(x) uΔt (x, nΔt)−uΔt (x, mΔt) dx m xj+1/2
k+1 k = φ(x) dx Uj − Uj . k=n+1
j
xj−1/2
xj+1/2 If we set φ¯j = xj−1/2 φ(x) dx and Djk = Ujk − Ujk−1 , then the the sum over j can be written 1 k+1/2 k+1/2 k φ¯j μ Dj+1 − Dj + φ¯j Dj+1 − Djk 2 j j (C.6)
n n ¯ + φj λ fj +1 − fj −1 . j
We do a partial summation in the first sum to find that it equals k+1/2 −μ Dj φ¯j − φ¯j−1 . j
Now
φ¯j − φ¯j−1 =
xj+1/2
xj−1/2
x
x−Δx
φ (y) dy dx ≤ φ 1 Δx2 .
Since μ = Δt/Δx2 , the first sum in (C.6) is bounded by Δt φ 1 T.V. (uΔt ) . Similarly, the second term is bounded by Δt
φ 1 Δx T.V. (uΔt) . 2 λ
Finally, since λ = Δt/Δx, the last term in (C.6) is bounded by Δt φ 1 T.V. (uΔt) . Therefore, φ(x) (uΔt(x, nΔt) − uΔt (x, mΔt)) dx ≤ (n − m)Δt const ( φ 1 + φ 1 ) . Then by Lemma 4.10 and the bound on the total variation, uΔt ( · , t1 ) − uΔt ( · , t2 ) 1 ≤ const |t1 − t2 |.
334
Appendix C. Answers and Hints
Hence we can conclude, using Theorem A.8, that a subsequence of {uΔt } converges strongly in L1 . To show that the limit is a weak solution we have to do a long and winding calculation. We give only an outline here. Firstly, uΔt ϕt + f (uΔt )ϕx + uΔtϕxx dx dt xj+1/2 tn+1 = Ujn ϕt + fjn ϕx + Ujn ϕxx dx dt. n,j
xj−1/2
tn
After a couple of partial summations this equals xj+1/2 − ϕn+1 (x) dx Ujn+1 − Ujn n,j
(C.7)
xj−1/2
−
n,j
−
n,j
−
n,j
tn+1 tn tn+1 tn tn+1 tn
ϕj+1/2 (t) dt fjn − fjn−1
(C.8)
ϕj+1/2 (t) dtDj
(C.9)
n+1/2
n+1/2 ϕj+1/2 (t) − ϕj−1/2 (t) dt Uj − Ujn . (C.10)
Now split (C.7) by writing the term in the square brackets as 1 λ n n+1/2 n+1/2 n μ Dj − Dj−1 + Djn − Dj−1 − fj +1 − fjn−1 . 2 2 (C.11) The trick is now to pair the term in (C.11) with μ with (C.9), and the term with λ with (C.8). The limits of the sums of these are zero, as is the limit of (C.10), and the remaining part of (C.11) also tends to zero as Δt approaches zero. If you carry out all of this, you will find that the limit is a weak solution. 4.5 a. We multiply the inequality by e−γt and find that d −γt e u(t) ≤ 0. dt Thus u(t)e−γt ≤ u(0). b. Multiply the inequality by e−Ct to find that d u(t)e−Ct ≤ Ce−Ct . dt
Appendix C. Answers and Hints
We integrate from 0 to t: u(t)e
−Ct
− u(0) ≤
t 0
335
Ce−Cs ds = 1 − e−Ct.
After some rearranging, this t is what we want. c. Multiplying by exp(− 0 c(s)ds) we find that t Rt Rt u(t) ≤ u(0)e 0 c(s)ds + d(s)e s c(τ )dτ ds, 0
which implies the t claim. d. Set U (t) = 0 u(t)dt. Then U (t) ≤ C1 U (t) + C2 , and an application of part c gives that t C2 u(s) ds ≤ − 1 − eC1 t . C1 0 Inserting this in the original inequality yields the claim. e. Introduce w = u/f . Then t t f (s) w(t) ≤ 1 + g(s)w(s) ds ≤ 1 + g(s)w(s) ds. 0 f (t) 0 Let U be the right-hand side of the above inequality, that is, t U =1+ g(s)w(s) ds. 0
Clearly, U (0) = 1 and U (t) = g(t)w(t) ≤ g(t)U (t). Applying part c we find that t u(t) = w(t)f (t) ≤ f (t)U (t) ≤ f (t) exp g(s) ds . 0
t (If f is differentiable, we see that U (t) = f (t) + 0 g(s)u(s) ds satisfies U (t) ≤ f (t) + g(t)U (t), and hence we could have used part c directly.) 4.6 a. To verify the left-hand side of the inequality, we proceed as before. When doing this we have a right-hand side given by sign (u − v) (g(u) − g(v))ψωε0 ωε1 dx dt dy ds. Now we can send ε0 and ε1 to zero and obtain the answer. b. By choosing the test function as before, we find that the double integral on the left-hand side is less than zero. Therefore, we find that h(0) − h(T ) ≥ sign (u − v) (g(u) − g(v))ψ(x, t) dxdt,
336
Appendix C. Answers and Hints
and therefore
h(T ) ≤ h(0) +
T
|g(u) − g(v)| ψ dx dt
0
≤ h(0) + γ
T
0
|u − v| ψ dx dt.
c. By making M arbitrarily large we obtain the desired inequality. 4.7 a. We set uΔt (x, 0) = u0 (x), uΔt (x, t) S (2 (t − tn )) un (x), = R(x, 2(t − tn+1/2 ) + tn , tn+1/2 )un+1/2 (x),
+ t ∈ tn , tn+1/2 , + t ∈ tn+1/2 , tn .
b. A bound on uΔt ∞ is obtained as before. To obtain a bound on the total variation we first note that T.V. un+1/2 ≤ T.V. (un ) . Let u(x, t) and v(y, t) be two solutions of the ordinary differential equation in the interval [tn , tn+1 ]. Then as before we find that |u(x, t) − v(x, t)|t ≤ |g(x, t, u(x, t)) − g(x, t, v(y, t))| + |g(x, t, v(y, t)) − g(y, t, v(y, t))| ≤ γ |u(x, t) − v(y, t)| + |g(x, t, v(y, t)) − g(y, t, v(y, t))| . Setting w = u − v we obtain γ(t−tn+1 ) |w(t)| ≤ e |w(0)|+
tn+1
|g(x, t, v(y, t)) − g(y, t, v(y, t))| dt .
tn n+1/2
If now u(0) = u (x) and v(0) = un+1/2 (y), this reads n+1 (x) − un+1 (y) u γΔt n+1/2 ≤e (x) − un+1/2 (y) u +
tn+1
tn
t−t n + tn+1/2 g x, t, uΔt y, 2 t−t n − g y, t, uΔt y, + tn+1/2 dt . 2
By the assumption on g this implies that
n+1 γΔt T.V. u ≤e T.V. un+1/2 +
tn+1
tn
b(t) dt ,
Appendix C. Answers and Hints
337
and thus T.V. (uΔt ) ≤ eγT (T.V. (u0 ) + b 1 ) . We can now proceed as before to see that n+1 u (x) − un (x) dx ≤ const Δt. I
Hence we conclude, using Theorem A.8, that a subsequence of $\{u_{\Delta t}\}$ converges strongly in $L^1$ to a function $u$ of bounded variation.

c. To show that $u$ is an entropy solution, we can use the same argument as before.

4.8 Write $v = u_t$ and differentiate the heat equation $u_t = \varepsilon\Delta u$ with respect to $t$. Thus $v_t = \varepsilon\Delta v$. Let $\operatorname{sign}_\eta$ be a smooth sign function, namely,
$$\operatorname{sign}_\eta(x) = \begin{cases} 1 & \text{for } x \ge \eta, \\ x/\eta & \text{for } |x| < \eta, \\ -1 & \text{for } x \le -\eta. \end{cases}$$
If we multiply $v_t = \varepsilon\Delta v$ by $\operatorname{sign}_\eta(v)$ and integrate over $\mathbb{R}^m \times [0,t]$, we obtain
$$\int_{\mathbb{R}^m}\int_0^t v_t \operatorname{sign}_\eta(v)\,dt\,dx = \varepsilon \int_{\mathbb{R}^m}\int_0^t \Delta v\, \operatorname{sign}_\eta(v)\,dt\,dx = -\varepsilon \int_{\mathbb{R}^m}\int_0^t |\nabla v|^2\, \operatorname{sign}_\eta'(v)\,dt\,dx \le 0.$$
For the left-hand side we obtain
$$\int_{\mathbb{R}^m}\int_0^t v_t \operatorname{sign}_\eta(v)\,dt\,dx = \int_{\mathbb{R}^m}\int_0^t \bigl(v \operatorname{sign}_\eta(v)\bigr)_t\,dt\,dx - \int_{\mathbb{R}^m}\int_0^t v\, v_t \operatorname{sign}_\eta'(v)\,dt\,dx.$$
The last term vanishes in the limit $\eta \to 0$; see Lemma B.5. Thus we conclude that as $\eta \to 0$, $\|v(t)\|_1 - \|v(0)\|_1 \le 0$. From this we obtain
$$\|u(t) - u_0\|_1 = \int_{\mathbb{R}^m}\Bigl|\int_0^t v(\tilde t)\,d\tilde t\Bigr|\,dx \le \int_0^t \|v(\tilde t)\|_1\,d\tilde t \le \|v(0)\|_1\, t.$$
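The conclusion of Exercise 4.8, that $\|u_t(\cdot,t)\|_1$ is nonincreasing for the heat equation, can be observed on a grid. The sketch below uses an explicit finite-difference discretization on a periodic domain; the grid, profile, and step size are illustrative choices, not part of the exercise.

```python
import numpy as np

# Periodic grid and a smooth, arbitrary initial profile.
N, eps = 200, 0.1
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.5 * np.cos(2 * x)

def laplacian(w, dx):
    """Second-order periodic finite-difference Laplacian."""
    return (np.roll(w, -1) - 2 * w + np.roll(w, 1)) / dx**2

# v = u_t = eps * Delta(u) satisfies the heat equation itself, so the
# argument above predicts that the L1 norm of v is nonincreasing in t.
dt = 0.4 * dx**2 / eps          # stable explicit time step (r = 0.4 <= 1/2)
norms = []
for _ in range(500):
    v = eps * laplacian(u, dx)  # current time derivative u_t
    norms.append(np.sum(np.abs(v)) * dx)
    u = u + dt * v              # explicit Euler step of u_t = eps*u_xx

assert all(norms[k + 1] <= norms[k] + 1e-12 for k in range(len(norms) - 1))
```

With $\varepsilon\,\Delta t/\Delta x^2 \le 1/2$ each discrete step is a convex combination of neighboring values, which is exactly why the discrete $L^1$ norm cannot grow.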
Chapter 5, pages 201–204.

5.1 The eigenvalues are $\lambda = \pm\sqrt{-p'(v)}$. Hence we need that $p'$ is negative.
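This can be checked numerically. The sketch below assumes the p-system written as $v_t - u_x = 0$, $u_t + p(v)_x = 0$ (one common normalization; the exercise's exact form may differ), with the sample choice $p(v) = 1/v$.

```python
import numpy as np

def jacobian(v, dp):
    """Flux Jacobian of the p-system v_t - u_x = 0, u_t + p(v)_x = 0."""
    return np.array([[0.0, -1.0], [dp(v), 0.0]])

# Sample pressure p(v) = 1/v, so p'(v) = -1/v**2 < 0: hyperbolicity holds.
dp = lambda v: -1.0 / v**2
v = 2.0
lam = np.sort(np.linalg.eigvals(jacobian(v, dp)).real)

# Predicted eigenvalues: -sqrt(-p'(v)) and +sqrt(-p'(v)), here -1/v and 1/v.
assert np.allclose(lam, [-np.sqrt(-dp(v)), np.sqrt(-dp(v))])
```

If instead $p' > 0$, the eigenvalues become purely imaginary and hyperbolicity is lost.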
5.2 The shock curves are given by
$$S_1\colon\ u = u_l + (v - v_l)/\sqrt{v v_l}, \quad v \le v_l, \qquad S_2\colon\ u = u_l - (v - v_l)/\sqrt{v v_l}, \quad v \ge v_l,$$
and the rarefaction curves read
$$R_1\colon\ u = u_l + \ln(v/v_l), \quad v \le v_l, \qquad R_2\colon\ u = u_l - \ln(v/v_l), \quad v \ge v_l.$$
We see that for $v_l > 0$ the wave curves extend to infinity. Given a right state $(v_r, u_r)$ and a left state $(v_l, u_l)$ we see that the forward slow curves (i.e., the $S_1$ and the $R_1$ curves above) intersect the backward fast curves from $(v_r, u_r)$ (i.e., the curves of left states that can be connected to the given right state $(v_r, u_r)$) in a unique point $(v_m, u_m)$. Hence the Riemann problem has a unique solution for all initial data in the half-plane $v > 0$.

5.3 The shock curves are given by
$$S_1\colon\ u = u_l - \sqrt{(v - v_l)(p(v_l) - p(v))}, \quad v \le v_l, \qquad S_2\colon\ u = u_l + \sqrt{(v - v_l)(p(v_l) - p(v))}, \quad v \ge v_l,$$
and the rarefaction curves read
$$R_1\colon\ u = u_l + \int_{v_l}^{v} \sqrt{-p'(y)}\,dy, \quad v \le v_l, \qquad R_2\colon\ u = u_l - \int_{v_l}^{v} \sqrt{-p'(y)}\,dy, \quad v \ge v_l.$$
If $\int_{v_l}^{\infty} \sqrt{-p'(y)}\,dy < \infty$, then there are points $(v_r, u_r)$ for which the Riemann problem does not have a solution. For further details, see [130, pp. 306 ff].

5.4 The solution consists of a slow rarefaction wave connecting the left state $(h_l, 0)$ and an intermediate state $(h_m, v_m)$ with
$$v_m = -2\bigl(\sqrt{h_m} - \sqrt{h_l}\bigr),$$
and there is a fast shock connecting $(h_m, v_m)$ and $(h_r, 0)$, where
$$v_m = \frac{1}{\sqrt{2}}\,(h_m - h_r)\sqrt{\frac{1}{h_m} + \frac{1}{h_r}}.$$
The intermediate state is determined by the relation
$$2\sqrt{2}\,\bigl(\sqrt{h_m} - \sqrt{h_l}\bigr) = (h_r - h_m)\sqrt{\frac{1}{h_m} + \frac{1}{h_r}},$$
which has a unique solution (the left-hand side is increasing in $h_m$, ending up at zero for $h_l$, while the right-hand side is decreasing in $h_m$, starting at zero for $h_r$).

5.5 a. Set $f(w) = \varphi w$ and compute
$$df = \begin{pmatrix} \varphi_u u + \varphi & \varphi_v u \\ \varphi_u v & \varphi_v v + \varphi \end{pmatrix} = \begin{pmatrix} \varphi_u u & \varphi_v u \\ \varphi_u v & \varphi_v v \end{pmatrix} + \begin{pmatrix} \varphi & 0 \\ 0 & \varphi \end{pmatrix} = A + \varphi I.$$
Hence an eigenvector of $A$ with eigenvalue $\mu$ is also an eigenvector of $df$ with corresponding eigenvalue $\mu + \varphi$. Using this we find that
$$\lambda_1 = \varphi, \quad r_1 = \begin{pmatrix} \varphi_v \\ -\varphi_u \end{pmatrix}, \qquad \lambda_2 = \varphi + (\varphi_u u + \varphi_v v), \quad r_2 = \begin{pmatrix} u \\ v \end{pmatrix}.$$

b. In this case $\varphi_u = u$ and $\varphi_v = v$, and hence
$$\lambda_1 = \frac{1}{2}\bigl(u^2 + v^2\bigr), \quad r_1 = \begin{pmatrix} v \\ -u \end{pmatrix}, \qquad \lambda_2 = \frac{3}{2}\bigl(u^2 + v^2\bigr), \quad r_2 = \begin{pmatrix} u \\ v \end{pmatrix}.$$
We find that the contours of $\varphi$, i.e., circles about the origin, are contact discontinuities with associated speed equal to half the radius squared. These correspond to the eigenvalue $\lambda_1$. The rarefaction curves of the eigenvalue $\lambda_2$ are half-lines starting at $(u_l, v_l)$ given by
$$\frac{u}{v} = \frac{u_l}{v_l} \quad\text{such that}\quad u^2 + v^2 \ge u_l^2 + v_l^2.$$
For the shock part of the solution the Rankine–Hugoniot relation gives
$$(\varphi - \varphi_l)(v u_l - v_l u) = 0.$$
If $\varphi = \varphi_l$, we are on the contact discontinuity, so we must have
$$\frac{u}{v} = \frac{u_l}{v_l}.$$
This is again a straight line through the origin and through $(u_l, v_l)$. What parts of this line can we use? The shock speed is given by
$$\sigma = \frac{u_l^2 + v_l^2}{2}\Bigl(\Bigl(\frac{v}{v_l}\Bigr)^2 + \frac{v}{v_l} + 1\Bigr) = \frac{u_l^2 + v_l^2}{2}\Bigl(\Bigl(\frac{u}{u_l}\Bigr)^2 + \frac{u}{u_l} + 1\Bigr) = \frac{u_l^2 + v_l^2}{2 u_l v_l}\bigl(uv + \sqrt{u\, u_l\, v\, v_l} + u_l v_l\bigr).$$
Using polar coordinates $r = \sqrt{2\varphi}$, $u = r\cos(\theta)$, and $v = r\sin(\theta)$, we see that
$$\sigma = \varphi_l\Bigl(\Bigl(\frac{r}{r_l}\Bigr)^2 + \frac{r}{r_l} + 1\Bigr).$$
We want (this is an extra condition!) to have $\sigma$ decreasing along the shock path, so we define the admissible part of the Hugoniot locus to be the line segment bounded by $(u_l, v_l)$ and the origin. Now the solution of the Riemann problem is found by first using a contact discontinuity from $(u_l, v_l)$ to some $(u_m, v_m)$, where
$$\varphi(u_l, v_l) = \varphi(u_m, v_m) \quad\text{and}\quad \frac{u_m}{u_r} = \frac{v_m}{v_r},$$
and $(u_m, v_m)$ is on the same side of the origin as $(u_r, v_r)$. Finally, $(u_m, v_m)$ is connected to $(u_r, v_r)$ with a 2-wave.

c. In this case
$$\varphi_u = \varphi_v = \frac{-1}{(1 + u + v)^2},$$
and therefore
$$\lambda_2 = \frac{1}{(1 + u + v)^2}.$$
The calculation regarding the Hugoniot loci remains valid; hence the curves $\varphi = \text{const}$ are contact discontinuities. These are the lines given by $v = c - u$ and $u, v > 0$. The rarefaction curves of the second family remain straight lines through the origin, as are the shocks of the second family. On a Hugoniot curve of the second family, the speed is found to be
$$\sigma = \frac{1}{(1 + u_l + v_l)(1 + u + v)},$$
and if we want the Lax entropy condition to be satisfied, we must take the part of the Hugoniot locus pointing away from the origin. In this case both families of wave curves are straight lines. Such systems are often called line fields, and this system arises in chromatography. Observe that the family denoted by subscript 2 now has the smallest speed. Hence when we find the solution of the Riemann problem, we first use the second family, then the contact discontinuity.

5.6 Integrate the exact solution $u^n_j$ over the rectangle $[(j-1)\Delta x, (j+1)\Delta x] \times [n\Delta t, (n+1)\Delta t]$. The CFL condition yields that no waves cross the lines $(j \pm 1)\Delta x$. Thus
$$\begin{aligned} 0 &= \int_{(j-1)\Delta x}^{(j+1)\Delta x}\int_{n\Delta t}^{(n+1)\Delta t} \bigl((u^n_j)_t + f(u^n_j)_x\bigr)\,dt\,dx \\ &= \int_{(j-1)\Delta x}^{(j+1)\Delta x} u^n_j(x, \Delta t)\,dx - \Delta x\bigl(U^n_{j+1} + U^n_{j-1}\bigr) + \Delta t\bigl(f(U^n_{j+1}) - f(U^n_{j-1})\bigr) \\ &= \int_{(j-1)\Delta x}^{(j+1)\Delta x} u^n_j(x, \Delta t)\,dx - 2\Delta x\, U^{n+1}_j. \end{aligned}$$
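The identity just derived says that $U_j^{n+1}$ is the average of the exact solution over a cell of width $2\Delta x$. For a linear flux $f(u) = au$ the exact Riemann solution is a pure shift, and the identity can be checked to machine precision; the staggered-cell setup below is one illustrative reading of the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear flux f(u) = a*u; the CFL condition a*dt < dx keeps the wave from
# the interface at j*dx away from the cell edges (j +/- 1)*dx.
a, dx = 1.0, 0.1
dt = 0.5 * dx / a
Ul, Ur = rng.normal(size=2)      # the neighboring values U_{j-1} and U_{j+1}

# Scheme value obtained from the cell-average identity (Lax-Friedrichs form):
U_new = 0.5 * (Ur + Ul) - dt / (2 * dx) * (a * Ur - a * Ul)

# Direct average of the exact (shifted) solution over [(j-1)dx, (j+1)dx]
# at time dt: the jump, initially at j*dx, has moved to j*dx + a*dt.
avg = ((dx + a * dt) * Ul + (dx - a * dt) * Ur) / (2 * dx)

assert np.isclose(U_new, avg)
```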
5.7 a. Choose a hypersurface $M$ transverse to $r_k(u_0)$. Then the linear first-order equation $\nabla w(u) \cdot r_k(u) = 0$ with $w$ equal to some regular function $w_0$ on $M$ has a unique solution locally, using, e.g., the method of characteristics. On $M$ we may choose $n - 1$ linearly independent functions, say $w_0^1, \dots, w_0^{n-1}$. The corresponding solutions $w^j$, $j = 1, \dots, n-1$, will have linearly independent gradients. See [128, p. 117] for more details, and [130, p. 321] for a different argument.

b. The $k$th rarefaction curve is the integral curve of the $k$th right eigenvector field, and hence any $k$-Riemann invariant is constant along that rarefaction curve.

c. The set of equations (5.136) yields $n(n-1)$ equations to determine $n$ scalar functions $w^j$. See [42, pp. 127 ff] for more details.

d. For the shallow-water equations we have
$$w_1 = \frac{q}{h} - 2\sqrt{h}, \qquad w_2 = \frac{q}{h} + 2\sqrt{h}.$$

5.8 a. Recall from Exercise 5.2 that the rarefaction curves are given by $\ln(v/v_l) = \mp(u - u_l)$. Therefore, the Riemann invariants are given by
$$w_-(v,u) = \ln(v) + u, \qquad w_+(v,u) = \ln(v) - u.$$

b.–e. Introducing $\mu = w_-$ and $\tau = w_+$, we find that
$$v = \exp\Bigl(\frac{\mu + \tau}{2}\Bigr), \qquad u = \frac{\mu - \tau}{2}.$$
The Hugoniot loci are given by
$$u - u_l = \mp\Bigl(\sqrt{\frac{v}{v_l}} - \sqrt{\frac{v_l}{v}}\Bigr),$$
which in $(\mu, \tau)$ coordinates reads
$$\frac{1}{2}\bigl[(\mu - \mu_l) - (\tau - \tau_l)\bigr] = \mp\Bigl(\exp\Bigl(\frac{\mu - \mu_l + \tau - \tau_l}{4}\Bigr) - \exp\Bigl(\frac{\mu_l - \mu + \tau_l - \tau}{4}\Bigr)\Bigr).$$
Using $\Delta\mu$ and $\Delta\tau$, this relation says that
$$\Delta\mu - \Delta\tau = \mp 4\sinh\Bigl(\frac{\Delta\mu + \Delta\tau}{4}\Bigr). \qquad (C.12)$$
Hence the Hugoniot loci are translation-invariant. The rarefaction curves are coordinate lines, and trivially translation-invariant. To show that $H_-$ (the part of the Hugoniot locus with the minus sign) is the reflection of $H_+$, we note that such a reflection maps
$$\Delta\mu \to \Delta\tau \quad\text{and}\quad \Delta\tau \to \Delta\mu.$$
Hence a reflection leaves the right-hand side of (C.12) invariant, and changes the sign of the left-hand side. Thus the claim follows.

Recalling that the "$-$" wave corresponds to the eigenvalue $+1/v$, we must use the "$+$" waves first when solving the Riemann problem. Therefore, we now term the "$+$" waves 1-waves, and the "$-$" waves 2-waves. Hence the lines parallel to the $\mu$-axis are the rarefaction curves of the first family. Then the 1-wave curve is given by
$$u - u_l = \ln\frac{v}{v_l} \quad\text{for } v > v_l, \qquad u - u_l = \sqrt{\frac{v_l}{v}} - \sqrt{\frac{v}{v_l}} \quad\text{for } u < u_l,$$
which in $(\mu, \tau)$ coordinates reads
$$\tau = \tau_l \quad\text{for } \mu > \mu_l, \qquad \tau = \tau(\mu) \quad\text{for } \mu < \mu_l.$$
Here, the curve $\tau(\mu)$ is given implicitly by (C.12), and we find that
$$\tau'(\mu) = \frac{1 - \cosh(\cdots)}{1 + \cosh(\cdots)}.$$
From this we conclude that
$$0 \ge \frac{d\tau}{d\mu} \ge -1, \qquad \frac{d^2\tau}{d\mu^2} \ge 0,$$
and $\tau(\mu_l) = \tau_l$ and $\lim_{\mu \to -\infty} \tau'(\mu) = -1$. By the reflection property we can also find the second wave curve in a similar manner.

5.9 a. We obtain (cf. (2.10))
$$\begin{aligned} 0 &= \iint (u_t + f(u)_x) \cdot \nabla_u \eta(u)\,\phi\,dx\,dt \\ &= \iint \eta(u)_t\,\phi\,dx\,dt + \iint df(u)u_x \cdot \nabla_u \eta(u)\,\phi\,dx\,dt \\ &= -\iint \eta(u)\phi_t\,dx\,dt + \iint \nabla_u q(u) \cdot u_x\,\phi\,dx\,dt \\ &= -\iint \bigl(\eta(u)\phi_t + q(u)\phi_x\bigr)\,dx\,dt. \end{aligned}$$

b. The right-hand side of (5.137) is the gradient of a vector-valued function $q$ if
$$d^2\eta(u)\,df(u) = df(u)^t\,d^2\eta(u), \qquad (C.13)$$
where $df(u)^t$ denotes the transpose of $df(u)$, and $d^2\eta(u)$ denotes the Hessian of $\eta$. The relation (C.13) follows from the fact that mixed derivatives need to be equal, i.e.,
$$\frac{\partial^2 q}{\partial u_j \partial u_k} = \frac{\partial^2 q}{\partial u_k \partial u_j}.$$
Equation (C.13) imposes $n(n-1)/2$ conditions on the scalar function $\eta$.

c. We obtain
$$\eta(u) = \frac{1}{2}u^2 - \int^v p(y)\,dy, \qquad q(u) = u\,p(v).$$

d. Equation (C.13) reduces in the case of the shallow water equations to one hyperbolic equation,
$$\Bigl(h - \frac{q^2}{h^2}\Bigr)\eta_{qq} = \eta_{hh} + \frac{2q}{h}\,\eta_{hq},$$
with solution
$$\eta(u) = \frac{q^2}{2h} + \frac{h^2}{2}, \qquad Q(u) = \frac{q^3}{2h^2} + hq$$
(where for obvious reasons we have written the entropy flux with capital $Q$ rather than $q$).

5.10 a. Let $r_k$ and $l_k$ denote the right and left eigenvectors, respectively. Thus
$$A r_k = \lambda_k r_k, \qquad l_k A = \lambda_k l_k, \qquad l_j \cdot r_k = \delta_{j,k}.$$
Decompose the left and right states in terms of right eigenvectors as
$$u_l = \sum_{k=1}^n \alpha_k r_k, \qquad u_r = \sum_{k=1}^n \beta_k r_k.$$
Then we can write the solution of the Riemann problem as (see [98, pp. 64 ff])
$$u(x,t) = u_l + \sum_{\substack{k \\ \lambda_k < x/t}} (\beta_k - \alpha_k) r_k = u_r - \sum_{\substack{k \\ \lambda_k > x/t}} (\beta_k - \alpha_k) r_k.$$

b. A more compact form, valid for the general Cauchy problem as well, is
$$u(x,t) = \sum_{k=1}^n l_k \cdot u_0(x - \lambda_k t)\,r_k.$$
Here the initial data is $u_0$, and the left eigenvectors are normalized. Using that
$$u_t = \sum_{k=1}^n (-\lambda_k)\,l_k \cdot u_0'(x - \lambda_k t)\,r_k, \qquad u_x = \sum_{k=1}^n l_k \cdot u_0'(x - \lambda_k t)\,r_k,$$
we see that
$$u_t + A u_x = \sum_{k=1}^n \bigl(-\lambda_k\,l_k \cdot u_0'(x - \lambda_k t)\,r_k + l_k \cdot u_0'(x - \lambda_k t)\,A r_k\bigr) = 0.$$
Decomposing the initial data
$$u_0(x) = \sum_{k=1}^n \alpha_k(x) r_k,$$
we see that $\alpha_k(x) = l_k \cdot u_0(x)$, and hence $u(x,0) = u_0(x)$.

c. In the case of the wave equation we find that
$$A = \begin{pmatrix} 0 & -1 \\ -c^2 & 0 \end{pmatrix}, \quad c > 0,$$
$$\lambda_1 = -c, \quad r_1 = \frac{1}{\sqrt{1 + c^2}}\begin{pmatrix} 1 \\ c \end{pmatrix}, \quad l_1 = \frac{1}{\sqrt{1 + c^2}}\,(c, 1), \qquad \lambda_2 = c, \quad r_2 = \frac{1}{\sqrt{1 + c^2}}\begin{pmatrix} 1 \\ -c \end{pmatrix}, \quad l_2 = \frac{1}{\sqrt{1 + c^2}}\,(-c, 1),$$
when
$$u = \begin{pmatrix} \phi_x \\ \phi_t \end{pmatrix}.$$
The initial data
$$\phi(x, 0) = f(x), \qquad \phi_t(x, 0) = g(x)$$
translates into
$$u_0(x) = \begin{pmatrix} f'(x) \\ g(x) \end{pmatrix}.$$
We conclude that
$$\phi(x,t) = \frac{1}{2}\bigl(f(x + ct) + f(x - ct)\bigr) + \frac{1}{2c}\int_{x - ct}^{x + ct} g(\xi)\,d\xi,$$
the familiar d'Alembert's formula for the solution of the linear wave equation.
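The formula can be verified pointwise by finite differences. The sketch below picks the arbitrary data $f = \cos$ and $g = \cos$ (so the integral of $g$ is available in closed form as $G = \sin$) and checks both the wave equation and the initial conditions.

```python
import numpy as np

c = 2.0
f = np.cos              # phi(x, 0) = f(x)
G = np.sin              # antiderivative of g(x) = cos(x) = phi_t(x, 0)

def phi(x, t):
    """d'Alembert's solution of phi_tt = c^2 phi_xx."""
    return 0.5 * (f(x + c * t) + f(x - c * t)) \
        + (G(x + c * t) - G(x - c * t)) / (2 * c)

x0, t0, h = 0.7, 0.3, 1e-4

# The PDE, checked with central second differences:
phi_tt = (phi(x0, t0 + h) - 2 * phi(x0, t0) + phi(x0, t0 - h)) / h**2
phi_xx = (phi(x0 + h, t0) - 2 * phi(x0, t0) + phi(x0 - h, t0)) / h**2
assert abs(phi_tt - c**2 * phi_xx) < 1e-4

# The initial data:
assert np.isclose(phi(x0, 0.0), f(x0))
phi_t0 = (phi(x0, h) - phi(x0, -h)) / (2 * h)
assert np.isclose(phi_t0, np.cos(x0), atol=1e-6)
```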
Chapter 6, pages 230–231.

6.1 d. Up to second order in $\epsilon$, the wave curves are given by $S_i(\epsilon) = \exp(\epsilon r_i)$. For an interaction between an $i$-wave and a $j$-wave ($j < i$) we have that (up to second order)
$$u_r = \exp(\epsilon_i r_i)\exp(\epsilon_j r_j)\,u_l.$$
After the interaction,
$$u_r = \exp(\epsilon'_n r_n)\cdots\exp(\epsilon'_j r_j)\cdots\exp(\epsilon'_i r_i)\cdots\exp(\epsilon'_1 r_1)\,u_l$$
up to second order. Comparing these two expressions and using part c and the fact that $\{r_i\}$ are linearly independent, we find that
$$\epsilon'_k = \delta_{kj}\,\epsilon_j + \delta_{ki}\,\epsilon_i + \text{second-order terms}.$$
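That only second-order terms survive an interaction reflects the fact that the flows $\exp(\epsilon r_i)$ commute up to $O(\epsilon^2)$. This can be illustrated numerically with two arbitrary smooth vector fields standing in for the eigenvector fields; everything below is an illustration, not the book's construction.

```python
import numpy as np

# Two smooth, hypothetical vector fields playing the role of r_1 and r_2;
# exp(eps * r) denotes the time-eps flow of the ODE u' = r(u).
r1 = lambda u: np.array([1.0, u[0] ** 2])
r2 = lambda u: np.array([u[1], 1.0])

def flow(r, eps, u, steps=200):
    """Integrate u' = r(u) for time eps with classical RK4."""
    h = eps / steps
    for _ in range(steps):
        k1 = r(u)
        k2 = r(u + 0.5 * h * k1)
        k3 = r(u + 0.5 * h * k2)
        k4 = r(u + h * k3)
        u = u + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

u0 = np.array([0.3, -0.2])

def commutator_gap(eps):
    a = flow(r2, eps, flow(r1, eps, u0))   # exp(eps r2) exp(eps r1) u0
    b = flow(r1, eps, flow(r2, eps, u0))   # exp(eps r1) exp(eps r2) u0
    return np.linalg.norm(a - b)

# The gap between the two orders is O(eps^2): halving eps quarters it.
g1, g2 = commutator_gap(0.02), commutator_gap(0.01)
assert 3.5 < g1 / g2 < 4.5
```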
6.2 a. The definition of the front-tracking algorithm follows from the general case and the grid in the $(\mu, \tau)$ plane. In this case we do not need to remove any front of high generation. To show that $T_n$ is nonincreasing in $n$, we must study interactions. In this case we have a left wave $\epsilon_l$ separating states $(\mu_l, \tau_l)$ and $(\mu_m, \tau_m)$, colliding with a wave $\epsilon_r$ separating $(\mu_m, \tau_m)$ and $(\mu_r, \tau_r)$. Now, the family of $\epsilon_l$ must be greater than or equal to the family of $\epsilon_r$. The claim follows by studying each case, and recalling that (consult Exercise 5.8) $0 \ge \tau'(\mu) > -1$.

b. If the total variation of $\mu_0$ and $\tau_0$ is finite, then we have enough to show that front tracking produces a compact sequence. Hence if $\mathrm{T.V.}(\ln(u_0))$ and $\mathrm{T.V.}(v_0)$ are finite, then the sequence produced by front tracking is compact. A reasonable condition could then be that $u_0(x) \ge c > 0$ almost everywhere, $\mathrm{T.V.}(u_0) < M$, and $\mathrm{T.V.}(v_0) < M$.

c. To show that the limit is a weak solution, we proceed as in the general case.
Chapter 7, page 286.

7.1 We have already established A and B in Theorem 6.6. To prove C, let $T$ be defined by (6.19) and $Q$ by (6.18), and assume that $t = \gamma(x)$ is a curve with a Lipschitz constant $L$ that is smaller than the inverse of the largest characteristic speed, i.e.,
$$L < \frac{1}{\max_{u \in D,\,k} |\lambda_k(u)|}.$$
For such a curve we have that all fronts in $u^\delta$ will cross $\gamma$ from below. Let $\gamma_t(x)$ be defined by $\gamma_t(x) = \min\{t, \gamma(x)\}$. Since all fronts will cross $\gamma$, and hence also $\gamma_t$, from below, we can define $T|_{\gamma_t}$. Then we have that
$$T|_{\gamma_t} \le (T + kQ)|_{\gamma_t} \le (T + kQ)(t) \le (T + kQ)(0).$$
Since $\gamma = \lim_{t \to \infty} \gamma_t$, it follows that the total variation of $u^\delta$ on $\gamma$ is finite, independent of $\delta$. Hence the total variation on $\gamma$ of the limit $u$ is also finite.

7.2 In the linearly degenerate case the Hugoniot loci coincide with the rarefaction curves. If you (meticulously) recapitulate all cases, you will find that some of the estimates are easier because of this, while others are identical. If you do this exercise, you have probably mastered the material in Section 7.1!
Appendix A, page 298.

A.1 Observe first that
$$\Bigl|\int \phi(x)f(x)\,dx\Bigr| \le \int |f(x)|\,dx$$
for any function $|\phi| \le 1$. Now let $\omega_\varepsilon$ be a standard mollifier, and define $\phi_\varepsilon = \omega_\varepsilon * \operatorname{sign}(f)$. Clearly, $f\phi_\varepsilon \to |f|$ pointwise, and a simple application of dominated convergence implies that indeed,
$$\int \phi_\varepsilon(x)f(x)\,dx \to \int |f(x)|\,dx \quad\text{as } \varepsilon \to 0.$$

A.2 Write as usual $h^\delta = v^\delta - w^\delta$, where $v^\delta$ and $w^\delta$ both are nondecreasing functions. Both sequences $\{v^\delta\}$ and $\{w^\delta\}$ satisfy the conditions of the theorem, and hence we may pass to a subsequence such that both $v^\delta$ and $w^\delta$ converge in $L^1$. After possibly taking yet another subsequence we conclude that we may obtain pointwise convergence almost everywhere. Let $v$ be the limit of $v^\delta$, which we may assume to be nondecreasing (by possibly redefining it on a set of measure zero). Write $v(x\pm)$ for the right, respectively left, limits of $v$ at $x$. Fix $x \in \langle a, b\rangle$. Let $\varepsilon > 0$ and $\eta > 0$ be such that $v(y) < v(x+) + \varepsilon$ whenever $x < y < x + \eta$. If $v^\delta(x) \ge v(x+) + 2\varepsilon$, then
$$\bigl\|v^\delta - v\bigr\|_1 \ge \int_x^{x+\eta} \bigl(v^\delta(y) - v(y)\bigr)\,dy > \varepsilon\eta,$$
and so, since $\|v^\delta - v\|_1 \to 0$ as $\delta \to 0$, we must conclude that $v^\delta(x) < v(x+) + 2\varepsilon$ for $\delta$ sufficiently small. Similarly, $v^\delta(x) > v(x-) - 2\varepsilon$. In particular, $v^\delta(x) \to v(x)$ whenever $v$ is continuous at $x$, thus at all but a countable set of points $x$. In the same way we show that $w^\delta(x) \to w(x)$ for all but at most a countable set of points $x$. A diagonal argument shows that we can pass to a subsequence such that both $v^\delta$ and $w^\delta$ converge pointwise for all $x$ in $[a, b]$.
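The approximation $\phi_\varepsilon = \omega_\varepsilon * \operatorname{sign}(f)$ from A.1 above can be tested on a grid. The sketch below replaces the compactly supported mollifier by a Gaussian kernel and works on the periodic interval $[0, 2\pi)$ with $f = \sin$; these are conveniences, not part of the argument.

```python
import numpy as np

N = 4000
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
f = np.sin(x)                                  # so that the integral of |f| is 4

def pairing(eps):
    """Approximate the pairing of f with omega_eps * sign(f) on the grid."""
    s = np.arange(N) * dx
    s = np.minimum(s, 2 * np.pi - s)           # periodic distance to 0
    omega = np.exp(-(s / eps) ** 2)
    omega /= omega.sum() * dx                  # unit mass
    # circular convolution via the DFT, scaled by dx to mimic the integral
    phi = np.fft.irfft(np.fft.rfft(np.sign(f)) * np.fft.rfft(omega), N) * dx
    return np.sum(f * phi) * dx

exact = np.sum(np.abs(f)) * dx                 # approximates the L1 norm of f

# |phi_eps| <= 1 makes the pairing a lower bound, and shrinking eps
# drives it up toward the L1 norm, as in the dominated-convergence argument.
assert pairing(0.5) < pairing(0.05) <= exact + 1e-9
assert exact - pairing(0.05) < 0.02
```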
Appendix B, page 312.

B.1 For more details on this exercise, including several applications, as well as how to extend the result to hyperbolic systems in the $\mu \to 0$ limit, please consult [71].

a. Just follow the argument in Theorem B.1.

b. Mimic the argument starting with (B.15). (Let $\delta$ in (B.15) be negative.)

c. Redo the previous point, assuming that $u_{0,j} \ge u^*$.
References
[1] D. Amadori and R. M. Colombo. Continuous dependence for 2 × 2 conservation laws with boundary. J. Differential Equations, 138:229–266, 1997.
[2] D. Amadori and R. M. Colombo. Viscosity solutions and standard Riemann semigroup for conservation laws with boundary. Rend. Semin. Mat. Univ. Padova, 99:219–245, 1998.
[3] K. A. Bagrinovskiĭ and S. K. Godunov. Difference methods for multidimensional problems. Dokl. Akad. Nauk SSSR, 115:431–433, 1957. In Russian.
[4] P. Baiti and A. Bressan. The semigroup generated by a Temple class system with large data. Differ. Integral Equ., 10:401–418, 1997.
[5] P. Baiti and H. K. Jenssen. On the front tracking algorithm. J. Math. Anal. Appl., 217:395–404, 1997.
[6] J. M. Ball. A version of the fundamental theorem for Young measures. In M. Rascle, D. Serre, and M. Slemrod, editors, PDE's and Continuum Models of Phase Transitions, pages 241–259, Berlin, 1989. Springer.
[7] D. B. Ballou. Solutions to nonlinear hyperbolic Cauchy problems without convexity conditions. Trans. Amer. Math. Soc., 152:441–460, 1970.
[8] C. Bardos, A. Y. LeRoux, and J. C. Nedelec. First order quasilinear equations with boundary conditions. Comm. Partial Differential Equations, 4:1017–1034, 1979.
[9] L. M. Barker. SWAP — a computer program for shock wave analysis. Technical report, Sandia National Laboratory, Albuquerque, New Mexico, 1963. SC-4796(RR).
[10] H. Bateman. Some recent researches on the motion of fluids. Monthly Weather Review, 43:163–170, 1915.
[11] S. Bianchini and A. Bressan. BV estimates for a class of viscous hyperbolic systems. Indiana Univ. Math. J., 49:1673–1713, 2000.
[12] S. Bianchini and A. Bressan. A case study in vanishing viscosity. Discrete Contin. Dynam. Systems, 7:449–476, 2001.
[13] S. Bianchini and A. Bressan. Vanishing viscosity solutions of nonlinear hyperbolic systems. Ann. of Math. (2), 161:223–342, 2005.
[14] A. Bressan. Contractive metrics for nonlinear hyperbolic systems. Indiana Univ. Math. J., 37:409–421, 1988.
[15] A. Bressan. Global solutions of systems of conservation laws by wave-front tracking. J. Math. Anal. Appl., 170:414–432, 1992.
[16] A. Bressan. A contractive metric for systems of conservation laws with coinciding shock and rarefaction waves. J. Differential Equations, 106:332–366, 1993.
[17] A. Bressan. The unique limit of the Glimm scheme. Arch. Rat. Mech., 130:205–230, 1995.
[18] A. Bressan. Hyperbolic Systems of Conservation Laws. Oxford Univ. Press, Oxford, 2000.
[19] A. Bressan and R. M. Colombo. The semigroup generated by 2 × 2 conservation laws. Arch. Rat. Mech., 133:1–75, 1995.
[20] A. Bressan, G. Crasta, and B. Piccoli. Well-posedness of the Cauchy problem for n × n systems of conservation laws. Mem. Amer. Math. Soc., 694:1–134, 2000.
[21] A. Bressan and P. Goatin. Oleinik type estimates for n × n conservation laws. J. Differential Equations, 156:26–49, 1998.
[22] A. Bressan and P. LeFloch. Uniqueness of weak solutions to systems of conservation laws. Arch. Rat. Mech., 140:301–317, 1997.
[23] A. Bressan and M. Lewicka. A uniqueness condition for hyperbolic systems of conservation laws. Discrete Contin. Dynam. Systems, 6:673–682, 2000.
[24] A. Bressan, T.-P. Liu, and T. Yang. $L^1$ stability estimates for n × n conservation laws. Arch. Rat. Mech., 149:1–22, 1999.
[25] J. M. Burgers. Applications of a model system to illustrate some points of the statistical theory of free turbulence. Proc. Roy. Neth. Acad. Sci. (Amsterdam), 43:2–12, 1940.
[26] J. M. Burgers. The Nonlinear Diffusion Equation. Reidel, Dordrecht, 1974.
[27] G. Buttazzo, M. Giaquinta, and S. Hildebrandt. One-dimensional Variational Problems. Oxford Univ. Press, Oxford, 1998.
[28] W. Cho-Chun. On the existence and uniqueness of the generalized solutions of the Cauchy problem for quasilinear equations of first order without the convexity condition. Acta Math. Sinica, 4:561–577, 1964.
[29] A. J. Chorin and J. E. Marsden. A Mathematical Introduction to Fluid Mechanics. Springer, New York, third edition, 1993.
[30] K. N. Chueh, C. C. Conley, and J. A. Smoller. Positively invariant regions for systems of nonlinear diffusion equations. Indiana Univ. Math. J., 26:373–392, 1977. [31] B. Cockburn and P.-A. Gremaud. Error estimates for finite element methods for scalar conservation laws. SIAM J. Numer. Anal., 33:522–554, 1996. [32] B. Cockburn and P.-A. Gremaud. A priori error estimates for numerical methods for scalar conservation laws. Part I: The general approach. Math. Comp., 65:533–573, 1996. [33] J. D. Cole. On a quasi-linear parabolic equation occurring in aerodynamics. Quart. Appl. Math., 9:225–236, 1951. [34] R. M. Colombo and N. H. Risebro. Continuous dependence in the large for some equations of gas dynamics. Comm. Partial Differential Equations, 23:1693–1718, 1998. [35] F. Coquel and P. Le Floch. Convergence of finite difference schemes for conservation laws in several space dimensions: The corrected antidiffusive flux approach. Math. Comp., 57:169–210, 1991. [36] F. Coquel and P. Le Floch. Convergence of finite difference schemes for conservation laws in several space dimensions: A general theory. SIAM J. Num. Anal., 30:675–700, 1993. [37] R. Courant and K. O. Friedrichs. Supersonic Flow and Shock Waves. Springer, New York, 1976. [38] M. G. Crandall and A. Majda. The method of fractional steps for conservation laws. Numer. Math., 34:285–314, 1980. [39] M. G. Crandall and A. Majda. Monotone difference approximations for scalar conservation laws. Math. Comp., 34:1–21, 1980. [40] M. G. Crandall and L. Tartar. Some relations between nonexpansive and order preserving mappings. Proc. Amer. Math. Soc., 78:385–390, 1980. [41] C. M. Dafermos. Polygonal approximation of solutions of the initial value problem for a conservation law. J. Math. Anal. Appl., 38:33–41, 1972. [42] C. M. Dafermos. Hyperbolic Conservation Laws in Continuum Physics. Springer, New York, 2000. [43] R. J. DiPerna. Global existence of solutions to nonlinear systems of conservation laws. J. 
Differential Equations, 20:187–212, 1976. [44] R. J. DiPerna. Compensated compactness and general systems of conservation laws. Trans. Amer. Math. Soc., 292:383–420, 1985. [45] R. J. DiPerna. Measure-valued solutions to conservation laws. Arch. Rat. Mech., 88:223–270, 1985. [46] B. Engquist and S. Osher. One-sided difference approximations for nonlinear conservation laws. Math. Comp., 36:321–351, 1981. [47] L. C. Evans and R. F. Gariepy. Measure Theory and Fine Properties of Functions. CRC Press, Boca Raton, Florida, USA, 1992. [48] A. R. Forsyth. Theory of Differential Equations, Part IV — Partial Differential Equations. Cambridge Univ. Press, 1906.
[49] M. Fréchet. Sur les ensembles compacts de fonctions de carrés sommables. Acta Szeged Sect. Math., 8:116–126, 1937.
[50] I. M. Gel'fand. Some problems in the theory of quasilinear equations. Amer. Math. Soc. Transl., 29:295–381, 1963.
[51] J.-F. Gerbeau and B. Perthame. Derivation of viscous Saint–Venant system for laminar shallow water; numerical validation. Discrete Contin. Dynam. Systems Ser. B, 1:89–102, 2001.
[52] J. Glimm. Solutions in the large for nonlinear hyperbolic systems of equations. Comm. Pure Appl. Math., 18:697–715, 1965.
[53] J. Glimm, J. Grove, X. L. Li, K.-M. Shyue, Y. Zeng, and Q. Zhang. Three-dimensional front tracking. SIAM J. Sci. Comput., 19:703–727, 1998.
[54] J. Glimm, J. Grove, X. L. Li, and N. Zhao. Simple front tracking. In G.-Q. Chen and E. DiBenedetto, editors, Nonlinear Partial Differential Equations, volume 238 of Contemp. Math., pages 33–149, Providence, 1999. Amer. Math. Soc.
[55] J. Glimm, E. Isaacson, D. Marchesin, and O. McBryan. Front tracking for hyperbolic systems. Adv. in Appl. Math., 2:91–119, 1981.
[56] J. Glimm, C. Klingenberg, O. McBryan, B. Plohr, D. Sharp, and S. Yaniv. Front tracking and two-dimensional Riemann problems. Adv. in Appl. Math., 6:259–290, 1985.
[57] P. Goatin and P. G. LeFloch. Sharp $L^1$ continuous dependence of solutions of bounded variation for hyperbolic systems of conservation laws. Arch. Rat. Mech., 157:35–73, 2001.
[58] E. Godlewski and P.-A. Raviart. Hyperbolic Systems of Conservation Laws. Mathématiques et Applications, Ellipses, Paris, 1991.
[59] E. Godlewski and P.-A. Raviart. Numerical Approximation of Hyperbolic Systems of Conservation Laws. Springer, Berlin, 1996.
[60] S. K. Godunov. Finite difference method for numerical computation of discontinuous solutions of the equations of fluid dynamics. Math. Sbornik, 47:271–306, 1959. In Russian.
[61] R. Haberman. Mathematical Models. Prentice-Hall, Englewood Cliffs, 1977.
[62] A. Harten, J. M. Hyman, and P. D. Lax. On finite-difference approximations and entropy conditions for shocks. Comm. Pure Appl. Math., 29:297–322, 1976.
[63] G. W. Hedstrom. Some numerical experiments with Dafermos's method for nonlinear hyperbolic equations. In R. Ansorge and W. Törnig, editors, Numerische Lösung nichtlinearer partieller Differential- und Integrodifferentialgleichungen, Lecture Notes in Mathematics, pages 117–138. Springer, Berlin, 1972.
[64] G. W. Hedstrom. The accuracy of Dafermos' method for nonlinear hyperbolic equations. In Proceedings of Equadiff III (Third Czechoslovak Conf. Differential Equations and Their Applications, Brno, 1972), volume 1, pages 175–178, Brno, 1973. Folia Fac. Sci. Natur. Univ. Purkynianae Brunensis, Ser. Monograph.
[65] D. Hoff. Invariant regions for systems of conservation laws. Trans. Amer. Math. Soc., 289:591–610, 1985.
[66] H. Holden and L. Holden. On scalar conservation laws in one-dimension. In S. Albeverio, J. E. Fenstad, H. Holden, and T. Lindstrøm, editors, Ideas and Methods in Mathematics and Physics, pages 480–509. Cambridge Univ. Press, Cambridge, 1992.
[67] H. Holden, L. Holden, and R. Høegh-Krohn. A numerical method for first order nonlinear scalar conservation laws in one-dimension. Comput. Math. Applic., 15:595–602, 1988.
[68] H. Holden, K.-A. Lie, and N. H. Risebro. An unconditionally stable method for the Euler equations. J. Comput. Phys., 150:76–96, 1999.
[69] H. Holden and N. H. Risebro. A method of fractional steps for scalar conservation laws without the CFL condition. Math. Comp., 60:221–232, 1993.
[70] H. Holden and N. H. Risebro. Conservation laws with a random source. Appl. Math. Optim., 36:229–241, 1997.
[71] H. Holden, N. H. Risebro, and A. Tveito. Maximum principles for a class of conservation laws. SIAM J. Appl. Math., 55:651–661, 1995.
[72] E. Hölder. Historischer Überblick zur mathematischen Theorie von Unstetigkeitswellen seit Riemann und Christoffel. In P. L. Butzer and F. Fehér, editors, E. B. Christoffel, pages 412–434. Birkhäuser, Basel, 1981.
[73] E. Hopf. The partial differential equation $u_t + uu_x = \mu u_{xx}$. Comm. Pure Appl. Math., 3:201–230, 1950.
[74] L. Hörmander. Lectures on Nonlinear Hyperbolic Differential Equations. Springer, Berlin, 1997.
[75] Jiaxin Hu and P. G. LeFloch. $L^1$ continuous dependence property for hyperbolic systems of conservation laws. Arch. Rat. Mech., 151:45–93, 2000.
[76] H. Hugoniot. Sur un théorème général relatif à la propagation du mouvement dans les corps. C. R. Acad. Sci. Paris Sér. I Math., 102:858–860, 1886.
[77] H. Hugoniot. Mémoire sur la propagation du mouvement dans les corps et spécialement dans les gaz parfaits. J. l'École Polytechn., 57:3–97, 1887; ibid. 58:1–125, 1887. Translated into English in [79], pp. 161–243, 245–358.
[78] H. Hugoniot. Mémoire sur la propagation du mouvement dans un fluide indéfini. J. Math. Pures Appl., 3:477–492, 1887; ibid. 4:153–167, 1887.
[79] J. N. Johnson and R. Chéret, editors. Classical Papers in Shock Compression Science. Springer, New York, 1998.
[80] K. Hvistendahl Karlsen. On the accuracy of a dimensional splitting method for scalar conservation laws. Master's thesis, University of Oslo, 1994.
[81] K. Hvistendahl Karlsen. On the accuracy of a numerical method for two-dimensional scalar conservation laws based on dimensional splitting and front tracking. Preprint Series 30, Department of Mathematics, University of Oslo, 1994.
[82] K. Hvistendahl Karlsen and N. H. Risebro. An operator splitting method for convection–diffusion equations. Numer. Math., 77:365–382, 1997.
[83] J. Kevorkian. Partial Differential Equations. Analytical Solution Techniques. Wadsworth & Brooks/Cole, Pacific Grove, 1990.
[84] C. Klingenberg and N. H. Risebro. Stability of a resonant system of conservation laws modeling polymer flow with gravitation. J. Differential Equations, 170:344–380, 2001.
[85] A. N. Kolmogorov. On the compactness of sets of functions in the case of convergence in mean. Nachr. Ges. Wiss. Göttingen, Math.-Phys. Kl. II, (9):60–63, 1931. Translated and reprinted in: Selected Works of A. N. Kolmogorov, Vol. I (V. M. Tikhomirov, ed.), Kluwer, Dordrecht, 1991, pp. 147–150.
[86] D. Kröner. Numerical Schemes for Conservation Laws. Wiley–Teubner, Chichester, 1997.
[87] S. N. Kružkov. Results concerning the nature of the continuity of solutions of parabolic equations, and some of their applications. Math. Notes, 6:517–523, 1969.
[88] S. N. Kružkov. First order quasi-linear equations in several independent variables. Math. USSR Sbornik, 10:217–243, 1970.
[89] N. N. Kuznetsov. On stable methods of solving a quasi-linear equation of first order in a class of discontinuous functions. Soviet Math. Dokl., 16:1569–1573, 1975.
[90] N. N. Kuznetsov. Accuracy of some approximative methods for computing the weak solutions of a first-order quasi-linear equation. USSR Comput. Math. and Math. Phys. Dokl., 16:105–119, 1976.
[91] J. O. Langseth. On an implementation of a front tracking method for hyperbolic conservation laws. Advances in Engineering Software, 26:45–63, 1996.
[92] J. O. Langseth, N. H. Risebro, and A. Tveito. A conservative front tracking scheme for 1D hyperbolic conservation laws. In A. Donato and F. Oliveri, editors, Nonlinear Hyperbolic Problems: Theoretical, Applied and Computational Aspects, pages 385–392, Braunschweig, 1992. Vieweg.
[93] J. O. Langseth, A. Tveito, and R. Winther. On the convergence of operator splitting applied to conservation laws with source terms. SIAM J. Appl. Math., 33:843–863, 1996.
[94] P. D. Lax. Weak solutions of nonlinear hyperbolic equations and their numerical computation. Comm. Pure Appl. Math., 7:159–193, 1954.
[95] P. D. Lax. Hyperbolic systems of conservation laws. II. Comm. Pure Appl. Math., 10:537–566, 1957.
[96] P. D. Lax. Hyperbolic Systems of Conservation Laws and the Mathematical Theory of Shock Waves. CBMS–NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1973.
[97] P. D. Lax and B. Wendroff. Systems of conservation laws. Comm. Pure Appl. Math., 13:217–237, 1960.
[98] R. J. LeVeque. Numerical Methods for Conservation Laws. Birkhäuser, Basel, second edition, 1992.
[99] K.-A. Lie. Front Tracking and Operator Splitting for Convection Dominated Problems. PhD thesis, NTNU, Trondheim, 1998.
[100] K.-A. Lie, V. Haugse, and K. Hvistendahl Karlsen. Dimensional splitting with front tracking and adaptive grid refinement. Numerical Methods for Part. Diff. Eq., 14:627–648, 1997.
[101] M. J. Lighthill and G. B. Whitham. On kinematic waves. II. Theory of traffic flow on long crowded roads. Proc. Roy. Soc. London. Ser. A, 229:317–345, 1955.
[102] T.-P. Liu. The deterministic version of the Glimm scheme. Comm. Math. Phys., 57:135–148, 1977.
[103] T.-P. Liu. Large time behavior of solutions of initial and initial-boundary value problems of a general system of hyperbolic conservation laws. Comm. Math. Phys., 55:163–177, 1977.
[104] T.-P. Liu. Hyperbolic and Viscous Conservation Laws. CBMS–NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1999.
[105] T.-P. Liu and J. Smoller. On the vacuum state for isentropic gas dynamics. Adv. Pure Appl. Math., 1:345–359, 1980.
[106] B. J. Lucier. A moving mesh numerical method for hyperbolic conservation laws. Math. Comp., 46:59–69, 1986.
[107] J. Málek, J. Nečas, M. Rokyta, and M. Růžička. Weak and Measure-valued Solutions to Evolutionary PDEs. Chapman & Hall, London, 1996.
[108] R. McOwen. Partial Differential Equations. Prentice-Hall, Upper Saddle River, New Jersey, USA, 1996.
[109] T. Nishida and J. Smoller. Solutions in the large for some nonlinear hyperbolic conservation laws. Comm. Pure Appl. Math., 26:183–200, 1973.
[110] O. A. Oleĭnik. Discontinuous solutions of non-linear differential equations. Amer. Math. Soc. Transl. Ser., 26:95–172, 1963.
[111] O. A. Oleĭnik. Uniqueness and stability of the generalized solution of the Cauchy problem for a quasi-linear equation. Amer. Math. Soc. Transl. Ser., 33:285–290, 1963.
[112] O. A. Oleĭnik and S. N. Kružkov. Quasi-linear second-order parabolic equations with many independent variables. Russian Math. Surveys, 16(5):105–146, 1961.
[117] M. Renardy and R. C. Rogers. An Introduction to Partial Differential Equations. Springer, New York, 1993.
[118] P. I. Richards. Shock waves on the highway. Oper. Res., 4:42–51, 1956.
[119] G. F. B. Riemann. Selbstanzeige: Ueber die Fortpflanzung ebener Luftwellen von endlicher Schwingungsweite. Göttinger Nachrichten, (19):192–197, 1859.
[120] G. F. B. Riemann. Ueber die Fortpflanzung ebener Luftwellen von endlicher Schwingungsweite. Abh. König. Gesell. Wiss. Göttingen, 8:43–65, 1860. Translated into English in [79], pp. 109–128.
[121] M. Riesz. Sur les ensembles compacts de fonctions sommables. Acta Szeged Sect. Math., 6:136–142, 1933.
[122] N. H. Risebro. The partial differential equation u_t + ∇f(u) = 0. A numerical method. Master's thesis, University of Oslo, 1987.
[123] N. H. Risebro. A front-tracking alternative to the random choice method. Proc. Amer. Math. Soc., 117:1125–1129, 1993.
[124] N. H. Risebro and A. Tveito. Front tracking applied to a nonstrictly hyperbolic system of conservation laws. SIAM J. Sci. Stat. Comput., 12:1401–1419, 1991.
[125] N. H. Risebro and A. Tveito. A front tracking method for conservation laws in one dimension. J. Comp. Phys., 101:130–139, 1992.
[126] B. L. Roždestvenskiĭ and N. N. Janenko. Systems of Quasilinear Equations and Their Applications to Gas Dynamics. Amer. Math. Soc., Providence, 1983.
[127] M. Schatzman. Continuous Glimm functionals and uniqueness of solutions of the Riemann problem. Indiana Univ. Math. J., 34:533–589, 1985.
[128] D. Serre. Systems of Conservation Laws. Volume 1. Cambridge Univ. Press, Cambridge, 1999.
[129] D. Serre. Systems of Conservation Laws. Volume 2. Cambridge Univ. Press, Cambridge, 2000.
[130] J. Smoller. Shock Waves and Reaction–Diffusion Equations. Springer, New York, second edition, 1994.
[131] J. Smoller and B. Temple. Global solutions of the relativistic Euler equations. Comm. Math. Phys., 156:67–99, 1993.
[132] S. L. Sobolev. Applications of Functional Analysis in Mathematical Physics. Amer. Math. Soc., Providence, 1963.
[133] B. K. Swartz and B. Wendroff. AZTEK: A front tracking code based on Godunov's method. Appl. Num. Math., 2:385–397, 1986.
[134] A. Szepessy. An existence result for scalar conservation laws using measure valued solutions. Comm. Partial Differential Equations, 14:1329–1350, 1989.
[135] J. D. Tamarkin. On the compactness of the space Lp. Bull. Amer. Math. Soc., 32:79–84, 1932.
[136] Pan Tao and Lin Longwei. The global solution of the scalar nonconvex conservation law with boundary condition. J. Partial Differential Equations, 8:371–383, 1995.
[137] B. Temple. Systems of conservation laws with invariant submanifolds. Trans. Amer. Math. Soc., 280:781–795, 1983.
[138] Z.-H. Teng. On the accuracy of fractional step methods for conservation laws in two dimensions. SIAM J. Num. Anal., 31:43–63, 1994.
[139] J. W. Thomas. Numerical Partial Differential Equations. Conservation Laws and Elliptic Equations. Springer, New York, 1999.
[140] E. Toro. Riemann Solvers and Numerical Methods for Fluid Mechanics. Springer, Berlin, 1997.
[141] A. Tulajkov. Zur Kompaktheit im Raum Lp für p = 1. Nachr. Ges. Wiss. Göttingen, Math.-Phys. Kl. I, (39):167–170, 1933.
[142] A. Tveito and R. Winther. Existence, uniqueness, and continuous dependence for a system of hyperbolic conservation laws modeling polymer flooding. SIAM J. Math. Anal., 22:905–933, 1991.
[143] A. I. Vol'pert. The spaces BV and quasi-linear equations. Math. USSR Sbornik, 2:225–267, 1967.
[144] E. T. Whittaker and G. N. Watson. A Course of Modern Analysis. Cambridge Univ. Press, Cambridge, fourth edition, 1999.
[145] J. Wloka. Funktionalanalysis und Anwendungen. de Gruyter, Berlin, 1971.
[146] W.-A. Yong. A simple approach to Glimm's interaction estimates. Appl. Math. Lett., 12:29–34, 1999.
[147] K. Yosida. Functional Analysis. Springer, Berlin, 1995. Reprint of the 1980 edition.
[148] W. P. Ziemer. Weakly Differentiable Functions. Springer, New York, 1989.
Index
approximate δ distribution, 45
approximately continuous, 290
Arzelà–Ascoli theorem, 295, 305
Borel measure, 273
bores, 175
breakpoint, 34
Buckley–Leverett equation, 58, 67
Burgers' equation (inviscid), 3, 4, 8, 11, 18, 21, 36, 56, 58, 60, 113
Burgers' equation (viscous), 18
characteristic equation, 3
characteristics, 3, 16, 17, 19
chromatography, 340
Cole–Hopf transformation, 18
conservation laws, 1
conservative method, 65
consistent method, 65
contact discontinuity, 171
Courant–Friedrichs–Lewy (CFL) condition, 66
Crandall–Tartar's lemma, 51, 55, 57, 74, 304
Dafermos' method, 56
dam breaking, 197
dimensional splitting, 117, 157
directional derivative, 192
distributions, 6
  derivatives of, 6
downwind scheme, 66
Duhamel's principle, 300
Engquist–Osher's scheme, 112
entropy condition, 14
  Kružkov, 26, 119
  Lax, 186
  Oleĭnik, 114
  traveling wave, 24
  vanishing viscosity, 299–312
entropy/entropy flux pair, 59, 76, 203
ε-net, 293
essential variation, 290
Euler's equations, 167, 201
finite speed of propagation, 37
first-order convergence, 55
flux density, 2
flux function, 2
fractional steps method, 117, 157
front
  generation of, 216
  strength of, 209
front tracking, 17, 36, 49, 57, 126
  two-dimensional, 158
front tracking in a box
  scalar case, 40
  systems, 218
fronts
  scalar case, 40, 41
  systems, 208
generalized functions, 6
generation of front, 216
genuinely nonlinear, 169, 170
Glimm's functional, 215
Godunov's scheme, 66, 112
Green's theorem, 10, 32
Gronwall's inequality, 155, 160, 304
Hamilton–Jacobi equation, 57, 60
heat kernel, 146, 300
Helly's theorem, 57, 296, 298
Hugoniot locus, 175
hybrid methods, 66
hyperbolic, 168
hyperfast, 41, 216
implicit function theorem, 177, 201, 203
interaction estimate, 211
interaction potential, 213
invariant regions, 313
Jordan decomposition, 288
Kolmogorov's compactness theorem, 294, 298
Kružkov's interpolation lemma, 149, 158
Kružkov entropy condition, 26, 34, 36, 43, 53, 76, 153
  multidimensional, 119
Kružkov entropy solution, 28, 57
Kružkov function, 27
Kuznetsov's lemma, 82, 135
Kuznetsov's theory, 57
Lagrangian coordinates, 36
Lax entropy condition, 186, 200, 224
Lax inequalities, 185
Lax shock
  definition of, 187
Lax's theorem, 194
Lax–Friedrichs' scheme, 65, 71, 74, 112, 159, 201
Lax–Wendroff's scheme, 66, 112, 157
Lax–Wendroff's theorem, 71, 112
Lebesgue point, 290
level set methods, 57
Lie bracket, 212, 230
line fields, 340
linearly degenerate, 36, 169, 171
linearly degenerate wave, 171
Lipschitz constant, 45
Lipschitz continuity in time, 55
Lipschitz continuous, 33, 45
Lipschitz seminorm, 45
local truncation error, 69
L1-contractive, 55, 73
lower convex envelope, 30
MacCormack's scheme, 66, 112
maximum principle, 40, 55, 303
measure-valued solutions, 99
model equation, 71
modulus of continuity, 288
  in time, 81
mollifier, 45
monotone method, 73
monotone scheme, 77, 88, 111, 112
monotonicity, 55, 58
monotonicity preserving, 55, 73
Moses' problems
  first, 198
  second, 199
multi-index, 148
negative variation, 288
numerical entropy flux, 77
numerical flux, 65
Oleĭnik entropy condition, 114
operator splitting methods, 117
p-system, 167, 201
parabolic regularization, 158
positive variation, 288
quasilinear equation, 3, 19
Rademacher's theorem, 271
Radon measure, 273
random choice method, 157, 229
Rankine–Hugoniot condition, 10, 19, 20, 25, 59, 60, 175
rarefaction front, 217
rarefaction wave, 16, 34, 169
real state, 227
relatively compact, 293
Richtmyer two-step Lax–Wendroff scheme, 66
Riemann invariants, 202
  coordinate system of, 202
Riemann problem, 13, 14, 16, 17, 19, 30, 33, 61
shallow-water equations, 164, 168, 172, 175, 180, 189, 196, 200, 201
  bores, 175
  conservation of
    energy, 181
    mass, 164
    momentum, 164
  hydrostatic balance, 165
  pressure, 165
  traveling wave, 183
  vacuum, 197
shock, 34
  admissible, 186
shock line, 225
smoothing method, 84
strictly hyperbolic, 168
strongly compact, 292
structure theorem for BV functions, 267
Temple class systems, 235
test functions, 6
total variation, 50, 287, 298
total variation diminishing (TVD), 55, 73
total variation stable, 73
totally bounded, 293
traffic flow, 11, 18, 34
traffic hydrodynamics, 18
traveling wave, 25, 183
traveling wave entropy condition, 24, 25
TVD, see total variation diminishing
upper concave envelope, 33
upwind scheme, 65, 66
vanishing viscosity method, 87
viscosity solution, 60
viscous profile, 25
viscous regularization, 24, 56
wave
  family of, 169
wave curve, 188
wave equation, 166
  d'Alembert's formula, 344
waves
  systems, 208
weak solution, 8
Young's theorem, 100