P.P.G. Dyke
An Introduction to Laplace Transforms and Fourier Series
D
SPRINGER
1!1
UNDERCiRADUAT£
Cl
MATHEHAnCS
D
SERIES
Springer Undergraduate Mathematics Series
Springer
London Berlin
Heidelberg New York Barcelona Hong Kong Milan Paris Singapore Tokyo
Advisory Board P.J. Cameron Queen Mary and Westfield College M.A.J. Chaplain University of Dundee K. Erdmann Oxford University L.C.G. Rogers University of Bath E. Stili Oxford University J.F. Toland University of Bath
Other books in this series A First Course in Discrete Mathematics I. Anderson Analytic Methods for Partial Differential Equations G. Evans,]. Blackledge, P. Yardley Applied Geometry for Computer Graphics and CAD D. Marsh Basic Linear Algebra T.S. Blyth and E.F. Robertson Basic Stochastic Processes Z. Brzezniak and T. Zastawniak Elementary Differential Geometry A. Pressley Elementary Number Theory G.A. ]ones and ].M. ]ones Elements of Logic via Numbers and Sets D.L. johnson Groups, Rings and Fields D.A.R. Wallace Hyperbolic Geometry ]. W. Anderson
Information and Coding Theory G.A. ]ones and J.M. ]ones Introduction to Laplace Transforms and Fourier Series P.P.G. Dyke Introduction to Ring Theory P.M. Cohn Introductory Mathematics: Algebra and Analysis G. Smith Introductory Mathematics: Applications and Methods G.S. Marshall Linear Functional Analysis B.P. Rynne and M.A. Youngson Measure, Integral and Probability M. Capifzksi and E. Kopp Multivariate Calculus and Geometry S. Dineen
Numerical Methods for Partial Differential Equations G. Evans,/. Blackledge, P. Yardley Sets, Logic and Categories P. Cameron
Topics in Group Theory G. C. Smith and 0. Tabachnikova Topologies and Uniformities J.M. ]ames Vector Calculus P.C. Matthews
P.P.G. Dyke
An Introduction to
Laplace Transforms
and Fourier Series With 51 Figures
�Springer
Philip P.G. Dyke, BSc, PhD Professor of Applied Mathematics, University of Plymouth, Drake Circus, Plymouth, Devon, PL4 BAA, UK
Cover illustration elements reproduced /Jy kind permission of.
Aptech Systems, Inc., Publishers of the GAUSS Mathematical and Statistical System, 23804 S.E. Kent-Kangley Road, Maple Valley, WA 98038, USA. Te� (206)432 -7855 Fax (206)432 -7832 email:
[email protected] URL: www.aptech.com American Statistical Association: Chance Vol 8 No 1 , 1995 article by KS and KW Heiner "rree Rings of the Northern Shawangunks' page 32 fig 2 Springer-Verlag: Mathematica in Education and Research Vol4 Issue 3 1995 article by Roman E Maeder, Beatrice Amrhein and Oliver Gloor 'Illustrated Mathematics: Visualization of Mathematical Objects' page9 fig !1 , originally published"as a CD ROM 'll!ustrated Mathematics' by TELOS: ISBN 0-387-14222-3, German edition by Birlchauser: ISBN 3-7643-5100-4. Mathematica in Education and Research Vol41ssue 3 1995 article by Richard J Gaylord and Kazume Nishidate "rraffic Engineering with Cellular Automata' page 35 fig 2. Mathematica in Education and Research Vol 5 Issue 2 1996 article by Michael Trott "!"he lmplicitization of a Trefoil Knot' page 14. Mathematica in Education and Research Vol 5 Issue 2 1996 article by Lee de Cola 'Coins, Trees, Bars and Bells: Simulation of the Binomial Pro cess page 19 fig 3. Mathematica in Education and Research Vol 51ssue 2 1996 article by Richard Gaylord and Kazume Nishidate 'Contagious Spreading' page 33 fig !. Mathematica in Education and Research Vol 5 Issue 2 1996 article by Joe Buhler and Stan Wagon 'Secrets of the Madelung Constant' page 50 fig !.
British Library Cataloguing in Publication Data Dyke, P.P.G. An introduction to Laplace transforms and Fourier series. (Springer undergraduate mathematics series) 1. Fourier series 2. Laplace transformation 3. Fourier transformations 4. Fourier series - Problems, exercises, etc. 5. Laplace transformations - Problems, exercises, etc. 6. Fourier transformations - Problems, exercises, etc. I. Title 515.7'23 ISBN 1852330155 Library of Congress Cataloging-in-Publication Data Dyke, P.P.G. An introduction to Laplace transforms and Fourier series./ P.P.G. Dyke p. em. -- (Springer undergraduate mathematics series) Includes index. ISBN 1-85233-015-5 (alk. paper) 1. Laplace transformation. 2. Fourier series. I. Title. II. Series. QA432.D94 1999 98-47927 CIP 515'.723-dc21 Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers. Springer Undergraduate Mathematics Series ISSN 1615-2085 ISBN 1-85233-015-5 Springer-Verlag London Berlin Heidelberg Springer-Verlag is a part of Springer Science+Business Media springeronline.com
© Springer-Verlag London Linlited 2001 Printed in Great Britain 3rd printing 2004
The use of registered names, trademarks etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use. The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made. Typesetting: Camera ready by the author and Michael Mackey Printed and bound at the Athenaeum Press Ltd., Gateshead, Tyne & Wear 12/3830-5432 Printed on acid-free paper SPIN 10980033
v
To Ottilie
Preface
This book has been primarily written for the student of mathematics who is in the second year or the early part of the third year of an undergraduate course. It will also be very useful for students of engineering and the physical sciences for whom Laplace Transforms continue to be an extremely useful tool. The book demands no more than an elementary knowledge of calculus and linear algebra of the type found in many first year mathematics modules for applied subjects. For mathematics majors and specialists, it is not the mathematics that will be challenging but the applications to the real world. The author is in the privileged position of having spent ten or so years outside mathematics in an engineering environment where the Laplace Transform is used in anger to solve real problems, as well as spending rather more years within mathematics where accuracy and logic are of primary importance. This book is written unashamedly from the point of view of the applied mathematician. The Laplace Transform has a rather strange place in mathematics. There is no doubt that it is a topic worthy of study by applied mathematicians who have one eye on the wealth of applications; indeed it is often called Operational Calculus. However, because it can be thought of as specialist, it is often absent from the core of mathematics degrees, turning up as a topic in the second half of the second year when it comes in handy as a tool for solving certain breeds of differential equation. On the other hand, students of engineering (particularly the electrical and control variety) often meet Laplace Transforms early in the first year and use them to solve engineering problems. It is for this kind of appli cation that software packages (MATLAB@, for example) have been developed. These students are not expected to understand the theoretical basis of Laplace Transforms. What I have attempted here is a mathematical look at the Laplace Transform that demands no more of the reader than a knowledge of elementary calculus. The Laplace Transform is seen in its typical guise as a handy tool for solving practical mathematical problems but, in addition, it is also seen as a particularly good vehicle for exhibiting fundamental ideas such as a mapping, linearity, an operator, a kernel and an image. These basic principals are covered vii
viii
in the first three chapters of the book. Alongside the Laplace Thansform, we develop the notion of Fourier series from first principals. Again no more than a working knowledge of trigonometry and elementary calculus is required from the student. Fourier series can be introduced via linear spaces, and exhibit properties such as orthogonality, linear independence and completeness which are so central to much of mathematics. This pure mathematics would be out of place in a text such as this, but Appendix C contains much of the background for those interested. In Chapter 4 Fourier series are introduced with an eye on the practical applications. Nevertheless it is still useful for the student to have encountered the notion of a vector space before tackling this chapter. Chap ter 5 uses both Laplace Thansforms and Fourier series to solve partial differential equations. In Chapter 6, Fourier Thansforms are discussed in their own right, and the link between these, Laplace Thansforms and Fourier series is established. Finally, complex variable methods are introduced and used in the last chapter. Enough basic complex variable theory to understand the inversion of Laplace Thansforms is given here, but in order for Chapter 7 to be fully appreciated, the student will already need to have a working knowledge of complex variable the ory before embarking on it. There are plenty of sophisticated software packages around these days, many of which will carry out Laplace Thansform integrals, the inverse, Fourier series and Fourier Thansforms. In solving real-life problems, the student will of course use one or more of these. However this text introduces the basics; as necessary as a knowledge of arithmetic is to the proper use of a calculator. At every age there are complaints from teachers that students in some re spects fall short of the calibre once attained. In this present era, those who teach mathematics in higher education complain long and hard about the lack of stamina amongst today's students. If a problem does not come out in a few lines, the majority give up. I suppose the main cause of this is the com puter/video age in which we live, in which amazing eye catching images are available at the touch of a button. However, another contributory factor must be the decrease in the time devoted to algebraic manipulation, manipulating fractions etc. in mathematics in the 11-16 age range. Fortunately, the impact of this on the teaching of Laplace Thansforms and Fourier series is perhaps less than its impact in other areas of mathematics. (One thinks of mechanics and differential equations as areas where it will be greater.) Having said all this, the student is certainly encouraged to make use of good computer algebra packages (e.g. MAPLE@, MATHEMATICA@, DERIVE@, MACSYMA@) where ap propriate. Of course, it is dangerous to rely totally on such software in much the same way as the existence of a good spell-checker is no excuse for giving up the knowledge of being able to spell, but a good computer algebra package can facilitate factorisation, evaluation of expressions, performing long winded but otherwise routine calculus and algebra. The proviso is always that students must what they are doing before using packages as even modern day computers can still be extraordinarily dumb! In writing this book, the author has made use of many previous works on the subject as well as unpublished lecture notes and examples. It is very diffi-
understand
ix
cult to know the precise source of examples especially when one has taught the material to students for some years, but the major sources can be found in the bibliography. I thank an anonymous referee for making many helpful sugges tions. It is also a great pleasure to thank my daughter Ottilie whose familiarity and expertise with certain software was much appreciated and it is she who has produced many of the diagrams. The text itself has been produced using J¥.£EX. P P G Dyke Professor of Applied Mathematics University of Plymouth January 1999
Contents
1. The Laplace Transform
1.1 1.2 1.3 1.4
1
Introduction . . . . . . The Laplace Transform Elementary Properties Exercises . . . . . . . .
1 2 5 11
2. Further Properties of the Laplace Transform
2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8
Real Functions . . . . . . . . . . . . . . . . . . Derivative Property of the Laplace Transform . Heaviside's Unit Step Function Inverse Laplace Transform . Limiting Theorems . . The Impulse Function Periodic Functions Exercises . . . . . . .
13
13 14 18 19 23 25 32 34
3. Convolution and the Solution of Ordinary Differential Equations 37
3.1 Introduction . . . . . . . . . . . . 3.2 Convolution . . . . . . . . . . . . 3.3 Ordinary Differential Equations . 3.3.1 Second Order Differential Equations 3.3.2 Simultaneous Differential Equations 3.4 Using Step and Impulse Functions 3.5 Integral Equations 3.6 Exercises . . . . . . . . . . . . . .
4. Fourier Series
37 37 49 54 63 68 73 75 79
4.1 Introduction . . . . . . . . . . 4.2 Definition of a Fourier Series
79 81 xi
XII
CONTENTS
4.3 4.4 4.5 4.6 4. 7
Odd and Even Functions . Complex Fourier Series . . Half Range Series . . . . . Properties of Fourier Series Exercises . . . . . . . . .
91 94 96 101 108
.
5. Partial Differential Equations
5.1 5.2 5.3 5.4 5.5 5.6
Introduction . . . . . . . . . . Classification of Partial Differential Equations . Separation of Variables . . . . . . . . . . . Using Laplace Transforms to Solve PDEs Boundary Conditions and Asymptotics . Exercises . . . . . . . . . . . . . . . . . .
6. Fourier Thansforms
6.1 6.2 6.3 6.4 6.5 6.6
Introduction . . . . . . . . . . . . . . . . . Deriving the Fourier Transform . . . . . . Basic Properties of the Fourier Transform Fourier Transforms and PDEs . Signal Processing . Exercises . . . . . . . . . . . .
7. Complex Variables and Laplace Thansforms
7.1 7.2 7.3 7.4 7.5 7.6 7. 7
Introduction . . . . . . . . . . . . Rudiments of Complex Analysis . Complex Integration . . . . . . Branch Points . . . . . . . . . . . The Inverse Laplace Transform . Using the Inversion Formula in Asymptotics . Exercises . . . . . . . . . . . . . . . . . . . .
111
111 113 115 118 123 126
129
129 129 134 142 146 153
157
157 157 160 167 172 177 181
A. Solutions to Exercises
185
B. Table of Laplace Thansforms
227
C. Linear Spaces
231
C.1 Linear Algebra . . . . . . . . . . . . . . . . . C.2 Gramm-Schmidt Orthonormalisation Process
231 . 243
Bibliography
244
Index
246
1
The Laplace Transform
1.1
Introduction
As a discipline, mathematics encompasses a vast range of subjects. In pure mathematics an important concept is the idea of an axiomatic system whereby axioms are proposed and theorems are proved by invoking these axioms logi cally. These activities are often of little interest to the applied mathematician to whom the pure mathematics of algebraic structures will seem like tinkering with axioms for hours in order to prove the obvious. To the engineer, this kind of pure mathematics is even more of an anathema. The value of knowing about such structures lies in the ability to generalise the "obvious" to other areas. These generalisations are notoriously unpredictable and are often very surprising. In deed, many say that there is no such thing as non-applicable mathematics, just mathematics whose application has yet to be found. The Laplace Transform expresses the conflict between pure and applied mathematics splendidly. There is a temptation to begin a book such as this on linear algebra outlining the theorems and properties of normed spaces. This would indeed provide a sound basis for future results. However most applied mathematicians and all engineers would probably turn off. On the other hand, engineering texts present the Laplace Transform as a toolkit of results with little attention being paid to the underlying mathematical structure, regions of valid ity or restrictions. What has been decided here is to give a brief introduction to the underlying pure mathematical structures, enough it is hoped for the pure mathematician to appreciate what kind of creature the Laplace Transform is, whilst emphasising applications and giving plenty of examples. The point of view from which this book is written is therefore definitely that of the applied mathematician. However, pure mathematical asides, some of which can be quite 1
An I ntroduction to Laplace Transforms and Fourier Series
2
..C{F(t)}
t
=
f(s)
s space
space
Figure 1.1: The Laplace Transform as a mapping extensive, will occur. It remains the view of this author that Laplace Transforms only come alive when they are used to solve real problems. Those who strongly disagree with this will find pure mathematics textbooks on integral transforms much more to their liking. The main area of pure mathematics needed to understand the fundamental properties of Laplace Transforms is analysis and, to a lesser extent the normed vector space. Analysis, in particular integration, is needed from the start as it governs the existence conditions for the Laplace Transform itself; however as is soon apparent, calculations involving Laplace Transforms can take place without explicit knowledge of analysis. Normed vector spaces and associated linear algebra put the Laplace Transform on a firm theoretical footing, but can be left until a little later in a book aimed at second year undergraduate mathematics students. 1.2
The Laplace Transform
The definition of the Laplace Transform could hardly be more straightforward. Given a suitable function F(t) the Laplace Transform, written f(s) is defined by f(s) = F(t)e -st dt.
100
This bald statement may satisfy most engineers, but not mathematicians. The question of what constitutes a "suitable function" will now be addressed. The integral on the right has infinite range and hence is what is called an improper integral. This too needs careful handling. The notation .C{F(t)} is used to denote the Laplace Transform of the function F(t). Another way of looking at the Laplace Transform is as a mapping from points in the t domain to points in the s domain. Pictorially, Figure 1.1 indicates this mapping process. The time domain t will contain all those functions F(t) whose Laplace Transform exists, whereas the frequency domain s contains all the
1 . The Laplace Transform
3
images .C { F( t)}. Another aspect of Laplace Transforms that needs mentioning at this stage is that the variable s often has to take complex values. This means that f ( s) is a function of a complex variable, which in turn places restrictions on the (real) function F(t) given that the improper integral must converge. Much of the analysis involved in dealing with the image of the function F(t) in the s plane is therefore complex analysis which may be quite new to some readers. As has been said earlier, engineers are quite happy to use Laplace Trans forms to help solve a variety of problems without questioning the convergence of the improper integrals. This goes for some applied mathematicians too. The ar gument seems to be on the lines that if it gives what looks a reasonable answer, then fine! In our view, this takes the engineer's maxim "if it ain't broke, don't fix it" too far. This is primarily a mathematics textbook, therefore in this opening chapter we shall be more mathematically explicit than is customary in books on Laplace Transforms. In Chapter 4 there is some more pure mathematics when Fourier series are introduced. That is there for similar reasons. One mathe matical question that ought to be asked concerns uniqueness. Given a function F(t), its Laplace Transform is surely unique from the well defined nature of the improper integral. However, is it possible for two different functions to have the same Laplace Transform? To put the question a different but equivalent way, is there a function N(t), not identically zero, whose Laplace Transform is zero? For this function, called a null function, could be added to any suitable function and the Laplace Transform would remain unchanged. Null functions do exist, but as long as we restrict ourselves to piecewise continuous functions this ceases to be a problem. Here is the definition of piecewise continuous:
If a n i nterval [0, to ] say ca n be partitio ned i nto a fi nite num ber of subi ntervals [0, t i], [t1 , t2 ], [t2 , t3 ], . . . , [tn , to ] with 0, t 1 , t2 , . . . , tn , to a n in creasing seque nce of times a nd such that a give n fu nctio n f(t) is co ntinuous in each of these subi ntervals but not necessarily at the e nd poi nts themselves, the n f(t) is piece wise co nti nuous i n the i nterval [0 , to ].
Definition 1.1
Only functions that differ at a finite number of points have the same Laplace Transform. If F1 (t) = F(t) except at a finite number of points where they differ by finite values then .C{F1 (t)} = .C{F(t)}. We mention this again in the next chapter when the inverse Laplace Transform is defined. In this section, we shall examine the conditions for the existence of the Laplace Transform in more detail than is usual. In engineering texts, the simple definition followed by an explanation of exponential order is all that is required. Those that are satisfied with this can virtually skip the next few paragraphs and go on study the elementary properties, Section 1.3. However, some may need to know enough background in terms of the integrals, and so we devote a little space to some fundamentals. We will need to introduce improper integrals, but let us first define the Riemann integral. It is the integral we know and love, and is defined in terms of limits of sums. The strict definition runs as follows:Let F(x) be a function which is defined and is bounded in the interval a � x � b and suppose that m and M are respectively the lower and upper
4
An I ntroduction to Laplace Transforms a nd Fourier Series
bounds of F(x) in this interval (written [a, b] see Appendix C). Take a set of points
Xo = a, x 1 , X2 ,··· , Xr - 1 , Xr.··· , Xn = b and write 5r = Xr - Xr - 1· Let Mr, mr be the bounds of F(x) in the subinterval (Xr - 1 , Xr) and form the sums n
s = L mr5r.
r= 1
These are called respectively the upper and lower Riemann sums corresponding to the mode of subdivision. It is certainly clear that S ;::: s. There are a variety of ways that can be used to partition the interval (a, b) and each way will have (in general) different Mr and mr leading to different S and s. Let M be the minimum of all possible Mr and m be the maximum of all possible mr A lower bound or supremum for the set S is therefore M(b - a) and an upper bound or infimum for the set s is m(b - a) . These bounds are of course rough. There are exact bounds for S and s, call them J and I respectively. If I = J, F(x) is said to be Riemann integrable in (a, b) and the value of the integral is I or J and is denoted by
I = J=
1b F(x)dx.
For the purist it turns out that the Riemann integral is not quite general enough, and the Stieltjes integral is actually required. However, we will not use this concept which belongs securely in specialist final stage or graduate texts. The improper integral is defined in the obvious way by taking the limit: R-too
lim
1
a
R
oo
F(x)dx = { F(x)dx
Jo
provided F(x) is continuous in the interval a � x � R for every R, and the limit on the left exists. This is enough of general theory, we now apply it to the Laplace Transform. The parameter x is defined to take the increasing values from 0 to oo. The con dition jF(x)j � Meo: x is termed " F(x) is of exponential order" and is, speaking loosely, quite a weak condition. All polynomial functions and (of course) expo nential functions of the type ek x (k constant) are included as well as bounded functions. Excluded functions are those that have singularities such as ln(x) or 1/ (x 1) and functions that have a growth rate more rapid than exponential, for example e x2 • Functions that have a finite number of finite discontinuities are also included. These have a special role in the theory of Laplace Transforms (see Chapter 3) so we will not dwell on them here: suffice to say that a function such as 2n < x < 2n + 1 F(x) = 01 2n + 1 < x < 2n + 2 where n = 0, 1, . . . -
{
5
1 . The Laplace Transform
is one example. However, the function
F(x) =
{�
x rational x irrational
is excluded because although all the discontinuities are finite, there are infinitely many of them. We shall now follow standard practice and use t (time) instead of x as the dummy variable. 1.3
Elementary Properties
The Laplace Transform has many interesting and useful properties, the most fundamental of which is linearity. It is linearity that enables us to add results together to deduce other more complicated ones and is so basic that we state it as a theorem and prove it first. Theorem 1 .2 (Linearity)
Transform exists, then
If Ft (t) and F2 (t) are two functions whose Laplace
where a and b are arbitrary constants. Proof
.C{ aFt (t) + bF2 (t)}
100 (aFt + bF2 )e-stdt 100 (aFt e -st + bF2e -st) dt = a 1 00 Ft e -st dt + b 1 00 F2 e -st dt =
=
where we have assumed that
=
a.C{F1 (t)} + b.C{F2 (t)}
so that
jaFt + bF2 ! ::; ja i ! Ft l + l b l1 F2 l ::; (ja ! Mt + l b i M2 )east where 0: 3 = max{ O:t, 0: 2 }. This proves the theorem. 0
In this section, we shall concentrate on those properties of the Laplace Transform that do not involve the calculus. The first of these takes the form of another theorem because of its generality.
6
An Introduction to Laplace Transforms and Fourier Series
Theorem 1 .3 (First Shift Theorem) If it is possible to choose constants M and a such that I F(t) l � M eo:t, that is F(t) is of exponential order, then
.C{e- bt F(t)} = f(s + b) provided b � a. (In practice if F(t) is of exponential order then the constant a can be chosen such that this inequality holds.} Proof The proof is straightforward and runs as follows:-
.C{e - bt F(t)}
=
=
T
lim { e-st e-bt F(t)dt
T-too
Jo e-st e-bt F(t)dt (as the limit exists)
100 100 e-(s+b)t F(t)dt f(s + b).
This establishes the theorem. 0
We shall make considerable use of this once we have established a few elementary Laplace Transforms. This we shall now proceed to do. Example 1 .4
Find the Laplace Transform of the function F(t) = t.
Solution Using the definition of Laplace Transform,
.C(t)
=
T-too
lim
{ Jo
T
te-st dt.
Now, we have that
1 T � oo. 2
this last expression tends to as s Hence we have the result
1 .C(t) = 2• s
7
1. The Laplace Transform
We can generalise this result to deduce the following result: Corollary
. . . " (tn ) = n! , n a positive mteger. sn+ 1
}..,
Proof The proof is straightforward:
.C(tn )
=
=
If we put
100 tne-st dt this time taking the limit straight away [--tn e-st] 00 1oo _n__ tn - 1 e -st dt s s �.C(tn - 1 ). s
0
+
0
n = 2 in this recurrence relation we obtain
If we assume
then This establishes that
" ( tn+ 1 ) _- n + 1 � -_ (n + 1)! s sn+ 1 s n+ 2
}..,
.
by induction.
0
Find the Laplace Transform of .C{teat } and deduce the value of n a t e .C{t }, where a is a real constant and n a positive integer.
Example 1.5
Solution Using the first shift theorem with
b = -a gives
.C{F(t)eat } = f(s - a) so with we get Using F(t) = tn the formula
1 F(t) = t and f = 2 s
8
An Introduction to Laplace Transforms and Fourier Series
follows. Later, we shall generalise this formula further, extending to the case where n is not an integer. We move on to consider the Laplace Transform of trigonometric functions. Specifically, we shall calculate .C{ sin(t)} and .C{ cos(t)}. It is unfortunate, but the Laplace Transform of the other common trigonometric functions tan, cot, esc and sec do not exist as they all have singularities for finite t. The condition that the function F(t) has to be of exponential order is not obeyed by any of these singular trigonometric functions as can be seen, for example, by noting that je-at tan(t)l -+ oo as t ---+ n/2 and je-at cot(t)l -+ oo as t ---+ 0 for all values of the constant a. Similarly neither esc nor sec are of exponential order. In order to find the Laplace Transform of sin(t) and cos(t) it is best to determine .C(eit ) where i J{=ij. The function eit is complex valued, but it is both continuous and bounded for all t so its Laplace Transform certainly exists. Taking the Laplace Transform, =
.C(eit )
= =
fooo e-st eit dt fooo et (i-s) dt [e�i-s)t ] 00
z-1 s s-i s
s2 + 1
--
Now, .C(eit )
=
0
. 1 +z-s2 + 1 ·
.C(cos(t) + i sin(t)) .C(cos(t)) + i.C(sin(t)) .
Equating real and imaginary parts gives the two results .C(cos(t)) and .C(sin(t))
=
=
s � s +1
� s +1.
The linearity property has been used here, and will be used in future without further comment.
9
1. The Laplace Transform
Given that the restriction on the type of function one can Laplace Transform is weak, i.e. it has to be of exponential order and have at most a finite number of finite jumps, one can find the Laplace Transform of any polynomial, any combination of polynomial with sinusoidal functions and combinations of these with exponentials (provided the exponential functions grow at a rate � where a is a constant). We can therefore approach the problem of calculating the Laplace Transform of power series. It is possible to take the Laplace Transform of a power series term by term as long as the series uniformly converges to a piecewise continuous function. We shall investigate this further later in the text; meanwhile let us look at the Laplace Transform of functions that are not even continuous. Functions that are not continuous occur naturally in branches of electrical and control engineering, and in the software industry. One only has to think of switches to realise how widespread discontinuous functions are throughout electronics and computing.
eat
Example 1.6
where
Find the Laplace Transform of the function represented by F(t) F(t) =
{
t O < t < to 2to - t to � t � 2to 0 t > 2to .
Solution This function is of the "saw-tooth" variety that is quite common in electrical engineering. There is no question that it is of exponential order and that
100 e-st F(t)dt
exists and is well defined. F(t) is continuous but not differentiable. This is not troublesome. Carrying out the calculation is a little messy and the details can be checked using computer algebra .
.C(F(t))
100 e-st F(t)dt 1to t e-st d t 12to (2to -t)e-st dt to to t o 2to 2to = [ -!e- st] 1 !e-st dt [ - 2to -t e-st] -1 ! e-st d t S o s S to toS to to = --s e-sto - 821 [e-st] oto -e 8 -sto -812 [e-st] 2toto :2 [e-sto 1] 8 � [e-2sto e-sto] 1 [ - 2e-sto e-2sto] s2 1 s12 [1 - e-sto]2 =
=
+
0
0
= = =
_
+
+
+
+
+
+
_
10
An Introduction to laplace Transforms and Fourier Series
the next chapter we shall investigate in more detail the properties of discon tinuous functions such as the Heaviside unit step function. As an introduction to this, let us do the following example. In
Example 1 .7
fined by
Determine the Laplace Transform of the step function F(t) de F(t)
= { 0a
0 � t < to t?:to.
Solution F(t) itself is bounded, so there is no question that it is also of expo nential order. The Laplace Transform of F(t) is therefore
C(F(t))
= �a= e-stF(t)dt
=
loo ae-stdt to
= [-�S e-st] tooo = -as e- t so
Here is another useful result.
= =
If C(F(t)) f (s) then C(tF(t)) � and in general C(tn F(t)) ( - l ) n dsn f (s).
Theorem 1.8
=- dsd f (s)
Proof Let us start with the definition of Laplace Transform
C(F(t))
=�a= e-stF(t)dt
and differentiate this with respect to s to give
! = � �a= e-stF(t)dt = �a= -te-stF(t)dt
assuming absolute convergence to justify interchanging differentiation and (im proper) integration. Hence
C(tF(t))
=- dsd f (s).
11
1. The Laplace Transform
One can now see how to progress by induction. Assume the result holds for n, so that
� C(tn F(t)) = ( - l) n dsn f(s) and differentiate both sides with respect to s (assuming all appropriate conver gence properties) to give
or So
�+ 1 C(tn+ 1 F(t )) = ( -1) n+ 1 dsn+ 1 f(s)
which establishes the result by induction.
0
Determine the Laplace Transform of the function t sin(t) . Solution To evaluate this Laplace Transform we use Theorem 1.8 with f(t) =
Example 1 .9
sin(t). This gives
{ 1 }
2s
. d =C{t sm(t)} ds 1 + s2 = (1 + s2 ) 2 which is the required result.
1 .4
Exercises
1 . For each of the following functions, determine which has a Laplace Trans form. If it exists, find it; if it does not, say briefly why. (a) ln(t), (b) e 3t , (c) et 2, (d) e 1 1t , (e) 1/t, (f) f(t) =
even { 01 1�ff tt �1ss odd.
2. Determine from first principles the Laplace Transform of the following functions:-
(a) e kt , (b) t2 , (c) cosh(t). 3. Find the Laplace Transforms of the following functions:-
12
An Introduction to laplace Transforms and Fourier Series
{
4. Find the Laplace Transform of the function F(t), where F(t) is given by t O
+ ¢>), (b) e5t cosh(6t). 7. If G(at + b) F(t) determine the Laplace Transform of G in terms of C{F} = /(s) and a finite integral. (a) sin(wt
=
8. Prove the following change of scale result:C{ F(at)}
=
�� (�).
Hence evaluate the Laplace Transforms of the two functions (a) t cos(6t), (b) t2 cos(7t).
2
Further Properties of the Laplace Transform
2.1
Real Functions
Sometimes, a function F(t) represents a natural or engineering process that has no obvious starting value. Statisticians call this a time series. Although we shall not be considering F(t) as stochastic, it is nevertheless worth introducing a way of "switching on" a function. Let us start by finding the Laplace Transform of a step function the name of which pays homage to the pioneering electrical engineer Oliver Heaviside (1850- 1925). The formal definition runs as follows.
Heaviside's Unit Step Function, or simply the unit step func tion, is defined as 0 t<0 H(t) 1 t � 0.
Definition 2.1
=
{
Since H(t) is precisely the same as 1 for t > 0, the Laplace Transform of H(t) must be the same as the Laplace Transform of 1, i.e. 1/ s . The switching on of an arbitrary function is achieved simply by multiplying it by the standard function H(t), so if F(t) is given by the function shown in Figure 2.1 and we multiply this function by the Heaviside Unit Step function H(t) to obtain H(t)F(t), Figure 2.2 results. Examples of the use of H(t) are to be found throughout the rest of this text, in particular Sections 2.3 and 3.4. Sometimes it is necessary to define what is called the two sided Laplace Transform
/_: e-stF(t)dt 13
14
An Introduction to laplace Transforms a nd Fourier Series F(t)
Figure 2.1: F(t), a function with no well defined starting value H(t)F(t)
Figure 2.2: H(t)F(t), the function is now zero before t = 0 which makes a great deal of mathematical sense. However the additional prob lems that arise by allowing negative values of t are severe and limit the use of the two sided Laplace Transform. For this reason, the two sided transform will not be pursued here. 2.2
Derivative Property of the Laplace Trans form
Suppose a differentiable function F(t) has Laplace Transform f(s), we can find the Laplace Transform
C{F'(t)}
=�a= e-stF'(t)dt
of its derivative F' (t) through the following theorem. Theorem 2.2
C{F'(t)}
=�a= e-stF'(t)dt = -F(O) + sf(s).
15
2. Further Properties of the Laplace Transform
Proof Integrating by parts once gives
C{F'(t)} [F(t)e -st];' + 100 8e-st F(t)dt -F(O) + 8f(8) where F(O) is the value of F(t) at t 0. =
=
=
0
This is an important result and lies behind future applications that involve solving linear differential equations. The key property is that the transform of a derivative does not itself involve a derivative, only + which is an algebraic expression involving The downside is that the value is required. Effectively, an integration has taken place and the constant of integration is In the next chapter, this is exploited further through solving differential equations. Later still in this text, partial differential equations are solved. Let us proceed here by· finding the Laplace Transform of the second derivative of We also state this in the form of a theorem.
-F(O) 8f(8) F'(t) f(8). F(O) F(O). F(t). Theorem 2.3 IfF(t) is a twice differentiable function oft then C{F"(t}} 82f(8)- 8F(O)- F'(O). =
Proof The proof is unremarkable and involves integrating by parts twice. Here
are the details .
.c{F"(t)} 100 e-st F"(t)dt [F'(t)e-st }OO0 + loroo 8e-st F'(t)dt -F'(O) + [8F(t)e-st];' + 100 82e-st F(t)dt -F'(O)-8F(O) + 82f(8) 82f(8)- 8F(O)- F'(O). =
=
=
=
=
0
The general result, proved by induction, is
C{F(n)(t)} = sn f(8)- sn-l F(O)- 8n -2F'(O)-
· · ·-
F(n-l)(O)
where n is a positive integer. Note the appearance of n constants on the right hand side. This of course is the result of integrating this number of times. This result, as we have said, has wide application so it is worth getting to know. Consider the result =
C(sin (wt)) 82 +w w 2•
16
An Introduction to Laplace Transforms and Fourier Series
Now, so using the formula with
F(t)
=
! (sin (wt)) = w cos(wt) C(F'(t)) = sf(s) -F(O)
sin (wt) we have
C{w cos (wt)} = s s +w w - 0 C{ os (w t)} s +s w
so
2
=
c
2
2
2
another standard result. Another appropriate question to tackle at this point is the determination of the value of the Laplace Transform of
lot F(u)du. F(t) must be integrable in such a way that g(t) ht F(u)du is of exponential order. From this definition of g(t) it is immediately apparent that g(O) 0 and that g'(t) = F(t). This latter result is called the fundamental First of all, the function
=
=
theorem of the calculus. We can now use the result
C{g'(t)} = sg(s)- g(O) to obtain C{F(t)} = f(s) = sg(s) where we have written C{g(t)} = g(s). Hence g(s) fs(s) =
which finally gives the result
C (lot F(u)du) f�s) . =
The following result is also useful and can be stated in the form of a theorem. Theorem
2.4
IfC(F(t)) j(s) then c{ F�t) } loo j(u)du , assuming that C { F;t) } 0 as s ----t oo. =
=
----t
17
2. Further Properties of the Laplace Transform
G(t ) be the function F(t )jt, so that F(t= ) tG(t ). Using the property .C{tG(t )= } - dds.C{G(t )} we deduce that f(s)=.C{F(t )= } _!:_ds.C { F(tt ) } . Integrating both sides of this with respect to s from s to gives Proof Let
oo
since
.C { F; t ) }-+ 0 as s-+
oo
which completes the proof. 0
The function
Szt. ( = ) 1t sin(u) u du 0
--
defines the Sine Integral function which occurs in the study of optics. The formula for its Laplace Transform can now be easily derived as follows. = Let in the result
F(t ) sin(t )
.C ( F; t )) ioo f(u)du =
to give
We now use the result to deduce that .c
.C (lot F(u)du ) f�s) =
(lot sin� u) du) .C{Si(t )= } � tan- 1 (�). =
18
2.3
An Introduction to laplace Transforms and Fourier Series
Heaviside's Unit Step Function
As promised earlier, we devote this section to exploring some properties of Heav iside's Unit Step Function H (t) . The Laplace Transform of H (t) has already been shown to be the same as the Laplace Transform of 1, i.e. 1/ s . The Laplace Transform of H (t - to), to > 0, is a little more enlightening:
.C{H (t - to)} =
100H (t - to)e-stdt.
st 0 _e -. 1tooo e-stdt = [ Se--st ] tooo =S
Now, since H (t - to) = 0 for t < to this Laplace Transform is
.C{H (t - to)} =
__
This result is generalised through the following theorem. Theorem 2.5 (Second Shift Theorem)
order in t then
If F(t) is a function of exponential
.C{H (t - to)F(t - to)} = e-sto f(s) where f(s) is the Laplace Transform of F(t). Proof This result is proved by direct integration .
.C{H (t - to)F(t - to)} = =
100H (t - to)F(t - to)e- stdt
1to00 F(t - t0)e-stdt (by definition ofH ) 100
= e-sto F( u)e - s(u+to) du (writing u = t - to) = e-st o f(s). This establishes the theorem.
0
The only condition on F(t) is that it is a function that is of exponential order which means of course that it is free from singularities for t > t0• The principal use of this theorem is that it enables us to determine the Laplace Transform of a function that is switched on at time t = t0. Here is a straightforward example.
Determine the Laplace Transform of the sine function switched on at time t = 3. Solution The sine function required that starts at t = 3 is S(t) where
Example 2.6
S(t) =
{ 0sin(t)
t�3 t < 3.
2. Further
Properties of the laplace Transform
19
We can use the Heaviside Step function to write
S(t) = H(t - 3) sin(t). The Second Shift Theorem can then be used by utilising the summation formula =
sin(t)
sin(t- 3 + 3) = sin(t-3) cos(3)
+
cos(t-3) sin(3)
so
+
.C{S(t) } = .C{H(t - 3) sin(t- 3)} cos(3) .C{H(t - 3) cos(t- 3)} sin(3). This may seem a strange step to take, but in order to use the Second Shift Theorem it is essential to get the arguments of both the Heaviside function and the target function in the question the same. We can now use the Second Shift Theorem directly to give
+e -3 .C{S(t)} = e -38 cos(3)s2 1 + 8 sin(3) s + 1 1
+
or
2.4
.C{S(t)} = ( cos3 + s sin3) e - 38 f (s2 + 1). Inverse Laplace Transform
Virtually all operations have inverses. Addition has subtraction, multiplica tion has division, differentiation has integration. The Laplace Transform is no exception, and we can define the Inverse Laplace Transform as follows.
Definition
2.7
If F(t) has the Laplace Transform f(s), that is
.C{F(t)} = f(s) then the Inverse Laplace Transform is defined by .c - 1{/ (s)} = F(t) and is unique apart from null functions.
Perhaps the most important property of the inverse transform to establish is its linearity. We state this as a theorem.
Theorem
2.8
The Inverse Laplace Transform is linear, i.e.
Proof Linearity is easily established as follows. Since the Laplace Transform is linear, we have for suitably well behaved functions F1 (t) and F2 (t):
.C{aFt (t) + bF2 (t)} = a .C{Ft (t) } b .C{F2 (t)} = aft (s) b f2 (s).
+
+
20
An I ntroduction to Laplace Transforms and Fou rier Series
Taking the Inverse Laplace Transform of this expression gives
which is the same as
and this has established linearity of
.C - 1 {! ( s)} .
0
Another important property is uniqueness. In Chapter 1 we mentioned that the Laplace Transform was indeed unique apart from null functions (functions whose Laplace Transform is zero). It follows immediately that the Inverse Laplace Transform is also unique apart from the possible addition of null functions. These take the form of isolated values and can be discounted for all practical purposes. As is quite common with inverse operations there is no systematic method of determining Inverse Laplace Transforms. The calculus provides a good example where there are plenty of systematic rules for differentiation: the product rule, the quotient rule, the chain rule. However by contrast there are no systematic rules for the inverse operation, integration. H we have an integral to find, we may try substitution or integration by parts, but there is no guarantee of suc cess. Indeed, the integral may not be possible in terms of elementary functions. Derivatives that exist can always be found by using the rules;· this is not so for integrals. The situation regarding the Laplace Transform is not quite the same in that it may not be possible to find .C {F(t)} explicitly because it is an integral. There is certainly no guarantee of being able to find .c - 1{/ (s)} and we have to devise various methods of trying so to do. For example, given an arbitrary function of s there is no guarantee whatsoever that a function of t can be found that is its Inverse Laplace Transform. One necessary condition for example is that the function of s must tend to zero as s -t oo . When we are certain that a function of s has arisen from a Laplace Transform, there are techniques and theorems that can help us invert it. Partial fractions simplify rational functions and can help identify standard forms (the exponential and trigonometric func tions for example), then there are the shift theorems which we have just met which extend further the repertoire of standard forms. Engineering texts spend a considerable amount of space building up a library of specific Inverse Laplace Transforms and to ways of extending these via the calculus. To a certain ex tent we need to do this too. Therefore we next do some reasonably elementary examples.
Example
2.9
Use partial fractions to determine a .c - 1 s2 - a2 .
{
}
21
2. Further Properties of the Laplace Transform
Solution Noting that
a s2 - a2 gives straight away that
c- 1
{ s2 -a a2 }
= 21 [ s -1 a - s +1 a ] =
=
�(eat - e - at ) sinh (at) . 2
The first shift theorem has been used on each of the functions 1 f (s - a) and 1/(s + a) together with the standard result c -1{ 1/s} 1. Here is another example.
=
Example 2.10
Determine the value of 2 c-1 (s 3) 3 .
{ : }
Solution Noting the standard partial fraction decomposition
s2 (s + 3)3
= s +1 3 - (s +6 3)2 + (s +9 3)3
we use the first shift theorem on each of the three terms in turn to give s2 c- 1 c - 1_ 1 _ c - 1 6 £-1 9
{ (s + 3)3 } =
s+3 (s + 3)2 + (s + 3)3 e -3t 6te -3t + �2 t2 e -3t where we have used the linearity property of the c- 1 operator. Finally, we do =
_
the following four-in-one example to hone our skills.
Determine the following inverse Laplace Transforms + 3) . (b) c- 1 (s - 1) ; (c) c - 1 3s + 7 ; (d) c-1 e - 78 (a) c- 1 s(s -(s 1)(s (s + 3)3 s2 + 2s - 8 + 2) ' s2 - 2s + 5
Example 2.11
•
Solution All of these problems are tackled in a similar way, by decomposing the expression into partial fractions, using shift theorems, then identifying the simplified expressions with various standard forms. (a) Using partial fraction decomposition and not dwelling on the detail we get 1 4 s+3 3
--,.-s(s----,.-1)(s.,.+-..- 2).,.- = - 2s + 3(s - 1) + 6(s + 2) . -
Hence, operating on both sides with the Inverse Laplace Transform operator gives s+3 c- 1 � -1 4 c- 1 -1 1
s(s - 1)(s + 2)
=
=
2s + C 3(s - 1) + C 6(s + 2) - 1_ 1 _ �c- 1_ 1 _ - �2 c - 1�s + ic 3 s-1 +6 s+2 -
22
An Introduction to Laplace Transforms and Fourier Series
using the linearity property of .c - 1 once more. Finally, using the standard forms, we get
( b) The expression
s-1 s2 + 2s - 8
is factorised to
s-1 (s + 4)(s - 2)
which, using partial fractions is
1 + 5 6(s - 2) 6(s + 4) " Therefore, taking Inverse Laplace Transforms gives
.C-1 s2 +s -2s1- 8 = 61 e2t + 65 e -4t
( c) The denominator of the rational function
3s + 7 s2 - 2s + 5
does not factorise. In this case we use completing the square and standard trigonometric forms as follows:
3s + 7 2 s - 2s + 5
= (s -3s1)+2 7+ 4 - 3(s(s -- 1)I)2 ++ 104
·
So
.c - 1 2 3s + 7 s - 2s + 5
= 3.c- 1 (s (s- 1)-21)+ 4 + 5£ -1 (s - 1)2 2 + 4 = 3et cos(2t) + 5et sin(2t) .
Again, the first shift theorem has been used. ( d ) The final Inverse Laplace Transform is slightly different. The expression
(s - 3)3 contains an exponential in the numerator, therefore it is expected that the second shift theorem will have to be used. There is a little "fiddling" that needs to take place here. First of all, note that
.C-1 (s -1 3)3 = 21 t2e3t
2. Further Properties of the Laplace Tra nsform
23
e(s --3)7s 3 - { 0�(t 7)2 e3(t-7) 0t � 7t � 7. Of course, this can succinctly be expressed using the Heaviside Unit Step Func tion as �H(t7)(t- 7)2e 3
_
_c- 1
>
0
We shall get more practice at this kind of inversion exercise, but you should try your hand at a few of the exercises at the end of the chapter. 2.5
Limiting Theorems
In many branches of mathematics there is a necessity to solve differential equa tions. Later chapters give details of how some of these equations can be solved by using Laplace Transform techniques. Unfortunately, it is sometimes the case that it is not possible to invert to retrieve the desired solution to the original problem. Numerical inversion techniques are possible and these can be found in some software packages, especially those used by control engineers. Insight into the behaviour of the solution can be deduced without actually solving the differential equation by examining the asymptotic character of for small or large In fact, it is often very useful to determine this asymptotic be haviour without solving the equation, even when exact solutions are available as these solutions are often complex and difficult to obtain let alone interpret. In this section two theorems that help us to find this asymptotic behaviour are investigated.
f(s)
s
f(s)
s.
If the indicated limits exist then lim -tO F(t) slim-too sf(s) t {The left hand side is F(O) of course, or F(O+) if limt-to F(t) is not unique.) We have already established that C{F'(t)} = sf(s) -F(O) . (2. 1) However, if F'(t) obeys the usual criteria for the existence of the Laplace Trans form, that is F'(t) is of exponential order and is piecewise continuous, then liooo e-st F'(t)dtl fooo !e-st F'(t)!dt oo fo e-steMtdt 1 0 ass-t oo. M-s Thus letting s -t in (2. 1) yields the result. Theorem
2.12
(Initial Value )
=
Proof
<
<
oo
- -- -t
0
24
An Introduction to Laplace Transforms and Fourier Series
If the limits indicated exist, then t-limtoo F(t) slim-tO sf(s) .
Theorem 2.13 (Final Value)
=
Proof Again we start with the formula for the Laplace Transform of the deriva
tive of
F(t )
L:{F' (t )} 100 e- st F'(t )dt = sf(s) - F(O) =
(2 . 2)
this time writing the integral out explicitly. The limit of the integral as is
s
-t
0
{oo e-st F'(t)dt lim lim {T e - st F'(t)dt s-tO Jo s-tO T-too }0 T lim {e-s F(T) -F(O)} s-limlimtOT-too T-too F(T) -F(O) -too F(t ) -F(O). tlim lim
=
=
=
Thus we have, using Equation 2.2,
=
-too F(t ) -F(O) lim s-tO sf(s) - F(O) tlim from which, on cancellation of -F(O), the theorem follows.
0
s
Since the improper integral converges independently of the value of and all limits exist (a priori assumption) , it is therefore correct to have assumed that the order of the two processes (taking the Hmit and performing the integral) can be exchanged. (This has in fact been demonstrated explicitly in this proof. ) Suppose that the function can be expressed as a power series as follows
F(t) F(t) = ao + a1 t + a2 t 2 + · · · + an t n + · · · . If we assume that the Laplace Transform of F(t) exists, F(t ) is of exponential order and is piecewise continuous. If, further, we assume that the power series for F(t) is absolutely and uniformly convergent the Laplace Transform can be applied term by term L: {F(t )} f(s) L:{ao + a1t + a2 t 2 + · · · + an t n + ·} aoL:{l} + a1L:{ t } + a 2L:{t 2} · · · + an L:{ t n } + =
=
· ·
=
· · ·
provided the transformed series is convergent. Using the standard form =
I
£{tn } __!!: 8n +l :__
2. Further Properties of the Laplace Transform
25
the right hand side becomes
Hence
f(s) ao + sa21 + 2a2 + ... + sn!na+ln + ... . Example Demonstrate theasinitial and series, final valeval ue theorems usingby term the fuandnc tion F(t) e-t Expand e-t a power u ate term . confirm the legitimacy of term by term evaluation. 2.14
=s
$"3
=
Solution
1
s + l= e 1 F(t) F(O) lim lim sf(s) +8-1 = 1. This confirms the initial value theorem. The final value theorem is also con firmed as follows:lim F(t) lim e-t 0 lim sf(s) = lim _!__I 0. + The power series expansion for e-t is lim
t-+0
=
-
S-+00
t-+ oo
s-+0
0=
s-+ oo S
=
t-+ oo
s-+0 S
=
=
e-t
Hence the term by term evaluation of the power series expansion for gives the right answer. This is not a proof of the series expansion method of course, merely a verification that the method gives the right answer in this instance. 2.6
The Impulse Function
There is a whole class of "functions" that, strictly, are not functions at all. In order to be a function, an expression has to be defined for all values of the variable in the specified range. When this is not so, then the expression is not a function because it is not well defined. It may not seem at all sensible for us to bother with such creatures, in that if a function is not defined at a
26
An Introduction to Laplace Transforms and Fourier Series
certain point then what use is it? However, if a "function" instead of being well defined possesses some global property, then it indeed does turn out to be worth considering such pathological objects. Of course, having taken the decision to consider such objects, strictly there needs to be a whole new mathematical language constructed to deal with them. Notions such as adding them together, multiplying them, performing operations such as integration cannot be done without preliminary mathematics. The general consideration of this kind of object forms the study of (see Jones (1966) or Lighthill ( 1970)) which is outside the scope of this text. For our purposes we introduce the first such function which occurred naturally in the field of electrical engineering and is the so called impulse function. It is sometimes called Dirac's function after the pioneering theoretical physicist P.A.M. Dirac (1902-1984) . It has the following definition which involves its integral. This has not been defined properly, but if we write the definition first we can then comment on the integral.
generalised functions
8
The Dirac-8 function 8(t) is defined as having the follo wing 8(t) OVt ,t =j:. O /_: h(t)8(t)dt h(O) for any functio n h(t) continuous in ( Definition 2.15
properties
(2.3)
(2.4)
- oo , oo ) .
We shall see in the next paragraph that the Dirac-8 function can be thought of as the limiting case of a top hat function of unit area as it becomes infinitesimally thin but infinitely tall, i.e . the following limit
8(t) = lim t) { t t t T-tco
where Tp(t)
=
Tp (
0 < - 1/T �T - 1 /T < < 1/T 0 � 1fT.
The integral in the definition can then be written as follows:
/_
co
-co
h(t) lim
T-tco
Tp(t)dt
= T-tco lim
/_
co
-co
h(t)Tp(t)dt
provided the limits can be exchanged which of course depends on the behaviour of the function but this can be so chosen to fulfil our needs. The integral inside the limit exists, being the product of continuous functions, and its value is the area under the curve This area will approach the value as T --+ oo by the following argument. For sufficiently large values of T, the not to differ very interval [-1/T, 1/T] will be small enough for the value of = much from its value at the origin. In this case we can write + where is in some sense small and tends to zero as T --+ oo. The integral as T --+ oo and the property is established. thus can be seen to tend to
h(t)
!t(t)!
h(t)Tp(t).
h(O)
h(O) h(t)h(t) h(O) t(t)
2. Further Properties of the Laplace Transform
27
,-----if-----, I
T/2
I
T
T
Figure 2.3: The "top hat" function
b(t)
Returning to the definition of strictly, the first condition is redundant; only the second is necessary, but it is very convenient to retain it. Now as we have said, is not a true function because it has not been defined for = 0. has no value. Equivalent conditions to Equation 2.4 are:-
b(O)
b(t)
t
1� h(t)b(t)dt = h(O)
/_0� h(t)b(t)dt = h(O).
and
These follow from a similar argument as before using a limiting definition of b(t) in terms of the top hat function. In this section, wherever the integral of a b function or later related "derivatives" occurs it will be assumed to involve this kind of limiting process. The details of taking the limit will however be (
)
omitted. Let us now look at a more visual approach. As we have seen algebraically in the last paragraph is sometimes called the impulse function because it can be thought of as the shape of Figure 2.3, the top hat function if we let T -t oo. Of course there are many shapes that will behave like in some limit. The top hat function is one of the simplest to state and visualise. The crucial property is that the area under this top hat function is unity for all values of T, so letting T -t oo preserves this property. Diagrammatically, the Dirac-b or impulse function is represented by an arrow as in Figure 2.4 where the length of the arrow is unity. Using Equation 2.4 with = 1 we see that
b(t)
b(t)
/_: b(t)dt
=
h
1
which is consistent with the area under b ( t ) being unity. We now ask ourselves what is the Laplace Transform of = We suspect that it might be 1 for Equation 2.4 with valid choice of gives
h(t)
roo 8(t)e-stdt J_t'"oo 8(t)e-stdt lo=
it exist? h(t) o(t)?e-st,Doesa perfectly
=
1.
28
An Introduction to Laplace Transforms and Fourier Series o(t)
Figure 2.4: The Dirac-8 function However, we progress with care. This is good advice when dealing with gen eralised functions. Let us take the Laplace Transform of the top hat function Tp(t) defined mathematically by
t � -1/T -1/T < t < 1/T t 2: 1/T. The calculation proceeds as follows:-
As T -t oo , hence
(�)
T - T e - s/T � + 0 T 2s 2
2s
�
which -t ! as T -t oo . In Laplace Transform theory it is usual to define the impulse function J(t) such that
C{J(t)} = 1.
This means reducing the width of the top hat function so that it lies between 1/T (not -1/T and 1/T) and increasing the height from !T to T in order to preserve unit area. Clearly the difficulty arises because the impulse 0 and
29
2. Further Properties of the Laplace Transform
function is centred on t = 0 which is precisely the lower limit of the integral in the definition of the Laplace 'fransform. Using 0- as the lower limit of the integral overcomes many of the difficulties. The function t5(t - t0 ) represents an impulse that is centred on the time t = to. It can be considered to be the limit of the function K(t) where K(t) is the displaced top hat function defined by
{
K(t) =
t � t0 - 1/2T !T t0 - 1/2T < t < t0 + 1/2T 0 t ?:. t0 + 1/2T 0
as T -t oo. The definition of the delta function can be used to deduce that
and that, provided to
>
0
/_: h(t)t5(t - to)dt = h(to)
Letting to -t 0 leads to
.C{t5(t - to)} = e -sto . .C{t5(t)} = 1
a correct result. Another interesting result can be deduced almost at once and expresses mathematically the property of t5(t) to pick out a particular function value, known to engineers as the filtering property. Since
/_: h(t)t5(t - to)dt = h(to)
with h(t) = e -st f(t) and t0 = a ?:. 0 we deduce that
.C{ t5(t - a)f(t)} = e -as f(a). Mathematically, the impulse function has additional interest in that it en ables insight to be gained into the properties of discontinuous functions. From a practical point of view too there are a number of real phenomena that are closely approximated by the delta function. The sharp blow from a hammer, the discharge of a capacitor or even the sound of the bark of a dog are all in some sense impulses. All of this provides motivation for the study of the delta function. One property that is particularly useful in the context of Laplace 'fransforms is the value of the integral
This has the value 0 if u0
>
/_too t5(u - uo)du.
t and the value 1 if uo < t. Thus we can write {t Uo t5(u - uo)du = 01 tt <> uo
1-oo
{
30
An Introduction to Laplace Transforms a nd Fourier Series
or
/_too 6(u-uo)du = H(t-uo)
H
where is Heaviside's Unit Step Function. If we were allowed to differen tiate this result, or to put it more formally to use the fundamental theorem of the calculus (on functions one of which is not really a function, a second which is not even continuous let alone differentiable!) then one could write that "6( = or state that "the impulse function is the derivative of the Heaviside Unit Step Function" . Before the pure mathematicians send out lynching parties, let us examine these loose notions. Everywhere except where = the statement is equivalent to stating that the derivative of unity is zero, which is obviously true. The additional information in the albeit loose statement in quotation marks is a quantification of the nature of the unit jump in We know the gradient there is infinite, but the nature of it is embodied in the second integral condition in the definition of the delta function, Equation 2.4. The subject of is introduced through this concept and the interested reader is directed towards the texts by Jones and Lighthill. There will be some further detail in this book, but not until Chap ter 6 . All that will be noted here is that it is possible to define a whole string of derivatives 6' (t), 6"(t), etc. where all these derivatives are zero everywhere except at = 0. The key to keeping rigorous here is the property
u-uo) H' (u-uo)" u uo H(u - u0) .
generalised functions
t
/_: h(t)6(t)dt = h(O).
The "derivatives" have analogous properties, viz.
/_: h(t)6'(t)dt = -h'(O) and in general
/_: h(t)6(n)(t)dt = ( -l)nh(n)(O).
h(t)
Of course, the function will have to be appropriately differentiable. In this chapter, the Laplace Transform of this nth derivative of the Dirac delta function is required. It can be easily deduced that
/_: e-st15(n) (t)dt 1� e-st15(n)(t)dt = Sn. =
Notice that for all these generalised functions, the condition for the validity of the initial value theorem is violated, and the final value theorem although perfectly valid is entirely useless. It is time to do a few examples.
Determine the Inverse Laplace Transform .C- 1 { s2s+2 1 } and interpret the F(t) obtained.
Example 2.16
2. Further Propert ies of the Laplace Transform Solution
Writing
31
82 1 = 1 - 1 82 + 82 + 1 and using the linearity property of the Inverse Laplace Transform gives 1-} .c- 1 { 828+2 1 } = .c-1 {1} + .c-1 { 82 + 1 = 8(t) - sin(t). ThisNote function is sinusoidal withinverse a unit impulse 0. 1 {1} =at8(t).t = This the direct use of the arises straight away .c from our definition of . It i s quite possible for other definitions of Laplace .C Transform topure givemathematical the value � forbent. (for example). Thistheremayis worry those .C {8(t)}However, readers of a as long as consistency ininverse, the definitions of the delta function andexample the Laplace Transform andalways henceyieldits then no inconsistencies arise. The given above will the same answer .c -1 { 828: 1 } = 8(t) - sin(t). The small variations possible indefinition, the definition viz. of the Laplace Transform around t = 0 do not change this. Our .C {F(t)} = 1� e-st F(t)dt remains the most usual. 8 Example 2.17 Find the valu e of .c - 1 { : 1 } . 82 Solution Using a similar technique to the previous example we first see that 8 8 82 +3 1 = 8 - 82 + 1 so taking Inverse Laplace Transforms using the linearity property once more yields .c- 1 { 8/: 1 } = .c- 1 { 8} + .c-1 { 82 : 1 } = 8'(t) - cos(t) where 8'(t) is the first derivative of the Dirac-6 function which was defined earlier. Notice that the first derivative formula: .C {F'(t)} = 8j(8) - F(O) with F'(t) = 8'(t) - cos(t) gives 83 1 - F(O) .C{6'(t) - cos(t)} = -82+ --
--
--
--
32
An Introduction to Laplace Transforms and Fourier Series
which isnotindeed theCare aboveindeed resultis required apart fromif standard the troublesome F(O) . F(O)results is of course defined! Laplace Transform are tobestbe advice appliedistotoproblems containing generalised functions. When in doubt, the use limit definitions of 6(t) and the like, and follow the mathematics through carefully, especially the swapping of integrals and limits. The little book by Lighthill is full of excellent practical advice. 2. 7
Periodic Functions
We begin with a very straightforward definition that should be familiar to ev eryone:
If F(t) is a functio n that obeys the rule F(t) = F(t + r) forperisome od real for all values oft then F(t) is called a periodic functio n with Definition
T.
2.18
T
Periodic functions playparticularly a very important roleOneinonly manyhasbranches ofofengineering and applied science, physics. to think springs or alternating current present in household electricity to realise their prevalence. Inof periodic Chapter 4,functions. we study Here, Fouriera series in some depth. ThisTransform is a systematic study theorem on the Laplace of periodic functions is introduced, proved and used in some illustrative examples. Theorem
2 .19
>
Let F(t) have period T 0 so that F(t) = F(t .C{F(t)} = for1e-s- te-F(t)dt sT
+ T).
Then
many then proofsevaluates of properties of Laplace Transforms, this oneofbegins with itsLike definition the integral by using the periodicity F(t) .C{F(t)} = 100 e-st F(t)dt = loT e-st F(t)dt + lr2T e-st F(t)dt
Proof
+
{l 3T e -st F(t)dt + 2T
· · ·
+
T e -st F(t)dt + · · · r l(n-I)T
provided the series on thetheright hand side is convergent. This is assuredTransform since the function F(t) satisfies condition for the existence of its Laplace by construction. Consider the integral jnT e-st F(t)dt (n-l)T
2. Further Properties of the Laplace Transform F(t)
33
..t.
·T
T
2T
Figure 2.5: The graph of F(t) and substitute u = t - (n - l)T. Since F has period T this leads to i( nT-l) e -stF(t)dt = s(n l T J{oT e -suF(u)du n = 1,2, . . . n T which gives 100 e -st F(t)dt ( I + e- sT + e-2sT + . . ) 1T e -st F(t)dt oT e -stF(t)dt = f 1 - e-sT on summing the geometric progression. This proves the result. e-
-
)
·
0
Here is an example of using this theorem.
Example 2.20 A
rectified sine wave is defined by the expression � ) O < t < tr F(t) - m(t) 7r < t < 2tr F(t) = F(t ) =
determine L{F(t)}.
{ sins t + 2tr
Solution The graph of F(t) is shown in Figure 2. 5 . The function F(t) actually has but weit iscaneasier tothecarryanswer out theby using calculation as if thewithperiodT =was 2trWith. period Additionally check the theorem T = 2tr we have from Theorem 2.19, r2.,.. e-st F(t)dt --- --='-J 0'.C{ F(t)} = 1 - e-sT where the integral in the numerator is evaluated by 11"splitting into two as follows:{Jo 211" e -stF(t)dt = Jo{.,.. e -st sin(t)dt + 1.,.2. e -st (- sin(t))dt. 1r,
1r.
=
An Introduction to Laplace Transforms and Fourier Series
34
Now, writing �{} to denote the imaginary part of the function in the brace we have fo1r e -st sin(t)dt = � {fo7r e-st+itdt } .z -_I_s e -st+it] 07r = � [� { z -I-s (e-s1r+ i1r - I) } I (I + e -s1r ) } � { s-z . So 1r -1rs e -st sin (t)dt = I + e 2 I+s 1o Similarly, 21r -21r s -1rs e -st sin (t)dt = e I ++ e2 . s 1 Hence we deduce that 7r C{F(t)} = (I + s2 )(1 - e -27rs) . -
_ _ _
.
-
I + e -1rs
This isapplied preciselyto the the function answer that would have been obtained if Theorem 2.I9 had been F(t) = sin(t) 0 < t < F(t) = F(t + ) We can therefore have some confidence in our answer. 1r
2.8
1r .
Exercises
If F(t) = cosof sin( (at), use the derivative formula to re-establish the Laplace Transform at). 2. Use Theorem 2.2 with t F(t) = sin(u) 10 u du to establish the result. 1.
2. Further Properties of the Laplace Transform
3.
Prove that
4.
Find
c { fot cos(au) : cos(bu) du } .
35
c { 2 sin(t) sinh(t) } . t 6. Prove that if J(s) indicates the Laplace 'fransform of a piecewise contin uous function f(t) then lim f{s) = 0. s--too 7. Determine the following Inverse Laplace 'fransforms by using partial frac tions s+9 2(2s + 7) s > _2 (b) (a) s2 - 9 ' (s + 4)(s + 2) ' 1 (d) (c) s(s2s2 ++2k2 s(s + 3)2 ' 4k2) ' (e) (s - 2)2(s1 + 3)3 · 8. Verify the initial value theorem, Theorem 2.12, for the two functions (a)(b) 2(4++cos(t) and t)2 • 9 . Verify the final value theorem, Theorem 2.13, for the two functions (a)(b) 3t3+e-e-. t and t 10. Given that s C{sin(Vt)} = _.!5_e-11 83/2 4 use near xof=standard 0 to determine the value of the constant k. (You will sinneedx "'thex table transforms Appendix B.) 11. By using a power series expansion, determine (in series form) the Laplace 'fransforms of sin (t2) and cos (t2). 12. P (s) and Q(s) are polynomials, the degree of P(s) is less than that of Q(s) which is Use partial fractions to prove the result c_-I { P (s) } = t P (ak) eo:" t Q(s) k = I Q ' ( ak ) where ak are the n distinct zeros of Q(s) . 5.
Determine
n.
36
13.
An Introduction to Laplace Transforms and Fourier Series
Find the following Laplace Transforms: ( a) H(t - a) (b)
c
( )
h
= { t3+ 1
0 :::; t :::; 2 t>2
6
o ::; t ::; 2 t>2
h= { t+1
d the derivative of ft (t). 14 . Find the Laplace Transform of the triangular wave function: o ::; t < c F(t) = { �c - t c :::; t < 2c ( )
F(t + 2c)
= F(t).
3
Convolution and the Solution of Ordinary Differential Equations
3.1
Introduction
Itordinary is assumed from the outset(ODE), that students willis ahave some familiarity with differential equations but there brief resume given in Sec tion 3.3. The other central and probably new idea is that of the convolution integralsomeandkinds this ofis diff introduced fully in Section 3.2. Of course it is possible to solve e rential equation without using convolution as is extends obvious from the last chapter, but mastery of the convolution theorem greatly the power ofoperation Laplace Transforms toforsolve ODEs. In fact,offamiliarity withtopics the convolution is necessary the understanding many other that feature in thistopics text that suchareas outside the solution ofaspartial diffofeGreen' rentials equations (PDEs) and other it such the use for(BVP). forming the general solution of various types of boundary valuefunctions problem 3.2
Convolution
The definition of convolution is straightforward.
Definition 3.1 The convolution of two given functions j(t) f * g and is defined by the integral
37
and g(t) is written
38
An Introduction to Laplace Transforms and Fourier Series
The only condition thatthat is necessary to impose on theexists. functions f and g is that their behaviour be such the integral on the right Piecewise continuity ofpiecewise both incontinuity the intervalis repeated [0, t] is certainly sufficient. The following definition of here for convenience. If an interval [0, t0] say can be partitioned into a finite num ber of subintervals [0, t 1 ], [tt , t2], [t2 , t3], . . . , [tn , to] with 0, t 1 , t2 , . . . , tn , to an in creasing sequence of times and such that a given function f(t) is continuous in each of these subintervals but not necessarily at the end points themselves, then f(t) is piecewise continuous in the interval [0, to].
Definition 3.2
It is easy to prove the following theorem Theorem 3.3 (Symmetry) f * g = g * f . It is left as an exercise to the student to prove this. Probably the most important theorem concerning thetheuseconvolution of Laplacetheorem 'frans forms and convolution is introduced now. It is called and enables one, amongst otherthethings, to deduce theexpressed Inverse inLaplace 'frans form of an expression provided expression can be the form of a product of functions, each Inverse Laplace 'fransform of which is known. Thus, in a loose sense,unlike the Inverse Laplaceby parts, 'fransform isisequivalent to integration by parts, although integration there no integration left to do on the right hand side. If f(t) and g(t) are two functions of exponential order (so that their Laplace Transforms exist - see Chapter 1}, and writing .C{f} j(s) and .C{g} g(s) as the two Laplace Transforms then .c - 1 {fg} f * g where * is the convolution operator introduced above.
Theorem 3.4 (Convolution)
= = = Proof In order to prove this theorem, we in fact show that Jg = .C{f(t) * g(t)} bytation directin terms integration of the right handNow, side. In turn, this involves its interpre of a repeated integral. .C{f(t) * g(t)} = 100 e -st 1t f(r)g(t - r)drdt using thetakes definition of ofthea wedge Laplacein 'fransform. The domain of this repeated integral the form the t, r plane. This wedge (infinite wedge) is displayedtheinorder Figureof integration 3.1. 'frivial rewriting of this double integral to facilitate changing gives t .C{f(t) * g(t)} = 100 1 e -st f(r) g (t - r)drdt
3. Convolution and the Solution of ODEs
39
't =t
t
Figure 3.1: The domain of the repeated integral
t
and thus integrating with respect to first (horizontally first instead of vertically in Figure 3.1) gives
C{f(t) * g(t)}
=
=
100 [00 e-stf(r)g(t - r)dtdr 100 f(r) { [00 e-stg(t - r)dt} dr.
Implement the change of variable u constant so that it becomes
=
t - r in the inner integral where r is
[00 e-stg(t - r)dt 100 e-s(u+r)g(tt)du e -sr 1oo e sug( )d =
=
=
Thus we have
C{f * g}
= = =
e-srg(s).
-
100 f(r)e-srg(s)dr
g(s)/(s) /(s)g(s).
u u
40
An Introduction to Laplace Transforms and Fourier Series
= .c-1 {/g}.
Hence
f(t) * g(t) This establishes the theorem. Both sides are functions ofs here, r, t and u are dummy variables. 0
This particular result is sometimes referred to as Borel's Theorem, and the convolution referred to as Faltung. These names are found in older books and some present day engineering texts. Before going on to use this theorem, let us do an example or two on calculating convolutions to get a feel of how the operation works. Example 3.5 Find
the value of cos(t) * sin(t) .
Solution Using the definition of convolution we have
cos(t) * sin(t)
= lot cos(r) sin(t - r)dr.
To evaluate this we could of course resort to computer algebra: alternatively we use the identity sin(A) cos(B)
= � [sin(A + B) + sin(A - B)].
This identity is engraved on the brains of those who passed exams before the advent of formula sheets. It does however appear on most trigonometric formula sheets. Evaluating the integral by hand thus progresses as follows. Let A = t-r and B = T in this trigonometric formula to obtain sin(t - r) cos(r) whence, cos(t) * sin(t)
= � [sin(t) + sin(t - 2r)]
= lot cos(r) sin(t - r)dr = !2 1t0 [sin(t) + sin(t - 2r)]dr . [r]t0 + 1 [cos(t - 2r)]0t = 1 sm(t) = � t sin(t) + � [cos( -t) - cos(t)] = �t sin(t). 2
4
Let us try another example of evaluating a convolution integral. Example 3.6 Find the value of sin(t) * t2 •
41
3. Convolution and the Solution of ODEs
Solution We progress as before by using the definition
sin(t) * t2
= lot sin(r)(t - r)2dr.
It is up to us to choose the order as from Theorem 3.3 f * g = g * f . Of course we choose the order that gives the easier integral to evaluate. In fact there is little to choose in this present example, but it is a point worth watching in future. This integral is evaluated by integration by parts. Here are the details. sin(t) * t2
= lot sin(r)(t - r)2 dr t = [-(t - r)2 cos(r)] � - lo 2(t - r) cos(r)dr = t2 - 2 { [(t - r) sin(r)] � - lot sin(r)dr } = t2 - 2 { 0 + [- cos(r)J n t2 + 2 cos(t) - 2.
Of course the integration can be done by computer algebra. Both of these examples provide typical evaluations of convolution integrals. Convolution integrals occur in many different branches of engineering, particu larly when signals are being processed (see Section 6.5). However, we shall only be concerned with their application to the evaluation of the Inverse Laplace Transform. Therefore without further ado, let us do a couple of these examples. Example
3. 7
Find the following Inverse Laplace Transforms:
c-1 { (s2 : 1)2 } (b) c- 1 { s ()+ 1) } · 3 (a)
.
Solution (a) We cannot evaluate this Inverse Laplace Transform in any direct fashion. However we do know the standard forms
Hence
c- 1 { (s2 : 1) }
=
cos(t) and
c-1 { (s2 � 1) }
£{cos(t)}£{sin(t)} =
= sin(t).
(sZ : 1) 2
and so using the convolution theorem, and Example 3.1
c-1 { (s2 : 1)2 }
=
cos(t) * sin(t)
= �t sin(t).
42
An Introduction to Laplace Transforms and Fourier Series
(b) Proceeding similarly with this Inverse Laplace Transform, we identify the standard forms:-
J0-1 { s13 } = 21 t2 and '-- { (s2 1+ 1) } sm. (t) . J0 - 1
'-Thus
{
{
£ t2 }£ sin (t) }
,.. _ 1 { s3 (s21 + 1) }
and so
'--
=
= s3 (s22+ 1 ) =
12 . 2 t * sm(t) .
This convolution has been found in Example 3.6, hence the Inverse Laplace Transform is (t2 + 2 cos t - 2).
c- 1 { s3 ()+ 1) } � =
In this case there is an alternative approach as the expression
1 can be decomposed into partial fractions and the inverse Laplace Transform evaluated by the methods of Chapter 2. In a sense, the last example was "cooked" in that the required convolutions just happened to be those already evaluated. Nevertheless the power of the convolution integral is clearly demonstrated. In the kind of examples met here there is usually little doubt that the functions meet the conditions necessary for the existence of the Laplace Transform. In the real world, the functions may be time series, have discontinuities or exhibit a stochastic character that makes formal checking of these conditions awkward. It remains important to do this checking however: that is the role of mathematics! Note that the product of two functions that are of exponential order is also of exponential order and the integral of this product is of exponential order too. The next step is to use the convolution theorem on more complicated results. To do this requires the derivation of the "well known" integral
rlooo e -t2 dt
Example
3.8
=
! .;;r. 2
Use a suitable double integral to evaluate the improper integral
fo
oo
e -t2 dt.
Solution Consider the double integral
43
3. Convolution and the Solution of ODEs y
a
X
Figure 3.2: The domains of the repeated integrals where S is the quarter disc x � 0, y � 0, x2 + y2 � a2 • Converting to polar co-ordinates (R, 8) this integral becomes
where R2 = x2 +y2 , R cos(O) this integral we obtain
loa 1� e -R2 RdOdR
=
x, R sin(O)
=
y so that dS = RdOdR. Evaluating
7r a -R2 dR = 2. [- 21 e-R2 ] a 4"7r { 1 e-a2 } 7r
I = 2. J{ Re o
0
=
-
As a -+ oo, I -+ 'i· We now consider the double integral I
k
=
hk hk e-
The domain of this integral is a square of side k. Now I,
=
{ !,' e-•' } {f.' e-•' } {f.' } dx
A glance at Figure 3.2 will show that
However, we can see that
dy
=
e -•' dx
2
.
44
An Introduction to Laplace Transforms and Fourier Series
as k � oo . Hence if we let a � oo in the inequality we deduce that as a �
oo.
{ 100
Therefore
1oo
or
e-x
2
dx
2 e -x dx
r � =
y'i = -
2 as required. This formula plays a central role in statistics, being one half of the area under the bell shaped curve usually associated with the normal distribution. In this text, its frequent appearance in solutions to problems involving diffusion and conduction is more relevant. Let us use this integral to find the Laplace Transform of Ij.,fi. From the definition of Laplace Transform, it is necessary to evaluate the integral o
provided we can be sure it exists. The behaviour of the integrand at oo is not in question, and the potential problem at the origin disappears once it is realised that 1/.,fi itself is integrable in any finite interval that contains the origin. We can therefore proceed to evaluate it. This is achieved through the substitution st = u2 • The conversion to the variable u results in
1oo ; dt 1oo 0
e- t
yt
=
0
e - u2
S
.,fS 2udu = U
yS� 1oo .
0
e -u
2
du
We have therefore established the Laplace Transform result:
=
E. y -;
and, perhaps more importantly, the inverse
c- 1
{ � } vk· =
These results have wide application. Their use with the convolution theorem opens up a whole new class of functions on which the Laplace Transform and its inverse can operate. The next example is typical. Example
3.9 Determine
c- 1
{ .,(S(s1 - 1) } ·
45
3. Convolution and the Solution of ODEs
Solution The only sensible way to proceed using our present knowledge (but see Chapter 7) is to use the results c- 1 { Js } = vk and (by the shifting property of Chapter 2) 1 et {s - 1} = . Whence, using the convolution theorem c- 1
The lastof 1/.fi integralexcept is similar to upper that evaluated inisdetermining thetheLaplace Trans form for the limit. It tackled using same trans formation, only this time we keep careful track of the upper limit Therefore, . substitute r = u 2 to obtain v'i Jfo t e..;T-r dr 2 Jfo e-u du. Now, this integral is well known from the definition of the Error Function, erf(x). 2
=
Definition
3.10
The Error Function erf(x) is defined by
lo dt. Itfactor is related to the area under the normal distribution curve in statistics. The of 2/ .fi is there to ensure that the total area is unity as is required byexpress probability theory.as Returning the solution follows:- to Laplace Transforms, it is thus possible to 2
2 r -t e erf(x) = Vi
The 1-erf(x) is called the erfc(x).function It follows immediately thatcomplementary error function and is written erfc(x) = ..,fi2 lxf'X) e-t dt and erf(x) + erfc(x) = 1. the complementary thanrelated the errorto diffusion function iIttselfis inthatfactemerges from solving error the difffunction erentialrather equations 2
46
An I ntroduction to Laplace Transforms and Fourier Series
problems. Having solved one simple example, it isinappropriate toofinvert the Laplace Transform that occurs time and time again the solution diffusion problems. problems are metTransform explicitly ofin the Chapter the problemThese of finding the Laplace function5, but here we solve exp { -:: } whereitkcanis abeconstant the diffusion called the diffusion coefficient. Here thoughtrepresenting of as an unspecified constant. t - 3/2
Example
3.11 Determine
Solution We start as always by writing the Laplace Transform explicitly in terms ofsome the convoluted integral definition, andwithit looks daunting. To evaluate it days, does require algebra. As all algebraic manipulation these the option is thereany to usecase,computer algebra,ofalthough existingformula systemsbywould find this hard. the derivation this particular does help in the understanding of its subsequent use. So, we start with hand In
exp { -:: } } hoo First of inall,theletintegrand. us substitute u k/2../t. This is done so that a term appears Much other stuff appears too of course and we now sort this out: C{
t - 3 /2
e - k 2 /4te - st c 3 f 2 dt .
=
e -u2
=
du
=
-
t - 3 /2
� t- 312 dt 4
which swapinto round,account. then swap again once theeliminates negativethesign fromterm. the duThetermlimits is taken Thusback we obtain
1oo 0
e - k2 f4t e-st t - 3 f2 dt
=
� k
1oo 0
e -u2 e -skf4 u2 du
and stage. Wealgebra now strive to getneed the integral onby the right into this errorcompletes function the formfirst ( Computer systems leading the nose . through this kind of algebra!) First of all we complete the square The thus unsquared be written:term is independent of u; this is important. The integral can
47
3. Convolution and the Solution of ODEs
This completes the second stage.consideration The third andandperhaps most bizarre evaluating this integral involves manipulation of the stage integralof k.jS- . 100 e-(u-:u")2 du, where a= 2 0 If we let v= afu in this integral, then since but du= -advjv2 we obtain the unexpected result roo e-(u-�)2 du= r oo a e-(u-�)2 du. lo lo u2 The minusv hassignbeencancels withbytheu. exchange in limits as before, andimmediately the dummy variable replaced We can use this result to deduce that 1oo (1 + ,;) e-(u-�)2 du= 2 1oo e-(u-�)2 du. the integral,ourwefriend substitute = u- afu so that d).= (1 + aju2)du. InInwhich thisisleftway,hand werather regain from Example 3.8, apart from the lower limit than 0. Finally therefore 00 1 ( 1 + ,;) e -(u-�)2 du= /_: e ->.2 d.X = 2 1oo e ->.2 d). = ..[i. Hence we have deduced that r oo e-(u-�)2 du= .!:...[i 2 lo and is independent of the constant a. Using these calculation of the required Laplace Transform is results, a summary of the oo 2 .C { t - 3/2 exp { k4t } }= k4 e -k vs r e (u-f!:Li. 2u )2 du. Jo = ik e-kVi.!:...[i 2 2 .fi e -kvs = -k- . Taking the Inverse Laplace Transform of this result gives the equally useful formula >.
- oo
·
r::
-
48
An Introduction to Laplace Transforms and Fourier Series
As mentioned earlier, thisforLaplace Transform occurs in diffusion and conduction problems. In particular the applied mathematician, it enables the estimation ofdopossible time scales for the diffusion of pollutant from a point source. Let us one more example using the result just derived. Example 3.12
Use the convolution theorem to find .c- 1
{ } e- k Js
--
s
.
We note the result just derived, namely { } _k_ 2.;:;£3 together with the standard result Solution
.c - 1
e - k Js
t e - k 2 /4 .
-
to deduce that
We evaluate this integral by the (by now familiar) trick of substituting u k2 f This means that 4r.
k2 2udu = - -dr 4r 2
oo
2
=
and the limitsThey transform fromdue r = 0 and r = t to u = and u = k/2Vt respectively. swap round to the negative sign in the expression for du so we obtain =
=
Hence we have the result
.c - 1
{ } k
e- Js - = s
2 tX)
Vii Jk/2 ../i
erfc ( 2�)
erfc ( 2Vtk )
2
e -u
·
du
49
3. Convolution a nd the Solution of ODEs
which is alsoof ofthissome significance inconvolution the modelling of diffusion. Ana result alternative derivation result not using is possible using from Chapter 2, viz. c - 1 { l�s ) } 1t f(u)du. Thisshall formula canfurther be regarded asconvolution a special case of theinconvolution theorem. We make use of the theorem this kind of problemof in Chapter 6. In the remainder of this chapter we shall apply the results Chapteruse2 toof the of ordinary also makes the solution convolution theoremdiffbotherential as anequations alternative(ODEs). to usingThispartial fractions to enable solutions to be written explicitly but evenmore whereimportantly the right hand side ofgeneral the ODE is a general function.down =
3.3
Ordinary Differential Equations
At betheappropriately outset we stress that all theForfunctions in thisinsection will bewhich assumed toalgebraically diff e rentiable. the examples this section are explicit this is obvious, but outside this section and indeed outside text care needscriterion to be taken to ensure thatfor this remains theof thecase.Laplaceis ofthis course a stricter than that needed the existence Transform (thata tooltheforfunction beordinary of exponential order) so usingis usually the Laplace Transform as solving diff e rential equations not a problem On the other hand, using differential equations to establish results . forthe Laplace Transforms is certainly to beonavoided as this inautomatically imposes strict condition of differentiability the functions them. It is perhaps the premierthataimobeyof mathematics toresults, removenotrestrictions and widen the class of functions theorems and the other way round! Most of you will be familiar to a greater or lesser extent with differential equations: however for completeness a resumewhere is nowthegiven of theis basic essen tials. A differential equation is an equation unknown in the form oftheaLaplace derivativeTransform in Chapter 2 that operating on a derivative with . It was seen can eliminate the derivative, replacing each differenti ation with aaremultiple of toolItforshould notcertain be surprising therefore thatequation Laplace Transforms a handy solving types of diff e rential . going into detail, let us review some of the general terminology met in Before discussing differential equations. of an ordinary tainThe ed byorder the unknown. Thusdiff theerential equationequation is the highest derivative at (�:) 3 y sin(x) is a first order equation. The equation 4 +lnx = O l!Jy y (d ) + dx2 dx It
s.
+
=
50
An Introduction to Laplace Transforms and Fourier Series
is, on the other hand a second order equation. The equation u- y 4 + dy 7 + y = 0 ( dx3 J3 ) (dx ) is of third order.Instead Such weexoticshallequations will notto(cannot) befirstsolved using Laplace Transforms. be restricted solving and second order equations which are linear. Linearity has been met in Chapter 1 in the context of the Laplace Transform. The linearity property is defined by s
In a linear differential equation, the dependent variable obeys this linearity prop erty. We shall only be considering linear differential equations here (Laplace Transforms being linear themselves are only useful for solving linear differential equations). Differential equations that cannot be solved are those containing powers ofsecond the unknown orarexpressions such as tan(y), eY. Thus we will solve first and order line· differential equations. Although this seems rather restrictive, it does account for nearly all those linear ODEs found in real life situations. The word ordinary denotes that there is only differentiation with respect to a variable. single independent variable so that theis solution is thetherequired func tion of this If the number of variables two or more, differential equation becomes5.a partial differential equation (PDE) and these are considered later in Chapter The Laplace Transform of a derivative was found easily by direct integration byareparts in Chapter 2. The two useful results that will be used extensively here C{f'(t)} = s/(s) - f(O)
and C{f" (t)} = s2 j(s) - sf(O) - J'(O) wheresides the prime denotes differentiation with respect to t.ofNote that the right hand of both of these expressions involve knowledge f(t) at t = 0. This isconstants important. Also, thecontains. order of the derivative determinesconstant how many arbitrary the solution There is one arbitrary for each inte gration, so a first order ODE will have one arbitrary constant, a second order ODE two arbitraryequations constants, etc.are not Therelinear, are complications overthisuniqueness with differential that but fortunately does not concern us here. We know from Theorem 1.2 that the Laplace Transform is a linearUponoperator; itLaplace is not Transforms easily appliedof atolinear non-linear problems. taking ODE, thethederivatives themselves disappear, transforming into the Laplace Transform of function multiplied 2 bynumber s (for a first derivative) or s (for a second derivative). Moreover, the correct ofandconstants alsoa appear inorderthe ODE). form ofSome f(O) (for a first order ODE) and /(0) /'(0) (for second texts conclude therefore Laplacewhere Transforms be used only to solveat initial that isthatproblems enoughcaninformation is known the startvalueto problems, solve it. This
51
3. Convolution a nd the Solution of ODEs
isbestnotsuited strictlyto this true.method Whilstofit solution, remains the case that initial value value problems problems can are two point boundary befirstsolved byas transforming the equation, andareretaining f(O) and f'(O) (or the only) unknowns These unknowns then found algebraically by the . substitution of the given boundary conditions and the solving of the resulting diff erential equation(but. Wesee shall, however almost always be solving ODEs with initial conditions Example 3.14). From a physical standpoint this is entirely(seereasonable Transform is a mapping from t space to s . 1)TheandLaplace space Chapter t almost always corresponds to time. For problems involving time, the what situation is known now and the equation(s) are solved in order to determine is going on later. This is indeed the classical initial value problem. We are now ready to try a few examples. Example 3.13
Solve the first order differential equation dx + 3x = where x(O) 1. dt f(t) =
0 Solution Note that we have abandoned for thevariable. more usual x(t), but this should be regarded as a trivial change of dummy This ratherOf simple diwefferential equation can in fact be solved by a variety of methods. course use Laplace Transforms, but it is useful to check the answer by solving again usingtoseparation of variables or integrating factor methods as these will be familiar most students . Taking Laplace transforms leads to £ { �� } + 3£ { X} 0 which implies sx(s) - x(O) + 3x(s) = 0 standard overbar to denote Laplace Transform. Since x(O) = 1, solving forusingx( the s) gives =
1 s+3 1x(t) = _c- 1 s+3 x(s)
whence
= --
{ }
= e - 3t
usingLettheus standard this is but indeedwiththea different solution isboundary easily checked . . That look at theformsame equation, condition. Example 3.14
Solve the first order differential equation dx 3x = where x(1) = 1. dt +
0 Solution Proceeding as before, we now cannot insert the value of x(O) so we arrive at the solution -3t x (t) = _c- 1 { sx(O) + 3 } = x(O)e .
52
An Introduction to Laplace Transforms a nd Fourier Series
We now use the boundary condition we do have to give x(1) = x(O)e -3 =
which implies x(O) e3 and the solution is x(t) e3( 1 -t). Here is a slightly more challenging problem.
1
=
=
Example 3.15
Solve the differential equation dx 3x = 3t given x(O) = 0. dt
+ cos
( Laplace ple 3.13 forTaking the leftthehand side) Transform we obtain we have already done this in Exam
Solution
1+ 9 using standard forms. With the zero initial condition solving this for x(8) yields x ( 8) ( 8 + 3) 1( 82 + 9) . +
8X(8) - x(O) 3x(8) = 82 =
This solution is either in the using form ofpartial the product ofortwotheknown Laplacetheorem: Transforms.we Thus we invert fractions convolution choose the latter. First of all note the standard forms .c - 1 { -1- } e -3t and .c- 1 { -1- } cos(3t) . +3 82 +9 Using the convolution theorem yields: t 1 1+--- (- > { x(t) .c- 1 { } 3 82 + 9 lo e 3 t r cos(3r)dr. (The equally valid choice of lot e-3r cos (3(t - r)) dr couldconvolution have been somade, butthe a general rule it is better to arrange the order of the that (t - ) is in an exponential if you have one. ) The integral evaluatehere. usingThe integration algebra. isThestraightforward gory details areto omitted result is by parts or computer t e3r cos(3r)dr -(e3 1 t cos(3t) + e3t sin(3t) - 1). 6 1 =
=
8
as
0
=
=
8
T
=
3. Convolution and the Solution of ODEs
Thus we have
x(t)
= =
53
e-3t lot e3r cos(3r)dr 61 (cos(3t) + sin(3t)) 61 e-3t. -
This solution could have beentoobtained byoriginal partial ODE fractions which is algebraically simpler. It is also possible solve the by complementary func tion/particular integral techniques or to use integrating factor methods. Theto choice is yours! In the next example there is a clear winner. It is also possible get a closed together form answer integrating theorem factor techniques, using Laplace Transforms withusing the convolution is our choicebuthere. Find the general solution to the differential equation dx + 3x = f(t) where x(O) = 0, dt and f(t) is of exponential order (which is sufficient for the method of solution used to be valid).
Example 3.16
Solution It is compulsory to use convolution here as the right hand side is an arbitrary function. The Laplace Transform of the equation leads directly to
x(t)
=
=
so that
x(t) =
{ s+/(s)3 } lot e-3(t-r) f(r)dr
c- l
e-3t lot e3rf(r)dr.
The function f(t) is of course free to be assigned. In engineering and other applied subjects, f(t) is made to take exotic forms; the discrete numbers cor responding to the output(probabilistic) of laboratorynature. measurements perhaps or even a time series with a stochastic However, here f(t) must comply withweourmeet basicexamples definitionthatof involve a functiondiscrete and wemathematics must waitoruntilstatistically Chapter based 6 be fore functions. ofThefunction ability met to solve thishaskind of differential equation even with the definition here important practical consequences for the engineer and applied scientist. The function f(t) is termed input and the term x(t) output. To get from one to the other needs a transfer function. In the last example, the function 1/(s + 3) written in terms of the transform variable s is this transfer function. This is the language of systems analysis, and such concepts also form the cornerstone of control engineering. They are also vital
54
An Introduction to Laplace Transforms and Fourier Series
ingredients to branches of electrical engineeringtheandprocedure the machine dynamicstheside oflution mechanical engineering. In mathematics, for writing so to a non-homogeneous differential equation (that is one with a non-zero right hand equation side) in terms of the the development solution of theof the corresponding homogeneous differential involves complementary function and particular solution. Complementary functions and particular solutions are standard concepts in solving second order ordinary diff e rential equations, the subject of the next section. 3 .3 . 1
Second Order Differential Equations
Let us now do a few examples to see how Laplace Thansforms are used to solve second order ordinary differential equations. The technique is no different from solving first order LetODEs, but byfinding thetheinverse Laplace Thansform issecond often more challenging. us start finding solution to a homogeneous order ODE that will be familiar to most of you who know about oscillations.
Example 3.17
Use Laplace Transforms to solve the equation
�� + y = 0 with x(O) = 1, x'(O) = 0. Solution Taking the Laplace Thansform of this equation using the usual nota tion gives s2x(s) - sx(O) - x'(O) + x(s) 0. With x(O) = 1 and x' (0) 0 we obtain =
=
s x(s) = 2 s+ 1. x(t) =
This is atostandard formharmonic which inverts to problemcos(t). That this is the correct solution this simple motion is easy to check. Whyleadnottotryy(t)changing theWeinitial condition toto y(O) = 0 and y'(O) 1 which should sin(t)? are now ready build on this result and solve the inhomogeneous problem that follows. =
=
Example 3.18
Find the solution to the differential equation
�� + x t with x(O) 1, x'(O) = 0. Solution Apart from the trivial change of variable, we follow the last example and take Laplace Thansforms to obtain =
=
1 s2 x(s) - sx(O) - x'(O) + x = .C{t} = 2 s · =
With start conditions x(O) 1 and x' (0) = 0 this gives x(s)(s2 + 1) - s
so x(s)
=
=
_!_ s2
1 ...,- · s s2 (s2 + 1) 82 + 1 + -:-,...-:-
-
3. Convolution and the Solution of ODEs
55
Taking the Inverse Laplace Transform thus gives: x _c -1 { 2 : 1 } + _c - 1 { 2 (s21 + 1) } s s = cos(t) + 1t (t - r) sin(r)dr =
using the convolution theorem. Integrating by parts (omitting the details) gives x = cos(t) - sin(t) + t. The first two terms areis easily the complementary function andsolution. the thirdIttheis particular integral. The whole checked to be the correct upfferential to the reader to decide whether this approach to solving this particular di equation is any easierofthan theerential alternatives. ThewithLaplace Transform method provides the solution the diff equation a general right hand side in aHowever, simple andinstead straightforward manner. of restricting to this equation particular second order dif ferential equation, let us consider theattention more general � a dtx2 + b dx dt + ex = f ( t) (t � 0)
a, b and c are constants. We will not solve this equation, but discuss it inwhere theIn engineering context of applications. textsthis thesetextconstants are given names that haveaudience, engineering significance. Although is primarily for a mathematical it is nevertheless useful to run through these terms. In mechanics, a is the mass, b is the damping constant (diagrammatically represented by a dashpot), c is the spring constant (or stiffness) and x itself is the displacement of the mass. Inof the electrical circuits,sometimes a is the inductance, b is the resistance, c is the reciprocal capacitance called the reactance and x (replaced by q) is the charge, the rate of change of which with respect to time is later the more familiar electric current. Some of these names will be encountered when we do applied examples. The right-hand side is called the forcing or excitation. In terms of systems engineering, f(t) is the system input, and x(t) is the system output. Since a, b and c are all constant the system described by the equation is termed linear and time invariant. It will seem very odd to a mathematician to describe a but system governed by engineering a time-dependent differential equation as "time invariant" this is standard terminology. the Laplace Transform ofconditions this generalholdsecond orderyields differential equa tion,Taking assuming all the appropriate of course, a(s2 x(s) - sx(O) - x'(O)) + b(sx(s) - x(O)) + cx(s) = /(s). Itorder, is normally a problemwhento assume and !(s) orarenumerical of exponential but just not occasionally problemsthathavex(as)stochastic input,
An Introduction to Laplace Transforms and Fourier Series
56
care needs to be taken. These kind of problems are met in Chapter 6. Making x(s) the subject of this equation gives x(s)
=
f(s) + (as + b)x(O ) + ax'(O) . as2 + bs + c
Hence, incase theory, x(t) can be found by taking Inverse Laplace Transform. The simplest to consider is when x(O) and x' (0) are both zero. The output is then free from any embellishments that might be there because of special start conditions. In this special case, x(s)
=
1 f(s). 2 as + bs + c =
x
This equation isverystarkly inwhytheLaplace form "response transfer function input" which makes it clear Transforms are highly regarded by engi neers. The formula for x( s) can be inverted using the convolution theorem and examples this canorder be found later inequations this chapter. First however let us solve a few simplerof second differential explicitly. Use Laplace Transform techniques to find the solution to the second order differential equation
Example 3.19
cPx + 5 dx + 6x = 2e -t t dt2 dt subject to the conditions x 1 and x' = at t =
=
2:: 0,
0 0. Solution Taking the Laplace Transform of this equation we obtain using the usual overbar notation, s2x(s) - sx(O) - x'(O) + 5(sx(s) - x(O)) + 6x(s) = .C{e -t } = s +2 1
--
where the standard Laplace Transform for any constantas they a has been used. In future the so-called standard forms will not be quoted are listed in Appendix B. Inserting the initial conditions x 1 and x' 0 at t 0 and rearranging the formula as an equation for x(s) gives 2 =
=
=
Factorising gives
x(s)
(s2 + 5s + 6)x(s) = --1 + s + 5. s+
=
2 s+5 (s + 1)(s + 2)(s + 3) + (s + 2)(s + 3) .
57
3. Convolution and the Solution of ODEs
There arefraction manymethod ways ofofinverting expression, the easiest being to use the partial Chapter this 2. Doing this but omitting the details gives: x(8) = 8 +1 1 + +1 2 - +1 3 ·
This inverts immediately to
s
=
x(t) e -t + e -2t - e -3t
s
(t 2: 0).
The terms first term iscomplementary the particularfunction. integral (orTheparticular solution)is and the last two the whole solution overdamped and thereforelittlenon-oscillatory: this isitundeniably the easiest case tointeresting solve as itas ithe nvolves algebra However, is also physically the least . solution diesLaplace away Transform to zero verytechnique quickly. toIt solve does serve to demonstrate the power of the this kind of ordinary differential equation. An obvious question toTransform ask at thiscanjuncture is how is it ofknown whether a particular inverse Laplace be found? We know course that it is obtainable in principle, but thiswhich is a helps practical question.thisInquestion. ChapterOnly 7 we derive aandgeneral form for the inverse to answer a brief i n formal answer can be given at this stage. As long as the function /(8) has a/(8)finitedoesnumber of tofinitezeroisolated singularities thenfunctions inversionarecantogobeahead. If not tend for large s i generalised expected l (see Chapter 2), and if /(8) has a square root or a more elaborate multi-valued nature then directIninversion is made more complicated, althoughandtheretheislikeno formal difficulty. this case, error functions, Bessel functions usually feature in theissolution. Most ofandtheinvolves time, solving second order linear diff e rential equations straightforward no more than elementary transcendental functions (exponential and trigonometric functions). The next problem is more interesting from a physical point of view.
Example 3.20
ential equation
Use Laplace Transforms to solve the following ordinary differ
�x dx 9 . dt2 + 6 dt + x = m (t) (t 2: 0), subject to x(O) 0 and x'(O) = 0. =
s
Solution With no right hand side, and with zero initial conditions, x(t) = 0 would result. HoweverThewithformal the sinusoidal forcing,thetheproblem solutionis turns out toas befor quite interesting way of tackling the same . any secondfollows order that differential equation withTaking constantLaplace coefficients. Thus the the mathematics of the last example. Transforms, equation becomes 1 82x(8) - 8x(O) - x'(O) + 68x(8) - 6x(O) + 9x(8) � =
8 +1
58
An Introduction to Laplace Transforms a nd Fourier Series
this conditions time not lingering standard .C{sin(t) } ). With the bound ary insertedover and thea little tidyingformthis(forbecomes Oncechoice againthiseithertime.partial our Notefractions that or convolution can be used. Convolution is 1 t 3 and {te } = .C {sin(t)} = � .C (s+ 3)2 s +1 so _c-1 { (s2 + 1)�s + 3)2 } = lot re-3r sin(t - r)dr. This yieldsfollows to integration by partsofseveral times (or computer algebra, once).integral The result from application the formula: lot re-3r sin(t - r)dr = - 110 lot e-3r cos(t - r)dr + �10 }t0 e-3r sin(t - r)dr + _.!._10 te-3t . However we omit the details. The result is x(t) - _c- 1 { (s2 + 1)(s1 + 3)2 } = lot re-3r sin(t - r)dr e-3t (5t + 3) - 53 cos(t) + 25 sin(t). 50 2 0 The first term is the particular solution (called the transient response by engi neers since it dies away for large times), and the final two terms the complemen tary function (rather misleadingly called the steady state response by engineers since it persists. Of course there is nothing steady aboutfrequency it!). After a "long time" has elapsed, the response is harmonic at the same as the forc ing frequency. The "long time" is in fact in practice quite short as is apparent from the graph of thefromoutput x(t) which is displayed Figure 3.3.the The graph 5in. However 0. isandindistinguishable a sinusoid after about t = amplitude phase of the resulting oscillations are different. In fact, the combination - 530 cos(t) + ;5 sin(t) = 110 sin(t - t:) 110 is the amplitude and putsis thethephase steady(cos(t: state) into amplitude and phase form. = t' sin(t:) = � for this solution). It is the fact that the €
59
3. Convolution and the Solution of ODEs X
t - 0 . 02 -0 . 0 4 -0 . 06
Figure 3.3: The graph of x(t) response practical frequency applications.is the same as the forcing frequency that is important in Thereto second is very little more to be said in termswithof mathematics about theTheso lution order diff e rential equations constant coefficients. solutions oscillatory, decaying, aThemixture of the two,theoscillatory andandgrowing simplyisaregrowing exponentially. forcing excites response if the orresponse at the same frequency as the natural frequency of the diff e rential equation, resonance occurs. then This leads to enhanced amplitudes at theseresponse. frequen cies. If there is no damping, resonance leads to infinite amplitude Further details about thecoefficients properties can of thebe solution ofspecialist second order diffonerential equations with constant found in books differ ential equations and would be out of place here. What follows are examples where thesolving powerpractical of the Laplace Transform technique isat clearly demonstrated inof terms of engineering problems. It is the very applied end applied mathematics. We return to a purer style in Chapter 4. In thein the following example, a problem3.19intheelectrical circuits islinear solved.diffAserential men tioned constants in a preamble to Example equationresistors, can be given significance in terms of the basic elements of an electrical circuit: capacitors and inductors. Resistors have resistance R mea suredinductance in ohms, capacitors have capacitance measured in farads, and inductors have L measured in henrys. A current j flows through the circuit and the current is related to the charge q by C
.
dq
J= dt .
The laws obeyed by a (passive) electrical circuit are: 1 . Ohm 's law whereby the voltage drop across a resistor is Rj.
60
An Introduction to Laplace Transforms and Fourier Series
300 volts 0.02 Farads
2 henrys
1 6 0hms
Figure 3.4: The simple circuit 2. The voltage drop across an inductor is dj L dt "
3.
The voltage drop across a capacitor is q
c·
Hence in terms of q the voltage drops are respectively R ddtq ' L cPdt2q and q which enables the circuit laws order (Kirchhoff' s Laws) tocoefficients be expressed in terms of diff e rential equations of second with constant (L, R and 1/C). The forcing the rightexample. hand side is supplied by a voltage source, e.g. afunction battery. (input) Here is ona typical C
Example 3.21 Find the differential equation obeyed by the charge for the sim ple circuit shown in Figure 3.4, and solve it by the use of Laplace Transforms given j = 0, q = 0 at t = 0. Solution The current is j and the charge is q, so the voltage drop across the three devices are d q 16 . 16 dq and q = 50q. 2 dtj 2 cP J dt ' 0.002 dt2 =
'
=
61
3. Convolution a nd the Solution of ODEs q
6 . 05 3
2
4
t
5 . 95 5.9
Figurescale3.5:onThe q (t) ; q (O) = 0 although true, is not shown due to the fine the solution q axis This must be equal to 300 (the voltage output of the battery), hence 2 �; + 16 �; + 50q = 300. So, solving this by division by two and taking the Laplace Transform results in 150 s2 q (s) - sq(O) - q'(O) + 8sq(s) - 8q(O) + 25q(s) = s .
Imposing theof initial conditions gives the following equation for ij(s), the Laplace Transform q (t): 150
q(s) = s(s2 + 8s + 25)
=
150
s((s + 4)2 + 9) '
can be isdecomposed using partial fractions or the convolution theorem. Thisformer The easier. Thisbygives: 6(s + 4) _ = 6 q(s) ; - (s + 4)2 + 9
24
The right hand side is now a standard form, so inversion gives: q(t) = 6 - 6e -4t cos (3t) - 8e -4t sin(3t).
This solutionswamped is displayed in exponential Figure 3.5 . Itdecay can term. be seenInthat thetheoscillations are completely by the fact, current is the derivative of this which is:j = 50e-4t sin(3t)
62
An Introduction to Laplace Transforms and Fourier Series
6 4 2 0.5
1.5
2
t
Figure 3.6: The variation of current with time and this is shown in Figure 3.6 as decaying to zero very quickly. This is obviously not typical, as demonstrated by the next example. Here, we have a sinusoidal voltage source which might be thought of as mimicking the production of an alternating current.
Solve the same problem as in the previous example except that the battery is replaced by the oscillatory voltage source 100 sin (3t) .
Example 3.22
Solution The differential equation is derived as before, except for the different right hand side. The equation is
2
d2 q
dt2
+
16
dq
dt
+
. (3t ) . 50q = 100 sm
Taking the Laplace Transform of this using the zero initial conditions q(O) = O, q'(O) 0 proceeds as before, and the equation for the Laplace Transform of q(t) ( q(s)) is 150
=
q(s) = (s2 + 9)((s + 4) 2 + 9) ·
Note the appearance of the Laplace Transform of 100 sin(3t) here. The choice is to either use partial fractions or convolution to invert, this time we use con volution, and this operation by its very nature recreates 100 sin(3t) under an integral sign "convoluted" with the complementary function of the differential equation. Recognising the two standard forms:-
.c- 1 { 82 � 9 } = � sin(3t) and .c- 1 { (s ;) 2 9 } �e-4t sin(3t) , +
gives immediately
q(t)
+
=
= 5� ht e-4(t-T) sin (3(t - r)) sin(3r) dr.
3. Convolution and the Solution of ODEs
63
Figure 3.7: The variation of current with time Using integration by parts but omitting the details gives j
=
d��t) �� (2 cos(3t) + 3 sin(3t)) - �� e-4t (17sin(3t) + 6 cos(3t)). =
What this solution tells the electrical engineer is that the response quickly be comes sinusoidal at the same frequency as the forcing function but with smaller amplitude and different phase. This is backed up by glancing at Figure 3. 7 which displays this solution. The behaviour of this solution is very similar to that of the mechanical engineering example, Example 3.25, that we will soon meet and demonstrates beautifully the merits of a mathematical treatment of the basic equations of engineering. mathematical treatment enables analogies to be drawn between seemingly disparate branches of engineering.
A
3.3.2
Simultaneous Differential Equations
In the same way that Laplace Transforms convert a single differential equation into a single algebraic equation, so they can also convert a pair of differential equations into simultaneous algebraic equations. The differential equations we solve are all linear, so a pair of linear differential equations will convert into a pair of simultaneous linear algebraic equations familiar from school. Of course, these equations will contain s, the transform variable as a parameter. These expressions in s can get quite complicated. This is particularly so if the forcing functions on the right-hand side lead to algebraically involved functions of s. Comments on the ability or otherwise of inverting these expressions remain the same as for a single differential equation. They are more complicated, but still routine. Let us start with a straightforward example.
Solve the simultaneous differential equations dx 2x - 3y, dy = y - 2x, dt dt where x(O) = 8 and y(O) = 3.
Example 3.23
=
64
An Introduction to Laplace Transforms and Fourier Series
Solution Taking Laplace Transforms and inserting the boundary conditions straight away gives:-
sx(s) - 8 = 2x(s) - 3y(s) sy(s) - 3 = y(s) - 2x(s). Whence, rearranging we solve:-
(s - 2)x + 3y = 8
and 2x + (s - 1)y = 3
by the usual means. Using Cramer's rule or eliminating by hand gives the solutions
8s - 17 x(s) = s2 - 3s - 4 , y(s) = s23s- -3s22- 4 .
To invert these we factorise and decompose into partial fractions to give
x(s) = s +5 1 + s -3 4 5 2 y(s ) = These invert easily and we obtain the solution
x(t) = se -t + 3e4t y(t) = se -t - 2e4t . Even if one or both of these equations were second order, the solution method by Laplace Transforms remains the same. The following example hints at how involved the algebra can get, even in the most innocent looking pair of equations! Example 3.24
Solve the simultaneous differential equations � x dy -t dt2 + dt + 3 = 15e � y dx dt2 - 4 dt + 3y = 15 sin(2t)
X
where x = 35, x' = -48, y = 27 and y' = -55 at time t = 0. Solution Taking Laplace Transforms of both equations as before, retaining the standard notation for the transformed variable gives
s2 x - 35s + 48 + si} - 27 + 3x = s 15 +l 30 and s 2 y - 27s + 55 - 4(sx - 35) + 3i} = 2 s + 4.
3. Convolution and the Solution of ODEs
65
x, y
75
50
t
1
Figure 3.8: The solutions x ( t) and y(t) Solving for x and fj is indeed messy but routine. We do one step to group the terms together as follows:
(s2 + 3)x + sfj
=
-4sx- + (s2 + 3)y-
=
15 21 + -s+1 30 27s - 195 + �4 . 35s
-
s +
This time, this author has used a computer algebra package to solve these two equations. partial fraction routine has also been used. The result is
A
x(s)
=
and y(s)
=
30s
s2 + 1
-
s2 + 9
-
--
30s
3 2s 45 s2 + 9 + s + 1 + s2 + 4 3 2 60 s2 + 1 s + 1 + s2 + 4 ·
--
--
-
-
Inverting using standard forms gives x (t)
y(t)
=
=
30 cos(t) - 15 sin(3t) + 3e -t + 2 cos(2t) 30 cos(3t) - 60 sin(t) - 3e -t + sin(2t).
The last two terms on the right-hand side of the expression for both x and y resemble the forcing terms whilst the first two are in a sense the "complementary function" for the system. The motion is quite a complex one and is displayed as Figure 3.8
66
An Introduction to Laplace Transforms and Fourier Series X
y
(a)
E Force
c(y-x)
(b)
[
spring
10 I o I
k(y-x)
t
"' Fon:e
Figure 3.9: The forces due to a a damper and b a spring y
Figure 3.10: A simple mechanical system Having looked at the application of Laplace Transforms to electrical circuits, now let us apply them to mechanical systems. Again it is emphasised that there is no new mathematics here; it is however new applied mathematics. In mechanical systems, we use Newton's second law to determine the motion of a mass which is subject to a number of forces. The kind of system best suited to Laplace Transforms are the mass-spring-damper systems. Newton's second law is of the form
F m
x
where F is the force, rii is the mass 11nd the displacement. The components of the system that also act on the mass m are a spring and a damper. Both of these give rise to changes in displacement according to the following rules (see Figure 3.9). A damper produces a force proportional to the net speed of the mass but always opposes the motion, i.e. c(y - x) where c is a constant and the dot denotes differentiation with respect to t. A spring produces a force which is proportional to displacement. Here, springs will be well behaved and assumed to obey Hooke's Law. This force is k(y - x) where k is a constant sometimes called the stiffness by mechanical engineers. To put flesh on these bones, let us solve a typical mass spring damping problem. Choosing to consider two masses gives us the opportunity to look at an application of simultaneous differential equations.
Figure 3. 10 displays a mechanical system. Find the equations of motion, and solve them given that the system is initially at rest with x = 1 and y = 2.
Example 3.25
Solution Applying Newton's Second Law of Motion successively to each mass
using Hooke's Law (there are no dampers) gives:-
m 1 .X = kz (y - x) - k1 x mzfj = -k3 x - kz (Y - x).
3. Convolution and the Solution of ODEs
67
With the values for the constants m 1 , m2 , k1 , k2 and k3 given in Figure 3.10, the following differential equations are obtained:-
x + 3x - 2y = 0 2y + 4y - 2x = 0. The mechanics (thankfully for most) is now over, and we take the Laplace Transform of both equations to give:-
(s2 + 3)x - 2y = sx(O) + x(O) -x + (s2 + 2)y = sy(O) + y(O). The right hand side involves the initial conditions which are: x(O) = 1, y(O) = 2, x(O) = 0 and y(O) = 0. Solving these equations (by computer algebra or by hand) gives, for fj,
3 + 5s s s y (s) = (s2 +2s4)(s 2 + 1) = s2 + 1 + s2 + 4 and inverting gives
y(t) = cos(t) + cos(2t). Rather than finding x and inverting, it is easier to substitute for y and its second derivative y in the equation X = y + 2y. This finds x directly as
x(t) = cos(t) - 2 cos(2t). The solution is displayed as Figure 3.11. It is possible and mechanically desirable to form the combinations
1 1 3 (y - x) = cos(2t) and 3 (x + 2y ) = cos(t) as these isolate the two frequencies and help in the understanding of the sub sequent motion which at first sight can seem quite complex. This introduces the concept of normal modes which are outside the scope of this text, but very important to mechanical engineers as well as anyone else interested in the be haviour of oscillating systems.
68
An Introduction to Laplace Transforms and Fourier Series X 2
1
3
4
t
-1
-2
-3
Figure 3.11: A simple mechanical system solved 3.4
Using Step and Impulse Functions
In Section 2.3 some properties of Heaviside's Unit Step Function were explored. In this section we extend this exploration to problems that involve differential equations. As a reminder the step function H(t) is defined as H(t) = 01 tt <2::: 00 and a function f(t) which is switched on at time to is represented simply as f(t - t0)H(t - t0). The second shift theorem, Theorem 2.5, implies that the Laplace Transform of this is given by
{
C{f(t - to)H(t - t0)} = e -sto f(s). Another useful result derived in Chapter 2 is C{8(t - a)f(t)} = e -as f(a) where 8(t - a) is the Dirac -8 or impulse function centred at t = a. The prop erties of the 8 function as required in this text are outlined in Section 2.6. Let
us use these properties to solve an engineering problem. This next example is quite extensive; more of a case study. It involves concepts usually found in me chanics texts, although it is certainly possible to solve the differential equation that is obtained abstractly and without recourse to mechanics: much would be missed in terms of realising the practical applications of Laplace Transforms. Nevertheless this example can certainly be omitted from a first reading, and discarded entirely by those with no interest in applications to engineering. Example 3.26
The equation governing the bending of beams is of the form 4 k ddx4y = -W(x)
3.
69
Convolution a nd the Solution of ODEs
y
W(x) X
[
Figure 3.12: The beam and its load W(x)
where k is a constant called the flexural rigidity (the product of Young's Modulus and length in fact, but this is not important here}, and W(x) is the transverse force per unit length along the beam. The layout is indicated in Figure 3.12. Use Laplace Transforms (in x) to solve this problem and discuss the case of a point load. Solution There are several aspects to this problem that need a comment here. The mathematics comes down to solving an ordinary differential equation which is fourth order but easy enough to solve. In fact, only the fourth derivative of y(x) is present, so in normal circumstances one might expect direct integration (four times) to be possible. That it is not is due principally to the form W(x) usually takes. There is also that the beam is of finite length l. In order to use Laplace Transforms the domain is extended so that x E (0, oo) and the Heaviside Step Function is utilised. To progress in a step by step fashion let us consider the cantilever problem first where the beam is held at one end. Even here there are conditions imposed at the free end. However, we can take Laplace Transforms in the usual way to eliminate the x derivatives. We define the Laplace Transform in x as
y(s) =
100 e -"'8y(x)dx
where remember we have extended the domain to oo. In transformed coordinates the equation for the beam becomes:-
k(s4 y(s) - s3 y(O) - s2y'(O) - sy"(O) - y"'(O)) = -W(s). Thus,
y(O) y ' (O) y"(O) y"'(O) y(s) = -W(s) 4ks4 + s + s 2- + s3 + s_
-
-
--
-
and the solution can be found by inversion. It is at this point that the engineer would be happy, but the mathematician should be pausing for thought! The beam may be long, but it is not infinite. This being the case, is it legitimate to define the Laplace Transform in x as has been done here? What needs to be done is some tidying up using Heaviside's Step Function. If we replace y(x) by the combination y(x)[l - H(x - l)] then this latter function will certainly
An Introduction to Laplace Transforms a nd Fourier Series
70
fulfil the necessary and sufficient conditions for the existence of the Laplace Transform provided y(x) is piecewise continuous. One therefore interprets y (s) as
y(s)
= fooo y(x)[1 - H(x - l)]e-x8dx
and inversion using the above equation for y ( s) follows once the forcing is known. In general, the convolution theorem is particularly useful here as W(x) may take the form of data (from a strain gauge perhaps) or have a stochastic character. Using the convolution theorem, we have
{ ��s) } = �fox (x - �)3 W(�)d�. The solution to the problem is therefore y(x)[1 - H(x - l)] = 1 10 x (x - �) 3 W(�)d� r_ - 1
-
6k
1 2 y"(O) + 1 x3 y"'(O). + y(O) + xy'(O) + -x -6 2
If the beam is freely supported at both ends, this is mechanics code for the following four boundary conditions
=
=
y 0 at x 0, l (no displacement at the ends) and
=
=
y" 0 at x 0, l (no force at the ends).
This enables the four constants of integration to be found. Straightforwardly, y(O) 0 and y"(O) 0 so
=
=
= - 1 10x (x - �)3 W(�)d� + xy'(O) + 61 x3y"'(O). The application of boundary conditions at x = l is less easy. One method would be to differentiate the above expression with respect to x twice, but it is unclear how to apply this to the product on the left, particularly at x = l. The following procedure is recommended. Put u(x) = y"(x) and the original y(x)[1 - H(x - l)]
6k
differential equation, now in terms of u(x) becomes k
�u dx2
= -W(x)
with solution (obtained by using Laplace Transforms as before) given by
u(x)[1 - H(x - l)]
= 1 10x (x - �)W(�)d� + u(O) + xu'(O). -k
Hence the following expression for y"(x) has been derived
y"(x)[1 - H(x - l)]
= -� fox (x - �)W(�)d� + y"(O) + xy"'(O).
71
3. Convolution and the Solution of ODEs y
1.5
X
2
-0 . 4 -0 . 6
-
0
.
8
-1
Figure 3.13: The displacement of the beam y(x) with W length l equals 3.
=
1 and k
=
1. The
This is in fact the result that would have been obtained by differentiating the expression for y(x) twice ignoring derivatives of [1 - H(x - l)]. Applying the boundary conditions at x l (y(l ) y ( l ) = 0) now gives the results =
and
y'(O) = 6 l
=
"
l
� 1 (l - �) 3 W(�)d� - 6� 1 (l - �)W(�)d�. z
This provides the general solution to the problem in terms of integrals
y(x)[1 - H(x - l)]
It is now possible to insert any loading function into this expression and calculate the displacement caused. In particular, let us choose the values l = 3, k = 1 and consider a uniform loading of unit magnitude W constant 1. The integrals are easily calculated in this simple case and the resulting displacement is =
x3 9x x4 + y(x) = - 24 4 - 8
=
which is shown in Figure 3.13. We now consider a more realistic loading, that of a point load located at x = l/3 (one third of the way along the bar). It is now the point at which the laws of mechanics need to be applied in order to translate this into a specific
72
An Introduction to Laplace Transforms and Fourier Series
form for W(x). This however is not a mechanics text, therefore it is quite likely that you are not familiar with enough of these laws to follow the derivation. For those with mechanics knowledge, we assume that the weight of the beam is concentrated at its mid-point (x = and that the beam itself is static so that there is no turning about the free end at x = 1 . If the point load has magnitude P, then the expression for W ( x) is
1/2)
W(x) =
� H(x) + P6 (x - �1) - (� w + � P) 6(x).
From a mathematical point of view, the interesting point here is the presence of the Dirac-6 function on the right hand side which means that integrals have to be handled with some care. For this reason, and in order to present a different way of solving the problem but still using Laplace Transforms we go back to the fourth order ordinary differential equation for y(x) and take Laplace Transforms. The Laplace Transform of the right hand side (W(x)) is
W(s) W1s + (�2 w + 3 ) y s k [W + ( �2 w + 3 ) ] + + k [241W 4 � ( �3 ) 3 - � (�w 2 +3 ) ] + + k [ 2l + P ( 3 ) (-2 W +-P32 ) ] '
�P . Pe - l•t The boundary conditions are y = 0 at x = 0, 1 (no displacement at the ends) and y"(O) = 0 at x = 0, 1 (no forces at the ends). This gives P - i ts _ ..!.. y'(O) y"'(O) �P ( )=! 1s5 s4 e s4 s2 s4 •
=
_
This can be inverted easily using standard forms, together with the second shift theorem for the exponential term to give:-
y(x)[1 - H(x - 1)] = _ !
x
+ P
6
x
- 1
6
� P x3
y' (O)x 61 y"' (O)x3 .
Differentiating twice gives
y"(x ) = - -1 -1 Wx2
1 - 1 x - -1
x + y"'(O) x
This is zero at x = l , whence y"' (O) = 0. The boundary condition y(1) = 0 is messier to apply as it is unnatural for Laplace Transforms. It gives
y'(O) = -
� (2� W + 851 p)
so the solution valid for 0 :::; x :::; 1 is
� ( 2�1 x4 - 112 x3 + 2�12 x) - � (:1 l2x - �x3) 3 - � (x - � �) H (x - ��) .
y(x) =
-
3. Convolution and the Solution of ODEs
73
w 1
X
1.5
-0 . 8
k
Figure 3 . 14: The displacement of the beam y(x) with W = 1, = 1 and P = 1. The length l equals 3 This solution is illustrated in Figure 3.14 Other applications to problems that give rise to partial differential equations will have to wait until Chapter 5. 3.5
Integral Equations
An integral equation is, as the name implies, an equation in which the unknown occurs in the form of an integral. The Laplace Thansform proves very useful in solving many types of integral equation, especially when the integral takes the form of a convolution. Here are some typical integral equations:
J(b }((x, y)�(y)dy = f(x)
where }( and f are known. A more general equation is:
A(x) � (x) - .X and
J(b }((x, y)�(y)dy = f(x)
an even more general equation would be:
}(
J(b }((x, y, �(y))dy = f(x).
In these equations, is called the kernel theory of how to solve integral equations
of the integral equation. The general is outside the scope of this text, and we shall content ourselves with solving a few special types particularly suited to solution using Laplace Thansforms. The last very general integral equation is
74
An I ntroduction to Laplace Transforms and Fourier Series
non-linear and is in general very difficult to solve. The second type is called a Fredholm integral equation of the third kind. If A(x) = this equation becomes
1,
¢>(x) -.X lb K(x,y)¢>(y)dy = f(x)
which is the Fredholm integral equation of the second kind. In integral equa tions, x is the independent variable, so a and b can depend on it. The Fredholm integral equation covers the case where a and b are constant, in the cases where a or b (or both) depend on x the integral equation is called a Volterra integral equation. One particular case, b = x, is particularly amenable to solution using Laplace Thansforms. The following example illustrates this. Example 3.27
where f(x)
is
Solve the integral equation
a general function of x.
z ¢>( )
Solution The integral is in the form of a convolution; it is in fact .Xe * x where * denotes the convolution operation. The integral can thus be written
¢>(x) -.Xex ¢>(x) = f(x). Taking the Laplace Thansform of this equation and utilising the convolution theorem gives ¢> -.X s - 1 = fwhere ([> = .C¢> and f = .Cf . Solving for ([> gives s-s -1 -.X1 ) = f ([> ( - s-s(1- +1 .X !- = !- + s - (1.X/+ .X) . ¢> = ) So inverting gives ¢> = f + .XJ e(l+.X)z and x ¢>(x) = f (x) + A fo f (y) e( I + .X) (x - y)dy . *
-
4>
*
This is the solution of the integral equation. The solution of integral equations of these types usually involves advanced methods including complex variable methods. These can only be understood after the methods of Chapter 7 have been introduced and are the subject of more advanced texts (e.g. Hochstadt
(1989)) .
75
3. Convolution a nd the Solution of ODEs
3.6
Exercises
1 . Given suitably well behaved functions J, g and h establish the following properties of the convolution f * g where f * g = lot f(r)g(t - r)dr. f * g = g * /,- f * (g * h) = (f * g) * h, f1 f * f -1 =-1, stating any extra properties f f 1 to exist.
(a) (b) (c) determine such that must possess in order for the inverse
2. Use the convolution theorem to establish
3. Find the following convolutions (a)
t * cos(t), (b) t * t, (c) sin(t) * sin(t), (d) et * t, (e) et * cos(t).
4. (a) Show that
xlim --+0
(b) Show that
(erf(x) ) = ..!.. . .,fii
X
5. Solve the following differential equations by using Laplace Transforms: (a) + 3x x(O)
�;
(b)
(c)
(d)
(e)
= e 2t ,
= 1,
�; + 3x = sin(t), dx . dt2 + 4 dt + 5x = 8 s (t) ,
�x
m
dt2 - 3 dt - 2x = 6,
�x
dx
�� + y = 3 sin(2t),
x(1r)
x(O)
x(O)
= 1, = x'(O) = 0,
= x' (0) = 1,
y(O) = 3, y'(O)
= 1.
76
An Introduction to Laplace Transforms and Fourier Series
6. Solve the following pairs of simultaneous differential equations by using Laplace Transforms: (a)
subject to x(O) = 3, and y(O) = 0. (b) 4 dx + 6x + y dt rPx dy +xdt dt2
=
2 sin(2t)
=
3e- 2t
subject to x(O) = 2, x'(O) = -2, (eliminate y). (c) rPx dy x + 5dt2 dt dx rPy - 4y - 2 dt dt2 subject to x(O)
=
0, x' (0)
=
=
t
=
-2
0, y(O) = 1 and y' (0)
=
0.
7. Demonstrate the phenomenon of resonance by solving the equation:
�: + k2 x
=
A sin(t)
subject to x(O) = x0, x'(O) = v0 then showing that the solution is unbounded as t --? oo for particular values of k.
8. Assuming that the air resistance is proportional to speed, the motion of a particle in air is governed by the equation: dx w d2 x b g dt2 + dt = w . If at t = 0, x = 0 and dx v = - = vo dt show that the solution is gt (avo at ) - g)(l-e-� .::...._ X = - + ...:.....:.___::...:... 2
where a
a
a
= bgjw. Deduce the terminal speed of the particle.
77
3. Convolution and the Solution of ODEs
9.
Determine the algebraic system of equations obeyed by the transformed electrical variables and given the electrical circuit equations
j� , j-;, f3
q3
j1
+
= i2 h = E sin(wt)
Rd1 + L2 �: R1J1. + R3}.3 + C1 q3 = E sm. (wt) h =
where, as usual, the overbar denotes the Laplace Transform.
10. Solve the loaded beam problem expressed by the equation
k dd4y4 -[c - x + (x - c)H(x - c)] for 0 < x < 2c subject to the boundary conditions y(O) = 0, y'(O) = 0, y"(2c) 0 and y"'(2c) = 0. H(x) is the Heaviside Unit Step Function. Find the bending moment ky" at the point x = �c. X
=
wo C
=
11. Solve the integral equation
(t) = t2 +
lot <J>(y) sin(t - y)dy.
4 Fourier Series
4.1
lntrod uction
Before getting to Fourier series proper, we need to discuss the context. To understand why Fourier series are so useful, one would need to define an inner product space and show that trigonometric functions are an example of one. It is the properties of the inner product space, coupled with the analytically familiar properties of the sine and cosine functions that give Fourier series their usefulness and power. Some familiarity with set theory, vector and linear spaces would be useful. These are topics in the first stages of most mathematical degrees, but if they are new, the text by Whitelaw (1 983) will prove useful. The basic assumption behind Fourier series is that any given function can be expressed in terms of a series of sine and cosine functions, and that once found the series is unique. Stated coldly with no preliminaries this sounds preposterous, but to those familiar with the theory of linear spaces it is .not. All that is required is that the sine and cosine functions are a basis for the linear space of functions to which the given function belongs. Some details are given in Appendix C. Those who have a background knowledge of linear algebra sufficient to absorb this appendix should be able to understand the following two theorems which are essential to Fourier series. They are given without proof and may be ignored by those willing to accept the results that depend on them. The first result is Bessel's inequality. It is conveniently stated as a theorem. Theorem 4.1 (Bessel's Inequality) If
79
80
An Introduction to Laplace Transforms and Fourier Series
is an orthonormal basis for the linear space V, then for each a E V the series converges. In addition, the inequality 00
l
L ! (a, en W ::; l all 2
r=l
holds.
An important consequence of Bessel's inequality is the Riemann-Lebesgue lemma. This is also stated as a theorem:Theorem 4.2 (Riemann-Lebesgue) {e1 , e2 , . . . }
be an orthonormal ba Let sis of infinite dimension for the inner product space V. Then, for any a E V lim (a, en ) = 0. n-+oo
This theorem in fact follows directly from Bessel's inequality as the nth term of the series on the right of Bessel's inequality must tend to zero as tends to oo . Although some familiarity with analysis is certainly a prerequisite here, there is merit in emphasising the two concepts of pointwise convergence and uniform convergence. It will be out of place to go into proofs, but the difference is particularly important to the study of Fourier series as we shall see later. Here are the two definitions.
n
Definition 4.3 (Pointwise Convergence) Let { /o , ft , fm , .} · · · ,
· ·
be a sequence of functions defined on the closed interval [a, b]. We say that the sequence {!o , It , . . . , fm , . . . } converges pointwise to f on [a, b] if for each x E [a, b] and € > 0 there exists a natural number N(£, x) such that l fm (x) - f(x) l < € for all � N(£, x) . Definition 4.4 (Uniform Convergence) Let { /o , ft , · · · Jm , .} be a sequence of functions defined on the closed interval [a, b]. We say that the sequence {!o , /1, . . . , fm , . . . } converges uniformly to f on [a,b] if for each € > 0 there exists a natural number N(€) such that l fm (x) - f(x) ! < € for all � N(£) and for all x E [a, b]. m
· ·
m
81
4. Fourier Series
It is the difference and not the similarity of these two definitions that is im portant. All uniformly convergent sequences are pointwise convergent, but not vice versa. This is because N in the definition of pointwise convergence depends on in the definition uniform convergence it does not which makes uniform convergence a global rather than a local property. The N in the definition of uniform convergence will do for any in Armed with these definitions and assuming a familiarity with linear spaces, we will eventually go ahead and find the Fourier series for a few well known functions. We need a few more preliminaries before we can do this.
x;
x [a, b].
Definition of a Fourier Series
4.2
As we have said, Fourier series consist of a series of sine and cosine functions. We have also emphasised that the theory of linear spaces can be used to show that it possible to represent any periodic function to any desired degree of accuracy provided the function is periodic and piecewise continuous (see Appendix C for some details) . To start, it is easiest to focus on functions that are defined in the closed interval - 1r , 1r] . These functions will be piecewise continuous and they will possess one sided limits at -1r and 1r. So, using mathematical notation, we have -1r, 1r] -+ C. The restriction to this interval will be lifted later, but periodicity will always be essential. It also turns out that the points at which is discontinuous need not be points at which is defined uniquely. As an example of what is meant, Figure shows three possible values of the function
[
f:[
f
f
fa =
4.1
{ 01 tt < 11 >
at = These are = /2, and, although we do = 0, = and need to be consistent in order to satisfy the need for to be well defined, in theory it does not matter exactly where is. However, Figure is the right choice for Fourier series; the following theorem due to Dirichlet tells us why.
t 1.
/a (1)
fa (1) 1
/a (1) 1 !a (t) /a (1)
4.1c
If f is a member of the space of piecewise continuous functions which are 21r periodic on the closed interoal [-1r, 1r] and which has both left and right derivatives at each x E [-1r, 1r] , then for each x E [-1r, 1r] the Fourier series of f converges to the value f(x_ ) + f(x+ ) 2 At both end points, x ±1r, the series converges to
Theorem 4.5
=
82
An Introduction to Laplace Transforms and Fourier Series f,(t)
l=l
(a) f,(t)
l=l
(b) f,(t)
•
l=l
(c)
Figure 4.1: (a) la (1)
=
O,(b) la(1)
=
1,(c) la (1)
=
1/2
The proof of this is beyond the scope of this text, but some comments are usefully made. If x is a point at which the function I is continuous, then l(x _ ) + l(x+ ) = l(x) 2 and the theorem is certainly eminently plausible as any right hand side other than l(x) for this mean of left and right sided limits would be preposterous. It is still however difficult to prove rigorously. At other points, including the end points, the theorem gives the useful result that at points of discontinuity the value of the Fourier series for I takes the mean of the one sided limits of I itself at the discontinuous point. Given that the Fourier series is a continuous function (assuming the series to be uniformly convergent) representing I at this point of discontinuity this is the best that we can expect. Dirichlet's theorem is not therefore surprising. The formal proof of the theorem can be found in graduate texts such as Pinkus and Zafrany (1997) and depends on careful application of the Riemann-Lebesgue lemma and Bessel's inequality. Since I is periodic of period 2rr, I ( 1r ) = I ( -rr) and the last part of the theorem is seen to be nothing special, merely a re-statement that the Fourier series takes the mean of the one sided limits of I at discontinuous points. We now state the basic theorem that enables piecewise continuous functions to be able to be expressed as Fourier series. The linear space notation is that used in Appendix C to which you are referred for more details.
83
4. Fourier Series
Theorem 4.6
The sequence of functions
{ �' sin(x), cos(x), sin(2x), cos(2x), . . . }
form an infinite orthonormal sequence in the space of all piecewise continuous functions on the interval [-11", ] where the7 inner product (!,g) is defined by r 1 1 (/,g) - Jgdx the overbar denoting complex conjugate. Proof First we have to establish that (/,g) is indeed an inner product over the 1r
=
11"
- 1r
1r .
space of all piecewise continuous functions on the interval [-11", ] The integral
certainly exists. As f and g are piecewise continuous, so is the product fg and hence it is (Riemann) integrable. From elementarycontinuous propertiesfunctions of integration it isaneasy to deduce that the space of all piecewise is indeed inner productidentities, space. There are no surprises. 0 and 1 are the additive and multiplicative f is the additive inverse and the rules of algebra ensure associativity, some time establishingdistributivity that the setand commutativity. We do, however spend { �' sin(x), cos(x), sin(2x), cos(2x), ... } is orthonormal. To do this, it will be sufficient to show that \ �' �) 1 , (sin(nx), sin(nx)) 1, (cos(nx),cos(nx)) 1 , ( � ,sin(nx) ) 0 \ �' cos(nx) ) 0, (cos(mx), sin(nx)) 0, (cos(mx),sin(nx)) 0, (sin(mx),sin(nx)) 0, with =/; n; n 1 , 2, . . .. Time spent on this is time well spent as orthonor mality lies we do not usebehind shortmost cuts.of the important properties of Fourier series. For this, r -21dx 1 trivially !_11" J_'K 7r 1 1 (sin(nx), sin(nx)) -11" sin2 (nx)dx =
=
=
=
m
m,
=
=
=
=
-1r
=
=
=
,
84
An Introduction to Laplace Transforms and Fourier Series
11r 11r (1 - cos(2nx))dx 1 for all n 1r 2 1r 1 (cos(nx), cos(nx)) - 1 cos (nx)dx 1r - 1r 121r 1r - 1 ( 1 cos(2nx))dx 1 for all n -1r 1 1r 1 ( �' cos(nx) ) 1r 1-1r v 2 cos(nx)dx 11rv 2 [-n1 sin(nx)] 1r 0 for all n -1r 1r 1 1 ( � , sin(nx) ) 7r 1-1r v 2 sin(nx)dx 17r../2 [ n1 1r -- - - cos(nx)] 1n1rv 2 ((-1)n - (-1-)1rn) 0 for all n 11r 1r (cos(mx), sin(nx)) - 1 cos(mx) sin(nx)dx -1r =
=
=
2
=
=
-
r;:;
r;:;
=
=
=
+
=
-
r;:;
=
r;:;
=
=
=
=
=
� /_: (sin((m + n)x) sin((m - n)x)) dx __!._ [- cos ((m n)x) cos ((m - n)x) 1r ] 21r +
2
+
_
m+n
m-n
-1r
=
O , (m tf n)
since the function in the square bracket is the same at both -1r and 1r. H m n, sin((m - n)x) 0 but otherwise the arguments go through unchanged and (cos(mx), sin(mx)) 0, m, n 1, 2 . . .. Now =
=
=
(cos(mx), cos(nx))
=
=
=
1 11r cos(mx) cos(nx)dx 127r 1-1r1r (cos((m n)x) cos((m - n)x)) dx 1r n)x) sin((m - n)x) 1r 121r [-sin((m as m 1 n m - n ] - 1r n as all functions are zero at both limits.
1r
+
=
-
=
0
Finally, (sin(mx), sin(nx))
=
=
+ m+
+
...J.
+
1r sin(mx) sin(nx)dx .!.j 1r - 1r 127r 11r (cos((m - n)x) - cos((m -1r
+ n)x)) dx
85
4. Fourier Series
= 0 similarly to the previous result.
Hence the theorem is firmly established.
0
We have in the above theorem shown that the sequence
{ �' sin(x), cos(x), sin(2x), cos(2x), . . } .
is orthogonal. It is in fact also true that this sequence forms a basis (an or thonormal basis) for the space of piecewise continuous functions in the interval [-1r, 1r]. This and other aspects of the theory of linear spaces, an outline of which is given in Appendix C thus ensures that an arbitrary element of the linear space of piecewise continuous functions can be expressed as a linear combination of the elements of this sequence, i.e. ao ..j2 + a 1 cos(x) + a2 cos(2x) + · · · + an cos(nx) + · · · f(x) + b1 sin(x) + b2 sin(2x) + · · · + bn sin(nx) + · · · so f(x) ,.....,
00
.j; + � (an cos(nx) + bn sin(nx))
- 1r < x < 1r
(4.1)
with the tilde being interpreted as follows. At points of discontinuity, the left hand side is the mean of the two one sided limits as dictated by Dirichlet's the orem. At points where the function is continuous, the right-hand side converges to f(x) and the tilde means equals. This is the "standard" Fourier series expan sion for f(x) in the range -7r < x < 1r. Since the right hand side is composed entirely of periodic functions, period 21r, it is necessary that f(x) = f(x + 2N1r)
N = 0, ±1, ±2, . . . .
The authors of engineering texts are happy to start with Equation 4.1, then by multiplying through by sin(nx) (say) and integrating term by term between -7r and 1r, all but the bn on the right disappears. This gives
Similarly
1
1 1r f(x) sin(nx)dx. - 1r 1 1r f(x) cos(nx)dx. an = 7r -1r bn
= 7r
-
1
Of course, this gives the correct results, but questions about the legality or otherwise of dealing with and manipulating infinite series remain. In the context of linear spaces we can immediately write
86
An Introduction to Laplace Transforms and Fourier Series
is a vector expressed in terms of the orthonormal basis
eo = so
� , ea,. = cos(nx), e13,. = sin(nx), 1 11r f (x) cos(nx)dx ( !, e aJ = 7r
and
- tr
1
r f(x) sin(nx)dx. ( f, etJJ = ; }_ tr 00
The series
f = L (!, ek} ek k=O
k
where ek, = 0, . . . . . . is a renumbering of the basis vectors. This is the standard expansion of f in terms of the orthonormal basis and is the Fourier series for f. Invoking the linear space theory therefore helps us understand how it is possible to express any function piecewise continuous in [-1r, 1r] as the series expansion
(4. 1),
f (x) '"V
00
� + � (an cos(nx) + bn sin(nx))
- 1r < x < 1r
an 1 1-1rtr f (x) cos(nx)dx,
where
=
7r
-
bn 1 1-tr1r f(x) sin(nx)dx, n = 0, 1, 2, . . . .
and
=
7r
-
ao
and remembering to interpret correctly the left-hand side at discontinuities to be in line with Dirichlet's theorem. It is also now clear why is multiplied by in all that has gone before, it is to ensure orthonormality. Unfortunately books differ as to where the factor goes. Some use = with the factor in a separate integral for This should not done here as it contravenes the defi nition of orthonormality which is offensive to pure mathematicians everywhere. However it is standard practice to combine the factor in the coefficient with the same factor that occurs when computing from. f(x) through the orthogonality relationships. The upshot of this combination is the "standard" Fourier series which is adopted from here on:
1/..;2
ao 1
a0 .
f(x) '"V
ao
1 ao + L (an cos(nx) + bn sin(nx))
2
00
n=l
1/..;2
- 1r < x < 1r
1/2
87
4. Fourier Series
where
an = -1r1 1-1T1T f(x) cos(nx)dx,
bn = -1r1 1-1T1T f(x) sin(nx)dx, n 0, 1, 2, . . . . Wewho are perhaps now readyaretoa dolittlesomeimpatient practicalwithexamples. There is good news for those all this theory It is notFourier at all .calculate necessary to understand about linear space theory in order to series. earlier theory givesdecisive the framework in key whichquestions Fourier that seriescanoperate asin awkward well The as enabling us to give answers to ariseof or controversial cases, for example if the existence or uniqueness a particular Fourier series is in question. The first example is not controversial.
and
=
Example 4. 7 Determine the Fourier series for the function 1r
f(x) 2x + 1 - < x < 1r f(x) f(x + 27r) x E R =
where Theorem 4.5 applies at the end points.
1r ,
Solution As f (x) is obviously piecewise continuous in [ -1r, ] in fact the only discontinuities occurring at the end points, we simply use the formulae an = -1r1 1-1T1T f(x) cos(nx)dx, and bn = -1 1-1T1T f(x) sin(nx)dx to determine the Fourier coefficients. Now, :;1 }r_1T (2x + 1) cos(nx)dx 1 [ (2x + 1) sm. (nx)] 1T - -1 11T -Sin 2 . n (nx) dx (n =I 0) 1r n 1r 2 0 + .!. 2 [cos(nx)]� 1T 0 7rn since cos(n1r) ( - 1) n If n = 0 then ao = :;1 }r_ 1T (2x + 1)dx .!.1r [x2 + x] 1T-1T = 2 1r
=
-1r =
=
.
=
-1r
88
An Introduction to Laplace Transforms and Fourier Series
bn
= = = =
1
r1r (2x + ; l-
1) sin(nx)dx 1r !_ (2x + 1) cos(nx) + !_1r r �n cos(nx)dx 1r n }_1r !_ - (27r + 1) (-27r + 1) (-1) n + 1r
[-
[
]
n
- 1r
]
n
- �n ( -1) n .
Hence f(x)
oo (-1) n "' 1 - ( E -sin ( nx) , n=l n
-1r
<x<
oo (-1)n
From this series, we can deduce the Fourier series for x
1r.
as
follows:-
we have 2x + 1 "' 1 - 4 L -- sin(nx) , x E [ -1r, 1r]
n=l
so x
n
oo n+ l "' 2 L ( -1)n sin(nx) n=l
gives the Fourier series for x in [ -1r, 1r] . Thus the Fourier series for the general straight line must be y "' c -
oo
n (-1) 2m L - sin(nx) . n=l n
y =
mx + c in [ -7r,
1r
]
The explanation for the lack of cosine terms in these Fourier series follows later after the discussion of even and odd functions. Here is a slightly more involved example. Example 4.8 Find the Fourier series for the function f(x)
=
e"' ,
f(x + 27r)
=
- 1r
< x < 1r
f(x), x E R
where Theorem 4 . 5 applies at the end points.
Solution This problem is best tackled by using the power of complex numbers. We start with the two standard formulae:
an = -7r1 and
11r
-1r e"' cos(nx)dx
89
4. Fourier Series
and form the sum an + ibn where i = A. The integration is then quite straightforward
Hence, taking real and imaginary parts we obtain an =
2 sinh(7r) b n7r(1 + n2) '
_
_
2n(-1) n sinh(7r) . 7r (1 + n2)
In this example a0 is given by ao = � sinh(1r), 1r
hence giving the Fourier series as f(x) =
f:
sinh(1r) � (-1) + sinh(1r) : (cos(nx) - n sin(nx)) , -11" < x < 1r. 1 7r 7r n= l + n
Let us take this opportunity to make use of this series to find the values of some infinite series. The above series is certainly valid for x = 0 so inserting this value into both sides of the above equation and noting that f(O)( = e0 ) = 1 gives sinh(1r) 2 � ( - 1) n 1-_ + - � -- . 7r 7r n=l 1 + n2 _
Thus
_.;.....;..
oo ( - 1) n 1r 1 -- = - cosech (1r) - 2 n= l 1 + n2 2 � �
and
oo
oo ( 1) n ( 1) n = 2 2: � + 1 = 1rcosech (1r). L � n= -oo 1 + n n= l 1 + n
Further, there is the opportunity here to use Dirichlet's theorem. Putting x = 1r into the Fourier series for ex is not strictly legal. However, Dirichlet's theorem states that the value of the series at this discontinuity is the mean of the one
90
An Introduction to Laplace Transforms and Fourier Series
�
sided limits either side which is (e _ ,.. + e'��" ) = cosh rr. The Fourier series eval uated at x = 1r is 1 sinh(rr) 2 . -+ smh(1r 2 7r 1r n=l 1 + n as cos(nrr) = (-1) n and sin(nrr) = 0 for all integers n . The value of f(x) at x = 1r is taken to be that dictated by Dirichlet's theorem, viz. cosh(rr). We therefore equate these expressions and deduce the series _.:.....:..
00
� �
and
) Loo
-
_
7r
1
1
= -coth (rr) - 2 n=l 1 + n2 2
00
L n= - oo 1
--
1
=2 + n2
-
00
1 + 1 = rrcoth (rr). L 1 + n=l n2 --
Having seen the general method of finding Fourier series, we are ready to remove the restriction that all functions have to be of period 2rr and defined in the range [-rr, rr]. The most straightforward way of generalising to Fourier series of any period is to effect the transformation x --t rrxfl where l is assigned by us. Thus if x E [-rr, rr], rrxfl E [-l, l]. Since cos(rrxfl) and sin(rrx/l) have period 2l the Fourier series valid in [-l, l] takes the form f(x) where
1
"'
2 ao +
Ln=l (an cos ( -rrxl-) 00
n
/ _
+ bn sm
an = l1 J[1 (x) cos
and
bn =
.
(nrrx )) - , l
( nrrx ) dx l
-
� [l f(x) sin ( n;x ) dx.
-[ < X < [,
(4.2) (4.3)
The examples of finding this kind of Fourier series are not remarkable, they just contain (in general) messier algebra! Here is just one example. Example 4.9 Determine the Fourier Series of the function
f(x) l x l , -3 ::; x ::; 3, f(x + 6). f(x) = Solution The function f(x) = lxl is continuous, therefore we can use Equa tions 4.2 and 4.3 to generate the Fourier coefficients. First of all
1 13 lxldx = 3. -3
ao = 3
4. Fourier Series
91
Secondly, an =
� 1: lxl cos (n;x ) dx � 13 x cos (n;x ) dx
3x x x sin (n?r ) dx 3 [ n1r sin (n7r3 ) ] 03 - �3 3}{30 � n1r 3 1rx 0 + 2. [ � cos (n ) ] n1r n1r 3 0 �
=
so an =
�
(n )2
[-1 + (-1) n].
This is zero if n is even and -12/(n7r) 2 if n is odd. Hence a2k+ 1
=
-
12 , k (2k + 1)2 1r2
=
0, 1, 2, . . .
Similarly,
=
1rx ) dx + -1 13 x sin (n-1rx ) dx - -31 1-° xsm. (n-3 30 3 0 for all3 n.
Hence
2n - 1 1 � cos [ -- 1rx] , 0 � x � 3, f (x) 23 - 7r122 � 1)2 (2n 3 note the equality; f (x) is continuous on [0, 3] hence there is no discontinuity at the end points and no need to invoke Dirichlet's theorem. =
4.3
_
Odd and Even Functions
This topic is often met at a very elementary level at schools in the context of how graphs of functions look on the page in terms of symmetries. However, here we give formal definitions and, more importantly, see how the identification of oddness or evenness in functions literally halves the amount of work required in finding the Fourier series.
92
An Introduction to Laplace Transforms and Fourier Series f(x)
X
(a)
Figure 4.2: Definition 4.10 A
for all values of x. Definition 4.11 A
a
(b)
An even function, b An odd function
function f(x) is termed even with respect to the value a if f(a + x) = f(a - x) function f(x) is termed odd with respect to the value a if f(a + x) = -f(a - x)
for all values of x. The usual expressions "f(x) is an even function" and "f(x) is an odd function" means that a = 0 has been assumed. i.e. f(x) = f(-x) means f is an even function and f(x) = -f( -x) means f is an odd function. Well known even functions are :-
Well known odd functions are
x, sin(x) , tan(x). An even function of x, plotted on the (x, y) plane, is symmetric about the y axis. An odd function of x drawn on the same axes is anti-symmetric (see Figure 4.2).
The important consequence of the essential properties of these functions is that the Fourier series of an even function has to consist entirely of even functions and therefore has no sine terms. Similarly, the Fourier series of an odd function must consist entirely of odd functions, i.e. only sine terms. Hence, given 00
f(x) = 2ao + L (an cos(nx) + bn sin(nx)) - 1r < x < 1r n=l if f(x) is even for all x then bn = 0 for all n. If f(x) is odd for all x then an = 0 for all n . We have already had one example of this. The function x is odd, and the Fourier series found after Example 4. 7 is oo -l) n+ l x = 2 nL ( n sin(nx), -1r < x < 1r =l 1
which has no even terms.
93
4. Fourier Series
Determine the Fourier series for the function f(x) x2 f(x) f(x + 2n), � x � Solution Since x2 ( - x) 2 , f(x) is an even function. Thus the Fourier series consists solely of even functions which means bn 0 for all n. We therefore Example 4.12
=
=
-
1r.
1!"
=
compute the
=
an 's as follows
(n =I O)
also an
[xn2 sin(nx)] 1r 1r .!. }r 2nx sin(nx)dx 2 jtr cos (nx) dx 0 + -1 [ n2x cos(nx)] - nn - 1r n .!. 1r
=
so
-
-
1r
1r
2
1r
-
1r
-1r
n .i_ n2 ( -l )
oo f(x) �2 + 4 L (�1)2 n cos nx n=l =
- 1r
� x < 1r.
This last example leads to some further insights. If we let = in the Fourier series just obtained, the right-hand side is
x 0
hence so
1 - 221 + 312
-
••• =
11"2 . 12
This is interesting in itself as this series is not easy to sum. However it is also possible to put = 1r and obtain
x
00 1 11"2 �32 + 4 "" n2 L...,; n=l =
94
An Introduction to Laplace Transforms and Fourier Series
i.e.
00 1 1r2 6 L n= l n2 =
.
Note that even periodic functions are continuous, whereas odd periodic functions are, using Dirichlet's theorem, zero at the end points. We shall utilise the properties of odd and even functions from time to time usually in order to simplify matters and reduce the algebra. Another tool that helps in this respect is the complex form of the Fourier series which is derived next. 4.4
Complex Fourier Series
Given that a Fourier series has the general form
00 f(x) "' 21 ao + L (a cos(nx) + bn sin(nx) ) , n=l n we can write . 1 . cos (nx) - ( emx + e- mx ) 2
x < 1r,
-1r <
=
and
. 1 . 2 If these equations are inserted into Equation 4.1 then we obtain sin
(nx)
=
J(x) "' where
i (emx - e-mx ).
Cneinx n=-oo 00 L
Cn � (an - ibn ), C-n � (an + ibn), Using the integrals for an and bn we get . dx and C-n Cn 21r1 1 f(x)e-mx =
=
= -
1r
-1r
Co =
� ao, n 1r
=
1, 2, . . . .
.
1 f(x)emx dx. 21r 1
= -
- 1r
This is called the complex form of the Fourier series and can be useful for the computation of certain types of Fourier series. More importantly perhaps, it enables the step to Fourier Transforms to be made (Chapter 6) which not only unites this chapter and its subject, Fourier Series, to the earlier parts of the book, Laplace Transforms, but leads naturally to applications to the field of signal processing which is of great interest to many electrical engineers. Example 4.13
Find the Fourier series for the function f(t) t2 + t, -1r 5: t 5: 1r, f(t) f(t + 27r). =
=
95
4. Fourier Series
Solution We could go ahead and find the Fourier series in the usual way. How ever it is far easier to use the complex form but in a tailor-made way as follows. Given -1r f(t) cos(nt)dt an = ;1 r
J
and
r1r f(t) sin(nt)dt bn = ;1 }_
we define so that
so and
1
7r 2 ao = -7r1 (t + t)dt = -32 rr2 • -7r
These can be checked by direct computation. The Fourier series is thus, assum ing convergence of the right hand side,
(
_ 2 sin(nt) ( _ 1) n 4 cos(nt) f(t) ,..., �3 7r2 + � � n2 n n=l
)
- 1r <
t < 1r .
We shall use this last example to make a point that should be obvious at least with hindsight! In Example 4. 7 we deduced the result f(x)
=
oo -(-1) n sin(nx),
2x + 1 ,..., 1 - 4 L
n=l
n
-1r
� x � 1r.
96
An Introduction to Laplace Transforms and Fourier Series
x
The right hand side is pointwise convergent for all E [-n, n]. It is there fore legal (see Section 4.6) to integrate the above Fourier series term by term indefinitely. The left hand side becomes
1t (2x + l)dx The right hand side becomes t-4
=
t2 + t.
oo � ( l) n cos(nt). L n
n=l
This result may seem to contradict the Fourier series just obtained, viz. t2 + t
,...._
2 3
-1!"
oo ( - l )n 2+L n=l
) ( 4 cos(nt) - 2 sin(nt) n �
----'----'-
as the right hand sides are not the same. There is no contradiction however as t-4
)n
oo
L (�12
n=l
cos(nt)
is not a Fourier series due to the presence of the isolated t at the beginning. From a practical point of view, it is useful to know just how many terms of a Fourier series need to be calculated before a reasonable approximation to the periodic function is obtained. The answer of course depends on the specific function, but to get an idea of the approximation process, consider the function f(t)
=
f(t + 2),
which formally has the Fourier series f(t) = 3
_
f sin(nt) . 1T n=l n
�
The sequence formed by the first seven partial sums of this Fourier series are shown superimposed in Figure 4.3. In this instance, it can be seen that there is quite a rapid convergence to the "saw-tooth" function f(t) = t + 2. Problems arise where there are rapid changes of gradient (at the corners) and in trying to approximate a vertical line via trigonometric series (which brings us back to Dirichlet's theorem). The overshoots at corners (Gibbs' phenomenon) and other problems (e.g. aliasing) are treated in depth in specialist texts. Here we concentrate on finding the series itself and now move on to some refinements. 4.5
Half Range Series
There is absolutely no problem explaining half range series in terms of the normed linear space theory of Section 4.1. However, we shall postpone this
4. Fourier Series
97
f
t
4.3: The first seven partial sums of the Fourier series for the function f(t) t + 2, 0 � t � 2, f(t) f(t + 2) drawn in the range 0 � t � 8.
Figure
=
=
until half range series have been explained and developed in terms of even and odd functions. This is entirely natural, at least for the applied mathematician! Half range series are, as the name implies, series defined over half of the normal range. That is, for standard trigonometric Fourier series the function is defined only in instead of [ The value that takes in the other half of the interval, is free to be defined. If we take
[0, rr] [-rr, 0]
f(x)
f(x)
]
f(x)
- rr , rr .
=
f(x) f(-x)
f(x)
that is is even, then the Fourier series for can be entirely expressed in terms of even functions, i.e cosine terms. If on the other hand =
f(x) -f(-x)
f(x)
then is an odd function and the Fourier series is correspondingly odd and consists only of sine terms. We are not defining the same function as two different Fourier series, for is different, at least over half the range (see Figure We are now ready to derive the half range series in detail. First of all, let us determine the cosine series. Suppose
4.4).
f(x) =
1r
f(x) f(x + 2 ) -rr � x � and, additionally, f(x) = f(-x) so that f(x) is an even function. rr ,
Since the
An Introduction to Laplace Transforms and Fourier Series
98
f(x)
f(x)
X
X
(a)
(b)
4.4: f(x) is an even function (cosine series) , b f(x) is an odd function
Figure a (sine series) formulae
an = -7r1 111" f(x) cos(nx)dx
bn = -7r1 111"11" f(x) sin(nx)dx have already been derived, we impose the condition that f(x) is even. It is then and
easy to see that
-
11"
an = -7r2 111"0 f(x) cos nxdx and bn = 0. For odd functions, the formulae for an and bn are an = 0 and bn = -7r2 111"0 f(x) sin(nx)dx.
The following example brings these formulae to life. Example 4.14
Determine the half range sine and cosine series for the function f(t) = t2 + t, 0 :::; t :::; 7r. Solution We have previously determined that, for f(t) = t2 + t n an = 4(-n 21 ) ' bn = n-(1)n . 2 For this function, the graph is displayed in Figure 4.5 . If we wish f(t) to be even, then f(t) = f( -t) and so an = -7r2 111"0 (t2 + t) cos(nt)dt, bn = 0. On the other hand, if we wish f(t) to be odd, we require f(t) = -f( - t), where b� = -7r2 111"0 (t2 + t) sin(nt)dt, a� = 0.
4. Fourier Series
99 f 12
10 8
2
f(t) = t2 + t,
4.5:
-7r
Figure The graph shown for the entire range emphasise that it might be periodic but it is neither odd nor even
:::;
t :::; 1r to
Both of these series can be calculated quickly using the same trick as used to determine the whole Fourier series, namely, let k
= an + ibn' = -7r 1 (t2 + t)emtdt. 2
.
1r
0
We evaluate this carefully using integration by parts and show the details. k
= -7r 1 (t2 + t)eintdt = �1r [ t2in+ t eint] �1r r tin+ 1 eintdt .(zn+)21 eint + (zn. )3 eint] tr = -1r [ t2zn.+-t eint - � - 1r2 + 7r 27r + 1 [1r zn. + n2 + n3i ] (_ 1)n 1r
2
0
1r
0
2
-
2t
2
}0
2
-
-
2
0
from which and
n = �7r [- (7r2 n+ 7r) (- 1)n + �n3 (( - 1)n - 1)] . The constant term a0 is given by ao = -7r 1 (t2 + t)dt b'
2
1r
0
100
An Introduction to laplace Transforms and Fourier Series
lO 5 -3
-2
2
-5
3
t
-10
Figure function4.6: The function f(t) displayed both as an even function and as an odd
so the constant term in the Fourier series (ao/2) is even
Therefore we have deduced the following two series: the series for t2 + t ! [ 7r2 _::: ] + � � [ 27r + 1 ( -1) n _.!._ ] cos(nt), x E (0, 1r) n2 2 3 2 1r n=l L.J and the odd series for t2 + t (1r2 + 1r) (-1)n+ l �3 ((-1)n - 1)] sin(nt), x E (0,1r). �1r � [ L.J n n n=l They framefunctions this in terms of the theory of linear spaces,arethepictured originallyin Figure chosen 4.6. set ofTobasis �· cos(x), sin(x), cos(2x), sin(2x), . . . is no longer a basis in the halved interval [0, 1r]. However the sequences 1 V2 ' cos(x), cos(2x), . . . and sin(x), sin(2x), sin(3x) . . . are, separately, both bases. Half range series are thus legitimate. +
n2
+
+
:
101
4. Fourier Series
4.6
Properties of Fourier Series
In Fourier this section we shall be concerned withdifferentiation the integrationof Fourier and diffseries erentiation ofposes series. Intuitively, it is the that more problems than integration. This is because diff e rentiating cos(nx) or sin(nx) with respect to x gives n sin(nx) or ncos(nx) which for large n are both larger original terms.in magnitude. Integration For on thethoseotherfamiliar hand gives sin(nxin)fnmagnitude or -cos(nxthan)jnthe, both smaller with numerical analysis this numerical comes nointegration surprise whichnumerical differentiation always needs more care than by comparison The following theorem covers the differentiation of Fourier series. is safe. Theorem 4.15 If f is continuous on [ ) and piecewise differentiable in ( ) which means that the derivative f' is piecewise continuous on [ ) and if f (x) has the Fourier series -
as
as
-1r, 1r
-11", 1r
-11", 1r ,
00
f(x) "' 21 ao + L {an cos(nx) + bn sin(nx)} n=l then the Fourier series of the derivative of f(x) is given by /'(x) "' L {-nan sin(nx) + nbn cos(nx)}. n=l 00
The proof ofofthisa Fourier theoremseries follows standard analysis andandis can not given here.always The integration poses less of a problem virtually take place. The conditions for a function f(x) to possess a Fourier series are similar tointegrating those required forby term integrability (piecewise continuity is sufficient), therefore term can occur. A minor problem arises because the result is not necessarily anothertermFourier series.thisAisterm linearFormally, in x is pro duced by integrating the constant whenever not zero. the following theorem covers the integration of Fourier series. It is not proved either, although'satheorem. related more general result is derived a little later a precursor to Parseval ] and has the Theorem 4.16 Iff is piecewise continuous on the interval [ Fourier series as
- 1r , 1r
00
f(x) "' 21 ao + L {an cos(nx) + bn sin(nx)} n=l - 1r , 1r ,
then for each x E [ ) /_: f(t)dt �ao(x + 1r) + � [� sin(nx) � (cos(nx) - cos(n1r))] and the function on the right converges uniformly to the function on the left. =
-
102
An Introduction to Laplace Transforms and Fourier Series
Let us discuss the details of differentiating integrating Fourier series via 3 , x2 and xandin the the three series for the functions x range [ ] Formally, these three functions can be rendered periodic of period 27T by demanding that forFourier each,series f(x) themselves f(x + 27r)canandbeusing Theorem 4.5 at the end points. The three derived using Equation 4.1 and are 2 (6 - 1r2n2) sin(nx) L ) -l) n 3 n n=l oo 1T2 4( l)n 3 + :2::: :2 cos(nx) n=l :2::: n2 ( -l) n+ l sin(nx) n=l x <seriesWefor state without proof the following factsthe about allthesevalidthreeforseries.
=
00
X "-'
-1r
00
-
1r .
-
L....i
00
oo
=
-1r, 1r .
=
-1r, 1r
4. Fourier Series
103
series
00 F(t) "' 2 Ao + n=l L (An cos(nt) + Bn sin(nt)) , -1r < t < 1r, and the usual periodicity F(t) = F(t + 21r). Now suppose that we can define another function G(x) through the relationship jt G(x)dx = aot + F(t). 2 We the task of determining thatthen F(t)sethasourselves a full range Fourier series we havethe Fourier series for G(x). Using An = -7r 1 F(t) cos(nt)dt, and Bn = -7r 1 F(t) sin(nt)dt. By the fundamental theorem of the calculus we have that F'(t) G(t) - � ao. Now, evaluating the integral for An by parts once gives 1 F'(t) sin(nt)dt, An = - n7r and similarly, 1 Bn = n7r -tr F'(t) cos(nt)dt. Hence, writing the Fourier series for G(x) as G(x) = 2 ao + nL00=l (an cos(nx) + bn sin(nx)) , -7r < x < 1r, and comparing with G(t) = 2ao + F'(t) gives an bn and Bn = -. An = - n n We now need to consider the values of the Fourier series for F(t) at the end points t = -1r and t 1r. With t = 1r, F(1r) = - 2 ao7r + Jr_tr G(x)dx 1
-
1
1
1r
1r
1
- tr
=
1
1
1r
-tr
1r
1
1
=
1
1r
- tr
104
An Introduction to laplace Transforms and Fourier Series
and with
=
=
t -1r, F(-11") 21 ao1r. Since F(t) is periodic of period 27r we have Also
F(t) "' � Ao + nf=l (- bnn cos(nt) + ann sin(nt)) . Putting(piecewise t 1r is legitimate in the limit point differentiability), and givesthere is no jump discontinuity at this =
as
1 1 1 bn bn ( - 1 ) n -. -ao7r L... .t .. 2 -2 Ao - n�=l n cos(mr) -2 Ao - n� n =l We neednamely to relate Ao to ao and the bn terms. To do this, we use the new form of F(t) =
L....t -
=
F(t) "' 21 ao1r + nL00=l ;;:1 [bn ((-l )n - cos(nt)) + an sin(nt)] is writtentheinform termsofofthistheintegral Fourierwecoefficients Towhichdetermine note thatof G(t), the integral of F(t).
[1( G(x)dx �aot + F(t) =
=
00 1 ao(7r + t) + [bn ( - 1) n - bn cos(nt) + an sin(nt)] . L 2 ;;: n=l 1
Also so subtracting gives 00 1 0 d G(x) x (t-e)+ -a - [bn (cos(ne) - cos(nt)) +an (sin(nt) - sin(ne))] L 2 l( t n=l n which tells us the form of the integral of a Fourier series. In fact we alluded toFourier this inseries Example is anparticularly example where term by4.13.termHere proves useful.the ability to integrate a =
1
105
4. Fourier Series
Example 4.17
Use the Fourier series
to deduce the value of the series 00 (-1 ) k -l :E (2k - 1)3 . k=l
the result just derived on the integration of Fourier series, weSolution put e =Utilising 0 and t = 7r/2 so we can write so Now, sin(nthat 7r/2) takes the values 0, 1 ,0, -1, . . . for n = 0, 1,2, . . . we imme diatelysince deduce oo (-1)n . n7r sm ( 2) = - �oo (2k( - I-)k1)3- 1 �� which gives � (-l ) n s. ( n1r) = .!. 7r3 7r3 = 7r3 m 2 4 ( 6 24 ) 32 · n=1 n 3 Whence, putting n = 2k - 1 gives the result _
LJ
00 ( -1 ) k - 1 :E k=1 (2k - 1)3
= 7r323 .
The following theorem can now be deduced.
-1r,
If f(t) and g(t) are continuous in ( 1r) and provided /_: lf(tWdt < and /_: jg(tWdt < if an , bn are the Fourier coefficients of f(t) and an, /3n those of g(t), then
Theorem 4.18
oo
oo,
106
An Introduction to laplace Transforms and Fourier Series
Proof
Since
and we can write
f(t) "' �0 + L (an cos(nt) + bn sin(nt)) n=1 g(t) "' � + t (an cos(nt) + Pn sin(nt)) n=1 00
f(t)g(t) "' 21 aog(t) + L (ang(t) cos(nt) + bng(t) sin(nt)) . n=1 00
-Tr
1r
Integrating this series from to gives 111" f(t)g(t)dt 21 ao 1_11",.. g(t)dt _,..
=
+
� { an /_: g(t) cos(nt)dt bn /_: g(t) sin(nt)dt } +
provided theintegration Fourier seriesoperations for f(t) tois uniformly convergent, enabling the sum mation and be interchanged. This follows from the Cauchy-Schwarz inequality (see Appendix C, Theorem C1) since ( 11" lf(t) l2dt) 1/2 ( 11" lg(tWdt) 1/2 < oo. 11" f(t)g(t) dt � l ! 1_,.. 1 1 However, we know that - 1r
- 1r
1 111" g(t) cos(nt)dt 1 1_,..11" g(t) sin(nt)dt
;
- 1r
;
and that so this implies as
required.
ao
=
=
an Pn
2 1-11" g(t)dt 11"
7r = -
0
107
4. Fourier Series =
we put f(t)follows.g(t) in the above result, the following important theorem immediately If
-1r, 1r ,
If f(t) is continuous in the range ( ) is square integrable (i.e. /_: [f(tWdt < oo) and has Fourier coefficients an, bn then
Theorem 4.19 (Parseval)
/_: [f(tWdt 21ra6 + �(a� + b�). =
1r
This is a useful result for mathematicians, but perhaps its most helpful attribute lies in its interpretation. The left hand side represents the mean square value of f(t) (once it is divided by 27r). It can therefore be thought of in terms ofthatenergy if f(t) ofrepresents a signal. Whata waveform Parseval'iss theorem statestotherefore is the energy a signal expressed proportional the sum of the squares of its Fourier coefficients. In Chapter 6 when Fourier Transforms are discussed, Parseval's theorem re-emerges in this practical context, perhaps in a more recognisable For. now, let us content ourselves with a mathematical consequence of theform. theorem as
Given the Fourier series 2 oo --( -1)-n cos(nt) t2 �3 + 4 � nL..J =l n 2 deduce the value of 00 1 L n=l n4 "
Example 4.20
=
Solution
Applying Parseval's theorem to this series, the left hand side becomes
The right hand side becomes
27r
Equating these leads to or Hence
( �2)2 + 7r 00 16 .
2 5 - 7r
5
=
2 5 - 7r
9
E n4
+ 7r n�L..J00=l -n164
108
4.7
An Introduction to laplace Transforms and Fourier Series
Exercises
(Note:beThe two exercises depend more on knowledge of Appendix C and may left iffirstdesired.) 1. Use the Riemann-Lebesgue lemma to show that J�oo 11T g(t) sin ( + �) tdt = 0, where g(t) is piecewise continuous on the interval [0, 1r]. 2. tion Show that the functions Pn (t) that satisfy the ordinary differential equa n - 2t dPn + n (n + 1)Pn = 0, n = 0, 1, . . . ( 1 - t2) �P dt dt2 are orthogonal in the interval [-1, 1], where the inner product is defined by (p, q} = }_{1 pqdt. 1 3. f(t) is defined by 0 :::; t :::; !7T !7T :::; t :::; f(t) = { ! 7T !t t :::; 27T. Sketch the graph of f(t) and determine a Fourier series assuming f(t) f(t + 27T). 4. Determine the Fourier series for f(x) H(x), the Heaviside Unit Step ofFunction, the seriesin the range [-1r, 11r], 1f(x)1 = f(x1 + 27r). Hence find the value 1 - -3 + -5 - -7 + -9 - · · · . 5. Find the Fourier series of the function f(x) = { sin( -sin(! x)! x) 01r:::;<xx:::;:::;1r27T with f(x) = f(x + 21r). 6 . Determine the Fourier series for the function f(x) = 1 - x2 , f(x) f(x + 27r). Suggest possible values of f(x) at x 7. Deduce that the Fourier series for the function f(x) = -1r < x < 1r, a a real number is . . sinh(1ra) { -1 + 2 Loo (-1)n (acos(nx) - nsm(nx)) a n=l a2 + n2 } m
1T -
1T
1T
:::;
=
=
= 1T .
1T
=
e ax ,
109
4. Fourier Series
Hence find the values of the four series: 00 oo ( -1) n oo ( -1) n oo 1 1 :2:: a2 + n2 ' n=-oo :2:: a2 + n2 ' n=l :2:: a2 + n2 ' n=:2::-oo a2 + n2 n=l
.
8. If where j(t) = j(t + 2rr), sketch the graph of f(t) for -4rr :::; t :::; 4rr and obtain a Fourier series expansion for f(t). 9. Find the Fourier series expansion of the function j(t) where
j(t) =
{ rr(t2- rr) 2- rr0 <:::; tt << Orr.
and f(t) = f(t + 2rr). Hence determine the values of the series
oo 1 oo (-1) n+ l and L n2 :2:: 2 n=l n=l n 10. Determine the two Fourier half-range series for the function f(t) defined in Exercise 9, and sketch the graphs of the function in both cases over the range -2rr :::; t :::; 2rr] .
[
11 . Given the half range sine series t(
7r
_
t)
_
-
� � sin(2n - 1)t 0 <_t _< 7r 1r � (2n - 1)3 '
use Parseval's Theorem to deduce the value of the series Hence deduce the value of the series
00 1 :2:: n6 . n=l
12. Deduce that the Fourier series for the function f(x) = x4 , is
00
I; (2n -1r
1 _
1)6 •
< x < rr,
Explain why this series contains no sine terms. Use this series to find the value of the series
110
An Introduction to Laplace Transforms and Fourier Series
given that
oo ( - l ) n+ l
L
n=l
n2
= 7r122 ·
Assuming (correctly) that this Fourier series is uniformly convergent, use it to derive the Fourier series for x3 over the range x 13. Given the Fourier series
00
2
x "' L - (-l) n+ l sin(nx), n=l n
-1r < < 1r.
-1r < x < 1r,
integrate term by term to obtain the Fourier series for x2 , evaluating the constant of integration by integrating both sides over the range Use the same integration technique on the Fourier series for x4 given in the last exercise to deduce the Fourier series for x5 over the range x 14. In
[-1r, 1r]. -1r < < 1r.
an electrical circuit, the voltage is given by the "top-hat" function
02 << tt << 25.
Obtain the first five terms of the complex Fourier series for
V(t).
5
Partial Differential Equations
5.1
Introduction
In previous chapters, we have explained how ordinary differential equations can be solved using Laplace Transforms. In Chapter 4, Fourier series were introduced, and the important property that any reasonable function can be expressed as a Fourier series derived. In this chapter, these ideas are brought together, and the solution of certain types of partial differential equation using both Laplace Transforms and Fourier Series are explored. The study of the solution of partial differential equations (abbreviated PDEs) is a vast topic that it is neither possible nor appropriate to cover in a single chapter. There are many excellent texts (Sneddon (1957) and Williams (1980) to name but two) that have become standard. Here we shall only be interested in certain types of PDE that are amenable to solution by Laplace Transform. Of course, to start with we will have to assume you know something about partial derivatives! If a function depends on more than one variable, then it is in general possible to differentiate it with respect to one of them provided all the others are held constant while doing so. Thus, for example, a function of three variables (if differentiable in all three) will have three derivatives written
f(x, y, z)
aax'f oay'f and oaz·f
The three definitions are straightforward and, hopefully familiar.
oOXf = �limx-tO { f(x+ �x, y�X, z) - f(x, y, z) } 111
112
An Introduction to Laplace Transforms and Fourier Series
y and z are held constant, oayf = .:ly-tlimo { f(x, y+ Ay,z)Ay -f(x, y,z) } x and z are held constant, and oazf = .:llimz-tO { f(x,y, z+ AAz)z - f(x, y, z) } xterrifying, and y are held constant. If all this is deeply unfamiliar, mysterious and a little then a week or two with an elementary text on partial differentiation
is recommended. It is an easy task to perform: simply differentiate with respect to one of the variables whilst holding the others constant. Also, it is easy to deduce that all the normal rules of differentiation apply as long it is remembered which variables are constant and which is the one to which the function is being differentiated. One example makes all this clear.
and (b) xsin(x Find + yz).all first order partial derivatives of the functions (a) x2yz, Solution a The partial derivatives are as follows: {)ax (x2yz) = 2xyz -ay (x2yz) = x2z and (x2yz) = x2y. {) z The partial derivatives are as follows::x (xsin(x + yz)) = sin(x + yz) + xcos(x + yz) which needs the product rule, :V (xsin(x + yz)) = xzcos(x + yz) and ! (xsin(x + yz)) = xy cos(x + yz) which do not. There are chain rules for determining partial derivatives when f is a function of u, v and w which in turn are functions of y and z u = u(x,y, z) , v = v(x, y,z) and w = w(x, y, z). Example 5.1 ( )
8
8
(b)
x,
This is direct extension of the "function of a function" rule for single variable differentiation. There are other new features such as the Jacobian. We shall not pursue these here; instead the interested reader is referred to specialist texts such as Weinberger or Zauderer
(1965)
(1989).
113
5. Partial Differential Equations
5.2
Classification of Partial Differential Equa tions
In this book, we will principally be concerned with those partial differential equations that can be solved using Laplace Thansforms, perhaps with the aid of Fourier Series. Thus we will eventually concentrate on second order PDEs of a particular type. However, in order to place these in context, we need to quickly review (or introduce for those readers new to this subject) the three different generic types of second order PDE. The general second order PDE can be written
x
y.
where a 1 , b1 , c 1 , d1 , e 1 , It and 91 are suitably well behaved functions of and However, this is not a convenient form of the PDE for ¢1. By judicious use of Taylor 's theorem and simple co-ordinate transformations it can be shown (e.g. Williams (1980), Chapter 3) that there are basic types of linear second order partial differential equation. These standard types of PDE are termed hyperbolic, parabolic and elliptic following geometric analogies and are referred to as A crisp notation we introduce at this point is the whereby
three
canonical forms. derivative notation
suffix
iflx
=
¢ly = iflxx
=
¢lyy
8¢1
ax 8¢1 ay ' etc. f)2¢J f)x2 f) 2¢J f)y2
fP¢1 ¢lxy = axay ' etc.
This notation is very useful when writing large complicated expressions that involve partial derivatives. We use it now for writing down the three canonical forms of second order partial differential equations
a2 ifJxy hyperbolic elliptic a3 ( ¢Jxx ¢lyy ) parabolic
+
+ b2 ifJx + c2 ¢ly + h ¢1 = 92 + b3 ¢Jx + c3 ¢ly + h ¢1 = 93 x,y.
In these equations the a's, b's, c's, j's and 9 's are functions of the variables Laplace Thansforms are useful in solving parabolic and some hyperbolic PDEs. They are not in general useful for solving elliptic PDEs.
1 14
An I ntroduction to Laplace Transforms and Fourier Series
Let us now turn toaresome practical examples inhyperbolic order to seeequation how partial differential equations solved. The commonest is the one dimensional wave equation. This takes the form wheretoc describe is a constant called the celerity orstring, wave speed. This equation can be used waves travelling along a iniswhich casealong u will represent the displacement of the string from equilibrium, x distance the string' s equilibrium position, and t is time. As anyone who is familiar with string instru will know, u takesbuttherests formonofthea wave. The derivation ofdisplacement this equationof isments not straightforward, assumption that the the string from equilibrium isofsmall. This means that x is virtually the distance along the string. If we think the solution in (x, t) space, then the lines x±ct constant assume particular importance. They are called characteristics and the general solution to the wave equation has the general form u = f ( x - ct) + g(x + ct) where f and g are arbitrary functions. If we expand f and g as Fourier series over the interval [0, L] ihead) n whichendtheofstring exists (for inexample between theis bridge and the top (machine the fingerboard a guitar) then it immediate that can be thought of as an infinite superposition of sinusoidal waves:u(x, t) = L an cos[n(x - ct)] + bn sin[n(x - ct)] n=O 00 +L a� cos[m(x + ct)] + b� sin[m(x + ct)] + ya::,2 + ya�2 · m=O =
u
00
I
u =
Ifat the boundary conditions are appropriate to a musical instrument, i.e. 0 xAlthough = 0, L (all t) then this provides a good visual form of a Fourier series. itis israrely possible toasusetheretheareLaplace Transform to solve such wave problems, this done more natural methods and procedures utilise the(What wave-like properties of theheresolutions but are outside the scope ofthat this text. we are talking about is the method of characteristics see There e.g. Williams (1980) Chapter 3.) is one particularly widely occurring elliptic partial diffLaplace erentialTransform equation which is mentioned here but cannot in general be solved using techniques. This is Laplace's equation which, in its two dimensional form is Functions this equation ¢ that satisfy esting mathematical properties. Perhapsarethecalledmostharmonic importantandofpossess these isinter the
5. Partial Differential Equations
cf>(x,y)
115
D R2 D
following. A function has its which is harmonic in a domain E maximum and minimum values on , the border of and not inside itself. Laplace 's equation occurs naturally in the fields of hydrodynamics, electromag netic theory and elasticity when steady state problems are being solved in two dimensions. Examples include the analysis of standing water waves, the dis tribution of heat over a flat surface very far from the source a long time after the source has been switched on, and the vibrations of a membrane. Many of these problems are approximations to parabolic or wave problems that be solved using Laplace Transforms. There are books devoted to the solutions of Laplace's equation, and the only reason its solution is mentioned here is because the properties associated with harmonic functions are useful in providing checks to solutions of parabolic or hyperbolic equations in some limiting cases. Let us without further ado go on to discuss parabolic equations. The most widely occurring parabolic equation is called the heat conduction equation. In its simplest one dimensional form (using the two variables (time) and (distance)) it is written
aD
D,
can
x
t
a¢ = a2cf> at K, ax2 •
cf>(x, t)
This equation describes the manner in which heat is conducted along a bar made of a homogeneous substance located along the axis. The thermal conductivity (or thermal diffusivity of the bar is a positive constant that has been labelled K-. One scenario is that the bar is cold (at room temperature say and that heat has been applied to a point on the bar. The solution to this equation then describes the manner in which heat is subsequently distributed along the bar. Another possibility is that a severe form of heat, perhaps us ing a blowtorch, is applied to one point of the bar for a very short time then withdrawn. The solution of the heat conduction equation then shows how this heat gets conducted away from the site of the flame. A third possibility is that the rod is melting, and the equation is describing the way that the interface between the melted and unmelted rod is travelling away from the heat source that is causing the melting. Solving the heat conduction equation would predict the subsequent heat distribution, including the speed of travel of this interface. Each of these problems is what is called an and this is precisely the kind of PDE that can be solved using Laplace Transforms. The bulk of the rest of this chapter is indeed devoted to solving these. One more piece of "housekeeping" is required however, that is the use of Fourier series in enabling boundary conditions to be satisfied. This in turn requires knowledge of the technique of separation of variables. This is probably revision, but in case it is not, the next section is devoted to it.
)
x
)
initial value problem
5.3
Separation of Variables
The technique of separating the variables will certainly be a familiar method for solving ordinary differential equations in the cases where the two variables can
116
An Introduction to Laplace Transforms and Fourier Series
be isolated to occur exclusively on either side of the equality sign. In partial differential equations, the situation is slightly more complicated. It can be applied to Laplace 's equation and to the wave equation, both of which were met in the last section. However, here we solve the heat conduction equation. We consider the problem of solving the heat conduction equation together with the boundary conditions as specified below:
at/> 82 4> X E [0, L] at � ox2 cf>(x, 0) = f(x) at time t = 0 t/>(0, t) = cf>(L, t) = 0 for all time. '
The fundamental assumption for separating variables is to let
cf>(x,t) = T(t)X(x) so that the heat conduction equation becomes
T'X = �TX" where prime denotes the derivative with respect to t or obtain
T'
X"
x. Dividing by XT we
r = �-x ·
t x As t and x are independent variables, these must be equal to the same constant.
The next step is crucial to understand. The left hand side is a function of only, and the right hand side is a function of only. This constant is called the separation constant. It is wise to look ahead a little here. As the equation describes the very real situation of heat conduction, we should look for solutions that will decay as time progresses. This means that is likely to decrease with time which in turn leads us to designate the separation constant as negative. Let it be so
-a2 ,
and
T
T' = -a2 giving T(t) = T0e-a t , t � 0, T 2
x;' = - :2 giving X(x) = a' cos ( � ) + b' sin ( �) .
Whence the solution is
cf>(x, 0) f(x)
x
At time t = 0, = which is some prescribed function of (the initial temperature distribution along a bar E perhaps) then we would seem to require that sm ..JK, = a cos ..JK,
f(x)
x [0, L] ( ax ) + b . ( ax )
117
5. Partial Differential Equations
which in general is not possible. However, we can now use the separation con stant to our advantage. Recall that in Chapter 4 it was possible for any piecewise continuous function E to be expressed as a series of trigonometric functions. In particular we can express by
f(x), x [0, L]
f(x)
n dependence. Further, if we set an = nrr , n mteger . .jK, L then the boundary conditions at x 0 and x L are both satisfied. Here we have expressed the function f(x) as a half range Fourier sine series which is writing a = an to emphasise its
=
=
=
consistent with the given boundary conditions. Half range cosine series or full range series can also be used of course depending on the problem. Here, this leads to the complete solution of this particular problem in terms of the series ¢ L bne -( n2 1r2 �
(x t) ,
=
( n�x ) '
00
n=l
bn = � foL f(x) sin ( n�x ) dx.
where
The solution can be justified as correct using the following arguments. First of all the heat conduction equation is linear which means that we can superpose the separable solutions in the form of a series to obtain another solution provided the series is convergent. This convergence is established easily since the Fourier series itself is pointwise convergent by construction, and the multiplying factor e-(n2 1r2 � L2 )t is always less than or equal to one. The ¢( , obtained is thus the solution to the heat conduction equation. Here is a specific example.
x t) Example 5.2 Determine the solution to the boundary value problem: 82 ¢ 8¢ = at K- ox2 ' X E (0, 2] ¢ (x O) = x + ( 2 - 2x)H(x - 1) at time t 0 ¢ (0 , t) ¢ (2 , t) 0 for all time where H(x) is Heaviside's Unit Step Function (see Chapter 2, page 19). Solution The form of the function ¢ ( x , 0) is displayed in Figure 5.1. This func ,
=
=
=
tion is expressed as a Fourier sine series by the methods outlined in Chapter 4. This is in order to make automatic the satisfying of the boundary conditions. The Fourier sine series is not derived in detail as this belongs in Chapter 4. The result is 8(-1) n- l sin + 1) 1) 2 · 00
x (2 - 2x)H(x - "' � rr2(2n
_
( -nrr2-x )
1 18
An Introduction to Laplace Transforms and Fourier Series
cp
-1
x t) drawn as an odd function
Figure 5.1: The initial distribution of cp( ,
The solution to this particular boundary value problem is therefore
There is much more on solving PDEs using separation of variables in specialist texts such as Zauderer (1989). We shall not dwell further on the method for its own sake, but move on to solve PDEs using Laplace Transforms which itself often requires use of separation of variables. 5.4
Using Laplace Transforms to Solve PDEs
We now turn to solving the heat conduction equation using Laplace Transforms. The symbol invariably denotes time in this equation and the fact that time runs from now = 0) to infinity neatly coincides with the range of the Laplace Transform. Remembering that
t
(t
where overbar (as usual) denotes the Laplace Transform (i.e . .C(¢) = (/;) gives us the means of turning the PDE
into the ODE
scp(s) - ¢(0) = c(l(j;
We have of course assumed that
"' dx2 .
5. Partial Differential Equations
1 19
x
which demands a continuous second order partial derivative with respect to and an improper integral with respect to which are well defined for all values of so that the legality of interchanging differentiation and integration (sometimes called Leibniz' Rule) is ensured. Rather than continue in general terms, having seen how the Laplace Transform can be applied to a PDE let us do an example.
t
x
Example 5.3
Solve the heat conduction equation >
>
=
>
in the region t 0, x 0 with boundary conditions
=
>
the solution.]
Solution Taking the Laplace Transform (in t of course) gives
cP(/> s
= -
or
=
=
since (x, 8) Ae -x vls + Bex vs. Now, since limx-+oo
-t
-t oo ,
=
B 0. Thus (/>(x, 8) Ae -x vs. Letting x 0 in the Laplace Transform of
we have that
(/>(x, 8) 0 as x
=
hence =
=
=
=
=
=
-
Laplace Transform has integrated through all positive values of t. Inserting this boundary condition for (/> gives
120
An Introduction to Laplace Transforms and Fourier Series
whence the solution for (/> is
e - x v'S . ¢>(x,s) = 8
--
Inverting this using a table of standard forms, or Example 3.11, gives
¢>(x, t) = erfc (�xr 112) . Another physical interpretation of this solution runs as follows. Suppose there is a viscous fluid occupying the region x 0 with a flat plate on the plane x = 0. (Think of the plate as vertical to eliminate the effects of gravity.) The above expression for ¢> is the solution to jerking the plate in its own plane so that a velocity near the plate is generated. Viscosity takes on the role of conductivity here, the value is taken as unity in the above example. ¢> is the fluid speed dependent on time t and vertical distance from the plate x. Most of you will realise at once that using Laplace Transforms to solve this >
kind of PDE is not a problem apart perhaps from evaluating the Inverse Laplace Transform. There is a general formula for the Inverse Laplace Transform and we meet this informally in the next chapter and more formally in Chapter 7. You will also see that the solution
¢>(x, t) = erfc ( �xr 1 12 ) is in no way expressible in variable separable form. This particular problem is not amenable to solution using separation of variables because of the particular boundary conditions. There is a method of solution whereby we set � = and transform the equation (or its derivative with respect to at = 0 and another value = a say) for all t. The proof of this is straightforward and makes use of contradiction. This is quite typical of uniqueness proofs, and the details can be found in, for example, Williams (1980) pp195-199. Although the proof of uniqueness is a side issue for a book on Laplace Transforms and Fourier series, it is always important. Once a solution to a boundary value problem like the heat conduction in a bar has been found, it is vital to be sure that it is the only one. Problems that do not have a unique solution are called posed and they are not dealt with in this text. In the last twenty years, a great deal of attention has been focused on non-linear problems. These do not have a unique solution (in general) but are as far as we know accurate descriptions of real problems. So far in this chapter we have discussed the solution of evolutionary partial differential equations, typically the heat conduction equation where initial con ditions dictate the behaviour of the solution. Laplace Transforms can also be
xr 112
(x,
x) x
ill
(x
121
5. Partial Differential Equations
used to solve second order hyperbolic equations typified by the wave equation. Let us see how this is done. The one-dimensional wave equation may be written
where c is a constant called the celerity or wave speed, x is the displacement and t of course is time. The kind of problem that can be solved is that for which conditions at time t = 0 are specified. This does not cover all wave problems and may not even be considered to be typical of wave problems, but although problems for which conditions at two different times are given can be solved using Laplace Transforms (see Chapter 3) alternative solution methods (e.g. using characteristics) are usually better. As before, the technique is to take the Laplace Transform of the equation with respect to time, noting that this time there is a second derivative to deal with. Defining ¢ = £{¢}
and using
C
{ �:t }
= s 2 ¢ - s¢(0) - >'(0)
the wave equation transforms into the following ordinary differential equation for ¢:The solution to this equation is conveniently written in terms of hyperbolic functions since the x boundary conditions are almost always prescribed at two finite values of x. If the problem is in an infinite half space, for example the vibration of a very long beam fixed at one end (a cantilever ) , the complementary function Aesxfc + Be-sxfc can be useful. As with the heat conduction equation, rather than pursuing the general problem, let us solve a specific example.
Example 5.4 A beam of length a initially at rest has one end fixed at x = 0
with the other end free to move at x = a. Assuming that the beam only moves longitudinally (in the x direction) and is subject to a constant force along its length where is the Young 's modulus of the beam and is the displacement per unit length, find the longitudinal displacement of the beam at any subsequent time. Find also the motion of the free end at x = a .
E
D
ED
Solution The first part of this problem has been done already in that the Laplace Transform of the one dimensional wave equation is
where (x, t) is now the displacement, ¢ its Laplace Transform and we have assumed that the beam is perfectly elastic, hence actually obeying the one di mensional wave equation. The additional information we have available is that
122
An Introduction to Laplace Transforms and Fourier Series
x=0 ¢(x, 0) = 0, �� (x, 0) = 0 (beam is initially at rest) together with ¢(0, t) = 0 (x = 0 fixed for all time) and a¢(a,t) = D (the end at x = a is free). ax Hence 2 tF(fi s = 2 c2 ¢dx the solution of which can be written (fi(x, s) = kl cosh ( s; ) k2 sinh c;) . In order to find k1 and k2 , we use the x boundary condition. If ¢(0, = 0 then, taking the Laplace Transform, (fi ( O, s) = 0 so k1 = 0. Hence - s) = k2 sinh ( --;;-sx) and d(fidx = -skc2 cosh ( -sx)c ¢(x, We have that a¢ax = D at x = a for all t. The Laplace Transform of this is -dxd(fi = -Ds at x = a. Hence D sk2 sa) = cosh ( s c c from which k2 - s2 coshDe( :) Hence the solution to the ODE for (fi is Dc sinh (�) s2 cosh ( :)
the beam is initially at rest, is fixed for all time and that the other end at is free. These are translated into mathematics as follows:-
x=a
\It
+
t)
-
.
_
s
¢> =
s
and is completely determined. The remaining problem, and it is a significant one, is that of inverting this Laplace Transform to obtain the solution Many would use the inversion
¢(x, t) .
123
5. Partial Differential Equations
t
Semi Infinite Domain
X
X
0
Figure
5.2: The domain
formula of Chapter 7, but as this is not available to us, we scan the table of standard Laplace Transforms and find the result r- l �..-
{ s2sinh(sx) cosh( sa) }
=
and deduce that 8aD ..�,. 'I' ( x, t) - D x + 2 _
1r
( -1)n sm. ((2n - 1)1rx ) cos ((2n - 1)1rt ) L..J (2n - 1)2 2a 2a
8a � x+1r2 n= l
L..Jn=l (2n - . ((2n -2a1)1rx ) cos ((2n -2a1)1rct ) . � ( -1) n sm 1) 2
It is not until we meet the general inversion formula in Chapter 7 that we are able to see how such formulae are derived using complex analysis. 5.5
Boundary Conditions and Asymptotics
A partial differential equation together with enough boundary conditions to ensure the existence of a unique solution is called a boundary value problem, sometimes abbreviated to BVP. Parabolic and hyperbolic equations of the type suited to solution using Laplace Transforms are defined over a semi infinite domain. In two dimensional Cartesian co-ordinates, this domain will take the form of a semi infinite strip shown in Figure Since one of the variables is almost always time, the problem has conditions given at time t = 0 and the solution once found is valid for all subsequent times, theoretically to time t = oo. Indeed, this is the main reason why Laplace Transforms are so useful for these types of problems. Thus, in transformed space, the boundary condition at t = 0 and the behaviour of the solution as t -t oo get swept up in the Laplace Transform and appear explicitly in the transformed equations, whereas the conditions on the other variable (x say) which form the two infinite sides of the rectangular domain of Figure get transformed into two boundary conditions that are the end points of a two point boundary value problem (one
5 .2.
5.2
An Introduction to Laplace Transforms and Fourier Series
124
dimensional) in x. The Laplace Transform variable s becomes passive because no derivatives with respect to s are present. If there are three variables (or perhaps more) and one of them is time-like in that conditions are prescribed at t = 0 and the domain extends for arbitrarily large time, then similar arguments prevail. However, the situation is a little more complicated in that the transformed boundary value problem has only had its dimension reduced by one. That is, a three dimensional heat conduction equation which takes the form
where
EP¢> EP¢> 82 ¢> + + 8x2 8y 2 8z2 which is in effect a four dimensional equation involving time together with three space dimensions, is transformed into a Poisson like equation:'\7 2 ¢> =
-
2-
K"V ¢>(x , y, z, s) = s¢>(x , y, z, s) - ¢>(x , y , z, O) where, as usual, .C{¢>(x, y , z, t) } = {jj (x, y, z, s). It remains difficult in general to determine the Inverse Laplace Transform, so various properties of the Laplace Transform are invoked as an alternative to complete inversion. One device is particularly useful and amounts to using a special case of Watson's Lemma, a well known result in asymptotic analysis. If for small values of t, ¢>(x , y, z, t) has a power series expansion of the form >(x , y, z, t)
=
00
L an (x , y, z)tk+n
n=O
c,
and 1 > le-ct is bounded for some k and then the result of Chapter 2 following Theorem 2.13 can be invoked and we can deduce that as s -+ oo the Laplace Transform of ¢>, (jj has an equivalent asymptotic expansion
-
¢>(x , y , z, s) = � .LJ an (x , y, z) n=O
r(n + k + 1) . s n+k+l
Now asymptotic expansions are not necessarily convergent series; however the first few terms do give a good approximation to the function {jj(x , y, z, s). What we have here is an approximation to the transform of ¢> that relates the form for large s to the behaviour of the original variable ¢> for small t. Note that this is consistent with the initial value theorem (Theorem 2.12). It is sometimes the case that each series is absolutely convergent. A term by ter.m evaluation can then be justified. However, often the most interesting and useful applications of asymptotic analysis take place when the series are not convergent. The classical text by Copson (1967) remains definitive. The serious use of these results demands a working knowledge of complex variable theory, in particular of poles of complex functions and residue theory. These are not dealt with until
125
5. Partial Differential Equations
Chapteris a reasonably so examplesstraightforward involving complex variables are postponedseries, untiljustthento. Here example using asymptotic get the idea. 7
Example 5.5 Find an approximation to the solution of the partial differential
equation
for small times where
a
0) cos(
Solution It is possible to solve this BVP exactly, but let us take Laplace Trans forms to obtain s - cos(x) = c2 � dx then try an asymptotic series of the form
-
� bn (x) LJ
n=O s
n+k+l
valid far enough away from the singularities of ([> in s space. (This is only true provided ([> does notcoefficients have branchof 1/points. See Section for more about branch points Equating s n yields straight away that k = 0, then we .) get 7.4
and hence This yields
b1 = -c2
cos(x),
b2 = c4
cos(x),
b3 = -c6
cos(x) etc.
oo -l nc2n cos(x) n=O L ( s n)+2 . Provided wethenhaveconverge s c 0, term by term inversion is allowable here as the series will for all values of x . It is uniformly but not absolutely convergent. This results in -
>
>
which is immediately recognised as aAssolution thatfrom couldthishavelastbeen obtained directly using separation ofofvariables . is obvious example, we are hampered in the kind problem that can be solved because we have yet to gain experience of the use of complex
126
An Introduction to Laplace Transforms and Fourier Series
variables. Fourier Transforms are also a handy tool for solving certain types of BVP, and these are the subject of the next chapter. So finally in this chapter, a word about other methods of solving partial differential equations. In the years since the development of the workstation and desk top micro computer, there has been a revolution in the methods used by engineers and applied scientists in industry who need to solve boundary value problems. In real situations, these problems are governed by equations that are far more complicated than those covered here. Analytical methods can only give a first approximation to the solution of such problems and these methods have been surpassed by numerical methods based on finite difference and finite element approximations to the partial differential equations. However, it is still essen tial to retain knowledge of analytical methods, as these give insight as to the general behaviour in a way that a numerical solution will never do, and because analytical methods actually can lead to an increase in efficiency in the numerical method eventually employed to solve the real problem. For example, an analyt ical method may be able to tell where a solution changes rapidly even though the solution itself cannot be obtained in closed form and this helps to design the numerical procedure, perhaps even suggesting alternative methods (finite elements instead of finite differences) but more likely helping to decide on co ordinate systems and step lengths. The role of mathematics has thus changed in emphasis from providing direct solutions to real problems to giving insight into the underlying properties of the solutions. As far as this chapter is concerned, future applications of Laplace Transforms to partial differential equations are met briefly in Chapter 7, but much of the applications are too advanced for this book and belong in specialist texts on the subject, e.g. Weinberger (1965) . 5.6
Exercises
1. Using separation of variables, solve the boundary value problem:
a¢ at
[ 71"]
az ¢ K axz ' x E 0, 4 x x at time t = 0
(� ) ¢(0, t)
-
2. The function
a ax2 -
a¢ = 0 ax at
with x > 0, > 0, b > 0 and boundary conditions 0. Use Laplace Transforms to find ¢>, the Laplace Transform of ¢. (Do not attempt to invert it.)
a
127
5. Partial Differential Equations
Use Laplace Transforms to solve again the BVP of Exercise 1 but this time in the form - x).Jr } { sins2h(�sinh(�)-/r ¢>(x,t) = -x2 '!!.4 .x } { s2sinh(x.Jr) + sinh(� -/r) . Use the table ofbetween LaplacethisTransforms this expression. any differences solution andto invert the answer to Exercise 1.Explain 4. Solve the PDE 82 ¢> 8¢> 3.
+
+
2 "'t
2KC - 1
2KC _ 1
8x2
8y
>
with boundary conditions ¢>(x, 0) = 0, ¢>(0, y) = 1, y 0 and lim ¢>(x,y) = 0. x--too 5. Suppose that u(x, t) satisfies the equation of telegraphy 82 u 1 82 u - k au 1 k2 c2 8t2 c2 8t + 4 c2 u - 8x2 = ue - k t/ 2 ,
- -
- -
- -
-
•
and hence use Laplace Trans satisfied by ¢> Find the(in equation forms t) to determine the solution for which u(x, 0) = cos(mx), �: (x, 0) = 0 and u(O, t) ekt/2 • 6. The function u(x, t) satisfies the BVP U t - c2 u.,., = 0, X 0, t 0, u(O, t) = f(t), u(x, 0) = 0 where f(t) is piecewise continuous and of exponential order . (The suffix derivative notation hastogether been used.) Findconvolution the solution of thisDetermine BVP by using Laplace Transforms with the theorem. the explicit solution in the special case where f(t) = J(t), where J(t) is Dirac's delta function. 7. A semi-infinite solid occupying the region x 0 has its initial temperature set toT.,zero. A constant heat flux is applied to the face at x = 0, so that (O, t) = -a where T is the temperature field and a is a constant . Assuming linearbarheat the temperature at anyat time pointt isx (xgiven0)byof the and conduction, show that thefindtemperature at the face =
>
>
>
>
a
(K y ;t
where "' is the thermal conductivity of the bar.
128
An Introduction to Laplace Transforms and Fourier Series
8. Use asymptotic series to provide an approximate solution to the wave equation cP u a2 u at2 ax2 valid for small values of t with au u (x, 0) 0 , (x, 0) cos (x) at -- =
=
cz
__
=
.
9. Repeat the last exercise, but using instead the boundary conditions u (x, O)
=
au cos (x) , a (x, O ) t
=
0.
6
Fourier Transform s
6.1
Introduction
Later in this chapter we define the Fourier Transform. There are two ways of approaching the subject of Fourier Transforms, both ways are open to us! One way is to carry on directly from Chapter 4 and define Fourier Transforms in terms of the mathematics of linear spaces by carefully increasing the period of the function f ( x) . This would lead to the Fourier series we defined in Chap ter 4 becoming, in the limit of infinite period, an integral. This integral leads directly to the Fourier Transform. On the other hand, the Fourier Transform can be straightforwardly defined as an example of an integral transform and its properties compared and in many cases contrasted with those of the Laplace Transform. It is this second approach that is favoured here, with the first more pure mathematical approach outlined towards the end of Section 6.2. This choice is arbitrary, but it is felt that the more "hands on" approach should dominate here. Having said this, texts that concentrate on computational as pects such as the FFT (Fast Fourier Transform), on time series analysis and on other branches of applied statistics sometimes do prefer the more pure approach in order to emphasise precision. 6.2
Deriving the Fourier Transform
R
Definition 6.1 Let f be a function defined for all x E with values in C. The Fourier Transform is a mapping F : -+ C defined by
R
F(w) =
/_:
f (x)e -iwx dx .
129
130
An Introduction to Laplace Transforms and Fourier Series
Of course, for some f(x) the integral on the right does not exist. We shall spend some time discussing this a little later. There can be what amounts to trivial differences between definitions involving factors of 21r or .,fiir. Although this is of little consequence mathematically, it is important to stick to the defini tion whichever version is chosen. In engineering where x is often time, and w frequency, factors of 21r or .,fi1i can make a lot of difference. If F(w) is defined by the integral above, then it can be shown that 1 F(w)eiwx dw . f(x) = 211" This is the inverse Fourier Transform. It is instructive to consider F(w) as a complex valued function of the form
100
- oo
F(w) = A(w)ei¢(w )
where A(w) and ¢(w) are real functions of the real variable w. F is thus a complex valued function of a real variable w. Some readers will recognise F(w) as a spectrum function, hence the letters A and ¢ which represent the amplitude and phase of F respectively. We shall not dwell on this here however. If we merely substitute for F(w) we obtain oo
f(x ) = _.!_ A(w )eiwx H (w ) dw . 2 f
11" j We shall return to this later when discussing the relationship between Fourier Transforms and Fourier series. Let us now consider what functions permit Fourier Transforms. A glance at the definition tells us that we cannot for ex ample calculate the Fourier Transform of polynomials or even constants due to the oscillatory nature of the kernel. This is a feature that might seem to render the Fourier Transform useless. It is certainly a difficulty, but one that is more or less completely solved by extending what is meant by an integrable function through the use of generalised functions. These were introduced in Section 2.6, and it turns out that the Fourier Transform of a constant is closely related to the Dirac-8 function defined in Section 2.6. The impulse function is a represen tative of this class of functions and we met many of its properties in Chapter 2. In that chapter, mention was also made of the use of the impulse function in many applications, especially in electrical engineering and signal processing. The general mathematics of generalised functions is outside the scope of this text, but more of its properties will be met later in this chapter. If we write the function to be transformed in the form e - kx f(x) then the Fourier Transform is the integral - oo
/_: e -iwxe -kx f(x)dx
straight from the definition. In this form, the Fourier Transform can be related to the Laplace Transform. First of all, write
Fk (w) =
100 e-(k+iw)x f(x)dx
131
6. Fourier Transforms
then Fk (w) will exist provided the function f(x) is of exponential order (see Chapter 1 ). Note too that the bottom limit has become 0. This reflects that the variable x is usually time. The inverse ofbeFkdefined (w) is straightforward to find once it is realised that the function f(x) can as identically zero for x < 0. Whence we have 1 r oo eiwx I" k (w )dw = { e0-kx f(x) XX <� 00. 27r 1- oo An alternative way of expressing this inverse is in the form _.!.._ roo e ( k+iw)x Fk (w)dw = { O f(x) xX <� O0. 27r 1- oo Inoccurs this naturally. formula, andThisin isthea one aboveandfor therefore Fk (w) , the complex number k + iw variable, a complex variable andNow,it corresponds to s the Laplace Transform variable defined in Chapter the integral on the left of the last equation is not meaningful if we are to regard k + iw = s as the variable. As k is not varying, we can simply write z;o
1.
ds = idw
- ioo and k + ioo at the limits w = - oo and w = oo and s will takeThetheleft-hand values k integral respectively. is now, when written as an integral in s, 1k+ioo e8x F(s)ds 1
21fZ k-ioo
.
where F(s) = Fk (w) is now a complex valued function of the complex variable s.undertaken Although there isabove nothing illegal in theitwaydoestheamount variabletochanges havecartoon been i n the manipulations, a rather deri vation.afterAcomplex more rigorous this integral is givenTheinformula the next chapter variablesderivation have beenofproperly introduced. f(x) = 1 . 1k+ioo esxF(s)ds 21fZ k-ioo
-
is.C{/(x)}. indeed the general form of the inverse Laplace Transform, given F(s) = WeInnowChapter approach the definition of Fourier Transforms from a different view point. 4, Fourier series were discussed at some length. As a summary forthepresent purposes, if f(x) is a periodic function, and for simplicity let us take period as being 21r (otherwise in all that follows replace x by lxj21r where l is the period) then f(x) can be expressed as the Fourier series 00 f(x) ......, � + n=l 2)an cos(nx) + bn sin(nx))
132
An Introduction to Laplace Transforms and Fourier Series
where
1
an =
-
bn =
-
7r
and
1
7r
1 1r f(x) cos(nx)dx,
n = 0, 1 , 2, . . .
11r f(x) sin(nx)dx
n = 0, 1 , 2, . . . .
-tr
- tr
4 �
These have been derived in Chapter and follow from the orthogonality prop erties of sine and cosine. The factor in the constant term enables an to be expressed as the integral shown without n = being an exception. It is merely a convenience, and as with factors of 21r in the definition of Fourier Transform, the definition of a Fourier series can have these trivial differences. The task now is to see how Fourier series can be used to generate not only the Fourier Transform but also its inverse. The first step is to convert sine and cosine into exponential form; this will re-derive the complex form of the Fourier series first done in Chapter Such a re-derivation is necessary because of the slight change in definition of Fourier series involving the a0 term. So we start with the standard formulae . 1 . cos(nx) = - (emx + e -mx )
0
4.
2
and
sin(nx) = 2i1 (em. x - e - m. x ).
Some algebra of an elementary nature is required before the Fourier series as given above is converted into the complex form 00
J(x) = L
n=-oo
where Cn
= -
1
27r
Cn einx
11r f(x)e-m. xdx. - 1r
The complex numbers Cn are related to the real numbers an and bn by the simple formulae 1
.
Cn = (an - tbn ), C-n 2
=
Cn ,
n = 0, 1, 2, . . .
and it is assumed that bo = The overbar denotes complex conjugate. There are several methods that enable one to move from this statement of complex Fourier series to Fourier Transforms. The method adopted here is hopefully easy to follow as it is essentially visual. First of all consider a function g(t) which has period T. (We have converted from x to t as the period is T and no longer 27r . It is the transformation x -+ 21rtjT.) Figure 6.1 gives some insight into what we are trying to do here. The functions f(t) and g(t) coincide precisely
0.
133
6. Fourier Transforms
Figure 6.1: The function f(t) and its periodic clone g (t). in the interval can write
g
[-f, f], but not necessarily outside this range. Algebraically we { f(t) ltl < �T (t) = j(t - nT) H2n - 1)T < l tl < � (2n + 1)T
where n is an integer. Since g (t) is periodic, it possesses a Fourier series which, using the complex form just derived, can be written 00
g (t) = L Gn einwo t n= -oo where Gn is given by
T/2 Gn = .!_ { g (t) e -inwot dt T 1- T/2
and wo = 21rfT is the frequency. Again, this is obtained straightforwardly from the previous results in x by writing 7r
2
[
x = Tt = w0t.
We now combine these results to give
oo 1 T/2 { g (t) e-inwo t dt einwot . L: = (t) g T 1 -_ T/2 n= - oo
]
_
The next step is the important one. Note that nwo
21rn = -T
and that the difference in frequency between successive terms is wo. As we need this to get smaller and smaller, let w0 = Aw and nw0 = Wn which is to
134
An Introduction to Laplace Transforms and Fourier Series
0
remain finite as n -+ oo and w0 -+ together. The integral for Gn can thus be re-written
G=
!T/2 g(t)e-iw.,. tdt. -T/2
Having set everything up, we are now in a position to let T -+ oo, the mathe matical equivalent of lighting the blue touchpaper! Looking at Figure 6.1 this means that the functions f(t) and g(t) coincide, and
g(t)
=
T-too
oo n= -oo
Aw
lim L Getw.,. t 21r
with
G(w) =
.
/_: g(t)e-iwtdt.
We have let T -+ oo, replaced Aw by the differential dw and Wn by the variable w. All this certainly lies within the definition of the improper Riemann integral given in Chapter 1. We thus have
G(w) = with
/_: g(t)e-iwtdt 100
1 iwtdw . g(t) = 27r - oo G(w)e
This coincides precisely with the definition of Fourier Transform given at the beginning of this chapter, right down to where the factor of 21r occurs! As has already been said, this positioning of the factor 21r is somewhat arbitrary, but it important to be consistent, and where it is here gives the most convenient progression to Laplace Transforms as indicated earlier in this section and in the next chapter. In getting the Fourier Transform pair (as g and G are called) we have lifted the restriction that g(t) be a periodic function. We have done this at a price however in that the improper Riemann integrals must be convergent. As we have already stated, unfortunately this is not the case for a wide class of func- . tions including elementary functions such as sine, cosine, and even the constant function. However this serious problem is overcome through the development of generalised functions such as Dirac's 8 function (see Chapter 2). 6.3
Basic Properties of the Fourier Transform
There are as many properties of the Fourier Transform as there are of the Laplace Transform. These involve shift theorems, transforming derivatives, etc.
135
6. Fourier Transforms f
F
J_�---L---- t
__ __
Figure 6.2: The square wave function
f(t), and its Fourier Transform F(w) .
but they are not so widely used simply due to the restrictions on the class of functions that can be transformed. Most of the applications lie in the fields of Electrical and Electronic Engineering which are full of the jumpy and impulse like functions to which Fourier Transforms are particularly suited. Here is a simple and quite typical example.
Calculate the Fourier Transform of the "top hat" or rectangular pulse function defined as follows:f(t) = { A0 ilttil � TT where A is a constant (amplitude of the pulse) and T is a second constant (width of the pulse).
Example 6.2
>
Solution Evaluation of the integral is quite straightforward and the details are
as follows
F (w) = /_: f(t)e-iwtdt = =
F(w)
�-TT Ae-iwt dt
[
_
� e -iwt] T
ZW
2A sin(wT). w
-T
Mathematically this is routine and rather uninteresting. However the graphs of and F(w) are displayed side by side in Figure 6.2, and it is worth a little discussion. The relationship between and is that between a function of time (J ( and the frequencies that this function (called a signal by engineers) con tains, F(w) . The subject of spectral analysis is a large one and sections of it are devoted to the relationship between a spectrum (often called a power spectrum) of a signal and the signal itself. This subject has been particularly fast growing
f(t)
t))
f(t)
F(w)
136
An Introduction to Laplace Transforms and Fourier Series f(t)
IF(w)l
w
Figure 6.3: A typical wave form f (t) and the amplitude of its Fourier Transform IF(w) l = A(w) . since the days of the first satellite launch and the advent of satellite, then cable, and now digital television ensures its continuing growth. During much of the remainder of this chapter this kind of application will be hinted at, but a full account is of course not possible. The complex nature of F(w) is not a problem. Most time series are not symmetric, so the modulus of F(w) (A(w)) carries the frequency information. A more typical-looking signal is shown on the left in Figure 6.3. Signals do not have a known functional form, and so their Fourier Transforms cannot be determined in closed form either. However some general characteristics are depicted on the right hand side of this figure. Only the modulus can be drawn in this form as the Fourier Transform is in general a complex quantity. The kind of shape IF(w)l has is also fairly typical. High frequencies are absent as this would imply a rapidly oscillating signal; similarly very low frequencies are also absent as this would imply that the signal very rarely crossed the t axis. Thus the graph of IF(w) l lies entirely between w = and a finite value. Of course, any positive variation is theoretically possible between these limits, but the single maximum is most common. There is a little more to be said about these ideas in Section 6.5 when signal processing is discussed. Sometimes it is inconvenient to deal with explicitly complex quantities, and the Fourier Transform is expressed in real and imaginary form as follows. If
0
F(w)
then Fc (w)
=
=
/_: f(t)e-iwtdt
100 f(t) cos(wt)dt
is the Fourier Cosine Transform, and F8 (w) =
100 f(t) sin(wt)dt
is the Fourier Sine Transform. We note that the bottom limits in both the Fourier Cosine and Sine Transform are zero rather than - oo. This is in keeping
137
6. Fourier Transforms
t
with the notion that in practical applications corresponds to time. Once more we warn that differences from these definitions involving positioning of the factor are not uncommon. From the above definition it is easily deduced that
1r
F(w) = 100 [f(t) + f( -t)] cos(wt) dt - i 100 [f(t) - f( - t)] sin(wt)dt so if f is an odd function [f(t) = - f( -t)], F(w) is pure imaginary, and if f is an even function [f(t) = f( -t)] , F(w) is real. We also note that if the bottom
limit on each of the Fourier Sine and Cosine Transforms remained at -oo as in some texts, then the Fourier Sine Transform of an even function is zero as is the Fourier Cosine Transform of an odd function. This gives another good reason for the zero bottom limits for these transforms. Now let us examine some of the more common properties of Fourier Transforms, starting with the inverses of the Sine and Cosine Transforms. These are unsurprising: if
Fc(w) = 100 f(t) cos(wt)dt then
f(t) = -7r2 10 00 Fc(w) cos(wt)dw
and if then
F8(w) = 100 f(t) sin(wt)dt f(t) = -7r2 10 00 F8(w) sin(wt)dw.
The proof of these is left as an exercise for the reader. The first property of these transforms we shall examine is their ability to evaluate certain improper real integrals in closed form. Most of these integrals are challenging to evaluate by other means (although readers familiar with the residue calculus also found in summary form in the next chapter should be able to do them). The following example illustrates this.
Example 6.3 By considering the Fourier Cosine and Sine Transforms of the function f(t) = e-at , a a constant, evaluate the two integrals roo cos(kx) dx and roo x sin (kx) dx . }0 a 2 + x2 lo a2 + x2 Solution First of all note that the Cosine and Sine Transforms can be conve niently combined to give
Fc(w) + iF8 (w) = 100 e< -a+iw)tdt =
[ -a +1 �w. e(-a+iw)t] 00 1
a - iw
a + iw
0
138
whence
An Introduction to Laplace Transforms and Fou rier Series
a w and F8 (w) = a 2 + w2 . a2 + w2 Using the formula given for the inverse transforms gives Fc(w) =
27r 1oo cos( t)dw e-at 00 sin(wt)dw = e-at. �1 7r to x , t to k thus gives the results 1oo cos(kx) dx .!!_2 e-ak 100 xsin(kx) dx - 2 e-ak . -
and Changing variables w
o
a a2 + w2
o
w a2 + w2
0
and
w
a2 + x2
0
=
_
=
a
�
a2 + x2 Laplace Transforms are extremely useful for finding the solution to differential equations. Fourier Transforms can also be so used; however the restrictions on the class of functions allowed is usually prohibitive. Assuming that the improper integrals exist, which requires that -+ as -+ ±oo, let us start with the definition of the Fourier Transform
f 0 t /_: f(x)e-iwx dx. Since both limits are infinite, and the above conditions on f hold, we have that the Fourier Transform of f'(t) , the first derivative of f (t) , is straightforwardly using integration by parts. In general F(w) =
iwF(w)
i
The other principal disadvantage of using Fourier Transform is the presence of in the transform of odd derivatives and the difficulty in dealing with boundary conditions. Fourier Sine and Cosine Transforms are particularly useful however, for dealing with second order derivatives. The results
100 �; cos(wt)dt = -w2 Fc (w) - f'(O) and
100 �;dt sin( t)dt 0
w
= -w2 Fa (w) +
wf(O)
can be easily derived. These results are difficult to apply to solve differential equations of simple harmonic motion type because the restrictions imposed on
6. Fourier Transforms
j(t)
139
are usually incompatible with the character of the solution. In fact, even when the solution compatible, solving using Fourier Transforms is often im practical. Fourier Transforms do however play an important role in solving partial dif ferential equations as will be shown in the next section. Before doing this, we need to square up to some problems involving infinity. One of the common mathematical problems is coping with operations like integrating and differen tiating when there are infinities around. When the infinity occurs in the limit of an integral, and there are questions as to whether the integral converges then we need to be particularly careful. This is certainly the case with Fourier Transforms of all types; it is less of a problem with Laplace Transforms. The following theorem is crucial in this point.
is
Theorem 6.4 (Lebesgue Dominated Convergence Theorem)
Let
hER be a family of piecewise continuous functions. If 1. There exists a function g such that l fh (x)j � g(x) Vx E R and all h E R. fh,
2.
/_: g(x)dx <
3.
oo.
lim0 fh (X) = f(x) for every X E R h-+ oo oo lim01r !h(x)dx 1r f(x)dx. h-+ _00 -oo
then
=
This theorem essentially tidies up where it is allowable to exchange the processes of taking limits and infinite integration. To see how important and perhaps counter intuitive this theorem is, consider the following simple example. Take a rectangular pulse or "top hat" function defined by
x
incidental.
=
An Introduction to Laplace Transforms and Fourier Series
140
The proof of the Lebesgue Dominated Convergence Theorem involves clas sical analysis and is beyond the scope of this book. Suffice it to say that the function g(x) which is integrable over ( - oo, oo ) "dominates" the functions fh (x), and without it the limit and integral signs cannot be interchanged. As far as we are concerned, the following theorem is closer to home. Denote by G{R} the family of functions defined on n with values in C which are piecewise continuous and absolutely integrable. These are essentially the Fourier Transforms as defined in the beginning of Section 6.2. For each f E G { n} the Fourier Transform of f is defined for all E n by
w
F(w) = /_: f (x)e-iwx dx as in Section 6.2. That G{R} is a linear space over C is easy to verify. We now state the theorem.
6.5 For each f E G{R} , 1. F(w) is defined for all w E n. 2. F is a continuous function on n.
Theorem
3.
lim
w ±oo
F(w) = 0.
Proof To prove this theorem, we first need to use the previous Lebesgue Dom
inated Convergence Theorem. This was in fact the principal reason for stating it! We know that l eiwx l = hence
1,
/_: lf(x)e-iwx ldx = /_: lf(x) ldx <
oo.
F(w)
Thus we have proved the first statement of the theorem; is well defined for every real This follows since the above equation tells us that f(x)e-iwx is absolutely integrable on n for each real In addition, f(x)e -iwx is piecewise continuous and so belongs to G{R}. To prove that is continuous is a little more technical. Consider the difference
w.
w.
F(w)
F(w + h) - F(w) = /_: [f(x)e -iw(x+h) - f(x)e-iwx ]dx so If we let
F(w + h) - F(w) = /_: f(x)e -iwx [e -iwh - l)dx.
6. Fourier Transforms
141
to correspond to the fh (x) in the Lebesgue Dominated Convergence Theorem, we easily show that lim f (x) = 0 for every X E R. h-+0 h Also, we have that
lfh (x) l = l f(x) ll e -iw x ll e -iw h - l l � 2 l f(x) ! . Now, the function g(x) = 2 l f(x) l satisfies the conditions of the function g(x) in the Lebesgue Dominated Convergence Theorem, hence
{ -+0 1-'X)oo fh (x)dx = 0 hlim whence
-tO [F(w + h) F(w)] = 0 hlim which is just the condition that the function F(w) is continuous at every point -
of n. The last part of the theorem
lim F(w) = 0
w -t±oo
follows from the Riemann-Lebesgue lemma (see Theorem 4.2) since
F(w) =
I: f(x)e-iwxdx I: f(x) cos(wx)dx - i I: f(x) sin(wx)dx, =
lim F(w)
w -t±oo
is equivalent to
and
0
oo
lim r f(x) cos(wx) dx = 0 w-+±oo 1-oo
oo
lim r f(x) sin(wx)dx = 0 w -t±oo 1-oo together. These two results are immediate from the Riemann-Lebesgue lemma (see Exercise 5 in Section 6.6). This completes the proof. 0
The next result, also stated in the form of a theorem, expresses a scaling prop erty. There is no Laplace Transform equivalent due to the presence of zero in the lower limit of the integral in this case, but see Exercise 7 in Chapter 1.
142
An Introduction to Laplace Transforms and Fourier Series
6.6 Let f(x) E G{R} and a, b E R , a :f:. 0 and denote the Fourier Transform of f by :F(f) so
Theorem
/_: f (x)e-iwxdx.
:F(f) = Let g(x) = f(ax + b). Then
(�) .
e iwb/a :F(f) :F(g) = .!_ a Ia !
Proof As is usual with this kind of proof, the technique is simply to evaluate the Fourier Transform using its definition. Doing this, we obtain
:F(g(w)) =
/_: f (ax + b)e -iwxdx.
Now simply substitute t = ax + b to obtain, if a > 0
1 100 f (t)e -iw(t-b)/adt
:F(g(w)) = -a and if a < 0
-
oo
1 100 f (t)e-iw(t-b) /adt.
:F(g(w)) = --a
-
oo
So, putting these results together we obtain
(�)
eiwb/a:F(f) :F(g(w)) = .!_ a la l as required.
0
The proofs of other properties follow along similar lines, but as been mentioned several times already the Fourier Transform applies to a restricted set of func tions with a correspondingly smaller number of applications. 6 .4
Fourier Transforms and Partial Differential Equations
In Chapter 5, two types of partial differential equation, parabolic and hyperbolic, were solved using Laplace Transforms. It was noted that Laplace Transforms were not suited to the solution of elliptic partial differential equations. Recall the reason for this. Laplace Transforms are ideal for solving initial value problems, but elliptic PDEs usually do not involve time and their solution does not yield evolutionary functions. Perhaps the simplest elliptic PDE is Laplace's equation ∇²φ = 0, which gives a boundary value problem. The solutions are neither periodic nor are they initial value problems. Fourier Transforms as defined so far require that variables tend to zero at ±∞, and these are often natural assumptions for elliptic equations. There are also two new features that will now be introduced before problems are solved. First of all, partial differential equations such as ∇²φ = 0 involve identical derivatives in x, y and, for the three dimensional ∇², z too. It is logical therefore to treat them all in the same way. This leads to having to define two and three dimensional Fourier Transforms. Consider the function φ(x, y); transforming in x and then in y gives

φ_F(k, l) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} φ(x, y) e^{i(kx+ly)} dx dy

which becomes a double Fourier Transform. In order for these transforms to exist, φ must tend to zero uniformly for (x, y) a large distance from the origin, i.e. as √(x² + y²) becomes very large. The three dimensional Fourier Transform is defined analogously as follows:

φ_F(k, l, m) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} ∫_{-∞}^{∞} φ(x, y, z) e^{i(kx+ly+mz)} dx dy dz.
With all these infinities around, the restrictions on φ_F are severe and applications are therefore limited. The frequency ω has been replaced by a three dimensional space (k, l, m) called phase space. However, the kind of problem that gives rise to Laplace's equation does fit this restriction, for example the behaviour of membranes, water or electromagnetic potential when subject to a point disturbance. For this kind of problem, the variable φ dies away to zero the further it is from the disturbance, therefore there is a good chance that the above infinite double or triple integral could exist. More useful for practical purposes however are problems in the finite domain, and it is these that can be tackled usefully with a modification of the Fourier Transform. The unnatural part of the Fourier Transform is the imposition of conditions at infinity, and the modifications hinted at above have to do with replacing these by conditions at finite values. We therefore introduce the finite Fourier Transform (not to be confused with the FFT, the Fast Fourier Transform). This is introduced in one dimension for clarity; the finite Fourier Transforms for two and three dimensions follow almost at once. If x is restricted to lie between say a and b, then the appropriate Fourier type transformation would be

F(k) = ∫_a^b f(x) e^{-ikx} dx.

This would then be applied to a problem in engineering or applied science where a ≤ x ≤ b. The two-dimensional version could be applied to a rectangle

a ≤ x ≤ b,  c ≤ y ≤ d

and is defined by

F(k, l) = ∫_c^d ∫_a^b f(x, y) e^{-i(kx+ly)} dx dy.

Apart from the positive aspect of eliminating problems of convergence for (x, y) very far from the origin, the finite Fourier Transform unfortunately brings a host of negative aspects too. The first difficulty lies with the boundary conditions that have to be satisfied, which unfortunately are now no longer infinitely far away. They can and do have considerable influence on the solution. One way to deal with this could be to cover the entire plane with rectangles.
Example 6.7 Find the solution to the two-dimensional Laplace equation

∂²φ/∂x² + ∂²φ/∂y² = 0,  y > 0,

with φ and ∂φ/∂x tending to zero as x → ±∞, φ → 0 as y → ∞, and

φ(x, 0) = 1, |x| ≤ 1;  φ(x, 0) = 0, |x| > 1.

Solution Define the Fourier Transform of φ with respect to x,

φ̄(k, y) = ∫_{-∞}^{∞} φ(x, y) e^{-ikx} dx;

then ∫_{-∞}^{∞} (∂²φ/∂x²) e^{-ikx} dx can be integrated by parts as follows:

∫_{-∞}^{∞} (∂²φ/∂x²) e^{-ikx} dx = [(∂φ/∂x) e^{-ikx}]_{-∞}^{∞} + ik ∫_{-∞}^{∞} (∂φ/∂x) e^{-ikx} dx
                                = ik [φ e^{-ikx}]_{-∞}^{∞} - k² ∫_{-∞}^{∞} φ e^{-ikx} dx
                                = -k² φ̄

as φ and its x derivative decay to zero as x → ±∞. Hence if we take the Fourier Transform of the Laplace equation in the question,

d²φ̄/dy² - k² φ̄ = 0.

As φ̄ → 0 for large y, the (allowable) solution is

φ̄ = C e^{-|k|y}.

Now, we can apply the condition on y = 0:

φ̄(k, 0) = ∫_{-∞}^{∞} φ(x, 0) e^{-ikx} dx = ∫_{-1}^{1} e^{-ikx} dx = (e^{ik} - e^{-ik})/(ik) = 2 sin(k)/k

whence

C = 2 sin(k)/k

and

φ̄ = (2 sin(k)/k) e^{-|k|y}.
In order to invert this, we need the Fourier Transform equivalent of the Convolution Theorem (see Chapter 3). To see how this works for Fourier Transforms, consider the convolution of the two general functions f and g:

f * g = ∫_{-∞}^{∞} f(τ) g(t - τ) dτ
      = ∫_{-∞}^{∞} f(τ) · (1/2π) ∫_{-∞}^{∞} G(x) e^{ix(t-τ)} dx dτ
      = (1/2π) ∫_{-∞}^{∞} G(x) e^{ixt} ∫_{-∞}^{∞} f(τ) e^{-ixτ} dτ dx
      = (1/2π) ∫_{-∞}^{∞} G(x) F(x) e^{ixt} dx,

that is, f * g is the inverse Fourier Transform of the product FG. Now, the inverse Fourier Transform of e^{-|k|y} is

(1/2π) ∫_{-∞}^{∞} e^{-|k|y} e^{ikx} dk = y/(π(x² + y²)),

and that of 2 sin(k)/k is the boundary function φ(x, 0) itself (unity for |x| < 1 and zero otherwise); hence by the convolution theorem

φ(x, y) = (y/π) ∫_{-1}^{1} dτ/((x - τ)² + y²) = (1/π)[tan^{-1}((x + 1)/y) - tan^{-1}((x - 1)/y)]

is the required solution.
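A quick numerical check of this solution is possible (a sketch assuming NumPy; the sample points and quadrature grid are arbitrary choices): invert φ̄(k, y) = (2 sin(k)/k) e^{-|k|y} by quadrature and compare with the closed form obtained above.

    import numpy as np

    def phi_numeric(x, y, K=200.0, n=400001):
        k, dk = np.linspace(-K, K, n, retstep=True)
        phibar = 2.0 * np.sinc(k / np.pi) * np.exp(-np.abs(k) * y)   # 2 sin(k)/k e^{-|k|y}
        return (np.sum(phibar * np.exp(1j * k * x)) * dk / (2.0 * np.pi)).real

    def phi_exact(x, y):
        return (np.arctan((x + 1.0) / y) - np.arctan((x - 1.0) / y)) / np.pi

    for (x, y) in [(0.0, 0.5), (0.5, 1.0), (2.0, 0.3)]:
        print(x, y, phi_numeric(x, y), phi_exact(x, y))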
6.5 Signal Processing
There is no doubt that the most prolific application of Fourier Transforms lies in the field of signal processing. As this is a branch of electrical engineering, an in depth analysis is out of place here, but some discussion is helpful. To start with, we return to the complex form of the Fourier series and revisit explicitly the close connections between Fourier series and finite Fourier Transforms. From Chapter 4 (and Section 6.2)

f(x) = Σ_{n=-∞}^{∞} c_n e^{inx}
with

c_n = (1/2π) ∫_{-π}^{π} f(x) e^{-inx} dx.

Thus 2πc_n is the finite Fourier Transform of f(x) over the interval [-π, π] and the "inverse" is the Fourier series for f(x). If c_n is given as a sequence, then f(x) is easily found. (In practice, the sequence c_n consists of but a few terms.) The resulting f(x) is of course periodic as it is the sum of terms of the type c_n e^{inx} for various n. Hence the finite Fourier Transform of this type of periodic function is a sequence of numbers, usually four or five at most. It is only a short step from this theoretical treatment of finite Fourier Transforms to the analysis of periodic signals of the type viewed on cathode ray oscilloscopes. A simple illustrative example follows.

Example 6.8 Consider the simple "top hat" function f(x) defined by

f(x) = { 1,  x ∈ [0, π)
       { 0,  otherwise.

Find its finite Fourier Transform and finite Fourier Sine Transform.
Solution The finite Fourier Transform of this function is simply

∫_0^π e^{-inx} dx = [-e^{-inx}/(in)]_0^π = (1/(in))(1 - (-1)^n)

which can be written in a number of ways:

(1/(in))(1 - (-1)^n) = (2/n) sin(nπ/2) e^{-inπ/2} = -2i/n,  n = 2k + 1 (and zero for even n).

The finite Sine Transform is a more natural object to find: it is

∫_0^π sin(nx) dx = (1/n)(1 - (-1)^n) = 2/(2k + 1),  n = 2k + 1, n, k integers.
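Both transforms can be checked symbolically (a sketch assuming SymPy is available); for concrete integer values of n the computed integrals agree exactly with the formulae above.

    import sympy as sp

    x = sp.symbols('x', real=True)
    for k in range(1, 6):
        fft_k = sp.integrate(sp.exp(-sp.I * k * x), (x, 0, sp.pi))   # finite Fourier Transform
        fst_k = sp.integrate(sp.sin(k * x), (x, 0, sp.pi))           # finite sine transform
        print(k,
              sp.simplify(fft_k - (1 - (-1)**k) / (sp.I * k)),       # expect 0
              sp.simplify(fst_k - (1 - (-1)**k) / k))                # expect 0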
Let us use this example to illustrate the transition from finite Fourier Transforms to Fourier Transforms proper. The inverse finite Fourier Transform of the function f(x) as defined in Example 6.8 is the Fourier series

f(x) = (1/2π) Σ_{n=-∞}^{∞} [(1 - (-1)^n)/(in)] e^{inx},  0 ≤ x ≤ π.

However, although f(x) is only defined in the interval [0, π], the Fourier series is periodic, with period 2π. It therefore represents the square wave shown in Figure 6.4. Of course, x ∈ [0, π], so f(x) is represented as a "window", to borrow a phrase from time series and signal analysis.

Figure 6.4: The square wave

If we write x = πt/l then let l → ∞: we regain the transformation that took us from Fourier series to Fourier Transforms proper, Section 6.2. However, what we have in Figure 6.5 is a typical signal. The Fourier Transform of this signal taken as a whole of course does not exist as conditions at ±∞ are not satisfied. In the case of an actual signal therefore, the use of the Fourier Transform is made possible by restricting attention to a window, that is a finite range of t. This gives rise to a series (Fourier series) representation of the Fourier Transform of the signal. This series has a period which is dictated by the (usually artificially generated) width of the window. The Fourier coefficients give important information on the frequencies present in the original signal. This is the fundamental reason for using these methods to examine signals from such diverse applications as medicine, engineering and seismology. Mathematically, the way forward is through the introduction of the Dirac δ function as follows. We have that
ℱ{δ(t - t₀)} = ∫_{-∞}^{∞} δ(t - t₀) e^{-iωt} dt = e^{-iωt₀}

and the inverse result implies that

ℱ{e^{iω₀t}} = 2π δ(ω - ω₀).

Whence we can find the Fourier Transform of a given Fourier series (written in complex exponential form) by term by term evaluation, provided such operations are legal in terms of defining the Dirac δ as the limiting case of an infinitely tall but infinitesimally thin rectangular pulse of unit area (see Section 2.6). If

f(t) ~ Σ_{n=-∞}^{∞} F_n e^{inω₀t}

so that

ℱ{f(t)} ~ Σ_{n=-∞}^{∞} F_n ℱ{e^{inω₀t}},

which implies

ℱ{f(t)} ~ 2π Σ_{n=-∞}^{∞} F_n δ(ω - nω₀).

Now, suppose we let

f(t) = Σ_{n=-∞}^{∞} δ(t - nT),

that is f(t) is an infinite train of equally spaced Dirac δ functions (called a Shah function by electrical and electronic engineers); then f(t) is certainly periodic (of period T). Of course it is not piecewise continuous, but if we follow the limiting processes through carefully, we can find a Fourier series representation of f(t) as

f(t) ~ Σ_{n=-∞}^{∞} F_n e^{inω₀t}

where ω₀ = 2π/T, with

F_n = (1/T) ∫_{-T/2}^{T/2} f(t) e^{-inω₀t} dt = (1/T) ∫_{-T/2}^{T/2} δ(t) e^{-inω₀t} dt = 1/T

for all n. Hence we have the result that

ℱ{f(t)} = 2π Σ_{n=-∞}^{∞} (1/T) δ(ω - nω₀) = ω₀ Σ_{n=-∞}^{∞} δ(ω - nω₀),

which means that the Fourier Transform of an infinite string of equally spaced Dirac δ functions (a Shah function) is another string of equally spaced Dirac δ functions. It is this result that is used extensively by engineers and statisticians when analysing signals using sampling. Mathematically, it is of interest to note that with T = 2π (ω₀ = 1) we have found an invariant under Fourier Transformation.

If f(t) and F(ω) are a Fourier Transform pair, then the quantity

E = ∫_{-∞}^{∞} |f(t)|² dt
is called the total energy. This expression is an obvious carry over from f(t) representing a time series. (Attempts at dimensional analysis are fruitless due to the presence of a dimensional one on the right hand side. This is annoying to physicists and engineers, very annoying!) The quantity |F(ω)|² is called the energy spectral density, and the graph of this against ω is called the energy spectrum; it remains a very useful guide as to how the signal can be thought of in terms of its decomposition into frequencies. The energy spectrum of sin(kt) for example is a single spike in ω space at the frequency k (period 2π/k). The constant energy spectrum, where all frequencies are present in equal measure, corresponds to the "white noise" signal characterised by a hiss when rendered audible. The two quantities |f(t)|² and energy spectral density are connected by the transform version of Parseval's theorem (sometimes called Rayleigh's theorem, or Plancherel's identity). See Theorem 4.19 for the series version.
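The idea of the energy spectrum is easily illustrated numerically (a sketch assuming NumPy; the two frequencies, the window length and the taper are arbitrary choices): a signal built from sin(3t) and sin(7t) has |F(ω)|² sharply peaked near ω = 3 and ω = 7 and nearly zero elsewhere.

    import numpy as np

    t, dt = np.linspace(-200.0, 200.0, 400001, retstep=True)
    f = np.sin(3.0 * t) + 0.5 * np.sin(7.0 * t)
    f *= np.exp(-(t / 150.0) ** 10)     # taper so the window edges do not dominate

    for w in (1.0, 3.0, 5.0, 7.0, 9.0):
        F = np.sum(f * np.exp(-1j * w * t)) * dt
        print(w, abs(F)**2)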
Theorem 6.9 (Parseval's, for Transforms) If f(t) has a Fourier Transform F(ω) and ∫_{-∞}^{∞} |f(t)|² dt < ∞, then

∫_{-∞}^{∞} |f(t)|² dt = (1/2π) ∫_{-∞}^{∞} |F(ω)|² dω.

Proof The proof is straightforward:

∫_{-∞}^{∞} f(t) f*(t) dt = ∫_{-∞}^{∞} f*(t) · (1/2π) ∫_{-∞}^{∞} F(ω) e^{iωt} dω dt
                         = (1/2π) ∫_{-∞}^{∞} F(ω) ∫_{-∞}^{∞} f*(t) e^{iωt} dt dω
                         = (1/2π) ∫_{-∞}^{∞} F(ω) (F(ω))* dω
                         = (1/2π) ∫_{-∞}^{∞} |F(ω)|² dω

where f*(t) is the complex conjugate of f(t). The exchange of integral signs is justified as long as their values remain finite. □

Most of the applications of this theorem lie squarely in the field of signal processing, but here is a simple example using the definition of energy spectral density.
Example 6.10 Determine the energy spectral densities for the following functions:

(i) f(t) = { A,  |t| < T
           { 0,  otherwise

(this is the same function as in Example 6.2);

(ii) f(t) = { e^{-at},  t > 0
            { 0,        t < 0.

Solution The energy spectral density |F(ω)|² is found from f(t) by first finding its Fourier Transform. Both calculations are essentially routine.

(i)

F(ω) = ∫_{-∞}^{∞} f(t) e^{-iωt} dt = A ∫_{-T}^{T} e^{-iωt} dt = (A/(-iω)) [e^{-iωt}]_{-T}^{T} = (A/(iω)) [e^{iωT} - e^{-iωT}] = 2A sin(ωT)/ω

as already found in Example 6.2. So we have that

|F(ω)|² = 4A² sin²(ωT)/ω².

(ii)

F(ω) = ∫_0^∞ e^{-at} e^{-iωt} dt = [-e^{-(a+iω)t}/(a + iω)]_0^∞ = 1/(a + iω) = (a - iω)/(a² + ω²).

Hence

|F(ω)|² = ((a - iω)/(a² + ω²)) · ((a + iω)/(a² + ω²)) = 1/(a² + ω²). □
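Parseval's theorem can be verified numerically for the function of part (ii) (a sketch assuming SciPy; the value of a is arbitrary): both sides equal 1/(2a).

    import numpy as np
    from scipy.integrate import quad

    a = 1.7
    time_side = quad(lambda t: np.exp(-2.0 * a * t), 0.0, np.inf)[0]
    freq_side = quad(lambda w: 1.0 / (a**2 + w**2), -np.inf, np.inf)[0] / (2.0 * np.pi)
    print(time_side, freq_side, 1.0 / (2.0 * a))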
There is another aspect of signal processing that ought to be mentioned. Most signals are not deterministic and have to be analysed by using statistical techniques such as sampling. It is by sampling that a time series which is given in the form of an analogue signal (a wiggly line as in Figure 6.5) is transformed into a digital one (usually a series of zeros and ones).

Figure 6.5: A time series

The Fourier Transform is a means by which a signal can be broken down into component frequencies, from t space to ω space. This cannot be done directly since time series do not conveniently obey the conditions at ±∞ that enable the Fourier Transform to exist formally. The autocovariance function is the convolution of f with itself and is a measure of the agreement (or correlation) between two parts of the signal a time t apart. It turns out that the autocovariance function of the time series is, however, well behaved at infinity and it is usually this function that is subject to spectral decomposition, either directly (analogue) or via sampling (digital). Digital time series analysis is now a very important subject due to the omnipresent (digital) computer. We will all experience digital television signals in the next few years; this is the latest manifestation of digital signal analysis. We shall not pursue it further here. What we hope to have achieved here is an appreciation of the importance of Fourier Transforms and Fourier series to the subject. Let us finish this chapter by doing an example which demonstrates a slightly different way of illustrating the relationship between finite and standard Fourier Transforms. The electrical engineering fraternity define the window function W(x) by

W(x) = { 0,    |x| > 1/2
       { 1/2,  |x| = 1/2
       { 1,    |x| < 1/2,

giving the picture of Figure 6.6. This is almost the same as the "top hat" function defined in the last example. Spot the subtle difference.
Figure 6.6: The window function W(x)

Use of this function in Fourier Transforms immediately converts a Fourier Transform into a finite Fourier Transform as follows:

∫_{-∞}^{∞} W((x - ½(b + a))/(b - a)) f(x) e^{-iωx} dx = ∫_a^b f(x) e^{-iωx} dx.

It is easy to check that if

t = (x - ½(b + a))/(b - a)

then t > ½ corresponds to x > b and t < -½ corresponds to x < a. What this approach does is to move the work from inverting a finite Fourier Transform in terms of Fourier series to evaluating

(1/2π) ∫_{-∞}^{∞} F_W(ω) e^{iωx} dω

where F_W(ω) is the Fourier Transform of the "windowed" version of f(x). It will come as no surprise to learn that the calculation of this integral is every bit as difficult (or easy) as directly inverting the finite Fourier Transform. The choice lies between working with Fourier series directly or working with F_W(ω), which involves series of generalised functions.

6.6 Exercises
1 . Determine the Fourier Transform of the function f(t) defined by k -T
{
3. Show that

∫_0^∞ (sin(u)/u) du = π/2.

4. Define

f(t) = { e^{-t},  t ≥ 0
       { 0,       t < 0

and show that

f(at) * f(bt) = (f(at) - f(bt))/(b - a)

where a and b are arbitrary real numbers. Hence also show that

f(at) * f(at) = t f(at)

where * is the Fourier Transform version of the convolution operation defined by

f(t) * g(t) = ∫_{-∞}^{∞} f(τ) g(t - τ) dτ.
5. Consider the integral g(x) and, using the substitution u = t - 1/2x, show that |g(x)| → 0 as x → ∞, hence providing a simple illustration of the Riemann-Lebesgue lemma.
6. Derive Parseval's formula:

∫_{-∞}^{∞} f(t) G(it) dt = ∫_{-∞}^{∞} F(it) g(t) dt

where F and G are the Fourier Transforms of f(t) and g(t) respectively and all functions are assumed to be well enough behaved.

7. Define f(t) = 1 - t², -1 < t < 1, zero otherwise, and g(t) = e^{-t}, 0 ≤ t < ∞, zero otherwise. Find the Fourier Transforms of each of these functions and hence deduce the value of the integral

∫_0^∞ (4e^{-t}/t³)(t cosh(t) - sinh(t)) dt

by using Parseval's formula (see Exercise 6). Further, use Parseval's theorem for Fourier Transforms, Theorem 6.9, to evaluate the integral

∫_0^∞ (t cos(t) - sin(t))²/t⁶ dt.
8. Consider the partial differential equation

u_t = k u_xx,  x > 0, t > 0,

with boundary conditions u(x, 0) = g(x), u(0, t) = 0, where g(x) is a suitably well behaved function of x. Take Laplace Transforms in t to obtain an ordinary differential equation, then take Fourier Transforms in x to solve this in the form of an improper integral.

9. The Helmholtz equation takes the form

u_xx + u_yy + k² u = f(x, y),  -∞ < x, y < ∞.

Assuming that the functions u(x, y) and f(x, y) have Fourier Transforms, show that the solution to this equation can formally be written

u(x, y) = -(1/4π²) ∫∫∫∫ e^{-i[λ(x-ξ)+μ(y-η)]} f(ξ, η)/(λ² + μ² - k²) dλ dμ dξ dη,

where all the integrals are from -∞ to ∞. State carefully the conditions that must be obeyed by the functions u(x, y) and f(ξ, η).

10. Use the window function W(x) defined in the last section to express the coefficients (c_n) in the complex form of the Fourier series for an arbitrary function f(x) in terms of an integral between -∞ and ∞. Hence, using the inversion formula for the Fourier Transform, find an integral for f(x). Compare this with the Fourier series and the derivation of the transform in Section 6.2, noting the role of periodicity and the window function.
11. The two series

F_k = Σ_{n=0}^{N-1} f_n e^{-inkΔωT},   f_n = (1/N) Σ_{k=0}^{N-1} F_k e^{inkΔωT},

where Δω = 2π/(NT), define the discrete Fourier Transform and its inverse. Outline how this is derived from continuous (standard) Fourier Transforms by considering the series f_n as a sampled version of the time series f(t).

12. Using the definition in the previous exercise, determine the discrete Fourier Transform of the sequence {1, 2, 1} with T = 1.

13. Find, in terms of Dirac δ functions, the Fourier Transforms of cos(ω₀t) and sin(ω₀t), where ω₀ is a constant.
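For readers who wish to experiment, the pair of sums in Exercise 11 can be written out directly (a sketch assuming NumPy; the four-term sequence is an arbitrary choice, not that of Exercise 12) and compared with a library FFT:

    import numpy as np

    f = np.array([1.0, 3.0, 2.0, 5.0])
    N, T = len(f), 1.0
    dw = 2.0 * np.pi / (N * T)

    F = np.array([sum(f[n] * np.exp(-1j * n * k * dw * T) for n in range(N))
                  for k in range(N)])
    f_back = np.array([sum(F[k] * np.exp(1j * n * k * dw * T) for k in range(N)) / N
                       for n in range(N)])

    print(np.allclose(F, np.fft.fft(f)))   # the definition coincides with the standard FFT
    print(np.allclose(f_back, f))          # the inverse recovers the sequence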
7 Complex Variables and Laplace Transforms

7.1 Introduction

The material in this chapter is written on the assumption that you have some familiarity with complex variable theory (or complex analysis). That is, we assume that defining f(z) where z = x + iy, i = √(-1), and where x and y are independent variables is not totally mysterious. In Laplace Transforms, s can fruitfully be thought of as a complex variable. Indeed parts of this book (Section 6.2 for example) have already strayed into this territory. For those for whom complex analysis is entirely new (rather unlikely if you have got this far!), there are many excellent books on the subject. Those by Priestley (1985), Stewart and Tall (1983) or (more to my personal taste) Needham (1997) or Osborne (1999) are recommended. We give a brief resumé of required results without detailed proof. The principal reason for needing complex variable theory is to be able to use and understand the proof of the formula for inverting the Laplace Transform. Section 7.6 goes a little further than this, but not much. The complex analysis given here is therefore by no means complete.
7.2 Rudiments of Complex Analysis

In this section we shall use z (= x + iy) as our complex variable. It will be assumed that complex numbers, e.g. 2 + 3i, are familiar, as are their representation on an Argand diagram (Figure 7.1). The quantity z = x + iy is different in that x and y are both (real) variables, so z must be a complex variable.

Figure 7.1: The Argand diagram

In this chapter we shall be concerned with f(z), that is the properties of functions of a complex variable. In general we are able to split f(z) into real and imaginary parts

f(z) = φ(x, y) + iψ(x, y)

where φ and ψ are real functions of the two real variables x and y. For example

sin(z) = sin(x) cosh(y) + i cos(x) sinh(y).

This is the reason that many results from two real variable calculus are useful in complex analysis. However it does not by any means tell the whole story of the power of complex analysis. One astounding fact is that if f(z) is a function that is once differentiable with respect to z, then it is also differentiable twice, three times, as many times as we like! The function f(z) is then called a regular or analytic function. Such a concept is absent in real analysis where differentiability to any specific order does not guarantee further differentiability. If f(z) is regular then we can show that

∂φ/∂x = ∂ψ/∂y  and  ∂φ/∂y = -∂ψ/∂x.

These are called the Cauchy-Riemann equations. Further, we can show that ∇²φ = 0 and ∇²ψ = 0, i.e. both φ and ψ are harmonic. We shall not prove any of these results. Once a function of a single real variable is deemed many times differentiable in a certain range (one dimensional domain), then one possibility is to express the function as a power series. Power series play a central role in both real and complex analysis. The power series of a function of a real variable about the
point x = x₀ is the Taylor series. Truncate this series after n + 1 terms and we get the result

f(x) = f(x₀) + (x - x₀) f′(x₀) + ((x - x₀)²/2!) f″(x₀) + ··· + ((x - x₀)ⁿ/n!) f⁽ⁿ⁾(x₀) + R_n

where

R_n = ((x - x₀)ⁿ⁺¹/(n + 1)!) f⁽ⁿ⁺¹⁾(x₀ + θ(x - x₀)),  0 ≤ θ ≤ 1,

is the remainder. This series is valid in the range |x - x₀| ≤ r, where r is called the radius of convergence. The function f(x) is differentiable n + 1 times in the region x₀ - r ≤ x ≤ x₀ + r. This result carries through unchanged to complex variables

f(z) = f(z₀) + (z - z₀) f′(z₀) + ((z - z₀)²/2!) f″(z₀) + ··· + ((z - z₀)ⁿ/n!) f⁽ⁿ⁾(z₀) + ···

but now of course f(z) is analytic in a disc |z - z₀| < r. What follows, however, has no direct analogy in real variables. The next step is to consider functions that are regular in an annular region r₁ < |z - z₀| < r₂ or, in the limit, a punctured disc 0 < |z - z₀| < r₂. In such a region, a complex function f(z) possesses a Laurent series

f(z) = Σ_{n=-∞}^{∞} a_n (z - z₀)ⁿ.

The a₀ + a₁(z - z₀) + a₂(z - z₀)² + ··· part of the series is the "Taylor part", but the a_n cannot in general be expressed in terms of derivatives of f(z) evaluated at z = z₀ for the very good reason that f(z) is not analytic at z = z₀, so such derivatives are not defined! The rest of the Laurent series,

··· + a₋₂/(z - z₀)² + a₋₁/(z - z₀),

is called the principal part of the series and is usefully employed in helping to characterise with some precision what happens to f(z) at z = z₀. If a₋₁, a₋₂, ..., a₋ₙ, ... are all zero, then f(z) is analytic at z = z₀ and possesses a Taylor series. For such functions, which have little application here, a₁ = f′(z₀), a₂ = ½f″(z₀), etc. The last coefficient of the principal part, a₋₁, the coefficient of 1/(z - z₀), is termed the residue of f(z) at z = z₀ and has particular importance. Dealing with finding the a_n (positive or negative n) in the Laurent series for f(z) takes us into the realm of complex integration. This is because, in order to find a_n, we manipulate the values f(z) has in the region of validity of the Laurent series, i.e. in the annular region, to infer something about a_n. To do this, a complex version of the mean value theorem for integrals is used, called Cauchy's integral formulae. In general, these take the form
g⁽ⁿ⁾(z₀) = (n!/2πi) ∮_C g(z)/(z - z₀)ⁿ⁺¹ dz

where we assume that g(z) is an analytic function of z everywhere inside C (even at z = z₀). (Integration in this form is formally defined in the next section.) The case n = 0 gives insight, as the integral then assumes classic mean value form. Now if we consider f(z), which is not analytic at z = z₀, and let

f(z) = g(z)/(z - z₀)ⁿ

where g(z) is analytic at z = z₀, then we have said something about the behaviour of f(z) at z = z₀: f(z) has what is called a singularity at z = z₀, and we have specified this as a pole of order n. A consequence of g(z) being analytic and hence possessing a Taylor series about z = z₀ valid in |z - z₀| < r is that the principal part of f(z) has leading term a₋ₙ/(z - z₀)ⁿ, with a₋ₙ₋₁, a₋ₙ₋₂, etc. all equalling zero. A pole of order n is therefore characterised by having n terms in its principal part. If n = 1, f(z) has a simple pole (pole of order 1) at z = z₀. If there are infinitely many terms in the principal part of f(z), then z = z₀ is called an essential singularity of f(z), and there are no Cauchy integral formulae. The foregoing theory is not valid for such functions. It is also worth mentioning branch points at this stage, although they do not feature for a while yet. A branch point of a function is a point at which the function is many valued. √z is a simple example, Ln(z) a more complicated one. The ensuing theory is not valid for branch points.

7.3 Complex Integration
The integration of complex functions produces many surprising results, none of which are even hinted at by the integration of a single real variable. Integration in the complex z plane is a line integral. Most of the integrals we shall be concerned with are integrals around closed contours. Suppose p(t) is a complex valued function of the real variable t, with t ∈ [a, b], which has real part p_r(t) and imaginary part p_i(t), both of which are piecewise continuous on [a, b]. We can then integrate p(t) by writing

∫_a^b p(t) dt = ∫_a^b p_r(t) dt + i ∫_a^b p_i(t) dt.

The integrals on the right are Riemann integrals. We can now move on to define contour integration. It is possible to do this in terms of line integrals. However, line integrals have not made an appearance in this book so we avoid them here. Instead we introduce the idea of contours via the following set of simple definitions:

Definition 7.1
1. An arc C is a set of points {(x(t), y(t)) : t ∈ [a, b]} where x(t) and y(t) are continuous functions of the real variable t. The arc is conveniently described in terms of the complex valued function z of the real variable t where z(t) = x(t) + iy(t).
2. An arc is simple if it does not cross itself. That is, z(t₁) = z(t₂) ⟹ t₁ = t₂ for all t₁, t₂ ∈ [a, b].
3. An arc is smooth if z′(t) exists and is non-zero for t ∈ [a, b]. This ensures that C has a continuously turning tangent.
4. A (simple) contour is an arc consisting of a finite number of (simple) smooth arcs joined end to end. When only the initial and final values of z(t) coincide, the contour is a (simple) closed contour.

Figure 7.2: The curve C in the complex plane

Here, all contours will be assumed to be simple. It can be seen that as t varies, z(t) describes a curve (contour) on the z-plane. t is a real parameter, the value of which uniquely defines a point on the curve z(t) on the z-plane. The values t = a and t = b give the end points. If they are the same, as in Figure 7.2, the contour is closed, and it is closed contours that are of interest here. By convention, t increasing means that the contour is described anti-clockwise. This comes from the parameter θ describing the circle z(θ) = e^{iθ} = cos(θ) + i sin(θ) anti-clockwise as θ increases. Here is the definition of integration which follows directly from this parametric description of a curve.

Definition 7.2 Let C be a simple contour as defined above extending from the point α = z(a) to β = z(b). Let a domain D be a subset of the complex plane and let the curve C lie wholly within it. Define f(z) to be a piecewise continuous function on C, that is f(z(t)) is piecewise continuous on the interval [a, b]. Then the contour integral of f(z) along the contour C is

∫_C f(z) dz = ∫_a^b f(z(t)) z′(t) dt.

In addition to f(z) being continuous at all points of a curve C (see Figure 7.2), it will also be assumed to be of finite length (rectifiable). An alternative definition following classic Riemann integration (see Chapter 1) is also possible.
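Definition 7.2 can be tried out at once numerically (a sketch assuming NumPy): parametrise the unit circle by z(t) = e^{it}, 0 ≤ t ≤ 2π, so that z′(t) = ie^{it}, and sum f(z(t)) z′(t) over a fine grid of t.

    import numpy as np

    t, dt = np.linspace(0.0, 2.0 * np.pi, 200001, retstep=True)
    z = np.exp(1j * t)        # the unit circle
    dz = 1j * np.exp(1j * t)  # z'(t)

    for f in (lambda z: z**2, lambda z: 1.0 / z**2, lambda z: 1.0 / z):
        print(np.sum(f(z) * dz) * dt)

The first two integrands give (numerically) zero, anticipating Cauchy's theorem and the term by term calculation later in this chapter, while 1/z gives 2πi.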
The first result we need is Cauchy's theorem, which we now state.

Theorem 7.3 If f(z) is analytic in a domain D and on its (closed) boundary C, then

∮_C f(z) dz = 0,

where the small circle within the integral sign denotes that the integral is around a closed loop.
Proof There are several proofs of Cauchy's theorem that place no reliance on the analyticity of f(z) on C and only use the analyticity of f(z) inside C. However these proofs are rather involved and unenlightening except for those whose principal interest is pure mathematics. The most straightforward proof makes use of Green's theorem in the plane, which states that if P(x, y) and Q(x, y) are continuous and have continuous derivatives in D and on C, then

∮_C (P dx + Q dy) = ∬_D (∂Q/∂x - ∂P/∂y) dx dy.

This is easily proved by direct integration of the right hand side. To apply this to complex variables, consider f(z) = P(x, y) + iQ(x, y) and z = x + iy so

f(z) dz = (P(x, y) + iQ(x, y))(dx + i dy) = P(x, y) dx - Q(x, y) dy + i(P(x, y) dy + Q(x, y) dx)

and so

∮_C f(z) dz = ∮_C (P(x, y) dx - Q(x, y) dy) + i ∮_C (P(x, y) dy + Q(x, y) dx).

Using Green's theorem in the plane for both integrals gives

∮_C f(z) dz = ∬_D (-∂Q/∂x - ∂P/∂y) dx dy + i ∬_D (∂P/∂x - ∂Q/∂y) dx dy.

Now the Cauchy-Riemann equations imply that if f(z) = P(x, y) + iQ(x, y) then

∂P/∂x = ∂Q/∂y  and  ∂P/∂y = -∂Q/∂x,

which immediately gives

∮_C f(z) dz = 0

as required. □

Cauchy's theorem is so useful that pure mathematicians have spent much time and effort reducing the conditions for its validity. It is valid if f(z) is analytic inside and not necessarily on C (as has already been said). This means that it is possible to extend it to regions that are semi-infinite (for example, a
half plane). This is crucial for a text on Laplace Transforms.

Figure 7.3: The indented contour

Let us look at another consequence of Cauchy's theorem. Suppose f(z) is analytic inside a closed curve C except at the point z = z₀ where it is singular. Then

∮_C f(z) dz

may or may not be zero. If we exclude the point z = z₀ by drawing the indented contour C′ (see Figure 7.3), where γ is a small circle surrounding z = z₀, then

∮_{C′} f(z) dz = 0

as f(z) is analytic inside C′ by construction. Hence

∮_C f(z) dz = ∮_γ f(z) dz

since

∫_{AB} f(z) dz + ∫_{BA} f(z) dz = 0

as they are equal and opposite. In order to evaluate ∮_C f(z) dz, therefore, we need only evaluate ∮_γ f(z) dz, where γ is a small circle surrounding the singularity z = z₀ of f(z). Now we apply the theory of Laurent series introduced in the last section. In order to do this with success, we restrict attention to the case where z = z₀ is a pole. Inside γ, f(z) can be represented by the Laurent series

f(z) = a₋ₙ/(z - z₀)ⁿ + ··· + a₋₂/(z - z₀)² + a₋₁/(z - z₀) + a₀ + Σ_{k=1}^{∞} a_k (z - z₀)ᵏ.
We can thus evaluate ∮_γ f(z) dz directly term by term. This is valid as the Laurent series is uniformly convergent. A typical integral is thus

∮_γ (z - z₀)ᵏ dz,  k ≥ -n, n any positive integer or zero.

On γ, z - z₀ = εe^{iθ}, 0 ≤ θ < 2π, where ε is the radius of the circle γ and

dz = iεe^{iθ} dθ.

Hence

∮_γ (z - z₀)ᵏ dz = ∫_0^{2π} iεᵏ⁺¹ e^{(k+1)iθ} dθ = [εᵏ⁺¹ e^{(k+1)iθ}/(k + 1)]_0^{2π} = 0  if k ≠ -1.

If k = -1,

∮_γ (z - z₀)⁻¹ dz = ∫_0^{2π} i dθ = 2πi.

Thus

∮_γ (z - z₀)ᵏ dz = { 2πi,  k = -1
                   { 0,    k ≠ -1.

Hence

∮_γ f(z) dz = 2πi a₋₁

where a₋₁ is the coefficient of 1/(z - z₀) in the Laurent series of f(z) about the singularity at z = z₀, called the residue of f(z) at z = z₀. We thus, courtesy of Cauchy's theorem, arrive at the residue theorem.

Theorem 7.4 (Residue Theorem) If f(z) is analytic inside C except at N points z₁, z₂, ..., z_N where f(z) has poles, then

∮_C f(z) dz = 2πi Σ_{k=1}^{N} (residue of f(z) at z = z_k).

Proof This result follows immediately on a straightforward generalisation of the case just considered to the case of f(z) possessing N poles inside C. No further elaboration is either necessary or given. □
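As an illustration of the residue theorem (a sketch assuming SymPy and NumPy; the function and the contour radius are arbitrary choices), take f(z) = z/((z - 1)(z + 2)), which has simple poles at z = 1 and z = -2, both inside the circle |z| = 3:

    import numpy as np
    import sympy as sp

    zs = sp.symbols('z')
    f_sym = zs / ((zs - 1) * (zs + 2))
    residues = [sp.residue(f_sym, zs, p) for p in (1, -2)]       # 1/3 and 2/3
    print(residues, 2 * sp.pi * sp.I * sum(residues))            # predicts 2*pi*i

    t, dt = np.linspace(0.0, 2.0 * np.pi, 400001, retstep=True)
    z = 3.0 * np.exp(1j * t)                                     # circle of radius 3
    dz = 3.0j * np.exp(1j * t)
    print(np.sum(z / ((z - 1.0) * (z + 2.0)) * dz) * dt)         # numerically ~ 2*pi*i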
If f(z) is a function whose only singularities are poles, it is called meromorphic. Meromorphic functions, when integrated around closed contours C that contain at least one of the singularities of f(z), lead to simple integrals. Distorting these closed contours into half or quarter planes leads to one of the most widespread of the applications of the residue theorem, namely the evaluation of real but improper integrals such as

∫_0^∞ dx/(1 + x⁴)  or  ∫_0^∞ cos(πx)/(a² + x²) dx.

Figure 7.4: The semi-circular contour C

These are easily evaluated by residue calculus (i.e. application of the residue theorem). The first by

∮_C dz/(1 + z⁴)

where C is a semi-circle in the upper half plane whose radius then tends to infinity, and the second by considering

∮_C e^{iπz}/(a² + z²) dz

over a similar contour. We shall do these examples as illustrations, but point to books on complex variables for further examples. It is the evaluation of contour integrals that is a skill required for the interpretation of the Inverse Laplace Transform.

Example 7.5 Use suitable contours C to evaluate the two real integrals

(i) ∫_0^∞ cos(πx)/(a² + x²) dx,  (ii) ∫_0^∞ dx/(1 + x⁴).

Solution (i) For the first part, we choose the contour C shown in Figure 7.4, that is a semi-circular contour in the upper half plane. We consider the integral

∮_C e^{iπz}/(a² + z²) dz.

Now,
∮_C e^{iπz}/(a² + z²) dz = ∫_Γ e^{iπz}/(a² + z²) dz + ∫_{-R}^{R} e^{iπx}/(a² + x²) dx

where Γ denotes the curved portion of C and z = x on the real axis. On Γ, z = Re^{iθ}, 0 ≤ θ ≤ π, so that |e^{iπz}| = e^{-πR sin(θ)} ≤ 1, hence

|∫_Γ e^{iπz}/(a² + z²) dz| ≤ πR/(R² - a²) → 0 as R → ∞.

Hence, as R → ∞,

∮_C e^{iπz}/(a² + z²) dz → ∫_{-∞}^{∞} e^{iπx}/(a² + x²) dx.

Using the residue theorem,

∮_C e^{iπz}/(a² + z²) dz = 2πi {residue at z = ia}.

The residue at z = ia is given by the simple formula

a₋₁ = lim_{z→z₀} [(z - z₀) f(z)],

where the evaluation of the limit usually involves L'Hôpital's rule. Thus we have

lim_{z→ia} (z - ia) e^{iπz}/(z² + a²) = e^{-πa}/(2ai).

Thus, letting R → ∞ gives

∫_{-∞}^{∞} e^{iπx}/(a² + x²) dx = 2πi · e^{-πa}/(2ai) = (π/a) e^{-πa}.

Finally, note that

∫_{-∞}^{∞} e^{iπx}/(a² + x²) dx = ∫_{-∞}^{∞} cos(πx)/(a² + x²) dx + i ∫_{-∞}^{∞} sin(πx)/(a² + x²) dx

and

∫_{-∞}^{∞} cos(πx)/(a² + x²) dx = 2 ∫_0^∞ cos(πx)/(a² + x²) dx.

Thus,

∫_0^∞ cos(πx)/(a² + x²) dx = (π/2a) e^{-πa}.
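Both integrals of Example 7.5 are easily confirmed numerically (a sketch assuming SciPy; the value π/(2√2) used for part (ii) is the one derived next):

    import numpy as np
    from scipy.integrate import quad

    a = 2.0
    part_i = quad(lambda x: 1.0 / (a**2 + x**2), 0.0, np.inf,
                  weight='cos', wvar=np.pi)[0]
    print(part_i, (np.pi / (2.0 * a)) * np.exp(-np.pi * a))

    part_ii = quad(lambda x: 1.0 / (1.0 + x**4), 0.0, np.inf)[0]
    print(part_ii, np.pi / (2.0 * np.sqrt(2.0)))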
(x
(ii) We could use a quarter circle > and y > for this problem as this only requires the calculation of a single residue = !J? · However, using the same contour (Figure 7.4) is in the end easier. The integral
0
0)
z
dz 4 = 21ri {sum of residues}. 1 1 +z c
= The residues are at the two solutions of that lie inside C, i.e. � and A With �' the two values of the residues calculated using the same formula as before are ��i and ��. Their sum is � - Hence
z= -
z=
f(z) = z4 + 1
i.
0
on r (the curved part of C)
1 < -1- 1· R4 1+z4 I I r dz < 1rR I lr 1 + z4 I R4 - 1 0 as R 100 dx = 1r . 1 + x4 -./2 100 dx 1 100 dx = . 1 + x4 2 _00 1 + x4 2-./2 --
Hence
--+
Thus
--+
oo.
- oo
Hence
1r
=
0
This is all we shall do on this large topic here. We need to move on and consider branch points. 7.4
Branch Points
For the applications of complex variables required in this book, we need one further important development. Functions such as square roots are double val ued for real variables. In complex variables, square roots and the like are called functions with branch points. The function w = ..fZ has a branch point at To see this and to get a and as e varies from to 211" ' describes feel for the implications, write = a circle radius in the plane, only a semi circle radius .fF is described in the w plane. In order for w to return to its original value, 0 has to reach 47r . Therefore there are actually two planes superimposed upon one another in the plane (0 varying from to 47r), under the mapping each point on the w plane can arise from one point on (one of) the planes. The two planes are sometimes called Riemann sheets, and the place where one sheet ends and the next one starts is
r
0
z
z
z
rei(J
z = 0. z
0
z z
168
An Introduction to Laplace Transforms and Fourier Series z-plane
w-plane
,--
Figure 7.5: The mapping w = Vz
Figure 7.6: A 3D visualisation of w = vz. The horizontal axes are the Argand diagram and the quantity "Svz is displayed on the vertical axis. The cut along the negative real axis is clearly visible
oo
marked by a cut from z = to z = and is typically the positive real axis (see Figure 7.5}. Perhaps a better visualisation is given in Figure 7.6 in which the vertical axis is the imaginary part of vz. It is clearly seen that a discontinuity develops along the negative real axis, hence the cut. The function w = Vi has two sheets, whereas the function w = zk a positive integer} has sheets and the function w = ln(z} has infinitely many sheets. When a contour is defined for the purposes of integration, it is not permitted for the contour to cross a cut. Figure 7.7 shows a cut and a typical way in which crossing it is avoided. In this contour, a complete circuit of the origin is desired but rendered impossible because of the cut along the positive real axis. So we start at B just above the real axis, go around the circle as far as C below B. Then along CD just below the real axis, around a small circle surrounding the origin as far as A then along and just above the real axis to complete the circuit at B. In order for this contour (called a contour) to approach the desired circular contour, IADI -t IBCI -t and the radius of. the small circle surrounding the origin also tends to zero. In order to see this process in operation let us do an example.
0
(N
0,
N
key hole 0
169
7. Complex Variables a nd Laplace Transforms
Figure 7. 7: The keyhole contour Example
7.6 Find the value of the real integral 1o00 1xa:+-1x dx where 0 < a < 1 a a real constant.
Solution In order to find this integral, we evaluate the complex contour integral
za:-1 dz lc 1 + z r
where C is the keyhole contour shown in Figure 7. 7. Sometimes there is trouble because there is a singularity where the cut is usually drawn. We shall meet this in the next example, but the positive real axis is free of singularities in this example. The simple pole at = is inside C and the residue is given by
z -1
za: - 1 1r z-+lim- 1 (z + 1) (z + 1) = e
Thus
za:- 1 dz = r za:-1 dz + 1 za:-1 dz + 1 za:-1 dz + r za:- 1 dz 1+ lc 1 + z lr 1 + 1+ jAB 1 + 1r = 21rie(a: - 1 )i . --
Z
Each integral is taken in turn. On r
CD
Z
'Y
Z
z = Rei9, 0 $ 8 $ 211", R >> 1, I 1za:+-z1 I < RRa:--11 '.
Z
170
An Introduction to Laplace Transforms and Fourier Series
hence
l l ;::i dz l < �':-� 21rR 0 z = xe21r so 1 _z dz = 1e _x -7
On CD,
CD
1z
a- 1
1+z
_ _
as
a-1
R -7 oo since a <
1.
2 (a - 1) dx.
R 1 + x e 1ri __
10 €a-1 ei(a-1)8 i€ei8d -7 0 € -7 0 since a > 0. dz = (} .8 1+z 21r 1 + €et Finally, on AB, z = x so 1 --dz za-1 = !R xa-1 dx. e 1+x AB 1 + z so
'Y
a-1
as
-
--
Adding all four integrals gives, letting a -7 0 and R -7 oo,
0 a- 1 a-1 { x e27ri(a-1) dx + rXJ x dx 27rie
Rearranging gives
r} = x1 a-+ x1 dx
7r sin(a1r) " This is only valid if 0 < a < the integral is singular otherwise. To gain experience with a different type of contour, here is a second example. After this, we shall be ready to evaluate the so called Bromwich contour for the Inverse Laplace 'fransform.
0
=
1,
7.7 Use a semi-circular contour in the upper half plane to evaluate 1 2ln z 2 dz (a > 0) real and positive 0z +a and deduce the values of two real integrals. Solution Figure 7.8 shows the semi-circular contour. It is indented at the origin z = 0 is an essential singularity of the integrand z2ln+za2 Thus ln z due at z = ia} = f 2 }0 z + a2 dz 21ri {Resi Example
as
•
provided R is large enough and the radius of the small semi-circle "f -7 0.
171
7. Complex Variables and Laplace Transforms
�{z }
Figure 7.8: The indented semi-circular contour C. The residue at z = ia is given by ln(ia) = !!... i ln(a) 2ia 4a 2a (a O) so the right-hand side of the residue theorem becomes 1r In(a) + ,. 1r2 . a 2a On the semi-circular indent i8)iEei8 d() -+ 0 as E -+ 0. 1 z2lnza2 dz = 1° ln(Ee 2 2 E e i8 + a2 "Y + 1r (This is because E In E -+ 0 as E -+ 0.) Thus 1 z2lnza2 dz = -1rlna + ,. 1r2 . a 2a + Now, as we can see from Figure 7.8, 1c z2lnz+ a2 dz = r z2ln+za2 dz + z2lnz 2 dz + 1 z2lnza2 dz + 1 z2lnza2 dz. + + L +a lr On r, z = Rei8 , 0 � () � 1r, and lnz lnR zI 2 + a2 I � R2 - a2 so In z 2 dz � 1rR2 In R2 -+ O as R -+ oo. 2 r z +a I R -a l lr Thus letting 0 and evaluating the straight line integrals via zR=-+x onoo,CDtheandradius z = ofxei1r on-+ AB gives r lnz dz = 1° In xei1r e'1r dx + 100 In x dx lc z2 + a2 00 x2 + a2 0 x2 + a2 = 2 loroo x2ln+Xa2 dx + i1r }0roo x2 + a2 . >
_
---
'Y,
0
CD
"Y
AB
'Y
.
dx
172
An Introduction to Laplace Transforms and Fourier Series
The integral is equal to
1r In a + �. 1r2
--
a 2a so equating real and imaginary parts gives the two real integrals rXJ lnx = ?rlna Jo x2 + a2 dx 2a and 100 x2�a2 = � · The second integral is an easily evaluated arctan standard form. 7.5
The Inverse Laplace Transform
We areItnow ready totoderive and use thetheformula forof athetransform inverse soLaplace Transin form. is a surprise engineers that inverse embedded real variables as the Laplace Transform requires sosurprising deep a knowledge of complex variables for its evaluation. It should not be so having studied the last two chapters. We state the inverse transform as a theorem.
If the Laplace Transform of F(t) exists, that is F(t) is of expo nential order and f(s) = 100 e -st F(t)dt then 1 . 10'+ik f(s)e8tds t > O F(t) = klim -+oo 21T� 0' -ik where IF(t)l � eMt for some positive real number M and is another real number such that > M. Theorem 7.8
{
}
-
u
u
TheHowever, proof ofwethishavehasnowalready been outlined incomplex Sectionvariable 6.2 of the last chapter. done enough formal theory toF (w) giveasa more complete proof. The outline remains the same in that we define in the last chapter, namely k Fk(w) = 100 e-(k+iw)z f(x)dx and rewrite this in notation more suited to Laplace Transforms, i.e. x becomes t, k + iw becomes and the functions are renamed. f(x) becomes F(t) and Fk(w) becomes f(s). However, the mechanics of the proof follows as before with Equation 6.2. Using the new notation, these two equations convert to Proof
s
173
7. Complex Variables and Laplace Transforms s-plane
u-iR
Figure 7.9: The Bromwich contour
� /_: est f(s)d{�(s)} = { �(t/ �00 where the integral ona complex the left isvalued a realintegral, integral.formally d{�(s)} = a real differential. Converting this to done by recognising that ds = id{�(s)} , gives the required formula, viz.
and
2
dJAJ
181 es f(s)ds 7r so t
1 F(t) = 2
where s0 and s1 represent the infinite limits (k - ioo, k + ioo in the notation ofof F(t) Chapter 6). Now, however we can be more precise. The required behaviour means that the real part of s must be at least as large as u, otherwise IF(t)l does not --+ 0 t --+ oo on the straight line between s0 and This line is parallel to the imaginary axis. The theorem is thus formally established. as
St .
0
TheFigure way of7.9.evaluating this integral is viathea closed contourcontour, of theconsists type shown inportion This contour, often called Bromwich of a of a circle, radius R, together with a straight line segment connecting thethetwosingularities points u - iRof the and function u + iR. The real number u must be selected so that allfrom s) are to the left of this line. This follows the conditions of Theorem 7.8.f (The integral where is thetheBromwich Cauchy'itself s residue perhapsC with additioncontour of one oris evaluated two cuts. using The integral is thetheorem, sum of
174
An Introduction to Laplace Transforms and Fourier Series
the integral over the curved portion, the integral along any cuts present and and the whole is 27ri times the sum of the residues of f (s)est inside C. The above integral is made the subject of this formula, and as R -+ oo this integral becomes F(t). this way, F(t) is calculated. Let us do two examples to see how the process operates. In
Example 7.9
Use the Bromwich contour to find the value of c -t { (s + 1)(s1 - 2)2 } •
Solution It is quite easy to find this particular inverse Laplace Transform using partial fractionsintegral as in Chapter illustration of the use ofdirect the methods. contour method.2; Inhowever the nextit serves example,as anthere are no alternative Now, est ds = 2 i {sum ofresidues} rJo (s + 1)(s 2)2 where C is the Bromwich contour of Figure 7.9. The residue at s = 1 is given by 1 est = - e -t . 1) lim (s -t 2)2 9 1)( ( + s 1 The residue at s 2 is given by d est [ (s 1)test - est ] = .! (3te2t - e2t ) . lim (s + 1)2 s=2 9 s-t2 ds (s + 1) Thus est ds = 27rt { 1 (e -t + 3te2t - e2t) } . rJo (s + 1)(s 9 2)2 Now, 1 ..,.--,..,.est...,. -."'"" ds = i est ds + l"'+iR est ds (s + 1)(s - 2)2
_
=
=
S -
S
+
+
.
_
r
+
c- 1
=
+
-
175
7. Complex Variables and Laplace Transforms s-plane
Figure 7.10: The cut Bromwich contour
Example
7.10 Find
e, - t
where a is a real constant.
{ e-8a,;B } --
Solution The presence of e- ay'S means that the origin 8 = 0 is a branch point. is thus necessary to use the cut Bromwich contour C' shown in Figure 7.10. ItNow, 1 est-a as
C'
S
,;B d8 = 0
---
by Cauchy' s theorem as thereintegral are no bysingularities We now evaluate the contour splitting itofintotheitsintegrand five parts:inside - C' . 1 est-a,;B d8 = { + r+iR + { + { + { = 0 8 lr lu- iR jAB j"Y leD and consider each bitcirclein turnis �:.. The radius of the large circle r is R and the radius of the small On r, 8 = Re i8 = Rcos (O) + iRsin (O) and on the left hand side of the s-plane cosO < 0. This means that, on r, R 8 = and by the estimation lemma, h 0 R oo. The second is the one required and we turn our attention to the third integral alongintegral AB. On AB, 8 = xei1r whence C'
'Y
--+
8
as
=
--+
X
176
An Introduction to Laplace Transforms and Fourier Series
whereas on CD s = x so that s
------
=
X
------
This means that !R e-xt+ai.../X dx. e- xt-ai..,fX = E dx + + 1AB lCD 1R E integrals on either side of Itit never is the cancel. case thatTheif the cut is really necessary, then On s = Eei9 , final integral to consider is that around -n (} < n, so that X
X
'Y ·
$
-t
Now, as E 0
i i-1r id(} -2ni. -t
=
'Y
Hence, letting R -t oo and E -t 0 gives 1 l
=
X
X
X
-
-
=
2
177
7. Complex Variables and Laplace Transforms
U+iR
s-plane
u-iR
Figure 7.11: The contour C distorted around the branch point at s = St 7.6
Using the Inversion Formula in Asymptotics
We saw in Chapter 5 how it is possible to use asymptotic expansions in order tosolve gainit information fromLeta diffuserential equation eventhisif kind it wasofnotproblem, possiblenowto in closed form. return to consider armedinversion with theintegral complex formula. It iswhen oftenexact possible to approximate the usinginversion asymptotic analysis inversion cannot be done. Although numerical techniques are of course also applicable, asymptotic methods aremethods often more appropriate all, there isprobably little pointbeenemploying . Afterit would numerical at this late stage when have easierbyto use numerical techniques from the outset on the original problem! Of course, so doing alwishing l the insight gained by adopting analytical methods wouldThehavefollow been lost. Not to lose this insight, we press on with asymptotics. ing embodies thattheorem have branch points.a particularly useful asymptotic property for functions oo,
Theorem 7.1 1 If /(s) is 0(1/lsl) as s ---* the singularity with largest real part is at s = s1 , and in some neighbourhood of St /(s) = (s - St)k L an (s - St )n - 1 < k < 0, n=O then f(t) = - ;e81 t sin(k1t-) f: an (-1)n r(:n:�� 1) as t ---* 00
n=O
oo.
Proof this proof we shall assume that s = s 1 is the only singularity and that itresult is a branch point.asThis is acceptableofas/(s) we areareseeking toonly provetheanoneasymptotic and as long the singularities isolated with the largest real part is of interest. If there is a tie, each singularity is considered In
An Introduction to Laplace Transforms and Fourier Series
178
and the and contributions fromj(s)eachhasmay be added. When thebeBromwich contour isthese used, the function branch points, it has to distorted around points and a typical distortion is shown in Figure 7.11 and the object is to approximate the integral 1 1 st 2 . e f(s)ds. On the lower Riemann sheet 1ft
c
and on the upper Riemann sheet The method used to evaluate the integral is to invoke Watson's lemma as follows 1 100 e(81-z )t f(s 1 10 e(81-z )t f(st - +xe_',... )e_',... dx+ -. - t +xe',... )e',... dx. f(t) = -. 2 2 These integrals combine to give oo ( )t f(t) = -� 2nz }0r e St -z [j(St + Xei1f ) - j(St + Xe-i1r)]dx. s lemma is now used to expand the two functions in the square bracket inWatson' powers of x. It really is just algebra, so only the essential steps are given. j so Assume that (s) can be expressed in the manner indicated in the theorem 1ft
1ft
00
0
00
and
f(st + xei,.. ) = (xei,.. ) k L an (xei,.. ) n n=O 00
J(st + xe-i,.. ) (xe-i,.. )k L an (xe-i,.. ) n . n=O =
These two expressions subtractedThus so that as is typical, it is the e±ik?r terms that prevent complete are cancellation. j(st + xei,.. ) - J(st + xe-i,.. ) xk2i sin(kn) L an ( -1)nxn . n=O We are now faced with the actual integration, so rOO (St )t � 2nz } e -z [j(St + Xei1f ) - j(St + Xe-i1r)]dx =
0
00
179
7. Complex Variables and Laplace Transforms
sin(k1r) eBtt � an( l)n {n 00 xn+k e-ztdx 7r J0 n=O provided definitionexchanging of the gammasummation function and integral signs is valid. Now the integral L....t
=
-
comesxt,toweourhaveaid to evaluate the integral that remains. Using the substitution u =
This then formally establishes theoftheorem, leaving only one orquery two niggly ques tions over the legality of some the steps. One common is how is it justifiable to integrate with respect to x as far as infinity when we are supposed to anbe approximation "close to s1 " thevalid branch point oft, hence /(s)? It has to be remembered that this isfrom for large the major contribution will come that part of the contour that is near to x = 0. The presence of infinity in the limit of the layers integralin isfluidtherefore somewhat misleading. (Forforthose familiarto with boundary mechanics, it is not uncommon "infinity" be as small as 0.2!) 0
=
at s theorem, s1 is a pole, then the integral can be evaluated directly bytheusesingularity of the residue giving f(t) 21rz1 . lcr e•t /(s)ds •tt If
=
=
-
Re
R istheorem where the residueto approximate of /(s) at s =thes1behaviour • The following example illustrates the use of this of the solution to a BVP for large y. as
Find the asymptotic behaviour y � oo for fixed x of the solution of the partial differential equation (J2cfJ = (J2 c/J ()y2 8x2 - c/J, x > 0, Y > 0, such that 8cfJ c/J(x, O) = () (x, O) = 0, c/J(O, y) = 1 . y
Example 7.1 2
180
Solution
An Introduction to laplace Transforms and Fourier Series
Taking the Laplace Transform of the given equation with respect to
y using the by now familiar notation leads to the following ODE for {fi: (1 + s2 )ifi = cP dx� where the boundary conditions at y = 0 have already been utilised. At x = 0 = we require that
gives the solution
e>(x , s) = z � s .
ez We have discarded thelargepart ofandthesocomplementary function as it does not tend to zero for cannot represent a function of s that can have arisensolved from aifLaplace Chapter 2). The problem would be completely we couldTransform invert the(seeLaplace Transform ii = !s e-z but alas this is not possible in closed form. The best we can do is to use Theorem 7.1 1 to find an approximation valid for large values of y. Now, ii has a simple pole at s = 0isandthe two branch pointsallatthree s = ±i . As the real part of all ofterm thesefor singularities same, viz. zero, contribute to the leading large y. At s = 0 the residue is straightforwardly e-z . Near s = i the expansion {fi(x, s) = �[1 - (2i)112 (s - i)112x + . ·] is valid. Near s = -i the expansion {fi(x, s) = -�[1 - (-2i)112 (s + i)112x + . ·] is valid. The value of (x, - s)ds e- z is approximated by thefromfollowing pole Theorem at s = 0, 7.11 and = ±i.theUsing the two contribution the two three branchterms; points at sfrom these are �
s
�
�
.
�
.
11"�
c
Sin ( 21 ) (<-2 .> 112 ry{3/3f22} ) near s = -i. The sum of all these dominant terms is f e-z + l21/ 2x3/ cos (y + 411") . 2 7r y 2
and
u "'
-1r1 e -iy
.
u "'
- -1r
�
181
7. Complex Variables and Laplace Transforms
ThisFurther is the behaviour ofthe the use solution >(x, y) forcan largebevalues ofiny.specialist texts examples of of asymptotics found onmorepartial diffasymptotic erential equations, e.gespecially . Williamsthe(1980), Weinberger (1965). For about expansions, rigorous side, there is nothing to better the classic text of Copson (1967). 7. 7
Exercises
The following functionsat alleachhave determine the residue pole.simple poles. Find their location and (i) 1 � z , (ii) z22z- +z -1 2 3z2 + 2 3 + 4z . ( ... ) (lv) 3 z + 3z2 + 2z (z - 1)(z2 + 9) z (v) cos(z) ( .) z sin(z) (vii) sm. e(zz ) . 2. The functions all have poles. Determine their location, order, and allfollowing the residues. (i) ( zz +- 11 ) 2 ( ) (z2 +1 1)2 cos(z)- (iv) e2z ) (iii) z3 sm (z 3 . Use the residue theorem to evaluate the following integrals: (i) L (z - 1)(z� 2)(z + i) dz where and z =C -i.is any contour that includes within it the points z 1, z 2 (ii) r ..,..-(z --z-41)�d3 z lc where C is any contour that encloses the point z 1. (iii) 00 1 10 X + 1 dx. (iv) { 00 cos(21Tx) dx. Jo x4 + x2 + 1 1.
m
·
Vl
--
11 ••
--:---- •
=
=
�
=
182
An Introduction to Laplace Transforms and Fourier Series
Use the indented semi-circular contour of Figure 7.8 to evaluate the three real integrals: . oo (ln x)2 dx, ( ) 1oo � lnx dx, (m). 1oo � xA dx, -1 < .X < 1. (1) 1 � o x +1 o x +1 o x +1 5. Determine the following inverse Laplace Transforms: (i) c - 1 { 8� } (ii) c - 1 { 1 + �} (iii) c - 1 { ..;/+ } , (iv) c - 1 { ..;/- 1 } . 1 You may find that the integrals 4.
..
.
n
,
,
and help the algebra! 6. Define the function cp(t) via the inverse Laplace Transform cp(t) = c- 1 { erf� } . Show that C{cp(t)} = Jrs sin ()s) . 7. The zero order Bessel function can be defined by the series oo k l k k Jo (xt) = (-1) ( x) 2 t2 Show that 8.
(k�)2
kL=O
/ C{Jo (xt)} = s1 (1 + :22 ) 1 2 -
Determine
c- 1
xy'S) { cosh scosh .jS) } (
(
by direct use of the Bromwich contour. 9. Use the Bromwich contour to show that oo } = 3 1 u2 e '- - 1 { e r
-sl/3
-
7r 0
-tu3 _ 1u 2
stn ( uv'3) du. •
2-
7. Complex Variables and Laplace Transforms
10.
The function l/J(x, t) satisfies the partial differential equation with o l/J(x, 0) = ol/Jt (x, 0) = 0, ljJ(O, t) = t. Use the asymptotic method of Section 7.6 to show that xet l/J(x, t) "' rn--;;; as t � oo v 21rt3 for fixed x.
183
A
Solutions to Exercises
Exercises 1 .4
1. (b)(a) lnt is singular at t 0, hence the Laplace Transform does not exist. C {e3t } 1o00 e3te-stdt [ 1 e(3-s)t]00 1 . =
=
=
--
3-s
>
0
=
--
s-3
2 j e Mt l for any M for large enough t, hence the Laplace Transform (c)doesetnot exist (notTransform of exponential order). (d) the Laplace does not exist (singular (singular atat tt 0).0). (e) the Laplace Transform does not exist (f) doest isnotan exist unless integer.(infinite number of (finite) jumps), also not defined 2. Using the definition of Laplace Transform in each case, the integration is reasonably straightforward: (a) 1000 ekt e-stdt s -1 k as(b)inIntegrating part (b) ofbytheparts previous gives,question. =
=
= --
Integrating by parts again gives the result s� . 185
186
An Introduction to Laplace Transforms and Fourier Series
(c) Using the definition of cosh(t) gives .C{cosh(t)} = � {100 ete -•tdt + 100 e -te-•tdt} s { -- -- } - -2 s - 1·
1 1 1 -+ 2 s-1 s+1 3.
(a) Thisis demands use of the first shift theorem, Theorem 1.3, which with .C{e-3tF(t)} = f(s + 3) and2with F(t) = t2 , using part (b) of the last question gives the answer b=3
( s + 3) 3 •
(b) For this part, we use Theorem 1.1 (linearity) from which the answer 4 6 + s2 s - 4
follows (c) Theatfirstonce.shift theorem with b = 4 and F(t) = sin(5t) gives 5 - 5 . (s + 4) 2 + 25 s2 + 8s + 41 4. When functions are defined in a piecewise fashion, the definition integral for the Laplace Transform is used and evaluated directly. For this problem we get .C{F(t)} = 100 e-st F(t)dt = 11 te-•t dt + [\2 - t)e-stdt which after integration by parts gives 1 ( 1 - e-• ) 2 . s2
5. Using Theorem 1.8 we get (a) .C{te2t } = ! (s � 2) = (s � 2)2 (b) .C{tcos(t)} = dsd 1 +s s2 = (11+-ss22)2 The last part demands differentiating twice, (c) .C{t2 cos(t)} = ds�2 1 +s s2 = (2s1 3+-s26s)3 .
A. Solutions to Exercises
187
These twoandexamples are notdirectly, difficult:thethesecond first has application to oscillating systems is evaluated needs the first shift theorem with b = 5. (a) C{sin(wt + ¢>)} = 100 e-st sin(wt + ¢>)dt and thistrick. integral lowing Let is evaluated by integrating by parts twice using the fol I = 100 e-st sin(wt + ¢>)dt then derive the formula ] w2 w [1 I = - - e- st sin(wt + ¢>) - 2 e -st cos(wt + ¢>) oo - 2 I s s s 0 from which _ ssin(¢>) +w cos(¢>) Is2 +w2 . (b) .C {e cosh(6t)} = (s -s5)-2 5- 36 = s2 -sl-Os5- 11 · 7. This problem illustrates the difficulty in deriving a linear translation plus scaling property Laplace Transforms. the culprit. Directforintegration yields: The zero in the bottom limit is C {G(t)} = roo e-st a(t)dt = 1roo ae
M
}_ b/a
8.
The proof proceeds by using the definition as follows: whichusing gives thethe results result. ofEvaluation of5 alongside the two Laplace Transforms follows from Exercise the change of scale just derived with, for (a) a = 6 and for (b) a = 7. The answers are result
An I ntroduction to Laplace Transforms and Fourier Series
188
Exercises 2 .8
1. gives If F(t) = cos(at) then F'(t) = -asin(at). The derivative formula thus .C{-asin(at)} = s.C{cos(at)} - F(O). Assuming we know that .C{cos(at)} = s2 +8 a2 then, straightforwardly .C{-asin(at)} = s s2 +s a2 - 1 = - s2 a+2 a2 i.e .C{sin(at)} = s2 +a a2 expected. 2. Using Theorem 2.2 gives as
In the text (after Theorem 2.4) we have derived that C {1t sin�u) du } = ; tan- 1 {; } , in fact this calculation is that one in reverse. The result isulations immediate. orderplace: to derive the required result, the following manip need to take C { si�(t) } = 100 e-st si�(t) dt and if we substitute ua = t the integral becomes 1oo e-asu sin(au) du. u 0 This is still equal to tan-1 {; } · Writing p = as then gives the result. (p is a dummy variable of course that can be re-labelled s.) 3. The calculation is follows: C {1t p(v)dv } = ;c{p(v)} so C {1t 1v F(u)dudv } = ;c {1v F(u)du } = 812 /(s) required. In
as
as
189
A. Solutions to Exercises
4. Using Theorem 2.4 we get
u C {1ot cos( au) u- cos(bu) du } = � 100 s a2 + u2 s
1 These integrals are standard "In" and the result -; In at once.
u du. b2 + u2
( 8822 ++ ab22 ) follows
5. This transform is computed directly as follows C { 2 sin(t)tsinh(t) } = C { et s�n(t) } C { e-t �n(t) } . _
Using the first shift theorem (Theorem 1.3) and the result of Exercise 2 above yields the result that the required Laplace Transform is equal to
(The identity tan - 1 (x) - tan- 1 (y)
= tan- 1 ( 1x+-xyy ) has been used.)
6. This follows straight from the definition of Laplace Transform: lim s-+ oo
100 e-st F(t)dt = 100 s-+limoo e-stF(t)dt = 0. /( ) = slim -+ oo s
0
0
It also follows from the final value theorem (Theorem 2.13) in that if lims-+oo s s is finite then by necessity lims-+oo s 0.
/( ) =
/( )
7. These problems are all reasonably straightforward 1 2(2s + 7) = 3 + (a) + 2 + 4 ( + 4)(s + 2) s
s
s
and inverting each Laplace Transform term by term gives the result 3e-2t + e-4t s+9 2 1 . . (b) Stmdarly s -3 s+3 s and the result of inverting each term gives 2e3t - e- 3t
2-9 =
and inverting gives the result 1
1
2 + 2 cos(2kt)
= cos2 (kt).
190
An Introduction to Laplace Transforms and Fourier Series
1 9s
1 9(s + 3)
1 3(s + 3) 2
which inverts to
1 1 - (3t + 1)e - 3t . g g (d) This last part is longer than the others. The partial fraction decom position is best done by computer algebra, although hand computation is possible. The result is 2 1 3 1 1 + ------2 + 3 3 2 2 125 (s + 3) (s - 2) (s + 3) 125(s - 2) 625(s - 2) 25(s + 3) 3 + 625(s + 3) e- 3 t e2 t (25t2 + 20t + 6). (5t - 3) + 625 1250 2 s we also have that 8. (a) F(t) = 2 + cos(t) -+ 3 as t -+ 0, and as + 2 8 +1 sf(s) -+ 2 + 1 = 3 as s -+ oo hence verifying the initial value theorem. (b) F(t) = (4 + t) 2 -+ 16 as t -+ 0. In order to find the Laplace Thansform, we expand and evaluate term by term so that sf(s) = 16 + 8/s + 2/s2 which obviously also tends to 16 as s -+ oo hence verifying the theorem once more. 1 3 9. (a) F(t) = 3 + e-t -+ 3 as t -+ oo. f(s) = + so that sf(s) -+ 3 as s s+1 s -+ 0 as required by the final value theorem. (b) With F(t) t3 e- t , we have f(s) = 6/(s + 1)4 and as F(t) -+ 0 as t -+ oo and sf(s) also tends to the limit 0 as s -+ 0 the final value theorem is verified. and the inversion gives
;
-
--
=
10. For small enough t, we have that sin(v'i) = v'i + O(t312 ) and using the standard form (or the result on p50):
with x = 3/2 gives . .C{sm(v'i)} = .C{v'i} + · · ·
+··· = r{3/2} 83/2
and using that r{3/2} = (1/2)r{1/2} = ../i/2 we deduce that .C{sin(v'i)} =
�2 + · · . .
2
191
A . Solutions to Exercises
Also, using the formula given, k k 3s /2 exp- 41s = s3/2 + . . . . Comparing these series for large values of s, equating coefficients of s-312 gives k = Vi2 . 11. Using the power series expansions for sin and cos gives n+2.,...,. t4.sin(t2 ) = n=O L (-l)n ...,. {2n + I)! and t4n • cos(t2 ) = n=O L (-l)n 2n.1 Taking the Laplace Thansform term by term gives � ( -1 ) n (4n + 2)! '--"{sm(t2 )} = n=O (2n + 1)'.s4n+3 and � ( -1) n (4n)!n '--"{cos(t2 )} = f:::o (2n) !s4 +l 12. Given that Q (s) is a polynomial with n distinct zeros, we may write Ak + · · · + - An P(s) = -A1 + -A2 + · · · + --s - ak s - an Q(s) s - a1 s - a2 where real constants to be determined. Multiplying both sides bythes A-kaskarethensomeletting s ak gives hm. P(s) (sQ-( a)k ) . hm. QP(s)( ) (s - ak ) = s-+ak Ak = s-+ak Using l'Hopital's rule now gives Ak = Q'P(a(akk)) for all k = 1, 2, . . . , n . This is true for all k, thus we have established that P(s) - P(al ) 1 + . . . + P(ak ) 1 + .. . P(an) 1 Q(s) Q'(at ) (s - a1 ) Q' (ak ) (s - ak) Q'(an) (s - an)· Taking the inverse Laplace Thansform gives the result _c- l { QP(s)(s) } = t=l Q'P(a(akk)) eakt k sometimes known Heaviside's expansion formula. 00
oo
.
L..J
.
-+
S
as
S
192
An Introduction to Laplace Transforms and Fourier Series
13. All of the problems in this question are solved by evaluating the Laplace Transform explicitly. .C.{H(t - a)} = e- stdt = _e--as .
loo
(a)
(b)
a
S
.C.{JI(t)} = 12 (t + 1)e-stdt + 100 3e-stdt.
Evaluating the right-hand integrals gives the solution
1 1 s s2 (e-2s - 1 ) . 2 (c) .C.{f2(t)} = 1 (t + 1)e- stdt + 1 00 6e- stdt. -+-
Once again, evaluating gives
[O, oo)
(d) As the function is in fact continuous in the interval the formula for the derivative of the Laplace Thansform (Theorem 2.2) can be used to give the result at once. Alternative, fi can be differentiated (it is 2)) and evaluated directly.
!I(t)
.!.s (e-2s - 1) 1 - H(t -
14. We use the formula for the Laplace Transform of a periodic function The orem 2.19 to give c .C.{ F(t)} _- J: (1e--ste2sF(t)c) dt . The numerator is evaluated directly:
[Jo 2c e-st F(t)dt = Jro te- stdt + 12c (2c - t)e-stdt c
which after routine integration by parts simplifies to
The Laplace Thansform is thus
.C.{F(t)} = 1 -1e2sc s12 (e-sc - 1)2 = s12 11-+ees-scc which simplifies to
1 82 tanh (� sc) .
A. Solutions to Exercises
193
Exercises 3 . 6
1. a If we substitute u = t - T into the definition of convolution then
()
g*f =
1t g(r)f(t - r)dr
0 - [ g(u - r)f(u)du
becomes
=
g * f.
b Associativity is proved by effecting the transformation (u, r) -+ (x, y) where u = t - x - y, and T = y on the expression
()
f * (g * h) =
10t 10 t-T f(r)g(u)h(t -
T -
u)dudr.
The area covered by the double integral does not change under this trans formation, it remains the right-angled triangle with vertices t), and t, O . The calculation proceeds as follows:
( )
dudr = so that
(0, (0, 0)
�i:: �� dxdy = -dxdy
1t 1t-x f(y)g(t - x - y)h(x)dydx = 1t h (x) [1t -x f(y)g(t - x - y)dy] dx 1t h(x)[f * g](t - x)dx = h * (! * g)
f * (g * h) =
=
and this is (! * g) * h by part a which establishes the result. c Taking the Laplace Transform of the expression f * f - 1 = 1 gives
()
()
.C{f} ..C{f - 1 } = from which
C{f- 1 } =
.!.s
1 s
/(s)
using the usual notation s is the Laplace Transform of f(t)). It must be the case that -+ as s -+ oo . The function f - 1 is not uniquely sf s ) defined. Using the properties of the Dirac-<5 function, we can also write
-1(
(/( ) 0
10t+ f(r)8(t - r)dr = f(t)
194
An Introduction to laplace Transforms and Fourier Series
from which Clearly, j(t)
=F 0.
2. Since £{ ! } = J and £{1} = 1/s we have C{f * 1} = l s so that, on inverting as
required.
.c-1 { � } = f * 1 = 1t f(r)dr
3. These convolution integrals are straightforward to evaluate:
(a)
h
cos(t) =
this is, using integration by parts
1t (t - r) cos(r)dr
1 - cos(t). t t3 (b) h t = (t - r)rdr = •
1
6
1t sin(t - r) sin(r)dr = � 1t [cos(2r - t) - cos(t)]dr 0
(c) sin(t) * sin(t) =
this is now straightforwardly
� (sin(t) + t cos(t)). t (d) et * t = 1 et-r rdr
which on integration by parts gives
-1 - t + e -t . t et-r cos(r)dr. (e) et * cos(t) =
1
Integration by parts twice yields the following equation t t et -r cos(r)dr et-r cos(r)dr = [e-r sin(r) - e-r cos(r)] � -
1
1
from which
t et-r cos(r)dr = 1 (sin(t) - cos(t) + et ). 2
lo
A. Solutions to Exercises
195 as
4. (a) This is proved by using l'Hopital's rule follows erf(x) } _ 1. 1 2 1x e-t2 dt -_ -2 -d 1x e-t2 dt . I1m 1m { 0 0 X ../i dx x -+ x -+ X ..{i o
--
o
and using Leibnitz' rule (or differentiation under the integral sign) this is
2 e-x2 = 2 . 0 -+ x ..ji ..ji hm
as
required. (b) This part is tackled using power series expansions. First note that 4 6 2n e -x2 = 1 - x2 + ::._ - ::._ + . . . + (-1) n+l ::__ + . . . .
2! 3!
n!
Integrating term by term (uniformly convergent for all
x) gives
t3/2 + t5/2 - t7/2 + . . · + ( -1 ) n+l 1-./t e-x2dx = t112 - 0
3 5.2! 7.3!
tn+l /2
(2n + 1).n! + . . .
from which
t
2 (1 - -t + t2 - t3 + . . + ( - 1) n+ l tn 2erf( vt) = Vi 3 5.2! 7.3! · (2n + 1).n! + . . .)
_ l.
t;
Taking the Laplace Transform of this series term by term (again justified by the uniform convergence of the series for all t) gives .c - l {c � erf(Vt)} = � ..ji
and taking out a factor get the required result:
·
1 - _1 + . . . + (-1)n + . . .) __2 + _ (�s - ....!_ 3s 5s3 7s4 (2n + 1)sn+1
1/ ..jS leaves the arctan series for 1/ ..jS. Hence we
5. All of these differential equations are solved by taking Laplace Transforms. Only some of the more important steps are shown. (a) The transformed equation is
sx(s) - x(O) + 3x(s) = s -1 2
--
from which, after partial fractions,
- = 1 + 1 4/5 1/5 x(s) s + 3 (s - 2)(s + 3) = s + 3 + s - 2·
196
An Introduction to laplace Transforms and Fourier Series
Inverting gives x(t)
= 54 e-3t + 51 e2t .
(b) This equation has Laplace Transform
1 (s + 3)x(s) - x(O) = � s +1
from which
1/10 s/10 - 3/ 10 x(s) = sx(O) + 3 s + 3 + s2 + 1 . _
=
The boundary condition x(1r) 1 is not natural for Laplace Transforms, however inverting the above gives
= (x(O) - 1�) e-3t - 110 cos(t) + 130 sin(t) and this is 1 when x = 1r, from which 1 9 e3"" x(O) - - = 10 10 x(t)
and the solution is
x(t)
= �10 e3(1r-t) - _.!._10 cos(t) + �10 sin(t).
(c) This equation is second order; the principle is the same but the algebra is messier. The Laplace Transform of the equation is 8 s2x (s) + 4sx(s) + 5x(s) = � s +1
and rearranging using partial fractions gives
+2 + 1 - s + 1 x(s) = (s +s 2)2 + 1 (s + 2)2 + 1 s2 + 1 s2 + 1·
Taking the inverse then yields the result x(t)
= e-2t (cos(t) + sin(t)) + sin(t) - cos(t).
(d) The Laplace Transform of the equation is
(s2 - 3s - 2)x(s) - s - 1 + 3 = �s
from which, after rearranging and using partial fractions,
x(s) - - -3s + (s 4(s3 )-2 �) _
_
_
2
_
_
141
5
(
s - 23 ) 2
17 - 4
197
A. Solutions to Exercises
which gives the solution x(t)
= -3 + 4e ! t cosh (�V17) - �e ft sinh (�V17) .
(e) This equation is solved in a similar way. The transformed equation is
6 s2y(s) - 3s + fi(s) - 1 = � +4 y(s) = s2 2+ 4 + 3ss2 ++ 31 8
from which
-
and inverting, the solution y(t)
= - sin(2t) + 3 cos(t) + 3sin(t)
results.
6. Simultaneous ODEs are transformed into simultaneous algebraic equations
and the algebra to solve them is often horrid. For parts (a) and (c) the algebra can be done by hand, for part (b) computer algebra is almost compulsory! (a) The simultaneous equations in the transformed state after applying the boundary conditions are
6 (s - 2)x(s) - (s + 1)y(s) = -s -3 +3 6 (2s - 3)x(s) + (s - 3)y(s) = _ s -_3 + 6 from which we solve and rearrange to obtain
4 + 3s - 1 x(s) = (s - 3)(s - 1) (s - 1)2 so that, using partial fractions
2 + 1 + 2 x(s) = -s - 1 (s - 1)2 s - 3 -giving, on inversion
= 2e3t + et + 2tet.
x(t) In order to find y(t) we eliminate dyjdt from the original pair of simulta neous ODEs to give 5 dx -x t yt dt Substituting for x(t) then gives
3 ( ) = -3e3t - 4 ( ) + --. 4 y(t)
= -e3t + et - tet.
An Introduction to Laplace Transforms and Fourier Series
198
(b) This equation is most easily tackled by substituting the derivative of
y = -4 �; - 6x + 2 sin(2t)
into the second equation to give
5 dd'lx2 + 6 ddtx + x = 4 cos(2t) + 3e-2t .
The Laplace Thansform of this is then
4s + 3 . 5(s2x(s) - sx(O) - x' (O)) + 6(sx(s) - x(O)) + x(s) = � s +4 s+2 --
After inserting the given boundary conditions and rearranging we are thus face with inverting
x(s) = 5s2lO+s 6s+ 2+ 1 + (s2 + 4)(5s24s + 6s + 1) + (s + 2)(5s32 + 6s + 1) "
Using a partial fractions package gives
2225 - 4(19s - 24) x(s) = 20(s29+ 1) + 3(s 1+ 2) + 1212(5s + 1) 505(s2 + 4) _
and inverting yields
l.
445 e- s t - 76 cos(2t) + 48 sm(2t). x(t) = 31 e- 2t + 2029 e-t + 121 505 505 2 Substituting back for y(t) gives 72 118 . y(t) = 32 e2t - 29lO e-t - 1157 606 e- s t + 505 cos(2t) + 505 sm(2t). .
l.
(c) This last problem is fully fourth order, but we do not change the line of approach. The Laplace transform gives the simultaneous equations
- - 5 = 21 (s2 - 1)x(s) + 5sy(s) s -2sx(s) + (s2 - 4)y(s) - s = - �s
in which the boundary conditions have already been applied. Solving for
y(s) gives
7s2 + 4 = 1 - 32 s + 32 s y(s) = s(ss24 ++ 4)(s2 + 1) :; s2 + 4 s2 + 1 which inverts to the solution y(t) = 1 - 32 cos(2t) + 32 cos(t). Substituting back into the second original equation gives
x(t) = -t - � sin(t) + � sin(2t).
199
A. Solutions to Exercises
7. Using Laplace Transforms, the transform of x is given by
x(s) = (s2 + 1)(sA 2 + k2 ) + (s2 V+o k2 ) + (ssx(O) 2 + k2 ) _
•
k # 1 this inverts to vo sm(kt) . . + x0 cos(kt). x(t) = k2 A 1 (sm(t) -k-) + k - sin(kt) If k = 1 there is a term (1 + s2 ) 2 in the denominator, and the inversion H
_
can be done using convolution. The result is
x(t) = � (sin(t) - t cos(t)) + v0 sin(t) + x(O) cos(t)
oo
and it can be seen that this tends to infinity as t --+ due to the term t cos(t). This is called a term. It is not present in the solution for # which is purely oscillatory. The presence of a secular term denotes resonance.
secular
k 1
8. Taking the Laplace Transform of the equation, using the boundary condi tions and rearranging gives
svo +g....: x(s) - ____; s2(s:. + a)
�
_
which after partial fractions becomes
s.
a s + a- g) + -'*(av a os-2 g)s + a . x(s) = -'*(avo _
This inverts to the expression in the question. The speed dx dt
-
As t --+
= ag (avoa- g) e -
-
-at
.
oo this tends to gfa which is the required terminal speed.
9. The set of equations in matrix form is determined by taking the Laplace Transform of each. The resulting algebraic set is expressed in matrix form follows:
as
10. The Laplace Transform of this fourth order equation is e -as ) k(s4y(s) - s3y(O) - s2y'(O) - sy"(O) - y"'(O)) = w-co (-sc - -s12 + s2
200
An I ntroduction to laplace Tra nsforms a nd Fourier Series
Using the boundary conditions is easy for those given at give = + and =
x = 0 , the others y111(0) 21 WoC.
y"(O) - 2cy111(0) 65 woc2 0 So y" (0) = � woc2 and the full solution is, on inversion wo (5cx4 - x5 + (x - c)5H(x - c)] y(x) = _..!._12 woc2x2 - _..!._12 wocx3 + 120 where 0 < - 1x -< 2c. Differentiating twice and putting x = cj2 gives y"(c/2) = 48 w0c2• 11. Taking the Laplace Transform and using the convolution theorem gives tjJ-(8) = 823 + tP-(8) 82 1+ 1 from which tP-(8) = 52 + 22 ·
8 8
Inversion gives the solution
Exercises 4.7
1 . The Riemann-Lebesgue lemma is stated as Theorem 4.2. As the constants bn in a Fourier sine series for g(t) in (0, 11'] are given by bn = -2 10" g(t) sin(nt)dt and these sine functions form a basis for the linear space of piecewise continuous functions in [0 , 11'] (with the usual inner product) of which g(t) 1l'
is a member, the Riemann-Lebesgue lemma thus immediately gives the result. More directly, Parseval's Theorem:
yields the results
n--+limoo 1r g(t) cos(nt)dt = 0 n--+limoo 1r g(t) sin(nt)dt = 0 -1r
-1r
as the nth term of the series on the right has to tend to zero as n --7 oo. As g(t) is piecewise continuous over the half range [0, 11'] and is free to be defined as odd over the full range [-11', 11'], the result follows.
A. Solutions to Exercises
201
[
]
2. The differential equation can be written
dPn = -n(n + I).P.n . !!_
[
[
mm
,
unless
m=n
as
required.
]
[
[ 1
]
]
]
mm
{ 1 PmPndt = 0 1- 1
3. The Fourier series is found using the formulae in Section 4.2. The calcu lation is routine if lengthy and the answer is 2 cos(3t) cos(5t) 57r f(t) = - cos (t) + + +··· I6 52 32 7r 2 cos(2t) cos(6t) cos(IOt) ; 22 + 62 + IQ2 sin(3t) sin(5t) sin(7t) . . · + .!. sin(t) . + 32 52 72 7r This function is displayed in Figure A. l .
( ( (
--
--
) )
• • •
_
_
)
4. The Fourier series for H (x) is found straightforwardly as
.!_ + � 2
t sin(2n - I)x .
7r n=1
2n - I
Put x = 1r /2 and we get the series in the question and its sum: I I I 7r I - 3 + 5 - 7 + ··· = 4 a series attributed to the Scottish mathematician James Gregory (I638I675).
202
An I ntroduction to laplace Transforms and Fourier Series f(t)
Figure A. 1: The original function composed of straight line segments 5. The Fourier series has the value � nsin(2nx) f(x) "' --;8 � (2n + 1)(2n - 1) · 6. This is another example where the Fourier series is found straightforwardly using integration by parts. The result is (-1)n cos(nx). 7r3 ) - 4 ""oo -1 - x2 "' -21 (1r - 3 n=l n2 As the Fourierat xseries controversy, = 1r, isf(x)in =fact1 -continuous 1r2 • for this example there is no 7. Evaluating the integrals takes a little stamina this time. �
and integrating twice by parts gives 1 sin(nx) - -n- cos(nx) ,.. - n2 b bn = [a1r a2 1r ] 2a n from which 2n sinh (a1r( ---.:. ...- ..:...:... . , n = 1, 2, . . .. 2a ,-., 7r :.-; -1)n) Similarly, an = 2asinha(�?r1r(-1)n) , n = 1 , 2, ... , and ao = 2sinh(7ra) _ ,..
A. Solutions to Exercises
203
This gives the series in the question. Putting
co
from which
( - 1) n
L n2 + a2
n= l Also, since
co co
1 =(a1rcosech (a1r) - 1). 2a2
co
(-1 ) n (-1 ) n 1 "" - 2 "' + L...J 2 2 2 L...J a2 ' n + a2 n= l n + a
co ( 1)n = �cosech (a1r). L ;+ a2 a co
-
we get the result
n
-
Putting
x = 0 gives the equation
x = 1r and using Dirichlet's theorem (Theorem 4.5) we get
1 -2 ( ! (1r) + f( -1r))
= cosh(a1r) = sinh(1ra) 1l"
coL
from which
1 2 n + a2 n= l
{
1 � a - + 2 L...J 2 2 a n= l n + a
1 =(a1rcoth (a1r) - 1). 2a2
co 1 - 2 "'co 1 + -1 co n2 + a2 n=l n2 + a2 a2 co 1 = -c1l" oth (a1r). L n 2 + a2 a co
Also, since
}
"" L...J
L...J
-
we get the result
-
8. The graph is shown in Figure A.2 and the Fourier series itself is given f(t)
= �1!" + ; sinh(1r) +
(
2 � (-1) n - 1 (-1) n sinh(1r) + cos (nt) n2 n2 + 1 1l" n= l (-1 } n -2 L --a-sinh(1r) sin(nt). n +1 l = n 1l" - L...J
co
J
by
204
An Introduction to laplace Transforms and Fourier Series
f (t)
15
12 . 5
10
t
Figure A.2: The function f(t) 9. The Fourier series expansion over the range (-1r, 1r] is found by integration
to be (-l)n 1rsm(nt)] - ;4 � � [ 22 cos(nt) + � sin(2n - l)t3 f(t) = 32 1r2 + � (2n l) n n and Figure givesgives a picture of it. The required series are found by first putting t = A.0 3which .
_
from which Putting t = 1r gives, using Dirichlet's theorem (Theorem 4.5) from which
205
A. Solutions to Exercises f
Figure A.3: The function
f(t)
as
a full Fourier series
10. The sine series is given by the formula 2 ,.. (t - 1r)2 sin(nt)dt an = 0 bn = 11" 10 with the result
!(
sin(2k - 1)t t) "' � � +2 L....J 2k + 1 1r
k= l
11"
� sin(nt)
L....J n= l
n
·
This is shown in Figure A.5. The cosine series is given by
cos(nt) f(t) "' 1r32 + 4 � n2 L....J
from which
_
and this is pictured in Figure A.4.
n= l
11 . Parseval's theorem (Theorem 4. 19) is j,.. [!(t)]2dt = 1ra� + 1r f:
n= l
Applying this to the Fourier series in the question is not straightforward, we need the version for sine series. This is easily derived as
as
206
An I ntroduction to Laplace Transforms a nd Fourier Series f
t
Figure A.4: The function f(t)
as
an even function
The left hand side is The right hand side is
whence the result
32 1 -;-�00 (2n - 1)6
Noting that
gives the result
12. The Fourier series for the function x4 is found as usual by evaluating the integrals 11" an = -1 x4 cos(nx)dx
1 11" bn = -1 1 x4 sin(nx)dx. - 11" 1r
and
1r
- 11"
207
A. Solutions to Exercises f
t
Figure
A.5 : The function f(t)
as
an odd function
= and there are no sine terms. is an even function, However as Evaluating the integral for the cosine terms gives the series in the question. With = the series becomes
bn 0
x4
oo (-1)n oo {-1)n+l 4 7r 2 o = s + 87r L.: � 48 L.: n=l n=l n4 and using the value of the second series (7r2 )/12 gives oo (-1)n+l 77r4 1 77r4 L n4 = 15 48 = 720 · n=l Differentiating term by term yields the Fourier series for x3 for x 0,
-
as
13. Integrating the series term by term gives
00 x "' L �2 ( -1)n+l sin(nx) n=l 00 2 2=._ "' � ( -1)n cos(nx) + A L.,; n2 2 n=l -
-1r
< < 1r
x
An Introduction to laplace Transforms and Fourier Series
208
A
where is a constant of integration. Integrating both sides with respect to x between the limits and 1l' gives
-1!'
1!'3 = 2A1!'. 3" Hence
A = 1!'2 /6 and the Fourier series for x2 is oo 4( 1) n 1!'2 L x2 "' 3" :2 cos(nx), +
1!'.
n= l
where -1!' < x < Starting with the Fourier series for x4 over the same range and integrating term by term we get
B
=
where is the constant of integration. This time setting x 0 immedi ately gives 0, but there is an x on the right-hand side that has to be expressed in terms of a Fourier series. We thus use
B=
00 x "' L ;;2 ( -1) n+ l sin(nx) n= l to give the Fourier series for x5 in [ -1!', 11'] as oo 2 4 . (nx) . x5 "' L ( -1) n 4n011'3 - 2n40 - 211'n sm 5 n=l
[
--
-
-
]
14. Using the complex form of the Fourier series, we have that 00 V(t) = L ene2mrit/5. n= - oo The coefficients are given by the formula Cn
= !5 lo{5 V (t)e2imrt/5dt.
By direct calculation they are
= � (i e7,..i/lO) 20 (� - e91ri/10 ) C-3 = 311' 10 C-2 = - (� - e1ri/10 ) +
C- 4
.
1l'
.
A. Solutions to Exercises
209
= � ( e31ri/IO) 16 20 ( . e71ri/10 ) Cl = 1r -t e91ri/IO) C2 = � ( e1rijiO) C3 = !� ( 10 ( -t. - e31ri/10 C4 = ) 1r
C-1
Co
i+
=
-i +
-i +
.
Exercises 5 . 6
1. Using the separation of variable technique with >(x, t) L Xk (x)Tk (t) k gives the equations for Xk (x) and Tk (t) as T' K.X"k -01.2 ....li Tk xk 2 where -a is the separation constant. It is negative because Tk (t) must not grow with t . In order to satisfy the boundary conditions we express x( 1rI4 - x) as a Fourier sine series in the interval 0 � x � 1rI4 as follows
=
=
x(x -
__
=
�) 2� k=f:l (2k � 1)3 sin[4(2k - 1)x] . =
Now the function Xk (x) is identified as
= 2� (2k � 1)3 sin[4(2k - 1)x] so that the equation obeyed by Xk (x) together with the boundary condi tions >(O, t) = >(1rl4 , t) = 0 for all time are satisfied. Thus 4(2k - 1)1r = Ot. JK . Putting this expression for Xk (x) together with Tk (t) = e -16(2k- I)2,..2 t/,. Xk (x)
gives the solution 'f'
..i..(x ' t)
1 -16 k -I 1r2 / = ...!.21r._ � L..J (2k - 1)3 e (2 ? t �< sin[4(2k - 1)x]. k=l
210
An Introduction to Laplace Transforms and Fourier Series
2. The equation is
()ljJ - ()ljJ = 0. b /J2l/J fJx2 8x 8t
t) gives the ODE a{jl' - b(fi' - s{fi = 0 after applying the boundary condition ljJ(x, 0) = 0. Solving this and noting that s) -+ 0 x -+ oo ljJ(O, t) = 1 =? ljJ-(O, s) = s1 and ljJ(x, Taking the Laplace transform (in
as
-
gives the solution
ifi = � exp { :a [b - Vb2 + 4as] } .
3. Taking the Laplace transform of the heat conduction equation results in the ODE "" cPdx(fi2 - sljJ- = -x ( 47r - x) .
Although this is a second order non-homogeneous ODE, it is amenable to standard complimentary function, particular integral techniques. The general solution is
The inverse of this gives the expression in the question. If this inverse is evaluated term by term using the series in Appendix A, the answer of Exercise is not regained immediately. However, if the factor 2K.t is expressed as a Fourier series, then the series solution is the same as that in Exercise
1 1.
4. Taking the Laplace transform in y gives the ODE cP(fi = sljJ-. dx2 Solving this, and applying the boundary condition l/J(O, y) = 1 which trans forms to ljJ-(O, s) = s1 gives the solution 1 ljJ-(x, s) = -e-xvs s which inverts to ljJ(x,y) = erfc { 2� } . -
r.
211
A. Solutions to Exercises
5. Using the transformation suggested in the question gives the equation
obeyed by >(x, t) as
12 fP4>2 = EJ24>2 c 8t 8x This is the standard wave inequation. To solve techniques, we transform the variable t to this obtainusingtheLaplace equationtransform cF(fi - s2 4>- = - s cos(mx). dx2 c2 The boundary conditions for 4>(x, t) are k (x,O) = -2k cos(mx). 4>(x,O) = cos(mx) and ¢/ (x,O) = -2> This last condition arises since u' (x, t) = 2ek kt/24>(x, t) + ekt/2 4>' (x, t). Applying these conditions gives, after some algebra >(x,s) = [1-s - s2 s+-m!22c2 cos(mx)l e- · + s2 s+-m!22c2 cos(mx). Using the second shift theorem (Theorem 2.5) we invert this to obtain tf2 (1 - cos(met - mx) cos(mx) + 2!c sin(met - mx) cos(mx) ek+cos( met) cos(mx) - 2!c sin(met) cos(mx)) t xjc u= ektf2 (cos(mct) cos(mx) - 2!c sin(met) cos(mx)) t < xjc. 6. Taking the Laplace transform of the one dimensional heat conduction equation gives SU = c2Uzz as u(x, 0) = 0. Solving this with the given boundary condition gives u(x,s) = /(s)e-zVifc. Using the standard form .c-I {e-aVi} = _a_e-a2f4t 2..;:;i3 gives, using the convolution theorem u = -x2 1t J(r) � 1r(t - r)3 e-z /4k(t-T)dr. lk z2/4kt When j(t) = 6(t), u = �2 v ;;:ti'e - . •
c2
.u.
_
{
>
0
2
212
An I ntroduction to Laplace Tra nsforms and Fourier Series
7. Assuming that the heat conduction equation applies gives that aT = a2T Ft K. ax2 so that when transformed T(x, s) obeys the equation - s) - T(x, 0) = lfl'i' (x, s). sT(x, dx2 Now, T(x,O) = 0 so this equation has solution K-
and since
dT = - a-(s,O) s dx To- = a yfK;
and the solution is using standard forms (or Chapter 3) Thissolution gives, atisxnot= 0singular the desired for T(O, t). Note that for non-zero x the at t form = 0. 8. Taking the Laplace transform of the wave equation given yields lflu s2 u- - au at (x,O) = c2 dx2
so that substituting the series
an (x) , k integer u=� L....J s n+ k+ l n=O as in Example 5.5 gives k = 1 all the odd powers are zero and · · · - cos(x) = c2 ( � � � · · · ) � � � � � � _
%
�
�
-+ - +-+
-+ -+- +
so that
ao = cos(x), a2 = c2a� = -c2 cos(x) a4 = c4 cos(x) etc. Hence u(x, s) = cos(x) ( s12 - sc24 cs64 c6ss · · ·) -
-+ - - -+
A. Solutions to Exercises
213
which inverts term by term to which in this case converges to the closed form solution u !c cos(x) sin(ct). 9. For this problem we proceed similarly. Laplace transforms are taken and the equation to solve this time is =
s2u_ -su(x, 0) = c2 dcPux2 . Once more substituting the series u = n�=O sann+(x)k+l , k integer gives this time ao as1 as22 - cos(x) = c2 (a0, asf as2� ) so that ao cos(x), a1 = 0 a2 = c2a� = -c2 cos(x) a3 0 a4 c4 cos(x) etc. _
+-+-+
�
+-+-+
· · ·
=
=
· · ·
=
giving
Inverting term by term gives the answer c2nt2n u(x,t) cos(x) nL=O(- l)n � which in fact in this instance converges to the result u = cos(x) cos(ct) . =
00
Exercises 6 . 6
1.
With the function f(t) defined, simple integration reveals that 0 F(w) = { ke -iwtdt + 1T -ke -iwtdt as
j_ T
0
214
An Introduction to Laplace Transforms and Fourier Series
_ [ e-iwt] O [e-iwt ] T -T sm. ( r) 2 . With f(t) e-t2 the Fourier transform is oo F(w) r= e-t2 e- iwtdt r e-(t-! iw)2 e-t w2 dt. }_ oo 1-oo Now although there ist a- complex number ( �iw) in the integrand, the change of variable �iw can still be made. The limits are actually to -oo - �iw and oo - �iw but this does not change its value so wechanged have that Hence F(w) y?re-t w2 • 3. Consider the Fourier transform of the square wave, Example 6.2. The inverse yields: A roo sin(wT) eiwtdw = A 7r 1-oo provided It! ::::; T. Let t = .!_0 and we get oo sin(wT) dw = 1 r 7r 1-oo Putting T 1 and spotting that the integrand is even gives the result. 4. Using the definition of convolution given in the question, we have ..+k - k -zw ZW 2k -:zw [cos(wT) - 1) 4ik 2 1 w 2w .
=
=
o
=
u =
=
W
W
=
f(at) * f(bt)
=
l:
e-at 100 f(br - ar)dr 1 = - e- at -- [f(bt - at) - 1J b-a =
f(a(t - r))f(br)dr.
f(at) - f(bt) b-a
As b --+ a we have
!(at) * f(at) = =
_ b-ta lim f(bt)b -- af(at) -
d f(at) = -tf'(at) = tf(at) . da
A. Solutions to Exercises
5.
215
With
=
g(x) /_: f(t)e-2n:ixtdt let u t - 1 /2x, so that du dt. This gives =
=
Adding these two versions of g(x) gives lg(x) l I � /_: (f(u) - f(u + 1 f2x))e-2n:ixudu l 1 /_00 - l f(u) - f(u + 1f2x)idu
< 2
as
oo
and x -t oo, the right hand side -t 0. Hence /_: j(t) cos(21rxt)dt -t 0 and /_: f (t) sin(21rxt)dt -t 0 which illustrates the Riemann-Lebesgue lemma. 6. First of all we note that therefore
=
/_: f(t)G(it)dt /_: f(t) /_: g(w)ewtdJ..Jdt. Assuming that the integrals uniformly can be interchanged, the rightarehand side canconvergent be writtenso that their order which is, straight away, in the required form /_: g(w)F(iw)dJ..J. Changing the dummy variable from w to t completes the proof. 7. Putting f(t) 1 - t2 where f (t) is zero outside the range - 1 � t � 1 and g(t) e-t, 0 � t < oo, we have =
=
216
An Introduction to Laplace Tra nsforms and Fourier Series
and
G(w) =
1oo e-teiwt dt.
Evaluating these integrals (the first involves integrating by parts twice!) gives F(w) = 3 (w cos(w) - sm(w)) w and G(w) = 1 +1 iw · Thus, using Parseval's formula from the previous question, the imaginary unit disappears and we are left with
4
.
r1-11 (1 + t)dt = 1oroo 4et;t (t cosh(t) - sinh(t)) dt
from which the desired integral is
we have that
t (1 - t2 ) 2 dt =
1-1
2. Using Parseval's theorem
_!.._211" 1oroo 16 (t cos(t)t6- sin(t)) 2 dt.
Evaluating the integral on the left we get
r)() (t cos(t) - sin(t)) 2 dt = !!__
15 ·
t6
1o
8. The ordinary differential equation obeyed by the Laplace Transform u( x, s) is g(x ) . � u (x, s ) !.. u- ( x, s) dx2 k k Taking the Fourier Transform of this we obtain the solution _
1 u(x, s) = 211" where
G(w) =
_
joo G(w) _
00
_
e•wx dw s + w2k ·
I: g(x)e-iwxdx
is the Fourier Transform of g(x). Now it is possible to write down the solution by inverting u (x, s ) as the Laplace variable s only occurs in the denominator of a simple fraction. Inverting using standard forms thus gives 1 u (x, t) = G(w ) e'w e -w2 kt dw .
/00 211" -oo
.
217
A. Solutions to Exercises
It is possible by completing the square and using the kind of "tricks" seen in Section (page etc.) to convert this into the solution that can be obtained directly by Laplace Transforms and convolution, namely 1 - ( - ) 4t
3.2
52
u(x, t) = --
2.,(rt
100 e - oo
2
z T f g(r)dr.
9. To convert the partial differential equation into the integral form is a straightforward application of the theory of Section 6.4. Taking Fourier transforms in x and y using the notation
v(.A, y) =
/_: u(x, y)e-i>.zdx
and we obtain
Using the inverse Fourier Transform gives the answer in the question. The conditions are that for both u(x, y) and f(x, y) all first partial derivatives must vanish at
±oo.
10. The Fourier series written in complex form is
00
f(x) ,...., L Cn einz n= -oo where
Cn =
1
211'
-
1
1f
- 1f
.
f(x)e -mz dx .
Now it is straightforward to use the window function W(x) to write
Cn =
1 100 W (-X - 1!') f(x)e -m. zdx.
2
-
_
00
211'
11. The easiest way to see how discrete Fourier Transforms are derived is to consider a sampled time series as the original time series f(t) multiplied by a function that picks out the discrete (sampled) values leaving all other values zero. This function is related to the Shah function (train of Dirac-& functions) is not necessarily (but is usually) equally spaced. It is designed by using the window function W ( x) met in the last question. With such a function, taking the Fourier Transform results in the finite sum of the kind seen in the question. The inverse is a similar evaluation, noting that because of the discrete nature of the function, there is a division by the total number of data points.
218
An Introduction to Laplace Transforms a nd Fourier Series
12. Inserting the values and
and
=
+
=
{1, 2, 1} into the series of the previous question, N 3
T 1 so we get the three values Fo = 1 + 2 1 = 4; F1 1 2e -2,.. i/3 =
+
+
e -4 1ri/3 = e -2 1ri/3 ;
13. immediately Using the exponential forms of sine and cosine the Fourier Transforms are F{cos(wot)} = 21 (&(w -wo) &(w + wo)) +
Exercises 7. 7
F{sin(wot)} = ;i (&(w - wo) - &(w + wo)).
1. Inis best all of these examples, the location of the pole is obvious, and the residue found by use of the formula z--+ a
lim (z - a)f(z)
where z = a is the location of the simple pole. In these answers, the location of the pole is followed after the semicolon by its residue. Where there is more than one pole, the answers are sequential, poles first followed by the corresponding residues. ( i)
z = -1; 1,
(iii) (iv) z (vi) z
z= =
1, 3i, -3i; ! , 152 (3 -i), 152 (3 + i),
0, -2, -1 ;
= mr
�' -!, 1,
(-1)nmr,
n
( ii)
(v )
integer,
z = 1 ; -1,
z = 0;
1,
2. As in the first example, the location of the poles is straightforward. The methods vary. For parts ( i) , ( ii ) and (iii) the formula for finding the residue at a pole of order n is best, viz.
1
(n -
1)!
d( n- 1) n !� dz( n - 1 ) (z - a) f(z).
For part ( iv) expanding both numerator and denominator as power series and picking out the coefficient of z works best. The answers are as
1/
219
A. Solutions to Exercises
follows
(i) z = 1, order 2 res = 4 (ii) z = i, order 2 res = -ii z = -i, order 2 res = ii (iii) z = O, order 3 res = -� (iv) z = O, order 2 res = 1. 3. (i)residues Usingofthetheresidue timesresidues the sumare:of the integrandtheorem, at the the threeintegral poles. isThe2nithree � (1 - i) (at z = 1), 145 (-2 - i) (at z = -2), � (1 + 3i) (at z = -i). The sum of these times 2ni gives the result (ii) This timeisthe12ni.residue (calculated easily using the formula) is 6, whence the integral (iii) ForBythistheintegral we use a semithe circular contour on the upperportion half plane. estimation lemma, integral around the curved tends tothezero asaxistheisradius gets veryintegral large.from Also0theto integral from -oo to 0 along real equal to the oo since the integrand is even. Thus we have 2 1000 X 1+ 1 dx = 2ni(sum of residues at z = e ./6, i, from which we get the answer i. (iv) Thistellintegral ments us that is evaluated using the same contour, and similar argu 2 100 x4cos(2nx) + x2 + I dx = 2ni(sum of residues at z = e11"'·;3 , e211"t·;3). (Note that the complex function considered is z4 +e21rz2iz+ 1 . Note also that the polesis,ofafter the aintegrand are those of z6 - 1 but excluding z = ±I.) The answer little algebra �
o
1rt
220
An Introduction to Laplace Transforms and Fourier Series
4. Problems (i) and (ii) are done by using the function 2 f(z) (ln(z)) z4 + 1 Integrated around the(±1indented semi circular contour of Figure 7.8, there are poles at z ± i)f../2. Only those at (±1 + i)f../2 or z = etri/4 , e3 tri/4 are inside the contour. Evaluating r , f(z)dz Jc along all the partsbitsof eventually the contourcontribute gives the nothing following(thecontributions: those along the curved denominator gets very large in absolute magnitude as the radius of the big semi-circle oo , the integral around the small circle r(ln r)2 0.) The contributions along the real0 axisitsareradius r 0 since 2 100 -(lnx) -- dx 0 X4 + 1 along the positive real axis where z x and roo (Inx4X ++ i1'lr)2 dx }0 along the negative real axis where z xeitr so In z In x + i1r. The residue theorem thus gives (In x)2 dX + 21TZ·1oo �dX - 1r2 1oo _1_ dX 2 1oo X4 + 1 0 X4 + 1 0 21ri{sum 0 X4 + 1 (A.1) of residues}. The residue at z = a is given by (lna)2 4a"3 using the formula for the residue of a simple pole. These sum to 1r2 - -- (8 - 10i ). 64../2 Equating real and imaginary parts of Equation A.1 gives the answers 31r3../2 '. (ii) roo lnx dx - 1r2 roo x4(lnx)+ 12 dx -- � (1") 1 6 v'2 }0 x4 + 1 }0 once the result 1r roo x4 1+ 1 dx = 2.,/2 lo =
=
-+
-+
as
-+
=
=
=
=
=
-+
A. Solutions to Exercises
221
from Example 7.5(ii) is used. (iii) The third integral also uses the indented semi circular contour of Figure 7.8. The contributions from the large and small semi circles are ultimately zero. There is a pole at z = i which has residue e1r>..i/2 /2i and the straight parts contribute
(positive real axis), and
lo x>..e>..i1r dx I + x2 oo (negative real axis). Putting the contributions together yields -
from which
--
x>.. dx - 7r....-- . 1oo -o I + x2 - 2 cose21r) -
-
5. These inverse Laplace Transforms are all evaluated form first principles using the Bromwich contour, although it is possible to deduce some of them by using previously derived results, for example if we assume that .c, -1
{ �} .;s
_
I
_
- .;;t
then we can carry on using the first shift theorem and convolution. How ever, we choose to use the Bromwich contour. The first two parts follow closely Example though none of the branch points in these problems is in the exponential. The principle is the same. (i) This Bromwich contour has a cut along the negative real axis from I to -oo. It is shown as Figure Hence
7. 10,
-
A.6.
.c, -1
{
I
} - _I {
est d 8.
8vfs+I - 27ri JB r 8vfs+I
The integral is thus split into the following parts
f
=
f + f+ f +
1+ f
=
27ri(residue at 8 =
0) where is the whole contour, r is the outer curved part, AB is the straight portion above the cut (c;$8 0) 'Y is the small circle surrounding the branch point 8 - I and is the straight portion below the cut (c;$8 < 0). The residue is one, the curved parts of the contour contribute Jc,
JBr
lr
JAB
'Y
C'
=
CD
JeD
>
nothing in the limit. The important contributions come from the integrals
222
An I ntroduction to Laplace Transforms and Fourier Series
s-plane
A
D
Figure A.6: The cut Bromwich contour along AB and CD. On AB we can put s xei,.. - 1. This leads to the integral =
{
{o et( -z - 1)
1 1oo (-x - l)iy'x On CD we can put s = xe-i1r - 1. The integrals do not cancel because of the in thereinforce. denominator reason isthe cut is there of course!). Theysquare in factrootexactly So the(theintegral oo et(-z- 1) dx. 1 l0 ( - 1) (-iyfx) Hence e -te - zt -2 l 1 L i(x + l)y'x 21ri Using the integral roo -zt -1ret [- I + erfv'i) 10 (x : I ) y'x gives the answer that c-1 { 1 } __!._ f e•t ds erfv'i. svfs+I 21ri 1 svfs+I (ii) This secondThepartimportant is tackledstep in a issimilar way. The contour is identical (Figure the correct parametrisation ofcut. the A.6). straight parts of the contour just above and just below the branch This time the two integrals along the cut are AB
CD
c
=
=
=
-X
Br
=
AB
dx
00
dx
1
dx
= -
=
.
=
Br
0 e<-z-1)t dx
1oo 1
+ iyfx
=
A. Solutions to Exercises
223
and
l
CD
=
-
1oo o
e ( -x- 1)t i G x dx,
1- y
and the integrals combine to give c- 1
{
} = 1 100 e-te-xt2i../Xdx
1
1 + ..jSTI
which gives the result
27ri
-
0
1+X
e-t __ - erfcVt. V1rt
(iii) This inverse can be obtained from part (ii) by using the first shift theorem (Theorem plO) . The result is
1.3,
C-1
r:
{ 1 } = c;1 1 + yS y 1rt
-
t;
et erfcvt.
(iv) Finally this last part can be deduced by using the formula
1
v'S +
1 - 2-11 y's - 1 - - s - 1·
The answer is
6. This problem is best tackled by use of power series, especially so as there is no problem manipulating them as these exponential and related functions have series that are uniformly convergent for all finite values of t and s, excluding 0. The power series for the error function obtained by integrating the exponential series term by term) yields:
s=
(
oo
Taking the inverse Laplace Transform using linearity and standard forms gives
>(t) =
2 ..fo
( - 1)n t2n
� (2n + 1)n! (2n)! ·
After some tidying up, this implies that
224
An I ntroduction to Laplace Transforms and Fourier Series
Taking the Laplace Transform of this series term by term gives C which is
2_ f ) - l ) n ( 1 / .JS) 2n+l { if>(Vt) } = _VifS (2n + 1)! n=O
as required. 7. This problem is tackled in a very similar way to the previous one. We simply integrate the series term by term and have to recognise
� ( -1)k (2k)! ( �) 2k
=O k� as the binomial series for
(k!) 2
s
(1 + -x2 ) - 112 s2
Again, the series are uniformly convergent except for s = ±ix which must be excluded alongside s = 0. 8. The integrand
cosh(x.JS) s cosh(..fS) has a singularity at the origin and wherever ..fS is an odd multiple of /2. The presence of the square roots leads one to expect branch points, but in fact there are only simple poles. There are however infinitely many of them at locations 1r
s=
-
(n + 4) 2 n2
and at the origin. The (uncut) Bromwich contour can thus be used; all the singularities are certainly to the left of the line s = u in Figure 7.9. The inverse is thus 1 cosh (x.JS) . 27rZ Br S COSh( yr::S) ds = sum of restdues. The residue at s = 0 is straightforwardly
1 . est
lim (s - 0) a-tO
{ est cosh( cosh (x.JS) } S
Vs)
=
1.
The residue at the other poles is also calculated using the formula, but the calculation is messier, and the result is
A. Solutions to Exercises
225
Thus we have ,.. _ 1 cosh(xy'S) s cosh(y'S)
,..,
{
}
=
1+
( 1 ) n - (n - .l)21r2 t 4 Loo --e 2 cos ( -
-
1r n=l 2n
-
n
1
-
)
1 - 1rx. 2
9. The Bromwich contour for the function
e - sl
has a cut from the branch point at the origin. We can thus use the contour depicted in Figure 7.10. As the origin has been excluded the integrand
has no singularities in the contour, so by Cauchy 's theorem the integral around C' is zero. As is usual, the two parts of the integral that are curved give zero contribution as the outer radius of r gets larger and larger, and inner radius of the circle 7 gets smaller. This is because cos O < 0 on the left of the imaginary axis which means that the exponent in the integrand is negative on r, also on 7 the ds contributes a zero as the radius of the circle 7 decreases. The remaining contributions are
oo 1 e - x t- x l x. 0 CD These combine to give x i ¥'3) 1 lCD - 1000 e-xt- 'ix S1ll (-2 l
and
AB
Substituting x =
,..,t'-1 { e-sl } =
-
e-i-.r/3d
-
1
=
+
!
.
dx .
u3 gives the result 1 1 st-s! d u.J3) 37r 1000 u2e-u3t-l.u2 s1n. (-e s u. 27ri 2
-
=
Br
d
t
10. Using Laplace Transforms in solving in the usual way gives the solution
- xYs2-1. 4>(x, s) = 2.e s2
The singularity of 4>(x, s) with the largest real part is at s = 1. The others are at s = -1, 0. Expanding 4>(x, s) about s = 1 gives -
ifJ(x, s) = 1 - xvIn2(s - 1) 2 + 1.
·
·
· .
226
An I ntroduction to Laplace Transforms and Fourier Series
In terms of Theorem 7.4 this means that k = 1/2 and ao = -x../2. Hence the leading term in the asymptotic expansion for ifJ(x, t) for large t is
whence as required.
B
Tab le of L aplace Transform s
Inconstants. this table, a few is a entries, real variable, a complex variable and the real isvariable x also appears. f(s)(= f000 t
s
In
e -•t F(t)dt)
1
1
8
1
-, n
sn
F(t)
.
= 1, 2, . .
1 , x > O, z s
1 --
1 tn -
(n - 1 ) ! ' 1 tz -
r(x) ' eat
s - a' s
cos(
s2 + a2 a
at)
sin(at)
s2 + a2 8
cosh(at)
s2 - a2 a
sinh(at)
s2 - a2
227
a
and b
are real
An Introduction to Laplace Transforms and Fourier Series
228
8(t) <)(n) (t) H(t -a) erfc ( � ) 2 a --e 2..;;i3 -a2/4t
1 s s../s1+a .JS(s1-a) 1 Js-a+b 1 sinh(sx) ssinh(sa) sinh(sx) scosh(sa) cosh(sx) scosh(sa) ssicosh(sx) nh(sa) ssisinnh(x.JS) h(a.JS)
erfvat .;a
Jo(at) sint(at) mrta ) n1rax ) cos ((-1n )n sm. (--xa + -1l'2 nLoo=l -n sm. ( (2n - 1)1rx ) sm. ( (2n - l)1rt ) -1l'4 n�L-=l 2a 2(-1) n 1 2a n cos ( (2n- l)7rx ) cos ( (2n- 1)7rt) (-l) 1 + -1r4 nL-�=l 2a 2n - 1 2a n1rta ) (-l)n eos ( -n1rax ) sm. (-at + 1l'-2 nL-�=l -n1l'aX ) -1n n -n21r2t!a2 sm (--Xa + 1r-2 n�L-=l --e n
(
)
•
B. Table of Laplace Transforms
229
cosh(xy'S) scosh(ay'S) sinh(xy'S) s2 sinh(ay'S) n s. ( (2n - 1)7rx ) cos ( (2n - 1)7rt ) sinh(sx) (-1) a � S x + s2 cosh(sa) 2a 1r2 � (2n - 1)2 m 2a { 1 2 tanh ( -as )
e- as
c
Linear Spaces
C.l
Linear Algebra
In this appendix, some fundamental concepts of linear algebra are given. The proofs are largely omitted; students are directed to textbooks on linear algebra for these. For this subject, we need to be precise in terms of the basic mathe matical notions and notations we use. Therefore we uncharacteristically employ a formal mathematical style of prose. It is essential to be rigorous with the basic mathematics, but it is often the case that an over formal treatment can obscure rather than enlighten. That is why this material appears in an appendix rather than in the main body of the text. A set of objects (called the elements of the set) is written as a sequence of (usually) lower case letters in between braces:In discussing Fourier series, sets have infinitely many elements, so there is a row of dots after an too. The symbol E read as "belongs to" should be familiar to most. So 8 E S means that 8 is a member of the set S. Sometimes the alternative notation S = {x!f(x)} is used. The vertical line is read as "such that" so that f(x) describes some property that x possess in order that 8 E S. An example might be S = {x!x E
R, !xl :S: 2 }
so S is the set of real numbers that lie between -2 and +2. The following notation is standard but is reiterated here for reference: 231
232
An Introduction to Laplace Transforms and Fourier Series
(a, b) denotes the open interval {x l a < x < b}, [a, bJ denotes the closed interval {x l a :::; x :::; b}, [a, b) is the set {x l a :::; x < b}, and (a, b] is the set {x l a < x :::; b}. The last two are described as half closed or half open intervals and are reasonably obvious extensions of the first two definitions. In Fourier series, oo is often involved so the following intervals occur:-
(a, oo) = {x l a < x} [a, oo) = {x l a ::; x} (-oo, a) = {x l x < a} (-oo, a] = {x l x :::; a}. These are all obvious extensions. Where appropriate, use is also made of the following standard sets
= { . . . , -3, - 2, - 1, 0, 1, 2, 3, . . .}, the set of integers z+ is the set of positive integers including zero: {0, 1, 2, 3, . . .} R = {x l x is a real number} = (-oo, oo). Z
Sometimes (but very rarely) we might use the set of fractions or rationals Q: Q=
{: I
m
}
and n are integers , n =/: 0 .
R+ is the set of positive real numbers R+ = {X I X E R, X 2:: 0}. Finally the set of complex numbers C is defined by C
= {z = x + iy I x, y E R, i = H}.
The standard notation
x = lR{z }, the real part of z y = �{z}, the imaginary part of z has already been met in Chapter 1. Hopefully, all of this is familiar to most of you. We will need these to define the particular normed spaces within which Fourier series operate. This we now proceed to do. A vector space V is an algebraic structure that consists of ele ments (called vectors) and two operations (called addition + and multiplication x ) . The following gives a list of properties obeyed by vectors a,b,c and scalars a, f3 E F where F is a field (usually R or C).
1 . a + b is also a vector (closure under addition). 2. (a + b) + c = a + (b + c) (associativity under addition).
C. Linear Spaces
3.
233
There exists a zero vector denoted by 0 such that 0 + a = a + 0 = a Va E V(additive identity).
4. For every vector a E V there is a vector - a (called "minus a" such that a + ( - a) = 0.
5. a + b = b + a for every a, b E V (additive commutativity).
6.
aa E
V for every a E F, a E V (scalar multiplicity).
7. a (a + b) = aa + ab for every a E F, a, b E V (first distributive law). 8. (a + {3)a = aa + {3a for every a , {3 E F, a E V (second distributive law). 9. For the unit scalar 1 of the field F, and every a E V l .a = a (multiplicative identity). The set V whose elements obey the above nine properties over a field F is called a vector space over F. The name linear space is also used in place of vector space and is useful as the name "vector" conjures up mechanics to many and gives a false impression in the present context. In the study of Fourier series, vectors are in fact functions. The name linear space emphasises the linearity property which is confirmed by the following definition and properties.
If at , a2 , . . . , an E V where V is a linear space over a field F and if there exist scalars a1 , a2 , . . . , an E F such that
Definition C.l
(called a linear combination of the vectors at , a2, . . . , an) then the collection of all such b which are a linear combination of the vectors at , a2, . . . , an is called the span of at , a2, . . . , an denoted by span{at , a2, . . . , an} . If this definition seems innocent, then the following one which depends on it is not. It is one of the most crucial properties possibly in the whole of mathematics.
If V is a linear (vector) space, the are said to be linearly independent if the equation
Definition C.2 (linear independence) at , a2, . . . , an E V
vectors
implies that all of the scalars are zero, i.e. Otherwise, a1 , a2 , . . . , an are said to be linearly dependent. Again, it is hoped that this is not a new concept. However, here is an exam ple. Most texts take examples from geometry, true vectors indeed. This is not appropriate here so instead this example is algebraic.
234
An Introduction to Laplace Transforms and Fourier Series
Example C.3
Is the set S = { I , x, 2 +x, x2 } with F = R linearly independent?
Solution The most general combination of I, x, 2 + x, x2 is
where x is a variable that can take any real value. for if we Now, for all x does not imply a1 = a2 a3 a4 and a4 then choose a 1 + 2a2 a2 + a3 The combination a1 I, a3 will do. The set is therefore not linearly - � , a2 � , a4 independent. On the other hand, the set {I, x, x2 } is most definitely linearly independent as a 1 + a 2 x + a 3 x2 for all x ::::? a 1 a2 a3 It is possible to find many independent sets. One could choose {x, sin(x), ln x} for example: however sets like this are not very useful as they do not lead to any applications. The set {I, x, x2 } spans all quadratic functions. Here is another definition that we hope is familiar.
y=0 = 0, = = =
=0 =0
=0
= = = 0, y = 0.
=0
= = = 0.
at , a2, . . . , an
A finite set of vectors is said to be a basis for the linear space V if the set of vectors is linearly independent and V= { } The natural number n is called the dimension of V and we write n = dim(V). Example C.S Let [a, b] (with a < b) denote the finite closed interval as already defined. Let f be a continuous real valued function whose value at the point x of [a, b] is f (x). Let C[a, b] denote the set of all such functions. Now, if we define addition and scalar multiplication in the natural way, i.e. h + h is simply the value of h (x) + h(x) and similarly af is the value of af(x), then it is clear that C[a, b] is a real vector space. In this case, it is clear that the set x, x2 , x3 , , xn are all members of C[a, b] for arbitrarily large n. It is therefore not possible for C[a, b] to be finite dimensional. Definition C.4
span at, a2 , . . . , an
at , a2 , . . . , an
• • .
Perhaps it is now a little clearer as to why the set {I, x, x2 } is useful as this is a basis for all quadratics, whereas {x, sin(x), ln x} does not form a basis for any well known space. Of course, there is usually an infinite choice of basis for any particular linear space. For the quadratic functions the sets {I - x, I + x, x2 } or {I, I - x2 , I + 2x + x2 } will do just as well. That we have these choices of bases is useful and will be exploited later. Most books on elementary linear algebra are content to stop at this point and consolidate the above definitions through examples and exercises. However, we need a few more definitions and properties in order to meet the requirements of a Fourier series.
Let V be a real or complex linear space. (That is the field F over which the space is defined is either R or C.) An inner product is an operation between two elements of V which results in a scalar. This scalar is denoted by ( ) and has the following properties:Definition C.6
at , a2
235
C. Linear Spaces
1. For each at E V, (at , at} is a non-negative real number, i.e. For each at E V, (at , at} = 0 if and only if at = 0.
2.
4. For each at , a2 E V, (at , a2} = (a2, at} where the overbar in the last property denotes the complex conjugate. If F = R are real, and Property 4 becomes obvious.
a 1 , a2
No doubt, students who are familiar with the geometry of vectors will be able to identify the inner product (at , a2} with a1 .a2 the scalar product of the two vectors a1 and a2 . This is one useful example, but it is by no means essential to the present text where most of the inner products take the form of integrals. Inner products provide a rich source of properties that would be out of place to dwell on or prove here. For example: (0, a} =
0 VaE V
and Instead, we introduce two examples of inner product spaces. 1. If e n is the vector space v i.e. a typical element of v has the form ' a = (a1 , a2 , . . . , an where ar = Xr + iyr , Xr , Yr E R. The inner product (a, b} is defined by
)
the overbar denoting complex conjugate.
b]
2. Nearer to our applications of inner products is the choice V = C [a, the linear space of all continuous functions f defined on the closed interval [a, With the usual summation of functions and multiplication by scalars this can be verified to be a vector space over the field of complex numbers en . Given a pair of continuous functions /, we can define their inner product by
b].
g
(f,g} = 1b f(x)g(x)dx.
It is left to the reader to verify that this is indeed an inner product space satisfying the correct properties in Definition
C6.
236
An I ntroduction to Laplace Transforms and Fourier Series
It is quite typical for linear spaces involving functions to be infinite dimen sional. In fact it is very unusual for it to be otherwise! What has been done so far is to define a linear space and an inner product on that space. It is nearly always true that we can define what is called a "norm" on a linear space. The norm is independent of the inner product in theory, but there is almost always a connection in practice. The norm is a generalisation of the notion of distance. If the linear space is simply two or three dimensional vectors, then the norm can indeed be distance. It is however, even in this case possible to define others. Here is the general definition of norm. Definition C. 7 Let V be a linear space. A norm on V is a function from V to R+ (non-negative real numbers), denoted by being placed between two vertical lines 11 -11 which satisfies the following four criteria:1. For each a1 E V, ! l a1 !1 2:: 0. 2. !1a1 ! 1 = 0 if and only if a1 = 0. 3. For each a1 E V and a E C
!l aad l = l a lll ad l -
4- For every a1 , a2 E V (4 is the triangle inequality.) For the vector space comprising the elements a = (a1 , a2 , . . . , an ) where ar = Xr + iyr, Xr, Yr E R, i.e. en met previously, the obvious norm is
l l al l = [l a 1 l 2 + ! a2 ! 2 + l a3 1 2 + · · · + l ani 2P 12 = [(a, a}F /2 • It is true in general that we can always define the norm 1 1 - 11 of a linear space equipped with an inner product ( ., .) to be such that I ! al l = [ (a, a)] 1 /2 . This norm is used in the next example. A linear space equipped with an in ner product is called an inner product space. The norm induced by the inner product, sometimes called the norm for the function space C[a, b], is
natural
ll f ll =
[l ] b
l f l 2 dx
1 /2
For applications to Fourier series we are able to make ll f ll = 1 and we adjust elements of V , i.e. C[a, b] so that this is achieved. This process is called normal isation. Linear spaces with special norms and other properties are the directions
237
C. linear Spaces
in which this subject now naturally moves. The interested reader is directed towards books on functional analysis. We now establish an important inequality called the Cauchy-Schwartz in equality. We state it in the form of a theorem and prove it.
Theorem C.8 (Cauchy-Schwartz) Let V be a linear space with inner prod uct (., .}, then for each a, b E V we have: Proof If (a, b} = 0 then the result is self evident. We therefore assume that (a, b} = o: =F 0, a may of course be complex. We start with the inequality where .A is a real number. Now,
!I a - .Ao:b W = (a - .Aab, a - .Ao:b} . We use the properties of the inner product to expand the right hand side as follows:-
(a - .Aab, a - .Aab} (a, a} - .A(ab, a} - .A(a, ab} + .A2Ial2 (b , b} so ! l aW - .Ao:(b, a} - .Aa(a, b} + .A2Io: l2ll bW i.e. I l aW - .Ao:a - .Aaa + .A2Io: l2ll bW so ll all2 - 2.Aial2 + .A2I o: l2llb W
=
�0 �0 �0 � 0.
This last expression is a quadratic in the real parameter .A, and it has to be positive for all values of .A. The condition for the quadratic to be non-negative is that
the inequality
b2 s 4ac and a
>
0. With
b2 S 4ac is 4l o: l4 or l al2
S
S
4l a l2ll aWIIbll2 ll allllb ll
and since o: = (a, b} the result follows. 0
An Introduction to Laplace Transforms and Fourier Series
238
The following is an example that typifies the process of proving that something is a norm.

Example C.9 Prove that $\|a\| = \sqrt{\langle a, a\rangle}$, $a \in V$, is indeed a norm for a vector space $V$ with inner product $\langle\cdot,\cdot\rangle$.

Proof The proof comprises showing that $\sqrt{\langle a, a\rangle}$ satisfies the four properties of a norm.

1. $\|a\| \ge 0$ follows immediately from the definition of the (non-negative) square root.

2. $\|a\| = 0 \iff \sqrt{\langle a, a\rangle} = 0 \iff a = 0$.

3. $\|\alpha a\| = \sqrt{\langle\alpha a, \alpha a\rangle} = \sqrt{\alpha\bar{\alpha}\langle a, a\rangle} = \sqrt{|\alpha|^2\langle a, a\rangle} = |\alpha|\sqrt{\langle a, a\rangle} = |\alpha|\,\|a\|$.

4. This fourth property is the only one that takes a little effort to prove. Consider $\|a + b\|^2$. This is equal to

$$\langle a + b, a + b\rangle = \langle a, a\rangle + \langle b, a\rangle + \langle a, b\rangle + \langle b, b\rangle = \|a\|^2 + \langle a, b\rangle + \overline{\langle a, b\rangle} + \|b\|^2.$$

The expression $\langle a, b\rangle + \overline{\langle a, b\rangle}$, being the sum of a number and its complex conjugate, is real. In fact

$$\left|\langle a, b\rangle + \overline{\langle a, b\rangle}\right| = \left|2\,\Re\langle a, b\rangle\right| \le 2\left|\langle a, b\rangle\right| \le 2\,\|a\|\,\|b\|$$

using the Cauchy-Schwarz inequality. Thus

$$\|a + b\|^2 = \|a\|^2 + \langle a, b\rangle + \overline{\langle a, b\rangle} + \|b\|^2 \le \|a\|^2 + 2\|a\|\,\|b\| + \|b\|^2 = \left(\|a\| + \|b\|\right)^2.$$

Hence $\|a + b\| \le \|a\| + \|b\|$, which establishes the triangle inequality, Property 4.

Hence $\|a\| = \sqrt{\langle a, a\rangle}$ is a norm for $V$. $\square$
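A quick numerical spot-check of the triangle inequality just established, using the standard inner product on $\mathbb{C}^6$ (an illustrative sketch only):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    a = rng.normal(size=6) + 1j * rng.normal(size=6)
    b = rng.normal(size=6) + 1j * rng.normal(size=6)
    # ||a + b|| <= ||a|| + ||b|| for the norm induced by the inner product
    print(np.linalg.norm(a + b) <= np.linalg.norm(a) + np.linalg.norm(b))
```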
An important property associated with linear spaces is orthogonality. It is a direct analogy/generalisation of the geometric result $\mathbf{a}\cdot\mathbf{b} = 0$, $\mathbf{a} \ne 0$, $\mathbf{b} \ne 0$, which holds if $\mathbf{a}$ and $\mathbf{b}$ represent directions that are at right angles (i.e. are orthogonal) to each other. This idea leads to the following pair of definitions.

Definition C.10 Let $V$ be an inner product space, and let $a, b \in V$. If $\langle a, b\rangle = 0$ then the vectors $a$ and $b$ are said to be orthogonal.

Definition C.11 Let $V$ be a linear space, and let $\{a_1, a_2, \ldots, a_n\}$ be a sequence of vectors, $a_r \in V$, $a_r \ne 0$, $r = 1, 2, \ldots, n$, and let $\langle a_i, a_j\rangle = 0$ for $i \ne j$, $1 \le i, j \le n$. Then $\{a_1, a_2, \ldots, a_n\}$ is called an orthogonal set of vectors.

Further to these definitions, a vector $a \in V$ for which $\|a\| = 1$ is called a unit vector, and if an orthogonal set of vectors consists entirely of unit vectors, the set is called orthonormal. It is also possible to let $n \to \infty$ and obtain an orthogonal set for an infinite dimensional inner product space. We make use of this later in this appendix, but for now let us look at an example.
Example C.12 Determine an orthonormal set of vectors for the linear space that consists of all real linear functions $\{a + bx : a, b \in \mathbb{R},\ 0 \le x \le 1\}$, using as inner product $\langle f, g\rangle = \int_0^1 fg\,dx$.

Solution The set $\{1, x\}$ forms a basis, but it is not orthogonal. Let $a + bx$ and $c + dx$ be two vectors. In order to be orthogonal we must have

$$\langle a + bx,\, c + dx\rangle = \int_0^1 (a + bx)(c + dx)\,dx = 0.$$

Performing the elementary integration gives the following condition on the constants $a$, $b$, $c$ and $d$:

$$ac + \tfrac{1}{2}(bc + ad) + \tfrac{1}{3}bd = 0.$$

In order to be orthonormal too we also need

$$\|a + bx\| = 1 \qquad\text{and}\qquad \|c + dx\| = 1,$$

and, since $\|a + bx\|^2 = \int_0^1 (a + bx)^2\,dx$, these give, additionally,

$$a^2 + ab + \tfrac{1}{3}b^2 = 1, \qquad c^2 + cd + \tfrac{1}{3}d^2 = 1.$$

There are four unknowns and three equations here, so we can make a convenient choice. Let us set

$$a = -b = \sqrt{3},$$
which gives

$$\sqrt{3}\,(1 - x)$$

as one vector. The first equation now gives $3c = -d$, and the second normalisation condition then reduces to $c^2 = 1$, from which

$$c = 1, \qquad d = -3.$$

Hence the set $\{\sqrt{3}(1 - x),\ 1 - 3x\}$ is a possible orthonormal one. Of course there are infinitely many possible orthonormal sets; the above was one simple choice. The next definition follows naturally.
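The orthonormality of the set just found is easily confirmed numerically; the sketch below (illustrative only, using SciPy's quad for the integrals) evaluates the three defining integrals:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.sqrt(3) * (1 - x)   # the first vector found above
g = lambda x: 1 - 3 * x              # the second vector found above

print(quad(lambda x: f(x) * g(x), 0, 1)[0])   # ~0: the two vectors are orthogonal
print(quad(lambda x: f(x) ** 2, 0, 1)[0])     # ~1: unit length
print(quad(lambda x: g(x) ** 2, 0, 1)[0])     # ~1: unit length
```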
Definition C.13 In an inner product space, an orthonormal set that is also a basis is called an orthonormal basis.
This next example involves trigonometry, which at last gets us close to discussing Fourier series.

Example C.14 Show that $\{\sin(x), \cos(x)\}$ is an orthogonal basis for the inner product space $V = \{a\sin(x) + b\cos(x) : a, b \in \mathbb{R},\ 0 \le x \le \pi\}$, using as inner product $\langle f, g\rangle = \int_0^\pi fg\,dx$, $f, g \in V$, and determine an orthonormal basis.

Solution $V$ is two dimensional and the set $\{\sin(x), \cos(x)\}$ is obviously a basis. We merely need to check orthogonality. First of all,

$$\langle\sin(x), \cos(x)\rangle = \int_0^\pi \sin(x)\cos(x)\,dx = \frac{1}{2}\int_0^\pi \sin(2x)\,dx = \left[-\frac{1}{4}\cos(2x)\right]_0^\pi = 0.$$

Hence orthogonality is established. Also,

$$\langle\sin(x), \sin(x)\rangle = \int_0^\pi \sin^2(x)\,dx = \frac{\pi}{2} \qquad\text{and}\qquad \langle\cos(x), \cos(x)\rangle = \int_0^\pi \cos^2(x)\,dx = \frac{\pi}{2}.$$

Therefore

$$\left\{\sqrt{\frac{2}{\pi}}\sin(x),\ \sqrt{\frac{2}{\pi}}\cos(x)\right\}$$

is an orthonormal basis.
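Again this is easily verified numerically (an illustrative sketch only):

```python
import numpy as np
from scipy.integrate import quad

s = lambda x: np.sqrt(2 / np.pi) * np.sin(x)
c = lambda x: np.sqrt(2 / np.pi) * np.cos(x)

print(quad(lambda x: s(x) * c(x), 0, np.pi)[0])   # ~0: orthogonal
print(quad(lambda x: s(x) ** 2, 0, np.pi)[0])     # ~1: unit length
print(quad(lambda x: c(x) ** 2, 0, np.pi)[0])     # ~1: unit length
```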
These two examples are reasonably simple, but for linear spaces of higher dimensions it is by no means obvious how to generate an orthonormal basis. One way of formally generating an orthonormal basis from an arbitrary basis is to use the Gram-Schmidt orthonormalisation process, and this is given later in this appendix.

There are some further points that need to be aired before we get to discussing Fourier series proper. These concern the properties of bases, especially regarding linear spaces of infinite dimension. If the basis $\{a_1, a_2, \ldots, a_n\}$ spans the linear space $V$, then any vector $v \in V$ can be expressed as a linear combination of the basis vectors in the form

$$v = \alpha_1 a_1 + \alpha_2 a_2 + \cdots + \alpha_n a_n = \sum_{r=1}^{n}\alpha_r a_r.$$
This result follows from the linear independence of the basis vectors, and from the fact that they span $V$. If the basis is orthonormal, then a typical coefficient $\alpha_k$ can be determined by taking the inner product of the vector $v$ with the corresponding basis vector $a_k$ as follows:

$$\langle v, a_k\rangle = \sum_{r=1}^{n}\alpha_r\langle a_r, a_k\rangle = \sum_{r=1}^{n}\alpha_r\,\delta_{kr} = \alpha_k,$$

where $\delta_{kr}$ is Kronecker's delta:

$$\delta_{kr} = \begin{cases} 1 & r = k,\\ 0 & r \ne k.\end{cases}$$
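The following sketch illustrates this coefficient formula in the familiar setting of $\mathbb{R}^3$ with the dot product; generating the orthonormal basis by a QR factorisation is merely a convenient choice for the illustration:

```python
import numpy as np

# build an orthonormal basis of R^3 (the columns of Q) from an arbitrary basis via QR
Q, _ = np.linalg.qr(np.array([[1., 2., 0.],
                              [0., 1., 3.],
                              [1., 0., 1.]]))
v = np.array([2.0, -1.0, 0.5])

alpha = Q.T @ v                    # alpha_k = <v, a_k>, one inner product per basis vector
print(np.allclose(Q @ alpha, v))   # True: v = sum_k alpha_k a_k reconstructs v exactly
```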
If we try to generalise this to the case $n = \infty$ there are some difficulties. They are not insurmountable, but neither are they trivial. One extra consideration always arises when the case $n = \infty$ is considered, and that is convergence. It is this that prevents the generalisation from being straightforward. The notion of completeness is also important. It has the following definition:
Definition C.15 Let $\{e_1, e_2, \ldots\}$ be an infinite orthonormal system in an inner product space $V$. The system is complete in $V$ if only the zero vector ($u = 0$) satisfies the equation

$$\langle u, e_n\rangle = 0, \qquad n \in \mathbb{N}.$$
A complete inner product space whose basis has infinitely many elements is called a Hilbert Space, the properties of which take us beyond the scope of this
short appendix on linear algebra. The next step would be to move on to Bessel's inequality, which is stated but not proved in Chapter 4. Here are a few more definitions that help when we have to deal with series of vectors rather than series of scalars.

Definition C.16 Let $w_1, w_2, \ldots, w_n, \ldots$ be an infinite sequence of vectors in a normed linear space (e.g. an inner product space) $W$. We say that the sequence converges in norm to the vector $w \in W$ if

$$\lim_{n\to\infty}\|w - w_n\| = 0.$$

This means that for each $\varepsilon > 0$ there exists $n(\varepsilon)$ such that $\|w - w_n\| < \varepsilon$ for all $n > n(\varepsilon)$.
Definition C.17 Let $a_1, a_2, \ldots, a_n, \ldots$ be an infinite sequence of vectors in the normed linear space $V$, and let $w_n = \sum_{r=1}^{n} a_r$ denote the $n$th partial sum. We say that the series

$$\sum_{r=1}^{\infty} a_r$$

converges in norm to the vector $w$ if $\|w - w_n\| \to 0$ as $n \to \infty$. We then write

$$w = \sum_{r=1}^{\infty} a_r.$$

There is logic in this definition, as $\|w_n - w\|$ measures the distance between the vectors $w$ and $w_n$, and if this gets smaller there is a sense in which the partial sums $w_n$ converge to $w$.

Definition C.18 If $\{e_1, e_2, \ldots, e_n, \ldots\}$ is an infinite sequence of orthonormal vectors in a linear space $V$, we say that the system is closed in $V$ if, for every $a \in V$, we have

$$\lim_{n\to\infty}\left\| a - \sum_{r=1}^{n}\langle a, e_r\rangle e_r \right\| = 0.$$

There are many propositions that follow from these definitions, but for now we give one more definition that is useful in the context of Fourier series.

Definition C.19 If $\{e_1, e_2, \ldots, e_n, \ldots\}$ is an infinite sequence of orthonormal vectors in a linear space $V$ of infinite dimension with an inner product, then we say that the system is complete in $V$ if only the zero vector $a = 0$ satisfies the equation $\langle a, e_n\rangle = 0$, $n \in \mathbb{N}$.
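The idea of a closed system can be illustrated numerically. In the sketch below (illustrative only) the orthonormal system $e_r(x) = \sqrt{2/\pi}\sin(rx)$ on $[0, \pi]$ is used with $a(x) = x(\pi - x)$, and the norm of the error after $n$ terms is seen to decrease towards zero:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: x * (np.pi - x)                        # the vector a
e = lambda r, x: np.sqrt(2 / np.pi) * np.sin(r * x)  # orthonormal system on [0, pi]

# the coefficients <a, e_r> for r = 1, ..., 10
coeffs = [quad(lambda x, r=r: f(x) * e(r, x), 0, np.pi)[0] for r in range(1, 11)]

def error(n):
    """|| a - sum_{r=1}^n <a, e_r> e_r ||, computed by quadrature."""
    partial = lambda x: sum(coeffs[r - 1] * e(r, x) for r in range(1, n + 1))
    err_sq, _ = quad(lambda x: (f(x) - partial(x)) ** 2, 0, np.pi)
    return np.sqrt(err_sq)

for n in (1, 3, 5, 10):
    print(n, error(n))   # the error norm shrinks as n increases
```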
There are many more general results and theorems on linear spaces that are useful to call on from within the study of Fourier series. However, in a book such as this a judgement has to be made as to when to stop the theory. Enough theory of linear spaces has now been covered to enable Fourier series
to be put in proper context. The reader who wishes to know more about the pure mathematics of particular spaces can quench their thirst with many excellent texts on special spaces. (Banach spaces and Sobolev spaces both have a special place in applied mathematics, so entering these names in the appropriate search engine should get results.)

C.2 Gram-Schmidt Orthonormalisation Process
Even in an applied text such as this, it is important that we know formally how to construct an orthonormal set of basis vectors from a given basis. The Gram-Schmidt process gives an infallible method of doing this. We state this process in the form of a theorem and prove it.

Theorem C.20 Every finite dimensional inner product space has a basis consisting of orthonormal vectors.

Proof Let $\{v_1, v_2, v_3, \ldots, v_n\}$ be a basis for the inner product space $V$. A second, equally valid basis can be constructed from this basis as follows:

$$\begin{aligned}
u_1 &= v_1,\\
u_2 &= v_2 - \frac{\langle v_2, u_1\rangle}{\|u_1\|^2}\,u_1,\\
u_3 &= v_3 - \frac{\langle v_3, u_2\rangle}{\|u_2\|^2}\,u_2 - \frac{\langle v_3, u_1\rangle}{\|u_1\|^2}\,u_1,\\
&\ \ \vdots\\
u_n &= v_n - \frac{\langle v_n, u_{n-1}\rangle}{\|u_{n-1}\|^2}\,u_{n-1} - \cdots - \frac{\langle v_n, u_1\rangle}{\|u_1\|^2}\,u_1,
\end{aligned}$$

where $u_k \ne 0$ for all $k = 1, 2, \ldots, n$. If this has not been seen before, it may seem a cumbersome and rather odd construction; however, each member of the new set $\{u_1, u_2, u_3, \ldots, u_n\}$ consists of the corresponding member of the starting basis $\{v_1, v_2, v_3, \ldots, v_n\}$ from which a series of terms has been subtracted. The coefficient of $u_j$ in $u_i$, $j < i$, is the inner product of $v_i$ with $u_j$ divided by the square of the length of $u_j$. The proof that the set $\{u_1, u_2, u_3, \ldots, u_n\}$ is orthogonal follows by standard induction; it is so straightforward that it is left for the reader to complete.

We now need two further steps. First, we show that $\{u_1, u_2, u_3, \ldots, u_n\}$ is a linearly independent set. Consider the linear combination

$$\alpha_1 u_1 + \alpha_2 u_2 + \cdots + \alpha_n u_n = 0$$

and take the inner product of this with the vector $u_k$ to give the equation

$$\sum_{j=1}^{n}\alpha_j\langle u_j, u_k\rangle = 0,$$
from which we must have

$$\alpha_k\langle u_k, u_k\rangle = 0,$$

so that $\alpha_k = 0$ for all $k$. This establishes linear independence. Now the set $\{w_1, w_2, w_3, \ldots, w_n\}$, where

$$w_k = \frac{u_k}{\|u_k\|},$$

is at the same time linearly independent and orthogonal, and each element is of unit length. It is therefore the required orthonormal set of basis vectors for the inner product space. The proof is therefore complete. $\square$
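The construction in the proof translates directly into a short algorithm. The sketch below (for $\mathbb{R}^n$ with the usual dot product; the function name and tolerance are illustrative choices) implements it and checks the result:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalise a list of linearly independent vectors in R^n (dot product)."""
    us = []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        # u_k = v_k - sum_{j<k} (<v_k, u_j> / ||u_j||^2) u_j, exactly as in the proof
        u = v - sum((np.dot(v, uj) / np.dot(uj, uj)) * uj for uj in us)
        if np.linalg.norm(u) < 1e-12:               # tolerance is an illustrative choice
            raise ValueError("the input vectors are linearly dependent")
        us.append(u)
    return [u / np.linalg.norm(u) for u in us]      # w_k = u_k / ||u_k||

basis = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
gram = np.array([[np.dot(a, b) for b in basis] for a in basis])
print(np.allclose(gram, np.eye(3)))                 # True: the w_k form an orthonormal set
```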
Bibliography
Bolton, W. (1994) Laplace and z-transforms 128pp., Longmans.
Bracewell, R.N. (1986) The Fourier Transform and its Applications (2nd Edition) 474pp., McGraw-Hill.
Churchill, R.V. (1958) Operational Mathematics 337pp., McGraw-Hill.
Copson, E.T. (1967) Asymptotic Expansions 120pp., C.U.P.
Hochstadt, H. (1989) Integral Equations 282pp., Wiley-Interscience.
Jones, D.S. (1966) Generalised Functions 482pp., McGraw-Hill (new edition 1982, C.U.P.).
Lighthill, M.J. (1970) Fourier Analysis and Generalised Functions 79pp., C.U.P.
Needham, T. (1997) Visual Complex Analysis 592pp., Clarendon Press, Oxford.
Osborne, A.D. (1999) Complex Variables and their Applications 454pp., Addison-Wesley.
Pinkus, A. and S. Zafrany (1997) Fourier Series and Integral Transforms 189pp., C.U.P.
Priestley, H.A. (1985) Introduction to Complex Analysis 157pp., Clarendon Press, Oxford.
Sneddon, I.N. (1957) Elements of Partial Differential Equations 327pp., McGraw-Hill.
Spiegel, M.R. (1965) Laplace Transforms, Theory and Problems 261pp., Schaum Publishing Co.
Stewart, I. and D. Tall (1983) Complex Analysis 290pp., C.U.P.
Watson, E.J. (1981) Laplace Transforms and Applications 205pp., Van Nostrand Reinhold.
Weinberger, H.F. (1965) A First Course in Partial Differential Equations 446pp., John Wiley.
Whitelaw, T.A. (1983) An Introduction to Linear Algebra 166pp., Blackie.
Williams, W.E. (1980) Partial Differential Equations 357pp., O.U.P.
Zauderer, E. (1989) Partial Differential Equations of Applied Mathematics 891pp., John Wiley.
Index
absolutely convergent, 24 amplitude, 128 analytic functions, 156 arc, 158 Argand diagram, 155 asymptotics, 122, 175, 179 autocovariance function, 150
complex variable, 155 complex variables, 122 complementary error function, 174 contour, 159 contour closed, 159 simple, 159 contour integral, 159 contour integrals, 163 convergence, 115 convergence in the norm, 242 convergence, pointwise, 78 convergence, uniform, 78 convolution, 37, 73, 144 convolution for Fourier Transforms, 144 convolution theorem, 38, 48, 54, 144 cut (complex plane), 166
basis, 234 bending beams, 68 Bessel Functions, 180 Bessel's inequality, 77 Borel's theorem, 39 boundary condition, 120 boundary conditions, 142 boundary value problem, 121, 141 branch point, 158 branch points, 165, 175 Bromwich contour, 168, 171, 173
diffusion, 45 Dirac's delta function, 26, 125, 128, 132 Dirac's delta function derivative of, 32 Dirac's delta function derivatives of, 30 Dirichlet's theorem, 79 discontinuous functions, 9
cantilever, 119 capacitor, 59 Cauchy's integral formulae, 157 Cauchy's theorem, 159 Cauchy-Riemann equations, 156, 160 Cauchy-Schwarz inequality, 237 celerity, 112, 119 chain rule, 110 characteristics, 112 complementary error function, 45 complex analysis, 155
electrical circuits, 55 energy spectral density, 148
energy spectrum, 148 error function, 45, 174 essential singularity, 158, 168 even functions, 89 exponential order, 4, 125 Faltung, 39 final value theorem, 23 first shift theorem, 5, 7 forcing function, 62 Fourier cosine transform, 134, 136 Fourier cosine transform inverse, 135 Fourier series, 112, 127, 129, 144, 240, 243 Fourier series basis functions, 98 complex form, 91, 130 cosine series, 96 differentiation of, 99 Fourier Transform from, 130 general period, 88 integration of, 99 orthonormality, 81 piecewise continuity, 99 sine series, 96 Fourier sine transform, 134, 136 Fourier sine transform inverse, 135 Fourier Transform discrete, 153 fast, 127 finite, 141, 145 Laplace Transform from, 128 shift theorems, 132 Fourier Transform pair, 132 Fourier Transforms, 127 Fourier Transforms finite, 144 higher dimensional, 141 Fredholm's Integral equation, 73 frequency, 128, 141 frequency space, 148, 150 gamma function, 177 generalised functions, 57, 128, 151
Green's theorem, 160 half-range series, 95 harmonic function, 113 harmonic functions, 156 heat conduction equation, 113, 117, 122, 125 Heaviside's Unit Step Function, 13, 18, 23, 67 Heaviside's unit step function, 10 ill-posed, 118 improper integral, 2 impulse function, 25, 128 inductor, 59 infinite dimension, 241 infinite dimensions, 236 initial value problem, 113 initial value problems, 50 initial value theorem, 23 inner product, 77, 235 inner product spaces, 235 integral equations, 72 intervals, 232 inverse Laplace Transform, 19, 41, 118, 129, 163, 170 inverse Laplace Transform partial fractions, 20 inversion formula, 170 isolated singularity, 158 keyhole contour, 166 Kirchhoff's Laws, 59 Kronecker's delta, 241 Laplace Transform and convolution, 38 derivative property, 14 of an integral, 16 of the Dirac delta function, 28 periodic functions, 32 two sided, 13 Laplace Transform, linearity of, 5 Laplace Transforms limiting theorems, 23 second order odes, 53
solving first order odes, 50 Laplace's equation, 112, 142 Laurent series, 157, 161 Lebesgue dominated convergence theorem, 137, 138 line integral, 158 linear independence, 233 linear space, 138, 233 linearity, 19 linearity for inverse Laplace Transform, 19 for Laplace Transforms, 5 mass spring damper systems, 55 norm, 236, 238 null function, 3 odd functions, 89 order of a pole, 158 ordinary differential equation, 48 ordinary differential equations order, 49 orthogonal, 239 orthonormal, 83, 239 oscillating systems, 67 Parseval's formula, 152 Parseval's theorem, 104, 148 partial derivative, 109 partial derivatives, 110 partial differential equation, 177 partial differential equation elliptic, 111 hyperbolic, 111 parabolic, 111 partial differential equations, 140 partial differential equations classification, 111 hyperbolic, 119 similarity solution, 118 partial fractions, 20 periodic function, 129 periodic functions, 32 phase, 128
phase space, 141 piecewise continuity, 38 piecewise continuous, 24 piecewise continuous functions, 9 Plancherel's identity, 148 Poisson's equation, 122 pole, 158 power series, 24, 156 power spectrum, 133 principal part, 157 radius of convergence, 157 Rayleigh's theorem, 148 rectified sine wave, 33 regular functions, 156 residue, 157, 162, 167, 172, 178 residue calculus, 163 residue theorem, 122, 162, 164, 169, 177 resistor, 59 Riemann integral, 81, 132 Riemann Lebesgue lemma, 139 Riemann sheets, 165 Riemann-Lebesgue lemma, 78, 152 second shift theorem, 18, 19, 22 separation constant, 114 separation of variables, 114 set theory, 231 Shah function, 147 signal processing, 128, 144 simple arc, 159 simple harmonic motion, 136 simple pole, 158 simultaneous odes, 62 sine integral function, 17 singularity, 158, 167 spectral density, 148 spectrum, 128 standard sets, 232 steady state, 58 step function, 13 suffix derivative notation, 109 table of Laplace Transforms, 227 Taylor series, 156
time invariant systems, 55 time series, 13 time series analysis, 127 top hat function, 27, 133, 137 transfer function, 53 transient, 58 trigonometric functions, 8 uniformly convergent, 24 vector space, 2, 77, 233, 238 Volterra integral equation, 73 Watson's lemma, 122 wave equation, 119 white noise, 148 window, 150
Emphasising the applications of Laplace transforms
throughout, the book also provides coverage of the underlying pure mathematical structures. Alongside the Laplace transform, the notion of Fourier series is developed from first principles. Exercises are provided to consolidate understanding of the concepts and techniques, and only a knowledge of elementary calculus and trigonometry is assumed. For second and third year students looking for a rigorous and practical introduction to the subject, this book will be an invaluable source.
The Springer Undergraduate Mathematics Series (SUMS) is a new series for undergraduates in the mathematical
sciences. From core foundational material to final year topics, SUMS books take a fresh and modern approach and are ideal for self-study or for a one- or two-semester course.
Each book includes numerous examples, problems
and fully-worked solutions.
ISBN 1-85233-015-5
http://www.springer.co.uk