Carlos S. Kubrusly
Spectral Theory of Operators on Hilbert Spaces
Carlos S. Kubrusly Electrical Engineering Department Catholic University of Rio de Janeiro Rio de Janeiro, RJ, Brazil
ISBN 978-0-8176-8327-6
e-ISBN 978-0-8176-8328-3
DOI 10.1007/978-0-8176-8328-3
Springer New York Dordrecht Heidelberg London
Library of Congress Control Number: 2012939729
Mathematics Subject Classification (2010): 47-XX, 47Axx
© Springer Science+Business Media, LLC 2012. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed on acid-free paper. Springer is part of Springer Science+Business Media (www.birkhauser-science.com)
To A & J & A & A & B
... and all my life long my instinct has been to abandon anything for which I have no talent; tennis, golf, dancing, sailing, all have been abandoned, and perhaps it is desperation which keeps me writing...
Graham Greene
Preface
This work is an introduction to the spectral theory of Hilbert space operators. My main goal is to offer a modern introductory textbook for a first course in spectral theory for graduate students, emphasizing recent aspects of the theory, with detailed proofs. The book is addressed to a wide audience consisting of graduate students in mathematics, as well as those studying statistics, economics, engineering, and physics. The text, however, can also be useful to working mathematicians delving into the spectral theory of Hilbert space operators, and for scientists wishing to apply spectral theory to their field. The prerequisite for this book is a first course in elementary functional analysis including, in particular, operator theory on Hilbert spaces at an introductory level. Of course, this includes a formal introduction to analysis, covering elements of measure theory and functions of a complex variable. However, every effort has been made to ensure that this book, despite its relative shortness, is as self-contained as possible. Chapter 1 summarizes the basic concepts from Hilbert space theory that will be required in the text. By no means is it intended to replace a formal introductory course on single operator theory. Its main purpose, besides summarizing the basics required for further chapters, is to unify notation and terminology. Chapter 2 discusses standard spectral results for (bounded linear) operators on Banach and Hilbert spaces, including the classical partition of the spectrum and spectral properties for specific classes of operators. Chapter 3 is devoted wholly to the Spectral Theorem for normal operators. After considering the compact case, it meticulously treats the general case, carrying out the proofs of both versions of the Spectral Theorem for the general case in great detail. The idea behind those highly detailed proofs is to discuss and explain delicate points, stressing some usually hidden features. 
The chapter closes with the Fuglede Theorems. Chapter 4 deals with functional calculus for normal operators, which depends on the Spectral Theorem, and also with analytic functional calculus (i.e., Riesz functional calculus). The same level of detail employed in the proofs of Chapter 3 is repeated here. Chapter 5 focuses
on Fredholm theory and compact perturbations of the spectrum, where a finer analysis of the spectrum is worked out, leading to further partitions involving the essential spectrum, the Weyl spectrum, and the Browder spectrum. The chapter closes with a discussion of Weyl’s and Browder’s Theorems, including some very recent results. The final section of each chapter is a section on Additional Propositions, consisting of either auxiliary results that will be required to support a proof in the main text, or further results extending some theorems proved in the main text. These are followed by a set of Notes, where each proposition is briefly discussed, and references are provided indicating proofs for all of them. The Additional Propositions can also be thought of as a set of proposed problems, and their respective Notes can be viewed as hints for solving them. At the end of each chapter, the reader will find a collection of suggested readings. This has a triple purpose: to offer a reasonable bibliography including most of the classics as well as some quite recent texts; to indicate where different approaches and proofs can be found; and to indicate where further results can be found. In this sense, some of the references are suggested as a second reading on the subject. The material in this book has been prepared to be covered in a one-semester graduate course. The resulting text is the outcome of attempts to meet the needs of a contemporary first course in spectral theory for an audience as described in the first paragraph, possessing the prerequisites listed in the second paragraph of this Preface. The logical dependence of the various sections (and chapters) is roughly linear and reflects approximately the minimum amount of material needed to proceed further. I have been lecturing on this subject for a long time; I often present it as a second part of an operator theory course.
As a result, I have benefited from the help of many friends, among students and colleagues, and I am very grateful to all of them. In particular, I wish to thank João Zanni for his painstaking quest for typos. I also wish to thank Torrey Adams, who copyedited the manuscript. Special thanks are due to an anonymous referee who made significant contributions throughout the text, and who corrected some important inaccuracies and mistakes that existed in the original version. Thanks are also due to the Catholic University of Rio de Janeiro for providing the release time that made this project possible, as well as to CNPq (Brazilian National Research Council) for a research grant.

Rio de Janeiro, March 2012
Carlos S. Kubrusly
Contents

Preface .......... VII

1 Preliminaries .......... 1
  1.1 Notation and Terminology .......... 1
  1.2 Inverse Theorems .......... 3
  1.3 Orthogonal Structure .......... 5
  1.4 Orthogonal Projections .......... 7
  1.5 Adjoint Operator .......... 9
  1.6 Normal Operators .......... 12
  1.7 Orthogonal Eigenspaces .......... 16
  1.8 Compact Operators .......... 17
  1.9 Additional Propositions .......... 21

2 Spectrum .......... 27
  2.1 Basic Spectral Properties .......... 27
  2.2 A Classical Partition of the Spectrum .......... 30
  2.3 Spectral Mapping .......... 35
  2.4 Spectral Radius .......... 38
  2.5 Numerical Radius .......... 41
  2.6 Spectrum of Compact Operators .......... 44
  2.7 Additional Propositions .......... 47

3 Spectral Theorem .......... 55
  3.1 Spectral Theorem for Compact Operators .......... 55
  3.2 Diagonalizable Operators .......... 59
  3.3 Spectral Measure .......... 63
  3.4 Spectral Theorem: General Case .......... 67
  3.5 Fuglede Theorems and Reducing Subspaces .......... 79
  3.6 Additional Propositions .......... 84

4 Functional Calculus .......... 91
  4.1 Rudiments of C*-Algebra .......... 91
  4.2 Functional Calculus for Normal Operators .......... 95
  4.3 Analytic Functional Calculus: Riesz Functional Calculus .......... 102
  4.4 Riesz Decomposition Theorem and Riesz Idempotents .......... 115
  4.5 Additional Propositions .......... 126

5 Fredholm Theory .......... 131
  5.1 Fredholm Operators and Fredholm Index .......... 131
  5.2 Essential Spectrum and Spectral Picture .......... 141
  5.3 Riesz Points and Weyl Spectrum .......... 153
  5.4 Ascent, Descent, and Browder Spectrum .......... 162
  5.5 Remarks on Browder and Weyl Theorems .......... 178
  5.6 Additional Propositions .......... 184

References .......... 187
Index .......... 193
1 Preliminaries
This chapter summarizes the background required for the rest of the book. Its purpose is threefold: notation, terminology, and basic results. By basic results we mean well-known theorems from single operator theory that will be needed in the sequel. As usual, the sets of nonnegative integers, positive integers, and all integers will be denoted by $\mathbb{N}_0$, $\mathbb{N}$, and $\mathbb{Z}$, and the sets of rational numbers, real numbers, and complex numbers will be denoted by $\mathbb{Q}$, $\mathbb{R}$, and $\mathbb{C}$, respectively.
1.1 Notation and Terminology

We assume the reader is familiar with the notions of linear space (or vector space), normed space, Banach space, inner product space, and Hilbert space. All spaces considered in this book are complex (i.e., over the complex field $\mathbb{C}$). Given a (complex) inner product space $X$, the sesquilinear form (linear in the first argument) $\langle\cdot\,;\cdot\rangle\colon X \times X \to \mathbb{C}$ stands for the inner product in $X$. We do not distinguish notation for norms. Thus $\|\cdot\|$ denotes the norm on a normed space $X$ (in particular, the norm generated by the inner product in an inner product space $X$ — i.e., $\|x\|^2 = \langle x\,;x\rangle$ for all $x$ in $X$) and also the (induced uniform) operator norm on $B[X,Y]$ (i.e., $\|T\| = \sup_{x \neq 0} \|Tx\|/\|x\|$ for all $T$ in $B[X,Y]$), where $B[X,Y]$ stands for the normed space of all bounded linear transformations of a normed space $X$ into a normed space $Y$. The induced uniform norm has the operator norm property, which means that if $X$, $Y$, and $Z$ are normed spaces over the same scalar field, and if $T \in B[X,Y]$ and $S \in B[Y,Z]$, then $ST \in B[X,Z]$ and $\|ST\| \le \|S\|\,\|T\|$. Between normed spaces, continuous linear transformation and bounded linear transformation are synonyms. A transformation $T\colon X \to Y$ between normed spaces $X$ and $Y$ is bounded if there exists a constant $\beta \ge 0$ such that $\|Tx\| \le \beta\|x\|$ for every $x$ in $X$. It is said to be bounded below if there is a constant $\alpha > 0$ such that $\alpha\|x\| \le \|Tx\|$ for every $x$ in $X$. An operator on a normed space $X$ is precisely a bounded linear (i.e., a continuous linear) transformation of $X$ into itself. Set $B[X] = B[X,X]$ for short: the normed algebra of all operators on $X$. If $X \neq \{0\}$, then $B[X]$ contains the identity operator $I$ and $\|I\| = 1$, so that $B[X]$ is a unital normed algebra. Moreover, if $X \neq \{0\}$
is a Banach space, then $B[X]$ is a unital (complex) Banach algebra. Since the induced uniform norm has the operator norm property, a trivial induction shows that $\|T^n\| \le \|T\|^n$ for every operator $T \in B[X]$ on a normed space $X$, for each integer $n \ge 0$. A transformation $T \in B[X,Y]$ is a contraction if $\|T\| \le 1$; equivalently, if $\|Tx\| \le \|x\|$ for every $x \in X$. It is a strict contraction if $\|T\| < 1$. If $X \neq \{0\}$, then a transformation $T$ is a contraction (or a strict contraction) if and only if $\sup_{x \neq 0}(\|Tx\|/\|x\|) \le 1$ (or $\sup_{x \neq 0}(\|Tx\|/\|x\|) < 1$). If $X \neq \{0\}$, then
$$\|T\| < 1 \;\Longrightarrow\; \|Tx\| < \|x\| \ \text{for every}\ 0 \neq x \in X \;\Longrightarrow\; \|T\| \le 1.$$
If $T$ satisfies the middle inequality, then it is called a proper contraction.

Let $X$ be a Banach space. A sequence $\{T_n\}$ of operators in $B[X]$ converges uniformly to an operator $T$ in $B[X]$ if $\|T_n - T\| \to 0$, and $\{T_n\}$ converges strongly to $T$ if $\|(T_n - T)x\| \to 0$ for every $x$ in $X$. If $X$ is a Hilbert space, then $\{T_n\}$ converges weakly to $T$ if $\langle (T_n - T)x\,;y\rangle \to 0$ for every $x, y$ in $X$ (or, equivalently, $\langle (T_n - T)x\,;x\rangle \to 0$ for every $x$ in $X$, if the Hilbert space is complex). These will be denoted by $T_n \overset{u}{\longrightarrow} T$, $T_n \overset{s}{\longrightarrow} T$, or $T_n \overset{w}{\longrightarrow} T$, respectively. The sequence $\{T_n\}$ is bounded if $\sup_n \|T_n\| < \infty$. It is readily verified that
$$T_n \overset{u}{\longrightarrow} T \;\Longrightarrow\; T_n \overset{s}{\longrightarrow} T \;\Longrightarrow\; T_n \overset{w}{\longrightarrow} T \;\Longrightarrow\; \sup_n \|T_n\| < \infty$$
(the last implication is a consequence of the Banach–Steinhaus Theorem).

By a subspace of a normed space we mean a closed linear manifold of it. The closure of a linear manifold of a normed space is a subspace. A subspace of a Banach space is a Banach space. A subspace $M$ of $X$ is nontrivial if $\{0\} \neq M \neq X$. Let $T \in B[X]$ be an operator on a normed space $X$. A subset $A$ of $X$ is $T$-invariant (or $A$ is an invariant subset for $T$) if $T(A) \subseteq A$ (i.e., if $Tx \in A$ whenever $x \in A$). An invariant linear manifold (invariant subspace) for $T$ is a linear manifold (subspace) of $X$ that, as a subset of $X$, is $T$-invariant. The zero space $\{0\}$ and the whole space $X$ (i.e., the trivial subspaces) are trivially invariant for every $T$ in $B[X]$. If $M$ is an invariant linear manifold for $T$, then its closure $M^-$ is an invariant subspace for $T$.

Let $X$ and $Y$ be linear spaces. The kernel (or null space) of a linear transformation $T\colon X \to Y$ is the inverse image of $\{0\}$ under $T$,
$$N(T) = T^{-1}(\{0\}) = \{x \in X : Tx = 0\},$$
which is a linear manifold of $X$. The range of $T$ is the image of $X$ under $T$,
$$R(T) = T(X) = \{y \in Y : y = Tx \ \text{for some}\ x \in X\},$$
which is a linear manifold of $Y$. If $X$ and $Y$ are normed spaces and $T$ lies in $B[X,Y]$ (i.e., if $T$ is bounded), then $N(T)$ is a subspace of $X$ (i.e., if $T$ is bounded, then $N(T)$ is closed).
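The induced operator norm of Section 1.1 can be illustrated numerically in finite dimensions, where every linear transformation is bounded and the induced norm equals the largest singular value of the representing matrix. The following sketch is our own (a NumPy illustration, not part of the text; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
T = rng.standard_normal((4, 4))

def op_norm(A):
    # induced uniform norm ||A|| = sup_{x != 0} ||Ax|| / ||x||,
    # which for a matrix is its largest singular value
    return np.linalg.svd(A, compute_uv=False)[0]

# boundedness: ||Tx|| <= ||T|| ||x|| for arbitrary vectors x
for _ in range(100):
    x = rng.standard_normal(4)
    assert np.linalg.norm(T @ x) <= op_norm(T) * np.linalg.norm(x) + 1e-12

# the operator norm property (submultiplicativity): ||S T|| <= ||S|| ||T||
assert op_norm(S @ T) <= op_norm(S) * op_norm(T) + 1e-12
# and hence ||T^n|| <= ||T||^n, e.g. for n = 3
assert op_norm(T @ T @ T) <= op_norm(T) ** 3 + 1e-9
```

The last assertion is the finite-dimensional face of the induction $\|T^n\| \le \|T\|^n$ noted above.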
If M and N are linear manifolds of a linear space X , then the ordinary sum (or simply, the sum) M + N is the linear manifold of X consisting of all sums u + v with u in M and v in N . The direct sum M ⊕ N is the linear space of all ordered pairs (u, v) with u in M and v in N , where vector addition and scalar multiplication are defined coordinatewise. Although ordinary and direct sums are different linear spaces, they are isomorphic if M ∩ N = {0}. A pair of linear manifolds M and N are algebraic complements of each other if M + N = X and M ∩ N = {0}. In this case, each vector x in X can be uniquely written as x = u+v with u in M and v in N . If two subspaces (linear manifolds) of a normed space are algebraic complements of each other, then they are called complementary subspaces (complementary linear manifolds).
1.2 Inverse Theorems

One of the fundamental theorems of (linear) functional analysis, the Open Mapping Theorem, says that a continuous linear transformation of a Banach space onto a Banach space is an open mapping (see, e.g., [66, Section 4.5] — a mapping is open if it maps open sets onto open sets). A crucial corollary of it is the following result.

Theorem 1.1. (Inverse Mapping Theorem). If $X$ and $Y$ are Banach spaces and if $T \in B[X,Y]$ is injective and surjective, then $T^{-1} \in B[Y,X]$.

Proof. An invertible transformation is precisely an injective and surjective one. Since the inverse of an invertible linear transformation is linear, and since an invertible transformation is open if and only if it has a continuous inverse, the stated result follows from the Open Mapping Theorem. □

Thus an injective and surjective bounded linear transformation between Banach spaces has a bounded (linear) inverse. Let $G[X,Y]$ denote the collection of all invertible (i.e., injective and surjective) elements from $B[X,Y]$ with a bounded inverse. (Recall that the inverse of any linear transformation is linear.) The above theorem says that, if $X$ and $Y$ are Banach spaces, then every invertible transformation from $B[X,Y]$ lies in $G[X,Y]$. Set $G[X] = G[X,X]$, which forms a group under multiplication whenever $X$ is a Banach space, viz., the group of all invertible operators from $B[X]$ (every operator in $G[X]$ has an inverse in $G[X]$). Here is a useful corollary of the Inverse Mapping Theorem.

Theorem 1.2. (Bounded Inverse Theorem). Let $X$ and $Y$ be Banach spaces and take any $T \in B[X,Y]$. The following assertions are equivalent.
(a) $T$ has a bounded inverse on its range.
(b) $T$ is bounded below.
(c) $T$ is injective and has a closed range.
Proof. Part (i). The equivalence between (a) and (b) still holds if $X$ and $Y$ are just normed spaces. Indeed, if there exists $T^{-1} \in B[R(T), X]$, then there exists a constant $\beta > 0$ such that $\|T^{-1}y\| \le \beta\|y\|$ for every $y \in R(T)$. Take an arbitrary $x \in X$ so that $Tx \in R(T)$. Thus $\|x\| = \|T^{-1}Tx\| \le \beta\|Tx\|$, and so $\frac{1}{\beta}\|x\| \le \|Tx\|$. Hence (a) implies (b). Conversely, if (b) holds true, then $0 < \alpha\|x\| \le \|Tx\|$ for every nonzero $x$ in $X$, and so $N(T) = \{0\}$, which means that $T$ has a (linear) inverse on its range — a linear transformation is injective if and only if it has a null kernel. Take an arbitrary $y \in R(T)$ so that $y = Tx$ for some $x \in X$. Thus $\|T^{-1}y\| = \|T^{-1}Tx\| = \|x\| \le \frac{1}{\alpha}\|Tx\| = \frac{1}{\alpha}\|y\|$, which means that $T^{-1}$ is bounded. Therefore (b) implies (a).

Part (ii). Take an arbitrary $R(T)$-valued convergent sequence $\{y_n\}$. Since each $y_n$ lies in $R(T)$, there exists an $X$-valued sequence $\{x_n\}$ such that $y_n = Tx_n$ for each $n$. Since $\{Tx_n\}$ converges in $Y$, it is a Cauchy sequence in $Y$. Thus, if $T$ is bounded below, then there exists $\alpha > 0$ such that
$$0 \le \alpha\|x_m - x_n\| \le \|T(x_m - x_n)\| = \|Tx_m - Tx_n\|$$
for every $m, n$, so that $\{x_n\}$ is a Cauchy sequence in $X$, and thus it converges in $X$ to, say, $x \in X$ if $X$ is a Banach space. Since $T$ is continuous, it preserves convergence, so that $y_n = Tx_n \to Tx$, which implies that the (unique) limit of $\{y_n\}$ lies in $R(T)$. Conclusion: $R(T)$ is closed in $Y$ by the classical Closed Set Theorem. That is, $R(T)^- = R(T)$ whenever $X$ is a Banach space, where $R(T)^-$ stands for the closure of $R(T)$. Moreover, since (b) trivially implies $N(T) = \{0\}$, it follows that (b) implies (c). On the other hand, if $N(T) = \{0\}$, then $T$ is injective. If, in addition, the linear manifold $R(T)$ is closed in the Banach space $Y$, then it is itself a Banach space, so that $T\colon X \to R(T)$ is an injective and surjective bounded linear transformation of the Banach space $X$ onto the Banach space $R(T)$. Therefore, its inverse $T^{-1}$ lies in $B[R(T), X]$ by the Inverse Mapping Theorem. That is, (c) implies (a). □
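In finite dimensions the best "bounded below" constant $\alpha$ of Theorem 1.2 is the smallest singular value, and the reciprocal relation between $\alpha$ and the norm of the inverse in part (i) can be checked directly. A numerical sketch of our own (not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((5, 5))

# smallest singular value: the largest alpha with alpha*||x|| <= ||Tx||
alpha = np.linalg.svd(T, compute_uv=False)[-1]
assert alpha > 0  # T is injective (almost surely for a random matrix)

for _ in range(200):
    x = rng.standard_normal(5)
    assert alpha * np.linalg.norm(x) <= np.linalg.norm(T @ x) + 1e-10

# (a) <=> (b): the inverse is bounded, with norm exactly 1/alpha
inv_norm = np.linalg.svd(np.linalg.inv(T), compute_uv=False)[0]
assert abs(inv_norm - 1.0 / alpha) < 1e-8 * inv_norm
```

Here injectivity, closed range, and bounded invertibility coincide automatically, since every linear manifold of a finite-dimensional space is closed.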
Notice that if one of the Banach spaces $X$ or $Y$ is zero, then the results in the previous theorems are either void or hold trivially. Indeed, if $X \neq \{0\}$ and $Y = \{0\}$, then the unique operator in $B[X,Y]$ is not injective (thus not bounded below and not invertible); if $X = \{0\}$ and $Y \neq \{0\}$, then the unique operator in $B[X,Y]$ is not surjective but is bounded below (thus injective) and has a bounded inverse on its singleton range; if $X = \{0\}$ and $Y = \{0\}$, the unique operator in $B[X,Y]$ has an inverse in $B[Y,X]$. The next theorem is a rather useful result establishing a power series expansion for $(\lambda I - T)^{-1}$. This is the Neumann expansion (or Neumann series), due to C.G. Neumann. The condition $\|T\| < |\lambda|$ will be weakened in Corollary 2.12.

Theorem 1.3. (Neumann Expansion). If $T \in B[X]$ is an operator on a Banach space $X$, and if $\lambda$ is any scalar such that $\|T\| < |\lambda|$, then $\lambda I - T$ has an inverse in $B[X]$ given by the following uniformly convergent series:
$$(\lambda I - T)^{-1} = \frac{1}{\lambda}\sum_{k=0}^{\infty}\Big(\frac{T}{\lambda}\Big)^k.$$
Proof. Take an arbitrary operator $T \in B[X]$ and an arbitrary nonzero scalar $\lambda$. If $\|T\| < |\lambda|$, then $\sup_n \sum_{k=0}^{n} \big(\|T\|/|\lambda|\big)^k < \infty$. Thus, since $\sum_{k=0}^{n} \|(T/\lambda)^k\| \le \sum_{k=0}^{n} \big(\|T\|/|\lambda|\big)^k$ for every $n \ge 0$, it follows that the increasing sequence of nonnegative numbers $\sum_{k=0}^{n} \|(T/\lambda)^k\|$ is bounded, and hence it converges in $\mathbb{R}$, which means that the sequence $\{(T/\lambda)^k\}$ is absolutely summable, and so it is summable (since $B[X]$ is a Banach space — see, e.g., [66, Proposition 4.4]). That is, the series $\sum_{k=0}^{\infty} (T/\lambda)^k$ converges in $B[X]$. Equivalently, there is an operator in $B[X]$, say $\sum_{k=0}^{\infty} (T/\lambda)^k$, such that
$$\sum_{k=0}^{n}\Big(\frac{T}{\lambda}\Big)^k \overset{u}{\longrightarrow} \sum_{k=0}^{\infty}\Big(\frac{T}{\lambda}\Big)^k.$$
Now it is also readily verified by induction that, for every $n \ge 0$,
$$(\lambda I - T)\,\frac{1}{\lambda}\sum_{k=0}^{n}\Big(\frac{T}{\lambda}\Big)^k = \frac{1}{\lambda}\sum_{k=0}^{n}\Big(\frac{T}{\lambda}\Big)^k(\lambda I - T) = I - \Big(\frac{T}{\lambda}\Big)^{n+1}.$$
But $(T/\lambda)^n \overset{u}{\longrightarrow} O$ (since $\|(T/\lambda)^n\| \le (\|T\|/|\lambda|)^n \to 0$ when $\|T\| < |\lambda|$), and so
$$(\lambda I - T)\,\frac{1}{\lambda}\sum_{k=0}^{n}\Big(\frac{T}{\lambda}\Big)^k \overset{u}{\longrightarrow} I \quad\text{and}\quad \frac{1}{\lambda}\sum_{k=0}^{n}\Big(\frac{T}{\lambda}\Big)^k(\lambda I - T) \overset{u}{\longrightarrow} I.$$
Hence
$$(\lambda I - T)\,\frac{1}{\lambda}\sum_{k=0}^{\infty}\Big(\frac{T}{\lambda}\Big)^k = \frac{1}{\lambda}\sum_{k=0}^{\infty}\Big(\frac{T}{\lambda}\Big)^k(\lambda I - T) = I,$$
and so $\frac{1}{\lambda}\sum_{k=0}^{\infty}(T/\lambda)^k \in B[X]$ is the inverse of $\lambda I - T \in B[X]$. □

Remark. Observe that, under the hypothesis of Theorem 1.3,
$$\|(\lambda I - T)^{-1}\| \le \frac{1}{|\lambda|}\sum_{k=0}^{\infty}\Big(\frac{\|T\|}{|\lambda|}\Big)^k = \big(|\lambda| - \|T\|\big)^{-1}.$$
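The convergence of the Neumann expansion and the bound in the remark are easy to verify numerically for matrices. A sketch of our own (NumPy; the scaling of $T$ and the choice $\lambda = 1$ are ours, chosen so that $\|T\| < |\lambda|$):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4))
T /= 2 * np.linalg.norm(T, 2)   # rescale so that ||T|| = 1/2
lam = 1.0                        # then ||T|| < |lam|, as Theorem 1.3 requires

# partial Neumann sums: (1/lam) * sum_{k=0}^{n} (T/lam)^k
S = np.zeros((4, 4))
P = np.eye(4)                    # running power (T/lam)^k, starting at k = 0
for k in range(200):
    S += P
    P = P @ (T / lam)
approx = S / lam

exact = np.linalg.inv(lam * np.eye(4) - T)
assert np.linalg.norm(approx - exact, 2) < 1e-12

# the remark's bound: ||(lam I - T)^{-1}|| <= 1 / (|lam| - ||T||)
assert np.linalg.norm(exact, 2) <= 1.0 / (abs(lam) - np.linalg.norm(T, 2)) + 1e-12
```

Since $\|T/\lambda\| = 1/2$, the tail of the series decays geometrically, so 200 terms are far more than enough for machine precision.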
1.3 Orthogonal Structure

Two vectors $x$ and $y$ in an inner product space $X$ are orthogonal if $\langle x\,;y\rangle = 0$. In this case we write $x \perp y$. Two subsets $A$ and $B$ of $X$ are orthogonal (notation: $A \perp B$) if every vector in $A$ is orthogonal to every vector in $B$. The orthogonal complement of a set $A$ is the set $A^\perp$ made up of all vectors in $X$ that are orthogonal to every vector of $A$,
$$A^\perp = \{x \in X : \langle x\,;y\rangle = 0 \ \text{for every}\ y \in A\},$$
which is a subspace (i.e., a closed linear manifold) of $X$, and therefore $A^\perp = (A^\perp)^- = (A^-)^\perp$. If $M$ and $N$ are orthogonal ($M \perp N$) linear manifolds of an inner product space $X$, then $M \cap N = \{0\}$. In particular, $M \cap M^\perp = \{0\}$. If $M \perp N$, then $M \oplus N$ is an orthogonal direct sum. If $X$ is a Hilbert space, then $M^{\perp\perp} = M^-$, and $M^\perp = \{0\}$ if and only if $M^- = X$. Moreover, if $M$ and $N$ are orthogonal subspaces of a Hilbert space $X$, then $M + N$ is a subspace
of $X$. In this case, $M + N \cong M \oplus N$: ordinary and direct sums of orthogonal subspaces of a Hilbert space are unitarily equivalent — the inner product in $M \oplus N$ is given by $\langle (u_1, v_1)\,;(u_2, v_2)\rangle = \langle u_1\,;u_2\rangle + \langle v_1\,;v_2\rangle$ for every $(u_1, v_1)$ and $(u_2, v_2)$ in $M \oplus N$. Two linear manifolds $M$ and $N$ are orthogonal complementary if $M + N = X$ and $M \perp N$. If $M$ and $N$ are orthogonal complementary subspaces of a Hilbert space $X$, then we identify the ordinary sum $M + N$ with the orthogonal sum $M \oplus N$, and write (an abuse of notation, actually) $X = M \oplus N$ (instead of $X \cong M \oplus N$). A central result of Hilbert space geometry, the Projection Theorem, says that, if $H$ is a Hilbert space, and $M$ is any subspace of $H$, then $H = M + M^\perp$. Ordinary and direct sums of orthogonal subspaces are unitarily equivalent. Thus the Projection Theorem is equivalently stated in terms of orthogonal direct sums up to unitary equivalence, $H \cong M \oplus M^\perp$, often written as $H = M \oplus M^\perp$. This leads to the notation $M^\perp = H \ominus M$, if $M$ is a subspace of $H$.

For a nonempty subset $A$ of a normed space $X$, let $\operatorname{span} A$ denote the span (or linear span) of $A$: the linear manifold of $X$ consisting of all (finite) linear combinations of vectors of $A$; equivalently, the smallest linear manifold of $X$ that includes $A$ (i.e., the intersection of all linear manifolds of $X$ that include $A$). Its closure $(\operatorname{span} A)^-$, denoted by $\bigvee A$ (and sometimes also called the span of $A$), is a subspace of $X$. Let $\{M_\gamma\}_{\gamma\in\Gamma}$ be a family of subspaces of $X$ indexed by an arbitrary (not necessarily countable) nonempty index set $\Gamma$. The subspace $\big(\operatorname{span}\bigcup_{\gamma\in\Gamma} M_\gamma\big)^-$ of $X$ is the topological sum of $\{M_\gamma\}_{\gamma\in\Gamma}$, usually denoted by $\bigvee_{\gamma\in\Gamma} M_\gamma$ or $\big(\sum_{\gamma\in\Gamma} M_\gamma\big)^-$. That is,
$$\bigvee_{\gamma\in\Gamma} M_\gamma = \Big(\operatorname{span}\bigcup_{\gamma\in\Gamma} M_\gamma\Big)^- = \Big(\sum_{\gamma\in\Gamma} M_\gamma\Big)^-.$$

A pivotal result of Hilbert space theory is the Orthogonal Structure Theorem, which reads as follows. Let $\{M_\gamma\}_{\gamma\in\Gamma}$ be a family of pairwise orthogonal subspaces of a Hilbert space $H$. For every $x \in \big(\sum_{\gamma\in\Gamma} M_\gamma\big)^-$ there is a unique summable family $\{u_\gamma\}_{\gamma\in\Gamma}$ of vectors $u_\gamma \in M_\gamma$ such that
$$x = \sum_{\gamma\in\Gamma} u_\gamma.$$
Moreover, $\|x\|^2 = \sum_{\gamma\in\Gamma} \|u_\gamma\|^2$. Conversely, if $\{u_\gamma\}_{\gamma\in\Gamma}$ is a square-summable family of vectors in $H$ with $u_\gamma \in M_\gamma$ for each $\gamma \in \Gamma$, then $\{u_\gamma\}_{\gamma\in\Gamma}$ is summable and $\sum_{\gamma\in\Gamma} u_\gamma$ lies in $\big(\sum_{\gamma\in\Gamma} M_\gamma\big)^-$. (See, e.g., [66, Section 5.5].)
For the definition of summable family and square-summable family see [66, Definition 5.26]. A summable family of vectors in a normed space has only a countable number of nonzero vectors, and so the sum $\sum_{\gamma\in\Gamma} u_\gamma$ has only a countable number of nonzero summands $u_\gamma$ (see, e.g., [66, Corollary 5.28]).
Take the direct sum $\bigoplus_{\gamma\in\Gamma} M_\gamma$ of pairwise orthogonal subspaces $M_\gamma$ of a Hilbert space $H$ made up of square-summable families of vectors. That is, $M_\alpha \perp M_\beta$ for $\alpha \neq \beta$, and $x = \{x_\gamma\}_{\gamma\in\Gamma} \in \bigoplus_{\gamma\in\Gamma} M_\gamma$ if and only if each $x_\gamma \in M_\gamma$ and $\sum_{\gamma\in\Gamma} \|x_\gamma\|^2 < \infty$. This is a Hilbert space with inner product
$$\langle x\,;y\rangle = \sum_{\gamma\in\Gamma} \langle x_\gamma\,;y_\gamma\rangle$$
for every $x = \{x_\gamma\}_{\gamma\in\Gamma}$ and $y = \{y_\gamma\}_{\gamma\in\Gamma}$ in $\bigoplus_{\gamma\in\Gamma} M_\gamma$. A consequence (a restatement, actually) of the Orthogonal Structure Theorem says that the orthogonal direct sum $\bigoplus_{\gamma\in\Gamma} M_\gamma$ and the topological sum $\big(\sum_{\gamma\in\Gamma} M_\gamma\big)^-$ are unitarily equivalent Hilbert spaces. That is,
$$\bigoplus_{\gamma\in\Gamma} M_\gamma \cong \Big(\sum_{\gamma\in\Gamma} M_\gamma\Big)^-.$$
If, in addition, $\{M_\gamma\}_{\gamma\in\Gamma}$ spans $H$ (i.e., $\big(\sum_{\gamma\in\Gamma} M_\gamma\big)^- = H$), then it is usual to express the above identification by writing
$$H = \bigoplus_{\gamma\in\Gamma} M_\gamma.$$
Similarly, the orthogonal direct sum $\bigoplus_{\gamma\in\Gamma} H_\gamma$ of a collection of Hilbert spaces $\{H_\gamma\}_{\gamma\in\Gamma}$ (not necessarily subspaces of any larger Hilbert space) consists of the square-summable families $\{x_\gamma\}_{\gamma\in\Gamma}$ (i.e., $\sum_{\gamma\in\Gamma} \|x_\gamma\|^2 < \infty$), with each $x_\gamma$ in $H_\gamma$. This is a Hilbert space with inner product given as above.

Let $H$ and $K$ be Hilbert spaces. The direct sum of $T \in B[H]$ and $S \in B[K]$ is the operator $T \oplus S \in B[H \oplus K]$ such that $(T \oplus S)(x, y) = (Tx, Sy)$ for every $(x, y) \in H \oplus K$. If $\{T_\gamma\}_{\gamma\in\Gamma}$ is a family of operators with $T_\gamma \in B[H_\gamma]$, then their direct sum $T\colon \bigoplus_{\gamma\in\Gamma} H_\gamma \to \bigoplus_{\gamma\in\Gamma} H_\gamma$ is the mapping such that $T\{x_\gamma\}_{\gamma\in\Gamma} = \{T_\gamma x_\gamma\}_{\gamma\in\Gamma}$ for every $\{x_\gamma\}_{\gamma\in\Gamma} \in \bigoplus_{\gamma\in\Gamma} H_\gamma$, which is an operator in $B\big[\bigoplus_{\gamma\in\Gamma} H_\gamma\big]$ with $\|T\| = \sup_{\gamma\in\Gamma} \|T_\gamma\|$, denoted by
$$T = \bigoplus_{\gamma\in\Gamma} T_\gamma.$$
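For finitely many summands, the direct sum of operators is just a block-diagonal matrix, and the norm formula $\|T\| = \sup_{\gamma} \|T_\gamma\|$ reduces to a maximum over the blocks. A sketch of our own construction (the block sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
blocks = [rng.standard_normal((k, k)) for k in (2, 3, 4)]  # the summands T_gamma

# assemble the direct sum as a block-diagonal matrix
n = sum(b.shape[0] for b in blocks)
T = np.zeros((n, n))
i = 0
for b in blocks:
    k = b.shape[0]
    T[i:i + k, i:i + k] = b   # place each T_gamma on the diagonal
    i += k

norm = lambda A: np.linalg.norm(A, 2)   # spectral (induced 2-) norm
# ||T|| = max over the blocks of ||T_gamma||
assert abs(norm(T) - max(norm(b) for b in blocks)) < 1e-12
```

Each block acts on its own coordinate slice, which is precisely the coordinatewise action $T\{x_\gamma\} = \{T_\gamma x_\gamma\}$ stated above.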
1.4 Orthogonal Projections

A function $F\colon X \to X$ of a set $X$ into itself is idempotent if $F^2 = F$, where $F^2$ stands for the composition of $F$ with itself. A projection $E$ is an idempotent (i.e., $E = E^2$) linear transformation $E\colon X \to X$ of a linear space $X$ into itself. If $E$ is a projection, then so is $I - E$, which is referred to as the complementary projection of $E$, and their null spaces and ranges are related as follows:
$$R(E) = N(I - E) \quad\text{and}\quad N(E) = R(I - E).$$
The range of a projection is the set of all its fixed points,
$$R(E) = \{x \in X : Ex = x\},$$
which forms with the kernel a pair of algebraic complements,
$$R(E) + N(E) = X \quad\text{and}\quad R(E) \cap N(E) = \{0\}.$$
An orthogonal projection $E\colon X \to X$ on an inner product space $X$ is a projection such that $R(E) \perp N(E)$. If $E$ is an orthogonal projection on $X$, then so is the complementary projection $I - E\colon X \to X$. Every orthogonal projection is bounded (i.e., continuous — $E \in B[H]$). In fact, every orthogonal projection is a contraction. Indeed, every $x \in X$ can be written as $x = u + v$ with $u \in R(E)$ and $v \in N(E)$, so that $\|Ex\|^2 = \|Eu + Ev\|^2 = \|Eu\|^2 = \|u\|^2 \le \|x\|^2$, since $\|x\|^2 = \|u\|^2 + \|v\|^2$ by the Pythagorean Theorem (once $u \perp v$). Hence $R(E)$ is a subspace (i.e., it is closed, because $R(E) = N(I - E)$ and $I - E$ is bounded). Moreover, $N(E) = R(E)^\perp$; equivalently, $R(E) = N(E)^\perp$. Conversely, if any of these properties holds for a projection $E$, then it is an orthogonal projection. Two orthogonal projections $E_1$ and $E_2$ on $X$ are said to be orthogonal to each other (or mutually orthogonal) if $R(E_1) \perp R(E_2)$ (which is equivalent to saying that $E_1 E_2 = E_2 E_1 = O$). Let $\Gamma$ be an arbitrary index set (not necessarily countable). If $\{E_\gamma\}_{\gamma\in\Gamma}$ is a family of orthogonal projections on an inner product space $X$ that are orthogonal to each other ($R(E_\alpha) \perp R(E_\beta)$ for $\alpha \neq \beta$), then $\{E_\gamma\}_{\gamma\in\Gamma}$ is an orthogonal family of orthogonal projections on $X$. An orthogonal sequence of orthogonal projections $\{E_k\}_{k=0}^{\infty}$ is similarly defined. If $\{E_\gamma\}_{\gamma\in\Gamma}$ is an orthogonal family of orthogonal projections and
$$\sum_{\gamma\in\Gamma} E_\gamma x = x \quad\text{for every}\ x \in X,$$
then $\{E_\gamma\}_{\gamma\in\Gamma}$ is called a resolution of the identity on $X$. (Recall that for each $x \in X$ the sum $x = \sum_{\gamma\in\Gamma} E_\gamma x$ has only a countable number of nonzero summands.) If $\{E_k\}_{k=0}^{\infty}$ is an infinite sequence, then the above identity in $X$ means convergence in the strong operator topology:
$$\sum_{k=0}^{n} E_k \overset{s}{\longrightarrow} I \quad\text{as}\ n \to \infty.$$
If $\{E_k\}_{k=0}^{n}$ is a finite family, then the above identity in $X$ obviously coincides with the identity $\sum_{k=0}^{n} E_k = I$ in $B[X]$ (e.g., $\{E_1, E_2\}$ is a resolution of the identity on $X$ whenever $E_2 = I - E_1$ is the complementary projection of the orthogonal projection $E_1$ on $X$). The Projection Theorem and the Orthogonal Structure Theorem of the previous section can be written in terms of orthogonal projections. Indeed, for every subspace $M$ of a Hilbert space $H$ there exists a unique orthogonal
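The defining properties of an orthogonal projection listed above are easy to verify concretely. In the following sketch (our own example; the subspace is the column space of a random matrix, and $E = QQ^*$ with $Q$ an orthonormal basis is a standard construction, not a formula from the text):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 2))     # M = R(A), a 2-dimensional subspace
Q, _ = np.linalg.qr(A)              # orthonormal basis of M
E = Q @ Q.T                         # orthogonal projection with R(E) = M

assert np.allclose(E @ E, E)        # idempotent: E^2 = E
assert np.allclose(E, E.T)          # symmetric, so R(E) is orthogonal to N(E)
assert np.linalg.norm(E, 2) <= 1 + 1e-12   # a contraction: ||Ex|| <= ||x||

F = np.eye(6) - E                   # the complementary projection
assert np.allclose(F @ F, F)        # also a projection
assert np.allclose(E @ F, 0)        # E and I - E are mutually orthogonal
# {E, I - E} is then a (finite) resolution of the identity: E + F = I
assert np.allclose(E + F, np.eye(6))
```

Each assertion matches one property in the paragraph above: idempotence, orthogonality of range and kernel, the contraction bound, and the two-element resolution of the identity $\{E_1, I - E_1\}$.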
projection $E \in B[H]$ such that $R(E) = M$, which is referred to as the orthogonal projection on $M$. This is an equivalent version of the Projection Theorem. An equivalent version of the Orthogonal Structure Theorem reads as follows. If $\{E_\gamma\}_{\gamma\in\Gamma}$ is a resolution of the identity on $H$, then
$$H = \Big(\sum_{\gamma\in\Gamma} R(E_\gamma)\Big)^-.$$
Conversely, if $\{M_\gamma\}_{\gamma\in\Gamma}$ is a family of pairwise orthogonal subspaces of $H$ such that $H = \big(\sum_{\gamma\in\Gamma} M_\gamma\big)^-$, then the family of orthogonal projections $\{E_\gamma\}_{\gamma\in\Gamma}$, with each $E_\gamma \in B[H]$ the orthogonal projection on each $M_\gamma$ (i.e., with $R(E_\gamma) = M_\gamma$), is a resolution of the identity on $H$. Again, since $\{R(E_\gamma)\}_{\gamma\in\Gamma}$ is a family of pairwise orthogonal subspaces, the above orthogonal decomposition of $H$ is unitarily equivalent to the orthogonal direct sum $\bigoplus_{\gamma\in\Gamma} R(E_\gamma)$; that is, $\big(\sum_{\gamma\in\Gamma} R(E_\gamma)\big)^- \cong \bigoplus_{\gamma\in\Gamma} R(E_\gamma)$, which is commonly written as
$$H = \bigoplus_{\gamma\in\Gamma} R(E_\gamma).$$
(See, e.g., [66, Theorems 5.52 and 5.59].)

Let $\{E_\gamma\}_{\gamma\in\Gamma}$ be a resolution of the identity on a nonzero Hilbert space $H$ made up of nonzero projections, let $\{\lambda_\gamma\}_{\gamma\in\Gamma}$ be a similarly indexed family of scalars, and set
$$D(T) = \big\{x \in H : \{\lambda_\gamma E_\gamma x\}_{\gamma\in\Gamma} \ \text{is a summable family of vectors in}\ H\big\}.$$
The mapping $T\colon D(T) \to H$, defined by
$$Tx = \sum_{\gamma\in\Gamma} \lambda_\gamma E_\gamma x \quad\text{for every}\ x \in D(T),$$
is called a weighted sum of projections. It can be shown that the domain $D(T)$ of a weighted sum of projections is a linear manifold of $H$ and that $T$ is a linear transformation. Moreover, $T$ is bounded if and only if $\{\lambda_\gamma\}_{\gamma\in\Gamma}$ is a bounded family of scalars, which happens if and only if $D(T) = H$. In this case $T \in B[H]$ and $\|T\| = \sup_{\gamma\in\Gamma} |\lambda_\gamma|$. (See, e.g., [66, Proposition 5.61].)
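The simplest weighted sum of projections is a diagonal operator: take the rank-one coordinate projections as the resolution of the identity. A finite-dimensional sketch of our own (the weights are an arbitrary choice):

```python
import numpy as np

lam = np.array([3.0, -1.0, 0.5, 2.0])   # the family {lam_gamma}
n = len(lam)
I = np.eye(n)
E = [np.outer(I[k], I[k]) for k in range(n)]   # E_k = e_k e_k^T, rank-one projections

# {E_k} is a resolution of the identity: pairwise orthogonal, summing to I
assert np.allclose(sum(E), I)
for j in range(n):
    for k in range(j + 1, n):
        assert np.allclose(E[j] @ E[k], 0)

# the weighted sum T = sum_k lam_k E_k is the diagonal operator diag(lam)
T = sum(l * P for l, P in zip(lam, E))
assert np.allclose(T, np.diag(lam))
# and ||T|| = sup_k |lam_k|
assert abs(np.linalg.norm(T, 2) - np.max(np.abs(lam))) < 1e-12
```

In Chapter 3 this picture reappears: the Spectral Theorem represents a normal operator as exactly such a weighted sum, with the finite sum replaced by a spectral measure.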
1.5 Adjoint Operator

Throughout the text, $H$ and $K$ are Hilbert spaces. Take any $T \in B[H,K]$. The adjoint $T^*$ of $T$ is the unique mapping of $K$ into $H$ such that
$$\langle Tx\,;y\rangle = \langle x\,;T^*y\rangle \quad\text{for every}\ x \in H \ \text{and}\ y \in K.$$
In fact, $T^* \in B[K,H]$ (i.e., $T^*$ is bounded and linear), $T^{**} = T$, and
$$\|T^*\|^2 = \|T^*T\| = \|TT^*\| = \|T\|^2.$$
Moreover, if $Z$ is a Hilbert space and $S \in B[K,Z]$, then $ST \in B[H,Z]$ is such that $(ST)^* = T^*S^*$ (see, e.g., [66, Proposition 5.65 and Section 5.12]).

Lemma 1.4. For every $T \in B[H,K]$,
$$N(T) = R(T^*)^\perp = N(T^*T) \quad\text{and}\quad R(T)^- = N(T^*)^\perp = R(TT^*)^-.$$

Proof. $x \in R(T^*)^\perp$ if and only if $\langle x\,;T^*y\rangle = 0$ or, equivalently, $\langle Tx\,;y\rangle = 0$, for every $y$ in $K$. This means that $Tx = 0$; that is, $x \in N(T)$. Hence $R(T^*)^\perp = N(T)$. Since $\|Tx\|^2 = \langle Tx\,;Tx\rangle = \langle T^*Tx\,;x\rangle$ for every $x$ in $H$, it follows that $N(T^*T) \subseteq N(T)$. But $N(T) \subseteq N(T^*T)$ trivially, and so $N(T) = N(T^*T)$. This completes the proof of the first identities. Since these hold for every $T$ in $B[H,K]$, they also hold for $T^*$ in $B[K,H]$ and for $TT^*$ in $B[K]$. Thus
$$R(T)^- = R(T^{**})^{\perp\perp} = N(T^*)^\perp = N(T^{**}T^*)^\perp = N(TT^*)^\perp = R((TT^*)^*)^{\perp\perp} = R(T^{**}T^*)^{\perp\perp} = R(TT^*)^-,$$
because $M^{\perp\perp} = M^-$ for every linear manifold $M$ and $T^{**} = T$. □
Lemma 1.5. R(T) is closed in K if and only if R(T∗) is closed in H.

Proof. Set T⊥ = T|_{N(T)⊥} ∈ B[N(T)⊥, K], the restriction of T to N(T)⊥. Recall that H = N(T) + N(T)⊥. Thus every x ∈ H can be written as x = u + v with u ∈ N(T) and v ∈ N(T)⊥. If y ∈ R(T), then y = Tx = Tu + Tv = Tv = T|_{N(T)⊥}v for some x ∈ H, and so y ∈ R(T|_{N(T)⊥}). Hence R(T) ⊆ R(T|_{N(T)⊥}). Since R(T|_{N(T)⊥}) ⊆ R(T), it follows that

R(T⊥) = R(T)  and  N(T⊥) = {0}.

If R(T) = R(T)−, then take the inverse T⊥⁻¹ in B[R(T), N(T)⊥] (Theorem 1.2). For any w ∈ N(T)⊥ consider the functional f_w: R(T) → ℂ defined by f_w(y) = ⟨T⊥⁻¹y ; w⟩ for every y ∈ R(T). Since T⊥⁻¹ is linear and the inner product is linear in the first argument, it follows that f_w is linear. Since |f_w(y)| ≤ ‖T⊥⁻¹‖‖w‖‖y‖ for every y ∈ R(T), it follows that f_w is bounded. Moreover, R(T) is a subspace of the Hilbert space K (it is closed in K), and so a Hilbert space itself. Then the Riesz Representation Theorem in Hilbert spaces (see, e.g., [66, Theorem 5.62]) says that there exists z_w in the Hilbert space R(T) such that f_w(y) = ⟨y ; z_w⟩
for every y ∈ R(T). Thus, for an arbitrary x ∈ H, written as x = u + v with u ∈ N(T) and v ∈ N(T)⊥, we get

⟨x ; T∗z_w⟩ = ⟨Tx ; z_w⟩ = ⟨Tu ; z_w⟩ + ⟨Tv ; z_w⟩ = ⟨Tv ; z_w⟩ = f_w(Tv) = ⟨T⊥⁻¹Tv ; w⟩ = ⟨T⊥⁻¹T⊥v ; w⟩ = ⟨v ; w⟩ = ⟨u ; w⟩ + ⟨v ; w⟩ = ⟨x ; w⟩.

Hence ⟨x ; T∗z_w − w⟩ = 0 for every x ∈ H, which means that T∗z_w = w. Thus w ∈ R(T∗). This shows that N(T)⊥ ⊆ R(T∗). On the other hand, R(T∗) ⊆ R(T∗)− = N(T)⊥ by Lemma 1.4 (since T∗∗ = T), so that R(T∗) = R(T∗)−. Therefore R(T) = R(T)− implies R(T∗) = R(T∗)−, and the converse also holds because T∗∗ = T. That is,

R(T) = R(T)−  if and only if  R(T∗) = R(T∗)−.
Let T ∈ B[H] be an operator on a Hilbert space H and let M be a subspace of H. If M and its orthogonal complement M⊥ are both T-invariant (i.e., if T(M) ⊆ M and T(M⊥) ⊆ M⊥), then we say that M reduces T (or M is a reducing subspace for T). An operator is reducible if it has a nontrivial reducing subspace (otherwise it is called irreducible), and reductive if all its invariant subspaces are reducing. Since M = {0} if and only if M⊥ = H, and M⊥ = {0} if and only if M = H, it follows that an operator T on a Hilbert space H is reducible if there exists a subspace M of H such that both M and M⊥ are nonzero and T-invariant or, equivalently, if M is nontrivial and invariant for both T and T∗, as we shall see next.

Lemma 1.6. Take an operator T on a Hilbert space H. A subspace M of H is T-invariant if and only if M⊥ is T∗-invariant. M reduces T if and only if M is invariant for both T and T∗, which implies that (T|M)∗ = T∗|M.

Proof. Take any vector y ∈ M⊥. If Tx ∈ M whenever x ∈ M, then ⟨x ; T∗y⟩ = ⟨Tx ; y⟩ = 0 for every x ∈ M, and so T∗y ⊥ M; that is, T∗y ∈ M⊥. Thus T(M) ⊆ M implies T∗(M⊥) ⊆ M⊥. Since this holds for every operator in B[H], T∗(M⊥) ⊆ M⊥ implies T∗∗(M⊥⊥) ⊆ M⊥⊥, and hence T(M) ⊆ M (because T∗∗ = T and M⊥⊥ = M− = M). Then T(M) ⊆ M if and only if T∗(M⊥) ⊆ M⊥, and so T(M⊥) ⊆ M⊥ if and only if T∗(M) ⊆ M. Therefore T(M) ⊆ M and T(M⊥) ⊆ M⊥ if and only if T(M) ⊆ M and T∗(M) ⊆ M, and so ⟨(T|M)x ; y⟩ = ⟨Tx ; y⟩ = ⟨x ; T∗y⟩ = ⟨x ; (T∗|M)y⟩ for x, y ∈ M.

An operator T ∈ B[H] on a Hilbert space H is self-adjoint (or Hermitian) if it coincides with its adjoint (i.e., if T∗ = T). A characterization of self-adjoint operators reads as follows: on a complex Hilbert space H, an operator T is self-adjoint if and only if ⟨Tx ; x⟩ ∈ ℝ for every vector x ∈ H. Moreover, if
T ∈ B[H] is self-adjoint, then T = O if and only if ⟨Tx ; x⟩ = 0 for all x ∈ H. (See, e.g., [66, Proposition 5.79 and Corollary 5.80].) A self-adjoint operator T is nonnegative (notation: O ≤ T) if 0 ≤ ⟨Tx ; x⟩ for every x ∈ H, and it is positive (notation: O < T) if 0 < ⟨Tx ; x⟩ for every nonzero x ∈ H. An invertible positive operator T in G[H] is called strictly positive (notation: O ≺ T). For every T in B[H], the operators T∗T and TT∗ in B[H] are always nonnegative (reason: 0 ≤ ‖Tx‖² = ⟨Tx ; Tx⟩ = ⟨T∗Tx ; x⟩ for every x ∈ H). If S and T are operators in B[H], then we write S ≤ T if T − S is nonnegative. Similarly, we also write S < T and S ≺ T if T − S is positive or strictly positive. Thus, if T − S is self-adjoint (in particular, if both are), then S = T if and only if S ≤ T and T ≤ S.
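The nonnegativity of T∗T stated above is easy to see numerically: its spectrum is nonnegative, and its quadratic form is a squared norm. A small illustrative sketch (random real matrix, so the adjoint is the transpose):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of why T*T is always nonnegative:
# <T*Tx ; x> = <Tx ; Tx> = ||Tx||^2 >= 0 for every x.
T = rng.standard_normal((4, 4))
Q = T.T @ T                          # self-adjoint by construction

# All eigenvalues of Q are nonnegative, i.e., O <= T*T.
assert np.all(np.linalg.eigvalsh(Q) >= -1e-12)

# The quadratic form of Q at x is exactly ||Tx||^2.
x = rng.standard_normal(4)
assert np.isclose(x @ Q @ x, np.linalg.norm(T @ x) ** 2)
```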
1.6 Normal Operators

Let S and T be operators on the same space X. We say that they commute if ST = TS. An operator T ∈ B[H] on a Hilbert space H is normal if it commutes with its adjoint (i.e., if TT∗ = T∗T, or O = T∗T − TT∗). An operator T ∈ B[H] is hyponormal if TT∗ ≤ T∗T (i.e., if O ≤ T∗T − TT∗), which is equivalent to saying that (λI − T)(λI − T)∗ ≤ (λI − T)∗(λI − T) for every scalar λ ∈ ℂ. An operator T ∈ B[H] is cohyponormal if its adjoint is hyponormal (i.e., if T∗T ≤ TT∗, or O ≤ TT∗ − T∗T). Since T∗T − TT∗ is always self-adjoint, it follows that T is normal if and only if it is both hyponormal and cohyponormal. Thus every normal operator is hyponormal. If it is either hyponormal or cohyponormal, then it is called seminormal.

Theorem 1.7. T ∈ B[H] is hyponormal if and only if ‖T∗x‖ ≤ ‖Tx‖ for every x ∈ H. Moreover, the following assertions are pairwise equivalent.
(a) T is normal.
(b) ‖T∗x‖ = ‖Tx‖ for every x ∈ H.
(c) T^n is normal for every positive integer n.
(d) ‖T∗^n x‖ = ‖T^n x‖ for every x ∈ H and every n ≥ 1.

Proof. Take any T ∈ B[H]. The characterization of hyponormal operators goes as follows: TT∗ ≤ T∗T if and only if ⟨TT∗x ; x⟩ ≤ ⟨T∗Tx ; x⟩ or, equivalently, ‖T∗x‖² ≤ ‖Tx‖², for every x ∈ H. Since T is normal if and only if it is both hyponormal and cohyponormal, it follows that (a) and (b) are equivalent. Thus, as T∗^n = (T^n)∗ for each n ≥ 1, (c) and (d) are equivalent. If T∗ commutes with T, then it commutes with T^n and, dually, T^n commutes with T∗^n = (T^n)∗. Hence (a) implies (c), and (d) implies (b) trivially.

In spite of the equivalence between (a) and (c) above, the square of a hyponormal operator is not necessarily hyponormal. It is clear that every self-adjoint
operator is normal, and so is every nonnegative operator. Moreover, normality distinguishes the orthogonal projections among the projections.

Theorem 1.8. If E ∈ B[H] is a nonzero projection, then the following assertions are pairwise equivalent.
(a) E is an orthogonal projection.
(b) E is nonnegative.
(c) E is self-adjoint.
(d) E is normal.
(e) ‖E‖ = 1.
(f) ‖E‖ ≤ 1.

Proof. Let E be a projection (i.e., a linear idempotent). If E is an orthogonal projection (i.e., R(E) ⊥ N(E)), then I − E is again an orthogonal projection, thus bounded, and so R(E) = N(I − E) is closed. Hence R(E∗) is closed by Lemma 1.5. Moreover, R(E) = N(E)⊥. But N(E)⊥ = R(E∗)− by Lemma 1.4 (since E∗∗ = E). Then R(E) = R(E∗), and so for every x ∈ H there is a z ∈ H such that Ex = E∗z. Therefore, E∗Ex = (E∗)²z = (E²)∗z = E∗z = Ex. That is, E∗E = E. This implies that E is nonnegative (it is self-adjoint, and E∗E is always nonnegative). Outcome:

R(E) ⊥ N(E)  ⟹  E ≥ O  ⟹  E∗ = E  ⟹  E∗E = EE∗.

Conversely, if E is normal, then ‖E∗x‖ = ‖Ex‖ for every x in H (Theorem 1.7), so that N(E∗) = N(E). Hence R(E) = N(E∗)⊥ = N(E)⊥ by Lemma 1.4 (because R(E) is closed). Therefore, R(E) ⊥ N(E). That is,

E∗E = EE∗  ⟹  R(E) ⊥ N(E).

If O ≠ E = E∗, then ‖E‖² = ‖E∗E‖ = ‖E²‖ = ‖E‖ ≠ 0, and so ‖E‖ = 1:

E = E∗  ⟹  ‖E‖ = 1  ⟹  ‖E‖ ≤ 1.

Take any v ∈ N(E)⊥, so that (I − E)v ∈ N(E) (since R(I − E) = N(E)). Hence (I − E)v ⊥ v, so that 0 = ⟨(I − E)v ; v⟩ = ‖v‖² − ⟨Ev ; v⟩. If ‖E‖ ≤ 1, then ‖v‖² = ⟨Ev ; v⟩ ≤ ‖Ev‖‖v‖ ≤ ‖E‖‖v‖² ≤ ‖v‖². Thus ‖Ev‖ = ‖v‖ = ⟨Ev ; v⟩^{1/2}. So ‖(I − E)v‖² = ‖Ev − v‖² = ‖Ev‖² − 2 Re⟨Ev ; v⟩ + ‖v‖² = 0, and hence v ∈ N(I − E) = R(E). Then N(E)⊥ ⊆ R(E), which implies that R(E∗) ⊆ N(E∗)⊥ by Lemma 1.4, and hence R(E∗) ⊥ N(E∗). But E∗ is a projection whenever E is (E = E² implies E∗ = (E²)∗ = E∗²). Thus E∗ is an orthogonal projection, so that E∗ is self-adjoint, and so is E. Therefore,

‖E‖ ≤ 1  ⟹  E = E∗.
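The contrast drawn in Theorem 1.8 is already visible on ℂ². The sketch below (matrices illustrative) compares an oblique idempotent, which is not normal and has norm greater than 1, with an orthogonal projection:

```python
import numpy as np

# Sketch of Theorem 1.8 on C^2: an oblique (non-orthogonal) projection
# fails every condition (b)-(f), while an orthogonal projection satisfies
# them all.
E = np.array([[1., 1.],
              [0., 0.]])        # idempotent, but R(E) is not perp. to N(E)
P = np.array([[1., 0.],
              [0., 0.]])        # orthogonal projection onto span{e1}

assert np.allclose(E @ E, E) and np.allclose(P @ P, P)   # both projections

# E is not normal, and ||E|| = sqrt(2) > 1.
assert not np.allclose(E @ E.T, E.T @ E)
assert np.isclose(np.linalg.norm(E, 2), np.sqrt(2))

# P is self-adjoint (hence normal), and ||P|| = 1.
assert np.allclose(P, P.T)
assert np.isclose(np.linalg.norm(P, 2), 1.0)
```

Here E projects onto span{e₁} along the line spanned by (1, −1), which is not orthogonal to the range, and the norm inflation ‖E‖ > 1 detects exactly that obliqueness.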
Corollary 1.9. Every bounded weighted sum of projections is normal.

Proof. Let T ∈ B[H] be a weighted sum of projections, which means that Tx = ∑_{γ∈Γ} λ_γ E_γ x for every x in the Hilbert space H, where {λ_γ}_{γ∈Γ} is a bounded family of scalars and {E_γ}_{γ∈Γ} is a resolution of the identity on H. Since each E_γ∗ = E_γ and since R(E_α) ⊥ R(E_β) for α ≠ β, it is readily verified that the adjoint T∗ of T is given by T∗x = ∑_{γ∈Γ} λ̄_γ E_γ x for every x in H (i.e., ⟨Tx ; y⟩ = ⟨x ; T∗y⟩ for every x, y in H). Moreover, since E_γ² = E_γ and since E_α E_β = E_β E_α = O for α ≠ β, it is also readily verified that TT∗x = T∗Tx for every x ∈ H. Thus T is normal.

An isometry between metric spaces is a map that preserves distance, and so every isometry is injective. A linear transformation T between normed spaces is an isometry if and only if it preserves norm (‖Tx‖ = ‖x‖ for every x) and, between inner product spaces, if and only if it preserves inner product (⟨Tx ; Ty⟩ = ⟨x ; y⟩ for every x, y). A transformation T ∈ B[H, K] between Hilbert spaces is an isometry if and only if T∗T = I (identity on H), and a coisometry if its adjoint is an isometry (i.e., if TT∗ = I, the identity on K). It is a unitary transformation if it is an isometry and a coisometry; that is, if it is an invertible transformation with T⁻¹ = T∗ or, equivalently, an invertible isometry, which means a surjective isometry. Thus a unitary operator T ∈ B[H] is precisely a normal isometry: TT∗ = T∗T = I; and so normality distinguishes the unitary operators among the isometries.

Lemma 1.10. If X is a normed space and T ∈ B[X], then the real-valued sequence {‖T^n‖^{1/n}} converges in ℝ.

Proof. Take any positive integer m. Every positive integer n can be written as n = mp_n + q_n for some nonnegative integers p_n, q_n with q_n < m. Hence

‖T^n‖ = ‖T^{mp_n + q_n}‖ = ‖T^{mp_n} T^{q_n}‖ ≤ ‖T^{mp_n}‖‖T^{q_n}‖ ≤ ‖T^m‖^{p_n} ‖T^{q_n}‖.

Now suppose T ≠ 0 to avoid trivialities, set μ = max_{0≤k≤m−1} {‖T^k‖} ≠ 0, and recall that q_n ≤ m − 1. Then

‖T^n‖^{1/n} ≤ ‖T^m‖^{p_n/n} μ^{1/n} = μ^{1/n} ‖T^m‖^{1/m − q_n/(mn)}.

Since μ^{1/n} → 1 and ‖T^m‖^{1/m − q_n/(mn)} → ‖T^m‖^{1/m} as n → ∞, it follows that

lim sup_n ‖T^n‖^{1/n} ≤ ‖T^m‖^{1/m}.

Therefore, since m is an arbitrary positive integer,

lim sup_n ‖T^n‖^{1/n} ≤ lim inf_n ‖T^n‖^{1/n},

and so {‖T^n‖^{1/n}} converges in ℝ.
For each operator T ∈ B[X] let r(T) denote the limit of {‖T^n‖^{1/n}},

r(T) = lim_n ‖T^n‖^{1/n}.

As we saw in the proof of Lemma 1.10, r(T) ≤ ‖T^n‖^{1/n} for every n ≥ 1. In particular, r(T) ≤ ‖T‖. Note that r(T^k)^{1/k} = lim_n ‖T^{kn}‖^{1/(kn)} = r(T) for each k ≥ 1 (as {‖T^{kn}‖^{1/(kn)}} is a subsequence of the convergent sequence {‖T^n‖^{1/n}}). Then r(T^k) = r(T)^k for every positive integer k. Thus, for every operator T ∈ B[X] on a normed space X, and for each nonnegative integer n,

0 ≤ r(T)^n = r(T^n) ≤ ‖T^n‖ ≤ ‖T‖^n.

An operator T ∈ B[X] on a normed space X is normaloid if r(T) = ‖T‖. Here is an alternate definition of a normaloid operator.

Theorem 1.11. r(T) = ‖T‖ if and only if ‖T^n‖ = ‖T‖^n for every n ≥ 1.

Proof. Immediate by the above inequalities and the definition of r(T).
Theorem 1.12. Every hyponormal operator is normaloid.

Proof. Let T ∈ B[H] be a hyponormal operator on a Hilbert space H.

Claim 1. ‖T^n‖² ≤ ‖T^{n+1}‖‖T^{n−1}‖ for every positive integer n.

Proof. First note that, for any operator T ∈ B[H],

‖T^n x‖² = ⟨T^n x ; T^n x⟩ = ⟨T∗T^n x ; T^{n−1} x⟩ ≤ ‖T∗T^n x‖‖T^{n−1} x‖

for each n ≥ 1 and every x ∈ H. Now, if T is hyponormal, then

‖T∗T^n x‖‖T^{n−1} x‖ ≤ ‖T^{n+1} x‖‖T^{n−1} x‖ ≤ ‖T^{n+1}‖‖T^{n−1}‖‖x‖²

by Theorem 1.7, and hence, for each n ≥ 1 and every x ∈ H, ‖T^n x‖² ≤ ‖T^{n+1}‖‖T^{n−1}‖‖x‖², which ensures the claimed result, thus completing the proof of Claim 1.

Claim 2. ‖T^k‖ = ‖T‖^k for every positive integer k ≤ n, for all n ≥ 1.

Proof. The above result holds trivially if T = O, and it also holds trivially for n = 1 (for all T ∈ B[H]). Let T ≠ O and suppose the above result holds for some integer n ≥ 1. By Claim 1 we get

‖T‖^{2n} = (‖T‖^n)² = ‖T^n‖² ≤ ‖T^{n+1}‖‖T^{n−1}‖ = ‖T^{n+1}‖‖T‖^{n−1}.

Therefore, as ‖T^n‖ ≤ ‖T‖^n for every n ≥ 1, and since T ≠ O,

‖T‖^{n+1} = ‖T‖^{2n}(‖T‖^{n−1})⁻¹ ≤ ‖T^{n+1}‖ ≤ ‖T‖^{n+1}.

Hence ‖T^{n+1}‖ = ‖T‖^{n+1}. Then the claimed result holds for n + 1 whenever it holds for n, which concludes the proof of Claim 2 by induction.

Therefore, ‖T^n‖ = ‖T‖^n for every integer n ≥ 1 by Claim 2, and so T is normaloid by Theorem 1.11.

Since ‖T∗^n‖ = ‖T^n‖ for each n ≥ 1, it follows that r(T∗) = r(T). Thus an operator T is normaloid if and only if its adjoint T∗ is normaloid, and so (by Theorem 1.12) every seminormal operator is normaloid. In particular, every normal operator is normaloid. Summing up: an operator T is normal if it commutes with its adjoint (i.e., TT∗ = T∗T), hyponormal if TT∗ ≤ T∗T, and normaloid if r(T) = ‖T‖. These classes are related by proper inclusion:

Normal ⊂ Hyponormal ⊂ Normaloid.
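One half of the properness of these inclusions can already be seen in finite dimensions. The sketch below (matrix illustrative) exhibits a normaloid operator that is not hyponormal; the other half, a hyponormal non-normal operator, requires infinite dimensions (e.g., the unilateral shift), since on ℂⁿ every hyponormal matrix is in fact normal:

```python
import numpy as np

# A normaloid operator that is not hyponormal: on C^3, take the direct sum
# of a nilpotent Jordan block and the scalar 1. Then ||T^n|| = 1 = ||T||^n
# for every n, so r(T) = ||T|| (Theorem 1.11), yet T is not normal.
T = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 1.]])

assert np.isclose(np.linalg.norm(T, 2), 1.0)
powers = np.eye(3)
for n in range(1, 6):
    powers = powers @ T
    assert np.isclose(np.linalg.norm(powers, 2), 1.0)   # ||T^n|| = ||T||^n

# T is not normal ...
assert not np.allclose(T @ T.T, T.T @ T)
# ... and not hyponormal either: T*T - TT* has a negative eigenvalue.
diff = T.T @ T - T @ T.T
assert np.linalg.eigvalsh(diff).min() < -0.5
```

Here r(T) = 1 comes from the eigenvalue 1 of the scalar summand, while the Jordan block destroys normality, placing T in Normaloid \ Hyponormal.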
1.7 Orthogonal Eigenspaces

Let I denote the identity operator on a Hilbert space H. A scalar operator on H is a scalar multiple of the identity, say λI for any scalar λ in ℂ. Take an arbitrary operator T on H. For every scalar λ in ℂ consider the kernel N(λI − T), which is a (closed) subspace of H (since T is bounded). If this kernel is not zero, then it is called an eigenspace of T.

Lemma 1.13. Take T ∈ B[H] and λ ∈ ℂ arbitrary.
(a) If T is hyponormal, then N(λI − T) ⊆ N(λ̄I − T∗).
(b) If T is normal, then N(λI − T) = N(λ̄I − T∗).

Proof. Take any operator T ∈ B[H] and any scalar λ ∈ ℂ. Observe that (λI − T)∗(λI − T) − (λI − T)(λI − T)∗ = T∗T − TT∗. Thus λI − T is hyponormal (normal) if and only if T is hyponormal (normal). Therefore, if T is hyponormal, then so is λI − T, and hence ‖(λ̄I − T∗)x‖ ≤ ‖(λI − T)x‖ for every x ∈ H and every λ ∈ ℂ by Theorem 1.7, which yields the inclusion in (a). If T is normal, then the preceding inequality becomes an identity, yielding the identity in (b).

Lemma 1.14. Take T ∈ B[H] and λ ∈ ℂ. If N(λI − T) ⊆ N(λ̄I − T∗), then
(a) N(λI − T) ⊥ N(νI − T) whenever ν ≠ λ, and
(b) N(λI − T) reduces T.

Proof. (a) Take x ∈ N(λI − T) and y ∈ N(νI − T) arbitrary. Thus λx = Tx and νy = Ty. If N(λI − T) ⊆ N(λ̄I − T∗), then x ∈ N(λ̄I − T∗), and so λ̄x = T∗x. Then ν⟨y ; x⟩ = ⟨Ty ; x⟩ = ⟨y ; T∗x⟩ = ⟨y ; λ̄x⟩ = λ⟨y ; x⟩, and hence (ν − λ)⟨y ; x⟩ = 0, which implies that ⟨y ; x⟩ = 0 whenever ν ≠ λ.

(b) If x ∈ N(λI − T) ⊆ N(λ̄I − T∗), then Tx = λx and T∗x = λ̄x, and so λT∗x = λλ̄x = λ̄λx = λ̄Tx = T(λ̄x) = TT∗x, which implies that T∗x ∈ N(λI − T). Thus N(λI − T) is T∗-invariant. But N(λI − T) is T-invariant (if Tx = λx, then T(Tx) = λ(Tx)). Therefore N(λI − T) reduces T by Lemma 1.6.

Lemma 1.15. If T ∈ B[H] is hyponormal, then
(a) N(λI − T) ⊥ N(νI − T) for every λ, ν ∈ ℂ such that λ ≠ ν, and
(b) N(λI − T) reduces T for every λ ∈ ℂ.

Proof. Apply Lemma 1.13(a) to Lemma 1.14.
Theorem 1.16. If {λ_γ}_{γ∈Γ} is a (nonempty) family of distinct complex numbers, and if T ∈ B[H] is hyponormal, then the topological sum

M = (∑_{γ∈Γ} N(λ_γ I − T))−

reduces T, and the restriction of T to it, T|M ∈ B[M], is normal.

Proof. For each γ ∈ Γ ≠ ∅ write N_γ = N(λ_γ I − T), which is a T-invariant subspace of H. Thus T|N_γ lies in B[N_γ] and coincides with the scalar operator λ_γ I on N_γ. By Lemma 1.15(a), {N_γ}_{γ∈Γ} is a family of pairwise orthogonal subspaces of H. Take an arbitrary x ∈ (∑_{γ∈Γ} N_γ)−. The Orthogonal Structure Theorem says that x = ∑_{γ∈Γ} u_γ with each u_γ in N_γ. Moreover, for each γ ∈ Γ, Tu_γ and T∗u_γ lie in N_γ because N_γ reduces T (Lemmas 1.6 and 1.15(b)). Thus, since T and T∗ are linear and continuous, we get Tx = ∑_{γ∈Γ} Tu_γ in (∑_{γ∈Γ} N_γ)− and T∗x = ∑_{γ∈Γ} T∗u_γ in (∑_{γ∈Γ} N_γ)−. Then (∑_{γ∈Γ} N_γ)− reduces T (Lemma 1.6). Finally, since {N_γ}_{γ∈Γ} is a family of orthogonal subspaces, it follows by the Orthogonal Structure Theorem that we may identify the topological sum M with the orthogonal direct sum ⊕_{γ∈Γ} N_γ (cf. Lemma 1.15(a)), where each N_γ reduces T|M (it reduces T by Lemma 1.15(b)), and so T|M is identified with the direct sum ⊕_{γ∈Γ} T|N_γ of the operators T|N_γ, which is normal because each T|N_γ is normal.
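The orthogonality of eigenspaces asserted in Lemma 1.15(a) can be illustrated concretely for a normal matrix (which is in particular hyponormal). A hedged sketch, building a normal operator by conjugating a diagonal matrix by a random unitary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Eigenspaces of a normal operator for distinct eigenvalues are orthogonal
# (Lemma 1.15(a)). Build T = Q diag(lam) Q* with Q unitary.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4))
                    + 1j * rng.standard_normal((4, 4)))
lam = np.array([2.0, 2.0, -1.0, 3.0j])       # one repeated eigenvalue
T = Q @ np.diag(lam) @ Q.conj().T

assert np.allclose(T @ T.conj().T, T.conj().T @ T)   # T is normal

# Columns of Q are eigenvectors: the first two span the 2-dimensional
# eigenspace N(2I - T), the third spans N(-I - T).
N2 = Q[:, :2]
v_m1 = Q[:, 2]
assert np.allclose(T @ v_m1, -1.0 * v_m1)
assert np.allclose(N2.conj().T @ v_m1, 0)            # eigenspaces orthogonal
```

The direct sum of the scalar actions 2, −1, and 3i on these mutually orthogonal eigenspaces is exactly the structure exploited in Theorem 1.16.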
1.8 Compact Operators

A set A in a metric space is compact if every open covering of it has a finite subcovering. It is sequentially compact if every A-valued sequence has a subsequence that converges in A. It is totally bounded if for every ε > 0 there exists a finite partition of it into sets of diameter less than ε. The Compactness Theorem says that a set in a metric space is compact if and only if it is sequentially compact, if and only if it is complete and totally bounded. Recall that every compact set is closed, and every totally bounded set is bounded. On a finite-dimensional normed space a set is compact if and only if it is closed and bounded. This is the Heine–Borel Theorem.

Let X and Y be normed spaces. A linear transformation T: X → Y is compact (or completely continuous) if it maps bounded subsets of X into relatively compact subsets of Y. That is, T is compact if T(A)− is compact in Y whenever A is bounded in X; equivalently, if T(A) lies in a compact subset of Y whenever A is bounded in X. Every compact linear transformation is bounded, thus continuous. Every linear transformation on a finite-dimensional space is compact.

Let B∞[X, Y] denote the collection of all compact linear transformations of a normed space X into a normed space Y, so that B∞[X, Y] ⊆ B[X, Y] (and B∞[X, Y] = B[X, Y] whenever X is finite dimensional). In fact, B∞[X, Y] is a linear manifold of B[X, Y], which is closed in B[X, Y] if Y is a Banach space. Set B∞[X] = B∞[X, X] for short, the collection of all compact operators on a normed space X. B∞[X] is an ideal (i.e., a two-sided ideal) of the normed algebra B[X]. That is, B∞[X] is a subalgebra of B[X] such that the product of a compact operator with a bounded operator is again compact. We assume that the compact operators act on a complex nonzero Hilbert space H, although the theory for compact operators equally applies (and is usually developed) for operators on Banach spaces. The theorem below says that R(λI − T) is a subspace of H if T is compact and λ is nonzero, and the next one says that λI − T is invertible whenever it is injective.

Theorem 1.17. If T ∈ B∞[H] and λ ∈ ℂ\{0}, then R(λI − T) is closed.

Proof. Let M be a subspace of a complex Banach space X. Take any compact transformation K in B∞[M, X]. Let I be the identity on M, let λ be a nonzero complex number, and consider the operator λI − K in B[M, X].

Claim. If N(λI − K) = {0}, then R(λI − K) is closed in X.

Proof. Suppose N(λI − K) = {0}. If R(λI − K) is not closed in X, then λI − K is not bounded below (Theorem 1.2), so that for every ε > 0 there is a nonzero vector x_ε ∈ M such that ‖(λI − K)x_ε‖ < ε‖x_ε‖. Thus there is a sequence {x_n} of unit vectors (‖x_n‖ = 1) in M for which (λI − K)x_n → 0. Since K is compact and {x_n} is bounded, Proposition 1.S ensures that {Kx_n} has a convergent subsequence, say {Kx_k}, so that Kx_k → y ∈ X. However,

‖λx_k − y‖ = ‖λx_k − Kx_k + Kx_k − y‖ ≤ ‖(λI − K)x_k‖ + ‖Kx_k − y‖ → 0,

and so the M-valued sequence {λx_k} converges in X to y. Then y ∈ M (because M is closed in X). Thus, since y is in the domain of K, it follows that y ∈ N(λI − K) (because K is continuous, and so Ky = K lim_k λx_k = λ lim_k Kx_k = λy). Moreover, y ≠ 0 (because 0 ≠ |λ| = ‖λx_k‖ → ‖y‖). Hence N(λI − K) ≠ {0}, which is a contradiction. Therefore, R(λI − K) is closed in X, which completes the proof of the claimed result.
Take T ∈ B[H]. The restriction (λI − T)|_{N(λI−T)⊥} of λI − T to N(λI − T)⊥ lies in B[N(λI − T)⊥, H], is injective (since N((λI − T)|_{N(λI−T)⊥}) = {0}), and coincides with λI − T|_{N(λI−T)⊥}. If T is compact, then so is the restriction T|_{N(λI−T)⊥} in B[N(λI − T)⊥, H] (Proposition 1.V). Thus the above claim says that (λI − T)|_{N(λI−T)⊥} = λI − T|_{N(λI−T)⊥} (where I on the right-hand side denotes the identity on N(λI − T)⊥) has a closed range for all λ ≠ 0. But R((λI − T)|_{N(λI−T)⊥}) = R(λI − T).

Theorem 1.18. If T ∈ B∞[H], λ ∈ ℂ\{0}, and N(λI − T) = {0}, then R(λI − T) = H.

Proof. Take any operator T ∈ B[H] and any scalar λ ∈ ℂ. Consider the sequence {M_n}_{n=0}^∞ of linear manifolds of H recursively defined by

M_{n+1} = (λI − T)(M_n) for every n ≥ 0,  with  M_0 = H.
It can be verified by induction that M_{n+1} ⊆ M_n for every n ≥ 0. Indeed, M_1 = R(λI − T) ⊆ H = M_0 and, if the above inclusion holds for some n ≥ 0, then M_{n+2} = (λI − T)(M_{n+1}) ⊆ (λI − T)(M_n) = M_{n+1}, which concludes the induction. Suppose T ∈ B∞[H] and λ ≠ 0, so that λI − T has a closed range (Theorem 1.17). If N(λI − T) = {0}, then λI − T has a bounded inverse on its range (Theorem 1.2). Since (λI − T)⁻¹: R(λI − T) → H is continuous, λI − T sends closed sets into closed sets. (A map between metric spaces is continuous if and only if the inverse image of closed sets is closed.) Hence, since M_0 is closed, another induction ensures that {M_n}_{n=0}^∞ is a decreasing sequence of subspaces of H.

Now suppose that R(λI − T) ≠ H. If M_{n+1} = M_n for some n, then (since M_0 = H ≠ R(λI − T) = M_1) there exists an integer k ≥ 1 such that M_{k+1} = M_k ≠ M_{k−1}, which leads to a contradiction, namely, if M_{k+1} = M_k, then (λI − T)(M_k) = M_k so that M_k = (λI − T)⁻¹(M_k) = M_{k−1}. Outcome: M_{n+1} is properly included in M_n for each n; that is, M_{n+1} ⊂ M_n for every n ≥ 0. Hence each M_{n+1} is a nonzero proper subspace of the Hilbert space M_n. Then each M_{n+1} has a nonzero complementary subspace in M_n, namely M_n ⊖ M_{n+1} ⊂ M_n. Therefore, for each integer n ≥ 0 there is an x_n ∈ M_n with ‖x_n‖ = 1 such that ½ < inf_{u∈M_{n+1}} ‖x_n − u‖. Thus take any pair of integers 0 ≤ m < n, and set x = x_n + λ⁻¹((λI − T)x_m − (λI − T)x_n) (recall: λ ≠ 0), so that Tx_n − Tx_m = λ(x − x_m). Since x lies in M_{m+1},

½|λ| < |λ|‖x − x_m‖ = ‖Tx_n − Tx_m‖,

and the sequence {Tx_n} has no convergent subsequence (no subsequence of it is Cauchy). Since {x_n} is bounded, it follows that T is not compact (cf. Proposition 1.S), which is a contradiction. Thus R(λI − T) = H.

Theorem 1.19. If T ∈ B∞[H] and λ ∈ ℂ\{0} (i.e., if T is compact and λ is nonzero), then dim N(λI − T) = dim N(λ̄I − T∗) < ∞.

Proof. Take an arbitrary compact operator T ∈ B∞[H] and any nonzero scalar λ ∈ ℂ. Theorem 1.17 says that R(λI − T) = R(λI − T)−, and so R(λ̄I − T∗) = R(λ̄I − T∗)− by Lemma 1.5. Thus (cf. Lemma 1.4),

N(λI − T) = {0} if and only if R(λ̄I − T∗) = H,
N(λ̄I − T∗) = {0} if and only if R(λI − T) = H.

Since T, T∗ ∈ B∞[H] (cf. Proposition 1.W), we get by Theorem 1.18 that

N(λI − T) = {0} implies
R(λI − T ) = H,
N(λ̄I − T∗) = {0} implies
R(λ̄I − T∗) = H.
Hence, N(λI − T) = {0} if and only if N(λ̄I − T∗) = {0}. Equivalently,

dim N(λI − T) = 0 if and only if dim N(λ̄I − T∗) = 0.
Now suppose dim N(λI − T) ≠ 0, and so dim N(λ̄I − T∗) ≠ 0. Recall that N(λI − T) ≠ {0} is an invariant subspace for T (Proposition 1.F), and that T|_{N(λI−T)} = λI in B[N(λI − T)]. If T is compact, then so is T|_{N(λI−T)} (Proposition 1.V). Then λI ≠ O is compact on N(λI − T) ≠ {0}, and hence dim N(λI − T) < ∞ (Proposition 1.Y). Dually, since T∗ is compact, we get dim N(λ̄I − T∗) < ∞. Thus there are positive integers m and n such that

dim N(λI − T) = m  and  dim N(λ̄I − T∗) = n.
We show next that m = n. Let {e_i}_{i=1}^m and {f_i}_{i=1}^n be orthonormal bases for the Hilbert spaces N(λI − T) and N(λ̄I − T∗). Set k = min{m, n} ≥ 1 and consider the mappings S: H → H and S∗: H → H defined by

Sx = ∑_{i=1}^k ⟨x ; e_i⟩f_i  and  S∗x = ∑_{i=1}^k ⟨x ; f_i⟩e_i

for every x ∈ H. It is clear that S and S∗ lie in B[H], and also that S∗ is the adjoint of S; that is, ⟨Sx ; y⟩ = ⟨x ; S∗y⟩ for every x, y ∈ H. Actually, R(S) ⊆ span{f_i}_{i=1}^k ⊆ N(λ̄I − T∗) and R(S∗) ⊆ span{e_i}_{i=1}^k ⊆ N(λI − T), so that S and S∗ in B[H] are finite-rank operators, thus compact (i.e., they lie in B∞[H]), and hence T + S and T∗ + S∗ also lie in B∞[H] (since B∞[H] is a subspace of B[H]). First suppose that m ≤ n (which implies that k = m). If x lies in N(λI − (T + S)), then (λI − T)x = Sx. But R(S) ⊆ N(λ̄I − T∗) = R(λI − T)⊥ (Lemma 1.4), and hence (λI − T)x = Sx = 0 (since Sx lies in R(λI − T)). Then x ∈ N(λI − T) = span{e_i}_{i=1}^m, so that x = ∑_{i=1}^m α_i e_i (for some family of scalars {α_i}_{i=1}^m), and therefore 0 = Sx = ∑_{j=1}^m α_j Se_j = ∑_{j=1}^m α_j ∑_{i=1}^m ⟨e_j ; e_i⟩f_i = ∑_{i=1}^m α_i f_i, so that α_i = 0 for every i = 1, …, m, because {f_i}_{i=1}^m is an orthonormal set, thus linearly independent. That is, x = 0. Then N(λI − (T + S)) = {0} and so, by Theorem 1.18,

m ≤ n  implies  R(λI − (T + S)) = H.
Dually, using exactly the same argument,

n ≤ m  implies  R(λ̄I − (T∗ + S∗)) = H.

If m < n, then k = m < m + 1 ≤ n and f_{m+1} ∈ R(λI − (T + S)) = H, so that there exists v ∈ H for which (λI − (T + S))v = f_{m+1}. Hence

1 = ⟨f_{m+1} ; f_{m+1}⟩ = ⟨(λI − (T + S))v ; f_{m+1}⟩ = ⟨(λI − T)v ; f_{m+1}⟩ − ⟨Sv ; f_{m+1}⟩ = 0,

which is a contradiction. Indeed, ⟨(λI − T)v ; f_{m+1}⟩ = ⟨Sv ; f_{m+1}⟩ = 0 since f_{m+1} ∈ N(λ̄I − T∗) = R(λI − T)⊥ and Sv ∈ R(S) ⊆ span{f_i}_{i=1}^m. If n < m, then k = n < n + 1 ≤ m, and e_{n+1} ∈ R(λ̄I − (T∗ + S∗)) = H, so that there exists u ∈ H for which (λ̄I − (T∗ + S∗))u = e_{n+1}. Hence

1 = ⟨e_{n+1} ; e_{n+1}⟩ = ⟨(λ̄I − (T∗ + S∗))u ; e_{n+1}⟩ = ⟨(λ̄I − T∗)u ; e_{n+1}⟩ − ⟨S∗u ; e_{n+1}⟩ = 0,

which is again a contradiction (because e_{n+1} ∈ N(λI − T) = R(λ̄I − T∗)⊥ and S∗u ∈ R(S∗) ⊆ span{e_i}_{i=1}^n). Therefore, m = n.

Together, the results of Theorems 1.17 and 1.19 are referred to as the Fredholm Alternative, which is stated below.

Corollary 1.20. (Fredholm Alternative). If T ∈ B∞[H] and λ ∈ ℂ\{0}, then R(λI − T) is closed and dim N(λI − T) = dim N(λ̄I − T∗) < ∞.
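In finite dimensions every operator is compact and the dimension equality of Theorem 1.19 reduces to the fact that a matrix and its conjugate transpose have equal rank. A hedged sketch (matrix and λ illustrative):

```python
import numpy as np

# Finite-dimensional sketch of Theorem 1.19 / the Fredholm Alternative:
# dim N(lambda I - T) = dim N(conj(lambda) I - T*) for lambda != 0.
lam = 1.0 + 1.0j
T = np.array([[lam, 0, 1, 0],
              [0, lam, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3j]], dtype=complex)   # lam is a double eigenvalue

A = lam * np.eye(4) - T                          # lambda I - T
A_adj = np.conj(lam) * np.eye(4) - T.conj().T    # conj(lambda) I - T*

dim_null = 4 - np.linalg.matrix_rank(A)
dim_null_adj = 4 - np.linalg.matrix_rank(A_adj)
assert dim_null == dim_null_adj == 2
```

The content of the theorem is that this equality of (finite) kernel dimensions survives in infinite dimensions for λI − T with T compact and λ ≠ 0, even though N(λI − T) and N(λ̄I − T∗) need not coincide.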
1.9 Additional Propositions

Let X be a complex linear space. f: X × X → ℂ is a sesquilinear form if it is additive in both arguments, homogeneous in the first argument, and conjugate homogeneous in the second argument. A Hermitian symmetric sesquilinear form that induces a positive quadratic form is an inner product.

Proposition 1.A. (Polarization Identities). If f: X × X → ℂ is a sesquilinear form on a complex linear space X, then, for every x, y ∈ X,

(a₁) f(x, y) = ¼ [f(x + y, x + y) − f(x − y, x − y) + i f(x + iy, x + iy) − i f(x − iy, x − iy)].

If X is a complex inner product space and S, T ∈ B[X], then, for x, y ∈ X,

(a₂) ⟨Sx ; Ty⟩ = ¼ [⟨S(x + y) ; T(x + y)⟩ − ⟨S(x − y) ; T(x − y)⟩ + i⟨S(x + iy) ; T(x + iy)⟩ − i⟨S(x − iy) ; T(x − iy)⟩].

(Parallelogram Law). If X is an inner product space and ‖·‖ is the norm generated by the inner product, then, for every x, y ∈ X,

(b) ‖x + y‖² + ‖x − y‖² = 2(‖x‖² + ‖y‖²).

Proposition 1.B. A linear manifold of a Banach space is a Banach space if and only if it is a subspace. A linear transformation is bounded if and only if it maps bounded sets into bounded sets. If X and Y are nonzero normed spaces, then B[X, Y] is a Banach space if and only if Y is a Banach space.
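The polarization identity and the parallelogram law can be verified numerically for the standard inner product on ℂⁿ (vectors random, purely illustrative; the inner product is taken linear in the first argument, matching the text):

```python
import numpy as np

rng = np.random.default_rng(3)

def inner(x, y):
    # <x ; y> on C^n, linear in the first argument
    return np.vdot(y, x)

x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

# Polarization: recover <x ; y> from four quadratic-form evaluations
# (the case S = T = I of identity (a2)).
polar = 0.25 * (inner(x + y, x + y) - inner(x - y, x - y)
                + 1j * inner(x + 1j * y, x + 1j * y)
                - 1j * inner(x - 1j * y, x - 1j * y))
assert np.isclose(polar, inner(x, y))

# Parallelogram law (b).
lhs = np.linalg.norm(x + y) ** 2 + np.linalg.norm(x - y) ** 2
rhs = 2 * (np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2)
assert np.isclose(lhs, rhs)
```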
Recall that the sum of two subspaces of a Hilbert space may not be a subspace (i.e., the sum may not be closed if the subspaces are not orthogonal).

Proposition 1.C. If M and N are subspaces of a normed space X, and if dim N < ∞, then the sum M + N is a subspace of X.

Every subspace of a Hilbert space has a complementary subspace (e.g., its orthogonal complement). However, this is not true for every Banach space.

Proposition 1.D. If M and N are complementary subspaces of a Banach space X, then the unique projection E: X → X with R(E) = M and N(E) = N is continuous. Conversely, if E ∈ B[X] is a projection, then R(E) and N(E) are complementary subspaces of X.

Proposition 1.E. Let X be a normed space. The subspaces N(T) and R(T)− are invariant for any operator T ∈ B[X]. Take another operator S ∈ B[X]. If S and T commute, then the subspaces N(S), N(T), R(S)−, and R(T)− are invariant for both S and T.

Proposition 1.F. If T is an operator on a normed space, then N(p(T)) and R(p(T))− are T-invariant subspaces for every polynomial p(T) of T; in particular, N(λI − T) and R(λI − T)− are T-invariant for every λ ∈ ℂ.

Proposition 1.G. Let M be a linear manifold of a normed space X, let Y be a Banach space, and take T ∈ B[M, Y].

(a) There exists a unique extension T̃ ∈ B[M−, Y] of T ∈ B[M, Y] over the closure of M. Moreover, ‖T̃‖ = ‖T‖.
(b) If X is a Hilbert space, then T̃E ∈ B[X, Y] is a (bounded linear) extension of T ∈ B[M, Y] over the whole space X, where E ∈ B[X] is the orthogonal projection onto the subspace M− of X, so that ‖T̃E‖ ≤ ‖T‖.

Proposition 1.H. Let M and N be subspaces of a Hilbert space.
(a) M⊥ ∩ N⊥ = (M + N)⊥.
(b) dim N < ∞ ⟹ dim(N ⊖ (M ∩ N)) = dim(M⊥ ⊖ (M + N)⊥).

Proposition 1.I. A subspace M of a Hilbert space H is invariant (or reducing) for a nonzero T ∈ B[H] if and only if the unique orthogonal projection E ∈ B[H] with R(E) = M is such that ETE = TE (or TE = ET).

Take T ∈ B[H] and S ∈ B[K] on Hilbert spaces H and K, and X ∈ B[H, K] of H into K. If XT = SX, then X intertwines T to S (or T is intertwined to S through X). If there is an X with dense range intertwining T to S, then T is densely intertwined to S. If X has dense range and is injective, then it is quasiinvertible (or a quasiaffinity). If a quasiinvertible X intertwines T to S, then T is a quasiaffine transform of S. If T is a quasiaffine transform of S and S is a quasiaffine transform of T, then T and S are quasisimilar. If an invertible X (with a bounded inverse) intertwines T to S (so that X⁻¹ intertwines S to T), then T and S are similar (or equivalent). Unitary equivalence is the special case of similarity through a (surjective) isometry: operators T and S are unitarily equivalent (notation: T ≅ S) if there exists a unitary transformation X intertwining them. A linear manifold (or a subspace) of a normed space X is hyperinvariant for an operator T ∈ B[X] if it is invariant for every operator in B[X] that commutes with T. Obviously, hyperinvariant subspaces are invariant.

Proposition 1.J. Similarity preserves invariant subspaces (if two operators are similar, and if one has a nontrivial invariant subspace, then so has the other), and quasisimilarity preserves hyperinvariant subspaces (if two operators are quasisimilar, and if one has a nontrivial hyperinvariant subspace, then so has the other).
Proposition 1.K. If {E_k} is an orthogonal sequence of orthogonal projections on a Hilbert space H, then ∑_{k=1}^n E_k converges strongly (as n → ∞) to E, where E ∈ B[H] is the orthogonal projection with R(E) = (∑_{k∈ℕ} R(E_k))−.

Proposition 1.L. A bounded linear transformation between Hilbert spaces is invertible if and only if its adjoint is (i.e., T ∈ G[H, K] ⟺ T∗ ∈ G[K, H]). Moreover, (T∗)⁻¹ = (T⁻¹)∗.

Proposition 1.M. Every nonnegative operator Q ∈ B[H] on a Hilbert space H has a unique nonnegative square root Q^{1/2} ∈ B[H] (i.e., (Q^{1/2})² = Q), which commutes with every operator in B[H] that commutes with Q.
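For matrices, the nonnegative square root of Proposition 1.M can be computed through the spectral decomposition. A hedged sketch (random nonnegative matrix, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch of Proposition 1.M: the nonnegative square root of a nonnegative
# Q, built from the spectral decomposition Q = V diag(w) V^T with w >= 0.
A = rng.standard_normal((4, 4))
Q = A.T @ A                                   # nonnegative by construction

w, V = np.linalg.eigh(Q)
Q_half = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

assert np.allclose(Q_half @ Q_half, Q)        # (Q^{1/2})^2 = Q
assert np.all(np.linalg.eigvalsh(Q_half) >= -1e-12)   # Q^{1/2} >= O

# Q^{1/2} commutes with operators that commute with Q (checked here
# against Q itself).
assert np.allclose(Q_half @ Q, Q @ Q_half)
```

Uniqueness is the nontrivial part of the proposition: among all square roots of Q (there may be many), only one is nonnegative.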
Proposition 1.N. Every T ∈ B[H, K] between Hilbert spaces H and K has a unique polar decomposition T = W |T |, where W ∈ B[H, K] is a partial isom1 etry (i.e., W |N (W )⊥ : N (W )⊥ → K is an isometry) and |T | = (T ∗ T ) 2 . Proposition 1.O. Every operator T ∈ B[H] on a complex Hilbert space H has a unique Cartesian decomposition T = A + iB, where A = 12 (T + T ∗ ) and B = − 2i (T − T ∗) are self-adjoint operators on H. Moreover, T is normal if and only if A and B commute and, in this case, T ∗ T = A2 + B 2 and max{A2 , B2 } ≤ T 2 ≤ A2 + B2 . Proposition 1.P. The restriction T |M of a hyponormal operator T to an invariant subspace M is hyponormal. If T |M is normal, then M reduces T. Proposition 1.Q. The restriction T |M of a normal operator T to an invariant subspace M is normal if and only if M reduces T . Proposition 1.R. A transformation U between Hilbert spaces is unitary if and only if it is an invertible contraction whose inverse also is a contraction (U ≤ 1 and U −1 ≤ 1), which happens if and only if U = U −1 = 1. Proposition 1.S. A linear transformation between normed spaces is compact if and only if it maps bounded sequences into sequences that have a convergent subsequence. Proposition 1.T. A linear transformation of a normed space into a Banach space is compact if and only if it maps bounded sets into totally bounded sets. Proposition 1.U. A linear transformation of a Hilbert space into a normed space is compact if and only if it maps weakly convergent sequences into strongly convergent sequences. Proposition 1.V. The restriction of a compact linear transformation to a linear manifold is again a compact linear transformation. Proposition 1.W. A linear transformation between Hilbert spaces is compact if and only if its adjoint is (i.e., T ∈ B∞[H, K] ⇐⇒ T ∗ ∈ B∞[K, H]). Proposition 1.X. A finite-rank (i.e., one with a finite-dimensional range) bounded linear transformation between normed spaces is compact . Proposition 1.Y. 
The identity $I$ on a normed space $\mathcal X$ is compact if and only if $\dim(\mathcal X) < \infty$. Hence, if $T \in B_\infty[\mathcal X]$ is invertible, then $\dim(\mathcal X) < \infty$.
Proposition 1.Z. If a sequence of finite-rank bounded linear transformations between Banach spaces converges uniformly, then its limit is compact.
Notes: These are standard results that will be required in the sequel. Proofs can be found in many texts on operator theory (see Suggested Reading). For
1.9 Additional Propositions
instance, Proposition 1.A can be found in [66, Proposition 5.4 and Problem 5.3], and Proposition 1.B in [66, Propositions 4.7, 4.12, 4.15, and Example 4.P]. Proposition 1.C can be found in [27, Proposition III.4.3] (for a Hilbert-space proof see [50, Problem 13]), and Proposition 1.D in [27, Theorem III.13.2] or [66, Problem 4.35]. For Propositions 1.E and 1.F see [66, Problems 4.20 and 4.22]. Proposition 1.G(a) is referred to as extension by continuity; see, e.g., [66, Theorem 4.35]. It implies Proposition 1.G(b): an orthogonal projection is continuous, and $T = TE|_{R(E)} \in B[\mathcal M^-, \mathcal H]$ is a restriction of $TE \in B[\mathcal X, \mathcal H]$ to $R(E) = \mathcal M^-$. Proposition 1.H will be needed in Theorem 5.4. For Proposition 1.H(a) see [66, Problem 5.8]. A proof of Proposition 1.H(b) goes as follows.
Proof of Proposition 1.H(b). Since $(\mathcal M \cap \mathcal N) \perp (\mathcal M^\perp \cap \mathcal N)$, consider the orthogonal projection $E \in B[\mathcal H]$ on $\mathcal M^\perp \cap \mathcal N$ (i.e., $R(E) = \mathcal M^\perp \cap \mathcal N$) along $\mathcal M \cap \mathcal N$ (i.e., $N(E) = \mathcal M \cap \mathcal N$). Let $L: \mathcal N \ominus (\mathcal M \cap \mathcal N) \to \mathcal M^\perp \cap \mathcal N$ be the restriction of $E$ to $\mathcal N \ominus (\mathcal M \cap \mathcal N)$. It is clear that $L = E|_{\mathcal N \ominus (\mathcal M \cap \mathcal N)}$ is linear, injective ($N(L) = \{0\}$), and surjective ($R(L) = R(E) = \mathcal M^\perp \cap \mathcal N$). Therefore, $\mathcal N \ominus (\mathcal M \cap \mathcal N)$ is isomorphic to $\mathcal M^\perp \cap \mathcal N$. Since $\mathcal M$ is closed and $\mathcal N$ is finite dimensional, $\mathcal M + \mathcal N$ is closed (even if $\mathcal M$ and $\mathcal N$ are not orthogonal; cf. Proposition 1.C). Then (Projection Theorem) $\mathcal H = (\mathcal M + \mathcal N)^\perp \oplus (\mathcal M + \mathcal N)$ and $\mathcal M^\perp = (\mathcal M + \mathcal N)^\perp \oplus [\mathcal M^\perp \ominus (\mathcal M + \mathcal N)^\perp]$, and hence $\mathcal M^\perp \ominus (\mathcal M + \mathcal N)^\perp$ coincides with $\mathcal M^\perp \cap (\mathcal M + \mathcal N) = \mathcal M^\perp \cap \mathcal N$. Therefore, $\mathcal N \ominus (\mathcal M \cap \mathcal N)$ is isomorphic to $\mathcal M^\perp \ominus (\mathcal M + \mathcal N)^\perp$, and so they have the same dimension.
For Proposition 1.I see, e.g., [63, Solutions 4.1 and 4.6], and for Proposition 1.J see, e.g., [62, Corollaries 4.2 and 4.8]. By the way, does quasisimilarity preserve nontrivial invariant subspaces? (See, e.g., [76, p. 194].) As for Propositions 1.K and 1.L, see [66, Proposition 5.28 and p. 385].
Propositions 1.M, 1.N, and 1.O are the classical Square Root Theorem, the Polar Decomposition Theorem, and the Cartesian Decomposition Theorem (see, e.g., [66, Theorems 5.85 and 5.89, and Problem 5.46]). Propositions 1.P, 1.Q, and 1.R can be found in [66, Problems 6.16 and 6.17, and Theorem 5.73]. Propositions 1.S, 1.T, 1.U, 1.V, 1.W, 1.X, 1.Y, and 1.Z are all related to compact operators (see, e.g., [66, Theorem 4.52(d,e), Problem 4.51, p. 258, Problem 5.41, Proposition 4.50, Remark to Theorem 4.52, Corollary 4.55]).
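The Cartesian decomposition of Proposition 1.O is easy to observe in finite dimensions. The following sketch is not from the book: the normal matrix $T$ is an arbitrary illustrative choice (a unitary conjugate of a complex diagonal matrix), used to check $T = A + iB$, the commutation of $A$ and $B$, and $T^*T = A^2 + B^2$:

```python
import numpy as np

# Illustrative sketch: Cartesian decomposition T = A + iB of a normal matrix.
rng = np.random.default_rng(1)
G = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(G)                       # unitary factor
lam = rng.standard_normal(3) + 1j * rng.standard_normal(3)
T = U @ np.diag(lam) @ U.conj().T            # normal operator on C^3

A = (T + T.conj().T) / 2                     # real part (self-adjoint)
B = (T - T.conj().T) / (2j)                  # imaginary part (self-adjoint)

assert np.allclose(A, A.conj().T) and np.allclose(B, B.conj().T)
assert np.allclose(T, A + 1j * B)
assert np.allclose(A @ B, B @ A)             # T is normal, so A and B commute
assert np.allclose(T.conj().T @ T, A @ A + B @ B)
```

The norm bounds $\max\{\|A\|^2, \|B\|^2\} \le \|T\|^2 \le \|A\|^2 + \|B\|^2$ can be checked the same way with the spectral norm `np.linalg.norm(., 2)`.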
Suggested Reading
Akhiezer and Glazman [3, 4], Beals [12], Berberian [17, 18], Brown and Pearcy [21, 22], Conway [27, 28, 29], Douglas [34], Dunford and Schwartz [39], Fillmore [42], Halmos [47, 48, 50], Harte [52], Istrăţescu [58], Kubrusly [62, 63, 66], Radjavi and Rosenthal [76], Weidmann [89]
2 Spectrum
Let $T: \mathcal D(T) \to \mathcal X$ be a linear transformation, where $\mathcal X$ is a nonzero normed space and $\mathcal D(T)$, the domain of $T$, is a linear manifold of $\mathcal X$. The general notion of spectrum, which applies to bounded or unbounded transformations, goes as follows. Let $\mathbb F$ denote either the real field $\mathbb R$ or the complex field $\mathbb C$, and let $I$ be the identity on $\mathcal X$. The resolvent set $\rho(T)$ of $T$ is the set of all scalars $\lambda$ in $\mathbb F$ for which the linear transformation $\lambda I - T: \mathcal D(T) \to \mathcal X$ has a densely defined continuous inverse. That is,
$\rho(T) = \{\lambda \in \mathbb F : (\lambda I - T)^{-1} \in B[R(\lambda I - T), \mathcal D(T)] \text{ and } R(\lambda I - T)^- = \mathcal X\}$
(see, e.g., [8, Definition 18.2]; there are different definitions of the resolvent set for unbounded linear transformations, but they all coincide in the bounded case). The spectrum $\sigma(T)$ of $T$ is the complement of the set $\rho(T)$ in $\mathbb F$. We shall, however, restrict the theory to operators on a complex Banach space (i.e., to bounded linear transformations of a complex Banach space into itself).
2.1 Basic Spectral Properties
Throughout this chapter $T: \mathcal X \to \mathcal X$ will be a bounded linear transformation of $\mathcal X$ into itself (i.e., an operator on $\mathcal X$), so that $\mathcal D(T) = \mathcal X$, where $\mathcal X \ne \{0\}$ is a complex Banach space. That is, $T \in B[\mathcal X]$, where $\mathcal X$ is a nonzero complex Banach space. In such a case (i.e., in the unital complex Banach algebra $B[\mathcal X]$), Theorem 1.2 ensures that the resolvent set $\rho(T)$ is precisely the set of all complex numbers $\lambda$ for which $\lambda I - T \in B[\mathcal X]$ is invertible (i.e., has a bounded inverse on $\mathcal X$). Therefore (cf. Theorem 1.1),
$\rho(T) = \{\lambda \in \mathbb C : \lambda I - T \in G[\mathcal X]\} = \{\lambda \in \mathbb C : \lambda I - T \text{ has an inverse in } B[\mathcal X]\} = \{\lambda \in \mathbb C : N(\lambda I - T) = \{0\} \text{ and } R(\lambda I - T) = \mathcal X\}$,
and so
$\sigma(T) = \mathbb C \setminus \rho(T) = \{\lambda \in \mathbb C : \lambda I - T \text{ has no inverse in } B[\mathcal X]\} = \{\lambda \in \mathbb C : N(\lambda I - T) \ne \{0\} \text{ or } R(\lambda I - T) \ne \mathcal X\}$.
C.S. Kubrusly, Spectral Theory of Operators on Hilbert Spaces, DOI 10.1007/978-0-8176-8328-3_2, © Springer Science+Business Media, LLC 2012
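In finite dimensions the characterization above is immediate to check numerically. The sketch below (the matrix is an illustrative assumption, not from the book) verifies that $\lambda I - T$ is invertible exactly when $\lambda$ avoids the eigenvalues:

```python
import numpy as np

# Illustrative sketch: on C^2, sigma(T) is the set of eigenvalues.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
spectrum = np.linalg.eigvals(T)                   # {2, 3}

lam = 5.0                                         # lam in rho(T)
R = np.linalg.inv(lam * np.eye(2) - T)            # resolvent exists
assert np.allclose((lam * np.eye(2) - T) @ R, np.eye(2))

# At a spectral point, lam*I - T is singular:
assert abs(np.linalg.det(2.0 * np.eye(2) - T)) < 1e-12
assert np.allclose(sorted(spectrum.real), [2.0, 3.0])
```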
Theorem 2.1. The resolvent set $\rho(T)$ is nonempty and open, and the spectrum $\sigma(T)$ is compact.
Proof. Take any $T \in B[\mathcal X]$. By the Neumann expansion (Theorem 1.3), if $\|T\| < |\lambda|$, then $\lambda \in \rho(T)$. Equivalently, since $\sigma(T) = \mathbb C \setminus \rho(T)$, $|\lambda| \le \|T\|$ for every $\lambda \in \sigma(T)$. Thus $\sigma(T)$ is bounded, and therefore $\rho(T) \ne \varnothing$.
Claim. If $\lambda \in \rho(T)$, then the open ball $B_\delta(\lambda)$ with center at $\lambda$ and (positive) radius $\delta = \|(\lambda I - T)^{-1}\|^{-1}$ is included in $\rho(T)$.
Proof. If $\lambda \in \rho(T)$, then $\lambda I - T \in G[\mathcal X]$, so that $(\lambda I - T)^{-1}$ is nonzero and bounded, and hence $0 < \|(\lambda I - T)^{-1}\|^{-1} < \infty$. Set $\delta = \|(\lambda I - T)^{-1}\|^{-1}$, let $B_\delta(0)$ be the nonempty open ball of radius $\delta$ about the origin of the complex plane $\mathbb C$, and take any $\nu$ in $B_\delta(0)$. Since $|\nu| < \|(\lambda I - T)^{-1}\|^{-1}$, it follows that $\|\nu(\lambda I - T)^{-1}\| < 1$. Then $[I - \nu(\lambda I - T)^{-1}] \in G[\mathcal X]$ by Theorem 1.3, and so
$(\lambda - \nu)I - T = (\lambda I - T)[I - \nu(\lambda I - T)^{-1}] \in G[\mathcal X]$.
Thus $\lambda - \nu \in \rho(T)$, so that
$B_\delta(\lambda) = B_\delta(0) + \lambda = \{\nu' \in \mathbb C : \nu' = \nu + \lambda \text{ for some } \nu \in B_\delta(0)\} \subseteq \rho(T)$,
which completes the proof of the claimed result.
Thus $\rho(T)$ is open (it includes a nonempty open ball centered at each of its points) and so $\sigma(T)$ is closed. Compact in $\mathbb C$ means closed and bounded.
Remark. Since $B_\delta(\lambda) \subseteq \rho(T)$, the distance of any $\lambda$ in $\rho(T)$ to the spectrum $\sigma(T)$ is at least $\delta$; that is (compare with Proposition 2.E), $\lambda \in \rho(T)$ implies
$\|(\lambda I - T)^{-1}\|^{-1} \le d(\lambda, \sigma(T))$.
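The Remark's inequality can be tested numerically for matrices. The sketch below is illustrative (the matrix and the point $\lambda$ are arbitrary choices):

```python
import numpy as np

# Illustrative sketch: ||(lam I - T)^{-1}||^{-1} <= d(lam, sigma(T)).
T = np.array([[1.0, 2.0],
              [0.0, 4.0]])
eigs = np.linalg.eigvals(T)                       # sigma(T) = {1, 4}

lam = 2.5 + 1.0j                                  # a point in rho(T)
Rnorm = np.linalg.norm(np.linalg.inv(lam * np.eye(2) - T), 2)
dist = min(abs(lam - mu) for mu in eigs)

assert 1.0 / Rnorm <= dist + 1e-12
```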
The resolvent function $R_T: \rho(T) \to G[\mathcal X]$ of an operator $T \in B[\mathcal X]$ is the mapping of the resolvent set $\rho(T)$ of $T$ into the group $G[\mathcal X]$ of all invertible operators from $B[\mathcal X]$ defined by $R_T(\lambda) = (\lambda I - T)^{-1}$ for every $\lambda \in \rho(T)$. Since
$R_T(\lambda) - R_T(\nu) = R_T(\lambda)[R_T(\nu)^{-1} - R_T(\lambda)^{-1}]R_T(\nu)$,
it follows that
$R_T(\lambda) - R_T(\nu) = (\nu - \lambda)R_T(\lambda)R_T(\nu)$ for every $\lambda, \nu \in \rho(T)$
(because $R_T(\nu)^{-1} - R_T(\lambda)^{-1} = (\nu - \lambda)I$). This is the resolvent identity. Swapping $\lambda$ and $\nu$ in the resolvent identity, it follows that $R_T(\lambda)$ and $R_T(\nu)$ commute for every $\lambda, \nu \in \rho(T)$. Also, $TR_T(\lambda) = R_T(\lambda)T$ for every $\lambda \in \rho(T)$ (since $R_T(\lambda)^{-1}R_T(\lambda) = R_T(\lambda)R_T(\lambda)^{-1}$ trivially).
Let $\Lambda$ be a nonempty open subset of the complex plane $\mathbb C$. Take a function $f: \Lambda \to \mathbb C$ and a point $\nu \in \Lambda$. Suppose there exists a complex number
$f'(\nu)$ with the following property. For every $\varepsilon > 0$ there is a $\delta > 0$ such that $\big|\frac{f(\lambda) - f(\nu)}{\lambda - \nu} - f'(\nu)\big| < \varepsilon$ for all $\lambda$ in $\Lambda$ for which $0 < |\lambda - \nu| < \delta$. If there exists such an $f'(\nu) \in \mathbb C$, then it is called the derivative of $f$ at $\nu$. If $f'(\nu)$ exists for every $\nu$ in $\Lambda$, then $f: \Lambda \to \mathbb C$ is analytic (or holomorphic) on $\Lambda$. A function $f: \mathbb C \to \mathbb C$ is entire if it is analytic on the whole complex plane $\mathbb C$. To prove the next result we need the Liouville Theorem, which says that every bounded entire function is constant.
Theorem 2.2. If $\mathcal X$ is nonzero, then the spectrum $\sigma(T)$ is nonempty.
Proof. Let $T \in B[\mathcal X]$ be an operator on a nonzero complex Banach space $\mathcal X$, and let $B[\mathcal X]^*$ stand for the dual of $B[\mathcal X]$. That is, $B[\mathcal X]^* = B[B[\mathcal X], \mathbb C]$ is the Banach space of all bounded linear functionals on $B[\mathcal X]$. Since $\mathcal X \ne \{0\}$, it follows that $B[\mathcal X] \ne \{O\}$, and so $B[\mathcal X]^* \ne \{0\}$, which is a consequence of the Hahn–Banach Theorem (see, e.g., [66, Corollary 4.64]). Take any nonzero $\eta$ in $B[\mathcal X]^*$ (i.e., a nonzero bounded linear functional $\eta: B[\mathcal X] \to \mathbb C$), and consider its composition with the resolvent function, $\eta \circ R_T: \rho(T) \to \mathbb C$. Recall that $\rho(T) = \mathbb C \setminus \sigma(T)$ is nonempty and open in $\mathbb C$.
Claim 1. If $\sigma(T)$ is empty, then $\eta \circ R_T: \rho(T) \to \mathbb C$ is bounded.
Proof. The resolvent function $R_T: \rho(T) \to G[\mathcal X]$ is continuous (since scalar multiplication and addition are continuous mappings, and inversion is also a continuous mapping; see, e.g., [66, Problem 4.48]). Thus $\|R_T(\cdot)\|: \rho(T) \to \mathbb R$ is continuous. Then $\sup_{|\lambda| \le \|T\|} \|R_T(\lambda)\| < \infty$ if $\sigma(T) = \varnothing$. Indeed, if $\sigma(T)$ is empty, then $\rho(T) \cap B_{\|T\|}[0] = B_{\|T\|}[0] = \{\lambda \in \mathbb C : |\lambda| \le \|T\|\}$ is a compact set in $\mathbb C$, so that the continuous function $\|R_T(\cdot)\|$ attains its maximum on it by the Weierstrass Theorem: a continuous real-valued function attains its maximum and minimum on any compact set in a metric space. On the other hand, if $\|T\| < |\lambda|$, then $\|R_T(\lambda)\| \le (|\lambda| - \|T\|)^{-1}$ (cf. Remark following Theorem 1.3), so that $\|R_T(\lambda)\| \to 0$ as $|\lambda| \to \infty$.
Thus, since $\|R_T(\cdot)\|: \rho(T) \to \mathbb R$ is continuous, $\sup_{\|T\| \le |\lambda|} \|R_T(\lambda)\| < \infty$. Hence $\sup_{\lambda \in \rho(T)} \|R_T(\lambda)\| < \infty$, and so
$\sup_{\lambda \in \rho(T)} |(\eta \circ R_T)(\lambda)| \le \|\eta\| \sup_{\lambda \in \rho(T)} \|R_T(\lambda)\| < \infty$,
which completes the proof of Claim 1.
Claim 2. $\eta \circ R_T: \rho(T) \to \mathbb C$ is analytic.
Proof. If $\lambda$ and $\nu$ are distinct points in $\rho(T)$, then
$\frac{R_T(\lambda) - R_T(\nu)}{\lambda - \nu} + R_T(\nu)^2 = [R_T(\nu) - R_T(\lambda)]R_T(\nu)$
by the resolvent identity. Set $f = \eta \circ R_T: \rho(T) \to \mathbb C$. Let $f': \rho(T) \to \mathbb C$ be defined by $f'(\lambda) = -\eta(R_T(\lambda)^2)$ for each $\lambda \in \rho(T)$. Therefore,
$\Big|\frac{f(\lambda) - f(\nu)}{\lambda - \nu} - f'(\nu)\Big| = \big|\eta\big([R_T(\nu) - R_T(\lambda)]R_T(\nu)\big)\big| \le \|\eta\|\,\|R_T(\nu)\|\,\|R_T(\nu) - R_T(\lambda)\|$
so that $f: \rho(T) \to \mathbb C$ is analytic because $R_T: \rho(T) \to G[\mathcal X]$ is continuous, which completes the proof of Claim 2.
Thus, by Claims 1 and 2, if $\sigma(T) = \varnothing$ (i.e., if $\rho(T) = \mathbb C$), then $\eta \circ R_T: \mathbb C \to \mathbb C$ is a bounded entire function, and so a constant function by the Liouville Theorem. But (see the proof of Claim 1) $\|R_T(\lambda)\| \to 0$ as $|\lambda| \to \infty$, and hence $\eta(R_T(\lambda)) \to 0$ as $|\lambda| \to \infty$ (since $\eta$ is continuous). Then $\eta \circ R_T = 0$ for all $\eta$ in $B[\mathcal X]^* \ne \{0\}$, so that $R_T = O$ (by the Hahn–Banach Theorem). That is, $(\lambda I - T)^{-1} = O$ for $\lambda \in \mathbb C$, which is a contradiction. Thus $\sigma(T) \ne \varnothing$.
Remark. $\sigma(T)$ is compact and nonempty, and so is its boundary $\partial\sigma(T)$. Hence, $\partial\sigma(T) = \partial\rho(T) \ne \varnothing$.
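The resolvent identity and the commutativity properties noted above are straightforward to confirm numerically in finite dimensions. A sketch (the matrix and the two points are arbitrary illustrative choices well outside the spectrum):

```python
import numpy as np

# Illustrative sketch of the resolvent identity
# R_T(lam) - R_T(nu) = (nu - lam) R_T(lam) R_T(nu).
rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4))
I = np.eye(4)

def R(z):
    """Resolvent of T at a point z in rho(T)."""
    return np.linalg.inv(z * I - T)

lam, nu = 10.0 + 1.0j, -7.0 + 2.0j
assert np.allclose(R(lam) - R(nu), (nu - lam) * R(lam) @ R(nu))
assert np.allclose(R(lam) @ R(nu), R(nu) @ R(lam))   # resolvents commute
assert np.allclose(T @ R(lam), R(lam) @ T)           # T commutes with R_T(lam)
```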
2.2 A Classical Partition of the Spectrum
The spectrum $\sigma(T)$ of an operator $T$ in $B[\mathcal X]$ is the set of all scalars $\lambda$ in $\mathbb C$ for which the operator $\lambda I - T$ fails to be an invertible element of the algebra $B[\mathcal X]$ (i.e., fails to have a bounded inverse on $R(\lambda I - T) = \mathcal X$). According to the nature of such a failure, $\sigma(T)$ can be split into many disjoint parts. A classical partition comprises three parts. The set $\sigma_P(T)$ of those $\lambda$ for which $\lambda I - T$ has no inverse (i.e., such that the operator $\lambda I - T$ is not injective) is the point spectrum of $T$,
$\sigma_P(T) = \{\lambda \in \mathbb C : N(\lambda I - T) \ne \{0\}\}$.
A scalar $\lambda \in \mathbb C$ is an eigenvalue of $T$ if there exists a nonzero vector $x$ in $\mathcal X$ such that $Tx = \lambda x$. Equivalently, $\lambda$ is an eigenvalue of $T$ if $N(\lambda I - T) \ne \{0\}$. If $\lambda \in \mathbb C$ is an eigenvalue of $T$, then the nonzero vectors in $N(\lambda I - T)$ are the eigenvectors of $T$, and $N(\lambda I - T)$ is the eigenspace (which is a subspace of $\mathcal X$) associated with the eigenvalue $\lambda$. The multiplicity of an eigenvalue is the dimension of the respective eigenspace. Thus the point spectrum of $T$ is precisely the set of all eigenvalues of $T$. Now consider the set $\sigma_C(T)$ of those $\lambda$ for which $\lambda I - T$ has a densely defined but unbounded inverse on its range,
$\sigma_C(T) = \{\lambda \in \mathbb C : N(\lambda I - T) = \{0\},\ R(\lambda I - T)^- = \mathcal X \text{ and } R(\lambda I - T) \ne \mathcal X\}$
(see Theorem 1.3), which is referred to as the continuous spectrum of $T$. The residual spectrum of $T$ is the set $\sigma_R(T)$ of all scalars $\lambda$ such that $\lambda I - T$ has an inverse on its range that is not densely defined:
$\sigma_R(T) = \{\lambda \in \mathbb C : N(\lambda I - T) = \{0\} \text{ and } R(\lambda I - T)^- \ne \mathcal X\}$.
The collection $\{\sigma_P(T), \sigma_C(T), \sigma_R(T)\}$ is a partition (i.e., a disjoint covering) of $\sigma(T)$, which means that these sets are pairwise disjoint and
$\sigma(T) = \sigma_P(T) \cup \sigma_C(T) \cup \sigma_R(T)$.
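In finite dimensions only the point spectrum survives: injectivity of $\lambda I - T$ already forces invertibility, so $\sigma_C(T) = \sigma_R(T) = \varnothing$. A small sketch (illustrative matrix, not from the book):

```python
import numpy as np

# Illustrative sketch: on C^2 every spectral point is an eigenvalue.
T = np.array([[0.0, 1.0],
              [0.0, 0.0]])                 # nilpotent, sigma(T) = sigma_P(T) = {0}
eigs = np.linalg.eigvals(T)
assert np.allclose(eigs, 0)

# 0 is an eigenvalue: N(0*I - T) != {0}, with eigenvector e1.
e1 = np.array([1.0, 0.0])
assert np.allclose(T @ e1, 0.0 * e1)
```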
The diagram below, borrowed from [62], summarizes such a partition of the spectrum. The residual spectrum is split into two disjoint parts, $\sigma_R(T) = \sigma_{R_1}(T) \cup \sigma_{R_2}(T)$, and the point spectrum into four disjoint parts, $\sigma_P(T) = \bigcup_{i=1}^{4} \sigma_{P_i}(T)$. We adopt the following abbreviated notation: $T_\lambda = \lambda I - T$, $N_\lambda = N(T_\lambda)$, and $R_\lambda = R(T_\lambda)$. Recall that if $N(T_\lambda) = \{0\}$, then its linear inverse $T_\lambda^{-1}$ on $R_\lambda$ is continuous if and only if $R_\lambda$ is closed (Theorem 1.2).

                                                        | $R_\lambda^- = \mathcal X$                      | $R_\lambda^- \ne \mathcal X$
                                                        | $R_\lambda = \mathcal X$ | $R_\lambda \ne \mathcal X$ | $R_\lambda^- = R_\lambda$ | $R_\lambda^- \ne R_\lambda$
$N_\lambda = \{0\}$, $T_\lambda^{-1} \in B[R_\lambda, \mathcal X]$    | $\rho(T)$         | $\varnothing$     | $\sigma_{R_1}(T)$ | $\varnothing$
$N_\lambda = \{0\}$, $T_\lambda^{-1} \notin B[R_\lambda, \mathcal X]$ | $\varnothing$     | $\sigma_C(T)$     | $\varnothing$     | $\sigma_{R_2}(T)$
$N_\lambda \ne \{0\}$                                   | $\sigma_{P_1}(T)$ | $\sigma_{P_2}(T)$ | $\sigma_{P_3}(T)$ | $\sigma_{P_4}(T)$

(The two columns with $R_\lambda^- \ne \mathcal X$ make up the compression spectrum $\sigma_{CP}(T)$, and every nonempty cell other than $\rho(T)$ and $\sigma_{R_1}(T)$ lies in the approximate point spectrum $\sigma_{AP}(T)$.)
Fig. 2.2. A classical partition of the spectrum
Theorem 2.2 says that $\sigma(T) \ne \varnothing$, but any of the above disjoint parts of the spectrum may be empty (Section 2.7). However, if $\sigma_P(T) \ne \varnothing$, then any set of eigenvectors associated with distinct eigenvalues is linearly independent.
Theorem 2.3. Let $\{\lambda_\gamma\}_{\gamma \in \Gamma}$ be a family of distinct eigenvalues of $T$. For each $\gamma \in \Gamma$ let $x_\gamma$ be an eigenvector associated with $\lambda_\gamma$. The set $\{x_\gamma\}_{\gamma \in \Gamma}$ is linearly independent.
Proof. For each $\gamma \in \Gamma$ take $0 \ne x_\gamma \in N(\lambda_\gamma I - T) \ne \{0\}$, and consider the set $\{x_\gamma\}_{\gamma \in \Gamma}$ (whose existence is ensured by the Axiom of Choice).
Claim. Every finite subset of $\{x_\gamma\}_{\gamma \in \Gamma}$ is linearly independent.
Proof. Take an arbitrary finite subset of $\{x_\gamma\}_{\gamma \in \Gamma}$, say $\{x_i\}_{i=1}^{n}$. If $n = 1$, then linear independence is trivial. Suppose $\{x_i\}_{i=1}^{n}$ is linearly independent for some $n \ge 1$. If $\{x_i\}_{i=1}^{n+1}$ is not linearly independent, then $x_{n+1} = \sum_{i=1}^{n} \alpha_i x_i$, where the family $\{\alpha_i\}_{i=1}^{n}$ of complex numbers has at least one nonzero member. Thus
$\lambda_{n+1} x_{n+1} = T x_{n+1} = \sum_{i=1}^{n} \alpha_i T x_i = \sum_{i=1}^{n} \alpha_i \lambda_i x_i$.
If $\lambda_{n+1} = 0$, then $\lambda_i \ne 0$ for every $i \ne n+1$ (because the eigenvalues are distinct) and $\sum_{i=1}^{n} \alpha_i \lambda_i x_i = 0$, so that $\{x_i\}_{i=1}^{n}$ is not linearly independent, which is a contradiction. If $\lambda_{n+1} \ne 0$, then $x_{n+1} = \sum_{i=1}^{n} \alpha_i \lambda_{n+1}^{-1} \lambda_i x_i$, and therefore $\sum_{i=1}^{n} \alpha_i (1 - \lambda_{n+1}^{-1}\lambda_i) x_i = 0$, so that $\{x_i\}_{i=1}^{n}$ is not linearly independent (since $\lambda_i \ne \lambda_{n+1}$ for every $i \ne n+1$ and $\alpha_i \ne 0$ for some $i$), which is again a contradiction. This completes the proof by induction: $\{x_i\}_{i=1}^{n+1}$ is linearly independent.
However, if every finite subset of $\{x_\gamma\}_{\gamma \in \Gamma}$ is linearly independent, then so is the set $\{x_\gamma\}_{\gamma \in \Gamma}$ itself (see, e.g., [66, Proposition 2.3]).
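Theorem 2.3 can be observed numerically: for a matrix with distinct eigenvalues, the computed eigenvectors form a full-rank (hence linearly independent) family. A sketch with an illustrative matrix:

```python
import numpy as np

# Illustrative sketch of Theorem 2.3 in C^3.
T = np.diag([1.0, 2.0, 3.0]) + np.triu(np.ones((3, 3)), 1)  # eigenvalues 1, 2, 3
w, V = np.linalg.eig(T)

assert len(set(np.round(w.real, 6))) == 3        # three distinct eigenvalues
assert np.linalg.matrix_rank(V) == 3             # eigenvectors are independent
```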
There are some overlapping parts of the spectrum which are commonly used. For instance, the compression spectrum $\sigma_{CP}(T)$ and the approximate point spectrum (or approximation spectrum) $\sigma_{AP}(T)$, which are defined by
$\sigma_{CP}(T) = \{\lambda \in \mathbb C : R(\lambda I - T) \text{ is not dense in } \mathcal X\} = \sigma_{P_3}(T) \cup \sigma_{P_4}(T) \cup \sigma_R(T)$,
$\sigma_{AP}(T) = \{\lambda \in \mathbb C : \lambda I - T \text{ is not bounded below}\} = \sigma_P(T) \cup \sigma_C(T) \cup \sigma_{R_2}(T) = \sigma(T) \setminus \sigma_{R_1}(T)$.
The points of $\sigma_{AP}(T)$ are referred to as the approximate eigenvalues of $T$.
Theorem 2.4. The following assertions are pairwise equivalent.
(a) For every $\varepsilon > 0$ there is a unit vector $x_\varepsilon$ in $\mathcal X$ such that $\|(\lambda I - T)x_\varepsilon\| < \varepsilon$.
(b) There is a sequence $\{x_n\}$ of unit vectors in $\mathcal X$ such that $\|(\lambda I - T)x_n\| \to 0$.
(c) $\lambda \in \sigma_{AP}(T)$.
Proof. Clearly (a) implies (b). If (b) holds, then there is no constant $\alpha > 0$ such that $\alpha = \alpha\|x_n\| \le \|(\lambda I - T)x_n\|$ for all $n$. Thus $\lambda I - T$ is not bounded below, and so (b) implies (c). Conversely, if $\lambda I - T$ is not bounded below, then there is no constant $\alpha > 0$ such that $\alpha\|x\| \le \|(\lambda I - T)x\|$ for all $x \in \mathcal X$ or, equivalently, for every $\varepsilon > 0$ there exists a nonzero $y_\varepsilon$ in $\mathcal X$ such that $\|(\lambda I - T)y_\varepsilon\| < \varepsilon\|y_\varepsilon\|$. Set $x_\varepsilon = \|y_\varepsilon\|^{-1} y_\varepsilon$, and hence (c) implies (a).
Theorem 2.5. The approximate point spectrum $\sigma_{AP}(T)$ is a nonempty closed subset of $\mathbb C$ that includes the boundary $\partial\sigma(T)$ of the spectrum $\sigma(T)$.
Proof. Take an arbitrary $\lambda$ in $\partial\sigma(T)$. Recall that $\rho(T) \ne \varnothing$, $\sigma(T)$ is closed (Theorem 2.1), and $\partial\sigma(T) = \partial\rho(T) = \rho(T)^- \cap \sigma(T)$. Thus $\lambda$ is a point of adherence of $\rho(T)$, and so there exists a sequence $\{\lambda_n\}$ with each $\lambda_n$ in $\rho(T)$ such that $\lambda_n \to \lambda$. Since $(\lambda_n I - T) - (\lambda I - T) = (\lambda_n - \lambda)I$ for every $n$, it follows that $\lambda_n I - T \to \lambda I - T$ in $B[\mathcal X]$.
Claim. $\sup_n \|(\lambda_n I - T)^{-1}\| = \infty$.
Proof. Since each $\lambda_n$ lies in $\rho(T)$ and $\lambda \in \partial\sigma(T)$ does not lie in $\rho(T)$ (because $\sigma(T)$ is closed), it follows that the sequence $\{\lambda_n I - T\}$ of operators in $G[\mathcal X]$ converges in $B[\mathcal X]$ to $\lambda I - T \in B[\mathcal X] \setminus G[\mathcal X]$.
If $\lambda \in \sigma_P(T)$, then there exists $x \ne 0$ in $\mathcal X$ with $(\lambda I - T)x = 0$ such that, if $\sup_n \|(\lambda_n I - T)^{-1}\| < \infty$,
$0 \ne \|x\| = \|(\lambda_n I - T)^{-1}(\lambda_n I - T)x\| \le \sup_n \|(\lambda_n I - T)^{-1}\| \limsup_n \|(\lambda_n I - T)x\| \le \sup_n \|(\lambda_n I - T)^{-1}\|\,\|(\lambda I - T)x\| = 0$,
2.2 A Classical Partition of the Spectrum
33
which is a contradiction. If λ ∈ σP (T ), then N (λI − T ) = {0}, and hence there exists the inverse (λI − T )−1 on R(λI − T ) so that, for each n, (λn I − T )−1 − (λI − T )−1 = (λn I − T )−1 (λI − T ) − (λn I − T ) (λI − T )−1. If supn (λn I − T )−1 < ∞, then (λn I − T )−1 → (λI − T )−1 in B[X ], because (λn I − T ) → (λI − T ) in B[X ], and so (λI − T )−1 ∈ B[X ]. That is, (λI − T ) ∈ G[X ], which is again a contradiction (since λ ∈ σ(T )). This completes the proof of the claimed result. Since (λn I − T )−1 = supy=1 (λn I − T )−1 y, take a unit vector yn in X for which (λn I − T )−1 − n1 ≤ (λn I − T )−1 yn ≤ (λn I − T )−1 for each n. The above claim ensures that supn (λn I − T )−1 yn = ∞, and hence inf n (λn I − T )−1 yn −1 = 0, so that there exist subsequences of {λn } and {yn }, say {λk } and {yk }, for which (λk I − T )−1 yk −1 → 0. Set xk = (λk I − T )−1 yk −1 (λk I − T )−1 yk and get a sequence {xk } of unit vectors in X such that (λk I − T )xk = (λk I − T )−1 yk −1 . Hence (λI − T )xk = (λk I − T )xk − (λk − λ)xk ≤ (λk I − T )−1yk −1 + |λk − λ|. Since λk → λ, it then follows that (λI − T )xk → 0, and so λ ∈ σAP (T ) according to Theorem 2.4. Therefore, ∂σ(T ) ⊆ σAP (T ). This inclusion clearly implies that σAP (T ) = ∅ (for σ(T ) is closed and nonempty). Finally, take an arbitrary λ ∈ C \σAP (T ) so that λI − T is bounded below. Therefore, there exists an α > 0 for which αx ≤ (λI − T )x ≤ (νI − T )x + (λ − ν)x, and hence (α − |λ − ν|)x ≤ (νI − T )x, for all x ∈ X and ν ∈ C . Then νI − T is bounded below for every ν such that 0 < α − |λ − ν|. Equivalently, ν ∈ C \σAP (T )) for every ν sufficiently close to λ (i.e., if |λ − ν| < α). Thus the nonempty open ball Bα (λ) centered at λ is included in C \σAP (T ). Therefore C \σAP (T ) is open, and so σAP (T ) is closed. Remark. σAP (T ) = σ(T )\σR1(T ) is closed in C and includes ∂σ(T ) = ∂ρ(T ). So C \σR1(T ) = ρ(T ) ∪ σAP (T ) = ρ(T ) ∪ ∂ρ(T ) ∪ σAP (T ) = ρ(T )− ∪ σAP (T ) is closed in C . 
Outcome: σR1(T ) is an open subset of C . For the remainder of this section we assume that T lies in B[H], where H is a nonzero complex Hilbert space. This will bring forth some important
34
2. Spectrum
simplifications. A particularly useful instance of such simplifications is the formula for the residual spectrum in the next theorem. First we need the following piece of notation. If Λ is any subset of C , then set Λ∗ = λ ∈ C : λ ∈ Λ so that Λ∗∗ = Λ, (C \Λ)∗ = C \Λ∗, and (Λ1 ∪ Λ2 )∗ = Λ∗1 ∪ Λ∗2 . Theorem 2.6. If T ∗ ∈ B[H] is the adjoint of T ∈ B[H], then ρ(T ) = ρ(T ∗ )∗ ,
σ(T ) = σ(T ∗ )∗ ,
σC (T ) = σC (T ∗ )∗ ,
and the residual spectrum of T is given by the formula σR (T ) = σP (T ∗ )∗ \σP (T ). As for the subparts of the point and residual spectra, σP1(T ) = σR1(T ∗ )∗ ,
σP2(T ) = σR2(T ∗ )∗ ,
σP3(T ) = σP3(T ∗ )∗ ,
σP4(T ) = σP4(T ∗ )∗ .
For the compression and approximate point spectra we get σCP (T ) = σP (T ∗ )∗ ,
σAP (T ∗ )∗ = σ(T )\σP1(T ),
∂σ(T ) ⊆ σAP (T ) ∩ σAP (T ∗ )∗ = σ(T )\(σP1(T ) ∪ σR1(T )). Proof. Since S ∈ G[H] if and only if S ∗ ∈ G[H], we get ρ(T ) = ρ(T ∗ )∗ . Hence σ(T )∗ = (C \ρ(T ))∗ = C \ρ(T ∗ ) = σ(T ∗ ). Recall that R(S)− = R(S) if and only if R(S ∗ )− = R(S ∗ ), and N (S) = {0} if and only if R(S ∗ )⊥ = {0} (cf. Lemmas 1.4 and 1.5), which means that R(S ∗ )− = H. Thus σP1(T ) = σR1(T ∗ )∗, σP2(T ) = σR2(T ∗ )∗, σP3(T ) = σP3(T ∗ )∗, and σP4(T ) = σP4(T ∗ )∗. Applying the same argument, σC (T ) = σC (T ∗ )∗ and σCP (T ) = σP (T ∗ )∗ . Hence, σR (T ) = σCP (T )\σP (T ) implies
σR (T ) = σP (T ∗ )∗ \σP (T ).
Moreover, by using the above properties and the definition of σAP (T ∗ ) = σP (T ∗ ) ∪ σC (T ∗ ) ∪ σR2(T ∗ ) = σCP (T )∗ ∪ σC (T )∗ ∪ σP2(T )∗ , we get σAP (T ∗ )∗ = σCP (T ) ∪ σC (T ) ∪ σP2(T ) = σ(T )\σP1(T ). Therefore, σAP (T ∗ )∗ ∩ σAP (T ) = σ(T )\(σP1(T ) ∪ σR1(T )). But σ(T ) is closed and σR1(T ) is open (and so is σP1(T ) = σR1(T ∗ )∗ ) in C . This implies that σP1(T ) ∪ σR1(T ) ⊆ σ(T )◦ , where σ(T )◦ denotes the interior of σ(T ), and ∂σ(T ) ⊆ σ(T )\(σP1(T ) ∪ σR1(T )). Remark. We have just shown that σP1(T ) is an open subset of C .
2.3 Spectral Mapping
35
2.3 Spectral Mapping Spectral Mapping Theorems are pivotal results in spectral theory. Here we focus on the important particular case for polynomials. Further versions will be considered in Chapter 4. Let p: C → C be a polynomial with complex coefficients, take any subset Λ of C , and consider its image under p, viz., p(Λ) = p(λ) ∈ C : λ ∈ Λ . Theorem 2.7. (Spectral Mapping Theorem for Polynomials). Take an operator T ∈ B[X ] on a complex Banach space X . If p is any polynomial with complex coefficients, then σ(p(T )) = p(σ(T )). Proof. To avoid trivialities, let p: C → C be an arbitrary nonconstant polynomial with complex coefficients, n αi λi , with n ≥ 1 and αn = 0, p(λ) = i=0
for every λ ∈ C . Take an arbitrary ν ∈ C and consider the factorization n (λi − λ), ν − p(λ) = βn i=1
with βn = −1n+1 αn where {λi }ni=1 are the roots of ν − p(λ), so that νI − p(T ) = νI −
n i=0
αi T i = βn
n i=1
(λi I − T ).
If λi ∈ ρ(T ) for every i = 1, . . . , n, then βn ni=1 (λi I − T ) ∈ G[X ] so that ν ∈ ρ(p(T )). Thus if ν ∈ σ(p(T )), then there exists λj ∈ σ(T ) for some j = 1, . . . , n. However, λj is a root of ν − p(λ), that is, ν − p(λj ) = βn
n i=1
(λi − λj ) = 0,
and so p(λj ) = ν. Hence, if ν ∈ σ(p(T )), then ν = p(λj ) ∈ p(λ) ∈ C : λ ∈ σ(T ) = p(σ(T )) because λj ∈ σ(T ), and therefore σ(p(T )) ⊆ p(σ(T )). Conversely, if ν ∈ p(σ(T )) = {p(λ) ∈ C : λ ∈ σ(T )}, then ν = p(λ) for some λ ∈ σ(T ). Thus ν − p(λ) = 0 so that λ = λj for some j = 1, . . . , n, and hence
36
2. Spectrum
νI − p(T ) = βn
n i=1
= (λj I − T )βn
(λi I − T )
n j=i=1
(λi I − T ) = βn
n j=i=1
(λi I − T )(λj I − T )
since (λj I − T ) commutes with (λi I − T ) for every i. If ν ∈ ρ(p(T )), then (νI − p(T )) ∈ G[X ] so that n
(λi I − T ) νI − p(T ) −1 (λj I − T ) βn j=i=1
= νI − p(T ) νI − p(T ) −1 = I = νI − p(T ) −1 νI − p(T ) n = (νI − p(T ) −1 βn
j=i=1
(λi I − T ) (λj I − T ).
Then (λj I − T ) has a right and a left inverse (i.e., it is invertible), and so (λj I − T ) ∈ G[X ] by Theorem 1.1. Hence, λ = λj ∈ ρ(T ), which contradicts the fact that λ ∈ σ(T ). Conclusion: if ν ∈ p(σ(T )), then ν ∈ / ρ(p(T )), that is, ν ∈ σ(p(T )). Thus p(σ(T )) ⊆ σ(p(T )). In particular, σ(T n ) = σ(T )n for every n ≥ 0, that is, ν ∈ σ(T )n = {λn ∈ C : λ ∈ σ(T )} if and only if ν ∈ σ(T n ), and σ(αT ) = ασ(T ) for every α ∈ C , that is, ν ∈ ασ(T ) = {αλ ∈ C : λ ∈ σ(T )} if and only if ν ∈ σ(αT ). Also notice (even though this is not a particular case of the Spectral Mapping Theorem for polynomials) that if T ∈ G[X ], then σ(T −1 ) = σ(T )−1 , which means that ν ∈ σ(T )−1 = {λ−1 ∈ C : 0 = λ ∈ σ(T )} if and only if ν ∈ σ(T −1 ). Indeed, if T ∈ G[X ] (so that 0 ∈ ρ(T )) and if ν = 0, then −νT −1 (ν −1 I − T ) = νI − T −1 . Thus ν −1 ∈ ρ(T ) if and only if ν ∈ ρ(T −1 ). Also note that if T ∈ B[H], then σ(T ∗ ) = σ(T )∗ by Theorem 2.6, where H is a complex Hilbert space. The next result is an extension of the Spectral Mapping Theorem for polynomials which holds for normal operators in a Hilbert space H. If Λ1 and Λ2 are arbitrary subsets of C and p : C ×C → C is any polynomial in two variables (with complex coefficients), then set
2.3 Spectral Mapping
37
p(Λ1 , Λ2 ) = p(λ1 , λ2 ) ∈ C : λ1 ∈ Λ1 , λ2 ∈ Λ2 ;
in particular, with Λ∗ = {λ ∈ C : λ ∈ Λ}, p(Λ, Λ∗ ) = p(λ, λ) ∈ C : λ ∈ Λ . Theorem 2.8. (Spectral Mapping Theorem for Normal Operators). If T ∈ B[H] is normal and p(·, ·) is a polynomial in two variables, then σ(p(T, T ∗ )) = p(σ(T ), σ(T ∗ )) = p(λ, λ) ∈ C : λ ∈ σ(T ) . i j Proof. Take any normal operator T ∈ B[H]. If p(λ, λ) = n,m i,j=0 αi,j λ λ , then n,m ∗ i ∗j ∗ ∗ set p(T, T ) = i,j=0 αi,j T T = p(T , T ). Let P(T, T ) be the collection of all those polynomials p(T, T ∗ ), which is a commutative subalgebra of B[H] since T commutes with T ∗. Consider the collection T of all commutative subalgebras of B[H] containing T and T ∗, which is partially ordered (in the inclusion ordering) and nonempty (e.g., P(T, T ∗) ∈ T ). Moreover, every chain in T has an upper bound in T (the union of all subalgebras in a given chain of subalgebras in T is again a subalgebra in T ). Thus Zorn’s Lemma says that T has a maximal element, say A(T ). Outcome: If T is normal, then there is a maximal (thus closed) commutative subalgebra A(T ) of B[H] containing T and T ∗. Since P(T, T ∗) ⊆ A(T ) ∈ T, and every p(T, T ∗) ∈ P(T, T ∗) is normal, A(p(T, T ∗ )) = A(T ) for every nonconstant p(T, T ∗). Furthermore, Φ(p(T, T ∗ )) = p(Φ(T ), Φ(T ∗ )) for every homomorphism Φ : A(T ) → C . Thus, by Proposition 2.Q(b), ) . σ(p(T, T ∗ )) = p(Φ(T ), Φ(T ∗ )) ∈ C : Φ ∈ A(T )). ConTake a surjective homomorphism Φ: A(T ) → C (i.e., take any Φ ∈ A(T sider the Cartesian decomposition T = A + iB, where A, B ∈ B[H] are selfadjoint, and so T ∗ = A − iB (Proposition 1.O). Thus Φ(T ) = Φ(A) + iΦ(B) and Φ(T ∗ ) = Φ(A) − iΦ(B). Since A = 12 (T + T ∗ ) and B = − 2i (T − T ∗ ) lie in P(T, T ∗), we get A(A) = A(B) = A(T ). Moreover, since they are self-adjoint, )} = σ(A) ⊂ R and {Φ(B) ∈ C : Φ ∈ A(T )} = σ(B) ⊂ R {Φ(A) ∈ C : Φ ∈ A(T (Propositions 2.A and 2.Q(b)), and so Φ(A) ∈ R and Φ(B) ∈ R . Hence Φ(T ∗ ) = Φ(T ). 
Therefore, since σ(T ∗ ) = σ(T )∗ for every T ∈ B[H] by Theorem 2.6, and according to Proposition 2.Q(b), ) σ(p(T, T ∗ )) = p(Φ(T ), Φ(T )) ∈ C : Φ ∈ A(T )} = p(λ, λ) ∈ C : λ ∈ {Φ(T ) ∈ C : Φ ∈ A(T = p(λ, λ) ∈ C : λ ∈ σ(T ) = p(σ(T ), σ(T )∗ ) = p(σ(T ), σ(T ∗ )).
38
2. Spectrum
2.4 Spectral Radius The spectral radius of an operator T ∈ B[X ] on a nonzero complex Banach space X is the nonnegative number rσ (T ) = sup |λ| = max |λ|. λ∈σ(T )
λ∈σ(T )
The first identity in the above expression defines the spectral radius rσ (T ), and the second one is a consequence of the Weierstrass Theorem (cf. proof of Theorem 2.2) since σ(T ) = ∅ is compact in C and the function | |: C → R is continuous. A straightforward consequence of the Spectral Mapping Theorem for polynomials reads as follows. Corollary 2.9.
rσ (T n ) = rσ (T )n for every n ≥ 0.
Proof. Take an arbitrary nonnegative integer n. Theorem 2.7 ensures that σ(T n ) = σ(T )n . Hence ν ∈ σ(T n ) if and only if ν = λn for some λ ∈ σ(T ), n and so supν∈σ(T n ) |ν| = supλ∈σ(T ) |λn | = supλ∈σ(T ) |λ|n = supλ∈σ(T ) |λ| . If λ ∈ σ(T ), then |λ| ≤ T . This follows by the Neumann expansion of Theorem 1.3 (cf. proof of Theorem 2.1). Thus rσ (T ) ≤ T . Therefore, for every operator T ∈ B[X ], and for each nonnegative integer n, 0 ≤ rσ (T n ) = rσ (T )n ≤ T n ≤ T n. Thus rσ (T ) ≤ 1 if T is power bounded (i.e., if supn T n < ∞). Indeed, in this 1 case, rσ (T )n = rσ (T n ) ≤ supk T k and limn (supk T k ) n = 1, and so sup T n < ∞
implies
n
rσ (T ) ≤ 1.
Remark. If T is a nilpotent operator (i.e., if T n = O for some n ≥ 1), then rσ (T ) = 0, and so σ(T ) = σP (T ) = {0} (cf. Proposition 2.J). An operator T ∈ B[X ] is quasinilpotent if rσ (T ) = 0 (i.e., if σ(T ) = {0}). Thus every nilpotent is quasinilpotent. Since σP (T ) may be empty for a quasinilpotent operator (cf. Proposition 2.N), these classes are related by proper inclusion: Nilpotent
⊂
Quasinilpotent.
The next result is the well-known Gelfand–Beurling formula for the spectral radius. Its proof requires another piece of elementary complex analysis, viz., every analytic function has a power series representation. That is, if f : Λ → C is analytic, and if Bα,β (ν) = {λ ∈ C : 0 ≤ α < |λ − ν| < β} lies in , then f has a unique Laurent expansion about the point the open set Λ ⊆ C ∞ ν, namely, f (λ) = k=−∞ γk (λ − ν)k for every λ ∈ Bα,β (ν). Theorem 2.10. (Gelfand–Beurling Formula). 1
rσ (T ) = lim T n n . n
2.4 Spectral Radius
39
Proof. Since rσ (T )n ≤ T n for every positive integer n, and since the limit 1 of the sequence {T n n} exists by Lemma 1.10, we get 1
rσ (T ) ≤ lim T n n . n
For the reverse inequality, proceed as follows. Consider the Neumann expansion (Theorem 1.3) for the resolvent function RT : ρ(T ) → G[X ], ∞ RT (λ) = (λI − T )−1 = λ−1 T k λ−k k=0
for every λ ∈ ρ(T ) such that T < |λ|, where the above series converges in the (uniform) topology of B[X ]. Take an arbitrary bounded linear functional ∗ η : B[X ] → C in B[X ] (cf. proof of Theorem 2.2). Since η is continuous, ∞ η(T k )λ−k η(RT (λ)) = λ−1 k=0
for every λ ∈ ρ(T ) such that T < |λ|. Claim. The above displayed identity holds whenever rσ (T ) < |λ|. k −k Proof. λ−1 ∞ is a Laurent expansion of η(RT (λ)) about the orik=0 η(T )λ gin for every λ ∈ ρ(T ) such that T < |λ|. But η ◦ RT is analytic on ρ(T ) (cf. Claim 2 in Theorem 2.2) so that η(RT (λ)) has a unique Laurent expansion about the origin for every λ ∈ ρ(T and hence for every λ ∈ C such ), ∞ that rσ (T ) < |λ|. Then η(RT (λ)) = λ−1 k=0 η(T k )λ−k, which holds whenever rσ (T ) ≤ T < |λ|, must be the Laurent expansion about the origin for every λ ∈ C such that rσ (T ) < |λ|, thus proving the claimed result. ∞ k −k Hence, if rσ (T ) < |λ|, then the series of complex numbers k=0 η(T )λ −1 k k −k converges, and so η((λ T ) ) = η(T )λ → 0, for every η in the dual space ∗ B[X ] of B[X ]. This means that the B[X ]-valued sequence {(λ−1 T )k } converges weakly. Then it is bounded (in the uniform topology of B[X ] as a consequence of the Banach–Steinhaus Theorem). That is, the operator λ−1 T is power bounded. Thus |λ|−n T n ≤ supk (λ−1 T )k < ∞, so that 1 1 |λ|−1 T n n ≤ sup (λ−1 T )k n , k 1
1
for every n. Therefore, |λ|−1 limn T n n ≤ 1, and so limn T n n ≤ |λ| for 1 every λ ∈ C such that rσ (T ) < |λ|. That is, limn T n n ≤ rσ (T ) + ε for every ε > 0. Outcome: 1 lim T n n ≤ rσ (T ). n
What Theorem 2.10 says is that rσ (T ) = r(T ), where rσ (T ) is the spectral 1 radius of T and r(T ) is the limit of the sequence {T n n } (whose existence was proved in Lemma 1.10). We shall then adopt one and the same notation (the simplest, of course) for both of them. Thus, from now on, we write
40
2. Spectrum 1
r(T ) = sup |λ| = max |λ| = lim T n n . λ∈σ(T )
λ∈σ(T )
n
A normaloid was defined in Section 1.6 as an operator T for which r(T ) = T . Since rσ (T ) = r(T ), it follows that a normaloid operator acting on a complex Banach space is precisely an operator whose norm coincides with the spectral radius. Moreover, since on a complex Hilbert space H every normal operator is normaloid, and so is every nonnegative operator, and since T ∗ T is always nonnegative, it follows that, for every T ∈ B[H], r(T ∗ T ) = r(T T ∗ ) = T ∗ T = T T ∗ = T 2 = T ∗ 2 . Further useful properties of the spectral radius follow from Theorem 2.10: r(αT ) = |α| r(T ) for every α ∈ C and, if H is a complex Hilbert space and T ∈ B[H], then r(T ∗ ) = r(T ). An important application of the Gelfand–Beurling formula is the characterization of uniform stability in terms of the spectral radius. An operator T in B[X ] is uniformly stable if the power sequence {T n } converges uniformly u to the null operator (i.e., if T n → 0). Notation: T n −→ O. Corollary 2.11. If T ∈ B[X ] is an operator on a complex Banach space X , then the following assertions are pairwise equivalent . u (a) T n −→ O.
(b) r(T ) < 1. (c) T n ≤ β αn for every n ≥ 0, for some β ≥ 1 and some α ∈ (0, 1). Proof. Since r(T )n = r(T n ) ≤ T n for each n ≥ 1, it follows that T n → 0 implies r(T ) < 1. Now suppose r(T ) < 1 and take any α in (r(T ), 1). Since 1 r(T ) = limn T n n (Gelfand–Beurling formula), there is an integer nα ≥ 1 n such that T ≤ αn for every n ≥ nα . Thus T n ≤ β αn for every n ≥ 0 with β = max 0≤n≤nα T n α−nα , which clearly implies T n → 0. An operator T ∈ B[H] on a complex Hilbert space H is strongly stable or weakly stable if the power sequence {T n } converges strongly or weakly to the null operator (i.e., if T n x → 0 for every x in H, or T n x ; y → 0 for every x and y in H — equivalently, T n x ; x → 0 for every x in H — cf. Section s w 1.1). These are denoted by T n −→ O and T n −→ O, respectively. Therefore, from what we have considered so far, u O =⇒ r(T ) < 1 ⇐⇒ T n −→ s w O =⇒ T n −→ O =⇒ sup T n < ∞ =⇒ r(T ) ≤ 1. T n −→ n
The converses to the above one-way implications fail in general. The next result applies the preceding characterization of uniform stability to extend the Neumann expansion of Theorem 1.3.

Corollary 2.12. Let T ∈ B[X] be an operator on a complex Banach space, and let λ ∈ C be any nonzero complex number.
(a) r(T) < |λ| if and only if Σ_{k=0}^{n} (T/λ)ᵏ converges uniformly. In this case we get λ ∈ ρ(T) and R_T(λ) = (λI − T)⁻¹ = (1/λ) Σ_{k=0}^{∞} (T/λ)ᵏ, where Σ_{k=0}^{∞} (T/λ)ᵏ denotes the uniform limit of Σ_{k=0}^{n} (T/λ)ᵏ.
(b) If r(T) = |λ| and Σ_{k=0}^{n} (T/λ)ᵏ converges strongly, then λ ∈ ρ(T) and R_T(λ) = (λI − T)⁻¹ = (1/λ) Σ_{k=0}^{∞} (T/λ)ᵏ, where Σ_{k=0}^{∞} (T/λ)ᵏ denotes the strong limit of Σ_{k=0}^{n} (T/λ)ᵏ.
(c) If |λ| < r(T), then Σ_{k=0}^{n} (T/λ)ᵏ does not converge strongly.
Proof. If Σ_{k=0}^{n} (T/λ)ᵏ converges uniformly, then (T/λ)ⁿ −u→ O, and therefore |λ|⁻¹ r(T) = r(T/λ) < 1 by Corollary 2.11. On the other hand, if r(T) < |λ|, then λ ∈ ρ(T), so that λI − T ∈ G[X], and also r(T/λ) = |λ|⁻¹ r(T) < 1. Hence ‖(T/λ)ⁿ‖ ≤ β αⁿ for every n ≥ 0, for some β ≥ 1 and α ∈ (0, 1), according to Corollary 2.11, and so Σ_{k=0}^{∞} ‖(T/λ)ᵏ‖ < ∞, which means that {(T/λ)ᵏ} is an absolutely summable sequence in B[X]. Now follow the steps in the proof of Theorem 1.3 to conclude the results in (a). If Σ_{k=0}^{n} (T/λ)ᵏ converges strongly, then (T/λ)ⁿx → 0 in X for every x ∈ X, so that supₙ ‖(T/λ)ⁿx‖ < ∞ for every x ∈ X, and hence supₙ ‖(T/λ)ⁿ‖ < ∞ (by the Banach–Steinhaus Theorem). Thus |λ|⁻¹ r(T) = r(T/λ) ≤ 1, which proves (c). Moreover,

(λI − T) (1/λ) Σ_{k=0}^{n} (T/λ)ᵏ = (1/λ) Σ_{k=0}^{n} (T/λ)ᵏ (λI − T) = I − (T/λ)ⁿ⁺¹ −s→ I.

Therefore (λI − T)⁻¹ = (1/λ) Σ_{k=0}^{∞} (T/λ)ᵏ, where Σ_{k=0}^{∞} (T/λ)ᵏ ∈ B[X] is the strong limit of Σ_{k=0}^{n} (T/λ)ᵏ, which concludes the proof of (b). □
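The Neumann expansion of Corollary 2.12(a) can be checked directly for matrices: when r(T) < |λ|, the partial sums (1/λ) Σ_{k=0}^{n} (T/λ)ᵏ converge in operator norm to the resolvent (λI − T)⁻¹. A minimal numerical sketch (the matrix and the point λ are arbitrary choices for illustration):

```python
import numpy as np

T = np.array([[0.2, 0.5],
              [0.1, 0.3]])
lam = 1.0                       # |lam| = 1 > r(T) ~ 0.48, so the series converges

# Partial sum (1/lam) * sum_{k=0}^{n} (T/lam)^k of the Neumann expansion.
S = np.zeros_like(T)
term = np.eye(2)
for _ in range(200):
    S += term
    term = term @ (T / lam)
resolvent_series = S / lam

resolvent_exact = np.linalg.inv(lam * np.eye(2) - T)
print(np.allclose(resolvent_series, resolvent_exact))
```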
2.5 Numerical Radius

The numerical range W(T) of an operator T ∈ B[H] acting on a nonzero complex Hilbert space H is the (nonempty) set consisting of the inner products ⟨Tx ; x⟩ for unit vectors x ∈ H; that is,

W(T) = {λ ∈ C : λ = ⟨Tx ; x⟩ for some ‖x‖ = 1}.

It can be shown that W(T) is a convex set in C (see, e.g., [50, Problem 210]), and it is clear that W(T*) = W(T)*.
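For matrices, W(T) can be explored by sampling random unit vectors: the sampled inner products ⟨Tx ; x⟩ fill out a convex region of the plane. A rough Monte Carlo sketch, not from the text (for the 2×2 nilpotent Jordan block used here, W(T) is known to be the closed disk of radius 1/2 about the origin):

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.array([[0.0, 1.0],
              [0.0, 0.0]], dtype=complex)   # W(T) = closed disk of radius 1/2

points = []
for _ in range(5000):
    x = rng.normal(size=2) + 1j * rng.normal(size=2)
    x /= np.linalg.norm(x)
    points.append(np.vdot(x, T @ x))        # <Tx ; x> for a unit vector x

points = np.array(points)
# All sampled points lie in |z| <= 1/2 (up to rounding), and the sampled
# numerical radius approaches w(T) = 1/2 from below.
print(abs(points).max() <= 0.5 + 1e-9, abs(points).max() > 0.45)
```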
Theorem 2.13. σP(T) ∪ σR(T) ⊆ W(T) and σ(T) ⊆ W(T)⁻.
Proof. Take any operator T ∈ B[H] on a nonzero complex Hilbert space H. If λ ∈ σP(T), then there is a unit vector x ∈ H such that Tx = λx. Hence ⟨Tx ; x⟩ = λ‖x‖² = λ, so that λ ∈ W(T). If λ ∈ σR(T), then λ̄ ∈ σP(T*) by Theorem 2.6, and so λ̄ ∈ W(T*). Thus λ ∈ W(T). Therefore σP(T) ∪ σR(T) ⊆ W(T). If λ ∈ σAP(T), then there is a sequence {xₙ} of unit vectors in H such that ‖(λI − T)xₙ‖ → 0 by Theorem 2.4. Hence 0 ≤ |λ − ⟨Txₙ ; xₙ⟩| = |⟨(λI − T)xₙ ; xₙ⟩| ≤ ‖(λI − T)xₙ‖ → 0, so that ⟨Txₙ ; xₙ⟩ → λ. Since each ⟨Txₙ ; xₙ⟩ lies in W(T), the classical Closed Set Theorem says that λ ∈ W(T)⁻. Thus σAP(T) ⊆ W(T)⁻, and so σ(T) = σR(T) ∪ σAP(T) ⊆ W(T)⁻. □
The numerical radius of an operator T ∈ B[H] on a nonzero complex Hilbert space H is the nonnegative number

w(T) = sup_{λ∈W(T)} |λ| = sup_{‖x‖=1} |⟨Tx ; x⟩|.

It is readily verified that w(T*) = w(T) and w(T*T) = ‖T‖².
Unlike the spectral radius, the numerical radius is a norm on B[H]. That is, 0 ≤ w(T) for every T ∈ B[H] and 0 < w(T) if T ≠ O, w(αT) = |α| w(T), and w(T + S) ≤ w(T) + w(S) for every α ∈ C and every S, T ∈ B[H]. However, the numerical radius does not have the operator norm property, in the sense that the inequality w(ST) ≤ w(S) w(T) is not true for all operators S, T ∈ B[H]. Nevertheless, the power inequality holds: w(Tⁿ) ≤ w(T)ⁿ for all T ∈ B[H] and every positive integer n (see, e.g., [50, p. 118 and Problem 221]). Moreover, the numerical radius is a norm equivalent to the (induced uniform) operator norm of B[H] and dominates the spectral radius, as in the next theorem.

Theorem 2.14. 0 ≤ r(T) ≤ w(T) ≤ ‖T‖ ≤ 2 w(T).
Proof. Take any T ∈ B[H]. Since σ(T) ⊆ W(T)⁻, we get r(T) ≤ w(T). Moreover, w(T) = sup_{‖x‖=1} |⟨Tx ; x⟩| ≤ sup_{‖x‖=1} ‖Tx‖ = ‖T‖. Now recall that, by the polarization identity (cf. Proposition 1.A),

⟨Tx ; y⟩ = ¼ (⟨T(x + y) ; (x + y)⟩ − ⟨T(x − y) ; (x − y)⟩ + i⟨T(x + iy) ; (x + iy)⟩ − i⟨T(x − iy) ; (x − iy)⟩)
for every x, y in H. Therefore, since |⟨Tz ; z⟩| ≤ sup_{‖u‖=1} |⟨Tu ; u⟩| ‖z‖² = w(T) ‖z‖² for every z ∈ H, it follows that, for every x, y ∈ H,

|⟨Tx ; y⟩| ≤ ¼ (|⟨T(x + y) ; (x + y)⟩| + |⟨T(x − y) ; (x − y)⟩| + |⟨T(x + iy) ; (x + iy)⟩| + |⟨T(x − iy) ; (x − iy)⟩|)
≤ ¼ w(T) (‖x + y‖² + ‖x − y‖² + ‖x + iy‖² + ‖x − iy‖²).

So it follows by the parallelogram law (cf. Proposition 1.A) that |⟨Tx ; y⟩| ≤ w(T)(‖x‖² + ‖y‖²) ≤ 2 w(T) whenever ‖x‖ = ‖y‖ = 1. Thus, since ‖T‖ = sup_{‖x‖=‖y‖=1} |⟨Tx ; y⟩| (see, e.g., [66, Corollary 5.71]), it follows that ‖T‖ ≤ 2 w(T). □

An operator T ∈ B[H] is spectraloid if r(T) = w(T). Recall that an operator is normaloid if r(T) = ‖T‖ or, equivalently, if ‖Tⁿ‖ = ‖T‖ⁿ for every n ≥ 1 (see Theorem 1.11). The next result is a straightforward application of the previous theorem.

Corollary 2.15. Every normaloid operator is spectraloid.

Indeed, r(T) = ‖T‖ implies r(T) = w(T), but r(T) = ‖T‖ also implies w(T) = ‖T‖ (according to Theorem 2.14). Thus w(T) = ‖T‖ is a property of every normaloid operator on H. Actually, this can be viewed as a third definition of a normaloid operator on a complex Hilbert space.

Theorem 2.16. T ∈ B[H] is normaloid if and only if w(T) = ‖T‖.

Proof. Half of the proof was presented above. It remains to prove that w(T) = ‖T‖ implies r(T) = ‖T‖. Suppose w(T) = ‖T‖ (and T ≠ O to avoid trivialities). Recall that W(T)⁻ is compact in C (for W(T) is clearly bounded). Thus max_{λ∈W(T)⁻} |λ| = sup_{λ∈W(T)⁻} |λ| = sup_{λ∈W(T)} |λ| = w(T) = ‖T‖, and therefore there exists λ ∈ W(T)⁻ such that |λ| = ‖T‖. Since W(T) is nonempty, λ is a point of adherence of W(T), and so there is a sequence {λₙ} with each λₙ in W(T) such that λₙ → λ. This means that there is a sequence {xₙ} of unit vectors in H (i.e., ‖xₙ‖ = 1) such that λₙ = ⟨Txₙ ; xₙ⟩ → λ, where |λ| = ‖T‖ ≠ 0. Hence, if S = λ⁻¹T ∈ B[H], then ⟨Sxₙ ; xₙ⟩ → 1.

Claim. ‖Sxₙ‖ → 1 and Re⟨Sxₙ ; xₙ⟩ → 1.

Proof. |⟨Sxₙ ; xₙ⟩| ≤ ‖Sxₙ‖ ≤ ‖S‖ = 1 for each n. But ⟨Sxₙ ; xₙ⟩ → 1 implies |⟨Sxₙ ; xₙ⟩| → 1 (and so ‖Sxₙ‖ → 1) and also Re⟨Sxₙ ; xₙ⟩ → 1 (since |·| and Re(·) are continuous functions), which concludes the proof. □
Then ‖(I − S)xₙ‖² = ‖Sxₙ − xₙ‖² = ‖Sxₙ‖² − 2 Re⟨Sxₙ ; xₙ⟩ + ‖xₙ‖² → 0, so that 1 ∈ σAP(S) ⊆ σ(S) (cf. Theorem 2.4). Hence r(S) ≥ 1, and so r(T) = r(λS) = |λ| r(S) ≥ |λ| = ‖T‖, which implies that r(T) = ‖T‖ (because r(T) ≤ ‖T‖ for every operator T). □

Remark. If T ∈ B[H] is spectraloid and quasinilpotent, then T = O. In fact, if w(T) = 0, then T = O (since the numerical radius is a norm — also see Theorem 2.14); in particular, if w(T) = r(T) = 0, then T = O. Therefore, the unique normal (or hyponormal, or normaloid, or spectraloid) quasinilpotent operator is the null operator.

Corollary 2.17. If there exists λ ∈ W(T) such that |λ| = ‖T‖, then T is normaloid and λ ∈ σP(T). In other words, if there exists a unit vector x such that ‖T‖ = |⟨Tx ; x⟩|, then r(T) = w(T) = ‖T‖ and ⟨Tx ; x⟩ ∈ σP(T).

Proof. If λ ∈ W(T) is such that |λ| = ‖T‖, then w(T) = ‖T‖ (Theorem 2.14), so that T is normaloid (Theorem 2.16). Moreover, since λ = ⟨Tx ; x⟩ for some unit vector x, it follows that ‖T‖ = |λ| = |⟨Tx ; x⟩| ≤ ‖Tx‖ ‖x‖ ≤ ‖T‖, and hence |⟨Tx ; x⟩| = ‖Tx‖ ‖x‖. That is, the Schwarz inequality becomes an identity, which implies that Tx = αx for some α ∈ C (see, e.g., [66, Problem 5.2]). Thus α ∈ σP(T). But α = α‖x‖² = ⟨αx ; x⟩ = ⟨Tx ; x⟩ = λ. □
2.6 Spectrum of Compact Operators

The spectral theory of compact operators plays a central role in the Spectral Theorem for compact normal operators of the next chapter. Normal operators were defined on Hilbert spaces; thus we keep on working with compact operators on Hilbert spaces, as we did in Section 1.8, although the spectral theory of compact operators can be equally developed on nonzero complex Banach spaces. So we assume that all operators in this section act on a nonzero complex Hilbert space H. The main result for characterizing the spectrum of compact operators is the Fredholm Alternative of Corollary 1.20, which can be restated as follows.

Theorem 2.18. (Fredholm Alternative). Take T ∈ B∞[H]. If λ ∈ C\{0}, then λ ∈ ρ(T) ∪ σP(T). Equivalently, σ(T)\{0} = σP(T)\{0}. Moreover, if λ ∈ C\{0}, then dim N(λI − T) = dim N(λ̄I − T*) < ∞, so that λ ∈ ρ(T) ∪ σP4(T). Equivalently, σ(T)\{0} = σP4(T)\{0}.

Proof. Take a compact operator T on a Hilbert space H and a nonzero scalar λ in C. Corollary 1.20 and the diagram of Section 2.2 ensure that
λ ∈ ρ(T) ∪ σP1(T) ∪ σR1(T) ∪ σP4(T). Also by Corollary 1.20, N(λI − T) = {0} if and only if N(λ̄I − T*) = {0}, so that λ ∈ σP(T) if and only if λ̄ ∈ σP(T*). Thus λ ∉ σP1(T) ∪ σR1(T) by Theorem 2.6, so that λ ∈ ρ(T) ∪ σP4(T) or, equivalently, λ ∈ ρ(T) ∪ σP(T) (since λ ∉ σP1(T) ∪ σP2(T) ∪ σP3(T)). Therefore, σ(T)\{0} = σP(T)\{0} = σP4(T)\{0}. □
The scalar 0 may be anywhere. That is, if T ∈ B∞[H], then λ = 0 may lie in σP(T), σR(T), σC(T), or ρ(T). However, if T is a compact operator on a nonzero space H and 0 ∈ ρ(T), then H must be finite dimensional. Indeed, if 0 ∈ ρ(T), then T⁻¹ ∈ B[H], so that I = T⁻¹T is compact (since B∞[H] is an ideal of B[H]), which implies that H is finite dimensional (cf. Proposition 1.Y). The preceding theorem is in fact a rewriting of the Fredholm Alternative (and it is also referred to as the Fredholm Alternative). It will be applied often from now on. Here is a first application. Let B0[H] denote the class of all finite-rank operators on H (i.e., the class of all operators from B[H] with a finite-dimensional range). Recall that B0[H] ⊆ B∞[H] (finite-rank operators are compact — cf. Proposition 1.X). Let #A denote the cardinality of a set A, so that #A < ∞ means "A is a finite set".

Corollary 2.19. If T ∈ B0[H], then σ(T) = σP(T) = σP4(T) and #σ(T) < ∞.
Proof. If dim H < ∞, then an injective operator is surjective, and linear manifolds are closed (see, e.g., [66, Problem 2.18 and Corollary 4.29]), and so the diagram of Section 2.2 says that σ(T) = σP(T) = σP4(T) (for σP1(T) = σR1(T*)* according to Theorem 2.6). On the other hand, suppose dim H = ∞. Since B0[H] ⊆ B∞[H], Theorem 2.18 says that σ(T)\{0} = σP(T)\{0} = σP4(T)\{0}. Since dim R(T) < ∞ and dim H = ∞, it follows that R(T)⁻ = R(T) ≠ H and N(T) ≠ {0} (because dim N(T) + dim R(T) = dim H; see, e.g., [66, Problem 2.17]). Then 0 ∈ σP4(T) (cf. diagram of Section 2.2), and therefore σ(T) = σP(T) = σP4(T). If σP(T) is infinite, then there exists an infinite set of linearly independent eigenvectors of T (Theorem 2.3). Since every eigenvector of T lies in R(T), this implies that dim R(T) = ∞ (because every linearly independent subset of a linear space is included in some Hamel basis — see, e.g., [66, Theorem 2.5]), which is a contradiction. Conclusion: σP(T) must be finite. □

In particular, the above result clearly holds if H is finite dimensional since, as we saw above, dim H < ∞ implies B[H] = B0[H].
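Corollary 2.19 is easy to confirm for matrices viewed as finite-rank operators: a rank-two operator on a large space has at most three distinct points in its spectrum (the two eigenvalues contributed by the range, plus 0 from the nontrivial kernel). A quick sketch with an arbitrary random example:

```python
import numpy as np

rng = np.random.default_rng(1)
# A rank-2 operator on a 50-dimensional space: T = u1 v1^t + u2 v2^t.
u = rng.normal(size=(50, 2))
v = rng.normal(size=(50, 2))
T = u[:, [0]] @ v[:, [0]].T + u[:, [1]] @ v[:, [1]].T

eigs = np.linalg.eigvals(T)
distinct = set(np.round(eigs, 6))
# sigma(T) = sigma_P(T) is finite: at most rank(T) + 1 = 3 distinct points,
# and 0 is an eigenvalue since dim N(T) = dim H - dim R(T) = 48 > 0.
print(len(distinct) <= 3, np.any(np.isclose(eigs, 0)))
```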
Corollary 2.20. Take an arbitrary compact operator T ∈ B∞[H].
(a) 0 is the only possible accumulation point of σ(T).
(b) If λ ∈ σ(T)\{0}, then λ is an isolated point of σ(T).
(c) σ(T)\{0} is a discrete subset of C.
(d) σ(T) is countable.

Proof. Let T be a compact operator on H.

Claim. An infinite sequence of distinct points of σ(T) converges to zero.

Proof. Let {λₙ}_{n=1}^∞ be an infinite sequence of distinct points of σ(T). Without loss of generality, suppose that every λₙ is nonzero. Since T is compact and 0 ≠ λₙ ∈ σ(T), it follows by Theorem 2.18 that λₙ ∈ σP(T). Let {xₙ}_{n=1}^∞ be a sequence of eigenvectors associated with {λₙ}_{n=1}^∞ (i.e., Txₙ = λₙxₙ with each xₙ ≠ 0), which is a sequence of linearly independent vectors by Theorem 2.3. For each n ≥ 1, set Mₙ = span{xᵢ}_{i=1}^n, which is a subspace of H with dim Mₙ = n, and Mₙ ⊂ Mₙ₊₁ for every n ≥ 1 (because {xᵢ}_{i=1}^{n+1} is linearly independent, and so xₙ₊₁ lies in Mₙ₊₁\Mₙ). From now on the argument is similar to that in the proof of Theorem 1.18. Since each Mₙ is a proper subspace of the Hilbert space Mₙ₊₁, there exists yₙ₊₁ in Mₙ₊₁ with ‖yₙ₊₁‖ = 1 for which ½ < inf_{u∈Mₙ} ‖yₙ₊₁ − u‖. Write yₙ₊₁ = Σ_{i=1}^{n+1} αᵢxᵢ in Mₙ₊₁, so that

(λₙ₊₁I − T)yₙ₊₁ = Σ_{i=1}^{n+1} αᵢ(λₙ₊₁ − λᵢ)xᵢ = Σ_{i=1}^{n} αᵢ(λₙ₊₁ − λᵢ)xᵢ ∈ Mₙ.

Recall that λₙ ≠ 0 for all n, take any pair of integers 1 ≤ m < n, and set

y = yₘ − λₘ⁻¹(λₘI − T)yₘ + λₙ⁻¹(λₙI − T)yₙ,

so that T(λₘ⁻¹yₘ) − T(λₙ⁻¹yₙ) = y − yₙ. Since y lies in Mₙ₋₁,

½ < ‖y − yₙ‖ = ‖T(λₘ⁻¹yₘ) − T(λₙ⁻¹yₙ)‖,

which implies that the sequence {T(λₙ⁻¹yₙ)} has no convergent subsequence. Thus, since T is compact, Proposition 1.S ensures that {λₙ⁻¹yₙ} has no bounded subsequence. That is, sup_k |λ_k|⁻¹ = sup_k ‖λ_k⁻¹y_k‖ = ∞, and so inf_k |λ_k| = 0, for every subsequence {λ_k}_{k=1}^∞ of {λₙ}_{n=1}^∞. Thus λₙ → 0, which concludes the proof of the claimed result. □

(a) Thus, if λ ≠ 0, then there is no sequence of distinct points in σ(T) that converges to λ; that is, λ ≠ 0 is not an accumulation point of σ(T).
(b) Therefore, every λ in σ(T)\{0} is not an accumulation point of σ(T); equivalently, every λ in σ(T)\{0} is an isolated point of σ(T).

(c) Hence σ(T)\{0} consists entirely of isolated points, which means that σ(T)\{0} is a discrete subset of C.

(d) Since a discrete subset of a separable metric space is countable (see, e.g., [66, Example 3.Q]), and since C is separable, σ(T)\{0} is countable. □

Corollary 2.21. If an operator T ∈ B[H] is compact and normaloid, then σP(T) ≠ ∅ and there exists λ ∈ σP(T) such that |λ| = ‖T‖.

Proof. Suppose T is normaloid (i.e., r(T) = ‖T‖). Thus σ(T) = {0} only if T = O. If T = O and H ≠ {0}, then 0 ∈ σP(T) and ‖T‖ = 0. If T ≠ O, then σ(T) ≠ {0} and ‖T‖ = r(T) = max_{λ∈σ(T)} |λ|, so that there exists λ ≠ 0 in σ(T) such that |λ| = ‖T‖. Moreover, if T is compact and σ(T) ≠ {0}, then ∅ ≠ σ(T)\{0} ⊆ σP(T) by Theorem 2.18. Hence r(T) = max_{λ∈σ(T)} |λ| = max_{λ∈σP(T)} |λ| = ‖T‖. Thus there exists λ ∈ σP(T) such that |λ| = ‖T‖. □

Corollary 2.22. Every compact hyponormal operator is normal.

Proof. Suppose T ∈ B[H] is a compact hyponormal operator on a nonzero complex Hilbert space H. Corollary 2.21 says that σP(T) ≠ ∅. Consider the subspace M = (Σ_{λ∈σP(T)} N(λI − T))⁻ of Theorem 1.16, with {λ_γ}_{γ∈Γ} = σP(T). Observe that σP(T|M⊥) = ∅. Indeed, if there is a λ ∈ σP(T|M⊥), then there exists 0 ≠ x ∈ M⊥ such that λx = T|M⊥ x = Tx, and so x ∈ N(λI − T) ⊆ M, which is a contradiction. Moreover, recall that T|M⊥ is compact and hyponormal (Propositions 1.O and 1.U). Thus, if M⊥ ≠ {0}, then Corollary 2.21 says that σP(T|M⊥) ≠ ∅, which is another contradiction. Therefore M⊥ = {0}, so that M = H (see Section 1.3), and hence T = T|H = T|M is normal according to Theorem 1.16. □
2.7 Additional Propositions

Proposition 2.A. Let H ≠ {0} be a complex Hilbert space and let T denote the unit circle about the origin of the complex plane.
(a) If H ∈ B[H] is hyponormal, then σP(H)* ⊆ σP(H*) and σR(H*) = ∅.
(b) If N ∈ B[H] is normal, then σP(N*) = σP(N)* and σR(N) = ∅.
(c) If U ∈ B[H] is unitary, then σ(U) ⊆ T.
(d) If A ∈ B[H] is self-adjoint, then σ(A) ⊂ R.
(e) If Q ∈ B[H] is nonnegative, then σ(Q) ⊂ [0, ∞).
(f) If R ∈ B[H] is strictly positive, then σ(R) ⊂ [α, ∞) for some α > 0.
(g) If E ∈ B[H] is a nontrivial projection, then σ(E) = σP(E) = {0, 1}.
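Parts (c)–(e) of Proposition 2.A are easy to visualize with matrices. A small numerical sanity check (the random matrices and seed are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)
A0 = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

A = A0 + A0.conj().T          # self-adjoint: sigma(A) is real
Q = A0.conj().T @ A0          # nonnegative:  sigma(Q) lies in [0, inf)
U = np.linalg.qr(A0)[0]       # unitary:      sigma(U) lies on the unit circle

print(np.allclose(np.linalg.eigvals(A).imag, 0),
      np.all(np.linalg.eigvalsh(Q) >= -1e-10),
      np.allclose(abs(np.linalg.eigvals(U)), 1))
```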
Proposition 2.B. Similarity preserves the spectrum and its parts, and so it preserves the spectral radius. That is, let H and K be nonzero complex Hilbert spaces. For every T ∈ B[H] and W ∈ G[H, K],
(a) σP(T) = σP(W T W⁻¹),
(b) σR(T) = σR(W T W⁻¹),
(c) σC(T) = σC(W T W⁻¹).
Hence σ(T) = σ(W T W⁻¹), ρ(T) = ρ(W T W⁻¹), and r(T) = r(W T W⁻¹). Unitary equivalence also preserves the norm: if W is a unitary transformation, then, in addition, ‖T‖ = ‖W T W⁻¹‖.

Proposition 2.C. σ(ST)\{0} = σ(TS)\{0} for every S, T ∈ B[H].

Proposition 2.D. If Q ∈ B[H] is nonnegative, then σ(Q^{1/2}) = σ(Q)^{1/2}.

Proposition 2.E. Take any operator T ∈ B[H]. Let d denote the usual distance in C. If λ ∈ ρ(T), then r((λI − T)⁻¹) = [d(λ, σ(T))]⁻¹. If T is hyponormal and λ ∈ ρ(T), then ‖(λI − T)⁻¹‖ = [d(λ, σ(T))]⁻¹.

Proposition 2.F. Let {H_k} be a collection of Hilbert spaces, let {T_k} be a (similarly indexed) collection of operators with each T_k in B[H_k], and consider the (orthogonal) direct sum ⊕_k T_k in B[⊕_k H_k]. Then
(a) σP(⊕_k T_k) = ⋃_k σP(T_k),
(b) σ(⊕_k T_k) = ⋃_k σ(T_k) if the collection {T_k} is finite.
In general (if the collection {T_k} is not finite), then
(c) (⋃_k σ(T_k))⁻ ⊆ σ(⊕_k T_k), and the inclusion may be proper.
However, if ‖(λI − T_k)⁻¹‖ = [d(λ, σ(T_k))]⁻¹ for each k and every λ ∈ ρ(T_k), then
(d) (⋃_k σ(T_k))⁻ = σ(⊕_k T_k), which happens whenever each T_k is hyponormal.

Proposition 2.G. An operator T ∈ B[X] on a complex Banach space is normaloid if and only if there is a λ ∈ σ(T) such that |λ| = ‖T‖.

Proposition 2.H. For every operator T ∈ B[H] on a complex Hilbert space, σR(T) ⊆ {λ ∈ C : |λ| < ‖T‖}.

Proposition 2.I. If H and K are complex Hilbert spaces and T ∈ B[H], then
r(T) = inf_{W∈G[H,K]} ‖W T W⁻¹‖.
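For a diagonalizable matrix, the infimum in Proposition 2.I is actually attained: conjugating by the inverse of the eigenvector matrix diagonalizes T, and the operator norm of a diagonal matrix is exactly its spectral radius. A sketch (the particular matrix is an arbitrary example):

```python
import numpy as np

# A matrix whose norm is huge but whose spectral radius is 1/2.
T = np.array([[0.5, 100.0],
              [0.0, 0.4]])
r = max(abs(np.linalg.eigvals(T)))

# Diagonalizing similarity: with W = V^{-1} (V = eigenvector matrix),
# W T W^{-1} is diagonal, so ||W T W^{-1}|| = r(T).
_, V = np.linalg.eig(T)
similar = np.linalg.inv(V) @ T @ V
print(np.linalg.norm(T, 2) > 1, np.isclose(np.linalg.norm(similar, 2), r))
```

For a non-diagonalizable T the infimum need not be attained, but conjugating a Jordan block by diag(1, ε, ε², …) shows it can still be approached.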
The spectral radius expression in the preceding proposition ensures that an operator is uniformly stable if and only if it is similar to a strict contraction.

Proposition 2.J. If T ∈ B[X] is a nilpotent operator on a complex Banach space, then σ(T) = σP(T) = {0}.

Proposition 2.K. An operator T ∈ B[H] on a complex Hilbert space is spectraloid if and only if w(Tⁿ) = w(T)ⁿ for every n ≥ 0.

An operator T ∈ B[H] on a separable infinite-dimensional Hilbert space H is diagonalizable if Tx = Σ_{k=0}^∞ α_k ⟨x ; e_k⟩ e_k for every x ∈ H, for some orthonormal basis {e_k}_{k=0}^∞ for H and some bounded sequence {α_k}_{k=0}^∞ of scalars.

Proposition 2.L. If T ∈ B[H] is diagonalizable and H is complex, then σP(T) = {λ ∈ C : λ = α_k for some k ≥ 0}, σR(T) = ∅, and σC(T) = {λ ∈ C : inf_k |λ − α_k| = 0 and λ ≠ α_k for every k ≥ 0}.

An operator S₊ ∈ B[K₊] on a Hilbert space K₊ is a unilateral shift, and an operator S ∈ B[K] on a Hilbert space K is a bilateral shift, if there exist an infinite sequence {H_k}_{k=0}^∞ and an infinite family {H_k}_{k=−∞}^∞ of nonzero pairwise orthogonal subspaces of K₊ and K such that K₊ = ⊕_{k=0}^∞ H_k and K = ⊕_{k=−∞}^∞ H_k (cf. Section 1.3), and both S₊ and S map each H_k isometrically onto H_{k+1}, so that each transformation U₊(k+1) = S₊|H_k : H_k → H_{k+1}, and each transformation U_{k+1} = S|H_k : H_k → H_{k+1}, is unitary, and therefore dim H_{k+1} = dim H_k. Such a common dimension is the multiplicity of S₊ and S. If H_k = H for all k, then K₊ = ℓ²₊(H) = ⊕_{k=0}^∞ H and K = ℓ²(H) = ⊕_{k=−∞}^∞ H are the direct orthogonal sums of countably infinite copies of a single nonzero Hilbert space H, indexed either by the nonnegative integers or by all integers, which are precisely the Hilbert spaces consisting of all square-summable H-valued sequences {x_k}_{k=0}^∞ and of all square-summable H-valued families {x_k}_{k=−∞}^∞. In this case (if H_k = H for all k), U₊(k+1) = S₊|H = U₊ and U_{k+1} = S|H = U for all k, where U₊ and U are any unitary operators on H.
In particular, if U₊ = U = I, the identity on H, then S₊ and S are referred to as the canonical unilateral and bilateral shifts on ℓ²₊(H) and on ℓ²(H). The adjoint S₊* ∈ B[K₊] of S₊ ∈ B[K₊] and the adjoint S* ∈ B[K] of S ∈ B[K] are referred to as a backward unilateral shift and as a backward bilateral shift. Writing x = Σ_{k=0}^∞ x_k for {x_k}_{k=0}^∞ in ⊕_{k=0}^∞ H_k, and x = Σ_{k=−∞}^∞ x_k for {x_k}_{k=−∞}^∞ in ⊕_{k=−∞}^∞ H_k, it follows that S₊ : K₊ → K₊ and S₊* : K₊ → K₊, and S : K → K and S* : K → K, are given by the formulas

S₊x = 0 ⊕ Σ_{k=1}^∞ U₊(k) x_{k−1}  and  S₊*x = Σ_{k=0}^∞ U₊(k+1)* x_{k+1}

for all x = Σ_{k=0}^∞ x_k in K₊ = ⊕_{k=0}^∞ H_k, with 0 being the origin of H₀, where U₊(k+1) is any unitary transformation of H_k onto H_{k+1} for each k ≥ 0, and
Sx = Σ_{k=−∞}^∞ U_k x_{k−1}  and  S*x = Σ_{k=−∞}^∞ U_{k+1}* x_{k+1}

for all x = Σ_{k=−∞}^∞ x_k in K = ⊕_{k=−∞}^∞ H_k, where, for each integer k, U_{k+1} is any unitary transformation of H_k onto H_{k+1}. The spectrum of a bilateral shift is simpler than that of a unilateral shift, for bilateral shifts are unitary operators (i.e., besides being isometries as unilateral shifts are, bilateral shifts are normal too). For a full treatment of shifts on Hilbert spaces see [49].

Proposition 2.M. Let D and T = ∂D denote the open unit disk and the unit circle about the origin of the complex plane, respectively. If S₊ ∈ B[K₊] is a unilateral shift and S ∈ B[K] is a bilateral shift on complex spaces, then

(a) σP(S₊) = σR(S₊*) = ∅,  σR(S₊) = σP(S₊*) = D,  σC(S₊) = σC(S₊*) = T.

(b) σ(S) = σ(S*) = σC(S) = σC(S*) = T.

A unilateral weighted shift T₊ = S₊D₊ in B[ℓ²₊(H)] is the product of a canonical unilateral shift S₊ and a diagonal operator D₊ = ⊕_{k=0}^∞ α_k I, both in B[ℓ²₊(H)], where {α_k}_{k=0}^∞ is a bounded sequence of scalars. A bilateral weighted shift T = SD in B[ℓ²(H)] is the product of a canonical bilateral shift S and a diagonal operator D = ⊕_{k=−∞}^∞ α_k I, both in B[ℓ²(H)], where {α_k}_{k=−∞}^∞ is a bounded family of scalars.

Proposition 2.N. Let T₊ ∈ B[ℓ²₊(H)] be a unilateral weighted shift, and let T ∈ B[ℓ²(H)] be a bilateral weighted shift, where H is complex.
(a) If α_k → 0 as |k| → ∞, then T₊ and T are compact and quasinilpotent. If, in addition, α_k ≠ 0 for all k, then
(b) σ(T₊) = σR(T₊) = σR2(T₊) = {0} and σ(T₊*) = σP(T₊*) = σP2(T₊*) = {0},
(c) σ(T) = σC(T) = σC(T*) = σ(T*) = {0}.

Proposition 2.O. Let T ∈ B[H] be an operator on a complex Hilbert space and let D be the open unit disk about the origin of the complex plane.
(a) If Tⁿ −w→ O, then σP(T) ⊆ D.
(b) If T is compact and σP(T) ⊆ D, then Tⁿ −u→ O.
(c) The concepts of weak, strong, and uniform stabilities coincide for a compact operator on a complex Hilbert space.

Proposition 2.P. Take an operator T ∈ B[H] on a complex Hilbert space and let T = ∂D be the unit circle about the origin of the complex plane.
(a) Tⁿ −u→ O if and only if Tⁿ −w→ O and σC(T) ∩ T = ∅.
(b) If the continuous spectrum does not intersect the unit circle, then the concepts of weak, strong, and uniform stabilities coincide.
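The quasinilpotence in Proposition 2.N(a) has a clean numerical signature: for a unilateral weighted shift with weights α_k = 1/(k+1), one gets ‖T₊ⁿ‖ = 1/n! (the largest product of n consecutive weights), so the Gelfand–Beurling quantities ‖T₊ⁿ‖^{1/n} tend to 0 = r(T₊). A finite-section sketch with scalar weights (the dimension and weights are arbitrary illustrative choices):

```python
import numpy as np

N = 60                                    # finite section of l^2_+
alpha = 1.0 / (np.arange(N) + 1)          # weights 1/(k+1) -> 0
T = np.diag(alpha[:-1], k=-1)             # weighted shift: T e_k = alpha_k e_{k+1}

# ||T^n||^{1/n} -> 0: the Gelfand-Beurling limit of a quasinilpotent operator.
gelfand = [np.linalg.norm(np.linalg.matrix_power(T, n), 2) ** (1.0 / n)
           for n in (1, 5, 10, 20)]
print(gelfand[0] > gelfand[-1], gelfand[-1] < 0.2)
```

Since ‖T₊ⁿ‖ = 1/n! here, the sequence decays roughly like e/n by Stirling's formula, even though every finite truncation is trivially nilpotent.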
The concepts of resolvent set ρ(T) and spectrum σ(T) of an operator T in the unital complex Banach algebra B[X] as in Section 2.1, namely, ρ(T) = {λ ∈ C : λI − T has an inverse in B[X]} and σ(T) = C\ρ(T), of course hold in any unital complex Banach algebra A, and so does the concept of spectral radius r(T) = sup_{λ∈σ(T)} |λ|, where the Gelfand–Beurling formula of Theorem 2.10, namely, r(T) = limₙ ‖Tⁿ‖^{1/n}, holds in any unital complex Banach algebra (whose proof is essentially the same as the proof of Theorem 2.10). Recall that a component of a set in a topological space is any maximal (in the inclusion ordering) connected subset of it. A hole of a compact set is any bounded component of its complement. Thus the holes of the spectrum σ(T) are the bounded components of the resolvent set ρ(T) = C\σ(T). If A′ is a closed unital subalgebra of a unital complex Banach algebra A (for instance, A = B[X], where X is a Banach space), and if T ∈ A′, then let ρ′(T) be the resolvent set of T with respect to A′, let σ′(T) = C\ρ′(T) be the spectrum of T with respect to A′, and set r′(T) = sup_{λ∈σ′(T)} |λ|. Recall that a homomorphism (or an algebra homomorphism) between two algebras is a linear transformation between them that also preserves products. Let A′ be a maximal (in the inclusion ordering) commutative subalgebra of a unital complex Banach algebra A (i.e., a commutative subalgebra of A that is not included in any other commutative subalgebra of A). Note that A′ is trivially unital, and closed in A because the closure of a commutative subalgebra of a Banach algebra is again commutative, since multiplication is continuous in A. Consider the (unital complex commutative) Banach algebra C (of all complex numbers). Let Φ_{A′} = {Φ : A′ → C : Φ is a homomorphism} stand for the collection of all algebra homomorphisms of A′ onto C.
If T ∈ A, where A is any closed unital subalgebra of A, then (a) ρ (T ) ⊆ ρ(T ) and r (T ) = r(T ) (invariance of the spectral radius). Hence ∂σ (T ) ⊆ ∂σ(T ) and σ(T ) ⊆ σ (T ). Thus σ (T ) is obtained by adding to σ(T ) some holes of σ(T ). (b) If A is a maximal commutative subalgebra of A, then σ(T ) = Φ(T ) ∈ C : Φ ∈ A Moreover, in this case,
for each
T ∈ A .
σ(T ) = σ (T ).
Proposition 2.R. If A is a unital complex Banach algebra, and if S, T in A commute (i.e., if S, T ∈ A and ST = TS), then σ(S + T) ⊆ σ(S) + σ(T) and σ(ST) ⊆ σ(S) · σ(T).
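Proposition 2.R fails without commutativity, but for commuting matrices — for instance, two polynomials in the same matrix — it can be checked directly: every eigenvalue of S + T is then a sum of an eigenvalue of S and an eigenvalue of T. A sketch with an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
S = A @ A - 2 * A              # polynomials in the same matrix commute,
T = 3 * A + np.eye(4)          # so Proposition 2.R applies to this pair

sigma_S = np.linalg.eigvals(S)
sigma_T = np.linalg.eigvals(T)

# Every point of sigma(S + T) should be (numerically) some sum s + t
# with s in sigma(S) and t in sigma(T).
possible_sums = np.add.outer(sigma_S, sigma_T).ravel()
ok = all(abs(possible_sums - mu).min() < 1e-6
         for mu in np.linalg.eigvals(S + T))
print(ok)
```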
If M is an invariant subspace for T, then it may happen that σ(T|M) ⊄ σ(T). Sample: every unilateral shift is the restriction of a bilateral shift to an invariant subspace (see, e.g., [62, Lemma 2.14]). However, if M reduces T, then σ(T|M) ⊆ σ(T) by Proposition 2.F(b). The full spectrum of T ∈ B[H] (notation: σ(T)#) is the union of σ(T) and all bounded components of ρ(T) (i.e., σ(T)# is the union of σ(T) and all holes of σ(T)).

Proposition 2.S. If M is T-invariant, then σ(T|M) ⊆ σ(T)#.

Proposition 2.T. Let T ∈ B[H] and S ∈ B[K] be operators on Hilbert spaces H and K. If σ(T) ∩ σ(S) = ∅, then for every bounded linear transformation Y ∈ B[H, K] there exists a unique bounded linear transformation X ∈ B[H, K] such that XT − SX = Y. In particular,

σ(T) ∩ σ(S) = ∅ and XT = SX  =⇒  X = O.
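Proposition 2.T is the operator form of the Sylvester equation XT − SX = Y. In finite dimensions, vectorization turns it into an ordinary linear system whose matrix has eigenvalues {t − s : t ∈ σ(T), s ∈ σ(S)}, hence is invertible exactly when the spectra are disjoint, so existence and uniqueness can be checked directly. A sketch (the diagonal matrices are arbitrary choices with visibly disjoint spectra):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
S = np.diag([5.0, 6.0, 7.0])        # sigma(S) = {5, 6, 7}
T = np.diag([0.0, 1.0, 2.0])        # sigma(T) = {0, 1, 2}, disjoint from sigma(S)
Y = rng.normal(size=(n, n))

# Vectorize X T - S X = Y using vec(A X B) = (B^t kron A) vec(X)
# with column stacking: (T^t kron I - I kron S) vec(X) = vec(Y).
M = np.kron(T.T, np.eye(n)) - np.kron(np.eye(n), S)
X = np.linalg.solve(M, Y.flatten(order="F")).reshape((n, n), order="F")

print(np.allclose(X @ T - S @ X, Y))
```

If the spectra overlapped, M would be singular and the solve would fail, matching the hypothesis σ(T) ∩ σ(S) = ∅ in the proposition.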
This is the Rosenblum Corollary, which will be used to prove the Fuglede Theorems in Chapter 3.

Notes: Again, as in the previous chapter, these are basic results that will be needed throughout the text. We will not point out here the original sources but, instead, well-known secondary sources where the reader can find the proofs of these propositions, as well as some deeper discussions of them. Proposition 2.A holds independently of the forthcoming Spectral Theorem (Theorem 3.11) — see, e.g., [66, Corollary 6.18]. A partial converse, however, needs the Spectral Theorem (see Proposition 3.D in Chapter 3). Proposition 2.B is a standard result (see, e.g., [50, Problem 75] and [66, Problem 6.10]), as is Proposition 2.C (see, e.g., [50, Problem 76]). Proposition 2.D also dispenses with the Spectral Theorem of the next chapter; it is obtained from the Square Root Theorem (Proposition 1.M) and the Spectral Mapping Theorem (Theorem 2.7). Proposition 2.E is a rather useful technical result (see, e.g., [63, Problem 6.14]). Proposition 2.F is a synthesis of some sparse results (cf. [28, Proposition I.5.1], [50, Solution 98], [55, Theorem 5.42], [66, Problem 6.37], and Proposition 2.E). For Propositions 2.G and 2.H see [66, Sections 6.3 and 6.4]. The spectral radius formula in Proposition 2.I (see, e.g., [42, p. 22]) ensures that an operator is uniformly stable if and only if it is similar to a strict contraction (hint: use the equivalence between (a) and (b) in Corollary 2.11). For Propositions 2.J and 2.K see [66, Sections 6.3 and 6.4]. The examples of spectra in Propositions 2.L, 2.M, and 2.N are widely known (see, e.g., [66, Examples 6.C, 6.D, 6.E, 6.F, and 6.G]). Proposition 2.O deals with the equivalence between uniform and weak stabilities for compact operators, and it is extended to a wider class of operators in Proposition 2.P (see [63, Problems 8.8 and 8.9]).
Proposition 2.Q(a) is readily verified, where the invariance of the spectral radius follows by the Gelfand–Beurling formula. Proposition 2.Q(b) is a key result for proving both Theorem 2.8 (the Spectral Mapping Theorem for normal operators) and also Lemma 5.43 of Chapter 5 (for the characterization
2.7 Additional Propositions
53
of the Browder spectrum in Theorem 5.44) — see, e.g., [76, Theorems 0.3 and 0.4]. For Propositions 2.R, 2.S, and 2.T see [80, Theorem 11.22], [76, Theorem 0.8], and [76, Corollary 0.13], respectively. Proposition 2.T will play a central role in the proof of the Fuglede Theorems of the next chapter (precisely, in Corollary 3.5 and Theorem 3.17).
Suggested Reading

Bachman and Narici [8]
Berberian [17]
Conway [27, 28]
Douglas [34]
Dowson [35]
Dunford and Schwartz [39]
Fillmore [42]
Gustafson and Rao [45]
Halmos [50]
Herrero [55]
Istrăţescu [58]
Kato [60]
Kubrusly [62, 63, 66]
Radjavi and Rosenthal [76]
Rudin [80]
Taylor and Lay [87]
3 Spectral Theorem
The Spectral Theorem is a milestone in the theory of Hilbert space operators, providing a full statement about the nature and structure of normal operators. For compact normal operators the Spectral Theorem can be completely investigated without requiring any knowledge of measure theory, and this leads to the concept of diagonalization. However, the Spectral Theorem for plain normal operators (the general case) requires some (elementary) measure theory.
3.1 Spectral Theorem for Compact Operators

Throughout this chapter H will denote a nonzero complex Hilbert space. Recall from Section 1.4 that a bounded weighted sum of projections is an operator T on H such that

Tx = Σ_{γ∈Γ} λ_γ E_γ x

for every x ∈ H, where {E_γ}_{γ∈Γ} is a resolution of the identity on H made up of nonzero projections (i.e., {E_γ}_{γ∈Γ} is an orthogonal family of nonzero orthogonal projections such that Σ_{γ∈Γ} E_γ x = x for every x ∈ H), and {λ_γ}_{γ∈Γ} is a bounded family of scalars. Every bounded weighted sum of projections is normal (Corollary 1.9). The spectrum of a bounded weighted sum of projections is characterized next as the closure of the set {λ_γ}_{γ∈Γ},

σ(T) = {λ ∈ C : λ = λ_γ for some γ ∈ Γ}⁻,

so that, since T is normaloid, ‖T‖ = r(T) = sup_{γ∈Γ} |λ_γ|.

Lemma 3.1. If T ∈ B[H] is a weighted sum of projections, then σR(T) = ∅, and

σP(T) = {λ ∈ C : λ = λ_γ for some γ ∈ Γ},
σC(T) = {λ ∈ C : inf_{γ∈Γ} |λ − λ_γ| = 0 and λ ≠ λ_γ for all γ ∈ Γ}.

Proof. Let T = Σ_{γ∈Γ} λ_γ E_γ be a bounded weighted sum of projections, so that {E_γ}_{γ∈Γ} is a resolution of the identity on H and {λ_γ}_{γ∈Γ} is a bounded family of scalars (see Section 1.4). Take an arbitrary x in H. Since {E_γ}_{γ∈Γ} is
a resolution of the identity on H, we get x = Σ_{γ∈Γ} E_γ x, and the general version of the Pythagorean Theorem (see the Orthogonal Structure Theorem of Section 1.3) ensures that ‖x‖² = Σ_{γ∈Γ} ‖E_γ x‖². Take any scalar λ ∈ C. Thus (λI − T)x = Σ_{γ∈Γ} (λ − λ_γ)E_γ x, and so ‖(λI − T)x‖² = Σ_{γ∈Γ} |λ − λ_γ|² ‖E_γ x‖² by the same argument. If N(λI − T) ≠ {0}, then there exists x ≠ 0 in H such that (λI − T)x = 0. Hence Σ_{γ∈Γ} ‖E_γ x‖² ≠ 0 and Σ_{γ∈Γ} |λ − λ_γ|² ‖E_γ x‖² = 0, which implies that E_α x ≠ 0 for some α ∈ Γ and |λ − λ_α| ‖E_α x‖ = 0, and therefore λ = λ_α. Conversely, take any α ∈ Γ and any nonzero vector x in R(E_α) (recall that R(E_γ) ≠ {0} because E_γ ≠ O for every γ ∈ Γ). Since R(E_α) ⊥ R(E_γ) whenever α ≠ γ, it follows that R(E_α) ⊥ Σ_{α≠γ∈Γ} R(E_γ), and so R(E_α) ⊆ (Σ_{α≠γ∈Γ} R(E_γ))⊥ = ⋂_{α≠γ∈Γ} R(E_γ)⊥ = ⋂_{α≠γ∈Γ} N(E_γ) (see Lemma 1.4 and Theorem 1.8). Thus x ∈ N(E_γ) for every α ≠ γ ∈ Γ, which implies that ‖(λ_α I − T)x‖² = Σ_{γ∈Γ} |λ_α − λ_γ|² ‖E_γ x‖² = 0, and therefore N(λ_α I − T) ≠ {0}. Summing up: N(λI − T) ≠ {0} if and only if λ = λ_α for some α ∈ Γ. That is,

σP(T) = {λ ∈ C : λ = λ_γ for some γ ∈ Γ}.

Take an arbitrary λ ∈ C such that λ ≠ λ_γ for all γ ∈ Γ. The weighted sum of projections λI − T = Σ_{γ∈Γ} (λ − λ_γ)E_γ has an inverse, and this inverse is the weighted sum of projections (λI − T)⁻¹ = Σ_{γ∈Γ} (λ − λ_γ)⁻¹E_γ because

Σ_{α∈Γ} (λ − λ_α)⁻¹E_α Σ_{β∈Γ} (λ − λ_β)E_β x = Σ_{α∈Γ} Σ_{β∈Γ} (λ − λ_α)⁻¹(λ − λ_β)E_α E_β x = Σ_{γ∈Γ} E_γ x = x

for every x in H (since the resolution of the identity {E_γ}_{γ∈Γ} is an orthogonal family of continuous projections, and the inverse is unique). Moreover, the weighted sum of projections (λI − T)⁻¹ is bounded if and only if sup_{γ∈Γ} |λ − λ_γ|⁻¹ < ∞, that is, if and only if inf_{γ∈Γ} |λ − λ_γ| > 0. So

ρ(T) = {λ ∈ C : inf_{γ∈Γ} |λ − λ_γ| > 0}.

Since σR(T) = ∅ (a weighted sum of projections is normal — Corollary 1.9 and Proposition 2.A(b)), it follows that σC(T) = (C\ρ(T))\σP(T). □

Compare with Proposition 2.L. Actually, Proposition 2.L is a particular case of Lemma 3.1, since E_k x = ⟨x ; e_k⟩e_k for every x ∈ H defines the orthogonal projection E_k onto the one-dimensional space spanned by e_k.

Lemma 3.2. A weighted sum of projections T ∈ B[H] is compact if and only if the following triple condition holds:
(i) σ(T) is countable,
(ii) 0 is the only possible accumulation point of σ(T), and
(iii) dim R(E_γ) < ∞ for every γ such that λ_γ ≠ 0.
Proof. Let T = ∑_{γ∈Γ} λγ Eγ ∈ B[H] be a weighted sum of projections.
Claim. R(Eγ) ⊆ N(λγ I − T) for every γ ∈ Γ.

Proof. Take any α ∈ Γ. If x ∈ R(Eα), then x = Eα x, so that T x = T Eα x = ∑_{γ∈Γ} λγ Eγ Eα x = λα Eα x = λα x (since Eγ ⊥ Eα whenever α ≠ γ), and hence x ∈ N(λα I − T), which concludes the proof of the claimed result.

If T is compact, then σ(T) is countable and 0 is the only possible accumulation point of σ(T) (Corollary 2.20), and dim N(λI − T) < ∞ whenever λ ≠ 0 (Theorem 1.19), so that dim R(Eγ) < ∞ for every γ such that λγ ≠ 0 by the above claim. Conversely, if T = O, then T is trivially compact. Thus suppose T ≠ O. Since T is normal (Corollary 1.9), it follows that r(T) > 0 (because normal operators are normaloid, i.e., r(T) = ‖T‖), so that there exists λ ≠ 0 in σP(T) by Theorem 2.18. If σ(T) is countable, then let {λk} be any enumeration of the countable set σP(T)\{0} = σ(T)\{0}. Hence

T x = ∑_k λk Ek x for every x ∈ H
(cf. Lemma 3.1), where {Ek} is included in a resolution of the identity on H (which is itself a resolution of the identity on H if 0 ∉ σP(T)). If {λk} is finite, say {λk} = {λk}_{k=1}^{n}, then R(T) = ⋁_{k=1}^{n} R(Ek). If dim R(Ek) < ∞ for each k, then dim ⋁_{k=1}^{n} R(Ek) < ∞. Thus T is a finite-rank operator, and so compact (cf. Proposition 1.X). Now suppose {λk} is countably infinite. Since σ(T) is a compact set (Theorem 2.1), it follows that the infinite set {λk} has an accumulation point in σ(T) — this in fact is the Bolzano–Weierstrass Property that characterizes compact sets in metric spaces. If 0 is the only possible accumulation point of σ(T), then 0 is the unique accumulation point of {λk}. Therefore, for each integer n ≥ 1 consider the partition {λk} = {λ′k} ∪ {λ″k}, where |λ′k| ≥ 1/n and |λ″k| < 1/n. Note that {λ′k} is a finite subset of σ(T) (it has no accumulation point), and hence {λ″k} is an infinite subset of σ(T). Set

Tn = ∑_k λ′k E′k ∈ B[H] for each n ≥ 1,

where E′k is the orthogonal projection associated with λ′k.
We have just seen that dim R(Tn) < ∞. That is, each Tn is a finite-rank operator. However, since Ej ⊥ Ek whenever j ≠ k, we get

‖(T − Tn)x‖² = ‖∑_k λ″k E″k x‖² = ∑_k |λ″k|² ‖E″k x‖² ≤ sup_k |λ″k|² ∑_k ‖E″k x‖² ≤ (1/n²) ‖x‖²

for all x ∈ H, so that Tn −→ T uniformly. Then T is compact by Proposition 1.Z. □
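In finite dimensions the truncation argument can be checked numerically. The following Python sketch is a hypothetical model, not from the text: a diagonal operator with weights λk = 1/k stands in for a compact weighted sum of rank-one projections, and the finite-rank truncations Tn converge to T in the operator norm, the error being the largest omitted weight.

```python
import numpy as np

# Hypothetical finite-dimensional model (illustrative only): a diagonal
# operator on C^N with weights 1/k plays the role of a compact weighted
# sum of rank-one projections T = sum_k (1/k) E_k.
N = 50
lam = 1.0 / np.arange(1, N + 1)
T = np.diag(lam)

def truncation(n):
    """Finite-rank T_n: keep only the weights with |lam_k| >= 1/n."""
    return np.diag(np.where(np.abs(lam) >= 1.0 / n, lam, 0.0))

# The operator-norm error is the largest omitted weight, hence < 1/n.
for n in (2, 5, 10):
    err = np.linalg.norm(T - truncation(n), 2)   # spectral (operator) norm
    omitted = lam[np.abs(lam) < 1.0 / n]
    assert np.isclose(err, omitted.max())
    assert err < 1.0 / n
```

Here the diagonal entries are the eigenvalues, so the operator norm of the tail is read off directly; the infinite-dimensional proof above replaces this by the estimate sup_{k≥n} |λk| → 0.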
Thus every bounded weighted sum of projections is normal and, if it is compact, then it has a countable set of distinct eigenvalues. The Spectral Theorem for compact operators ensures the converse: every compact and normal operator T is a (countable) weighted sum of projections, whose weights are precisely the eigenvalues of T .
Theorem 3.3. (Spectral Theorem for Compact Operators). If an operator T ∈ B[H] on a (nonzero complex) Hilbert space H is compact and normal, then there exists a unique countable resolution of the identity {Ek} on H and a bounded set of scalars {λk} such that

T = ∑_k λk Ek,

where {λk} = σP(T), the (nonempty) set of all (distinct) eigenvalues of T, and each Ek is the orthogonal projection onto the eigenspace N(λk I − T). If the above countable weighted sum of projections is infinite, then it converges in the (uniform) topology of B[H].

Proof. If T ∈ B[H] is compact and normal, then it has a nonempty point spectrum (Corollary 2.21). In fact, the heart of the matter is that T has enough eigenvalues so that its eigenspaces span the Hilbert space H.

Claim. (∑_{λ∈σP(T)} N(λI − T))⁻ = H.

Proof. Set M = (∑_{λ∈σP(T)} N(λI − T))⁻, which is a subspace of H. Suppose M ≠ H, so that M⊥ ≠ {0}. Consider the restriction T|M⊥ of T to M⊥. If T is normal, then M reduces T (Theorem 1.16), so that M⊥ is T-invariant. Hence T|M⊥ ∈ B[M⊥] is normal (Proposition 1.Q). If T is compact, then T|M⊥ is compact (Proposition 1.V). Thus T|M⊥ is a compact normal operator on the Hilbert space M⊥ ≠ {0}, and so σP(T|M⊥) ≠ ∅ by Corollary 2.21. That is, there exist λ ∈ ℂ and 0 ≠ x ∈ M⊥ such that T|M⊥ x = λx, and so T x = λx. Then λ ∈ σP(T) and x ∈ N(λI − T) ⊆ M. This leads to a contradiction: 0 ≠ x ∈ M ∩ M⊥ = {0}. Outcome: M = H.

Since T is compact, the nonempty set σP(T) is countable (Corollaries 2.20 and 2.21) and bounded (since T ∈ B[H]). Then write σP(T) = {λk}, where {λk} is a finite or infinite sequence of distinct elements of ℂ consisting of all eigenvalues of T. Recall that each N(λk I − T) is a subspace of H (each λk I − T is bounded). Moreover, since T is normal, Lemma 1.15(a) ensures that N(λk I − T) ⊥ N(λj I − T) if k ≠ j. Thus {N(λk I − T)} is a sequence of pairwise orthogonal subspaces of H spanning H; that is, H = (∑_k N(λk I − T))⁻ by the above claim. Then the sequence {Ek} of the orthogonal projections onto each N(λk I − T) is a resolution of the identity on H (see Section 1.4 — recall: Ek onto N(λk I − T) is unique). Thus x = ∑_k Ek x. Since T is linear and continuous, T x = ∑_k T Ek x for every x ∈ H. But Ek x ∈ R(Ek) = N(λk I − T), and so T Ek x = λk Ek x, for each k and every x ∈ H. Therefore,

T x = ∑_k λk Ek x for every x ∈ H,

and hence T is a countable weighted sum of projections. If it is a finite weighted sum of projections, then we are done. On the other hand, suppose it is an infinite weighted sum of projections. In this case, the above identity says that
∑_{k=1}^{n} λk Ek −→ T strongly as n → ∞.

That is, the sequence {∑_{k=1}^{n} λk Ek} converges strongly to T. However, since T is compact, this convergence actually happens in the uniform topology (i.e., {∑_{k=1}^{n} λk Ek} actually converges uniformly to T). Indeed,

‖(T − ∑_{k=1}^{n} λk Ek)x‖² = ‖∑_{k=n+1}^{∞} λk Ek x‖² = ∑_{k=n+1}^{∞} |λk|² ‖Ek x‖² ≤ sup_{k≥n+1} |λk|² ∑_{k=n+1}^{∞} ‖Ek x‖² ≤ sup_{k≥n} |λk|² ‖x‖²

for every integer n ≥ 1 (reason: R(Ej) ⊥ R(Ek) whenever j ≠ k, and x = ∑_{k=1}^{∞} Ek x so that ‖x‖² = ∑_{k=1}^{∞} ‖Ek x‖² — see Sections 1.3 and 1.4). Hence

0 ≤ ‖T − ∑_{k=1}^{n} λk Ek‖ = sup_{‖x‖=1} ‖(T − ∑_{k=1}^{n} λk Ek)x‖ ≤ sup_{k≥n} |λk|

for all n ∈ ℕ. Since T is compact and {λn} is a sequence of distinct elements in σ(T), it follows that λn → 0 (cf. Claim in the proof of Corollary 2.20). Thus lim_n sup_{k≥n} |λk| = lim sup_n |λn| = 0, and so

∑_{k=1}^{n} λk Ek −→ T uniformly as n → ∞. □
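In finite dimensions the statement of Theorem 3.3 can be checked directly: a normal matrix plays the role of a compact normal operator, the distinct eigenvalues give the weights, and the projections onto the eigenspaces form a resolution of the identity. The following Python sketch uses illustrative data, not anything from the text.

```python
import numpy as np

# Finite-dimensional sketch (illustrative data): T = U diag(mu) U* is normal;
# grouping equal eigenvalues yields the orthogonal projections E_k onto the
# eigenspaces N(lam_k I - T), and T is recovered as sum_k lam_k E_k.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
mu = np.array([2.0, 2.0, -1j, 0.5 + 0.5j])       # eigenvalue 2 with multiplicity 2
T = U @ np.diag(mu) @ U.conj().T

# One orthogonal projection per *distinct* eigenvalue.
E = {}
for k, lam_k in enumerate(mu):
    key = complex(np.round(lam_k, 12))
    E[key] = E.get(key, 0) + np.outer(U[:, k], U[:, k].conj())

assert np.allclose(sum(E.values()), np.eye(4))                    # resolution of the identity
assert np.allclose(sum(l * P for l, P in E.items()), T)           # T = sum_k lam_k E_k
for P in E.values():
    assert np.allclose(P, P.conj().T) and np.allclose(P @ P, P)   # orthogonal projections
```

Note that the repeated eigenvalue contributes a single rank-two projection, matching the requirement that each Ek project onto the full eigenspace N(λk I − T).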
3.2 Diagonalizable Operators

What the Spectral Theorem for compact (normal) operators says is that, if T ∈ B[H] is compact and normal, then the family of orthogonal projections {Eλ}λ∈σP(T) onto each eigenspace N(λI − T) is a countable resolution of the identity on H, and T is a weighted sum of projections. Thus we write

T = ∑_{λ∈σP(T)} λEλ,

which is a uniform limit, and so it implies pointwise convergence (i.e., T x = ∑_{λ∈σP(T)} λEλ x for every x ∈ H). This is naturally identified with (i.e., it is unitarily equivalent to) an orthogonal direct sum of scalar operators,

T ≅ ⊕_{λ∈σP(T)} λIλ,

with each Iλ denoting the identity on each eigenspace N(λI − T). In fact, Iλ = Eλ|R(Eλ), where R(Eλ) = N(λI − T). Thus, under such an identification, it is usual to write

T = ⊕_{λ∈σP(T)} λEλ.

These representations are referred to as the spectral decomposition of a compact normal operator T. The Spectral Theorem for compact (normal) operators can be restated in terms of an orthonormal basis for N(T)⊥ consisting of eigenvectors of T, as follows.
Corollary 3.4. Let T ∈ B[H] be compact and normal.
(a) For each λ ∈ σP(T)\{0} there is a finite orthonormal basis {ek(λ)}_{k=1}^{nλ} for N(λI − T) consisting entirely of eigenvectors of T.
(b) The set {ek} = ⋃_{λ∈σP(T)\{0}} {ek(λ)}_{k=1}^{nλ} is a countable orthonormal basis for N(T)⊥ made up of eigenvectors of T.
(c) T x = ∑_{λ∈σP(T)\{0}} λ ∑_{k=1}^{nλ} ⟨x ; ek(λ)⟩ek(λ) for every x ∈ H.
(d) T x = ∑_k νk ⟨x ; ek⟩ek for every x ∈ H, where {νk} is a sequence containing all nonzero eigenvalues of T including multiplicity (i.e., finitely repeated according to the dimension of the respective eigenspace).
Proof. We have already seen (in the proof of the previous theorem) that σP(T) is nonempty and countable. Recall that σP(T) = {0} if and only if T = O (since T is normaloid; see Corollary 2.21) or, equivalently, if and only if N(T)⊥ = {0} (i.e., N(T) = H). If T = O, then the above assertions hold trivially (σP(T)\{0} = ∅, {ek} = ∅, N(T)⊥ = {0} and T x = 0x = 0 for every x ∈ H because the empty sum is null). Thus suppose T ≠ O (so that N(T)⊥ ≠ {0}), and take an arbitrary λ ≠ 0 in σP(T). Theorem 1.19 ensures that dim N(λI − T) is finite, say, dim N(λI − T) = nλ for some positive integer nλ. This ensures the existence of a finite orthonormal basis {ek(λ)}_{k=1}^{nλ} for the Hilbert space N(λI − T) ≠ {0}. Recall that each ek(λ) is an eigenvector of T (since 0 ≠ ek(λ) ∈ N(λI − T)).
Claim. ⋃_{λ∈σP(T)\{0}} {ek(λ)}_{k=1}^{nλ} is an orthonormal basis for N(T)⊥.

Proof. Theorem 3.3 ensures that

H = (∑_{λ∈σP(T)} N(λI − T))⁻.

Since {N(λI − T)}λ∈σP(T) is a nonempty family of orthogonal subspaces of H (by Lemma 1.15(a)), it follows that

N(T) = ⋂_{λ∈σP(T)\{0}} N(λI − T)⊥ = (⋁_{λ∈σP(T)\{0}} N(λI − T))⊥

(see, e.g., [66, Problem 5.8(b,e)]). Hence (see Section 1.3),

N(T)⊥ = (∑_{λ∈σP(T)\{0}} N(λI − T))⁻.

Therefore, since {N(λI − T)}λ∈σP(T) is a family of orthogonal subspaces, the claimed result follows by part (a). □
Note that the set {ek} = ⋃_{λ∈σP(T)\{0}} {ek(λ)}_{k=1}^{nλ} is countable (a countable union of countable sets is countable). Finally, consider the decomposition H = N(T) + N(T)⊥ (see Section 1.3), and take an arbitrary x ∈ H, so that x = u + v with u ∈ N(T) and v ∈ N(T)⊥. Consider the Fourier series expansion v = ∑_k ⟨v ; ek⟩ek = ∑_{λ∈σP(T)\{0}} ∑_{k=1}^{nλ} ⟨v ; ek(λ)⟩ek(λ) of v in terms of the orthonormal basis {ek} = ⋃_{λ∈σP(T)\{0}} {ek(λ)}_{k=1}^{nλ} for the Hilbert space N(T)⊥ ≠ {0}. Since T is linear and continuous, and since T ek(λ) = λek(λ) for each k = 1, …, nλ and each λ ∈ σP(T)\{0}, it follows that

T x = T u + T v = T v = ∑_{λ∈σP(T)\{0}} ∑_{k=1}^{nλ} ⟨v ; ek(λ)⟩ T ek(λ) = ∑_{λ∈σP(T)\{0}} λ ∑_{k=1}^{nλ} ⟨v ; ek(λ)⟩ ek(λ).

However, ⟨x ; ek(λ)⟩ = ⟨u ; ek(λ)⟩ + ⟨v ; ek(λ)⟩ = ⟨v ; ek(λ)⟩ because u ∈ N(T) and ek(λ) ∈ N(T)⊥. □

Remark. Observe that if T ∈ B[H] is compact and normal, and if H is nonseparable, then 0 ∈ σP(T) and N(T) is nonseparable. Indeed, if T ≠ O (otherwise the above statement is trivial), then N(T)⊥ ≠ {0} is separable (i.e., it has a countable orthonormal basis) by Corollary 3.4, and so N(T) is nonseparable whenever H = N(T) + N(T)⊥ is nonseparable. Moreover, if 0 ∉ σP(T), then N(T) = {0}, and therefore H = N(T)⊥ is separable.

The expression of a compact normal operator T in Corollary 3.4(c) (as well as the very spectral decomposition of it) says that T is a diagonalizable operator in the sense that T = T|N(T)⊥ ⊕ O (recall that N(T) reduces T) and T|N(T)⊥ ∈ B[N(T)⊥] is a diagonal operator with respect to a countable orthonormal basis {ek} for the separable Hilbert space N(T)⊥. Generalizing: an operator T ∈ B[H] (not necessarily compact) acting on any Hilbert space H (not necessarily separable) is diagonalizable if there exist a resolution of the identity {Eγ}γ∈Γ on H and a bounded family of scalars {λγ}γ∈Γ such that T u = λγ u whenever u ∈ R(Eγ). Take an arbitrary x = ∑_{γ∈Γ} Eγ x in H. Since T is linear and continuous, it follows that T x = ∑_{γ∈Γ} T Eγ x = ∑_{γ∈Γ} λγ Eγ x, and hence T is a weighted sum of projections (which is normal). Thus we write

T = ∑_{γ∈Γ} λγ Eγ or T = ⊕_{γ∈Γ} λγ Eγ.
Conversely, if T is a weighted sum of projections (i.e., T x = ∑_{γ∈Γ} λγ Eγ x for every x ∈ H), then T u = ∑_{γ∈Γ} λγ Eγ u = ∑_{γ∈Γ} λγ Eγ Eα u = λα u for every u ∈ R(Eα) (since Eγ Eα = O whenever γ ≠ α and u = Eα u whenever u ∈ R(Eα)), and so T is diagonalizable. Conclusion: an operator T on H is diagonalizable if and only if it is a weighted sum of projections for some bounded family of scalars {λγ}γ∈Γ and some resolution of the identity {Eγ}γ∈Γ on H. In this case, {Eγ}γ∈Γ is said to diagonalize T. (See also Proposition 3.A — just recall that the identity on any infinite-dimensional Hilbert space H is trivially diagonalizable (a diagonal, actually, if H = ℓ²_Γ), and thus normal, but not compact — cf. Proposition 1.Y.)

The implication (b) ⇒ (d) in Corollary 3.5 is the compact version of the Fuglede Theorem (Theorem 3.17) that will be developed in Section 3.5.

Corollary 3.5. Suppose T is a compact operator on a Hilbert space H.
(a) T is normal if and only if it is diagonalizable.
Let {Ek} be a resolution of the identity on H that diagonalizes a compact and normal operator T ∈ B[H] into its spectral decomposition, and take any operator S ∈ B[H]. The following assertions are pairwise equivalent.
(b) S commutes with T.
(c) S commutes with T*.
(d) S commutes with every Ek.
(e) R(Ek) reduces S for every k.

Proof. Take a compact operator T on H. If T is normal, then the Spectral Theorem ensures that it is diagonalizable. The converse is trivial since a diagonalizable operator is normal. This proves (a). From now on suppose T is compact and normal, and consider its spectral decomposition,

T = ∑_k λk Ek

as in Theorem 3.3, where {Ek} is a resolution of the identity on H and {λk} = σP(T) is the set of all (distinct) eigenvalues of T. Recall that

T* = ∑_k λ̄k Ek.

The central result in the above statement is the implication (b) ⇒ (e). The main idea behind its proof relies on Proposition 2.T (also see, e.g., [76, Theorem 1.16]). Take any operator S on H. Suppose S T = T S. Since Ek T = λk Ek = T Ek (recall: Ek T = ∑_j λj Ek Ej = λk Ek = ∑_j λj Ej Ek = T Ek), it follows that each R(Ek) = N(λk I − T) reduces T (Proposition 1.I), so that Ek T|R(Ek) = T Ek|R(Ek) = T|R(Ek) = λk I : R(Ek) → R(Ek), and hence, on each R(Ek),

(Ej S Ek) T|R(Ek) = (Ej S Ek)(T Ek) = Ej S T Ek = Ej T S Ek = (T Ej)(Ej S Ek) = T|R(Ej) (Ej S Ek)
whenever j ≠ k. Moreover, since σ(T|R(Ek)) = {λk} and λk ≠ λj, we get σ(T|R(Ek)) ∩ σ(T|R(Ej)) = ∅ and so, according to Proposition 2.T, for j ≠ k,

Ej S Ek = O.

Thus (as each Ek is self-adjoint and {Ek} is a resolution of the identity)

⟨S Ek x ; y⟩ = ⟨S Ek x ; ∑_j Ej y⟩ = ∑_j ⟨Ej S Ek x ; y⟩ = ⟨Ek S Ek x ; y⟩

for every x, y ∈ H. Then S Ek = Ek S Ek, which means that R(Ek) is S-invariant (Proposition 1.I) for every k. Hence R(Ek)⊥ = (∑_{j≠k} R(Ej))⁻ = ⋁_{j≠k} R(Ej) (because {Ek} is a resolution of the identity on H). Thus, since R(Ej) is S-invariant for every j, it follows that R(Ek)⊥ also is S-invariant. Therefore, R(Ek) reduces S for every k. This proves that (b) implies (e). But (e) is equivalent to (d) by Proposition 1.I. Note that (d) implies (c). Indeed, if S Ek = Ek S for every k, then S T* = ∑_k λ̄k S Ek = ∑_k λ̄k Ek S = T* S (as S is linear and continuous). Thus (b) implies (c), and therefore (c) implies (b) because T** = T. □
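The implications (b) ⇒ (d) and (b) ⇒ (c) can be observed numerically in finite dimensions. In the Python sketch below (illustrative data only, not from the text), S is built to commute with T by compressing an arbitrary S0 to the eigenspaces; it then commutes with every spectral projection Ek and with T*.

```python
import numpy as np

# Finite-dimensional sketch (illustrative data): T normal with spectral
# projections E_k; an S commuting with T also commutes with each E_k and T*.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
mu = np.array([1.0, 1.0, 1j, -2.0])              # eigenvalue 1 with multiplicity 2
T = U @ np.diag(mu) @ U.conj().T

E = {}                                           # projections onto the eigenspaces
for k, lam_k in enumerate(mu):
    key = complex(np.round(lam_k, 12))
    E[key] = E.get(key, 0) + np.outer(U[:, k], U[:, k].conj())

# An operator commuting with T: compress an arbitrary S0 to the eigenspaces,
# S = sum_k E_k S0 E_k (block-diagonal with respect to {E_k}).
S0 = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S = sum(P @ S0 @ P for P in E.values())

assert np.allclose(S @ T, T @ S)                           # (b) S T = T S
for P in E.values():
    assert np.allclose(S @ P, P @ S)                       # (d) S E_k = E_k S
assert np.allclose(S @ T.conj().T, T.conj().T @ S)         # (c) S T* = T* S
```

The converse direction of the proof above is visible here too: any S that is block-diagonal with respect to {Ek} automatically commutes with T and T*.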
3.3 Spectral Measure

If T is a compact operator, then its point spectrum is nonempty and countable, which may not hold for noncompact (normal) operators. But this is not the main role played by compact operators in the Spectral Theorem — we can deal with an uncountable weighted sum of projections T x = ∑_{γ∈Γ} λγ Eγ x. What is actually special about a compact operator is that a compact normal operator not only has a nonempty point spectrum but has enough eigenspaces to span H; that is, (∑_{λ∈σP(T)} N(λI − T))⁻ = H (see the proof of Theorem 3.3). That makes the difference, since a normal (noncompact) operator may have an empty point spectrum, or it may have eigenspaces but not enough to span the whole space H. However, the Spectral Theorem survives the lack of compactness if the point spectrum is replaced with the whole spectrum (which is never empty). Such an approach for the general case of the Spectral Theorem (i.e., for normal, not necessarily compact operators) requires measure theory.

Let ℝ̄ = [−∞, +∞] stand for the extended real line. Take a measure space (Ω, AΩ, μ), where AΩ is a σ-algebra of subsets of a nonempty set Ω and μ: AΩ → ℝ̄ is a nonnegative measure on AΩ, which means that μ(Λ) ≥ 0 for every Λ ∈ AΩ. The sets in AΩ are called measurable sets (or AΩ-measurable). Observe that AΩ ≠ {∅}
since Ω ≠ ∅. The measure μ is said to be positive if it is nonnegative and not null (i.e., μ(Ω) > 0). It is finite if μ(Ω) < ∞ (i.e., μ: AΩ → ℝ), and σ-finite if Ω is a countable union of measurable sets of finite measure. Let F denote either the real line ℝ or the complex plane ℂ. A Borel σ-algebra of subsets of F (or a σ-algebra of Borel sets) is the σ-algebra AF generated by the usual (metric) topology of F. (Note that for the σ-algebra Aℝ of Borel subsets of ℝ, this coincides with the standard Borel σ-algebra generated by the open intervals.) It is clear that all open, and so all closed, subsets of F are Borel sets (i.e., all open or closed subsets of F lie in AF). From now on suppose the nonempty Ω is a Borel subset of F (i.e., a set in the Borel σ-algebra AF) and set AΩ = ℘(Ω) ∩ AF, where ℘(Ω) is the power set of Ω (the collection of all subsets of Ω). This AΩ is a σ-algebra, referred to as the σ-algebra of Borel subsets of Ω. A measure μ on AΩ is said to be a Borel measure if μ(K) < ∞ for every compact set K ⊆ Ω (which is in AΩ). Every Borel measure is σ-finite, and every finite measure is a Borel measure. The support of a measure μ on AΩ is the set support(μ) = Ω\⋃{Λ ∈ AΩ: Λ is open and μ(Λ) = 0}, which is a closed set whose complement is the largest open set of measure zero. Equivalently, support(μ) is the smallest closed set whose complement has measure zero. In the vast literature on measure theory the reader is referred, for instance, to [11], [21], [46], [65], [78], or [79] (which simply reflects the author's preference but, by no means, exhausts any list).

Remark. A set K ⊆ Ω ⊆ F is compact if it is complete and totally bounded (in F, total boundedness coincides with boundedness). So K ⊆ Ω is compact in the relative topology of Ω if and only if it is compact in F (where compact means closed and bounded). But open and closed in the preceding paragraph refer to the topology of Ω. Note: if Ω is closed in F, then closed in Ω and closed in F coincide.

Definition 3.6. Let Ω be a nonempty Borel subset of ℂ and let AΩ be the σ-algebra of Borel subsets of Ω. A (complex) spectral measure in a (complex) Hilbert space H is a mapping E: AΩ → B[H] such that
(a) E(Λ) is an orthogonal projection for every Λ ∈ AΩ,
(b) E(∅) = O and E(Ω) = I,
(c) E(Λ1 ∩ Λ2) = E(Λ1)E(Λ2) for every Λ1, Λ2 ∈ AΩ,
(d) E(⋃k Λk) = ∑_k E(Λk) whenever {Λk} is a countable collection of pairwise disjoint sets in AΩ (i.e., E is countably additive).

If {Λk} is a countably infinite collection of pairwise disjoint sets in AΩ, then the identity in (d) means convergence in the strong topology:

∑_{k=1}^{n} E(Λk) −→ E(⋃k Λk) strongly as n → ∞.

Indeed, since Λj ∩ Λk = ∅ if j ≠ k, it follows by properties (b) and (c) that E(Λj)E(Λk) = E(Λj ∩ Λk) = E(∅) = O for j ≠ k, so that {E(Λk)} is an
orthogonal sequence of orthogonal projections in B[H]. Then, according to Proposition 1.K, {∑_{k=1}^{n} E(Λk)} converges strongly to the orthogonal projection in B[H] onto (∑_k R(E(Λk)))⁻ = ⋁_k R(E(Λk)). Thus what property (d) says is that E(⋃k Λk) coincides with the orthogonal projection in B[H] onto ⋁_k R(E(Λk)). This generalizes the concept of a resolution of the identity on H. In fact, if {Λk} is a partition of Ω, then the orthogonal sequence of orthogonal projections {E(Λk)} is such that

∑_{k=1}^{n} E(Λk) −→ E(⋃k Λk) = E(Ω) = I strongly.

Let E: AΩ → B[H] be any spectral measure into B[H]. For each pair of vectors x, y ∈ H consider the mapping πx,y: AΩ → ℂ defined by

πx,y(Λ) = ⟨E(Λ)x ; y⟩ for every Λ ∈ AΩ.

The mapping πx,y is an ordinary complex-valued (countably additive) measure on AΩ. For each x, y ∈ H consider the measure space (Ω, AΩ, πx,y). Let B(Ω) denote the algebra of all complex-valued bounded AΩ-measurable functions φ: Ω → ℂ on Ω. The integral of a function φ in B(Ω) with respect to the measure πx,y, viz., ∫ φ dπx,y, will also be denoted by ∫ φ(λ) dπx,y, or by ∫ φ d⟨Eλ x ; y⟩, or by ∫ φ(λ) d⟨Eλ x ; y⟩.

Lemma 3.7. Let E: AΩ → B[H] be a spectral measure. For every bounded AΩ-measurable function φ ∈ B(Ω) there is a unique F ∈ B[H] such that

⟨F x ; y⟩ = ∫ φ(λ) d⟨Eλ x ; y⟩ for every x, y ∈ H.

Proof. Let φ: Ω → ℂ be a bounded function and set ‖φ‖∞ = sup_{λ∈Ω} |φ(λ)|. Define the sesquilinear form f: H×H → ℂ by

f(x, y) = ∫ φ(λ) d⟨Eλ x ; y⟩ for every x, y ∈ H.

Since ⟨E(·)x ; x⟩ is a positive measure, for every x ∈ H,

|f(x, x)| ≤ ‖φ‖∞ ∫ d⟨Eλ x ; x⟩ = ‖φ‖∞ ⟨E(Ω)x ; x⟩ = ‖φ‖∞ ‖x‖².

Thus, by the polarization identity for sesquilinear forms (Proposition 1.A(a1)), and by the parallelogram law (Proposition 1.A(b)), we get

|f(x, y)| ≤ ‖φ‖∞ (‖x‖² + ‖y‖²) for every x, y ∈ H.

Then f is bounded (sup_{‖x‖=‖y‖=1} |f(x, y)| < ∞) and so is the linear functional f(·, y): H → ℂ for each y ∈ H. Thus, by the Riesz Representation Theorem in Hilbert space (see, e.g., [66, Theorem 5.62]), for each y ∈ H there is a unique zy ∈ H such that f(x, y) = ⟨x ; zy⟩ for each
x ∈ H. This establishes a mapping F#: H → H that assigns to each y ∈ H a unique zy ∈ H such that f(x, y) = ⟨x ; F#y⟩ for every x, y ∈ H. It is easy to show that F# is unique and lies in B[H]. Hence F = (F#)* is the unique operator in B[H] such that ⟨F x ; y⟩ = f(x, y) for every x, y ∈ H. □

Notation. The unique F ∈ B[H] such that ⟨F x ; y⟩ = ∫ φ(λ) d⟨Eλ x ; y⟩ for every x, y ∈ H as in Lemma 3.7 is usually denoted by

F = ∫ φ(λ) dEλ.

In particular, if φ is the characteristic function χΛ of a set Λ ∈ AΩ, then ⟨E(Λ)x ; y⟩ = πx,y(Λ) = ∫_Λ dπx,y = ∫ χΛ dπx,y = ∫ χΛ(λ) d⟨Eλ x ; y⟩ for every x, y ∈ H, and so the orthogonal projection E(Λ) ∈ B[H] is denoted by

E(Λ) = ∫ χΛ(λ) dEλ = ∫_Λ dEλ.
Lemma 3.8. Let E: AΩ → B[H] be a spectral measure. Suppose φ and ψ are (bounded AΩ-measurable) functions in B(Ω), and consider the operators F = ∫ φ(λ) dEλ and G = ∫ ψ(λ) dEλ in B[H]. Then

(a) F* = ∫ φ̄(λ) dEλ and (b) F G = ∫ φ(λ)ψ(λ) dEλ.

Proof. Given φ and ψ in B(Ω), consider the operators F = ∫ φ(λ) dEλ and G = ∫ ψ(λ) dEλ in B[H] (cf. Lemma 3.7). Take an arbitrary pair x, y ∈ H.

(a) Observe from the proof of Lemma 3.7 that ⟨F* x ; y⟩ is the complex conjugate of ⟨y ; F* x⟩ = f(y, x) = ∫ φ(λ) d⟨Eλ y ; x⟩, and hence

⟨F* x ; y⟩ = ∫ φ̄(λ) d⟨Eλ x ; y⟩.

(b) Set π = πx,y. Since ⟨F x ; y⟩ = ∫ φ dπ = ∫ φ(λ) d⟨Eλ x ; y⟩, it follows that

⟨F G x ; y⟩ = ∫ φ(λ) d⟨Eλ G x ; y⟩.

Moreover, since ⟨G x ; y⟩ = ∫ ψ dπ = ∫ ψ(λ) d⟨Eλ x ; y⟩, it also follows that

⟨E(Λ) G x ; y⟩ = ⟨G x ; E(Λ)y⟩ = ∫ ψ(λ) d⟨Eλ x ; E(Λ)y⟩

for every Λ ∈ AΩ. Therefore,

⟨F G x ; y⟩ = ∫ φ dν,

where ν = νx,y: AΩ → ℂ is the measure defined, for each Λ ∈ AΩ, by

ν(Λ) = ∫ ψ(λ) d⟨Eλ x ; E(Λ)y⟩.

Thus

∫_Λ dν = ν(Λ) = ∫ ψ(λ) d⟨E(Λ)Eλ x ; y⟩ = ∫_Λ ψ(λ) d⟨Eλ x ; y⟩ = ∫_Λ ψ dπ

for each Λ ∈ AΩ. This is usually written as dν = ψ dπ, meaning that ψ = dν/dπ is the Radon–Nikodým derivative of ν with respect to π. In fact, ∫_Λ dν = ∫_Λ ψ dπ for every Λ ∈ AΩ implies that ∫ φ dν = ∫ φψ dπ for every φ ∈ B(Ω). Hence

⟨F G x ; y⟩ = ∫ φ dν = ∫ φψ dπ = ∫ φ(λ)ψ(λ) d⟨Eλ x ; y⟩. □
Lemma 3.9. Given a spectral measure E: AΩ → B[H] and a bounded AΩ-measurable function φ ∈ B(Ω), the operator F = ∫ φ(λ) dEλ is normal.

Proof. According to Lemmas 3.7 and 3.8,

F* F = ∫ |φ(λ)|² dEλ = F F*. □
Lemma 3.10. Let E: AΩ → B[H] be a spectral measure, where Ω is a compact subset of ℂ. The operator F = ∫ λ dEλ is well defined in B[H], is normal, and

p(F, F*) = ∫ p(λ, λ̄) dEλ

for every polynomial p(·, ·): Ω × Ω → ℂ in λ and λ̄.

Proof. Since Ω ≠ ∅ is compact, the identity function on Ω is bounded and AΩ-measurable, and so F is well defined and normal by Lemmas 3.7 and 3.9. If p(λ, λ̄) = ∑_{i,j=0}^{n,m} αi,j λⁱ λ̄ʲ, then set p(F, F*) = ∑_{i,j=0}^{n,m} αi,j Fⁱ F*ʲ. Thus (by linearity of the integral) the result follows by Lemma 3.8 (since p(·, ·): Ω → ℂ such that λ ↦ p(λ, λ̄) lies in B(Ω) because Ω is bounded). □

The standard form of the Spectral Theorem (Theorem 3.15) states the converse of Lemma 3.9. A proof of it relies on Theorem 3.11. In fact, Theorems 3.11 and 3.15 can be thought of as equivalent versions of the Spectral Theorem.
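When the spectrum is a finite set the spectral measure is purely atomic and the integrals in Lemmas 3.7–3.10 collapse to finite sums, E(Λ) = ∑_{λk∈Λ} Ek and ∫ φ(λ) dEλ = ∑_k φ(λk)Ek. The following Python sketch (illustrative data, not from the text) checks Lemma 3.8 and Lemma 3.9 in this discrete setting.

```python
import numpy as np

# Discrete sketch (illustrative data): an atomic spectral measure supported on
# {lam_k} with atoms E_k; "integration" against it is a finite sum.
rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
lam = np.array([2.0, -1j, 0.5])
E = [np.outer(U[:, k], U[:, k].conj()) for k in range(3)]

def integral(f):
    """int f(lam) dE_lam for the atomic measure: sum_k f(lam_k) E_k."""
    return sum(f(l) * P for l, P in zip(lam, E))

phi = lambda z: z**2 + 1
psi = lambda z: np.exp(z)
F, G = integral(phi), integral(psi)

# Lemma 3.8: F* = int conj(phi) dE and F G = int phi*psi dE.
assert np.allclose(F.conj().T, integral(lambda z: np.conj(phi(z))))
assert np.allclose(F @ G, integral(lambda z: phi(z) * psi(z)))
# Lemma 3.9: F is normal, F*F = int |phi|^2 dE = F F*.
assert np.allclose(F.conj().T @ F, F @ F.conj().T)
```

In particular, taking f(λ) = λ recovers the operator F = ∫ λ dEλ of Lemma 3.10, which here is just the normal matrix with eigenvalues {λk}.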
3.4 Spectral Theorem: General Case

As always, all Hilbert spaces are nonzero and complex. Consider the notation in the paragraph preceding Proposition 3.B. Take the multiplication operator Mφ in B[L²(Ω, μ)], which is normal, where φ lies in L∞(Ω, μ), Ω is a nonempty set, and μ is a positive measure on AΩ (cf. Proposition 3.B).

Theorem 3.11. (Spectral Theorem – first version). If T is a normal operator on a Hilbert space, then there is a positive measure μ on a σ-algebra AΩ of subsets of a set Ω ≠ ∅ and a function ϕ in L∞(Ω, μ) such that T is unitarily equivalent to the multiplication operator Mϕ on L²(Ω, μ), and ‖ϕ‖∞ = ‖T‖.

Proof. Let T be a normal operator on a Hilbert space H. We split the proof into two parts. In part (a) we prove the theorem for the case where there exists
a star-cyclic vector x for T, which means that there is a vector x ∈ H such that ⋁_{m,n} {TⁿT*ᵐx} = H (each index m and n runs over all nonnegative integers; that is, (m, n) ∈ ℕ0 × ℕ0). In part (b) we consider the complementary case.

(a) Fix a unit vector x ∈ H. Take a compact ∅ ≠ Ω ⊂ ℂ. Let P(Ω) be the set of all polynomials p(·, ·): Ω × Ω → ℂ in λ and λ̄ (equivalently, p(·, ·): Ω → ℂ such that λ ↦ p(λ, λ̄)). Let C(Ω) be the set of all complex-valued continuous functions on Ω. Thus P(Ω) ⊂ C(Ω). When equipped with the sup-norm, C(Ω) is a Banach space. The Stone–Weierstrass Theorem for complex functions (see, e.g., [31, p. 139] or [27, Theorem V.8.1]) says that, if Ω is compact, then P(Ω)⁻ = C(Ω) in the sup-norm topology. That is, P(Ω) is dense in the Banach space (C(Ω), ‖·‖∞). Consider the functional Φ: P(Ω) → ℂ given, for each p ∈ P(Ω), by

Φ(p) = ⟨p(T, T*)x ; x⟩,

which is clearly linear on the linear space P(Ω). Moreover,

|Φ(p)| ≤ ‖p(T, T*)x‖ ‖x‖ ≤ ‖p(T, T*)‖ = r(p(T, T*))

by the Schwarz inequality, since ‖x‖ = 1 and p(T, T*) is normal (thus normaloid). From now on set Ω = σ(T). According to Theorem 2.8,

|Φ(p)| ≤ r(p(T, T*)) = sup_{λ∈σ(p(T,T*))} |λ| = sup_{λ∈σ(T)} |p(λ, λ̄)| = ‖p‖∞,

when P(σ(T)) is viewed as a linear manifold of the Banach space C(σ(T)) equipped with the sup-norm ‖·‖∞. Thus Φ: P(σ(T)) → ℂ is bounded too (a contraction, actually, since ‖Φ‖ ≤ 1). Since P(σ(T)) is dense in C(σ(T)) and since Φ: P(σ(T)) → ℂ is continuous, Φ has a unique bounded linear extension over C(σ(T)), which we shall denote again by Φ: C(σ(T)) → ℂ.

Claim. Φ: C(σ(T)) → ℂ is positive in the sense that Φ(ψ) ≥ 0 for every ψ ∈ C(σ(T)) such that ψ(λ) ≥ 0 for every λ ∈ σ(T).

Proof. Take any ψ ∈ C(σ(T)) such that ψ(λ) ≥ 0 for every λ ∈ σ(T). Take an arbitrary ε > 0. Since Φ is continuous and P(σ(T))⁻ = C(σ(T)), there is a polynomial pε ∈ P(σ(T)) such that pε(λ, λ̄) ≥ 0 for all λ ∈ σ(T) and |Φ(pε) − Φ(ψ)| < ε. The Stone–Weierstrass Theorem for real functions (see, e.g., [31, p. 137] or [27, Theorem V.8.1]) ensures that there exists a polynomial qε in the real variables α, β and with real coefficients, (α, β) ↦ qε(α, β) ∈ ℝ, such that

|qε(α, β)² − pε(λ, λ̄)| < ε for every λ = α + iβ ∈ σ(T)
(recall that pε(λ, λ̄) ≥ 0). Now consider the Cartesian decomposition of T, namely, T = A + iB, where A and B are commuting self-adjoint operators (see Proposition 1.O). Then qε(A, B) is self-adjoint, and so qε(A, B)² is a nonnegative operator that commutes with T and T*. Observe that

|⟨qε(A, B)² x ; x⟩ − Φ(pε)| = |⟨qε(A, B)² x ; x⟩ − ⟨pε(T, T*)x ; x⟩| = |⟨(qε(A, B)² − pε(T, T*))x ; x⟩| ≤ ‖qε(A, B)² − pε(T, T*)‖ = r(qε(A, B)² − pε(T, T*)),

since qε(A, B)² − pε(T, T*) is normal, thus normaloid (reason: qε(A, B)² is a self-adjoint operator that commutes with the normal operator pε(T, T*)). So

|⟨qε(A, B)² x ; x⟩ − Φ(pε)| ≤ sup_{λ=α+iβ∈σ(T)} |qε(α, β)² − pε(λ, λ̄)| ≤ ε,

according to Theorem 2.8 again. Hence,

|⟨qε(A, B)² x ; x⟩ − Φ(ψ)| ≤ |⟨qε(A, B)² x ; x⟩ − Φ(pε)| + |Φ(pε) − Φ(ψ)| ≤ 2ε.

But 0 ≤ ⟨qε(A, B)² x ; x⟩, and so 0 ≤ ⟨qε(A, B)² x ; x⟩ ≤ 2ε + Φ(ψ) for every ε > 0, which implies that 0 ≤ Φ(ψ), thus completing the proof. □

Now take the bounded, linear, positive functional Φ on C(Ω) (positiveness is understood in the sense of the above Claim) with Ω = σ(T), which is compact. The Riesz Representation Theorem in C(Ω) (see, e.g., [79, Theorem 2.14]) ensures the existence of a finite positive measure μ on the σ-algebra AΩ = Aσ(T) of Borel subsets of Ω (where continuous functions are measurable) such that

Φ(ψ) = ∫ ψ dμ for every ψ ∈ C(Ω).

Let U: P(Ω) → H be a mapping defined, for each p ∈ P(Ω), by

Up = p(T, T*)x.

Regard P(Ω) as a linear manifold of the Banach space L²(Ω, μ) equipped with the L²-norm ‖·‖₂. It is plain that U is a linear transformation. Also,

‖Up‖₂² = ∫ |p|² dμ = ∫ p p̄ dμ = Φ(p p̄) = ⟨p(T, T*) p(T, T*)* x ; x⟩ = ‖p(T, T*)* x‖² = ‖p(T, T*)x‖²

for every p ∈ P(Ω) (since p(T, T*) is normal), which says that U is an isometry, thus injective, and so it has an inverse on its range R(U) = {p(T, T*)x: p ∈ P(Ω)} ⊆ H, say U⁻¹: R(U) → P(Ω). For each polynomial p in P(Ω) let q be the polynomial in P(Ω) defined by q(λ, λ̄) = λ p(λ, λ̄) for every λ ∈ Ω. Let

ϕ: Ω → Ω ⊂ ℂ
be the identity function, ϕ(λ) = λ for every λ ∈ Ω, which is bounded (i.e., ϕ ∈ L∞(Ω, μ) because Ω is bounded). In this case we may write

q = ϕp and q(T, T*)x = T p(T, T*)x.

Therefore,

U⁻¹ T Up = U⁻¹ T p(T, T*)x = U⁻¹ q(T, T*)x = U⁻¹ Uq = q = ϕp = Mϕ p

for every p ∈ P(Ω). Since P(Ω) is dense in the Banach space (C(Ω), ‖·‖∞), and since Ω is a compact set in ℂ and μ is a finite positive measure on AΩ, it follows that P(Ω) is also dense in the normed space (C(Ω), ‖·‖₂), which in turn is a dense linear manifold of the Banach space (L²(Ω, μ), ‖·‖₂), and hence P(Ω) is dense in (L²(Ω, μ), ‖·‖₂). That is, P(Ω)⁻ = L²(Ω, μ) in the L²-norm topology. Moreover, since U is a linear isometry of P(Ω) onto R(U), which are linear manifolds of the Hilbert spaces L²(Ω, μ) and H, it follows that U extends by continuity to a unique unitary transformation, also denoted by U, of the Hilbert space P(Ω)⁻ = L²(Ω, μ) onto the range of U, R(U) ⊆ H. (In fact, it extends onto the closure of R(U), but R(U) is closed in H because every isometry of a Banach space into a normed space has a closed range.) Thus if x is a star-cyclic vector for T, that is, if ⋁_{m,n} {TⁿT*ᵐx} = H, then

R(U) = {p(T, T*)x: p ∈ P(Ω)}⁻ = ⋁_{m,n} {TⁿT*ᵐx} = H,

and so U is a unitary transformation in B[L²(Ω, μ), H] such that

U⁻¹ T U = Mϕ.

Then T on H is unitarily equivalent to the multiplication operator Mϕ on L²(Ω, μ) for the identity function ϕ: Ω → Ω ⊂ ℂ on Ω = σ(T) (i.e., ϕ(λ) = λ for every λ ∈ Ω). Therefore, since T is normal (thus normaloid) and unitarily equivalent to Mϕ (thus with the same norm as Mϕ), it follows that ‖ϕ‖∞ = ‖T‖. Indeed, according to Proposition 3.B,

‖ϕ‖∞ = ess sup |ϕ| ≤ sup_{λ∈σ(T)} |λ| = r(T) = ‖T‖ = ‖Mϕ‖ = ‖ϕ‖∞.
(b) On the other hand, suppose there is no star-cyclic vector for T . Assume that T and H are nonzero to avoid trivialities. Since T is normal, the nontrivial n ∗m T x} of H reduces T for each unit vector x ∈ H. subspace Mx = m,n {T Consider the set = { Mx : unit x ∈ H} of all orthogonal direct sums of these subspaces, which is partially ordered (in the inclusion ordering). is not empty (if a unit y ∈ Mx ⊥, then My = m,n {T n T ∗m y} ⊆ Mx ⊥ because Mx ⊥ reduces T , and so Mx ⊥ My ). Moreover, every chain in has an upper bound
3.4 Spectral Theorem: General Case
71
in the collection (the union of all orthogonal direct sums in a given chain of orthogonal direct sums is again an orthogonal direct sum in the collection). Thus Zorn's Lemma ensures that the collection has a maximal element, say M = ⊕ Mx, which coincides with H (otherwise it would not be maximal since M ⊕ M⊥ = H). Summing up: There exists an indexed family {xγ}γ∈Γ of unit vectors in H generating a (similarly indexed) collection {Mγ}γ∈Γ of orthogonal nontrivial subspaces of H such that each Mγ = ⋁_{m,n}{T^n T^{*m} xγ} reduces T, each xγ is star-cyclic for T|Mγ, and H = ⊕_{γ∈Γ} Mγ. Each restriction T|Mγ is a normal operator in B[Mγ] (cf. Proposition 1.Q). Thus, by part (a), for each γ ∈ Γ there is a positive finite measure μγ on the σ-algebra AΩγ of Borel subsets of Ωγ = σ(T|Mγ) such that T|Mγ in B[Mγ] is unitarily equivalent to the multiplication operator Mϕγ in B[L²(Ωγ, μγ)], where each ϕγ: Ωγ → Ωγ ⊂ C is the identity function on Ωγ (ϕγ(λ) = λ for every λ ∈ Ωγ), which is bounded (ϕγ ∈ L∞(Ωγ, μγ) since each Ωγ is compact in C). Thus there exists a unitary transformation Uγ in B[Mγ, L²(Ωγ, μγ)] such that Uγ T|Mγ = Mϕγ Uγ for each index γ ∈ Γ. Hence the unitary U = ⊕_{γ∈Γ} Uγ in B[⊕_{γ∈Γ} Mγ, ⊕_{γ∈Γ} L²(Ωγ, μγ)] is such that

(⊕_{γ∈Γ} Uγ)(⊕_{γ∈Γ} T|Mγ) = ⊕_{γ∈Γ} Uγ T|Mγ = ⊕_{γ∈Γ} Mϕγ Uγ = (⊕_{γ∈Γ} Mϕγ)(⊕_{γ∈Γ} Uγ).

Therefore, T = ⊕_{γ∈Γ} T|Mγ in B[H] = B[⊕_{γ∈Γ} Mγ] is unitarily equivalent to ⊕_{γ∈Γ} Mϕγ in B[⊕_{γ∈Γ} L²(Ωγ, μγ)].
Consider the disjoint union of {Ωγ}, which is obtained by reindexing the elements of each Ωγ as follows. For each γ ∈ Γ write Ωγ = {λδ(γ)}δ∈Δγ so that, if there is a complex number λ in Ωα ∩ Ωβ for α ≠ β in Γ, then the same complex number λ is represented as λδ(α) ∈ Ωα for some δ ∈ Δα and as λδ(β) ∈ Ωβ for some δ ∈ Δβ, and these representations are considered as distinct elements. Therefore the sets Ωα and Ωβ become disjoint. (Regard the sets in {Ωγ} as subsets of distinct copies of C so that,
for distinct indices α, β ∈ Γ, the sets Ωα and Ωβ are disjoint.) Set Ω = ⨆_{γ∈Γ} Ωγ. The disjoint union Ω — unlike the ordinary union — can be viewed as the union of all σ(T|Mγ) considering the multiplicity of each point in ⋃_{γ∈Γ} σ(T|Mγ), disregarding possible overlappings. Note that σ(T|Mγ) ⊆ σ(T) since each Mγ reduces T (cf. Proposition 2.F(b)), and so Ω ⊆ ⨆_{γ∈Γ} σ(T) is bounded.
Let AΩ be the σ-algebra consisting of all subsets of Ω of the form Λ = ⨆_{γ∈Γ} Λγ, where each Λγ is a Borel subset of Ωγ (i.e., Λγ ∈ AΩγ). These are referred to as the Borel subsets of Ω. Let μγ: AΩγ → R be the finite measure on each AΩγ whose existence was ensured in part (a). Consider the Borel subsets Λ = ⨆_{j∈J} Λj of Ω made up of disjoint countable unions of Borel subsets of Ωj (i.e., Λj ∈ AΩj), and let μ be the measure on AΩ such that μ(Λ) = Σ_{j∈J} μj(Λj) for every Λ = ⨆_{j∈J} Λj in AΩ, with {Λj} being any disjoint collection of sets with each Λj ∈ AΩj, for any countable subset J of Γ. (For a proper subset J of Γ we identify ⨆_{j∈J} Λj with ⨆_{γ∈Γ} Λγ when Λγ = ∅ for every γ ∉ J.) Note that μ is positive since
each μj is. (Indeed, for Λ = Ω each Λj = Ωj, so that μj(Λj) > 0 for at least one j in some countable J.) Consider the measure space (Ω, AΩ, μ). Let ϕ: Ω → C be the
function obtained from the functions ϕγ: Ωγ → Ωγ ⊂ C as follows. If λ ∈ ⨆_{γ∈Γ} Ωγ, then λ ∈ Ωβ for some β ∈ Γ, and so set ϕ(λ) = ϕβ(λ) = λ (each ϕγ is the identity function on Ωγ). Since Ω = ⨆_{γ∈Γ} Ωγ, where Ωγ = σ(T|Mγ) ⊆ σ(T), it follows that ϕ ∈ L∞(Ω, μ) with

‖ϕ‖∞ = ess sup |ϕ| ≤ sup_{λ∈Ω} |ϕ(λ)| ≤ sup_{λ∈σ(T)} |λ| = r(T) ≤ ‖T‖.

(Actually, r(T) = ‖T‖ because T is normaloid.) Now we show that ⊕_{γ∈Γ} Mϕγ in B[⊕_{γ∈Γ} L²(Ωγ, μγ)] is
unitarily equivalent to Mϕ in B[L²(Ω, μ)]. First note that the Hilbert spaces L²(Ω, μ) and ⊕_{γ∈Γ} L²(Ωγ, μγ) are unitarily equivalent. Actually, recall that direct sums and topological sums of families of orthogonal subspaces of a Hilbert space are unitarily equivalent. Thus ⊕_{γ∈Γ} L²(Ωγ, μγ) ≅ (Σ_{γ∈Γ} L²(Ωγ, μγ))⁻, which means that there is a unitary transformation V: ⊕_{γ∈Γ} L²(Ωγ, μγ) → (Σ_{γ∈Γ} L²(Ωγ, μγ))⁻. Next
consider the mapping W: Σ_{γ∈Γ} L²(Ωγ, μγ) → L²(⨆_{γ∈Γ} Ωγ, μ) that assigns to each ψ = Σ_{γ∈Γ} ψγ in Σ_{γ∈Γ} L²(Ωγ, μγ), with each ψγ in L²(Ωγ, μγ), the function Wψ in L²(⨆_{γ∈Γ} Ωγ, μ) defined as follows: if λ ∈ ⨆_{γ∈Γ} Ωγ, then λ ∈ Ωβ for some β ∈ Γ and (Wψ)(λ) = ψβ(λ). The mapping W is linear and surjective. Since ‖Wψ‖² = ∫ |Wψ|² dμ = Σ_{β∈Γ} ∫ |ψβ|² dμβ = Σ_{β∈Γ} ‖ψβ‖² = ‖ψ‖², it follows that W is an isometry as well, and so it is a unitary transformation of Σ_{γ∈Γ} L²(Ωγ, μγ) onto L²(⨆_{γ∈Γ} Ωγ, μ). Then it extends by continuity to a unitary transformation, also denoted by W, of the whole Hilbert space (Σ_{γ∈Γ} L²(Ωγ, μγ))⁻ onto the Hilbert space L²(⨆_{γ∈Γ} Ωγ, μ). In other words, W: (Σ_{γ∈Γ} L²(Ωγ, μγ))⁻ → L²(⨆_{γ∈Γ} Ωγ, μ) = L²(Ω, μ) is unitary. Therefore, (Σ_{γ∈Γ} L²(Ωγ, μγ))⁻ ≅ L²(Ω, μ), and hence, by transitivity, ⊕_{γ∈Γ} L²(Ωγ, μγ) ≅ L²(Ω, μ).
Now take an arbitrary ψ = Σ_{γ∈Γ} ψγ in Σ_{γ∈Γ} L²(Ωγ, μγ), and an arbitrary λ ∈ Ω = ⨆_{γ∈Γ} Ωγ, so that λ ∈ Ωβ for some β ∈ Γ. Recall that ϕ|Ωβ = ϕβ (which is the identity function on Ωβ). Thus (Wψ)(λ) = ψβ(λ), and therefore (Mϕ Wψ)(λ) = ϕ(λ)ψβ(λ) = ϕ|Ωβ(λ)ψβ(λ) = (ϕβψβ)(λ). On the other hand, since V⁻¹ψ = ⊕_{γ∈Γ} ψγ, it follows that (⊕_{γ∈Γ} Mϕγ)V⁻¹ψ = ⊕_{γ∈Γ} Mϕγψγ = ⊕_{γ∈Γ} ϕγψγ, and so V(⊕_{γ∈Γ} Mϕγ)V⁻¹ψ = Σ_{γ∈Γ} ϕγψγ. Hence (W V(⊕_{γ∈Γ} Mϕγ)V⁻¹ψ)(λ) = (Σ_{γ∈Γ} ϕγψγ)(λ) = (ϕβψβ)(λ). Then Mϕ W = W V(⊕_{γ∈Γ} Mϕγ)V⁻¹ on Σ_{γ∈Γ} L²(Ωγ, μγ), which extends by continuity over all of (Σ_{γ∈Γ} L²(Ωγ, μγ))⁻. Thus, for the unitary transformation WV,
WV (⊕_{γ∈Γ} Mϕγ) = Mϕ WV,

and so ⊕_{γ∈Γ} Mϕγ is unitarily equivalent to Mϕ (i.e., ⊕_{γ∈Γ} Mϕγ ≅ Mϕ) through the unitary WV. Then T ≅ Mϕ by transitivity, since T ≅ ⊕_{γ∈Γ} Mϕγ. That is, T in B[H] is unitarily equivalent to Mϕ in B[L²(Ω, μ)]. Finally, since ‖ϕ‖∞ ≤ ‖T‖ and T ≅ Mϕ, it follows that (cf. Proposition 3.B)

‖ϕ‖∞ ≤ ‖T‖ = ‖Mϕ‖ ≤ ‖ϕ‖∞.
Remark. Note that ϕ: Ω → C can be viewed as a sort of identity function (ϕ(λ) = λ ∈ C for every λ ∈ Ω) including multiplicity (and so it may not be injective), which takes disjoint unions in AΩ into ordinary unions in Aσ(T), so that ϕ(Ω)⁻ = σ(T) (by Proposition 2.F(d) since each T|Mγ is normal), and each Λ̃ ∈ Aσ(T) is such that Λ̃⁻ = ϕ(Λ)⁻ for some Λ ∈ AΩ, and hence Ω is bounded. The following commutative diagram illustrates the proof of part (b).

                           T
            H ───────────────────────────→ H
            │U                             │U
            ↓        ⊕_{γ∈Γ} Mϕγ           ↓
 ⊕_{γ∈Γ} L²(Ωγ,μγ) ─────────────→ ⊕_{γ∈Γ} L²(Ωγ,μγ)
            │V                             │V
            ↓                              ↓
 (Σ_{γ∈Γ} L²(Ωγ,μγ))⁻             (Σ_{γ∈Γ} L²(Ωγ,μγ))⁻
            │W                             │W
            ↓              Mϕ              ↓
        L²(Ω,μ) ─────────────────────→ L²(Ω,μ)
Definition 3.12. A vector x ∈ H is a cyclic vector for an operator T in B[H] if H is the smallest invariant subspace for T that contains x. If T has a cyclic vector, then it is called a cyclic operator. A vector x ∈ H is a star-cyclic vector for an operator T in B[H] if H is the smallest reducing subspace for T that contains x. If T has a star-cyclic vector, then it is called a star-cyclic operator. Observe that x ∈ H is a cyclic vector for an operator T ∈ B[H] if and only if ⋁_n{T^n x} = H; that is, if and only if {p(T)x: p is a polynomial}⁻ = H, which means that {Sx: S ∈ P(T)}⁻ = H, where P(T) is the algebra of all polynomials in T with complex coefficients. Similarly, it can also be verified that x ∈ H is a star-cyclic vector for an operator T ∈ B[H] if and only if {Sx: S ∈ C*(T)}⁻ = H, where C*(T) stands for the C*-algebra generated by T, which is the smallest C*-algebra of operators from B[H] containing T and the identity I. Recall that C*(T) = P(T, T*)⁻, the closure in B[H] of the set of all polynomials in two (noncommuting) variables T and T* (in any order) with complex coefficients. (See, e.g., [27, Proposition IX.3.2] and [5, p. 1].)
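Definition 3.12 can be tested concretely in finite dimensions, where the closed span ⋁_n{T^n x} is simply the column space of the Krylov matrix [x Tx ⋯ T^{n−1}x]. A hedged sketch (the function name and the matrices below are illustrative, not from the book):

```python
import numpy as np

def is_cyclic(T, x):
    """On C^n, x is cyclic for T exactly when x, Tx, ..., T^(n-1)x span C^n."""
    n = T.shape[0]
    K = np.column_stack([np.linalg.matrix_power(T, k) @ x for k in range(n)])
    return np.linalg.matrix_rank(K) == n

S = np.diag([1.0, 2.0, 3.0])                        # normal, distinct eigenvalues
assert is_cyclic(S, np.array([1.0, 1.0, 1.0]))      # touches every eigenspace
assert not is_cyclic(S, np.array([1.0, 0.0, 1.0]))  # misses the 2-eigenspace
assert not is_cyclic(np.eye(3), np.array([1.0, 1.0, 1.0]))  # scalar operators are never cyclic for n > 1
```

For a diagonal (hence normal) matrix, a vector is cyclic precisely when it has a nonzero component in every eigenspace, which is the finite-dimensional shadow of the cyclic/star-cyclic equivalence for normal operators discussed below.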
Remark. Reducing subspaces are invariant. Thus a cyclic vector is star-cyclic, and so a cyclic operator is star-cyclic. A normed space is separable if and only if it is spanned by a countable set. So the subspace {Sx: S ∈ P(T, T*)}⁻ = ⋁_{m,n}({T^m T^{*n} x} ∪ {T^{*n} T^m x}) of H is separable, and so is {Sx: S ∈ C*(T)}⁻ = {Sx: S ∈ P(T, T*)⁻}⁻. Hence, if T ∈ B[H] is a star-cyclic operator, then H is separable. (See, e.g., [66, Proposition 4.9] and [27, Proposition IX.3.3].) Then,

T ∈ B[H] is cyclic =⇒ T ∈ B[H] is star-cyclic =⇒ H is separable.

If T is normal, then ⋁_{m,n}{T^n T^{*m} x} is the smallest subspace of H that contains x and reduces T, so that x ∈ H is a star-cyclic vector for a normal operator T if and only if ⋁_{m,n}{T^n T^{*m} x} = H (cf. proof of Theorem 3.11). An important result from [19] says that if T is a normal operator, then T is cyclic whenever it is star-cyclic, and so

T ∈ B[H] is normal and cyclic ⇐⇒ T ∈ B[H] is normal and star-cyclic.

It can also be shown along this line that x is a cyclic vector for a normal operator T if and only if it is a cyclic vector for T* [50, Problem 164]. Consider the proof of Theorem 3.11. Part (a) deals with the case where the normal operator T is star-cyclic, while part (b) deals with the case where T is not star-cyclic. In the former case, H must be separable by the above remark. We show next that if H is separable, then the measure in Theorem 3.11 can be taken to be finite. Indeed, the positive measure μ: AΩ → R constructed in part (a) is finite. However, the positive measure μ: AΩ → R constructed in part (b) may fail to be even σ-finite (see, e.g., [50, p. 66]). But finiteness can be restored by separability.

Corollary 3.13. If T is a normal operator on a separable Hilbert space H, then there is a finite positive measure μ on a σ-algebra AΩ of subsets of a nonempty set Ω and a function ϕ in L∞(Ω, μ) such that T is unitarily equivalent to the multiplication operator Mϕ on L²(Ω, μ).

Proof.
Consider the argument in part (b) of the proof of Theorem 3.11. Since the collection {Mγ } is made up of orthogonal subspaces, we may take one unit vector from each subspace by the Axiom of Choice to get an orthonormal set {eγ } of unit vectors, which is in a one-to-one correspondence with the set {Mγ }. Hence {Mγ } and {eγ } have the same cardinality. Since {eγ } is an orthonormal set, its cardinality is not greater than the cardinality of any orthonormal basis for H, and so the cardinality of {Mγ } does not surpass the (Hilbert) dimension of H. If H is separable, then {Mγ } is countable. In this case, for notational convenience, replace the index γ in the index set Γ with an integer index n in a countable index set, say N , throughout the proof of part (b). Since the orthogonal collection {Mn } of subspaces is now countable, it follows that the family of sets {Ωn }, each Ωn = σ(T |Mn ), is countable. Recall that each positive measure μn : AΩn → R inherited from part (a) is finite. Take
the positive measure μ on AΩ defined in part (b). Thus μ(Λ) = Σ_{j∈J} μj(Λj) for every set Λ = ⨆_{j∈J} Λj in AΩ (each Λj in AΩj, so that Λj ⊆ Ωj), where J is any subset of N. But now Ω = ⨆_{n∈N} Ωn, and so μ(Ω) = Σ_{n∈N} μn(Ωn) = Σ_{n∈N} μ(Ωn), where μ(Ωn) = μn(Ωn) < ∞ for each n ∈ N. (Again, we are identifying each Ωn with ⨆_{m∈N} Ωm when Ωm = ∅ for every m ≠ n.) Conclusion: If H is separable, then μ is σ-finite. However, we can actually get a finite measure. Indeed, without loss of generality, multiply each original measure μn by a positive scalar, so that we may assume Σ_{n∈N} μn(Ωn) < ∞ (e.g., take 0 < μn(Ωn) ≤ 2⁻ⁿ for each n ∈ N). In this case, μ(Ω) = Σ_{n∈N} μn(Ωn) < ∞, so that μ: AΩ → R is now finite (rather than just σ-finite). Thus all measures involved in the proof of Theorem 3.11 are finite whenever H is separable.

Remark. Corollary 3.13 forces the Hilbert space L²(Ω, μ), with that particular finite measure μ, to be separable (since it is unitarily equivalent to the separable Hilbert space H). Recall that there exist nonseparable L² spaces with finite measures, and also non-σ-finite measures for which L² is separable (see, e.g., [21, pp. 174, 192, 376] and [51, pp. 2, 3]).

Corollary 3.14. Let T be a normal operator on a Hilbert space H, so that T ≅ Mϕ on L²(Ω, μ), where ϕ lies in L∞(Ω, μ), with μ being the positive measure on the σ-algebra AΩ of subsets of the nonempty set Ω as in Theorem 3.11. The following assertions are pairwise equivalent.

(a) H is separable.
(b) μ is finite.
(c) T is star-cyclic.
(d) T is cyclic.

Moreover, if any of the above equivalent assertions holds true, then the constant function ψ = 1 (ψ(λ) = 1 for all λ ∈ Ω) lies in L²(Ω, μ) and is star-cyclic for Mϕ, and so Mϕ has a star-cyclic vector in L∞(Ω, μ).

Proof. By Theorem 3.11, T ≅ Mϕ; that is, T is unitarily equivalent to the multiplication operator Mϕ on L²(Ω, μ), where ϕ is a function in L∞(Ω, μ). Let U be a unitary transformation in B[L²(Ω, μ), H] such that U*TU = Mϕ. Observe that ψ in L²(Ω, μ) is a cyclic (star-cyclic) vector for the normal operator Mϕ if and only if x = Uψ in H is a cyclic (star-cyclic) vector for the normal operator T. Indeed, ⋁_n{T^n x} = U ⋁_n{Mϕ^n ψ} and ⋁_{m,n}{T^n T^{*m} x} = U ⋁_{m,n}{Mϕ^n Mϕ^{*m} ψ}. Corollary 3.13 says that (a) implies (b).
We show next that (b) implies (c). Since ϕ: Ω → C is such that ϕ(λ) = λ ∈ C for every λ ∈ Ω, a trivial induction ensures that (Mϕ^n ψ)(λ) = λ^n ψ(λ) and (Mϕ^{*m} ψ)(λ) = λ̄^m ψ(λ), and so ⋁_{m,n}{Mϕ^n Mϕ^{*m} ψ} = ⋁_{m,n}{λ^n λ̄^m ψ}, for every λ in Ω and every ψ ∈ L²(Ω, μ). If μ is finite, then the constant function 1 (i.e., 1(λ) = 1 for all λ ∈ Ω) lies in L²(Ω, μ). Set ψ = 1 in L²(Ω, μ), and let P(Ω) denote the set of all polynomials p(·,·): Ω × Ω → C in λ and λ̄ (equivalently, p(·,·): Ω → C such that λ ↦ p(λ, λ̄)). Thus (cf. proof of Theorem 3.11),
L²(Ω, μ) = P(Ω)⁻ = ⋁_{m,n}{λ^n λ̄^m} = ⋁_{m,n}{Mϕ^n Mϕ^{*m} ψ},
so that ψ = 1 is a star-cyclic vector for Mϕ. Thus (b) implies (c). The Remark preceding Corollary 3.13 ensures that (c) =⇒ (d) =⇒ (a).

Remark. Let K be a compact subset of C. The Stone–Weierstrass Theorem (cf. proof of Theorem 3.11) ensures that P(K)⁻ = C(K) in the sup-norm, where C(K) is the set of all complex-valued continuous functions on K and P(K) is the set of all polynomials p(·,·): K × K → C in λ and λ̄ (or, p(·,·): K → C such that λ ↦ p(λ, λ̄)). Let 𝒫(K) denote the set of all polynomials p(·): K → C in the single variable λ. The Lavrentiev Theorem says that if K is compact, then 𝒫(K)⁻ = C(K) in the sup-norm if and only if C∖K is connected and K° = ∅ (i.e., if and only if the compact set K has empty interior and no holes), which extends the classical Weierstrass Approximation Theorem for (complex-valued) polynomials on compact intervals of the real line, viz., 𝒫([α, β])⁻ = C([α, β]) in the sup-norm (see [21, p. 18], or [27, Proposition VII.5.3] and [28, Theorem V.14.20]; for real-valued functions the Weierstrass Theorem holds for every compact set in the real line — see [31, p. 139]). This underlines the difference between cyclic and star-cyclic vectors. Note that if T is normal and 𝒫(σ(T))⁻ = C(σ(T)), then T is reductive (Proposition 3.J). Bram's result [19] not only shows that normal operators are cyclic if and only if they are star-cyclic, but also shows that, under the hypotheses of Corollary 3.14, Mϕ has a cyclic vector in L∞(Ω, μ), with Ω = σ(T) as in part (a) of the proof of Theorem 3.11 [28, Theorem V.14.21].

Theorem 3.15. (Spectral Theorem – second version). If T ∈ B[H] is normal, then there is a unique spectral measure E: Aσ(T) → B[H] such that

T = ∫ λ dEλ.

If Λ is a nonempty relatively open subset of σ(T), then E(Λ) ≠ O.

Proof. We shall split the proof into four parts.

(a) Let μ be a positive measure on a σ-algebra AΩ of Borel subsets of a nonempty bounded set Ω of complex numbers.
Consider the multiplication operator Mφ ∈ B[L²(Ω, μ)] for any φ ∈ L∞(Ω, μ), and let χΛ: Ω → {0, 1} be the characteristic function of Λ ∈ AΩ, which is bounded (i.e., χΛ ∈ L∞(Ω, μ)). Set E′(Λ) = MχΛ for every Λ ∈ AΩ. It is readily verified that E′: AΩ → B[L²(Ω, μ)] is a spectral measure in L²(Ω, μ) (cf. Definition 3.6). Take an arbitrary f ∈ L²(Ω, μ). Since a characteristic function of a measurable set is a measurable function,

⟨∫ χΛ dE′λ f ; f⟩ = ∫ χΛ dπf,f = ∫_Λ dπf,f = πf,f(Λ) = ⟨E′(Λ)f ; f⟩
                = ⟨MχΛ f ; f⟩ = ∫ χΛ f f̄ dμ = ∫ χΛ |f|² dμ
for every Λ ∈ AΩ. Let Ψ be the collection of all simple functions ψ: Ω → C of the form ψ = Σ_i λi χΛi with respect to all finite measurable partitions {Λi} of Ω and all (similarly indexed) sets {λi} of constants in Ω with each λi ∈ Λi (so that ψ(λi) = λi). Since each ψ ∈ Ψ is AΩ-measurable (and bounded),

⟨∫ ψ dE′λ f ; f⟩ = ⟨∫ Σ_i λi χΛi dE′λ f ; f⟩ = Σ_i ∫ λi χΛi |f|² dμ
               = ∫ Σ_i λi χΛi |f|² dμ = ∫ ψ |f|² dμ
for every ψ ∈ Ψ (by the previously displayed identity), which also holds for the function ϕ: Ω → C such that ϕ(λ) = λ. (Reason: inf_{ψ∈Ψ} ‖ϕ − ψ‖∞ = 0, so that ϕ ∈ Ψ⁻ ⊂ L∞(Ω, μ); thus apply extension by continuity.) Therefore,

⟨∫ ϕ dE′λ f ; f⟩ = ∫ ϕ |f|² dμ = ∫ ϕ f f̄ dμ = ⟨Mϕ f ; f⟩

for every f ∈ L²(Ω, μ). By the polarization identity (Proposition 1.A) we get

⟨Sf ; g⟩ = ¼[⟨S(f+g) ; (f+g)⟩ − ⟨S(f−g) ; (f−g)⟩ + i⟨S(f+ig) ; (f+ig)⟩ − i⟨S(f−ig) ; (f−ig)⟩]

for every f, g ∈ L²(Ω, μ) and S ∈ B[L²(Ω, μ)]. Thus, replacing S with E′(Λ) on the one hand, and with Mϕ on the other hand,

⟨∫ ϕ dE′λ f ; g⟩ = ⟨Mϕ f ; g⟩

for every f, g ∈ L²(Ω, μ), which means

Mϕ = ∫ ϕ(λ) dE′λ = ∫ λ dE′λ.

Uniqueness of the spectral measure E′: AΩ → B[L²(Ω, μ)] is proved as follows. Let P(Ω) be the set of polynomials p(·,·) on Ω as defined in the proof of Theorem 3.11. If Ẽ: AΩ → B[L²(Ω, μ)] is a spectral measure in L²(Ω, μ) such that Mϕ = ∫ λ dẼλ = ∫ λ dE′λ, then ∫ p(λ, λ̄) dẼλ = ∫ p(λ, λ̄) dE′λ for every p ∈ P(Ω) by Lemma 3.10. This means that ⟨∫ p dẼλ x ; y⟩ = ⟨∫ p dE′λ x ; y⟩ for every pair of vectors x, y, for every p ∈ P(Ω). Fix an arbitrary x. So, in particular,

⟨∫ p dẼλ x ; x⟩ = ⟨∫ p dE′λ x ; x⟩

for every p ∈ P(Ω). Let K be the closure of the union of the supports of the measures Ẽ and E′, which is compact since Ω is bounded, and let C(K) be the set of all complex-valued continuous functions on K. The Stone–Weierstrass Theorem for complex functions says that P(K) is dense in (C(K), ‖·‖∞), and so P(K) is dense in (L²(K, ν), ‖·‖₂) for every finite positive measure ν on AΩ with support in K (see proof of Theorem 3.11). Since the positive measures ⟨Ẽ(·)x ; x⟩ = ‖Ẽ(·)x‖² and ⟨E′(·)x ; x⟩ = ‖E′(·)x‖² on AΩ are finite, we get

⟨∫ χΛ dẼλ x ; x⟩ = ⟨∫ χΛ dE′λ x ; x⟩

for the characteristic function χΛ of an arbitrary set Λ ∈ AΩ. Thus
⟨Ẽ(Λ)x ; x⟩ = ⟨∫_Λ dẼλ x ; x⟩ = ⟨∫_Λ dE′λ x ; x⟩ = ⟨E′(Λ)x ; x⟩,
and so ⟨(Ẽ(Λ) − E′(Λ))x ; x⟩ = 0, for every Λ ∈ AΩ. Since Ẽ(Λ) − E′(Λ) is self-adjoint for every Λ ∈ AΩ, and since the above identity holds for every x, it follows that E′ is unique. That is (see, e.g., [66, Corollary 5.80]),

Ẽ(Λ) − E′(Λ) = O for every Λ ∈ AΩ.

This completes the proof for the normal operator Mϕ ∈ B[L²(Ω, μ)]: there is a unique spectral measure E′: AΩ → B[L²(Ω, μ)] such that Mϕ = ∫ λ dE′λ.

(b) If T ∈ B[H] is normal, then T ≅ Mϕ ∈ B[L²(Ω, μ)] (for a positive measure μ on a σ-algebra AΩ of subsets of a bounded set Ω — the disjoint union ⨆_{γ∈Γ} Ωγ) by Theorem 3.11. Thus there is a unitary U ∈ B[H, L²(Ω, μ)] such that T = U*Mϕ U. But E: AΩ → B[H] given by E(Λ) = U*E′(Λ)U for each Λ ∈ AΩ is a spectral measure in H (Definition 3.6). Then we get from (a) that

⟨Tx ; y⟩ = ⟨Mϕ Ux ; Uy⟩ = ⟨∫ ϕ dE′λ Ux ; Uy⟩ = ⟨∫ ϕ dEλ x ; y⟩

for every x, y ∈ H, with ϕ: Ω → C such that ϕ(λ) = λ, which means

T = ∫ λ dEλ.

Since E(Λ) ≅ E′(Λ), uniqueness of E′ implies uniqueness of E. Therefore, there is a unique spectral measure E: AΩ → B[H] such that T = ∫ λ dEλ.
(c) Recall that Ω is the disjoint union ⨆_{γ∈Γ} Ωγ with Ωγ = σ(T|Mγ) ⊆ σ(T) — a set of complex numbers in σ(T) that considers the multiplicity of each point λ in σ(T). Note that, for each λ ∈ Ω, {λ} ∈ AΩ (cf. proof of Theorem 3.11) and dim R(E({λ})) = dim R(E′({λ})) = dim R(Mχ{λ}) = dim L²({λ}, μ) ∈ {0, 1}. We shall now incorporate the multiplicity of λ in dim R(E({λ})) as follows. For each Λ in Aσ(T) set ∪Λ = ⨆_{γ∈Γ}{Λ(γ) ∈ AΩγ: Λ(γ) = Λ ∩ Ωγ} in AΩ so that, for each λ in σ(T), {λ} ∈ Aσ(T) and ∪{λ} = ⨆_{γ∈Γ}{λ(γ) ∈ Ωγ: λ(γ) = λ} in AΩ. Set Ẽ: Aσ(T) → B[H] such that Ẽ(Λ) = ∫_∪Λ dEλ = E(∪Λ) for every Λ in Aσ(T). This defines a spectral measure Ẽ on Aσ(T) such that ∫_σ(T) λ dẼλ = ∫_Ω λ dEλ, which is unique since E is unique. In particular, for each λ in σ(T) we get Ẽ({λ}) = ∫_∪{λ} dEλ = E(∪{λ}), and so dim R(Ẽ({λ})) = dim R(E(∪{λ})). Therefore, using the same notation for Ẽ and E, it follows that the multiplicity of λ is incorporated in dim R(E({λ})) and, from (b), there is a unique spectral measure E: Aσ(T) → B[H] such that T = ∫ λ dEλ.

(d) We show that if ∅ ≠ Λ ∈ Aσ(T) is relatively open in σ(T), then E(Λ) ≠ O. Take any ∅ ≠ Λ ∈ Aσ(T) and consider the definition of ∪Λ ∈ AΩ in part (c). If Λ is relatively open in σ(T), then there is a nonempty Λ(γ) ⊆ Λ in Aσ(T|Mγ) relatively open in σ(T|Mγ). It can be verified that each positive measure μγ on AΩγ (cf. part (b) in the proof of Theorem 3.11) is such that support(μγ) = σ(T|Mγ). Thus μγ(Λ(γ)) > 0, and so μ(∪Λ) > 0 (according to the definition
of μ in the proof of Theorem 3.11). However, since χ∪Λ ∈ L²(Ω, μ), and since ⟨E′(Λ′)f ; f⟩ = ∫ χΛ′ |f|² dμ for each Λ′ ∈ AΩ and every f ∈ L²(Ω, μ),

‖E′(∪Λ)χ∪Λ‖² = ⟨E′(∪Λ)χ∪Λ ; χ∪Λ⟩ = ∫ χ∪Λ |χ∪Λ|² dμ = ∫_∪Λ dμ = μ(∪Λ).

Thus E′(∪Λ) ≠ O, and therefore Ẽ(Λ) = E(∪Λ) ≅ E′(∪Λ) ≠ O, so that Ẽ(Λ) ≠ O. Again, use the same notation for Ẽ and E.

The representation T = ∫ λ dEλ is also referred to as the spectral decomposition of the normal operator T (see Section 3.2).
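In finite dimensions the spectral measure of Theorem 3.15 is atomic, and T = ∫ λ dEλ reduces to a finite sum over the eigenvalues. The following sketch is an illustration under that finite-dimensional assumption (identifiers are ad hoc): it builds the projections E({λk}) for a normal matrix and checks the defining properties of a spectral measure.

```python
import numpy as np

rng = np.random.default_rng(1)
# Normal matrix T = U diag(lam) U* with distinct eigenvalues.
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(X)
lam = np.array([1.0 + 0j, 2.0, -1.0])
T = U @ np.diag(lam) @ U.conj().T

# E({lam_k}) = u_k u_k*, the orthogonal projection onto the k-th eigenspace.
E = [np.outer(U[:, k], U[:, k].conj()) for k in range(3)]

assert np.allclose(sum(E), np.eye(3))                        # E(sigma(T)) = I
assert np.allclose(sum(l * P for l, P in zip(lam, E)), T)    # T = "int" lam dE
assert np.allclose(E[0] @ E[1], 0)                           # orthogonality on disjoint sets
assert np.allclose(E[0] @ T, T @ E[0])                       # each E(Lambda) commutes with T
```

The last assertion is the finite-dimensional shadow of Lemma 3.16(a) below: every spectral projection commutes with T, i.e., its range reduces T.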
3.5 Fuglede Theorems and Reducing Subspaces

Let Aσ(T) be the σ-algebra of Borel subsets of the spectrum σ(T) of T. The following lemma sets up the essential features for proving the next theorem.

Lemma 3.16. Let T = ∫ λ dEλ be the spectral decomposition of a normal operator T ∈ B[H] as in Theorem 3.15. If Λ ∈ Aσ(T), then

(a) E(Λ)T = TE(Λ); equivalently, R(E(Λ)) reduces T.

(b) Moreover, if Λ ≠ ∅, then σ(T|R(E(Λ))) ⊆ Λ⁻.

Proof. Consider the spectral decomposition of T. Take any ∅ ≠ Λ ∈ Aσ(T).

(a) Since E(Λ) = ∫ χΛ(λ) dEλ and T = ∫ λ dEλ, it follows by Lemma 3.8 that

E(Λ)T = ∫ λ χΛ(λ) dEλ = ∫ χΛ(λ) λ dEλ = TE(Λ),

which is equivalent to saying that R(E(Λ)) reduces T (by Proposition 1.I).

(b) Let AΩ be the σ-algebra of Borel subsets of Ω as in Theorem 3.11. Take an arbitrary nonempty Λ ∈ AΩ. Recall from the proof of Theorem 3.15 that ∫ λ dEλ = T ≅ Mϕ = ∫ ϕ(λ) dE′λ, so that T = U*Mϕ U for a unitary U: H → L²(Ω, μ), where the spectral measures E′: AΩ → B[L²(Ω, μ)] and E: AΩ → B[H] are given by E′(Λ) = MχΛ and E(Λ) = U*E′(Λ)U. Thus T|R(E(Λ)) ≅ Mϕ|R(MχΛ). Indeed, since R(E(Λ)) = U*(R(E′(Λ))), we get

T|R(E(Λ)) = (U*Mϕ U)|U*(R(E′(Λ))) = U*Mϕ|R(E′(Λ)) U = U*Mϕ|R(MχΛ) U.

Let μ|Λ be the restriction of μ to the σ-algebra AΛ = ℘(Λ) ∩ AΩ ⊆ AΩ, so that R(MχΛ) is unitarily equivalent to L²(Λ, μ|Λ); that is,

R(MχΛ) = {f ∈ L²(Ω, μ): f = χΛ g for some g ∈ L²(Ω, μ)} ≅ L²(Λ, μ|Λ).

Consider the following notation: set MϕχΛ = Mϕ|L²(Λ, μ|Λ) in B[L²(Λ, μ|Λ)] (not in B[L²(Ω, μ)]). Since R(E′(Λ)) = R(MχΛ) reduces Mϕ,
Mϕ|R(MχΛ) ≅ MϕχΛ. Hence T|R(E(Λ)) ≅ MϕχΛ by transitivity, and therefore (cf. Proposition 2.B), σ(T|R(E(Λ))) = σ(MϕχΛ). Since Λ ≠ ∅, Proposition 3.B ensures that, for ϕ: Ω → C such that ϕ(λ) = λ,

σ(MϕχΛ) ⊆ ⋂{ϕ(Δ)⁻ ∈ C: Δ ∈ AΛ and μ|Λ(Λ∖Δ) = 0}
        = ⋂{ϕ(Δ ∩ Λ)⁻ ∈ C: Δ ∈ AΩ and μ(Λ∖Δ) = 0} ⊆ ϕ(Λ)⁻.

Finally, take an arbitrary Λ̃ ∈ Aσ(T) and let Ẽ: Aσ(T) → B[H] be the spectral measure defined by Ẽ(Λ̃) = E(∪Λ̃) (cf. part (c) in the proof of Theorem 3.15). Since Λ̃⁻ = ϕ(Λ)⁻ for some Λ ∈ AΩ, and since ∪(ϕ(Λ)⁻) ⊆ Λ⁻, we get

σ(T|R(Ẽ(Λ̃))) ⊆ σ(T|R(Ẽ(ϕ(Λ)⁻))) ⊆ σ(T|R(E(Λ⁻))) ⊆ ϕ(Λ⁻)⁻ = Λ̃⁻.

Again, use the same notation for Ẽ and E, and for Λ̃ and Λ.
The proof of the next theorem follows the same argument used in the proof of Corollary 3.5, now extended to the general case (see [76, Theorem 1.12]).

Theorem 3.17. (Fuglede Theorem). Let T = ∫ λ dEλ be the spectral decomposition of a normal operator in B[H]. If S ∈ B[H] commutes with T, then S commutes with E(Λ) for every Λ ∈ Aσ(T).

Proof. Take any Λ in Aσ(T). Suppose ∅ ≠ Λ ≠ σ(T); otherwise the result is trivially verified with E(Λ) = O or E(Λ) = I. Set Λ′ = σ(T)∖Λ in Aσ(T). Let Δ and Δ′ be arbitrary nonempty measurable closed subsets of Λ and Λ′. That is, ∅ ≠ Δ ⊆ Λ and ∅ ≠ Δ′ ⊆ Λ′ lie in Aσ(T) and are closed in C; and so they are closed relative to the compact set σ(T). Lemma 3.16(a) ensures that

E(Λ)T|R(E(Λ)) = TE(Λ)|R(E(Λ)) = T|R(E(Λ)): R(E(Λ)) → R(E(Λ))

for every Λ ∈ Aσ(T). In particular, this holds for Δ and Δ′. Take any operator S on H such that ST = TS. Thus, on R(E(Δ)),

(∗)  (E(Δ′)SE(Δ))T|R(E(Δ)) = (E(Δ′)SE(Δ))(TE(Δ)) = E(Δ′)STE(Δ)
     = E(Δ′)TSE(Δ) = (TE(Δ′))(E(Δ′)SE(Δ)) = T|R(E(Δ′))(E(Δ′)SE(Δ)).

Since the sets Δ and Δ′ are closed and nonempty, Lemma 3.16(b) says that

σ(T|R(E(Δ))) ⊆ Δ   and   σ(T|R(E(Δ′))) ⊆ Δ′,

and hence, as Δ ⊆ Λ and Δ′ ⊆ Λ′ = σ(T)∖Λ,

(∗∗)  σ(T|R(E(Δ))) ∩ σ(T|R(E(Δ′))) = ∅.
Therefore, according to Proposition 2.T, (∗) and (∗∗) imply that E(Δ′)SE(Δ) = O. This identity holds for all Δ ∈ CΛ and all Δ′ ∈ CΛ′, where CΛ = {Δ ∈ Aσ(T): Δ is a closed subset of Λ}. Now Proposition 3.C says that, in the inclusion ordering,

R(E(Λ)) = sup_{Δ∈CΛ} R(E(Δ))   and   R(E(Λ′)) = sup_{Δ′∈CΛ′} R(E(Δ′)),

and so, in the extension ordering (see, e.g., [66, Problem 1.17]),

E(Λ) = sup_{Δ∈CΛ} E(Δ)   and   E(Λ′) = sup_{Δ′∈CΛ′} E(Δ′).

Then

E(Λ′)SE(Λ) = O.

Since E(Λ) + E(Λ′) = E(Λ ∪ Λ′) = I, this leads to

SE(Λ) = E(Λ ∪ Λ′)SE(Λ) = (E(Λ) + E(Λ′))SE(Λ) = E(Λ)SE(Λ).

Thus R(E(Λ)) is S-invariant (cf. Proposition 1.I). Since this holds for every Λ in Aσ(T), it holds for Λ′ = σ(T)∖Λ, so that R(E(Λ′)) is S-invariant. But R(E(Λ)) ⊥ R(E(Λ′)) (i.e., E(Λ)E(Λ′) = O) and H = R(E(Λ)) + R(E(Λ′)) (since H = R(E(Λ ∪ Λ′)) = R(E(Λ) + E(Λ′)) ⊆ R(E(Λ)) + R(E(Λ′)) ⊆ H). Hence R(E(Λ′)) = R(E(Λ))⊥, and so R(E(Λ))⊥ also is S-invariant. Then R(E(Λ)) reduces S, which, by Proposition 1.I, is equivalent to SE(Λ) = E(Λ)S.
Invariant subspaces were discussed in Section 1.1 (a subspace M is T-invariant if T(M) ⊆ M). Recall that a subspace M of a normed space X is nontrivial if {0} ≠ M ≠ X. Reducing subspaces (i.e., subspaces M such that both M and M⊥ are T-invariant or, equivalently, subspaces M that are invariant for both T and T*), and reducible operators (i.e., operators that have a nontrivial reducing subspace), were discussed in Section 1.5. Hyperinvariant subspaces were defined in Section 1.9 (subspaces that are invariant for every operator that commutes with a given operator). Recall that an operator is nonscalar if it is not a (complex) multiple of the identity — otherwise it is called a scalar operator (Section 1.7). Scalar operators are clearly normal. A projection E is nontrivial if O ≠ E ≠ I. Equivalently, a projection is nontrivial if and only if it is nonscalar. (Indeed, if E² = E, then it follows at once that O ≠ E ≠ I if and only if E ≠ αI for every α ∈ C.)

Corollary 3.18. Consider operators acting on a complex Hilbert space of dimension greater than 1.
(a) Every normal operator has a nontrivial reducing subspace.

(b) Every nonscalar normal operator has a nontrivial hyperinvariant subspace which reduces every operator that commutes with it.

(c) An operator is reducible if and only if it commutes with a nonscalar normal operator.

(d) An operator is reducible if and only if it commutes with a nontrivial orthogonal projection.

(e) An operator is reducible if and only if it commutes with a nonscalar operator that also commutes with its adjoint.

Proof. Consider the spectral decomposition T = ∫ λ dEλ of a normal operator T as in Theorem 3.15. By Theorem 3.17, if ST = TS, then SE(Λ) = E(Λ)S or, equivalently, each subspace R(E(Λ)) reduces S, which means that {R(E(Λ))} is a family of reducing subspaces for every operator that commutes with T. If σ(T) has a single point, say σ(T) = {λ}, then T = λI (by uniqueness of the spectral measure); that is, T is a scalar operator so that every subspace of H reduces T. If dim H > 1, then in this case T has plenty of nontrivial reducing subspaces. Hence, if T is nonscalar, then σ(T) has more than one point (and dim H > 1). If λ, μ lie in σ(T) and λ ≠ μ, then let

Dλ = B_{½|λ−μ|}(λ) = {ν ∈ C: |ν − λ| < ½|λ − μ|}

be the open disk of radius ½|λ − μ| centered at λ. Set

Λλ = σ(T) ∩ Dλ   and   Λ′λ = σ(T)∖Dλ,

so that σ(T) is the disjoint union of Λλ and Λ′λ. Observe that both Λλ and Λ′λ lie in Aσ(T). Since Λλ and σ(T)∖Dλ⁻ are nonempty relatively open subsets of σ(T), and σ(T)∖Dλ⁻ ⊆ Λ′λ, it follows by Theorem 3.15 that E(Λλ) ≠ O and E(Λ′λ) ≠ O. Then I = E(σ(T)) = E(Λλ ∪ Λ′λ) = E(Λλ) + E(Λ′λ), and so E(Λλ) = I − E(Λ′λ) ≠ I. Therefore, O ≠ E(Λλ) ≠ I, which means that R(E(Λλ)) is nontrivial; that is, {0} ≠ R(E(Λλ)) ≠ H. This proves (a) (for the particular case of S = T), and also proves (b) — the nontrivial R(E(Λλ)) reduces every S that commutes with T. Thus (b) ensures that if T commutes with a nonscalar normal, then it is reducible. The converse follows by Proposition 1.I, since every nontrivial orthogonal projection is a nonscalar normal. This proves (c). Since a projection E is nontrivial if and only if {0} ≠ R(E) ≠ H, Proposition 1.I ensures that T is reducible if and only if it commutes with a nontrivial orthogonal projection, which proves
(d). Since every nontrivial orthogonal projection is self-adjoint and nonscalar, it follows by (d) that if T is reducible then it commutes with a nonscalar operator that also commutes with its adjoint. Conversely, suppose T is such that LT = TL and LT* = T*L (and so L*T = TL*) for some nonscalar operator L. Then (L + L*)T = T(L + L*), where L + L* is self-adjoint (thus normal). If this self-adjoint L + L* is not scalar, then T commutes with the nonscalar normal L + L*. On the other hand, if this self-adjoint L + L* is scalar, then the nonscalar L must be normal. Indeed, if L + L* = αI, then L*L = LL* = αL − L², which shows that L is normal. Again, in this case, T commutes with the nonscalar normal L. Thus, in any case, T commutes with a nonscalar normal operator so that T is reducible according to (c); and this completes the proof of (e).

All the equivalent assertions in Corollary 3.5, which hold for compact normal operators, remain equivalent in the general case (for plain normal, not necessarily compact operators). Again, the implication (a) ⇒ (c) in Corollary 3.19 below is the Fuglede Theorem (Theorem 3.17).

Corollary 3.19. Let T = ∫ λ dEλ be a normal operator in B[H] and take any operator S in B[H]. The following assertions are pairwise equivalent.

(a) S commutes with T.
(b) S commutes with T*.
(c) S commutes with every E(Λ).
(d) R(E(Λ)) reduces S for every Λ.

Proof. Let T = ∫ λ dEλ be the spectral decomposition of a normal operator in B[H]. Take any Λ ∈ Aσ(T) and an arbitrary S ∈ B[H]. First we show that
⇐⇒
SE(Λ) = E(Λ)S
⇐⇒
S T ∗ = T ∗S.
If S T = T S, then SE(Λ) = E(Λ)S by Theorem 3.17 so that πx,S ∗ y (Λ) = E(Λ)x ; S ∗ y = SE(Λ)x ; y = E(Λ)Sx ; y = πSx,y (Λ), and hence % ST ∗ x ; y = T ∗ x ; S ∗ y = λ dEλ x ; S ∗ y % % = λ dSEλ x ; y = λ dEλ Sx ; y = T ∗Sx ; y, for every x, y ∈ H (cf. Lemmas 3.7 and 3.8(a)). Thus ST = TS
=⇒
SE(Λ) = E(Λ)S
=⇒
S T ∗ = T ∗S
⇐⇒
S ∗ T = T S ∗.
Since these hold for every S ∈ B[H], and since S ∗∗ = S, it follows that S∗T = T S∗
=⇒
S T = T S.
This shows that (a), (b), and (c) are equivalent, and (d) is equivalent to (c) by Proposition 1.I (as in the final part of the proof of Theorem 3.17).

Corollary 3.20. (Fuglede–Putnam Theorem). Suppose T1 in B[H] and T2 in B[K] are normal operators. If X ∈ B[H, K] intertwines T1 to T2, then X intertwines T1* to T2* (i.e., if XT1 = T2X, then XT1* = T2*X).

Proof. Take T1 ∈ B[H], T2 ∈ B[K], and X ∈ B[H, K]. Consider the operators

T = T1 ⊕ T2 = ( T1  O )      and      S = ( O  O )
              ( O   T2)                   ( X  O )

in B[H ⊕ K]. If T1 and T2 are normal, then T is normal. If XT1 = T2X, then ST = TS, and so ST* = T*S by Corollary 3.19. Hence XT1* = T2*X.
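A toy numerical instance of Corollary 3.20, together with the 2×2 block trick used in its proof (the matrices below are illustrative choices, not a proof):

```python
import numpy as np

# T1, T2 normal (here diagonal) with swapped eigenvalues; X swaps the
# corresponding eigenspaces, so X intertwines T1 to T2.
T1 = np.diag([1 + 1j, 2 + 0j])
T2 = np.diag([2 + 0j, 1 + 1j])
X = np.array([[0, 1], [1, 0]], dtype=complex)

assert np.allclose(X @ T1, T2 @ X)                       # hypothesis: X T1 = T2 X
assert np.allclose(X @ T1.conj().T, T2.conj().T @ X)     # conclusion: X T1* = T2* X

# The proof's block trick: S commutes with the normal T = T1 (+) T2
# exactly because X T1 = T2 X.
Z = np.zeros((2, 2), dtype=complex)
T = np.block([[T1, Z], [Z, T2]])
S = np.block([[Z, Z], [X, Z]])
assert np.allclose(S @ T, T @ S)
```

Applying Corollary 3.19 to this T and S yields ST* = T*S, whose lower-left block is precisely XT1* = T2*X.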
3.6 Additional Propositions

Let Γ be an arbitrary (not necessarily countable) nonempty index set. We have defined a diagonalizable operator in Section 3.2 as follows.

(1) An operator T on a Hilbert space H is diagonalizable if there is a resolution of the identity {Eγ}γ∈Γ on H and a bounded family of scalars {λγ}γ∈Γ such that Tu = λγu whenever u ∈ R(Eγ).

As we saw in Section 3.2, this can be equivalently stated as follows.

(1′) An operator T on a Hilbert space H is diagonalizable if it is a weighted sum of projections for some bounded family of scalars {λγ}γ∈Γ and some resolution of the identity {Eγ}γ∈Γ on H.

Thus every diagonalizable operator is normal by Corollary 1.9, and what the Spectral Theorem for compact normal operators (Theorem 3.3) says is precisely that a compact operator is normal if and only if it is diagonalizable — cf. Corollary 3.5(a) — and so (being normal) a diagonalizable operator is such that r(T) = ‖T‖ = sup_{γ∈Γ}|λγ| (cf. Lemma 3.1). However, a diagonalizable operator on a separable Hilbert space was defined in Section 2.7 (Proposition 2.L), whose natural extension to an arbitrary Hilbert space (not necessarily separable), in terms of summable families (see Section 1.3) rather than summable sequences, reads as follows.

(2) An operator T on a Hilbert space H is diagonalizable if there is an orthonormal basis {eγ}γ∈Γ for H and a bounded family {λγ}γ∈Γ of scalars such that Tx = Σ_{γ∈Γ} λγ⟨x ; eγ⟩eγ for every x ∈ H.

There is no ambiguity here; both definitions coincide: the resolution of the identity {Eγ}γ∈Γ behind the statement in (2) is given by Eγx = ⟨x ; eγ⟩eγ for every x ∈ H and every γ ∈ Γ, according to the Fourier series expansion x = Σ_{γ∈Γ} ⟨x ; eγ⟩eγ of each x ∈ H in terms of the orthonormal basis {eγ}γ∈Γ. Now
consider the Banach space ℓ∞(Γ) made up of all bounded families a = {αγ}γ∈Γ of complex numbers, and also the Hilbert space ℓ²(Γ) consisting of all square-summable families z = {ζγ}γ∈Γ of complex numbers indexed by Γ. The definition of a diagonal operator in B[ℓ²(Γ)] goes as follows.

(3) A diagonal operator in B[ℓ²(Γ)] is a mapping D: ℓ²(Γ) → ℓ²(Γ) such that Dz = {αγ ζγ}γ∈Γ for every z = {ζγ}γ∈Γ ∈ ℓ²(Γ), where a = {αγ}γ∈Γ ∈ ℓ∞(Γ) is a (bounded) family of scalars. (Notation: D = diag{αγ}γ∈Γ.)

A diagonal operator D is clearly diagonalizable by the resolution of the identity Eγ z = ⟨z ; fγ⟩ fγ = ζγ fγ given by the canonical orthonormal basis {fγ}γ∈Γ for ℓ²(Γ), viz., fγ = {δγ,β}β∈Γ with δγ,β = 1 if β = γ and δγ,β = 0 if β ≠ γ. So D is normal with r(D) = ‖D‖ = ‖a‖∞ = sup_{γ∈Γ} |αγ|. Consider the Fourier series expansion of an arbitrary x in H in terms of any orthonormal basis {eγ}γ∈Γ for H, namely, x = Σ_{γ∈Γ} ⟨x ; eγ⟩ eγ, so that we can identify x in H with the square-summable family {⟨x ; eγ⟩}γ∈Γ in ℓ²(Γ) (i.e., the family {⟨x ; eγ⟩}γ∈Γ is the image of x under a unitary transformation between the unitarily equivalent spaces H and ℓ²(Γ)). Thus, if T x = Σ_{γ∈Γ} λγ ⟨x ; eγ⟩ eγ, then we can identify T x with the square-summable family {λγ ⟨x ; eγ⟩}γ∈Γ in ℓ²(Γ), and so we can identify T with a diagonal operator D that takes z = {⟨x ; eγ⟩}γ∈Γ into Dz = {λγ ⟨x ; eγ⟩}γ∈Γ. This means that T and D are unitarily equivalent: there is a unitary transformation U ∈ B[H, ℓ²(Γ)] such that U T = D U. It is in this sense that T is said to be a diagonal operator with respect to the orthonormal basis {eγ}γ∈Γ. Thus definition (2) can be rephrased as follows.

(2′) An operator T on a Hilbert space H is diagonalizable if it is a diagonal operator with respect to some orthonormal basis for H.

Proposition 3.A. Let T be an operator on a nonzero complex Hilbert space. The following assertions are equivalent.
(a) T is diagonalizable in the sense of definition (1) above.
(b) H has an orthonormal basis made up of eigenvectors of T.
(c) T is diagonalizable in the sense of definition (2) above.
(d) T is unitarily equivalent to a diagonal operator.

Take a measure space (Ω, AΩ, μ), where AΩ is a σ-algebra of subsets of a nonempty set Ω and μ is a positive measure on AΩ. Let L∞(Ω, C, μ) — short notation: L∞(Ω, μ), L∞(μ), or just L∞ — denote the Banach space of all essentially bounded AΩ-measurable complex-valued functions f: Ω → C on Ω equipped with the usual sup-norm (i.e., ‖f‖∞ = ess sup |f| for f ∈ L∞). Let L2(Ω, C, μ) — short notation: L2(Ω, μ), L2(μ), or just L2 — denote the Hilbert space of all square-integrable AΩ-measurable complex-valued functions f: Ω → C on Ω equipped with the usual L2-norm (i.e., ‖f‖2 = (∫ |f|^2 dμ)^{1/2} for f ∈ L2). Multiplication operators on L2 will be considered next (see, e.g.,
[50, Chapter 7]). The first version of the Spectral Theorem (Theorem 3.11) says that multiplication operators are prototypes of normal operators. Proposition 3.B. Let φ: Ω → C and f : Ω → C be complex-valued functions on a nonempty set Ω. Let μ be a positive measure on a σ-algebra AΩ of subsets of Ω. Suppose φ ∈ L∞ . If f ∈ L2 , then φf ∈ L2 . Thus consider the mapping Mφ : L2 → L2 defined by Mφ f = φf
for every f ∈ L2.
That is, (Mφ f)(λ) = φ(λ)f(λ) for λ ∈ Ω. This is called the multiplication operator on L2, which is linear and bounded (i.e., Mφ ∈ B[L2]). Moreover:

(a) Mφ∗ = Mφ̄, the operator of multiplication by the complex conjugate φ̄ of φ (i.e., (Mφ∗ f)(λ) = φ̄(λ) f(λ) for λ ∈ Ω).
(b) Mφ is a normal operator.
(c) ‖Mφ‖ ≤ ‖φ‖∞ = ess sup |φ| = inf_{Λ∈NΩ} sup_{λ∈Ω∖Λ} |φ(λ)|, where NΩ denotes the collection of all Λ ∈ AΩ such that μ(Λ) = 0.
(d) σ(Mφ) ⊆ ess R(φ) = essential range of φ
    = {α ∈ C : μ({λ ∈ Ω : |φ(λ) − α| < ε}) > 0 for every ε > 0}
    = {α ∈ C : μ(φ⁻¹(Vα)) > 0 for every neighborhood Vα of α}
    = ⋂ {φ(Λ)⁻ ⊆ C : Λ ∈ AΩ and μ(Ω∖Λ) = 0} ⊆ φ(Ω)⁻ = R(φ)⁻.
If, in addition, μ is σ-finite, then
(e) ‖Mφ‖ = ‖φ‖∞,
(f) σ(Mφ) = ess R(φ).

Particular case: Suppose Ω is a nonempty bounded subset of C and suppose ϕ: Ω → Ω ⊆ C is the identity function on Ω (i.e., ϕ(λ) = λ for every λ ∈ Ω, which is bounded). Then ess R(ϕ) = ⋂ {Λ⁻ ⊆ C : Λ ∈ AΩ and μ(Ω∖Λ) = 0} ⊆ Ω⁻, and so ess R(ϕ) ∩ Ω is the smallest relatively closed subset of Ω such that μ(Ω∖(ess R(ϕ) ∩ Ω)) = 0. Equivalently, Ω∖(ess R(ϕ) ∩ Ω) is the largest relatively open subset of Ω of measure zero. Thus ess R(ϕ) ∩ Ω is the (bounded and relatively closed) support of μ. Summing up, if Ω is a nonempty bounded subset of C, and ϕ: Ω → Ω is the identity function on Ω, then support(μ) = ess R(ϕ) ∩ Ω. If, in addition, Ω is closed in C, then support(μ) is compact. Therefore, if Λ in AΩ is nonempty, relatively open, and included in the support of μ, then μ(Λ) > 0. (Indeed, if ∅ ≠ Λ ⊆ ess R(ϕ) ∩ Ω is relatively open and μ(Λ) = 0, then [Ω∖(ess R(ϕ) ∩ Ω)] ∪ Λ is a relatively open subset of Ω larger than Ω∖(ess R(ϕ) ∩ Ω) and of measure zero; which is a contradiction.)

If Ω = N, the set of all positive integers, AΩ = ℘(N), the power set of N, and μ is the counting measure (assigning to each set the number of elements
in it, which is σ-finite), then a multiplication operator reduces to a diagonal operator on L2(N, C, μ) = ℓ²(N) = ℓ²₊, where the bounded ℘(N)-measurable function φ: N → C in L∞(N, C, μ) = ℓ∞(N) = ℓ∞₊ is the bounded sequence {αk} with αk = φ(k) for each k ∈ N, so that ‖Mφ‖ = ‖φ‖∞ = sup_k |αk| and σ(Mφ) = R(φ)⁻ = {αk}⁻. What the Spectral Theorem says is that a normal operator is unitarily equivalent to a multiplication operator (Theorem 3.11), which turns out to be a diagonal operator in the compact case (Theorem 3.3).

Proposition 3.C. Let Ω be a nonempty compact subset of the complex plane C and let E: AΩ → B[H] be a spectral measure (cf. Definition 3.6). For each Λ ∈ AΩ set CΛ = {Δ ∈ AΩ : Δ
is a closed subset of Λ}. Then R(E(Λ)) is the smallest subspace of H such that ⋃_{Δ∈CΛ} R(E(Δ)) ⊆ R(E(Λ)).

Proposition 3.D. Let H be a complex Hilbert space and let 𝕋 denote the unit circle about the origin of the complex plane. If T ∈ B[H] is normal, then
(a) T is unitary if and only if σ(T) ⊆ 𝕋,
(b) T is self-adjoint if and only if σ(T) ⊆ R,
(c) T is nonnegative if and only if σ(T) ⊆ [0, ∞),
(d) T is strictly positive if and only if σ(T) ⊆ [α, ∞) for some α > 0,
(e) T is an orthogonal projection if and only if σ(T) ⊆ {0, 1}.

Observe that, if T = [1 0; 0 0] and S = [2 1; 1 1] in B[C²], then S² − T² = [4 3; 3 2]. Thus O ≤ T ≤ S does not imply T² ≤ S². However, the converse holds.

Proposition 3.E. If T, S ∈ B[H] are nonnegative operators, then T ≤ S implies T^{1/2} ≤ S^{1/2}.

Proposition 3.F. If T = ∫ λ dEλ is the spectral decomposition of a nonnegative operator T on a complex Hilbert space, then T^{1/2} = ∫ λ^{1/2} dEλ.

Proposition 3.G. If T = ∫ λ dEλ is the spectral decomposition of a normal operator T ∈ B[H], and if λ0 is an isolated point of σ(T), then {0} ≠ R(E({λ0})) ⊆ N(λ0 I − T). Every isolated point of the spectrum of a normal operator is an eigenvalue.

Recall: X ∈ B[H, K] intertwines T ∈ B[H] to S ∈ B[K] if X T = SX. If an invertible X intertwines T to S, then T and S are similar. If, in addition, X is unitary, then T and S are unitarily equivalent (cf. Section 1.9).

Proposition 3.H. Let T1 ∈ B[H] and T2 ∈ B[K] be normal operators. If X ∈ B[H, K] intertwines T1 to T2 (i.e., X T1 = T2 X), then
(a) N(X) reduces T1 and R(X)⁻ reduces T2, so that T1|N(X)⊥ ∈ B[N(X)⊥] and T2|R(X)⁻ ∈ B[R(X)⁻]. Moreover,
(b) T1|N(X)⊥ and T2|R(X)⁻ are unitarily equivalent.

A special case of Proposition 3.H says that if a quasiinvertible (i.e., injective with dense range) bounded linear transformation intertwines two normal operators, then these normal operators are unitarily equivalent. This happens, in particular, when X is invertible (i.e., if X is in G[H, K]).

Proposition 3.I. Two similar normal operators are unitarily equivalent.

A subset of a metric space is nowhere dense if its closure has empty interior. Since the spectrum of T ∈ B[H] is closed, σ(T) is nowhere dense if and only if it has an empty interior; that is, σ(T)° = ∅. The full spectrum σ(T)# was defined in Section 2.7 (see Proposition 2.S). Note that the condition σ(T) = σ(T)# is equivalent to saying that C∖σ(T) is connected (i.e., that ρ(T) is connected). An operator T ∈ B[H] is reductive if all its invariant subspaces are reducing. Thus, by Proposition 1.Q, a normal operator is reductive if and only if its restriction to every invariant subspace is normal. Normal reductive operators are also called completely normal.

Proposition 3.J. Suppose T ∈ B[H] is a normal operator. If σ(T) = σ(T)# and σ(T)° = ∅, then T is reductive. If T is reductive, then σ(T)° = ∅.

There are normal reductive operators D for which σ(D) ≠ σ(D)#. For instance, consider the diagonal D = diag{λk}k≥0 in B[ℓ²₊] with λk = e^{2πiα_k}, where {αk}k≥0 is a distinct enumeration of all rationals in [0, 1). Being a diagonal, D is normal (unitary, actually) and σ(D) = 𝕋, the unit circle (since {αk}k≥0 is dense in [0, 1]). Thus 𝕋 = σ(D) ⊂ σ(D)# = 𝔻⁻, the closed unit disc. However, D is reductive (see, e.g., [35, Example 13.5]). On the other hand, there are normal operators S with σ(S)° = ∅ that are not reductive. Indeed, every bilateral shift S has an invariant subspace M such that the restriction S₊ = S|M is a unilateral shift (see, e.g., [62, Proposition 2.13]). Since S is unitary (thus normal) and S₊ is not normal, it follows by Proposition 1.Q that M does not reduce S. Thus S is not reductive, although σ(S) = 𝕋, and so σ(S)° = ∅.

Proposition 3.K. Let T be a diagonalizable operator on a nonzero complex Hilbert space. The following assertions are equivalent.
(a) Every nontrivial invariant subspace for T contains an eigenvector.
(b) Every nontrivial invariant subspace M for T is spanned by the eigenvectors of T|M (i.e., it is such that M = ⋁_{λ∈σP(T|M)} N(λI − T|M)).
(c) T is reductive.

There are diagonalizable operators that are not reductive. In fact, since C is a separable metric space, it includes a countable dense subset, and so
does every compact subset of C. Let Λ be any countable dense subset of an arbitrary nonempty compact subset Ω of C. Let {λk} be an enumeration of Λ so that sup_k |λk| < ∞ (since Ω is bounded). Consider the diagonal operator D = diag{λk} in B[ℓ²₊]. As we saw in Proposition 2.L,

    σ(D) = Λ⁻ = Ω.

Therefore, every closed and bounded subset of the complex plane is the spectrum of some diagonal operator on ℓ²₊. Hence, if Ω° ≠ ∅, then Proposition 3.J ensures that the diagonal D is not reductive.

Notes: A discussion on diagonalizable operators precedes the characterization in Proposition 3.A (see, e.g., [66, Problem 6.25]). Diagonal operators are extended to multiplication operators, whose properties needed in the text are summarized in Proposition 3.B (see, e.g., [27, IX.2.6], [50, Chapter 7], and [76, p. 13]). For the technical result in Proposition 3.C, which is required in the proof of the Fuglede Theorem (Theorem 3.17), see [76, Proposition 1.4]. The remaining propositions depend on the Spectral Theorem (Theorem 3.15). Proposition 3.D considers the converse of Proposition 2.A (see, e.g., [66, Problem 6.26]). For Proposition 3.E see, for instance, [63, Problem 8.12]. Proposition 3.F is a consequence of Lemma 3.10 and Proposition 1.M (see, e.g., [66, Problem 6.45]). Proposition 3.G is an important classical result that shows how the spectral decomposition of a normal operator behaves similarly to the compact case of Theorem 3.3 for isolated points of the spectrum (compare with the claim in the proof of Lemma 3.2) — see, e.g., [66, Problem 6.28]. Propositions 3.H and 3.I follow from Corollary 3.20 (see, e.g., [27, IX.6.10 and IX.6.11]). Reductive normal operators are considered in Propositions 3.J and 3.K (see [35, Theorems 13.2 and 13.32] and [76, Theorems 1.23 and 1.25]).
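In finite truncations the diagonal picture is transparent: the spectrum of a diagonal matrix is exactly the set of its diagonal entries, and for such a (normal) operator r(D) = ‖D‖ = sup_k |λk|. A NumPy sketch (a finite sample of unimodular points e^{2πiα} with rational α, chosen by me; no finite truncation can, of course, exhibit the closure Λ⁻ of an infinite dense set):

```python
import numpy as np

# Diagonal entries: a few points of the form e^{2*pi*i*alpha} with rational alpha.
alphas = np.array([0.0, 1/2, 1/3, 2/3, 1/4, 3/4])
lams = np.exp(2j * np.pi * alphas)
D = np.diag(lams)

# Spectrum of a diagonal matrix = its diagonal entries.
eig = np.linalg.eigvals(D)
assert np.allclose(np.sort_complex(eig), np.sort_complex(lams))

# For a normal operator, spectral radius = operator norm (= sup |lambda_k| here).
r = max(abs(eig))
op_norm = np.linalg.norm(D, 2)
assert np.isclose(r, op_norm) and np.isclose(op_norm, abs(lams).max())
```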
Suggested Reading
Arveson [6]
Bachman and Narici [8]
Berberian [14]
Conway [27, 29]
Douglas [34]
Dowson [35]
Dunford and Schwartz [40]
Halmos [47, 50]
Helmberg [54]
Kubrusly [66]
Radjavi and Rosenthal [76]
Rudin [80]
4 Functional Calculus
Fix an operator T in the operator algebra B[H]. Suppose an operator ψ(T ) in B[H] can be associated to each function ψ: Ω → C on a nonempty set Ω, in a suitable algebra of functions F (Ω). The central theme of this chapter investigates the mapping ΦT : F (Ω) → B[H] defined by ΦT (ψ) = ψ(T ) for every ψ in F (Ω); that is, the mapping ψ → ψ(T ) that takes each function ψ in a function algebra F (Ω) to the operator ψ(T ) in the operator algebra B[H]. If this mapping ΦT is linear and also preserves product (i.e., if it is an algebra homomorphism), then it is referred to as a functional calculus for T .
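For matrices the simplest instance of such a mapping, ψ ↦ ψ(T) with ψ a polynomial, can be written out directly. A minimal NumPy sketch (the function name `phi` and the sample data are mine, not the book's) that also checks the homomorphism property Φ_T(pq) = Φ_T(p)Φ_T(q):

```python
import numpy as np

def phi(coeffs, T):
    """p(T) = sum_i coeffs[i] * T^i for p(lambda) = sum_i coeffs[i] * lambda^i."""
    result = np.zeros_like(T, dtype=complex)
    power = np.eye(T.shape[0], dtype=complex)  # T^0 = I
    for a in coeffs:
        result += a * power
        power = power @ T
    return result

T = np.array([[1.0, 2.0], [0.0, 3.0]])
p = [1.0, 2.0]        # p(lambda) = 1 + 2*lambda
q = [0.0, 0.0, 1.0]   # q(lambda) = lambda^2
pq = np.polynomial.polynomial.polymul(p, q)  # coefficients of (pq)(lambda)

# Phi_T preserves products: (pq)(T) = p(T) q(T).
assert np.allclose(phi(pq, T), phi(p, T) @ phi(q, T))
# Phi_T is unital: the constant polynomial 1 goes to the identity.
assert np.allclose(phi([1.0], T), np.eye(2))
```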
4.1 Rudiments of C*-Algebra

Let A and B be algebras over the same scalar field F. If F = C, then these are referred to as complex algebras. A linear transformation Φ: A → B (of the linear space A into the linear space B) that preserves products (i.e., such that Φ(xy) = Φ(x)Φ(y) for every x, y in A) is a homomorphism (or an algebra homomorphism) of A into B. A unital algebra (or an algebra with identity) is an algebra with an identity element (i.e., with a neutral element under multiplication). An element x in a unital algebra A is invertible if there is an x⁻¹ ∈ A such that x⁻¹x = xx⁻¹ = 1, where 1 denotes the identity in A. A unital homomorphism between unital algebras is one that takes the identity of A to the identity of B. If Φ is an isomorphism (of the linear space A onto the linear space B) and also a homomorphism (of the algebra A onto the algebra B), then it is an algebra isomorphism of A onto B. In this case A and B are said to be isomorphic algebras. A normed algebra is an algebra A that is also a normed space whose norm satisfies the operator norm property, viz., ‖xy‖ ≤ ‖x‖ ‖y‖ for every x, y in A. The identity element of a unital normed algebra has norm 1. A Banach algebra is a complete normed algebra. The spectrum σ(x) of an element x ∈ A in a unital complex Banach algebra A is the complement of the set ρ(x) = {λ ∈ C : λ1 − x has an inverse in A}, and its spectral radius is the number r(x) = sup_{λ∈σ(x)} |λ|. An involution ∗: A → A on an algebra A is a map x → x∗ such that (x∗)∗ = x, (xy)∗ = y∗x∗, and (αx + βy)∗ = ᾱx∗ + β̄y∗ for all scalars α, β and every x, y in A. A ∗-algebra
(or an involutive algebra) is an algebra equipped with an involution. If A and B are ∗-algebras, then a ∗-homomorphism (a ∗-isomorphism) between them is an algebra homomorphism Φ: A → B (an algebra isomorphism) that preserves the involution (i.e., Φ(x∗) = Φ(x)∗ in B for every x in A — we are using the same notation for the involutions on A and B). An element x in a ∗-algebra is called Hermitian if x∗ = x and normal if x∗x = xx∗. An element x in a unital ∗-algebra is unitary if x∗x = xx∗ = 1. A C*-algebra is a Banach ∗-algebra A such that ‖x∗x‖ = ‖x‖² for every x in A. In a ∗-algebra, (x∗)∗ is usually denoted by x∗∗. The origin 0 in a ∗-algebra and the identity 1 in a unital ∗-algebra are Hermitian. Indeed, since (αx)∗ = ᾱx∗ for every x and every scalar α, we get 0∗ = 0 by setting α = 0. Since 1∗x = (1∗x)∗∗ = (x∗1)∗ = (1x∗)∗ = x1∗ and (x∗1)∗ = x∗∗ = x, it follows that 1∗x = x1∗ = x, and so (by uniqueness of the identity) 1∗ = 1. In a unital C*-algebra the expression ‖1‖ = 1 is a theorem (rather than an axiom), since ‖1‖² = ‖1∗1‖ = ‖1²‖ = ‖1‖, so that ‖1‖ = 1 because 1 ≠ 0. Elementary properties of Banach algebras (in particular, C*-algebras) and of algebra homomorphisms that will be needed in the sequel are brought together in the following lemma.

Lemma 4.1. Let A and B be algebras over the same scalar field and let Φ: A → B be a homomorphism. Take an arbitrary x ∈ A.
(a) If A is a ∗-algebra, then x = a + i b where a∗ = a and b∗ = b in A.
(b) If A, B, and Φ are unital and x is invertible, then Φ(x)⁻¹ = Φ(x⁻¹).
(c) If A is unital and normed, and x is invertible, then ‖x‖⁻¹ ≤ ‖x⁻¹‖.
(d) If A is a unital ∗-algebra and x is invertible, then (x⁻¹)∗ = (x∗)⁻¹.
(e) If A is a C*-algebra and x∗x = 1, then ‖x‖ = 1.
(f) If A is a C*-algebra, then ‖x∗‖ = ‖x‖.
Suppose A and B are unital complex Banach algebras.
(g) If Φ is a unital homomorphism, then σ(Φ(x)) ⊆ σ(x).
(h) If Φ is an injective unital homomorphism, then σ(Φ(x)) = σ(x).
(i) r(x) = lim_n ‖x^n‖^{1/n}.
(j) If A is a unital complex C*-algebra and x∗ = x, then r(x) = ‖x‖.
From now on assume that A and B are unital complex C*-algebras.
(k) If Φ is a unital ∗-homomorphism, then ‖Φ(x)‖ ≤ ‖x‖.
(ℓ) If Φ is an injective unital ∗-homomorphism, then ‖Φ(x)‖ = ‖x‖.

Proof. Let Φ: A → B be a homomorphism and take an arbitrary x ∈ A.
(a) This is the Cartesian decomposition (as in Proposition 1.O). If A is a ∗-algebra, then set a = ½(x∗ + x) ∈ A and b = (i/2)(x∗ − x) ∈ A, so that a∗ = ½(x + x∗) = a, b∗ = (−i/2)(x − x∗) = b, and a + i b = ½(x∗ + x − x∗ + x) = x.

(b) If A and B are unital algebras, Φ a unital homomorphism, and x invertible, then 1 = Φ(1) = Φ(x⁻¹x) = Φ(xx⁻¹) = Φ(x⁻¹)Φ(x) = Φ(x)Φ(x⁻¹).

(c) If A is a unital normed algebra, then 1 = ‖1‖ = ‖x⁻¹x‖ ≤ ‖x⁻¹‖ ‖x‖ whenever x is invertible.

(d) If A is a unital ∗-algebra and if x⁻¹x = xx⁻¹ = 1, then x∗(x⁻¹)∗ = (x⁻¹x)∗ = (xx⁻¹)∗ = (x⁻¹)∗x∗ = 1∗ = 1, and so (x∗)⁻¹ = (x⁻¹)∗.

(e) If A is a C*-algebra and if x∗x = 1, then ‖x‖² = ‖x∗x‖ = ‖1‖ = 1.

(f) Let A be a C*-algebra. The result is trivial for x = 0 because 0∗ = 0. Since ‖x‖² = ‖x∗x‖ ≤ ‖x∗‖ ‖x‖, it follows that ‖x‖ ≤ ‖x∗‖ if x ≠ 0. Replacing x with x∗ and recalling that x∗∗ = x we get ‖x∗‖ ≤ ‖x‖.

(g) Let A and B be unital complex Banach algebras, and let Φ be a unital homomorphism. If λ ∈ ρ(x), then λ1 − x is invertible in A. Since Φ is unital, Φ(λ1 − x) = λ1 − Φ(x) is invertible in B by (b). Hence λ ∈ ρ(Φ(x)). Thus ρ(x) ⊆ ρ(Φ(x)). Therefore, C∖ρ(Φ(x)) ⊆ C∖ρ(x).

(h) If Φ: A → B is injective, then it has an inverse Φ⁻¹: R(Φ) → A on its range R(Φ) = Φ(A) ⊆ B which, being the image of a unital algebra under a unital homomorphism, is again a unital algebra. Moreover, Φ⁻¹ itself is a unital homomorphism. Indeed, if y = Φ(u) and z = Φ(v) are arbitrary elements in R(Φ), then Φ⁻¹(yz) = Φ⁻¹(Φ(u)Φ(v)) = Φ⁻¹(Φ(uv)) = uv = Φ⁻¹(y)Φ⁻¹(z), and so Φ⁻¹ is a homomorphism (since inverses of linear transformations are linear) which is trivially unital (since Φ is unital). Now apply (b) so that Φ⁻¹(y) is invertible in A whenever y is invertible in R(Φ) ⊆ B. If λ ∈ ρ(Φ(x)), then y = λ1 − Φ(x) = Φ(λ1 − x) is invertible in R(Φ), and therefore Φ⁻¹(y) = Φ⁻¹(Φ(λ1 − x)) = λ1 − x is invertible in A, which means that λ ∈ ρ(x). Thus ρ(Φ(x)) ⊆ ρ(x). Hence ρ(x) = ρ(Φ(x)) by (g).

(i) The proof of the Gelfand–Beurling formula in the complex Banach algebra B[X] of Theorem 2.10 holds in any unital complex Banach algebra.

(j) If A is a C*-algebra and if x∗ = x, then ‖x²‖ = ‖x∗x‖ = ‖x‖² and so, by induction, ‖x^{2^n}‖ = ‖x‖^{2^n} for every integer n ≥ 1. If A is unital and complex, then r(x) = lim_n ‖x^n‖^{1/n} = lim_n ‖x^{2^n}‖^{1/2^n} = lim_n (‖x‖^{2^n})^{1/2^n} = ‖x‖ by (i).

(k) Let A and B be complex unital C*-algebras and Φ a ∗-homomorphism. Then ‖Φ(x)‖² = ‖Φ(x)∗Φ(x)‖ = ‖Φ(x∗)Φ(x)‖ = ‖Φ(x∗x)‖. Observe that x∗x is Hermitian. Thus Φ(x∗x) is Hermitian since Φ is a ∗-homomorphism. Hence ‖Φ(x∗x)‖ = r(Φ(x∗x)) ≤ r(x∗x) = ‖x∗x‖ = ‖x‖² by (j) and (g).

(ℓ) Consider the setup and the argument of the preceding proof and suppose, in addition, that the unital ∗-homomorphism Φ: A → B is injective. Thus apply (h) instead of (g) to get (ℓ) instead of (k).
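Items (i) and (j), and the C*-identity itself, can be watched numerically in the C*-algebra of complex matrices under the spectral norm. A NumPy sketch (the sample matrices are my own choice):

```python
import numpy as np

# A non-normal matrix: its spectral radius is strictly below its norm.
x = np.array([[0.3, 1.0],
              [0.0, 0.3]])
r = max(abs(np.linalg.eigvals(x)))          # spectral radius r(x) = 0.3
assert r < np.linalg.norm(x, 2)             # r(x) < ||x|| since x is not normal

# C*-identity for the spectral norm: ||x* x|| = ||x||^2.
assert np.isclose(np.linalg.norm(x.conj().T @ x, 2), np.linalg.norm(x, 2) ** 2)

# Gelfand-Beurling (item (i)): ||x^n||^(1/n) tends to r(x).
norms = [np.linalg.norm(np.linalg.matrix_power(x, n), 2) ** (1.0 / n)
         for n in (1, 2, 4, 8, 16, 32, 64)]
assert abs(norms[-1] - r) < 0.1

# Item (j): for Hermitian h the limit is reached at once, r(h) = ||h||.
h = np.array([[2.0, 1.0],
              [1.0, 2.0]])
assert np.isclose(np.linalg.norm(h, 2), max(abs(np.linalg.eigvals(h))))
```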
Remark. Lemma 4.1(k) says that a unital ∗-homomorphism Φ between unital complex C*-algebras is a contraction, and so it is a contractive unital ∗-homomorphism. If it is injective, then Lemma 4.1(ℓ) says that Φ is an isometry, and so an isometric unital ∗-homomorphism. Since an isometry of a Banach space into a normed space has a closed range (see, e.g., [66, Problem 4.41]), it follows that, if a unital ∗-homomorphism Φ between unital complex C*-algebras A and B is injective, then the unital ∗-algebra R(Φ) = Φ(A) ⊆ B, being closed in the Banach space B, is itself a Banach space, and so R(Φ) is a unital complex C*-algebra. Thus the injective unital ∗-homomorphism Φ: A → B is such that Φ: A → R(Φ) is an isometric isomorphism (i.e., an injective and surjective linear isometry), and therefore A and R(Φ) are isometrically isomorphic unital complex C*-algebras. These are the elementary results on C*-algebras that will be needed in this chapter. For a thorough treatment of C*-algebras the reader is referred, for instance, to [5], [30], [43], and [74].

Throughout this chapter H stands for a nonzero complex Hilbert space, so that B[H] is a (unital complex) C*-algebra, where involution is defined as the adjoint operation. Take an arbitrary nonempty set Ω and let F(Ω) ⊆ C^Ω be an algebra of complex-valued functions on Ω, where addition, scalar multiplication, and product are pointwise defined as usual. If F(Ω) is such that ψ∗ ∈ F(Ω) whenever ψ ∈ F(Ω), where ψ∗ is given by ψ∗(λ) = ψ̄(λ) (the complex conjugate of ψ(λ)) for every λ ∈ Ω, then complex conjugation defines an involution on F(Ω). In this case F(Ω) is a commutative ∗-algebra. If, in addition, F(Ω) is endowed with a norm that makes it complete (and so a Banach space), and if ‖ψ∗ψ‖ = ‖ψ‖² (which is the case for the sup-norm ‖·‖∞ since (ψ∗ψ)(λ) = ψ∗(λ)ψ(λ) = ψ̄(λ)ψ(λ) = |ψ(λ)|² for every λ ∈ Ω), then F(Ω) is a (complex) C*-algebra.

Take any operator T in B[H]. The first steps towards a functional calculus for T are given by polynomials of T. To each polynomial p: σ(T) → C with complex coefficients, p(λ) = Σ_{i=0}^{n} α_i λ^i for λ ∈ σ(T), associate the operator

    p(T) = Σ_{i=0}^{n} α_i T^i.
Let P(σ(T)) be the unital algebra of all polynomials (in one variable with complex coefficients) on σ(T). The map ΦT: P(σ(T)) → B[H] given by

    ΦT(p) = p(T)    for each    p ∈ P(σ(T))

is a unital homomorphism from P(σ(T)) to B[H]. In fact, if q(λ) = Σ_{j=0}^{m} β_j λ^j, then (pq)(λ) = Σ_{i=0}^{n} Σ_{j=0}^{m} α_i β_j λ^{i+j}, so that ΦT(pq) = Σ_{i=0}^{n} Σ_{j=0}^{m} α_i β_j T^{i+j} = ΦT(p)ΦT(q), and also ΦT(1) = 1(T) = T^0 = I. This can be extended from finite power series (i.e., from polynomials) to infinite power series: if a sequence {p_n}, with p_n(λ) = Σ_{k=0}^{n} α_k λ^k for each n, converges in some sense on σ(T) to a limit ψ, denoted by ψ(λ) = Σ_{k=0}^{∞} α_k λ^k, then we may set ψ(T) = Σ_{k=0}^{∞} α_k T^k if the series converges in some topology of B[H]. For instance, if α_k = 1/k! for
each k ≥ 0, then {p_n} converges pointwise to the exponential function ψ such that ψ(λ) = e^λ for every λ ∈ σ(T) (on any σ(T) ⊆ C), and so we define

    e^T = ψ(T) = Σ_{k=0}^{∞} (1/k!) T^k

(where the series converges uniformly in B[H]). However, unlike in the preceding paragraph, complex conjugation does not define an involution on P(σ(T)) (and so P(σ(T)) is not a ∗-algebra under it). Indeed, p∗(λ) = p̄(λ̄) is not a polynomial in λ — the continuous function p∗: σ(T) → C given by p∗(λ) = Σ_{i=0}^{n} ᾱ_i λ̄^i is not a polynomial in λ (for σ(T) ⊄ R, even if σ(T)∗ = σ(T)). The extension of the mapping ΦT to some larger algebras of functions is the focus of this chapter.
4.2 Functional Calculus for Normal Operators

Let Aσ(T) be the σ-algebra of Borel subsets of the spectrum σ(T) of an operator T on a Hilbert space H. Let B(σ(T)) be the Banach space of all bounded Aσ(T)-measurable complex-valued functions ψ: σ(T) → C on σ(T) equipped with the sup-norm, and let L∞(σ(T), μ) be the Banach space of all essentially bounded (i.e., μ-essentially bounded) Aσ(T)-measurable complex-valued functions ψ: σ(T) → C on σ(T), also equipped with the sup-norm ‖·‖∞, for any positive measure μ on Aσ(T). In both cases we have unital commutative C*-algebras, where involution is defined as complex conjugation, and the identity element is the characteristic function 1 = χσ(T) of σ(T) (carefully note that this is not the identity function ϕ(λ) = λχσ(T)(λ) on σ(T)). If T is normal, then let E: Aσ(T) → B[H] be the unique spectral measure of its spectral decomposition T = ∫ λ dEλ as in Theorem 3.15.

Theorem 4.2. Let T ∈ B[H] be a normal operator and consider its spectral decomposition T = ∫ λ dEλ. For every function ψ ∈ B(σ(T)) there is a unique operator ψ(T) ∈ B[H] given by

    ψ(T) = ∫ ψ(λ) dEλ,

which is normal. Moreover, the mapping ΦT: B(σ(T)) → B[H] such that ΦT(ψ) = ψ(T) is a unital ∗-homomorphism.

Proof. ψ(T) is unique in B[H] and normal for every ψ ∈ B(σ(T)) by Lemmas 3.7 and 3.9. The mapping ΦT: B(σ(T)) → B[H] is a unital ∗-homomorphism by Lemma 3.8 and by the linearity of the integral. Indeed,

(a) ΦT(α ψ + β φ) = (α ψ + β φ)(T) = ∫ (α ψ(λ) + β φ(λ)) dEλ = α ψ(T) + β φ(T) = α ΦT(ψ) + β ΦT(φ),

(b) ΦT(ψφ) = (ψφ)(T) = ∫ ψ(λ)φ(λ) dEλ = ψ(T)φ(T) = ΦT(ψ)ΦT(φ),
(c) ΦT(1) = 1(T) = ∫ dEλ = I,

(d) ΦT(ψ∗) = ψ∗(T) = ∫ ψ̄(λ) dEλ = ψ(T)∗ = ΦT(ψ)∗,

for every ψ, φ ∈ B(σ(T)) and every α, β ∈ C, where the constant function 1 (i.e., 1(λ) = 1 for every λ ∈ σ(T)) is the identity in B(σ(T)).

Let P∗(σ(T)) be the set of all polynomials p(·, ·): σ(T) × σ(T) → C in λ and λ̄ (or p: σ(T) → C such that λ → p(λ, λ̄)), and let C(σ(T)) be the set of all complex-valued continuous (thus Aσ(T)-measurable) functions on σ(T). So

    P(σ(T)) ⊂ P∗(σ(T)) ⊂ C(σ(T)) ⊂ B(σ(T)),

where the last inclusion follows from the Weierstrass Theorem since σ(T) is compact — cf. proof of Theorem 2.2. Except for P(σ(T)), these are all unital ∗-subalgebras of B(σ(T)), and the restriction of the mapping ΦT to any of them remains a unital ∗-homomorphism into B[H]. Thus Theorem 4.2 holds for all those unital ∗-subalgebras of the unital C*-algebra B(σ(T)). Note that P(σ(T)) is a unital subalgebra of P∗(σ(T)), and the restriction of ΦT to it is a unital homomorphism into B[H], and so Theorem 4.2 also holds for P(σ(T)) by replacing ∗-homomorphism with plain homomorphism. In particular, for p in P(σ(T)), say p(λ) = Σ_{i=0}^{n} α_i λ^i, we get, if T is normal,

    p(T) = ∫ p(λ) dEλ = ∫ Σ_{i=0}^{n} α_i λ^i dEλ = Σ_{i=0}^{n} α_i T^i

and, for p in P∗(σ(T)), say p(λ, λ̄) = Σ_{i,j=0}^{n,m} α_{i,j} λ^i λ̄^j (cf. Lemma 3.10), we get

    p(T, T∗) = ∫ p(λ, λ̄) dEλ = ∫ Σ_{i,j=0}^{n,m} α_{i,j} λ^i λ̄^j dEλ = Σ_{i,j=0}^{n,m} α_{i,j} T^i T^{∗j}.
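In finite dimensions this can be checked directly: for a normal matrix, p(T, T∗) agrees with applying p(λ, λ̄) to the eigenvalues in an eigenbasis. A NumPy sketch (the example matrix and polynomial are my own choice):

```python
import numpy as np

# A normal matrix: identity plus i times a real symmetric matrix.
T = np.eye(2) + 1j * np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(T @ T.conj().T, T.conj().T @ T)  # normality

# p(lambda, lambar) = lambda*lambar + 2*lambda  ->  p(T, T*) = T T* + 2 T.
direct = T @ T.conj().T + 2.0 * T

# Spectral route: diagonalize, apply p to the eigenvalues, transform back.
lam, U = np.linalg.eig(T)
spectral = U @ np.diag(lam * lam.conj() + 2.0 * lam) @ np.linalg.inv(U)

assert np.allclose(direct, spectral)
```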
The quotient space of a set modulo an equivalence relation is a partition of the set. So we may regard B(σ(T )) as consisting of equivalence classes from L∞ (σ(T ), μ) — bounded functions are essentially bounded with respect to any measure μ. On the other hand, shifting from restriction to extension, it is sensible to argue that the preceding theorem might be extended from B(σ(T )) to some L∞ (σ(T ), μ), where bounded would be replaced by essentially bounded. But essentially bounded with respect to which measure μ ? A measure μ on a σ-algebra is absolutely continuous with respect to another measure ν on the same σ-algebra if ν(Λ) = 0 implies μ(Λ) = 0 for every measurable set Λ. Two measures on the same σ-algebra are equivalent if they are absolutely continuous with respect to each other, which means that they have the same sets of measure zero. That is, μ and ν are equivalent if, for every measurable set Λ, ν(Λ) = 0 if and only if μ(Λ) = 0. (Note: if two positive measures are equivalent, it may happen that one is finite but not the other.) Remark. Let E : Aσ(T ) → B[H] be the spectral measure of the spectral decomposition of a normal operator T, and consider the statements of Lemmas 3.7, 3.8, and 3.9 for Ω = σ(T ). Lemma 3.7 remains true if B(σ(T )) is replaced by
L∞(σ(T), μ) for every positive measure μ: Aσ(T) → R that is equivalent to the spectral measure E. The proof is essentially the same as that of Lemma 3.7. In fact, the spectral integrals of functions in L∞(σ(T), μ) coincide whenever they are equal μ-almost everywhere, and so a function in B(σ(T)) has the same integral as any representative from its equivalence class in L∞(σ(T), μ). Moreover, since μ and E have exactly the same sets of measure zero, we get

    ‖φ‖∞ = ess sup |φ| = inf_{Λ∈Nσ(T)} sup_{λ∈σ(T)∖Λ} |φ(λ)|
for every φ ∈ L∞(σ(T), μ), where Nσ(T) denotes the collection of all Λ ∈ Aσ(T) such that μ(Λ) = 0 or, equivalently, such that E(Λ) = O. Thus the sup-norm ‖φ‖∞ of φ: σ(T) → C with respect to μ dominates the sup-norm with respect to πx,x for every x ∈ H. Then the proof of Lemma 3.7 remains unchanged if B(σ(T)) is replaced by L∞(σ(T), μ) for any positive measure μ equivalent to E, since |f(x, x)| ≤ ∫ ‖φ‖∞ dπx,x = ‖φ‖∞ ‖x‖² for every x ∈ H still holds if φ ∈ L∞(σ(T), μ). Hence, for every φ ∈ L∞(σ(T), μ), there is a unique operator F ∈ B[H] such that F = ∫ φ(λ) dEλ as in Lemma 3.7. This ensures that Lemmas 3.8 and 3.9 also hold if B(σ(T)) is replaced by L∞(σ(T), μ).

Theorem 4.3. Let T ∈ B[H] be a normal operator and consider its spectral decomposition T = ∫ λ dEλ. If μ is a positive measure on Aσ(T) equivalent to the spectral measure E: Aσ(T) → B[H], then for every ψ ∈ L∞(σ(T), μ) there is a unique operator ψ(T) ∈ B[H] given by

    ψ(T) = ∫ ψ(λ) dEλ,

which is normal. Moreover, the mapping ΦT: L∞(σ(T), μ) → B[H] such that ΦT(ψ) = ψ(T) is a unital ∗-homomorphism.

Proof. If μ: Aσ(T) → R and E: Aσ(T) → B[H] are equivalent measures, then the above Remark ensures that the proof of Theorem 4.2 still holds.

There are, however, instances where it is convenient that a positive measure μ on Aσ(T), besides being equivalent to the spectral measure E, be finite as well. For example, if ψ lies in L∞(σ(T), μ) and μ is finite, then ψ also lies in L2(σ(T), μ), since ‖ψ‖₂² = ∫ |ψ|² dμ ≤ ‖ψ‖∞² μ(σ(T)) < ∞; this is especially significant if it is expected that a constant function will be in L2(σ(T), μ), as will be the case in the proof of Lemma 4.7 below.

Definition 4.4. A scalar spectral measure for a normal operator is a positive finite measure equivalent to the spectral measure of its spectral decomposition.
In other words, if T is a normal operator and E is the unique spectral measure of Theorem 3.15, then a positive finite measure μ is a scalar spectral measure for T if, for every measurable set Λ, μ(Λ) = 0 if and only if E(Λ) = O. Clearly (by transitivity), every positive finite measure that is equivalent to some scalar spectral measure for T is again a scalar spectral measure for T .
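The functional calculus of Theorem 4.3 is concrete for a multiplication operator on a finite atomic measure space: Mφ is then a diagonal matrix, and ψ(Mφ) is multiplication by ψ∘φ, a normal operator of norm sup |ψ| over the (finite) spectrum. A NumPy sketch (the particular φ and ψ are my own, and ψ is deliberately a merely Borel, non-continuous function):

```python
import numpy as np

# Counting measure on 4 points: L2 is C^4 and M_phi is diagonal.
phi = np.array([0.5, -1.0, 2.0, 2.0])
M_phi = np.diag(phi)

# A bounded Borel (discontinuous) function: psi(t) = |t| + indicator(t > 0).
def psi(t):
    return np.abs(t) + (t > 0).astype(float)

# psi(M_phi) = M_{psi o phi}: apply psi to the symbol pointwise.
psi_M = np.diag(psi(phi))

# psi(M_phi) is normal, and ||psi(M_phi)|| = sup |psi| over the spectrum.
assert np.allclose(psi_M @ psi_M.conj().T, psi_M.conj().T @ psi_M)
assert np.isclose(np.linalg.norm(psi_M, 2), max(abs(psi(phi))))
```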
Definition 4.5. A separating vector for a collection {Cγ}γ∈Γ ⊆ B[H] of operators is a vector e ∈ H such that e ∉ N(C) for every O ≠ C ∈ {Cγ}γ∈Γ.

Take the collection {E(Λ)}Λ∈AΩ of all images of the spectral measure E associated to a normal operator T on H, where Ω is either the disjoint union of {Ωγ} or σ(T) as in the proof of Theorem 3.15. For each x, y ∈ H, let πx,y: AΩ → C be the scalar-valued (complex) measure of Section 3.3 related to E, namely,

    πx,y(Λ) = ⟨E(Λ)x ; y⟩    for every    Λ ∈ AΩ.
Lemma 4.6. If T ∈ B[H] is normal, then e ∈ H is a separating vector for {E(Λ)}Λ∈AΩ if and only if πe,e is a scalar spectral measure for T.

Proof. Let E: AΩ → B[H] be the spectral measure for a normal operator T in B[H]. Note that πx,x(Λ) = ‖E(Λ)x‖² for every Λ ∈ AΩ and every x ∈ H, and so the measure πx,x: AΩ → R is positive and finite for every x ∈ H. Recall that e ∈ H is a separating vector for the collection {E(Λ)}Λ∈AΩ if and only if E(Λ)e ≠ 0 whenever E(Λ) ≠ O or, equivalently, if and only if {E(Λ)e = 0 ⟺ E(Λ) = O}. Conclusion: e ∈ H is a separating vector for the collection {E(Λ)}Λ∈AΩ if and only if {πe,e(Λ) = 0 ⟺ E(Λ) = O}; that is, if and only if E and πe,e are equivalent measures. Since πe,e is positive and finite, this means that πe,e is a scalar spectral measure for T.

Lemma 4.7. If T is a normal operator on a separable Hilbert space H, then there is a separating vector e ∈ H for {E(Λ)}Λ∈AΩ, and so there is a vector e ∈ H such that πe,e is a scalar spectral measure for T (by Lemma 4.6).

Proof. Take the positive measure μ on AΩ of Theorem 3.11. If H is separable, then μ is finite (Corollary 3.13). Let Mϕ be the multiplication operator on L2(Ω, μ) with ϕ ∈ L∞(Ω, μ) given by ϕ(λ) = λ ∈ σ(T) ⊆ C for every λ ∈ Ω. Let χΛ ∈ L∞(Ω, μ) be the characteristic function of each Λ ∈ AΩ. Set E′(Λ) = M_χΛ for every Λ ∈ AΩ, so that E′: AΩ → B[L2(Ω, μ)] is the spectral measure in L2(Ω, μ) for the spectral decomposition (cf. proof (a) of Theorem 3.15)

    Mϕ = ∫ ϕ(λ) dE′λ = ∫ λ dE′λ

of the normal operator Mϕ ∈ B[L2(Ω, μ)]. Hence, for each f ∈ L2(Ω, μ),

    πf,f(Λ) = ⟨E′(Λ)f ; f⟩ = ⟨M_χΛ f ; f⟩ = ∫ χΛ f f̄ dμ = ∫_Λ |f|² dμ
for every Λ ∈ AΩ . In particular, set f = 1 = χΩ (the constant function f (λ) = 1 for all λ ∈ Ω, i.e., the characteristic function χΩ of Ω), which lies in L2 (Ω, μ) because μ is finite. Note that f = 1 = χΩ is a separating vector for {E (Λ)}Λ∈AΩ since E (Λ)1 = M χΛ 1 = χΛ = 0 if Λ = ∅ in AΩ . Thus π 1,1 is a scalar spectral measure for Mϕ by Lemma 4.6. Now, if T ∈ B[H] is normal, then T ∼ = Mϕ by Theorem 3.11. Let E : AΩ → B[H] be the spectral measure in H for T , given by E(Λ) = U ∗E (Λ)U for each Λ ∈ AΩ , where U is a unitary in B[H, L2 (Ω, μ)]
4.2 Functional Calculus for Normal Operators
(cf. proof (b) of Theorem 3.15). Set e = U*1 ∈ H with the function 1 = χΩ in L²(Ω, μ). Note that e is a separating vector for {E(Λ)}Λ∈AΩ because E(Λ)e = U*E′(Λ)UU*1 = U*E′(Λ)1 ≠ 0 if E(Λ) ≠ O (since E′(Λ)1 ≠ 0 when E′(Λ) = UE(Λ)U* ≠ O). Thus πe,e is a scalar spectral measure for T (Lemma 4.6). Next, take the spectral measure Ẽ : Aσ(T) → B[H] such that Ẽ(Λ) = E(∪Λ) for every Λ ∈ Aσ(T) (cf. proof (c) of Theorem 3.15). Thus Ẽ(Λ) ≠ O implies that Ẽ(Λ)e = E(∪Λ)e ≠ 0, and so e is also a separating vector for {Ẽ(Λ)}Λ∈Aσ(T). Hence π̃e,e, such that π̃e,e(Λ) = ⟨Ẽ(Λ)e ; e⟩, is a scalar spectral measure for T (Lemma 4.6). Again, use the same notation: E for Ẽ and πe,e for π̃e,e.

Remark. The preceding proof shows that the scalar spectral measure π′1,1 for Mϕ is precisely the measure μ of Theorem 3.11. Indeed, π′1,1(Λ) = ‖E′(Λ)1‖² = ‖MχΛ 1‖² = ‖χΛ‖² = ∫_Λ dμ = μ(Λ) for every Λ ∈ AΩ. And so is the scalar spectral measure πe,e for T. That is, μ = π′1,1 = πe,e. In fact, for every Λ ∈ AΩ,

μ(Λ) = π′1,1(Λ) = ⟨E′(Λ)1 ; 1⟩ = ⟨UE(Λ)U*Ue ; Ue⟩ = ⟨E(Λ)e ; e⟩ = πe,e(Λ).

Theorem 4.8. Let T ∈ B[H] be a normal operator on a separable Hilbert space H and consider its spectral decomposition T = ∫ λ dEλ. Let μ be an arbitrary scalar spectral measure for T. For every ψ ∈ L∞(σ(T), μ) there is a unique operator ψ(T) ∈ B[H] given by

ψ(T) = ∫ ψ(λ) dEλ,

which is normal. Moreover, the mapping ΦT : L∞(σ(T), μ) → B[H] such that ΦT(ψ) = ψ(T) is an isometric unital ∗-homomorphism.

Proof. All but the fact that ΦT is an isometry is a particular case of Theorem 4.3. We show that ΦT is an isometry. Lemma 3.8, the uniqueness in Lemma 3.7, and the uniqueness of the expression for ψ(T) ensure that, for every x ∈ H,

‖ΦT(ψ)x‖² = ⟨ψ(T)*ψ(T)x ; x⟩ = ⟨∫ ψ̄(λ)ψ(λ) dEλ x ; x⟩ = ∫ |ψ(λ)|² dπx,x.

Suppose ΦT(ψ) = O. Then ∫ |ψ(λ)|² dπx,x = 0 for every x ∈ H. Lemma 4.7 ensures the existence of a separating vector e ∈ H for {E(Λ)}Λ∈Aσ(T).
Thus consider the scalar spectral measure πe,e, so that ∫ |ψ(λ)|² dπe,e = 0, and hence ψ = 0 in L∞(σ(T), πe,e), which means that ψ = 0 in L∞(σ(T), μ) for every measure μ equivalent to πe,e; in particular, for every scalar spectral measure μ for T. So {0} = N(ΦT) ⊆ L∞(σ(T), μ), and hence the linear transformation ΦT : L∞(σ(T), μ) → B[H] is injective; thus an isometry by Lemma 4.1.

If T is normal on a separable Hilbert space H, then ΦT : L∞(σ(T), μ) → B[H] is an isometric unital ∗-homomorphism between C*-algebras (Theorem 4.8). Thus ΦT : L∞(σ(T), μ) → R(ΦT) is an isometric unital ∗-isomorphism between the C*-algebras L∞(σ(T), μ) and R(ΦT) ⊆ B[H] (see the Remark after Lemma 4.1), which is the von Neumann algebra generated by T (see Proposition 4.A).
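The correspondence in Lemmas 4.6 and 4.7 can be made concrete in finite dimensions. The sketch below is an illustration of mine, not part of the text: it builds the spectral measure of an arbitrarily chosen normal 4×4 matrix out of eigenprojections and checks that a vector with a nonzero component in every eigenspace is separating, i.e., that πe,e(Λ) = ⟨E(Λ)e ; e⟩ vanishes exactly when E(Λ) = O.

```python
import numpy as np

# Hypothetical finite-dimensional model: a normal matrix T = Q diag(d) Q*,
# whose spectral measure sends a set L of eigenvalues to the orthogonal
# projection onto the span of the corresponding eigenvectors.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(A)                      # a random unitary
d = np.array([1.0, 1.0, 2.0, 3.0])          # eigenvalues (1 with multiplicity 2)
T = Q @ np.diag(d) @ Q.conj().T             # normal operator on C^4

def E(subset):
    """Spectral projection E(L): sum of eigenprojections with eigenvalue in L."""
    P = np.zeros((4, 4), dtype=complex)
    for k in range(4):
        if d[k] in subset:
            v = Q[:, k:k+1]
            P += v @ v.conj().T
    return P

# e has component 1/2 along every eigenvector, hence it is separating.
e = Q @ np.ones(4) / 2.0
for subset in [{1.0}, {2.0}, {3.0}, {1.0, 2.0}, {5.0}]:
    pi_ee = np.vdot(e, E(subset) @ e).real  # pi_{e,e}(L) = <E(L)e ; e> = ||E(L)e||^2
    assert (pi_ee > 1e-12) == (np.linalg.norm(E(subset)) > 1e-12)
```

Here the equivalence of the positive measure πe,e and the spectral measure E is precisely the separating property of Lemma 4.6.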
4. Functional Calculus
Theorem 4.9. Let T ∈ B[H] be a normal operator on a separable Hilbert space H and consider its spectral decomposition T = ∫ λ dEλ. For every ψ ∈ C(σ(T)) there is a unique operator ψ(T) ∈ B[H] given by

ψ(T) = ∫ ψ(λ) dEλ,

which is normal. Moreover, the mapping ΦT : C(σ(T)) → R(ΦT) ⊆ B[H] such that ΦT(ψ) = ψ(T) is an isometric unital ∗-isomorphism, where R(ΦT) is the C*-algebra generated by T and the identity I in B[H].

Proof. The set C(σ(T)) is a unital ∗-subalgebra of the C*-algebra B(σ(T)), which can be viewed as a subalgebra of the C*-algebra L∞(σ(T), μ) for every measure μ (in terms of equivalence classes in L∞(σ(T), μ) of each function in B(σ(T))). So ΦT : C(σ(T)) → B[H], the restriction of ΦT : L∞(σ(T), μ) → B[H] to C(σ(T)), is an isometric unital ∗-homomorphism under the same assumptions of Theorem 4.8. Hence ΦT : C(σ(T)) → R(ΦT) is an isometric unital ∗-isomorphism (cf. the paragraph that precedes the theorem statement). Thus it remains to show that R(ΦT) is the C*-algebra generated by T and the identity I. Indeed, take R(ΦT) = ΦT(C(σ(T))). Since ΦT(P(σ(T))) = P(T, T*), the algebra of all polynomials in T and T*, since P(σ(T))− = C(σ(T)) in the sup-norm topology by the Stone–Weierstrass Theorem (cf. proof of Theorem 3.11), and since ΦT : C(σ(T)) → B[H] is an isometry (thus with a closed range), it follows that

ΦT(C(σ(T))) = ΦT(P(σ(T))−) = ΦT(P(σ(T)))− = P(T, T*)− = C*(T),

where C*(T) stands for the C*-algebra generated by T and I (see the paragraph following Definition 3.12, recalling that now T is normal).

The mapping ΦT of Theorems 4.2, 4.3, 4.8, and 4.9 such that ΦT(ψ) = ψ(T) is referred to as the functional calculus for T, and the theorems themselves are referred to as the Functional Calculus for Normal Operators.
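In finite dimensions the functional calculus of Theorems 4.8 and 4.9 reduces to applying ψ to the eigenvalues. The following sketch is mine (the matrix and the function ψ are arbitrary choices, not from the text); it checks the isometry ‖ψ(T)‖ = sup over σ(T) of |ψ(λ)| and the normality of ψ(T).

```python
import numpy as np

# Sketch: psi(T) = int psi dE computed by applying psi to the eigenvalues
# of a normal matrix T = Q diag(lam) Q*.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5)))
lam = np.array([1, -1, 2j, -2j, 3], dtype=complex)
T = Q @ np.diag(lam) @ Q.conj().T

psi = lambda z: np.exp(z) + z.conj()        # a bounded Borel function of z
PT = Q @ np.diag(psi(lam)) @ Q.conj().T     # psi(T) via the spectral decomposition

# Phi_T is isometric: the operator norm of psi(T) is the sup of |psi| on sigma(T).
assert abs(np.linalg.norm(PT, 2) - np.abs(psi(lam)).max()) < 1e-9
# psi(T) is normal and commutes with T (cf. Corollary 4.10 below).
assert np.linalg.norm(PT @ PT.conj().T - PT.conj().T @ PT) < 1e-9
assert np.linalg.norm(PT @ T - T @ PT) < 1e-9
```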
Since σ(T) is compact it follows that, for any positive measure μ on Aσ(T),

C(σ(T)) ⊂ B(σ(T)) ⊂ L∞(σ(T), μ)

(again, the last inclusion is interpreted in terms of the equivalence classes in L∞(σ(T), μ) of each representative in B(σ(T)), as in the Remark that precedes Theorem 4.3). Thus ψ lies in L∞(σ(T), μ) as in Theorem 4.3 or 4.8 whenever ψ satisfies the assumptions of Theorem 4.2 or 4.9, and so the next result applies to any of the preceding forms of the Functional Calculus Theorems. It says that ψ(T) commutes with every operator that commutes with T.

Corollary 4.10. Let T = ∫ λ dEλ be a normal operator in B[H]. Take any ψ in L∞(σ(T), μ), where μ is any positive measure on Aσ(T) equivalent to the spectral measure E. Consider the operator ψ(T) of Theorem 4.3, viz.,

ψ(T) = ∫ ψ(λ) dEλ  in B[H].

If S in B[H] commutes with T, then S commutes with ψ(T).
Proof. If ST = TS, then by Theorem 3.17 (cf. proof of Corollary 3.19),

⟨Sψ(T)x ; y⟩ = ⟨ψ(T)x ; S*y⟩ = ⟨∫ ψ(λ) dEλ x ; S*y⟩
= ∫ ψ(λ) d⟨SEλx ; y⟩ = ∫ ψ(λ) d⟨EλSx ; y⟩ = ⟨ψ(T)Sx ; y⟩

for every x, y ∈ H (see Lemma 3.7), which means that Sψ(T) = ψ(T)S.
If T is a normal operator, then the Spectral Mapping Theorem for polynomials in Theorem 2.8 does not depend on the Spectral Theorem. Observe that, for a normal operator, the Spectral Mapping Theorem for polynomials in Theorem 2.7 is a particular case of Theorem 2.8. However, extensions from polynomials to bounded or continuous functions do depend on the Spectral Theorem via Theorem 4.8 or 4.9, as we shall see next. The forthcoming Theorems 4.11 and 4.12 are also referred to as Spectral Mapping Theorems for bounded or continuous functions of normal operators.

Theorem 4.11. If T is a normal operator on a separable Hilbert space, if μ is a scalar spectral measure for T, and if ψ lies in L∞(σ(T), μ), then

σ(ψ(T)) = ess R(ψ) = ess {ψ(λ) ∈ C : λ ∈ σ(T)} = ess ψ(σ(T)).

Proof. If the theorem holds for one scalar spectral measure, then it holds for any scalar spectral measure. Suppose T is a normal operator on a separable Hilbert space. Let μ be the positive measure of Theorem 3.11, which is a scalar spectral measure for T (cf. Remark that follows the proof of Lemma 4.7). Now set μ̃ = π̃e,e as in the proof of Lemma 4.7, which is again a scalar spectral measure for T (and, as before, use the same notation for μ and μ̃). Take any ψ in L∞(σ(T), μ). By Theorem 4.8, ΦT : L∞(σ(T), μ) → B[H] is an isometric (thus injective) unital ∗-homomorphism between C*-algebras and ΦT(ψ) = ψ(T). So σ(ψ) = σ(ΦT(ψ)) = σ(ψ(T)) by Lemma 4.1(h). Consider the multiplication operator Mϕ on the Hilbert space L²(Ω, μ) of Theorem 3.11. Since T ≅ Mϕ, the space L²(Ω, μ) is separable. Since ϕ(Ω)− = σ(T) (with ϕ(λ) = λ), take ψ ∘ ϕ ∈ L∞(Ω, μ) (also denoted by ψ) so that the spectrum σ(ψ) coincides in both algebras. Since μ is a scalar spectral measure for Mϕ (cf. Remark that follows the proof of Lemma 4.7 again), we may apply Theorem 4.8 to Mϕ. Therefore, using the same argument, σ(ψ) = σ(ψ(Mϕ)).

Claim. ψ(Mϕ) = Mψ.

Proof.
Let E′ be the spectral measure for Mϕ, where E′(Λ) = MχΛ with χΛ being the characteristic function of Λ in AΩ, as in the proof of Lemma 4.7. Take the scalar-valued measure π′f,g given by π′f,g(Λ) = ⟨E′(Λ)f ; g⟩ for every Λ in AΩ and each f, g ∈ L²(Ω, μ). Recall from the remark that follows the proof of Lemma 4.7 that μ = π′1,1. Observe that, for every Λ in AΩ,
∫_Λ dπ′f,g = π′f,g(Λ) = ⟨E′(Λ)f ; g⟩ = ⟨χΛ f ; g⟩ = ∫_Λ f ḡ dμ = ∫_Λ f ḡ dπ′1,1.
So dπ′f,g = f ḡ dπ′1,1 (f ḡ is the Radon–Nikodým derivative of π′f,g with respect to π′1,1; see proof of Lemma 3.8). Since ψ(Mϕ) = ∫ ψ(λ) dE′λ by Theorem 4.8,

⟨ψ(Mϕ)f ; g⟩ = ⟨∫ ψ dE′λ f ; g⟩ = ∫ ψ dπ′f,g = ∫ ψ f ḡ dπ′1,1 = ∫ ψ f ḡ dμ = ⟨Mψ f ; g⟩

for every f, g ∈ L²(Ω, μ), and so ψ(Mϕ) = Mψ, proving the claimed identity. Hence, since μ is finite (thus σ-finite), it follows by Proposition 3.B that

σ(ψ(T)) = σ(ψ(Mϕ)) = σ(Mψ) = ess R(ψ).
Theorem 4.12. If T is a normal operator on a separable Hilbert space, then

σ(ψ(T)) = ψ(σ(T))  for every  ψ ∈ C(σ(T)).
Proof. Recall that C(σ(T)) ⊂ B(σ(T)) ⊂ L∞(σ(T), μ) for every measure μ. Take an arbitrary ψ ∈ C(σ(T)). Since σ(T) is compact in C and ψ is continuous, it follows that R(ψ) = ψ(σ(T)) is compact (see, e.g., [66, Theorem 4.64]), and so R(ψ)− = R(ψ). Consider the setup of the previous proof: μ̃ = π̃e,e is also denoted by μ (the positive measure of Theorem 3.11), which is a scalar spectral measure for T and for Mϕ ≅ T. From Proposition 3.B we get

ess R(ψ) = {α ∈ C : μ(ψ⁻¹(Bε(α))) > 0 for all ε > 0} ⊆ R(ψ),

where Bε(α) stands for the open ball of radius ε centered at α, and also that μ(Λ) > 0 for every nonempty Λ ∈ Aσ(T) open relative to σ(T) = σ(Mϕ). All that has been said so far about μ holds for every positive measure equivalent to it; in particular, it holds for every scalar spectral measure for T. If there is an α in R(ψ)\ess R(ψ), then there is an ε > 0 such that μ(ψ⁻¹(Bε(α))) = 0. Since ψ is continuous, the inverse image ψ⁻¹(Bε(α)) of the open set Bε(α) in C must be open in σ(T), and hence ψ⁻¹(Bε(α)) = ∅. But this is a contradiction because, since α ∈ R(ψ), it follows that ∅ ≠ ψ⁻¹({α}) ⊆ ψ⁻¹(Bε(α)). Thus R(ψ)\ess R(ψ) = ∅, and so R(ψ) = ess R(ψ). Therefore, by Theorem 4.11,

σ(ψ(T)) = ess R(ψ) = R(ψ) = ψ(σ(T)).
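A numerical sketch of Theorem 4.12 (my illustration, with an arbitrarily chosen normal matrix and a continuous ψ): the eigenvalues of ψ(T) are exactly the values ψ(λ) for λ ∈ σ(T).

```python
import numpy as np

# Sketch of sigma(psi(T)) = psi(sigma(T)) for a normal matrix.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
lam = np.array([0, 1j, 2, -1], dtype=complex)
T = Q @ np.diag(lam) @ Q.conj().T
psi = lambda z: z * z + 1                    # continuous on sigma(T)
PT = Q @ np.diag(psi(lam)) @ Q.conj().T      # psi(T) via the spectral decomposition
spec = np.linalg.eigvals(PT)
# sigma(psi(T)) and psi(sigma(T)) agree as sets (up to numerical error).
match = [(np.abs(spec - w)).min() < 1e-8 for w in psi(lam)]
assert all(match)
```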
4.3 Analytic Functional Calculus: Riesz Functional Calculus

The approach of this section is rather different from that of Section 4.2. However, Proposition 4.J will show that the Riesz functional calculus, when restricted to a normal operator on a Hilbert space, coincides with the functional calculus for normal operators of the previous section (for analytic functions).
Measure theory was the main ingredient for the last four sections of Chapter 3 and also for Section 2 of this chapter. The background for the present section is complex analysis, where the integrals involved here are all Riemann's (unlike the measure-theoretic integrals we have been dealing with so far). For the standard results of complex analysis used in this section the reader is referred, for instance, to [1], [13], [21], [26], and [79]. An arc is a continuous function from an arbitrary (nondegenerate) closed interval of the real line into the complex plane, say, α : [0, 1] → C, where Γ = R(α) = {α(t) ∈ C : t ∈ [0, 1]} is the range of the arc α, which is connected and compact (the continuous image of a connected set is connected, and the continuous image of a compact set is compact). Let G = {#X : ∅ ≠ X ∈ ℘([0, 1])} be the set of all cardinal numbers of nonempty subsets of the interval [0, 1]. Given an arc α : [0, 1] → Γ set μ(t) = #{s ∈ [0, 1): α(s) = α(t)} ∈ G for each t ∈ [0, 1). That is, μ(t) ∈ G is the cardinality of the subset of [0, 1] consisting of all points s ∈ [0, 1) such that α(s) = α(t), for every t ∈ [0, 1). We say that μ(t) is the multiplicity of the point α(t) ∈ Γ. Consider the function μ : [0, 1) → G defined by t ↦ μ(t), which is referred to as the multiplicity of the arc α. To avoid space-filling curves, assume from now on that Γ is nowhere dense. A pair (Γ, μ) consisting of the range Γ and the multiplicity μ of an arc is called a directed pair if the direction in which the arc traverses its range Γ, according to the natural order of the interval [0, 1], is taken into account. An oriented curve generated by an arc α : [0, 1] → Γ is the directed pair (Γ, μ), and the arc α itself is referred to as a parameterization of the curve (Γ, μ) (which is not unique for (Γ, μ)). If an oriented curve (Γ, μ) is such that α(0) = α(1) for some parameterization α (and so for all parameterizations), then it is called a closed curve.
A parameterization α of an oriented curve (Γ, μ) is injective on [0, 1) if and only if μ = 1 (i.e., μ(t) = 1 for all t ∈ [0, 1)). If one parameterization of (Γ, 1) is injective, then so are all of them. A curve (Γ, 1) parameterized by an arc α that is injective on [0, 1) is called a simple curve (or a Jordan curve). Thus an oriented curve consists of a directed pair (Γ, μ), where Γ ⊂ C is the (nowhere dense) range of a continuous function α : [0, 1] → C called a parameterization of (Γ, μ), and μ is a G-valued function that assigns to each point t of [0, 1) the multiplicity of the point α(t) in Γ (i.e., each μ(t) says how often α traverses the point γ = α(t) of Γ). For notational simplicity we shall refer to Γ itself as an oriented curve when the multiplicity μ is either clear in the context or is immaterial. In this case we simply say "Γ is an oriented curve". By a partition P of the interval [0, 1] we mean a finite sequence {tj}, j = 0, …, n, of points in [0, 1] such that 0 = t0, tj−1 < tj for 1 ≤ j ≤ n, and tn = 1. The total variation of a function α : [0, 1] → C is the supremum of the set {ν ∈ R : ν = Σ_{j=1}^{n} |α(tj) − α(tj−1)|} taken over all partitions P of [0, 1]. A function α : [0, 1] → C is of bounded variation if its total variation is finite. An oriented curve Γ is rectifiable if some (and so any) parameterization α of it is of bounded variation, and its length ℓ(Γ) is the total variation of α. An arc α : [0, 1] → C is
continuously differentiable (or smooth) if it is differentiable on the open interval (0, 1), has one-sided derivatives at the end points 0 and 1, and its derivative α′ : [0, 1] → C is continuous on [0, 1]. In this case the oriented curve Γ parameterized by α is also referred to as continuously differentiable (or smooth). The linear space of all continuously differentiable functions from [0, 1] to C is usually denoted by C¹([0, 1]). If there is a partition of [0, 1] such that each restriction α|[tj−1, tj] of the continuous α is continuously differentiable (i.e., such that α|[tj−1, tj] lies in C¹([tj−1, tj])), then α : [0, 1] → C is a piecewise continuously differentiable arc (or piecewise smooth arc), and Γ parameterized by it is an oriented piecewise continuously differentiable curve (or an oriented piecewise smooth curve). In this case it is known from advanced calculus that Γ is rectifiable (α is of bounded variation) with length ℓ(Γ) = Σ_{j=1}^{n} ∫_{tj−1}^{tj} |(α|[tj−1, tj])′(t)| dt. A continuous function ψ : Γ → C has a Riemann–Stieltjes integral associated with a parameterization α : [0, 1] → Γ of a rectifiable curve Γ,

∫_Γ ψ(λ) dλ = ∫_0^1 ψ(α(t)) dα(t).

If, in addition, α is smooth, then

∫_0^1 ψ(α(t)) dα(t) = ∫_0^1 ψ(α(t)) α′(t) dt.
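As a small numerical aside (mine, not from the text), the two descriptions of length agree for the unit circle: the total variation over a fine partition and the integral of |α′(t)| both return 2π.

```python
import numpy as np

alpha = lambda t: np.exp(2j * np.pi * t)            # unit circle, t in [0, 1]
t = np.linspace(0.0, 1.0, 20001)
# Total variation approximated through one fine partition of [0, 1].
total_variation = np.abs(np.diff(alpha(t))).sum()
# For a smooth arc, length = int_0^1 |alpha'(t)| dt; here |alpha'| = 2 pi.
smooth_length = (np.abs(2j * np.pi * alpha(t[:-1])) * np.diff(t)).sum()
assert abs(total_variation - 2 * np.pi) < 1e-6
assert abs(smooth_length - 2 * np.pi) < 1e-9
```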
Now consider a bounded function f : Γ → Y, where Y is a Banach space. Let α : [0, 1] → Γ be a parameterization of an oriented curve Γ, and let P = {tj}, j = 0, …, n, be a partition of the interval [0, 1]. Recall that the norm of P is the number ‖P‖ = max_{1≤j≤n} (tj − tj−1). A Riemann–Stieltjes sum for the function f with respect to α based on a given partition P is a vector

S(f, α, P) = Σ_{j=1}^{n} f(α(τj)) (α(tj) − α(tj−1)) ∈ Y

with τj ∈ [tj−1, tj]. Suppose there is a unique vector S ∈ Y with the following property: for every ε > 0 there is a δε > 0 such that, if P is an arbitrary partition of [0, 1] with ‖P‖ ≤ δε, then ‖S(f, α, P) − S‖ ≤ ε for all Riemann–Stieltjes sums S(f, α, P) for f with respect to α based on P. When such a vector S exists, it is called the Riemann–Stieltjes integral of f with respect to α, denoted by

S = ∫_Γ f(λ) dλ = ∫_0^1 f(α(t)) dα(t).
If the Y-valued function f : Γ → Y is continuous, and if Γ is rectifiable, then it can be shown (exactly as in the case of complex-valued functions; see, e.g., [21, Proposition 5.3 and p. 385]) that the Riemann–Stieltjes integral of f with respect to α exists. If α1 and α2 are parameterizations of Γ, then for every δ > 0 there are partitions P1 and P2 with norm less than δ such that S(f, α1, P1) = S(f, α2, P2). Given a rectifiable oriented curve Γ and a continuous Y-valued function on Γ, the integral ∫_Γ f(λ) dλ therefore does not depend on the parameterization of Γ, and we shall refer to it as the integral of f over Γ.
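The independence of the parameterization can be illustrated numerically (a sketch of mine; the curve and the function f(λ) = 1/λ are arbitrary choices). Two parameterizations of the unit circle give Riemann–Stieltjes sums converging to the same value, here ∮ λ⁻¹ dλ = 2πi.

```python
import numpy as np

def rs_integral(f, alpha, n=200000):
    """Left Riemann-Stieltjes sum S(f, alpha, P) on the uniform partition."""
    t = np.linspace(0.0, 1.0, n + 1)
    z = alpha(t)
    return (f(z[:-1]) * np.diff(z)).sum()

f = lambda z: 1.0 / z
alpha1 = lambda t: np.exp(2j * np.pi * t)        # unit circle, uniform speed
alpha2 = lambda t: np.exp(2j * np.pi * t * t)    # same oriented curve, reparameterized
I1 = rs_integral(f, alpha1)
I2 = rs_integral(f, alpha2)
assert abs(I1 - 2j * np.pi) < 1e-3
assert abs(I2 - 2j * np.pi) < 1e-3
```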
Let C[Γ, Y] and B[Γ, Y] be the linear spaces of all Y-valued continuous or bounded functions on Γ, respectively, equipped with the sup-norm, which are Banach spaces because Γ is compact and Y is complete. Since Γ is compact, C[Γ, Y] ⊆ B[Γ, Y]. It is readily verified that the transformation ∫ : C[Γ, Y] → Y, assigning to each f in C[Γ, Y] its integral in Y, is linear and bounded (i.e., linear and continuous: ∫ ∈ B[C[Γ, Y], Y]). Indeed,

‖∫_Γ f(λ) dλ‖ ≤ sup_{λ∈Γ} ‖f(λ)‖ ∫_0^1 |dα(t)| = ‖f‖∞ ℓ(Γ),

and therefore, for every L ∈ B[Y, Z], where Z also is a Banach space,

L ∫_Γ f(λ) dλ = ∫_Γ Lf(λ) dλ.
The reverse (or the opposite) of an oriented curve Γ is an oriented curve denoted by −Γ obtained from Γ by reversing the orientation of Γ. To reverse the orientation of Γ means to reverse the order of the domain [0, 1] of its parameterization α or, equivalently, to replace α(t) with α(−t) for t running over [−1, 0]. Reversing the orientation of a curve Γ has the effect of replacing ∫_Γ f(λ) dλ = ∫_0^1 f(α(t)) dα(t) with ∫_1^0 f(α(t)) dα(t) = ∫_−Γ f(λ) dλ, so that

∫_−Γ f(λ) dλ = −∫_Γ f(λ) dλ.
Also note that the preceding arguments and results all remain valid if the continuous function f is defined on a subset Λ of C that includes the curve Γ (by simply replacing it with its restriction to Γ, which remains continuous). In particular, if Λ is a nonempty open subset of C and Γ is a rectifiable curve included in Λ, and if f : Λ → Y is a continuous function on the open set Λ, then f has an integral over Γ ⊂ Λ ⊆ C, viz., ∫_Γ f(λ) dλ. Recall that a topological space is disconnected if it is the union of two disjoint nonempty subsets that are both open and closed. Otherwise, the topological space is said to be connected. A subset of a topological space is called a connected set (or a connected subset) if, as a topological subspace, it is a connected topological space itself. Take any nonempty subset Λ of C. A component of Λ is a maximal connected subset of Λ, which coincides with the union of all connected subsets of Λ containing a given point of Λ. Thus any two components of Λ are disjoint. Since the closure of a connected set is connected, any component of Λ is closed relative to Λ. By a region (or a domain) we mean a nonempty connected open subset of C. Every open subset of C is uniquely expressed as a countable union of disjoint regions that are components of it (see, e.g., [21, Proposition 3.9]). The closure of a region is sometimes called a closed region. Carefully note that different regions may have the same closure (sample: a punctured open disk). If Γ is a closed rectifiable oriented curve in C and if ζ ∈ C\Γ, then a classical and important result in complex analysis says that the integral
wΓ(ζ) = (1/2πi) ∫_Γ (λ − ζ)⁻¹ dλ
has an integer value that is constant on each component of C\Γ and zero on the unbounded component of C\Γ. The number wΓ(ζ) is referred to as the winding number of Γ about ζ. If a closed rectifiable oriented curve Γ is a simple curve, then C\Γ has only two components, just one of them is bounded, and Γ is their common boundary (this is the Jordan Curve Theorem). In this case (i.e., for a simple closed rectifiable oriented curve Γ), wΓ(ζ) takes on only two values: wΓ(ζ) = 0 for every ζ in the unbounded component of C\Γ, and either wΓ(ζ) = 1 or wΓ(ζ) = −1 (constant) on the bounded component. If wΓ(ζ) = 1 for every ζ in the bounded component of C\Γ, then we say that Γ is positively (i.e., counterclockwise) oriented; otherwise, if wΓ(ζ) = −1, then we say that Γ is negatively (i.e., clockwise) oriented. If Γ is positively oriented, then the reverse curve −Γ is negatively oriented, and
vice versa. These notions can be extended as follows. A finite union Γ = ⋃_{j=1}^{m} Γj of disjoint closed rectifiable oriented simple curves Γj is called a path, and its winding number wΓ(ζ) about ζ ∈ C\Γ is defined by wΓ(ζ) = Σ_{j=1}^{m} wΓj(ζ). A path Γ is positively oriented if for every ζ ∈ C\Γ the winding number wΓ(ζ) is either 0 or 1, and negatively oriented if for every ζ ∈ C\Γ the winding number wΓ(ζ) is either 0 or −1. If a path Γ = ⋃_{j=1}^{m} Γj is positively oriented, then the reverse path −Γ = ⋃_{j=1}^{m} (−Γj) is negatively oriented. The inside (notation: ins Γ) and the outside (notation: out Γ) of a positively oriented path Γ are the sets

ins Γ = {ζ ∈ C : wΓ(ζ) = 1}  and  out Γ = {ζ ∈ C : wΓ(ζ) = 0}.
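A numerical sketch of the winding number (mine, not from the text): approximating wΓ(ζ) = (1/2πi) ∫_Γ (λ − ζ)⁻¹ dλ by a Riemann–Stieltjes sum over a circle of radius 2 gives 1 for ζ in the bounded component and 0 for ζ in the unbounded one.

```python
import numpy as np

def winding(zeta, radius=2.0, n=100000):
    """w_Gamma(zeta) for a positively oriented circle, by a Riemann sum."""
    t = np.linspace(0.0, 1.0, n + 1)
    z = radius * np.exp(2j * np.pi * t)
    w = ((1.0 / (z[:-1] - zeta)) * np.diff(z)).sum() / (2j * np.pi)
    return w.real

assert abs(winding(0.5 + 0.5j) - 1.0) < 1e-3     # zeta inside the circle
assert abs(winding(3.0)) < 1e-3                  # zeta in the unbounded component
```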
From now on all paths are positively oriented. Note that if Γ = ⋃_{j=1}^{m} Γj is a path and if there is a finite subset {Γj′}, j′ = 1, …, n, of Γ such that Γj′ ⊂ ins Γj′+1, then these nested (disjoint closed rectifiable oriented) simple curves {Γj′} are oppositely oriented, with Γn being positively oriented, because wΓ(ζ) is either 0 or 1 for every ζ ∈ C\Γ. An open subset of C is a Cauchy domain (or a Jordan domain) if it has finitely many components whose closures are pairwise disjoint, and its boundary is a path. The closure of a Jordan domain is sometimes referred to as a Jordan closed region. Also observe that if Γ is a path, then {Γ, ins Γ, out Γ} is a partition of C. Since a path Γ is a closed set in C (a finite union of closed sets), and since it is the common boundary of ins Γ and out Γ, it follows that ins Γ and out Γ are open sets in C, and their closures are given by the union with their common boundary,

(ins Γ)− = Γ ∪ ins Γ  and  (out Γ)− = Γ ∪ out Γ.
If Γ is a path, if Y is a Banach space, and if f : Γ → Y is a continuous function (so that f has an integral over each Γj), then we define the integral of the Y-valued f over the path Γ ⊂ C by

∫_Γ f(λ) dλ = Σ_{j=1}^{m} ∫_{Γj} f(λ) dλ.
Again, if f : Λ → Y is a continuous function on a nonempty open subset Λ of C that includes a path Γ, then we define the integral of f over the path Γ as the integral of the restriction of f to Γ over Γ ⊂ Λ ⊆ C; that is, as the integral of f|Γ : Γ → Y (which is continuous as well) over the path Γ:

∫_Γ f(λ) dλ = ∫_Γ f|Γ(λ) dλ.
Therefore, if we are given a nonempty open subset Λ of C, a Banach space Y, and a continuous function f : Λ → Y, then we have seen how to define the integral of f over any path Γ included in Λ. The Cauchy Integral Formula says that if ψ : Λ → C is an analytic function on a nonempty open subset Λ of C that includes a path Γ and its inside ins Γ (i.e., Γ ∪ ins Γ ⊂ Λ ⊆ C) then, for every ζ ∈ ins Γ,

ψ(ζ) = (1/2πi) ∫_Γ ψ(λ)(λ − ζ)⁻¹ dλ.

In fact, this is a most common application of the general case of the Cauchy Integral Formula (see, e.g., [21, Problem 5.O]). Observe that the assumption Γ ∪ ins Γ ⊂ Λ ⊆ C (i.e., (ins Γ)− ⊂ Λ ⊆ C) simply means that Γ ⊂ Λ ⊆ C and C\Λ ⊂ out Γ (i.e., wΓ(ζ) = 0 for every ζ ∈ C\Λ), and the assumption ζ ∈ ins Γ ⊂ Λ is equivalent to saying that ζ ∈ Λ\Γ and wΓ(ζ) = 1. Since the function ψ(·)[(·) − ζ]⁻¹ : Γ → C is continuous on the path Γ, it follows that the above integral may be generalized to a Y-valued function. Thus consider the following special case in a complex Banach space Y. Throughout this chapter X will denote a nonzero complex Banach space. Set Y = B[X], the Banach algebra of all operators on X. Take any operator T in B[X] and let RT : ρ(T) → G[X] be its resolvent function, which is defined by RT(λ) = (λI − T)⁻¹ for every λ in the resolvent set ρ(T), which in turn is an open and nonempty subset of C (cf. Section 2.1). Let ψ : Λ → C be an analytic function on a nonempty open subset Λ of C such that Λ ∩ ρ(T) is nonempty. Recall from the proof of Theorem 2.2 that RT : ρ(T) → G[X] is continuous, and so is the product ψRT : Λ ∩ ρ(T) → B[X], which is defined by (ψRT)(λ) = ψ(λ)RT(λ) ∈ B[X] for every λ in the open subset Λ ∩ ρ(T) of C. Then we can define the integral of ψRT over any path Γ ⊂ Λ ∩ ρ(T),

∫_Γ ψ(λ) RT(λ) dλ = ∫_Γ ψ(λ)(λI − T)⁻¹ dλ ∈ B[X].
That is, the integral of the C -valued function ψ(·)[(·) − ζ]−1 : Λ\{ζ} → C , for each ζ ∈ ins Γ , on any path Γ such that (ins Γ )− ⊂ Λ, is generalized to the B[X ]-valued function ψ(·)[(·)I − T ]−1 : Λ ∩ ρ(T ) → B[X ], for each T ∈ B[X ], on any path Γ such that Γ ⊂ Λ ∩ ρ(T ). We shall see below that ψRT in fact is analytic, where the definition of a Banach-space-valued analytic function on a nonempty open subset of C is exactly the same as that for a scalar-valued analytic function.
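This operator-valued integral can be approximated numerically. The following sketch is mine, not the book's (the nilpotent matrix and the function e^λ are arbitrary choices): it evaluates (1/2πi) ∮ e^λ (λI − T)⁻¹ dλ over a circle about σ(T) = {0} with a uniform-grid quadrature, recovering exp(T) = I + T.

```python
import numpy as np

# Sketch: B[X]-valued contour integral of psi * R_T, approximated on a
# uniform periodic grid (z'(t) dt form of the Riemann-Stieltjes integral).
T = np.array([[0.0, 1.0], [0.0, 0.0]])     # nilpotent, sigma(T) = {0}
n = 200
t = np.arange(n) / n
z = 2.0 * np.exp(2j * np.pi * t)           # positively oriented circle of radius 2
dz = 2j * np.pi * z / n                    # z'(t) dt on the uniform grid
acc = np.zeros((2, 2), dtype=complex)
for zk, dzk in zip(z, dz):
    acc += np.exp(zk) * np.linalg.inv(zk * np.eye(2) - T) * dzk
psi_T = acc / (2j * np.pi)
# Since T^2 = 0, exp(T) = I + T, which the contour integral reproduces.
assert np.linalg.norm(psi_T - (np.eye(2) + T)) < 1e-10
```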
If Ω is a compact set included in an open subset Λ of C (so Ω ⊂ Λ ⊆ C), then there exists a path Γ ⊂ Λ such that Ω ⊂ ins Γ and C\Λ ⊂ out Γ (see, e.g., [27, p. 200]). As we have seen before, this is equivalent to saying that Ω ⊂ ins Γ ⊂ (ins Γ)− ⊂ Λ. Take an operator T ∈ B[X]. Set Ω = σ(T), which is compact. Now take an open subset Λ of C such that σ(T) ⊂ Λ. Let Γ be any path such that Γ ⊂ Λ and σ(T) ⊂ ins Γ ⊂ Λ. That is, for a given open set Λ, let the path Γ satisfy σ(T) ⊂ ins Γ ⊂ (ins Γ)− ⊂ Λ. Since ρ(T) = C\σ(T), the above inclusions ensure that Γ ⊂ Λ ∩ ρ(T) ≠ ∅.

Lemma 4.13. If ψ : Λ → C is an analytic function on a nonempty open subset Λ of C, then the integral

∫_Γ ψ(λ) RT(λ) dλ = ∫_Γ ψ(λ)(λI − T)⁻¹ dλ

does not depend on the choice of any path Γ that satisfies the assumption σ(T) ⊂ ins Γ ⊂ (ins Γ)− ⊂ Λ.

Proof. Let T be an arbitrary operator in B[X] and take its resolvent function RT : ρ(T) → G[X], so that RT(λ) = (λI − T)⁻¹ for every λ ∈ ρ(T). Let ψ : Λ → C be an analytic function on an open set Λ ⊆ C that properly includes σ(T), so that there is a path Γ satisfying the above assumption. If there were only one such path, there would be nothing to prove. Thus suppose Γ and Γ′ are distinct paths satisfying the above assumption. We shall show that

∫_Γ ψ(λ) RT(λ) dλ = ∫_Γ′ ψ(λ) RT(λ) dλ.

Claim 1. The product ψRT : Λ ∩ ρ(T) → B[X] is analytic on Λ ∩ ρ(T).

Proof. First note that Λ ∩ ρ(T) ≠ ∅. The resolvent identity of Section 2.1 ensures that if λ and ν are distinct points in ρ(T), then

(RT(λ) − RT(ν))/(λ − ν) + RT(ν)² = (RT(ν) − RT(λ)) RT(ν)

(cf. proof of Theorem 2.2). Since RT : ρ(T) → G[X] is continuous,

lim_{λ→ν} ‖(RT(λ) − RT(ν))/(λ − ν) + RT(ν)²‖ ≤ lim_{λ→ν} ‖RT(ν) − RT(λ)‖ ‖RT(ν)‖ = 0

for every ν ∈ ρ(T). Thus the resolvent function RT : ρ(T) → G[X] is analytic on ρ(T) and so it is analytic on the nonempty open set Λ ∩ ρ(T). Since the function ψ : Λ → C also is analytic on Λ ∩ ρ(T), and since the product of analytic functions on the same domain is again analytic, it follows that the product
function ψRT : Λ ∩ ρ(T) → B[X] such that (ψRT)(λ) = ψ(λ)RT(λ) ∈ B[X] for every λ ∈ Λ ∩ ρ(T) is analytic on Λ ∩ ρ(T), concluding the proof of Claim 1.

The Cauchy Theorem says that if ψ : Λ → C is analytic on a nonempty open subset Λ of C that includes a path Γ and its inside ins Γ, then

∫_Γ ψ(λ) dλ = 0

(see, e.g., [21, Problem 5.O]). This can be extended from scalar-valued functions to Banach-space-valued functions. Let Y be a complex Banach space, let f : Λ → Y be an analytic function on a nonempty open subset Λ of C that includes a path Γ and its inside ins Γ, and consider the integral ∫_Γ f(λ) dλ.

Claim 2. If f : Λ → Y is analytic on a given nonempty open subset Λ of C and if Γ is any path such that (ins Γ)− ⊂ Λ, then

∫_Γ f(λ) dλ = 0.

Proof. Take an arbitrary nonzero η in Y* (i.e., take a nonzero bounded linear functional η : Y → C), and consider the composition η ∘ f : Λ → C of η with an analytic function f : Λ → Y, which is again analytic on Λ. Indeed,

|(η(f(λ)) − η(f(ν)))/(λ − ν) − η(f′(ν))| ≤ ‖η‖ ‖(f(λ) − f(ν))/(λ − ν) − f′(ν)‖

for every pair of distinct points λ and ν in Λ. Since both η and the integral are linear and continuous,

η(∫_Γ f(λ) dλ) = η(Σ_{j=1}^{m} ∫_{Γj} f(λ) dλ) = Σ_{j=1}^{m} η(∫_{Γj} f(λ) dλ) = Σ_{j=1}^{m} ∫_{Γj} η(f(λ)) dλ = ∫_Γ η(f(λ)) dλ.

The Cauchy Theorem for scalar-valued functions says that ∫_Γ η(f(λ)) dλ = 0 (because η ∘ f is analytic on Λ). Therefore, η(∫_Γ f(λ) dλ) = 0. Since this holds for every η ∈ Y*, the Hahn–Banach Theorem (see, e.g., [66, Corollary 4.64]) ensures the claimed result, viz., ∫_Γ f(λ) dλ = 0.

Take the nonempty open subset ins Γ ∩ ins Γ′ of C, which includes σ(T). Let Δ be the open component of ins Γ ∩ ins Γ′ including σ(T). Consider the (rectifiable positively oriented) simple curve Γ′′ ⊆ Γ ∪ Γ′ consisting of the boundary of Δ, and so ins Γ′′ = Δ and Δ− = (ins Γ′′)− = Δ ∪ Γ′′. Observe that

σ(T) ⊂ ins Γ′′ ⊂ (ins Γ′′)− ⊆ (ins Γ ∩ ins Γ′)− ⊂ Λ.

Thus Γ′′ ⊂ Λ ∩ ρ(T) ≠ ∅. Set Δ̃ = ins Γ \ Δ (nonempty, since Γ and Γ′ are distinct). Orient the boundary of Δ̃ so as to make it a (positively oriented) path
Γ̃ ⊆ Γ ∪ (−Γ′′) so that ins Γ̃ = Δ̃. Since ins Γ = Δ ∪ Δ̃ with Δ ∩ Δ̃ = ∅ and Γ′′, Γ, and Γ̃ are positively oriented, ∫_Γ f(λ) dλ = ∫_Γ′′ f(λ) dλ + ∫_Γ̃ f(λ) dλ for every B[X]-valued continuous function f whose domain includes (ins Γ)−. So

∫_Γ ψ(λ) RT(λ) dλ = ∫_Γ′′ ψ(λ) RT(λ) dλ + ∫_Γ̃ ψ(λ) RT(λ) dλ.

But Claim 1 ensures that ψRT is analytic on Λ ∩ ρ(T) and therefore, since Δ̃− = (ins Γ̃)− ⊂ Λ ∩ ρ(T), Claim 2 ensures that ∫_Γ̃ ψ(λ)RT(λ) dλ = 0. Hence

∫_Γ ψ(λ) RT(λ) dλ = ∫_Γ′′ ψ(λ) RT(λ) dλ.

Similarly (by exactly the same argument),

∫_Γ′ ψ(λ) RT(λ) dλ = ∫_Γ′′ ψ(λ) RT(λ) dλ.
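The path independence asserted by Lemma 4.13 can be checked numerically (a sketch of mine, with an arbitrarily chosen T and ψ = exp): circles of radius 2 and 3 about σ(T) = {1, −1} yield the same operator.

```python
import numpy as np

def contour_fc(T, f, radius, n=200):
    """(1/2 pi i) times the integral of f * R_T over a circle, by quadrature."""
    t = np.arange(n) / n
    z = radius * np.exp(2j * np.pi * t)
    dz = 2j * np.pi * z / n
    acc = sum(f(zk) * np.linalg.inv(zk * np.eye(T.shape[0]) - T) * dzk
              for zk, dzk in zip(z, dz))
    return acc / (2j * np.pi)

T = np.array([[1.0, 2.0], [0.0, -1.0]])    # sigma(T) = {1, -1}
A = contour_fc(T, np.exp, 2.0)
B = contour_fc(T, np.exp, 3.0)
assert np.linalg.norm(A - B) < 1e-8        # path independence (Lemma 4.13)
```

For this triangular T one can also verify directly that both integrals equal exp(T) = [[e, e − 1/e], [0, 1/e]].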
Definition 4.14. Take an arbitrary T ∈ B[X]. If ψ : Λ → C is analytic on an open set Λ ⊆ C that includes the spectrum σ(T) of T and if Γ is any path such that σ(T) ⊂ ins Γ ⊂ (ins Γ)− ⊂ Λ, then define the operator ψ(T) in B[X] by

ψ(T) = (1/2πi) ∫_Γ ψ(λ) RT(λ) dλ = (1/2πi) ∫_Γ ψ(λ)(λI − T)⁻¹ dλ.

After defining the integral of ψRT over any path Γ such that Γ ⊂ Λ ∩ ρ(T), and after ensuring in Lemma 4.13 its invariance for any path Γ such that σ(T) ⊂ ins Γ ⊂ (ins Γ)− ⊂ Λ, it is clear that the preceding definition of the operator ψ(T) is motivated by the Cauchy Integral Formula. Given an operator T in B[X], we say that a complex-valued function ψ is analytic on σ(T) (or analytic on a neighborhood of σ(T)) if it is analytic on an open set that includes σ(T). By a path about σ(T) we mean a path Γ whose inside (properly) includes σ(T) (i.e., a path such that σ(T) ⊂ ins Γ). Thus what is behind Definition 4.14 is that, if ψ is analytic on σ(T), then the above integral exists as an operator in B[X], and the integral does not depend on the path about σ(T) whose closure of its inside is included in the open set upon which ψ is defined (i.e., in the domain of ψ).

Lemma 4.15. If φ and ψ are analytic on σ(T) and if Γ is any path about σ(T) such that σ(T) ⊂ ins Γ ⊂ (ins Γ)− ⊂ Λ, where the nonempty open set Λ ⊆ C is the intersection of the domains of φ and ψ, then

φ(T)ψ(T) = (1/2πi) ∫_Γ φ(λ)ψ(λ) RT(λ) dλ,

so that φ(T)ψ(T) = (φψ)(T), since φψ is analytic on σ(T).

Proof. Let φ and ψ be analytic on σ(T). Thus φψ is analytic on the intersection Λ of their domains. Let Γ and Γ′ be arbitrary paths such that
4.3 Analytic Functional Calculus: Riesz Functional Calculus
σ(T) ⊂ ins Γ ⊂ (ins Γ)⁻ ⊂ ins Γ′ ⊆ Λ. Thus, by Definition 4.14 and by the resolvent identity (cf. Section 2.1),

φ(T)ψ(T) = ((1/2πi) ∫_Γ φ(λ) R_T(λ) dλ)((1/2πi) ∫_{Γ′} ψ(ν) R_T(ν) dν)
= −(1/4π²) ∫_Γ ∫_{Γ′} φ(λ)ψ(ν) R_T(λ) R_T(ν) dν dλ
= −(1/4π²) ∫_Γ ∫_{Γ′} φ(λ)ψ(ν) (ν − λ)⁻¹ (R_T(λ) − R_T(ν)) dν dλ
= −(1/4π²) ∫_Γ ∫_{Γ′} φ(λ)ψ(ν) (ν − λ)⁻¹ R_T(λ) dν dλ + (1/4π²) ∫_{Γ′} ∫_Γ φ(λ)ψ(ν) (ν − λ)⁻¹ R_T(ν) dλ dν
= −(1/4π²) ∫_Γ φ(λ) (∫_{Γ′} ψ(ν)(ν − λ)⁻¹ dν) R_T(λ) dλ + (1/4π²) ∫_{Γ′} ψ(ν) (∫_Γ φ(λ)(ν − λ)⁻¹ dλ) R_T(ν) dν.

Since λ ∈ Γ, it follows that λ ∈ ins Γ′, and hence ∫_{Γ′} ψ(ν)(ν − λ)⁻¹ dν = (2πi)ψ(λ) by the Cauchy Integral Formula. Moreover, since ν ∈ Γ′, it follows that ν ∈ out Γ (so that, as a function of λ, φ(λ)(ν − λ)⁻¹ is analytic on an open set including (ins Γ)⁻), and hence ∫_Γ φ(λ)(ν − λ)⁻¹ dλ = 0 by the Cauchy Theorem. Therefore,

φ(T)ψ(T) = −(1/4π²) ∫_Γ φ(λ) (2πi)ψ(λ) R_T(λ) dλ = (1/2πi) ∫_Γ φ(λ)ψ(λ) R_T(λ) dλ.
Given an operator T ∈ B[X], let A(σ(T)) denote the set of all analytic functions on the spectrum σ(T) of T. That is, ψ ∈ A(σ(T)) if ψ: Λ → C is analytic on an open set Λ ⊆ C that includes σ(T). It is readily verified that A(σ(T)) is a unital algebra, where the domain of the product of two functions in A(σ(T)) is the intersection of their domains, and the identity element 1 in A(σ(T)) is the constant function 1(λ) = 1 for all λ ∈ Λ. Note that the identity function also lies in A(σ(T)) (i.e., if ϕ: Λ → C is such that ϕ(λ) = λ for every λ ∈ Λ, then ϕ ∈ A(σ(T))). The next theorem is the main result of this section. (Recall that X is an arbitrary nonzero complex Banach space.)

Theorem 4.16. Riesz Functional Calculus. Take any operator T ∈ B[X]. For every function ψ ∈ A(σ(T)), let the operator ψ(T) ∈ B[X] be defined as in Definition 4.14, viz.,

ψ(T) = (1/2πi) ∫_Γ ψ(λ) R_T(λ) dλ.
The mapping ΦT : A(σ(T )) → B[X ] such that ΦT (ψ) = ψ(T ) is a homomorphism. Moreover,
4. Functional Calculus
Φ_T(p) = p(T) = (1/2πi) ∫_Γ Σ_{j=0}^{m} α_j λ^j R_T(λ) dλ = Σ_{j=0}^{m} α_j T^j

for every polynomial p(λ) = Σ_{j=0}^{m} α_j λ^j in one variable with complex coefficients (i.e., for every p ∈ P(σ(T)) ⊂ A(σ(T))). In particular,

Φ_T(1) = 1(T) = (1/2πi) ∫_Γ R_T(λ) dλ = T⁰ = I,

where 1 denotes the constant function, 1(λ) = 1 for all λ in a neighborhood of σ(T), so that Φ_T takes the identity element 1 of A(σ(T)) to the identity operator I in B[X], and hence Φ_T is a unital homomorphism; and

Φ_T(ϕ) = ϕ(T) = (1/2πi) ∫_Γ λ R_T(λ) dλ = T,

where ϕ is the identity function: ϕ(λ) = λ for λ in a neighborhood of σ(T). Furthermore, if ψ has a power series expansion ψ(λ) = Σ_{k=0}^{∞} α_k λ^k with radius of convergence greater than r(T), and so on a neighborhood of σ(T), then

Φ_T(ψ) = ψ(T) = (1/2πi) ∫_Γ Σ_{k=0}^{∞} α_k λ^k R_T(λ) dλ = Σ_{k=0}^{∞} α_k T^k,
where the series Σ_{k=0}^{∞} α_k T^k converges in B[X]. Actually, if {ψ_n} is a sequence in A(σ(T)) that converges uniformly on every compact subset of its domain Λ (such that σ(T) ⊂ ins Γ ⊂ (ins Γ)⁻ ⊂ Λ) to ψ ∈ A(σ(T)), then {Φ_T(ψ_n)} converges in B[X] to Φ_T(ψ). Thus Φ_T: (A(σ(T)), ‖·‖_∞) → B[X] is continuous.

Proof. Since the integral ∫_Γ (·) R_T dλ: A(σ(T)) → B[X] is a linear transformation of the linear space A(σ(T)) into the linear space B[X], it follows that Φ_T: A(σ(T)) → B[X] is a linear transformation. That is,

Φ_T(αφ + βψ) = (αφ + βψ)(T) = (1/2πi) ∫_Γ (αφ + βψ)(λ) R_T(λ) dλ
= α (1/2πi) ∫_Γ φ(λ) R_T(λ) dλ + β (1/2πi) ∫_Γ ψ(λ) R_T(λ) dλ
= α φ(T) + β ψ(T) = α Φ_T(φ) + β Φ_T(ψ)

for every α, β ∈ C and every φ, ψ ∈ A(σ(T)). Moreover, by Lemma 4.15,

Φ_T(φψ) = (φψ)(T) = (1/2πi) ∫_Γ (φψ)(λ) R_T(λ) dλ = (1/2πi) ∫_Γ φ(λ)ψ(λ) R_T(λ) dλ = φ(T)ψ(T) = Φ_T(φ)Φ_T(ψ)

for every φ, ψ ∈ A(σ(T)). Thus Φ_T is a homomorphism. Recall: the preceding integrals do not depend on the path Γ such that σ(T) ⊂ ins Γ ⊂ (ins Γ)⁻ ⊂ Λ.

Suppose ψ is analytic on the whole complex plane (i.e., suppose Λ = C, which is the case if ψ is a polynomial), and let Γ be any circle about the origin with radius δ greater than the spectral radius of T; that is, r(T) < δ. Since r(T) < |λ| for every λ ∈ Γ, it follows by Corollary 2.12(a) that
(2πi) ψ(T) = ∫_Γ ψ(λ) R_T(λ) dλ = ∫_Γ ψ(λ) Σ_{k=0}^{∞} T^k/λ^{k+1} dλ
= I ∫_Γ ψ(λ) (1/λ) dλ + Σ_{k=1}^{∞} T^k ∫_Γ ψ(λ) (1/λ^{k+1}) dλ
for every ψ ∈ A(σ(T)), where the above series converges uniformly (i.e., it converges in the operator norm topology of B[X]). Next consider two special cases consisting of elementary polynomials. First let ψ = 1, so that

1(T) = I (1/2πi) ∫_Γ (1/λ) dλ + Σ_{k=1}^{∞} T^k (1/2πi) ∫_Γ (1/λ^{k+1}) dλ = I + O = I,

since (1/2πi) ∫_Γ (1/λ) dλ = 1(0) = 1 by the Cauchy Integral Formula and, for k ≥ 1, ∫_Γ (1/λ^{k+1}) dλ = 0. Indeed, with α(θ) = ρe^{iθ} for every θ ∈ [0, 2π] we get

∫_Γ (1/λ) dλ = ∫_0^{2π} α(θ)⁻¹ α′(θ) dθ = ∫_0^{2π} ρ⁻¹e^{−iθ} iρe^{iθ} dθ = i ∫_0^{2π} dθ = 2πi

and, for k ≥ 1, we also get

∫_Γ (1/λ^{k+1}) dλ = ∫_0^{2π} α(θ)^{−(k+1)} α′(θ) dθ = ∫_0^{2π} ρ^{−(k+1)}e^{−i(k+1)θ} iρe^{iθ} dθ = iρ^{−k} ∫_0^{2π} e^{−ikθ} dθ = 0.

Now let ψ = ϕ, the identity function, so that

ϕ(T) = I (1/2πi) ∫_Γ 1 dλ + T (1/2πi) ∫_Γ (1/λ) dλ + Σ_{k=2}^{∞} T^k (1/2πi) ∫_Γ (1/λ^k) dλ = O + T + O = T,

since (1/2πi) ∫_Γ 1 dλ = 0 by the Cauchy Theorem (i.e., ∫_Γ dλ = ∫_0^{2π} α′(θ) dθ = iρ ∫_0^{2π} e^{iθ} dθ = 0) and the other two integrals were computed above. Thus

T² = ϕ(T)² = (1/2πi) ∫_Γ ϕ(λ)² R_T(λ) dλ = (1/2πi) ∫_Γ λ² R_T(λ) dλ
by Lemma 4.15, and so a trivial induction ensures that

T^j = (1/2πi) ∫_Γ λ^j R_T(λ) dλ

for every integer j ≥ 0. Therefore, by linearity of the integral,

p(T) = Σ_{j=0}^{m} α_j T^j = (1/2πi) ∫_Γ p(λ) R_T(λ) dλ

whenever p(λ) = Σ_{j=0}^{m} α_j λ^j; that is, for every polynomial p ∈ P(σ(T)).

Next we extend this result from finite power series (i.e., from polynomials) to infinite power series. To say that a function ψ has a power series expansion ψ(λ) = Σ_{k=0}^{∞} α_k λ^k on a neighborhood of σ(T) means that the radius of convergence of the series Σ_{k=0}^{∞} α_k λ^k is greater than r(T), which implies that the polynomials p_n(λ) = Σ_{k=0}^{n} α_k λ^k form a sequence {p_n}_{n=0}^{∞} that converges uniformly to ψ on a closed disk whose boundary is a circle Γ of radius r(T) + ε about the origin for some ε > 0, which in turn implies that ψ ∈ A(σ(T)). Thus, since the integral is linear and continuous (i.e., since ∫_Γ ∈ B[C[Γ, B[X]], B[X]]), and according to Corollary 2.12(a), it follows that

ψ(T) = Σ_{k=0}^{∞} α_k T^k.

Indeed,
ψ(T) = (1/2πi) ∫_Γ ψ(λ) R_T(λ) dλ = (1/2πi) ∫_Γ Σ_{k=0}^{∞} α_k λ^k R_T(λ) dλ
= Σ_{k=0}^{∞} α_k (1/2πi) ∫_Γ λ^k R_T(λ) dλ = Σ_{k=0}^{∞} α_k T^k,

where the operator series Σ_{k=0}^{∞} α_k T^k converges uniformly (i.e., it converges in the operator norm topology to ψ(T) ∈ B[X]). In fact, since the scalar series Σ_{k=0}^{∞} α_k λ^k converges uniformly on Γ to ψ ∈ A(σ(T)),

‖ψ(T) − p_n(T)‖ = ‖(1/2πi) ∫_Γ (ψ(λ) − p_n(λ)) R_T(λ) dλ‖
≤ (1/2π) (sup_{λ∈Γ} |ψ(λ) − p_n(λ)|) ∫_Γ ‖R_T(λ)‖ |dλ| → 0 as n → ∞,

since sup_{λ∈Γ} |ψ(λ) − p_n(λ)| → 0 and ∫_Γ ‖R_T(λ)‖ |dλ| is a finite constant.
Hence Φ_T(p_n) → Φ_T(ψ) in B[X]. The same argument shows that if {ψ_n}_{n=0}^{∞} is a sequence of functions in A(σ(T)) that converges uniformly on every compact subset of its domain to ψ ∈ A(σ(T)), then Φ_T(ψ_n) → Φ_T(ψ) in B[X].

Corollary 4.17. Take any operator T ∈ B[X]. If ψ ∈ A(σ(T)), then ψ(T) commutes with every operator that commutes with T.

Proof. This is the counterpart of Corollary 4.10. Let S be an operator in B[X].

Claim. If ST = TS, then R_T(λ)S = SR_T(λ) for every λ ∈ ρ(T).

Proof. S = S(λI − T)(λI − T)⁻¹ = (λI − T)S(λI − T)⁻¹ if ST = TS, and so R_T(λ)S = (λI − T)⁻¹(λI − T)S(λI − T)⁻¹ = SR_T(λ) for λ ∈ ρ(T).

Therefore, according to Definition 4.14, since S lies in B[X],

S ψ(T) = S (1/2πi) ∫_Γ ψ(λ) R_T(λ) dλ = (1/2πi) ∫_Γ ψ(λ) SR_T(λ) dλ = (1/2πi) ∫_Γ ψ(λ) R_T(λ)S dλ = ψ(T)S.
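Corollary 4.17 can be observed directly in finite dimensions. In the sketch below (the matrix, the commuting operator S, the function ψ(λ) = e^λ, and the contour are all assumptions of the illustration), ψ(T) is built from the contour integral and the commutator with S is checked to vanish.

```python
import numpy as np

def psi_of_T(T, psi, center, radius, n=256):
    # (1/2πi) ∮ ψ(λ)(λI − T)^{-1} dλ by the trapezoid rule on a circle.
    d = T.shape[0]
    acc = np.zeros((d, d), dtype=complex)
    for k in range(n):
        lam = center + radius * np.exp(2j * np.pi * k / n)
        dlam = 2j * np.pi * (lam - center) / n
        acc += psi(lam) * dlam * np.linalg.inv(lam * np.eye(d) - T)
    return acc / (2j * np.pi)

T = np.array([[1.0, 2.0], [0.0, 3.0]])
S = 2 * T + T @ T                 # any polynomial in T commutes with T
psiT = psi_of_T(T, np.exp, center=2.0, radius=5.0)
print(np.linalg.norm(S @ psiT - psiT @ S))   # ≈ 0: S commutes with ψ(T)
```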
Theorem 2.7 is the Spectral Mapping Theorem for polynomials, which holds for every Banach space operator. For normal operators on a Hilbert space, the Spectral Mapping Theorem was extended to larger classes of functions in Theorems 2.8, 4.11, and 4.12. Back to the general case of arbitrary Banach space operators, Theorem 2.7 can be extended to analytic functions, and this is again referred to as the Spectral Mapping Theorem. Theorem 4.18. Take an operator T ∈ B[X ] on a complex Banach space X . If ψ is analytic on the spectrum of T (i.e., if ψ ∈ A(σ(T ))), then σ(ψ(T )) = ψ(σ(T )).
Proof. Take T ∈ B[X] and ψ: Λ → C in A(σ(T)). If λ ∈ σ(T) (so that λ ∈ Λ), consider the function φ: Λ → C defined by φ(λ) = ψ′(λ) and φ(ν) = (ψ(λ) − ψ(ν))/(λ − ν) for every ν ∈ Λ\{λ}. Since the function ψ(λ) − ψ(·) = φ(·)(λ − ·): Λ → C lies in A(σ(T)), Lemma 4.15, Theorem 4.16, and Corollary 4.17 ensure that

ψ(λ)I − ψ(T) = (1/2πi) ∫_Γ (ψ(λ) − ψ(ν)) R_T(ν) dν = (1/2πi) ∫_Γ (λ − ν)φ(ν) R_T(ν) dν
= (λI − T) φ(T) = φ(T) (λI − T).

If ψ(λ) ∈ ρ(ψ(T)), then ψ(λ)I − ψ(T) has an inverse [ψ(λ)I − ψ(T)]⁻¹ in B[X]. Since ψ(λ)I − ψ(T) = φ(T)(λI − T) = (λI − T)φ(T), we get

[ψ(λ)I − ψ(T)]⁻¹ φ(T) (λI − T) = [ψ(λ)I − ψ(T)]⁻¹ [ψ(λ)I − ψ(T)] = I,
(λI − T) φ(T) [ψ(λ)I − ψ(T)]⁻¹ = [ψ(λ)I − ψ(T)] [ψ(λ)I − ψ(T)]⁻¹ = I.

Thus λI − T has a left and a right bounded inverse, and so it has an inverse in B[X], which means that λ ∈ ρ(T), a contradiction. Therefore, if λ ∈ σ(T), then ψ(λ) ∈ σ(ψ(T)), and hence

ψ(σ(T)) = {ψ(λ) ∈ C: λ ∈ σ(T)} ⊆ σ(ψ(T)).

Conversely, take any ν ∉ ψ(σ(T)), which means that ν − ψ(λ) ≠ 0 for every λ ∈ σ(T), and consider the function φ′(·) = 1/(ν − ψ(·)): Λ′ → C, which lies in A(σ(T)) with Λ′ ⊆ Λ. Take Γ such that σ(T) ⊂ ins Γ ⊂ (ins Γ)⁻ ⊂ Λ′ as in Definition 4.14. Then Lemma 4.15 and Theorem 4.16 ensure that

φ′(T) [νI − ψ(T)] = [νI − ψ(T)] φ′(T) = (1/2πi) ∫_Γ R_T(λ) dλ = I.

Thus νI − ψ(T) has a bounded inverse, φ′(T) ∈ B[X], and so ν ∈ ρ(ψ(T)). Equivalently, if ν ∈ σ(ψ(T)), then ν ∈ ψ(σ(T)). That is, σ(ψ(T)) ⊆ ψ(σ(T)).
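Numerically, the Spectral Mapping Theorem can be watched in action: the eigenvalues of ψ(T), with ψ(T) computed from the contour integral, are exactly the values ψ(λ) at the eigenvalues λ of T. The matrix and the function ψ(λ) = λ² + 1 below are assumptions of the illustration.

```python
import numpy as np

def psi_of_T(T, psi, center, radius, n=256):
    # (1/2πi) ∮ ψ(λ)(λI − T)^{-1} dλ by the trapezoid rule on a circle.
    d = T.shape[0]
    acc = np.zeros((d, d), dtype=complex)
    for k in range(n):
        lam = center + radius * np.exp(2j * np.pi * k / n)
        dlam = 2j * np.pi * (lam - center) / n
        acc += psi(lam) * dlam * np.linalg.inv(lam * np.eye(d) - T)
    return acc / (2j * np.pi)

T = np.array([[1.0, 2.0], [0.0, 3.0]])              # σ(T) = {1, 3}
psi = lambda z: z**2 + 1                             # ψ entire, so ψ ∈ A(σ(T))
psiT = psi_of_T(T, psi, center=2.0, radius=5.0)
spec = np.sort_complex(np.linalg.eigvals(psiT))
print(spec.real)                                     # ≈ [2, 10] = ψ({1, 3})
```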
4.4 Riesz Decomposition Theorem and Riesz Idempotents

A clopen set in a topological space is a set that is both open and closed in it. If a topological space has a nontrivial (i.e., proper and nonempty) clopen set, then (by definition) it is disconnected. Consider an operator T ∈ B[X] on a complex Banach space X. There are several different definitions of spectral set. We stick to the classical one [39, Definition VII.3.17]: a spectral set is a clopen set in the spectrum σ(T) (i.e., a subset of σ(T) ⊆ C that is both open
and closed relative to σ(T)). Suppose Δ ⊆ σ(T) is a spectral set. Then there is a function ψ_Δ ∈ A(σ(T)) (analytic on a neighborhood of σ(T)) such that

ψ_Δ(λ) = 1 for λ ∈ Δ, and ψ_Δ(λ) = 0 for λ ∈ σ(T)\Δ.

It does not matter which value ψ_Δ(λ) takes for those λ in a neighborhood of σ(T) but not in σ(T). Recall that if Δ is nontrivial (i.e., if ∅ ≠ Δ ≠ σ(T)), then σ(T) must be disconnected. Set E_Δ = E_Δ(T) = ψ_Δ(T), so that

E_Δ = ψ_Δ(T) = (1/2πi) ∫_Γ ψ_Δ(λ) R_T(λ) dλ = (1/2πi) ∫_{Γ_Δ} R_T(λ) dλ

for any path Γ such that σ(T) ⊂ ins Γ and (ins Γ)⁻ is included in a neighborhood of σ(T) as in Definition 4.14, where Γ_Δ is any path for which Δ ⊆ ins Γ_Δ and σ(T)\Δ ⊆ out Γ_Δ. The operator E_Δ ∈ B[X] is referred to as the Riesz idempotent associated with Δ. In particular, the Riesz idempotent associated with an isolated point λ0 of the spectrum of T, namely, E_{λ0} = E_{λ0}(T) = ψ_{λ0}(T), will be denoted by

E_{λ0} = (1/2πi) ∫_{Γ_{λ0}} R_T(λ) dλ = (1/2πi) ∫_{Γ_{λ0}} (λI − T)⁻¹ dλ,

where Γ_{λ0} is any simple closed rectifiable positively oriented curve (e.g., any positively oriented circle) enclosing λ0 but no other point of σ(T).

Let T be an operator in B[X]. It is readily verified that the collection of all spectral subsets of σ(T) forms a (Boolean) algebra of subsets of σ(T). Indeed, the empty set and the whole set σ(T) are spectral sets, complements relative to σ(T) of spectral sets remain spectral sets, and finite unions of spectral sets are spectral sets, and so are differences and intersections of spectral sets.

Lemma 4.19. Take any operator T ∈ B[X] and let E_Δ ∈ B[X] be the Riesz idempotent associated with an arbitrary spectral subset Δ of σ(T).

(a) E_Δ is a projection such that

(a1) E_∅ = O, E_{σ(T)} = I, and E_{σ(T)\Δ} = I − E_Δ.

If Δ1 and Δ2 are spectral subsets of σ(T), then

(a2) E_{Δ1∩Δ2} = E_{Δ1}E_{Δ2} and E_{Δ1∪Δ2} = E_{Δ1} + E_{Δ2} − E_{Δ1}E_{Δ2}.
(b) E_Δ commutes with every operator that commutes with T.

(c) R(E_Δ) is a subspace of X that is S-invariant for every operator S ∈ B[X] that commutes with T.

Proof. (a) Lemma 4.15 ensures that a Riesz idempotent deserves its name: E_Δ² = E_Δ, so that E_Δ ∈ B[X] is a projection. Observe from the definition of the function ψ_Δ for an arbitrary spectral subset Δ of σ(T) that

(i) ψ_∅ = 0, ψ_{σ(T)} = 1, ψ_{σ(T)\Δ} = 1 − ψ_Δ,

and

(ii) ψ_{Δ1∩Δ2} = ψ_{Δ1}ψ_{Δ2} and ψ_{Δ1∪Δ2} = ψ_{Δ1} + ψ_{Δ2} − ψ_{Δ1}ψ_{Δ2}.

Since E_Δ = ψ_Δ(T) = (1/2πi) ∫_Γ ψ_Δ(λ) R_T(λ) dλ for every spectral subset Δ of σ(T), where Γ is any path as in Definition 4.14, the identities in (a1) and (a2) follow from the identities in (i) and (ii), respectively (using the linearity of the integral, Lemma 4.15, and Theorem 4.16).

(b) Suppose S ∈ B[X] commutes with T. Corollary 4.17 ensures that

ST = TS implies SE_Δ = E_Δ S.

(c) Thus R(E_Δ) is S-invariant. Moreover, R(E_Δ) is a subspace (i.e., a closed linear manifold) of X, since E_Δ is bounded and R(E_Δ) = N(I − E_Δ). In particular, R(E_Δ) is an invariant subspace for T.

Theorem 4.20. Take T ∈ B[X]. If Δ is any spectral subset of σ(T), then σ(T|_{R(E_Δ)}) = Δ. Moreover, the map Δ → E_Δ that assigns to each spectral subset of σ(T) the associated Riesz idempotent E_Δ ∈ B[X] is injective.

Proof. Consider the results in Lemma 4.19(a1). If Δ = σ(T), then E_Δ = I, so that R(E_Δ) = X, and hence σ(T|_{R(E_Δ)}) = σ(T). On the other hand, if Δ = ∅, then E_Δ = O, so that R(E_Δ) = {0}, and hence T|_{R(E_Δ)}: {0} → {0} is the null operator on the zero space, which implies that σ(T|_{R(E_Δ)}) = ∅. In both cases the identity σ(T|_{R(E_Δ)}) = Δ holds trivially. (Recall that the spectrum of a bounded linear operator on a nonzero complex Banach space is nonempty.) Thus suppose ∅ ≠ Δ ≠ σ(T). Let ψ_Δ ∈ A(σ(T)) be the function that defines the Riesz idempotent E_Δ associated with Δ,

ψ_Δ(λ) = 1 for λ ∈ Δ, and ψ_Δ(λ) = 0 for λ ∈ σ(T)\Δ,

so that

E_Δ = ψ_Δ(T) = (1/2πi) ∫_Γ ψ_Δ(λ) R_T(λ) dλ

for any path Γ as in Definition 4.14, and let ϕ ∈ A(σ(T)) be the identity function on σ(T), that is, ϕ(λ) = λ for every λ ∈ σ(T), so that (cf. Theorem 4.16)

T = ϕ(T) = (1/2πi) ∫_Γ ϕ(λ) R_T(λ) dλ.

Claim 1. σ(T E_Δ) = Δ ∪ {0}.
Proof. Since T E_Δ = ϕ(T)ψ_Δ(T) and

ϕ(T)ψ_Δ(T) = (1/2πi) ∫_Γ ϕ(λ)ψ_Δ(λ) R_T(λ) dλ = (1/2πi) ∫_Γ (ϕψ_Δ)(λ) R_T(λ) dλ,

and since ϕψ_Δ ∈ A(σ(T)), the Spectral Mapping Theorem ensures that σ(T E_Δ) = (ϕψ_Δ)(σ(T)) = {λψ_Δ(λ) ∈ C: λ ∈ σ(T)} = Δ ∪ {0} (cf. Lemma 4.15 and Theorems 4.16 and 4.18).

Claim 2. σ(T E_Δ) = σ(T|_{R(E_Δ)}) ∪ {0}.

Proof. Since E_Δ is a projection, R(E_Δ) and N(E_Δ) are algebraic complements, which means that R(E_Δ) ∩ N(E_Δ) = {0} and X = R(E_Δ) ⊕ N(E_Δ) — a plain direct sum (no orthogonality in a pure Banach space setup). Thus, with respect to the complementary decomposition X = R(E_Δ) ⊕ N(E_Δ), we get E_Δ = I ⊕ O and, because E_Δ T = T E_Δ, also T E_Δ = T|_{R(E_Δ)} ⊕ O. Then λI − T E_Δ = (λI − T|_{R(E_Δ)}) ⊕ λI for every λ ∈ C, and hence λ ∈ ρ(T E_Δ) if and only if λ ∈ ρ(T|_{R(E_Δ)}) and λ ≠ 0. Equivalently, λ ∈ σ(T E_Δ) if and only if λ ∈ σ(T|_{R(E_Δ)}) or λ = 0.

Claim 3. σ(T|_{R(E_Δ)}) ∪ {0} = Δ ∪ {0}.

Proof. Claims 1 and 2.

Claim 4. 0 ∈ Δ ⟺ 0 ∈ σ(T|_{R(E_Δ)}).

Proof. Suppose 0 ∈ Δ. Thus there exists a function φ0 ∈ A(σ(T)) such that

φ0(λ) = 0 for λ ∈ Δ, and φ0(λ) = λ⁻¹ for λ ∈ σ(T)\Δ.

Observe that ϕφ0 lies in A(σ(T)) and

ϕφ0 = 1 − ψ_Δ on σ(T),

and hence, since ϕ(T) = T and φ0(T) ∈ B[X],

T φ0(T) = φ0(T) T = I − ψ_Δ(T) = I − E_Δ.

If 0 ∉ σ(T|_{R(E_Δ)}), then T|_{R(E_Δ)} has a bounded inverse on the Banach space R(E_Δ), which means that there exists an operator [T|_{R(E_Δ)}]⁻¹ on R(E_Δ) such that T|_{R(E_Δ)}[T|_{R(E_Δ)}]⁻¹ = [T|_{R(E_Δ)}]⁻¹T|_{R(E_Δ)} = I. Consider the operator [T|_{R(E_Δ)}]⁻¹E_Δ in B[X], so that, since Tψ_Δ(T) = ψ_Δ(T)T,

T ([T|_{R(E_Δ)}]⁻¹E_Δ + φ0(T)) = ([T|_{R(E_Δ)}]⁻¹E_Δ + φ0(T)) T = I ∈ B[X].

Then T has a bounded inverse on X, namely [T|_{R(E_Δ)}]⁻¹E_Δ + φ0(T), so that 0 ∈ ρ(T), which is a contradiction (since 0 ∈ Δ ⊆ σ(T)). Therefore,
0 ∈ Δ implies 0 ∈ σ(T|_{R(E_Δ)}).
Conversely, suppose 0 ∉ Δ. Thus there is a function φ1 ∈ A(σ(T)) such that

φ1(λ) = λ⁻¹ for λ ∈ Δ, and φ1(λ) = 0 for λ ∈ σ(T)\Δ.

Again, observe that ϕφ1 lies in A(σ(T)) and

ϕφ1 = ψ_Δ on σ(T),

and hence, since ϕ(T) = T and φ1(T) ∈ B[X],

T φ1(T) = φ1(T) T = ψ_Δ(T) = E_Δ.

Since φ1(T)ψ_Δ(T) = ψ_Δ(T)φ1(T) by Lemma 4.15, it follows that R(E_Δ) is invariant for the operator φ1(T). So φ1(T)|_{R(E_Δ)} lies in B[R(E_Δ)], and is such that

T|_{R(E_Δ)} φ1(T)|_{R(E_Δ)} = φ1(T)|_{R(E_Δ)} T|_{R(E_Δ)} = E_Δ|_{R(E_Δ)} = I ∈ B[R(E_Δ)].

Then T|_{R(E_Δ)} has a bounded inverse on R(E_Δ), namely φ1(T)|_{R(E_Δ)}, so that 0 ∈ ρ(T|_{R(E_Δ)}). Thus 0 ∉ Δ implies 0 ∉ σ(T|_{R(E_Δ)}). Equivalently,

0 ∈ σ(T|_{R(E_Δ)}) implies 0 ∈ Δ.

This concludes the proof of Claim 4.

Claims 3 and 4 ensure the identity σ(T|_{R(E_Δ)}) = Δ.

Finally, consider the map Δ → E_Δ that assigns to each spectral subset of σ(T) the associated Riesz idempotent E_Δ ∈ B[X]. We have already seen at the beginning of this proof that if E_Δ = O, then σ(T|_{R(E_Δ)}) = ∅. Hence

E_Δ = O ⟹ Δ = ∅

by the above displayed identity (which holds for all spectral subsets of σ(T)). Take arbitrary spectral subsets Δ1 and Δ2 of σ(T). From Lemma 4.19(a),

E_{Δ1\Δ2} = E_{Δ1∩(σ(T)\Δ2)} = E_{Δ1}(I − E_{Δ2}) = E_{Δ1} − E_{Δ1}E_{Δ2},

where E_{Δ1} and E_{Δ2} commute since E_{Δ1}E_{Δ2} = E_{Δ1∩Δ2} = E_{Δ2}E_{Δ1}, and so

(E_{Δ1} − E_{Δ2})² = E_{Δ1} + E_{Δ2} − 2E_{Δ1}E_{Δ2} = E_{Δ1\Δ2} + E_{Δ2\Δ1} = E_{Δ1△Δ2},

where Δ1△Δ2 = (Δ1\Δ2) ∪ (Δ2\Δ1) is the symmetric difference. Thus

E_{Δ1} = E_{Δ2} ⟹ E_{Δ1△Δ2} = (E_{Δ1} − E_{Δ2})² = O ⟹ Δ1△Δ2 = ∅ ⟹ Δ1 = Δ2,

since Δ1△Δ2 = ∅ if and only if Δ1 = Δ2, which proves injectivity.
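For matrices, Theorem 4.20 and Lemma 4.19 can be verified directly: integrating the resolvent around one clopen piece of the spectrum produces an idempotent, and the idempotents attached to a partition of the spectrum sum to the identity. The matrix below (a Jordan block J₂(1) direct-summed with [5]) and the contours are assumptions of this illustration.

```python
import numpy as np

def riesz_idempotent(T, center, radius, n=256):
    # E_Δ = (1/2πi) ∮_{Γ_Δ} (λI − T)^{-1} dλ, Γ_Δ a circle enclosing Δ only.
    d = T.shape[0]
    acc = np.zeros((d, d), dtype=complex)
    for k in range(n):
        lam = center + radius * np.exp(2j * np.pi * k / n)
        dlam = 2j * np.pi * (lam - center) / n
        acc += dlam * np.linalg.inv(lam * np.eye(d) - T)
    return acc / (2j * np.pi)

T = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 5.0]])                    # σ(T) = {1} ∪ {5}
E1 = riesz_idempotent(T, center=1.0, radius=1.0)   # Δ1 = {1}
E2 = riesz_idempotent(T, center=5.0, radius=1.0)   # Δ2 = {5}
print(np.allclose(E1 @ E1, E1),                    # idempotent
      np.allclose(E1 + E2, np.eye(3)),             # E_{Δ1} + E_{Δ2} = I
      np.allclose(E1 @ E2, 0))                     # E_{Δ1}E_{Δ2} = O
```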
Remark. Let T be an operator in B[X] and suppose σ(T) is disconnected, so that there are nontrivial sets in the algebra of all spectral subsets of σ(T). We show that to each partition of σ(T) into spectral sets there corresponds a decomposition of the resolvent function R_T. Indeed, consider a finite partition {Δ_j}_{j=1}^n of σ(T) into nontrivial spectral subsets; that is, the Δ_j with ∅ ≠ Δ_j ≠ σ(T) are clopen subsets of σ(T) for each j = 1, …, n such that

⋃_{j=1}^n Δ_j = σ(T) with Δ_j ∩ Δ_k = ∅ whenever j ≠ k.

Consider the Riesz idempotent associated with each Δ_j, namely,

E_{Δ_j} = (1/2πi) ∫_{Γ_{Δ_j}} R_T(λ) dλ.

The results of Lemma 4.19 readily extend to any integer n ≥ 2, so that each E_{Δ_j} is a nontrivial projection (O ≠ E_{Δ_j} = E_{Δ_j}² ≠ I) and

Σ_{j=1}^n E_{Δ_j} = I with E_{Δ_j}E_{Δ_k} = O whenever j ≠ k.

Take an arbitrary λ ∈ ρ(T). For each j = 1, …, n set

T_j = T E_{Δ_j} and R_j(λ) = R_T(λ) E_{Δ_j}

in B[X], so that

Σ_{j=1}^n T_j = T and Σ_{j=1}^n R_j(λ) = R_T(λ).

Observe that these operators commute. Indeed, recall that T R_T(λ) = R_T(λ) T and T E_{Δ_j} = E_{Δ_j} T, and hence R_T(λ) E_{Δ_j} = E_{Δ_j} R_T(λ) (cf. Corollary 4.17 and Lemma 4.19). Therefore, if j ≠ k, then

T_j T_k = R_j(λ) R_k(λ) = T_k R_j(λ) = R_j(λ) T_k = O.

On the other hand, for every j = 1, …, n, T_j R_j(λ) = R_j(λ) T_j = T R_T(λ) E_{Δ_j} and, since (λI − T) R_T(λ) = R_T(λ)(λI − T) = I,

(λE_{Δ_j} − T_j) R_j(λ) = R_j(λ)(λE_{Δ_j} − T_j) = E_{Δ_j}.

Moreover, Claim 1 in the proof of Theorem 4.20 says that σ(T_j) = Δ_j ∪ {0}. Note that the spectral radii satisfy r(T_j) ≤ r(T). Since T_j^k = T^k E_{Δ_j} for every positive integer k, it follows by Corollary 2.12 that

R_j(λ) = (λI − T)⁻¹ E_{Δ_j} = (1/λ) Σ_{k=0}^{∞} (T/λ)^k E_{Δ_j}

if λ ∈ ρ(T) is such that r(T) < |λ| and, again by Corollary 2.12,

R_{T_j}(λ) = (λI − T E_{Δ_j})⁻¹ = (1/λ) Σ_{k=0}^{∞} (T_j/λ)^k

if λ ∈ ρ(T_j) is such that r(T_j) < |λ|, where the above series converge uniformly in B[X]. Hence, if r(T) < |λ|, then R_j(λ) = R_T(λ)E_{Δ_j} = R_{T_j}(λ)E_{Δ_j}, and each R_j can actually be extended to be analytic on ρ(T_j).

The properties of Riesz idempotents in Lemma 4.19 and Theorem 4.20 lead to a major result in operator theory, which says that operators whose spectra are disconnected have nontrivial hyperinvariant subspaces. (For the original statement of the Riesz Decomposition Theorem see [77, p. 421].)

Corollary 4.21. (Riesz Decomposition Theorem). Let T ∈ B[X] be an operator on a complex Banach space X. If σ(T) = Δ1 ∪ Δ2, where Δ1 and Δ2 are disjoint nonempty closed sets, then T has a complementary pair {M1, M2} of nontrivial hyperinvariant subspaces, viz., M1 = R(E_{Δ1}) and M2 = R(E_{Δ2}), such that σ(T|_{M1}) = Δ1 and σ(T|_{M2}) = Δ2.

Proof. If Δ1 and Δ2 are disjoint closed sets in C such that σ(T) = Δ1 ∪ Δ2, then they are both clopen subsets of σ(T) (i.e., spectral subsets of σ(T)), and hence the Riesz idempotents E_{Δ1} and E_{Δ2} associated with them are such that their ranges M1 = R(E_{Δ1}) and M2 = R(E_{Δ2}) are subspaces of X that are hyperinvariant for T by Lemma 4.19(c). Since Δ1 and Δ2 are nontrivial (i.e., ∅ ≠ Δ1 ≠ σ(T) and ∅ ≠ Δ2 ≠ σ(T)), it follows that σ(T) is disconnected (since they are both clopen subsets of σ(T)) and the projections E_{Δ1} and E_{Δ2} are nontrivial (i.e., O ≠ E_{Δ1} ≠ I and O ≠ E_{Δ2} ≠ I, by the injectivity of Theorem 4.20), and so the subspaces M1 and M2 are nontrivial (i.e., {0} ≠ R(E_{Δ1}) ≠ X and {0} ≠ R(E_{Δ2}) ≠ X). Lemma 4.19(a) ensures that E_{Δ1} + E_{Δ2} = I and E_{Δ1}E_{Δ2} = O, which means that E_{Δ1} and E_{Δ2} are complementary projections (not necessarily orthogonal even if X were a Hilbert space), and so their ranges are complementary subspaces (i.e., M1 + M2 = X and M1 ∩ M2 = {0}) as in Section 1.4. Finally, Theorem 4.20 ensures that σ(T|_{M1}) = Δ1 and σ(T|_{M2}) = Δ2.

Remark.
Since M1 = R(EΔ1 ) and M2 = R(EΔ2 ) are complementary subspaces (i.e., M1 + M2 = X and M1 ∩ M2 = {0}), it follows that X can be identified with the direct sum M1 ⊕ M2 (not necessarily an orthogonal direct sum even if X were a Hilbert space), which means that there exists a
natural isomorphism Ψ: X → M1 ⊕ M2 between the normed spaces X and M1 ⊕ M2 (see, e.g., [66, Theorem 2.14]). The normed space M1 ⊕ M2 is not necessarily complete, and the invertible linear transformation Ψ is not necessarily bounded. In other words, X ∼ M1 ⊕ M2, where ∼ stands for algebraic similarity (i.e., isomorphic equivalence) and ⊕ for direct (not necessarily orthogonal) sum. Thus, since M1 and M2 are both T-invariant (recall that both E_{Δ1} and E_{Δ2} commute with T by Lemma 4.19), it follows that T ∼ T|_{M1} ⊕ T|_{M2}, which means Ψ T Ψ⁻¹ = T|_{M1} ⊕ T|_{M2}. (If X were Hilbert and the subspaces orthogonal, then we might say that they reduce T.)

We say that a pair Δ1 and Δ2 of subsets of σ(T) are complementary spectral sets for T if, besides being spectral sets (i.e., subsets of σ(T) that are both open and closed relative to σ(T)), they also form a nontrivial partition of σ(T); that is, ∅ ≠ Δ1 ≠ σ(T), ∅ ≠ Δ2 ≠ σ(T), Δ1 ∪ Δ2 = σ(T), and Δ1 ∩ Δ2 = ∅. Observe that the Riesz Decomposition Theorem says that, for every pair of complementary spectral sets, the ranges of their Riesz idempotents are complementary nontrivial hyperinvariant subspaces for T, and the spectrum of the restriction of T to each of those ranges coincides with the corresponding spectral set.

Corollary 4.22. Let Δ1 and Δ2 be complementary spectral sets for an operator T ∈ B[X], and let E_{Δ1} and E_{Δ2} be the Riesz idempotents associated with them. If λ ∈ Δ1, then

E_{Δ2}(N(λI − T)) = {0},
E_{Δ2}(R(λI − T)) = R(E_{Δ2}),
E_{Δ1}(N(λI − T)) = N(λI − T) ⊆ R(E_{Δ1}),
E_{Δ1}(R(λI − T)) = R((λI − T)E_{Δ1}) ⊆ R(E_{Δ1}),

so that R(λI − T) = E_{Δ1}(R(λI − T)) + R(E_{Δ2}). If dim R(E_{Δ1}) < ∞, then dim N(λI − T) < ∞ and R(λI − T) is closed.

Proof. Let T be an operator on a complex Banach space X.

Claim 1. If Δ1 and Δ2 are complementary spectral sets and λ ∈ C, then

(a)
N (λI − T ) = EΔ1 (N (λI − T )) + EΔ2 (N (λI − T )),
(b)
R(λI − T ) = EΔ1 (R(λI − T )) + EΔ2 (R(λI − T )).
Proof. Let E_{Δ1} and E_{Δ2} be the Riesz idempotents associated with Δ1 and Δ2. Lemma 4.19(a) ensures that R(E_{Δ1}) and R(E_{Δ2}) are complementary subspaces (i.e., X = R(E_{Δ1}) + R(E_{Δ2}) and R(E_{Δ1}) ∩ R(E_{Δ2}) = {0}, since Δ1 and Δ2 are complementary spectral sets, so that E_{Δ1} and E_{Δ2} are complementary projections). Thus (cf. Section 1.1), there is a unique decomposition
x = u + v = EΔ1 x + EΔ2 x for every vector x ∈ X , with u ∈ R(EΔ1 ) and v ∈ R(EΔ2 ), where u = EΔ1 x and v = EΔ2 x (since EΔ1 (R(EΔ2 )) = {0} and EΔ2 (R(EΔ1 )) = {0} because EΔ1 EΔ2 = EΔ2 EΔ1 = O). So we get the decompositions in (a) and (b). Claim 2. If λ ∈ σ(T )\Δ for some spectral set Δ of σ(T ), then EΔ (N (λI − T )) = {0}
and
EΔ (R(λI − T )) = R(EΔ ).
Proof. Let Δ be any spectral set of σ(T) and let E_Δ be the Riesz idempotent associated with it. Since R(E_Δ) is an invariant subspace for T (cf. Lemma 4.19), take the restriction T|_{R(E_Δ)} ∈ B[R(E_Δ)] of T to the Banach space R(E_Δ). If λ ∈ σ(T)\Δ, then λ ∈ ρ(T|_{R(E_Δ)}). So (λI − T)|_{R(E_Δ)} = λI|_{R(E_Δ)} − T|_{R(E_Δ)} has a bounded inverse, where I|_{R(E_Δ)} = E_Δ|_{R(E_Δ)} stands for the identity on R(E_Δ). Thus, since R(E_Δ) is T-invariant,

R_{T|_{R(E_Δ)}}(λ) (λI − T)E_Δ = (λI|_{R(E_Δ)} − T|_{R(E_Δ)})⁻¹ (λI|_{R(E_Δ)} − T|_{R(E_Δ)}) E_Δ = I|_{R(E_Δ)} E_Δ = E_Δ,

where R_{T|_{R(E_Δ)}}: ρ(T|_{R(E_Δ)}) → G[R(E_Δ)] is the resolvent function of T|_{R(E_Δ)}. Now take an arbitrary x ∈ N(λI − T). Since E_Δ T = T E_Δ,

E_Δ x = R_{T|_{R(E_Δ)}}(λ) (λI − T) E_Δ x = R_{T|_{R(E_Δ)}}(λ) E_Δ (λI − T) x = 0.

Therefore, E_Δ(N(λI − T)) = {0}. Moreover, since (λI − T)E_Δ acts on R(E_Δ) as λE_Δ − T E_Δ = λI|_{R(E_Δ)} − T|_{R(E_Δ)}, it follows that, if λ ∈ σ(T)\Δ, then λ ∈ ρ(T|_{R(E_Δ)}) by Theorem 4.20 (where T|_{R(E_Δ)} ∈ B[R(E_Δ)]), so that (see the diagram of Section 2.2)

R((λI − T)E_Δ) = R(λI|_{R(E_Δ)} − T|_{R(E_Δ)}) = R(E_Δ).

Since T E_Δ = E_Δ T, and A(R(B)) = R(AB) for all operators A and B,

E_Δ(R(λI − T)) = R(E_Δ(λI − T)) = R((λI − T)E_Δ) = R(E_Δ).

This concludes the proof of Claim 2.

From now on suppose λ ∈ Δ1. In this case, Claim 2 ensures that E_{Δ2}(N(λI − T)) = {0}, and so it follows by Claim 1(a) that
N(λI − T) = E_{Δ1}(N(λI − T)) ⊆ R(E_{Δ1}).

Hence

dim R(E_{Δ1}) < ∞ ⟹ dim N(λI − T) < ∞.

Again, from Claim 2 we get E_{Δ2}(R(λI − T)) = R(E_{Δ2}). Thus, from Claim 1(b),

R(λI − T) = E_{Δ1}(R(λI − T)) + R(E_{Δ2}),

where (since T E_{Δ1} = E_{Δ1} T and A(R(B)) = R(AB) for all operators A and B)

E_{Δ1}(R(λI − T)) = R(E_{Δ1}(λI − T)) = R((λI − T)E_{Δ1}) ⊆ R(E_{Δ1}).

Recall that R(E_{Δ2}) is a subspace (i.e., a closed linear manifold) of X (Lemma 4.19). If R(E_{Δ1}) is finite dimensional, then E_{Δ1}(R(λI − T)) is finite dimensional, and so R(λI − T) is the sum of a finite-dimensional linear manifold and a closed linear manifold, which is closed (cf. Proposition 1.C). Hence

dim R(E_{Δ1}) < ∞ ⟹ R(λI − T) is closed.
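In the finite-dimensional illustration below (the matrix and contours are assumptions of the example), λ = 1 ∈ Δ1 = {1}: the complementary idempotent E_{Δ2} annihilates the eigenspace N(λI − T), while E_{Δ1} fixes it, as Corollary 4.22 asserts.

```python
import numpy as np

def riesz_idempotent(T, center, radius, n=256):
    # (1/2πi) ∮ (λI − T)^{-1} dλ over a circle isolating one spectral set.
    d = T.shape[0]
    acc = np.zeros((d, d), dtype=complex)
    for k in range(n):
        lam = center + radius * np.exp(2j * np.pi * k / n)
        dlam = 2j * np.pi * (lam - center) / n
        acc += dlam * np.linalg.inv(lam * np.eye(d) - T)
    return acc / (2j * np.pi)

T = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 5.0]])                 # Δ1 = {1}, Δ2 = {5}
E1 = riesz_idempotent(T, 1.0, 1.0)
E2 = riesz_idempotent(T, 5.0, 1.0)
v = np.array([1.0, 0.0, 0.0])                   # eigenvector of T at λ = 1
print(np.allclose(E2 @ v, 0),                   # E_{Δ2}(N(λI − T)) = {0}
      np.allclose(E1 @ v, v))                   # N(λI − T) ⊆ R(E_{Δ1})
```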
Remark. Let λ0 be an isolated point of the spectrum of an operator T in B[X]. A particular case of the preceding corollary, for Δ1 = {λ0}, says that if dim R(E_{λ0}) < ∞, then dim N(λ0I − T) < ∞ and R(λ0I − T) is closed. We shall prove the converse in Theorem 5.19 (in the next chapter):

dim R(E_{λ0}) < ∞ ⟺ dim N(λ0I − T) < ∞ and R(λ0I − T) is closed.

Also note that λ0I − T|_{R(E_{λ0})} is a quasinilpotent operator on R(E_{λ0}). Indeed, σ(T|_{R(E_{λ0})}) = {λ0} according to Theorem 4.20, and so the Spectral Mapping Theorem (Theorem 2.7) ensures that σ(λ0I − T|_{R(E_{λ0})}) = {0}.

Corollary 4.23. Let T ∈ B∞[X] be a compact operator. If λ ∈ σ(T)\{0}, then λ is an isolated point of σ(T) such that dim R(E_λ) < ∞, where E_λ ∈ B[X] is the Riesz idempotent associated with it, and

{0} ≠ N(λI − T) ⊆ R(E_λ) ⊆ N((λI − T)^n)

for some positive integer n such that R(E_λ) ⊈ N((λI − T)^{n−1}).
Proof. Let T be a compact operator on a nonzero complex Banach space X. We refer to the results of Section 2.6, which were proved on a Hilbert space but, as mentioned there, still hold on a Banach space. If λ ∈ σ(T)\{0}, then λ is an isolated point of σ(T) by Corollary 2.20. Take the restriction T|_{R(E_λ)} ∈ B[R(E_λ)] of T to the Banach space R(E_λ), which is again compact (Proposition 1.V). Recall from Theorem 4.20 that σ(T|_{R(E_λ)}) = {λ}. Since λ ≠ 0, we get 0 ∈ ρ(T|_{R(E_λ)}), and so the compact operator T|_{R(E_λ)} ∈ B∞[R(E_λ)] is invertible, which implies that (cf. Proposition 1.Y)

dim R(E_λ) < ∞.

Set m = dim R(E_λ). Note that m ≠ 0, because E_λ ≠ O by the injectivity of Theorem 4.20. Since σ(T|_{R(E_λ)}) = {λ}, we get by the Spectral Mapping Theorem (cf. Theorem 2.7) that σ(λI − T|_{R(E_λ)}) = {0}; that is, the operator λI − T|_{R(E_λ)} is quasinilpotent on the m-dimensional space R(E_λ). This implies that λI − T|_{R(E_λ)} in B[R(E_λ)] is a nilpotent operator for which (λI − T|_{R(E_λ)})^m = O. Indeed, this is a purely finite-dimensional algebraic result, obtained together with the well-known Cayley–Hamilton Theorem (see [48, Theorem 58.2]). Thus (since E_λ is a projection that commutes with T and whose range is T-invariant) it follows that (λI − T)^m E_λ = O. Moreover, since X ≠ {0} we get (λI − T)⁰ = I ≠ O, and so (since E_λ ≠ O) (λI − T)⁰E_λ ≠ O. Thus there exists an integer n ∈ [1, m] such that

(λI − T)^n E_λ = O and (λI − T)^{n−1} E_λ ≠ O.

In other words,

R(E_λ) ⊆ N((λI − T)^n) and R(E_λ) ⊈ N((λI − T)^{n−1}).

(Recall that AB = O if and only if R(B) ⊆ N(A), for all operators A and B.) The remaining assertions are readily verified. In fact, {0} ≠ N(λI − T), since λ ∈ σ(T)\{0} is an eigenvalue of the compact operator T (by the Fredholm Alternative of Theorem 2.18), and N(λI − T) ⊆ R(E_λ) by Corollary 4.22.

Remark. Corollary 4.23 and Proposition 4.F ensure that, if T ∈ B∞[X] (i.e., if T is compact) and λ ∈ σ(T)\{0}, then λ is a pole of the resolvent function R_T: ρ(T) → G[X], and the integer n in the statement of the preceding result is the order of the pole λ. Moreover, it follows from Proposition 4.G that R(E_λ) = N((λI − T)^n).
4.5 Additional Propositions

Consider a set S ⊆ B[H] of operators on a Hilbert space H. The commutant S′ of S is the set S′ = {T ∈ B[H]: TS = ST for every S ∈ S} of all operators that commute with every operator in S, which is a unital subalgebra of B[H]. The double commutant S′′ of S is the unital algebra S′′ = (S′)′. The Double Commutant Theorem says that if A is a unital C*-subalgebra of B[H], then A⁻ = A′′, where A⁻ stands for the weak closure of A in B[H] (which coincides with the strong closure — see [27, Theorem IX.6.4]). A von Neumann algebra A is a C*-subalgebra of B[H] such that A = A′′, which is unital and is weakly (thus strongly) closed. Take a normal operator T on a separable Hilbert space H. Let A*(T) be the von Neumann algebra generated by T ∈ B[H], which is the intersection of all von Neumann algebras containing T, and coincides with the weak closure of P(T, T*) (see the paragraph following Definition 3.12).

Proposition 4.A. Consider the setup of Theorem 4.8. The range of Φ_T coincides with the von Neumann algebra generated by T (i.e., R(Φ_T) = A*(T)).

Proposition 4.B. Take T ∈ B[X]. If ψ ∈ A(σ(T)) and φ ∈ A(σ(ψ(T))), then (φ ◦ ψ) ∈ A(σ(T)) and φ(ψ(T)) = (φ ◦ ψ)(T). That is,

φ(ψ(T)) = (1/2πi) ∫_Γ φ(ψ(λ)) R_T(λ) dλ,
where Γ is any path about σ(T) such that σ(T) ⊂ ins Γ ⊂ (ins Γ)⁻ ⊂ Λ, and the open set Λ ⊆ C is the domain of φ ◦ ψ.

Proposition 4.C. Let E_Δ ∈ B[X] be the Riesz idempotent associated with a spectral subset Δ of σ(T). The point, residual, and continuous spectra of the restriction T|_{R(E_Δ)} ∈ B[R(E_Δ)] of T ∈ B[X] to R(E_Δ) are given by

σ_P(T|_{R(E_Δ)}) = Δ ∩ σ_P(T),  σ_R(T|_{R(E_Δ)}) = Δ ∩ σ_R(T),  σ_C(T|_{R(E_Δ)}) = Δ ∩ σ_C(T).

Proposition 4.D. If λ0 is an isolated point of the spectrum σ(T) of an operator T ∈ B[X], then the range of the Riesz idempotent E_{λ0} ∈ B[X] associated with λ0 is given by

R(E_{λ0}) = {x ∈ X: ‖(λ0I − T)^n x‖^{1/n} → 0}.

Proposition 4.E. If λ0 is an isolated point of the spectrum σ(T) of an operator T ∈ B[X], then (with convergence in B[X])

R_T(λ) = (λI − T)⁻¹ = Σ_{k=−∞}^{∞} (λ − λ0)^k T_k
= Σ_{k=1}^{∞} (λ − λ0)^{−k} T_{−k} + T_0 + Σ_{k=1}^{∞} (λ − λ0)^k T_k
for every λ in the punctured disk B_{δ0}(λ0)\{λ0} = {λ ∈ C: 0 < |λ − λ0| < δ0} ⊆ ρ(T), where δ0 = d(λ0, σ(T)\{λ0}) is the distance from λ0 to the rest of the spectrum, and T_k ∈ B[X] is given by

T_k = (1/2πi) ∫_{Γ_{λ0}} (λ − λ0)^{−(k+1)} R_T(λ) dλ
for every k ∈ Z, where Γλ0 is any positively oriented circle centered at λ0 with radius less than δ0. An isolated point of σ(T) is a singularity (or an isolated singularity) of the resolvent function RT : ρ(T) → G[X]. The expansion of RT in Proposition 4.E is the Laurent expansion of RT about an isolated point of the spectrum. Note that, for every positive integer k (i.e., for every k ∈ N), T−k = Eλ0 (T − λ0 I)^{k−1} = (T − λ0 I)^{k−1} Eλ0, and so T−1 = Eλ0, the Riesz idempotent associated with the isolated point λ0. The notion of poles associated with isolated singularities of analytic functions is extended as follows. An isolated point λ0 of σ(T) is a pole of order n of RT, for some n ∈ N, if T−n ≠ O and Tk = O for every k < −n (i.e., n is the largest positive integer such that T−n ≠ O). Otherwise, if the number of nonzero coefficients Tk with negative indices is infinite, then the isolated point λ0 of σ(T) is said to be an essential singularity of RT.

Proposition 4.F. Let λ0 be an isolated point of the spectrum σ(T) of T ∈ B[X], and let Eλ0 ∈ B[X] be the Riesz idempotent associated with λ0. (a) The isolated point λ0 is a pole of RT of order n if and only if

    (λ0 I − T)^n Eλ0 = O
and
    (λ0 I − T)^{n−1} Eλ0 ≠ O.
Therefore, if λ0 is a pole of RT of order n, then (b)
    {0} ≠ R((λ0 I − T)^{n−1} Eλ0) ⊆ N(λ0 I − T),
and so λ0 is an eigenvalue of T (i.e., λ0 ∈ σP (T )). Actually (by item (a) and Corollary 4.22), λ0 is a pole of RT of order 1 if and only if (c)
    {0} ≠ R(Eλ0) = N(λ0 I − T).
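As a concrete check of Propositions 4.E and 4.F, here is a small worked example (supplied for illustration, not from the text): for a 2×2 Jordan block the Laurent coefficients of the resolvent can be computed by hand.

```latex
% Worked example (illustrative): T the 2x2 Jordan block on X = C^2.
% Put N = T - \lambda_0 I, so that N \neq O, N^2 = O, and \sigma(T) = \{\lambda_0\}.
T = \begin{pmatrix} \lambda_0 & 1 \\ 0 & \lambda_0 \end{pmatrix},
\qquad
R_T(\lambda) = (\lambda I - T)^{-1}
  = (\lambda - \lambda_0)^{-1} I + (\lambda - \lambda_0)^{-2} N
  \quad (\lambda \neq \lambda_0).
% Hence T_{-1} = E_{\lambda_0} = I, \; T_{-2} = N \neq O, and T_k = O for k < -2:
% \lambda_0 is a pole of R_T of order 2. Consistently with Proposition 4.F(a),
(\lambda_0 I - T)^2 E_{\lambda_0} = N^2 = O
\qquad\text{and}\qquad
(\lambda_0 I - T) E_{\lambda_0} = -N \neq O.
```

Proposition 4.G is also visible here: R(Eλ0) = C² = N((λ0 I − T)²), while the eigenspace N(λ0 I − T) is only one-dimensional.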
Proposition 4.G. Take T ∈ B[X ]. If λ0 is a pole of RT of order n, then R(Eλ0 ) = N ((λ0 I − T )n )
and
R(EΔ ) = R((λ0 I − T )n ),
where Eλ0 denotes the Riesz idempotent associated with λ0 and EΔ denotes the Riesz idempotent associated with the spectral set Δ = σ(T )\{λ0 }. Proposition 4.H. Let EΔ be the Riesz idempotent associated with a spectral subset Δ of σ(T ). If dim R(EΔ ) < ∞, then Δ is a finite set of poles. Proposition 4.I. Take T ∈ B[X ]. Let Δ be a spectral subset of σ(T ), and let EΔ ∈ B[X ] be the Riesz idempotent associated with Δ. If ψ ∈ A(σ(T )), then ψ ∈ A(σ(T |R(EΔ ) )) and ψ(T )|R(EΔ ) = ψ(T |R(EΔ ) ), so that
    ( (1/2πi) ∫_{ΓΔ} ψ(λ) R_T(λ) dλ )|_{R(EΔ)} = (1/2πi) ∫_{ΓΔ} ψ(λ) R_{T|R(EΔ)}(λ) dλ.
For a normal operator on a complex Hilbert space, the functional calculus of Sections 4.2 and 4.3 (cf. Theorems 4.2 and 4.16) coincides for every analytic function on a neighborhood of the spectrum.

Proposition 4.J. If T = ∫ λ dEλ ∈ B[H] is the spectral decomposition of a normal operator T on a Hilbert space H and if ψ ∈ A(σ(T)), then

    ψ(T) = ∫ ψ(λ) dEλ = (1/2πi) ∫_Γ ψ(λ) R_T(λ) dλ,
with the first integral as in Lemma 3.7 and the second as in Definition 4.14. Consider Definition 3.6. The notion of spectral measure E : AΩ → B[X] (carrying the same properties of Definition 3.6) can be extended to a (complex) Banach space X, where the projections E(Λ) for Λ ∈ AΩ are not orthogonal but are bounded. If an operator T ∈ B[X] is such that E(Λ)T = T E(Λ) and σ(T|R(E(Λ))) ⊆ Λ− (see Lemma 3.16), then T is called a spectral operator (cf. [41, Definition XV.2.5]). Take ψ ∈ B(Ω), a bounded AΩ-measurable complex-valued function on Ω. An integral ∫ ψ(λ) dEλ can be defined in this Banach space setting as the uniform limit of integrals ∫ φn(λ) dEλ = Σ_{i=1}^{n} αi E(Λi) of measurable simple functions φn = Σ_{i=1}^{n} αi χΛi (i.e., of finite linear combinations of characteristic functions χΛi of measurable sets Λi ∈ AΩ) — see, e.g., [40, pp. 891–893]. If a spectral operator T ∈ B[X] is such that T = ∫ λ dEλ, then it is said to be of scalar type (cf. [41, Definition XV.4.1]). Proposition 4.J holds for Banach space spectral operators of scalar type. In this case, the preceding integral ∫ λ dEλ = ∫ ϕ(λ) dEλ (of the function ϕ(λ) = λ ∈ C for every λ ∈ Ω, defined as the limit of a sequence Σ_{i=1}^{n} βi E(Λi) of integrals of simple functions) coincides with the integral of Theorem 3.15 if X is a Hilbert space, and if the spectral measure E : AΩ → B[X], with Ω = σ(T), takes on orthogonal projections only.

Proposition 4.K. If T = ∫ λ dEλ ∈ B[H] is the spectral decomposition of a normal operator T on a Hilbert space H, then the orthogonal projection E(Λ) coincides with the Riesz idempotent EΛ for every clopen Λ in Aσ(T):

    E(Λ) = ∫_Λ dEλ = (1/2πi) ∫_{ΓΛ} R_T(λ) dλ = EΛ.
In particular, let T = Σ_k λk Ek be the spectral decomposition of a compact normal operator T on a Hilbert space, with {λk} = σP(T) and {Ek} being a countable resolution of the identity, where each orthogonal projection Ek is such that R(Ek) = N(λk I − T) (cf. Theorem 3.3). Take any nonzero eigenvalue λk in σ(T)\{0} = σP(T)\{0} = {λk}\{0}, so that dim N(λk I − T) < ∞ (cf. Theorems 1.19 and 2.18 and Corollary 2.20). Each orthogonal projection Ek coincides with the Riesz idempotent Eλk associated with each isolated point
0 ≠ λk of σ(T) according to Proposition 4.K. Therefore, by Corollary 4.23 and Proposition 4.G, it follows that N(λk I − T) = N((λk I − T)^{nk}) = R(Eλk), where the integer nk is the order of the pole λk and the integer dim R(Eλk) = dim N(λk I − T) < ∞ is the finite multiplicity of the eigenvalue λk. Note that dim R(Eλk) is sometimes called the algebraic multiplicity of the isolated point λk of the spectrum, while dim N(λk I − T) — the multiplicity of the eigenvalue λk — is sometimes referred to as the geometric multiplicity of λk. In the preceding case (i.e., if T is compact and normal), these multiplicities are finite and coincide.

Proposition 4.L. Every isolated point of the spectrum of a hyponormal operator is an eigenvalue. In fact, if λ0 is an isolated point of σ(T), then {0} ≠ R(Eλ0) ⊆ N(λ0 I − T) for every hyponormal operator T ∈ B[H], where Eλ0 is the Riesz idempotent associated with the isolated point λ0 of the spectrum of T.

Notes: For Proposition 4.A see, for instance, [27, Theorem IX.8.10]. Proposition 4.B is the composition counterpart of the product-preserving result of Lemma 4.15 (see, e.g., [56, Theorem 5.3.2] or [39, Theorem VII.3.12]). For Proposition 4.C see [39, Exercise VII.5.18] or [87, 1st edn. Theorem 5.7-B]. Proposition 4.D follows from Theorem 4.20, since, by Theorem 2.7, λ0 I − T|R(Eλ0) is quasinilpotent, so that lim_n ‖(λ0 I − T)^n x‖^{1/n} = 0 whenever x ∈ R(Eλ0). For the converse see [87, 1st edn. Lemma 5.8-C]. The Laurent expansion in Proposition 4.E and also Proposition 4.F are standard results (see [27, Lemma VII.6.11, Proposition 6.12, and Corollary 6.13]). Proposition 4.G refines the results of Proposition 4.F — see, e.g., [39, Theorem VII.3.24]. For Proposition 4.H see [39, Exercise VII.5.34]. Proposition 4.I is a consequence of Theorem 4.20 (see, e.g., [39, Theorem VII.3.20]). Proposition 4.J also holds for Banach space spectral operators of the scalar type (cf.
[41, Theorem XV.5.1]), and Proposition 4.K is a particular case of it for the characteristic function in a neighborhood of a clopen set in the σ-algebra of Borel subsets of σ(T ). Proposition 4.L is the extension of Proposition 3.G from normal to hyponormal operators, which is obtained by the Riesz Decomposition Theorem (Corollary 4.21) — see, for instance, [66, Problem 6.28].
Suggested Reading

Arveson [6], Brown and Pearcy [21], Conway [27, 29], Dowson [35], Dunford and Schwartz [39, 40, 41], Hille and Phillips [56], Sz.-Nagy and Foiaş [86], Radjavi and Rosenthal [76], Riesz and Sz.-Nagy [77], Taylor and Lay [87].
5 Fredholm Theory
The central theme of this chapter is compact perturbation. We shall be particularly concerned with properties of the spectrum of an operator that are invariant under compact perturbations; that is, properties of the spectrum of T that are also possessed by the spectrum of T + K for every compact operator K. As in Sections 1.8 and 2.6, we assume throughout this chapter that all operators lie in B[H], where H stands for a nonzero complex Hilbert space, although part of the theory developed here applies equally to operators on Banach spaces.
5.1 Fredholm Operators and Fredholm Index

Let B∞[H] be the (two-sided) ideal of all compact operators from B[H]. An operator T ∈ B[H] is left semi-Fredholm if there exist A ∈ B[H] and K ∈ B∞[H] such that AT = I + K, and right semi-Fredholm if there exist A ∈ B[H] and K ∈ B∞[H] such that T A = I + K. We say that T ∈ B[H] is semi-Fredholm if it is either left or right semi-Fredholm, and Fredholm if it is both left and right semi-Fredholm. Let Fℓ and Fr be the classes of all left semi-Fredholm operators and of all right semi-Fredholm operators:

    Fℓ = {T ∈ B[H]: AT = I + K for some A ∈ B[H] and some K ∈ B∞[H]},
    Fr = {T ∈ B[H]: T A = I + K for some A ∈ B[H] and some K ∈ B∞[H]}.

The classes of all semi-Fredholm and Fredholm operators from B[H] will be denoted by SF and F, respectively;

    SF = Fℓ ∪ Fr
and
    F = Fℓ ∩ Fr.
According to Proposition 1.W, K ∈ B∞[H] if and only if K∗ ∈ B∞[H]. Thus T ∈ Fℓ
if and only if
T ∗ ∈ Fr .
Therefore,
C.S. Kubrusly, Spectral Theory of Operators on Hilbert Spaces, DOI 10.1007/978-0-8176-8328-3_5, © Springer Science+Business Media, LLC 2012
    T ∈ SF if and only if T∗ ∈ SF,
    T ∈ F if and only if T∗ ∈ F.
The following properties of the kernel and range of a Hilbert space operator and its adjoint (Lemmas 1.4 and 1.5) will often be used throughout this chapter: N(T∗) = H ⊖ R(T)− = R(T)⊥, and R(T∗) is closed if and only if R(T) is closed.

Theorem 5.1. (a) An operator T ∈ B[H] is left semi-Fredholm if and only if R(T) is closed and N(T) is finite dimensional. (b) Hence,
    Fℓ = {T ∈ B[H]: R(T) is closed and dim N(T) < ∞},
    Fr = {T ∈ B[H]: R(T) is closed and dim N(T∗) < ∞}.
Proof. Let A, T , and K be operators on H, where K is compact. (a1 ) If T is left semi-Fredholm, then there are operators A and K such that AT = I + K, and so N (T ) ⊆ N (AT ) = N (I + K) and R(AT ) = R(I + K). The Fredholm Alternative (Corollary 1.20) says that dim N (I + K) < ∞ and R(I + K) is closed. Therefore, (i)
dim N (T ) < ∞,
(ii)
dim T (N (AT )) < ∞,
(iii)
R(AT ) is closed.
This implies that (iv)
T (N (AT )⊥ ) is closed.
Indeed, the restriction (AT)|N(AT)⊥ : N(AT)⊥ → H is bounded below by (iii), since (AT)|N(AT)⊥ is injective with range R(AT) (Theorem 1.2). Thus there exists an α > 0 such that α‖v‖ ≤ ‖AT v‖ ≤ ‖A‖ ‖T v‖ for every v ∈ N(AT)⊥. Then T|N(AT)⊥ : N(AT)⊥ → H is bounded below, and so T|N(AT)⊥ has a closed range (Theorem 1.2 again), proving (iv). But (ii) and (iv) imply that (v)
R(T ) is closed.
In fact, since H = N (AT ) + N (AT )⊥ by the Projection Theorem, it follows that R(T ) = T (H) = T (N (AT ) + N (AT )⊥ ) = T (N (AT )) + T (N (AT )⊥ ). Thus assertions (ii) and (iv) ensure that R(T ) is closed (since the sum of a closed linear manifold and a finite-dimensional linear manifold is closed —
see Proposition 1.C). This concludes the proof of (v) which, together with (i), concludes half of the claimed result. (a2) Conversely, suppose dim N(T) < ∞ and R(T) is closed. Since R(T) is closed, it follows that T|N(T)⊥ : N(T)⊥ → H (which is injective because N(T|N(T)⊥) = {0}) has a closed range R(T|N(T)⊥) = R(T). Hence it has a bounded inverse on its range (Theorem 1.2). Let E ∈ B[H] be the orthogonal projection onto R(E) = R(T|N(T)⊥) = R(T), and define A ∈ B[H] as follows: A = (T|N(T)⊥)^{−1} E. If u ∈ N(T), then AT u = 0 trivially. On the other hand, if v ∈ N(T)⊥, then AT v = (T|N(T)⊥)^{−1} E T|N(T)⊥ v = (T|N(T)⊥)^{−1} T|N(T)⊥ v = v. Thus, for every x = u + v in H = N(T) + N(T)⊥, we get AT x = AT u + AT v = v = E′x, where E′ ∈ B[H] is the orthogonal projection onto N(T)⊥. Therefore, AT = I + K, where −K = I − E′ ∈ B[H] is the complementary orthogonal projection onto the finite-dimensional space N(T). Hence K is a finite-rank operator, and so compact (Proposition 1.X). Thus T is left semi-Fredholm. (b) Recalling that T ∈ Fℓ if and only if T∗ ∈ Fr, and R(T∗) is closed if and only if R(T) is closed (Lemma 1.5), item (b) follows from item (a).

Corollary 5.2. Take an operator T ∈ B[H]. (a) T is semi-Fredholm if and only if R(T) is closed and N(T) or N(T∗) is finite dimensional. (b) T is Fredholm if and only if R(T) is closed and both N(T) and N(T∗) are finite dimensional. In other words,

    SF = {T ∈ B[H]: R(T) is closed, and dim N(T) < ∞ or dim N(T∗) < ∞},
    F = {T ∈ B[H]: R(T) is closed, and dim N(T) < ∞ and dim N(T∗) < ∞}.

Therefore, the complement of F,

    B[H]\F = B[H]\(Fℓ ∩ Fr) = (B[H]\Fℓ) ∪ (B[H]\Fr),

is the union of

    B[H]\Fℓ = {T ∈ B[H]: R(T) is not closed or dim N(T) = ∞},
    B[H]\Fr = {T ∈ B[H]: R(T) is not closed or dim N(T∗) = ∞}.

Proof. Recall that SF = Fℓ ∪ Fr and F = Fℓ ∩ Fr. Apply Theorem 5.1.
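The closed-range hypothesis in Theorem 5.1 and Corollary 5.2 cannot be dropped. A standard example (added here for illustration) is a compact diagonal operator with dense, nonclosed range.

```latex
% Illustration: on \ell_+^2 with canonical orthonormal basis \{e_k\}, let
D e_k = \tfrac{1}{k}\, e_k \qquad (k \in \mathbb{N}).
% D is injective and self-adjoint, so \dim N(D) = \dim N(D^*) = 0. Yet the vector
% y = (1/k)_{k \in \mathbb{N}} \in \ell_+^2 lies in R(D)^- \setminus R(D),
% since Dx = y forces x = (1,1,1,\dots) \notin \ell_+^2. Thus
N(D) = N(D^*) = \{0\}, \qquad R(D)^- = \ell_+^2, \qquad R(D) \neq R(D)^-,
% and D \notin SF: both kernels are trivial, but R(D) is not closed.
```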
Let Z be the set of all integers and set Z̄ = Z ∪ {−∞} ∪ {+∞}, the extended integers. Take any T in SF, so that N(T) or N(T∗) is finite dimensional. The Fredholm index ind(T) of T in SF is defined in Z̄ by

    ind(T) = dim N(T) − dim N(T∗).

It is usual to write α(T) = dim N(T) and β(T) = dim N(T∗), and hence ind(T) = α(T) − β(T). Since T∗ and T lie in SF together, we get ind(T∗) = −ind(T). Note: β(T) = α(T∗) = dim R(T)⊥ = dim(H ⊖ R(T)−) = dim(H/R(T)−), where the last identity is explained in the forthcoming Remark 5.3(a). Recall from Theorem 5.1 that T ∈ Fℓ implies dim N(T) < ∞ and, dually, T ∈ Fr implies dim N(T∗) < ∞. Therefore, T ∈ Fℓ
implies
ind(T) ≠ +∞,
T ∈ Fr
implies
ind(T) ≠ −∞.
In other words, if T ∈ SF = Fℓ ∪ Fr, then ind(T) = +∞
implies
T ∈ Fr\Fℓ = Fr\F,
ind (T ) = −∞
implies
T ∈ Fℓ\Fr = Fℓ\F.
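The index just defined can be computed mechanically in finite dimensions. The following self-contained sketch (not from the book; `rank` and `index` are ad hoc helpers) evaluates ind(A) = dim N(A) − dim N(A∗) for a truncated shift matrix by exact Gaussian elimination; rank-nullity forces the index of every square matrix to vanish (cf. Remark 5.3(c)), so nonzero or infinite indices are an intrinsically infinite-dimensional phenomenon.

```python
from fractions import Fraction

def rank(rows):
    # Row-reduce over the rationals (exact arithmetic) and count pivots.
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def index(rows):
    # ind(A) = dim N(A) - dim N(A^T); for a real matrix the adjoint is the transpose.
    nrows, ncols = len(rows), len(rows[0])
    cols = [list(col) for col in zip(*rows)]
    return (ncols - rank(rows)) - (nrows - rank(cols))

n = 6
shift = [[1 if i == j + 1 else 0 for j in range(n)] for i in range(n)]
# dim N(shift) = dim N(shift^T) = 1, so the index vanishes, as it must
# for every operator on a finite-dimensional space.
print(rank(shift), index(shift))  # 5 0
```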
Remark 5.3. (a) Banach Space. If T is an operator on a Hilbert space H, then N(T∗) = R(T)⊥ = H ⊖ R(T)− by Lemma 1.4. Thus Theorem 5.1 and Corollary 5.2 can be restated with N(T∗) replaced by H ⊖ R(T)−. However, consider the quotient space H/R(T)− of H modulo R(T)−, consisting of all cosets x + R(T)− for each x ∈ H. The natural mapping of H/R(T)− onto H ⊖ R(T)− (viz., x + R(T)− ↦ Ex, where E is the orthogonal projection onto R(T)⊥) is an isomorphism. Then dim(H/R(T)−) = dim(H ⊖ R(T)−). Therefore we get the following restatement of Corollary 5.2. T ∈ B[H] is semi-Fredholm if and only if R(T) is closed and N(T) or H/R(T)− is finite dimensional. It is Fredholm if and only if R(T) is closed and both N(T) and H/R(T)− are finite dimensional. This is how the theory advances in a Banach space. Many properties, among those that will be developed in this chapter, work smoothly in any Banach space. Some of them, in order to be properly translated into a Banach space setting, will behave well if the Banach space is complemented. A Banach space X is complemented if every subspace of it has a complementary subspace (i.e., if for every subspace M of X there exists a subspace N of X such that M + N = X and M ∩ N = {0}). However, if a Banach space is complemented, then it is isomorphic to a Hilbert space [72] (i.e., topologically isomorphic, after the
5.1 Fredholm Operators and Fredholm Index
135
Inverse Mapping Theorem). For instance, Proposition 1.D (on complementary subspaces and continuous projections) is a significant exemplar of a result that acquires its full strength only on complemented Banach spaces (which are identified with Hilbert spaces). Only Hilbert spaces (up to an isomorphism) are complemented. We shall stick to Hilbert spaces. (b) Finite Rank. It was actually proved in Theorem 5.1 that T is left semi-Fredholm if and only if there exist an operator A ∈ B[H] and a finite-rank operator K ∈ B[H] such that AT = I + K. Since T is right semi-Fredholm if and only if T∗ is left semi-Fredholm, which means that there exists a finite-rank operator K such that AT∗ = I + K or, equivalently, such that T A∗ = I + K∗, and since K is a finite-rank operator if and only if K∗ is (see, e.g., [66, Problem 5.40]), it follows that T is right semi-Fredholm if and only if there exist an operator A ∈ B[H] and a finite-rank operator K ∈ B[H] such that T A = I + K. So the definitions of left semi-Fredholm, right semi-Fredholm, semi-Fredholm, and Fredholm operators are equivalently stated if “compact” is replaced with “finite-rank”. For Fredholm operators we can even have the same A when stating that I − AT and I − T A are of finite rank (cf. Proposition 5.C). (c) Finite Dimensional. Take any operator T ∈ B[H]. Recall from linear algebra that dim H = dim N(T) + dim R(T) (see, e.g., [66, Problem 2.17]). Since H = R(T)− ⊕ R(T)⊥ by the Projection Theorem, it follows that dim H = dim R(T) + dim R(T)⊥. Thus, if H is finite dimensional (so that N(T) and N(T∗) are finite dimensional), then ind(T) = 0. Indeed, if dim H < ∞, then

    dim N(T) − dim N(T∗) = dim N(T) − dim R(T)⊥ = dim N(T) + dim R(T) − dim H = dim H − dim H = 0.

Since linear manifolds of finite-dimensional spaces are always closed, it follows by Corollary 5.2 that on a finite-dimensional space every operator is Fredholm with a null index.
That is,

    dim H < ∞ =⇒ {T ∈ F : ind(T) = 0} = B[H].

(d) Fredholm Alternative. If K ∈ B∞[H] and λ ≠ 0, then R(λI − K) is closed and dim N(λI − K) = dim N(λ̄I − K∗) < ∞. This is the Fredholm Alternative for compact operators of Corollary 1.20, which can be restated in terms of Fredholm indices, according to Corollary 5.2, as follows. If K ∈ B∞[H] and λ ≠ 0, then λI − K is Fredholm with ind(λI − K) = 0. But ind(λI − K) = 0 means dim N(λI − K) = dim R(λI − K)⊥ (recall that dim N(λ̄I − K∗) = dim R(λI − K)⊥ by Lemma 1.5), and this implies that
136
5. Fredholm Theory
N(λI − K) = {0} if and only if R(λI − K)− = H (equivalently, if and only if R(λI − K)⊥ = {0}). Since R(λI − K) is closed if λI − K is Fredholm, we get another form of the Fredholm Alternative (cf. Theorems 1.18 and 2.18). If K ∈ B∞[H] and λ ≠ 0, then

    N(λI − K) = {0} ⇐⇒ R(λI − K) = H,

which means that λI − K is injective if and only if it is surjective. (e) Fredholm Index. Corollary 5.2 and some of its straightforward consequences can also be naturally rephrased in terms of Fredholm indices. Note that dim N(T) and dim N(T∗) are both finite if and only if ind(T) is finite (reason: ind(T) was defined for semi-Fredholm operators only, so that if one of dim N(T) or dim N(T∗) is infinite, then the other must be finite). Thus, T is Fredholm if and only if it is semi-Fredholm with a finite index, and so

    T ∈ SF =⇒ (T ∈ F ⇐⇒ |ind(T)| < ∞).

Since Fℓ\Fr = Fℓ\F and Fr\Fℓ = Fr\F, it follows that

    T ∈ SF =⇒ (T ∈ Fr\Fℓ ⇐⇒ ind(T) = +∞),
    T ∈ SF =⇒ (T ∈ Fℓ\Fr ⇐⇒ ind(T) = −∞).

(f) Product. Still from Corollary 5.2, a nonzero scalar operator is Fredholm with a null index. This is readily generalized: If T ∈ F, then γT ∈ F and ind(γT) = ind(T) for every γ ∈ C\{0}. A further generalization leads to the most important property of the index, namely its logarithmic additivity: ind(S T) = ind(S) + ind(T) whenever such an addition makes sense.

Theorem 5.4. Take S, T ∈ B[H]. (a) If S, T ∈ Fℓ, then S T ∈ Fℓ. If S, T ∈ Fr, then S T ∈ Fr. (Therefore, if S, T ∈ F, then S T ∈ F.) (b) If S, T ∈ Fℓ or S, T ∈ Fr (in particular, if S, T ∈ F), then

    ind(S T) = ind(S) + ind(T).

Proof. (a) If S and T are left semi-Fredholm, then there are A, B, K, L in B[H], with K and L compact, such that BS = I + L and AT = I + K. Then

    (AB)(S T) = A(I + L)T = AT + ALT = I + (K + ALT).

But K + ALT is compact (because B∞[H] is an ideal of B[H]). Thus S T is left semi-Fredholm. Summing up: S, T ∈ Fℓ implies S T ∈ Fℓ.
Dually, S, T ∈ Fr if and only if S∗, T∗ ∈ Fℓ, which (as we have seen above) implies T∗S∗ ∈ Fℓ, which means that S T = (T∗S∗)∗ ∈ Fr.
(b) We shall split the proof of ind(S T) = ind(S) + ind(T) into three parts. (b1) Take S, T ∈ Fℓ. First suppose ind(S) and ind(T) are both finite or, equivalently, suppose S, T ∈ F = Fℓ ∩ Fr. Consider the surjective transformation L: N(S T) → R(L) ⊆ H defined by Lx = T x
for every
x ∈ N (S T ),
which is clearly linear with N(L) = N(T) ∩ N(S T) by the very definition of L. Since N(T) ⊆ N(S T), it follows that N(L) = N(T). If y ∈ R(L) = L(N(S T)), then y = Lx = T x for some vector x ∈ H such that S T x = 0, and so Sy = 0, which implies that y ∈ R(T) ∩ N(S). Therefore, R(L) ⊆ R(T) ∩ N(S). Conversely, if y ∈ R(T) ∩ N(S), then y = T x for some vector x ∈ H and Sy = 0, so that S T x = 0, which implies that x ∈ N(S T) and y = T x = Lx ∈ R(L). Thus R(T) ∩ N(S) ⊆ R(L). Hence, R(L) = R(T) ∩ N(S). Again, as is well known from linear algebra, dim X = dim N(L) + dim R(L) for every linear transformation L on a linear space X (see, e.g., [66, Problem 2.17]). Thus, with X = N(S T), and recalling that dim N(S) < ∞,

    dim N(S T) = dim N(T) + dim(R(T) ∩ N(S))
               = dim N(T) + dim N(S) + dim(R(T) ∩ N(S)) − dim N(S).

Since R(T) is closed, R(T) ∩ N(S) is a subspace of N(S), so that (by the Projection Theorem) N(S) = (R(T) ∩ N(S)) ⊕ (N(S) ⊖ (R(T) ∩ N(S))), and hence dim(N(S) ⊖ (R(T) ∩ N(S))) = dim N(S) − dim(R(T) ∩ N(S)) (because dim N(S) < ∞). Thus

    dim N(T) + dim N(S) = dim N(S T) + dim(N(S) ⊖ (R(T) ∩ N(S))).

Swapping T with S∗ and S with T∗ (which have finite-dimensional kernels and closed ranges), it follows by Lemma 1.4 and Proposition 1.H(a,b) that

    dim N(S∗) + dim N(T∗) = dim N(T∗S∗) + dim(N(T∗) ⊖ (R(S∗) ∩ N(T∗)))
        = dim N((S T)∗) + dim(R(T)⊥ ⊖ (N(S)⊥ ∩ R(T)⊥))
        = dim N((S T)∗) + dim(R(T)⊥ ⊖ (R(T) + N(S))⊥)
        = dim N((S T)∗) + dim(N(S) ⊖ (R(T) ∩ N(S))).

Therefore,

    dim N(S) − dim N(S∗) + dim N(T) − dim N(T∗) = dim N(S T) − dim N((S T)∗),
and so ind(S T) = ind(S) + ind(T). (b2) Recall that if S, T ∈ Fℓ, then ind(S), ind(T) ≠ +∞. Suppose S, T ∈ Fℓ and one of ind(S) or ind(T) is not finite, which means that ind(S) = −∞ or ind(T) = −∞. If ind(S) = −∞, then dim N(S∗) = ∞, and so dim R(S)⊥ = dim N(S∗) = ∞. Since R(S T) ⊆ R(S), we get R(S)⊥ ⊆ R(S T)⊥, and hence dim N((S T)∗) = dim R(S T)⊥ = ∞. Since S T ∈ Fℓ by item (a), Theorem 5.1 says that dim N(S T) < ∞. Thus ind(S T) = dim N(S T) − dim N((S T)∗) = −∞. On the other hand, if ind(T) = −∞, then a similar argument leads to ind(S T) = −∞. So, in both cases, ind(S T) = −∞ = ind(S) + ind(T). (b3) Finally, suppose S, T ∈ Fr. Thus S∗, T∗ ∈ Fℓ, which (as we have seen in (b1) and (b2)) implies that ind(T∗S∗) = ind(T∗) + ind(S∗). But ind(S∗) = −ind(S), ind(T∗) = −ind(T), and ind(T∗S∗) = ind((S T)∗) = −ind(S T). Thus, if S, T ∈ Fr,

    ind(S T) = ind(S) + ind(T).
If S, T ∈ SF\F, so that ind(S) and ind(T) are both not finite, then the expression ind(S) + ind(T) makes sense as an extended integer in Z̄ if and only if either ind(S) = ind(T) = +∞ or ind(S) = ind(T) = −∞. But this is equivalent to saying that either S, T ∈ Fr\Fℓ = Fr\F or S, T ∈ Fℓ\Fr = Fℓ\F. Therefore, if S, T ∈ SF with one of them in Fr\Fℓ and the other in Fℓ\Fr, then the index expression of Theorem 5.4 does not make sense. Moreover, if one of them lies in Fr\Fℓ and the other lies in Fℓ\Fr, then it may happen that their product is not in SF. For instance, let H be an infinite-dimensional Hilbert space, and let S+ be the canonical unilateral shift (of infinite multiplicity) on the Hilbert space ℓ²₊(H) = ⊕_{k=0}^∞ H (cf. Section 2.7). It is readily verified that dim N(S+) = 0, dim N(S+∗) = ∞, and R(S+) is closed in ℓ²₊(H). In fact, N(S+) = {0}, N(S+∗) = ℓ²₊(H) ⊖ ⊕_{k=1}^∞ H ≅ H, and R(S+) = {0} ⊕ ⊕_{k=1}^∞ H. Therefore, S+ ∈ Fℓ\Fr and S+∗ ∈ Fr\Fℓ (according to Theorem 5.1). However, it is easy to see that S+S+∗ ∉ SF. Indeed, S+S+∗ = O ⊕ I (where O stands for the null operator on H and I for the identity operator on ⊕_{k=1}^∞ H) does not lie in SF because it is self-adjoint and dim N(S+S+∗) = ∞ (cf. Theorem 5.1 again).

Corollary 5.5. Take any nonnegative integer n. If T ∈ Fℓ (or T ∈ Fr), then T^n ∈ Fℓ (or T^n ∈ Fr) and ind(T^n) = n ind(T).

Proof. The result holds trivially for n = 0 (the identity is Fredholm with index zero), and tautologically for n = 1. Thus suppose n ≥ 2. Theorem 5.4 says
that the claimed result holds for n = 2. By using Theorem 5.4 again, a trivial induction ensures that the claimed result holds for each n ≥ 2.

The null operator on an infinite-dimensional space is not Fredholm, and so a compact operator may not be Fredholm. However, the sum of a Fredholm operator and a compact operator is a Fredholm operator with the same index. In other words, the index of a Fredholm operator remains unchanged under compact perturbation, which is referred to as the index stability.

Theorem 5.6. Take T ∈ B[H] and K ∈ B∞[H]. (a) T ∈ Fℓ ⇐⇒ T + K ∈ Fℓ
and
T ∈ Fr ⇐⇒ T + K ∈ Fr .
In particular, T ∈ F ⇐⇒ T + K ∈ F
and
T ∈ SF ⇐⇒ T + K ∈ SF .
(b) Moreover, for every T ∈ SF, ind(T + K) = ind(T). Proof. (a) If T ∈ Fℓ, then there exist A ∈ B[H] and K1 ∈ B∞[H] such that AT = I + K1. So A ∈ Fr. Take an arbitrary K ∈ B∞[H]. Set K2 = K1 + AK, which lies in B∞[H] because B∞[H] is an ideal of B[H]. Since A(T + K) = I + K2, it follows that T + K ∈ Fℓ. Therefore, T ∈ Fℓ implies T + K ∈ Fℓ. The converse also holds because T = (T + K) − K. Hence T ∈ Fℓ
⇐⇒
T + K ∈ Fℓ.
Dually, if T ∈ Fr and K ∈ B∞[H], then T∗ ∈ Fℓ and K∗ ∈ B∞[H], so that T∗ + K∗ ∈ Fℓ, and hence T + K = (T∗ + K∗)∗ ∈ Fr. Therefore, T ∈ Fr
⇐⇒
T + K ∈ Fr .
(b) First suppose T ∈ Fℓ. The Fredholm Alternative of Corollary 1.20 says that I + K1 and I + K2 are Fredholm operators with Fredholm indices zero (see Remark 5.3(d)). Therefore, AT = I + K1 ∈ F with ind(AT) = 0 and A(T + K) = I + K2 ∈ F with ind(A(T + K)) = 0. Note that, since T ∈ Fℓ, then A ∈ Fr. If T ∈ F, then T + K ∈ F by item (a). In this case, applying Theorem 5.4 for operators in Fr, we get

    ind(T + K) + ind(A) = ind(A(T + K)) = 0 = ind(AT) = ind(A) + ind(T).

Since ind(T) is finite (under the assumption that T ∈ F), it follows that ind(A) = −ind(T) is finite as well, and so we may subtract ind(A) to get ind(T + K) = ind(T). On the other hand, if T ∈ Fℓ\F = Fℓ\Fr, then T + K ∈ Fℓ\Fr by (a). In this case (see Remark 5.3(e)),
ind(T + K) = −∞ = ind(T). Dually, if T ∈ Fr\F and K ∈ B∞[H], then T∗ ∈ Fℓ\F and K∗ ∈ B∞[H]. So ind(T + K) = −ind(T∗ + K∗) = −ind(T∗) = ind(T).
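To illustrate index stability (an example supplied here, not part of the text), take the simple unilateral shift of multiplicity one.

```latex
% S_+(x_0, x_1, x_2, \dots) = (0, x_0, x_1, \dots) on \ell_+^2 has closed range,
N(S_+) = \{0\}, \qquad \dim N(S_+^*) = 1,
% so S_+ is Fredholm with
\operatorname{ind}(S_+) = 0 - 1 = -1.
% By Theorem 5.6, \operatorname{ind}(S_+ + K) = -1 \neq 0 for every compact K;
% since an invertible operator has index zero, no compact perturbation of S_+
% is invertible. By Corollary 5.5, \operatorname{ind}(S_+^n) = -n for every n \in \mathbb{N}.
```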
Remark 5.7. (a) Index of A. We have established the following assertion in the preceding proof. If T ∈ F and A ∈ B[H] is such that AT = I + K for some K ∈ B∞[H], then A ∈ F and ind(A) = −ind(T) = ind(T∗). Moreover, if T ∈ F, then T ∈ Fr, so that T A = I + K for some A ∈ B[H] and some K ∈ B∞[H]; thus A ∈ Fℓ. Therefore, a similar argument leads to the dual statement. If T ∈ F and A ∈ B[H] is such that T A = I + K for some K ∈ B∞[H], then A ∈ F and ind(A) = −ind(T) = ind(T∗). (b) Weyl Operator. A Weyl operator is a Fredholm operator with null index (equivalently, a semi-Fredholm operator with null index). Let

    W = {T ∈ F : ind(T) = 0}

denote the class of all Weyl operators from B[H]. Since T ∈ F if and only if T∗ ∈ F and ind(T∗) = −ind(T), it follows that T ∈ W
⇐⇒
T ∗ ∈ W.
Items (c), (d), and (f) in Remark 5.3 ensure that (i) every operator on a finite-dimensional space is a Weyl operator, dim H < ∞
=⇒
W = B[H],
(ii) the Fredholm Alternative can be rephrased as K ∈ B∞[H] and λ = 0
=⇒
λI − K ∈ W,
and (iii) every nonzero multiple of a Weyl operator is again a Weyl operator, T ∈W
=⇒
γ T ∈ W for every γ ∈ C \{0}.
In particular, every nonzero scalar operator is a Weyl operator. In fact, the product of two Weyl operators is again a Weyl operator (by Theorem 5.4), S, T ∈ W
=⇒
S T ∈ W.
Thus integral powers of Weyl operators are Weyl operators (Corollary 5.5), T ∈W
=⇒
T^n ∈ W for every n ∈ N₀.
On an infinite-dimensional space, the identity is Weyl but not compact; and the null operator is compact but not semi-Fredholm. Actually, T ∈ F ∩ B∞[H]
⇐⇒
dim H < ∞
( ⇐⇒
T ∈ W ∩ B∞[H] ).
(Set T = −K in Theorem 5.6 and, for the converse, see Remark 5.3(c).) Also note that, if T is normal, then N (T ) = N (T ∗ T ) = N (T T ∗ ) = N (T ∗ ) by Lemma 1.4, and so every normal Fredholm operator is Weyl, T normal in F
=⇒
T ∈ W.
Since T ∈ B[H] is invertible if and only if N(T) = {0} and R(T) = H (Theorem 1.1), which happens if and only if T∗ ∈ B[H] is invertible (Proposition 1.L), it follows by Corollary 5.2 that every invertible operator is Weyl, T ∈ G[H]
=⇒
T ∈ W.
Theorem 5.6 ensures that, for every compact K ∈ B∞[H], T ∈W
=⇒
T + K ∈ W.
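A concrete instance of item (ii) above, supplied here for illustration: a self-adjoint diagonal compact operator.

```latex
% On \ell_+^2 let K e_k = \tfrac{1}{k} e_k (compact, self-adjoint), and fix
% \lambda = \tfrac{1}{m} \neq 0 for some m \in \mathbb{N}. Then
N(\lambda I - K) = \operatorname{span}\{e_m\} = N(\bar\lambda I - K^*),
% R(\lambda I - K) is closed (Fredholm Alternative), and
\operatorname{ind}(\lambda I - K) = 1 - 1 = 0,
% so \lambda I - K is a Weyl operator, as item (ii) asserts.
```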
5.2 Essential Spectrum and Spectral Picture

An element a in a unital algebra A is left invertible if there is an element aℓ in A (a left inverse of a) such that aℓ a = 1, where 1 stands for the identity in A, and it is right invertible if there is an element ar in A (a right inverse of a) such that a ar = 1. An element a in A is invertible if there is an element a⁻¹ in A (the inverse of a) such that a⁻¹ a = a a⁻¹ = 1. Thus a in A is invertible if and only if it has a left inverse aℓ in A and a right inverse ar in A, which coincide with its inverse a⁻¹ in A (since ar = aℓ a ar = aℓ).

Lemma 5.8. Take an operator S in the unital Banach algebra B[H]. (a) S is left invertible if and only if it is injective with a closed range (i.e., N(S) = {0} and R(S)− = R(S)). (b) S is left (right) invertible if and only if S∗ is right (left) invertible. (c) S is right invertible if and only if it is surjective (i.e., R(S) = H).

Proof. S ∈ B[H] is said to have a bounded inverse on its range if there exists S⁻¹ ∈ B[R(S), H] such that S⁻¹S = I in B[H] and S S⁻¹ = I in B[R(S)].

Claim. S is left invertible if and only if it has a bounded inverse on its range.

Proof. If Sℓ ∈ B[H] is such that Sℓ S = I, then Sℓ|R(S) ∈ B[R(S), H] is such that Sℓ|R(S) S = I ∈ B[H]. Consider the operator S Sℓ|R(S) in B[R(S)], and take any y in R(S). Thus y = Sx for some x in H, and so Sℓ|R(S) y = Sℓ Sx =
x. Then S Sℓ|R(S) y = Sx = y, and hence S Sℓ|R(S) = I ∈ B[R(S)]. Set S⁻¹ = Sℓ|R(S) ∈ B[R(S), H], and get S⁻¹S = I in B[H] and S S⁻¹ = I in B[R(S)]. Conversely, any extension of S⁻¹ ∈ B[R(S), H] over the whole space H (cf. Proposition 1.G(b)) is a left inverse of S ∈ B[H]. This proves the Claim. (a) But S has a bounded inverse on its range if and only if it is injective (i.e., N(S) = {0}) and R(S) is closed, according to Theorem 1.2. (b) Since Sℓ S = I (S Sr = I) if and only if S∗Sℓ∗ = I (Sr∗S∗ = I), it follows that S is left (right) invertible if and only if S∗ is right (left) invertible. (c) Thus we get from items (a) and (b) that S is right invertible if and only if N(S∗) = {0} and R(S∗) is closed, or equivalently (cf. Lemmas 1.4 and 1.5), R(S) = R(S)− = H; which means that R(S) = H.

Let T be an operator in B[H]. The left spectrum σℓ(T) and the right spectrum σr(T) of T ∈ B[H] are the sets

    σℓ(T) = {λ ∈ C : λI − T is not left invertible},
    σr(T) = {λ ∈ C : λI − T is not right invertible},

so that the spectrum σ(T) of T ∈ B[H] is given by

    σ(T) = {λ ∈ C : λI − T is not invertible} = σℓ(T) ∪ σr(T).

Corollary 5.9. Take an operator T ∈ B[H].

    σℓ(T) = {λ ∈ C : R(λI − T) is not closed or N(λI − T) ≠ {0}},
    σr(T) = {λ ∈ C : R(λI − T) is not closed or N(λ̄I − T∗) ≠ {0}}
          = {λ ∈ C : R(λI − T) is not closed or R(λI − T)− ≠ H}
          = {λ ∈ C : R(λI − T) ≠ H}.

Proof. Take T ∈ B[H] and λ ∈ C. Set S = λI − T in B[H]. By Lemma 5.8, S is not left invertible if and only if R(S) is not closed or S is not injective (i.e., or N(S) ≠ {0}). This proves the expression for σℓ(T). By Lemma 5.8, S is not right invertible if and only if R(S) ≠ H, which means that R(S) ≠ R(S)− or R(S)− ≠ H, which in turn is equivalent to saying that R(S) ≠ R(S)− or N(S∗) ≠ {0} (Lemma 1.5), thus proving the expressions for σr(T).

By Corollary 5.9 and Theorem 2.6 (see also the diagram of Section 2.2), σℓ(T) = σ(T)\σR1(T) = σAP(T), and so
σr (T ) = σ(T )\σP1 (T ) = σAP (T ∗ )∗ ,
σ (T ) and σr (T ) are closed subsets of C .
Indeed, σAP(T) is closed (cf. Theorem 2.5). Note that σP1(T) and σR1(T) are open in C (cf. Remarks following Theorems 2.5 and 2.6), and therefore σAP(T) = σ(T)\σR1(T) = σ(T) ∩ (C\σR1(T)) and σAP(T∗)∗ = σ(T)\σP1(T) = σ(T) ∩ (C\σP1(T)) are closed (and bounded) in C, since σ(T) is closed (and bounded); and so is the intersection

σℓ(T) ∩ σr(T) = σ(T)\(σP1(T) ∪ σR1(T)) = σAP(T) ∩ σAP(T∗)∗.

The fact that each of σP1(T) and σR1(T) is an open subset of C, and so each of σℓ(T) and σr(T) is a closed subset of C, is a consequence of the property that T lies in the unital Banach algebra B[H] (so that σ(T) is a compact set), which actually happens in any unital Banach algebra. Also recall that (Sℓ S)∗ = S∗ Sℓ∗ and (S Sr)∗ = Sr∗ S∗, so that Sℓ is a left inverse of S if and only if Sℓ∗ is a right inverse of S∗, and Sr is a right inverse of S if and only if Sr∗ is a left inverse of S∗. Therefore (as we can also infer from Corollary 5.9),

σℓ(T) = σr(T∗)∗ and σr(T) = σℓ(T∗)∗.
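As a concrete illustration of these notions (a standard example, not part of the text's development at this point), consider the unilateral shift S on ℓ², whose spectral data are well known:

```latex
% Unilateral shift S on \ell^2: \ S(x_1, x_2, \dots) = (0, x_1, x_2, \dots).
% S^*S = I, so S is left invertible; but R(S) = \{x : x_1 = 0\} \neq H,
% so S is not right invertible. More generally, for |\lambda| < 1 the
% operator \lambda I - S is bounded below (injective with closed range),
% hence left invertible, but never surjective. By Corollary 5.9 this gives
\sigma_\ell(S) = \sigma_{AP}(S) = \partial\mathbb{D}, \qquad
\sigma_r(S) = \sigma_{AP}(S^*)^* = \overline{\mathbb{D}} = \sigma(S),
```

where 𝔻 denotes the open unit disk. In particular, the left and right spectra of an operator may differ.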
Consider the Calkin algebra B[H]/B∞[H], the quotient algebra of B[H] modulo the ideal B∞[H] of all compact operators. If dim H < ∞, then all operators are compact, so that B[H]/B∞[H] is trivially null. Thus, whenever the Calkin algebra is brought into play, it will be assumed that dim H = ∞. Since B∞[H] is a closed subspace of B[H], it follows that B[H]/B∞[H] is a unital Banach algebra whenever H is infinite dimensional. Define the natural map (or the natural quotient map) π: B[H] → B[H]/B∞[H] by

π(T) = [T] = {S ∈ B[H]: S = T + K for some K ∈ B∞[H]} = T + B∞[H]

for every T in B[H]. Recall that the origin of the linear space B[H]/B∞[H] is [O] = B∞[H] ∈ B[H]/B∞[H], the kernel of the natural map π is

N(π) = {T ∈ B[H]: π(T) = [O]} = B∞[H] ⊆ B[H],

and π is a unital homomorphism. Indeed, since B∞[H] is an ideal of B[H],

π(T + T′) = (T + T′) + B∞[H] = (T + B∞[H]) + (T′ + B∞[H]) = π(T) + π(T′),
π(T T′) = (T T′) + B∞[H] = (T + B∞[H]) (T′ + B∞[H]) = π(T) π(T′),

for every T, T′ ∈ B[H], and π(I) = [I] is the identity element of the algebra B[H]/B∞[H]. Moreover, the norm on B[H]/B∞[H] is given by

‖[T]‖ = inf{‖T + K‖: K ∈ B∞[H]} ≤ ‖T‖,

so that π is a contraction.

Theorem 5.10. Take any operator T ∈ B[H]. T ∈ Fℓ (or T ∈ Fr) if and only if π(T) is left (or right) invertible in the Calkin algebra B[H]/B∞[H].
Proof. We show that the following assertions are pairwise equivalent.

(a) T ∈ Fℓ.
(b) There is a pair of operators {A, K} with A ∈ B[H] and K ∈ B∞[H] such that AT = I + K.
(c) There exists a quadruple of operators {A, K1, K2, K3} with A ∈ B[H] and K1, K2, K3 ∈ B∞[H] such that (A + K1)(T + K2) = I + K3.
(d) π(A)π(T) = π(I), the identity in B[H]/B∞[H], for some π(A) in B[H]/B∞[H].
(e) π(T) is left invertible in B[H]/B∞[H].

By definition, (a) and (b) are equivalent, and (b) implies (c) trivially. If (c) holds, then there exist B ∈ [A] = π(A), S ∈ [T] = π(T), and J ∈ [I] = π(I) such that BS = J. Therefore,

π(A)π(T) = [A][T] = [B][S] = π(B)π(S) = π(BS) = π(J) = [J] = [I] = π(I),

so that (c) implies (d). Now observe that X ∈ π(A)π(T) = π(AT) if and only if X = AT + K1 for some K1 ∈ B∞[H], and X ∈ π(I) if and only if X = I + K2 for some K2 ∈ B∞[H]. Therefore, if π(A)π(T) = π(I), then X = AT + K1 if and only if X = I + K2, and hence AT = I + K with K = K2 − K1 ∈ B∞[H]. Thus (d) implies (b). Finally, (d) and (e) are equivalent by the definition of left invertibility. Outcome: T ∈ Fℓ if and only if π(T) is left invertible in B[H]/B∞[H]. Dually, T ∈ Fr if and only if π(T) is right invertible in B[H]/B∞[H].

The essential spectrum (or the Calkin spectrum) σe(T) of T ∈ B[H] is the spectrum of π(T) in the unital Banach algebra B[H]/B∞[H],

σe(T) = σ(π(T)),

and so σe(T) is a compact subset of C. Similarly, the left essential spectrum σℓe(T) and the right essential spectrum σre(T) of T ∈ B[H] are defined as the left and the right spectrum of π(T) in the Calkin algebra B[H]/B∞[H]:

σℓe(T) = σℓ(π(T)) and σre(T) = σr(π(T)),

and so

σe(T) = σℓe(T) ∪ σre(T).
By using only the definitions of left and right semi-Fredholm operators, and of left and right essential spectra, we get the following characterization.

Corollary 5.11. If T ∈ B[H], then

σℓe(T) = {λ ∈ C : λI − T ∈ B[H]\Fℓ},
σre(T) = {λ ∈ C : λI − T ∈ B[H]\Fr}.
Proof. Take T ∈ B[H]. Theorem 5.10 says that λI − T ∉ Fℓ if and only if π(λI − T) is not left invertible in B[H]/B∞[H], which means (definition of left spectrum) that λ ∈ σℓ(π(T)). But σℓe(T) = σℓ(π(T)), by the definition of the left essential spectrum. Thus λ ∈ σℓe(T) if and only if λI − T ∉ Fℓ. Dually, λ ∈ σre(T) if and only if λI − T ∉ Fr.

Corollary 5.12. (Atkinson Theorem). σe(T) = {λ ∈ C : λI − T ∈ B[H]\F}.

Proof. The expressions for σℓe(T) and σre(T) in Corollary 5.11 lead to the claimed identity, since σe(T) = σℓe(T) ∪ σre(T) and F = Fℓ ∩ Fr.

Thus the essential spectrum is the set of all scalars λ for which λI − T is not Fredholm. The following equivalent version is also frequently used [7].

Corollary 5.13. (Atkinson Theorem). An operator T ∈ B[H] is Fredholm if and only if its image π(T) in the Calkin algebra B[H]/B∞[H] is invertible.

Proof. Straightforward from Theorem 5.10: T is Fredholm if and only if it is both left and right semi-Fredholm, and π(T) is invertible in B[H]/B∞[H] if and only if it is both left and right invertible in B[H]/B∞[H].

This is usually referred to by saying that T is Fredholm if and only if T is essentially invertible. Thus the essential spectrum σe(T) is the set of all scalars λ for which λI − T is not essentially invertible (i.e., λI − T is not Fredholm), and so the essential spectrum is also called the Fredholm spectrum. We have already seen that, in the unital Banach algebra B[H], the sets σℓ(T) and σr(T) are closed in C, because σ(T) is a compact subset of C. Similarly, in the unital Banach algebra B[H]/B∞[H], σℓe(T) and σre(T) are closed subsets of C, since σe(T) = σ(π(T)) is a compact set in C. From Corollary 5.11 we get

C\σℓe(T) = {λ ∈ C : λI − T ∈ Fℓ},
C\σre(T) = {λ ∈ C : λI − T ∈ Fr},

which are open subsets of C. Thus, since SF = Fℓ ∪ Fr, it follows that

C\(σℓe(T) ∩ σre(T)) = (C\σℓe(T)) ∪ (C\σre(T)) = {λ ∈ C : λI − T ∈ SF},

which is again an open subset of C. Therefore, the intersection

σℓe(T) ∩ σre(T) = {λ ∈ C : λI − T ∈ B[H]\SF}

is a closed subset of C. Observe that, since F = Fℓ ∩ Fr, the complement of the union σℓe(T) ∪ σre(T) is given by (cf. Corollary 5.12)

C\σe(T) = C\(σℓe(T) ∪ σre(T)) = (C\σℓe(T)) ∩ (C\σre(T)) = {λ ∈ C : λI − T ∈ F},

which is an open subset of C.

Corollary 5.14. If T ∈ B[H], then

σℓe(T) = {λ ∈ C : R(λI − T) is not closed or dim N(λI − T) = ∞},
σre(T) = {λ ∈ C : R(λI − T) is not closed or dim N(λ̄I − T∗) = ∞}.
Proof. Theorem 5.1 and Corollary 5.11.

Note that, by Corollaries 5.9 and 5.14,

σℓe(T) ⊆ σℓ(T) and σre(T) ⊆ σr(T),

and so

σe(T) ⊆ σ(T).

Corollary 5.14 also shows that

σℓe(T) = σre(T∗)∗ and σre(T) = σℓe(T∗)∗,

and hence

σe(T) = σe(T∗)∗.

Moreover, by the previous results and the diagram of Section 2.2,

σC(T) ⊆ σℓe(T) ⊆ σℓ(T) = σ(T)\σR1(T) = σAP(T),
σC(T) = σC(T∗)∗ ⊆ σℓe(T∗)∗ = σre(T) ⊆ σr(T) = σ(T)\σP1(T) = σAP(T∗)∗,

so that

σℓe(T) ∩ σre(T) ⊆ σ(T)\(σP1(T) ∪ σR1(T)),
which is an inclusion of closed sets, and therefore

σP1(T) ∪ σR1(T) ⊆ σ(T)\(σℓe(T) ∩ σre(T)) ⊆ C\(σℓe(T) ∩ σre(T)),

which is an inclusion of open sets.

Remark 5.15. (a) Finite Dimensional. For any T ∈ B[H],

σe(T) ≠ ∅ ⇐⇒ dim H = ∞.

Actually, since σe(T) was defined as the spectrum of π(T) in the Calkin algebra B[H]/B∞[H], which is a unital Banach algebra if and only if H is infinite dimensional (in a finite-dimensional space all operators are compact), and since spectra are nonempty in a unital Banach algebra, it follows that

dim H = ∞ =⇒ σe(T) ≠ ∅.

However, if we take the expression in Corollary 5.12 as the definition of the essential spectrum, viz., σe(T) = {λ ∈ C : λI − T ∈ B[H]\F}, then the converse holds by Remark 5.3(c):

dim H < ∞ =⇒ σe(T) = ∅.
(b) Atkinson Theorem. An argument similar to that in the proof of Corollary 5.13, using each expression for σℓe(T) and σre(T) in Corollary 5.11 separately, leads to the following result. An operator T ∈ B[H] lies in Fℓ (or in Fr) if and only if its image π(T) in the Calkin algebra B[H]/B∞[H] is left (right) invertible.

(c) Compact Perturbation. Observe that, for every K ∈ B∞[H],

σe(T + K) = σe(T).

Indeed, since π(T + K) = π(T), it follows by the very definition of σe(T) in B[H]/B∞[H] that σe(T) = σ(π(T)) = σ(π(T + K)) = σe(T + K) for every K ∈ B∞[H]. Similarly, the definitions of σℓe(T) and of σre(T) in B[H]/B∞[H] ensure that σℓe(T) = σℓ(π(T)) = σℓ(π(T + K)) = σℓe(T + K) and σre(T) = σr(π(T)) = σr(π(T + K)) = σre(T + K) for every K ∈ B∞[H]. Thus

σℓe(T + K) = σℓe(T) and σre(T + K) = σre(T).

This invariance is an important feature of Fredholm theory.

Now take any operator T in B[H]. For each k ∈ Z̄\{0}, where Z̄ = Z ∪ {−∞, +∞} denotes the set of extended integers, set

σk(T) = {λ ∈ C : λI − T ∈ SF and ind(λI − T) = k}.

Recall that a component of a set in a topological space is any maximal connected subset of it. A hole of a set in a topological space is any bounded component of its complement. If a set has an open complement (i.e., if it is closed), then a hole of it must be open. We shall show that each σk(T) with finite index is a hole of σe(T) that lies in σ(T) (i.e., if k ∈ Z\{0}, then σk(T) ⊆ σ(T)\σe(T) is a bounded component of the open set C\σe(T)), and hence it is an open set. Moreover, we also show that σ+∞(T) and σ−∞(T) are holes of σre(T) and of σℓe(T) that lie in σℓe(T) and σre(T), respectively (in fact, σ+∞(T) = σℓe(T)\σre(T) is a bounded component of the open set C\σre(T), and σ−∞(T) = σre(T)\σℓe(T) is a bounded component of the open set C\σℓe(T)), and hence they are open sets. The sets σ+∞(T) and σ−∞(T), which are holes of σre(T) and of σℓe(T) but are not holes of σe(T), are called pseudoholes of σe(T). Let σPF(T) denote the set of all eigenvalues of T of finite multiplicity,
σPF(T) = {λ ∈ σP(T): dim N(λI − T) < ∞} = {λ ∈ C : 0 < dim N(λI − T) < ∞}.

Observe from the diagram of Section 2.2 and Corollary 5.14 that

σAP(T) = σℓe(T) ∪ σPF(T).

Theorem 5.16.

(a) For each k ∈ Z̄\{0}, σk(T) = σ−k(T∗)∗.

(b) If k ∈ Z\{0}, then

σk(T) = {λ ∈ C : λI − T ∈ F and ind(λI − T) = k} ⊆ σPF(T) if 0 < k, and ⊆ σPF(T∗)∗ if k < 0,

so that σk(T) ⊆ σ(T), and

⋃_{k∈Z\{0}} σk(T) = {λ ∈ C : λI − T ∈ F and ind(λI − T) ≠ 0}
                 = {λ ∈ C : λI − T ∈ F\W} ⊆ {λ ∈ C : λI − T ∈ F} = C\σe(T).

(c) If k = ±∞, then

σ+∞(T) ⊆ σP(T)\σPF(T) and σ−∞(T) ⊆ σP(T∗)∗\σPF(T∗)∗,

and hence

⋃_{k∈Z̄\{0}} σk(T) ⊆ σP(T) ∪ σR(T) ⊆ σ(T).

Moreover,

σ+∞(T) = {λ ∈ C : λI − T ∈ SF and ind(λI − T) = +∞} = {λ ∈ C : λI − T ∈ Fr\Fℓ} = σℓe(T)\σre(T) ⊆ C\σre(T),
σ−∞(T) = {λ ∈ C : λI − T ∈ SF and ind(λI − T) = −∞} = {λ ∈ C : λI − T ∈ Fℓ\Fr} = σre(T)\σℓe(T) ⊆ C\σℓe(T),

which are all open subsets of C, and hence

σ+∞(T) ⊆ σe(T) and σ−∞(T) ⊆ σe(T).

(d) Furthermore, if Z̄+\{0} = N̄ denotes the set of all extended positive integers, and Z̄−\{0} = −N̄ denotes the set of all extended negative integers, then

σP1(T) ⊆ ⋃_{k∈Z̄+\{0}} σk(T) = {λ ∈ C : λI − T ∈ SF and 0 < ind(λI − T)},
σR1(T) ⊆ ⋃_{k∈Z̄−\{0}} σk(T) = {λ ∈ C : λI − T ∈ SF and ind(λI − T) < 0},

so that

σP1(T) ∪ σR1(T) ⊆ ⋃_{k∈Z̄\{0}} σk(T) = {λ ∈ C : λI − T ∈ SF and ind(λI − T) ≠ 0}
                ⊆ {λ ∈ C : λI − T ∈ SF} = C\(σℓe(T) ∩ σre(T)),

which are all open subsets of C.
which are all open subsets of C . (e) If k is finite, then σk (T ) is a hole of σe (T ) ∩ σre (T ) (and therefore, a hole of σe (T )). Otherwise, σ+∞ (T ) is a hole of σe (T ) and σ−∞ (T ) is a hole of σre (T ), and so σk (T ) is an open subset of C for every k ∈ Z \{0}. Proof. (a) Recall that λI − T ∈ SF if and only if λI − T ∗ ∈ SF , and also that ind (λI − T ) = −ind(λI − T ∗ ). Hence λ ∈ σk (T ) if and only if λ ∈ σ−k (T ∗ ). (b) The expression for σk (T ) — with SF replaced with F — holds since for finite k all semi-Fredholm operators are Fredholm (cf. Remark 5.3(e)). Take any k ∈ Z \{0}. If k > 0 and λ ∈ σk (T ), then 0 < ind(λI − T ) < ∞. Hence 0 ≤ dim N (λI − T ∗ ) < dim N (λI − T ) < ∞, and so 0 < dim N (λI − T ) < ∞. Thus λ is an eigenvalue of T of finite multiplicity. Dually, if k < 0 (i.e., −k > 0) and λ ∈ σk (T ) = σ−k (T ∗ )∗ , then λ is an eigenvalue of T ∗ of finite multiplicity. To close the proof of (b), observe that C \σe (T ) = λ ∈ C : λI − T ∈ F according to Corollary 5.12. (c) If λ ∈ σ+∞ (T ), then 0 ≤ dim N (λI − T ∗ ) < dim N (λI − T ) = ∞, so that λ is an eigenvalue of T of infinite multiplicity. Dually, if λ ∈ σ−∞ (T ) = σ+∞ (T ∗ )∗ , then λ is an eigenvalue of T ∗ of infinite multiplicity. The expressions for σ+∞ (T ) and σ−∞ (T ) follow from Remark 5.3(e) and also from Corollary 5.11. Recall that σe (T ) and σre (T ) are closed in C , and so σ+∞ (T ) and
150
5. Fredholm Theory
σ−∞ (T ) are open in C . Moreover, the expressions for σ+∞ (T ) and σ−∞ (T ) and Corollary 5.12 ensure that these sets are subsets of σe (T ). (d) Recall from Section 1.3 that M− = H if and only if M⊥ = {0}. Hence R(λI − T )− = H if and only if N (λI − T ∗ ) = {0} by Lemma 1.4. In other words, R(λI − T )− = H if and only if dim N (λI − T ∗ ) = 0. Thus, according to the diagram of Section 2.2, we get σP1(T ) = λ ∈ C : R(λI − T ) is closed, 0 = dim N (λI − T ∗ ) = dim N (λI − T ) = λ ∈ C : R(λI − T ) is closed, 0 = dim N (λI − T ∗ ), ind(λI − T ) > 0 . Therefore, by Corollary 5.2, σP1 (T ) = λ ∈ C : λI − T ∈ SF , 0 = dim N (λI − T ), ind (λI − T ) > 0 σk (T ). ⊆ λ ∈ C : λI − T ∈ SF , ind (λI − T ) > 0 = + k∈Z \{0}
Dually (cf. item (a) and Theorem 2.6),
∗ ∗ σR1 (T ) = σP1 (T ∗ )∗ ⊆ σ (T ) k + k∈Z \{0}
∗ = σ−k (T )∗ = + − k∈Z \{0}
k∈Z \{0}
σk (T ).
Finally, note that
C \σe (T ) ∩ σre (T ) = λ ∈ C : λI − T ∈ SF
according to Corollary 5.11, as we had verified before. (e) Recall from item (d) that σk (T ) ⊆ C \σe (T ) ∩ σre (T ) = λ ∈ C : λI − T ∈ SF . k∈Z \{0}
Take any λ ∈ k∈Z \{0} σk (T ) so that λI − T ∈ SF . Proposition 5.D ensures that ind (νI − T ) is constant for every ν in a punctured disk Bε (λ)\{0} centered at λ for some positive radius ε. Thus, for every pair of distinct extended integers j, k ∈ Z \{0}, the sets σj (T ) and σk (T ) are not only disjoint, but they are (bounded) components (i.e., maximal connected subsets) of the open set C \σe (T ) ∩ σre (T ), and hence they are holes of the closed set σe (T ) ∩ σre (T ). By item (b), if j, k are finite (i.e., if j, k ∈ Z \{0}), then σj (T ) and σk (T ) are (bounded) components of the open set C \σe (T ) = C \σe (T ) ∪ σre (T ) and, in this case, they are holes of the closed set σe (T ). By item (c), σ+∞ (T ) and σ−∞ (T ) are (bounded) components of the open sets C \σre (T ) and C \σe (T ), and so σ+∞ (T ) and σ−∞ (T ) are holes of the closed sets σre (T ) and σe (T ), respectively. Thus, in any case, σk (T ) is an open subset of C . Therefore,
σ (T ) is open as claimed in (d). k∈Z \{0} k
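For a concrete operator exhibiting a pseudohole (a standard construction, sketched here under the assumption that the blockwise right inverses below are uniformly bounded in norm, so that the infinite direct sum has closed range), take a countable direct sum of copies of the backward shift S∗ on ℓ²:

```latex
% A = \bigoplus_{n=1}^{\infty} S^* acting on \bigoplus_{n=1}^{\infty} \ell^2.
% For |\lambda| < 1: \ \dim N(\lambda I - A) = \infty, \ \dim N(\bar\lambda I - A^*) = 0,
% and R(\lambda I - A) = H, so \lambda I - A \in F_r \setminus F_\ell with
% \mathrm{ind}(\lambda I - A) = +\infty. Hence
\sigma_{\ell e}(A) = \overline{\mathbb{D}}, \qquad
\sigma_{re}(A) = \partial\mathbb{D}, \qquad
\sigma_{+\infty}(A) = \sigma_{\ell e}(A)\setminus\sigma_{re}(A) = \mathbb{D}.
```

Here the open unit disk is a pseudohole of σe(A): it is a hole of σre(A), but not a hole of σe(A) = 𝔻‾ itself.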
The following figure illustrates a simple instance of holes and pseudoholes of the essential spectrum. Observe that the left and right essential spectra, σℓe(T) and σre(T), differ only by the pseudoholes σ±∞(T).

[Figure omitted.] Fig. 5.2. Holes and pseudoholes of the essential spectrum

Now, for k = 0, we define σ0(T) as the following subset of σ(T):

σ0(T) = {λ ∈ σ(T): λI − T ∈ SF and ind(λI − T) = 0}
      = {λ ∈ σ(T): λI − T ∈ F and ind(λI − T) = 0}
      = {λ ∈ σ(T): λI − T ∈ W}.

According to Corollary 5.2, the diagram of Section 2.2, and Theorem 2.6, such a subset of the spectrum can be rewritten as

σ0(T) = {λ ∈ σ(T): R(λI − T) is closed and dim N(λI − T) = dim N(λ̄I − T∗) < ∞}
      = {λ ∈ σP(T): R(λI − T) = R(λI − T)− ≠ H and dim N(λI − T) = dim N(λ̄I − T∗) < ∞}
      = {λ ∈ σP4(T): dim N(λI − T) = dim N(λ̄I − T∗) < ∞},

where σP4(T) = {λ ∈ σP(T): R(λI − T)− = R(λI − T) ≠ H}. In fact,

σ0(T) ⊆ σPF(T) ∩ σPF(T∗)∗ ⊆ σP(T) ∩ σP(T∗)∗.

Thus

σ0(T) = σ0(T∗)∗.
Remark 5.17. (a) Fredholm Alternative. According to Remark 5.3(d), if K ∈ B∞[H] and λ ≠ 0, then λI − K is Fredholm with ind(λI − K) = 0, which means that λI − K is Weyl (i.e., λI − K ∈ W; see Remark 5.7(b)). Since σ0(K) = {λ ∈ σ(K): λI − K ∈ W}, the Fredholm Alternative can be restated, once again, as follows (compare with Theorem 2.18):

K ∈ B∞[H] =⇒ σ(K)\{0} = σ0(K)\{0}.

(b) Finite Dimensional. Remark 5.3(c) says that if H is finite dimensional, then every operator is Weyl (i.e., if dim H < ∞, then W = B[H]). Thus,

dim H < ∞ =⇒ σ(T) = σ0(T).
(c) Compact Perturbation. The expressions for σk(T), σ+∞(T), and σ−∞(T), viz., σk(T) = {λ ∈ C : λI − T ∈ F and ind(λI − T) = k} for every k ∈ Z\{0}, σ+∞(T) = σℓe(T)\σre(T), and σ−∞(T) = σre(T)\σℓe(T) as in Theorem 5.16, together with the results of Theorem 5.4, ensure that the sets σk(T) for each k ∈ Z\{0}, σ+∞(T), and σ−∞(T) also are invariant under compact perturbation (Remark 5.15(c)). That is, for every K ∈ B∞[H],

σk(T + K) = σk(T) for every k ∈ Z̄\{0}.

However, such an invariance does not apply to σ0(T). Why? Observe that if λ ∈ σ0(T), then the definition of σ0(T) forces λ to be in σ(T), while in the definition of σk(T) for k ≠ 0 this happens naturally. In fact, the definition of σ0(T) forces λ to be in σP(T). Example: If dim H < ∞ (where all operators are compact), and if T = I, then we get σ0(T) = σ(T) = {1} and σ0(T − T) = σ(T − T) = {0}. That is, there is a compact K = −T such that {0} = σ0(T + K) ≠ σ0(T) = {1}. This will be generalized in the proof of Theorem 5.24, where it is shown that σ0(T) is never invariant under compact perturbation.

The partition of the spectrum σ(T) obtained in the following corollary is called the spectral picture of T [75].

Corollary 5.18. (Spectral Picture). If T ∈ B[H], then

σ(T) = σe(T) ∪ ⋃_{k∈Z} σk(T)

and

σe(T) = (σℓe(T) ∩ σre(T)) ∪ σ+∞(T) ∪ σ−∞(T),

where

σe(T) ∩ ⋃_{k∈Z} σk(T) = ∅,

σk(T) ∩ σj(T) = ∅ for every j, k ∈ Z such that j ≠ k, and

(σℓe(T) ∩ σre(T)) ∩ (σ+∞(T) ∪ σ−∞(T)) = ∅.
Proof. The collection {σk(T)}k∈Z is clearly pairwise disjoint. According to Theorem 5.16(b), and by the definition of σ0(T),

⋃_{k∈Z} σk(T) = {λ ∈ σ(T): λI − T ∈ F}.

Since σe(T) ⊆ σ(T), it follows by Corollary 5.12 that

σe(T) = {λ ∈ σ(T): λI − T ∉ F}.

Hence we get the partition of the spectrum: σ(T) = σe(T) ∪ ⋃_{k∈Z} σk(T) with σe(T) ∩ ⋃_{k∈Z} σk(T) = ∅. Since σe(T) = σℓe(T) ∪ σre(T) ⊆ σ(T), it also follows from Corollary 5.11 that

σℓe(T) ∩ σre(T) = {λ ∈ σ(T): λI − T ∉ SF} ⊆ σe(T).

Moreover, Theorem 5.16(c) ensures that

σ+∞(T) ∪ σ−∞(T) = {λ ∈ σ(T): λI − T ∈ SF\F},

and that σ+∞(T) ∩ σ−∞(T) = ∅. Thus we get the following partition of the essential spectrum: σe(T) = (σℓe(T) ∩ σre(T)) ∪ σ+∞(T) ∪ σ−∞(T) with

(σℓe(T) ∩ σre(T)) ∩ (σ+∞(T) ∪ σ−∞(T)) = ∅.

Recall that the collection {σk(T)}k∈Z\{0} consists of pairwise disjoint open sets (cf. Theorem 5.16), which are subsets of σ(T)\σe(T). These are the holes of the essential spectrum σe(T). Moreover, σ±∞(T) also are open sets (cf. Theorem 5.16), which are subsets of σe(T). These are the pseudoholes of σe(T) (they are holes of σre(T) and σℓe(T)). Thus the spectral picture of T consists of the essential spectrum σe(T), the holes σk(T) and pseudoholes σ±∞(T) (to each is associated a nonzero index k in Z̄\{0}), and the set σ0(T). It is worth noticing that any spectral picture can be attained [25] by an operator in B[H] (see also Proposition 5.G).
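To make the spectral picture concrete, here is the (standard) spectral picture of the unilateral shift S on ℓ², stated without proof; it uses σ(S) = 𝔻‾, the fact that λI − S is bounded below for |λ| < 1, and the fact that R(λI − S) is not closed for |λ| = 1:

```latex
% Spectral picture of the unilateral shift S on \ell^2:
\sigma(S) = \overline{\mathbb{D}}, \qquad
\sigma_e(S) = \sigma_{\ell e}(S) = \sigma_{re}(S) = \partial\mathbb{D}.
% For |\lambda| < 1: \ \dim N(\lambda I - S) = 0 and \dim N(\bar\lambda I - S^*) = 1,
% so \mathrm{ind}(\lambda I - S) = -1. Thus
\sigma_{-1}(S) = \mathbb{D}, \qquad \sigma_k(S) = \varnothing \ \ (k \neq -1), \qquad
\sigma_0(S) = \varnothing, \qquad \sigma_{\pm\infty}(S) = \varnothing .
```

The open unit disk is the single hole of σe(S), carrying the index −1, and there are no pseudoholes since σℓe(S) = σre(S).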
5.3 Riesz Points and Weyl Spectrum

The set σ0(T) will play a rather important role in the sequel. It consists of an open set τ0(T) and the set π0(T) of those isolated points of σ(T) whose Riesz idempotents have finite rank. The next proposition says that π0(T) is precisely the set of all isolated points of σ(T) that lie in σ0(T). Let Eλ ∈ B[H] be the Riesz idempotent associated with an isolated point λ of the spectrum σ(T) of an operator T ∈ B[H].

Theorem 5.19. If λ is an isolated point of σ(T), then the following assertions are pairwise equivalent.

(a) λ ∈ σ0(T).
(b) λ ∉ σe(T).
(c) λI − T ∈ F.
(d) R(λI − T) is closed and dim N(λI − T) < ∞.
(e) λ ∉ σℓe(T) ∩ σre(T).
(f) dim R(Eλ) < ∞.

Proof. Let T be an operator on an infinite-dimensional complex Hilbert space.

(a)⇒(b)⇒(c). By definition, we have σ0(T) = {λ ∈ σ(T): λI − T ∈ W} = {λ ∈ σ(T): λI − T ∈ F and ind(λI − T) = 0}. Thus (a) implies (c) tautologically, and (c) is equivalent to (b) by Corollary 5.12.

(c)⇒(d). By Corollary 5.2, λI − T ∈ F if and only if R(λI − T) is closed, dim N(λI − T) < ∞, and dim N(λ̄I − T∗) < ∞. Thus (c) implies (d) trivially.

(d)⇒(e). By Corollary 5.14, σℓe(T) ∩ σre(T) = {λ ∈ C : R(λI − T) is not closed or dim N(λI − T) = dim N(λ̄I − T∗) = ∞}. Thus (d) implies (e).

From now on suppose λ is an isolated point of the spectrum σ(T) of T.

(e)⇒(a). Observe that, if λ ∈ σ(T)\σ0(T), then λ ∈ σe(T) ∪ ⋃_{k∈Z̄\{0}} σk(T) by Corollary 5.18, where ⋃_{k∈Z̄\{0}} σk(T) is an open subset of C by Theorem 5.16, so it has no isolated point. Thus, if λ is an isolated point of σ(T) lying in σ(T)\σ0(T), then λ lies in σe(T) = (σℓe(T) ∩ σre(T)) ∪ (σ+∞(T) ∪ σ−∞(T)), where the set σ+∞(T) ∪ σ−∞(T) is open in C, and hence has no isolated point as well (cf. Theorem 5.16 and Corollary 5.18 again). Therefore, λ ∈ σℓe(T) ∩ σre(T). In other words, if λ is an isolated point of σ(T) and λ ∉ σℓe(T) ∩ σre(T), then λ ∈ σ0(T). Thus (e) implies (a). This concludes the proof that (a), (b), (c), (d), and (e) are pairwise equivalent. Next we show that (f) also is equivalent to them.

(f)⇒(d). If λ is an isolated point of σ(T), then Δ = {λ} is a spectral set for the operator T. Thus Corollary 4.22 ensures that (f) implies (d).

(a)⇒(f). Let π: B[H] → B[H]/B∞[H] be the natural map of B[H] into the Calkin algebra B[H]/B∞[H], which is a unital homomorphism of the unital Banach algebra B[H] to the unital Banach algebra B[H]/B∞[H], whenever the complex Hilbert space H is infinite dimensional.
Claim. If T ∈ B[H] and ν ∈ ρ(T), then ν ∈ ρ(π(T)) and

π((νI − T)−1) = (ν π(I) − π(T))−1.

Proof. Observe that, since B∞[H] is an ideal of B[H],

π((νI − T)−1) π(νI − T) = ((νI − T)−1 + B∞[H]) ((νI − T) + B∞[H]) = (νI − T)−1 (νI − T) + B∞[H] = I + B∞[H] = π(I) = [I],

where π(I) = [I] is the identity in B[H]/B∞[H]. Therefore,

π((νI − T)−1) = (π(νI − T))−1 = ((νI − T) + B∞[H])−1 = (ν π(I) − π(T))−1,

completing the proof of the claimed result.

Suppose the isolated point λ of σ(T) lies in σ0(T). According to Corollary 5.18, λ ∉ σe(T), which means, by definition, that λ ∉ σ(π(T)), and hence λ ∈ ρ(π(T)). That is, λπ(I) − π(T) is invertible in B[H]/B∞[H]. Since the resolvent function of an element in a unital complex Banach algebra is analytic on the resolvent set (cf. proof of Claim 1 in the proof of Lemma 4.13), it then follows that (ν π(I) − π(T))−1: Λ → B[H]/B∞[H] is analytic on any nonempty open subset Λ of C such that (ins Γλ)− ⊂ Λ, where Γλ is any simple closed rectifiable positively oriented curve (e.g., any positively oriented circle) enclosing λ but no other point of σ(T). Thus the Cauchy Theorem (cf. Claim 2 in the proof of Lemma 4.13) says that ∫_{Γλ} (ν π(I) − π(T))−1 dν = [O], where [O] = π(O) = B∞[H] is the origin in the Calkin algebra B[H]/B∞[H]. Then, with Eλ = (1/2πi) ∫_{Γλ} (νI − T)−1 dν standing for the Riesz idempotent associated with λ, and since π: B[H] → B[H]/B∞[H] is linear and bounded,

π(Eλ) = (1/2πi) ∫_{Γλ} π((νI − T)−1) dν = (1/2πi) ∫_{Γλ} (ν π(I) − π(T))−1 dν = [O],

which means that Eλ ∈ B∞[H]. But the restriction of a compact operator to an invariant subspace is again compact (cf. Proposition 1.V), and so the identity I = Eλ|R(Eλ): R(Eλ) → R(Eλ) on R(Eλ) is compact, and therefore dim R(Eλ) < ∞ (cf. Proposition 1.Y). Thus (a) implies (f).

A Riesz point of an operator T is an isolated point λ of σ(T) for which the Riesz idempotent Eλ has finite rank (i.e., for which dim R(Eλ) < ∞). Let σiso(T) denote the set of all isolated points of the spectrum σ(T),

σiso(T) = {λ ∈ σ(T): λ is an isolated point of σ(T)},

so that its complement in σ(T),

σacc(T) = σ(T)\σiso(T),
is the set of all accumulation points of σ(T). Let π0(T) denote the set of all isolated points of σ(T) that lie in σ0(T); that is,

π0(T) = σiso(T) ∩ σ0(T).

Recall: σ0(T) ⊆ σP4(T) ⊆ σP(T). Thus Theorem 5.19 says that π0(T) is precisely the set of all Riesz points of T, which are eigenvalues of T,

π0(T) = {λ ∈ σiso(T): dim R(Eλ) < ∞} ⊆ σP(T),

and also that

σiso(T)\σe(T) = σiso(T) ∩ σ0(T) ⊆ σP4(T) ⊆ σP(T).

Summing up (cf. Theorem 5.19 and the diagram of Section 2.2):

π0(T) = {λ ∈ σP(T): λ ∈ σiso(T) and λ ∈ σ0(T)} = σiso(T) ∩ σ0(T)
      = {λ ∈ σP(T): λ ∈ σiso(T) and λ ∉ σe(T)} = σiso(T)\σe(T)
      = {λ ∈ σP(T): λ ∈ σiso(T) and dim R(Eλ) < ∞}
      = {λ ∈ σP(T): λ ∈ σiso(T) and λ ∉ σℓe(T) ∩ σre(T)}
      = {λ ∈ σP(T): λ ∈ σiso(T), dim N(λI − T) < ∞, and R(λI − T) is closed}
      = {λ ∈ σP4(T): λ ∈ σiso(T) and dim N(λI − T) = dim N(λ̄I − T∗) < ∞}.

Let τ0(T) denote the complement of π0(T) in σ0(T),

τ0(T) = σ0(T)\π0(T) ⊆ σP4(T),

so that {π0(T), τ0(T)} is a partition of σ0(T). That is, τ0(T) ∩ π0(T) = ∅ and σ0(T) = τ0(T) ∪ π0(T).

Corollary 5.20. τ0(T) is an open subset of C.

Proof. Observe that τ0(T) = σ0(T)\π0(T) ⊆ σ(T) has no isolated point. Moreover, if λ ∈ τ0(T), then λI − T ∈ W and λ ∈ C\(σℓe(T) ∩ σre(T)) (by Theorem 5.19). Thus (since λ is not an isolated point), it follows by Proposition 5.D that there exists a nonempty open ball Bε(λ) centered at λ and included in τ0(T). Therefore τ0(T) is an open subset of C.

Let ∂σ(T) denote the boundary of σ(T), as usual.

Corollary 5.21. If λ ∈ ∂σ(T), then either λ is an isolated point of σ(T) in π0(T) or λ ∈ σℓe(T) ∩ σre(T).

Proof. If λ ∈ ∂σ(T), then λ ∉ ⋃_{k∈Z\{0}} σk(T) ∪ σ+∞(T) ∪ σ−∞(T), because this is a subset of the closed set σ(T) that is open in C (cf. Theorem 5.16). Thus λ ∈ (σℓe(T) ∩ σre(T)) ∪ σ0(T) by Corollary 5.18. But σ0(T) = τ0(T) ∪ π0(T), where π0(T) = σiso(T) ∩ σ0(T), and so π0(T) consists of isolated points only. However, λ ∉ τ0(T), because τ0(T) also is a subset of the closed set σ(T) that is open in C (Corollary 5.20). Thus λ ∈ (σℓe(T) ∩ σre(T)) ∪ π0(T).

Let π00(T) denote the set of all isolated eigenvalues of finite multiplicity,

π00(T) = σiso(T) ∩ σPF(T).

Since σ0(T) ⊆ σPF(T), it follows that π0(T) ⊆ π00(T). Indeed, π0(T) = σiso(T) ∩ σ0(T) ⊆ σiso(T) ∩ σPF(T) = π00(T).

Corollary 5.22. π0(T) = {λ ∈ π00(T): R(λI − T) is closed}.
Proof. If λ ∈ π0(T), then λ ∈ π00(T) and R(λI − T) is closed (since λ ∈ σ0(T)). Conversely, if R(λI − T) is closed and λ ∈ π00(T), then R(λI − T) is closed and dim N(λI − T) < ∞ (since λ ∈ σPF(T)), which means by Theorem 5.19 that λ ∈ σ0(T) (since λ ∈ σiso(T)). Thus λ ∈ σiso(T) ∩ σ0(T) = π0(T).

The set π0(T) of all Riesz points of T is sometimes referred to as the set of isolated eigenvalues of T of finite algebraic multiplicity (sometimes also called normal eigenvalues of T [55, p. 5], but we reserve this terminology for eigenvalues that satisfy the inclusion of Lemma 1.13(a)). The set π00(T) of all isolated eigenvalues of T of finite multiplicity is sometimes referred to as the set of isolated eigenvalues of T of finite geometric multiplicity.

The Weyl spectrum of an operator T ∈ B[H] is the set

σw(T) = ⋂_{K∈B∞[H]} σ(T + K),

which is the largest part of σ(T) that remains unchanged under compact perturbations. That is, σw(T) is the largest part of σ(T) such that

σw(T + K) = σw(T) for every K ∈ B∞[H].
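As a simple illustration of the last few definitions (a standard example, not computed in the text at this point), take the compact diagonal operator D = diag(1, 1/2, 1/3, ...) on ℓ²:

```latex
% D = \mathrm{diag}(1/n)_{n \ge 1} is compact, so by the Fredholm Alternative
% (Remark 5.17(a)) every nonzero point of \sigma(D) lies in \sigma_0(D):
\sigma(D) = \{0\} \cup \{1/n : n \ge 1\}, \qquad \sigma_0(D) = \{1/n : n \ge 1\}.
% Each 1/n is an isolated eigenvalue with a one-dimensional Riesz idempotent:
\pi_0(D) = \pi_{00}(D) = \{1/n : n \ge 1\}, \qquad \tau_0(D) = \varnothing .
% Every 1/n can be removed by some compact perturbation, while 0 cannot
% (D + K is compact, and a compact operator on an infinite-dimensional
% space is never invertible), so
\sigma_w(D) = \{0\} = \sigma_e(D).
```

By contrast, it can be shown that for the unilateral shift S no point of the spectrum can be removed by a compact perturbation, so that σw(S) = σ(S) = 𝔻‾.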
Another characterization of σw(T) will be given in Theorem 5.24.

Lemma 5.23.

(a) If T ∈ SF with ind(T) ≤ 0, then T ∈ Fℓ and there exists a compact operator K ∈ B∞[H] such that T + K is left invertible (i.e., such that there exists an operator A ∈ B[H] for which A(T + K) = I).

(b) If T ∈ SF with ind(T) ≥ 0, then T ∈ Fr and there exists a compact operator K ∈ B∞[H] such that T + K is right invertible (i.e., such that there exists an operator A ∈ B[H] for which (T + K)A = I).

(c) If T ∈ SF, then ind(T) = 0 if and only if there exists a compact operator K ∈ B∞[H] such that T + K is invertible (i.e., such that there exists an operator A ∈ B[H] for which A(T + K) = (T + K)A = I).
Proof. Take an operator T in B[H].

Claim. If T ∈ SF with ind(T) ≤ 0, then T ∈ Fℓ and there is a compact (actually, a finite-rank) operator K ∈ B∞[H] such that N(T + K) = {0}.

Proof. Take T ∈ SF. If ind(T) ≤ 0, then dim N(T) − dim N(T∗) ≤ 0. Thus dim N(T) ≤ dim N(T∗) and dim N(T) < ∞ (so that T ∈ Fℓ by Theorem 5.1). Let {ei}ni=1 be an orthonormal basis for N(T) and let B be an orthonormal basis for N(T∗) = R(T)⊥ (cf. Lemma 1.4), whose cardinality is not less than n (since dim N(T) ≤ dim N(T∗)). Take any orthonormal set {fj}nj=1 ⊆ B and define a map K: H → H by

Kx = Σnj=1 ⟨x ; ej⟩ fj for every x ∈ H,

which is clearly linear, and bounded as well (a contraction, actually). In fact, this is a finite-rank operator (thus compact); that is, K lies in B0[H] ⊆ B∞[H], and its range is included in R(T)⊥. Indeed,

R(K) ⊆ span{fj}nj=1 ⊆ N(T∗) = R(T)⊥.

Take any x ∈ N(T) and consider its Fourier series expansion with respect to the orthonormal basis {ei}ni=1, namely, x = Σni=1 ⟨x ; ei⟩ ei. Observe that

‖x‖2 = Σni=1 |⟨x ; ei⟩|2 = ‖Kx‖2 for every x ∈ N(T).

Now, if x ∈ N(T + K), then Tx = −Kx, and hence Tx ∈ R(T) ∩ R(K) ⊆ R(T) ∩ R(T)⊥ = {0}, so that Tx = 0 (i.e., x ∈ N(T)). Thus ‖x‖ = ‖Kx‖ = ‖Tx‖ = 0, and hence x = 0. Therefore we get the claimed result: N(T + K) = {0}.

(a) If T ∈ SF with ind(T) ≤ 0, then T ∈ Fℓ and there exists K ∈ B∞[H] such that N(T + K) = {0} by the preceding claim. Now T + K ∈ Fℓ according to Theorem 5.6, and so R(T + K) is closed by Theorem 5.1. Thus T + K is left invertible by Lemma 5.8.

(b) If T ∈ SF with ind(T) ≥ 0, then T∗ ∈ SF with ind(T∗) = −ind(T) ≤ 0. Therefore, according to item (a), T∗ ∈ Fℓ, so that T ∈ Fr, and there exists K ∈ B∞[H] such that T∗ + K is left invertible. Hence T + K∗ = (T∗ + K)∗ is right invertible with K∗ ∈ B∞[H] (cf. Proposition 1.W).

(c) If T ∈ SF with ind(T) = 0, then T ∈ F and there exists K ∈ B∞[H] such that N(T + K) = {0} by the preceding claim. Since ind(T + K) = ind(T) = 0 (Theorem 5.6), we get N((T + K)∗) = {0}. So R(T + K)⊥ = {0} (Lemma 1.4), and hence R(T + K) = H. Thus T + K is invertible (Theorem 1.1). Conversely, if there exists K ∈ B∞[H] such that T + K is invertible, then T + K is Weyl (recall that every invertible operator is Weyl), and so is T (Theorem 5.6); that is, T ∈ F with ind(T) = 0.
According to the claim in the preceding proof, the statement of Lemma 5.23 holds if "compact" is replaced with "finite-rank" (see Remark 5.3(b)).

Theorem 5.24. (Schechter Theorem). If T ∈ B[H], then

σw(T) = σe(T) ∪ ⋃_{k∈Z\{0}} σk(T) = σ(T)\σ0(T).

Proof. Take an operator T in B[H].

Claim. If λ ∈ σ0(T), then there is a K ∈ B∞[H] such that λ ∉ σ(T + K).

Proof. If λ ∈ σ0(T), then λI − T ∈ SF with ind(λI − T) = 0. Thus, according to Lemma 5.23(c), λI − (T + K) is invertible for some K ∈ B∞[H], which implies that λ ∈ ρ(T + K). This completes the proof of the claimed result.

Recall from Corollary 5.18 that

σ(T) = σe(T) ∪ ⋃_{k∈Z\{0}} σk(T) ∪ σ0(T),

where the above sets are all pairwise disjoint, so that

σ0(T) = σ(T)\(σe(T) ∪ ⋃_{k∈Z\{0}} σk(T)).

Take an arbitrary λ ∈ σ(T). If λ lies in σe(T) ∪ ⋃_{k∈Z\{0}} σk(T), then λ lies in σe(T + K) ∪ ⋃_{k∈Z\{0}} σk(T + K) for every K ∈ B∞[H], since

σe(T + K) = σe(T) and σk(T + K) = σk(T) for every k ∈ Z\{0}

for every K ∈ B∞[H] (cf. Remarks 5.15(c) and 5.17(c)). On the other hand, if λ ∈ σ0(T), then the preceding claim ensures that there exists K ∈ B∞[H] for which λ ∉ σ(T + K). Therefore, as the largest part of σ(T) that remains invariant under compact perturbation is σw(T),

σw(T) = σe(T) ∪ ⋃_{k∈Z\{0}} σk(T) = σ(T)\σ0(T).

Since σw(T) ⊆ σ(T) by the very definition of σw(T), we get

σe(T) ⊆ σw(T) ⊆ σ(T).

Chronologically, Theorem 5.24 precedes the spectral picture of Corollary 5.18 [81]. Since σk(T) ⊆ σ(T)\σe(T) for all k ∈ Z, it also follows that

σw(T)\σe(T) = ⋃_{k∈Z\{0}} σk(T),

which is the collection of all holes of the essential spectrum. Thus

σe(T) = σw(T) ⇐⇒ ⋃_{k∈Z\{0}} σk(T) = ∅
5. Fredholm Theory
(i.e., if and only if the essential spectrum has no holes). Since σw(T) and σ0(T) are both subsets of σ(T), it follows by Theorem 5.24 that σ0(T) is the complement of σw(T) in σ(T),
σ0(T) = σ(T)\σw(T),
and therefore {σw(T), σ0(T)} forms a partition of the spectrum σ(T):
σ(T) = σw(T) ∪ σ0(T) and σw(T) ∩ σ0(T) = ∅.
Thus
σw(T) = σ(T) ⇐⇒ σ0(T) = ∅,
and so, by Theorem 5.24 again,
σe(T) = σw(T) = σ(T) ⇐⇒ ⋃_{k∈Z} σk(T) = ∅.
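For instance (a standard example, supplied here for illustration rather than taken from the text): the unilateral shift S on ℓ², S(x1, x2, …) = (0, x1, x2, …), realizes σw(T) = σ(T) with σ0(T) = ∅.

```latex
% Known facts about the unilateral shift S (stated, not derived, here):
\sigma(S) = \overline{\mathbb{D}}, \qquad \sigma_e(S) = \partial\mathbb{D},
\qquad \operatorname{ind}(\lambda I - S) = -1 \quad\text{for } |\lambda| < 1,
% so the essential spectrum has the single hole \sigma_{-1}(S) = \mathbb{D} and
\sigma_w(S) = \sigma_e(S) \cup \sigma_{-1}(S) = \overline{\mathbb{D}} = \sigma(S),
\qquad \sigma_0(S) = \sigma(S)\setminus\sigma_w(S) = \varnothing.
```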
Moreover, the Weyl spectrum σw(T) is always a compact set in C because it is, by definition, the intersection ⋂_{K∈B∞[H]} σ(T + K) of compact sets in C, and so σ0(T) is closed in C if and only if σ0(T) = π0(T) ⊆ σiso(T) (why?). Here is a third characterization of the Weyl spectrum of T ∈ B[H].
Corollary 5.25. For every operator T ∈ B[H],
σw(T) = {λ ∈ C : λI − T ∈ B[H]\W}.
Proof. If λ ∈ ρ(T), then λI − T is invertible, and so λI − T ∈ W (invertible operators are Weyl). Therefore, if λI − T ∉ W, then λ ∈ σ(T), and the result follows from Theorem 5.24 and the definition of σ0(T): σw(T) = σ(T)\σ0(T) and σ0(T) = {λ ∈ σ(T): λI − T ∈ W}.
Then the Weyl spectrum σw(T) is the set of all scalars λ for which λI − T is not a Weyl operator (i.e., for which λI − T is not a Fredholm operator of index zero). Since an operator lies in W together with its adjoint, it follows by Corollary 5.25 that λ ∈ σw(T) if and only if λ̄ ∈ σw(T*):
σw(T) = σw(T*)*.
Theorem 5.24 also gives us another characterization of the set W of all Weyl operators from B[H] in terms of the set σ0(T) = σ(T)\σw(T).
Corollary 5.26. W = {T ∈ B[H]: 0 ∈ ρ(T) ∪ σ0(T)} = {T ∈ F : 0 ∈ ρ(T) ∪ σ0(T)}.
Proof. Take T ∈ B[H]. If T ∈ W, then T ∈ F and T + K is invertible for some K ∈ B∞[H] by Lemma 5.23(c), which means that 0 ∈ ρ(T + K) or, equivalently, 0 ∉ σ(T + K), and so 0 ∉ σw(T + K). Thus 0 ∉ σw(T) (cf. definition
of the Weyl spectrum that precedes Lemma 5.23), and hence 0 ∈ ρ(T) ∪ σ0(T) by Theorem 5.24. The converse is (almost) trivial: if 0 ∈ ρ(T) ∪ σ0(T), then either 0 ∈ ρ(T), so that T ∈ G[H] ⊂ W by Remark 5.7(b); or 0 ∈ σ0(T), which means that 0 ∈ σ(T) and T ∈ W by the very definition of σ0(T). This proves the first identity, which implies the second one since W ⊆ F.
Remark 5.27. (a) Finite Dimensional. Take any operator T ∈ B[H]. Since σe(T) ⊆ σw(T), it follows by Remark 5.15(a) that
σw(T) = ∅ =⇒ dim H < ∞
and, by Remark 5.3(c) and Corollary 5.25,
dim H < ∞ =⇒ σw(T) = ∅.
Thus
σw(T) ≠ ∅ ⇐⇒ dim H = ∞.
(b) More on Finite Dimensional. Let T ∈ B[H] be an operator on a finite-dimensional space. Then B[H] = W by Remark 5.3(c), so that σ0(T) = σ(T) by the definition of σ0(T). Moreover, Corollary 2.9 ensures that σ(T) = σiso(T) (since #σ(T) < ∞), and σ(T) = σPF(T) (since dim H < ∞). Hence,
dim H < ∞ =⇒ σ(T) = σPF(T) = σiso(T) = σ0(T) = π0(T) = π00(T)
and so, either by Theorem 5.24 or by item (a) and Remark 5.15(a),
dim H < ∞ =⇒ σe(T) = σw(T) = ∅.
(c) Fredholm Alternative. The Fredholm Alternative has been restated again and again in Corollary 1.20, Theorem 2.18, Remark 5.3(d), Remark 5.7(b), and Remark 5.17(a). Here is an ultimate restatement of it. If K is compact (i.e., K ∈ B∞[H]), then σ(K)\{0} = σ0(K)\{0} (Remark 5.17(a)). But σ(K)\{0} ⊆ σiso(K) by Corollary 2.20, and π0(K) = σiso(K) ∩ σ0(K). Therefore (compare with Theorem 2.18 and Remark 5.17(a)),
K ∈ B∞[H] =⇒ σ(K)\{0} = π0(K)\{0}.
Since π0(K) = σiso(K) ∩ σ0(K) ⊆ σiso(K) ∩ σPF(K) = π00(K) ⊆ σPF(K) ⊆ σP(K),
σ(K)\{0} = σPF(K)\{0} = π00(K)\{0} = π0(K)\{0}.
(d) More on Compact Operators. Take a compact operator K ∈ B∞[H]. Remark 5.3(d) (Fredholm Alternative) says that, if λ ≠ 0, then λI − K ∈ W. Thus, according to Corollary 5.25, σw(K)\{0} = ∅. If dim H = ∞, then σw(K) = {0} by item (a). Since σe(K) ⊆ σw(K), it follows by Remark 5.15(a) that, if dim H = ∞, then ∅ ≠ σe(K) ⊆ {0}. Hence,
K ∈ B∞[H] and dim H = ∞ =⇒ σe(K) = σw(K) = {0}.
Therefore, by the Fredholm Alternative of item (c),
K ∈ B∞[H] and dim H = ∞ =⇒ σ(K) = π0(K) ∪ {0} and σw(K) = {0}.
(e) Normal Operators. Recall that T is normal if and only if λI − T is normal. Since every normal Fredholm operator is Weyl (see Remark 5.7(b)), and according to Corollaries 5.12 and 5.25, it follows that
T normal =⇒ σe(T) = σw(T).
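Items (c), (d), and (e) are all visible in one standard example (supplied here for illustration; it is not worked out in the passage above): the diagonal operator K e_n = (1/n) e_n on ℓ², which is compact and self-adjoint, hence normal.

```latex
\sigma(K) = \{0\} \cup \{1/n : n \in \mathbb{N}\}, \qquad
\sigma_e(K) = \sigma_w(K) = \{0\},
% and every 1/n is an isolated eigenvalue of finite multiplicity, so
\pi_0(K) = \pi_{00}(K) = \{1/n : n \in \mathbb{N}\}, \qquad
\sigma(K) = \pi_0(K) \cup \{0\}.
```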
Take any T ∈ B[H] and consider the following expressions (cf. Theorem 5.24 or Corollary 5.25, and the very definitions of σ0(T), π0(T), and π00(T)):
σ0(T) = σ(T)\σw(T), π0(T) = σiso(T) ∩ σ0(T), π00(T) = σiso(T) ∩ σPF(T).
The equivalent assertions in the next corollary define an important class of operators. An operator for which any of those equivalent assertions holds is said to satisfy Weyl’s Theorem. This will be discussed later in Section 5.5.
Corollary 5.28. The assertions below are pairwise equivalent.
(a) σ0(T) = π00(T).
(b) σ(T)\π00(T) = σw(T).
Moreover, if σ0(T) = σiso(T) (i.e., if σw(T) = σacc(T)), then the above equivalent assertions hold true. Conversely, if any of the above equivalent assertions holds true, then π0(T) = π00(T).
Proof. By Theorem 5.24, σ(T)\σ0(T) = σw(T), and so (a) implies (b). If (b) holds, then σ0(T) = σ(T)\σw(T) = σ(T)\(σ(T)\π00(T)) = π00(T), because π00(T) ⊆ σ(T), and so (b) implies (a). Moreover, if σ0(T) = σiso(T) (or, if their complements in σ(T) coincide), then (a) holds because σ0(T) ⊆ σPF(T). Conversely, if (a) holds, then π0(T) = σiso(T) ∩ σ0(T) = σiso(T) ∩ π00(T) = π00(T) because π00(T) ⊆ σiso(T).
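For orientation (a classical fact quoted here for illustration; it is not proved in the passage above): the hermitian case goes back to Weyl (1909), and the conclusion is known to extend to normal operators, so every normal operator satisfies the equivalent assertions of Corollary 5.28; that is,

```latex
T \ \text{normal} \ \Longrightarrow\ \sigma_w(T) = \sigma(T)\setminus\pi_{00}(T),
% i.e., the points of \sigma(T) that survive every compact perturbation are
% exactly those that are not isolated eigenvalues of finite multiplicity.
```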
5.4 Ascent, Descent, and Browder Spectrum
Take any T ∈ B[H] (or, more generally, take any linear transformation of a linear space into itself). Let N0 be the set of all nonnegative integers, and consider the nonnegative integral powers T^n of T. It is readily verified that, for n ∈ N0,
N(T^n) ⊆ N(T^{n+1}) and R(T^{n+1}) ⊆ R(T^n),
which means that {N(T^n)} and {R(T^n)} are nondecreasing and nonincreasing (in the inclusion ordering) sequences of subsets of H, respectively.
Lemma 5.29. Let n0 be an arbitrary integer in N0.
(a) If N(T^{n0+1}) = N(T^{n0}), then N(T^{n+1}) = N(T^n) for every n ≥ n0.
(b) If R(T^{n0+1}) = R(T^{n0}), then R(T^{n+1}) = R(T^n) for every n ≥ n0.
Proof. (a) Rewrite the statement in (a) as follows.
N(T^{n0+1}) = N(T^{n0}) =⇒ N(T^{n0+k+1}) = N(T^{n0+k}) for every k ≥ 0.
The claimed result holds trivially for k = 0. Suppose it holds for some k ≥ 0. Take an arbitrary x ∈ N(T^{n0+k+2}), so that T^{n0+k+1}(Tx) = T^{n0+k+2}x = 0. Thus Tx ∈ N(T^{n0+k+1}) = N(T^{n0+k}), and so T^{n0+k+1}x = T^{n0+k}(Tx) = 0, which implies that x ∈ N(T^{n0+k+1}). Hence N(T^{n0+k+2}) ⊆ N(T^{n0+k+1}). But N(T^{n0+k+1}) ⊆ N(T^{n0+k+2}) since {N(T^n)} is nondecreasing. Therefore N(T^{n0+k+2}) = N(T^{n0+k+1}), and the claimed result holds for k + 1 whenever it holds for k, which completes the proof of (a) by induction.
(b) Rewrite the statement in (b) as follows.
R(T^{n0+1}) = R(T^{n0}) =⇒ R(T^{n0+k+1}) = R(T^{n0+k}) for every k ≥ 0.
The claimed result holds trivially for k = 0. Suppose it holds for some integer k ≥ 0. Take an arbitrary y ∈ R(T^{n0+k+1}), so that y = T^{n0+k+1}x = T(T^{n0+k}x) for some x ∈ H, and hence y = Tu for some u ∈ R(T^{n0+k}). If R(T^{n0+k}) = R(T^{n0+k+1}), then u ∈ R(T^{n0+k+1}), and so y = T(T^{n0+k+1}v) for some v ∈ H. Thus y ∈ R(T^{n0+k+2}). Therefore R(T^{n0+k+1}) ⊆ R(T^{n0+k+2}). Since the sequence {R(T^n)} is nonincreasing, it follows that R(T^{n0+k+2}) ⊆ R(T^{n0+k+1}). Hence R(T^{n0+k+2}) = R(T^{n0+k+1}). Thus the claimed result holds for k + 1 whenever it holds for k, which completes the proof of (b) by induction.
Let N̄0 = N0 ∪ {+∞} denote the set of all extended nonnegative integers with its natural (extended) ordering. The ascent and descent of an operator T ∈ B[H] are defined as follows. The ascent of T is the least (extended) nonnegative integer such that N(T^{n+1}) = N(T^n), that is,
asc(T) = min{n ∈ N̄0 : N(T^{n+1}) = N(T^n)},
and the descent of T is the least (extended) nonnegative integer such that R(T^{n+1}) = R(T^n), that is,
dsc(T) = min{n ∈ N̄0 : R(T^{n+1}) = R(T^n)}.
Note that the ascent of T is null if and only if T is injective, and the descent of T is null if and only if T is surjective. Indeed, since N(I) = {0} and R(I) = H,
asc(T) = 0 ⇐⇒ N(T) = {0}, dsc(T) = 0 ⇐⇒ R(T) = H.
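In finite dimensions both quantities can be read off from ranks of powers, since dim N(T^k) = dim H − rank(T^k) and dim R(T^k) = rank(T^k), and both chains must stabilize after at most dim H steps. A minimal numpy sketch (illustrative only; the function name asc_dsc is ours, not from the text):

```python
import numpy as np

def asc_dsc(T, tol=1e-10):
    """Ascent and descent of a square matrix T, read off from ranks of powers.

    dim N(T^k) = n - rank(T^k) and dim R(T^k) = rank(T^k); since the kernel
    chain is nondecreasing and the range chain is nonincreasing, equality of
    consecutive dimensions is equality of the subspaces themselves.
    """
    n = T.shape[0]
    ranks = [n]                      # rank(T^0) = rank(I) = n
    P = np.eye(n)
    for _ in range(n + 1):           # both chains stabilize within n steps
        P = P @ T
        ranks.append(np.linalg.matrix_rank(P, tol=tol))
    kers = [n - r for r in ranks]    # dim N(T^k)
    asc = next(k for k in range(n + 1) if kers[k + 1] == kers[k])
    dsc = next(k for k in range(n + 1) if ranks[k + 1] == ranks[k])
    return asc, dsc

# A 3x3 nilpotent Jordan block: the kernels N(T^k) grow strictly up to k = 3,
# so asc = dsc = 3; a diagonalizable singular matrix stabilizes at k = 1.
print(asc_dsc(np.diag([1.0, 1.0], k=1)))   # (3, 3)
print(asc_dsc(np.diag([0.0, 1.0, 2.0])))   # (1, 1)
```

In infinite dimensions the two can differ: the unilateral shift S is injective, so asc(S) = 0, while R(S^{n+1}) is properly contained in R(S^n) for every n, so dsc(S) = ∞.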
Again, the notion of ascent and descent makes sense for every linear transformation of a linear space into itself, and so does the next result.
Lemma 5.30. Take any operator T ∈ B[H].
(a) If asc(T) < ∞ and dsc(T) = 0, then asc(T) = 0.
(b) If asc(T) < ∞ and dsc(T) < ∞, then asc(T) = dsc(T).
Proof. (a) Suppose dsc(T) = 0 (i.e., suppose R(T) = H). If asc(T) ≠ 0 (i.e., if N(T) ≠ {0}), then take 0 ≠ x1 ∈ N(T) ∩ R(T) and x2, x3 in R(T) = H such that x1 = Tx2 and x2 = Tx3, and so x1 = T^2x3. Continuing this way, we can construct a sequence {x_n}_{n≥1} of vectors in H = R(T) such that x_n = Tx_{n+1} and 0 ≠ x1 = T^n x_{n+1} lies in N(T), and so T^{n+1}x_{n+1} = 0. This implies that x_{n+1} ∈ N(T^{n+1})\N(T^n) for each n ≥ 1, and hence asc(T) = ∞ by Lemma 5.29. Summing up: if dsc(T) = 0, then asc(T) ≠ 0 implies asc(T) = ∞.
(b) Set m = dsc(T), so that R(T^m) = R(T^{m+1}), and set S = T|R(T^m). It is readily verified by using Proposition 1.E that R(T^m) is T-invariant, and so S ∈ B[R(T^m)]. Note that R(S) = S(R(T^m)) = R(ST^m) = R(T^{m+1}) = R(T^m). Thus S is surjective, which means that dsc(S) = 0. Moreover, since asc(S) < ∞ (because asc(T) < ∞), it follows by (a) that asc(S) = 0; that is, N(S) = {0}. Take x ∈ N(T^{m+1}) and set y = T^m x in R(T^m), so that Sy = T^{m+1}x = 0. Thus y = 0, and hence x ∈ N(T^m). Then N(T^{m+1}) ⊆ N(T^m), and so N(T^{m+1}) = N(T^m) since {N(T^n)} is nondecreasing. This implies that asc(T) ≤ m by Lemma 5.29. To prove the reverse inequality, suppose m ≠ 0 (otherwise apply (a)) and take z in R(T^{m−1})\R(T^m), so that z = T^{m−1}u and Tz = T(T^{m−1}u) = T^m u lies in R(T^m), for some u in H. Since T^m(R(T^m)) = R(T^{2m}) = R(T^m), we get Tz = T^m v for some v in R(T^m). Now note that T^m(u − v) = Tz − Tz = 0 and T^{m−1}(u − v) = z − T^{m−1}v ≠ 0 (reason: T^{m−1}v ∈ R(T^{2m−1}) = R(T^m) since v ∈ R(T^m), and z ∉ R(T^m)). Therefore, (u − v) ∈ N(T^m)\N(T^{m−1}), and so asc(T) ≥ m.
Outcome: if dsc(T) = m and asc(T) < ∞, then asc(T) = m.
Lemma 5.31. If T ∈ F, then asc(T) = dsc(T*) and dsc(T) = asc(T*).
Proof. Take any T ∈ B[H]. We shall freely use the relations between range and kernel, involving adjoints and orthogonal complements, of Lemma 1.4.
Claim 1. asc(T) < ∞ if and only if dsc(T*) < ∞; dsc(T) < ∞ if and only if asc(T*) < ∞.
Proof. Take an arbitrary n ∈ N0. If asc(T) = ∞, then N(T^n) ⊂ N(T^{n+1}), so that N(T^{n+1})⊥ ⊂ N(T^n)⊥ or, equivalently, R(T^{*(n+1)})⁻ ⊂ R(T^{*n})⁻, which implies that R(T^{*(n+1)}) ⊂ R(T^{*n}), and therefore dsc(T*) = ∞. Dually, if asc(T*) = ∞, then dsc(T) = ∞. That is,
asc(T) = ∞ =⇒ dsc(T*) = ∞ and asc(T*) = ∞ =⇒ dsc(T) = ∞.
If dsc(T) = ∞, then R(T^{n+1}) ⊂ R(T^n), so that R(T^n)⊥ ⊂ R(T^{n+1})⊥ or, equivalently, N(T^{*n}) ⊂ N(T^{*(n+1)}), and therefore asc(T*) = ∞. Dually, if dsc(T*) = ∞, then asc(T) = ∞. That is,
dsc(T) = ∞ =⇒ asc(T*) = ∞ and dsc(T*) = ∞ =⇒ asc(T) = ∞.
Summing up: asc(T) = ∞ if and only if dsc(T*) = ∞, and dsc(T) = ∞ if and only if asc(T*) = ∞, which completes the proof of Claim 1.
Claim 2. If dsc(T) < ∞, then asc(T*) ≤ dsc(T). If dsc(T*) < ∞, then asc(T) ≤ dsc(T*).
Proof. If dsc(T) < ∞, then set n0 = dsc(T) ∈ N0 and take any integer n ≥ n0. Thus R(T^n) = R(T^{n0}), and hence R(T^n)⁻ = R(T^{n0})⁻, so that N(T^{*n})⊥ = N(T^{*n0})⊥, which implies N(T^{*n}) = N(T^{*n0}). Therefore asc(T*) ≤ n0, and so asc(T*) ≤ dsc(T). Dually, if dsc(T*) < ∞, then asc(T) ≤ dsc(T*). This completes the proof of Claim 2.
Claim 3. If asc(T) < ∞ and R(T^n) is closed for every n ≥ asc(T), then dsc(T*) ≤ asc(T). If asc(T*) < ∞ and R(T^n) is closed for every n ≥ asc(T*), then dsc(T) ≤ asc(T*).
Proof. If asc(T) < ∞, then set n0 = asc(T) ∈ N0 and take an arbitrary integer n ≥ n0. Thus N(T^n) = N(T^{n0}) or, equivalently, R(T^{*n})⊥ = R(T^{*n0})⊥, so that R(T^{*n0})⁻ = R(T^{*n})⁻. Suppose R(T^n) is closed. Thus R(T^{*n}) = R(T^{n*}) is closed (Lemma 1.5). Then R(T^{*n}) = R(T^{*n0}), and therefore dsc(T*) ≤ n0, so that dsc(T*) ≤ asc(T). Dually, if asc(T*) < ∞ and R(T^n) is closed (i.e., R(T^{*n}) is closed — Lemma 1.5) for every integer n ≥ asc(T*), then it follows that dsc(T) ≤ asc(T*); completing the proof of Claim 3.
Outcome: if R(T^n) is closed for every n ∈ N0 (so that we can apply Claim 3), then it follows by Claims 1, 2, and 3 that
asc(T) = dsc(T*) and dsc(T) = asc(T*).
In particular, if T ∈ F, then T^n ∈ F by Corollary 5.5, which implies that R(T^n) is closed for every n ∈ N0 by Corollary 5.2. Therefore, if T is Fredholm, then asc(T) = dsc(T*) and dsc(T) = asc(T*).
A Browder operator is a Fredholm operator with finite ascent and finite descent. Let B denote the class of all Browder operators from B[H]:
B = {T ∈ F : asc(T) < ∞ and dsc(T) < ∞};
equivalently, according to Lemma 5.30(b),
B = {T ∈ F : asc(T) = dsc(T) < ∞}
(i.e., B = {T ∈ F : asc(T) = dsc(T) = m for some m ∈ N0}). Thus
F\B = {T ∈ F : asc(T) = ∞ or dsc(T) = ∞}
and, by Lemma 5.31,
T ∈ B if and only if T* ∈ B.
Two linear manifolds of a linear space X, say R and N, are said to be algebraic complements of each other (or complementary linear manifolds) if they sum to the whole space and are algebraically disjoint; that is, if
R + N = X and R ∩ N = {0}.
A pair of subspaces (i.e., closed linear manifolds) of a normed space that are algebraic complements of each other are called complementary subspaces.
Lemma 5.32. If T ∈ B[H] with asc(T) = dsc(T) = m for some m ∈ N0, then R(T^m) and N(T^m) are algebraic complements of each other.
Proof. Take any T ∈ B[H]. (In fact, as will become clear during the proof, this is a purely algebraic result, which holds for every linear transformation of a linear space into itself with coincident finite ascent and descent.)
Claim 1. If asc(T) = m, then R(T^m) ∩ N(T^m) = {0}.
Proof. If y ∈ R(T^m) ∩ N(T^m), then y = T^m x for some x ∈ H and T^m y = 0, so that T^{2m}x = T^m(T^m x) = T^m y = 0; that is, x ∈ N(T^{2m}) = N(T^m) since asc(T) = m. Hence y = T^m x = 0, proving Claim 1.
Claim 2. If dsc(T) = m, then R(T^m) + N(T^m) = H.
Proof. If dsc(T) = m, then R(T^m) = R(T^{m+1}). Set S = T|R(T^m). Again, by using Proposition 1.E, it is readily verified that R(T^m) is T-invariant, and so S ∈ B[R(T^m)] with R(S) = S(R(T^m)) = R(ST^m) = R(T^{m+1}) = R(T^m). Then S is surjective, and hence S^m is surjective too (a composition of surjections is surjective). Therefore R(S^m) = R(T^m). Take an arbitrary vector x ∈ H. Thus T^m x ∈ R(T^m) = R(S^m), so that there exists u ∈ R(T^m) such that S^m u = T^m x. Since S^m u = T^m u (because S^m = (T|R(T^m))^m = T^m|R(T^m)), it follows that T^m u = T^m x, and so v = x − u is in N(T^m). Then x = u + v lies in R(T^m) + N(T^m). Hence H ⊆ R(T^m) + N(T^m), proving Claim 2.
Theorem 5.33. Take T ∈ B[H] and consider the following assertions.
(a) T is Browder but not invertible (i.e., T ∈ B and T ∉ G[H]).
(b) T ∈ F is such that R(T^m) and N(T^m) are complementary subspaces for some m ∈ N.
(c) T ∈ W.
Claim: (a) =⇒ (b) =⇒ (c).
Proof. (a)⇒(b). Recall that T ∈ B[H] is invertible (equivalently, R(T) = H and N(T) = {0}) if and only if T ∈ G[H] (i.e., T ∈ B[H] has a bounded inverse) by Theorem 1.1. Moreover, every invertible operator is Fredholm (by the definition of Fredholm operators), and R(T) = H and N(T) = {0} if and only if asc(T) = dsc(T) = 0 (by the definition of ascent and descent). Thus every invertible operator is Browder, which in turn is Fredholm,
G[H] ⊂ B ⊂ F,
and it is readily verified that the inclusions are, in fact, proper. Therefore, if T ∈ B and T ∉ G[H], then asc(T) = dsc(T) = m for some integer m ≥ 1, and so N(T^m) and R(T^m) are complementary linear manifolds of H by Lemma 5.32. If T ∈ B, then T ∈ F, so that T^m ∈ F by Corollary 5.5, and hence R(T^m) is closed by Corollary 5.2. Recall that N(T^m) is closed since T^m is bounded. Thus R(T^m) and N(T^m) are subspaces of H.
(b)⇒(c). If (b) holds, then, again, T ∈ F, so that T^m ∈ F by Corollary 5.5, and hence N(T^m) and N(T^{m*}) are finite dimensional, and R(T^m) is closed by Corollary 5.2. Moreover, suppose there exists m ∈ N such that
R(T^m) + N(T^m) = H and R(T^m) ∩ N(T^m) = {0}.
Since R(T^m) is closed, R(T^m) + R(T^m)⊥ = H (Projection Theorem), where R(T^m)⊥ = N(T^{m*}) (Lemma 1.4). Hence,
R(T^m) + N(T^{m*}) = H and R(T^m) ∩ N(T^{m*}) = {0}.
Thus N(T^m) and N(T^{m*}) are both algebraic complements of R(T^m), and so they have the same (finite) dimension (which, in fact, is the codimension of R(T^m) — see, e.g., [66, Theorem 2.18]). Therefore ind(T^m) = 0. Since T ∈ F, it follows that ind(T^m) = m ind(T) = 0 (see Corollary 5.5). Thus (recalling that m ≠ 0), ind(T) = 0, so that T ∈ W.
Therefore, since G[H] ⊂ W (cf. Remark 5.7(b)), it follows that
G[H] ⊂ B ⊂ W ⊂ F.
Note that the inclusion B ⊂ W can be otherwise proved by using Proposition 5.J. Also note that the assertion “T is Browder and not invertible” (i.e., T ∈ B\G[H]) means “T ∈ B and 0 ∈ σ(T)”. The following theorem says that if this happens, then 0 must be an isolated point of σ(T); precisely, T ∈ B and 0 ∈ σ(T) if and only if T ∈ F and 0 ∈ σiso(T).
Theorem 5.34. Take an operator T ∈ B[H].
(a) If T ∈ B and 0 ∈ σ(T), then 0 ∈ π0(T).
(b) If T ∈ F and 0 ∈ σiso(T), then T ∈ B.
Proof. (a) Recall: σ0(T) = {λ ∈ σ(T): λI − T ∈ W} and B ⊂ W. Therefore, if T ∈ B and 0 ∈ σ(T), then 0 ∈ σ0(T). Moreover, Theorem 5.33 ensures that there is an integer m ≥ 1 for which
R(T^m) + N(T^m) = H and R(T^m) ∩ N(T^m) = {0}.
Equivalently,
R(T^m) ⊕ N(T^m) = H
(where the direct sum is not necessarily orthogonal and equality in fact means equivalence; that is, up to a topological isomorphism). Since both R(T^m) and N(T^m) are T^m-invariant, it follows that T^m = T^m|R(T^m) ⊕ T^m|N(T^m). Since N(T^m) ≠ {0} (otherwise T^m would be invertible) and T^m|N(T^m) = O acts on the nonzero space N(T^m), it follows that
T^m = T^m|R(T^m) ⊕ O.
Thus (see Proposition 2.F, even though the direct sum is not orthogonal)
σ(T)^m = σ(T^m) = σ(T^m|R(T^m)) ∪ {0}
by Theorem 2.7 (Spectral Mapping). Since T^m|R(T^m): R(T^m) → R(T^m) is surjective and injective, and since R(T^m) is closed because T^m ∈ F, it follows that T^m|R(T^m) ∈ G[R(T^m)]. In other words, 0 ∈ ρ(T^m|R(T^m)), and therefore 0 ∉ σ(T^m|R(T^m)). Thus σ(T^m) is disconnected, and hence 0 is an isolated point of σ(T)^m = σ(T^m). Then 0 is an isolated point of σ(T); that is, 0 ∈ σiso(T). Outcome: 0 ∈ π0(T) = σiso(T) ∩ σ0(T).
(b) Take an arbitrary m ∈ N. If 0 ∈ σiso(T), then 0 ∈ σiso(T^m). Indeed, if 0 is an isolated point of σ(T), then 0 ∈ σ(T)^m = σ(T^m), and hence 0 is an isolated point of σ(T)^m = σ(T^m). Therefore σ(T^m) = {0} ∪ Δ, where Δ is a compact subset of C that does not contain 0, which means that {0} and Δ are complementary spectral sets for the operator T^m (i.e., they form a disjoint partition of σ(T^m)). Let E0 = E{0} ∈ B[H] be the Riesz idempotent associated with 0. Corollary 4.22 ensures that N(T^m) ⊆ R(E0).
Claim. If T ∈ F and 0 ∈ σiso(T), then dim R(E0) < ∞.
Proof. Theorem 5.19.
Thus, if T ∈ F and 0 ∈ σiso(T), then dim N(T^m) ≤ dim R(E0) < ∞. Since {N(T^n)} is a nondecreasing sequence of subspaces, {dim N(T^n)} is a nondecreasing sequence of nonnegative integers, which is bounded by the (finite) integer dim R(E0). This ensures that
T ∈ F and 0 ∈ σiso(T) =⇒ asc(T) < ∞.
Since σ(T)* = σ(T*), we get 0 ∈ σiso(T) if and only if 0 ∈ σiso(T*), and so
T ∈ F and 0 ∈ σiso(T) =⇒ asc(T*) < ∞,
because T ∈ F if and only if T* ∈ F. Therefore, according to Lemma 5.31, asc(T) < ∞ and dsc(T) < ∞, which means (by definition) that T ∈ B.
Theorem 5.34 also gives us another characterization of the set B of all Browder operators from B[H] (compare with Corollary 5.26).
Corollary 5.35. B = {T ∈ F : 0 ∈ ρ(T) ∪ σiso(T)} = {T ∈ F : 0 ∈ ρ(T) ∪ π0(T)}.
Remark 5.36. (a) Finite Dimensional. Since {N (T n )} is nondecreasing and {R(T n )} is nonincreasing, we get dim N (T n ) ≤ dim N (T n+1 ) ≤ dim H and dim R(T n+1 ) ≤ dim R(T n ) ≤ dim H for every n ≥ 0. Thus, if H is finite dimensional, then asc(T ) < ∞ and dsc (T ) < ∞. Therefore (see Remark 5.3(c)), dim H < ∞
=⇒
B = B[H].
(b) Complements. Recall that σ(T )\σiso (T ) = σacc (T ), and also that σ(T )\π0 (T ) = σ(T )\ σiso (T ) ∩ σ0 (T ) = σ(T )\σiso (T ) ∪ σ(T )\σ0 (T ) = σacc (T ) ∪ σw (T ). Hence, since G[H] ⊂ B ⊂ W ⊂ F , we get from Corollary 5.35 that F \ B = T ∈ F : 0 ∈ σacc (T ) = T ∈ F : 0 ∈ σacc (T ) ∪ σw (T ) . Since W \ B = W ∩ (F \ B), it follows that W \ B = T ∈ W : 0 ∈ σacc (T ) . Moreover, since σ0 (T ) = τ0 (T ) ∪ π0 (T ), where τ0 (T ) is an open subset of C (according to Corollary 5.20), and since π(T ) = σiso (T ) ∩σ0 (T ), it also follows that σ0 (T )\σiso (T ) = τ0 (T ) ⊆ σacc (T ). Thus W \ B = T ∈ W : 0 ∈ τ0 (T ) by Corollaries 5.26 and 5.35. Summing up:
W\B = {T ∈ W : 0 ∈ σacc(T)} = {T ∈ W : 0 ∈ τ0(T)}.
Furthermore, since σ(T)\σ0(T) = σw(T), we get from Corollary 5.26 that
F\W = {T ∈ F : 0 ∈ σw(T)}.
(c) Another Characterization. An operator T ∈ B[H] is Browder if and only if it is Fredholm and λI − T is invertible for sufficiently small λ ≠ 0. Indeed, the identity B = {T ∈ F : 0 ∈ ρ(T) ∪ σiso(T)} in Corollary 5.35 says that T ∈ B if and only if T ∈ F and either 0 ∈ ρ(T) or 0 is an isolated point of σ(T) = C\ρ(T). Since ρ(T) is an open subset of C, this means that T ∈ B if and only if T ∈ F and there exists an ε > 0 such that Bε(0)\{0} ⊂ ρ(T), where Bε(0) is the open ball centered at 0 with radius ε. This can be restated by saying that T ∈ B if and only if T ∈ F and λ ∈ ρ(T) whenever 0 < |λ| < ε for some ε > 0. But this is precisely the claimed assertion.
The Browder spectrum of an operator T ∈ B[H] is the set σb(T) of all complex numbers λ for which λI − T is not Browder,
σb(T) = {λ ∈ C : λI − T ∈ B[H]\B}
(compare with Corollary 5.25). Since an operator lies in B together with its adjoint, it follows at once that λ ∈ σb(T) if and only if λ̄ ∈ σb(T*):
σb(T) = σb(T*)*.
Moreover, by the preceding definition, if λ ∈ σb(T), then λI − T ∉ B, so that either λI − T ∉ F, or λI − T ∈ F and asc(λI − T) = dsc(λI − T) = ∞ (cf. the definition of Browder operators). In the former case, λ ∈ σe(T) ⊆ σ(T) according to Corollary 5.12. In the latter case, λI − T is not invertible (because N(λI − T) = {0} if and only if asc(λI − T) = 0 — also R(λI − T) = H if and only if dsc(λI − T) = 0), so that λ ∉ ρ(T); that is, λ ∈ σ(T). Therefore σb(T) ⊆ σ(T). Since B ⊂ W, it also follows by Corollary 5.25 that σw(T) ⊆ σb(T). Hence
σe(T) ⊆ σw(T) ⊆ σb(T) ⊆ σ(T).
Equivalently, G[H] ⊂ B ⊂ W ⊂ F.
Corollary 5.37. For every operator T ∈ B[H],
σb(T) = σe(T) ∪ σacc(T) = σw(T) ∪ σacc(T).
Proof.
Recall that σ(T ) = {λ} − σ(λI − T ) by Theorem 2.7 (Spectral Mapping), and so 0 ∈ σ(λI − T )\σiso (λI − T ) if and only if λ ∈ σ(T )\σiso (T ). Thus, by the definition of σb (T ), and using Corollaries 5.12 and 5.35,
σb(T) = {λ ∈ C : λI − T ∉ F or 0 ∉ ρ(λI − T) ∪ σiso(λI − T)}
= {λ ∈ C : λ ∈ σe(T) or 0 ∈ σ(λI − T)\σiso(λI − T)}
= {λ ∈ C : λ ∈ σe(T) or λ ∈ σ(T)\σiso(T)}
= σe(T) ∪ (σ(T)\σiso(T)) = σe(T) ∪ σacc(T).
However, by Theorem 5.24, σw(T) = σe(T) ∪ κ(T), where κ(T) ⊆ σ(T) is an open set in C. (Indeed, κ(T) = ⋃_{k∈Z\{0}} σk(T), which is the union of open sets in C by Theorem 5.16.) An open subset of σ(T) contains no isolated point of σ(T), so κ(T) ⊆ σacc(T); thus κ(T)\σacc(T) = ∅, so that
σw(T)\σacc(T) = (σe(T) ∪ κ(T))\σacc(T) = (σe(T)\σacc(T)) ∪ (κ(T)\σacc(T)) = σe(T)\σacc(T).
The next result is particularly important: it sets another partition of σ(T).
Corollary 5.38. For every operator T ∈ B[H], σb(T) = σ(T)\π0(T).
Proof. By the Spectral Picture (Corollary 5.18) and Corollary 5.37 we get
σ(T)\σb(T) = σ(T)\(σe(T) ∪ σacc(T)) = (σ(T)\σe(T)) ∩ (σ(T)\σacc(T))
= (⋃_{k∈Z} σk(T)) ∩ σiso(T) = σ0(T) ∩ σiso(T) = π0(T),
according to the definition of π0(T), since σk(T) is open in C for every 0 ≠ k ∈ Z (Theorem 5.16) and so has no isolated point.
Since σb(T) ∪ π0(T) ⊆ σ(T), it follows by Corollary 5.38 that
σb(T) ∪ π0(T) = σ(T) and σb(T) ∩ π0(T) = ∅,
so that {σb(T), π0(T)} forms a partition of the spectrum σ(T). Thus
π0(T) = σ(T)\σb(T) = {λ ∈ σ(T): λI − T ∈ B},
and
σb(T) = σ(T) ⇐⇒ π0(T) = ∅.
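A concrete infinite-dimensional illustration of this partition (a standard example, supplied here; it is not computed in the text): let T = S ⊕ 2I on ℓ² ⊕ C, where S is the unilateral shift.

```latex
% \sigma(T) = \overline{\mathbb{D}} \cup \{2\}; for \lambda = 2 the operator
% \lambda I - T = (2I - S) \oplus 0 is Fredholm of index 0 with
% asc = dsc = 1, so 2 is a Riesz point, while every point of
% \overline{\mathbb{D}} lies in \sigma_w(T) \subseteq \sigma_b(T).  Hence
\sigma(T) = \sigma_b(T) \cup \pi_0(T), \qquad
\sigma_b(T) = \overline{\mathbb{D}}, \qquad \pi_0(T) = \{2\}.
```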
Corollary 5.39. For every operator T ∈ B[H], π0 (T ) = σiso (T )\σb (T ) = σiso (T )\σw (T ) = σiso (T )\σe (T ) = π00 (T )\σb (T ) = π00 (T )\σw (T ) = π00 (T )\σe (T ). Proof. The identities involving σb (T ) are readily verified. In fact, recall that π0 (T ) = σiso (T ) ∩ σ0 (T ) ⊆ σiso (T ) ∩ σPF (T ) = π00 (T ) ⊆ σiso (T ) ⊆ σ(T ). Since σb (T ) ∩ π0 (T ) = ∅ and σ(T )\π0 (T ) = σb (T ) (Corollary 5.38), we get π0 (T ) = π0 (T )\σb (T ) ⊆ π00 (T )\σb (T ) ⊆ σiso (T )\σb (T ) ⊆ σ(T )\σb (T ) = π0 (T ).
To prove the identities involving σw(T) and σe(T), proceed as follows. Since σ0(T) = σ(T)\σw(T) (Theorem 5.24), we get
π0(T) = σiso(T) ∩ σ0(T) = σiso(T) ∩ (σ(T)\σw(T)) = σiso(T)\σw(T).
Moreover, using Theorem 5.24 again, σw(T) = σe(T) ∪ κ(T), where κ(T) = ⋃_{k∈Z\{0}} σk(T) is the collection of all holes of σe(T), which is a subset of σ(T) that is open in C (a union of open sets — Theorem 5.16). Thus
σiso(T)\σw(T) = σiso(T)\(σe(T) ∪ κ(T)) = (σiso(T)\σe(T)) ∩ (σiso(T)\κ(T)).
But σiso(T) ∩ κ(T) = ∅ because κ(T) is a subset of σ(T) that is open in C. Hence σiso(T)\κ(T) = σiso(T), and so
σiso(T)\σw(T) = σiso(T)\σe(T).
Finally, recall again that π0(T) ⊆ π00(T) ⊆ σiso(T). Also, by Theorem 5.24, π0(T) ⊆ σ0(T) = σ(T)\σw(T) and σe(T) ⊆ σw(T) and, as we have just verified, σiso(T)\σw(T) = π0(T) and σiso(T)\σe(T) = σiso(T)\σw(T). Thus
π00(T)\σw(T) ⊆ π00(T)\σe(T) ⊆ σiso(T)\σe(T) = σiso(T)\σw(T) = π0(T) = π0(T)\σw(T) ⊆ π00(T)\σw(T),
and therefore π0(T) = π00(T)\σw(T) = π00(T)\σe(T).
Remark 5.40. (a) Finite Dimensional. Take any T ∈ B[H]. Since every operator on a finite-dimensional space is Browder (cf. Remark 5.36(a)), it follows by the very definition of the Browder spectrum σb(T) that
dim H < ∞ =⇒ σb(T) = ∅.
The converse holds by Remark 5.27(a), since σw(T) ⊆ σb(T):
dim H = ∞ =⇒ σb(T) ≠ ∅.
Thus
σb(T) ≠ ∅ ⇐⇒ dim H = ∞,
and σb(T) = σ(T)\π0(T) (cf. Corollary 5.38) is a compact set in C because σ(T) is compact and π0(T) ⊆ σiso(T). Moreover, by Remark 5.27(b),
dim H < ∞ =⇒ σe(T) = σw(T) = σb(T) = ∅.
(b) Compact Operators. Since σacc(K) ⊆ {0} if K is compact (cf. Corollary 2.20), it follows by Remark 5.27(d), Corollary 5.37, and item (a) that
K ∈ B∞[H] and dim H = ∞ =⇒ σe(K) = σw(K) = σb(K) = {0}.
(c) Finite Algebraic and Geometric Multiplicities. We saw in Corollary 5.28 that the set of Riesz points π0(T) = σiso(T) ∩ σ0(T) and the set of isolated eigenvalues of finite multiplicity π00(T) = σiso(T) ∩ σPF(T) coincide if any of the equivalent assertions of Corollary 5.28 holds true. The previous corollary supplies a collection of conditions equivalent to π0(T) = π00(T). Indeed, by Corollary 5.39, the following four assertions are pairwise equivalent.
(i) π0(T) = π00(T).
(ii) σe(T) ∩ π00(T) = ∅ (i.e., σe(T) ∩ σiso(T) ∩ σPF(T) = ∅).
(iii) σw(T) ∩ π00(T) = ∅ (i.e., σw(T) ∩ σiso(T) ∩ σPF(T) = ∅).
(iv) σb(T) ∩ π00(T) = ∅ (i.e., σb(T) ∩ σiso(T) ∩ σPF(T) = ∅).
Take any T ∈ B[H] and consider again the following relations:
σ0(T) = σ(T)\σw(T), π0(T) = σiso(T) ∩ σ0(T), π00(T) = σiso(T) ∩ σPF(T).
We have defined a class of operators for which the equivalent assertions of Corollary 5.28 hold true (namely, operators that satisfy Weyl’s Theorem). The following equivalent assertions define another important class of operators. An operator for which any of those equivalent assertions holds is said to satisfy Browder’s Theorem. This will be discussed later in Section 5.5.
Corollary 5.41. The assertions below are pairwise equivalent.
(a) σ0(T) = π0(T).
(b) σ(T)\π0(T) = σw(T).
(c) σ(T) = σw(T) ∪ π00(T).
(d) σ(T) = σw(T) ∪ σiso(T).
(e) σ0(T) ⊆ σiso(T).
(f) σ0(T) ⊆ π00(T).
(g) σacc(T) ⊆ σw(T).
(h) σw(T) = σb(T).
Proof. Recall that {σw(T), σ0(T)} forms a partition of σ(T), and also that π0(T) ⊆ π00(T) ⊆ σ(T). Thus (a) and (b) are equivalent and, if (a) holds,
σ(T) = σw(T) ∪ σ0(T) = σw(T) ∪ π0(T) ⊆ σw(T) ∪ π00(T) ⊆ σ(T),
so that (a) implies (c); and (c) implies (d) because π00(T) ⊆ σiso(T):
σ(T) = σw(T) ∪ π00(T) ⊆ σw(T) ∪ σiso(T) ⊆ σ(T).
Since σ0(T) = σ(T)\σw(T), it follows that, if (d) holds, then
σ0(T) = σ(T)\σw(T) = (σw(T) ∪ σiso(T))\σw(T) ⊆ σiso(T),
so that (d) implies (e). Since π0 (T ) = σiso (T ) ∩ σ0 (T ), it follows that (e) is equivalent to (a). Since π00 (T ) = σiso (T ) ∩ σPF (T ) and σ0 (T ) ⊆ σPF (T ), it also follows that (e) is equivalent to (f). Since σacc (T ) and σiso (T ) are complements of each other in σ(T ), and σw (T ) and σ0 (T ) are complements of each other in σ(T ), it follows that (e) and (g) are equivalent. Similarly, since σw (T ) and σ0 (T ) are complements of each other in σ(T ), and σb (T ) and π0 (T ) also are complements of each other in σ(T ) (cf. Theorem 5.24 and Corollary 5.38), it follows that (h) and (a) are equivalent as well. The Browder spectrum is not invariant under compact perturbation. In fact, if T ∈ B[H] is such that σb (T ) = σw (T ) (i.e., if T is such that any of the equivalent assertions in Corollary 5.41 fails — and so all of them fail), then there exists a compact K ∈ B∞[H] such that σb (T + K) = σb (T ). Indeed, if σb (T ) = σw (T ), then σw (T ) ⊂ σb (T ), and so σb (T ) is not invariant under compact perturbation because σw (T ) is the largest part of σ(T ) that remains unchanged under compact perturbation. However, as we shall see in the forthcoming Theorem 5.44 (whose proof is based on Lemmas 5.42 and 5.43 below), σb (T ) is invariant under perturbation by compact operators that commute with T . Set A = B[H], where H is an infinite-dimensional complex Hilbert space. Let A be a unital closed subalgebra of the unital complex Banach algebra A, thus a unital complex Banach algebra itself. Take an arbitrary operator T in A. Let B∞[H] = B∞[H] ∩ A denote the collection of all compact operators from A , and let F denote the class of all Fredholm operators in the unital complex Banach algebra A . That is (cf. Proposition 5.C), F = T ∈ A : AT − I and T A − I are in B∞[H] for some A ∈ A . Therefore, F ⊆ F ∩ A . The inclusion is proper in general (see, e.g., [10, p. 
86]), but it becomes an identity if A′ is a ∗-subalgebra of the C*-algebra A; that is, if the unital closed subalgebra A′ is a ∗-algebra (see, e.g., [10, Theorem A.1.3]). However, if the inclusion becomes an identity, then Corollary 5.12 ensures that the essential spectra in A′ and in A coincide:

F′ = F ∩ A′  =⇒  σ′e(T) = σe(T),

where

σ′e(T) = {λ ∈ σ′(T) : λI − T ∈ A′\F′}

stands for the essential spectrum of T ∈ A′ with respect to A′, with σ′(T) denoting the spectrum of T ∈ A′ with respect to the unital complex Banach algebra A′. Similarly, let W′ denote the class of Weyl operators in A′,

W′ = {T ∈ F′ : ind(T) = 0},
and let
σ′w(T) = {λ ∈ σ′(T) : λI − T ∈ A′\W′}
stand for the Weyl spectrum of T ∈ A′ with respect to A′. If T ∈ A′, set

σ′0(T) = σ′(T)\σ′w(T) = {λ ∈ σ′(T) : λI − T ∈ W′}.

Moreover, let B′ stand for the class of Browder operators in A′,

B′ = {T ∈ F′ : 0 ∈ ρ′(T) ∪ σ′iso(T)}

according to Corollary 5.35, where σ′iso(T) denotes the set of all isolated points of σ′(T), and ρ′(T) = C\σ′(T) is the resolvent set of T ∈ A′ with respect to the unital complex Banach algebra A′. Let

σ′b(T) = {λ ∈ σ′(T) : λI − T ∈ A′\B′}
be the Browder spectrum of T ∈ A′ with respect to A′.

Lemma 5.42. Consider the preceding setup. If T ∈ A′, then

σ′(T) = σ(T)
=⇒
σ′b(T) = σb(T).
Proof. Take an arbitrary operator T in A′. Suppose σ′(T) = σ(T), so that ρ′(T) = ρ(T) and σ′iso(T) = σiso(T). Therefore, if λ ∈ σiso(T), then the Riesz idempotent associated with λ,

Eλ = (2πi)⁻¹ ∮_{Γλ} (νI − T)⁻¹ dν,

lies in A′ (because σ′iso(T) = σiso(T), and also (νI − T)⁻¹ ∈ A′ whenever (νI − T)⁻¹ ∈ B[H] since ρ′(T) = ρ(T)). Thus Theorem 5.19 ensures that
λ ∈ σ′0(T)  ⇐⇒  dim R(Eλ) < ∞  ⇐⇒  λ ∈ σ0(T)
whenever λ ∈ σ′iso(T), since σ′iso(T) = σiso(T). Hence

π′0(T) = σ′0(T) ∩ σ′iso(T) = σ0(T) ∩ σiso(T) = π0(T).
Then, according to Corollary 5.38,

σb(T) = σ(T)\π0(T) = σ′(T)\π′0(T) = σ′b(T).
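In finite dimensions the Riesz idempotent that drives Lemma 5.42 can be checked numerically. The sketch below is an illustrative toy computation (an ad hoc 3×3 matrix, not an example from the text): it discretizes the contour integral defining Eλ around an isolated eigenvalue and verifies that Eλ is idempotent with dim R(Eλ) equal to the multiplicity of that eigenvalue.

```python
import numpy as np

# Hypothetical 3×3 example: T is similar to diag(2, 5, 5), so 2 is an
# isolated point of σ(T) whose Riesz idempotent has one-dimensional range.
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
T = S @ np.diag([2.0, 5.0, 5.0]) @ np.linalg.inv(S)

def riesz_idempotent(T, center, radius, n=400):
    """Discretize E = (1/2πi) ∮ (νI − T)⁻¹ dν over the circle |ν − center| = radius."""
    I = np.eye(T.shape[0], dtype=complex)
    E = np.zeros_like(I)
    for k in range(n):
        nu = center + radius * np.exp(2j * np.pi * k / n)
        dnu = 1j * (nu - center) * (2 * np.pi / n)   # dν at this quadrature node
        E += np.linalg.solve(nu * I - T, I) * dnu
    return E / (2j * np.pi)

E2 = riesz_idempotent(T, center=2.0, radius=1.0)
print(np.allclose(E2 @ E2, E2))      # idempotent
print(round(np.trace(E2).real))      # 1 = dim R(E2), the multiplicity of 2
```

The trapezoidal rule on a closed contour converges very fast for the analytic resolvent, so the computed E2 agrees with the exact spectral projection to machine precision.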
Let {T}′ denote the commutant of T ∈ B[H] (i.e., the collection of all operators in B[H] that commute with T). This is a unital subalgebra of the unital complex Banach algebra B[H], which is weakly closed in B[H] (see, e.g., [63, Problem 3.7]), and hence uniformly closed (i.e., closed in the operator norm topology of B[H]). Therefore A′ = {T}′ is a unital closed subalgebra of the unital complex Banach algebra A = B[H].
Lemma 5.43. Consider the preceding setup with A′ = {T}′. If 0 ∈ σ′(T) and T ∈ W′, then 0 ∈ σ′iso(T). In other words,

A′ = {T}′ and 0 ∈ σ′0(T)  =⇒  0 ∈ σ′iso(T).
Proof. Take any T ∈ B[H]. Let A′ be a unital closed subalgebra of A = B[H] that includes T. Suppose 0 ∈ σ′0(T), which means that 0 ∈ σ′(T) and T ∈ W′. Since T ∈ W′, Lemma 5.23(c) says that there exists a compact K ∈ B′∞[H] = B∞[H] ∩ A′, actually a finite-rank operator, and an operator A ∈ A′ such that A(T + K) = (T + K)A = I. Hence A ∈ A′ is invertible with inverse A⁻¹ = T + K ∈ A′. If A′ = {T}′, then AK = KA, and so A⁻¹K = KA⁻¹. Let A′′ be the unital closed commutative subalgebra of A′ generated by A, A⁻¹, and K. Since T = A⁻¹ − K, it follows that A′′ includes T. Let σ′iso(T) and σ′′iso(T) stand for the sets of isolated points of the spectra σ′(T) and σ′′(T) of T with respect to the Banach algebras A′ and A′′, respectively. Since A′′ ⊆ A′, it follows by Proposition 2.Q(a) that σ′(T) ⊆ σ′′(T). Hence 0 ∈ σ′′(T) because 0 ∈ σ′0(T) ⊆ σ′(T).

Claim. 0 ∈ σ′′iso(T).

Proof. Let Â′′ denote the collection of all algebra homomorphisms of A′′ into C. Recall from Proposition 2.Q(b) that σ′′(A⁻¹) = {Φ(A⁻¹) ∈ C : Φ ∈ Â′′}, which is bounded away from zero (since 0 ∈ ρ′′(A⁻¹)), and σ′′(K) = {Φ(K) ∈ C : Φ ∈ Â′′} = {0} ∪ {Φ(K) ∈ C : Φ ∈ Â′′F}, where Â′′F ⊆ Â′′ is a set of nonzero homomorphisms, which is finite because K is finite rank (and so K has a finite spectrum — Corollary 2.19). Note that 0 ∈ σ′′(T) = σ′′(A⁻¹ − K) if and only if 0 = Φ(A⁻¹ − K) = Φ(A⁻¹) − Φ(K) for some Φ ∈ Â′′. If Φ ∈ Â′′\Â′′F, then Φ(K) = 0 so that Φ(A⁻¹) = 0, which is a contradiction (because 0 ∉ σ′′(A⁻¹)). Thus Φ ∈ Â′′F, and hence Φ(A⁻¹ − K) = Φ(A⁻¹) − Φ(K) = 0, so that Φ(A⁻¹) = Φ(K), for at most a finite number of homomorphisms Φ in Â′′F. Therefore, since the set {Φ(A⁻¹) ∈ C : Φ ∈ Â′′} = σ′′(A⁻¹) is bounded away from zero, and since the set {Φ ∈ Â′′F : Φ(A⁻¹) = Φ(K)} is finite, it follows that 0 is an isolated point of σ′′(A⁻¹ − K) = σ′′(T), which concludes the proof of the claimed result.

Since 0 ∈ σ′(T) ⊆ σ′′(T), it then follows that 0 ∈ σ′iso(T).
The next characterization of the Browder spectrum is the counterpart of the very definition of the Weyl spectrum. It says that σb (T ) is the largest
part of σ(T) that remains unchanged under compact perturbations in the commutant of T. That is, σb(T) is the largest part of σ(T) such that [69, 59]

σb(T + K) = σb(T)  for every  K ∈ B∞[H] ∩ {T}′.

Theorem 5.44. For every T ∈ B[H],

σb(T) = ⋂_{K ∈ B∞[H] ∩ {T}′} σ(T + K).
Proof. Take any operator T ∈ B[H], and let A′ be a unital closed subalgebra of the unital complex Banach algebra A = B[H] (thus a unital complex Banach algebra itself). Let σ′(T), σ′b(T), and σ′w(T) be the spectrum, the Browder spectrum, and the Weyl spectrum of T with respect to A′, respectively.

Claim 1. If A′ = {T}′, then σ′(T) = σ(T).

Proof. Suppose A′ = {T}′. Trivially, T ∈ A′. Let P = P(T) be the collection of all polynomials p(T) in T with complex coefficients, which is a unital commutative subalgebra of B[H]. Consider the collection T of all unital commutative subalgebras of B[H] containing T. Note that every element of T is included in A′, and also that T is partially ordered (in the inclusion ordering) and nonempty (e.g., P ∈ T). Moreover, every chain in T has an upper bound in T (the union of all subalgebras in a given chain of subalgebras in T is again a subalgebra in T). Thus Zorn's Lemma says that T has a maximal element, say A′′ = A′′(T) ∈ T. Hence there is a maximal commutative subalgebra A′′ of B[H] containing T (which is unital and closed — see the paragraph that precedes Proposition 2.Q). Therefore, A′′ ⊆ A′ ⊆ A. Let σ′′(T) denote the spectrum of T with respect to A′′. If the preceding inclusions hold true, and since A′′ is a maximal commutative subalgebra of A, then Proposition 2.Q(a,b) ensure that the following relations also hold true: σ(T) ⊆ σ′(T) ⊆ σ′′(T) = σ(T).

Claim 2. If A′ = {T}′, then σ′b(T) = σb(T).

Proof. Claim 1 and Lemma 5.42.

Claim 3. If A′ = {T}′, then σ′b(T) = σ′w(T).
Proof. Recall that λ ∈ σ′0(T) if and only if λ ∈ σ′(T) and λI − T ∈ W′. But λ ∈ σ′(T) if and only if 0 ∈ σ′(λI − T) by the Spectral Mapping Theorem (Theorem 2.7). Hence,

λ ∈ σ′0(T)  ⇐⇒  0 ∈ σ′0(λI − T).

Since A′ = {T}′ = {λI − T}′ it follows by Lemma 5.43 that
0 ∈ σ′0(λI − T)  =⇒  0 ∈ σ′iso(λI − T).

However, applying the Spectral Mapping Theorem again,

0 ∈ σ′iso(λI − T)  ⇐⇒  λ ∈ σ′iso(T).

Therefore,

σ′0(T) ⊆ σ′iso(T),

which means that (cf. Corollary 5.41) σ′w(T) = σ′b(T).
Claim 4. σ′w(T) = ⋂_{K ∈ B′∞[H]} σ(T + K).

Proof. This is the very definition of the Weyl spectrum of T with respect to A′. That is, σ′w(T) = ⋂_{K ∈ B′∞[H]} σ(T + K), where B′∞[H] = B∞[H] ∩ A′.

By Claims 2, 3, and 4 we get

σb(T) = ⋂_{K ∈ B∞[H] ∩ {T}′} σ(T + K).
5.5 Remarks on Browder and Weyl Theorems

This final section consists of a brief survey on Weyl's and Browder's Theorems. As such, and unlike the previous sections, some of the assertions discussed here, instead of being fully proved, will be accompanied by a reference to indicate where the proof can be found. Take any operator T ∈ B[H], and consider the partitions {σw(T), σ0(T)} and {σb(T), π0(T)} of the spectrum of T in terms of the Weyl and Browder spectra σw(T) and σb(T) and their complements σ0(T) and π0(T) in σ(T) as in Theorem 5.24 and Corollary 5.38, so that

σw(T) = σ(T)\σ0(T)  and  σb(T) = σ(T)\π0(T),

where

σ0(T) = {λ ∈ σ(T) : λI − T ∈ W}  and  π0(T) = σiso(T) ∩ σ0(T).
Although σ0 (T ) ⊆ σPF (T ) and σacc (T ) ⊆ σb (T ) (i.e., π0 (T ) ⊆ σiso (T )) for every operator T , it happens that, in general, σ0 (T ) may not be included in σiso (T ) or, equivalently, σacc (T ) may not be included in σw (T ). Recall that π00 (T ) = σiso (T ) ∩ σPF (T ). Definition 5.45. An operator T is said to satisfy Weyl’s Theorem (or Weyl’s Theorem holds for T ) if any of the equivalent assertions of Corollary 5.28 holds true. Therefore, T satisfies Weyl’s Theorem if
σ0(T) = σiso(T) ∩ σPF(T), that is, σ0(T) = π00(T) or, equivalently, σ(T)\π00(T) = σw(T). Further necessary and sufficient conditions for an operator to satisfy Weyl's Theorem can be found in [44]. See also [58, Chapter 11].

Definition 5.46. An operator T is said to satisfy Browder's Theorem (or Browder's Theorem holds for T) if any of the equivalent assertions of Corollary 5.41 holds true. Therefore, T satisfies Browder's Theorem if

σ0(T) ⊆ σiso(T) ∩ σPF(T),

that is, σ0(T) ⊆ π00(T) or, equivalently, σ0(T) ⊆ σiso(T), or σ0(T) = π0(T), or σacc(T) ⊆ σw(T), or σw(T) = σb(T).
A word on terminology. The expressions “T satisfies Weyl’s Theorem” and “T satisfies Browder’s Theorem” are the usual terminologies and we shall stick to them, although saying that T “satisfies Weyl’s (or Browder’s) property” rather than “satisfies Weyl’s (or Browder’s) Theorem” would perhaps sound more appropriate. Remark 5.47. (a) Sufficiency for Weyl’s. σw (T ) = σacc (T )
=⇒
T satisfies Weyl’s Theorem,
according to Corollary 5.28. (b) Browder’s not Weyl’s. Consider Definitions 5.45 and 5.46. If T satisfies Weyl’s Theorem, then it obviously satisfies Browder’s Theorem. An operator T satisfies Browder’s but not Weyl’s Theorem if and only if the proper inclusion σ0 (T ) ⊂ σiso (T ) ∩ σPF (T ) holds true, which implies that there exists an isolated eigenvalue of finite multiplicity not in σ0 (T ) (i.e., there exists an isolated eigenvalue of finite multiplicity in σw (T ) = σ(T )\σ0 (T )). Outcome: If T satisfies Browder’s Theorem but not Weyl’s Theorem, then σw (T ) ∩ σiso (T ) ∩ σPF (T ) = ∅. (c) Equivalent Condition. The preceding result can be extended as follows (cf. Remark 5.40(c)). Consider the equivalent assertions of Corollary 5.28 and of Corollary 5.41. If Browder’s Theorem holds and π0 (T ) = π00 (T ), then Weyl’s Theorem holds (i.e., if σ0 (T ) = π0 (T ) and π0 (T ) = π00 (T ), then σ0 (T ) = π00 (T ) tautologically). Conversely, if Weyl’s Theorem holds, then π0 (T ) = π00 (T ) (see Corollary 5.28), and Browder’s Theorem holds trivially. Summing up: Weyl’s Theorem holds
⇐⇒ Browder’s Theorem holds and π0 (T ) = π00 (T ).
In other words, Weyl's Theorem holds if and only if Browder's Theorem and any of the equivalent assertions of Remark 5.40(c) hold true.

(d) Trivial Cases. By Remark 5.27(a), if dim H < ∞, then σ0(T) = σ(T). Thus, according to Corollary 2.19, dim H < ∞ =⇒ σ0(T) = π00(T) = σ(T): Every operator on a finite-dimensional space satisfies Weyl's Theorem. This extends to finite-rank but not to compact operators (see examples in [38]). On the other hand, if σP(T) = ∅, then σ0(T) = π00(T) = ∅: Every operator without eigenvalues satisfies Weyl's Theorem. Since σ0(T) ⊆ σPF(T), this extends to operators with σPF(T) = ∅.

Investigating quadratic forms with compact difference, Hermann Weyl proved in 1909 that Weyl's Theorem holds for self-adjoint operators (i.e., every self-adjoint operator satisfies Weyl's Theorem) [90]. We shall present a contemporary proof of the original Weyl's result. First recall from Sections 1.5 and 1.6 the following elementary (proper) inclusions of classes of operators:

Self-Adjoint ⊂ Normal ⊂ Hyponormal.
Theorem 5.48. (Weyl's Theorem). If T ∈ B[H] is self-adjoint, then

σ0(T) = σiso(T) ∩ σPF(T).

Proof. Take any operator T ∈ B[H].

Claim 1. If T is self-adjoint, then σ0(T) = π0(T).

Proof. If T is self-adjoint, then σ(T) ⊂ R (Proposition 2.A). Thus no nonempty subset of σ(T) is open in C, and hence σ0(T) = τ0(T) ∪ π0(T) = π0(T) because τ0(T) is open in C (Corollary 5.20).

Claim 2. If T is hyponormal and λ ∈ π00(T), then R(λI − T) is closed.

Proof. λ ∈ π00(T) = σiso(T) ∩ σPF(T) if and only if λ is an isolated point of σ(T) and 0 < dim N(λI − T) < ∞. Thus, if λ ∈ π00(T) and T is hyponormal, then Proposition 4.L ensures that dim R(Eλ) < ∞, where Eλ is the Riesz idempotent associated with λ. Then R(λI − T) is closed by Corollary 4.22.

Therefore, by Claim 2, if T is hyponormal, then (cf. Corollary 5.22)

π0(T) = {λ ∈ π00(T) : R(λI − T) is closed} = π00(T).

Thus, in particular, if T is self-adjoint, then π0(T) = π00(T), and therefore, according to Claim 1, σ0(T) = π0(T) = π00(T) = σiso(T) ∩ σPF(T).
The preceding proof actually says that if T is self-adjoint, then T satisfies Browder's Theorem and, if T is hyponormal, then π0(T) = π00(T). Thus if T is self-adjoint, then T satisfies Weyl's Theorem by Remark 5.47(c). Theorem 5.48 was extended to normal operators in [83]. This can be verified by using Proposition 5.E, which says that, if T is normal, then σ(T)\σe(T) = π00(T). But, by Corollary 5.18, σ(T)\σe(T) = ⋃_{k∈Z\{0}} σk(T) ∪ σ0(T), where ⋃_{k∈Z\{0}} σk(T) is open in C (cf. Theorem 5.16), and π00(T) = σiso(T) ∩ σPF(T) is closed in C. Thus σ0(T) = π00(T). Therefore, every normal operator satisfies Weyl's Theorem. Moreover, Theorem 5.48 was further extended to hyponormal operators in [24], and to seminormal operators in [15]. In other words: If T or T∗ is hyponormal, then T satisfies Weyl's Theorem.

Some additional definitions and terminologies will be needed. An operator is isoloid if every isolated point of its spectrum is an eigenvalue; that is, T is isoloid if σiso(T) ⊆ σP(T). A Hilbert space operator T is said to be dominant if R(λI − T) ⊆ R((λI − T)∗) for every λ ∈ C or, equivalently, if for each λ ∈ C there is an Mλ ≥ 0 such that (λI − T)(λI − T)∗ ≤ Mλ(λI − T)∗(λI − T) [33]. Therefore, every hyponormal operator is dominant (with Mλ = 1) and isoloid (Proposition 4.L). Recall that a subspace M of a Hilbert space H is invariant for an operator T ∈ B[H] (or T-invariant) if T(M) ⊆ M, and reducing if it is invariant for both T and T∗. A part of an operator is a restriction of it to an invariant subspace, and a direct summand is a restriction of it to a reducing subspace. The main result in [15] reads as follows (see also [16]).

Theorem 5.49. If each finite-dimensional eigenspace of an operator on a Hilbert space is reducing, and if every direct summand of it is isoloid, then it satisfies Weyl's Theorem.
This is a fundamental result that includes many of the previous results along this line, and has also been frequently applied to yield further results, mainly through the following corollary.

Corollary 5.50. If an operator on a Hilbert space is dominant and every direct summand of it is isoloid, then it satisfies Weyl's Theorem.

Proof. Take any T ∈ B[H]. The announced result can be restated as follows. If R(λI − T) ⊆ R((λI − T)∗) for every λ ∈ C, and the restriction T|M of T to each reducing subspace M is such that σiso(T|M) ⊆ σP(T|M), then T satisfies Weyl's Theorem. To prove it we need the following elementary result, which extends Lemma 1.13(a) from hyponormal to dominant operators. Take an arbitrary scalar λ ∈ C.

Claim. If R(λI − T) ⊆ R((λI − T)∗), then N(λI − T) ⊆ N((λI − T)∗).

Proof. Take any S ∈ B[H]. If R(S) ⊆ R(S∗), then R(S∗)⊥ ⊆ R(S)⊥, which is equivalent to N(S) ⊆ N(S∗) by Lemma 1.4, thus proving the claimed result.
If R(λI − T) ⊆ R((λI − T)∗), then N(λI − T) reduces T by the preceding Claim and Lemma 1.14(b). Therefore, if R(λI − T) ⊆ R((λI − T)∗) for every λ ∈ C, then every eigenspace of T is reducing and so is, in particular, every finite-dimensional eigenspace of T. Thus the stated result is a straightforward consequence of Theorem 5.49.

Since every hyponormal operator is dominant, every direct summand of a hyponormal operator is again hyponormal (in fact, every part of a hyponormal operator is again hyponormal — Proposition 1.P), and every hyponormal operator is isoloid (Proposition 4.L), it follows that Corollary 5.50 offers another proof that every hyponormal operator satisfies Weyl's Theorem.

We need a few more definitions and terminologies. An operator T ∈ B[H] is paranormal if ‖Tx‖² ≤ ‖T²x‖ ‖x‖ for every x ∈ H, and totally hereditarily normaloid (THN) if all parts of it are normaloid, as well as the inverses of all invertible parts. (Tautologically, totally hereditarily normaloid operators are normaloid.) Hyponormal operators are paranormal and dominant, but paranormal operators are not necessarily dominant. These classes are related by proper inclusions (see [37, Remark 1] and [38, Proposition 2]):

Hyponormal ⊂ Paranormal ⊂ THN ⊂ Normaloid ∩ Isoloid.

Weyl's Theorem has been extended to classes of nondominant operators that properly include the hyponormal operators. For instance, it was extended to paranormal operators in [88] and, beyond, to totally hereditarily normaloid operators in [36]. In fact [36, Lemma 2.5]: If T is totally hereditarily normaloid, then both T and T∗ satisfy Weyl's Theorem. So Weyl's Theorem holds for paranormal operators and their adjoints and, in particular, Weyl's Theorem holds for hyponormal operators and their adjoints.

Let T and S be arbitrary operators acting on Hilbert spaces. First we consider their (orthogonal) direct sum.
Proposition 2.F(b) says that the spectrum of a direct sum coincides with the union of the spectra of the summands,

σ(T ⊕ S) = σ(T) ∪ σ(S).

For the Weyl spectra, only inclusion is ensured. In fact, the Weyl spectrum of a direct sum is included in the union of the Weyl spectra of the summands,

σw(T ⊕ S) ⊆ σw(T) ∪ σw(S),

but equality does not hold in general [53]. However, it holds if σw(T) ∩ σw(S) has empty interior [71]:

(σw(T) ∩ σw(S))° = ∅  =⇒  σw(T ⊕ S) = σw(T) ∪ σw(S).
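The identity σ(T ⊕ S) = σ(T) ∪ σ(S) is easy to observe for matrices, where the direct sum acts block-diagonally. A minimal sketch, with arbitrarily chosen 2×2 matrices (illustration only, not an example from the text):

```python
import numpy as np

# Illustrative matrices (ad hoc choices): σ(T) = {2, 3}, σ(S) = {5, 7}.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[5.0, 0.0],
              [4.0, 7.0]])

# The direct sum T ⊕ S as a block-diagonal matrix.
TS = np.block([[T, np.zeros((2, 2))],
               [np.zeros((2, 2)), S]])

spec = lambda A: sorted(np.linalg.eigvals(A).real)
print(np.allclose(spec(TS), spec(T) + spec(S)))   # True: σ(T⊕S) = σ(T) ∪ σ(S)
```

In finite dimensions σw of every matrix is empty, so the subtler Weyl-spectrum phenomena above are genuinely infinite-dimensional.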
In general, Weyl’s Theorem does not transfer from T and S to their direct sum T ⊕ S. The above identity involving the Weyl spectrum of a direct sum, viz., σw (T ⊕ S) = σw (T ) ∪ σw (S), when it holds, plays an important role in establishing sufficient conditions for a direct sum to satisfy Weyl’s Theorem. This was recently investigated in [70] and [38]. As for the problem of transferring Browder’s Theorem from T and S to their direct sum T ⊕ S, the following necessary and sufficient condition was proved in [53, Theorem 4]. Theorem 5.51. If both operators T and S satisfy Browder’s Theorem, then the direct sum T ⊕ S satisfies Browder’s Theorem if and only if σw (T ⊕ S) = σw (T ) ∪ σw (S). Now consider the tensor product T ⊗ S of a pair of Hilbert space operators T and S (for an expository paper on tensor products that will suffice for our needs see [64]). It is known from [20] that the spectrum of a tensor product coincides with the product of the spectra of the factors, σ(T ⊗ S) = σ(T ) · σ(S). For the Weyl spectrum it was proved in [57] that the inclusion σw (T ⊗ S) ⊆ σw (T ) · σ(S) ∪ σ(T ) · σw (S) holds. However, it remained an open question whether the preceding inclusion might be an identity; that is, it was not known if there existed a pair of operators T and S for which the above inclusion was proper. This question was solved quite recently, as we shall see later. Sufficient conditions ensuring that the equality holds were investigated in [67]. For instance, if σe (T )\{0} = σw (T )\{0}
and
σe (S)\{0} = σw (S)\{0}
(which holds, in particular, for compact operators T and S), or if σw (T ⊗ S) = σb (T ⊗ S) (the tensor product satisfies Browder’s Theorem), then [67, Proposition 6] σw (T ⊗ S) = σw (T ) · σ(S) ∪ σ(T ) · σw (S). Again, Weyl’s Theorem does not necessarily transfer from T and S to their tensor product T ⊗ S. The preceding identity involving the Weyl spectrum of a tensor product, namely, σw (T ⊗ S) = σw (T ) · σ(S) ∪ σ(T ) · σw (S), when it holds, plays a crucial role in establishing sufficient conditions for a tensor product to satisfy Weyl’s Theorem, as was recently investigated in [84], [67], and [68]. As for the problem of transferring Browder’s Theorem from T and S to their tensor product T ⊗ S, the following necessary and sufficient condition was proved in [67, Corollary 6].
Theorem 5.52. If both operators T and S satisfy Browder’s Theorem, then the tensor product T ⊗ S satisfies Browder’s Theorem if and only if σw (T ⊗ S) = σw (T ) · σ(S) ∪ σ(T ) · σw (S). According to Theorem 5.52, if there exist operators T and S that satisfy Browder’s Theorem, but T ⊗ S does not satisfy Browder’s Theorem, then the Weyl spectrum identity, viz., σw (T ⊗ S) = σw (T ) · σ(S) ∪ σ(T ) · σw (S), does not hold for such a pair of operators. An example of a pair of operators that satisfy Weyl’s Theorem (and thus satisfy Browder’s Theorem) but whose tensor product does not satisfy Browder’s Theorem was recently supplied in [61]. Therefore, [61] and [67] together ensure that there exists a pair of operators T and S for which the inclusion σw (T ⊗ S) ⊂ σw (T ) · σ(S) ∪ σ(T ) · σw (S) is proper; that is, for which the Weyl spectrum identity fails.
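In finite dimensions the identity σ(T ⊗ S) = σ(T) · σ(S) can be observed directly, since the tensor product of operators on finite-dimensional spaces is the Kronecker product of their matrices. A minimal sketch with arbitrarily chosen diagonal matrices (illustration only):

```python
import numpy as np

# Illustrative matrices (ad hoc choices): σ(T) = {2, 3}, σ(S) = {5, 7}.
T = np.diag([2.0, 3.0])
S = np.diag([5.0, 7.0])

# Eigenvalues of the Kronecker product are the pairwise products.
prod = sorted(a * b for a in np.linalg.eigvals(T) for b in np.linalg.eigvals(S))
kron_eigs = sorted(np.linalg.eigvals(np.kron(T, S)))
print(np.allclose(prod, kron_eigs))   # True: σ(T ⊗ S) = σ(T) · σ(S)
```

As with direct sums, the Weyl-spectrum inclusion and its possible failure as an identity are infinite-dimensional phenomena invisible at this matrix level.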
5.6 Additional Propositions

Proposition 5.A. Fℓ and Fr are open sets in B[H], and so are SF and F.

Proposition 5.B. The mapping ind(·): SF → Z is continuous, where the topology on SF is the topology inherited from B[H], and the topology on Z is the discrete topology.

Proposition 5.C. Take T ∈ B[H]. The following assertions are equivalent.
(a) T ∈ F.
(b) There exists A ∈ B[H] such that I − AT and I − T A are compact.
(c) There exists A ∈ B[H] such that I − AT and I − T A are finite rank.

Proposition 5.D. Take an operator T in B[H]. If λ ∈ C\(σℓe(T) ∩ σre(T)), then there exists an ε > 0 such that dim N(νI − T) and dim N(νI − T∗) are constant in the punctured disk Bε(λ)\{λ} = {ν ∈ C : 0 < |ν − λ| < ε}.

Proposition 5.E. If T ∈ B[H] is normal, then σℓe(T) = σre(T) = σe(T) and

σ(T)\σe(T) = σiso(T) ∩ σPF(T)  (i.e., σ(T)\σe(T) = π00(T)).
Proposition 5.F. Let D denote the open unit disk centered at the origin of the complex plane, and let T = ∂D be the unit circle. If S+ ∈ B[ℓ²₊] is a unilateral shift on ℓ²₊ = ℓ²₊(C), then σℓe(S+) = σre(S+) = σe(S+) = σC(S+) = ∂σ(S+) = T, and ind(λI − S+) = −1 if |λ| < 1 (i.e., if λ ∈ D = σ(S+)\∂σ(S+)).
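The index count in Proposition 5.F can be glimpsed in a finite truncation (a numerical sketch only; a truncated shift is a finite nilpotent matrix, so every truncation has index 0 and the value −1 emerges only in the limit). For |λ| < 1, λI − S₊ is injective, while the adjoint λ̄I − S₊∗ annihilates, up to a truncation tail, the geometric vector (1, λ̄, λ̄², …), so dim N((λI − S₊)∗) = 1 and ind(λI − S₊) = 0 − 1 = −1.

```python
import numpy as np

N = 60
lam = 0.5 + 0.3j                        # |lam| < 1
S = np.diag(np.ones(N - 1), -1)         # truncated unilateral shift: S e_k = e_{k+1}
B = S.T                                 # its adjoint, the truncated backward shift

# v = (1, λ̄, λ̄², ...) is, up to the truncation tail, an eigenvector of B
# at λ̄, i.e. a kernel vector of (λI − S)* = λ̄I − B.
v = np.conj(lam) ** np.arange(N)
v = v / np.linalg.norm(v)
residual = np.linalg.norm((np.conj(lam) * np.eye(N) - B) @ v)
print(residual < 1e-10)                 # True: the tail is of size |λ|^N

# λI − S itself is injective for λ ≠ 0 (lower triangular with det = λ^N),
# so in the limit ind(λI − S₊) = 0 − 1 = −1.
```

The residual is exactly |λ̄|^N (the truncated last coordinate), which decays geometrically in N for |λ| < 1.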
Proposition 5.G. If ∅ ≠ Ω ⊂ C is a compact set, {Λk}k∈Z is a collection of open sets included in Ω whose nonempty sets are pairwise disjoint, and Δ ⊆ Ω is a discrete set (i.e., containing only isolated points), then there exists T ∈ B[H] such that σ(T) = Ω, σk(T) = Λk for each k ≠ 0, σ0(T) = Λ0 ∪ Δ, and for each λ ∈ Δ there is a positive integer nλ ∈ N such that dim R(Eλ) = nλ (where Eλ is the Riesz idempotent associated with the isolated point λ ∈ Δ).

Recall from Remark 5.27(b,d) that if K ∈ B∞[H] is compact, then either σw(K) = ∅ if dim H < ∞, or σw(K) = {0} if dim H = ∞. The next result says that, on an infinite-dimensional separable Hilbert space, every compact operator is a commutator. (An operator T is a commutator if there are operators A and B such that T = AB − BA.)
=⇒
T is a commutator.
Proposition 5.I. Take any operator T in B[H]. (a) If dim N (T ) < ∞ or dim N (T ∗ ) < ∞, then (a1 ) asc(T ) < ∞ =⇒ dim N (T ) ≤ dim N (T ∗ ), (a2 ) dsc(T ) < ∞ =⇒ dim N (T ∗ ) ≤ dim N (T ). (b) If dim N (T ) = dim N (T ∗ ) < ∞, then asc(T ) < ∞ ⇐⇒ dsc(T ) < ∞. Proposition 5.J. Suppose T ∈ B[H] is a Fredholm operator. (a) If asc(T ) < ∞ and dsc(T ) < ∞, then ind(T ) = 0. (b) If ind (T ) = 0, then asc(T ) < ∞ if and only if dsc(T ) < ∞. Therefore,
B ⊆ W ⊆ {T ∈ W : asc(T) < ∞ ⇐⇒ dsc(T) < ∞},
W\B = {T ∈ W : asc(T) = dsc(T) = ∞}.
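Ascent and descent can be computed directly for matrices. The toy computation below (an ad hoc example, not from the text) takes T = J₃ ⊕ I₂, where J₃ is a nilpotent 3×3 Jordan block: the kernels N(T^m) grow and the ranges R(T^m) shrink until both stabilize at m = 3, so asc(T) = dsc(T) = 3, consistent with ind(T) = 0 and Proposition 5.J.

```python
import numpy as np
from numpy.linalg import matrix_power, matrix_rank

J3 = np.diag(np.ones(2), 1)             # nilpotent 3×3 Jordan block
T = np.block([[J3, np.zeros((3, 2))],
              [np.zeros((2, 3)), np.eye(2)]])

n = T.shape[0]
kernels = [n - matrix_rank(matrix_power(T, m)) for m in range(1, 6)]
ranges = [matrix_rank(matrix_power(T, m)) for m in range(1, 6)]
print(kernels)   # [1, 2, 3, 3, 3]: N(T^m) stabilizes at m = 3, asc(T) = 3
print(ranges)    # [4, 3, 2, 2, 2]: R(T^m) stabilizes at m = 3, dsc(T) = 3
```

In finite dimensions ascent and descent are always finite and equal; the operators in W\B above, with asc = dsc = ∞, exist only in infinite dimensions.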
The next result is an extension of the Fredholm Alternative of Remark 5.7(b), and also of the ultimate form of it in Remark 5.27(c).

Proposition 5.K. Take any compact operator K in B∞[H].
(a) If λ ≠ 0, then λI − K ∈ B.
(b) σ(K)\{0} = π0(K)\{0} = σiso(K)\{0}.

Proposition 5.L. Take T ∈ B[H]. If σw(T) is simply connected (so it has no holes), then T + K satisfies Browder's Theorem for every K ∈ B∞[H].
Notes: Propositions 5.A to 5.C are standard results on Fredholm and semi-Fredholm operators. For instance, see [75, Proposition 1.25] or [27, Proposition XI.2.6] for Proposition 5.A, and [75, Proposition 1.17] or [27, Proposition XI.3.13] for Proposition 5.B. For Proposition 5.C, see [6, Remark 3.3.3] or [50, Problem 181]. The locally constant dimension of kernels is considered in Proposition 5.D (see, e.g., [27, Theorem XI.6.7]), and a finer analysis of the spectra of normal operators and of unilateral shifts is discussed in Propositions 5.E and 5.F (see, e.g., [27, Proposition XI.4.6] and [27, Example XI.4.10]). Every spectral picture is attainable [25]. This is described in Proposition 5.G — see [27, Proposition XI.6.13]. For Proposition 5.H, see [16, §7]. The results of Proposition 5.I are from [23, p. 57] (also see [32]), and Proposition 5.J is an immediate consequence of Proposition 5.I. Regarding the Fredholm Alternative version of Proposition 5.K, for item (a) see [74, Theorem 1.4.6]; item (b) follows from Corollary 5.35, which goes back to Corollary 2.20. The compact perturbation result of Proposition 5.L is from [9, Theorem 11].
Suggested Reading

Aiena [2], Arveson [6], Barnes, Murphy, Smyth, and West [10], Caradus, Pfaffenberger, and Yood [23], Conway [27, 29], Douglas [34], Halmos [50], Harte [52], Istrăţescu [58], Kato [60], Müller [73], Murphy [74], Pearcy [75], Schechter [82], Sunder [85], Taylor and Lay [87]
References
1. L.V. Ahlfors, Complex Analysis, 3rd edn. McGraw-Hill, New York, 1978. 2. P. Aiena, Fredholm and Local Spectral Theory, with Applications to Multipliers, Kluwer, Dordrecht, 2004. 3. N.I. Akhiezer and I.M. Glazman, Theory of Linear Operators in Hilbert Space – Volume I , Pitman, London, 1981; reprinted: Dover, New York, 1993. 4. N.I. Akhiezer and I.M. Glazman, Theory of Linear Operators in Hilbert Space – Volume II , Pitman, London, 1981; reprinted: Dover, New York, 1993. 5. W. Arveson, An Invitation to C*-Algebras, Springer, New York, 1976. 6. W. Arveson, A Short Course in Spectral Theory, Springer, New York, 2002. 7. F.V. Atkinson, The normal solvability of linear equations in normed spaces, Mat. Sbornik 28 (1951), 3–14. 8. G. Bachman and L. Narici, Functional Analysis, Academic Press, New York, 1966; reprinted: Dover, Mineola, 2000. 9. B.A. Barnes, Riesz points and Weyl’s theorem, Integral Equations Operator Theory 34 (1999), 187–196. 10. B.A. Barnes, G.J. Murphy, M.R.F. Smyth, and T.T. West, Riesz and Fredholm Theory in Banach Algebras, Pitman, London, 1982. 11. R.G. Bartle, The Elements of Integration and Lebesgue Measure, Wiley, New York, 1995; enlarged 2nd edn. of The Elements of Integration, Wiley, New York, 1966. 12. R. Beals, Topics in Operator Theory, The University of Chicago Press, Chicago, 1971. 13. R. Beals, Advanced Mathematical Analysis, Springer, New York, 1973. 14. S.K. Berberian, Notes on Spectral Theory, Van Nostrand, New York, 1966. 15. S.K. Berberian, An extension of Weyl’s theorem to a class of not necessarily normal operators, Michigan Math. J. 16 (1969), 273–279.
C.S. Kubrusly, Spectral Theory of Operators on Hilbert Spaces, DOI 10.1007/978-0-8176-8328-3, © Springer Science+Business Media, LLC 2012
16. S.K. Berberian, The Weyl spectrum of an operator, Indiana Univ. Math. J. 20 (1971), 529–544. 17. S.K. Berberian, Lectures in Functional Analysis and Operator Theory, Springer, New York, 1974. 18. S.K. Berberian, Introduction to Hilbert Space, 2nd edn. Chelsea, New York, 1976. 19. J. Bram, Subnormal operators, Duke Math. J. 22 (1955), 75–94. 20. A. Brown and C. Pearcy, Spectra of tensor products of operators, Proc. Amer. Math. Soc. 17 (1966), 162–166. 21. A. Brown and C. Pearcy, Introduction to Operator Theory I – Elements of Functional Analysis, Springer, New York, 1977. 22. A. Brown and C. Pearcy, An Introduction to Analysis, Springer, New York, 1995. 23. S.R. Caradus, W.E. Pfaffenberger, and B. Yood, Calkin Algebras and Algebras of Operators on Banach Spaces, Lecture Notes in Pure and Applied Mathematics, Vol. 9. Marcel Dekker, New York, 1974. 24. L.A. Coburn, Weyl’s theorem for nonnormal operators, Michigan Math. J. 13 (1966), 285–288. 25. J.B. Conway, Every spectral picture is possible, Notices Amer. Math. Soc. 24 (1977), A-431. 26. J.B. Conway, Functions of One Complex Variable, Springer, New York, 1978. 27. J.B. Conway, A Course in Functional Analysis, 2nd edn. Springer, New York, 1990. 28. J.B. Conway, The Theory of Subnormal Operators, Mathematical Surveys and Monographs, Vol. 36, Amer. Math. Soc., Providence, 1991. 29. J.B. Conway, A Course in Operator Theory, Graduate Studies in Mathematics, Vol. 21, Amer. Math. Soc., Providence, 2000. 30. K.R. Davidson, C*-Algebras by Example, Fields Institute Monographs, Vol. 6, Amer. Math. Soc., Providence, 1996. 31. J. Dieudonné, Foundations of Modern Analysis, Academic Press, New York, 1969. 32. D.S. Djordjević, Semi-Browder essential spectra of quasisimilar operators, Novi Sad J. Math. 31 (2001), 115–123. 33. R.G. Douglas, On majorization, factorization, and range inclusion of operators on Hilbert space, Proc. Amer. Math. Soc. 17 (1966), 413–415. 34. R.G.
Douglas, Banach Algebra Techniques in Operator Theory, Academic Press, New York, 1972; 2nd edn. Springer, New York, 1998. 35. H.R. Dowson, Spectral Theory of Linear Operators, Academic Press, New York, 1978. 36. B.P. Duggal and S.V. Djordjević, Generalized Weyl’s theorem for a class of operators satisfying a norm condition, Math. Proc. Royal Irish Acad. 104 (2004), 75–81.
37. B.P. Duggal, S.V. Djordjević, and C.S. Kubrusly, Hereditarily normaloid contractions, Acta Sci. Math. (Szeged) 71 (2005), 337–352. 38. B.P. Duggal and C.S. Kubrusly, Weyl’s theorem for direct sums, Studia Sci. Math. Hungar. 44 (2007), 275–290. 39. N. Dunford and J.T. Schwartz, Linear Operators – Part I: General Theory, Interscience, New York, 1958. 40. N. Dunford and J.T. Schwartz, Linear Operators – Part II: Spectral Theory – Self Adjoint Operators in Hilbert Space, Interscience, New York, 1963. 41. N. Dunford and J.T. Schwartz, Linear Operators – Part III: Spectral Operators, Interscience, New York, 1971. 42. P.A. Fillmore, Notes on Operator Theory, Van Nostrand, New York, 1970. 43. P.A. Fillmore, A User’s Guide to Operator Algebras, Wiley, New York, 1996. 44. K. Gustafson, Necessary and sufficient conditions for Weyl’s theorem, Michigan Math. J. 19 (1972), 71–81. 45. K. Gustafson and D.K.M. Rao, Numerical Range, Springer, New York, 1997. 46. P.R. Halmos, Measure Theory, Van Nostrand, New York, 1950; reprinted: Springer, New York, 1974. 47. P.R. Halmos, Introduction to Hilbert Space and the Theory of Spectral Multiplicity, 2nd edn. Chelsea, New York, 1957; reprinted: AMS Chelsea, Providence, 1998. 48. P.R. Halmos, Finite-Dimensional Vector Spaces, Van Nostrand, New York, 1958; reprinted: Springer, New York, 1974. 49. P.R. Halmos, Shifts on Hilbert spaces, J. Reine Angew. Math. 208 (1961), 102–112. 50. P.R. Halmos, A Hilbert Space Problem Book, Van Nostrand, New York, 1967; 2nd edn. Springer, New York, 1982. 51. P.R. Halmos and V.S. Sunder, Bounded Integral Operators on L2 Spaces, Springer, Berlin, 1978. 52. R. Harte, Invertibility and Singularity for Bounded Linear Operators, Marcel Dekker, New York, 1988. 53. R. Harte and W.Y. Lee, Another note on Weyl’s theorem, Trans. Amer. Math. Soc. 349 (1997), 2115–2124. 54. G. Helmberg, Introduction to Spectral Theory in Hilbert Space, North-Holland, Amsterdam, 1969. 55. D.
Herrero, Approximation of Hilbert Space Operators – Volume 1 , 2nd edn. Longman, Harlow, 1989. 56. E. Hille and R.S. Phillips, Functional Analysis and Semi-Groups, Colloquium Publications Vol. 31, Amer. Math. Soc., Providence, 1957; reprinted: 1974.
190
References
57. T. Ichinose, Spectral properties of linear operators I , Trans. Amer. Math. Soc. 235 (1978), 75–113. 58. V.I. Istrˇ a¸tescu, Introduction to Linear Operator Theory, Marcel Dekker, New York, 1981. 59. M.A. Kaashoek and D.C. Lay, Ascent, descent, and commuting perturbations, Trans. Amer. Math. Soc. 169 (1972), 35–47. 60. T. Kato, Perturbation Theory for Linear Operators, 2nd edn. Springer, Berlin, 1980; reprinted: 1995. 61. D. Kitson, R. Harte, and C. Hernandez, Weyl’s theorem and tensor products: a counterexample, J. Math. Anal. Appl. 378 (2011), 128–132. 62. C.S. Kubrusly, An Introduction to Models and Decompositions in Operator Theory, Birkh¨ auser, Boston, 1997. 63. C.S. Kubrusly, Hilbert Space Operators, Birkh¨ auser, Boston, 2003. 64. C.S. Kubrusly, A concise introduction to tensor product, Far East J. Math. Sci. 22 (2006), 137–174. 65. C.S. Kubrusly, Measure Theory, Academic Press/Elsevier, San Diego, 2007. 66. C.S. Kubrusly, The Elements of Operator Theory, Birkh¨ auser/Springer, New York, 2011; enlarged 2nd edn. of Elements of Operator Theory, Birkh¨ auser, Boston, 2001. 67. C.S. Kubrusly and B.P. Duggal, On Weyl and Browder spectra of tensor products, Glasgow Math. J. 50 (2008), 289–302. 68. C.S. Kubrusly and B.P. Duggal, On Weyl’s theorem of tensor products, to appear. 69. D.C. Lay, Characterizations of the essential spectrum of F.E. Browder , Bull. Amer. Math. Soc. 74 (1968), 246–248. 70. W.Y. Lee, Weyl’s theorem for operator matrices, Integral Equations Operator Theory 32 (1998), 319–331. 71. W.Y. Lee, Weyl spectrum of operator matrices, Proc. Amer. Math. Soc. 129 (2001), 131–138. 72. J. Lindenstrauss and L. Tzafriri, On the complemented subspaces problem, Israel J. Math. 9 (1971), 263–269. 73. V. M¨ uller, Spectral Theory of Linear Operators: and Spectral Systems in Banach Algebras, 2nd edn. Birkh¨ auser, Basel, 2007. 74. G. Murphy, C*-Algebras and Operator Theory, Academic Press, San Diego, 1990. 75. C.M. 
Pearcy, Some Recent Developments in Operator Theory, CBMS Regional Conference Series in Mathematics No. 36, Amer. Math. Soc., Providence, 1978. 76. H. Radjavi and P. Rosenthal, Invariant Subspaces, Springer, Berlin, 1973; 2nd edn. Dover, New York, 2003. 77. F. Riesz and B. Sz.-Nagy, Functional Analysis, Frederick Ungar, New York, 1955; reprinted: Dover, New York, 1990.
References
191
78. H.L. Royden, Real Analysis, 3rd edn. Macmillan, New York, 1988. 79. W. Rudin, Real and Complex Analysis, 3rd edn. McGraw-Hill, New York, 1987. 80. W. Rudin, Functional Analysis, 2nd edn. McGraw-Hill, New York, 1991. 81. M. Schechter, On the essential spectrum of an arbitrary operator. I , J. Math. Anal. Appl. 13 (1966), 205–215. 82. M. Schechter, Principles of Functional Analysis, Academic Press, New York, 1971; 2nd edn. Graduate Studies in Mathematics, Vol. 36, Amer. Math. Soc., Providence, 2002. 83. J. Schwartz, Some results on the spectra and spectral resolutions of a class of singular operators, Comm. Pure Appl. Math. 15 (1962), 75–90. 84. Y.-M. Song and A.-H. Kim, Weyl’s theorem for tensor products, Glasgow Math. J. 46 (2004), 301–304. 85. V.S. Sunder, Functional Analysis – Spectral Theory, Birkh¨ auser, Basel, 1998. 86. B. Sz.-Nagy, C. Foia¸s, H. Bercovici, and L. K´erchy, Harmonic Analysis of Operators on Hilbert Space, Springer, New York, 2010; enlarged 2nd edn. of B. Sz.-Nagy and C. Foia¸s, North-Holland, Amsterdam, 1970. 87. A.E. Taylor and D.C. Lay, Introduction to Functional Analysis, Wiley, New York, 1980; reprinted: Krieger, Melbourne, 1986; enlarged 2nd edn. of A.E. Taylor, Wiley, New York, 1958. 88. A. Uchiyama, On the isolated points of the spectrum of paranormal operators, Integral Equations Operator Theory 55 (2006), 145–151. 89. J. Weidmann, Linear Operators in Hilbert Spaces, Springer, New York, 1980. ¨ 90. H. Weyl, Uber beschr¨ ankte quadratische Formen, deren Differenz vollstetig ist, Rend. Circ. Mat. Palermo 27 (1909), 373–392.
Index
absolutely continuous, 96
adjoint, 9
algebra homomorphism, 51, 91
algebra isomorphism, 91
algebra with identity, 91
algebraic complements, 3, 166
algebraic multiplicity, 129, 157
analytic function, 29
analytic function on neighborhoods, 110
analytic function on spectra, 110, 111
approximate eigenvalue, 32
approximate point spectrum, 32
approximation spectrum, 32
arc, 103
ascent of an operator, 163
Atkinson Theorem, 145
Axiom of Choice, 31, 74
backward bilateral shift, 49
backward unilateral shift, 49
Banach algebra, 91
Banach–Steinhaus Theorem, 2, 41
bilateral shift, 49
bilateral weighted shift, 50
Bolzano–Weierstrass Property, 57
Borel measure, 64
Borel sets, 64
Borel σ-algebra, 64
bounded below, 1
bounded component, 51
bounded inverse, 3
Bounded Inverse Theorem, 3
bounded linear transformation, 1
bounded measurable function, 65, 95
bounded sequence, 2
bounded variation, 103
Browder operator, 165
Browder spectrum, 170
Browder’s Theorem, 173, 179
Calkin algebra, 143
Calkin spectrum, 144
canonical bilateral shift, 49
canonical orthonormal basis, 85
canonical unilateral shift, 49
Cartesian decomposition, 24, 93
Cartesian Decomposition Theorem, 25
Cauchy domain, 106
Cauchy Integral Formula, 107
Cauchy Theorem, 109
clockwise oriented curve, 106
clopen set, 115
closed curve, 103
closed region, 105
cohyponormal operator, 12
coisometry, 14
C.S. Kubrusly, Spectral Theory of Operators on Hilbert Spaces, DOI 10.1007/978-0-8176-8328-3, © Springer Science+Business Media, LLC 2012
commutant, 126, 175
commutator, 185
commuting operators, 12, 51, 80–83, 101, 114, 116
compact linear transformation, 18
compact operator, 18
compact perturbation, 147, 152, 157, 177, 185
compact set, 17
Compactness Theorem, 17
complementary linear manifolds, 3, 166
complementary projection, 7
complementary spectral sets, 122
complementary subspaces, 3, 166
complemented Banach space, 134
completely continuous, 18
completely normal, 88
complex algebra, 91
component of a set, 51, 105, 147
compression spectrum, 32
connected, 105
continuous linear transformation, 1
continuous spectrum, 30
continuously differentiable, 103, 104
contraction, 2
counterclockwise oriented curve, 106
C*-algebra, 92
curve, 103
cyclic operator, 73, 74
cyclic vector, 73, 74
densely intertwined, 23
derivative, 29
descent of an operator, 163
diagonal operator, 85
diagonalizable operator, 49, 61, 62, 84, 85, 88
direct sum of operators, 7, 182
direct sum of spaces, 3, 6, 7
direct summand, 181
directed pair, 103
disconnected, 105, 115
domain, 105
dominant operator, 181
double commutant, 126
Double Commutant Theorem, 126
eigenspace, 16, 30
eigenvalue, 30
eigenvector, 30
entire function, 29
equivalent measures, 96
equivalent operators, 23
essential range, 86
essential singularity, 127
essential spectrum, 144
essentially bounded functions, 85, 95
essentially invertible, 145
extension by continuity, 25
finite algebraic multiplicity, 129, 157
finite geometric multiplicity, 129, 157
finite measure, 64
finite-rank operators, 45
finite-rank transformation, 24
Fourier series expansion, 84, 85
Fredholm Alternative, 21, 44, 135, 136, 140, 152, 161, 185
Fredholm index, 134
Fredholm operator, 131–133
Fredholm spectrum, 145
Fuglede Theorem, 80
Fuglede–Putnam Theorem, 84
full spectrum, 52, 87
functional calculus, 91, 94, 100
Functional Calculus Theorems, 95, 97, 99, 100, 111
Gelfand–Beurling formula, 38, 51, 93
geometric multiplicity, 129, 157
Hahn–Banach Theorem, 29, 30, 109
Heine–Borel Theorem, 18
Hermitian element, 92
Hermitian operator, 11
hole, 51, 147, 151, 153
holomorphic function, 29
homomorphism, 51, 91
hyperinvariant subspace, 23, 81, 121
hyponormal operator, 12
idempotent function, 7
index stability, 139
inside of a path, 106
integral over a curve, 105
intertwine, 23, 87
invariant linear manifold, 2
invariant subspace, 2, 81, 181
Inverse Mapping Theorem, 3
invertible element, 91, 141
involution, 91
involutive algebra, 92
irreducible operator, 11
isolated eigenvalues, 157
isolated points of spectra, 154–157
isolated singularity, 127
isoloid operator, 181
isometric isomorphism, 94
isometrically isomorphic algebras, 94
isometry, 14
isomorphic algebras, 91
Jordan closed region, 106
Jordan curve, 103
Jordan Curve Theorem, 106
Jordan domain, 107
kernel, 2
Laurent expansion, 38, 127
Lavrentiev Theorem, 76
left essential spectrum, 144
left inverse, 141
left semi-Fredholm operator, 131, 132
left spectrum, 142
length of a curve, 103
linear span, 6
Liouville Theorem, 29
logarithmic additivity, 136
measurable set, 64
multiplication operator, 67, 85, 86
multiplicity of a point in an arc, 103
multiplicity of a shift, 49
multiplicity of an arc, 103
multiplicity of eigenvalue, 30, 60, 129
mutually orthogonal projections, 8
natural map, 143
natural quotient map, 143
negatively oriented curve, 106
negatively oriented path, 106
Neumann expansion, 4
nilpotent operator, 38
nonnegative measure, 64
nonnegative operator, 12
nonscalar operator, 81
nontrivial projection, 81
nontrivial subspace, 2, 81
normal eigenvalues, 157
normal element, 92
normal operator, 12, 141, 162, 184
normaloid operator, 15, 43
normed algebra, 91
nowhere dense, 88
null space, 2
numerical radius, 42
numerical range, 41
open mapping, 3
Open Mapping Theorem, 3
operator, 1
operator norm property, 1
opposite oriented curve, 105
ordinary sum of subspaces, 3
oriented curve, 103
orthogonal complement, 5
orthogonal complementary subspaces, 6
orthogonal direct sum, 5
orthogonal direct sum of operators, 7
orthogonal direct sum of subspaces, 5, 7
orthogonal family of projections, 8
orthogonal projection, 8, 13
orthogonal projection on M, 9
orthogonal sequence of projections, 8
orthogonal sets, 5
Orthogonal Structure Theorem, 6, 8
orthogonal subspaces, 5
orthogonal vectors, 5
outside of a path, 106
parallelogram law, 22, 43
parameterization, 103
paranormal operator, 182
part of an operator, 181
partial isometry, 24
partition of an interval, 103
path, 106
path about a set, 110
piecewise continuously differentiable, 104
piecewise smooth curve, 104
piecewise smooth function, 104
point spectrum, 30
polar decomposition, 24
Polar Decomposition Theorem, 25
polarization identity, 21, 42, 77
pole, 127
positive measure, 64
positive operator, 12
positively oriented curve, 106
positively oriented path, 106
power bounded operator, 38
power inequality, 42
power series, 4, 38, 95, 112
power set, 64
projection, 7
Projection Theorem, 6, 8
proper contraction, 2
pseudohole, 147, 151, 153
Pythagorean Theorem, 8, 56
quasiaffine transform, 23
quasiaffinity, 23
quasiinvertible transformation, 23
quasinilpotent operator, 38
quasisimilar operators, 23
Radon–Nikodým derivative, 67, 102
range, 2
rectifiable curve, 103
reducible operator, 11, 81
reducing subspace, 11, 81, 181
reductive operator, 11, 88
region, 105
residual spectrum, 30
resolution of the identity, 8
resolvent function, 28
resolvent identity, 28
resolvent set, 27
reverse oriented curve, 105
Riemann–Stieltjes integral, 104
Riesz Decomposition Theorem, 121
Riesz Functional Calculus, 111
Riesz idempotent, 116, 154
Riesz point, 155, 156
Riesz Representation Theorems, 65, 69
right essential spectrum, 144
right inverse, 141
right semi-Fredholm operator, 131, 132
right spectrum, 142
Rosenblum Corollary, 52
scalar operator, 16, 81
scalar spectral measure, 97
scalar type operator, 128
Schechter Theorem, 159
self-adjoint element, 92
self-adjoint operator, 11
semi-Fredholm operator, 131–133
seminormal operator, 12
separating vector, 98
sequentially compact set, 17
σ-finite measure, 64
similar operators, 23, 88
similarity to a strict contraction, 49
simple curve, 103
simple function, 128
singularity, 127
smooth curve, 104
smooth function, 103
span of a set, 6
spectral decomposition, 59, 79
Spectral Mapping for polynomials, 35, 37
Spectral Mapping Theorems, 35, 37, 101, 102, 114
spectral measure, 64, 128
spectral operator, 128
spectral partition, 31
spectral picture, 152
spectral radius, 38, 91
spectral set, 115
Spectral Theorem: compact case, 58
Spectral Theorem: general case, 67, 76
spectraloid operator, 43, 49
spectrum, 27, 91
square-integrable functions, 85
square root, 23
Square Root Theorem, 25
square-summable family, 6
stability, 40
∗-algebra, 91
star-cyclic operator, 73, 74
star-cyclic vector, 67, 73, 74
∗-homomorphism, 92
∗-isomorphism, 92
Stone–Weierstrass Theorem, 68, 76, 77, 100
strict contraction, 2
strictly positive operator, 12
strong convergence, 2
strongly stable operator, 40
subspace, 2
summable family, 6
support of a measure, 64, 86
tensor product of operators, 183
topological sum of subspaces, 6
total variation, 103
totally bounded set, 17
totally hereditarily normaloid, 182
uniform convergence, 2
uniform stability, 40
uniformly stable operator, 40, 49
unilateral shift, 49, 184
unilateral weighted shift, 50
unital algebra, 1, 91
unital complex Banach algebra, 2, 27, 51, 91, 174
unital homomorphism, 91, 143
unitarily equivalent operators, 23, 85
unitary element, 92
unitary operator, 14
unitary transformation, 14, 23
von Neumann algebra, 126
weak convergence, 2
weakly stable operator, 40
Weierstrass Approximation Theorem, 76
Weierstrass Theorem, 29
weighted sum of projections, 9
Weyl operator, 140
Weyl spectrum, 157
Weyl spectrum identity, 183
Weyl’s Theorem, 162, 178–182
winding number, 106
Zorn’s Lemma, 37, 71, 177