Lecture Notes in Mathematics Editors: J.–M. Morel, Cachan F. Takens, Groningen B. Teissier, Paris
1787
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo
Stefaan Caenepeel Gigel Militaru Shenglin Zhu
Frobenius and Separable Functors for Generalized Module Categories and Nonlinear Equations
Authors Stefaan Caenepeel Faculty of Applied Sciences Vrije Universiteit Brussel, VUB Pleinlaan 2 1050 Brussels Belgium e-mail:
[email protected] http://homepages.vub.ac.be/~scaenepe/welcome.html Gigel Militaru Faculty of Mathematics University of Bucharest Strada Academiei 14 70109 Bucharest 1 Romania e-mail:
[email protected]
Shenglin Zhu Institute of Mathematics Fudan University Shanghai 200433 China e-mail:
[email protected]
Cataloging-in-Publication Data applied for Die Deutsche Bibliothek - CIP-Einheitsaufnahme Caenepeel, Stefaan: Frobenius and separable functors for generalized module categories and nonlinear equations / Stefaan Caenepeel ; Gigel Militaru ; Shenglin Zhu. Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Tokyo : Springer, 2002 (Lecture notes in mathematics ; Vol. 1787) ISBN 3-540-43782-7
Mathematics Subject Classification (2000): primary 16W30, secondary 16D90, 16W50, 16B50 ISSN 0075-8434 ISBN 3-540-43782-7 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science + Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2002 Printed in Germany The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Camera-ready TEX output by the author SPIN: 10878510
Printed on acid-free paper
Dedicated to Gilda, Lieve and Xiu
Preface
One of the key tools in classical representation theory is the fact that a representation of a group can also be viewed as an action of the group algebra on a vector space. This has been (one of) the motivations to introduce algebras, and modules over algebras. During the past century, it has become clear that several different notions of module can be introduced, with a variety of applications in different mathematical disciplines. For example, actions by group algebras can also be used to develop Galois descent theory, with its applications in number theory. Graded modules originated from projective algebraic geometry. In fact a group grading can be considered as a coaction by the group algebra, i.e. the dual of an action. One may then consider various types of modules over bialgebras and Hopf algebras: Hopf modules (in integral theory), relative Hopf modules (in Hopf-Galois theory), dimodules (when studying the Brauer group). Perhaps the most important ones are the Yetter-Drinfeld modules, which have been studied in connection with the theory of quantum groups, the quantum Yang-Baxter equation, braided monoidal categories, and knot theory. Frobenius functors generalize the classical concept of Frobenius algebra, which first appeared 100 years ago in the work of Frobenius on representation theory. The study of Frobenius algebras has seen a revival during the past five years, serving as an important tool in problems arising from different fields: Jones theory of subfactors of von Neumann algebras ([98], [100]), topological quantum field theory ([3], [8]), geometry of manifolds and quantum cohomology ([79], [129] and the references indicated there), the quantum Yang-Baxter equation ([15], [42]), and Yetter-Drinfeld modules ([49], [88]). Separable functors are a generalization of the theory of separable field extensions, and of separable algebras.
Separability plays a crucial role in several topics in algebra, number theory and algebraic geometry, for example in classical Galois theory, ramification theory, Azumaya algebras and Brauer group theory, Hochschild cohomology and étale cohomology. A more recent application can be found in the Jones theory of subfactors of von Neumann algebras, already mentioned above with respect to Frobenius algebras. In this monograph, we present - from a purely algebraic point of view - a unified treatment of actions and coactions and their properties, where we are mainly interested in generalizations of Frobenius and separability properties. The unification takes place at four different levels. First, we have a unification on the level of categories of modules: Doi-Koppinen modules were introduced first, and all modules mentioned above can be viewed as special cases. Entwined modules arose from noncommutative geometry; they are at the same time more general and easier to deal with, and provide new fields of applications. Secondly, there is a unification at the level of functors between module categories: one can introduce morphisms of entwining structures, and then associate to such a morphism a pair of adjoint functors. Many “classical” pairs of adjoint functors (the induction functor, forgetful functors, restriction of (co)scalars, functors forgetting a grading, and their adjoints) are in fact special cases of this construction. A third unification takes place at the level of the properties of these pairs of adjoint functors. Here the inspiration comes from two at first sight completely different algebraic notions, having their roots in representation theory: separable algebras and Frobenius algebras. We give a categorical approach, leading to the introduction of separable functors and Frobenius functors. Not only does this explain the at first sight mysterious fact that both separable and Frobenius algebras can be characterized using Casimir elements, it also enables us to prove Frobenius and separability type properties in a unified framework, with several new versions of Maschke’s Theorem as a consequence. The fourth unification is based on the theory of Yetter-Drinfeld modules, their relation with the quantum Yang-Baxter equation, and the FRT Theorem. The pentagon equation has appeared in the theory of duality for von Neumann algebras, in connection with C∗-algebras. Here we explain how it is related to Hopf modules.
In a similar way, another nonlinear equation, which we called the Long equation, is related to the category of Long dimodules, which finds its origin in generalizations of the Brauer-Wall group. Finally, the FS equation can be used to characterize Frobenius algebras, as well as separable algebras, providing yet another explanation of the relationship between the two notions. For all these equations, we have a version of the FRT Theorem. In Chapter 1, some preliminary results are given. We have included a Section about coalgebras and bialgebras, and one about adjoint functors. Section 1.3 deals with a classical treatment of Frobenius and separable algebras over fields, and we explain how they are connected to classical representation theory. Chapter 2 provides a discussion of entwining structures and their representations, the entwined modules; we discuss how they generalize other types of modules, and how they are related to the smash (co)product and the factorization problem of an algebra through two subalgebras. We also give the general pair of adjoint functors mentioned earlier. First properties of the category of entwined modules are discussed; for example, we discuss when the category of entwined modules is a monoidal category. We use entwining structures mainly as a tool to unify all kinds of modules, but we want to point
out that they were originally introduced with a completely different motivation, coming from noncommutative geometry: one can generalize the notion of principal bundles to a very general setting in which the role of coordinate functions on the base is played by a general noncommutative algebra A, and the fibre of the principal bundle by a coalgebra C, where A and C are related by a map ψ : A ⊗ C → C ⊗ A, called the entwining map, that has to satisfy certain compatibility conditions (see [32] and [33]). Entwined modules, as representations of an entwining structure, were introduced by Brzeziński [23], who proved that Doi-Koppinen Hopf modules and, a fortiori, graded modules, Hopf modules and Yetter-Drinfeld modules are special cases. Entwined modules can also be applied to introduce coalgebra Galois theory; we come back to this in Section 4.8, where we also explain the link to descent theory. The starting points of Chapter 3 are Maschke’s Theorem from Representation Theory (a group algebra is semisimple if and only if the characteristic of the base field does not divide the order of the group), and the classical result that a finite group algebra is Frobenius. Larson and Sweedler have given Hopf algebraic generalizations of these two results, using integrals. Both the Maschke and Frobenius Theorem can be restated in categorical terms. Let us first look at Maschke’s Theorem. If we replace the base field k by a commutative ring, then we obtain the following result: if the order of the group G is invertible in k, then every exact sequence of kG-modules that splits as a sequence of k-modules is split as a sequence of kG-modules. If k is a field, this implies immediately that kG is semisimple; in fact it turns out that all variations of Maschke’s Theorem that exist in the literature admit such a formulation. In fact we have more: the kG-splitting maps are constructed by deforming the k-splitting maps in a functorial way.
A proper definition of functors that have this functorial Maschke property was given by Năstăsescu, Van den Bergh, and Van Oystaeyen [145]. They called these functors separable functors, because for a given ring extension R → S, the restriction of scalars functor is separable if and only if S/R is separable in the classical sense. A Theorem of Rafael [158] gives necessary and sufficient conditions for a functor with an adjoint to be separable: the unit (or counit) of the adjunction has to be split (or cosplit). We will see that the separable functor philosophy can be applied successfully to any adjoint pair of functors between categories of entwined modules. We will focus mainly on the functors forgetting the action and the coaction, as this is more transparent and leads to several interesting results. A similar functorial approach can be used for the Frobenius property. It is well-known that a k-algebra S is Frobenius if and only if the restriction of scalars functor is at the same time a left and right adjoint of the induction functor. This has led to the introduction of Frobenius functors: a Frobenius functor is a functor which has a left adjoint that is also a right adjoint. An adjoint pair of Frobenius functors is called a Frobenius pair.
Let η : 1 → GF be the unit of an adjunction; as we have seen, to conclude that F is separable, we need a natural transformation ν : GF → 1. Our strategy will be to describe all natural transformations GF → 1; we will see that they form a k-algebra, and that the natural transformations that split the unit are idempotents (separability idempotents) in this algebra. A look at the definition of adjoint pairs of functors tells us that we have to investigate natural transformations GF → 1 and 1 → F G; the difference is that the normalizing properties for the separability property and the Frobenius property are not the same. But still we can handle both problems in a unified framework, and this is what we will do in Chapter 3. In Chapter 4, we will apply the results from Chapter 3 in some important subcases. We have devoted Sections to relative Hopf modules and Hopf-Galois theory, graded modules, Yetter-Drinfeld modules and the Drinfeld double, and Long dimodules. For example, we prove that, for a finitely generated projective Hopf algebra H, the Drinfeld double D(H) is a Frobenius extension of H if and only if H is unimodular. Part I tells us that Hopf modules, Yetter-Drinfeld modules and Long dimodules over a Hopf algebra H can be regarded as special cases of Doi-Koppinen Hopf modules and entwined modules, and that a unified theory can be developed. In Part II, we look at these three types of modules from a different point of view: we will see how they are connected to three different nonlinear equations. The celebrated FRT Theorem shows us the close relationship between Yetter-Drinfeld modules and the quantum Yang-Baxter equation (QYBE) (see e.g. [115], [108], [128]). We will discuss how the two other types of modules, Hopf modules and Long dimodules, are related to other nonlinear equations.
It comes as a surprise that the nonlinear equation related to the category of Hopf modules H MH is the pentagon (or fusion) equation, which is even older, and somehow more basic, than the quantum Yang-Baxter equation. Using Hopf modules, we will present two different approaches to solving this equation: a first approach is to prove an FRT type Theorem for the pentagon equation; a second, completely different, approach was developed by Baaj and Skandalis for unitary operators on Hilbert spaces ([10]) and, more recently, by Davydov ([65]) for arbitrary vector spaces. We will conclude Chapter 6 with a few open problems that may have important consequences: from a philosophical point of view, the theory presented herein views a finite dimensional Hopf algebra H simply as an invertible matrix R ∈ Mn²(k) ≅ Mn(k) ⊗ Mn(k) that is a solution of the pentagon equation R12 R13 R23 = R23 R12. Furthermore, in this case dim(H) | n. This point of view could be crucial in reducing the problem of classifying finite dimensional Hopf algebras (currently in full development and using different and complex techniques) to the elementary theory of matrices from linear algebra. At this point a new Jordan theory (we called it restricted Jordan theory) has to be developed. In Chapter 8, we will focus on the Frobenius-separability equation, all solutions of which are also solutions of the braid equation. An FRT type theorem will enable us to clarify the structure of two fundamental classes of algebras, namely separable algebras and Frobenius algebras. The fact that separable algebras and Frobenius algebras are related to the same nonlinear equation is related to the fact that the separability and Frobenius properties studied in Chapters 3 and 4 are based on the same techniques. As we already indicated, the quantum Yang-Baxter equation has been intensively studied in the literature. For completeness’ sake, and to illustrate the similarity with our other nonlinear equations, we decided to devote a special Chapter to it. This will also allow us to present some recent results, see Section 5.5. The three authors started their common research on Doi-Koppinen Hopf modules in 1995, with a three month visit by the second and third author to Brussels. The research was continued afterwards within the framework of the bilateral projects “Hopf algebras and (co)Galois theory” and “Hopf algebras in Algebra, Topology, Geometry and Physics” of the Flemish and Romanian governments, and “New computational, geometric and algebraic methods applied to quantum groups and differential operators” of the Flemish and Chinese governments. We benefitted greatly from direct and indirect contributions from - in alphabetical order - Margaret Beattie, Tomasz Brzeziński, Sorin Dăscălescu, José (Pepe) Gómez Torrecillas, Bogdan Ichim, Bogdan Ion, Lars Kadison, Claudia Menini, Constantin Năstăsescu, Şerban Raianu, Peter Schauenburg, Mona Stanciulescu, Dragoş Ştefan, Lucien Van hamme, Fred Van Oystaeyen, Yinhuo Zhang, and Yonghua Xu. Chapters 2 and 3 are based on a seminar given by the first author in Brussels during the spring of 1999. The first author wishes to thank Sebastian Burciu, Corina Calinescu and Erwin De Groot for their useful comments.
Finally we wish to thank Paul Taylor for his kind permission to use his “diagrams” software. A few words about notation: in Part I, we work over a commutative ring k; unadorned Hom, ⊗, M etc. are assumed to be taken over k. In Part II, we always assume that we work over a field k. For k-modules M and N, IM will be the identity map on M, and τ : M ⊗ N → N ⊗ M will be the switch map mapping m ⊗ n to n ⊗ m. It is also possible to read Part II without reading Part I first: one needs the generalities of Chapter 1, and the definitions in the first Sections of Chapter 2.
Brussels, Bucharest, Shanghai, February 2002
Stefaan Caenepeel Gigel Militaru Shenglin Zhu
Table of Contents
Part I Entwined modules and Doi-Koppinen Hopf modules

1 Generalities .......................................... 3
1.1 Coalgebras, bialgebras, and Hopf algebras .......... 3
1.2 Adjoint functors ................................... 22
1.3 Separable algebras and Frobenius algebras .......... 28

2 Doi-Koppinen Hopf modules and entwined modules ....... 39
2.1 Doi-Koppinen structures and entwining structures ... 39
2.2 Doi-Koppinen modules and entwined modules .......... 48
2.3 Entwined modules and the smash product ............. 50
2.4 Entwined modules and the smash coproduct ........... 59
2.5 Adjoint functors for entwined modules .............. 64
2.6 Two-sided entwined modules ......................... 68
2.7 Entwined modules and comodules over a coring ....... 71
2.8 Monoidal categories ................................ 78

3 Frobenius and separable functors for entwined modules . 89
3.1 Separable functors and Frobenius functors .......... 89
3.2 Restriction of scalars and the smash product ....... 99
3.3 The functor forgetting the C-coaction .............. 124
3.4 The functor forgetting the A-action ................ 137
3.5 The general induction functor ...................... 146

4 Applications .......................................... 159
4.1 Relative Hopf modules .............................. 159
4.2 Hopf-Galois extensions ............................. 168
4.3 Doi’s [H, C]-modules ............................... 179
4.4 Yetter-Drinfeld modules ............................ 181
4.5 Long dimodules ..................................... 193
4.6 Modules graded by G-sets ........................... 195
4.7 Two-sided entwined modules revisited ............... 198
4.8 Corings and descent theory ......................... 204

Part II Nonlinear equations

5 Yetter-Drinfeld modules and the quantum Yang-Baxter equation . 217
5.1 Notation ........................................... 217
5.2 The quantum Yang-Baxter equation and the braid equation . 218
5.3 Hopf algebras versus the QYBE ...................... 225
5.4 The FRT Theorem .................................... 235
5.5 The set-theoretic braid equation ................... 238

6 Hopf modules and the pentagon equation ................ 245
6.1 The Hopf equation and the pentagon equation ........ 245
6.2 The FRT Theorem for the Hopf equation .............. 253
6.3 New examples of noncommutative noncocommutative bialgebras . 267
6.4 The pentagon equation versus the structure and the classification of finite dimensional Hopf algebras . 277

7 Long dimodules and the Long equation .................. 301
7.1 The Long equation .................................. 301
7.2 The FRT Theorem for the Long equation .............. 304
7.3 Long coalgebras .................................... 311

8 The Frobenius-Separability equation ................... 317
8.1 Frobenius algebras and separable algebras .......... 317
8.2 The Frobenius-separability equation ................ 320
8.3 The structure of Frobenius algebras and separable algebras . 332
8.4 The category of FS-objects ......................... 339

References .............................................. 345
Index ................................................... 353
1 Generalities
1.1 Coalgebras, bialgebras, and Hopf algebras

In this Section, we give a brief introduction to Hopf algebras. A more detailed discussion can be found in the literature, see for example [1], [63], [140] or [172]. Throughout, k will be a commutative ring. In some specific cases, we will assume that k is a field. kM = M will denote the category of (left) k-modules (we omit the index k if no confusion is possible). ⊗ and Hom will be shorter notation for ⊗k and Homk. Let M and N be k-modules. IM : M → M will be the identity map, and τM,N : M ⊗ N → N ⊗ M the switch map. Indices will be omitted if no confusion is possible. M∗ = Hom(M, k) is the dual of the k-module M. For m ∈ M and m∗ ∈ M∗, we will often use the duality notation ⟨m∗, m⟩ = m∗(m). Let M be a finitely generated and projective k-module. Then there exists a (finite) dual basis {mi, m∗i | i = 1, ..., n} for M. This means that

m = Σ_{i=1}^n ⟨m∗i, m⟩ mi  and  m∗ = Σ_{i=1}^n ⟨m∗, mi⟩ m∗i

for all m ∈ M and m∗ ∈ M∗.

Algebras and coalgebras

Recall that a k-algebra (with unit) is a k-module together with a multiplication map m = mA : A ⊗ A → A and a unit element 1A ∈ A satisfying the conditions

m ◦ (m ⊗ I) = m ◦ (I ⊗ m)
m(a ⊗ 1A) = m(1A ⊗ a) = a

for all a ∈ A. The map η = ηA : k → A mapping x ∈ k to x1A is called the unit map of A and satisfies the condition
S. Caenepeel, G. Militaru, and S. Zhu: LNM 1787, pp. 3–37, 2002. c Springer-Verlag Berlin Heidelberg 2002
m ◦ (η ⊗ I) = m ◦ (I ⊗ η) = I

The opposite Aop of an algebra A is equal to A as a k-module, with multiplication mAop = mA ◦ τ. A is commutative if A = Aop, or m ◦ τ = m. k-alg will be the category of k-algebras and multiplicative maps. Coalgebras are defined in a similar way: a k-coalgebra C is a k-module together with k-linear maps ∆ = ∆C : C → C ⊗ C and ε = εC : C → k satisfying

(∆ ⊗ I) ◦ ∆ = (I ⊗ ∆) ◦ ∆    (1.1)
(ε ⊗ I) ◦ ∆ = (I ⊗ ε) ◦ ∆ = I    (1.2)
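Axioms (1.1) and (1.2) lend themselves to a direct computational check. The following sketch (our own illustration, not from the text) verifies them on a basis element of the standard comatrix coalgebra, which has basis e_ij with ∆(e_ij) = Σ_k e_ik ⊗ e_kj and ε(e_ij) = δ_ij; tensors are encoded as dictionaries from basis tuples to coefficients.

```python
# Toy check of the coalgebra axioms (1.1) and (1.2) for the comatrix
# coalgebra: basis e_ij (0 <= i, j < n), Delta(e_ij) = sum_k e_ik (x) e_kj,
# eps(e_ij) = delta_ij.  Tensors are dicts: basis tuple -> coefficient.
from collections import defaultdict

n = 3

def delta(ij):
    """Comultiplication on a basis element e_ij."""
    i, j = ij
    out = defaultdict(int)
    for k in range(n):
        out[((i, k), (k, j))] += 1
    return dict(out)

def eps(ij):
    """Counit eps(e_ij) = delta_ij."""
    i, j = ij
    return 1 if i == j else 0

def delta_left(t):
    """(Delta (x) I) applied to an element of C (x) C."""
    out = defaultdict(int)
    for (a, b), coef in t.items():
        for (x, y), d in delta(a).items():
            out[(x, y, b)] += coef * d
    return dict(out)

def delta_right(t):
    """(I (x) Delta) applied to an element of C (x) C."""
    out = defaultdict(int)
    for (a, b), coef in t.items():
        for (x, y), d in delta(b).items():
            out[(a, x, y)] += coef * d
    return dict(out)

def counit_left(t):
    """(eps (x) I) applied to an element of C (x) C, landing back in C."""
    out = defaultdict(int)
    for (a, b), coef in t.items():
        out[b] += eps(a) * coef
    return {k: v for k, v in out.items() if v != 0}

def counit_right(t):
    """(I (x) eps) applied to an element of C (x) C, landing back in C."""
    out = defaultdict(int)
    for (a, b), coef in t.items():
        out[a] += eps(b) * coef
    return {k: v for k, v in out.items() if v != 0}

e = (0, 2)
t = delta(e)
assert delta_left(t) == delta_right(t)              # coassociativity (1.1)
assert counit_left(t) == counit_right(t) == {e: 1}  # counit property (1.2)
```

Both sides of (1.1) produce the triple sum Σ_{k,l} e_ik ⊗ e_kl ⊗ e_lj, which is what the dictionary comparison confirms.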
∆ is called the comultiplication or the diagonal map, and ε is called the counit or augmentation map. (1.1) tells us that the comultiplication is coassociative. We will use the Sweedler-Heyneman notation for the comultiplication: for c ∈ C, we write

∆(c) = Σ_(c) c(1) ⊗ c(2) = c(1) ⊗ c(2)

The summation symbol will usually be omitted. The coassociativity can then be reformulated as follows:

c(1)(1) ⊗ c(1)(2) ⊗ c(2) = c(1) ⊗ c(2)(1) ⊗ c(2)(2)

and therefore we write

∆2(c) = (∆ ⊗ I)(∆(c)) = (I ⊗ ∆)(∆(c)) = c(1) ⊗ c(2) ⊗ c(3)

and, in a similar way,

∆3(c) = c(1) ⊗ c(2) ⊗ c(3) ⊗ c(4)

The counit property (1.2) can be restated as

ε(c(1))c(2) = ε(c(2))c(1) = c

The co-opposite Ccop of a coalgebra C is equal to C as a k-module, with comultiplication ∆Ccop = τ ◦ ∆C. C is called cocommutative if C = Ccop, or τ ◦ ∆ = ∆, or c(1) ⊗ c(2) = c(2) ⊗ c(1) for all c ∈ C. A k-linear map f : C → D between two coalgebras C and D is called a morphism of k-coalgebras if

∆D ◦ f = (f ⊗ f) ◦ ∆C and εD ◦ f = εC
or

f(c)(1) ⊗ f(c)(2) = f(c(1)) ⊗ f(c(2)) and εD(f(c)) = εC(c)

for all c ∈ C. We also say that f is comultiplicative. The category of k-coalgebras and comultiplicative maps is denoted by k-coalg. The tensor product of two coalgebras C and D is again a coalgebra. The comultiplication and counit are given by the formulas

∆C⊗D = (IC ⊗ τC,D ⊗ ID) ◦ (∆C ⊗ ∆D) and εC⊗D = εC ⊗ εD

Example 1. Let X be an arbitrary set, and C = kX the free k-module with basis X. On C we define a comultiplication and counit as follows:

∆C(x) = x ⊗ x and εC(x) = 1

for all x ∈ X. kX is called the grouplike coalgebra.

The convolution product

Let C be a coalgebra, and A an algebra. Then we can define a multiplication on Hom(C, A) in the following way: for f, g : C → A, we let f ∗ g = mA ◦ (f ⊗ g) ◦ ∆C, that is,

(f ∗ g)(c) = f(c(1))g(c(2))

This multiplication is called the convolution. ηA ◦ εC is a unit for the convolution. In particular, if A = k, we find that C∗ is a k-algebra, with unit ε, and multiplication given by

⟨c∗ ∗ d∗, c⟩ = ⟨c∗, c(1)⟩⟨d∗, c(2)⟩

In fact, the multiplication on C∗ is the dual of the comultiplication on C. If A is an algebra, which is finitely generated and projective as a k-module, then A∗ is a coalgebra. The comultiplication is given by

A∗ --(mA)∗--> (A ⊗ A)∗ ≅ A∗ ⊗ A∗

This means that ∆(a∗) = a∗(1) ⊗ a∗(2) if and only if ⟨a∗, ab⟩ = ⟨a∗(1), a⟩⟨a∗(2), b⟩ for all a, b ∈ A. The comultiplication can be described in terms of a dual basis {ai, a∗i | i = 1, ..., n} of A:

∆(a∗) = Σ_{i,j=1}^n ⟨a∗, ai aj⟩ a∗i ⊗ a∗j    (1.3)
for all a∗ ∈ A∗. From (1.3), it also follows that

Σ_{i,j=1}^n ai aj ⊗ a∗i ⊗ a∗j = Σ_{i=1}^n ai ⊗ ∆(a∗i)    (1.4)

For later use, we rewrite this formula in terms of coalgebras: put C = A∗, and let {ci, c∗i | i = 1, ..., n} be a finite dual basis for C. Then

Σ_i ∆(ci) ⊗ c∗i = Σ_{i,j} ci ⊗ cj ⊗ c∗i ∗ c∗j    (1.5)
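For the grouplike coalgebra C = kX of Example 1, the convolution on C∗ ≅ Hom(kX, k) becomes pointwise multiplication of functions on X, since ∆(x) = x ⊗ x and ε(x) = 1. A minimal sketch (our own illustration, with a small sample set X):

```python
# Convolution on C* for the grouplike coalgebra C = kX of Example 1.
# Since Delta(x) = x (x) x, we get (f * g)(x) = f(x) g(x): pointwise
# multiplication, with convolution unit eta o eps : x -> 1.
X = ["a", "b", "c"]                 # an arbitrary finite set

def convolve(f, g):
    """(f * g)(c) = f(c_(1)) g(c_(2)); here Delta(x) = x (x) x."""
    return {x: f[x] * g[x] for x in X}

f = {"a": 2, "b": 0, "c": 5}
g = {"a": 3, "b": 7, "c": 1}
unit = {x: 1 for x in X}            # eta_k o eps_C, the unit for *

assert convolve(f, g) == {"a": 6, "b": 0, "c": 5}
assert convolve(f, unit) == f and convolve(unit, f) == f
```

So for C = kX the dual algebra C∗ is simply the algebra of k-valued functions on X with pointwise operations.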
Bialgebras and Hopf algebras

Proposition 1. For a k-module H that is at once a k-algebra and a k-coalgebra, the following assertions are equivalent:
1. mH and ηH are comultiplicative;
2. ∆H and εH are multiplicative;
3. for all h, g ∈ H, we have

∆(gh) = g(1)h(1) ⊗ g(2)h(2)    (1.6)
ε(gh) = ε(g)ε(h)    (1.7)
∆(1) = 1 ⊗ 1    (1.8)
ε(1) = 1    (1.9)

In this situation, we call H a bialgebra. A map between bialgebras that is multiplicative and comultiplicative is called a morphism of bialgebras.

Proof. This follows from the following observations: mH is comultiplicative ⇐⇒ (1.6) and (1.7) hold; ηH is comultiplicative ⇐⇒ (1.8) and (1.9) hold; ∆H is multiplicative ⇐⇒ (1.6) and (1.8) hold; εH is multiplicative ⇐⇒ (1.7) and (1.9) hold.

Definition 1. A bialgebra H is called a Hopf algebra if the identity IH has an inverse S in the convolution algebra Hom(H, H). Thus we need a map S : H → H satisfying

S(h(1))h(2) = h(1)S(h(2)) = η(ε(h))    (1.10)

The map S is called the antipode of H. Let f : H → K be a morphism of bialgebras between two Hopf algebras H and K. It is well-known that f also preserves the antipode, that is, SK ◦ f = f ◦ SH, and f is called a morphism of Hopf algebras.
Example 2. Let G be a monoid. Then kG is a coalgebra (see Example 1), and a k-algebra. It is easy to see that kG is a bialgebra. If G is a group, then kG is a Hopf algebra. The antipode is given by S(g) = g−1, for all g ∈ G.

If H is a bialgebra, then Hop, Hcop and Hopcop are also bialgebras. If H has an antipode S, then S is also an antipode for Hopcop. An antipode S̄ for Hop is also an antipode for Hcop, and is called a twisted antipode. S̄ has to satisfy the property

S̄(h(2))h(1) = h(2)S̄(h(1)) = η(ε(h))    (1.11)

for all h ∈ H.

Proposition 2. Let H be a Hopf algebra. Then S is a bialgebra morphism from H to Hopcop. If S is bijective, then S−1 is a twisted antipode. If H is commutative or cocommutative, then S ◦ S = IH, and consequently S̄ = S.

Proof. Consider the maps ν, ρ : H ⊗ H → H given by

ν(h ⊗ k) = S(k)S(h) and ρ(h ⊗ k) = S(hk)

It is easy to prove that both ν and ρ are convolution inverses of the multiplication map m, so ν = ρ, and S(hk) = S(k)S(h) for all h, k ∈ H. Furthermore

1 = η(ε(1)) = (I ∗ S)(1) = I(1)S(1) = S(1)

and we find that S : H → Hop is multiplicative. In a similar way, we prove that S : H → Hcop is comultiplicative: the maps ψ, ϕ : H → H ⊗ H given by

ψ(h) = ∆(S(h)) and ϕ(h) = S(h(2)) ⊗ S(h(1))

are both convolution inverses of ∆H, and therefore ψ = ϕ and

∆(S(h)) = S(h(2)) ⊗ S(h(1))

for all h ∈ H. Finally

ε(h) = ε((η ◦ ε)(h)) = ε(S(h(1))h(2)) = ε(S(h(1)))ε(h(2)) = ε(S(h))

Assume that S is bijective. Then S−1(hk) = S−1(k)S−1(h), and S−1(1) = 1. Applying S−1 to (1.10), we find (1.11), and S−1 is a twisted antipode. Finally, if H is commutative or cocommutative, then S is also a twisted antipode, and we have for all h ∈ H that

(S ∗ (S ◦ S))(h) = S(h(1))S(S(h(2))) = S(S(h(2))h(1)) = S((η ◦ ε)(h)) = (η ◦ ε)(h)

proving that S ◦ S is a convolution inverse for S, and S ◦ S = I.
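For the group Hopf algebra kG of Example 2, the antipode axiom (1.10) can be checked numerically. The sketch below (our own illustration) does this for the cyclic group G = Z/5, with group elements encoded as integers mod 5 and elements of kG as coefficient dictionaries:

```python
# Numeric check of the antipode axiom (1.10) in the group Hopf algebra kG
# of Example 2, for G = Z/5.  An element of kG is a dict g -> coefficient;
# Delta(g) = g (x) g, eps(g) = 1, S(g) = g^{-1} = -g mod 5.
from collections import defaultdict

n = 5

def eps(h):
    """eps extends eps(g) = 1 linearly."""
    return sum(h.values())

def s_star_id(h):
    """Compute S(h_(1)) h_(2).  Since Delta(h) = sum_g h_g (g (x) g),
    this equals sum_g h_g S(g)g = sum_g h_g e, with e = 0 the identity."""
    out = defaultdict(int)
    for g, coef in h.items():
        out[((-g) % n + g) % n] += coef   # S(g) g = g^{-1} g = e
    return {g: c for g, c in out.items() if c != 0}

h = {0: 2, 1: -1, 3: 4}                   # h = 2e_0 - e_1 + 4e_3 in kG
assert s_star_id(h) == {0: eps(h)}        # (1.10): S(h_(1)) h_(2) = eps(h) 1_H
```

Since kG is cocommutative, the symmetric identity h(1)S(h(2)) = η(ε(h)) follows by the same computation.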
Modules

Let A be a k-algebra. A left A-module M is a k-module, together with a map

ψ = ψlM : A ⊗ M → M, ψ(a ⊗ m) = am

such that a(bm) = (ab)m and 1m = m for all a, b ∈ A and m ∈ M. We say that ψ is a left A-action on M, or that A acts on M from the left. Let M and N be two left A-modules. A k-linear map f : M → N is called left A-linear if f(am) = af(m), for all a ∈ A and m ∈ M. The category of left A-modules and A-linear maps is denoted by AM. In a similar way, we can introduce right A-modules, and the category of right A-modules MA. Let B be another k-algebra. A k-module M that is at once a left A-module and a right B-module such that a(mb) = (am)b for all a ∈ A, b ∈ B and m ∈ M is called an (A, B)-bimodule. AMB will be the category of (A, B)-bimodules. Observe that we have isomorphisms of categories

AMB ≅ A⊗Bop M ≅ MAop⊗B

Take M ∈ MA and N ∈ AM. The tensor product M ⊗A N is by definition the coequalizer of the maps IM ⊗ ψlN and ψrM ⊗ IN, that is, we have an exact sequence

M ⊗ A ⊗ N ⇉ M ⊗ N → M ⊗A N → 0

If H is a bialgebra, then the tensor product of two (left) H-modules M and N is again an H-module. The action on M ⊗ N is given by

h(m ⊗ n) = h(1)m ⊗ h(2)n

We also write

MH = {m ∈ M | hm = ε(h)m, for all h ∈ H}

Module algebras and module coalgebras

Assume that H is a bialgebra. Let A be a left H-module, and a k-algebra. We call A a left H-module algebra if the unit and multiplication are left H-linear, or

h(ab) = (h(1)a)(h(2)b) and h1A = ε(h)1A    (1.12)

for all h ∈ H, and a, b ∈ A. In a similar way, we introduce right H-module algebras. If A is a left H-module algebra, then Aop is a right Hopcop-module algebra. A k-coalgebra that is also a left H-module is called a left H-module coalgebra if the counit and the comultiplication are left H-linear. This is equivalent to

∆C(hc) = h(1)c(1) ⊗ h(2)c(2) and εC(hc) = εH(h)εC(c)    (1.13)
for all h ∈ H and c ∈ C. We can also introduce right module coalgebras, and if C is a left H-module coalgebra, then C^cop is a right H^opcop-module coalgebra. If C is a right H-module coalgebra, then C* is a left H-module algebra. The left H-action on C* is given by the formula

⟨h · c*, c⟩ = ⟨c*, ch⟩   (1.14)

In a similar way, if C is a left H-module coalgebra, then C* is a right H-module algebra, with

⟨c* · h, c⟩ = ⟨c*, hc⟩   (1.15)

Example 3. Let G be a group, and X a right G-set. This means that we have a map X × G → X : (x, g) → xg such that (xg)h = x(gh), for all g, h ∈ G. Then the coalgebra kX is a right kG-module coalgebra.

Comodules
Let C be a coalgebra. A right C-comodule M is a k-module together with a map ρ = ρ^r_M : M → M ⊗ C such that

(ρ ⊗ I_C) ◦ ρ = (I_M ⊗ ∆_C) ◦ ρ and (I_M ⊗ ε_C) ◦ ρ = I_M   (1.16)

We will say that C coacts from the right on M. We will use the Sweedler-Heyneman notation

ρ(m) = m[0] ⊗ m[1] and (ρ ⊗ I_C)(ρ(m)) = (I_M ⊗ ∆_C)(ρ(m)) = m[0] ⊗ m[1] ⊗ m[2]

The second identity in (1.16) can be rewritten as ε(m[1])m[0] = m for all m ∈ M. A map f : M → N between two right comodules is called a morphism of C-comodules, or a right C-colinear map, if

ρ^r_N ◦ f = (f ⊗ I_C) ◦ ρ^r_M, or f(m)[0] ⊗ f(m)[1] = f(m[0]) ⊗ m[1]

for all m ∈ M. M^C will be the category of right C-comodules and right C-colinear maps.
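Example 3 can be made concrete with a small computation. The following sketch uses hypothetical choices (G = Z/3 acting on X = Z/6 by x · g = x + 2g mod 6, neither taken from the text) and checks the right G-set axioms; since the basis elements of kX and kG are grouplike, the module coalgebra conditions then hold automatically on basis elements.

```python
# Illustrative sketch of Example 3 (hypothetical G, X, and action).
G = range(3)   # the cyclic group Z/3, written additively
X = range(6)   # the right G-set Z/6

def act(x, g):
    """The right action X x G -> X, (x, g) |-> xg; here xg = x + 2g (mod 6)."""
    return (x + 2 * g) % 6

# Right G-set axioms: (xg)h = x(gh), and the neutral element acts trivially.
assert all(act(act(x, g), h) == act(x, (g + h) % 3) for x in X for g in G for h in G)
assert all(act(x, 0) == x for x in X)

# On kX we have Delta(x) = x (x) x and eps(x) = 1 for x in X; the basis elements
# of kG are grouplike as well, so Delta(x.g) = (x.g) (x) (x.g) and eps(x.g) = 1,
# which is exactly the module coalgebra condition checked on basis elements.
print("kX is a right kG-module coalgebra")
```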
1 Generalities
Example 4. Let C = kX, with X an arbitrary set. Let M be a k-module graded by X, that is,

M = ⊕_{x∈X} M_x

where every M_x is a k-module. Then M is a kX-comodule; the coaction is given by ρ^r(m) = Σ_{x∈X} m_x ⊗ x if m = Σ_{x∈X} m_x with m_x ∈ M_x. Conversely, every kX-comodule M is graded by X; one defines the grading by

M_x = {m ∈ M | ρ(m) = m ⊗ x}

Thus we have an equivalence between M^{kX} and the category of X-graded modules.

We have a functor F : M^C → _{C*}M defined as follows: for a right C-comodule M, we let F(M) = M, with left C*-action given by

c* · m = ⟨c*, m[1]⟩m[0]

for all c* ∈ C* and m ∈ M; if f : M → N is right C-colinear, then it is easy to prove that f is also left C*-linear, and we let F(f) = f.

Proposition 3. The functor F : M^C → _{C*}M is faithful. If C is projective as a k-module, then F is fully faithful. If C is finitely generated and projective, then F is an isomorphism of categories.

Proof. Take two right C-comodules M and N. Obviously Hom^C(M, N) → _{C*}Hom(F(M), F(N)) is injective, so F is faithful. Assume that C is k-projective, and let {c_i, c*_i | i ∈ I} be a dual basis. Let M and N be C-comodules, and assume that f : M → N is left C*-linear. We claim that f is also right C-colinear. Indeed, for all m ∈ M, we have

f(m[0]) ⊗ m[1] = Σ_{i∈I} f(m[0]) ⊗ ⟨c*_i, m[1]⟩c_i
= Σ_{i∈I} f(c*_i · m) ⊗ c_i
= Σ_{i∈I} c*_i · f(m) ⊗ c_i
= Σ_{i∈I} ⟨c*_i, f(m)[1]⟩f(m)[0] ⊗ c_i
= f(m)[0] ⊗ f(m)[1]
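The correspondence of Example 4 between X-graded modules and kX-comodules can be sketched computationally. The data below (the set X, the basis names, and their degrees) are hypothetical choices for illustration only.

```python
# Sketch of Example 4: an X-graded module, its kX-coaction, and the recovery of
# the grading from the coaction. All concrete data here are hypothetical.
X = ["a", "b", "c"]
degree = {"m1": "a", "m2": "a", "m3": "c"}   # basis of M with its X-grading

def coaction(vec):
    """rho on a vector {basis: coeff}: split into graded pieces m_x (x) x."""
    out = {}
    for b, coeff in vec.items():
        piece = out.setdefault(degree[b], {})
        piece[b] = piece.get(b, 0) + coeff
    return [(piece, x) for x, piece in out.items()]

def grading_from_coaction(basis_elem):
    """Recover deg(b) as the unique x with rho(b) = b (x) x."""
    [(piece, x)] = coaction({basis_elem: 1})
    return x

assert all(grading_from_coaction(b) == degree[b] for b in degree)

# The induced left (kX)*-action c*.m = <c*, m_[1]> m_[0]: for the dual basis
# element delta_x this is the projection onto the degree-x component.
def dual_action(x, vec):
    return {b: c for b, c in vec.items() if degree[b] == x}

assert dual_action("a", {"m1": 2, "m3": 5}) == {"m1": 2}
print("grading recovered from the coaction")
```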
Assume moreover that C is finitely generated, and let {c_i, c*_i | i = 1, · · · , n} be a dual basis for C. We define a functor G : _{C*}M → M^C as follows: G(M) = M as a k-module, with right C-coaction

ρ(m) = Σ_{i=1}^n c*_i · m ⊗ c_i

We will show that ρ defines a coaction, and leave all other verifications to the reader. We obviously have

(I_M ⊗ ε)(ρ(m)) = Σ_{i=1}^n ε(c_i)c*_i · m = ε · m = m

Next we want to prove that

(ρ ⊗ I_C) ◦ ρ = (I_M ⊗ ∆_C) ◦ ρ   (1.17)

For all c*, d* ∈ C*, we have

((I_M ⊗ c* ⊗ d*) ◦ (ρ ⊗ I_C) ◦ ρ)(m) = (I_M ⊗ c* ⊗ d*)(Σ_{i,j} (c*_j ∗ c*_i) · m ⊗ c_j ⊗ c_i)
= Σ_{i,j} ⟨c*, c_j⟩⟨d*, c_i⟩(c*_j ∗ c*_i) · m
= (c* ∗ d*) · m = Σ_i ⟨c* ∗ d*, c_i⟩c*_i · m
= (I_M ⊗ c* ⊗ d*)(Σ_i c*_i · m ⊗ ∆(c_i))
= (I_M ⊗ c* ⊗ d*)(((I_M ⊗ ∆_C) ◦ ρ)(m))

and (1.17) follows after we apply Lemma 1.

Lemma 1. Let M, N be k-modules, and assume that N is finitely generated and projective. Take Σ_j m_j ⊗ p_j and Σ_k m′_k ⊗ p′_k in M ⊗ N. If

Σ_j ⟨n*, p_j⟩m_j = Σ_k ⟨n*, p′_k⟩m′_k

for all n* ∈ N*, then

Σ_j m_j ⊗ p_j = Σ_k m′_k ⊗ p′_k

Proof. Let {n_i, n*_i | i = 1, · · · , n} be a dual basis for N. Then

Σ_j m_j ⊗ p_j = Σ_{i,j} m_j ⊗ ⟨n*_i, p_j⟩n_i = Σ_{i,k} m′_k ⊗ ⟨n*_i, p′_k⟩n_i = Σ_k m′_k ⊗ p′_k
Let H be a bialgebra. If M and N are right H-comodules, then M ⊗ N is again a right H-comodule. The H-coaction is given by

ρ^r_{M⊗N}(m ⊗ n) = m[0] ⊗ n[0] ⊗ m[1]n[1]

We call

M^coH = {m ∈ M | ρ(m) = m ⊗ 1}

the submodule of coinvariants of M. We can also introduce left C-comodules. For a left C-comodule M, the Sweedler-Heyneman notation takes the following form:

ρ^l_M(m) = m[−1] ⊗ m[0] ∈ C ⊗ M

The category of left C-comodules and left C-colinear maps is denoted by ^C M. We have an isomorphism of categories

^C M ≅ M^{C^cop}

If M is at once a left C-comodule and a right D-comodule in such a way that

(ρ^l ⊗ I_D) ◦ ρ^r = (I_C ⊗ ρ^r) ◦ ρ^l

then we say that M is a (C, D)-bicomodule. We then write, following the Sweedler-Heyneman philosophy:

(m[0])[−1] ⊗ (m[0])[0] ⊗ m[1] = m[−1] ⊗ (m[0])[0] ⊗ (m[0])[1] = m[−1] ⊗ m[0] ⊗ m[1] = ρ^{lr}(m)

Observe that C itself is a (C, C)-bicomodule. ^C M^D is the category of (C, D)-bicomodules and left C-colinear, right D-colinear maps. We have isomorphisms

^C M^D ≅ ^{C⊗D^cop}M ≅ M^{C^cop⊗D}

Proposition 4. Let C be a coalgebra, and M a finitely generated projective k-module. Right C-coactions on M are in bijective correspondence with left C-coactions on M*.

Proof. Let {m_i, m*_i | i = 1, · · · , n} be a dual basis for M, and let ρ^r : M → M ⊗ C be a right C-coaction. We define ρ^l = α(ρ^r) : M* → C ⊗ M* by

ρ^l(m*) = Σ_{i=1}^n m_i[1] ⊗ ⟨m*, m_i[0]⟩m*_i   (1.18)

This is a coaction on M* since
(I_C ⊗ ρ^l)(ρ^l(m*)) = Σ_{i,j=1}^n m_i[1] ⊗ m_j[1] ⊗ ⟨m*, m_i[0]⟩⟨m*_i, m_j[0]⟩m*_j
= Σ_{j=1}^n m_j[1] ⊗ m_j[2] ⊗ ⟨m*, m_j[0]⟩m*_j
= (∆_C ⊗ I_{M*})(ρ^l(m*))

and

Σ ε(m*[−1])m*[0] = Σ_{i=1}^n ⟨ε, m_i[1]⟩⟨m*, m_i[0]⟩m*_i = Σ_{i=1}^n ⟨m*, m_i⟩m*_i = m*

Conversely, given ρ^l : M* → C ⊗ M*, we define ρ^r = α′(ρ^l) : M → M ⊗ C by

ρ^r(m) = Σ_{i=1}^n ⟨m*_i[0], m⟩m_i ⊗ m*_i[−1]
An easy computation shows that α and α′ are each other's inverses.

The category of comodules over a coalgebra over a field k is a Grothendieck category. Over a commutative ring, we have the following generalization of this result, due to Wisbauer [187].

Proposition 5. Let C be a coalgebra over a commutative ring k. The following assertions are equivalent:
1. C is flat as a k-module;
2. M^C is a Grothendieck category and the forgetful functor M^C → M is exact;
3. M^C is an abelian category and the forgetful functor M^C → M is exact.

Proof. 1. ⇒ 2. It is clear that M^C is additive. Let f : M → N be a map in M^C. To prove that Ker(f) is a C-comodule, we need to show, for any m ∈ Ker(f):

ρ(m) ∈ Ker(f) ⊗ C = Ker(f ⊗ I_C)

(using the fact that C is k-flat). This is obvious, since

(f ⊗ I_C)ρ(m) = f(m[0]) ⊗ m[1] = ρ(f(m)) = 0

On Coker(f) = N/Im(f), we put a C-comodule structure as follows: ρ(n̄) = n̄[0] ⊗ n[1] for all n ∈ N, where n̄ denotes the class of n. This is well-defined: if n = f(m), then n̄[0] ⊗ n[1] is the class of f(m)[0] ⊗ f(m)[1] = f(m[0]) ⊗ m[1], which lies in Im(f) ⊗ C, hence is zero in Coker(f) ⊗ C.
It is clear that every monic in M^C is the kernel of its cokernel, and that every epic is the cokernel of its kernel, so M^C is an abelian category. Let us next see that M^C is an AB3-category. If {M_λ | λ ∈ Λ} is a family in M^C, then M = ⊕_λ M_λ is again a comodule: we have maps

M_λ --ρ_λ--> M_λ ⊗ C --i_λ⊗I_C--> M ⊗ C

and therefore a unique map ρ : M → M ⊗ C making M into a comodule, and each i_λ into a right C-colinear map. The fact that M^C is an AB5-category follows easily since M is AB5, and the functor forgetting the C-coaction is exact.
Let us finally show that M^C has a family of generators. First observe that every right C-comodule of the form M ⊗ C, with C-coaction induced by C, is generated by C. Indeed, for any k-module M, we can find an epimorphism k^(Λ) → M in M, and therefore an epimorphism k^(Λ) ⊗ C = C^(Λ) → M ⊗ C in M^C. Now we claim that the C-subcomodules of C form a family of generators of M^C. It suffices to show that for every right C-comodule M and m ∈ M, there exist a C-subcomodule D of C and a C-colinear map to M whose image contains m. ρ : M → M ⊗ C is a monomorphism in M^C, so M is isomorphic to ρ(M) = {n[0] ⊗ n[1] | n ∈ M}. C generates M ⊗ C, so there exist a C-colinear map f : C → M ⊗ C and c ∈ C such that f(c) = ρ(m). Now let

D = {d ∈ C | f(d) ∈ ρ(M)}

For d ∈ D, we can find n ∈ M such that f(d) = n[0] ⊗ n[1], and we see that

(ρ ⊗ I_C)(f(d)) = n[0] ⊗ n[1] ⊗ n[2] ∈ ρ(M) ⊗ C

Now look at the diagram with exact rows that defines D:

0 → D → C
0 → ρ(M) → M ⊗ C

with vertical maps f′ : D → ρ(M) (the restriction of f) and f : C → M ⊗ C. C is flat, so we obtain a commutative diagram with exact rows

0 → D ⊗ C → C ⊗ C
0 → ρ(M) ⊗ C → M ⊗ C ⊗ C

with vertical maps f′ ⊗ I_C and f ⊗ I_C, and

D ⊗ C = {x ∈ C ⊗ C | (f ⊗ I_C)(x) ∈ ρ(M) ⊗ C}

It follows that ρ(d) ∈ D ⊗ C, and D is a right C-comodule. We now have f′ : D → ρ(M) ≅ M in M^C, and f′(c) = m[0] ⊗ m[1] ≅ m.
2. ⇒ 3. is trivial.
3. ⇒ 1. The forgetful functor F : M^C → M is a left adjoint of • ⊗ C : M → M^C. The unit and counit of the adjunction are given by

ρ_M : M → M ⊗ C ; ρ(m) = m[0] ⊗ m[1]
ε_N : N ⊗ C → N ; ε_N(n ⊗ c) = ε(c)n
for all M ∈ M^C and N ∈ M. It is well-known that a functor between abelian categories that is a right adjoint of a covariant functor is left exact (see e.g. [11, I.7.1]), and it follows that • ⊗ C : M → M^C is exact. Now the forgetful functor M^C → M is also left exact, by assumption, so the composition • ⊗ C : M → M is left exact, and C is flat, as needed.

Remark 1. The assumption that the forgetful functor is exact, in the second and third condition of the Proposition, means the following: for a C-colinear map f in M^C, the (co)kernel of f in M^C has to be equal as a k-module to the (co)kernel of f viewed as a map between k-modules. J. Gómez Torrecillas kindly pointed out to us that this condition is missing in Wisbauer's paper [187]. For an example of a coalgebra C such that M^C is abelian, while C is not flat, and the functor forgetting the coaction is not exact, we refer to [80].

The cotensor product
Take M ∈ M^C and N ∈ ^C M. The cotensor product M □_C N is defined as the equalizer

0 → M □_C N → M ⊗ N ⇉ M ⊗ C ⊗ N

Example 5. Let C = kX, and M and N X-graded modules. Then

M □_C N = ⊕_{x∈X} M_x ⊗ N_x

For a fixed right C-comodule M, we have a functor

M □_C • : ^C M → M

If M is flat as a k-module, then M ⊗ • is an exact functor, and it follows easily that M □_C • is left exact, but not necessarily right exact.

Definition 2. A right C-comodule M is called right C-coflat if it is flat as a k-module, and if M □_C • is an exact functor. A similar definition applies to left C-comodules.
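Example 5 can be checked numerically for small gradings. The sketch below (all gradings are hypothetical choices) computes the equalizer ker(ρ_M ⊗ I_N − I_M ⊗ λ_N) inside M ⊗ N by Gaussian elimination and compares its dimension with Σ_x dim(M_x)·dim(N_x).

```python
# Numerical check of Example 5 for C = kX (hypothetical gradings over k = Q).
from fractions import Fraction

X = ["x", "y", "z"]
degM = ["x", "x", "y"]          # degrees of a basis e_0, e_1, e_2 of M
degN = ["x", "y", "y", "z"]     # degrees of a basis f_0, ..., f_3 of N

def rank(vectors):
    """Rank of a list of sparse vectors (dicts) via Gaussian elimination."""
    pivots = {}
    r = 0
    for v in vectors:
        v = dict(v)
        for p, pv in pivots.items():
            if p in v:
                c = v[p]
                v = {k: v.get(k, Fraction(0)) - c * pv.get(k, Fraction(0))
                     for k in set(v) | set(pv)}
        v = {k: c for k, c in v.items() if c != 0}
        if v:
            p = next(iter(v))
            c = v[p]
            pivots[p] = {k: val / c for k, val in v.items()}
            r += 1
    return r

# e_i (x) f_j maps to e_i (x) degM[i] (x) f_j - e_i (x) degN[j] (x) f_j,
# which vanishes exactly when the degrees match.
pairs = [(i, j) for i in range(len(degM)) for j in range(len(degN))]
cols = []
for (i, j) in pairs:
    v = {}
    v[(i, degM[i], j)] = v.get((i, degM[i], j), Fraction(0)) + 1
    v[(i, degN[j], j)] = v.get((i, degN[j], j), Fraction(0)) - 1
    cols.append({k: c for k, c in v.items() if c != 0})

dim_equalizer = len(pairs) - rank(cols)
expected = sum(degM.count(x) * degN.count(x) for x in X)
assert dim_equalizer == expected
print("dim M box N =", dim_equalizer)
```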
Now take M ∈ M^C, N ∈ ^C M, and P ∈ M. We then have a natural map

f : (M □_C N) ⊗ P → M □_C (N ⊗ P)

given by f((Σ_i m_i ⊗ n_i) ⊗ p) = Σ_i m_i ⊗ (n_i ⊗ p).

Lemma 2. With notation as above, the natural map f : (M □_C N) ⊗ P → M □_C (N ⊗ P) is an isomorphism in each of the following cases:
1. P is k-flat (e.g. if k is a field);
2. M is right C-coflat.

Proof. 1. M □_C N is defined by the exact sequence

0 → M □_C N → M ⊗ N ⇉ M ⊗ C ⊗ N

Using the fact that P is k-flat, we obtain a commutative diagram with exact rows

0 → (M □_C N) ⊗ P → M ⊗ N ⊗ P ⇉ M ⊗ C ⊗ N ⊗ P
0 → M □_C (N ⊗ P) → M ⊗ N ⊗ P ⇉ M ⊗ C ⊗ N ⊗ P

with vertical maps f and identities, and the result follows from the Five Lemma (see e.g. [123, Sec. VIII.4]).
2. Recall the definition of the tensor product: N ⊗ P = (N × P)/I, where I is the ideal generated by elements of the form

(n, p + q) − (n, p) − (n, q) ; (n + m, p) − (n, p) − (m, p) ; (nx, p) − (n, xp)

and we have an exact sequence of left C-comodules

0 → I → N × P → N ⊗ P → 0

Using the right C-coflatness of M, we find a commutative diagram with exact rows

0 → M □_C I → M □_C (N × P) → M □_C (N ⊗ P) → 0
0 →     J    → (M □_C N) × P → (M □_C N) ⊗ P → 0

with middle vertical map an isomorphism and right vertical map f, and the result follows again from the Five Lemma.

Assume that A is a k-algebra, C a k-coalgebra, P ∈ _A M, M ∈ M^C and N ∈ ^C M_A. By this we mean that N is a left C-comodule and a right A-module such that the right A-action is left C-colinear, i.e.

ρ^l(na) = n[−1] ⊗ n[0]a

for all n ∈ N and a ∈ A.
Lemma 3. With notation as above, the natural map f : (M □_C N) ⊗_A P → M □_C (N ⊗_A P) is an isomorphism in each of the following situations:
1. P is left A-flat;
2. M is right C-coflat.

Proof. 1. The proof is identical to the proof of the first part of Lemma 2.
2. The right A-action on M □_C N is given by

(Σ_i m_i ⊗ n_i)a = Σ_i m_i ⊗ n_ia ∈ M □_C N

for every Σ_i m_i ⊗ n_i ∈ M □_C N. Now (M □_C N) ⊗_A P is the coequalizer of

(M □_C N) ⊗ A ⊗ P ⇉ (M □_C N) ⊗ P

which is by Lemma 2 isomorphic to the coequalizer of

M □_C (N ⊗ A ⊗ P) ⇉ M □_C (N ⊗ P)

and this coequalizer is isomorphic to M □_C (N ⊗_A P) because M is right C-coflat.

In some situations, the cotensor product can be computed explicitly.

Proposition 6. Let M and N be right C-comodules, and assume that M is finitely generated and projective as a k-module. Then we have a natural isomorphism

Hom^C(M, N) ≅ N □_C M*

Proof. We use notation as in Proposition 4. We know from (1.18) that M* is a left C-comodule. From (1.18), we deduce that

⟨m*[0], m⟩m*[−1] = ⟨m*, m[0]⟩m[1]   (1.19)
M is finitely generated projective, so we have an isomorphism α : Hom(M, N) → N ⊗ M* given by

α(f) = Σ_{i=1}^n f(m_i) ⊗ m*_i and α^{−1}(n ⊗ m*)(m) = ⟨m*, m⟩n

We will show that α restricts to the required isomorphism. Assume first that f is right C-colinear. Using (1.18) we find that
Σ_i f(m_i) ⊗ m*_i[−1] ⊗ m*_i[0] = Σ_{i,j} f(m_i) ⊗ m_j[1] ⊗ ⟨m*_i, m_j[0]⟩m*_j
= Σ_j f(m_j[0]) ⊗ m_j[1] ⊗ m*_j
= Σ_j f(m_j)[0] ⊗ f(m_j)[1] ⊗ m*_j

and it follows that α(f) ∈ N □_C M*. Now take Σ_k n_k ⊗ m*_k ∈ N □_C M*, and let f = α^{−1}(Σ_k n_k ⊗ m*_k). f is then right C-colinear, since for all m ∈ M, we have

f(m[0]) ⊗ m[1] = Σ_k ⟨m*_k, m[0]⟩n_k ⊗ m[1]
(1.19) = Σ_k ⟨m*_k[0], m⟩n_k ⊗ m*_k[−1]
= Σ_k ⟨m*_k, m⟩n_k[0] ⊗ n_k[1]
= ρ(f(m))

Coflatness versus injectivity
Let C be a coalgebra over a field. We will show that a C-comodule is an injective object in the category of C-comodules if and only if it is C-coflat. Our proof is based on the approach presented in [63]. First we need some Lemmas.

Lemma 4. Let C be a coalgebra over a field k, and M a right C-comodule. For every m ∈ M, there exists a finite dimensional subcomodule M′ of M containing m. Consequently there exist an index set J and a set {M_j | j ∈ J} consisting of finite dimensional right C-comodules, and an epimorphism φ : ⊕_{j∈J}M_j → M in M^C.

Proof. Let {c_i | i ∈ I} be a basis for C as a k-vector space, and write

ρ(m) = Σ_{i∈I} m_i ⊗ c_i
where only a finite number of the m_i are nonzero - for a change, we do not use the Sweedler notation. Let M′ be the k-subspace of M generated by the m_i. M′ is finite dimensional, and

m = Σ_{i∈I} ε(c_i)m_i ∈ M′

We can write

∆(c_i) = Σ_{j,l∈I} a_i^{jl} c_j ⊗ c_l

where only a finite number of the a_i^{jl} ∈ k are different from 0. We now compute that

Σ_{i∈I} ρ(m_i) ⊗ c_i = Σ_{i∈I} m_i ⊗ ∆(c_i) = Σ_{i,j,l∈I} a_i^{jl} m_i ⊗ c_j ⊗ c_l = Σ_{i,j,l∈I} a_l^{ji} m_l ⊗ c_j ⊗ c_i

Since the c_i form a basis of C, we have

ρ(m_i) = Σ_{j,l∈I} a_l^{ji} m_l ⊗ c_j ∈ M′ ⊗ C

for all i ∈ I, and this proves that M′ is a subcomodule of M.

Consider two right C-comodules M and Q. We say that Q is M-injective if for every subcomodule M′ ⊂ M, the canonical map Hom^C(M, Q) → Hom^C(M′, Q) is surjective. Clearly Q is an injective comodule (i.e. an injective object of M^C) if and only if Q is M-injective for every M ∈ M^C.

Lemma 5. If {M_i | i ∈ I} is a collection of C-comodules, and Q ∈ M^C is M_i-injective for all i ∈ I, then Q is also ⊕_{i∈I}M_i-injective.

Proof. Write M = ⊕_{i∈I}M_i. Let M′ be a subcomodule of M, and f : M′ → Q C-colinear. Consider

P = {(L, g) | M′ ⊂ L ⊂ M in M^C, g : L → Q in M^C, g|_{M′} = f}

P is nonempty since (M′, f) ∈ P, and P is ordered: (L, g) ≤ (L′, g′) if L ⊂ L′ and g′|_L = g. It is easy to show that this ordering is inductive, so P has a maximal element, by Zorn's Lemma. We call this element (L_0, g_0), and we claim that M_i ⊂ L_0, for all i ∈ I. Assume that M_i is not contained in L_0, and consider

h = g_0|_{M_i∩L_0} : M_i ∩ L_0 → Q

Since Q is M_i-injective, we have a C-colinear map h̄ : M_i → Q such that

h̄|_{M_i∩L_0} = h

Now define g : M_i + L_0 → Q as follows:

g(x + y) = h̄(x) + g_0(y)
for x ∈ M_i and y ∈ L_0. g is well-defined, since h̄ and g_0 coincide on M_i ∩ L_0. Now g|_{L_0} = g_0 and M_i + L_0 strictly contains L_0, so (L_0, g_0) < (M_i + L_0, g) in P, which is a contradiction. We conclude that M_i ⊂ L_0 for all i, so M = ⊕_{i∈I}M_i ⊂ L_0, and g_0 : M = L_0 → Q extends f.

Theorem 1. Let C be a coalgebra over a field k. For a right C-comodule Q, the following assertions are equivalent.
1. Q is injective as a C-comodule;
2. Q is M-injective, for every finite dimensional C-comodule M;
3. Q is right C-coflat.

Proof. 1. ⇒ 3. Assume that Q is injective. The coaction ρ_Q is monomorphic, so we have a C-colinear map ν_Q : Q ⊗ C → Q splitting ρ_Q. Let f : X → Y be a surjective morphism of left C-comodules, and take Σ_i q_i ⊗ y_i ∈ Q □_C Y. As f is surjective, we find x_i ∈ X such that f(x_i) = y_i, and our problem is that we don't know whether Σ_i q_i ⊗ x_i ∈ Q □_C X. We have

Σ_i q_i[0] ⊗ q_i[1] ⊗ f(x_i) = Σ_i q_i ⊗ x_i[−1] ⊗ f(x_i[0])

so

Σ_i q_i ⊗ y_i = Σ_i ν_Q(q_i[0] ⊗ q_i[1]) ⊗ f(x_i) = (I_Q ⊗ f)(Σ_i ν_Q(q_i ⊗ x_i[−1]) ⊗ x_i[0])

Using the fact that ν_Q is C-colinear, we find

(ρ_Q ⊗ I_X)(Σ_i ν_Q(q_i ⊗ x_i[−1]) ⊗ x_i[0]) = Σ_i ν_Q(q_i ⊗ x_i[−2]) ⊗ x_i[−1] ⊗ x_i[0] = (I_Q ⊗ ρ^l_X)(Σ_i ν_Q(q_i ⊗ x_i[−1]) ⊗ x_i[0])

so Σ_i ν_Q(q_i ⊗ x_i[−1]) ⊗ x_i[0] ∈ Q □_C X, and this shows that I_Q □_C f : Q □_C X → Q □_C Y is surjective.
3. ⇒ 2. Let M ∈ M^C be finite dimensional, and take a subcomodule M′ ⊂ M. Then M* and M′* are left C-comodules, and Proposition 6 implies that

Q □_C M* ≅ Hom^C(M, Q) and Q □_C M′* ≅ Hom^C(M′, Q)

Now M* → M′* is surjective, so Q □_C M* → Q □_C M′* is also surjective since Q is C-coflat, and we find that Hom^C(M, Q) → Hom^C(M′, Q) is surjective, as needed.
2. ⇒ 1. Take an arbitrary N ∈ M^C. From Lemma 4, we know that there
exists a collection {M_i | i ∈ I} of finite dimensional C-comodules and a C-colinear surjection φ : M = ⊕_{i∈I}M_i → N. Let P = Ker φ. Now take a subcomodule N′ ⊂ N, and let M′ = φ^{−1}(N′). Then P ⊂ M′, so we have the following commutative diagram with exact rows in M^C:

0 → P → M′ → N′ → 0
0 → P → M → N → 0

with vertical maps the identity on P and the inclusions M′ ⊂ M and N′ ⊂ N. Applying Hom^C(•, Q) to this diagram, we find

0 → Hom^C(N, Q) → Hom^C(M, Q) → Hom^C(P, Q)
0 → Hom^C(N′, Q) → Hom^C(M′, Q) → Hom^C(P, Q)

with vertical restriction maps and the identity on Hom^C(P, Q). Hom^C(M, Q) → Hom^C(M′, Q) is surjective, by Lemma 5. An easy diagram argument shows that Hom^C(N, Q) → Hom^C(N′, Q) is surjective, as needed.

Comodule algebras and comodule coalgebras
Let H be a bialgebra. A right H-comodule A that is also a k-algebra is called a right H-comodule algebra, if the unit and multiplication are right H-colinear, that is,

ρ^r(ab) = a[0]b[0] ⊗ a[1]b[1] and ρ^r(1_A) = 1_A ⊗ 1_H   (1.20)
for all a, b ∈ A. Left H-comodule algebras are introduced in a similar way, and if A is a right H-comodule algebra, then A^op is a left H^opcop-comodule algebra. A k-coalgebra C that is also a right H-comodule is called a right H-comodule coalgebra if the comultiplication and the counit are right H-colinear, or

c[0](1) ⊗ c[0](2) ⊗ c[1] = c(1)[0] ⊗ c(2)[0] ⊗ c(1)[1]c(2)[1]   (1.21)

and

ε_C(c[0])c[1] = ε_C(c)1_H   (1.22)

for all c ∈ C.

Example 6. Let G be a (semi)group, and take H = kG. Then a kG-comodule algebra is nothing else than a G-graded k-algebra (see [146] for an extensive study of graded rings). A kG-comodule coalgebra is a G-graded coalgebra (see [144]).
Proposition 7. Let C be a coalgebra which is finitely generated and projective as a k-module. There is a bijective correspondence between right H-comodule coalgebra structures on C and left H-comodule algebra structures on C*.

Proof. Let {c_i, c*_i | i = 1, · · · , n} be a finite dual basis of C, and assume that C is a right H-comodule coalgebra. We know from Proposition 4 that C* is a left H-comodule, with left H-coaction given by

ρ^l(c*) = Σ_{i=1}^n c_i[1] ⊗ ⟨c*, c_i[0]⟩c*_i

This makes C* into a left H-comodule algebra since

c*[−1]d*[−1] ⊗ c*[0] ∗ d*[0] = Σ_{i,j=1}^n c_i[1]c_j[1] ⊗ ⟨c*, c_i[0]⟩⟨d*, c_j[0]⟩c*_i ∗ c*_j
= Σ_{i=1}^n c_i(1)[1]c_i(2)[1] ⊗ ⟨c*, c_i(1)[0]⟩⟨d*, c_i(2)[0]⟩c*_i   (by (1.5))
= Σ_{i=1}^n c_i[1] ⊗ ⟨c* ∗ d*, c_i[0]⟩c*_i   (by (1.21))
= ρ^l(c* ∗ d*)

and

ρ^l(ε_C) = Σ_{i=1}^n c_i[1] ⊗ ⟨ε_C, c_i[0]⟩c*_i = Σ_{i=1}^n 1_H ⊗ ⟨ε_C, c_i⟩c*_i = 1_H ⊗ ε_C   (by (1.22))

The further details of the proof are left to the reader.
1.2 Adjoint functors

We give a brief discussion of properties of pairs of adjoint functors; of course these results are well-known, but we have organized them in such a way that they can be applied easily to Frobenius and separable functors in Chapter 3. We will occasionally use the Godement product of two natural transformations. Let us introduce the Godement product briefly, referring the reader to [21] for more detail. Let C, D and E be categories, and consider functors

F, G : C → D and H, K : D → E

and natural transformations

α : F → G and β : H → K

The Godement product β ∗ α : HF → KG is defined by

(β ∗ α)_C = β_{G(C)} ◦ H(α_C) = K(α_C) ◦ β_{F(C)} : HF(C) → KG(C)

If F = G and α = 1_F, then we find (β ∗ 1_F)_C = β_{F(C)}. If H = K and β = 1_H, then we find (1_H ∗ α)_C = H(α_C). Now consider, in addition, functors L : C → D and M : D → E and natural transformations γ : G → L and δ : K → M; then we have the following formula:

(δ ∗ γ) ◦ (β ∗ α) = (δ ◦ β) ∗ (γ ◦ α)

Pairs of adjoint functors
Let A, B, C and D be categories, and consider functors

F : A → C, G : B → C, H : A → D, and K : B → D

We have functors

Hom_C(F, G), Hom_D(H, K) : A^op × B → Sets

and we can consider natural transformations θ : Hom_C(F, G) → Hom_D(H, K). The naturality of θ can be expressed as follows: given a : A′ → A in A, b : B → B′ in B, and f : F(A) → G(B) in C, we have

θ_{A′,B′}(G(b) ◦ f ◦ F(a)) = K(b) ◦ θ_{A,B}(f) ◦ H(a)   (1.23)

Proposition 8. For two functors F : C → D and G : D → C, we have the following isomorphisms of classes of natural transformations:

Nat(1_C, GF) ≅ Nat(Hom_D(F, •), Hom_C(•, G))   (1.24)
Nat(FG, 1_D) ≅ Nat(Hom_C(•, G), Hom_D(F, •))   (1.25)
Proof. (Sketch) Consider a natural transformation η : 1_C → GF. The corresponding natural transformation θ : Hom_D(F, •) → Hom_C(•, G) is defined by

θ_{C,D}(f) = G(f) ◦ η_C   (1.26)

for all f : F(C) → D in D. Conversely, given θ, the corresponding η is given by η_C = θ_{C,F(C)}(I_{F(C)}) for all C ∈ C.

Lemma 6. Let F and G be as in Proposition 8, and consider natural transformations θ : Hom_D(F, •) → Hom_C(•, G) and ψ : Hom_C(•, G) → Hom_D(F, •). Let η : 1_C → GF and ε : FG → 1_D be the corresponding natural transformations from Proposition 8.
1. ψ ◦ θ is the identity natural transformation if and only if

(ε ∗ F) ◦ (F ∗ η) = 1_F   (1.27)

2. θ ◦ ψ is the identity natural transformation if and only if

(G ∗ ε) ◦ (η ∗ G) = 1_G   (1.28)

Proof. 1. Take f : F(C) → D in D. We easily compute that

ψ_{C,D}(θ_{C,D}(f)) = ε_D ◦ FG(f) ◦ F(η_C)

Now take D = F(C) and f = I_{F(C)}. Then

ψ_{C,F(C)}(θ_{C,F(C)}(I_{F(C)})) = ε_{F(C)} ◦ F(η_C)

and, under the assumption that ψ ◦ θ is the identity natural transformation, we find (1.27). Conversely, assume that (1.27) holds. ε is natural, so we have the following commutative square for any f : F(C) → D in D:

FGF(C) --FG(f)--> FG(D)
with vertical maps ε_{F(C)} and ε_D, and bottom row F(C) --f--> D

and we find that

f = f ◦ ε_{F(C)} ◦ F(η_C) = ε_D ◦ FG(f) ◦ F(η_C) = ψ_{C,D}(θ_{C,D}(f))

The proof of 2. is similar.
Recall that (F, G) is an adjoint pair of functors if Hom_D(F, •) and Hom_C(•, G) are naturally isomorphic, or, equivalently, if there exist natural transformations η : 1_C → GF and ε : FG → 1_D satisfying (1.27-1.28). In this case, F is called a left adjoint of G, and G is called a right adjoint of F. η is called the unit of the adjunction, while ε is called the counit. It is well-known that the left or right adjoint of a functor is unique up to natural isomorphism; we include a proof for completeness' sake.

Proposition 9 (Kan [101]). If G and G′ are both right adjoints of a functor F : C → D, then G and G′ are naturally isomorphic.

Proof. We have two adjunctions (F, G) and (F, G′). Let (η, ε) and (η′, ε′) be the unit and counit of both adjunctions, and consider the natural transformations

γ = (G′ ∗ ε) ◦ (η′ ∗ G) : G → G′
γ′ = (G ∗ ε′) ◦ (η ∗ G′) : G′ → G

η is natural, so for any D ∈ D, we have a commutative square

G′FG(D) --G′(ε_D)--> G′(D)
with vertical maps η_{G′FG(D)} and η_{G′(D)}, and bottom row GFG′FG(D) --GFG′(ε_D)--> GFG′(D)

or

(η ∗ G′) ◦ (G′ ∗ ε) = (GFG′ ∗ ε) ◦ (η ∗ G′FG)

Now η is natural at η′_{G(D)}, and we have a commutative square

G(D) --η′_{G(D)}--> G′FG(D)
with vertical maps η_{G(D)} and η_{G′FG(D)}, and bottom row GFG(D) --GF(η′_{G(D)})--> GFG′FG(D)

or

(η ∗ G′FG) ◦ (η′ ∗ G) = (GF ∗ η′ ∗ G) ◦ (η ∗ G)

The naturality of ε gives a commutative square

FG′FG(D) --FG′(ε_D)--> FG′(D)
with vertical maps ε′_{FG(D)} and ε′_D, and bottom row FG(D) --ε_D--> D

or

ε′ ◦ (FG′ ∗ ε) = ε ◦ (ε′ ∗ FG)

and it follows that

(G ∗ ε′) ◦ (GFG′ ∗ ε) = (G ∗ ε) ◦ (G ∗ ε′ ∗ FG)

Combining all these formulas, we find

γ′ ◦ γ = (G ∗ ε′) ◦ (η ∗ G′) ◦ (G′ ∗ ε) ◦ (η′ ∗ G)
= (G ∗ ε′) ◦ (GFG′ ∗ ε) ◦ (η ∗ G′FG) ◦ (η′ ∗ G)
= (G ∗ ε) ◦ (G ∗ ε′ ∗ FG) ◦ (GF ∗ η′ ∗ G) ◦ (η ∗ G)
= (G ∗ ε) ◦ (G ∗ ((ε′ ∗ F) ◦ (F ∗ η′)) ∗ G) ◦ (η ∗ G)
= (G ∗ ε) ◦ (G ∗ 1_F ∗ G) ◦ (η ∗ G)
= (G ∗ ε) ◦ (η ∗ G) = 1_G

In a similar way, we obtain that γ ◦ γ′ = 1_{G′}, and it follows that G and G′ are naturally isomorphic.

Recall the following properties of adjoint pairs:

Theorem 2. Let (F, G) be an adjoint pair of functors. F preserves colimits, and, in particular, coproducts, initial objects and cokernels. G preserves limits, and, in particular, products, final objects and kernels. If C and D are abelian categories, then F is right exact, and G is left exact. If F is exact, then G preserves injective objects. If G is exact, then F preserves projective objects.

Here is another well-known property of adjoint functors that will be useful in the sequel.

Proposition 10. Let (F, G) be an adjoint pair of functors; then we have isomorphisms

Nat(F, F) ≅ Nat(G, G) ≅ Nat(1_C, GF) ≅ Nat(FG, 1_D)

Proof. We will show that Nat(G, G) ≅ Nat(1_C, GF); the proof of the other assertions is left to the reader. For a natural transformation θ : 1_C → GF, we define α = X(θ) : G → G by

α_D = G(ε_D) ◦ θ_{G(D)}   (1.29)

Conversely, for α : G → G, θ = X^{−1}(α) : 1_C → GF is defined by

θ_C = α_{F(C)} ◦ η_C   (1.30)
We are done if we can show that X and X^{−1} are each other's inverses. First take α : G → G, and θ = X^{−1}(α). The diagram

G(D) --η_{G(D)}--> GFG(D) --G(ε_D)--> G(D)
with diagonal θ_{G(D)} : G(D) → GFG(D), vertical maps α_{FG(D)} and α_D, and bottom row GFG(D) --G(ε_D)--> G(D)

commutes: the triangle is commutative because of (1.30), and the square commutes because α is natural. From (1.28), it follows that the composition of the two maps in the top row is I_{G(D)}, and then we see from the diagram that α = X(θ). Conversely, take θ : 1_C → GF, and let α = X(θ). Then θ = X^{−1}(α) because the following diagram commutes:

C --η_C--> GF(C)
with vertical maps θ_C and θ_{GF(C)}, diagonal α_{F(C)} : GF(C) → GF(C), and bottom row GF(C) --GF(η_C)--> GFGF(C) --G(ε_{F(C)})--> GF(C)
A result of the same type is the following:

Proposition 11. Let (F, G) be an adjoint pair of functors. Then we have isomorphisms

Nat(GF, 1_C) ≅ Nat(Hom_D(F, F), Hom_C(•, •))   (1.31)
Nat(1_D, FG) ≅ Nat(Hom_C(G, G), Hom_D(•, •))   (1.32)

Proof. We outline the proof of the first statement. Given a natural transformation ν : GF → 1_C, we define θ = α(ν) : Hom_D(F, F) → Hom_C(•, •) as follows: take g : F(C) → F(C′) in D, and put

θ_{C,C′}(g) = ν_{C′} ◦ G(g) ◦ η_C

Straightforward arguments show that θ is natural. Conversely, given θ : Hom_D(F, F) → Hom_C(•, •), we define α^{−1}(θ) = ν : GF → 1_C by

ν_C = θ_{GF(C),C}(ε_{F(C)}) : GF(C) → C

We leave it as an exercise to show that ν is natural, as needed, and that α and α^{−1} are inverses. The proof of the second statement is similar. Let us just mention that, given ζ : 1_D → FG, we define β(ζ) = ψ : Hom_C(G, G) → Hom_D(•, •) as follows: given f : G(D) → G(D′) in C, we put

ψ_{D,D′}(f) = ε_{D′} ◦ F(f) ◦ ζ_D
1.3 Separable algebras and Frobenius algebras

In this Section, we give the classical definitions and elementary properties of separable and Frobenius algebras. We will refer to them in Chapter 3, where we will introduce separable and Frobenius functors, and show that they are generalizations of the classical concepts. The Section on separable algebras is based on [109], and the one on Frobenius algebras on [113].

Separable algebras
Let k be a commutative ring, A a k-algebra and M an A-bimodule. Recall that M can be viewed as a left A^e-module, where A^e = A ⊗ A^op is the enveloping algebra of A. A derivation of A in M is a k-linear map D : A → M such that

D(ab) = D(a)b + aD(b)
(1.33)
for all a, b ∈ A. Der_k(A, M) will be the k-module consisting of all derivations of A into M. For any m ∈ M, we have a derivation D_m, given by

D_m : A → M, D_m(a) = am − ma

called the inner derivation associated to m. It is clear that D_m = 0 if and only if m ∈ M^A = {m ∈ M | am = ma, ∀a ∈ A}, so we have an exact sequence

0 → M^A → M → Der_k(A, M)   (1.34)

We also note that

M^A ≅ Hom_{A^e}(A, M), M ≅ Hom_{A^e}(A^e, M)   (1.35)

The multiplication m_A on A induces an epimorphism A ⊗ A^op → A of left A^e-modules, still denoted by m_A, and we have another exact sequence

0 → I(A) = Ker(m_A) → A ⊗ A^op → A → 0
(1.36)
We have a derivation

δ : A → I(A), δ(a) = a ⊗ 1 − 1 ⊗ a

for all a ∈ A. It is clear that δ(a) ∈ I(A) and

Aδ(A) = I(A) = δ(A)A

Indeed, take x = Σ_i a_i ⊗ b_i ∈ I(A); then

x = Σ_i a_i(1 ⊗ b_i − b_i ⊗ 1) = −Σ_i a_iδ(b_i) ∈ Aδ(A)
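Before continuing, the Leibniz rule for inner derivations can be sanity-checked on a small example. The matrices below are hypothetical test data; the check uses A = M = M_2(Z).

```python
# Check (on sample 2x2 integer matrices) that D_m(a) = am - ma satisfies the
# Leibniz rule D_m(ab) = D_m(a)b + a D_m(b), i.e. inner derivations really are
# derivations. A finite check, not a proof.
def mul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(2)) for j in range(2)] for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def D(m, a):
    """The inner derivation D_m(a) = am - ma."""
    am = mul(a, m)
    ma = mul(m, a)
    return [[am[i][j] - ma[i][j] for j in range(2)] for i in range(2)]

m = [[0, 1], [2, 3]]
a = [[1, 2], [3, 4]]
b = [[5, 0], [1, 2]]
lhs = D(m, mul(a, b))
rhs = add(mul(D(m, a), b), mul(a, D(m, b)))
assert lhs == rhs
print("Leibniz rule holds on the sample")
```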
Lemma 7. Let M be an A-bimodule over a k-algebra A. Then we have an isomorphism of k-modules

Hom_{A^e}(I(A), M) ≅ Der_k(A, M)   (1.37)
Proof. We define φ : Hom_{A^e}(I(A), M) → Der_k(A, M) by φ(f) = f ◦ δ; φ^{−1} is given by

φ^{−1}(D)(Σ_i a_i ⊗ b_i) = −Σ_i a_iD(b_i)

We show that φ^{−1}(D) is left A^e-linear, and leave the other details to the reader.

φ^{−1}(D)((a ⊗ b)(Σ_i a_i ⊗ b_i)) = φ^{−1}(D)(Σ_i aa_i ⊗ b_ib)
= −Σ_i aa_iD(b_ib) = −Σ_i aa_iD(b_i)b − Σ_i aa_ib_iD(b)
= (a ⊗ b)φ^{−1}(D)(Σ_i a_i ⊗ b_i)

where the last term in the second line vanishes because Σ_i a_ib_i = 0 for Σ_i a_i ⊗ b_i ∈ I(A).
Applying the functor Hom_{A^e}(•, M) to the exact sequence (1.36) and taking (1.35) and (1.37) into account, we find a long exact sequence

0 → M^A → M → Der_k(A, M) → Ext^1_{A^e}(A, M) → 0   (1.38)

extending (1.34). Indeed, Ext^1_{A^e}(A^e, M) = 0, since A^e is projective as a left A^e-module. H^1(A, M) = Ext^1_{A^e}(A, M) is another notation, and H^1(A, M) is called the first Hochschild cohomology group of A with coefficients in M. For more information on Hochschild cohomology, we refer to [55, Ch. IX]. Thus (1.38) tells us that

H^1(A, M) ≅ Der_k(A, M)/InnDer_k(A, M)

A is called a separable k-algebra if it satisfies the equivalent conditions of the following theorem:
Theorem 3. For a k-algebra A the following statements are equivalent:
1. A is projective as a left A^e-module;
2. the exact sequence (1.36) splits as a sequence of left A^e-modules;
3. there exists e = e^1 ⊗ e^2 ∈ A ⊗ A such that

ae = ea and e^1e^2 = 1   (1.39)

for all a ∈ A;
4. H^1(A, M) = 0, for all A-bimodules M;
5. the derivation δ : A → I(A), δ(a) = a ⊗ 1 − 1 ⊗ a is inner;
6. every derivation D : A → M is inner, for any A-bimodule M.

An element e ∈ A^e satisfying ae = ea for all a ∈ A is called a Casimir element. If, in addition, e^1e^2 = 1, then e is an idempotent, and it is called a separability idempotent.

Proof. 1. ⇔ 2. is obvious.
2. ⇒ 3. If ψ : A → A^e is a left A^e-module map and a section of m_A, then e = ψ(1) satisfies (1.39).
3. ⇒ 2. Define ψ : A → A^e, ψ(a) = ae = ea. ψ is a left A^e-module map and m_A(ψ(a)) = ae^1e^2 = a.
1. ⇔ 4. is obvious.
4. ⇔ 6. follows from the exact sequence (1.38).
6. ⇒ 5. is trivial.
5. ⇒ 6. Let D : A → M be a derivation. From the above Lemma we know that there is an f ∈ Hom_{A^e}(I(A), M) such that D = f ◦ δ. δ is inner, so we can write δ = D_x, with x ∈ I(A). Now

D(a) = f(δ(a)) = f(ax − xa) = af(x) − f(x)a = D_{f(x)}(a)

i.e. D is inner.

Let us now prove some immediate properties of separable algebras.

Proposition 12. Any projective separable algebra A over a commutative ring k is finitely generated.

Proof. We take a dual basis {s_i, s*_i | i ∈ I} for A. This means that, for all s ∈ A, the set I(s) = {i ∈ I | ⟨s*_i, s⟩ ≠ 0} is finite, and

s = Σ_{i∈I} ⟨s*_i, s⟩s_i

For all i ∈ I, we define φ_i : A ⊗ A^op → A by

φ_i(s ⊗ t) = ⟨s*_i, t⟩s
such that

φ_i(s′s ⊗ t) = ⟨s*_i, t⟩s′s = s′φ_i(s ⊗ t)
and φ_i is left A-linear. We now claim that {z_i = 1 ⊗ s_i, φ_i | i ∈ I} is a dual basis of A ⊗ A^op as a left A-module. Take z = s ⊗ t ∈ A ⊗ A^op. If φ_i(z) = ⟨s*_i, t⟩s ≠ 0, then ⟨s*_i, t⟩ ≠ 0, so i ∈ I(t), and we conclude that

I(z) = {i ∈ I | φ_i(z) ≠ 0} ⊂ I(t)

is finite. Moreover

s ⊗ t = Σ_{i∈I} s ⊗ ⟨s*_i, t⟩s_i = Σ_{i∈I} ⟨s*_i, t⟩s ⊗ s_i = Σ_{i∈I} φ_i(s ⊗ t) ⊗ s_i = Σ_{i∈I} φ_i(s ⊗ t)(1 ⊗ s_i)

A is separable, so we have a separability idempotent e = e^1 ⊗ e^2 ∈ A ⊗ A^op. Our next claim is that I(et) ⊂ I(e) for all t ∈ A. Indeed, we compute

φ_i(et) = φ_i(e^1 ⊗ e^2t) = φ_i(te^1 ⊗ e^2) = tφ_i(e)

so i ∈ I(et), or φ_i(et) ≠ 0, implies φ_i(e) ≠ 0 and i ∈ I(e). For all t ∈ A, we finally compute

t = 1t = m(e)t = m(et) = m(Σ_{i∈I(e)} φ_i(et)z_i)
= m(Σ_{i∈I(e)} ⟨s*_i, e^2t⟩e^1z_i) = Σ_{i∈I(e)} ⟨s*_i, e^2t⟩e^1m(z_i)
= Σ_{i∈I(e)} ⟨s*_i, e^2t⟩e^1s_i

Write e = Σ_{j=1}^r e^1_j ⊗ e^2_j. We have shown that

{e^1_js_i, ⟨s*_i, e^2_j •⟩ | i ∈ I(e), j = 1, · · · , r}
is a finite dual basis for A.
Proposition 13. A separable algebra A over a field k is semisimple.
Proof. Let e = e^1 ⊗ e^2 ∈ A ⊗ A be a separability idempotent and N an A-submodule of a right A-module M. As k is a field, the inclusion i : N → M splits in the category of k-vector spaces. Let f : M → N be a k-linear map such that f(n) = n, for all n ∈ N. Then
1 Generalities
f̃ : M → N,  f̃(m) := f(me^1)e^2
is a right A-module map that splits the inclusion i. Thus N is an A-direct factor of M, and it follows that M is completely reducible. This shows that A is semisimple.
Examples 1. 1. Let k be a field of characteristic p, and a ∈ k \ k^p. Then l = k[X]/(X^p − a) is a purely inseparable field extension of k, and l is not a separable k-algebra in the above sense. Indeed, d/dX : l → l is a derivation that is not inner. More generally, one can prove that a finite field extension l/k is separable in the classical sense if and only if l is separable as a k-algebra, see [66, Proposition III.3.4].
2. Let k be a field. It can be shown that a separable k-algebra is of the form
A = M_{n_1}(D_1) × ··· × M_{n_r}(D_r)  (1.40)
where D_i is a division algebra with center a finite separable field extension l_i of k. See [66, Theorem III.3.1] for details.
3. Any matrix ring M_n(k) is separable as a k-algebra: for any i = 1, …, n, e_i = Σ_{j=1}^n e_{ji} ⊗ e_{ij} is a separability idempotent. More generally, any Azumaya algebra A is separable as a k-algebra.
Frobenius algebras
In this Section we will recall the classical definition of a Frobenius algebra, thus showing how it came up in representation theory. We will work over a field k. For a k-algebra A, the k-dual A^* = Hom_k(A, k) is an A-bimodule via the actions
⟨r^* · r, r′⟩ = ⟨r^*, rr′⟩,  ⟨r · r^*, r′⟩ = ⟨r^*, r′r⟩  (1.41)
for all r, r′ ∈ A and r^* ∈ A^*.
Definition 3. A finite dimensional k-algebra A is called a Frobenius algebra if A ≅ A^* as right A-modules.
Remarks 1. 1. A finite dimensional k-algebra A is Frobenius if and only if there exists a k-linear map λ : A → k such that for any ψ ∈ A^* there exists a unique element r = r_ψ ∈ A such that ψ(x) = λ(rx) for all x ∈ A. In particular, the matrix algebra M_n(k) is Frobenius: take λ = Tr, the trace map.
2. The concept of Frobenius algebra is left-right symmetric: that is, A ≅ A^* in M_A if and only if A ≅ A^* in _A M. It suffices to observe that there exists a one-to-one correspondence between the following data:
– the set of all isomorphisms of right A-modules f : A → A^*;
– the set of all bilinear, nondegenerate and associative maps B : A × A → k;
– the set of all isomorphisms of left A-modules g : A → A^*,
given by the formulas
f(x)(y) = B(x, y) = g(y)(x)  (1.42)
for all x, y ∈ A.
Let us now explain how the original problem of Frobenius arises naturally in representation theory, as explained in the book of Lam [113]. We fix a basis {e_1, …, e_n} of a finite dimensional algebra A. Then for any r ∈ A we can find scalars a_{ij}^{(r)} and b_{ij}^{(r)} such that
e_i r = Σ_{j=1}^n a_{ij}^{(r)} e_j,  r e_i = Σ_{j=1}^n b_{ji}^{(r)} e_j  (1.43)
for all i = 1, …, n. Hence we have constructed k-linear maps
α, β : A → M_n(k),  α(r) = (a_{ij}^{(r)}),  β(r) = (b_{ij}^{(r)})  (1.44)
for all r ∈ A. It is straightforward to prove that α and β are algebra maps, i.e. they are representations of the k-algebra A.
The problem of Frobenius: when are the above representations α and β equivalent? We recall that two representations α, β : A → M_n(k) are equivalent if there exists an invertible matrix U ∈ M_n(k) such that β(r) = U α(r) U^{−1}, for all r ∈ A. Before giving the answer to the problem we present one more construction: let (c_{ij}^l)_{i,j,l=1,…,n} be the structure constants of the algebra A, that is,
e_i e_j = Σ_{k=1}^n c_{ij}^k e_k
for all i, j = 1, …, n. For a = (a_1, …, a_n) ∈ k^n, let P_a ∈ M_n(k) be the matrix given by
(P_a)_{i,j} = Σ_{k=1}^n a_k c_{ij}^k
The matrix P_a is called the paratrophic matrix. In the next Theorem, the equivalence 2. ⇔ 3. was the original theorem of Frobenius, while the equivalence 1. ⇔ 2. translates the problem from representation theory into the language of modules.
Theorem 4. For an n-dimensional algebra A, the following statements are equivalent:
1. A is Frobenius;
2. the representations α and β : A → M_n(k) constructed in (1.44) are equivalent;
3. there exists a ∈ k^n such that the paratrophic matrix P_a is invertible;
4. there exists a bilinear, nondegenerate and associative map B : A × A → k, i.e. B(xy, z) = B(x, yz), for all x, y, z ∈ A;
5. there exists a hyperplane of A that does not contain a nonzero right ideal of A;
6. there exists a pair (ε, e), called a Frobenius pair, where ε ∈ A^* and e = e^1 ⊗ e^2 ∈ A ⊗ A (summation understood) such that
ae = ea, and ε(e^1)e^2 = e^1 ε(e^2) = 1.  (1.45)
Before proving the Theorem, let us recall some well-known facts. First of all, let V be an n-dimensional vector space with basis B = {v_1, …, v_n}. Let
can_V : End_k(V)^{op} → M_n(k),  can_V(f) = M_B(f)
be the canonical isomorphism of algebras; here, for f ∈ End_k(V), M_B(f) = (a_{ij}) is the matrix associated to f with respect to the basis B, written as follows:
f(v_i) = Σ_{j=1}^n a_{ij} v_j
for all i = 1, …, n. Secondly, a k-vector space M has a structure of right A-module if and only if there exists an algebra map
ϕ_M : A → End_k(M)^{op}
ϕ_M is called the representation associated to M. The correspondence between the action "·" and the representation is given by ϕ_M(r)(m) = m · r. In particular, if dim_k(M) = n, then M has a structure of right A-module if and only if there exists an algebra map ϕ̃_M (= can_M ∘ ϕ_M) : A → M_n(k). Finally, let M and N be two right A-modules and ϕ_M : A → End_k(M), ϕ_N : A → End_k(N) the associated representations. Then M ≅ N (as right A-modules) if and only if there exists an isomorphism of k-vector spaces θ : M → N such that ϕ_M(r) = θ^{−1} ∘ ϕ_N(r) ∘ θ for all r ∈ A. Indeed, a k-linear map θ : M → N is a right A-module map if and only if θ(m · r) = θ(m) · r for all m ∈ M, r ∈ A. This is equivalent to
θ(ϕ_M(r)(m)) = ϕ_N(r)(θ(m)), or θ ∘ ϕ_M(r) = ϕ_N(r) ∘ θ, for all r ∈ A.
Proof (of Theorem 4). 1. ⇔ 2. This follows from the remarks made above if we can prove that α = ϕ̃_A and β = ϕ̃_{A^*}. Let us prove first that α = ϕ̃_A, where A ∈ M_A via right multiplication. The representation associated to this structure is
ϕ_A : A → End_k(A),  ϕ_A(r)(r′) = r′r
hence
ϕ_A(r)(e_i) = e_i r = Σ_{j=1}^n a_{ij}^{(r)} e_j
i.e. α = ϕ̃_A. Let us show next that β = ϕ̃_{A^*}. Let {e_i^*} be the dual basis of {e_i} and ϕ_{A^*} : A → End_k(A^*), ϕ_{A^*}(r)(r^*) = r^* · r. Now β(r) = ϕ̃_{A^*}(r) if and only if
e_i^* · r = Σ_{j=1}^n b_{ij}^{(r)} e_j^*
or
⟨e_i^* · r, e_k⟩ = Σ_{j=1}^n b_{ij}^{(r)} ⟨e_j^*, e_k⟩
for all k. Both sides are equal to b_{ik}^{(r)}.
1. ⇔ 3. Any right A-module map f : A → A^* has the form f(r) = λ · r, for some λ ∈ A^*. Thus, there exist a_1, …, a_n ∈ k such that f(r) = (a_1 e_1^* + ··· + a_n e_n^*) · r for any r ∈ A. Using the dual basis formula we have
⟨e_k^* · e_i, e_j⟩ = ⟨e_k^*, e_i e_j⟩ = c_{ij}^k
Hence e_k^* · e_i = Σ_{j=1}^n c_{ij}^k e_j^*, and it follows that
f(e_i) = Σ_{k=1}^n a_k e_k^* · e_i = Σ_{j=1}^n (Σ_{k=1}^n c_{ij}^k a_k) e_j^*
for all i = 1, …, n. This means that the matrix associated to f in the pair of bases {e_i}, {e_i^*} is just the paratrophic matrix P_a, where a = (a_1, …, a_n) ∈ k^n.
1. ⇔ 4. follows from (1.42).
4. ⇒ 5. H = {a ∈ A | B(1, a) = 0} is a k-subspace of A of codimension 1. Assume that J is a right ideal of A with J ⊂ H, and take x ∈ J. Using the fact that xA ⊂ J ⊂ H, and that B is associative, we obtain
0 = B(1, xA) = B(x, A)
As B is nondegenerate we obtain that x = 0.
5. ⇒ 1. Let H be such a hyperplane. As k is a field, we can pick a k-linear map λ : A → k such that Ker(λ) = H. Then
f = f_λ : A → A^*,  ⟨f(x), y⟩ = λ(xy)
for all x, y ∈ A, is an injective right A-linear map. Indeed, for x, y, z ∈ A we have
⟨f(xy), z⟩ = λ(xyz) = ⟨f(x), yz⟩ = ⟨f(x) · y, z⟩
On the other hand, from f(x) = 0 it follows that λ(xA) = 0, hence xA ⊂ Ker(λ) = H. We obtain that xA = 0, i.e. x = 0. Thus, f is an injective right A-module map, and hence an isomorphism, as A and A^* have the same dimension.
1. ⇒ 6. Let (e_i, e_i^*) be a dual basis of A and f : A → A^* an isomorphism of right A-modules. Then (ε = f(1), e = Σ_i e_i ⊗ f^{−1}(e_i^*)) is a Frobenius pair. This is an elementary computation left to the reader at this point; in Theorem 28, we give the same proof in a more general situation.
6. ⇒ 1. If (ε, e = e^1 ⊗ e^2) is a Frobenius pair, then
f : A → A^*,  ⟨f(x), y⟩ = ε(xy)
is an isomorphism of right A-modules, with inverse f^{−1} : A^* → A, f^{−1}(a^*) = ⟨a^*, e^1⟩ e^2 for all a^* ∈ A^*.
Examples 2. 1. Theorem 4 gives an elementary way to check whether an algebra A is Frobenius. Let A = k[X, Y]/(X^2, Y^2). Then A has a basis e_1 = 1, e_2 = x, e_3 = y and e_4 = xy. Through a trivial computation we find that the paratrophic matrix is
P_a =
( a_1  a_2  a_3  a_4 )
( a_2  0    a_4  0   )
( a_3  a_4  0    0   )
( a_4  0    0    0   )
Thus, if a_4 is non-zero, then P_a is invertible, so A is a Frobenius algebra.
2. A similar computation shows that the k-algebra A = k[X, Y]/(X^2, XY^2, Y^3) is not Frobenius.
3. Using criterion 5) given by Theorem 4, we can see that any finite dimensional division k-algebra D is a Frobenius algebra. It can be proved that M_n(D) is also a Frobenius k-algebra. Using (1.40) and the fact that a product of Frobenius algebras is a Frobenius algebra, we obtain that any separable algebra over a field is Frobenius.
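The paratrophic criterion 3. of Theorem 4 lends itself to machine verification. The sketch below (plain Python; all helper names are ours, not from the text) computes the structure constants of A = k[X, Y]/(X^2, Y^2) in the basis {1, x, y, xy}, assembles P_a, and confirms that det P_a = a_4^4, so P_a is invertible exactly when a_4 ≠ 0:

```python
from itertools import product

# Basis of A = k[X, Y]/(X^2, Y^2): e1=1, e2=x, e3=y, e4=xy,
# encoded by exponent pairs (i, j) meaning x^i y^j with 0 <= i, j <= 1.
basis = [(0, 0), (1, 0), (0, 1), (1, 1)]

def mult(u, v):
    """Multiply basis monomials; x^2 = y^2 = 0 kills large exponents."""
    w = (u[0] + v[0], u[1] + v[1])
    return w if w in basis else None  # None = the product is zero

# Structure constants c^k_{ij}: e_i e_j = sum_k c[k][i][j] e_k.
c = [[[0] * 4 for _ in range(4)] for _ in range(4)]
for i, j in product(range(4), repeat=2):
    w = mult(basis[i], basis[j])
    if w is not None:
        c[basis.index(w)][i][j] = 1

def paratrophic(a):
    """(P_a)_{ij} = sum_k a_k c^k_{ij}."""
    return [[sum(a[k] * c[k][i][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def det(m):
    """Integer determinant by Laplace expansion (fine for 4x4)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Rows come out as [a1 a2 a3 a4; a2 0 a4 0; a3 a4 0 0; a4 0 0 0].
print(paratrophic([5, 6, 7, 1]))
print(det(paratrophic([5, 6, 7, 1])))  # a_4**4 = 1, nonzero: A is Frobenius
print(det(paratrophic([5, 6, 7, 0])))  # 0: this choice of a fails
```

The last row of P_a is (a_4, 0, 0, 0), so a_4 = 0 forces det P_a = 0; in fact one finds det P_a = a_4^4 for every a.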
2 Doi-Koppinen Hopf modules and entwined modules
In this Chapter, we introduce entwining structures and entwined modules. We show how various kinds of modules that appear in ring theory are special cases of entwined modules. We also show that there is a close analogy, based on duality arguments, with the factorization problem for algebras, and the smash product of algebras. Entwined modules themselves can be viewed as special cases of comodules over corings. Pairs of adjoint functors between categories of entwined modules are investigated, and it is discussed how one can make the category of entwined modules into a monoidal category.
2.1 Doi-Koppinen structures and entwining structures
Entwining structures
Throughout this Section, k is a commutative ring. A (right-right) entwining structure on k consists of a triple (A, C, ψ), where A is a k-algebra, C a k-coalgebra, and ψ : C ⊗ A → A ⊗ C a k-linear map satisfying the relations
(ab)_ψ ⊗ c^ψ = a_ψ b_Ψ ⊗ c^{ψΨ}  (2.1)
(1_A)_ψ ⊗ c^ψ = 1_A ⊗ c  (2.2)
a_ψ ⊗ Δ_C(c^ψ) = a_{ψΨ} ⊗ c_{(1)}^Ψ ⊗ c_{(2)}^ψ  (2.3)
ε_C(c^ψ) a_ψ = ε_C(c) a  (2.4)
Here we used the sigma notation
ψ(c ⊗ a) = a_ψ ⊗ c^ψ = a_Ψ ⊗ c^Ψ
A morphism (α, γ) : (A, C, ψ) → (A′, C′, ψ′) consists of an algebra map α : A → A′ and a coalgebra map γ : C → C′ such that
(α ⊗ γ) ∘ ψ = ψ′ ∘ (γ ⊗ α)  (2.5)
or, equivalently,
α(a_ψ) ⊗ γ(c^ψ) = α(a)_{ψ′} ⊗ γ(c)^{ψ′}  (2.6)
E••(k) will denote the category of entwining structures. The category E••(k) is monoidal. E*••(k) is the full subcategory of E••(k) consisting of entwining
S. Caenepeel, G. Militaru, and S. Zhu: LNM 1787, pp. 39–87, 2002. © Springer-Verlag Berlin Heidelberg 2002
structures (A, C, ψ) with ψ invertible. Left-right, right-left, and left-left versions can also be introduced. For example, the left-right version •E•(k) is the category with objects (A, C, ψ), where now ψ : A ⊗ C → A ⊗ C, ψ(a ⊗ c) = a_ψ ⊗ c^ψ, is a map satisfying (2.2), (2.3), (2.4) and
(ab)_ψ ⊗ c^ψ = a_ψ b_Ψ ⊗ c^{Ψψ}  (2.7)
In the right-left version •E•(k), we need maps ψ : C ⊗ A → C ⊗ A, satisfying (2.1), (2.2), (2.4) and
a_ψ ⊗ Δ_C(c^ψ) = a_{ψΨ} ⊗ c_{(1)}^ψ ⊗ c_{(2)}^Ψ  (2.8)
In ••E(k), we will need maps ψ : A ⊗ C → C ⊗ A satisfying (2.2), (2.4), (2.7) and (2.8).
Proposition 14. The categories E••(k), •E•(k), •E•(k), and ••E(k) are isomorphic.
Proof. It is easy to see that the isomorphism between E••(k) and •E•(k) is given by sending (A, C, ψ) to (A^{op}, C, ψ ∘ τ). The other isomorphisms are left to the reader.
Obviously the isomorphisms in Proposition 14 restrict to the subcategories consisting of structures with invertible ψ. For these subcategories, there exist alternative isomorphisms.
Proposition 15. The categories E*••(k) and ••E*(k) are isomorphic via the functor S given by
S(A, C, ψ) = (A, C, ψ^{−1})  (2.9)
Proof. Assume that ψ : C ⊗ A → A ⊗ C satisfies (2.1)-(2.4). We have to show that ϕ = ψ^{−1} satisfies (2.2), (2.4), (2.7) and (2.8). (2.2) and (2.4) are obvious. (2.1) is equivalent to commutativity of the diagram
C⊗A⊗A --ψ⊗I_A--> A⊗C⊗A --I_A⊗ψ--> A⊗A⊗C
    |                                  |
 I_C⊗m_A                            m_A⊗I_C
    ↓                                  ↓
  C⊗A --------------ψ------------->  A⊗C
This is equivalent to commutativity of the following diagram
A⊗A⊗C --I_A⊗ϕ--> A⊗C⊗A --ϕ⊗I_A--> C⊗A⊗A
    |                                  |
 m_A⊗I_C                            I_C⊗m_A
    ↓                                  ↓
  A⊗C --------------ϕ------------->  C⊗A
which is equivalent to
c^{ϕφ} ⊗ a_φ b_ϕ = c^ϕ ⊗ (ab)_ϕ
and this tells us that ϕ satisfies (2.7). In a similar way (2.3) implies that ϕ satisfies (2.8).
Doi-Koppinen structures
Let H be a bialgebra, A a right H-comodule algebra, and C a right H-module coalgebra. We call (H, A, C) a right-right Doi-Koppinen structure or DK structure over k. A morphism between two DK structures consists of a triple ϕ = (ℓ, α, γ) : (H, A, C) → (H′, A′, C′), where ℓ : H → H′, α : A → A′, and γ : C → C′ are respectively a bialgebra map, an algebra map, and a coalgebra map such that
ρ_{A′}(α(a)) = α(a_{[0]}) ⊗ ℓ(a_{[1]})  (2.10)
γ(ch) = γ(c)ℓ(h)  (2.11)
for all a ∈ A, c ∈ C, and h ∈ H. The category of right-right Doi-Koppinen structures over k is denoted by DK••(k). DK••(k) is a monoidal category, if we define
(H, A, C) ⊗ (H′, A′, C′) = (H ⊗ H′, A ⊗ A′, C ⊗ C′)
with the obvious structure maps. The unit object is (k, k, k). We will also consider the full subcategories H••(k), HA••(k), and HC••(k) of DK••(k), consisting of objects respectively of the form
(H, H, H), (H, A, H), (H, H, C)
The subcategory of DK••(k) consisting of objects (H, A, C) and morphisms (ℓ, α, γ) where H has a twisted antipode S̄, and where ℓ preserves the twisted antipode, is denoted by DKs••(k). In a similar way, we introduce the categories •DK•(k), •DK•(k), and ••DK(k), and their various subcategories. For example, •DK•(k) has objects (H, A, C), where A is a right H-comodule algebra, and C is a left H-module coalgebra. In the definition of the starred subcategories •DK*•(k) and •DK*•(k), we require that the bialgebra H in each object is a Hopf algebra (i.e., it has an antipode). In the left-left case, we want a twisted antipode.
Proposition 16. The categories DK••(k), •DK•(k), •DK•(k), and ••DK(k) are isomorphic. Similar statements hold for the respective subcategories introduced above.
Proof. Let (H, A, C) ∈ DK••(k). Then the opposite algebra A^{op} with the original right H-coaction is a right H^{op}-comodule algebra. The coalgebra C with left H^{op}-action defined by
h^{op} · c = ch
is a left H^{op}-module coalgebra. The functor DK••(k) → •DK•(k), mapping (H, A, C) to (H^{op}, A^{op}, C), is easily seen to be an isomorphism of categories. Observe also that H^{op} has an antipode in case H has a twisted antipode, so we also find an isomorphism between the categories DKs••(k) and •DKs•(k). The other statements follow in a similar way; let us mention that the objects corresponding to (H, A, C) ∈ DK••(k) are (H^{cop}, A, C^{cop}) ∈ •DK•(k) and (H^{opcop}, A^{op}, C^{cop}) ∈ ••DK(k).
Proposition 17. We have faithful functors
F : DK••(k) → E••(k) and F : DKs••(k) → E*••(k)
Proof. We define F by
F(H, A, C) = (A, C, ψ) ; F(ℓ, α, γ) = (α, γ)
with ψ : C ⊗ A → A ⊗ C given by
ψ(c ⊗ a) = a_{[0]} ⊗ ca_{[1]}
We leave it to the reader to check that ψ satisfies (2.1)-(2.4), and that (α, γ) satisfies (2.5). If (H, A, C) ∈ DKs••(k), then H has a twisted antipode S̄, and the inverse of ψ is given by the formula
ψ^{−1}(a ⊗ c) = cS̄(a_{[1]}) ⊗ a_{[0]}
Alternative Doi-Koppinen structures
These structures were recently introduced by Schauenburg [163]. A left-right alternative Doi-Koppinen structure consists of a triple (H, A, C), where H is a bialgebra, A is a left H-module algebra, and C is a right H-comodule coalgebra. We write •aDK•(k) for the category of left-right alternative Doi-Koppinen structures. The morphisms are defined in the obvious way, and analogous definitions can be given in the right-left, left-left and right-right situations. The alternative version of Proposition 17 is the following:
Proposition 18. We have a faithful functor
Fa : •aDK•(k) → •E•(k)
Proof. We put
Fa(H, A, C) = (A, C, ψ) and Fa(ℓ, α, γ) = (α, γ)
with
ψ : A ⊗ C → A ⊗ C,  ψ(a ⊗ c) = c_{[1]}a ⊗ c_{[0]}
A straightforward computation shows that (A, C, ψ) is a left-right entwining structure.
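Proposition 18 can be made concrete in the smallest interesting case. Take H = kC_2 acting on A = k[x]/(x^2) by the sign automorphism x ↦ −x, and C = kC_2 with its grouplike coaction ρ(g) = g ⊗ g; then ψ(a ⊗ g) = (g·a) ⊗ g. The sketch below (our encodings; this particular choice of H, A, C is ours, not from the text) checks the multiplicativity axiom (2.7) and the unit axiom (2.2) on pure tensors:

```python
# A = k[x]/(x^2), encoded as pairs (c0, c1) ~ c0 + c1*x.
def amul(u, v):
    return (u[0] * v[0], u[0] * v[1] + u[1] * v[0])  # x^2 = 0

# C2 = {0, 1} acts on A by algebra automorphisms: the generator sends x -> -x.
def act(g, u):
    return (u[0], -u[1] if g else u[1])

# Entwining map psi : A (x) C -> A (x) C on pure tensors a (x) g,
# psi(a (x) g) = (g.a) (x) g, coming from the grouplike coaction on kC2.
def psi(a, g):
    return (act(g, a), g)

# Axiom (2.7): entwining ab past g equals entwining b first, then a.
for g in (0, 1):
    for a in [(1, 0), (0, 1), (2, 3)]:
        for b in [(1, 0), (0, 1), (5, -1)]:
            b1, gB = psi(b, g)       # b_Psi (x) c^Psi
            a1, gAB = psi(a, gB)     # a_psi (x) c^{Psi psi}
            assert psi(amul(a, b), g) == (amul(a1, b1), gAB)

# Axiom (2.2): psi(1 (x) g) = 1 (x) g.
assert all(psi((1, 0), g) == ((1, 0), g) for g in (0, 1))
print("entwining axioms (2.2) and (2.7) hold on pure tensors")
```

The check succeeds precisely because C_2 acts by algebra automorphisms, which is what "A is a left H-module algebra" encodes here; the counit and comultiplication axioms are trivial since every basis element of C is grouplike.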
Doi-Koppinen structures versus entwining structures
An obvious question is the following: is every entwining structure (A, C, ψ) defined by a Doi-Koppinen structure, i.e. can we find a bialgebra H, an H-coaction on A, and an H-action on C such that (A, C, ψ) = F(H, A, C)? We will see that a sufficient condition is that A is finitely generated and projective as a k-module. If C is finitely generated and projective, then every entwining structure comes from an alternative Doi-Koppinen structure. A recent counterexample due to Schauenburg shows that there exist entwining structures that do not arise from Doi-Koppinen structures.
We start with a construction due to Sweedler [172, p.155], reformulated by Tambara [178]. Let A be finitely generated projective, with dual basis {a_i, a_i^* | i = 1, …, n}, and write
H = H(A) = T(A^* ⊗ A)/I
the tensor algebra of A^* ⊗ A divided by the ideal I generated by elements of the form
⟨a^*, 1_A⟩ − a^* ⊗ 1_A  (2.12)
a^* ⊗ ab − (a_{(1)}^* ⊗ a) ⊗ (a_{(2)}^* ⊗ b)  (2.13)
where a^* ∈ A^* and a, b ∈ A. We write [a^* ⊗ a] for the class represented by a^* ⊗ a.
Proposition 19. Let A be a finitely generated projective k-algebra. Then H = H(A) is a bialgebra, with comultiplication and counit given by
Δ_H[a^* ⊗ a] = Σ_{i=1}^n [a^* ⊗ a_i] ⊗ [a_i^* ⊗ a] and ε_H[a^* ⊗ a] = ⟨a^*, a⟩
Proof. A straightforward calculation; we will show that Δ_H is well-defined, i.e. Δ_H = 0 on I. First,
Δ_H(a^* ⊗ 1_A) = Σ_{i=1}^n [a^* ⊗ a_i] ⊗ [a_i^* ⊗ 1_A]
= Σ_{i=1}^n [a^* ⊗ a_i] ⟨a_i^*, 1_A⟩ ⊗ 1 = ⟨a^*, 1_A⟩ 1 ⊗ 1
Next,
Δ_H((a_{(1)}^* ⊗ a)(a_{(2)}^* ⊗ b)) = Σ_{i,j=1}^n [a_{(1)}^* ⊗ a_i][a_{(2)}^* ⊗ a_j] ⊗ [a_i^* ⊗ a][a_j^* ⊗ b]
= Σ_{i,j=1}^n [a^* ⊗ a_i a_j] ⊗ [a_i^* ⊗ a][a_j^* ⊗ b]
(1.4) = Σ_{i=1}^n [a^* ⊗ a_i] ⊗ [a_{i(1)}^* ⊗ a][a_{i(2)}^* ⊗ b]
= Σ_{i=1}^n [a^* ⊗ a_i] ⊗ [a_i^* ⊗ ab]
= Δ_H[a^* ⊗ ab]
Remark 2. As above, let A be finitely generated and projective, and consider the functor
F : k-Alg → k-Alg ; F(B) = A ⊗ B
Tambara [178] observes that F has a right adjoint G, and, as a k-algebra, H(A) = G(A). For any k-algebra B, we write
G(B) = a(A, B) = T(A^* ⊗ B)/I
with I defined as above (1_A is replaced by 1_B, and a, b ∈ B). The unit and counit of the adjunction are given by
η_B : B → a(A, A ⊗ B) ; η_B(b) = Σ_{i=1}^n [a_i^* ⊗ (a_i ⊗ b)]
ε_B : A ⊗ a(A, B) → B ; ε_B(a ⊗ [a^* ⊗ b]) = ⟨a^*, a⟩ b
The comultiplication and counit on a(A, A) = H(A) can be defined using the adjunction properties.
Proposition 20. Let A be a finitely generated projective k-algebra, and H = H(A). Then A is a right H-comodule algebra, and A^* is a left H-comodule coalgebra. The structure maps are
ρ^r(a) = Σ_{i=1}^n a_i ⊗ [a_i^* ⊗ a]  (2.14)
ρ^l(a^*) = Σ_{i=1}^n [a^* ⊗ a_i] ⊗ a_i^*  (2.15)
Proof. A is a right H-comodule since
(ρ^r ⊗ I_H)(ρ^r(a)) = Σ_{i,j=1}^n a_j ⊗ [a_j^* ⊗ a_i] ⊗ [a_i^* ⊗ a] = (I_A ⊗ Δ_H)(ρ^r(a))
A is a right H-comodule algebra since
ρ^r(ab) = Σ_{i=1}^n a_i ⊗ [a_i^* ⊗ ab]
(2.13) = Σ_{i=1}^n a_i ⊗ [a_{i(1)}^* ⊗ a][a_{i(2)}^* ⊗ b]
(1.4) = Σ_{i,j=1}^n a_i a_j ⊗ [a_i^* ⊗ a][a_j^* ⊗ b]
= ρ^r(a) ρ^r(b)
ρ^r(1_A) = Σ_{i=1}^n a_i ⊗ [a_i^* ⊗ 1_A]
(2.12) = Σ_{i=1}^n a_i ⟨a_i^*, 1_A⟩ ⊗ 1_H = 1_A ⊗ 1_H
From Proposition 7, it follows that A^* is a left H-comodule coalgebra, with
ρ^l(a^*) = Σ_{i=1}^n a_{i[1]} ⊗ ⟨a^*, a_{i[0]}⟩ a_i^*
= Σ_{i,j=1}^n [a_j^* ⊗ a_i] ⊗ ⟨a^*, a_j⟩ a_i^*
= Σ_{i=1}^n [a^* ⊗ a_i] ⊗ a_i^*
Theorem 5. Let A be a finitely generated projective algebra, and C a coalgebra. There is a bijective correspondence between left H(A)-module coalgebra structures on C, and left-right entwining structures of the form (A, C, ψ). Consequently every entwining structure (A, C, ψ) with A finitely generated and projective can be derived from a Doi-Koppinen structure.
Proof. First consider an entwining structure (A, C, ψ) ∈ •E•(k). As before, we write H = H(A). On C, we define the following left H-action:
[a^* ⊗ a] · c = ⟨a^*, a_ψ⟩ c^ψ  (2.16)
This action is well-defined since
[a^* ⊗ 1_A] · c = ⟨a^*, (1_A)_ψ⟩ c^ψ = ⟨a^*, 1_A⟩ c
and
[a^* ⊗ ab] · c = ⟨a^*, (ab)_ψ⟩ c^ψ = ⟨a^*, a_ψ b_Ψ⟩ c^{Ψψ}
= ⟨a_{(1)}^*, a_ψ⟩ ⟨a_{(2)}^*, b_Ψ⟩ c^{Ψψ} = [a_{(1)}^* ⊗ a] · ([a_{(2)}^* ⊗ b] · c)
The comultiplication and counit of C are left H-linear since
Σ_{i=1}^n [a^* ⊗ a_i] · c_{(1)} ⊗ [a_i^* ⊗ a] · c_{(2)} = Σ_{i=1}^n ⟨a^*, a_{iψ}⟩ c_{(1)}^ψ ⊗ ⟨a_i^*, a_Ψ⟩ c_{(2)}^Ψ
= ⟨a^*, a_{Ψψ}⟩ c_{(1)}^ψ ⊗ c_{(2)}^Ψ = ⟨a^*, a_ψ⟩ Δ(c^ψ) = Δ([a^* ⊗ a] · c)
and
ε_C([a^* ⊗ a] · c) = ε_C(⟨a^*, a_ψ⟩ c^ψ) = ⟨a^*, a⟩ ε_C(c) = ε_H([a^* ⊗ a]) ε_C(c)
Conversely, let C be a left H-module coalgebra. We know from Proposition 20 that A is a right H-comodule algebra, so we have (H, A, C) ∈ •DK•(k) and (A, C, ψ) = F(H, A, C) ∈ •E•(k). Recall that ψ : A ⊗ C → A ⊗ C is given by
ψ(a ⊗ c) = a_{[0]} ⊗ a_{[1]}c
Let us check that we have a bijective correspondence, as needed. Let (A, C, ψ) be an entwining structure, (H, A, C) the corresponding Doi-Koppinen structure, and write F(H, A, C) = (A, C, ψ′). Then
ψ′(a ⊗ c) = a_{[0]} ⊗ a_{[1]}c = Σ_{i=1}^n a_i ⊗ [a_i^* ⊗ a] · c
= Σ_{i=1}^n a_i ⊗ ⟨a_i^*, a_ψ⟩ c^ψ = a_ψ ⊗ c^ψ = ψ(a ⊗ c)
If C is a left H(A)-module coalgebra, then we have a left-right Doi-Koppinen structure (H(A), A, C), and an entwining structure (A, C, ψ) = F(H(A), A, C). This entwining structure defines a left H(A)-module coalgebra structure on C; we denote this action temporarily by ⊳. This action equals the original one, since
[a^* ⊗ a] ⊳ c = ⟨a^*, a_ψ⟩ c^ψ = ⟨a^*, a_{[0]}⟩ a_{[1]} · c
= Σ_{i=1}^n ⟨a^*, a_i⟩ [a_i^* ⊗ a] · c = [a^* ⊗ a] · c
Adapting our arguments, we see that every entwining structure (A, C, ψ), with C finitely generated and projective, comes from an alternative Doi-Koppinen structure. This time, the involved bialgebra is H′ = H(C^*)^{cop}. Observe that H(C^*)^{cop} = T(C^* ⊗ C)/I, with I generated by elements of the form
ε_C(c)1 − ε_C ⊗ c and c^* * d^* ⊗ c − (c^* ⊗ c_{(1)}) ⊗ (d^* ⊗ c_{(2)})
and
Δ_{H′}([c^* ⊗ c]) = Σ_{i=1}^n [c^* ⊗ c_i] ⊗ [c_i^* ⊗ c] and ε_{H′}([c^* ⊗ c]) = ⟨c^*, c⟩
From Proposition 20, it follows that C is a right H′-comodule coalgebra, with
ρ^r(c) = Σ_{i=1}^n c_i ⊗ [c_i^* ⊗ c]
Given a left-right entwining structure (A, C, ψ), we define a left H′-action on A as follows:
[c^* ⊗ c] · a = ⟨c^*, c^ψ⟩ a_ψ
and this makes A into a left H′-module algebra. (H′, A, C) is a left-right alternative Doi-Koppinen structure. Further verifications are left to the reader. We summarize our results as follows:
Theorem 6. Let C be a finitely generated projective coalgebra, and H′ = H(C^*)^{cop}. Then C is a right H′-comodule coalgebra. For a given k-algebra A, there is a bijective correspondence between left-right entwining structures (A, C, ψ) and left H′-module algebra structures on A. Consequently every entwining structure with C finitely generated and projective comes from an alternative Doi-Koppinen structure.
We will now show that not every entwining structure arises from a Doi-Koppinen structure. Let k be a field. For a left-right entwining structure (A, C, ψ), c ∈ C and c^* ∈ C^*, we consider the transformation
T_{c,c^*} : A → A ; T_{c,c^*}(a) = ⟨c^*, c^ψ⟩ a_ψ
If (A, C, ψ) = F(H, A, C) arises from a Doi-Koppinen structure, then
T_{c,c^*}(a) = ⟨c^*, a_{[1]}c⟩ a_{[0]}
and then every H-subcomodule of A is T_{c,c^*}-invariant. As every a ∈ A is contained in a finite dimensional H-subcomodule of A (cf. [172, Theorem 2.1.3b]), the T_{c,c^*}-invariant subspace of A generated by a is finite dimensional.
Example 7. (Schauenburg [163]) Let C = k ⊕ kt, with t primitive, and let A be the free algebra with generators X_i, where i ranges over the integers. We define ψ : A ⊗ C → A ⊗ C by ψ(a ⊗ 1) = a ⊗ 1 for all a ∈ A, and
ψ(X_{i_1} X_{i_2} ··· X_{i_n} ⊗ t) = X_{i_1+1} X_{i_2+1} ··· X_{i_n+1} ⊗ t
A straightforward computation shows that ψ is entwining. Now take c^* ∈ C^* such that ⟨c^*, t⟩ = 1. Then
T_{t,c^*}(X_i) = X_{i+1}
and the T_{t,c^*}-invariant subspace of A generated by X_0 is infinite dimensional, so (A, C, ψ) cannot be derived from a Doi-Koppinen structure.
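Schauenburg's counterexample is easy to experiment with: encode a monomial X_{i_1}···X_{i_n} of the free algebra as the tuple of its indices, so that T_{t,c^*} becomes an index shift. A minimal sketch (the encoding is ours):

```python
def shift(mono):
    """T_{t,c*} on a monomial X_{i1}...X_{in}, encoded as the tuple of its
    indices: every index is raised by 1."""
    return tuple(i + 1 for i in mono)

# Orbit of X_0 under repeated application of T_{t,c*}:
orbit = [(0,)]
for _ in range(20):
    orbit.append(shift(orbit[-1]))

# All iterates are pairwise distinct monomials, so the T-invariant subspace
# generated by X_0 keeps growing: no finite dimensional subcomodule of A
# can contain X_0, which rules out a Doi-Koppinen origin for psi.
assert len(set(orbit)) == len(orbit)
print(orbit[:4])  # [(0,), (1,), (2,), (3,)]
```

Of course a finite computation only exhibits finitely many distinct iterates; the point of the text's argument is that the orbit {X_0, X_1, X_2, …} is linearly independent in the free algebra, hence spans an infinite dimensional invariant subspace.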
2.2 Doi-Koppinen modules and entwined modules
Entwined modules
Let (A, C, ψ) ∈ E••(k). An (A, C, ψ)-entwined module is a k-module M with a right A-action and a right C-coaction such that
ρ^r(ma) = m_{[0]} a_ψ ⊗ m_{[1]}^ψ  (2.17)
The category of (A, C, ψ)-entwined modules and A-linear C-colinear maps is denoted by M(ψ)^C_A. We also have left-left, left-right and right-left versions:
For (A, C, ϕ) ∈ ••E(k), ^C_A M(ϕ) consists of left A-modules and left C-comodules such that
ρ^l(am) = m_{[−1]}^ϕ ⊗ a_ϕ m_{[0]}  (2.18)
For (A, C, ψ) ∈ •E•(k), _A M(ψ)^C consists of left A-modules and right C-comodules such that
ρ^r(am) = a_ψ m_{[0]} ⊗ m_{[1]}^ψ  (2.19)
For (A, C, ϕ) ∈ •E•(k), ^C M_A(ϕ) consists of right A-modules and left C-comodules such that
ρ^l(ma) = m_{[−1]}^ϕ ⊗ m_{[0]} a_ϕ  (2.20)
Examples 3. 1. For (A, C, ψ) ∈ E••(k), A ⊗ C and C ⊗ A are right-right entwined modules. The structure maps are given by the formulas
(c ⊗ a)b = c ⊗ ab  ρ^r(c ⊗ a) = (c_{(1)} ⊗ a_ψ) ⊗ c_{(2)}^ψ
(a ⊗ c)b = ab_ψ ⊗ c^ψ  ρ^r(a ⊗ c) = (a ⊗ c_{(1)}) ⊗ c_{(2)}
ψ : C ⊗ A → A ⊗ C is then a morphism in M(ψ)^C_A. See also Examples 13 and 14.
2. Let H be a bialgebra with twisted antipode S̄, and consider the map ψ : H ⊗ H → H ⊗ H with
ψ(h ⊗ k) = h_{(2)} ⊗ h_{(3)} k S̄(h_{(1)})
Then (H, H, ψ) ∈ •E•(k), and the objects of _H M(ψ)^H are k-modules with a left H-action and right H-coaction such that
ρ^r(hm) = h_{(2)} m_{[0]} ⊗ h_{(3)} m_{[1]} S̄(h_{(1)})
These modules are known under the name Yetter-Drinfeld modules. Yetter-Drinfeld modules will be investigated in Section 4.4 and Chapter 5.
3. For any bialgebra H, (H, H, I_{H⊗H}) ∈ •E•(k). _H M(I_{H⊗H})^H consists of k-modules with a left H-action and right H-coaction such that
ρ^r(hm) = h m_{[0]} ⊗ m_{[1]}
i.e. the right H-coaction is left H-linear. In the situation where H is commutative and cocommutative, this type of modules was first considered by Long in [118]; in the sequel, we will refer to them as Long dimodules. If H is commutative and cocommutative, then Long dimodules and Yetter-Drinfeld modules coincide. We will come back more extensively to Long dimodules in Section 4.5 and Chapter 7.
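For a group algebra H = kG the Yetter-Drinfeld entwining of Example 3.2 has a transparent form on grouplikes: since Δ(h) = h ⊗ h and the (twisted) antipode is inversion, ψ(h ⊗ k) = h ⊗ hkh^{−1} is conjugation. The sketch below (our encoding of S_3 as permutation tuples) checks the multiplicativity of this map in its first argument, which is the grouplike shadow of axiom (2.7):

```python
from itertools import permutations

# S3 as permutations of {0, 1, 2}; the group law is composition of tuples.
G = list(permutations(range(3)))

def comp(g, h):               # (g*h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def inv(g):
    out = [0, 0, 0]
    for i, gi in enumerate(g):
        out[gi] = i
    return tuple(out)

# Yetter-Drinfeld entwining on grouplikes: psi(h (x) k) = h (x) h k h^{-1}.
def psi(h, k):
    return (h, comp(comp(h, k), inv(h)))

# Entwining h h' past k must agree with entwining h' first, then h:
for h in G:
    for h2 in G:
        for k in G:
            h2_, k1 = psi(h2, k)
            h_, k2 = psi(h, k1)
            assert psi(comp(h, h2), k) == (comp(h_, h2_), k2)
print("conjugation entwining is multiplicative on S3")
```

For G abelian the conjugation is trivial and the Yetter-Drinfeld condition degenerates to the Long dimodule condition of Example 3.3, matching the remark above that the two notions coincide over a commutative cocommutative H.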
Doi-Koppinen Hopf modules
Let (H, A, C) ∈ DK••(k), and (A, C, ψ) = F(H, A, C) the corresponding object in E••(k). M(H)^C_A will be another notation for M(ψ)^C_A. (2.17) takes the form
ρ^r(ma) = m_{[0]} a_{[0]} ⊗ m_{[1]} a_{[1]}  (2.21)
The objects of M(H)^C_A are called unified Hopf modules, Doi-Hopf modules or Doi-Koppinen Hopf modules. If H has a twisted antipode S̄, then ψ is bijective, and (A, C, ψ^{−1}) ∈ ••E(k). ^C_A M(H) will be a new notation for ^C_A M(ψ^{−1}). This category consists of left A-modules and left C-comodules such that
ρ^l(am) = m_{[−1]} S̄(a_{[1]}) ⊗ a_{[0]} m_{[0]}  (2.22)
Similar constructions apply to left-right, right-left and left-left Doi-Koppinen structures.
Examples 4. 1. (k, A, k) ∈ DK••(k), and the corresponding entwining structure is (A, k, I_A) ∈ E••(k). The category of entwined modules is nothing else than the category of right A-modules.
2. (k, k, C) ∈ DK••(k) corresponds to (k, C, I_C) ∈ E••(k). An entwined module is now simply a right C-comodule.
3. Let H be a bialgebra. (H, H, H) ∈ DK••(k) corresponds to (H, H, ψ) ∈ E••(k), with
ψ(h ⊗ k) = k_{(1)} ⊗ h k_{(2)}
An entwined module is now a Hopf module in the sense of Sweedler [172].
4. If H is a bialgebra, and A is a right H-comodule algebra, then (H, A, H) ∈ DK••(k) is a right-right Doi-Koppinen structure. The corresponding Doi-Koppinen modules are the well-known (A, H)-relative Hopf modules (see e.g. [67]).
5. In a similar way, if C is a right H-module coalgebra, then (H, H, C) ∈ DK••(k) is a right-right Doi-Koppinen structure, and the corresponding Doi-Koppinen modules are the [H, C]-Hopf modules studied in [67].
6. Let G be a group, and A a G-graded k-algebra. Then (kG, A, kG) is a Doi-Koppinen structure, and the corresponding category of Doi-Koppinen modules is the category of G-graded right A-modules.
7. Now let X be a right G-set. Then kX is a right kG-module coalgebra, and (kG, A, kX) is a Doi-Koppinen structure. M(kG)^{kX}_A consists of right A-modules graded by the G-set X (see [143] and [147] for a study of modules graded by G-sets).
8. Yetter-Drinfeld modules and Long dimodules are special cases of Doi-Hopf modules. This will be explained in Sections 4.4 and 4.5.
Proposition 21. For (A, C, ψ) ∈ E••(k), the categories M(ψ)^C_A, _{A^{op}}M(ψ ∘ τ)^C, ^{C^{cop}}M(τ ∘ ψ)_A and ^{C^{cop}}_{A^{op}}M(τ ∘ ψ ∘ τ) are isomorphic. In particular, for (H, A, C) ∈ DK••(k), the categories M(H)^C_A, _{A^{op}}M(H^{op})^C, ^{C^{cop}}M(H^{cop})_A and ^{C^{cop}}_{A^{op}}M(H^{opcop}) are isomorphic.
Proof. Everything is straightforward. For example, if M ∈ M(ψ)^C_A, then the corresponding object in _{A^{op}}M(ψ ∘ τ)^C is equal to M as a right C-comodule, but with left A^{op}-action given by
a^{op} · m = ma
All the other isomorphisms are defined in a similar way, and we leave further details to the reader.
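For the graded-module example 6 above, compatibility (2.21) has a very concrete meaning: for homogeneous m and a, the degree of ma is the product of the degrees. A small sketch (the choice of example is ours): grade A = M_2(k) over G = C_2 by deg(E_{ij}) = (j − i) mod 2, so the diagonal matrix units form the degree-0 part, and grade the row-vector module accordingly.

```python
# G-graded modules as Doi-Koppinen modules: (2.21) for (kG, A, kG) says
# deg(m a) = deg(m) deg(a) on homogeneous elements.  Here G = C2 and
# A = M2(k) with deg(E_{ij}) = (j - i) mod 2.

def unit_matrix(i, j):
    m = [[0, 0], [0, 0]]
    m[i][j] = 1
    return m

def matdeg(i, j):
    return (j - i) % 2

def rowvec(i):          # basis row vector f_i, homogeneous of degree i mod 2
    v = [0, 0]
    v[i] = 1
    return v

def act(v, m):          # right action: row vector times matrix
    return [sum(v[k] * m[k][j] for k in range(2)) for j in range(2)]

# f_i . E_{ij} = f_j, and degrees add in C2, as (2.21) demands:
for i in range(2):
    for j in range(2):
        assert act(rowvec(i), unit_matrix(i, j)) == rowvec(j)
        assert (i + matdeg(i, j)) % 2 == j % 2

# The grading is also multiplicative on A itself: E_{ij} E_{jk} = E_{ik}.
for i in range(2):
    for j in range(2):
        for k in range(2):
            assert (matdeg(i, j) + matdeg(j, k)) % 2 == matdeg(i, k)
print("deg(m a) = deg(m) + deg(a) in C2: (2.21) holds on homogeneous elements")
```

Replacing the grading group C_2 by a right G-set X, with M_x A_g ⊂ M_{xg}, gives the G-set graded modules of example 7 in the same way.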
2.3 Entwined modules and the smash product
Let A and B be k-algebras, and consider a map R : B ⊗ A → A ⊗ B. We will use the following notation (summation understood):
R(b ⊗ a) = a_R ⊗ b_R = a_r ⊗ b_r  (2.23)
We put A#_R B = A ⊗ B as a k-module, but with a new multiplication:
m_{A#_R B} = (m_A ⊗ m_B) ∘ (I_A ⊗ R ⊗ I_B)  (2.24)
or
(a#b)(c#d) = ac_R # b_R d  (2.25)
If this new multiplication makes A#_R B into an associative algebra with unit 1#1, then we call A#_R B a smash product, and (A, B, R) a smash product structure or a factorization structure.
Theorem 7. ([44]) (A, B, R) is a smash product structure if and only if
R(b ⊗ 1_A) = 1_A ⊗ b  (2.26)
R(1_B ⊗ a) = a ⊗ 1_B  (2.27)
R(bd ⊗ a) = a_{Rr} ⊗ b_r d_R  (2.28)
R(b ⊗ ac) = a_R c_r ⊗ b_{Rr}  (2.29)
for all a, c ∈ A, b, d ∈ B.
Proof. Assume that (A, B, R) is a smash product structure. Then for all b ∈ B, we have
1_A # b = (1_A # b)(1_A # 1_B) = 1_R # b_R
and (2.26) follows. (2.27) follows in a similar way. The multiplication is associative, so
(1#b)((a#1)(c#1)) = ((1#b)(a#1))(c#1)
and this implies (2.29). (2.28) follows from
((1#b)(1#d))(a#1) = (1#b)((1#d)(a#1))
The converse is left to the reader.
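Conditions (2.26)-(2.29) can be tested mechanically on a small example. Take A = k[x]/(x^2), B = kC_2, and R(g ⊗ x^i) = (−1)^{gi} x^i ⊗ g, twisting by the sign action x ↦ −x; the resulting A#_R B is the skew group algebra of C_2 acting on A. The sketch below (our encoding: elements as dictionaries keyed by basis vectors x^i # g) verifies associativity of the multiplication (2.25):

```python
from itertools import product

# Elements of A #_R B, with A = k[x]/(x^2) and B = kC2, stored as
# {(i, g): coeff} for the basis vectors x^i # g, i in {0,1}, g in {0,1}.
def smash_mul(u, v):
    """(a # b)(c # d) = a c_R # b_R d with R(g (x) x^j) = (-1)^(g*j) x^j (x) g,
    i.e. the C2-generator acts on A by x -> -x (our choice of example)."""
    out = {}
    for (i, g), s in u.items():
        for (j, h), t in v.items():
            if i + j < 2:                         # x^2 = 0 in A
                key = (i + j, (g + h) % 2)        # group law in C2
                sign = -1 if (g and j % 2) else 1  # R moves g past x^j
                out[key] = out.get(key, 0) + sign * s * t
    return {k: c for k, c in out.items() if c}

# Associativity on all basis triples:
basis = [{(i, g): 1} for i, g in product(range(2), range(2))]
for u, v, w in product(basis, repeat=3):
    assert smash_mul(smash_mul(u, v), w) == smash_mul(u, smash_mul(v, w))

# The generator s = 1 # g and x = x # e anticommute: s x = -x s.
s, x = {(0, 1): 1}, {(1, 0): 1}
print(smash_mul(s, x))  # {(1, 1): -1}
print(smash_mul(x, s))  # {(1, 1): 1}
```

The associativity holds precisely because the sign rule satisfies (2.26)-(2.29); changing the sign convention to one violating, say, (2.28) makes the triple-product check fail.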
Remark 3. The smash product is related to the factorization problem. We say that a k-algebra X factorizes through k-algebras A and B if X ≅ A ⊗ B as k-modules, and, after identifying X and A ⊗ B, the maps
ι_A : A → A ⊗ B = X,  ι_A(a) = a ⊗ 1_B
ι_B : B → A ⊗ B = X,  ι_B(b) = 1_A ⊗ b
are algebra maps. It is not hard to show that there exists a bijective correspondence between the algebra structures on A ⊗ B for which ι_A and ι_B are algebra maps, and smash product structures of the form (A, B, R): given the multiplication m_X on X, we put
R(b ⊗ a) = m_X(ι_B(b) ⊗ ι_A(a))
Let (A, B, R), (A′, B′, R′) be smash product structures. A morphism (A, B, R) → (A′, B′, R′) consists of a pair (α, β), where α : A → A′ and β : B → B′ are algebra maps such that (α ⊗ β) ∘ R = R′ ∘ (β ⊗ α), or
α(a_R) ⊗ β(b_R) = α(a)_{R′} ⊗ β(b)_{R′}
for all a ∈ A and b ∈ B. S(k) will denote the category of smash product structures over k.
Proposition 22. If (A, B, R) ∈ S(k) is a smash product structure, then (B^{op}, A^{op}, τ ∘ R ∘ τ) is also a smash product structure. Furthermore the switch map τ : (A#_R B)^{op} → B^{op}#_{τ∘R∘τ}A^{op} is an algebra isomorphism.
Proof. The first statement is obvious. To prove the second one, we only need to show that τ is anti-multiplicative. Indeed,
τ(c#d)τ(a#b) = (d#c)(b#a) = d ·^{op} b_R # c_R ·^{op} a = b_R d # a c_R = τ(ac_R # b_R d) = τ((a#b)(c#d))
Proposition 23. Let (A, B, R) ∈ S(k) be a smash product structure, and assume that R is invertible. Then (B, A, S = R^{−1}) is also a smash product structure, and R : B#_S A → A#_R B is an algebra isomorphism, with inverse S.
Proof. We write down conditions (2.26)-(2.29) as commutative diagrams. Change the direction of the morphisms involving R, and replace R by R^{−1} = S. We then obtain the diagrams telling us that (B, A, R^{−1}) is a smash product structure. We are left to prove that R is multiplicative. This works as follows: for all b, d ∈ B and a, c ∈ A, we have
2 Doi-Koppinen Hopf modules and entwined modules
R((b#a)(d#c)) = R(bdS #aS c) = (aS c)R #(bdS )R
(2.28) = (aS c)R1R2 #bR2 dSR1
(2.29) = (aSR1 cR3 )R2 #bR2 dSR1R3
(S = R−1 ) = (acR3 )R2 #bR2 dR3
(2.29) = aR2 cR3R4 #bR2R4 dR3
= (aR2 #bR2 )(cR3 #dR3 ) = R(b#a)R(d#c)
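As a minimal sanity check of the smash product construction (our own illustration, not taken from the text), consider the simplest possible choice of R, the flip map:

```latex
% With R = \tau, i.e. a_R \otimes b_R = a \otimes b, conditions (2.26)-(2.29)
% hold trivially, and the smash product multiplication specializes to
\begin{align*}
(a \,\#\, b)(c \,\#\, d) \;=\; a c_R \,\#\, b_R d \;=\; ac \,\#\, bd,
\end{align*}
% so A \#_\tau B is the ordinary tensor product algebra A \otimes B,
% corresponding to the trivial factorization X = A \otimes B.
```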
Theorem 8. Let A be a k-algebra, and C a k-coalgebra which is finitely generated and projective as a k-module. Then there is a bijective correspondence between left-right entwining structures of the form (A, C, ψ) and smash product structures of the form (A, C ∗ , R). If R corresponds to ψ, then the categories A M(ψ)C and A#R C ∗ M are isomorphic.
Proof. Let {ci , c∗i | i = 1, · · · , n} be a dual basis for C. For an entwining structure (A, C, ψ) ∈ • E• (k), we define f (ψ) = R : C ∗ ⊗ A → A ⊗ C ∗ by
R(c∗ ⊗ a) = Σi ⟨c∗, (ci )ψ ⟩aψ ⊗ c∗i    (2.30)
We claim that (A, C , R) is a smash product structure. For all c∗ , d∗ ∈ C ∗ and a, b ∈ A, we have ∗ ∗ c∗ , cψ aRr ⊗ c∗r ∗ d∗R = i (AR )ψ ⊗ ci ∗ dR ∗
i
=
∗ Ψ ∗ ∗ c∗ , cψ i d , cj aΨ ψ ⊗ ci ∗ cj
i,j
(1.5)
=
(2.3)
=
c∗ , (ci(1) )ψ d∗ , (ci(2) )Ψ aΨ ψ ⊗ c∗i
i
ψ ∗ ∗ c∗ , (cψ i )(1) d , (ci )(2) aψ ⊗ ci
i
=
∗ c∗ ∗ d∗ , cα i aψ ⊗ ci
i
= R(c∗ ∗ d∗ ⊗ a) proving (2.28). aR br ⊗ (c∗ )Rr =
∗ c∗R , cψ i aR bψ ⊗ ci
i
=
∗ Ψ ∗ c∗j , cψ i c , cj aΨ bψ ⊗ ci
i,j
=
i
∗ c∗ , cψΨ i aΨ bψ ⊗ ci
2.3 Entwined modules and the smash product
(2.7)
=
53
∗ c∗ , cψ i (ab)ψ ⊗ ci
i
= R(c∗ ⊗ ab) proving (2.29). (2.26) and (2.27) are left to the reader. Conversely, if (A, C ∗ , R) is a smash product structure, then we define g(R) = ψ : A ⊗ C → A ⊗ C by (c∗i )R , caR ⊗ ci (2.31) ψ(a ⊗ c) = i
Then
aΨψ ⊗ (c(1) )ψ ⊗ (c(2) )Ψ = Σi ⟨(c∗i )R , c(1) ⟩(aΨ )R ⊗ ci ⊗ (c(2) )Ψ
= Σi,j ⟨(c∗i )R , c(1) ⟩⟨(c∗j )r , c(2) ⟩arR ⊗ ci ⊗ cj
= Σi,j ⟨(c∗i )R ∗ (c∗j )r , c⟩arR ⊗ ci ⊗ cj
(2.28) = Σi,j ⟨(c∗i ∗ c∗j )R , c⟩aR ⊗ ci ⊗ cj
(1.5) = Σi ⟨(c∗i )R , c⟩aR ⊗ ∆(ci )
= aψ ⊗ ∆(cψ )
proving (2.3). To prove (2.7):
aψ bΨ ⊗ cΨψ = Σi ⟨(c∗i )R , cΨ ⟩aR bΨ ⊗ ci
= Σi,j ⟨(c∗i )R , cj ⟩⟨(c∗j )r , c⟩aR br ⊗ ci
(2.29) = Σi ⟨(c∗i )Rr , c⟩aR br ⊗ ci
= Σi ⟨(c∗i )R , c⟩(ab)R ⊗ ci
= ψ(ab ⊗ c)
Next observe that
(g(f (ψ)))(a ⊗ c) = Σi ⟨(c∗i )R , c⟩aR ⊗ ci
= Σi,j ⟨c∗i , (cj )ψ ⟩⟨c∗j , c⟩aψ ⊗ ci
= Σj ⟨c∗j , c⟩aψ ⊗ (cj )ψ
= aψ ⊗ cψ = ψ(a ⊗ c)
and it follows that (g ◦ f )(ψ) = ψ. In a similar way, we can prove that (f ◦ g)(R) = R, and this finishes the proof of the first part of the Theorem. We will now define an isomorphism F : A M(ψ)C → A#R C ∗ M. For M ∈ A M(ψ)C , we define F (M ) = M as a k-module, with left A#R C ∗ -action defined by
(a#c∗ ) · m = ⟨c∗, m[1] ⟩a · m[0]    (2.32)
It is clear that M is an A#R C ∗ -module, since
((a#c∗ )(b#d∗ )) · m = (abR #(c∗R ∗ d∗ )) · m = ⟨c∗R ∗ d∗, m[1] ⟩abR m[0]
= Σi ⟨c∗, (ci )ψ ⟩⟨c∗i , m[1] ⟩⟨d∗, m[2] ⟩abψ m[0]
= (a#c∗ ) · (⟨d∗, m[1] ⟩bm[0] ) = (a#c∗ ) · ((b#d∗ ) · m)
= (a#c∗ ) · d∗ , m[1] bm[0] = (a#c∗ ) · (b#d∗ ) · m Conversely, if M is a left A#R C ∗ -module, we define G(M ) ∈ G(M ) = M as a k-module, with left A-action
C A M(ψ) :
am = (a#εC ) · m and right C-coaction ρr (m) =
(1#c∗i ) · m ⊗ ci
i
Further details are left to the reader.
Theorem 9. Let A be a k-algebra, and C a coalgebra which is finitely generated and projective as a k-module. Then there is a bijection between right-left entwining structures of the form (A, C, ϕ) and smash product structures of the form (C ∗ , A, S). In this situation we have an isomorphism of categories
C M(ϕ)A ∼= MC ∗ #S A
Proof. We use the left-right dictionary. If (A, C, ϕ) ∈ • E• (k), then (Aop , C cop , τ ◦ ϕ ◦ τ ) ∈ • E• (k) (see Proposition 14). Using Theorem 8, we find (Aop , C cop∗ = C ∗op , R) ∈ S(k). Finally Proposition 22 gives the corresponding smash product structure (C ∗ , A, S = τ ◦ R ◦ τ ). From (2.30) and (2.31), it follows that the correspondence between S and ϕ is given by the formulas
S(a ⊗ c∗ ) = Σi ⟨c∗, (ci )ϕ ⟩c∗i ⊗ aϕ    (2.33)
ϕ(c ⊗ a) = Σi ⟨(c∗i )R , c⟩ci ⊗ aR    (2.34)
Take an entwining structure (A, C, ψ) ∈ • E• (k). Assume that C is finitely generated and projective, and that ψ is invertible. Then we have a right-left entwining structure (C, A, ϕ = τ ◦ ψ −1 ◦ τ ) ∈ • E• (k) (see Proposition 15). Let (A, C ∗ , R) and (C ∗ , A, S) be the two corresponding smash product structures from Theorems 8 and 9. One is then tempted to conjecture that S = R−1 , and therefore A#R C ∗ ∼= C ∗ #S A, according to Proposition 23. Surprisingly, this is not true in general! A straightforward computation shows that S = R−1 if and only if
c ⊗ a = Σi ⟨c∗i , cϕ ⟩(ci )ψ ⊗ aψϕ    (2.35)
c ⊗ a = Σi ⟨c∗i , cψ ⟩(ci )ϕ ⊗ aϕψ    (2.36)
for all c ∈ C and a ∈ A. The condition ϕ = τ ◦ ψ −1 ◦ τ amounts to
c ⊗ a = Σi ⟨c∗i , cϕ ⟩(ci )ψ ⊗ aϕψ    (2.37)
c ⊗ a = Σi ⟨c∗i , cψ ⟩(ci )ϕ ⊗ aψϕ    (2.38)
for all c ∈ C and a ∈ A. We will make the difference clear in the Doi-Hopf case.
Examples 5. 1) Let A be a right H-comodule algebra, and B a right H-module algebra. Define R : B ⊗ A → A ⊗ B by
R(b ⊗ a) = a[0] ⊗ ba[1]    (2.39)
Then (A, B, R) is a smash product structure, and the multiplication on A#R B is given by the formula
(a#b)(c#d) = ac[0] #(bc[1] )d    (2.40)
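A classical instance of (2.40), added here for illustration (our own choice of data, not taken from the text): take H = kG a group algebra, A = kG with its regular coaction g ↦ g ⊗ g, and B a right H-module algebra, i.e. an algebra with G acting by automorphisms on the right.

```latex
% For g, h \in G and b, d \in B, formula (2.40) becomes
\begin{align*}
(g \,\#\, b)(h \,\#\, d) \;=\; g h_{[0]} \,\#\, (b \cdot h_{[1]})\, d
 \;=\; gh \,\#\, (b \cdot h)\, d,
\end{align*}
% a version of the skew group algebra of G acting on B.
```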
If H has a twisted antipode, then R is invertible, and R−1 is given by the formula
R−1 (b ⊗ a) = bS(a[1] ) ⊗ a[0]    (2.41)
2) In a similar way, if A is a left H-comodule algebra, and B is a left H-module algebra, then we have a smash product structure (B, A, R), with
R(a ⊗ b) = a[−1] b ⊗ a[0]
3) Let (H, A, C) ∈ • aDK• (k), i.e. A is a left H-module algebra, and C is a right H-comodule coalgebra. If C is finitely generated projective, then C ∗ is a left H-comodule algebra, cf. Proposition 7, so we find a smash product structure (A, C ∗ , R). Now (H, A, C) defines an entwining structure (see Proposition 18), and Theorem 8 produces another smash product structure (A, C ∗ , R′). As one might expect, R = R′, since
R′(c∗ ⊗ a) = Σi ⟨c∗, (ci )ψ ⟩aψ ⊗ c∗i
= Σi ⟨c∗, ci[0] ⟩ci[1] a ⊗ c∗i
= c∗[−1] a ⊗ c∗[0] = R(c∗ ⊗ a)
Let (A, B, R) be a smash product structure. Can we find a bialgebra H, an H-coaction on A and an H-action on B such that R is given by (2.39)? We have discussed this question already for entwining structures. For smash product structures, the answer is the following:
Theorem 10. (Tambara [178]) Let A be a finitely generated and projective algebra, and H = H(A) as in Proposition 19. For every algebra B, we have a bijective correspondence between smash product structures of the form (A, B, R) and right H-module algebra structures on B. A similar result holds if B is finitely generated projective.
Proof. The proof is similar to the corresponding proofs for entwining structures (Theorems 5 and 6). We know that A is a right H-comodule algebra. Given R, we define a right H-action on B as follows:
b · [a∗ ⊗ a] = ⟨a∗, aR ⟩bR
We invite the reader to prove that this puts an H-module algebra structure on B. Conversely, if B is a right H-module algebra, then Example 5 1) tells us how to produce a smash product structure.
Example 8. If (H, A, C) ∈ • DK• (k), then C ∗ is a right H-module algebra, the right H-action on C ∗ being given by
⟨c∗ h, c⟩ = ⟨c∗, hc⟩
and we obtain a smash product structure (A, C ∗ , R). We have a functor F :
A M(H)C → A#R C ∗ M
F (M ) = M as a k-module, with left A#R C ∗ -action
(a#c∗ ) · m = ⟨c∗, m[1] ⟩am[0]
If C is finitely generated and projective, then the map R coincides with the one from Theorem 8. First observe that
c∗ h = Σi ⟨c∗, hci ⟩c∗i
To see this, apply both sides to an arbitrary c ∈ C. Thus, according to (2.30),
R(c∗ ⊗ a) = Σi ⟨c∗, a[1] ci ⟩a[0] ⊗ c∗i
= a[0] ⊗ Σi ⟨c∗ a[1] , ci ⟩c∗i
= a[0] ⊗ (c∗ a[1] )    (2.42)
If C is projective, but not necessarily finitely generated, then F is fully faithful. Indeed, if f : M → N is a left A#R C ∗ -linear map between two Doi-Hopf modules M and N , then f is left A-linear and left C ∗ -linear, and therefore right C-colinear, by Proposition 3. Consequently, A M(H)C can be viewed as a full subcategory of A#R C ∗ M.
Example 9. Now assume that H has an antipode. To (H, A, C) ∈ • DK• (k), we can associate (A, C, ψ) ∈ • E• (k) and (A, C, ϕ) ∈ • E• (k). Recall that ϕ : C ⊗ A → C ⊗ A is given by
ϕ(c ⊗ a) = S(a[1] )c ⊗ a[0]
We have associated smash product structures (A, C ∗ , R) and (C ∗ , A, S), with R given by (2.42), and S by
S(a ⊗ c∗ ) = c∗ S(a[1] ) ⊗ a[0]    (2.43)
Even if C is not necessarily finitely generated and projective, (2.43) defines a smash product structure. In any case, we have a functor F : C M(H)A → MC ∗ #S A , with F (M ) = M as a k-module, and action
m · (c∗ #a) = ⟨c∗, m[−1] ⟩m[0] a
If C is finitely generated and projective, then F is an isomorphism of categories. If H has a twisted antipode, then the inverse of R exists and is given by (see (2.41))
R−1 (a ⊗ c∗ ) = c∗ S(a[1] ) ⊗ a[0]    (2.44)
If the antipode S of H is of order 2, then we can conclude from (2.43) and (2.44) that S = R−1 , and we have the following result.
Proposition 24. Let (H, A, C) ∈ • DK• (k), and assume that H has an antipode of order 2. Let (A, C ∗ , R) and (C ∗ , A, S) be defined as in Example 9. Then the smash products A#R C ∗ and C ∗ #S A are isomorphic.
Koppinen's smash product
Let (A, C, ψ) ∈ • E• (k) be a left-right entwining structure. The Koppinen smash #ψ (C, A) is equal to Hom(C, A) as a k-module, but with twisted multiplication
(f • g)(c) = f ((c(1) )ψ )g(c(2) )ψ
for all f, g : C → A and c ∈ C.
Proposition 25. If (A, C, ψ) is a left-right entwining structure, then #ψ (C, A) is an associative algebra, with unit ηA ◦ εC .
Proof. The proof of the associativity goes as follows:
((f • g) • h)(c) = (f • g)((c(1) )ψ )h(c(2) )ψ
= f ((((c(1) )ψ )(1) )Ψ )g(((c(1) )ψ )(2) )Ψ h(c(2) )ψ
(2.3) = f ((c(1) )ψΨ )g((c(2) )Φ )Ψ h(c(3) )Φψ
(2.7) = f ((c(1) )Ψ )(g((c(2) )ψ )h(c(3) )ψ )Ψ
= f ((c(1) )Ψ )(g • h)(c(2) )Ψ = (f • (g • h))(c)
From (2.2) and (2.4), it follows easily that ηA ◦ εC is the unit element of #ψ (C, A).
Proposition 26. C ∗ and A are subalgebras of #ψ (C, A), via c∗ → ηA ◦ c∗ and a → aηA ◦ εC .
Proof. Obvious.
Proposition 27. For (A, C, ψ) ∈ • E• (k), we have a functor
F : A M(ψ)C → #ψ (C,A) M
For an entwined module M , F (M ) = M as a k-module, with left #ψ (C, A)-action given by
f · m = f (m[1] )m[0]
Proof. We will prove that F (M ) is a #ψ (C, A)-module, leaving further details to the reader. For f, g : C → A, we have
f · (g · m) = f · (g(m[1] )m[0] ) = f ((m[1] )ψ )g(m[2] )ψ m[0] = (f • g)(m[1] )m[0] = (f • g) · m
Proposition 28. Let (A, C, ψ) ∈ • E• (k), and assume that C is finitely generated and projective as a k-module. Let (A, C ∗ , R) be the corresponding smash product structure (cf. Theorem 8). Then we have an algebra isomorphism s : A#R C ∗ → #ψ (C, A) given by
s(a#c∗ )(c) = ⟨c∗, c⟩a
for all a ∈ A, c ∈ C and c∗ ∈ C ∗ .
Proof. It is well-known that s is a k-module isomorphism if C is finitely generated and projective. So we only need to show that s is an algebra map. Let {ci , c∗i | i = 1, · · · , n} be a finite dual basis of C. For all a, b ∈ A, c ∈ C and c∗ , d∗ ∈ C ∗ , we have
s((a#c∗ )(b#d∗ ))(c) = s(Σi ⟨c∗, (ci )ψ ⟩abψ #c∗i ∗ d∗ )(c)
= Σi ⟨c∗, (ci )ψ ⟩⟨c∗i ∗ d∗, c⟩abψ
= Σi ⟨c∗, (ci )ψ ⟩⟨c∗i , c(1) ⟩⟨d∗, c(2) ⟩abψ
= ⟨c∗, (c(1) )ψ ⟩⟨d∗, c(2) ⟩abψ
= (aηA ◦ c∗ )((c(1) )ψ )(bηA ◦ d∗ )(c(2) )ψ
= ((aηA ◦ c∗ ) • (bηA ◦ d∗ ))(c) = (s(a#c∗ ) • s(b#d∗ ))(c)
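As a quick additional check (our own remark, not in the text), s also preserves the unit, using the description of the unit of #ψ (C, A) from Proposition 25:

```latex
% The unit of A \#_R C^* is 1_A \# \varepsilon_C, and for all c \in C:
\begin{align*}
s(1_A \,\#\, \varepsilon_C)(c) \;=\; \langle \varepsilon_C, c\rangle\, 1_A
 \;=\; \varepsilon_C(c)\, 1_A \;=\; (\eta_A \circ \varepsilon_C)(c).
\end{align*}
```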
Example 10. Let (H, A, C) ∈ • DK• (k), and let (A, C, ψ) be the associated entwining structure. We will write #H (C, A) = #ψ (C, A). The product on #H (C, A) is given by the formula
(f • g)(c) = f (g(c(2) )[1] c(1) )g(c(2) )[0]
This multiplication appeared first in [111, Def. 2.2]. In the situation where C = H, it appears already in [68] and [110].
2.4 Entwined modules and the smash coproduct
The results in this Section may be viewed as the duals of the ones in Section 2.3. Let C and D be coalgebras, and consider a linear map V : C ⊗ D → D ⊗ C. We will use the notation
V (c ⊗ d) = dV ⊗ cV    (2.45)
We call (C, D, V ) a smash coproduct structure if C >< D (equal to C ⊗ D as a k-module, with the comultiplication induced by V and counit εC >< εD = εC ⊗ εD ) is a coalgebra; this comes down to the following conditions:
εC (cV )dV = εC (c)d    (2.46)
εD (dV )cV = εD (d)c    (2.47)
∆D (dV ) ⊗ cV = dV(1) ⊗ dv(2) ⊗ cV v    (2.48)
dV ⊗ ∆C (cV ) = dV v ⊗ cv(1) ⊗ cV(2)    (2.49)
Proof. If εC >< εD is a counit, then for all c ∈ C and d ∈ D,
c >< d = ((εC >< εD ) ⊗ IC><D )∆(c >< d)
which, written out, yields
dV v ⊗ cv(1) ⊗ cV(2) = dV ⊗ ∆(cV )
and this proves (2.49). Applying εC to the first and third factor, and εD to the sixth factor, we get
dV(1) ⊗ dv(2) ⊗ cV v = εC (cv(1) )dV(1) ⊗ dV(2) ⊗ cV(3)
(2.46) = dV(1) ⊗ dV(2) ⊗ εC (c(1) )cV(2)
= ∆(dV ) ⊗ cV
and (2.48) follows. The converse is left to the reader.
A morphism between two smash coproduct structures (C, D, V ) and (C ′, D′, V ′) consists of a pair of coalgebra maps (γ : C → C ′, δ : D → D′) such that
(δ ⊗ γ) ◦ V = V ′ ◦ (γ ⊗ δ)
CS(k) will be the category of smash coproduct structures over the ground ring k. The proofs of the next two Propositions are similar to the proofs of the corresponding Propositions 22 and 23. Details are left to the reader.
Proposition 29. (C, D, V ) ∈ CS(k) if and only if (Dcop , C cop , τ ◦ V ◦ τ ) ∈ CS(k). In this situation, the switch map τ : (C >< D)cop → Dcop >< C cop is an isomorphism of coalgebras.
Proposition 30. Let (C, D, V ) ∈ CS(k), and assume that V is invertible. Then (D, C, V −1 = W ) ∈ CS(k), and V : C >< D → D ><W C is an isomorphism of coalgebras, with inverse W .
Theorem 12. Let A be a k-algebra which is finitely generated and projective as a k-module, and let D be a coalgebra. Then there is a bijective correspondence between left-right entwining structures of the form (A, D, ψ) and smash coproduct structures of the form (A∗ , D, V ). In this situation, we have an isomorphism of categories A M(ψ)D ∼= MA∗><D .
Proof. Recall that the comultiplication on A∗ is given by the formula
∆(a∗ ) = Σi,j ⟨a∗, ai aj ⟩a∗i ⊗ a∗j    (2.50)
where {ai , a∗i | i = 1, · · · , n} is a dual basis for A. Assume that ψ : A ⊗ D → D ⊗ A defines a left-right entwining structure, and define V : A∗ ⊗ D → D ⊗ A∗ by
V (a∗ ⊗ d) = Σi ⟨a∗, aiψ ⟩dψ ⊗ a∗i    (2.51)
for all a∗ ∈ A∗ and d ∈ D. We will prove that (A∗ , D, V ) ∈ CS(k). To this end, we need to show that (2.46-2.49) hold. (2.46) and (2.47) are obvious. Let us check (2.48) first. For all a∗ ∈ A∗ and d ∈ D, we have
dV(1) ⊗ dv(2) ⊗ (a∗ )V v = Σi dV(1) ⊗ ⟨(a∗ )V , aiψ ⟩(d(2) )ψ ⊗ a∗i
= Σi,j ⟨a∗, ajΨ ⟩(d(1) )Ψ ⊗ ⟨a∗j , aiψ ⟩(d(2) )ψ ⊗ a∗i
= Σi ⟨a∗, aiψΨ ⟩(d(1) )Ψ ⊗ (d(2) )ψ ⊗ a∗i
(2.3) = Σi ⟨a∗, aiψ ⟩∆D (dψ ) ⊗ a∗i = ∆D (dV ) ⊗ (a∗ )V
For fun, we will also verify (2.49). For all a∗ ∈ A∗ and d ∈ D, we have
dV v ⊗ (a∗(1) )v ⊗ (a∗(2) )V = Σi ⟨a∗(1) , aiψ ⟩(dV )ψ ⊗ a∗i ⊗ (a∗(2) )V
= Σi,j ⟨a∗(1) , aiψ ⟩⟨a∗(2) , ajΨ ⟩dΨψ ⊗ a∗i ⊗ a∗j
= Σi,j ⟨a∗, aiψ ajΨ ⟩dΨψ ⊗ a∗i ⊗ a∗j
(2.7) = Σi,j ⟨a∗, (ai aj )ψ ⟩dψ ⊗ a∗i ⊗ a∗j
and
dV ⊗ ∆((a∗ )V ) = Σi ⟨a∗, aiψ ⟩dψ ⊗ ∆(a∗i )
= Σi,j,k ⟨a∗, aiψ ⟩⟨a∗i , aj ak ⟩dψ ⊗ a∗j ⊗ a∗k
= Σj,k ⟨a∗, (aj ak )ψ ⟩dψ ⊗ a∗j ⊗ a∗k
and (2.49) follows. Conversely, if (A∗ , D, V ) is a smash coproduct structure, then we define ψ : A ⊗ D → A ⊗ D by
ψ(a ⊗ d) = Σi ⟨(a∗i )V , a⟩ai ⊗ dV    (2.52)
We leave it to the reader to show that (A, D, ψ) is a left-right entwining structure, and that (2.51) and (2.52) define a bijection between entwining structures (A, D, ψ) and smash coproduct structures (A∗ , D, V ). To complete the proof, we define a functor F : A M(ψ)D → MA∗><D , with F (M ) = M as a k-module; a straightforward computation shows that the resulting map is a right A∗ >< D-coaction for all m ∈ M . Conversely, for a right A∗ >< D-comodule M , write its coaction as ρr (m) = Σj mj ⊗ (b∗j >< dj ). Then a left A-action and a right D-coaction are given by
a · m = Σj ⟨b∗j , a⟩ε(dj )mj and ρr (m) = Σj ⟨b∗j , 1⟩mj ⊗ dj
Theorem 13. Let C be a coalgebra, and A a finitely generated projective algebra. We have a bijective correspondence between right-left entwining structures (A, C, ϕ) ∈ • E• (k) and smash coproduct structures (C, A∗ , V ) ∈ CS(k). In this case, we have an isomorphism between the categories C M(ϕ)A and C><A∗ M.
We claim that (C, D, V ) ∈ CS(k). Conditions (2.46) and (2.47) follow immediately from respectively (1.22) and (1.13). Furthermore
∆D (dV ) ⊗ cV = d[0](1) ⊗ d[0](2) ⊗ cd[1]
(1.21) = d(1)[0] ⊗ d(2)[0] ⊗ cd(1)[1] d(2)[1]
= dV(1) ⊗ dv(2) ⊗ cV v
proving (2.48). (2.49) can be proved as follows:
dV v ⊗ cv(1) ⊗ cV(2) = (d[0] )v ⊗ cv(1) ⊗ c(2) d[1]
= d[0] ⊗ c(1) d[1] ⊗ c(2) d[2]
(1.13) = d[0] ⊗ ∆C (cd[1] ) = dV ⊗ ∆C (cV )
If H has a twisted antipode S, then V is invertible, with inverse
W (d ⊗ c) = cS(d[1] ) ⊗ d[0]
The resulting comultiplication on C >< H is the one studied in [38] (at least in the case where H has an invertible antipode, see [38, Remark p.1654]).
Let (C, D, V ) be a smash coproduct structure. Can it be obtained using Example 11? We have already discussed this question for entwining structures and smash product structures. As one might expect, the result for smash coproduct structures is similar, and uses the methods developed in [178].
Theorem 14. Let D be a finitely generated projective coalgebra, and let H′ = H(D∗ )cop (cf. Proposition 19). Then D is a right H′-comodule coalgebra. For any coalgebra C, there exists a bijective correspondence between right H′-module coalgebra structures on C and smash coproduct structures of the form (C, D, V ).
Proof. We know from Proposition 20 that D is a right H′-comodule coalgebra. If C is a right H′-module coalgebra, then Example 11 produces the required smash coproduct structure. Conversely, given (C, D, V ) ∈ CS(k), we make C into a right H′-module coalgebra by putting
c · [d∗ ⊗ d] = ⟨d∗, dV ⟩cV
Further details are left to the reader.
Example 12. Now let C be a left H-comodule coalgebra, and D a left H-module coalgebra. We now have a smash coproduct structure (C, D, V ), with
V (c ⊗ d) = c[−1] d ⊗ c[0]
(2.53)
The comultiplication on C >< D is then the smash coproduct comultiplication. If C is finitely generated and projective, then C ∗ is a right H-comodule algebra, with coaction c∗ → c∗[0] ⊗ c∗[1] determined by
⟨c∗, c[0] ⟩c[−1] = ⟨c∗[0] , c⟩c∗[1]    (2.54)
for all c ∈ C. Thus we obtain (H, C ∗ , D) ∈ • DK• (k), and (C ∗ , D, ψ) ∈ • E• (k). If C is finitely generated and projective, then ψ coincides with the map defined in the proof of Theorem 12. Indeed, if ψ : C ∗ ⊗ D → C ∗ ⊗ D is given by (2.52), then
ψ(c∗ ⊗ d) = Σi ⟨c∗, (ci )V ⟩c∗i ⊗ dV
(2.53) = Σi ⟨c∗, ci[0] ⟩c∗i ⊗ ci[−1] d
(2.54) = Σi ⟨c∗[0] , ci ⟩c∗i ⊗ c∗[1] d
= c∗[0] ⊗ c∗[1] d
as needed.
2.5 Adjoint functors for entwined modules
Let k be a commutative ring, and (α, γ) : (A, C, ψ) → (A′ , C ′ , ψ ′ ) a morphism in E•• (k). We will always assume that the coalgebras C and C ′ are flat as k-modules.
Lemma 8. We have a functor F : M(ψ)C A → M(ψ ′ )C′ A′ . For any M ∈ M(ψ)C A , F (M ) = M ⊗A A′ ∈ M(ψ ′ )C′ A′ , where A′ is viewed as a left A-module via α. The structure maps are given by the formulas
(m ⊗ a′ )b′ = m ⊗ a′ b′    (2.55)
ρr (m ⊗ a′ ) = (m[0] ⊗ a′ψ′ ) ⊗ γ(m[1] )ψ′    (2.56)
For a morphism f : M → N in M(ψ)C A , F (f ) = f ⊗ IA′ .
Proof. It is clear that (2.55) is well-defined. Let us show that (2.56) is well-defined. For all m ∈ M , a ∈ A and a′ ∈ A′ , we have
ρr (m ⊗ α(a)a′ ) = m[0] ⊗ (α(a)a′ )ψ′ ⊗ γ(m[1] )ψ′
(2.1) = m[0] ⊗ α(a)ψ′ a′Ψ′ ⊗ γ(m[1] )ψ′Ψ′
(2.5) = m[0] ⊗ α(aψ )a′Ψ′ ⊗ γ((m[1] )ψ )Ψ′
= m[0] aψ ⊗ a′Ψ′ ⊗ γ((m[1] )ψ )Ψ′
= (ma)[0] ⊗ a′Ψ′ ⊗ γ((ma)[1] )Ψ′
= ρr (ma ⊗ a′ )
Clearly (2.55) and (2.56) make M ⊗A A′ into a right A′ -module and a right C ′ -comodule. We still have to verify (2.17):
ρr ((m ⊗ a′ )b′ ) = ρr (m ⊗ a′ b′ ) = m[0] ⊗ (a′ b′ )ψ′ ⊗ γ(m[1] )ψ′
(2.1) = m[0] ⊗ a′ψ′ b′Ψ′ ⊗ γ(m[1] )ψ′Ψ′
= (m ⊗ a′ )[0] b′Ψ′ ⊗ ((m ⊗ a′ )[1] )Ψ′
as needed.
Let M ′ ∈ M(ψ ′ )C′ A′ . Since C is assumed to be k-flat, it follows from Lemma 2 that the natural map (M ′ □C ′ C) ⊗ C → M ′ □C ′ (C ⊗ C), mapping (Σi mi ⊗ ci ) ⊗ c to Σi mi ⊗ (ci ⊗ c), is an isomorphism.
Lemma 9. We have a functor G : M(ψ ′ )C′ A′ → M(ψ)C A . For any M ′ ∈ M(ψ ′ )C′ A′ , G(M ′ ) = M ′ □C ′ C ∈ M(ψ)C A , where C is viewed as a left C ′ -comodule via γ. The structure maps are given by the formulas
ρr (Σi mi ⊗ ci ) = Σi mi ⊗ ci(1) ⊗ ci(2)    (2.57)
(Σi mi ⊗ ci )a = Σi mi α(aψ ) ⊗ (ci )ψ    (2.58)
For a morphism f : M ′ → N ′ in M(ψ ′ )C′ A′ , G(f ) = f ⊗ IC .
Proof. It is easy to see that ρr (Σi mi ⊗ ci ) ∈ M ′ □C ′ (C ⊗ C), and from the flatness of C, we know that this can be viewed as an element of (M ′ □C ′ C) ⊗ C. Thus G(M ′ ) is a right C-comodule. Let us show that G(M ′ ) is also a right A-module. We have to show that (Σi mi ⊗ ci )a ∈ M ′ □C ′ C, or
Σi mi(0) α(aψ )Ψ ⊗ (mi(1) )Ψ ⊗ (ci )ψ = Σi mi α(aψ ) ⊗ γ(((ci )ψ )(1) ) ⊗ ((ci )ψ )(2)    (2.59)
We know that Σi mi ⊗ ci ∈ M ′ □C ′ C, or
Σi mi(0) ⊗ mi(1) ⊗ ci = Σi mi ⊗ γ(ci(1) ) ⊗ ci(2)    (2.60)
Using (2.3), we find that the right hand side of (2.59) is equal to
Σi mi α(aψΨ ) ⊗ γ((ci(1) )Ψ ) ⊗ (ci(2) )ψ
(2.5) = Σi mi α(aψ )Ψ ⊗ γ(ci(1) )Ψ ⊗ (ci(2) )ψ
(2.60) = Σi mi(0) α(aψ )Ψ ⊗ (mi(1) )Ψ ⊗ (ci )ψ
which is exactly the left hand side of (2.59). Finally, we check that (2.17) holds. For any m = Σi mi ⊗ ci ∈ M ′ □C ′ C, we have
ρr (ma) = ρr (Σi mi α(aψ ) ⊗ (ci )ψ )
= Σi mi α(aψ ) ⊗ ((ci )ψ )(1) ⊗ ((ci )ψ )(2)
(2.3) = Σi mi α(aψΨ ) ⊗ (ci(1) )Ψ ⊗ (ci(2) )ψ
= Σi (mi ⊗ ci(1) )aψ ⊗ (ci(2) )ψ = m[0] aψ ⊗ (m[1] )ψ
Theorem 15. Let (α, γ) : (A, C, ψ) → (A′ , C ′ , ψ ′ ) be a morphism in E•• (k), and let F and G be the functors defined in Lemmas 8 and 9. Then (F, G) is an adjoint pair of functors.
Proof. The unit η : 1 → GF and counit ε : F G → 1 are given by the following formulas, for all M ∈ M(ψ)C A and M ′ ∈ M(ψ ′ )C′ A′ :
ηM : M → GF (M ), ηM (m) = (m[0] ⊗ 1A′ ) ⊗ m[1]    (2.61)
εM ′ : F G(M ′ ) → M ′ , εM ′ ((Σi mi ⊗ ci ) ⊗ a′ ) = Σi εC (ci )mi a′    (2.62)
We leave it to the reader to verify that ηM and εM ′ are well-defined, and that they define natural transformations. In order to have an adjoint pair of functors, we need the commutativity of the two triangle diagrams
G → GF G → G and F → F GF → F
built from η ∗ G, G ∗ ε and F ∗ η, ε ∗ F , where ∗ is the Godement product. This means
G(εM ′ ) ◦ ηG(M ′ ) = IG(M ′ )    (2.63)
εF (M ) ◦ F (ηM ) = IF (M )    (2.64)
for all M ∈ M(ψ)C A and M ′ ∈ M(ψ ′ )C′ A′ . These two conditions are easily verified.
Example 13. Let (A, C, ψ) be a right-right entwining structure, and consider the morphism
(ηA , IC ) : (k, C, IC ) → (A, C, ψ)
As we have observed earlier, M(IC )C k ∼= MC is just the category of C-comodules. In this situation, the functor
G : M(ψ)C A → MC
is nothing else than the functor forgetting the right A-action. Its left adjoint F is given by F (M ) = M ⊗ A with structure maps
(m ⊗ a)b = m ⊗ ab ; ρr (m ⊗ a) = m[0] ⊗ aψ ⊗ (m[1] )ψ    (2.65)
In particular, F (C) = C ⊗ A is an entwined module, with structure
(c ⊗ a)b = c ⊗ ab ; ρr (c ⊗ a) = c(1) ⊗ aψ ⊗ (c(2) )ψ    (2.66)
In the special case where (A, C, ψ) corresponds to a Doi-Koppinen datum, we obtain a functor F : MC → M(ψ)C A left adjoint to the functor forgetting the A-action. Now the structure on C ⊗ A is given by the formulas
(c ⊗ a)b = c ⊗ ab ; ρr (c ⊗ a) = c(1) ⊗ a[0] ⊗ c(2) a[1]    (2.67)
Observe that this guarantees that M(ψ)C A always contains at least one object, namely C ⊗ A.
Example 14. Again, let (A, C, ψ) be a right-right entwining structure, but now consider the morphism
(IA , εC ) : (A, C, ψ) → (A, k, IA )
Recall that M(IA )k A ∼= MA , and now F : M(ψ)C A → MA is the functor forgetting the C-coaction. The right adjoint G of F is given by G(M ) = M ⊗ C with
(m ⊗ c)a = maψ ⊗ cψ ; ρr (m ⊗ c) = m ⊗ c(1) ⊗ c(2)    (2.68)
In particular, G(A) = A ⊗ C ∈ M(ψ)C A , with action and coaction given by
(b ⊗ c)a = baψ ⊗ cψ ; ρr (b ⊗ c) = b ⊗ c(1) ⊗ c(2)    (2.69)
If (A, C, ψ) corresponds to a Doi-Koppinen datum (H, A, C), then (2.69) takes the form
(b ⊗ c)a = ba[0] ⊗ ca[1] ; ρr (b ⊗ c) = b ⊗ c(1) ⊗ c(2)    (2.70)
We now have two objects in M(ψ)C A , namely C ⊗ A and A ⊗ C. In the next Proposition, we show that there is a morphism from one to the other, and that they are often isomorphic.
Proposition 31. Let (A, C, ψ) be a right-right entwining structure. Then ψ : C ⊗ A → A ⊗ C is a morphism in M(ψ)C A . If ψ is invertible, then C ⊗ A and A ⊗ C are isomorphic objects in M(ψ)C A .
Proof. We have to show that ψ is right A-linear and right C-colinear. ψ is right A-linear, since
ψ(c ⊗ ab) = (ab)ψ ⊗ cψ = aψ bΨ ⊗ cψΨ = ψ(c ⊗ a)b
ψ is right C-colinear, since
(ψ ⊗ IC )ρr (c ⊗ a) = (ψ ⊗ IC )(c(1) ⊗ aψ ⊗ (c(2) )ψ ) = aψΨ ⊗ (c(1) )Ψ ⊗ (c(2) )ψ
= aψ ⊗ (cψ )(1) ⊗ (cψ )(2) = ρr (ψ(c ⊗ a))
2.6 Two-sided entwined modules
Consider a right-right entwining structure (A, C, ψ) ∈ E•• (k) and a left-left entwining structure (B, D, ϕ) ∈ •• E(k). A two-sided entwined module is a k-module M having the structure of a left-left (B, D, ϕ)-module and a right-right (A, C, ψ)-module such that the following additional compatibility conditions hold:
1. M is a (B, A)-bimodule;
2. M is a (D, C)-bicomodule;
3. the right A-action is left D-colinear: ρl (ma) = m[−1] ⊗ m[0] a;
4. the left B-action is right C-colinear: ρr (bm) = bm[0] ⊗ m[1] .
The category of two-sided entwined modules will be denoted by D B M(ϕ, ψ)C A . In particular, we will be interested in the situation where A = B, C = D and ϕ = ψ −1 . Then A ⊗ C and C ⊗ A are isomorphic objects in M(ψ)C A , and the left-left version of the same result (use the left-right dictionary!) tells us that they are also isomorphic objects in C A M(ψ −1 ). The structure maps are the following. On C ⊗ A:
(c ⊗ a)b = c ⊗ ab
ρr (c ⊗ a) = c(1) ⊗ aψ ⊗ (c(2) )ψ
b(c ⊗ a) = cϕ ⊗ bϕ a
ρl (c ⊗ a) = c(1) ⊗ c(2) ⊗ a
On A ⊗ C:
(a ⊗ c)b = abψ ⊗ cψ
ρr (a ⊗ c) = a ⊗ c(1) ⊗ c(2)
b(a ⊗ c) = ba ⊗ c
ρl (a ⊗ c) = (c(1) )ϕ ⊗ aϕ ⊗ c(2)
We leave it to the reader to write down these structure maps in the situation where (H, A, C) ∈ DK•• (k), and H has a twisted antipode S. We have shown the following result.
Proposition 32. Let (A, C, ψ) ∈ E•• (k), and ϕ = ψ −1 . Then
ψ : C ⊗ A → A ⊗ C
is an isomorphism in C A M(ψ −1 , ψ)C A .
Proposition 33. Let (α, γ) : (A, C, ψ) → (A′ , C ′ , ψ ′ ) be a morphism in E•• (k), and (B, D, ϕ) ∈ •• E(k). Then we have a pair of adjoint functors
F : D B M(ϕ, ψ)C A → D B M(ϕ, ψ ′ )C′ A′ ; G : D B M(ϕ, ψ ′ )C′ A′ → D B M(ϕ, ψ)C A
Proof. We define F (M ) = M ⊗A A′ with right structure as defined in Section 2.5. The left structure is the one induced by the left structure on M :
b(m ⊗ a′ ) = bm ⊗ a′
ρl (m ⊗ a′ ) = m[−1] ⊗ m[0] ⊗ a′
In a similar way, G(M ′ ) = M ′ □C ′ C with right structure as in Section 2.5, and left structure
b(Σi mi ⊗ ci ) = Σi bmi ⊗ ci
ρl (Σi mi ⊗ ci ) = Σi mi(−1) ⊗ mi(0) ⊗ ci
Assume that ψ is invertible. We have seen that C ⊗ A ∼= A ⊗ C in C A M(ψ −1 , ψ)C A , and therefore
F (C ⊗ A) = (C ⊗ A) ⊗A A′ ∼= C ⊗ A′ ∼= F (A ⊗ C) = (A ⊗ C) ⊗A A′ ∈ C A M(ψ −1 , ψ ′ )C′ A′
and
G(F (C ⊗ A)) = ((C ⊗ A) ⊗A A′ )□C ′ C ∼= (C ⊗ A′ )□C ′ C ∼= G(F (A ⊗ C)) = ((A ⊗ C) ⊗A A′ )□C ′ C ∈ C A M(ψ −1 , ψ)C A
For later use, we give the structure maps:
(c ⊗ a′ )b′ = c ⊗ a′ b′    (2.71)
ρlr (c ⊗ a′ ) = c(1) ⊗ (c(2) ⊗ a′ψ′ ) ⊗ γ(c(3) )ψ′    (2.72)
b(c ⊗ a′ ) = cϕ ⊗ α(bϕ )a′    (2.73)
On ((C ⊗ A) ⊗A A′ )□C ′ C ∼= (C ⊗ A′ )□C ′ C, we have
(Σi (ci ⊗ a′i ) ⊗ di )b = Σi (ci ⊗ a′i α(bψ )) ⊗ (di )ψ    (2.74)
ρlr (Σi (ci ⊗ a′i ) ⊗ di ) = Σi ci(1) ⊗ (ci(2) ⊗ a′i ) ⊗ di(1) ⊗ di(2)    (2.75)
b(Σi (ci ⊗ a′i ) ⊗ di ) = Σi ((ci )ϕ ⊗ α(bϕ )a′i ) ⊗ di    (2.76)
On (A ⊗ C) ⊗A A′ , we have
b((a ⊗ c) ⊗ a′ )b′ = (ba ⊗ c) ⊗ a′ b′    (2.77)
ρr ((a ⊗ c) ⊗ a′ ) = ((a ⊗ c(1) ) ⊗ a′ψ′ ) ⊗ γ(c(2) )ψ′    (2.78)
ρl ((a ⊗ c) ⊗ a′ ) = (c(1) )ϕ ⊗ (aϕ ⊗ c(2) ) ⊗ a′    (2.79)
On ((A ⊗ C) ⊗A A′ )□C ′ C, we have
b(Σi ((ai ⊗ ci ) ⊗ a′i ) ⊗ di )e = Σi ((bai ⊗ ci ) ⊗ a′i α(eψ )) ⊗ (di )ψ    (2.80)
ρr (Σi ((ai ⊗ ci ) ⊗ a′i ) ⊗ di ) = Σi ((ai ⊗ ci ) ⊗ a′i ) ⊗ di(1) ⊗ di(2)    (2.81)
ρl (Σi ((ai ⊗ ci ) ⊗ a′i ) ⊗ di ) = Σi (ci(1) )ϕ ⊗ ((aiϕ ⊗ ci(2) ) ⊗ a′i ) ⊗ di    (2.82)
If ψ is not invertible, then we have no left A-action on C ⊗ A, since we need ψ −1 to describe this action. But we still have a left C-coaction, making C ⊗ A into an object of C k M(IC , ψ)C A ; consequently G(F (C ⊗ A)) ∈ C k M(IC , ψ)C A . In a similar way, we have that A ⊗ C, G(F (A ⊗ C)) ∈ k A M(IA , ψ)C A .
2.7 Entwined modules and comodules over a coring
Let A be a ring (with unit). An A-coring C is an (A, A)-bimodule together with two (A, A)-bimodule maps ∆C : C → C ⊗A C and εC : C → A such that the usual coassociativity and counit properties hold, i.e.
(∆C ⊗A IC ) ◦ ∆C = (IC ⊗A ∆C ) ◦ ∆C    (2.83)
(εC ⊗A IC ) ◦ ∆C = (IC ⊗A εC ) ◦ ∆C = IC    (2.84)
Corings were introduced by Sweedler, see [173]. A right C-comodule is a right A-module M together with a right A-module map ρr : M → M ⊗A C such that
(ρr ⊗A IC ) ◦ ρr = (IM ⊗A ∆C ) ◦ ρr    (2.85)
(IM ⊗A εC ) ◦ ρr = IM    (2.86)
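The motivating example of a coring, going back to Sweedler [173] and standard in the literature (recalled here for context, not part of the original text), is the canonical coring of a ring extension:

```latex
% For a ring morphism B \to A, the (A,A)-bimodule \mathcal{C} = A \otimes_B A
% is an A-coring, with
\begin{align*}
\Delta_{\mathcal{C}}(a \otimes_B a') &= (a \otimes_B 1) \otimes_A (1 \otimes_B a'),\\
\varepsilon_{\mathcal{C}}(a \otimes_B a') &= a a'.
\end{align*}
% Comodules over A \otimes_B A encode descent data for the extension B \to A.
```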
In a similar way, we can define left C-comodules and (C, C)-bicomodules. If R = k is a commutative ring, then an R-coring is nothing else than a k-coalgebra. We will use the Sweedler-Heyneman notation for corings and comodules over corings: ∆C (c) = c(1) ⊗A c(2) ; ρr (m) = m[0] ⊗A m[1] etc. A map f : M → N between (right) C-comodules is called a C-comodule map if f is a right A-module map, and
ρr (f (m)) = f (m[0] ) ⊗A m[1]
for all m ∈ M . MC is the category of right C-comodules and C-comodule maps. In a similar way, we introduce the categories
C M, C MC , A MC
For example, A MC is the category of right C-comodules that are also (A, A)-bimodules such that the right C-comodule map is left A-linear. Takeuchi observed recently that entwined modules can be considered as comodules over a certain coring; this has been investigated further by Brzeziński [26]. Let A be a k-algebra, and C a k-coalgebra. We say that an A-coring C factorizes through A and C if C ∼= A ⊗ C as k-modules and (identifying the elements of C and A ⊗ C):
(2.87) (2.88) (2.89)
for all a, b ∈ A and c ∈ C.
Theorem 16. Let k be a commutative ring, A a k-algebra, and C a k-coalgebra. There exists a bijective correspondence between the A-coring structures on C = A ⊗ C satisfying (2.87-2.89) and right-right entwining structures of the form (A, C, ψ).
Proof. Assume first that C = A ⊗ C satisfies (2.87-2.89), and define
ψ : C ⊗ A → A ⊗ C ; ψ(c ⊗ a) = (1 ⊗ c)a
We claim that (A, C, ψ) is a right-right entwining structure. Indeed, for all c ∈ C and a, b ∈ A, we have
ψ(c ⊗ ab) = (1 ⊗ c)(ab) = ((1 ⊗ c)a)b = (aψ ⊗ cψ )b = aψ (1 ⊗ cψ )b = aψ (bΨ ⊗ cψΨ ) = aψ bΨ ⊗ cψΨ
ψ(c ⊗ 1) = (1 ⊗ c)1 = 1 ⊗ c
proving (2.1) and (2.2). In (A ⊗ C) ⊗A (A ⊗ C) ∼= A ⊗ C ⊗ C, we have
∆C (aψ ⊗ cψ ) = aψ ⊗ ∆C (cψ )
and
∆C (aψ ⊗ cψ ) = ∆C ((1 ⊗ c)a) = ∆C (1 ⊗ c)a = (1 ⊗ c(1) ) ⊗A (1 ⊗ c(2) )a
= (1 ⊗ c(1) ) ⊗A (aψ ⊗ (c(2) )ψ ) = (1 ⊗ c(1) )aψ ⊗ (c(2) )ψ = aψΨ ⊗ (c(1) )Ψ ⊗ (c(2) )ψ
and (2.3) follows. Finally
εC (cψ )aψ = εC ((1 ⊗ c)a) = εC (1 ⊗ c)a = εC (c)a
proving (2.4). Conversely, let (A, C, ψ) be a right-right entwining structure. Being an object of k A M(IA , ψ)C A , C = A ⊗ C is an A-bimodule. ∆C and εC are defined by (2.88) and (2.89), and an immediate computation shows that (2.83) and (2.84) are satisfied.
Theorem 17. Let C = A ⊗ C be a coring factorized through A and C, and assume that (A, C, ψ) is the corresponding right-right entwining structure. Then we have an isomorphism of categories MC ∼= M(ψ)C A .
Proof. We have a functor F : MC → M(ψ)C A , with F (M ) = M as a k-module. The right A-action on F (M ) is the same as the one on M , and the right C-coaction on F (M ) is the C-comodule map on M followed by the isomorphism M ⊗A (A ⊗ C) ∼= M ⊗ C. We easily compute that
ρrF (M ) (ma) = ρrM (ma) = ρrM (m)a = m[0] ⊗A (1 ⊗ m[1] )a = m[0] ⊗A (aψ ⊗ (m[1] )ψ ) = m[0] aψ ⊗ (m[1] )ψ
as needed. We leave it to the reader to construct the inverse of F .
We have a left-handed version of Theorems 16 and 17. We say that a coring C factorizes through C and A if C ∼= C ⊗ A as k-modules, and (after identifying C and C ⊗ A):
(c ⊗ b)a = c ⊗ ba
∆C (c ⊗ a) = (c(1) ⊗ 1) ⊗A (c(2) ⊗ a)
εC (c ⊗ a) = εC (c)a
for all a, b ∈ A and c ∈ C.
Theorem 18. We have a bijective correspondence between A-coring structures on C = C ⊗ A and left-left entwining structures of the form (A, C, ϕ), and we have an isomorphism of categories
C M ∼= C A M(ϕ)
Proof. Similar to the proof of Theorems 16 and 17; we only mention that ϕ is given by the formula
a(c ⊗ 1) = ϕ(a ⊗ c)
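A minimal illustration of Theorems 16 and 17 (our own example, not from the text): the trivial entwining ψ = τ.

```latex
% Take \psi = \tau : C \otimes A \to A \otimes C, \ \tau(c \otimes a) = a \otimes c;
% conditions (2.1)-(2.4) hold trivially. The corresponding coring is
% \mathcal{C} = A \otimes C with the untwisted bimodule structure
\begin{align*}
a(b \otimes c)a' \;=\; aba' \otimes c,
\end{align*}
% and \Delta_{\mathcal{C}}, \varepsilon_{\mathcal{C}} as in (2.88), (2.89).
% Its comodules are right A-modules M with an A-linear coaction:
% \rho^r(ma) = m_{[0]}a \otimes m_{[1]}.
```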
Proposition 34. Consider (A, C, ψ) ∈ E•• (k), with ψ invertible, and put ψ −1 = ϕ. Then ψ : C ⊗ A → A ⊗ C is an isomorphism of corings. Consequently we have an isomorphism of categories
C A M(ψ −1 , ψ)C A ∼= C MC
Proof. A straightforward computation: for all a, b ∈ A and c ∈ C, we have
ψ(a(c ⊗ b)) = ψ(cϕ ⊗ aϕ b) = (aϕ b)ψ ⊗ cϕψ = aϕψ bΨ ⊗ cϕψΨ = abΨ ⊗ cΨ = aψ(c ⊗ b)
ψ((c ⊗ b)a) = (ba)ψ ⊗ cψ = bψ aΨ ⊗ cψΨ = (bψ ⊗ cψ )a = ψ(c ⊗ b)a
∆A⊗C (ψ(c ⊗ a)) = (aψ ⊗ (cψ )(1) ) ⊗A (1 ⊗ (cψ )(2) )
= (aψΨ ⊗ (c(1) )Ψ ) ⊗A (1 ⊗ (c(2) )ψ )
= (1 ⊗ c(1) )aψ ⊗A (1 ⊗ (c(2) )ψ )
= (1 ⊗ c(1) ) ⊗A (aψ ⊗ (c(2) )ψ )
= (ψ ⊗ ψ)((c(1) ⊗ 1) ⊗A (c(2) ⊗ a)) = (ψ ⊗ ψ)∆C⊗A (c ⊗ a)
Proposition 35. Let C be an A-coring, and write R = A Hom(C, A). The multiplication rule
(f · f ′ )(c) = f ′ (c(1) f (c(2) ))    (2.90)
(for f, f ′ ∈ R and c ∈ C) makes R into a ring with unit εC . The map ι : A → R, ι(a)(c) = εC (c)a, is a ring homomorphism. Consequently R can be viewed as an A-bimodule:
(af b)(c) = f (ca)b    (2.91)
Proof. It is easy to see that f · f ′ is left and right A-linear. The multiplication is associative since
(f · (f ′ · f ′′ ))(c) = (f ′ · f ′′ )(c(1) f (c(2) ))
= f ′′ ((c(1) f (c(2) ))(1) f ′ ((c(1) f (c(2) ))(2) ))
= f ′′ (c(1) f ′ (c(2) f (c(3) ))) = ((f · f ′ ) · f ′′ )(c)
It is easy to see that εC is a left and right unit. Finally
$$(af)(c) = (\iota(a)\cdot f)(c) = f\bigl(c_{(1)}\varepsilon_{\mathcal C}(c_{(2)})a\bigr) = f(ca)$$
and
$$(fb)(c) = (f\cdot\iota(b))(c) = \iota(b)\bigl(c_{(1)}f(c_{(2)})\bigr) = f(c)b$$
The ring $R = {}_A\mathrm{Hom}(\mathcal{C},A)$ is due to Sweedler [173]. In the case where $\mathcal{C} = A\otimes C$ factorizes through $A$ and $C$, the multiplication on $R$ is related to the Koppinen smash product:

Proposition 36. Let $(A,C,\psi)$ be a right-right entwining structure, and $\mathcal{C} = A\otimes C$. Then the canonical isomorphism of $k$-modules $R = {}_A\mathrm{Hom}(A\otimes C, A)\cong\#_{\psi\circ\tau}(C,A^{\mathrm{op}})$ is an isomorphism of $k$-algebras.

Proof. For $f,g\in R$, we have
$$(f\cdot g)(1\otimes c) = g\bigl((1\otimes c_{(1)})f(1\otimes c_{(2)})\bigr) = g\bigl(f(1\otimes c_{(2)})_\psi\otimes c_{(1)}^\psi\bigr) = f(1\otimes c_{(2)})_\psi\,g(1\otimes c_{(1)}^\psi)$$
and this is exactly the multiplication on the Koppinen smash product $\#_{\psi\circ\tau}(C,A^{\mathrm{op}})$ (for $(A^{\mathrm{op}}, C, \psi\circ\tau)\in{}^\bullet E(k)_\bullet$).

Proposition 37. Let $(A,C,\psi)$ be a right-right entwining structure, $\mathcal{C} = A\otimes C$, and assume that $C$ is finitely generated and projective as a $k$-module. Then $(A, C^{\mathrm{cop}}, \tau\circ\psi)\in{}^\bullet E(k)_\bullet$, and we can consider $C^{*\mathrm{op}}\#_R A$. We have an isomorphism of $k$-algebras
$$C^{*\mathrm{op}}\#_R A \cong R = {}_A\mathrm{Hom}(A\otimes C, A)$$

Proof. Let $\{c_i, c_i^*\mid i = 1,\dots,m\}$ be a dual basis for $C$. Take $f = c^*\# a$, $g = d^*\# b\in R\cong C^{*\mathrm{op}}\#_R A$. In $C^{*\mathrm{op}}\#_R A$, we have, using (2.33),
$$(c^*\# a)(d^*\# b) = d^*_R * c^*\# a_R b = \sum_i\langle d^*, c_i^\psi\rangle\,c_i^* * c^*\# a_\psi b$$
and
$$\bigl((c^*\# a)(d^*\# b)\bigr)(c) = \sum_i\langle d^*, c_i^\psi\rangle\langle c_i^*, c_{(1)}\rangle\langle c^*, c_{(2)}\rangle a_\psi b = \langle c^*, c_{(2)}\rangle a_\psi\langle d^*, c_{(1)}^\psi\rangle b = f(c_{(2)})_\psi\,g(c_{(1)}^\psi) = (f\cdot g)(c)$$
proving our result.
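One step left implicit in the proof of Proposition 35 is that $\iota$ is multiplicative; with the multiplication rule (2.90) this is a one-line check, using that $\varepsilon_{\mathcal C}$ is a bimodule map:

```latex
(\iota(a)\cdot\iota(b))(c)
  = \iota(b)\bigl(c_{(1)}\,\iota(a)(c_{(2)})\bigr)
  = \iota(b)\bigl(c_{(1)}\,\varepsilon_{\mathcal C}(c_{(2)})a\bigr)
  = \varepsilon_{\mathcal C}(c_{(1)})\,\varepsilon_{\mathcal C}(c_{(2)})\,ab
  = \varepsilon_{\mathcal C}(c)\,ab
  = \iota(ab)(c)
```

so $\iota(a)\cdot\iota(b) = \iota(ab)$, and $\iota(1) = \varepsilon_{\mathcal C}$ is the unit of $R$.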
Lemma 10. Consider an $A$-coring $\mathcal{C}$, which is finitely generated and projective as a left $A$-module, and let $\{c_k, f_k\mid k = 1,\dots,m\}$ be a finite dual basis for $\mathcal{C}$. Then
$$\sum_k f_k\otimes_A\Delta_{\mathcal C}(c_k) = \sum_{k,l} f_k\cdot f_l\otimes_A c_l\otimes_A c_k \quad(2.92)$$

Proof. The fact that $\{c_k, f_k\mid k = 1,\dots,m\}$ is a dual basis means
$$c = \sum_k f_k(c)c_k \quad\text{and}\quad f = \sum_k f_k f(c_k) \quad(2.93)$$
for all $c\in\mathcal{C}$ and $f\in R$. Now
$$\Delta_{\mathcal C}(c) = \sum_k f_k(c)c_{k(1)}\otimes_A c_{k(2)}$$
and
$$\Delta_{\mathcal C}(c) = c_{(1)}\otimes_A c_{(2)} = \sum_k c_{(1)}\otimes_A f_k(c_{(2)})c_k = \sum_k c_{(1)}f_k(c_{(2)})\otimes_A c_k$$
$$= \sum_{k,l} f_l\bigl(c_{(1)}f_k(c_{(2)})\bigr)c_l\otimes_A c_k = \sum_{k,l}(f_k\cdot f_l)(c)\,c_l\otimes_A c_k$$
and the result follows.

Proposition 38. Let $\mathcal{C}$ be an $A$-coring. We have a functor $F : \mathcal{M}^{\mathcal C}\to\mathcal{M}_R$. $F$ is an isomorphism if $\mathcal{C}$ is finitely generated and projective as a left $A$-module.

Proof. We put $F(M) = M$ with $mf = m_{[0]}f(m_{[1]})$ for all $f\in R$ and $m\in M$. $F(M)$ is a right $R$-module, since $m\varepsilon_{\mathcal C} = m_{[0]}\varepsilon_{\mathcal C}(m_{[1]}) = m$ and
$$(mf)f' = \bigl(m_{[0]}f(m_{[1]})\bigr)_{[0]}\,f'\Bigl(\bigl(m_{[0]}f(m_{[1]})\bigr)_{[1]}\Bigr) = m_{[0]}f'\bigl(m_{[1]}f(m_{[2]})\bigr) = m(f\cdot f')$$
Now let $\mathcal{C}$ be finitely generated and projective as a left $A$-module; take $M\in\mathcal{M}_R$, and put $G(M) = M$, with $\mathcal{C}$-comodule structure
$$\rho^r(m) = \sum_k mf_k\otimes_A c_k \quad(2.94)$$
$\rho^r$ defines a right $\mathcal{C}$-comodule structure on $M$ since
$$m_{[0]}\varepsilon_{\mathcal C}(m_{[1]}) = \sum_k mf_k\varepsilon_{\mathcal C}(c_k) = m\varepsilon_{\mathcal C} = m$$
and
$$(\rho^r\otimes_A I_{\mathcal C})(\rho^r(m)) = \sum_{k,l} m(f_k\cdot f_l)\otimes_A c_l\otimes_A c_k \overset{(2.92)}{=} \sum_k mf_k\otimes_A\Delta_{\mathcal C}(c_k) = (I_M\otimes_A\Delta_{\mathcal C})(\rho^r(m))$$
It is not difficult to show that $F$ and $G$ are functors, and that they are each other's inverses: for $M\in\mathcal{M}_R$, the structure on $FG(M)$ is given by
$$m\bullet f = \sum_k mf_k f(c_k) = mf$$
while for $M\in\mathcal{M}^{\mathcal C}$, the structure on $GF(M)$ is given by
$$\rho^r_{GF(M)}(m) = \sum_k m_{[0]}f_k(m_{[1]})\otimes_A c_k = m_{[0]}\otimes_A m_{[1]}$$
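As a sanity check (this example is not in the text), take the trivial coring $\mathcal{C} = A$, with $\Delta_{\mathcal C} : A\to A\otimes_A A$ the canonical isomorphism and $\varepsilon_{\mathcal C} = I_A$; it is free of rank one as a left $A$-module. The duality of Proposition 38 then collapses to a tautology:

```latex
R = {}_A\mathrm{Hom}(A,A)\;\cong\;A,\qquad f\mapsto f(1),
\qquad
(f\cdot f')(1) = f'\bigl(1\cdot f(1)\bigr) = f(1)\,f'(1)
```

so $R\cong A$ as rings, and the functor $F$ identifies $\mathcal{M}^{\mathcal C} = \mathcal{M}_A$ with $\mathcal{M}_R$.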
Corollary 1. Let $\mathcal{C}$ be an $A$-coring which is finitely generated and projective as a left $A$-module. Then $R = {}_A\mathrm{Hom}(\mathcal{C},A)\in{}_A\mathcal{M}^{\mathcal C}$. The $(A,A)$-bimodule structure is given by (2.91): $(afb)(c) = f(ca)b$, and the right $\mathcal{C}$-comodule structure is given by
$$\rho^r(f) = f_{[0]}\otimes_A f_{[1]} = \sum_k f\cdot f_k\otimes_A c_k \quad(2.95)$$
This means that
$$f_{[0]}(c)f_{[1]} = c_{(1)}f(c_{(2)}) \quad(2.96)$$

Proof. $R\in\mathcal{M}_R\cong\mathcal{M}^{\mathcal C}$. (2.95) is then obtained using (2.94). (2.96) follows immediately.
Corollary 2. Let $(A,C,\psi)$ be a right-right entwining structure, and assume that $\mathcal{C} = A\otimes C$ is finitely generated and projective as a left $A$-module, with dual basis $\{c_i, c_i^*\mid i = 1,\dots,m\}$ for $C$. Then $C^*\otimes A\in{}_A\mathcal{M}(I_C,\psi)^C_A$. The structure maps are
$$(c^*\otimes a)b = c^*\otimes ab \quad(2.97)$$
$$b(c^*\otimes a) = \sum_i\langle c^*, c_i^\psi\rangle\,c_i^*\otimes b_\psi a \quad(2.98)$$
$$\rho^r(c^*\otimes a) = \sum_i c_i^* * c^*\otimes a_\psi\otimes c_i^\psi \quad(2.99)$$
Proof. In Corollary 1, we take $\mathcal{C} = A\otimes C$, $R = {}_A\mathrm{Hom}(A\otimes C, A)\cong\#_R(C,A)\cong C^*\otimes A$, and we translate the structure on $R$ into the structure on $C^*\otimes A$. Let us do the left $A$-module structure in detail: for all $a,b\in A$, $c\in C$, and $c^*\in C^*$, we have
$$\bigl(b(c^*\otimes a)\bigr)(c) = \bigl(\iota(b)\cdot(c^*\otimes a)\bigr)(c) = \iota(b)(c_{(2)})_\psi\,(c^*\otimes a)(c_{(1)}^\psi) = \varepsilon_C(c_{(2)})b_\psi\langle c^*, c_{(1)}^\psi\rangle a$$
$$= \langle c^*, c^\psi\rangle b_\psi a = \sum_i\langle c^*, c_i^\psi\rangle\langle c_i^*, c\rangle b_\psi a$$
and (2.98) follows.

In Proposition 5, we have seen that the category of comodules over a flat $k$-coalgebra is a Grothendieck category. Brzeziński informed us that the same proof can be used to prove the following result:

Theorem 19. Let $\mathcal{C}$ be an $A$-coring. $\mathcal{M}^{\mathcal C}$ is a Grothendieck category and the forgetful functor $\mathcal{M}^{\mathcal C}\to\mathcal{M}_A$ is exact if and only if $\mathcal{C}$ is flat as a left $A$-module. In particular, if $(A,C,\psi)$ is a right-right entwining structure, and $C$ is flat as a $k$-module, then $\mathcal{M}(\psi)^C_A$ is a Grothendieck category.
2.8 Monoidal categories
Let A and B be algebras and coalgebras (but not necessarily bialgebras), and assume that we have (A, B, R) ∈ S(k) and (A, B, V) ∈ CS(k). Then we have an algebra structure (the smash product) and a coalgebra structure (the smash coproduct) on A ⊗ B; if these structures make A ⊗ B into a bialgebra, then we denote this bialgebra by AV R B. The smash biproduct has been investigated in [44] and, more extensively, in [18], [19] and [17]. In this Section, we restrict attention to the situation where A and B are bialgebras, and either R or V is the switch map. In these situations, we keep the old notation: Aτ R B = A#R B and C V τ D = C
(2.100) (2.101)
Proof. A direct consequence of Proposition 1: (2.100) is equivalent to (1.6), (2.101) is equivalent to (1.7), and (1.8-1.9) are automatically fulfilled.

We have a similar property for the smash coproduct:

Proposition 40. Let C and D be bialgebras, and (C, D, V) ∈ CS(k).
(2.102)
(2.103)
for all c, e ∈ C and d, f ∈ D.

Example 15. (cf. [185]) Let $H$ be a Hopf algebra, and $C$ an $H$-bicomodule coalgebra. Consider the map $V : C\otimes H\to H\otimes C$ given by
$$V(c\otimes h) = c_{[-1]}hS(c_{[1]})\otimes c_{[0]}$$
Then $(C,H,V)\in CS(k)$; the smash coproduct is denoted $C\times_r H$, and is called the right twisted smash coproduct. Now assume that $C$ is also a bialgebra. Then $C\times_r H$ is a bialgebra if and only if the following equations hold, for all $c,d\in C$ and $h,k\in H$:
$$c_{[-1]}hS(c_{[1]})\,d_{[-1]}kS(d_{[1]})\otimes c_{[0]}d_{[0]} = (cd)_{[-1]}(hk)S((cd)_{[1]})\otimes(cd)_{[0]} \quad(2.104)$$
$$1_{[-1]}S(1_{[1]})\otimes 1_{[0]} = 1_H\otimes 1_C \quad(2.105)$$
We will now present similar results for entwining structures; we will closely follow [54], where the Doi-Hopf case is discussed. The basic idea is the following: if $H$ is a bialgebra, then the categories ${}_H\mathcal{M}$ and $\mathcal{M}^H$ are monoidal categories: for two $H$-modules $M$ and $N$, we find the following $H$-module structure on $M\otimes N$:
$$h\cdot(m\otimes n) = h_{(1)}m\otimes h_{(2)}n \quad(2.106)$$
If $M$ and $N$ are $H$-comodules, then $H$ coacts on $M\otimes N$ as follows:
$$\rho^r(m\otimes n) = m_{[0]}\otimes n_{[0]}\otimes m_{[1]}n_{[1]} \quad(2.107)$$
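That (2.106) really defines an $H$-module structure uses the fact that $\Delta_H$ is an algebra map (a verification the text leaves implicit):

```latex
g\cdot\bigl(h\cdot(m\otimes n)\bigr)
  = g_{(1)}h_{(1)}m\otimes g_{(2)}h_{(2)}n
  = (gh)_{(1)}m\otimes(gh)_{(2)}n
  = (gh)\cdot(m\otimes n)
```

since $\Delta_H(gh) = \Delta_H(g)\Delta_H(h)$. A dual computation shows that (2.107) defines an $H$-coaction.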
In both cases, the unit object is k with respectively the trivial action and coaction. In fact, the two previous Propositions tell us how to make the category of modules over the smash product (resp. the category of comodules over the smash coproduct) into a monoidal category. We refer to [123, Chapter 7] for the definition of a monoidal category. Basically, a monoidal category is a category C together with a functor ⊗: C×C →C
and a unit object $k\in\mathcal{C}$ such that $\otimes$ is coherently associative, that is, we have isomorphisms $C\otimes(D\otimes E)\cong(C\otimes D)\otimes E$ and $C\otimes k\cong k\otimes C\cong C$
for all $C, D, E\in\mathcal{C}$, satisfying certain coherence conditions. In our setting, the associativity condition will always be the natural one, and there will be no need to worry about the coherence conditions. Now let $(A,C,\psi)$ be a left-right entwining structure, and assume that $A$ and $C$ are both bialgebras. For $M, N\in{}_A\mathcal{M}^C$, we put the following left $A$-action and right $C$-coaction on $M\otimes N$:
$$a(m\otimes n) = a_{(1)}m\otimes a_{(2)}n \quad(2.108)$$
$$\rho^r(m\otimes n) = m_{[0]}\otimes n_{[0]}\otimes m_{[1]}n_{[1]} \quad(2.109)$$
With this structure, $M\otimes N$ is a left $A$-module and a right $C$-comodule. $M\otimes N$ is an entwined module if and only if
$$\rho^r(a(m\otimes n)) = a_\psi(m_{[0]}\otimes n_{[0]})\otimes(m_{[1]}n_{[1]})^\psi$$
or
$$a_{(1)\psi}m_{[0]}\otimes a_{(2)\Psi}n_{[0]}\otimes m_{[1]}^\psi n_{[1]}^\Psi = a_{\psi(1)}m_{[0]}\otimes a_{\psi(2)}n_{[0]}\otimes(m_{[1]}n_{[1]})^\psi \quad(2.110)$$
for all $m\in M$, $n\in N$ and $a\in A$. It clearly suffices that
$$\Delta_A(a_\psi)\otimes(cd)^\psi = a_{(1)\psi}\otimes a_{(2)\Psi}\otimes c^\psi d^\Psi \quad(2.111)$$
for all $a\in A$ and $c,d\in C$. (2.111) is also necessary: we take $M = N = A\otimes C$ (with $a(b\otimes c) = ab\otimes c$, $\rho^r(b\otimes c) = b_\psi\otimes c_{(1)}\otimes c_{(2)}^\psi$), $m = 1_A\otimes c$, $n = 1_A\otimes d$. Then (2.110) amounts to
$$a_{\psi(1)}\otimes c_{(1)}\otimes a_{\psi(2)}\otimes d_{(1)}\otimes(c_{(2)}d_{(2)})^\psi = a_{(1)\psi}\otimes c_{(1)}\otimes a_{(2)\Psi}\otimes d_{(1)}\otimes c_{(2)}^\psi d_{(2)}^\Psi$$
and (2.111) follows after we apply $\varepsilon_C$ to the second and the fourth factor. $k$ is a left $A$-module, using the trivial $A$-action, and a right $C$-comodule, using the right $C$-coaction:
$$a\cdot x = \varepsilon_A(a)x \quad\text{and}\quad \rho^r(x) = x1_C \quad(2.112)$$
for all $x\in k$ and $a\in A$. Here we identified $k\otimes C$ and $C$. With this structure, $k$ is an entwined module if and only if
$$\varepsilon_A(a)1_C = \varepsilon_A(a_\psi)1_C^\psi \quad(2.113)$$
for all $a\in A$.
Theorem 20. Let $(A,C,\psi)$ be a left-right entwining structure, and assume that $A$ and $C$ are bialgebras. For $M, N\in{}_A\mathcal{M}(\psi)^C$, we define an $A$-action and a $C$-coaction on $M\otimes N$ using (2.108-2.109); $k$ is a left $A$-module and a right $C$-comodule using (2.112). Then $({}_A\mathcal{M}(\psi)^C, \otimes, k)$ is a monoidal category if and only if (2.111) and (2.113) hold for all $a\in A$ and $c,d\in C$. In this case, we say that $(A,C,\psi)$ is a monoidal entwining structure.

Proof. We leave it to the reader to show that the natural isomorphisms $(M\otimes N)\otimes P\cong M\otimes(N\otimes P)$ and $M\otimes k\cong k\otimes M\cong M$ of $k$-modules are also isomorphisms of entwined modules.

If $(A,C,\psi)$ is a monoidal entwining structure, then $A$ and $C$ can be made into objects of ${}_A\mathcal{M}(\psi)^C$:

Proposition 41. Let $(A,C,\psi)$ be a monoidal entwining structure. On $A$ and $C$, we consider the following left $A$-action and right $C$-coaction:
$$b\cdot a = ba \quad\text{and}\quad \rho^r(a) = \psi(1_C\otimes a) = a_\psi\otimes 1_C^\psi$$
$$a\cdot c = \varepsilon_A(a_\psi)c^\psi \quad\text{and}\quad \rho^r(c) = c_{(1)}\otimes c_{(2)}$$
Then $A$ and $C$ are entwined modules.

Proof. We will show that $A\in{}_A\mathcal{M}(\psi)^C$, and leave the other statement to the reader. First, $A$ is a right $C$-comodule, since
$$(I_A\otimes\varepsilon_C)\rho^r(a) = \varepsilon_C(1_C^\psi)a_\psi = \varepsilon_C(1_C)a = a$$
and
$$(I_A\otimes\Delta_C)\rho^r(a) = a_\psi\otimes\Delta(1_C^\psi) = a_{\psi\Psi}\otimes 1_C^\Psi\otimes 1_C^\psi = (\rho^r\otimes I_C)\rho^r(a)$$
$A\in{}_A\mathcal{M}(\psi)^C$ since
$$a_\psi b_{[0]}\otimes b_{[1]}^\psi = a_\psi b_\Psi\otimes 1_C^{\Psi\psi} = (ab)_\psi\otimes 1_C^\psi = \rho^r(ab)$$
Examples 6. 1. Let $H$ be a Hopf algebra, and $C$ a left $H$-module bialgebra. This means that $C$ is a bialgebra, and that $H$ acts on $C$ in such a way that $C$ is an $H$-module algebra and an $H$-module coalgebra. Now let $A$ be a bialgebra and a right $H$-comodule algebra such that the following compatibility relation holds, for all $a\in A$:
$$a_{[0](1)}\otimes a_{[0](2)}\otimes a_{[1](1)}\otimes a_{[1](2)} = a_{(1)[0]}\otimes a_{(2)[0]}\otimes a_{(1)[1]}\otimes a_{(2)[1]}$$
(2.114)
We know that (H, A, C) is a left-right Doi-Hopf datum, and we have a corresponding left-right entwining structure (A, C, ψ). It is straightforward to
check that $(A,C,\psi)$ is monoidal. In particular, let $H$ be cocommutative, and let $A = H$, with right $H$-coaction given by right comultiplication. Then (2.114) holds, and we have a monoidal entwining structure. As a special case, let $H = k$, and $A$ and $C$ bialgebras. Then $(A, C, I_{A\otimes C})$ is a monoidal entwining structure. 2. Our first example can be dualized: let $A$ be a right $H$-comodule bialgebra, and $C$ a bialgebra and a left $H$-module coalgebra such that
$$(h\cdot c)(h'\cdot c') = (hh')\cdot(cc')$$
(2.115)
Then we obtain a left-right entwining structure $(A,C,\psi)$ which is monoidal.

Remark 4. Assume that $C$ is finitely generated and projective. In Theorem 9, we have seen that we have a bijective correspondence between $(A,C,\psi)\in{}^\bullet E^\bullet(k)$ and $(A,C^*,R)\in S(k)$. It can be shown that $(A,C,\psi)$ is monoidal if and only if $A\#_R C^*$ is a bialgebra as in Proposition 39. Similar observations apply to the smash coproduct: if $A$ is finitely generated projective, then we have a one-to-one correspondence between $(A,C,\psi)\in{}^\bullet E^\bullet(k)$ and $(A^*,C,V)\in CS(k)$ (cf. Theorem 12); $(A,C,\psi)$ is monoidal if and only if the smash coproduct of $A^*$ and $C$ is a bialgebra as in Proposition 40.

A morphism of entwining structures
$$(\alpha,\gamma) : (A,C,\psi)\to(A',C',\psi')$$
is called monoidal if $\alpha$ and $\gamma$ are bialgebra maps.

Proposition 42. If $(\alpha,\gamma) : (A,C,\psi)\to(A',C',\psi')$ is monoidal, then the induction functor $F : {}_A\mathcal{M}(\psi)^C\to{}_{A'}\mathcal{M}(\psi')^{C'}$ is a monoidal functor. Its adjoint $G$ is also monoidal, if we view it as a functor between the corresponding opposite categories.

Proof. Take $M, N\in{}_A\mathcal{M}(\psi)^C$, and define a map
$$f = f_{M,N} : F(M\otimes N) = A'\otimes_A(M\otimes N)\to F(M)\otimes F(N) = (A'\otimes_A M)\otimes(A'\otimes_A N)$$
by
$$f(a'\otimes_A(m\otimes n)) = (a'_{(1)}\otimes_A m)\otimes(a'_{(2)}\otimes_A n)$$
It is easily verified that $f$ is a well-defined left $A'$-linear map; let us check that $f$ is also right $C'$-colinear. For all $a'\in A'$, $m\in M$ and $n\in N$, we have
$$(f\otimes I_{C'})\rho^r(a'\otimes(m\otimes n)) = a'_{\psi'(1)}\otimes_A m_{[0]}\otimes a'_{\psi'(2)}\otimes_A n_{[0]}\otimes\gamma(m_{[1]}n_{[1]})^{\psi'}$$
$$\overset{(2.111)}{=} a'_{(1)\psi'}\otimes_A m_{[0]}\otimes a'_{(2)\Psi'}\otimes_A n_{[0]}\otimes\gamma(m_{[1]})^{\psi'}\gamma(n_{[1]})^{\Psi'} = \rho\bigl(f(a'\otimes(m\otimes n))\bigr)$$
We also have a map $g : F(k) = A'\otimes_A k\to k$, given by $g(a'\otimes_A x) = \varepsilon_{A'}(a')x$. It is straightforward to verify that the maps $f_{M,N}$ and $g$ satisfy all the necessary coherence axioms for having a monoidal functor (cf. [123, Sec. XI.2]). The second part of the proof is formally dual to the proof of the first part. For two entwined modules $M', N'\in{}_{A'}\mathcal{M}(\psi')^{C'}$, we have a map
$$f' = f'_{M',N'} : G(M')\otimes G(N') = (M'\square_{C'}C)\otimes(N'\square_{C'}C)\to G(M'\otimes N') = (M'\otimes N')\square_{C'}C$$
given by
$$f'\Bigl(\sum_i(m_i\otimes c_i)\otimes(n_i\otimes d_i)\Bigr) = \sum_i(m_i\otimes n_i)\otimes c_i d_i$$
The map $g' : k\to G(k) = k\square_{C'}C$ is defined by $g'(x) = x\otimes 1_C$. We leave it to the reader to check that the $f'_{M',N'}$ and $g'$ are well-defined morphisms in ${}_A\mathcal{M}(\psi)^C$, and that they satisfy the axioms for having a monoidal functor.

We now consider the situation where $A' = A$ and $\alpha = I_A$. We keep the same notation as above: $F : {}_A\mathcal{M}(\psi)^C\to{}_A\mathcal{M}(\psi')^{C'}$ is the induction functor, and $G$ is its adjoint. Observe that $F(M) = M$ as a left $A$-module, with $C'$-coaction $\rho^r_{C'}(m) = m_{[0]}\otimes\gamma(m_{[1]})$. Our next result is a generalization of [152, Theorem 7.1 (1) and (3)] and [54, Theorem 3.1].

Theorem 21. Let $(I_A,\gamma) : (A,C,\psi)\to(A,C',\psi')$ be a monoidal morphism of monoidal entwining structures. Then
$$G(C') = C \quad(2.116)$$
Let $M\in{}_A\mathcal{M}(\psi)^C$ be flat as a $k$-module, and take $N\in{}_A\mathcal{M}(\psi')^{C'}$. If $C$ is a Hopf algebra, then
$$M\otimes G(N)\cong G(F(M)\otimes N) \quad\text{in } {}_A\mathcal{M}(\psi)^C \quad(2.117)$$
If $C$ has a twisted antipode $\overline{S}$, then
$$G(N)\otimes M\cong G(N\otimes F(M)) \quad\text{in } {}_A\mathcal{M}(\psi)^C \quad(2.118)$$
Proof. We know that $\varepsilon_{C'}\otimes I_C : C'\square_{C'}C\to C$ is an isomorphism; the inverse map is $(\gamma\otimes I_C)\circ\Delta_C$. It is clear that $\varepsilon_{C'}\otimes I_C$ is $A$-linear and $C$-colinear, and this proves (2.116). We now define a map
$$\Gamma : M\otimes G(N) = M\otimes(N\square_{C'}C)\to G(F(M)\otimes N) = (F(M)\otimes N)\square_{C'}C$$
by
$$\Gamma\Bigl(m\otimes\sum_i(n_i\otimes c_i)\Bigr) = \sum_i(m_{[0]}\otimes n_i)\otimes m_{[1]}c_i$$
1) $\Gamma$ is well-defined. We have to show that $\Gamma(m\otimes\sum_i(n_i\otimes c_i))\in(F(M)\otimes N)\square_{C'}C$. This may be seen as follows: for all $m\in M$ and $\sum_i n_i\otimes c_i\in N\square_{C'}C$, we have that
$$(\rho_{F(M)\otimes N}\otimes I_C)\Bigl(\sum_i m_{[0]}\otimes n_i\otimes m_{[1]}c_i\Bigr) = \sum_i m_{[0]}\otimes n_{i[0]}\otimes\gamma(m_{[1]})n_{i[1]}\otimes m_{[2]}c_i$$
$$= \sum_i m_{[0]}\otimes n_i\otimes\gamma(m_{[1]})\gamma(c_{i(1)})\otimes m_{[2]}c_{i(2)} = (I_{F(M)\otimes N}\otimes\rho^C)\Bigl(\sum_i m_{[0]}\otimes n_i\otimes m_{[1]}c_i\Bigr)$$
where $\rho^C(c) = \gamma(c_{(1)})\otimes c_{(2)}$ is the left $C'$-coaction on $C$.
2) $\Gamma$ is $A$-linear. For all $a\in A$, $m\in M$ and $\sum_i n_i\otimes c_i\in N\square_{C'}C$, we have that
$$\Gamma\Bigl(a\bigl(m\otimes\sum_i(n_i\otimes c_i)\bigr)\Bigr) = \Gamma\Bigl(a_{(1)}m\otimes\sum_i(a_{(2)\psi}n_i\otimes c_i^\psi)\Bigr) = \sum_i a_{(1)\Psi}m_{[0]}\otimes a_{(2)\psi}n_i\otimes m_{[1]}^\Psi c_i^\psi$$
$$\overset{(2.111)}{=} \sum_i a_{\psi(1)}m_{[0]}\otimes a_{\psi(2)}n_i\otimes(m_{[1]}c_i)^\psi = \sum_i a_\psi(m_{[0]}\otimes n_i)\otimes(m_{[1]}c_i)^\psi = a\,\Gamma\Bigl(m\otimes\sum_i(n_i\otimes c_i)\Bigr)$$
3) $\Gamma$ is $C$-colinear. For all $m\in M$ and $\sum_i n_i\otimes c_i\in N\square_{C'}C$, we have that
$$\rho\Bigl(\Gamma\bigl(m\otimes\sum_i(n_i\otimes c_i)\bigr)\Bigr) = \rho\Bigl(\sum_i(m_{[0]}\otimes n_i)\otimes m_{[1]}c_i\Bigr) = \sum_i(m_{[0]}\otimes n_i)\otimes m_{[1]}c_{i(1)}\otimes m_{[2]}c_{i(2)}$$
$$= (\Gamma\otimes I_C)\Bigl(\sum_i m_{[0]}\otimes(n_i\otimes c_{i(1)})\otimes m_{[1]}c_{i(2)}\Bigr) = (\Gamma\otimes I_C)\rho\Bigl(m\otimes\sum_i(n_i\otimes c_i)\Bigr)$$
Assume now that $C$ has an antipode, and define $\Psi : (F(M)\otimes N)\square_{C'}C\to M\otimes(N\square_{C'}C)$ by
$$\Psi\Bigl(\sum_i(m_i\otimes n_i)\otimes c_i\Bigr) = \sum_i m_{i[0]}\otimes\bigl(n_i\otimes S(m_{i[1]})c_i\bigr)$$
We have to show that $\Psi$ is well-defined. $M$ is flat, so $M\otimes(N\square_{C'}C)$ is the equalizer of the maps
$$I_M\otimes I_N\otimes\rho^C,\ I_M\otimes\rho_N\otimes I_C : M\otimes N\otimes C\rightrightarrows M\otimes N\otimes C'\otimes C$$
Now take $\sum_i(m_i\otimes n_i)\otimes c_i\in(F(M)\otimes N)\square_{C'}C$. Then
$$\sum_i(m_{i[0]}\otimes n_{i[0]})\otimes\gamma(m_{i[1]})n_{i[1]}\otimes c_i = \sum_i(m_i\otimes n_i)\otimes\gamma(c_{i(1)})\otimes c_{i(2)} \quad(2.119)$$
Now
$$(I_M\otimes I_N\otimes\rho^C)\Bigl(\sum_i m_{i[0]}\otimes(n_i\otimes S(m_{i[1]})c_i)\Bigr) = \sum_i m_{i[0]}\otimes n_i\otimes\gamma\bigl(S(m_{i[2]})c_{i(1)}\bigr)\otimes S(m_{i[1]})c_{i(2)}$$
and
$$(I_M\otimes\rho_N\otimes I_C)\Bigl(\sum_i m_{i[0]}\otimes(n_i\otimes S(m_{i[1]})c_i)\Bigr) = \sum_i m_{i[0]}\otimes n_{i[0]}\otimes n_{i[1]}\otimes S(m_{i[1]})c_i$$
Apply $(I_M\otimes\gamma\otimes I_C)\circ(I_M\otimes(\Delta_C\circ S))\circ\rho_M$ to the first factor of (2.119). Then we obtain that
$$\sum_i m_{i[0]}\otimes\gamma(S(m_{i[2]}))\otimes S(m_{i[1]})\otimes n_{i[0]}\otimes\gamma(m_{i[3]})n_{i[1]}\otimes c_i = \sum_i m_{i[0]}\otimes\gamma(S(m_{i[2]}))\otimes S(m_{i[1]})\otimes n_i\otimes\gamma(c_{i(1)})\otimes c_{i(2)}$$
Multiplying the second and the fifth factor, and also the third and the sixth factor, we obtain that
$$\sum_i m_{i[0]}\otimes n_{i[0]}\otimes n_{i[1]}\otimes S(m_{i[1]})c_i = \sum_i m_{i[0]}\otimes n_i\otimes\gamma(S(m_{i[2]}))\gamma(c_{i(1)})\otimes S(m_{i[1]})c_{i(2)}$$
or
$$(I_M\otimes\rho_N\otimes I_C)\Bigl(\Psi\bigl(\sum_i(m_i\otimes n_i)\otimes c_i\bigr)\Bigr) = (I_M\otimes I_N\otimes\rho^C)\Bigl(\Psi\bigl(\sum_i(m_i\otimes n_i)\otimes c_i\bigr)\Bigr)$$
Let us finally point out that $\Gamma$ and $\Psi$ are each other's inverses:
$$(\Gamma\circ\Psi)\Bigl(\sum_i(m_i\otimes n_i)\otimes c_i\Bigr) = \Gamma\Bigl(\sum_i m_{i[0]}\otimes(n_i\otimes S(m_{i[1]})c_i)\Bigr) = \sum_i(m_{i[0]}\otimes n_i)\otimes m_{i[1]}S(m_{i[2]})c_i = \sum_i(m_i\otimes n_i)\otimes c_i$$
In a similar way, we have for all $m\in M$ and $\sum_i n_i\otimes c_i\in N\square_{C'}C$ that
$$(\Psi\circ\Gamma)\Bigl(m\otimes\sum_i(n_i\otimes c_i)\Bigr) = \Psi\Bigl(\sum_i(m_{[0]}\otimes n_i)\otimes m_{[1]}c_i\Bigr) = \sum_i m_{[0]}\otimes\bigl(n_i\otimes S(m_{[1]})m_{[2]}c_i\bigr) = m\otimes\sum_i(n_i\otimes c_i)$$
This finishes the proof of (2.117). The proof of (2.118) is similar; we restrict here to giving the connecting isomorphisms:
$$\Gamma' : G(N)\otimes M\to G(N\otimes F(M)),\qquad \sum_i(n_i\otimes c_i)\otimes m\mapsto\sum_i(n_i\otimes m_{[0]})\otimes c_i m_{[1]}$$
$$\Psi' : G(N\otimes F(M))\to G(N)\otimes M,\qquad \sum_i(n_i\otimes m_i)\otimes c_i\mapsto\sum_i\bigl(n_i\otimes c_i\overline{S}(m_{i[1]})\bigr)\otimes m_{i[0]}$$

Corollary 3. Let $(A,C,\psi)$ be a monoidal entwining structure, and let $F = \bullet : {}_A\mathcal{M}(\psi)^C\to{}_A\mathcal{M}$ be the functor forgetting the $C$-coaction. For any flat entwined module $M$, we have an isomorphism
$$M\otimes C\cong F(M)\otimes C \quad(2.120)$$
in ${}_A\mathcal{M}(\psi)^C$. If $k$ is a field, then ${}_A\mathcal{M}(\psi)^C$ has enough injective objects, and any injective object in ${}_A\mathcal{M}(\psi)^C$ is a direct summand of an object of the form $I\otimes C$, where $I$ is an injective $A$-module.

Proof. We apply Theorem 21 to the monoidal morphism
$$(I_A,\varepsilon_C) : (A,C,\psi)\to(A,k,I_A)$$
Now
$$M\otimes C\cong M\otimes G(k)\cong G(F(M)\otimes k) = G(F(M)) = F(M)\otimes C$$
If $k$ is a field, then every $k$-module is flat, and (2.120) holds for every entwined module $M$. $M$ embeds into an injective object $I$ of ${}_A\mathcal{M}$. $F$ is exact, so its right adjoint $G = \bullet\otimes C$ preserves injective objects. Finally, we have the following monomorphisms in ${}_A\mathcal{M}(\psi)^C$:
$$M\xrightarrow{\,I_M\otimes\eta_C\,} M\otimes C\cong F(M)\otimes C\hookrightarrow I\otimes C = G(I)$$
If $M$ is injective, then the monomorphism $M\to I\otimes C$ splits, so $M$ is a direct summand of $I\otimes C$.

Corollary 4. Let $(A,C,\psi)$ be a monoidal entwining structure over a field $k$. Then the category of entwined modules ${}_A\mathcal{M}(\psi)^C$ is a Grothendieck category having enough injective objects.
For completeness' sake, let us mention the dual version of Theorem 21. We omit the proof, as it can be derived from the proof of Theorem 21 using duality arguments.

Theorem 22. Let $(\alpha, I_C) : (A,C,\psi)\to(A',C,\psi')$ be a monoidal morphism of monoidal entwining structures. Then
$$F(A) = A' \quad(2.121)$$
If $A'$ is a Hopf algebra, then
$$F(M)\otimes N\cong F(M\otimes G(N)) \quad(2.122)$$
for all $M\in{}_A\mathcal{M}(\psi)^C$ and $N\in{}_{A'}\mathcal{M}(\psi')^C$. If $A'$ has a twisted antipode, then
$$N\otimes F(M)\cong F(G(N)\otimes M) \quad(2.123)$$
Remark 5. Assume that $(A,C,\psi)$ is a monoidal entwining structure. It is possible to examine when the category of entwined modules is braided monoidal. This was done first in the case of Doi-Hopf modules, and it was shown that the braiding on the category of Yetter-Drinfeld modules can be considered as a special case (see [54]). Recently, Hobst and Pareigis [95] discussed the entwining case.
3 Frobenius and separable functors for entwined modules
We begin this Chapter with the introduction of separable functors and Frobenius functors. We discuss general properties, such as Maschke's Theorem and Rafael's Theorem, and the relationship between the two notions. The terminology comes from separable algebras and Frobenius algebras, as introduced in Section 1.3: an algebra is separable, resp. Frobenius, if and only if the restriction of scalars functor is separable, resp. Frobenius. We apply our results to Hopf algebras, recovering the classical result that a finite dimensional Hopf algebra is Frobenius, and the Larson-Sweedler version of Maschke's Theorem. The techniques can be easily applied to decide when forgetful functors from the category of entwined modules, or from the category of modules over a smash product, are separable or Frobenius. This leads to several new versions of Maschke's Theorem.
3.1 Separable functors and Frobenius functors
Frobenius functors
Let F : C → D be a covariant functor. If there exists a functor G : D → C which is at the same time a right and left adjoint of F, then we call F a Frobenius functor, and we say that (F, G) is a Frobenius pair for C and D. Observe that this notion is symmetric in F and G: if (F, G) is a Frobenius pair, then (G, F) is also a Frobenius pair. This concept was first introduced by Morita [141], and the terminology is inspired by the fact that, for a ring homomorphism R → S, the restriction of scalars functor is Frobenius if and only if S/R is Frobenius in the classical sense (see [105], [142]). We will make this clear in Section 3.1. Frobenius functors regained popularity recently (see [49] and [58]). We collect the following properties of Frobenius functors; we keep the notation introduced at the end of the previous Section. Our first result is nothing else than a restatement of one of the equivalent definitions of a pair of adjoint functors; nevertheless, it will be a key result in the further development of our theory.

Theorem 23. Let G be a right adjoint of F : C → D. Then (F, G) is a Frobenius pair if and only if there exist natural transformations ν ∈ V = Nat(GF, 1C) and ζ ∈ W = Nat(1D, FG) such that
S. Caenepeel, G. Militaru, and S. Zhu: LNM 1787, pp. 89–157, 2002. c Springer-Verlag Berlin Heidelberg 2002
$$F(\nu_C)\circ\zeta_{F(C)} = I_{F(C)} \quad(3.1)$$
$$\nu_{G(D)}\circ G(\zeta_D) = I_{G(D)} \quad(3.2)$$
for all $C\in\mathcal{C}$ and $D\in\mathcal{D}$.

From Theorem 2, we immediately obtain the following:

Proposition 43. Let (F, G) be a Frobenius pair. Then F and G preserve limits and colimits. If C and D are abelian, then F and G are exact and preserve injective and projective objects.

The following result is straightforward, and we omit the proof.

Proposition 44. The composition of two Frobenius functors is Frobenius. The direct sum of two Frobenius functors between abelian categories is Frobenius.

Assume that we know that (F, G) is an adjoint pair. We know from Proposition 11 that we have natural isomorphisms
$$\alpha : \mathrm{Nat}(GF, 1_{\mathcal C})\xrightarrow{\ \cong\ }\mathrm{Nat}\bigl(\mathrm{Hom}_{\mathcal D}(F,F), \mathrm{Hom}_{\mathcal C}(\bullet,\bullet)\bigr)$$
$$\beta : \mathrm{Nat}(1_{\mathcal D}, FG)\xrightarrow{\ \cong\ }\mathrm{Nat}\bigl(\mathrm{Hom}_{\mathcal C}(G,G), \mathrm{Hom}_{\mathcal D}(\bullet,\bullet)\bigr)$$
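The classical ring-theoretic situation invoked at the start of this section is worth keeping in mind as a guiding example (stated here informally; the detailed treatment of ring extensions comes later in this chapter, and the left/right conventions below are one of several equivalent formulations): for a ring homomorphism $R\to S$, the induction functor $F = -\otimes_R S$ together with its right adjoint, the restriction of scalars functor $G$, is a Frobenius pair precisely when $S/R$ is a Frobenius extension in the classical sense:

```latex
(F,G)\ \text{Frobenius}
\iff
\begin{cases}
S\ \text{is finitely generated and projective as an }R\text{-module, and}\\[2pt]
S\cong\mathrm{Hom}_R(S,R)\ \text{as }(R,S)\text{-bimodules.}
\end{cases}
```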
Assume that (F, G) is Frobenius, and let ν : GF → 1C and ζ : 1D → FG be the counit and unit of the adjunction (G, F). Put
$$\alpha(\nu) = \mathrm{Tr}_F : \mathrm{Hom}_{\mathcal D}(F,F)\to\mathrm{Hom}_{\mathcal C}(\bullet,\bullet)$$
$$\beta(\zeta) = \mathrm{Tr}_G : \mathrm{Hom}_{\mathcal C}(G,G)\to\mathrm{Hom}_{\mathcal D}(\bullet,\bullet)$$
the natural transformations corresponding to ν and ζ. $\mathrm{Tr}_F$ and $\mathrm{Tr}_G$ were introduced by Madar and Marcus [124], and are called the transfer maps associated to (F, G). They can be used to characterize Frobenius pairs:

Proposition 45. Let (F, G) be an adjoint pair of functors. (F, G) is a Frobenius pair if and only if there exist natural transformations $\mathrm{Tr}_F : \mathrm{Hom}_{\mathcal D}(F,F)\to\mathrm{Hom}_{\mathcal C}(\bullet,\bullet)$ and $\mathrm{Tr}_G : \mathrm{Hom}_{\mathcal C}(G,G)\to\mathrm{Hom}_{\mathcal D}(\bullet,\bullet)$ such that
$$F\bigl(\mathrm{Tr}_F(\varepsilon_{F(C)})\bigr)\circ\mathrm{Tr}_G(\eta_{GF(C)}) = I_{F(C)} \quad(3.3)$$
$$\mathrm{Tr}_F(\varepsilon_{FG(D)})\circ G\bigl(\mathrm{Tr}_G(\eta_{G(D)})\bigr) = I_{G(D)} \quad(3.4)$$
for all $C\in\mathcal{C}$ and $D\in\mathcal{D}$.
Proof. Let (F, G) be a Frobenius pair, and let $\mathrm{Tr}_F = \alpha(\nu)$, $\mathrm{Tr}_G = \beta(\zeta)$. (3.3) and (3.4) then follow immediately from (3.1) and (3.2). Conversely, if (F, G) is an adjoint pair, and $\mathrm{Tr}_F$ and $\mathrm{Tr}_G$ satisfy (3.3) and (3.4), then $\nu = \alpha^{-1}(\mathrm{Tr}_F)$ and $\zeta = \beta^{-1}(\mathrm{Tr}_G)$ satisfy (3.1) and (3.2).

Let (F, G) be an adjoint pair of functors. (F, G) is called a Frobenius pair of the second kind if there exist category equivalences α : C → C' and β : D → D' such that G is a left adjoint of β ∘ F ∘ α. For a more detailed study of Frobenius pairs of the second kind, we refer to [40].

Separable functors and Maschke's Theorem
Let F : C → D be a covariant functor. F induces a natural transformation
$$\mathcal{F} : \mathrm{Hom}_{\mathcal C}(\bullet,\bullet)\to\mathrm{Hom}_{\mathcal D}(F(\bullet),F(\bullet))\ ;\ \mathcal{F}_{C,C'}(f) = F(f)$$
We say that F is a separable functor if $\mathcal{F}$ splits, i.e. we have a natural transformation
$$\mathcal{P} : \mathrm{Hom}_{\mathcal D}(F(\bullet),F(\bullet))\to\mathrm{Hom}_{\mathcal C}(\bullet,\bullet)$$
such that $\mathcal{P}\circ\mathcal{F} = 1_{\mathrm{Hom}_{\mathcal C}(\bullet,\bullet)}$, the identity natural transformation on $\mathrm{Hom}_{\mathcal C}(\bullet,\bullet)$. Separable functors were introduced in [145], where the definition can be found in the following more explicit form: F is separable if and only if for all $C, C'\in\mathcal{C}$, we have a map
$$P_{C,C'} : \mathrm{Hom}_{\mathcal D}(F(C),F(C'))\to\mathrm{Hom}_{\mathcal C}(C,C')$$
such that the following conditions hold:
SF1 For any $f : C\to C'$ in $\mathcal{C}$, $P_{C,C'}(F(f)) = f$.
SF2 If we have morphisms $f : C\to C'$ and $f_1 : C_1\to C_1'$ in $\mathcal{C}$ and $g : F(C)\to F(C_1)$, $g' : F(C')\to F(C_1')$ in $\mathcal{D}$ such that
$$g'\circ F(f) = F(f_1)\circ g$$
in $\mathcal{D}$, then
$$P_{C',C_1'}(g')\circ f = f_1\circ P_{C,C_1}(g)$$
in $\mathcal{C}$.
The terminology comes from the fact that, for a ring homomorphism R → S, the restriction of scalars functor is separable if and only if S/R is separable. We will make this clear in Section 3.1. If F is separable, then for all $C, C'\in\mathcal{C}$, the map $\mathcal{F}_{C,C'} : \mathrm{Hom}_{\mathcal C}(C,C')\to\mathrm{Hom}_{\mathcal D}(F(C),F(C'))$ is injective, since it has a left inverse, and it follows that F is a faithful functor. Faithful functors are functors for which all $\mathcal{F}_{C,C'}$ have a left inverse; if the left inverse can be chosen to be natural in C and C', then the functor is separable. From the categorical point of view, a better name is perhaps naturally faithful functor, but we will stick to the terminology introduced in [145], and commonly used in the literature afterwards. Now we give some general properties of separable functors.

Proposition 46. Consider functors F : C → D and G : D → E.
1. If F and G are separable, then G ∘ F is also separable;
2. if G ∘ F is separable, then F is separable.

Proof. 1. is obvious; let us show 2. Consider the natural transformation $\mathcal{G} : \mathrm{Hom}_{\mathcal D}(\bullet,\bullet)\to\mathrm{Hom}_{\mathcal E}(G(\bullet),G(\bullet))$ induced by G, and $\mathcal{Q} : \mathrm{Hom}_{\mathcal E}(GF(\bullet),GF(\bullet))\to\mathrm{Hom}_{\mathcal C}(\bullet,\bullet)$ coming from the fact that G ∘ F is separable. $\mathcal{P} = \mathcal{Q}\circ\mathcal{G}$ satisfies SF1 and SF2.

Separable functors are important because they satisfy (a functorial version of) Maschke's Theorem: an exact sequence that is split after we apply the functor F is itself split.

Proposition 47. Let F : C → D be a separable functor, and f : C → C' a morphism in C. If F(f) has a left (or right, or two-sided) inverse g in D, then f has a left (or right, or two-sided) inverse in C, namely $P_{C',C}(g)$.

Proof. Let g be a left inverse of F(f). In D, we have the commutative square $g\circ F(f) = F(I_C)$,
and, using SF1 and SF2, we find the corresponding commutative square in C: the top arrow is $P_{C,C}(F(I_C)) = I_C : C\to C$, the vertical arrows are $f$ and $I_C$, and the bottom arrow is $P_{C',C}(g) : C'\to C$,
so $P_{C',C}(g)\circ f = I_C$.

Corollary 5. (Maschke's Theorem for separable functors) Let F : C → D be a separable functor, and consider a short exact sequence
$$0\to C'\to C\to C''\to 0 \quad(3.5)$$
in C. If the sequence
$$0\to F(C')\to F(C)\to F(C'')\to 0$$
is split in D, then (3.5) is also split.

Proposition 48. Let F : C → D be a separable functor between two categories C and D.
1. If F preserves epimorphisms, then F reflects projective objects;
2. if F preserves monomorphisms, then F reflects injective objects.

Proof. We will prove the first statement. Assume that M ∈ C is such that F(M) is projective, and take an epimorphism g : C → C' and a morphism f : M → C' in C. F(g) is also an epimorphism, so there exists u : F(M) → F(C) in D such that F(f) = F(g) ∘ u; that is, the square with top arrow u : F(M) → F(C), vertical arrows F(f) and F(g), and bottom arrow $F(I_{C'})$ commutes in D. F is separable, so the corresponding square in C commutes: $I_{C'}\circ f = g\circ P_{M,C}(u)$, so $f = g\circ P_{M,C}(u)$, and it follows that M is projective.
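A classical illustration of Propositions 46-48 (the example behind the name "Maschke"; it is not spelled out at this point in the text): let $G$ be a finite group with $|G|$ invertible in $k$, and let $F : \mathcal{M}_{kG}\to\mathcal{M}_k$ be the restriction functor. Then $F$ is separable, a splitting $P$ being given by the familiar averaging map:

```latex
P_{M,N}(\varphi)(m)\;=\;\frac{1}{|G|}\sum_{g\in G} g\cdot\varphi(g^{-1}\cdot m),
\qquad \varphi\in\mathrm{Hom}_k(M,N),\ m\in M.
```

Each $P_{M,N}(\varphi)$ is $kG$-linear, and SF1 holds: if $\varphi$ is already $kG$-linear, then every summand equals $\varphi(m)$, so $P_{M,N}(\varphi) = \varphi$. Corollary 5 then recovers the classical Maschke theorem: a short exact sequence of $kG$-modules that splits $k$-linearly also splits $kG$-linearly.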
Our next result is Rafael's Theorem, giving necessary and sufficient conditions for a functor having an adjoint to be separable (see [158] and [159]). Rafael's Theorem is a key result in the development of our further theory. In the sequel, G : D → C will be a right adjoint of F : C → D. In the proof of Theorem 15 we have explained that this is equivalent to the existence of natural transformations η : 1C → GF and ε : FG → 1D such that
$$G(\varepsilon_N)\circ\eta_{G(N)} = I_{G(N)} \quad(3.6)$$
$$\varepsilon_{F(M)}\circ F(\eta_M) = I_{F(M)} \quad(3.7)$$
for all M ∈ C and N ∈ D.

Theorem 24. (Rafael) Assume that the functor F : C → D has a right adjoint G.
1. F is separable if and only if the unit η : 1C → GF of the adjunction splits; by this we mean that there exists a natural transformation ν : GF → 1C such that ν ∘ η = 1_{1C}, the identity natural transformation on 1C;
2. G is separable if and only if the counit ε : FG → 1D of the adjunction cosplits; by this we mean that there exists a natural transformation ζ : 1D → FG such that ε ∘ ζ = 1_{1D}, the identity natural transformation on 1D.

Proof. We prove the first statement; the second one then follows by duality arguments. Let $\theta : \mathrm{Hom}_{\mathcal D}(F,F)\to\mathrm{Hom}_{\mathcal C}(\bullet,\bullet)$ be a natural transformation, and let $\nu = \alpha^{-1}(\theta)$, using Proposition 11. The naturality of θ entails that
$$\nu_C\circ\eta_C = \theta_{GF(C),C}(\varepsilon_{F(C)})\circ\eta_C = \theta_{C,C}\bigl(\varepsilon_{F(C)}\circ F(\eta_C)\bigr) = \theta_{C,C}(I_{F(C)}) = \theta_{C,C}(F(I_C))$$
Assume that F is separable, and take θ = P, the splitting from the definition of separability. Then we find ν : GF → 1C such that $\nu_C\circ\eta_C = I_C$, i.e. ν ∘ η is the identity natural transformation. Conversely, let ν : GF → 1C be natural, and put θ = α(ν). Then
$$\theta_{C,C'}(F(f)) = \nu_{C'}\circ GF(f)\circ\eta_C = f\circ\nu_C\circ\eta_C$$
Clearly if ν ∘ η is the identity natural transformation, then θ splits $\mathcal{F}$, and F is separable.

In Rafael's Theorem, we assume that (F, G) is an adjoint pair, and then deduce an easier criterion for F or G to be separable. If we know that (F, G) is Frobenius, then we can find an even easier necessary and sufficient condition for the separability of F or G. From Proposition 10, we obtain immediately the following:
Corollary 6. Let (F, G) be a Frobenius pair of functors. Then we have isomorphisms
$$\mathrm{Nat}(F,F)\cong\mathrm{Nat}(G,G)\cong\mathrm{Nat}(1_{\mathcal C},GF)\cong\mathrm{Nat}(FG,1_{\mathcal D})\cong\mathrm{Nat}(GF,1_{\mathcal C})\cong\mathrm{Nat}(1_{\mathcal D},FG)$$
For a Frobenius pair (F, G), we will write ν : GF → 1C and ζ : 1D → FG for the counit and unit of the adjunction (G, F). For all C ∈ C and D ∈ D, we then have
$$F(\nu_C)\circ\zeta_{F(C)} = I_{F(C)} \quad\text{and}\quad \nu_{G(D)}\circ G(\zeta_D) = I_{G(D)} \quad(3.8)$$
We can now state and prove the "Frobenius version" of Rafael's Theorem:

Proposition 49. [46] Let (F, G) be a Frobenius pair, and let η, ε, ν and ζ be as above. The following statements are equivalent:
1. F is separable;
2. there exists α ∈ Nat(F, F) with $\nu_C\circ G(\alpha_C)\circ\eta_C = I_C$ for all C ∈ C;
3. there exists β ∈ Nat(G, G) with $\nu_C\circ\beta_{F(C)}\circ\eta_C = I_C$ for all C ∈ C.
We have a similar characterization for the separability of G: the statements
1. G is separable;
2. there exists α ∈ Nat(F, F) with $\varepsilon_D\circ\alpha_{G(D)}\circ\zeta_D = I_D$ for all D ∈ D;
3. there exists β ∈ Nat(G, G) with $\varepsilon_D\circ F(\beta_D)\circ\zeta_D = I_D$ for all D ∈ D
are equivalent.

Proof. Assume that F is separable. By Rafael's Theorem, there exists $\tilde\nu\in\mathrm{Nat}(GF,1_{\mathcal C})$ such that $\tilde\nu_C\circ\eta_C = I_C$ for all C ∈ C. Let α : F → F be the corresponding natural transformation of Corollary 6, i.e. $\alpha_C = F(\tilde\nu_C)\circ\zeta_{F(C)}$; then $\tilde\nu_C = \nu_C\circ G(\alpha_C)$, and the first implication of the Proposition follows. The converse follows trivially from Rafael's Theorem. All the other equivalences can be proved in a similar way.

Let us finish this Section by fixing some notation for pairs of adjoint functors. Let F : C → D be a functor having a right adjoint G. As above, we let η : 1C → GF and ε : FG → 1D be the unit and counit of the adjunction. We will write
$$V = \mathrm{Nat}(GF,1_{\mathcal C}) \quad\text{and}\quad W = \mathrm{Nat}(1_{\mathcal D},FG)$$
V and W are classes, but in many particular examples they turn out to be sets. According to Theorems 23 and 24, the members of V and W decide whether F and G are Frobenius or separable functors. Remark also that there exists an associative multiplication on V and W. For ν, ν' ∈ V and ζ, ζ' ∈ W, we put
ν · ν′ = ν ∘ η ∘ ν′ and ζ · ζ′ = ζ ∘ ε ∘ ζ′   (3.9)
If F is separable, then there exists a right unit for the multiplication on V; if G is separable, then there exists a left unit for the multiplication on W. These one-sided units are idempotents and are called separability idempotents.

Relative injective and projective objects

Let F : C → D be a covariant functor with right adjoint G. An object M ∈ C is called relative injective if the following condition is satisfied: if i : C → C′ in C is such that F(i) : F(C) → F(C′) has a left inverse p in D, then for every f : C → M in C, there exists a morphism g : C′ → M such that f = g ∘ i, i.e. the diagram

    C --i--> C′
    f ↓    ↙ ∃g
    M
commutes. In a similar way, we define relative projective objects in D.
Proposition 50. As above, let (F, G) be an adjoint pair of functors. M ∈ C is relative injective if and only if the unit map ηM : M → GF(M) has a left inverse νM. N ∈ D is relative projective if and only if εN : FG(N) → N has a right inverse.
Proof. First assume that ηM has a section νM in C. Let i : C → C′, p : F(C′) → F(C) and f : C → M be as above. We define g : C′ → M as the composition
C′ --ηC′--> GF(C′) --G(p)--> GF(C) --GF(f)--> GF(M) --νM--> M
In order to prove that g ∘ i = f, it suffices to prove that
GF(f) ∘ G(p) ∘ ηC′ ∘ i = ηM ∘ f   (3.10)
since νM ∘ ηM = IM. We have a commutative diagram

    C --i--> C′
    ηC ↓     ↓ ηC′
    GF(C) --GF(i)--> GF(C′)

hence G(p) ∘ ηC′ ∘ i = G(p) ∘ GF(i) ∘ ηC = G(p ∘ F(i)) ∘ ηC = ηC
so (3.10) is equivalent to GF(f) ∘ ηC = ηM ∘ f, and this follows from the fact that η is a natural transformation.
Conversely, assume that M is relative injective, and put i = ηM : M → GF(M). F(i) = F(ηM) has a section, namely εF(M), since (F, G) is an adjoint pair of functors, so there exists g : GF(M) → M in C such that g ∘ i = g ∘ ηM = IM.
The statement about projective objects is similar.
Comparing Proposition 50 and Theorem 24, we obtain the following result:
Corollary 7. Let (F, G) be an adjoint pair of functors between C and D. If F is separable, then every object of C is relative injective, and if G is separable, then every object of D is relative projective.
Remark 6. In [47], a functor F : C → D is called a Maschke functor if every object of C is relative injective. It is called a dual Maschke functor if every object is relative projective. It can be shown that a separable functor is always Maschke and dual Maschke. The converse is not true in general. We refer to the forthcoming [47] for details.

Separable functors of the second kind

Let F : C → D and H : C → E be covariant functors. We then have functors
HomC(•, •), HomD(F, F), HomE(H, H) : C^op × C → Sets
and natural transformations
F : HomC(•, •) → HomD(F, F) ; H : HomC(•, •) → HomE(H, H)
given by
FC,C′(f) = F(f) ; HC,C′(f) = H(f)
for f : C → C′ in C.
Definition 4. The functor F is called H-separable if there exists a natural transformation P : HomD(F, F) → HomE(H, H) such that
P ∘ F = H   (3.11)
that is, H factors through F as a natural transformation, and we have a commutative diagram

    HomC(•, •) --F--> HomD(F, F)
        H ↓          ↙ P
    HomE(H, H)
An H-separable functor is also called a separable functor of the second kind; for a detailed study, we refer the reader to the forthcoming paper [47]. Obviously, a 1C-separable functor is the same thing as a separable functor. The basic properties are very similar. Adapting the proofs of Propositions 46 and 47, Corollary 5 and Theorem 24, we obtain the following results (see [47] for more detail).
Proposition 51. Consider functors F : C → D, F1 : D → D1 and H : C → E.
1. If F1 ∘ F is H-separable, then F is H-separable.
2. If F is H-separable, and F1 is separable, then F1 ∘ F is H-separable.
Proposition 52. Let F be an H-separable functor. If f : C → C′ in C is such that F(f) has a left, right, or two-sided inverse in D, then H(f) has a left, right, or two-sided inverse in E.
Corollary 8. (Maschke's Theorem for H-separable functors) Let C, D and E be abelian categories, and assume that F : C → D is H-separable. An exact sequence in C that becomes split after we apply the functor F also becomes split after we apply the functor H.
Theorem 25. (Rafael's Theorem for H-separability) Let G : D → C be a right adjoint of F : C → D, and consider functors H : C → E and K : D → E.
1. F is H-separable if and only if there exists a natural transformation ν : HGF → H such that
νC ∘ H(ηC) = IH(C)   (3.12)
for any C ∈ C.
2. G is K-separable if and only if there exists a natural transformation ζ : K → KFG such that
K(εD) ∘ ζD = IK(D)   (3.13)
for any D ∈ D.

Fully faithful functors

Let (F, G) be a pair of adjoint functors between the categories C and D. We have seen that V = Nat(GF, 1C) and W = Nat(1D, FG) decide whether F and G are separable or Frobenius (see Theorems 23 and 24). It is well-known that V and W can also be used to determine when F and G are fully faithful, and when they establish an equivalence of categories.
Theorem 26. Let G be a right adjoint of F : C → D.
1. F is fully faithful if and only if there exists ν ∈ V such that νC = ηC^{-1} for all C ∈ C;
2. G is fully faithful if and only if there exists ζ ∈ W such that ζD = εD^{-1} for all D ∈ D.
Proof. We will prove 1.; the proof of 2. is similar. First assume that F is fully faithful. For each C ∈ C, we consider εF(C) : FGF(C) → F(C). F is full, so εF(C) = F(νC) for some νC : GF(C) → C. Now
F(νC ∘ ηC) = F(νC) ∘ F(ηC) = εF(C) ∘ F(ηC) = IF(C) = F(IC)
and it follows that νC ∘ ηC = IC, since F is faithful. In a similar way, we compute that
εF(C) ∘ F(ηC ∘ νC) = F(νC) ∘ F(ηC) ∘ F(νC) = F(νC) = F(νC) ∘ F(IGF(C)) = εF(C) ∘ F(IGF(C))   (3.14)
(F, G) is an adjoint pair, and this implies that the map θC,D : HomC(C, G(D)) → HomD(F(C), D), θC,D(f) = εD ∘ F(f), is an isomorphism. (3.14) tells us that θGF(C),F(C) takes the same values at ηC ∘ νC and IGF(C), hence ηC ∘ νC = IGF(C). The naturality of ν now follows from the naturality of η.
Conversely, if ηC is an isomorphism for all C ∈ C, then we have isomorphisms
HomC(C, C′) ≅ HomC(C, GF(C′)) ≅ HomD(F(C), F(C′))
The composition of these two isomorphisms sends f : C → C′ to
θC,F(C′)(ηC′ ∘ f) = εF(C′) ∘ F(ηC′) ∘ F(f) = IF(C′) ∘ F(f) = F(f)
and we have shown that F is fully faithful.
By definition, an adjoint pair of functors (F, G) establishes a category equivalence if F and G are fully faithful.
Corollary 9. Let (F, G) be a category equivalence. Then (F, G) is a Frobenius pair, and F and G are separable.
3.2 Restriction of scalars and the smash product Let i : R → S be a ring homomorphism, and let F = • ⊗R S : MR → MS be the induction functor. The restriction of scalars functor G : MS → MR is a right adjoint of F ; the unit and counit of the adjunction are described as follows, for all M ∈ MR and N ∈ MS :
ηM : M → M ⊗R S ; ηM(m) = m ⊗ 1
εN : N ⊗R S → N ; εN(n ⊗ s) = ns
We will now use the results of the previous Sections to decide when F and G are Frobenius or separable. To this end, we give some explicit descriptions of V and W. Given ν : GF → 1MR in V, it is not hard to prove that ν = νR : S → R is a morphism of R-bimodules. Conversely, given an R-bimodule map ν : S → R, we can construct a natural transformation ν ∈ V; νM is given by νM(m ⊗ s) = mν(s). Thus we have
V ≅ V1 = RHomR(S, R)   (3.15)
Now let ζ : 1MS → FG be a natural transformation in W. By definition, ζS : S → S ⊗R S is a right S-module map. It is also a left S-module map: for s ∈ S, we consider the map fs : S → S, fs(t) = st. fs is a morphism in MS, so the naturality of ζ gives us a commutative diagram

    S --ζS--> S ⊗R S
    fs ↓      ↓ fs ⊗ IS
    S --ζS--> S ⊗R S

from which it follows that ζS(st) = sζS(t). Now e = ζS(1) = e1 ⊗ e2 ∈ S ⊗R S satisfies the following:
se1 ⊗ e2 = ζS(s) = e1 ⊗ e2 s   (3.16)
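As a concrete illustration of condition (3.16) — our addition, not part of the original text — take R = k ⊂ S = Mn(k) (scalar matrices inside n × n matrices). The classical Casimir element e = Σi Ei1 ⊗ E1i lies in W1, and e1e2 = 1, so it is a separability idempotent. Identifying S ⊗k S with Mn²(k) via the Kronecker product makes this checkable by direct computation; all helper names below are ours:

```python
n = 3

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def kron(a, b):  # Kronecker product, realizing the isomorphism S (x)_k S ~ M_{n^2}(k)
    return [[a[i // n][j // n] * b[i % n][j % n] for j in range(n * n)] for i in range(n * n)]

def madd(a, b):  # entrywise sum of two square matrices of equal size
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def unit(i, j):  # matrix unit E_{ij}
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

# Casimir element e = sum_i E_{i1} (x) E_{1i}; check s.e = e.s for a sample s.
s = [[1, 2, -1], [0, 3, 5], [7, -2, 4]]
zero2 = [[0] * (n * n) for _ in range(n * n)]
left = right = zero2
for i in range(n):
    left = madd(left, kron(matmul(s, unit(i, 0)), unit(0, i)))
    right = madd(right, kron(unit(i, 0), matmul(unit(0, i), s)))
assert left == right

# e1*e2 = sum_i E_{i1} E_{1i} = identity, so e is a separability idempotent.
prod = [[0] * n for _ in range(n)]
for i in range(n):
    prod = madd(prod, matmul(unit(i, 0), unit(0, i)))
assert prod == [[1 if r == c else 0 for c in range(n)] for r in range(n)]
```

Note that the normalization e1e2 = 1 already holds here, so no rescaling is needed, in contrast with the group-algebra case later in this chapter.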
for all s ∈ S. Conversely, if e satisfies (3.16), then we can recover ζ:
ζN : N → N ⊗R S ; ζN(n) = ne1 ⊗ e2
In the sequel, we will omit the summation symbol, and write e = e1 ⊗ e2, where it is understood implicitly that we have a summation. So we have
W ≅ W1 = SHomS(S, S ⊗R S) ≅ {e = e1 ⊗ e2 ∈ S ⊗R S | se1 ⊗ e2 = e1 ⊗ e2 s, for all s ∈ S}   (3.17)
Theorem 27. Let i : R → S be a ring homomorphism, F the induction functor, and G the restriction of scalars functor.
1. F is separable if and only if there exists a conditional expectation, that is, ν ∈ V1 such that ν(1) = 1. This means that S/R is a split extension in the sense of [153].
2. G is separable if and only if there exists a separability idempotent, that is, e ∈ W1 such that e1e2 = 1. In the case where R is commutative, this means that S/R is a separable extension in the sense of [66] and [109], as discussed in Section 1.3.
3. (F, G) is a Frobenius pair if and only if there exist ν ∈ V1 and e ∈ W1 such that
ν(e1)e2 = e1ν(e2) = 1   (3.18)
In the literature, (ν, e) is called a Frobenius pair.
Remarks 2. 1. Observe that the results in Theorem 27 are left-right symmetric. For example, MS → MR is separable if and only if SM → RM is separable. Part 2. explains the terminology for separable functors.
2. The monoid structure on W translates into a monoid structure on W1. The multiplication on W1 is given by the formula
e · f = e1f1 ⊗ f2e2
We also have an addition on W1, which makes W1 into a ring (usually without unit).
In Proposition 12, we have seen that a projective separable algebra over a commutative ring is finitely generated. For Frobenius algebras, we have the following result.
Corollary 10. We use the same notation as in Theorem 27. If (F, G) is a Frobenius pair, then S is finitely generated and projective as a (left or right) R-module.
Proof. For all s ∈ S, we have s = se1ν(e2) = e1ν(e2s), hence {e1, ν(e2•)} is a dual basis for S as a right R-module. In the same way, {e2, ν(•e1)} is a dual basis for S as a left R-module.
Using different descriptions of V and W, we find other criteria for F and G to be separable or Frobenius. Let HomR(S, R) be the set of right R-module homomorphisms from S to R. HomR(S, R) is an (R, S)-bimodule:
(rfs)(t) = rf(st)
(3.19)
for all f ∈ HomR(S, R), r ∈ R and s, t ∈ S.
Proposition 53. Let i : R → S be a ring homomorphism and use the notation introduced above. Then
V = Nat(GF, 1C) ≅ V2 = RHomS(S, HomR(S, R))
Proof. We define α1 : V1 → V2 as follows: for ν ∈ V1, let α1(ν) = φ : S → HomR(S, R) be given by
φ(s)(t) = ν(st)
Given φ ∈ V2, we put
α1^{-1}(φ) = φ(1)
We invite the reader to verify that α1 and α1^{-1} are well-defined and that they are each other's inverses.
Proposition 54. Let i : R → S be a ring homomorphism and assume that S is finitely generated and projective as a right R-module. Using the notation introduced above, we have that
W = Nat(1D, FG) ≅ W2 = RHomS(HomR(S, R), S)
Proof. Let {si, σi | i = 1, ..., m} be a finite dual basis of S as a right R-module. For all s ∈ S and f ∈ HomR(S, R), we have
s = Σi siσi(s) and f = Σi f(si)σi
We define β1 : W1 → W2 as follows: β1(e) = φ, with
φ(f) = f(e1)e2
for all f ∈ HomR(S, R). Let us show that φ is left R-linear and right S-linear:
φ(fs) = f(se1)e2 = f(e1)e2s = φ(f)s
φ(rf) = rf(e1)e2 = rφ(f)
Conversely, for φ ∈ W2, we put
β1^{-1}(φ) = e = Σi si ⊗ φ(σi)
e ∈ W1, since
Σi si ⊗ φ(σi)s = Σi si ⊗ φ(σis) = Σi,j si ⊗ φ(σi(ssj)σj) = Σi,j siσi(ssj) ⊗ φ(σj) = Σj ssj ⊗ φ(σj)
Finally, β1 and β1^{-1} are each other's inverses:
β1(β1^{-1}(φ))(f) = β1(Σi si ⊗ φ(σi))(f) = Σi f(si)φ(σi) = φ(Σi f(si)σi) = φ(f)
β1^{-1}(β1(e)) = Σi si ⊗ β1(e)(σi) = Σi si ⊗ σi(e1)e2 = Σi siσi(e1) ⊗ e2 = e
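Before reformulating Theorem 27, a minimal worked example may help (this sketch is ours, not part of the original text): for R = k and S = k[x]/(x²), the pair ν(a + bx) = b, e = x ⊗ 1 + 1 ⊗ x satisfies the Frobenius condition (3.18), while ν(1) = 0 and e1e2 = 2x show that S/k is neither split nor separable:

```python
# Elements of S = k[x]/(x^2) written as pairs (a, b) meaning a + b*x.
def mul(s, t):
    a, b = s
    c, d = t
    return (a * c, a * d + b * c)   # since x^2 = 0

one = (1, 0)
x = (0, 1)
nu = lambda s: s[1]                 # nu(a + b*x) = b, a k-bimodule map S -> k
e = [(x, one), (one, x)]            # e = x (x) 1 + 1 (x) x in S (x)_k S

# Frobenius condition (3.18): nu(e1)e2 = e1 nu(e2) = 1.
lhs = tuple(sum(nu(u) * v[i] for (u, v) in e) for i in range(2))
rhs = tuple(sum(u[i] * nu(v) for (u, v) in e) for i in range(2))
assert lhs == rhs == one

# But nu(1) = 0 (S/k is not split) and e1*e2 = 2x (not a separability idempotent).
e1e2 = tuple(sum(mul(u, v)[i] for (u, v) in e) for i in range(2))
assert nu(one) == 0 and e1e2 == (0, 2)
```

This is the standard example of an extension that is Frobenius but neither split nor separable.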
Theorem 28. Let i : R → S be a ring homomorphism. We use the notation introduced above.
1. F : MR → MS is separable if and only if there exists φ ∈ V2 such that φ(1)(1) = 1.
2. Assume that S is finitely generated and projective as a right R-module. Then G is separable if and only if there exists φ ∈ W2 such that
Σi siφ(σi) = 1
3. (F, G) is a Frobenius pair if and only if S is finitely generated and projective as a right R-module and HomR(S, R) and S are isomorphic as (R, S)-bimodules. This means that S/R is Frobenius in the sense of [105]. In the case where R = k is a field, we recover Definition 3.
Proof. The result is a translation of Theorem 27 in terms of V2 and W2, using Proposition 12 (for 2.) and Corollary 10 (for 3.). Let us prove one implication of 3. Assume that (F, G) is a Frobenius pair. From Corollary 10, we know that S is finitely generated and projective. Let ν ∈ V1 and e ∈ W1 be as in part 3. of Theorem 27, and take φ = α1(ν) ∈ V2, φ′ = β1(e) ∈ W2. For all f ∈ HomR(S, R) and s ∈ S, we have
(φ ∘ φ′)(f)(s) = ν(φ′(f)s) = ν(f(e1)e2s) = f(e1)ν(e2s) = f(se1)ν(e2) = f(se1ν(e2)) = f(s)
and
(φ′ ∘ φ)(s) = φ(s)(e1)e2 = ν(se1)e2 = ν(e1)e2s = s
We will now give alternative interpretations of V2 and W2. Consider the functor
F′ : MR → MS , F′(M) = HomR(S, M)
where the right S-action on HomR(S, M) is the following:
(f · s)(t) = f(st)
for all f ∈ HomR(S, M) and s, t ∈ S. It is easy to check that F′ is a right adjoint of the restriction of scalars functor G, so that S/R is Frobenius if and only if F and F′ are isomorphic functors. In fact we have the following
Proposition 55. Let i : R → S be a ring homomorphism, and assume that S is finitely generated and projective as a right R-module. Then
W ≅ Nat(F′, F) ≅ W2 = RHomS(HomR(S, R), S)
V ≅ Nat(F, F′) ≅ V2 = RHomS(S, HomR(S, R))
Proof. Take Φ : F′ → F. Then ΦR = φ ∈ W2. Conversely, for φ ∈ W2, we consider e = β1^{-1}(φ) ∈ W1, and we define Φ : F′ → F by
ΦM(f) = f(e1) ⊗ e2 ∈ M ⊗R S = F(M)   (3.20)
for all M ∈ MR and f ∈ F′(M) = HomR(S, M). Now if Φ′ : F → F′ is in V, then Φ′R = φ ∈ V2. Given φ ∈ V2, we take ν = α1^{-1}(φ) ∈ V1, and we define Φ′ ∈ V by
Φ′M(m ⊗ s)(t) = mν(st) = νM(m ⊗ st)
(F, G) is a Frobenius pair if and only if we can find φ ∈ W2 and φ′ ∈ V2 that are each other's inverses, or equivalently, the corresponding Φ and Φ′ are each other's inverses. One direction is easy; the other one can be seen as follows:
(ΦM ∘ Φ′M)(m ⊗ s) = Φ′M(m ⊗ s)(e1) ⊗ e2 = mν(se1) ⊗ e2 = m ⊗ s
(Φ′M ∘ ΦM)(f)(s) = Φ′M(f(e1) ⊗ e2)(s) = f(e1)ν(e2s) = f(se1ν(e2)) = f(s)
This provides an alternative explanation for the fact that (F, G) is a Frobenius pair if and only if F and F′ are isomorphic functors.
As before, let i : R → S be a ring homomorphism, and F and G the induction and restriction of scalars functors. Also assume that we know that S/R is Frobenius. Arguments similar to the proof of e.g. (3.15) show that
Nat(F, F) ≅ RHomS(S, S) ≅ CR(S)   (3.21)
To a natural transformation α : F → F, there corresponds αR ∈ HomS(S, S), and x = αR(1) ∈ CR(S). From Proposition 49, we conclude that the separability of the functors F and G (i.e. whether S/R is split or separable) can be decided using some x ∈ CR(S). The following result could be deduced from Proposition 49, but a direct proof can also be given.
Proposition 56. Assume that S/R is Frobenius, and let (ν, e) be a Frobenius pair, as in Theorem 27.
1. S/R is separable if and only if there exists some x ∈ CR(S) such that e1xe2 = 1.
2. S/R is split if and only if there exists some x ∈ CR(S) such that ν(x) = 1.
Proof. 1. For any x ∈ CR(S), e1 ⊗ xe2 is Casimir, and one implication follows. Conversely, if f = f1 ⊗ f2 is a separability idempotent, then x = ν(f1)f2 ∈ CR(S), and
e1 ⊗R xe2 = e1 ⊗R ν(f1)f2e2 = e1 ⊗R ν(e2f1)f2 = f1e1 ⊗R ν(e2)f2 = f1e1ν(e2) ⊗R f2 = f1 ⊗R f2 = f
and the first statement follows.
2. One direction is obvious: µ defined by µ(s) = ν(xs) is a conditional expectation. Conversely, assume that µ is a conditional expectation. Then x = µ(e1)e2 ∈ CR(S), and
ν(xs) = ν(µ(e1)e2s) = ν(µ(se1)e2) = µ(se1)ν(e2) = µ(se1ν(e2)) = µ(s)
and the result follows.
An immediate consequence is the following result, generalizing [5, Theorem 3.4].
Corollary 11. Let i : R → S be a morphism of commutative rings, and assume that S/R is Frobenius, with Frobenius pair (ν, e). Then S/R is separable if and only if e1e2 is invertible in S. In this case, the separability idempotent is unique.
Proof. According to Proposition 56, there exists x ∈ S such that e1xe2 = (e1e2)x = 1.
Abrams [5] calls e1e2 the characteristic element.

Application to the smash product

Let (A, B, R) be a smash product structure (over a commutative ring k), and take R = A, S = B#RA. For ν ∈ V1 = RHomR(S, R), define κ : B → A by
κ(b) = ν(b#1)
Then ν can be recovered from κ, since ν(b#a) = κ(b)a. Furthermore
aκ(b) = aν(b#1) = ν(bR#aR) = κ(bR)aR
and we find that
V ≅ V1 ≅ V3 = {κ : B → A | aκ(b) = κ(bR)aR}
(3.22)
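Returning for a moment to Corollary 11, here is a computational sketch (ours, not from the text): for S = Q[x]/(x² − 1) over R = Q, the pair ν(a + bx) = b, e = x ⊗ 1 + 1 ⊗ x is a Frobenius pair, and the characteristic element e1e2 = 2x is invertible (with inverse x/2), so S/Q is separable; over a field of characteristic 2 it is not.

```python
from fractions import Fraction as F

# Elements of S = Q[x]/(x^2 - 1) written as pairs (a, b) meaning a + b*x.
def mul(s, t):
    a, b = s
    c, d = t
    return (a * c + b * d, a * d + b * c)   # since x^2 = 1

one = (F(1), F(0))
xx = (F(0), F(1))
nu = lambda s: s[1]                          # nu(a + b*x) = b
e = [(xx, one), (one, xx)]                   # e = x (x) 1 + 1 (x) x

# (nu, e) is a Frobenius pair: nu(e1)e2 = e1 nu(e2) = 1, cf. (3.18).
assert tuple(sum(nu(u) * v[i] for (u, v) in e) for i in range(2)) == one
assert tuple(sum(u[i] * nu(v) for (u, v) in e) for i in range(2)) == one

# Characteristic element e1*e2 = 2x, invertible with inverse x/2.
char_elt = tuple(sum(mul(u, v)[i] for (u, v) in e) for i in range(2))
assert char_elt == (F(0), F(2))
assert mul(char_elt, (F(0), F(1, 2))) == one   # (2x)(x/2) = x^2 = 1
```

The invertibility of the characteristic element here reflects the decomposition Q[x]/(x² − 1) ≅ Q × Q, which fails in characteristic 2.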
Now we simplify the description of W ≅ W1 ⊂ (B#RA) ⊗A (B#RA). To this end, we observe that we have a k-module isomorphism
γ : (B#RA) ⊗A (B#RA) → B ⊗ B ⊗ A
defined by
γ((b#a) ⊗ (d#c)) = b ⊗ dR ⊗ aRc
γ^{-1}(b ⊗ d ⊗ c) = (b#1) ⊗ (d#c)
Now let W3 = γ(W1) ⊂ B ⊗ B ⊗ A. Take e = b1 ⊗ b2 ⊗ a2 ∈ B ⊗ B ⊗ A (summation implicitly understood). e ∈ W3 if and only if (3.16) holds, for all s = b#1 and s = 1#a with b ∈ B and a ∈ A, if and only if
bb1 ⊗ b2 ⊗ a2 = b1 ⊗ b2bR ⊗ (a2)R   (3.23)
(b1)R ⊗ (b2)r ⊗ aRra2 = b1 ⊗ b2 ⊗ a2a   (3.24)
for all a ∈ A, b ∈ B. We find
W ≅ W1 ≅ W3 = {e = b1 ⊗ b2 ⊗ a2 ∈ B ⊗ B ⊗ A | (3.23) and (3.24) hold}   (3.25)
Using these descriptions of V and W, we find immediately that Theorem 27 takes the following form.
Theorem 29. Let (B, A, R) be a factorization structure over a commutative ring k.
1. B#RA/A is separable (i.e. the restriction of scalars functor G : MB#RA → MA is separable) if and only if there exists e = b1 ⊗ b2 ⊗ a2 ∈ W3 such that
b1b2 ⊗ a2 = 1B ⊗ 1A ∈ B ⊗ A   (3.26)
2. B#RA/A is split (i.e. the induction functor F : MA → MB#RA is separable) if and only if there exists κ ∈ V3 such that
κ(1B) = 1A   (3.27)
3. B#RA/A is Frobenius (i.e. (F, G) is a Frobenius pair) if and only if there exist κ ∈ V3, e ∈ W3 such that
(b2)R ⊗ κ(b1)Ra2 = b1 ⊗ κ(b2)a2 = 1B ⊗ 1A   (3.28)
In the same style, we can reformulate Theorem 28. In our situation, HomR (S, R) = HomA (B#R A, A) ∼ = Hom(B, A) The (A, B#R A)-bimodule structure on Hom(B, A) is the following (cf. (3.19)): cf (b#a) (d) = cf (db)a for all a, c ∈ C and b, d ∈ B. From Proposition 53, we deduce that V ∼ = V2 ∼ = V4 = A HomB#R A (B#R A, Hom(B, A))
(3.29)
If B is finitely generated and projective as a k-module, then we find, using Proposition 54,
W ≅ W2 ≅ W4 = AHomB#RA(Hom(B, A), B#RA)   (3.30)
Theorem 28 now takes the following form:
Theorem 30. Let (B, A, R) be a factorization structure over a commutative ring k, and assume that B is finitely generated and projective as a k-module. Let {bi, b∗i | i = 1, ..., m} be a finite dual basis for B.
1. B#RA/A is separable if and only if there exists an (A, B#RA)-bimodule map φ : Hom(B, A) ≅ B∗ ⊗ A → B#RA such that
Σi (bi#1)φ(b∗i ⊗ 1A) = 1B#1A
2. B#RA/A is split if and only if there exists an (A, B#RA)-bimodule map φ : B#RA → Hom(B, A) such that
φ(1B#1A)(1B) = 1A
3. B#RA/A is Frobenius if and only if B∗ ⊗ A and B#RA are isomorphic as (A, B#RA)-bimodules. This is also equivalent to the existence of κ ∈ V3, e = b1 ⊗ b2 ⊗ a2 ∈ W3 such that the maps
φ : Hom(B, A) → B#RA ; φ(f) = f(b1)b2#a2
and
φ′ : B#RA → Hom(B, A) ; φ′(b#a)(d) = κ(bdR)aR
are each other's inverses.
The same method can be applied to the extension B#RA/B. There are two ways to proceed: as above, but applying the left-handed version of Theorem 28 (using the left-right symmetry of the notions of separable and Frobenius extension). Another possibility is the use of “op”-arguments: if (B, A, R) is a factorization structure, then
R̃ : Bop ⊗ Aop → Aop ⊗ Bop
makes (Aop, Bop, R̃) into a factorization structure, and it is not hard to see that we have an algebra isomorphism
(Aop#R̃Bop)op ≅ B#RA
Using the left-right symmetry again, we find that B#RA/B is Frobenius if and only if (Aop#R̃Bop)op/Bop is Frobenius if and only if (Aop#R̃Bop)/Bop is Frobenius, and we can apply Theorems 29 and 30. We invite the reader to
write down explicit results.
Now let (B, A, R) be a factorization structure, with R invertible; we write R̄ for the inverse of R. We also assume that A is a bialgebra. Our next aim is to compare the properties of the extensions A → B#RA and k → B. Let F : Mk → MB and F# : MA → MB#RA be the respective induction functors. We use similar notation for the restriction of scalars functors, and for the corresponding modules consisting of natural transformations, for example
V = Nat(GF, 1Mk) and V# = Nat(G#F#, 1MA)
Proposition 57. Let (B, A, R) be a factorization structure; assume that R is invertible and that A is a bialgebra. With notation as above, we have k-module homomorphisms
γ : V3# → V1 ; γ(κ) = εA ∘ κ
δ : W3# → W1 ; δ(b1 ⊗ b2 ⊗ a2) = b1 ⊗ ε((a2)R)(b2)R
Proof. The first property is obvious, since V1 = B∗. We have to show that δ is well-defined. From (3.23) and (2.28) (applied to the factorization structure (A, B, R)), we find
bb1 ⊗ (a2)R ⊗ (b2)R = b1 ⊗ (a2)RR ⊗ (b2bR)R = b1 ⊗ (a2)RRr ⊗ (b2)r(bR)S
Applying εA to the second factor, we see that
bb1 ⊗ ε((a2)R)(b2)R = b1 ⊗ ε((a2)R)(b2)Rb
so δ(b1 ⊗ b2 ⊗ a2) ∈ W1, as needed.
Corollary 12. Let (B, A, R) be a factorization structure, with R invertible and A a bialgebra. If A#RB/A is Frobenius, then B/k is Frobenius.
Proof. Take e = b1 ⊗ b2 ⊗ a2 ∈ W3 and κ ∈ V3 satisfying (3.28). It suffices to show that δ(e) and γ(κ) = ν satisfy (3.18). Applying R̄ to (3.28), and using (2.27) and (2.29) (applied to (B, A, R)), we find
1B ⊗ 1A = (b2)RR̄ ⊗ (κ(b1)Ra2)R̄ = (b2)RR̄r̄ ⊗ κ(b1)RR̄(a2)r̄ = (b2)r̄ ⊗ κ(b1)(a2)r̄
Applying εA to the second factor, we find
ν(b1)ε((a2)r̄)(b2)r̄ = 1B
proving one equality from (3.18). (3.18) also tells us that
b1 ⊗ κ(b2)a2 = 1B ⊗ 1A
or, using (3.22),
b1 ⊗ (a2)R̄κ((b2)R̄) = 1B ⊗ 1A
and, applying εA to the second factor:
b1ε((a2)R̄)ν((b2)R̄) = 1B
proving the second equality from (3.18).
and, applying εA to the second factor: b1 ε(a2R )ν(b2R ) = 1B proving the second equality from (3.18). For later use, we now will describe the natural transformation Φ : F → F from Proposition 55, in the case where S = B#R A, and R = A. We will assume that R is invertible, and that B is finitely generated and projective as a k-module. First, we give alternative descriptions of F, F : MA → MB#R A . Using the fact that R is invertible, we find F (M ) = M ⊗A (B#R A) ∼ =M ⊗B where the right B#R A-action on M ⊗ B is the following (m ⊗ b ) (b#a) = maR ⊗ (b b)R Also
(3.31)
F (M ) = HomA (B#R A, M ) ∼ = Hom(B, M ) ∼ = B∗ ⊗ M
with right B#R A-action given by (b∗ ⊗ m) (b#a) =
b∗ , bbiR b∗i ⊗ maR
(3.32)
i
where {bi , b∗i | i = 1, · · · , n} is a dual basis for B, as usual. Now (3.20) tells us, for all M ∈ MA , that ΦM : B ∗ ⊗ M → M ⊗ B is given by
ΦM (b∗ ⊗ m) = b∗ , b1 ma2R ⊗ b2R
(3.33)
where e = b1 ⊗ b2 ⊗ a2 ∈ W3. From the comments following Proposition 55, we conclude that B#RA/A is Frobenius if and only if e ∈ W3 can be chosen in such a way that ΦM is an isomorphism for every right A-module M.

Application to Hopf algebras

We now look more closely at the situation where H is a Hopf algebra over a commutative ring k. Then the additional structure on H allows us to simplify the conditions in Theorem 27. The results we present here go back to [117] (if k is a field), and [151] (if k is a commutative ring). First recall that t ∈ H is called a left (resp. right) integral in H if
ht = ε(h)t (resp. th = ε(h)t)
for all h ∈ H. ∫^l_H (resp. ∫^r_H) denote the k-modules consisting respectively of left and right integrals in H. In a similar way, we introduce left and right integrals in H∗ (or on H). These are functionals ϕ ∈ H∗ that have to verify respectively
h∗ ∗ ϕ = ⟨h∗, 1⟩ϕ (resp. ϕ ∗ h∗ = ⟨h∗, 1⟩ϕ)
for all h∗ ∈ H∗. The k-modules consisting of left and right integrals in H∗ are denoted by ∫^l_{H∗} and ∫^r_{H∗}. Let us first show that there is a close relation between integrals and the elements e ∈ W1 (cf. (3.17)).
Proposition 58. Let H be a Hopf algebra. We have the following maps
p : W1 → ∫^l_H ; p(e) = e1ε(e2)
p′ : W1 → ∫^r_H ; p′(e) = ε(e1)e2
i : ∫^l_H → W1 ; i(t) = t(1) ⊗ S(t(2))
i′ : ∫^r_H → W1 ; i′(t) = S(t(1)) ⊗ t(2)
satisfying
(p ∘ i)(t) = t ; (p′ ∘ i′)(t) = t
for every left (resp. right) integral t.
Proof. We will show that i(t) ∈ W1 if t is a left integral, and leave all the other assertions to the reader:
ht(1) ⊗ S(t(2)) = h(1)t(1) ⊗ S(t(2))S(h(2))h(3) = (h(1)t)(1) ⊗ S((h(1)t)(2))h(2) = (ε(h(1))t)(1) ⊗ S((ε(h(1))t)(2))h(2) = t(1) ⊗ S(t(2))h
Corollary 13. A Hopf algebra H is separable if and only if there exists a (left or right) integral t ∈ H such that ε(t) = 1.
Proof. An immediate consequence of Theorem 27: if t is a left integral with ε(t) = 1, then e = i(t) ∈ W1 satisfies e1e2 = t(1)S(t(2)) = ε(t)1 = 1. The converse is similar: if e1e2 = 1, then ε(p(e)) = ε(e1e2) = 1.
Corollary 14. A Hopf algebra H over a field k is finite dimensional semisimple if and only if there exists a (left or right) integral t ∈ H such that ε(t) = 1.
Proof. One direction follows immediately from Corollary 13 and Proposition 13. Conversely, if H is semisimple, then H = I ⊕ Ker(ε) for some two-sided ideal I of H. We claim that I ⊂ ∫^l_H: for z ∈ I and h ∈ H, we have h − ε(h)1 ∈ Ker(ε), so
hz = (h − ε(h)1)z + ε(h)z and hz = 0 + hz
are two decompositions of hz in Ker(ε) ⊕ I (note that (h − ε(h)1)z ∈ Ker(ε) and hz ∈ I, since both are ideals), so ε(h)z = hz, and z is a left integral. Choose z ≠ 0 in I (this is possible since I is one-dimensional). ε(z) ≠ 0 since z ∉ Ker(ε), and t = z/ε(z) is a left integral with ε(t) = 1.
We now want to characterize Hopf algebras that are Frobenius. This problem is closely connected to the Fundamental Theorem for Hopf modules:
Proposition 59. (Fundamental Theorem for Hopf modules) Let H be a Hopf algebra, and M ∈ M(H)_H^H a right-right Hopf module. Then we have an isomorphism α : M^{coH} ⊗ H → M of right-right Hopf modules. α and α^{-1} are given by the following formulas:
α(m ⊗ h) = mh and α^{-1}(m) = m[0]S(m[1]) ⊗ m[2]   (3.34)
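The integral criterion of Corollaries 13 and 14 is transparent for a group algebra H = kG: t = Σσ∈G σ is a two-sided integral with ε(t) = |G|, so kG is separable exactly when |G| is invertible in k, recovering Maschke's Theorem. A computational sketch for G = Z/3 (our illustration; all helper names are ours):

```python
n = 3  # G = Z/n; elements of kG are coefficient lists indexed by 0..n-1

def conv(u, v):  # multiplication in kG: convolution over Z/n
    w = [0] * n
    for i in range(n):
        for j in range(n):
            w[(i + j) % n] += u[i] * v[j]
    return w

def basis(g):    # the group element g as an element of kG
    b = [0] * n
    b[g] = 1
    return b

eps = sum        # counit: eps(sum c_g g) = sum c_g
t = [1] * n      # t = sum over all group elements

# t is a two-sided integral: g*t = eps(g)*t = t = t*g for every g in G.
for g in range(n):
    assert conv(basis(g), t) == t == conv(t, basis(g))

# eps(t) = |G|: t/|G| is an integral with eps = 1 whenever |G| is invertible in k.
assert eps(t) == n
```

When |G| = 0 in k, no rescaling of t can achieve ε(t) = 1, and indeed kG fails to be semisimple in that case.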
for all m ∈ M^{coH} (resp. m ∈ M) and h ∈ H. An immediate application is the following:
Proposition 60. Let H be a finitely generated projective Hopf algebra. H∗ is a left H∗-module (by multiplication), and therefore a right H-comodule. It is also a right H-module: we let
⟨h∗ ↼ h, k⟩ = ⟨h∗, kS(h)⟩
for all h∗ ∈ H∗ and h, k ∈ H. With these structure maps, H∗ is a right-right Hopf module, and (H∗)^{coH} ≅ ∫^l_{H∗}. Consequently we have an isomorphism
α : ∫^l_{H∗} ⊗ H → H∗ ; α(ϕ ⊗ h) = ϕ ↼ h   (3.35)
of right-right Hopf modules. In particular, it follows that ∫^l_{H∗} is a rank one projective k-module. Similar results hold for the right integral space.
Proof. Remark that the right H-action on H∗ is not the usual one. Recall that the usual H-bimodule structure on H∗ is given by
⟨h · h∗ · k, l⟩ = ⟨h∗, klh⟩
and we see immediately that
h∗ ↼ h = S(h) · h∗
Also observe that the right H-coaction on H∗ can be rewritten in terms of a dual basis {hi, h∗i | i = 1, ..., n} of H:
ρr(h∗) = Σi h∗i ∗ h∗ ⊗ hi
The only thing we have to check is the Hopf compatibility relation for H∗, i.e.
ρr(h∗ ↼ h) = (h∗[0] ↼ h(1)) ⊗ h∗[1]h(2)
for all h ∈ H, h∗ ∈ H∗. It suffices to prove that
⟨(h∗ ↼ h)[0], k⟩(h∗ ↼ h)[1] = ⟨h∗[0] ↼ h(1), k⟩h∗[1]h(2)   (3.36)
for all k ∈ H. We first compute the left hand side:
ρr(h∗ ↼ h) = Σi (h∗i ∗ (S(h) · h∗)) ⊗ hi
so
⟨(h∗ ↼ h)[0], k⟩(h∗ ↼ h)[1] = Σi ⟨h∗i ∗ (S(h) · h∗), k⟩hi = Σi ⟨h∗i, k(1)⟩⟨h∗, k(2)S(h)⟩hi = ⟨h∗, k(2)S(h)⟩k(1)
The right hand side of (3.36) equals
Σi ⟨h∗i ∗ h∗, kS(h(1))⟩hih(2) = Σi ⟨h∗i, k(1)S(h(2))⟩⟨h∗, k(2)S(h(1))⟩hih(3) = ⟨h∗, k(2)S(h(1))⟩k(1)S(h(2))h(3) = ⟨h∗, k(2)S(h(1))⟩k(1)ε(h(2))
as needed.
As an application of Proposition 60, we can prove that the antipode of a finitely generated projective Hopf algebra is always bijective.
Proposition 61. The antipode of a finitely generated projective Hopf algebra is bijective.
Proof. We know from Proposition 60 that J = ∫^l_{H∗} is projective of rank one. This implies that we have an isomorphism
J∗ ⊗ J → k ; p ⊗ ϕ ↦ p(ϕ)
Let Σl pl ⊗ ϕl be the inverse image of 1:
Σl pl(ϕl) = 1
The isomorphism α of Proposition 60 induces another isomorphism
α′ : H → J∗ ⊗ H∗ ; α′(h) = Σl pl ⊗ α(ϕl ⊗ h) = Σl pl ⊗ S(h) · ϕl
If S(h) = 0, then it follows from the above formula that α′(h) = 0, hence h = 0, since α′ is injective. Hence S is injective. The fact that S is surjective follows from a local-global argument. Let Q = Coker(S). For every prime ideal p of k, Coker(Sp) = Qp, since localization at a prime ideal is an exact functor. Now Hp/pHp is a finite dimensional Hopf algebra over the field kp/pkp, with antipode induced by Sp, the antipode of the localized kp-Hopf algebra Hp. The antipode of Hp/pHp is injective, hence bijective, by counting dimensions. Nakayama's Lemma implies that Sp is surjective, for all p ∈ Spec(k), and it follows that S is bijective.
Here is another application of Proposition 60:
Proposition 62. Let H be a finitely generated projective Hopf algebra. Then there exist ϕj ∈ ∫^l_{H∗} and hj ∈ H such that
Σj ⟨ϕj, hj⟩ = 1
and tj ∈ ∫^l_H and h∗j ∈ H∗ such that
Σj ⟨h∗j, tj⟩ = 1
Proof. Take α^{-1}(ε) = Σj ϕj ⊗ S^{-1}(hj) (the antipode is bijective by Proposition 61). Then
1k = ⟨εH, 1H⟩ = Σj ⟨ϕj ↼ S^{-1}(hj), 1H⟩ = Σj ⟨ϕj, hj⟩
The second statement follows after we apply the first one with H replaced by H∗.
The main result is now the following:
Theorem 31. For a Hopf algebra H, the following assertions are equivalent:
1. H/k is Frobenius;
2. H∗/k is Frobenius;
3. H is finitely generated and projective, and ∫^l_H is free of rank one;
4. H is finitely generated and projective, and ∫^r_H is free of rank one;
5. H is finitely generated and projective, and ∫^l_{H∗} is free of rank one;
6. H is finitely generated and projective, and ∫^r_{H∗} is free of rank one;
7. H is finitely generated and projective, and there exist t ∈ ∫^l_H and ϕ ∈ ∫^l_{H∗} such that ⟨ϕ, t⟩ = 1;
8. H is finitely generated and projective, and there exist u ∈ ∫^r_H and ψ ∈ ∫^r_{H∗} such that ⟨ψ, u⟩ = 1.
In conditions 7. and 8., the integrals can be chosen in such a way that they are generators of the integral space that they belong to.
Proof. 1. ⇒ 3. Theorem 27 implies the existence of ν ∈ V1 = H∗ and e ∈ W1 such that ν(e1)e2 = e1ν(e2) = 1. Take t = p(e) = ε(e2)e1 ∈ ∫^l_H. We claim that ∫^l_H is free with basis {t}. Take another left integral u ∈ ∫^l_H. Then
u = ue1ν(e2) = e1ν(e2u) = e1ν(ε(e2)u) = ε(e2)e1ν(u) = ν(u)t
and it follows that the map k → ∫^l_H sending x ∈ k to xt is surjective. This map is also injective: if xt = xε(e2)e1 = 0, then
0 = ν(xε(e2)e1) = ν(e1)xε(e2) = xε(e2ν(e1)) = x
5. ⇒ 2. and 5. ⇒ 1. Assume that ∫^l_{H∗} = kϕ, with ϕ a left integral, and consider the Hopf module isomorphism α : kϕ ⊗ H → H∗ from Proposition 60. We first consider the map
Θ : H → H∗ ; Θ(h) = α(ϕ ⊗ h) = S(h) · ϕ
α and Θ are right H-colinear, hence left H∗-linear. Θ is therefore an isomorphism of left H∗-modules, and it follows that H∗ is Frobenius. A slightly more subtle argument shows that H is Frobenius: we consider the map φ = Θ ∘ S^{-1} : H → H∗, i.e.
φ(h) = h · ϕ
We know from Proposition 61 that S is bijective, so φ is well-defined, and is a bijection. φ is left H-linear since
φ(kh) = (kh) · ϕ = k · (h · ϕ) = k · φ(h)
The equivalence of assertions 1.-6. now follows after we apply the above implications 1. ⇒ 3. and 5. ⇒ 1. with H replaced by H∗ (H is finitely generated and projective) or by Hop (the Frobenius property is symmetric).
3.2 Restriction of scalars and the smash product
5. ⇒ 7. Let ϕ be a free generator of ∫_l^{H*}, and consider the isomorphism φ : H → H*, φ(h) = h·ϕ. Let φ̄ be the inverse of φ, and put φ̄(ε) = t. This means that φ(t) = t·ϕ = ε, or ϕ(ht) = ε(h) for all h ∈ H, and, in particular, ⟨ϕ, t⟩ = 1. t is a left integral, since
⟨φ(ht), k⟩ = ⟨(ht)·ϕ, k⟩ = ⟨t·ϕ, kh⟩ = ε(kh) = ε(k)ε(h) = ε(h)⟨φ(t), k⟩
for all h, k ∈ H, implying that φ(ht) = ε(h)φ(t), and ht = ε(h)t, as needed.
We also have that t is a free generator for ∫_l^H. If u is another left integral, then
⟨φ(u), h⟩ = ⟨u·ϕ, h⟩ = ⟨ϕ, hu⟩ = ε(h)⟨ϕ, u⟩ = ⟨ϕ, ht⟩⟨ϕ, u⟩ = ⟨ϕ, h⟨ϕ, u⟩t⟩ = ⟨(⟨ϕ, u⟩t)·ϕ, h⟩ = ⟨φ(⟨ϕ, u⟩t), h⟩
implying φ(u) = φ(⟨ϕ, u⟩t) and u = ⟨ϕ, u⟩t. Assume that xt = 0, for some x ∈ k. Then φ(xt) = xε = 0, hence x = 0. This proves that t is a free generator for ∫_l^H.
7. ⇒ 1. The fact that ϕ is a left integral means that
⟨h*, h(1)⟩⟨ϕ, h(2)⟩ = ⟨h*, 1⟩⟨ϕ, h⟩
for all h ∈ H and h* ∈ H*, and it follows that
h(1)⟨ϕ, h(2)⟩ = ⟨ϕ, h⟩1
We now easily compute
⟨ϕ, t(2)⟩S(t(1)) = ⟨ϕ, t(3)⟩S(t(1))t(2) = ⟨ϕ, t⟩1 = 1
Now consider the left H-linear maps φ : H → H*, φ(h) = h·ϕ, and φ̄ : H* → H, φ̄(h*) = ⟨h*, S(t(1))⟩t(2). It is straightforward to compute that φ̄ is a left inverse of φ. Thus φ is injective, and a count of ranks as at the end of the proof of Proposition 61 tells us that φ is also surjective, hence H is Frobenius.

Remarks 3. 1. It follows from the preceding Theorem that any finite dimensional Hopf algebra over a field k is Frobenius.
2. In Proposition 58, we have seen that we have a map p : W1 → ∫_l^H, with a right inverse i. The map p is not an isomorphism. To see this, take a Frobenius Hopf algebra H (e.g. any finite dimensional Hopf algebra over a field). As we have seen, ∫_l^H is free of rank one over k. Using Proposition 54 and the fact that H* ≅ H as H-modules, we find that
W1 ≅ HomH(H*, H) ≅ HomH(H, H) ≅ H ≅ W2
and the rank of W1 equals the rank of H as a k-module.
3. Let t and ϕ be as in part 7. of the Theorem. From the proof of 7. ⇒ 1., it follows that (i(S(t)) = t(2) ⊗ S(t(1)), ϕ) is a Frobenius pair for H.
4. As an example, take H = kG, with G a finite group. Then G is a finite basis for kG, and let {vσ | σ ∈ G} be the corresponding dual basis, i.e. ⟨vσ, τ⟩ = δσ,τ. Then t = Σ_{σ∈G} σ and v1 are generators of the integrals in and on kG. The isomorphism φ : kG → kG* is given by φ(σ) = vσ^{-1}. Frobenius pairs for kG, resp. kG*, are respectively
(Σ_{σ∈G} σ ⊗ σ^{-1}, ϕ)  and  (Σ_{σ∈G} vσ ⊗ vσ, t)
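As a sanity check (a routine verification, not needed for the argument), the integral property of t and the normalization required in condition 7. of Theorem 31 can be confirmed directly for H = kG:

```latex
% For H = kG, take t = \sum_{\sigma \in G} \sigma and \varphi = v_1.
% t is a left integral: for every group element \tau \in G,
\tau t \;=\; \sum_{\sigma \in G} \tau\sigma
       \;=\; \sum_{\rho \in G} \rho          % substitute \rho = \tau\sigma
       \;=\; t \;=\; \varepsilon(\tau)\, t ,
% and the pairing is normalized, as in condition 7. of Theorem 31:
\langle v_1, t \rangle \;=\; \sum_{\sigma \in G} \langle v_1, \sigma \rangle
                       \;=\; \sum_{\sigma \in G} \delta_{1,\sigma} \;=\; 1 .
```

By linearity the first computation extends from group elements τ ∈ G to all h ∈ kG, giving ht = ε(h)t.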
We generalize the definition of integrals as follows: take α ∈ Alg(H, k) and g ∈ G(H), and define
∫_α^l = {t ∈ H | ht = α(h)t, for all h ∈ H}
∫_α^r = {t ∈ H | th = α(h)t, for all h ∈ H}
∫_g^l = {ϕ ∈ H* | h* ∗ ϕ = ⟨h*, g⟩ϕ, for all h* ∈ H*}
∫_g^r = {ϕ ∈ H* | ϕ ∗ h* = ⟨h*, g⟩ϕ, for all h* ∈ H*}
Of course we recover the previous definitions if α = ε and g = 1. We have the following generalization of Theorem 31.

Proposition 63. Let H be a Hopf algebra, and assume that H/k is Frobenius. Then for all α ∈ Alg(H, k) and g ∈ G(H), the integral spaces ∫_α^l, ∫_α^r, ∫_g^l and ∫_g^r are free k-modules of rank one.

Proof. Take t = α(e1)e2. Arguments almost identical to the ones used in the proof of 1. ⇒ 3. in Theorem 31 prove that t is a free generator of ∫_α^l. The statements for the other integral spaces follow by duality arguments.

Now assume that H is Frobenius, and write ∫_l^H = kt. It is easy to prove that th ∈ ∫_l^H, for all h ∈ H. Indeed, k(th) = (kt)h = ε(k)th for all k ∈ H. It follows that there exists a unique α(h) ∈ k such that
th = α(h)t
α : H → k is multiplicative, so we can restate our observation by saying that t ∈ ∫_α^r. We call α the distinguished element of H*. If α = ε, then we say that H is unimodular.
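The multiplicativity of α asserted above follows from a short computation, sketched here (using only the associativity of H and the freeness of ∫_l^H = kt):

```latex
% Evaluate t(hk) in two ways, using th = \alpha(h)t:
t(hk) \;=\; (th)k \;=\; \alpha(h)\,(tk) \;=\; \alpha(h)\alpha(k)\, t ,
\qquad\text{while}\qquad
t(hk) \;=\; \alpha(hk)\, t .
```

Since t is a free generator, α(hk) = α(h)α(k); taking h = k = 1 gives α(1) = 1, so indeed α ∈ Alg(H, k).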
Proposition 64. Let H be a Frobenius Hopf algebra, and α ∈ H* the distinguished element. Then ∫_α^r = ∫_l^H, and H is unimodular if and only if
∫_r^H = ∫_l^H
Proof. We know from Proposition 63 that ∫_α^r = kt' is free of rank one. For all h ∈ H, we have that ht' ∈ ∫_α^r, hence we find a unique multiplicative map β : H → k such that
ht' = β(h)t'
for all h ∈ H. Now we have that t = xt' for some x ∈ k, since t ∈ ∫_α^r. Thus
ε(h)t = ht = xht' = xβ(h)t' = β(h)t
for all h ∈ H. This implies that β = ε, since t is a free generator of ∫_l^H. It follows that t' ∈ ∫_l^H, proving the first statement.
If α = ε, then it follows that ∫_r^H = ∫_l^H. Conversely, if ∫_r^H = ∫_l^H, then t ∈ ∫_r^H, and this means that the distinguished element is equal to ε.

Frobenius modules and Λ-separability

Let R and S be rings. For a left R-linear map f : M → N, we will write (m)f for the image of m ∈ M under f. For a right linear map, we keep the usual notation f(m); this will make our formalism more transparent. To a bimodule Λ ∈ S MR, we can associate four new bimodules:
HomR(Λ, R) ∈ R MS :  (rϕs)(λ) = rϕ(sλ)
S Hom(Λ, S) ∈ R MS :  (λ)(rϕs) = ((λr)ϕ)s
HomR(Λ, Λ) ∈ S MS :  (sϕs')(λ) = sϕ(s'λ)
S Hom(Λ, Λ) ∈ R MR :  (λ)(rϕr') = ((λr)ϕ)r'
Λ is called a Frobenius bimodule ([2], [100]) if Λ is finitely generated and projective as a left S-module and as a right R-module, and
HomR(Λ, R) ≅ S Hom(Λ, S) in R MS
S is called Λ-separable over R, or Λ is called a separable bimodule (see [171]), if the map
µΛ : Λ ⊗R S Hom(Λ, S) → S, µΛ(λ ⊗R ϕ) = (λ)ϕ
is split as a map of S-bimodules, or, equivalently, if there exists e = Σi λi ⊗R ϕi ∈ Λ ⊗R *Λ = Λ ⊗R S Hom(Λ, S) such that
Σi (λi)ϕi = 1 and se = es
for all s ∈ S. We will show that these notions can also be introduced using Frobenius and separable functors, and we will classify all (additive) Frobenius functors between module categories. The induction functor
F = Λ ⊗R • : R M → S M, F(M) = Λ ⊗R M
has a right adjoint, the coinduction functor
G = S Hom(Λ, •) : S M → R M, G(N) = S Hom(Λ, N)
with the left R-action on G(N) = S Hom(Λ, N) given by
(λ)(rf) = (λr)f
for all λ ∈ Λ, r ∈ R and f ∈ S Hom(Λ, N). The unit and counit of the adjunction are
ηM : M → GF(M) = S Hom(Λ, Λ ⊗R M) ; (λ)(ηM(m)) = λ ⊗ m
εN : FG(N) = Λ ⊗R S Hom(Λ, N) → N ; εN(λ ⊗ f) = (λ)f
The converse also holds: if (F, G) is an adjoint pair between R M and S M, then F and G are additive (see [11, I.7.2]). F has a right adjoint, and therefore preserves cokernels and arbitrary coproducts (see [11, I.7.1]), and from the Eilenberg-Watts Theorem (see [11, II.2.3]), it follows that F ≅ Λ ⊗R • for some Λ ∈ S MR. From the uniqueness of the adjoint, it follows that G ≅ S Hom(Λ, •). We also consider the functor
G1 : S M → R M ; G1(N) = *Λ ⊗S N
We have a natural transformation γ : G1 → G given by
γN : *Λ ⊗S N → S Hom(Λ, N) ; (λ)(γN(f ⊗ n)) = ((λ)f)n
If Λ is finitely generated and projective as a left S-module, then γ is a natural isomorphism. Now take an (R, S)-bimodule X, and consider the functor
G2 : S M → R M ; G2(N) = X ⊗S N
When is (F, G2) an adjoint pair? From the uniqueness of the adjoint, we see that an equivalent question is: when is G ≅ G2? Or, in other words, when is the functor G2 representable? The clue to the answer is the following Lemma, where we also consider the functors
G2' = • ⊗S Λ : MS → MR and F' = • ⊗R X : MR → MS
Lemma 11. Let R, S, Λ, X, F, G, G1, G2 be as above. Then we have isomorphisms
Nat(1R M, G2 F) ≅ Nat(1MR, G2'F') ≅ R HomR(R, X ⊗S Λ) ≅ CR(X ⊗S Λ) = {Σi xi ⊗S λi ∈ X ⊗S Λ | Σi rxi ⊗S λi = Σi xi ⊗S λi r, for all r ∈ R}
Nat(F G2, 1S M) ≅ Nat(F'G2', 1MS) ≅ S HomS(Λ ⊗R X, S) ≅ R HomS(X, S Hom(Λ, S))
Nat(1S M, F G) ≅ {e ∈ Λ ⊗R S Hom(Λ, S) | se = es, for all s ∈ S}

Proof. For a natural transformation α : 1R M → G2 F, consider Σi xi ⊗S λi = αR(1R). The naturality of α implies that Σi xi ⊗S λi ∈ CR(X ⊗S Λ). Conversely, given Σi xi ⊗S λi ∈ CR(X ⊗S Λ), we consider α ∈ Nat(1R M, G2 F) given by
αM : M → G2 F(M) = X ⊗S Λ ⊗R M, αM(m) = Σi xi ⊗S λi ⊗R m
Given a natural transformation β : F G2 = Λ ⊗R X ⊗S • → 1S M, we take
β̄ = βS : Λ ⊗R X → S
β̄ is right S-linear because β is natural. Conversely, given an S-bimodule map β̄ : Λ ⊗R X → S, we define a natural transformation β by
βN : Λ ⊗R X ⊗S N → N ; βN = β̄ ⊗S IN
The corresponding map ∆ : X → S Hom(Λ, S) is given by
∆(x)(λ) = β̄(λ ⊗ x)
Finally consider a natural transformation γ : 1S M → F G, and define e = γS(1S) ∈ F G(S) = Λ ⊗R S Hom(Λ, S). The fact that es = se follows easily from the naturality of γ. Given e = Σi λi ⊗R ϕi, with es = se for all s ∈ S, we define a natural transformation γ as follows:
γN : N → Λ ⊗R S Hom(Λ, N), γN(n) = Σi λi ⊗R ϕi·n
Given ϕ ∈ S Hom(Λ, S) and n ∈ N, ϕ·n ∈ S Hom(Λ, N) is defined by
(λ)(ϕ·n) = ((λ)ϕ)n
We leave it to the reader to show that γN is left S-linear, and that γ is natural.
The following is an extended version of [141, Theorem 2.1]. We include a very short proof, based on Lemma 11.

Theorem 32. Let R and S be rings, Λ ∈ S MR and X ∈ R MS. Then the following are equivalent.
1. (F = Λ ⊗R •, G2 = X ⊗S •) : R M → S M is an adjoint pair of functors;
2. (F' = • ⊗R X, G2' = • ⊗S Λ) : MR → MS is an adjoint pair of functors;
3. G = S Hom(Λ, •) and G2 = X ⊗S • are naturally isomorphic;
4. G' = HomS(X, •) and G2' = • ⊗S Λ are naturally isomorphic;
5. Λ is finitely generated projective as a left S-module, and X ≅ S Hom(Λ, S) in R MS;
6. X is finitely generated projective as a right S-module, and Λ ≅ HomS(X, S) in S MR;
7. there exist z = Σi xi ⊗S λi ∈ CR(X ⊗S Λ) and ω : Λ ⊗R X → S in S MS such that
λ = Σi ω(λ ⊗ xi)λi   (3.37)
x = Σi xi ω(λi ⊗ x)   (3.38)
for all x ∈ X and λ ∈ Λ;
8. the same condition as 7., but with z = Σi xi ⊗S λi ∈ X ⊗S Λ;
9. there exist ∆ : R → X ⊗S Λ in R MR and ε̄ : X → S Hom(Λ, S) in R MS such that, with ∆(1R) = Σi xi ⊗ λi,
λ = Σi (λ)(ε̄(xi))λi   (3.39)
x = Σi xi (λi)(ε̄(x))   (3.40)
for all x ∈ X and λ ∈ Λ;
10. there exist ∆ : R → X ⊗S Λ in R MR and ε' : Λ → HomS(X, S) in S MR such that, with ∆(1R) = Σi xi ⊗ λi,
λ = Σi ε'(λ)(xi)λi   (3.41)
x = Σi xi ε'(λi)(x)   (3.42)
for all x ∈ X and λ ∈ Λ;
11. the same as 9., but we require that ε̄ is an isomorphism;
12. the same as 10., but we require that ε' is an isomorphism;
13. there exists Σi xi ⊗S λi ∈ CR(X ⊗S Λ) such that the map
ε̃ : S Hom(Λ, S) → X, ε̃(g) = Σi xi((λi)g)
is an isomorphism in R MS, and the following condition holds for λ, λ' ∈ Λ:
g(λ) = g(λ') for all g ∈ S Hom(Λ, S) ⟹ λ = λ'
14. there exists Σi xi ⊗S λi ∈ CR(X ⊗S Λ) such that the map
ε̃' : HomS(X, S) → Λ, ε̃'(g) = Σi g(xi)λi
is an isomorphism in S MR, and the following condition holds for x, x' ∈ X:
g(x) = g(x') for all g ∈ HomS(X, S) ⟹ x = x'

Proof. 1. ⇒ 7. Let (F, G2) be an adjoint pair, let η : 1R M → G2 F and ε : F G2 → 1S M be the unit and counit of the adjunction, and take Σi xi ⊗S λi as in Lemma 11. (3.37) and (3.38) follow immediately from the adjointness property of the unit and the counit.
1. ⇔ 3. follows from the uniqueness of adjoints.
7. ⇒ 1. The natural transformations η and ε corresponding to Σi xi ⊗ λi and ω are the unit and counit of the adjunction.
3. ⇒ 5. (3.37) tells us that {λi, ω(• ⊗R xi)} is a finite dual basis for Λ as a left S-module. Let γ2 : G2 → G be a natural isomorphism. Obviously γ = γ2,S : X → S Hom(Λ, S) is an isomorphism in R M, and we are done if we can show that γ is right S-linear. This follows essentially from the naturality of γ2. For any t ∈ S, we consider the map ft : S → S, ft(s) = st. ft ∈ S M, and naturality gives a commutative square
γ ◦ (IX ⊗S ft) = S Hom(Λ, ft) ◦ γ : X = X ⊗S S → S Hom(Λ, S)
Now observe that (IX ⊗S ft)(x) = xt and S Hom(Λ, ft) = ft ◦ -, and the commutativity of the diagram implies that
(λ)(γ(xt)) = (λ)(ft ◦ γ(x)) = ((λ)(γ(x)))t = (λ)(γ(x)t)
and γ is right S-linear.
5. ⇒ 3. Λ is finitely generated and projective as a left S-module, so we have a natural isomorphism
γ : S Hom(Λ, S) ⊗S • → S Hom(Λ, •)
and from 5. it also follows that
S Hom(Λ, S) ⊗S • ≅ X ⊗S •
7. ⇒ 8. is trivial.
8. ⇒ 7. For all r ∈ R, we have
rz = Σi rxi ⊗S λi
 (3.38) = Σi,j xj ω(λj ⊗R rxi) ⊗S λi
 = Σi,j xj ⊗S ω(λj r ⊗R xi)λi
 (3.37) = Σj xj ⊗S λj r = zr
7. ⇒ 9. ε̄ is defined by (λ)(ε̄(x)) = ω(λ ⊗R x). It is easy to show that ε̄ is left R-linear and right S-linear, and (3.39) and (3.40) follow from (3.37) and (3.38).
9. ⇒ 11. The inverse of ε̄ is given by
ε̄^{-1}(g) = Σi xi((λi)g)
11. ⇒ 7. Define ω by ω(λ ⊗R x) = (λ)(ε̄(x)), and put Σi xi ⊗S λi = ∆(1R).
9. ⇒ 13. It is clear that ε̃ is a morphism in R MS, and from the proof of 9. ⇒ 11., it follows that ε̃ is the inverse of ε̄. Assume that (λ)g = (λ')g for all g ∈ S Hom(Λ, S). Using (3.39), we find
λ = Σi (λ)(ε̄(xi))λi = Σi (λ')(ε̄(xi))λi = λ'
13. ⇒ 9. Assume that ε̃ has an inverse, and call it ε̄. Then for all x ∈ X, we have
x = ε̃(ε̄(x)) = Σi xi((λi)ε̄(x))
and (3.40) follows. For all g ∈ S Hom(Λ, S), we have
(λ)g = (λ)(ε̄(ε̃(g))) = (λ)(ε̄(Σi xi((λi)g))) = Σi ((λ)(ε̄(xi)))((λi)g) = (Σi (λ)(ε̄(xi))λi)g
and, by the separation condition in 13., (3.39) follows also. The proof of the implications 2. ⇔ 7. ⇒ 10. ⇒ 12. ⇒ 7., 3. ⇔ 4. ⇔ 6. and 10. ⇔ 14. is similar.

Remarks 4. 1. Theorem 32 together with the Eilenberg-Watts Theorem (cf. [11, Th. II.2.3]) shows that there is a one-to-one correspondence between adjoint pairs between R M and S M, and between MS and MR.
2. The adjunctions in Theorem 32 are equivalences if and only if the unit and counit are natural isomorphisms, and, using Lemma 11, we see that this means that the maps ω : Λ ⊗R X → S and ∆ : R → X ⊗S Λ from part 7. of Theorem 32 have to be isomorphisms. Using (3.39) and (3.40), we see that (R, S, X, Λ, ∆^{-1}, ω) is a strict Morita context. Thus we recover the classical result due to Morita that (additive) equivalences between module categories correspond to strict Morita contexts. For a detailed discussion of Morita contexts, we refer to [11, Ch. II].
Theorem 32 allows us to determine when the pair (F, G) is Frobenius: this is equivalent to one of the 14 equivalent conditions of the Theorem, combined with one of the 14 conditions, but applied in the situation where R and S, and Λ and X, are interchanged. Using the Eilenberg-Watts Theorem once more, we find the following result.

Theorem 33. [58] Let R and S be rings. There is a one-to-one correspondence between
– Frobenius functors between R M and S M;
– Frobenius functors between MR and MS;
– pairs (X, Λ), with X ∈ R MS and Λ ∈ S MR satisfying one of the following equivalent conditions:
1. X is finitely generated and projective on both sides and
Λ ≅ HomS(X, S) ≅ R Hom(X, R) in S MR
2. Λ is finitely generated and projective on both sides and
X ≅ S Hom(Λ, S) ≅ HomR(Λ, R) in R MS
In fact Theorem 33 tells us that a Frobenius pair between R M and S M is of the type (Λ ⊗R •, X ⊗S •), where Λ and X are Frobenius bimodules. Let us next look at separable bimodules.
Theorem 34. Let R and S be rings, and Λ an (S, R)-bimodule. Then the functor G = S Hom(Λ, •) : S M → R M is separable if and only if S is Λ-separable over R.
Proof. An immediate application of the third part of Lemma 11 and Rafael's Theorem 24.
Remarks 5. 1. It is surprising that there is no algebraic interpretation for the separability of the functor F = Λ ⊗R •, unless Λ is finitely generated and projective as a left R-module, in which case F is also a Hom functor.
2. Let i : R → S be a ring homomorphism, let Λ = S considered as an (S, R)-bimodule, and X = S considered as an (R, S)-bimodule. Obviously Λ is finitely generated and projective as a left S-module, and G = G2 is the restriction of scalars functor. We then recover some results of the first part of this Section.
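To make Remark 5.2 explicit, here is a sketch (under the assumptions of that Remark, with Λ = X = S for a ring morphism i : R → S) of the identification of the coinduction functor with restriction of scalars:

```latex
% For \Lambda = S \in {}_S\mathcal{M}_R and any N \in {}_S\mathcal{M}:
{}_S\mathrm{Hom}(S, N) \;\cong\; N ,
\qquad f \;\longmapsto\; (1_S)f ,
```

so G = S Hom(S, •) is naturally isomorphic to the restriction of scalars functor, and Theorem 34 then says that restriction of scalars is separable if and only if S is S-separable over R, i.e. the multiplication map S ⊗R S → S splits as an S-bimodule map.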
3.3 The functor forgetting the C-coaction

Let (A, C, ψ) be a right-right entwining structure. Using the methods developed in the previous Section, we want to study the functor F : M(ψ)C A → MA and its right adjoint. This can be done directly, cf. [30]. But it turns out to be more efficient to start with a more general situation: let A be a ring, and C an A-coring, and look at the forgetful functor
F : MC → MA
Proposition 65. F has a right adjoint G. For N ∈ MA, G(N) = N ⊗A C, with structure
(n ⊗A c)a = n ⊗A ca and ρr(n ⊗A c) = n ⊗A c(1) ⊗A c(2)
For a morphism f ∈ MA, G(f) = f ⊗A IC.
Proof. One easily verifies that G is a functor. The unit and the counit of the adjunction are defined as follows:
η : 1MC → GF ;  ηM = ρr : M → M ⊗A C, ηM(m) = m[0] ⊗A m[1]
ε : FG → 1MA ;  εN = IN ⊗A εC : N ⊗A C → N, εN(n ⊗A c) = nεC(c)
(3.6) and (3.7) are easily verified.
We will now describe V = Nat(GF, 1MC) and W = Nat(1MA, F G). First we need a Lemma.
Lemma 12. Take ν ∈ V, and N ∈ MA, G(N) = N ⊗A C. Then
νN⊗AC = IN ⊗A νC
Proof. For n ∈ N, we consider fn : C → N ⊗A C, fn(c) = n ⊗A c. fn is a morphism in MC, and the naturality of ν produces a commutative diagram
νN⊗AC ◦ (fn ⊗A IC) = fn ◦ νC : C ⊗A C → N ⊗A C
and we find
νN⊗AC(n ⊗A c) = n ⊗A νC(c)
Proposition 66. Let C be an A-coring, and put
V1 = C HomC(C ⊗A C, C)
V2 = {θ ∈ A HomA(C ⊗A C, A) | c(1)θ(c(2) ⊗A d) = θ(c ⊗A d(1))d(2), ∀c, d ∈ C}
Then
V = Nat(GF, 1MC) ≅ V1 ≅ V2
Proof. We define α : V → V1, α(ν) = νC. By definition, νC is right A-linear and a right C-comodule map. The properties on the left-hand side follow from the naturality of ν: for all a ∈ A, we consider the map fa : C → C, fa(c) = ac, which is a morphism in MC, so that the naturality of ν gives a commutative diagram
νC ◦ (fa ⊗A IC) = fa ◦ νC : C ⊗A C → C
Applying the diagram to c ⊗A d ∈ C ⊗A C, we find νC(ac ⊗A d) = aνC(c ⊗A d), and νC is left A-linear. The left C-comodule structure map on C ⊗A C is ∆C ⊗A IC, and this map is a morphism in MC, so that we have another commutative diagram
νC⊗AC ◦ (∆C ⊗A IC) = ∆C ◦ νC : C ⊗A C → C ⊗A C
3 Frobenius and separable functors
Taking into account that νC⊗A C = IC ⊗A ν (Lemma 12), we find that ν is a left C-comodule map, and α is well-defined. Next, we define α1 : V1 → V2 : α1 (ν) = θ = εC ◦ ν It is obvious that θ is an (A, A)-bimodule map. From the fact that ν is a C-bicomodule map, we find c(1) ⊗A ν(c(2) ⊗A d) = ∆C (ν(c ⊗A d)) = ν(c ⊗A d(1) ) ⊗A d(2) Applying IC ⊗A εC to the first equality and εC ⊗A IC to the second one, we find ν(c ⊗A d) = c(1) θ(c(2) ⊗A d) = θ(c ⊗A d(1) )d(2) (3.43) and it follows that α1 is well-defined. We define α1−1 (θ) = ν defined by (3.43). It is clear from (3.43) and the fact that θ is an (A, A)-bimodule map, that ν is a morphism in C MC ; (3.43) also tells us that α1−1 (α1 (ν)) = ν. Conversely, α1 (α1−1 (θ))(c ⊗A d) = εC (c(1) )θ(c(2) ⊗A d) = θ(c ⊗A d) We still need to show that α is invertible. For ν ∈ V1 , and θ = α1 (ν), we define ν = α−1 (ν) : GF → 1C by νM : M ⊗A C → M ; νM (m ⊗A c) = m[0] θ(m[1] ⊗A c) ν is natural, since for every morphism f : M → M in MC , we have that νM (f ⊗A IC )(m ⊗A c) = νM (f (m) ⊗A c) = f (m)[0] θ(f (m)[1] ⊗A c) = f (m[0] )θ(m[1] ⊗A c) = f m[0] θ(m[1] ⊗A c) = f (νM (m ⊗A c)) It is clear that α(α−1 (ν)) = ν, since νC (c ⊗A d) = c(1) θ(c(2) ⊗A d) = ν(c ⊗A d) Finally, let us show that α−1 (α(ν)) = ν. The map ρr : M → M ⊗A C is in MC . From Lemma 12, we know that νM ⊗A C = IM ⊗A ν, so the naturality of ν generates a commutative diagram M ⊗A C ρr ⊗A IC
νM
- M ρr
? ? IM ⊗Aν M ⊗A C ⊗A C M ⊗A C and we find
3.3 The functor forgetting the C-coaction
127
ρr (νM (m ⊗A c)) = m[0] ⊗A ν(m[1] ⊗A c) Apply εC to the second factor: νM (m ⊗A c) = m[0] θ(m[1] ⊗A c) This means precisely that α−1 (α(ν)) = ν. Remark 7. The multiplication in V2 induced by the multiplication on V can be described in several ways: (θ · θ )(c ⊗A d) = θ(c(1) ⊗A c(2) )θ (c(3) ⊗A d) = θ (c ⊗A d(1) )θ(d(2) ⊗A d(3) )
(3.44)
Proposition 67. For an A-coring C, we have W = Nat(1MA , F G) ∼ = W1 = A HomA (A, C) ∼ = W2 = {z ∈ C | az = za for all a ∈ A} Proof. We give the definitions of the connecting maps; other details are left to the reader. β : W → W1 ; β(ζ) = ζA = ζ β1 : W1 → W2 ; β1 (ζ) = ζ(1) = z β −1 : W1 → W ; β −1 (ζ) = ζ with ζN : N → N ⊗A C ; ζN (n) = n ⊗A ζ(1) Theorem 35. Let A be a ring, and C an A-coring. 1. The following assertions are equivalent: a. F : MC → MA is separable; b. F : C M → A M is separable; c. there exists θ ∈ V2 such that θ ◦ ∆C = εC , i.e. θ(c(1) ⊗A c(2) ) = εC (c) for all c ∈ C. 2. The following assertions are equivalent: a. G = • ⊗A C : MA → MC is separable; b. G = C ⊗A • : A M → C M is separable; c. there exists z ∈ W2 such that εC (z) = 1A . 3. The following assertions are equivalent: a. (F, G) is a Frobenius pair; b. (F , G ) is a Frobenius pair;
(3.45)
128
3 Frobenius and separable functors
c. there exists θ ∈ V2 and z ∈ W2 such that θ(z ⊗A c) = θ(c ⊗A z) = εC (c)
(3.46)
for all c ∈ C. Proof. This is an immediate application of Rafael’s Theorem 24, Theorem 23, and Propositions 66 and 67. The equivalence of a. and b. in each case follows from the fact that c. is left-right symmetric. To illustrate the method, we provide the proof of a.⇒c. in part 3. of the Theorem. If (F, G) is Frobenius, then Theorem 23 gives us ν ∈ V and ζ ∈ W such that (3.1) and (3.2) hold. Let θ = α1 (α(ν)) and z = β1 (β(ζ)) be the corresponding maps in V2 and W2 . (3.1), with C = C gives us c = c(1) θ(c(2) ⊗A z) Applying εC to this equality, we find εC (c) = θ(c ⊗A z). In a similar way, (3.2) implies εC (c) = θ(z ⊗A c). Corollary 15. We use the same notation as in Theorem 35. If (F, G) is a Frobenius pair, then C is finitely generated and projective as a (left and right) A-module. Proof. For all c ∈ C, we have c = c(1) εC (c(2) ) = c(1) θ(c(2) ⊗A z) = θ(c ⊗A z(1) )z(2)
(3.47)
and it follows that {z(2) , θ(• ⊗A z(1) )} is a finite dual basis for C as a left A-module. In a similar way {z(1) , θ(z(2) ⊗A •)} is a finite dual basis for C as a right A-module. Let us from now on assume that C is finitely generated and projective as a left A-module. We have seen in Corollary 1 that R = A Hom(C, A) is a ring, and that it can be viewed as an object in MC ∼ = MR . This leads to alternative descriptions of V and W , and to new characterizations for the separability and Frobenius properties of F and G. As in Section 2.7, let {ci , fi | i = 1, · · · , m} be a finite dual basis for C as a left A-module. Proposition 68. Let C be an A-coring which is finitely generated and projective as a left A-module. Then V ∼ = V3 = A HomC (C, R)
3.3 The functor forgetting the C-coaction
Proof. α2 : V2 → V3 is defined as follows: α2 (θ) = φ with φ(c) =
129
fi θ(ci ⊗A c)
i
Observe that φ(c)(d) =
fi (d)θ(ci ⊗A c) = θ(d ⊗A c)
(3.48)
i
Let us show that α2 is well-defined. α2 (θ) = φ is an (A, A)-bimodule map, since (aφ(c))(d) = φ(c)(da) = θ(da ⊗A c) = θ(d ⊗A ac) = φ(ac)(d) (φ(c)a)(d) = (φ(c)(d))a = θ(d ⊗A c)a = θ(d ⊗A ca) = φ(ca)(d) In order to prove that φ is a right C-comodule map, we have to show that φ(c(1) ) ⊗A c(2) = φ(c)[0] ⊗A φ(c)[1] ∈ R ⊗A C Using (2.96), we find, for all c, d ∈ C: φ(c)[0] (d)φ(c)[1] = d(1) φ(c)(d(2) ) = d(1) θ(d(2) ⊗ c) = θ(d ⊗ c(1) )c(2) = φ(c(1) )(d)c(2) as needed. For φ ∈ V3 , we define θ = α2−1 (φ) by θ(d ⊗ c) = φ(c)(d) θ is an (A, A)-bimodule map, since θ(ad ⊗A c) = φ(c)(ad) = a(φ(c)(d)) = aθ(d ⊗A c) θ(d ⊗A ca) = φ(ca)(d) = (φ(c)a)(d) = (φ(c)(d))a = θ(d ⊗A c)a Furthermore c(1) θ(c(2) ⊗A d) = c(1) φ(d)(c(2) ) (2.96)
= φ(d)[0] (c)φ(d)[1]
(φ is a C comodule map) = φ(d(1) )(c)(d(2) ) = θ(c ⊗A d(1) )d(2) and it follows that θ ∈ V2 . It follows from (3.48) that α2−1 is a left inverse for α2 ; it is also a right inverse since α2 (α2−1 (φ))(c)(d) = α2−1 (φ)(d ⊗ c) = φ(c)(d)
130
3 Frobenius and separable functors
Proposition 69. Let C be an A-coring which is finitely generated and projective as a left A-module. Then W ∼ = W3 = A HomC (R, C) Proof. We define β2 : W2 → W3 by β2 (z) = φ with φ(f ) = z(1) f (z(2) ) φ is left and right A-linear since φ(af ) = z(1) (af )(z(2) ) = z(1) f (z(2) a) = az(1) f (z(2) ) = aφ(f ) φ(f a) = z(1) (f a)(z(2) ) = z(1) (f (z(2) )a) = (z(1) f (z(2) ))a = φ(f )a We used the fact that, for z ∈ V2 , az(1) ⊗ z(2) = a∆C (z) = ∆C (az) = ∆C (za) = z(1) ⊗ z(2) a Let us next show that φ is a right C-comodule map. For all f ∈ R, we have φ(f[0] ) ⊗A f[1] = z(1) f[0] (z(2) ) ⊗A f[1] = z(1) ⊗A f[0] (z(2) )f[1] (2.96) = z(1) ⊗A z(2) f (z(3) ) (ρ is right A-linear) = ρr (z(1) f (z(2) )) = ρr (φ(f )) r
β2−1 : W3 → W2 is defined by β2−1 (φ) = z = φ(εC ) z ∈ W2 since az = aφ(εC ) = φ(aεC ) = φ(εC a) = φ(εC )a = za where we used that (aεC )(c) = εC (ca) = εC (c)a = (εC a)(c) Finally, β2 and β2−1 are each others inverses, since β2−1 (β2 (z)) = z(1) εC (z(2) ) = z β2 (β2−1 (φ))(f ) = φ(εC )(1) f φ(εC )(2) = φ(εC[0] )f εC[1] (2.95) = φ(fk )f (ck ) = φ(fk f (ck )) = φ(f ) We can use V3 and W3 to give new criteria for F and G to be separable or Frobenius. We begin with the Frobenius property.
Theorem 36. Let C be an A-coring, and let F : MC → MA be the functor forgetting the C-comodule structure, and G its right adjoint. Then the following assertions are equivalent.
1. (F, G) is a Frobenius pair;
2. C is finitely generated and projective as a left A-module, and there exist θ ∈ V2, z ∈ W2 such that the maps
φ : C → R, φ(c)(d) = θ(d ⊗A c)
φ̄ : R → C, φ̄(f) = z(1)f(z(2))
are each other's inverses;
3. C is finitely generated and projective as a right A-module, and there exist θ ∈ V2, z ∈ W2 such that the maps
φ' : C → R', φ'(c)(d) = θ(c ⊗A d)
φ̄' : R' → C, φ̄'(f) = f(z(1))z(2)
are each other's inverses;
4. C is finitely generated and projective as a left A-module, and C ≅ R in A MC ≅ A MR;
5. C is finitely generated and projective as a right A-module, and C ≅ R' = HomA(C, A) in C MA ≅ R' MA;
6. C is finitely generated and projective as a left A-module, and R/A is Frobenius;
7. C is finitely generated and projective as a right A-module, and R'/A is Frobenius.
Proof. 1. ⇒ 2. From Theorem 35, we know that there exist θ ∈ V2 and z ∈ W2 satisfying (3.46). Put φ = α2(θ), φ̄ = β2(z). Then for all f ∈ R and c ∈ C, we have
φ(φ̄(f))(c) = φ(z(1)f(z(2)))(c) = θ(c ⊗A z(1)f(z(2)))
 (θ is right A-linear) = θ(c ⊗A z(1))f(z(2))
 (f is left A-linear) = f(θ(c ⊗A z(1))z(2))
 (3.43) = f(c(1)θ(c(2) ⊗A z))
 (3.46) = f(c(1)εC(c(2))) = f(c)
and
φ̄(φ(c)) = z(1)φ(c)(z(2)) = z(1)θ(z(2) ⊗A c) = θ(z ⊗A c(1))c(2) = εC(c(1))c(2) = c
where we used (3.43) and (3.46).
2. ⇒ 4. Obvious; we know from Proposition 38 that A MC ≅ A MR.
4. ⇒ 1. Let φ̄ : R → C in A MC, with inverse φ. Put θ = α2^{-1}(φ), z = β2^{-1}(φ̄). We have to show that (3.46) holds. For all c ∈ C, we have
θ(c ⊗A z) = φ(φ̄(εC))(c) = εC(c)
and
c = φ̄(φ(c)) = z(1)φ(c)(z(2)) = z(1)θ(z(2) ⊗A c)
Applying εC to both sides, we find εC(c) = θ(z ⊗A c).
4. ⇔ 6. This follows from 3. in Theorem 28, after we remark that C ≅ HomA(R, A). The connecting isomorphism is given by sending c ∈ C to Fc : R → A, with Fc(f) = f(c). Fc is right A-linear, since
Fc(fa) = (fa)(c) = f(c)a = Fc(f)a
The proof of the other equivalences is similar: 3., 5. and 7. are the left-handed versions of 2., 4. and 6.
Now let us do the separability properties; we restrict ourselves to the right-handed versions.
Theorem 37. Let C be an A-coring, and assume that C is finitely generated and projective as a left A-module.
1. The following assertions are equivalent:
a. F : MC → MA is separable;
b. there exists φ ∈ V3 such that φ(c(2))(c(1)) = εC(c);
c. G : MR → MA is separable;
d. R/A is separable.
2. The following assertions are equivalent:
a. G : MA → MC is separable;
b. there exists φ ∈ W3 such that εC(φ(εC)) = 1A;
c. F : MA → MR is separable;
d. R/A is a split extension.
Proof. We give an outline of the proof of Part 1., leaving Part 2. to the reader. a. ⇔ b. follows immediately from Theorem 35 and Proposition 68; a. ⇔ c. follows from Proposition 38, and c. ⇔ d. is Part 1. of Theorem 27.
We will now apply our results to categories of entwined modules. Let (A, C, ψ) be a right-right entwining structure, and put C = A ⊗ C. First, we give explicit descriptions of the Vi and Wi.
Proposition 70. Let (A, C, ψ) be a right-right entwining structure.
1. If ψ is invertible, then
V1 ≅ V4, the space of (A, A)-bilinear and (C, C)-bicolinear maps A ⊗ C ⊗ C → A ⊗ C.
2. V2 ≅ V5, consisting of ϑ ∈ Hom(C ⊗ C, A) satisfying
ϑ(c ⊗ d)a = aψΨ ϑ(cΨ ⊗ dψ)   (3.49)
ϑ(c ⊗ d(1)) ⊗ d(2) = ϑ(c(2) ⊗ d)ψ ⊗ (c(1))ψ   (3.50)
for all c, d ∈ C and a ∈ A.
3. If C is finitely generated and projective as a k-module, then
V3 ≅ V6 = A Hom^C_A(A ⊗ C, C* ⊗ A)
Proof. 1. As in Section 2.7, we identify (A ⊗ C) ⊗A (A ⊗ C) and A ⊗ C ⊗ C; (a ⊗ c) ⊗A (b ⊗ d) corresponds to abψ ⊗ cψ ⊗ d. The two-sided structure induced on A ⊗ C ⊗ C is the one presented in (2.80-2.82) (with A′ = A, C′ = k). 1. now follows immediately, using Proposition 34.
2. For θ : (A ⊗ C) ⊗A (A ⊗ C) ≅ A ⊗ C ⊗ C → A in V2, we put ϑ : C ⊗ C → A, ϑ(c ⊗ d) = θ(1 ⊗ c ⊗ d). It is easily checked that ϑ ∈ V5.
3. follows after we observe that
R = A Hom(A ⊗ C, A) ≅ Hom(C, A) ≅ C* ⊗ A
The entwined structure on C* ⊗ A induced by the one on R (see (2.91) and (2.95)) is
b(c* ⊗ a)b' = Σi ⟨c*, ciψ⟩ci* ⊗ bψab'   (3.51)
ρr(c* ⊗ a) = Σi ci* ∗ c* ⊗ aψ ⊗ ciψ   (3.52)
Here {1 ⊗ ci, ci* ⊗ 1 | i = 1, · · ·, m} is a dual basis of A ⊗ C as a left A-module. The ring structure on R translates into a ring structure (in fact a k-algebra structure) on C* ⊗ A. The multiplication we obtain is nothing else than the multiplication on the smash product C*op #R A.
Proposition 71. Let (A, C, ψ) be a right-right entwining structure.
1. W2 ≅ W5, the submodule of A ⊗ C consisting of the elements z = a1 ⊗ c1 ∈ A ⊗ C satisfying
aa1 ⊗ c1 = a1aψ ⊗ c1ψ   (3.53)
2. If C is finitely generated and projective as a k-module, then
W3 ≅ W6 = A Hom^C_A(C* ⊗ A, A ⊗ C)
Proof. Similar to the proof of Proposition 70. We can use Propositions 70 and 71 to reformulate the criteria for separability and Frobenius properties in the case of entwined modules.
Theorem 38. Let (A, C, ψ) be a right-right entwining structure.
1. The forgetful functor F : M(ψ)C A → MA is separable if and only if there exists ϑ ∈ V5 such that ϑ ◦ ∆C = ηA ◦ εC. If C is finitely generated and projective as a k-module, this is equivalent to the separability of C*op #R A over A.
2. The induction functor G : MA → M(ψ)C A is separable if and only if there exists z = a1 ⊗ c1 ∈ W5 such that εC(c1)a1 = 1A. If C is finitely generated and projective as a k-module, this is equivalent to C*op #R A/A being a split extension.
3. (F, G) is a Frobenius pair if and only if there exist ϑ ∈ V5 and z = a1 ⊗ c1 ∈ W5 such that
a1ϑ(c1 ⊗ c) = (a1)ψϑ(cψ ⊗ c1) = εC(c)1A   (3.54)
If C is finitely generated and projective as a k-module, this is also equivalent to each of the following assertions:
a. there exist z = a1 ⊗ c1 ∈ W5 and ϑ ∈ V5 such that the maps φ : C* ⊗ A → A ⊗ C and φ̄ : A ⊗ C → C* ⊗ A given by
φ(c* ⊗ a) = a1aψ ⊗ ⟨c*, c1(2)⟩(c1(1))ψ   (3.55)
φ̄(a ⊗ c) = Σi di* ⊗ aψϑ(diψ ⊗ c)   (3.56)
are each other's inverses;
b. C* ⊗ A and A ⊗ C are isomorphic as objects in A M(ψ)C A ≅ A MC*op #R A;
c. C*op #R A/A is a Frobenius extension.
If (F, G) is a Frobenius pair, then A ⊗ C is finitely generated and projective as a left A-module. In some cases, we can conclude that C is finitely generated and projective as a k-module.
Corollary 16. Let (A, C, ψ) be a right-right entwining structure, and assume that (F, G) is a Frobenius pair.
1. If A is faithfully flat as a k-module, then C is finitely generated as a k-module.
2. If A is commutative and faithfully flat as a k-module, then C is finitely generated projective as a k-module.
3. If k is a field, then C is finite dimensional as a k-vector space.
4. If A = k, then C is finitely generated projective as a k-module.
Proof. 1. Assume that ϑ ∈ V5 and z ∈ W5 satisfy (3.54). The corresponding θ ∈ V2 is given by
θ((a ⊗ c) ⊗A (b ⊗ d)) = abψϑ(cψ ⊗ d)
Using (3.47), we find for all d ∈ C that
1 ⊗ d = θ((1 ⊗ d) ⊗A (a1 ⊗ c1(1))) ⊗ c1(2) = a1ψϑ(dψ ⊗ c1(1)) ⊗ c1(2)
Let M be the k-module generated by the c1(2). M is finitely generated, and 1 ⊗ d ∈ A ⊗ M. Since A is faithfully flat, it follows that d ∈ M, hence M = C is finitely generated.
2. From descent theory: if a k-module becomes finitely generated and projective after a faithfully flat commutative base extension, then it is itself finitely generated and projective.
3. Follows immediately from 1.: since k is a field, A is faithfully flat as a k-module, and C is projective as a k-module.
4. Follows immediately from 2.
As a special case, we consider the situation where A = k and ψ = IC. Now
V5 = {ϑ ∈ (C ⊗ C)* | ϑ(c ⊗ d(1))d(2) = ϑ(c(2) ⊗ d)c(1)}
and W5 = C.
Corollary 17. Let C be a k-coalgebra.
1. The following assertions are equivalent:
a. F : MC → M is separable;
b. F' : C M → M is separable;
c. there exists ϑ ∈ V5 such that ϑ ◦ ∆C = εC.
2. The following assertions are equivalent:
a. G : M → MC is separable;
b. G' : M → C M is separable;
c. there exists f ∈ C such that εC(f) = 1.
3. The following assertions are equivalent:
a. (F, G) is a Frobenius pair;
b. (F', G') is a Frobenius pair;
c. there exist ϑ ∈ V5 and f ∈ C such that
ϑ(f ⊗ c) = ϑ(c ⊗ f) = ε(c)
for all c ∈ C. If C is finitely generated and projective, then these conditions are also equivalent to
d. there exist ϑ ∈ V5 and f ∈ C such that φ : C* → C and φ̄ : C → C* given by
3 Frobenius and separable functors
φ(c∗) = ⟨c∗, f(2)⟩f(1) (3.57)
⟨φ′(c), d⟩ = ϑ(d ⊗ c) (3.58)
are each other's inverses; e. C∗ ≅ C in C∗M; f. C∗ is a Frobenius extension of k. Condition 1c. means that C is a coseparable coalgebra in the sense of [116]. Now we consider the following problem: let (A, C, ψ) be an entwining structure, and assume that C∗/k is Frobenius. When is the forgetful functor M(ψ)C A → MA Frobenius? The following is a partial answer to this question.
Proposition 72. Consider (A, C, ψ) ∈ E••(k), with ψ invertible, and C finitely generated and projective as a k-module. We also assume that C∗ is Frobenius, which implies that there exists f ∈ C such that the map
φC : C∗ → C; φC(c∗) = ⟨c∗, f(2)⟩f(1)
is bijective. If
a ⊗ f = ψ(f ⊗ a) (3.59)
for all a ∈ A, then the forgetful functor M(ψ)C A → MA is also Frobenius.
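Before the proof, a toy sanity check of the bijectivity hypothesis on φC (a computation of ours, not taken from the text): for the standard comatrix coalgebra C = Mn^c(k), with basis e_{ij}, ∆(e_{ij}) = Σk e_{ik} ⊗ e_{kj} and ε(e_{ij}) = δ_{ij}, the choice f = Σi e_{ii} gives ∆(f) = Σ_{i,k} e_{ik} ⊗ e_{ki}, and φC(c∗) = ⟨c∗, f(2)⟩f(1) comes out as the transpose map, which is visibly bijective.

```python
# Comatrix coalgebra C = M^c_n(k): basis e_{ij} with
#   Delta(e_{ij}) = sum_k e_{ik} (x) e_{kj},   eps(e_{ij}) = delta_{ij}.
# Our (assumed) Frobenius element: f = sum_i e_{ii}, so
#   Delta(f) = sum_{i,k} e_{ik} (x) e_{ki}.

def phi_C(cstar):
    """phi_C(c*) = <c*, f_(2)> f_(1), with cstar[i][j] = c*(e_{ij}).

    The formula gives sum_{i,k} c*(e_{ki}) e_{ik}, i.e. the coefficient
    matrix of phi_C(c*) in the basis e_{ij} is the transpose of cstar.
    """
    n = len(cstar)
    return [[cstar[k][i] for k in range(n)] for i in range(n)]
```

Since the transpose is its own inverse, φC is bijective for this f, as the proposition requires.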
Proof. First of all, we remark that (3.59) tells us that f ⊗ 1A ∈ W5. Then we compute the map φ : C∗ ⊗ A → A ⊗ C from part 3a. of Theorem 38, using (3.55):
φ(c∗ ⊗ a) = aψ ⊗ ⟨c∗, f(2)⟩f(1)ψ
φ is bijective, since φC and ψ are bijective. The result is now an immediate application of Theorem 38.
Relative separability Consider the functor G′ : M(ψ)C A → MC and the following diagram of forgetful functors:
M(ψ)C A --F--> MA
   |G′            |
   v              v
  MC ----------> Mk          (3.60)
The problem is now the following: assume that we have a short exact sequence in M(ψ)C A, which is split exact as a sequence of A-modules. When is it split exact as a sequence of C-comodules? This is an application of separability of the second kind as introduced at the end of Section 3.1: we have to investigate
when F is G′-separable. From Theorem 25, it follows that we have to examine the k-space of natural transformations
X = Nat(G′GF, G′)
Adapting the proof of Proposition 70, we find the following.
Proposition 73. Let (A, C, ψ) be a right-right entwining structure. We then have isomorphisms
X ≅ X4 = A HomA(A ⊗ C ⊗ C, A ⊗ C) ≅ X5 = {ϑ : C ⊗ C → A | (3.50) holds}
Consequently F is G′-separable if and only if there exists a ϑ ∈ X5 such that ϑ ◦ ∆C = ηA ◦ εC.
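To close this Section with a concrete instance of the coseparability criterion of Corollary 17 (a toy example of ours, with A = k, not from the text): for a group-like coalgebra kG, with ∆(g) = g ⊗ g and ε(g) = 1, the map ϑ(g ⊗ h) = δg,h lies in V5 and satisfies ϑ ◦ ∆C = εC, so kG is coseparable. A mechanical check on basis elements:

```python
# Group-like coalgebra C = kG over a finite set G of group-likes:
#   Delta(g) = g (x) g,  eps(g) = 1.
# Candidate cointegral (our toy choice): theta(g (x) h) = delta_{g,h}.

def theta(g, h):
    return 1 if g == h else 0

def check_V5(G):
    """Check theta(c (x) d_(1)) d_(2) = theta(c_(2) (x) d) c_(1) on basis elements.

    For group-likes this reads theta(g, h)*h == theta(g, h)*g as formal sums;
    both sides vanish unless g == h, in which case they agree.
    """
    for g in G:
        for h in G:
            lhs = {h: theta(g, h)}   # theta(g (x) h_(1)) h_(2)
            rhs = {g: theta(g, h)}   # theta(g_(2) (x) h) g_(1)
            # drop zero coefficients before comparing formal sums
            if {k: v for k, v in lhs.items() if v} != {k: v for k, v in rhs.items() if v}:
                return False
    return True

def check_counit(G):
    """theta(Delta(g)) = theta(g, g) should equal eps(g) = 1."""
    return all(theta(g, g) == 1 for g in G)
```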
3.4 The functor forgetting the A-action
As in the previous Section, let (A, C, ψ) be a right-right entwining structure. We are now interested in the functor G′ : M(ψ)C A → MC and its left adjoint F′. It turns out that there exist dual versions of the results obtained in the previous Section. It is possible to obtain these results using algebroids over a coalgebra (see [26]), which can be viewed as the formal duals of corings over a ring. However, to avoid technical difficulties, we have chosen the more direct approach, avoiding algebroids. Recall that the unit and counit of the adjunction, µ : F′G′ → 1M(ψ)C A and η : 1MC → G′F′, are given by the following formulas:
µM : M ⊗ A → M; µM(m ⊗ a) = ma
ηN : N → N ⊗ A; ηN(n) = n ⊗ 1
Lemma 13. Let M ∈ A M(ψ)C A, N ∈ C M(ψ)C A. Then F′G′(M) ∈ A M(ψ)C A and G′F′(N) ∈ C M(ψ)C A. The left structures are given by
a(m ⊗ b) = am ⊗ b and ρl(n ⊗ b) = n[−1] ⊗ n[0] ⊗ b
for all a, b ∈ A, m ∈ M, n ∈ N. Furthermore µM is left A-linear, and ηN is left C-colinear. Now write V = Nat(G′F′, 1MC), W = Nat(1M(ψ)C A, F′G′). Following the philosophy of the previous Sections, we give more explicit descriptions of V and W. We will not give full detail of the proofs; the arguments are dual to the ones in the previous Section. We define
V1 = {ϑ ∈ (C ⊗ A)∗ | ϑ(c(1) ⊗ aψ)c(2)ψ = ϑ(c(2) ⊗ a)c(1), for all c ∈ C, a ∈ A} (3.61)
Proposition 74. The map α : V → V1, α(ν) = ε ◦ νC is an isomorphism.
Proof. We leave verification of the details to the reader; ν is reconstructed from ϑ as follows: for N ∈ MC, νN : N ⊗ A → N is given by
νN(n ⊗ a) = ϑ(n[1] ⊗ a)n[0]
Let e : C → A ⊗ A be a k-linear map. We will use the notation (summation understood, as usual) e(c) = e1(c) ⊗ e2(c). Let W1 be the k-submodule of Hom(C, A ⊗ A) consisting of maps e satisfying
e1(c(1)) ⊗ e2(c(1)) ⊗ c(2) = e1(c(2))ψ ⊗ e2(c(2))Ψ ⊗ c(1)ψΨ (3.62)
e1(c) ⊗ e2(c)a = aψ e1(cψ) ⊗ e2(cψ) (3.63)
Proposition 75. The map β : W → W1 given by
β(ζ) = (ε ⊗ IA ⊗ IA) ◦ ζA⊗C ◦ (ηA ⊗ IC) = (IA ⊗ ε ⊗ IA) ◦ ζC⊗A ◦ (IC ⊗ ηA)
is an isomorphism. Given e ∈ W1, ζ = β−1(e) is recovered from e as follows: for M ∈ M(ψ)C A,
ζM(m) = m[0]e1(m[1]) ⊗ e2(m[1])
Proof. Let us show that β is well-defined, leaving the other details to the reader. We have a commutative diagram
C --IC⊗ηA--> C ⊗ A --ζC⊗A--> C ⊗ A ⊗ A
IC↓             ψ↓               ↓ψ⊗IA
C --ηA⊗IC--> A ⊗ C --ζA⊗C--> A ⊗ C ⊗ A
λ = ζC⊗A ◦ (IC ⊗ ηA) is left and right C-colinear. Write
λ(c) = Σi ci ⊗ a1i ⊗ a2i
Then
c(1) ⊗ λ(c(2)) = Σi ci(1) ⊗ ci(2) ⊗ a1i ⊗ a2i
Applying ε to the second factor, we find
c(1) ⊗ e(c(2)) = λ(c)
Using the right C-colinearity of λ, we find
λ(c(1)) ⊗ c(2) = Σi a1iψ ⊗ a2iΨ ⊗ ciψΨ = e1(c(2))ψ ⊗ e2(c(2))Ψ ⊗ c(1)ψΨ
proving (3.62). λ = ζA⊗C ◦ (ηA ⊗ IC) is left and right A-linear, hence
e1(c) ⊗ e2(c)a = (IA ⊗ ε ⊗ IA)(ζA⊗C((1 ⊗ c)a)) = (IA ⊗ ε ⊗ IA)(ζA⊗C(aψ ⊗ cψ)) = aψ(IA ⊗ ε ⊗ IA)(ζA⊗C(1 ⊗ cψ))
= aψ e1(cψ) ⊗ e2(cψ)
proving (3.63).
Proposition 76. Let (A, C, ψ) be a right-right entwining structure. 1. F′ = • ⊗ A : MC → M(ψ)C A is separable if and only if there exists ϑ ∈ V1 such that
ϑ(c ⊗ 1) = ε(c) (3.64)
for all c ∈ C. 2. G′ : M(ψ)C A → MC is separable if and only if there exists e ∈ W1 such that
e1(c)e2(c) = ε(c)1 (3.65)
for all c ∈ C. 3. (F′, G′) is a Frobenius pair if and only if there exist ϑ ∈ V1 and e ∈ W1 such that
ε(c)1 = ϑ(c(1) ⊗ e1(c(2)))e2(c(2)) (3.66)
= ϑ(c(1)ψ ⊗ e2(c(2)))e1(c(2))ψ (3.67)
Proof. We will prove 3. If (F′, G′) is a Frobenius pair, then there exist ν ∈ V and ζ ∈ W such that (3.1) and (3.2) hold. We take ϑ ∈ V1 and e ∈ W1 corresponding to ν and ζ. We write down (3.1) applied to n ⊗ 1 with n ∈ N ∈ MC. This gives
n ⊗ 1 = ((νN ⊗ IA) ◦ ζN⊗A)(n ⊗ 1) = ϑ(n[1] ⊗ e1(n[2]))n[0] ⊗ e2(n[2]) (3.68)
Taking N = C and n = c, and applying εC to the first factor, we find (3.66). Conversely, if we have ϑ ∈ V1 and e ∈ W1 satisfying (3.66), then (3.68) is satisfied for all N ∈ MC, and (3.1) follows from the fact that νN ⊗ IA and ζN⊗A are right A-linear. Now we write down (3.1) applied to m ∈ M ∈ M(ψ)C A. We find
m = (νG′(M) ◦ G′(ζM))(m) = ϑ(m[1]ψ ⊗ e2(m[2]))m[0]e1(m[2])ψ (3.69)
Take M = C ⊗ A, m = c ⊗ 1, and apply εC to the first factor. This gives (3.67). Conversely, if we have ϑ ∈ V1 and e ∈ W1 satisfying (3.67), then we can show that (3.69) holds for all M ∈ M(ψ)C A: apply (3.67) to the second and third factor in m[0] ⊗ m[1] ⊗ 1, and then apply εC to the second factor. Finally remark that (3.69) is equivalent to (3.1). Inspired by the results in the previous Section, we ask the following question: assuming that (F′, G′) is a Frobenius pair, when is A finitely generated and projective as a k-module? We give a partial answer in the next Proposition. We will assume that ψ is bijective; in the Doi-Hopf case, this is true if the underlying Hopf algebra H has a twisted antipode. The inverse of ψ is then given by the formula
ψ−1(a ⊗ c) = cS(a(1)) ⊗ a(0)
Proposition 77. Let (A, C, ψ) be a right-right entwining structure. With notation as above, assume that (F′, G′) is a Frobenius pair. If there exists c ∈ C such that ε(c) = 1, and if ψ is invertible, with inverse ϕ = ψ−1 : A ⊗ C → C ⊗ A, then A is finitely generated and projective as a k-module.
Proof. Recall (Proposition 15) that (A, C, ϕ) is a left-left entwining structure. Now fix c ∈ C such that ε(c) = 1. Then for all a ∈ A
a = ε(c)a = ε(cϕ)aϕ
(3.66) = ϑ((cϕ)(1) ⊗ e1((cϕ)(2))) e2((cϕ)(2)) aϕ
(2.8) = ϑ(cϕ(1) ⊗ e1(c(2)φ)) e2(c(2)φ) aϕφ
(3.63) = ϑ(cϕ(1) ⊗ aϕφψ e1(c(2)φψ)) e2(c(2)φψ)
(ϕ = ψ−1) = ϑ(cϕ(1) ⊗ aϕ e1(c(2))) e2(c(2))
Now write
(I ⊗ e)∆(c) = Σ(i=1..m) ci ⊗ bi ⊗ ai ∈ C ⊗ A ⊗ A
For i = 1, . . . , m, we define a∗i ∈ A∗ by
⟨a∗i, a⟩ = ϑ(ciϕ ⊗ aϕbi)
Then {ai, a∗i | i = 1, . . . , m} is a finite dual basis of A as a k-module. Assume from now on that A is finitely generated and projective with finite dual basis {ai, a∗i | i = 1, . . . , m}. The proof of the next Lemma is straightforward, and therefore left to the reader.
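The finite dual basis property invoked here can be checked mechanically in the simplest case A = k^n (a toy illustration of ours; the proposition of course covers arbitrary finitely generated projective A): the standard basis vectors together with the coordinate functionals satisfy a = Σi ⟨a∗i, a⟩ai.

```python
# Dual basis for the free module A = k^n: a_i = e_i (standard basis) and
# a*_i = i-th coordinate functional.  The dual-basis equation reads
#   a = sum_i <a*_i, a> a_i   for every a in A.

def dual_basis(n):
    basis = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
    # bind i at definition time via the default argument
    functionals = [lambda a, i=i: a[i] for i in range(n)]
    return basis, functionals

def expand(a, basis, functionals):
    """Reconstruct a as sum_i <a*_i, a> a_i."""
    out = [0] * len(a)
    for a_i, f_i in zip(basis, functionals):
        coeff = f_i(a)
        out = [x + coeff * y for x, y in zip(out, a_i)]
    return out
```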
Lemma 14. Let (A, C, ψ) be a right-right entwining structure, and assume that A is finitely generated and projective as a k-module. Then A∗ ⊗ C ∈ C M(ψ)C A. The structure is given by the formulas
(a∗ ⊗ c)b = Σi ⟨a∗, bψai⟩ a∗i ⊗ cψ (3.70)
ρr(a∗ ⊗ c) = a∗ ⊗ c(1) ⊗ c(2) (3.71)
ρl(a∗ ⊗ c) = Σi ⟨a∗, aiψ⟩ c(1)ψ ⊗ a∗i ⊗ c(2) (3.72)
We will now give alternative descriptions of V and W.
Proposition 78. Let (A, C, ψ) be a right-right entwining structure, and assume that A is finitely generated and projective as a k-module. Then we have an isomorphism
β1 : W1 → W2 = C HomC A(A∗ ⊗ C, C ⊗ A)
β1(e) = Ω, with
Ω(a∗ ⊗ c) = ⟨a∗, e1(c(2))ψ⟩ c(1)ψ ⊗ e2(c(2))
β1−1(Ω) = e, with
e(c) = Σi ai ⊗ (εC ⊗ IA)Ω(a∗i ⊗ c)
Proof. We first prove that β1 is well-defined.
a) β1(e) = Ω is right A-linear: for all a∗ ∈ A∗, c ∈ C and b ∈ A, we have
Ω((a∗ ⊗ c)b) = Σi ⟨a∗, bψai⟩ Ω(a∗i ⊗ cψ)
= Σi ⟨a∗, bψai⟩⟨a∗i, e1((cψ)(2))Ψ⟩ (cψ)(1)Ψ ⊗ e2((cψ)(2))
= ⟨a∗, bψ e1((cψ)(2))Ψ⟩ (cψ)(1)Ψ ⊗ e2((cψ)(2))
(2.3) = ⟨a∗, bψψ e1(c(2)ψ)Ψ⟩ c(1)ψΨ ⊗ e2(c(2)ψ)
(2.1) = ⟨a∗, (bψ e1(c(2)ψ))Ψ⟩ c(1)Ψ ⊗ e2(c(2)ψ)
(3.63) = ⟨a∗, e1(c(2))Ψ⟩ c(1)Ψ ⊗ e2(c(2))b = Ω(a∗ ⊗ c)b
b) β1(e) = Ω is right C-colinear: for all a∗ ∈ A∗ and c ∈ C, we have
ρr(Ω(a∗ ⊗ c)) = ⟨a∗, e1(c(2))ψ⟩ ρr(c(1)ψ ⊗ e2(c(2)))
= ⟨a∗, e1(c(2))ψ⟩ (c(1)ψ)(1) ⊗ e2(c(2))Ψ ⊗ (c(1)ψ)(2)Ψ
(2.3) = ⟨a∗, e1(c(3))ψψ⟩ c(1)ψ ⊗ e2(c(3))Ψ ⊗ c(2)ψΨ
(3.62) = ⟨a∗, e1(c(2))ψ⟩ c(1)ψ ⊗ e2(c(2)) ⊗ c(3)
= Ω(a∗ ⊗ c(1)) ⊗ c(2)
c) β1(e) = Ω is left C-colinear: for all a∗ ∈ A∗ and c ∈ C, we have
ρl(Ω(a∗ ⊗ c)) = ⟨a∗, e1(c(2))ψ⟩ (c(1)ψ)(1) ⊗ (c(1)ψ)(2) ⊗ e2(c(2))
(2.3) = ⟨a∗, e1(c(3))ψψ⟩ c(1)ψ ⊗ c(2)ψ ⊗ e2(c(3))
= Σi ⟨a∗, aiψ⟩⟨a∗i, e1(c(3))ψ⟩ c(1)ψ ⊗ c(2) ⊗ e2(c(3))
= Σi ⟨a∗, aiψ⟩ c(1)ψ ⊗ Ω(a∗i ⊗ c(2))
We leave it to the reader to show that β1−1(Ω) = e satisfies (3.62) and (3.63). Let us check that β1 and β1−1 are each other's inverses.
β1−1(β1(e))(c) = Σi ai ⊗ (εC ⊗ IA)(⟨a∗i, e1(c(2))ψ⟩ c(1)ψ ⊗ e2(c(2)))
= Σi ai ⊗ ⟨a∗i, e1(c(2))ψ⟩ εC(c(1)ψ) ⊗ e2(c(2))
= Σi ⟨a∗i, e1(c)⟩ ai ⊗ e2(c) = e1(c) ⊗ e2(c)
β1(β1−1(Ω))(a∗ ⊗ c) = Σi ⟨a∗, aiψ⟩ c(1)ψ ⊗ (εC ⊗ IA)Ω(a∗i ⊗ c(2))
= (IC ⊗ εC ⊗ IA)(Σi ⟨a∗, aiψ⟩ c(1)ψ ⊗ Ω(a∗i ⊗ c(2)))
= (IC ⊗ εC ⊗ IA)ρl(Ω(a∗ ⊗ c)) = Ω(a∗ ⊗ c)
At the last step we used that (IC ⊗ εC ⊗ IA)ρl(c ⊗ a) = c ⊗ a for all c ∈ C and a ∈ A.
Proposition 79. Let (A, C, ψ) be a right-right entwining structure, and assume that A is finitely generated and projective as a k-module. Then we have an isomorphism
α1 : V1 → V2 = C HomC A(C ⊗ A, A∗ ⊗ C)
α1(ϑ) = Ω, with Ω(c ⊗ a) = Σi ⟨ϑ, c(1) ⊗ aψai⟩ a∗i ⊗ c(2)ψ
α1−1(Ω) = ϑ, with ϑ(c ⊗ a) = ⟨Ω(c ⊗ a), 1A ⊗ εC⟩
Proof. We first show that α1 is well-defined. Take ϑ ∈ V1, and let α1(ϑ) = Ω.
a) Ω is right A-linear. For all a, b ∈ A and c ∈ C, we have
Ω(c ⊗ a)b = Σi,j ⟨ϑ, c(1) ⊗ aψai⟩⟨a∗i, bΨaj⟩ a∗j ⊗ c(2)ψΨ
= Σj ⟨ϑ, c(1) ⊗ aψbΨaj⟩ a∗j ⊗ c(2)ψΨ
(2.1) = Σj ⟨ϑ, c(1) ⊗ (ab)ψaj⟩ a∗j ⊗ c(2)ψ
= Ω(c ⊗ ab)
b) Ω is right C-colinear. For all a ∈ A and c ∈ C, we have
ρr(Ω(c ⊗ a)) = ϑ(c(1) ⊗ aψai)a∗i ⊗ (c(2)ψ)(1) ⊗ (c(2)ψ)(2)
(2.3) = ϑ(c(1) ⊗ aψΨai)a∗i ⊗ c(2)Ψ ⊗ c(3)ψ
= Ω(c(1) ⊗ aψ) ⊗ c(2)ψ
c) Ω is left C-colinear. For all a ∈ A and c ∈ C, we have
ρl(Ω(c ⊗ a)) = Σi,j ϑ(c(1) ⊗ aψai)⟨a∗i, ajΨ⟩ (c(2)ψ)(1)Ψ ⊗ a∗j ⊗ (c(2)ψ)(2)
(2.3) = Σi,j ϑ(c(1) ⊗ aψψai)⟨a∗i, ajΨ⟩ c(2)ψΨ ⊗ a∗j ⊗ c(3)ψ
= Σj ϑ(c(1) ⊗ aψψajΨ) c(2)ψΨ ⊗ a∗j ⊗ c(3)ψ
(2.1) = Σj ϑ(c(1) ⊗ (aψaj)ψ) c(2)ψ ⊗ a∗j ⊗ c(3)ψ
(3.61) = Σj ϑ(c(2) ⊗ aψaj) c(1) ⊗ a∗j ⊗ c(3)ψ
= c(1) ⊗ Ω(c(2) ⊗ a)
Conversely, given Ω, we have to show that α1−1(Ω) = ϑ satisfies (3.61). Fix c ⊗ a ∈ C ⊗ A, and write
Ω(c ⊗ a) = Σl b∗l ⊗ dl ∈ A∗ ⊗ C
Ω is right C-colinear, so
Ω(c(1) ⊗ aψ) ⊗ c(2)ψ = Σl b∗l ⊗ dl(1) ⊗ dl(2)
Ω is left C-colinear, so
c(1) ⊗ Ω(c(2) ⊗ a) = Σl ⟨b∗l, aiψ⟩ dl(1)ψ ⊗ a∗i ⊗ dl(2)
and we compute that
ϑ(c(2) ⊗ a)c(1) = ⟨Ω(c(2) ⊗ a), 1A ⊗ ε⟩ c(1) = Σl ⟨b∗l, aiψ⟩⟨a∗i, 1⟩⟨ε, dl(2)⟩ dl(1)ψ
= Σl ⟨b∗l, 1ψ⟩ dlψ
= Σl ⟨b∗l, 1⟩ dl
= ⟨Ω(c(1) ⊗ aψ), 1A ⊗ εC⟩ c(2)ψ = ϑ(c(1) ⊗ aψ)c(2)ψ
and (3.61) follows. Finally, let us show that α1 and α1−1 are each other's inverses.
α1−1(α1(ϑ))(c ⊗ a) = ⟨ϑ(c(1) ⊗ aψai)a∗i ⊗ c(2)ψ, 1A ⊗ εC⟩ = ϑ(c(1) ⊗ aai)⟨a∗i, 1A⟩ = ϑ(c ⊗ a)
We know that α1(α1−1(Ω)) is right A-linear. Hence it suffices to show that α1(α1−1(Ω))(c ⊗ 1) = c ⊗ 1 for all c ∈ C. From (3.70), we compute
⟨(a∗ ⊗ c)b, 1A ⊗ εC⟩ = ⟨a∗, b⟩ε(c) = ⟨a∗ ⊗ c, b ⊗ εC⟩
Now write Ω(c ⊗ 1) = Σr a∗r ⊗ cr. We compute
α1(α1−1(Ω))(c ⊗ 1) = ⟨Ω(c(1) ⊗ ai), 1A ⊗ εC⟩ a∗i ⊗ c(2)
= ⟨Ω(c(1) ⊗ 1), ai ⊗ εC⟩ a∗i ⊗ c(2)
= ⟨Ω(c ⊗ 1)[0], ai ⊗ εC⟩ a∗i ⊗ Ω(c ⊗ 1)[1]
= Σr ⟨a∗r, ai⟩⟨ε, cr(1)⟩ a∗i ⊗ cr(2)
= Σr a∗r ⊗ cr = Ω(c ⊗ 1)
Theorem 39. Let (A, C, ψ) be a right-right entwining structure, and assume that A is finitely generated and projective as a k-module. With notation as above, we have the following properties:
1. F′ is separable if and only if there exists Ω ∈ V2 such that ⟨Ω(c ⊗ 1), 1A ⊗ εC⟩ = εC(c) for all c ∈ C. 2. G′ is separable if and only if there exists Ω ∈ W2 such that
Σi ai(εC ⊗ IA)Ω(a∗i ⊗ c) = εC(c)1
for all c ∈ C. 3. The following assertions are equivalent: a. (F′, G′) is a Frobenius pair; b. there exist e ∈ W1, ϑ ∈ V1 such that Ω = β1(e) and Ω′ = α1(ϑ) are each other's inverses; c. A∗ ⊗ C and C ⊗ A are isomorphic objects in C M(ψ)C A. Proof. We will only prove 3a. ⇒ 3b. First we show that Ω is a left inverse of Ω′. Since Ω ◦ Ω′ is right A-linear, it suffices to show that
Ω(Ω′(c ⊗ 1)) = Σi ϑ(c(1) ⊗ ai)Ω(a∗i ⊗ c(2))
= Σi ϑ(c(1) ⊗ ai)⟨a∗i, e1(c(3))ψ⟩ c(2)ψ ⊗ e2(c(3))
= ϑ(c(1) ⊗ e1(c(3))ψ) c(2)ψ ⊗ e2(c(3))
(3.61) = ϑ(c(2) ⊗ e1(c(3))) c(1) ⊗ e2(c(3))
(3.66) = c ⊗ 1
Now we will show that Ω is a right inverse of Ω′. Using the fact that Ω′ ◦ Ω is right C-colinear, we find that it suffices to show that (IA∗ ⊗ εC)(Ω′(Ω(a∗ ⊗ c))) = εC(c)a∗ for all c ∈ C and a∗ ∈ A∗. Both sides of the equation are in A∗, so we are done if we can show that both sides are equal after we apply them to an arbitrary a ∈ A. Now observe that
Ω′(Ω(a∗ ⊗ c)) = Σi ⟨a∗, e1(c(2))ψ⟩ ϑ((c(1)ψ)(1) ⊗ e2(c(2))Ψai) a∗i ⊗ (c(1)ψ)(2)Ψ
hence
(IA∗ ⊗ εC)(Ω′(Ω(a∗ ⊗ c)))(a) = ⟨a∗, e1(c(2))ψ⟩ ϑ(c(1)ψ ⊗ e2(c(2))a)
(3.63) = ⟨a∗, (aΨe1(c(2)Ψ))ψ⟩ ϑ(c(1)ψ ⊗ e2(c(2)Ψ))
(2.1) = ⟨a∗, aΨψ e1(c(2)Ψ)ψ⟩ ϑ(c(1)ψψ ⊗ e2(c(2)Ψ))
(2.3) = ⟨a∗, aψ e1((cψ)(2))Ψ⟩ ϑ((cψ)(1)Ψ ⊗ e2((cψ)(2)))
(3.67) = ⟨a∗, aψ⟩ ε(cψ) = ⟨a∗, a⟩ε(c)
as needed.
3.5 The general induction functor
Let (α, γ) : (A, C, ψ) → (A′, C′, ψ′) be a morphism of (right-right) entwining structures. The results of the previous two Sections can be extended to the general induction functor
F : C = M(ψ)C A → C′ = M(ψ′)C′ A′
and its right adjoint G (see Section 2.5). The idea is the same: we study the natural transformations V = Nat(GF, 1C) and W = Nat(1C′, FG). Under appropriate assumptions, ν ∈ V is completely determined by νC⊗A, and νC⊗A is left and right A-linear and C-colinear; more precisely, we get a bijective correspondence
V ≅ C A HomC A(GF(C ⊗ A), C ⊗ A)
and similar results apply to W. The results in this Section have been presented in [43] (in the case of Doi-Koppinen Hopf modules). In order to avoid technical complications, we will assume that the entwining map ψ is bijective, and write ψ−1 = ϕ. The general case, where ψ is not necessarily bijective, is more technical, and has been discussed by Brzeziński; we refer to [25] for more detail. We have already seen that A ⊗ C is an entwined module, often playing a particular role. However, A ⊗ C is not a generator for the category of entwined modules; it has a weaker property, and we can use this to show that ν ∈ V is completely determined by νA⊗C. This is what we will discuss first.
T-generators Let T : C → D be a functor between two abelian categories. We call D ∈ D a T-generator if D generates all T(C), with C ∈ C. This means that if we have a morphism f : T(C) → M in D such that f ◦ g = 0 for all g : D → T(C) in D, then f = 0. If we put I = HomD(D, T(C)), then D(I) → T(C) is epic.
Theorem 40. Consider the morphism (IA, εC) : (A, C, ψ) → (A, k, IA) in E••(k), let (F, G) be the corresponding adjoint pair of functors, and write T = GF. Then A ⊗ C is a T-generator. Proof. Let f : T(M) = M ⊗ C → N in M(ψ)C A be such that f ◦ g = 0 for all g : A ⊗ C → M ⊗ C. Take m ∈ M, and consider gm : A ⊗ C → M ⊗ C, gm(a ⊗ c) = am ⊗ c. Then gm is right A-linear and C-colinear, and for all c ∈ C, we have that
f(m ⊗ c) = (f ◦ gm)(1A ⊗ c) = 0, hence f = 0.
Theorem 41. Let C and D be abelian categories, H, H′ : C → D additive functors, T : C → C a functor, and ρ : 1C → T a natural transformation. Assume that the following three conditions hold: 1. C is a T-generator for C; 2. H preserves epimorphisms; 3. H′(ρM) is monic, for all M ∈ C. Then any natural transformation ν : H → H′ is completely determined by νC : H(C) → H′(C).
Proof. Let ν, ν′ : H → H′ be two natural transformations such that νC = ν′C. We have to show that νM = ν′M for all M ∈ C. C is a T-generator, so we have an epimorphism g : C(I) → T(M). From the naturality of ν and ν′, it follows that we have commutative diagrams
H(C(I)) --νC(I)--> H′(C(I))
H(g)↓                  ↓H′(g)
H(T(M)) --νT(M)--> H′(T(M))
and similarly for ν′. Now
νC(I) = (νC)(I) = (ν′C)(I) = ν′C(I)
hence
νT(M) ◦ H(g) = ν′T(M) ◦ H(g)
and
νT(M) = ν′T(M)
since H(g) is epic. Now consider ρM : M → T(M), and the commutative diagrams
H(M) --νM--> H′(M)
H(ρM)↓           ↓H′(ρM)
H(T(M)) --νT(M)--> H′(T(M))
and similarly for ν′. We see that
H′(ρM) ◦ νM = H′(ρM) ◦ ν′M
and νM = ν′M since H′(ρM) is monic.
Corollary 18. Under the assumptions of Theorem 41, Nat(F, F′) is a set, and the map
Nat(F, F′) → HomD(F(C), F′(C)), ν → νC
is injective.
Separability of F Let (α, γ) : (A, C, ψ) → (A′, C′, ψ′) be a morphism in E••(k). We will write C = M(ψ)C A and C′ = M(ψ′)C′ A′. (F, G) will be the associated pair of adjoint functors between C and C′. The functor T : C → C will be as above, so that A ⊗ C is a T-generator for C. Our aim is now to compute V = Nat(GF, 1C).
Proposition 80. If G is an exact functor (e.g. C is right C′-coflat), then ν is completely determined by νA⊗C. Proof. In Theorem 41, take C = D = M(ψ)C A, H = GF, H′ = 1C, and ρ : 1C → T given by the coaction
ρM : M → T(M) = M ⊗ C for all M ∈ C. A ⊗ C is a T-generator, and 1C(ρM) = ρM is monic for all M. Indeed, if ρM(m) = m[0] ⊗ m[1] = 0, then m = ε(m[1])m[0] = 0. Furthermore F preserves epimorphisms since the tensor product is right exact, and G is exact by assumption, so GF preserves epimorphisms. Thus the assumptions of Theorem 41 are satisfied, and the result follows. If G is exact, then we know from Corollary 18 that V is a set. Actually V is a k-algebra. Addition and scalar multiplication are defined in the obvious way, and the multiplication is given by
ν • ν′ = ν ◦ η ◦ ν′ (3.73)
Lemma 15. Let ν : GF → 1C be a natural transformation, M ∈ C, and N ∈ kM. N ⊗ M is an object of C: ρrN⊗M = IN ⊗ ρrM; (n ⊗ m)a = n ⊗ ma. If N is k-flat, then GF(N ⊗ M) = N ⊗ GF(M) and νN⊗M = IN ⊗ νM.
Proof. For all n ∈ N, we consider the map fn : M → N ⊗ M, fn(m) = n ⊗ m. Then fn ∈ C, and the naturality of ν implies that the following diagram is commutative:
GF(M) --νM--> M
GF(fn)↓         ↓fn
GF(N ⊗ M) --νN⊗M--> N ⊗ M
Using the fact that N is k-flat, we obtain GF (N ⊗ M ) = (N ⊗ M ⊗A A )C C ∼ = N ⊗ GF (M ) For x ∈ GF (M ), we have GF (fn )(x) = n ⊗ x, and the commutativity of the diagram tells us that νN ⊗M (n ⊗ x) = n ⊗ νM (x) Theorem 42. Assume that (B, D, φ) ∈ •• E(k) is a left-left entwining strucC ture, and that M ∈ D B M(φ, ψ)A is a two-sided entwined module. Then νM is left D-colinear and left B-linear. In particular, if ψ is invertible, then −1 νC⊗A ∈ C , ψ)C A M(ψ A
Proof. We automatically have that νM ∈ C, i.e. νM is right A-linear and right C-colinear. Let us prove first that νM is left B-linear. For any b ∈ B, we consider the map fb : M → M, fb(m) = bm. From conditions 1) and 4) in the definition of two-sided entwined modules, we find that fb ∈ C, and we have a commutative diagram
GF(M) --νM--> M
GF(fb)↓         ↓fb
GF(M) --νM--> M
Applying the diagram to x = Σi (mi ⊗ ai) ⊗ ci ∈ GF(M), we find
bνM(x) = fb(νM(x)) = νM(GF(fb)(x)) = νM(Σi (bmi ⊗ ai) ⊗ ci) = νM(bx)
and νM is left B-linear. To show that νM is left D-colinear, we consider the left coaction ρl : M → D ⊗ M. We will apply Lemma 15 with N = D. Conditions 2) and 4) in the definition of a two-sided entwined module entail that ρl ∈ C, so we have a commutative diagram
GF(M) --νM--> M
GF(ρl)↓         ↓ρl
GF(D ⊗ M) --νD⊗M--> D ⊗ M
Recall from Lemma 15 that GF(D ⊗ M) = D ⊗ GF(M), and νD⊗M = ID ⊗ νM. For x = Σi (mi ⊗ ai) ⊗ ci ∈ GF(M), we obtain
GF(ρl)(x) = Σi (mi[−1] ⊗ mi[0] ⊗ ai) ⊗ ci
and the commutative diagram tells us that x[−1] ⊗ νM(x[0]) = ρl(νM(x)), proving that νM is left D-colinear.
Corollary 19. Let (α, γ) : (A, C, ψ) → (A′, C′, ψ′) be a morphism of entwining structures, with ψ invertible. We then have a well-defined map
f : V = Nat(GF, 1C) → V1 = C A HomC A(GF(C ⊗ A), C ⊗ A)
given by f(ν) = νC⊗A. If G is exact, then f is injective. Next we present an easier description of V1.
Proposition 81. As above, let (α, γ) be a morphism of entwining structures, and assume that ψ is invertible. Let V2 consist of all left and right A-linear maps λ : GF(C ⊗ A) → A satisfying
Σi λ((ci ⊗ ai) ⊗ di(1)) ⊗ di(2) = Σi λ((ci(2) ⊗ ai) ⊗ di)ψ ⊗ ci(1)ψ (3.74)
for all Σi (ci ⊗ ai) ⊗ di ∈ GF(C ⊗ A). We have a k-linear isomorphism
f1 : V1 → V2; f1(ν) = (ε ⊗ IA) ◦ ν
Proof. λ = f1(ν) is left and right A-linear since ν and ε ⊗ IA are left and right A-linear. Take Σi (ci ⊗ ai) ⊗ di ∈ GF(C ⊗ A), and write
ν(Σi (ci ⊗ ai) ⊗ di) = Σj cj ⊗ aj
Using the left C-colinearity of ν, we obtain
Σi ci(1) ⊗ ν((ci(2) ⊗ ai) ⊗ di) = Σj cj(1) ⊗ cj(2) ⊗ aj
and, applying εC to the second factor,
Σi ci(1) ⊗ λ((ci(2) ⊗ ai) ⊗ di) = ν(Σi (ci ⊗ ai) ⊗ di)
ν is also right C-colinear, hence
Σi ν((ci ⊗ ai) ⊗ di(1)) ⊗ di(2) = Σj cj(1) ⊗ ajψ ⊗ cj(2)ψ
Applying εC to the first factor, we obtain
Σi λ((ci ⊗ ai) ⊗ di(1)) ⊗ di(2) = Σj ajψ ⊗ cjψ
and we have shown that λ satisfies (3.74), and f1 is well-defined. The inverse of f1 is given by
f1−1(λ)(Σi (ci ⊗ ai) ⊗ di) = Σi ci(1) ⊗ λ((ci(2) ⊗ ai) ⊗ di)
It is obvious that ν = f1−1(λ) is left C-colinear and right A-linear. ν is right C-colinear since
Σi ν((ci ⊗ ai) ⊗ di(1)) ⊗ di(2) = Σi ci(1) ⊗ λ((ci(2) ⊗ ai) ⊗ di(1)) ⊗ di(2)
(3.74) = Σi ci(1) ⊗ λ((ci(3) ⊗ ai) ⊗ di)ψ ⊗ ci(2)ψ
(2.66) = ρr(Σi ci(1) ⊗ λ((ci(2) ⊗ ai) ⊗ di))
= ρr(ν(Σi (ci ⊗ ai) ⊗ di))
ν is left A-linear since
ν(a(Σi (ci ⊗ ai) ⊗ di))
(2.76) = ν(Σi (ciϕ ⊗ α(aϕ)ai) ⊗ di)
= Σi (ciϕ)(1) ⊗ λ(((ciϕ)(2) ⊗ α(aϕ)ai) ⊗ di)
(2.8) = Σi ci(1)ϕ ⊗ λ((ci(2)φ ⊗ α(aϕφ)ai) ⊗ di)
(2.76) = Σi ci(1)ϕ ⊗ λ(aϕφ((ci(2)φ ⊗ ai) ⊗ di))
= a Σi ci(1) ⊗ λ((ci(2) ⊗ ai) ⊗ di)
= aν(Σi (ci ⊗ ai) ⊗ di)
We leave it to the reader to show that this map is indeed inverse to f1.
Theorem 43. Let (α, γ) : (A, C, ψ) → (A′, C′, ψ′) be a morphism of entwining structures. Assume that ψ is invertible, and let V, V1 and V2 be defined as above. If C is left C′-coflat, then V, V1 and V2 are isomorphic as k-modules.
Proof. In view of the previous results, it suffices to show that f1 ◦ f : V → V2 is surjective. Starting from λ ∈ V2, we have to construct a natural transformation ν, that is, for all M ∈ M(ψ)C A, we have to construct a morphism
νM : GF(M) = (M ⊗A A′)□C′ C → M
First we remark that the map
φ : M ⊗A A′ → M ⊗A (C′ ⊗ A′); φ(m ⊗A a′) = m[0] ⊗A (m[1] ⊗ a′)
is well-defined. Indeed,
φ(ma ⊗A a′) = (ma)[0] ⊗A ((ma)[1] ⊗ a′) = m[0]aψ ⊗A (m[1]ψ ⊗ a′) = m[0] ⊗A (m[1]ψϕ ⊗ α(aψϕ)a′) = φ(m ⊗A α(a)a′)
From the fact that C is left C′-coflat, and using Lemma 3, we find
(M ⊗A (C′ ⊗ A′))□C′ C ≅ M ⊗A ((C′ ⊗ A′)□C′ C)
and we can consider the map νM = (IM ⊗A λ) ◦ (φ□C′ IC) : GF(M) → M ⊗A A ≅ M given by
νM(Σi (mi ⊗ ai) ⊗ ci) = Σi mi[0] λ((mi[1] ⊗ ai) ⊗ ci)
Let us first show that νM is right A-linear:
νM((Σi (mi ⊗ ai) ⊗ ci)a) = νM(Σi (mi ⊗ aiα(aψ)) ⊗ ciψ)
= Σi mi[0] λ((mi[1] ⊗ aiα(aψ)) ⊗ ciψ)
(2.74) = Σi mi[0] λ((mi[1] ⊗ ai) ⊗ ci)a
= νM(Σi (mi ⊗ ai) ⊗ ci)a
νM is right C-colinear since
ρr(νM(Σi (mi ⊗ ai) ⊗ ci)) = Σi mi[0] λ((mi[2] ⊗ ai) ⊗ ci)ψ ⊗ mi[1]ψ
(3.74) = Σi mi[0] λ((mi[1] ⊗ ai) ⊗ ci(1)) ⊗ ci(2)
= νM(Σi (mi ⊗ ai) ⊗ ci(1)) ⊗ ci(2)
as needed. Let us show that ν is natural. Let g : M → N be a morphism in M(ψ)C A, and take x = Σi (mi ⊗ ai) ⊗ ci ∈ (M ⊗A A′)□C′ C. Then
νN(GF(g)(x)) = νN(Σi (g(mi) ⊗ ai) ⊗ ci)
= Σi g(mi[0]) λ((mi[1] ⊗ ai) ⊗ ci)
= g(Σi mi[0] λ((mi[1] ⊗ ai) ⊗ ci)) = g(νM(x))
Finally, we have to show that f1(f(ν)) = λ. Indeed,
(εC ⊗ IA)(νC⊗A(Σi (ci ⊗ ai) ⊗ di))
= (εC ⊗ IA)(Σi (ci(1) ⊗ 1)λ((ci(2) ⊗ ai) ⊗ di))
= λ(Σi (ci ⊗ ai) ⊗ di)
as needed.
Corollary 20. Let (α, γ) : (A, C, ψ) → (A′, C′, ψ′) be a morphism of entwining structures, and assume that ψ is invertible, and that C is left C′-coflat. The induction functor F : M(ψ)C A → M(ψ′)C′ A′ is separable if and only if there exists λ ∈ V2 such that
λ((c(1) ⊗ 1A′) ⊗ c(2)) = ε(c)1A (3.75)
for all c ∈ C. F is full and faithful if and only if ηC⊗A is an isomorphism.
Proof. If F is separable, then there exists ν ∈ V such that ν ◦ η is the identity natural transformation; in particular
νC⊗A ◦ ηC⊗A = IC⊗A
Write ν′ = f(ν) and λ = f1(ν′), and apply both sides to c ⊗ 1A:
ν′((c(1) ⊗ α((1A)ψ)) ⊗ c(2)ψ) = c ⊗ 1A
and (3.75) follows after we apply ε to the first factor. Conversely, if λ ∈ V2 satisfies (3.75), and ν is the natural transformation corresponding to λ (see Theorem 43), then
νM(ηM(m)) = νM((m[0] ⊗ 1A′) ⊗ m[1]) = m[0]λ((m[1] ⊗ 1A′) ⊗ m[2]) = m[0]ε(m[1])1A = m
The second statement is proved in the same way.
Separability of the adjoint of the induction functor As before, let (α, γ) : (A, C, ψ) → (A′, C′, ψ′) be a morphism of entwining structures. We will now assume that ψ is invertible, and study the k-module of natural transformations
ζ ∈ W = Nat(1C′, FG)
The results below are largely dual to the ones presented above, so we will be somewhat more sketchy. However, there are some differences in the technical assumptions.
Lemma 16. Let (α, γ) : (A, C, ψ) → (A′, C′, ψ′) be a morphism in E••(k). Assume that ψ is bijective, and that FG(ρM) is a monomorphism, for all M ∈ M(ψ′)C′ A′. Then any ζ ∈ W is completely determined by ζA′⊗C′.
Proof. In Theorem 41, we take C and D equal to M(ψ′)C′ A′. For ρ, we take the natural transformation given by the right C′-coaction; for H we take the identity functor, and for H′ we take FG. Then all the conditions of Theorem 41 are fulfilled, and our result follows.
Lemma 17. With assumptions as above, let ζ ∈ W, and M ∈ M(ψ′)C′ A′. For any flat k-module N, let N ⊗ M ∈ M(ψ′)C′ A′ be the entwined module with structure induced by the structure on M. Then
FG(N ⊗ M) ≅ N ⊗ FG(M)
in M(ψ′)C′ A′, and ζN⊗M = IN ⊗ ζM. Proof. We omit the proof, as it is similar to the proof of Lemma 15.
Proposition 82. Assume that (B′, D′, φ′) ∈ ••E(k), and that M ∈ D′ B′ M(φ′, ψ′)C′ A′. Then ζM is left D′-colinear and left B′-linear. In particular
ζA′⊗C′ ∈ C′ A′ M(ψ′−1, ψ′)C′ A′
and, under the assumption that FG(ρM) is injective for all M ∈ M(ψ′)C′ A′, we have a well-defined monomorphism
f : W = Nat(1C′, FG) → W1 = C′ A′ HomC′ A′(A′ ⊗ C′, FG(A′ ⊗ C′))
Proof. We omit the details, since the proof is very similar to the proof of Theorem 42. W1 can be described in an easier way. Recall first that
FG(A′ ⊗ C′) = ((A′ ⊗ C′)□C′ C) ⊗A A′ ≅ (A′ ⊗ C) ⊗A A′ (3.76)
with structure
b′((a′ ⊗ c) ⊗A a′′)b′′ = (b′a′ ⊗ c) ⊗A a′′b′′ (3.77)
ρr((a′ ⊗ c) ⊗A a′′) = ((a′ ⊗ c(1)) ⊗A a′′ψ′) ⊗ γ(c(2))ψ′ (3.78)
ρl((a′ ⊗ c) ⊗A a′′) = γ(c(1))ϕ′ ⊗ ((a′ϕ′ ⊗ c(2)) ⊗A a′′) (3.79)
Given ζ ∈ W1, we define e : C′ → (A′ ⊗ C) ⊗A A′ by e(c′) = ζ(1A′ ⊗ c′). Obviously e is left and right C′-colinear, and
e(c′)a′ = ζ((1 ⊗ c′)a′) = ζ(a′ψ′ ⊗ c′ψ′) = a′ψ′ e(c′ψ′) (3.80)
Let W2 be the k-module consisting of C′-bicolinear maps e : C′ → (A′ ⊗ C) ⊗A A′ satisfying (3.80). Conversely, given e ∈ W2, we define ζ ∈ W1 by ζ(a′ ⊗ c′) = a′e(c′). We leave it to the reader to show that this defines a bijective correspondence between the maps in W1 and W2:
Proposition 83. The k-modules W1 and W2 defined above are isomorphic.
The following result can be viewed as the dual of Theorem 43.
Theorem 44. Let (α, γ) : (A, C, ψ) → (A′, C′, ψ′) be a morphism in E••(k). We assume that ψ′ is bijective, A′ is flat as a left A-module, and FG(ρM) is injective for all M ∈ M(ψ′)C′ A′. Then f : W → W1 ≅ W2 is an isomorphism.
Remark 8. The condition that FG(ρM) is injective is automatically fulfilled if k is a field: G is then left exact, and F is exact by assumption.
Proof. In view of Propositions 82 and 83, we have to show that f1 ◦ f : W → W2 is surjective. For e ∈ W2, we will construct a natural transformation ζ : 1C′ → FG. We proceed in different steps.
Step 1. For any M′ ∈ M(ψ′)C′ A′, we have a well-defined map φ : M′ □C′ (A′ ⊗ C) → M′ □C′ C given by
φ(Σi m′i ⊗ (a′i ⊗ ci)) = Σi m′ia′i ⊗ ci
Indeed, if Σi m′i ⊗ (a′i ⊗ ci) ∈ M′ □C′ (A′ ⊗ C), then
Σi m′i[0] ⊗ m′i[1] ⊗ (a′i ⊗ ci) = Σi m′i ⊗ γ(ci(1))ϕ′ ⊗ (a′iϕ′ ⊗ ci(2))
We apply ψ′ to the two middle factors, and then we let the second factor act on the first one; using the fact that ϕ′ is the inverse of ψ′, we obtain
Σi m′i[0]a′iψ′ ⊗ m′i[1]ψ′ ⊗ ci = Σi m′ia′i ⊗ γ(ci(1)) ⊗ ci(2)
which means exactly that Σi m′ia′i ⊗ ci ∈ M′ □C′ C.
Step 2. A′ is flat as a left A-module, so, by Lemma 3,
M′ □C′ ((A′ ⊗ C) ⊗A A′) ≅ (M′ □C′ (A′ ⊗ C)) ⊗A A′
Step 3. For M′ ∈ M(ψ′)C′ A′, we define ζM′ as the composition
ζM′ : M′ ≅ M′ □C′ C′ --IM′□e--> M′ □C′ ((A′ ⊗ C) ⊗A A′) ≅ (M′ □C′ (A′ ⊗ C)) ⊗A A′ --φ⊗IA′--> (M′ □C′ C) ⊗A A′
More explicitly, this means the following: if we write, according to our traditions,
e(c′) = (e1(c′) ⊗ eC(c′)) ⊗ e2(c′) ∈ (A′ ⊗ C) ⊗A A′
then
ζM′(m′) = (m′[0]e1(m′[1]) ⊗ eC(m′[1])) ⊗ e2(m′[1]) ∈ (M′ □C′ C) ⊗A A′
We will use the following more transparent notation: ζM′(m′) = m′[0]e(m′[1]). Using this notation, it is not hard to see that ζM′ is right A′-linear and right C′-colinear, and that ζ is natural:
Step 4. ζM′ is right C′-colinear.
ρr(ζM′(m′)) = (m′[0]e1(m′[1]) ⊗ eC(m′[1])(1)) ⊗ e2(m′[1])ψ′ ⊗ γ(eC(m′[1])(2))ψ′
(3.78) = m′[0]ρr(e(m′[1]))
= m′[0]e(m′[1]) ⊗ m′[2]    (e is right C′-colinear)
= ζM′(m′[0]) ⊗ m′[1]
Step 5. ζM′ is right A′-linear.
ζM′(m′a′) = (m′a′)[0] e((m′a′)[1]) = m′[0]a′ψ′ e(m′[1]ψ′)
(3.80) = m′[0]e(m′[1])a′ = ζM′(m′)a′
Step 6. ζ is natural: for f : M′ → N′ in M(ψ′)C′ A′, we have
FG(f)(ζM′(m′)) = f(m′[0])e(m′[1]) = ζN′(f(m′))
where we use implicitly the fact that f is right C′-colinear and A′-linear.
Step 7. f1(f(ζ)) = e.
ζA′⊗C′(1A′ ⊗ c′) = (e1(c′(2))ψ′ ⊗ c′(1)ψ′) ⊗ eC(c′(2)) ⊗ e2(c′(2))
Using the identification ((A′ ⊗ C′)□C′ C) ⊗A A′ ≅ (A′ ⊗ C) ⊗A A′, we find
ζA′⊗C′(1A′ ⊗ c′) = (e1(c′(2))ψ′ ⊗ ε(c′(1)ψ′)eC(c′(2))) ⊗ e2(c′(2)) = e(c′)
as needed.
Corollary 21. Under the assumptions of Theorem 44, the functor G is separable if and only if there exists e ∈ W2 such that
εC(eC(c′))e1(c′)e2(c′) = εC′(c′)1A′ (3.81)
for all c′ ∈ C′. G is full and faithful if and only if εA′⊗C′ is an isomorphism.
Proof. If G is separable, then there exists ζ ∈ W such that ε ◦ ζ is the identity natural transformation; in particular εA′⊗C′ ◦ ζA′⊗C′ = IA′⊗C′. Applying both sides to 1A′ ⊗ c′ ∈ A′ ⊗ C′, and using (2.62), we obtain (3.81). Conversely, if (3.81) holds, then we compute easily that
εM′(ζM′(m′)) = εM′(m′[0]e(m′[1])) = εM′((m′[0]e1(m′[1]) ⊗ eC(m′[1])) ⊗ e2(m′[1]))
(2.62) = m′[0]εC(eC(m′[1]))e1(m′[1])e2(m′[1])
(3.81) = m′[0]εC′(m′[1])1A′ = m′
for all m′ ∈ M′ ∈ M(ψ′)C′ A′.
Corollary 22. Under the assumptions of Theorems 43 and 44, that is, ψ and ψ′ are bijective, C is left C′-coflat, and A′ is left A-flat, the functors F and G are inverse equivalences if and only if ηC⊗A and εA′⊗C′ are bijective.
Remark 9. The condition that FG(ρM) is monic for all M is automatically fulfilled since F and G are exact functors. In the case where k is a field, Corollary 22 has been shown using different methods in [51].
4 Applications
In this Chapter, we will apply the results of the previous two Chapters to some special categories of modules, such as relative Hopf modules, Yetter-Drinfeld modules, Long dimodules and graded modules. We will find new proofs of some existing results, and our techniques also allow us to find new results. We also discuss how Galois theory and descent theory can be introduced using entwined modules and comodules over a coring.
4.1 Relative Hopf modules
Let H be a bialgebra with twisted antipode, and A a right H-comodule algebra. Then (H, A, H) ∈ HA••(k), and we have a corresponding entwining structure (A, H, ψ), with ψ : H ⊗ A → A ⊗ H given by the formula
   ψ(h ⊗ a) = a[0] ⊗ ha[1]
An integral ϕ : H → A is a right H-colinear map. This means that
   ϕ(h(1)) ⊗ h(2) = ϕ(h)[0] ⊗ ϕ(h)[1]
ϕ is called a total integral [69] if, in addition, ϕ(1H) = 1A. In the situation where A = k, we recover Sweedler's definition of an integral in H∗:
   ϕ(h(1)) h(2) = ϕ(h) 1H
As we have seen, the separability of the forgetful functor M(H)HA → MA is determined by the k-module V5 (see Proposition 70). In our particular situation, V5 consists of the k-linear maps ϑ : H ⊗ H → A satisfying the conditions
   ϑ(h ⊗ k) a = a[0] ϑ(ha[1] ⊗ ka[2])   (4.1)
   ϑ(h ⊗ k(1)) ⊗ k(2) = ϑ(h(2) ⊗ k)[0] ⊗ h(1) ϑ(h(2) ⊗ k)[1]   (4.2)
(cf. (3.49-3.50)). We will now see that there is a close relationship between V5 and the space of integrals, and consequently how integrals can be used to determine the separability of the forgetful functor.
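To make the entwining concrete, here is a standard illustration (our own example, not spelled out in the text at this point; all identifications below are the usual ones for group algebras).

```latex
% Illustrative example (assumed, standard): H = kG for a group G.
% A right kG-comodule algebra is the same thing as a G-graded algebra
% A = \bigoplus_{g \in G} A_g, with coaction \rho^r(a) = a \otimes g for a \in A_g.
% The entwining map \psi(h \otimes a) = a_{[0]} \otimes h a_{[1]} then reads, for
% h \in G and homogeneous a \in A_g:
\psi(h \otimes a) = a \otimes hg .
% Colinearity forces an integral \varphi : kG \to A to satisfy
% \varphi(g) \in A_g for every g \in G, and \varphi is total iff \varphi(e) = 1_A.
```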
S. Caenepeel, G. Militaru, and S. Zhu: LNM 1787, pp. 159–213, 2002. c Springer-Verlag Berlin Heidelberg 2002
Theorem 45. Let H be a Hopf algebra with twisted antipode, and A a right H-comodule algebra. We have a map p : V5 → HomH(H, A), p(ϑ) = ϕ, given by the formula
   ϕ(h) = ϑ(1 ⊗ h)   (4.3)
Given an integral ϕ, we define s(ϕ) = ϑ : H ⊗ H → A by
   ϑ(h ⊗ k) = ϕ(kS(h))   (4.4)
Then ϑ ∈ V5 if and only if
   ρr(ϕ(H)) ⊂ Z(A ⊗ H)   (4.5)
Proof. p(ϑ) = ϕ is right H-colinear, since
   ϕ(h(1)) ⊗ h(2) = ϑ(1 ⊗ h(1)) ⊗ h(2) = ϑ(1 ⊗ h)[0] ⊗ ϑ(1 ⊗ h)[1] = ρr(ϕ(h))   (by (4.2))
Before we prove the second statement, we observe that (4.5) is equivalent to the following two conditions, for all g, h ∈ H:
   ϕ(H) ⊂ Z(A)   (4.6)
   (1 ⊗ g) ρr(ϕ(h)) = ρr(ϕ(h)) (1 ⊗ g)   (4.7)
Assuming that (4.6-4.7) hold, and defining ϑ using (4.4), we have
   a[0] ϑ(ha[1] ⊗ ka[2]) = a[0] ϕ(ka[2] S(ha[1])) = a ϕ(kS(h)) = ϑ(h ⊗ k) a
and
   ϑ(h(2) ⊗ k)[0] ⊗ h(1) ϑ(h(2) ⊗ k)[1] = ϕ(kS(h(2)))[0] ⊗ h(1) ϕ(kS(h(2)))[1]
   = ϕ(kS(h(2)))[0] ⊗ ϕ(kS(h(2)))[1] h(1)   (by (4.7))
   = ϕ(k(1) S(h(3))) ⊗ k(2) S(h(2)) h(1) = ϕ(k(1) S(h)) ⊗ k(2) = ϑ(h ⊗ k(1)) ⊗ k(2)
and it follows that ϑ ∈ V5. Conversely, if ϑ = s(ϕ) ∈ V5, then (4.1) immediately implies that ϕ(H) ⊂ Z(A). Using (4.2), we find that
   ϕ(k(1) S(h)) ⊗ k(2) = ϕ(k(1) S(h(3))) ⊗ h(1) k(2) S(h(2))
and this yields
   ϕ(k(1)) ⊗ k(2) h = ϕ(k(1) h(2) S(h(1))) ⊗ k(2) h(3)
   = ϕ((kh(2))(1) S(h(1))) ⊗ (kh(2))(2)
   = ϕ((kh(4))(1) S(h(3))) ⊗ h(1) (kh(4))(2) S(h(2))
   = ϕ(k(1) h(4) S(h(3))) ⊗ h(1) k(2) h(5) S(h(2))
   = ϕ(k(1)) ⊗ h k(2)
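The retraction property of p, used implicitly here and stated explicitly in Theorem 46 below, is a one-line check (our addition):

```latex
% Routine verification: p is a retraction of s, since
p\bigl(s(\varphi)\bigr)(h) = s(\varphi)(1_H \otimes h)
   = \varphi\bigl(h\,S(1_H)\bigr) = \varphi(h) .
```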
proving (4.7).
Corollary 23. Let H be a bialgebra with twisted antipode, and A a right H-comodule algebra. If there exists a total integral ϕ : H → A such that ρr(ϕ(H)) ⊂ Z(A ⊗ H), then the forgetful functor M(H)HA → MA is separable, and, consequently, we have a Maschke type Theorem for this forgetful functor.
Proof. By Theorem 45, ϑ = s(ϕ) ∈ V5. Furthermore
   ϑ(h(1) ⊗ h(2)) = ϕ(h(2) S(h(1))) = ϕ(ε(h) 1H) = ε(h) 1A
and our result follows from Theorem 38.
The technical condition in Corollary 23 is (4.5). We will now give some alternative sufficient conditions.
Lemma 18. Let H and A be as in Corollary 23, and assume that ϕ(H) ⊂ Z(A) and
   ϕ(gh) = ϕ(h S²(g))   (4.8)
for all g, h ∈ H. Then ρr(ϕ(H)) ⊂ Z(A ⊗ H).
Proof. We have seen in the proof of Theorem 45 that it suffices to show that (4.7) holds. We proceed in several steps. Using (4.8), we find, for all h ∈ H:
   ϕ(h) = ϕ(h 1H) = ϕ(1H S²(h)) = ϕ(S²(h))
Next
   ϕ(h(1)) ⊗ h(2) = ρr(ϕ(h)) = ρr(ϕ(S²(h))) = ϕ(S²(h(1))) ⊗ S²(h(2)) = ϕ(h(1)) ⊗ S²(h(2))   (4.9)
and
   ϕ(h(1)) ⊗ h(2) S²(g) = ϕ(h(1)) ⊗ S²(h(2)) S²(g)
   = ϕ(h(1) g(2) S(g(1))) ⊗ S²(h(2)) S²(g(3))
   = ϕ(S(g(1)) S²(h(1) g(2))) ⊗ S²(h(2)) S²(g(3))   (by (4.8))
   = ϕ(S(g(3)) S²(h(1)) S²(g(4))) ⊗ S²(g(1)) S(g(2)) S²(h(2)) S²(g(5))
   = ϕ((S(g(2)) S²(h) S²(g(3)))(1)) ⊗ S²(g(1)) (S(g(2)) S²(h) S²(g(3)))(2)
   = (1A ⊗ S²(g(1))) ρr(ϕ(S(g(2)) S²(h) S²(g(3))))
   = (1A ⊗ S²(g(1))) ρr(ϕ(h g(3) S(g(2))))   (by (4.8))
   = (1A ⊗ S²(g)) ρr(ϕ(h))
   = ϕ(h(1)) ⊗ S²(g) h(2)
and we have the following formula, already close to (4.7):
   ϕ(h(1)) ⊗ h(2) S²(g) = ϕ(h(1)) ⊗ S²(g) h(2)   (4.10)
We now compute
   ϕ(h(1)) ⊗ S(g) h(2) = ϕ(h(1)) ⊗ S(g(1)) h(2) S²(g(2)) S(g(3))
   = ϕ(h(1)) ⊗ S(g(1)) S²(g(2)) h(2) S(g(3))   (by (4.10))
   = ϕ(h(1)) ⊗ h(2) S(g)   (4.11)
and, applying the same trick,
   ϕ(h(1)) ⊗ g h(2) = ϕ(h(1)) ⊗ g(3) h(2) S(g(2)) g(1)
   = ϕ(h(1)) ⊗ g(3) S(g(2)) h(2) g(1)   (by (4.11))
   = ϕ(h(1)) ⊗ h(2) g
and (4.7) follows.
Corollary 24. Let H be a Hopf algebra, A a right H-comodule algebra, and ϕ : H → A an integral. In each of the following situations, we have ρr(ϕ(H)) ⊂ Z(A ⊗ H):
1. H is commutative and ϕ(H) ⊂ Z(A);
2. S² = IH, ϕ(H) ⊂ Z(A), and ϕ(hk) = ϕ(kh);
3. ϕ(H) ⊂ k1A.
Proof. 1. follows immediately from the fact that ρr(ϕ(H)) ⊂ Z(A ⊗ H) is equivalent to (4.6-4.7). 2. is an immediate consequence of Lemma 18. Finally, if ϕ(H) ⊂ k1A, then
   ρ(ϕ(h)) = ϕ(h) 1A ⊗ 1H ∈ k(1A ⊗ 1H) ⊂ Z(A ⊗ H)
Remark 10. Combining Corollaries 24 and 23, we recover [69, Theorem 1.7] and [70, Theorem 1]: if there exists a total integral ϕ such that one of the three conditions of Corollary 24 is satisfied, then the forgetful functor M(H)HA → MA is separable, and we have a Maschke type Theorem. We point out that the results in [69], [70] and [41] are stated in the right-left case. Of course, they are easily translated into the right-right case that we discuss here; this also explains why we work with a twisted antipode, while the results in the above references need an antipode.
Classical integrals
We now take A = k, with trivial H-coaction. Now ϕ ∈ H∗ is an integral if and only if
   ⟨ϕ, h(1)⟩ h(2) = ⟨ϕ, h⟩ 1H
for all h ∈ H. This is equivalent to
   ⟨ϕ, h(1)⟩⟨h∗, h(2)⟩ = ⟨ϕ, h⟩⟨h∗, 1H⟩
for all h ∈ H and h∗ ∈ H∗, or
   ϕ ∗ h∗ = ⟨h∗, 1H⟩ ϕ
which means that ϕ is a right integral in H∗ in the classical sense (cf. [172]). In the sequel, ∫rH∗ will denote the k-module consisting of the right integrals in H∗. It is now obvious that ρr(ϕ(H)) ⊂ Z(A ⊗ H). Recall (see Corollary 17) that V5 consists of maps ϑ ∈ (H ⊗ H)∗ such that
   ϑ(h ⊗ k(1)) k(2) = ϑ(h(2) ⊗ k) h(1)
Using Theorem 45, we now obtain the following result.
Theorem 46. Let H be a bialgebra with a twisted antipode. We then have maps
   p : V5 → ∫rH∗ and s : ∫rH∗ → V5
such that s is a section of p (i.e. p ◦ s is the identity). Moreover, Im(s) consists of those ϑ ∈ V5 that satisfy
   ϑ(h ⊗ k) = ϑ(1 ⊗ kS(h))   (4.12)
Proof. We recall that
   p(ϑ)(h) = ϑ(1 ⊗ h) and s(ϕ)(h ⊗ k) = ϕ(kS(h))
It is clear that (p ◦ s)(ϕ) = ϕ. Obviously, if ϑ = s(ϕ), then (4.12) holds. If ϑ satisfies (4.12), then ϑ = s(p(ϑ)).
Corollary 25. If there exists ϕ ∈ ∫rH∗ such that ϕ(1H) = 1, then MH → M is separable; this means that H is coseparable as a k-coalgebra.
Remarks 6. 1. Theorem 46 and Corollary 25 are stated for bialgebras with a twisted antipode. They are also valid for Hopf algebras: it suffices to look at the forgetful functor HM(H)A → M(H)A (using the left-right dictionary), and then take the case A = k.
2. The separability of the functor G : M → MH can also be investigated easily. In fact G is always separable: it suffices to put f = 1H in condition IIc of Corollary 17.
3. Let H be a finite dimensional Hopf algebra over a field. It follows from the Fundamental Theorem for Hopf modules (see [172]) that the functor MH → M is Frobenius, and that H∗ is a Frobenius extension of k. We sketch the proof, and show how this is related to Part 3. of Corollary 17. First, for every right-right Hopf module M, we have an isomorphism α : M ≅ McoH ⊗ H, namely
   α(m) = m[0] S(m[1]) ⊗ m[2] ; α⁻¹(m ⊗ h) = m h
This can be restated in the language of Hopf modules: (η, η, η) : (k, k, k) → (H, H, H) is a morphism of Doi-Hopf structures. The corresponding adjoint pair of functors, between M and M(H)HH, is an equivalence of categories. Secondly, H∗ is a right-right Hopf module. The right H-action is given by
   ⟨h∗ ↼ h, k⟩ = ⟨h∗, kS(h)⟩
and the right H-coaction is induced by left multiplication in H∗:
   ρr(h∗) = h∗[0] ⊗ h∗[1] ⇔ k∗ ∗ h∗ = ⟨k∗, h∗[1]⟩ h∗[0]
for all k∗ ∈ H∗. Next, (H∗)coH = ∫lH∗, and we find that
   H∗ ≅ ∫lH∗ ⊗ H
as Hopf modules. It also follows that ∫lH∗ is one dimensional. Fixing a nonzero left integral ϕ, we obtain an isomorphism φ : H → H∗, φ(h) = ϕ ↼ h, or
   ⟨φ(h), k⟩ = ϕ(kS(h))
of right H-comodules, and therefore also of left H∗-modules, and from Corollary 17 it follows that MH → M is Frobenius. Now define ϑ : H ⊗ H → k by
   ϑ(k ⊗ h) = ϕ(kS(h))
An immediate verification shows that ϑ ∈ V5:
   ϑ(h ⊗ k(1)) k(2) = ϕ(hS(k(1))) k(2) = ϕ(h(2) S(k(1))) h(1) S(k(2)) k(3) = ϕ(h(2) S(k)) h(1) = ϑ(h(2) ⊗ k) h(1)
This is consistent with condition 3d. of Corollary 17.
4. In general, the map p : V5 → HomH(H, A) is not an isomorphism. To see this, we take A = k a field, and H a finite dimensional Hopf algebra. First observe that
   V5 ≅ H∗Hom(H, H∗)
where ϑ ∈ V5 corresponds to φ as in Corollary 17:
   ⟨φ(h), k⟩ = ϑ(k ⊗ h)
Now we know that H ≅ H∗ as left H∗-modules; consequently
   dimk(V5) = dimk(H∗Hom(H∗, H∗)) = dimk(H∗)
On the other hand,
   dimk(HomH(H, A)) = dimk(∫rH∗) = 1
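The punchline of this dimension count, left implicit in the remark, is worth recording (our addition):

```latex
% Since \dim_k V_5 = \dim_k H^* = \dim_k H, while \dim_k \mathrm{Hom}^H(H,k) = 1,
% the map p cannot be injective as soon as
\dim_k H > 1 .
```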
The Heisenberg double
Let H be a bialgebra with twisted antipode, and take A = H in Corollary 23.
Proposition 84. If H is a commutative Hopf algebra, then the forgetful functor M(H)HH → MH is separable.
Proof. IH : H → H is a total integral, and ∆(H) ⊂ Z(H ⊗ H), since H is commutative.
Now let H be finitely generated and projective; H(H) = H#H∗ is called the Heisenberg double of H.
Corollary 26. Let H be a commutative finitely generated projective Hopf algebra. Then the Heisenberg double H(H) is a separable extension of H.
Proof. From Proposition 21, Theorem 8, and Proposition 84, it follows that
   Hop#H∗M ≅ M(H)HH → MH
is separable. H is commutative, so Hop = H, and it follows from Theorem 27 that Hop#H∗ is a separable extension of H.
Total integrals and relative separability
Theorem 45 and Corollary 23 tell us when the existence of a total integral implies the separability of F. What can we conclude if we have a total integral, without any further conditions fulfilled? Let A be an H-comodule algebra, and consider the following commutative diagram of forgetful functors:
   M(H)HA --F--> MA
      |G′           |G′
      v             v
     MH ---------> kM
G will denote the right adjoint of F. Recall (Proposition 73) that
   X = Nat(G′GF, G′) ≅ X5 = {θ : H ⊗ H → A | (3.50) holds}
In our situation, (3.50) takes the form (4.2). We will now show that we have a projection of the k-module X5 onto the space of integrals HomH(H, A), at least in the situation where H is a Hopf algebra.
Proposition 85. Let H be a Hopf algebra, and A a right H-comodule algebra. We have a projection p : X5 → HomH(H, A), with section s : HomH(H, A) → X5, given by
   p(ϑ)(h) = ϑ(1 ⊗ h) and s(ϕ)(h ⊗ k) = ϕ(S(h)k)
Proof. Recall that X5 consists of the maps ϑ : H ⊗ H → A satisfying (3.50), or, equivalently, (4.2). As in the proof of Theorem 45, it follows that p(ϑ) is right H-colinear. Now take ϕ : H → A right H-colinear. We have to show that s(ϕ) = ϑ satisfies (4.2). This is straightforward:
   ϑ(h ⊗ k(1)) ⊗ k(2) = ϕ(S(h)k(1)) ⊗ k(2) = ϕ(S(h(3))k(1)) ⊗ h(1) S(h(2)) k(2)
   = ϕ(S(h(2))k)[0] ⊗ h(1) ϕ(S(h(2))k)[1] = ϑ(h(2) ⊗ k)[0] ⊗ h(1) ϑ(h(2) ⊗ k)[1]
It is obvious that p(s(ϕ)) = ϕ.
We can now state some conditions equivalent to the existence of a total integral. More equivalent conditions can be found in [69, (1.6)] and the forthcoming [47, Theorem 4.20].
Theorem 47. For a Hopf algebra H and a right H-comodule algebra A, the following assertions are equivalent:
1. the forgetful functor F : M(H)HA → MA is G′-separable;
2. the forgetful functor G′ ◦ F : M(H)HA → M is G′-separable;
3. there exists a map θ : H ⊗ H → A such that θ ◦ ∆H = ηA ◦ εH and (4.2) holds for all h, k ∈ H;
4. there exists a total integral ϕ : H → A;
5. every relative Hopf module is relative injective as an H-comodule;
6. A is relative injective as an H-comodule.
Proof. 2. ⇒ 1. follows from Proposition 51. 1. ⇔ 3. is the final statement of Proposition 73, applied to relative Hopf modules. 3. ⇔ 4. follows from Proposition 85.
4. ⇒ 5. Let M be a relative Hopf module. According to Proposition 50, it suffices to show that the map ρM : M → M ⊗ H splits as a map of right H-comodules. The splitting λM : M ⊗ H → M is given by λM(m ⊗ h) = m[0] ϕ(S(m[1])h).
5. ⇒ 6. is trivial.
6. ⇒ 4. The unit map ηH : k → H is H-colinear, and split as a k-module map by εH. According to the definition of relative injectivity, there exists an H-colinear map ϕ : H → A such that ϕ ◦ ηH = ηA. ϕ is then a total integral.
4. ⇒ 2. Let ϕ be a total integral. The functor F has right adjoint G,
and G′ has a right adjoint K′ given by K′(N) = Hom(A, N). Thus G ◦ K′ is a right adjoint of G′ ◦ F. We will apply Rafael's Theorem 25: we define a natural transformation ν : G′GK′G′F → G′ as follows:
   νM : Hom(A, M) ⊗ H → M ; νM(f ⊗ h) = f(1A)[0] ϕ(S(f(1A)[1])h)
It is straightforward to verify that ν is natural. Finally
   νM(G′(ηM)(m)) = νM(m[0]• ⊗ m[1]) = m[0] ϕ(S(m[1])m[2]) = m ϕ(1H) = m
Here m• : A → M denotes the map m•(a) = ma.
Assume now that we have a total integral ϕ. Then F will be separable if ϑ = s(ϕ) satisfies not only (4.2) but also (4.1).
Proposition 86. Take ϑ = s(ϕ) ∈ X5. Then ϑ ∈ V5 if and only if ϕ satisfies the condition
   ϕ(h) a = a[0] ϕ(S(a[1]) h a[2])   (4.13)
for all h ∈ H and a ∈ A. Consequently, p restricts to a projection
   p|V5 : V5 → {ϕ ∈ HomH(H, A) | (4.13) holds}
and F : M(H)HA → MA is separable if and only if there exists a total integral satisfying condition (4.13).
Proof. Assume that ϕ satisfies (4.13). Then
   a[0] ϑ(ha[1] ⊗ ka[2]) = a[0] ϕ(S(ha[1]) ka[2]) = a[0] ϕ(S(a[1]) S(h) ka[2]) = ϕ(S(h)k) a = ϑ(h ⊗ k) a
and (4.1) follows. The rest is left to the reader.
The Frobenius property
Proposition 87. Let H be a Frobenius Hopf algebra, and A a right H-comodule algebra. The functor F : M(H)HA → MA and its right adjoint G form a Frobenius pair of functors.
Proof. This is a consequence of Proposition 72: take f = t, with t a free generator of ∫rH. Then φH : H∗ → H, φH(h∗) = ⟨h∗, t(2)⟩ t(1), is bijective, and
   ψ(t ⊗ a) = a[0] ⊗ ta[1] = ε(a[1]) a[0] ⊗ t = a ⊗ t
as needed.
Corollary 27. Let k be a field, H a Hopf algebra, and A a right H-comodule algebra. Then (F, G) is a Frobenius pair if and only if H is finite dimensional.
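Returning to Theorem 47: the splitting property of λM claimed in the proof of 4. ⇒ 5. can be checked directly (a routine computation, our addition):

```latex
\lambda_M(\rho_M(m)) = \lambda_M(m_{[0]} \otimes m_{[1]})
   = m_{[0]}\,\varphi\bigl(S(m_{[1]})\,m_{[2]}\bigr)
   = m_{[0]}\,\varepsilon(m_{[1]})\,\varphi(1_H)
   = m ,
% using coassociativity of the coaction, S(h_{(1)})h_{(2)} = \varepsilon(h)1_H,
% and totality of \varphi, i.e. \varphi(1_H) = 1_A.
```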
4.2 Hopf-Galois extensions
Schneider's affineness Theorems
Throughout this Section, H will be a Hopf algebra with bijective antipode over a commutative ring k. At some places we will assume that k is a field; we will mention this explicitly. A will be a right H-comodule algebra; we will always assume that A and H are flat as k-modules. B will be the algebra of coinvariants of A, B = AcoH. As H is at the same time a left and a right H-module coalgebra (via left and right comultiplication), we have that (H, A, H) ∈ •DK•(k) and (H, A, H) ∈ DK••(k), and we can consider the categories AM(H)H and M(H)HA of left-right and right-right relative Hopf modules. Let us first look at the right-right case. We have the following morphism in DK••(k):
   (ηH, i, ηH) : (k, B, k) → (H, A, H)
and a pair of adjoint functors (cf. Theorem 15):
   F = • ⊗B A : MB → M(H)HA and G = (•)coH : M(H)HA → MB
For a right B-module M, the right H-coaction on M ⊗B A is IM ⊗ ρA, and the right A-action is given by (m ⊗ a)a′ = m ⊗ aa′, for all m ∈ M and a, a′ ∈ A. We describe the unit and the counit of the adjunction: for M ∈ MB and N ∈ M(H)HA,
   µN : NcoH ⊗B A → N ; µN(n ⊗ a) = na
   ηM : M → (M ⊗B A)coH ; ηM(m) = m ⊗ 1
Recall that A ⊗ H ∈ M(H)HA: the right H-coaction is just IA ⊗ ∆H, while the right A-action is
   (a ⊗ h)b = ab[0] ⊗ hb[1]
Using the fact that A is k-flat, we can easily show that
   (A ⊗ H)coH ≅ A ⊗ HcoH = A ⊗ k = A
so that the counit map µA⊗H : (A ⊗ H)coH ⊗B A → A ⊗ H translates into a map
   can : A ⊗B A → A ⊗ H
We find easily that
   can(a ⊗ b) = (a ⊗ 1)b = ab[0] ⊗ b[1]   (4.14)
Similar constructions can be carried out in the left-right case. We have a pair of adjoint functors
   F′ = A ⊗B • : BM → AM(H)H and G′ = (•)coH : AM(H)H → BM
A ⊗ H ∈ AM(H)H, and the adjunction map µ′A⊗H now defines a map
   can′ : A ⊗B A → A ⊗ H
given by
   can′(a ⊗ b) = a(b ⊗ 1) = a[0]b ⊗ a[1]   (4.15)
Lemma 19. The map Φ : A ⊗ H → A ⊗ H given by
   Φ(a ⊗ h) = a[0] ⊗ a[1]S(h)
is an isomorphism. Furthermore can′ = Φ ◦ can, so can is an isomorphism if and only if can′ is an isomorphism.
Proof. We easily compute that Φ⁻¹(a ⊗ h) = a[0] ⊗ S⁻¹(h)a[1]. Moreover,
   (Φ ◦ can)(a ⊗ b) = Φ(ab[0] ⊗ b[1]) = a[0]b[0] ⊗ a[1]b[1]S(b[2]) = a[0]b ⊗ a[1] = can′(a ⊗ b)
and all the rest follows.
A is called a Hopf-Galois extension of B if can (or can′) is an isomorphism. If (F, G) or (F′, G′) is a pair of inverse equivalences of categories, then clearly can and can′ are isomorphisms. In this Section, we give some affineness Theorems, providing additional sufficient conditions for (F, G) or (F′, G′) to be pairs of inverse equivalences. For general pairs of adjoint functors, such sufficient conditions were given in Corollary 22. In the particular situation under discussion here, we can relax these conditions. Our first result tells us that we have a pair of inverse equivalences if and only if A is a faithfully flat Galois extension. It is originally due to Doi and Takeuchi [73]; a more general version, where H is replaced by a quotient coalgebra, has been presented by Schneider in [165, Theorem 3.7].
Theorem 48. Let H be a Hopf algebra with bijective antipode, A a right H-comodule algebra, and assume that A and H are k-flat. Then the following are equivalent:
1. A is faithfully flat as a left B-module, and A is a Hopf-Galois extension of B;
2. (F, G) is a pair of inverse equivalences between the categories MB and M(H)HA.
Proof. 2. ⇒ 1. We have already seen that A is a Hopf-Galois extension of B. Let M′ → M be an injective map between right B-modules. Since MB and M(H)HA are equivalent, F(M′) = M′ ⊗B A → F(M) = M ⊗B A is monic in M(H)HA, and a fortiori in MA, since monics in M(H)HA are injective maps. Thus A is left B-flat. Faithful flatness is treated in a similar way: assume that we have a sequence
   0 → M′ → M → M″ → 0
such that
   0 → M′ ⊗B A → M ⊗B A → M″ ⊗B A → 0
is exact in MA. The three A-modules have the structure of relative Hopf modules, and the sequence is exact in M(H)HA. It stays exact after we apply G (since we have an equivalence of categories), hence the original sequence in MB is exact.
1. ⇒ 2. First we prove that, for any N ∈ M(H)HA, the adjunction map µN is an isomorphism. If X is a right A-module, then the map
   canX = IX ⊗A can : X ⊗B A ≅ X ⊗A A ⊗B A → X ⊗ H = X ⊗A A ⊗ H
given by canX(x ⊗ a) = xa[0] ⊗ a[1] is bijective, because can is bijective. Now we have a commutative diagram with exact rows:
   NcoH ⊗B A ⟶ N ⊗B A ⇉ N ⊗ H ⊗B A   (parallel maps ρN ⊗ IA and IN ⊗ ηH ⊗ IA)
      |µN          |canN       |canN⊗H
   N ----ρN----> N ⊗ H ⇉ N ⊗ H ⊗ H   (parallel maps ρN ⊗ IH and IN ⊗ ∆H)
The bottom row is exact, since the equalizer of ρN ⊗ IH and IN ⊗ ∆H is N □H H ≅ N. The top row is also exact, since NcoH is the equalizer of ρN and IN ⊗ ηH, and since A is flat as a left B-module. canN and canN⊗H are isomorphisms, so µN is an isomorphism by the Five Lemma.
Now take M ∈ MB. We want to show that ηM : M → (M ⊗B A)coH is an isomorphism. Define i1, i2 : M ⊗B A → M ⊗B A ⊗B A as follows:
   i1(m ⊗ a) = m ⊗ 1 ⊗ a and i2(m ⊗ a) = m ⊗ a ⊗ 1
for all m ∈ M and a ∈ A. We have a commutative diagram
   M --IM⊗ηA--> M ⊗B A ⇉ M ⊗B A ⊗B A   (parallel maps i1 and i2)
     |ηM            ∥           |IM⊗can
   (M ⊗B A)coH ⊂-> M ⊗B A ⇉ M ⊗B A ⊗ H   (parallel maps IM ⊗ ρA and IM ⊗ IA ⊗ ηH)
Indeed, we compute easily that
   (IM ⊗ can)(i2(m ⊗ a)) = m ⊗ can(a ⊗ 1) = m ⊗ a ⊗ 1 = (IM ⊗ IA ⊗ ηH)(m ⊗ a)
   (IM ⊗ can)(i1(m ⊗ a)) = m ⊗ can(1 ⊗ a) = m ⊗ a[0] ⊗ a[1] = (IM ⊗ ρA)(m ⊗ a)
The top row is exact because A is faithfully flat as a left B-module. The bottom row is exact by definition of the coinvariants. can is an isomorphism, so ηM is an isomorphism, again by the Five Lemma.
If there exists a total integral ϕ : H → A, and if H is projective over k, then it suffices to verify that can is surjective; we do not have to verify that A is faithfully flat over B. This is the content of the following imprimitivity Theorem.
Theorem 49. [165, Theorem 3.5] As before, we assume that H is a Hopf algebra with bijective antipode, A is a right H-comodule algebra, and H and A are k-flat. If can is surjective, and if there exists a total integral ϕ : H → A, then the adjoint pair (F, G) between MB and M(H)HA is a pair of inverse equivalences.
Before we give the proof of the Theorem, we need some Lemmas. We follow [165]. Let M be a left H-comodule. Then M is also a right H-comodule, via m ↦ m[0] ⊗ S(m[−1]). Applying the induction functor A ⊗ • : MH → AM(H)H, we find that A ⊗ M ∈ AM(H)H, with structure
   a(b ⊗ m) = ab ⊗ m and ρr(b ⊗ m) = b[0] ⊗ m[0] ⊗ b[1]S(m[−1])
Lemma 20. With notation as above, we have, for any left H-comodule M,
   (A ⊗ M)coH = A □H M
Proof. Take x = Σi ai ⊗ mi ∈ (A ⊗ M)coH. Then
   ρr(x) = Σi ai[0] ⊗ mi[0] ⊗ ai[1]S(mi[−1]) = Σi ai ⊗ mi ⊗ 1
Apply ρM to the second factor, and then multiply the fourth factor on the right by the second factor. We then find
   Σi ai[0] ⊗ mi[0] ⊗ ai[1]S(mi[−2])mi[−1] = Σi ai ⊗ mi[0] ⊗ mi[−1]
Switching the second and the third factor, we find
   Σi ai[0] ⊗ ai[1] ⊗ mi = Σi ai ⊗ mi[−1] ⊗ mi[0]
and x ∈ A □H M. The converse inclusion can be done in a similar way.
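The converse inclusion, left to the reader in the text, can be sketched as follows (our addition, using only the identities already established):

```latex
% If x = \sum_i a_i \otimes m_i \in A \square_H M, i.e.
% \sum_i a_{i[0]} \otimes a_{i[1]} \otimes m_i
%   = \sum_i a_i \otimes m_{i[-1]} \otimes m_{i[0]},
% then, computing the coaction on A \otimes M,
\rho^r(x) = \sum_i a_{i[0]} \otimes m_{i[0]} \otimes a_{i[1]}\,S(m_{i[-1]})
          = \sum_i a_i \otimes m_{i[0]} \otimes m_{i[-2]}\,S(m_{i[-1]})
          = \sum_i a_i \otimes m_i \otimes 1_H ,
% using m_{(-2)}S(m_{(-1)}) = \varepsilon(m_{(-1)})1_H; hence x \in (A \otimes M)^{coH}.
```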
Lemma 21. Take N ∈ M(H)HA. N is a left H-comodule via n ↦ S(n[1]) ⊗ n[0], and we have well-defined maps
   i : NcoH → A □H N ; i(n) = 1 ⊗ n
   p : A □H N → NcoH ; p(Σi ai ⊗ ni) = Σi ai ni
such that p ◦ i = INcoH.
Proof. It is obvious that i is well-defined; let us show that p is well-defined. Take Σi ai ⊗ ni ∈ A □H N. Then
   Σi ai[0] ⊗ ai[1] ⊗ ni = Σi ai ⊗ S(ni[1]) ⊗ ni[0]
Applying ρN to the last factor, we find
   Σi ai[0] ⊗ ai[1] ⊗ ni[0] ⊗ ni[1] = Σi ai ⊗ S(ni[2]) ⊗ ni[0] ⊗ ni[1]
and consequently
   ρN(Σi ai ni) = Σi ni[0] ai[0] ⊗ ni[1] ai[1] = Σi ni[0] ai ⊗ ni[1] S(ni[2]) = Σi ai ni ⊗ 1
as needed. It is clear that p ◦ i = INcoH.
Lemma 22. Assume that A and H are k-flat. The following are equivalent:
1. A is right H-coflat;
2. G = (•)coH : M(H)HA → MB is an exact functor;
3. G′ = (•)coH : AM(H)H → BM is an exact functor.
Proof. 1. ⇒ 2. It is clear that G is left exact. Assume that f : N → N′ is surjective in M(H)HA. IA □H f is surjective, because A is right H-coflat. Now, looking at the commutative diagram
   A □H N --IA □H f--> A □H N′
    i ↑ ↓ p               i ↑ ↓ p
   NcoH -------f-------> N′coH
we find that f : NcoH → N′coH is surjective.
3. ⇒ 1. Using Lemma 20, we see that A □H • is the composition of the functors
   HM --(A ⊗ •)--> AM(H)H --(G′)--> BM --> kM
G′ is exact, by assumption. A ⊗ • is exact, since A is k-flat. It follows that A □H • is exact, and A is right H-coflat.
1. ⇒ 3. An application of the left-right dictionary. We have (H, A, H) ∈ •DK•(k). According to Proposition 16, (Hop, Aop, H) ∈ DK••(k), and the categories AM(H)H and M(Hop)HAop are isomorphic. The right Hop-action on the coalgebra H is given by right multiplication in Hop, as needed. Aop is still right H-coflat (we did not change the coaction and the comultiplication), so we can apply 1. ⇒ 2. to (Hop, Aop, H), and find that G′ is exact.
2. ⇒ 1. This is done in a similar way: we use the left-right dictionary, and apply 3. ⇒ 1. to (Hop, Aop, H) ∈ •DK•(k).
Lemma 23. Assume that A is a right H-comodule algebra, and that ϕ : H → A is a total integral. For any M ∈ MB, the adjunction map ηM : M → (M ⊗B A)coH is an isomorphism.
Proof. We have a well-defined map
   t : A → B ; t(a) = a[0] ϕ(S(a[1]))
Indeed,
   ρr(t(a)) = a[0] ϕ(S(a[2]))[0] ⊗ a[1] ϕ(S(a[2]))[1] = a[0] ϕ(S(a[3])) ⊗ a[1] S(a[2]) = a[0] ϕ(S(a[1])) ⊗ 1H = t(a) ⊗ 1H
and t(a) ∈ AcoH = B. Now define
   ψM : (M ⊗B A)coH → M ; ψM(Σi mi ⊗ ai) = Σi mi t(ai)
A direct computation shows that ψM is the inverse of ηM:
   ψM(ηM(m)) = ψM(m ⊗ 1A) = m t(1A) = m 1A ϕ(S(1H)) = m
and
   ηM(ψM(Σi mi ⊗ ai)) = Σi mi t(ai) ⊗ 1A = Σi mi ⊗ t(ai)   (t(ai) ∈ B)
   = Σi mi ⊗ ai[0] ϕ(S(ai[1])) = Σi mi ⊗ ai ϕ(S(1H)) = Σi mi ⊗ ai
At the last step, we used that Σi mi ⊗ ai ∈ (M ⊗B A)coH.
Proof (of Theorem 49). We have shown in Lemma 23 that ηM is bijective. We still need to show that µN is an isomorphism, for all N ∈ M(H)HA. We prove this first in the case N = V ⊗ A, where V is an arbitrary k-module and the structure on N is induced by the structure on A, i.e.
   (v ⊗ a)b = v ⊗ ab and ρV⊗A = IV ⊗ ρA
By Lemma 23, we have
   (V ⊗ A)coH ≅ (V ⊗ B ⊗B A)coH ≅ V ⊗ B
and a commutative diagram
   (V ⊗ A)coH ⊗B A --µV⊗A--> V ⊗ A
        ↑ ≅                     ↑ IV⊗A
   V ⊗ B ⊗B A -------≅-------> V ⊗ A
and we see that µV⊗A is an isomorphism.
Let ϕ : H → A be a total integral. We have seen that ρA : A → A ⊗ H has a section νA, namely νA(a ⊗ h) = a[0] ϕ(S(a[1])h) (cf. Theorem 47 and Proposition 50). N ⊗ A ⊗ H ∈ M(H)HA, with structure induced by the structure on A ⊗ H, i.e.
   (n ⊗ a ⊗ h)b = n ⊗ ab[0] ⊗ hb[1] and ρr = IN ⊗ IA ⊗ ∆H
The map
   f : N ⊗ A ⊗ H → N ; f(n ⊗ a ⊗ h) = n[0] a[0] ϕ(a[1] S(n[1]) h)
is H-colinear. It is a k-split epimorphism, since
   f(n[0] ⊗ 1 ⊗ n[1]) = n[0] ϕ(S(n[1]) n[2]) = n
H is projective as a k-module, so A ⊗ H is projective as a left A-module. The map can : A ⊗ A → A ⊗ H is a left A-linear epimorphism, so it has an A-linear splitting, and a fortiori a k-linear splitting. N ⊗ A ⊗ A ∈ M(H)HA, with structure defined on the third factor:
   (n ⊗ a ⊗ a′)b = n ⊗ a ⊗ a′b and ρr = IN ⊗ IA ⊗ ρA
It is easy to check that
   IN ⊗ can : N ⊗ A ⊗ A → N ⊗ A ⊗ H
is a morphism in M(H)HA, which is surjective and k-split. Therefore g = f ◦ (IN ⊗ can) : N ⊗ A ⊗ A → N is surjective and k-split. Writing N′ = Ker(g), we obtain an exact sequence
   0 → N′ → N ⊗ A ⊗ A --g--> N → 0   (4.16)
in M(H)HA, which is split as a sequence of k-modules. Because M(H)HA → MH is relative separable (see Proposition 50), (4.16) is also a split exact sequence of H-comodules. If we repeat the argument with N replaced by N′, we find another exact sequence in M(H)HA, split in MH, namely
   0 → N″ → N′ ⊗ A ⊗ A --g′--> N′ → 0   (4.17)
Now write
   N1 = N ⊗ A ⊗ A, N2 = N′ ⊗ A ⊗ A
Combining the two sequences, we find an exact sequence
   N2 --g′--> N1 --g--> N → 0   (4.18)
in M(H)HA. Since (4.16-4.17) are split exact in MH, they stay exact after we take H-coinvariants, and, combining them, we find an exact sequence in MB,
   N2coH → N1coH → NcoH → 0
The tensor product is always right exact, so we have an exact sequence
   N2coH ⊗B A → N1coH ⊗B A → NcoH ⊗B A → 0
in M(H)HA. Now we have the following commutative diagram with exact rows in M(H)HA:
   N2coH ⊗B A → N1coH ⊗B A → NcoH ⊗B A → 0
       |µN2          |µN1          |µN
   N2 -----g′----> N1 -----g----> N → 0
Since the structure on N1 = N ⊗ A ⊗ A and N2 = N′ ⊗ A ⊗ A is induced by the structure on the third factor A, we know that µN1 and µN2 are isomorphisms, and µN is an isomorphism.
Theorem 50. [165, Theorem 1] Let H be a Hopf algebra with bijective antipode over a field k, and A a right H-comodule algebra. The following assertions are equivalent.
1. There exists a total integral ϕ : H → A, and the map can : A ⊗B A → A ⊗ H is surjective;
2. the functors F and G are a pair of inverse equivalences between the categories M(H)HA and MB;
3. the functors F′ and G′ are a pair of inverse equivalences between the categories AM(H)H and BM;
4. A is a Hopf-Galois extension of B, and is faithfully flat as a left B-module;
5. A is a Hopf-Galois extension of B, and is faithfully flat as a right B-module.
Proof. 1. ⇒ 2. follows from Theorem 49, and 2. ⇔ 4. follows from Theorem 48 (over a field k, every module is projective).
4. ⇒ 1. We have to show that A is relatively injective, or, equivalently, that A is right H-coflat, by Theorem 1. As can is bijective, can′ : A ⊗B A → A ⊗ H is also bijective. can′ is a morphism in AM(H)H, and it is therefore a morphism in BMH. Recall that the right H-comodule structure on A ⊗B A is determined by the first factor, i.e. ρr(a ⊗ a′) = (a[0] ⊗ a′) ⊗ a[1]. Keeping this in mind, we have a left B-linear map
   (A □H V) ⊗B A → (A ⊗B A) □H V ; (Σi ai ⊗ vi) ⊗ b ↦ Σi (ai ⊗ b) ⊗ vi
for every left H-comodule V. This map is an isomorphism, because A is flat as a left B-module. Using the fact that can′ is also an isomorphism, we have a sequence of left B-module isomorphisms
   (A □H V) ⊗B A ≅ (A ⊗B A) □H V ≅ (A ⊗ H) □H V ≅ A ⊗ V
The functor
   HM → kM ; V ↦ A ⊗ V ≅ (A □H V) ⊗B A
is exact because k is a field; since A is faithfully flat as a left B-module, it follows that
   HM → kM ; V ↦ A □H V
is also exact, and this proves that A is right H-coflat.
Hopf-Galois extensions and separability properties
We use the same notation as before: H is a (k-flat) Hopf algebra, A is a right H-comodule algebra, and B = AcoH.
Proposition 88. If A/B is separable, then the functor • ⊗ H : MA → M(H)HA is separable.
Proof. Let e = e1 ⊗ e2 ∈ A ⊗B A be the separability idempotent. We apply part 2. of Theorem 38. Take
   z = can(e) = e1 e2[0] ⊗ e2[1]
Then z ∈ W5, since, for all a ∈ A,
   e1 e2[0] aψ ⊗ e2[1]ψ = e1 e2[0] a[0] ⊗ e2[1] a[1] = e1 (e2 a)[0] ⊗ (e2 a)[1] = a e1 e2[0] ⊗ e2[1]
Furthermore, z is normalized, since
   e1 e2[0] ε(e2[1]) = e1 e2 = 1
The converse property holds if A is a Hopf-Galois extension of B:
Proposition 89. If A is a Hopf-Galois extension of B, and the functor • ⊗ H : MA → M(H)HA is separable, then A/B is separable.
Proof. Let z ∈ W5 be normalized. Then can⁻¹(z) = e is a separability idempotent.
Example 16. If A is a strongly G-graded ring, then A is separable over Ae if and only if the induction functor • ⊗ kG : MA → M(kG)kGA = gr-A is separable.
We will next study the separability of the functor F : MB → M(H)HA and its adjoint G.
Proposition 90. If k is left H-coflat, then the functor F : MB → M(H)HA is separable.
Proof. We apply Theorem 43 and its Corollary 20. We have that k ⊗ B = B and GF(B) = AcoH = B, so V1 = BHomB(B, B), and IB ∈ V1 satisfies the required normalizing property.
If k is a field, then k is left H-coflat if and only if k is injective as a left H-comodule, by Theorem 1. This condition is equivalent to the existence of a total integral H → k, by Theorem 47, and this is equivalent to H being cosemisimple. So we have the following Corollary.
Corollary 28. If H is a cosemisimple Hopf algebra over a field k, then the functor F : MB → M(H)HA is separable.
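A standard class of examples for Corollary 28 (our addition; the cosemisimplicity claim is the usual coalgebra decomposition of a group algebra):

```latex
% Standard example (assumed): H = kG over any field k is cosemisimple, since
kG = \bigoplus_{g \in G} k\,g
% is a direct sum of simple (one-dimensional) subcoalgebras. Hence, for any
% G-graded algebra A with B = A_e, the induction functor
% F = -\otimes_B A : \mathcal{M}_B \to \mathcal{M}(H)^H_A
% is separable by Corollary 28; no finiteness assumption on G is needed.
```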
Proposition 91. Assume that A is flat as a left B-module, and that A is a Hopf-Galois extension of B. Then the functor G = (•)coH : M(H)H A → MB is separable. Proof. The functor F is exact, since A is left B-flat. G is left exact, so F G is left exact, and F G(ρN ) is injective for every relative Hopf module N ∈ M(H)H A . Thus the assumptions in Theorem 44 and Corollary 21 are fulfilled, and we can conclude that G is separable if µA⊗H has a right inverse which is left and right A-linear and left and right H-colinear. Now can has a two-sided inverse, so µA⊗H has a two-sided inverse, and this inverse is left and right H-colinear and A-linear, since µA⊗H itself is left and right H-colinear and A-linear. Under some additional flatness assumptions, Proposition 91 has a converse: Theorem 51. Assume that k is right H-coflat and that A is left B-flat. The following assertions are equivalent: 1. A is a Hopf-Galois extension of B; 2. G = (•)coH : M(H)H A → MB is separable; 3. G is full and faithful. Proof. 1. ⇒ 2. follows from Proposition 91, and 3. ⇒ 1. is obvious. 2. ⇒ 3. If G is separable, then there exists a natural transformation ζ : 1M(H)H → F G such that µ ◦ ζ is the identity natural transformation; in A particular µA⊗H ◦ ζA⊗H = IA⊗H , and µA⊗H is surjective. A ∈ M(H)H A , the A-action is given by right multiplication. We claim that A is a T -generator of M(H)H A (see Section 3.5). By Theorem 40, we know that A ⊗ H is a T -generator, so it suffices to show that A generates A ⊗ H. For every a ∈ A, we consider ψa : A → A ⊗ H ; ψa (b) = can(a ⊗ b) = ab[0] ⊗ b[1] It is easily verified that ψa ∈ M(H)H A: ψa (bb ) = ab[0] b[0] ⊗ b[1] b[1] = ψa (b)b and (ψa ⊗ I)ρr (b) = ab[0] ⊗ b[1] ⊗ b[2] = ρr (ψa (b)) can is surjective, so for any x ∈ A ⊗ H, we can find y = i ai ⊗ bi ∈ A ⊗B A such that can(y) = ψai (bi ) = x i
If f : A ⊗ H → N in M(H)^H_A is such that f ◦ g = 0 for all g : A → A ⊗ H in M(H)^H_A, then

f(x) = Σ_i f(ψ_{a_i}(b_i)) = 0
for all x ∈ A ⊗ H, and f = 0. This proves that A generates A ⊗ H. Now µ_A : FG(A) = A^{coH} ⊗_B A ≅ A → A is the identity map on A, and

ζ_A ◦ µ_A = I_A = I_{FG(A)}

We conclude that ζ ◦ µ and 1_{FG} coincide on the T-generator A, hence they coincide, by Theorem 41. This proves that G is fully faithful.
4.3 Doi's [H, C]-modules

Let H be a bialgebra with twisted antipode. H is a right H-comodule algebra: take ρ^r = ∆_H. Consider a right H-module coalgebra C. Then (H, H, C) is a right-right Doi-Hopf structure, and the corresponding Doi-Hopf modules are [C, H]-modules in the sense of [67] (up to left-right conventions). In this Section, we examine the functor M(H)^C_H → M^C and its adjoint. As one might expect, the results are in duality with the results of Section 4.1.
A right H-linear map φ : C → H is called an integral. An integral is called total if ε_H ◦ φ = ε_C. There is a close relationship between integrals and the space W1 introduced in Section 3.4. In our situation, ψ : C ⊗ H → H ⊗ C is given by

ψ(c ⊗ h) = h(1) ⊗ ch(2)

Using the notation of Section 3.4, we have that V1 consists of those ϑ ∈ (C ⊗ H)* that satisfy

ϑ(c(1) ⊗ h(1))c(2)h(2) = ϑ(c(2) ⊗ h)c(1)    (4.19)
for all c ∈ C and h ∈ H. W1 consists of maps e ∈ Hom(C, H ⊗ H), e(c) = e1(c) ⊗ e2(c), satisfying

e1(c(1)) ⊗ e2(c(1)) ⊗ c(2) = e1(c(2))(1) ⊗ e2(c(2))(1) ⊗ c(1) e1(c(2))(2) e2(c(2))(2)    (4.20)
e1(c) ⊗ e2(c)h = h(1) e1(ch(2)) ⊗ e2(ch(2))    (4.21)
Proposition 92. We have a map p : W1 → HomC (C, H) given by p(e) = φ = (εH ⊗ IH ) ◦ e Proof. We see immediately that φ(c) = εH (e1 (c))e2 (c) Applying εH to the first factor in (4.21), we find that φ(c)h = φ(ch), and φ is right H-linear, as needed.
Proposition 93. Let φ : C → H be right H-linear. We define s(φ) = e : C → H ⊗ H by

e(c) = S(φ(c)(2)) ⊗ φ(c)(1)

Then e ∈ W1 if and only if the following two conditions hold for all c ∈ C and h ∈ H:

φ(c(2)) ⊗ c(1) = φ(c(1)) ⊗ c(2)    (4.22)
h(2) ⊗ φ(ch(1)) = h(1) ⊗ φ(ch(2))    (4.23)

Proof. Let A1 and A2 be respectively the left and right hand side of (4.20). Then

A1 = S(φ(c(1))(2)) ⊗ φ(c(1))(1) ⊗ c(2)
A2 = S(φ(c(2))(2))(1) ⊗ φ(c(2))(1)(1) ⊗ c(1) S(φ(c(2))(2))(2) φ(c(2))(1)(2)
   = S(φ(c(2))(4)) ⊗ φ(c(2))(1) ⊗ c(1) S(φ(c(2))(3)) φ(c(2))(2)
   = S(φ(c(2))(2)) ⊗ φ(c(2))(1) ⊗ c(1)

Assuming that A1 = A2, apply ε_H ⊗ I_H ⊗ I_C to both sides. Then we find (4.22). Conversely, if (4.22) holds, then (4.20) follows easily after we apply (S ⊗ I_H) ◦ ∆^cop_H to the first factor of both sides of (4.22).
Now let B1 and B2 be the left and right hand side of (4.21):

B1 = S(φ(c)(2)) ⊗ φ(c)(1) h
B2 = h(1) S(φ(ch(2))(2)) ⊗ φ(ch(2))(1)

Assume that B1 = B2, apply ∆^cop_H to the second factor, and then multiply the first two factors. We obtain

S(φ(c)(3)) φ(c)(2) h(2) ⊗ φ(c)(1) h(1) = h(2) ⊗ φ(c)h(1) = h(2) ⊗ φ(ch(1))
= h(1) S(φ(ch(2))(3)) φ(ch(2))(2) ⊗ φ(ch(2))(1) = h(1) ⊗ φ(ch(2))

and (4.23) follows. Conversely, (4.23) implies that

B2 = h(2) S(φ(ch(1))(2)) ⊗ φ(ch(1))(1) = h(3) S(φ(c)(2) h(2)) ⊗ φ(c)(1) h(1) = S(φ(c)(2)) ⊗ φ(c)(1) h = B1

finishing our proof.

Corollary 29. Let H be a bialgebra with twisted antipode, and C a right H-module coalgebra. If there exists a total integral φ : C → H satisfying (4.22) and (4.23), then the forgetful functor M(H)^C_H → M^C is separable.

Proof. According to Proposition 93, s(φ) = e ∈ W1. We easily compute that

e1(c)e2(c) = S(φ(c)(2)) φ(c)(1) = ε_H(φ(c))1_H = ε_C(c)1_H

and our result follows from part 2) of Proposition 76.
4.4 Yetter-Drinfeld modules

Let K and H be bialgebras, A a (K, H)-bicomodule algebra, and C an (H, K)-bimodule coalgebra. A left-right Yetter-Drinfeld module is a k-module M together with a left A-action and a right C-coaction satisfying the compatibility relation

(a[0] m)[0] ⊗ (a[0] m)[1] a[−1] = a[0] m[0] ⊗ a[1] m[1]    (4.24)

In a similar way, we define right-left Yetter-Drinfeld modules: now we need a right A-action and a left C-coaction such that

a[1] (ma[0])[−1] ⊗ (ma[0])[0] = m[−1] a[−1] ⊗ m[0] a[0]    (4.25)

Our notation for the category of left-right Yetter-Drinfeld modules and A-linear, C-colinear maps will be _A YD(K, H)^C. A similar notation will be used in the right-left case. If K has a twisted antipode S^K, then (4.24) is clearly equivalent to

ρ^r(am) = a[0] m[0] ⊗ a[1] m[1] S^K(a[−1])    (4.26)
We will then call (K, H, A, C) a left-right Yetter-Drinfeld structure. A morphism (K, H, A, C) → (K′, H′, A′, C′) between two Yetter-Drinfeld structures consists of a fourtuple (κ, λ, α, γ), with κ : K → K′ and λ : H → H′ bialgebra maps, α : A → A′ an algebra map, and γ : C → C′ a coalgebra map such that

ρ^{lr}_{A′} ◦ α = (κ ⊗ α ⊗ λ) ◦ ρ^{lr}_A

and γ(hck) = λ(h)γ(c)κ(k) for all h ∈ H, c ∈ C, and k ∈ K. •YD•(k) is the category of left-right Yetter-Drinfeld structures. In a similar way, if H has a twisted antipode S^H, then (4.25) is equivalent to

ρ^l(ma) = S^H(a[1]) m[−1] a[−1] ⊗ m[0] a[0]    (4.27)
and we call (K, H, A, C) a right-left Yetter-Drinfeld structure. The category of right-left Yetter-Drinfeld structures is denoted by •YD•(k).

Proposition 94. We have a functor

F : •YD•(k) → •DK•(k)

defined as follows: F(K, H, A, C) = (K^op ⊗ H, A, C), where the right K^op ⊗ H-coaction on A and the left K^op ⊗ H-action on C are given by the formulas

ρ^r_{K^op⊗H}(a) = a[0] ⊗ S^K(a[−1]) ⊗ a[1]    (4.28)
(k ⊗ h) · c = hck    (4.29)

for all a ∈ A, c ∈ C, h ∈ H and k ∈ K. Moreover, we have an isomorphism of categories

_A YD(K, H)^C ≅ _A M(K^op ⊗ H)^C    (4.30)

Let (A, C, ψ) be the corresponding entwining structure (cf. Proposition 17). The map ψ : A ⊗ C → A ⊗ C is then given by the formula

ψ(a ⊗ c) = a[0] ⊗ a[1] c S^K(a[−1])    (4.31)

Proof. A routine verification. We will use the notation ρ^r_{K^op⊗H}(a) = a{0} ⊗ a{1}. A is a right K^op ⊗ H-comodule algebra since

a{0} ⊗ ∆_{K^op⊗H}(a{1}) = a[0] ⊗ S^K(a[−1]) ⊗ a[1] ⊗ S^K(a[−2]) ⊗ a[2] = ρ^r_{K^op⊗H}(a{0}) ⊗ a{1}

and

ρ^r_{K^op⊗H}(ab) = a[0]b[0] ⊗ S^K(a[−1]b[−1]) ⊗ a[1]b[1] = a[0]b[0] ⊗ S^K(b[−1])S^K(a[−1]) ⊗ a[1]b[1] = ρ^r_{K^op⊗H}(a) ρ^r_{K^op⊗H}(b)

Clearly C is a left K^op ⊗ H-module. C is a K^op ⊗ H-module coalgebra, since

∆_C((k ⊗ h) · c) = ∆_C(hck) = h(1)c(1)k(1) ⊗ h(2)c(2)k(2) = (k(1) ⊗ h(1)) · c(1) ⊗ (k(2) ⊗ h(2)) · c(2)

M ∈ _A YD(K, H)^C if and only if M ∈ _A M(K^op ⊗ H)^C. Indeed, (2.19) amounts to

ρ^r(am) = a[0]m[0] ⊗ (S^K(a[−1]) ⊗ a[1]) · m[1] = a[0]m[0] ⊗ a[1]m[1]S^K(a[−1])

and this is exactly (4.26).

Remarks 7. 1. If H has an antipode, then K^op ⊗ H also has an antipode, and the map ψ is bijective.
2. If A = C = H = K, then we obtain the classical Yetter-Drinfeld modules, also named crossed modules or quantum Yang-Baxter modules. In this situation, ψ is bijective if and only if H has a bijective antipode.
Using our left-right dictionary, we can find the right-left version of Proposition 94. First observe that we have an isomorphism of categories

•YD•(k) ≅ •YD•(k)

We send (K, H, A, C) to (H^opcop, K^opcop, A^op, C^cop). Moreover,

_A YD(K, H)^C ≅ ^{C^cop} YD(H^opcop, K^opcop)_{A^op}

Using this isomorphism, and Proposition 94, we obtain a functor

F′ : •YD•(k) → •DK•(k)

such that the diagram

  •YD•(k) --F--> •DK•(k)
     |≅             |≅
     v              v
  •YD•(k) --F′--> •DK•(k)

commutes, namely F′(K, H, A, C) = (K ⊗ H^op, A, C) with

ρ^l(a) = a[−1] ⊗ S^H(a[1]) ⊗ a[0]  and  c · (k ⊗ h) = hck

We can also introduce one-sided Yetter-Drinfeld modules. Let A be a (K, H)-bicomodule algebra, and C a (K, H)-bimodule coalgebra. The compatibility conditions for left-left and right-right Yetter-Drinfeld modules are respectively

(a[0] m)[−1] a[1] ⊗ (a[0] m)[0] = a[−1] m[−1] ⊗ a[0] m[0]    (4.32)
(ma[0])[0] ⊗ a[−1] (ma[0])[1] = m[0] a[0] ⊗ m[1] a[1]    (4.33)
If H (resp. K) is a Hopf algebra, these relations are equivalent to

ρ^l(am) = a[−1] m[−1] S_H(a[1]) ⊗ a[0] m[0]    (4.34)
ρ^r(ma) = m[0] a[0] ⊗ S_K(a[−1]) m[1] a[1]    (4.35)
In this situation, (K, H, A, C) is called a left-left (resp. a right-right) Yetter-Drinfeld structure. YD••(k) and ••YD(k) are the categories of right-right and left-left Yetter-Drinfeld structures. The categories of left-left (resp. right-right) Yetter-Drinfeld modules are denoted by ^C_A YD(K, H) (resp. YD(K, H)^C_A).
The above results can be extended easily to the one-sided case. For example, we have a functor F : YD••(k) → DK••(k), mapping (K, H, A, C) to (K^op ⊗ H, A, C), with

ρ^r_{K^op⊗H}(a) = a[0] ⊗ S_K(a[−1]) ⊗ a[1]  and  c · (k ⊗ h) = kch

and we have an isomorphism of categories

YD(K, H)^C_A ≅ M(K^op ⊗ H)^C_A
Proposition 95. The categories YD••(k), ••YD(k), •YD•(k), and •YD•(k) are isomorphic. The corresponding categories of Yetter-Drinfeld modules are also isomorphic.

Proof. We have already seen that •YD•(k) ≅ •YD•(k). Let us define the isomorphism

P : YD••(k) → •YD•(k)

Take (K, H, A, C) ∈ YD••(k). This means that K has an antipode, A is a (K, H)-bicomodule algebra and C is a (K, H)-bimodule coalgebra. We define

P(K, H, A, C) = (K^op, H^op, A^op, C)

K^op has a twisted antipode, as needed. A^op is a (K^op, H^op)-bicomodule (the "op" does not matter), and a (K^op, H^op)-comodule algebra (we did put the "op" everywhere). C is a (K, H)-bimodule, and therefore an (H^op, K^op)-bimodule. C is an (H^op, K^op)-bimodule coalgebra since

∆(h · c · k) = ∆(kch) = k(1)c(1)h(1) ⊗ k(2)c(2)h(2) = h(1) · c(1) · k(1) ⊗ h(2) · c(2) · k(2)

Remark 11. We have a commutative diagram of functors (cf. Proposition 15)

  YD••(k) --P--> •YD•(k)
     |F             |F′
     v              v
  DK••(k) ------> •DK•(k)
Applying Theorem 19, we obtain

Corollary 30. If C is flat as a k-module, then _A YD(K, H)^C is a Grothendieck category.

Now we take A = C = H = K, so that we have classical Yetter-Drinfeld modules. We view A and C as bialgebras, respectively A = H and C = H^op.

Proposition 96. Let H be a bialgebra with twisted antipode. Then the category of Yetter-Drinfeld modules _H YD(H, H)^H is a monoidal category.
Proof. It suffices to verify (2.111) and (2.113). From (4.31), we know that, for all a, c ∈ H:

a_ψ ⊗ c^ψ = a(2) ⊗ a(3) c S(a(1))

We easily compute that

a(1)ψ ⊗ a(2)Ψ ⊗ d^ψ c^Ψ = a(2) ⊗ a(5) ⊗ a(6) d S(a(4)) a(3) c S(a(1)) = a(2) ⊗ a(3) ⊗ a(4) dc S(a(1)) = ∆(a_ψ) ⊗ (dc)^ψ

as needed (the multiplication on C is opposite to the one on H). Finally

ε_A(a_ψ) 1_C^ψ = ε_A(a(2)) a(3) 1_C S(a(1)) = ε_A(a) 1_C

From Corollary 4, we find

Corollary 31. The category of Yetter-Drinfeld modules over a field k is a Grothendieck category with enough injective objects.

The Drinfeld double

Now let (K, H, A, C) ∈ •YD•(k) (K has a twisted antipode), and assume that C is finitely generated and projective as a k-module, with finite dual basis {d_i, d_i* | i = 1, ..., n}. We have (A, C, ψ) ∈ •E•(k) (see (4.31)), and Theorem 8 delivers a smash product structure (A, C*, R) ∈ S(k). We write

D1 = A #_R C* = A ⋈ C*

(2.30) tells us that

R(c* ⊗ a) = Σ_i ⟨c*, d_i^ψ⟩ a_ψ ⊗ d_i*
 = Σ_i ⟨c*, a[1] d_i S^K(a[−1])⟩ a[0] ⊗ d_i*
 = Σ_i ⟨S^K(a[−1]) · c* · a[1], d_i⟩ a[0] ⊗ d_i*
 = a[0] ⊗ S^K(a[−1]) · c* · a[1]
(4.36)
Now assume that (K, H, A, C) ∈ • YD• (k) (H has a twisted antipode). Now we find (A, C, ϕ) ∈ • E• (k), and, if C is finitely generated and projective, a smash product structure (C ∗ , A, S) (see Theorem 9). We now find S : A ⊗ C ∗ → C ∗ ⊗ A given by S(a ⊗ c∗ ) = a[−1] c∗ S H (a[1] ) ⊗ a[0] The multiplication on
D2 = C* #_S A = C* ⋈ A

is the following

(c* ⋈ a)(d* ⋈ b) = c* ∗ (a[−1] · d* · S^H(a[1])) ⋈ a[0] b

Now assume that both H and K have a twisted antipode. It is easy to check that S = R^{−1}, and we conclude from Proposition 23 that A ⋈ C* ≅ C* ⋈ A. We call A ⋈ C* the Drinfeld double associated to (K, H, A, C).
The classical situation is K = H = A = C, where H is a bialgebra with twisted antipode. The classical Drinfeld double is D(H) = H* ⋈ H with multiplication rule

(h* ⋈ h)(k* ⋈ k) = h* ∗ (h(1) · k* · S(h(3))) ⋈ h(2) k    (4.37)

(Compare to [108, IX.4.2]). Let us make clear also that the Drinfeld double that we introduced coincides with the one of Majid's book [128]. First observe that

⟨h · k* · k, l⟩ = ⟨k*, klh⟩ = ⟨k*(1), k⟩⟨k*(2), l⟩⟨k*(3), h⟩

hence

h · k* · k = ⟨k*(1), k⟩⟨k*(3), h⟩ k*(2)

and (4.37) can be rewritten as

(h* ⋈ h)(k* ⋈ k) = ⟨k*(1), S(h(3))⟩⟨k*(3), h(1)⟩ h* ∗ k*(2) ⋈ h(2) k    (4.38)

which is the multiplication rule found in [128, Exercise 7.1.2]. If we regard H* and H as subalgebras of D(H) = H* ⋈ H via h ↦ ε ⋈ h and h* ↦ h* ⋈ 1, then (4.38) can be written as a commutation rule:

h k* = ⟨k*(1), S(h(3))⟩⟨k*(3), h(1)⟩ k*(2) h(2)

and this can be written in a more symmetric form:

⟨k*(1), h(2)⟩ h(1) k*(2) = ⟨k*(2), h(1)⟩ k*(1) h(2)    (4.39)
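For the group algebra H = kG of a finite group G, formula (4.37) specializes to the familiar basis description of D(kG): writing basis elements as δ_g ⋈ h, the product is (δ_g ⋈ h)(δ_{g′} ⋈ h′) = [g = hg′h^{-1}] δ_g ⋈ hh′. The following small sketch is our own illustration (the specialization is standard, but the representation of S3 by permutation tuples and the helper names are our choices); it checks associativity of this product on all basis triples.

```python
from itertools import permutations, product

def compose(p, q):                     # permutation product: (p.q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    r = [0] * 3
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

G = list(permutations(range(3)))       # the symmetric group S3

def mult(x, y):
    """Product of basis elements (g, h) of D(kG); returns a basis element or None (= 0)."""
    (g, h), (g2, h2) = x, y
    if g == compose(compose(h, g2), inverse(h)):   # g = h g2 h^{-1}
        return (g, compose(h, h2))
    return None

basis = list(product(G, G))            # the 36 basis elements delta_g >< h
for x, y, z in product(basis, repeat=3):
    xy, yz = mult(x, y), mult(y, z)
    left = mult(xy, z) if xy else None
    right = mult(x, yz) if yz else None
    assert left == right               # D(kS3) is associative
print("associativity of D(kS3) verified on all basis triples")
```

Since products of basis elements are again basis elements or zero, checking associativity on the basis suffices by linearity.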
Proposition 97. Let H be a finitely generated projective Hopf algebra with invertible antipode. Then D(H) = H*^{cop} ⋈ H is a Hopf algebra. The antipode is given by

S_{D(H)}(h* ⋈ h) = (ε ⋈ S(h))(S*(h*) ⋈ 1)    (4.40)

Proof. This is an immediate application of Proposition 39. Using (4.38), we find

R(h ⊗ h*) = ⟨h*(1), S(h(3))⟩⟨h*(3), h(1)⟩ h*(2) ⊗ h(2)    (4.41)
and

h*(2)R ⊗ h(1)R ⊗ h*(1)r ⊗ h(2)r
 = ⟨h*(4), S(h(3))⟩⟨h*(6), h(1)⟩⟨h*(1), S(h(6))⟩⟨h*(3), h(4)⟩ h*(5) ⊗ h(2) ⊗ h*(2) ⊗ h(5)
 = ⟨h*(1), S(h(4))⟩⟨h*(4), h(1)⟩ h*(3) ⊗ h(2) ⊗ h*(2) ⊗ h(3)
 = h*R(2) ⊗ hR(1) ⊗ h*R(1) ⊗ hR(2)

proving (2.100). (2.101) is easy.

Frobenius properties

Since Yetter-Drinfeld modules are Doi-Koppinen Hopf modules, and the Drinfeld double is a smash product, we can use the results of Sections 3.2 and 3.3 to obtain criteria for the forgetful functor _H YD(H, H)^H → _H M, or the extension D(H)/H, to be separable or Frobenius. We will apply these results in this Section; in particular, we will show that there is a connection between Frobenius properties of the Drinfeld double and the unimodularity of the underlying Hopf algebra. Our first result is due to Radford [154] in the case where k is a field; in the case where k is a commutative ring, it appeared in [49].

Theorem 52. If H is a Frobenius Hopf algebra, then D(H) is Frobenius and unimodular.

Proof. Using Theorem 31, we find free generators t and ϕ for ∫^l_H and ∫^l_{H*} such that ⟨ϕ, t⟩ = 1. Let α be the distinguished element in H*, i.e. th = α(h)t for all h ∈ H (cf. Proposition 63). The multiplication in D(H) can be written in the following way:

(h* ⋈ h)(k* ⋈ k) = h* ∗ (h(1) · k* · S(h(3))) ⋈ h(2) k

{k_i, k_i* | i = 1, ..., n} will be a dual basis for H. We will first compute ∫^l_{D(H)}. A left integral y in D(H) can be written under the form

y = Σ_{i=1}^n h_i* ⋈ k_i

for some h_i* ∈ H*. For all h* ∈ H*, we have (h* ⋈ 1)y = ⟨h*, 1⟩y, or

Σ_{i=1}^n h* ∗ h_i* ⋈ k_i = ⟨h*, 1⟩ Σ_{i=1}^n h_i* ⋈ k_i    (4.42)
For a fixed index j, we apply k_j* to the second factor. This gives us

h* ∗ Σ_{i=1}^n ⟨k_j*, k_i⟩ h_i* = ⟨h*, 1⟩ Σ_{i=1}^n ⟨k_j*, k_i⟩ h_i*

and

Σ_{i=1}^n ⟨k_j*, k_i⟩ h_i* ∈ ∫^l_{H*}

so that we can write

Σ_{i=1}^n ⟨k_j*, k_i⟩ h_i* = x_j ϕ

for some x_j ∈ k. Then

y = Σ_{i,j=1}^n ⟨k_j*, k_i⟩ h_i* ⋈ k_j = Σ_{j=1}^n x_j ϕ ⋈ k_j = ϕ ⋈ k

where we wrote k = Σ_{j=1}^n x_j k_j. Next

(ε ⋈ h)y = ε(h)y

for all h ∈ H, so

ε(h) ϕ ⋈ k = (ε ⋈ h)(ϕ ⋈ k) = h(1) · ϕ · S(h(3)) ⋈ h(2) k

We apply the first factor to t, and find, using ⟨ϕ, t⟩ = 1,

ε(h)k = ⟨ϕ, S(h(3)) t h(1)⟩ h(2) k = ⟨ϕ, ε(h(3)) α(h(1)) t⟩ h(2) k = ⟨α, h(1)⟩ h(2) k

and

⟨α, S(h)⟩ k = hk

Apply S to both sides:

⟨α, S(h)⟩ S(k) = S(k) S(h)

This holds for all h ∈ H, and since S is bijective, we have

⟨α, h⟩ S(k) = S(k) h

This implies that S(k) ∈ ∫^r_α = ∫^l_H, and k ∈ ∫^r_H, and we have shown that

∫^l_{D(H)} ⊂ ∫^l_{H*} ⋈ ∫^r_H = k(ϕ ⋈ u)

where u = S(t). This inclusion is an equality. To see this, we put
I = {x ∈ k | x(ϕ ⋈ u) ∈ ∫^l_{D(H)}}

Clearly I is an ideal of k. Using Proposition 62, we find y_j ∈ ∫^l_{D(H)} and y_j* ∈ D(H)* such that

Σ_{j=1}^m ⟨y_j*, y_j⟩ = 1

We can write y_j = x_j(ϕ ⋈ u), with x_j ∈ I; we obtain

1 = Σ_{j=1}^m ⟨y_j*, ϕ ⋈ u⟩ x_j ∈ I

and I = k, or

∫^l_{D(H)} = k(ϕ ⋈ u)

This shows that D(H) is Frobenius; to complete the proof, it suffices to show that ϕ ⋈ u is also a right integral in D(H). To see this, we proceed as follows. ϕ ⋈ u is a left integral, so

ε(h)(ϕ ⋈ u) = (ε ⋈ h)(ϕ ⋈ u) = h(1) · ϕ · S(h(3)) ⋈ α(S(h(2))) u

for all h ∈ H, and this implies

⟨α, S(h(2))⟩ h(1) · ϕ · S(h(3)) = ε(h) ϕ    (4.43)

(4.43) holds in any Frobenius Hopf algebra. We write down (4.43) with H replaced by H*. Let g be the distinguished element in H; then (4.43) takes the form

⟨h*, 1⟩ t = ⟨S*(h*(2)), g⟩ h*(1) · t · S*(h*(3)) = ⟨h*(2), S(g)⟩⟨h*(1), t(3)⟩⟨h*(3), S(t(1))⟩ t(2) = ⟨h*, t(3) S(g) S(t(1))⟩ t(2)    (4.44)

Now t = S(u), so

⟨S*(h*), 1⟩ S(u) = ⟨h*, 1⟩ t = ⟨h*, S(u(1)) S(g) u(3)⟩ S(u(2)) = ⟨S*(h*), S(u(3)) g u(1)⟩ S(u(2))

S and S* are bijective, so

⟨h*, 1⟩ u = ⟨h*, S(u(3)) g u(1)⟩ u(2)    (4.45)

Now we can prove that ϕ ⋈ u is a right integral:
(ϕ ⋈ u)(ε ⋈ h) = ϕ ⋈ uh = ε(h)(ϕ ⋈ u)

and

(ϕ ⋈ u)(h* ⋈ 1) = ϕ ∗ (u(1) · h* · S(u(3))) ⋈ u(2)
 = ⟨u(1) · h* · S(u(3)), g⟩ ϕ ⋈ u(2)
 = ⟨h*, S(u(3)) g u(1)⟩ ϕ ⋈ u(2)
 = ⟨h*, 1⟩ ϕ ⋈ u    (by (4.45))
We are now able to give necessary and sufficient conditions for the Drinfeld double D(H) to be Frobenius over H.

Theorem 53. For a Hopf algebra H with bijective antipode over a commutative ring k, the following conditions are equivalent:
1. H is finitely generated and projective, and D(H)/H is Frobenius;
2. H is Frobenius over k and unimodular;
3. H is finitely generated and projective, and there exists t ∈ H such that the map

φ_H : H* → H ;  φ_H(h*) = ⟨h*, t(2)⟩ t(1)    (4.46)

is bijective, and

h(2) ⊗ t h(1) = h(1) ⊗ h(2) t    (4.47)

for all h ∈ H.

Proof. 1. ⇒ 2. It follows from Corollary 12 that H/k is Frobenius (take into account that the Drinfeld double is a particular case of the smash product). From Theorem 52, it follows that D(H) is unimodular. We will use the same notation as in the proof of Theorem 52, namely

∫^l_H = kt ; ∫^r_H = ku ; ∫^l_{H*} = kϕ ; ∫^r_{H*} = kψ

with

u = S(t) ; ψ = S*(ϕ) ; ⟨ϕ, t⟩ = 1

Recall from Proposition 55 that we have an isomorphism of functors Φ : F → F′, where F, F′ : M_H → M_{D(H)} with F(M) = M ⊗ H* and F′(M) = H ⊗ M for every right H-module M. According to (3.31), the right D(H)-action on F(M) = M ⊗ H* is given by

(m ⊗ k*) ↼ (h* ⋈ h) = m h_R ⊗ (k* ∗ h*)^R    (4.48)

and the right D(H)-action on F′(M) = H ⊗ M is (cf. (3.32))
(k ⊗ m) ↼ (h* ⋈ h) = Σ_i ⟨h* ∗ k*_{iR}, k⟩ k_i ⊗ m h_R    (4.49)
M = kt is a right H-module, and we have an isomorphism

Φ_M : H ⊗ kt → kt ⊗ H*

We write

Φ_M^{-1}(t ⊗ ψ) = h_0 ⊗ t

with h_0 ∈ H. In Theorem 52, we have seen that ϕ ⋈ u is a free generator of ∫^l_{D(H)} = ∫^r_{D(H)}. This implies that

S_{D(H)}(ϕ ⋈ u) = (ε ⋈ S(u))(S*(ϕ) ⋈ 1) = (ε ⋈ t)(ψ ⋈ 1)

is also a free generator of ∫^l_{D(H)} = ∫^r_{D(H)}. In particular, we find for all h ∈ H:

ε(h)(ε ⋈ t)(ψ ⋈ 1) = (ε ⋈ t)(ψ ⋈ 1)(ε ⋈ h) = (ε ⋈ t)(ψ ⋈ h) = (ε ⋈ t)(ε ⋈ h_R)(ψ_R ⋈ 1) = (ε ⋈ t h_R)(ψ_R ⋈ 1)

We can rewrite this as

ε(h) R(t ⊗ ψ) = R(t h_R ⊗ ψ_R)

or, since R is invertible,

ε(h) t ⊗ ψ = t h_R ⊗ ψ_R

Using this formula, we compute

(t ⊗ ψ) ↼ (h* ⋈ h) = t h_R ⊗ (ψ ∗ h*)^R = ⟨h*, 1⟩ t h_R ⊗ ψ_R = ⟨h*, 1⟩⟨ε, h⟩ t ⊗ ψ

We also used (4.48) and the fact that ψ is a right integral. The key point is now that Φ_M and Φ_M^{-1} are right D(H)-linear. This implies that

Φ_M^{-1}((t ⊗ ψ) ↼ (h* ⋈ h)) = (h_0 ⊗ t) ↼ (h* ⋈ h)

or

⟨h*, 1⟩⟨ε, h⟩ h_0 ⊗ t = (h_0 ⊗ t) ↼ (h* ⋈ h)    (4.50)

Take h = 1. Then

⟨h*, 1⟩ h_0 ⊗ t = Σ_i ⟨h* ∗ k_i*, h_0⟩ k_i ⊗ t = ⟨h*, h_0(1)⟩ h_0(2) ⊗ t
Applying ε to the first factor, we find ⟨h*, 1⟩⟨ε, h_0⟩ t = ⟨h*, h_0⟩ t and, since kt is free of rank one,

⟨h*, 1⟩⟨ε, h_0⟩ = ⟨h*, h_0⟩

for all h* ∈ H*. This implies that h_0 = ε(h_0)1_H ∈ k1_H, and we can write h_0 = x_0 1_H with x_0 ∈ k. Now we apply (4.50) with h* = ε. For all h ∈ H, we have that

ε(h) x_0 1_H ⊗ t = Σ_i x_0 ⟨k*_{iR}, 1_H⟩ k_i ⊗ t h_R = Σ_i x_0 ⟨S(h(1)) · k_i* · h(3), 1_H⟩ k_i ⊗ t h(2) = x_0 h(3) S(h(1)) ⊗ t h(2)

We apply ε to the first factor. This yields

x_0 ε(h) t = x_0 α(h) t

and x_0(ε(h) − α(h)) = 0 for all h ∈ H, since t is a free generator of ∫^l_H. Now

0 = Φ_M((ε(h) − α(h)) x_0 1_H ⊗ t) = (ε(h) − α(h)) t ⊗ ψ

so ε(h) = α(h) for all h ∈ H, since t ⊗ ψ is a free generator of ∫^l_{D(H)}. This shows that H is unimodular.
2. ⇒ 3. H/k is finitely generated and projective, since H/k is Frobenius. Let t be a free generator of ∫^l_H = ∫^r_H. We know from the proof of Theorem 31 that φ_H is bijective. Furthermore

h(2) ⊗ t h(1) = h(1) ⊗ h(2) t = h ⊗ t

since t is at the same time a left and right integral.
3. ⇒ 1. We compute easily that

ψ(h ⊗ t) = h(2) ⊗ h(3) t S(h(1)) = h(3) ⊗ t h(2) S(h(1)) = h ⊗ t

and it follows from Proposition 72 (applied to _H M(ψ)^H ≅ _H M(ψ ◦ τ)^{H^op} ≅ _H YD(H, H)^H ≅ _{D(H)}M) that D(H)/H is Frobenius.

Remark 12. An imprimitivity Theorem for Yetter-Drinfeld modules has recently been proved by the second author and Menini; we refer to [130].
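As a sanity check of Theorem 53, one can verify condition 3. directly in the simplest case H = kG, for G a finite group (an illustrative special case of our own choosing, not taken from the text):

```latex
% Take t = \sum_{g \in G} g \in kG. Then th = t = ht for all h \in G,
% so t is a two-sided integral and kG is unimodular. Since every h \in G
% is grouplike, condition (4.47) holds trivially:
\[
  h_{(2)} \otimes t h_{(1)} \;=\; h \otimes t h \;=\; h \otimes t
  \;=\; h \otimes h t \;=\; h_{(1)} \otimes h_{(2)} t .
\]
% Moreover \Delta(t) = \sum_{g \in G} g \otimes g, so the map (4.46) is
\[
  \varphi_{kG} : (kG)^* \to kG, \qquad
  \varphi_{kG}(p) \;=\; \langle p, t_{(2)}\rangle\, t_{(1)}
  \;=\; \sum_{g \in G} p(g)\, g ,
\]
% which is clearly bijective; hence D(kG)/kG is Frobenius by Theorem 53.
```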
4.5 Long dimodules

Let A be an algebra and C a coalgebra, and consider the identity I_{A⊗C} : A ⊗ C → A ⊗ C. It is obvious that (A, C, I_{A⊗C}) ∈ •E•. The corresponding entwined modules satisfy the compatibility relation

ρ^r(am) = a m[0] ⊗ m[1]

i.e. the left A-action is right C-colinear, or, equivalently, the right C-coaction is left A-linear. The objects of

_A M(I_{A⊗C})^C = _A L^C

are called (generalized) Long dimodules. If C is finitely generated and projective, then

_A L^C ≅ _{A⊗C*}M

where A ⊗ C* is the "trivial" smash product: the "braiding" map R : C* ⊗ A → A ⊗ C* is nothing else than the switch map. Long dimodules can be viewed as a special case of Doi-Koppinen Hopf modules: let H be any bialgebra (e.g. H = k), and let H coact trivially on A and act trivially on C. In Chapter 7, we will study the case where A = C = H is a bialgebra, in the context of nonlinear equations. In the situation where H is commutative and cocommutative, Long dimodules are then the same thing as Yetter-Drinfeld modules.
Long dimodules were first introduced in [118] to study H-Azumaya algebras and the Brauer-Long group of a finitely generated projective, commutative, cocommutative Hopf algebra. We refer to [35] for actual computation of the Brauer-Long group. One can generalize the Brauer-Long group to the situation where one works over a general Hopf algebra (not necessarily commutative or cocommutative), see [52]. Then one has to consider Yetter-Drinfeld modules instead of dimodules. This is related to the fact that Yetter-Drinfeld modules form a braided monoidal category, and that one can define the Brauer group of a braided monoidal category (see [182]).
Obviously, we can also consider right-right, left-left and right-left versions of Long dimodules. We will now discuss the results of Section 3.3 for categories of Long dimodules.

Proposition 98. For an algebra A and a coalgebra C, the following conditions are equivalent:
1. _A L^C → _A L is separable;
2. L^C_A → L_A is separable;
3. there exists ϑ : C ⊗ C → A such that
a. Im ϑ ⊂ Z(A);
b. ϑ(c ⊗ d(1)) ⊗ d(2) = ϑ(c(2) ⊗ d) ⊗ c(1);
c. ϑ(∆(c)) = ε(c)1_A
for all c ∈ C and a ∈ A.
4. ^C_A L → _A L is separable;
5. ^C L_A → L_A is separable;
6. there exists ϑ′ : C ⊗ C → A such that
a. Im ϑ′ ⊂ Z(A);
b. ϑ′(c ⊗ d(2)) ⊗ d(1) = ϑ′(c(1) ⊗ d) ⊗ c(2);
c. ϑ′(∆^cop(c)) = ε(c)1_A
for all c ∈ C and a ∈ A.
In particular, if C is coseparable as a coalgebra, then _A L^C → _A L is a separable functor.

Proof. 2. ⇔ 3. follows immediately from Theorem 38. 1. ⇔ 2. follows from the fact that 3) is the same for A as for A^op. 3. ⇔ 6. follows after we put ϑ′ = ϑ ◦ τ. The equivalence of 4., 5. and 6. is the same as the equivalence of 1., 2. and 3., but with C replaced by C^cop. The final statement follows after we look at part 1. of Corollary 17.

Proposition 99. Let H be a bialgebra. Then _H L^H → _H L is a separable functor if and only if H is coseparable as a coalgebra.
Proof. One direction has already been shown in Proposition 98. Conversely, assume that _H L^H → _H L is separable. Then there exists ϑ : H ⊗ H → H satisfying the requirements of Proposition 98. It is easy to prove that θ = ε_H ◦ ϑ is a Larson coseparability idempotent, and the result follows using part 1. of Corollary 17. Similar results apply to the other forgetful functor.

Proposition 100. Let A be an algebra and C a coalgebra. The following assertions are equivalent.
1. L^C_A → L^C is separable;
2. ^C L_A → ^C L is separable;
3. there exists e : C → A ⊗ A, e(c) = e1(c) ⊗ e2(c), such that
a. e(c(1)) ⊗ c(2) = e(c(2)) ⊗ c(1);
b. e1(c) ⊗ e2(c)a = a e1(c) ⊗ e2(c);
c. e1(c)e2(c) = ε(c)1_A,
for all c ∈ C and a ∈ A.
4. ^C_A L → ^C L is separable;
5. _A L^C → L^C is separable;
6. there exists e : C → A ⊗ A, e(c) = e1(c) ⊗ e2(c), such that
a. e(c(1)) ⊗ c(2) = e(c(2)) ⊗ c(1);
b. e1(c)a ⊗ e2(c) = e1(c) ⊗ a e2(c);
c. e2(c)e1(c) = ε(c)1_A,
for all c ∈ C and a ∈ A.
If A is a separable k-algebra, then L^C_A → L^C is separable. The converse holds if A = C = H is a bialgebra.
Proof. Similar to the proof of Propositions 98 and 99, but now using part 1. of Proposition 76.
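The coseparability criterion of Proposition 98 can be made very concrete for a grouplike-spanned coalgebra. The following sketch is our own illustration (the choice C = kX on a finite set X, with ∆(x) = x ⊗ x and ε(x) = 1, and the map ϑ(x ⊗ y) = δ_{x,y}1_A, are ours): it verifies conditions 3a-3c on basis elements, which suffices by linearity.

```python
# C = kX, the grouplike coalgebra on a finite set X; A is any algebra, and
# theta(x ⊗ y) = delta_{x,y} 1_A takes values in k·1_A (so in Z(A), giving 3a).
# We encode elements of k·1_A simply as scalars.

X = ["x", "y", "z"]                    # a basis of grouplikes

def theta(x, y):
    return 1 if x == y else 0

for x in X:
    for y in X:
        # 3b: theta(c ⊗ d_(1)) ⊗ d_(2) = theta(c_(2) ⊗ d) ⊗ c_(1).
        # On grouplikes, the two sides are theta(x, y) ⊗ y and theta(x, y) ⊗ x;
        # they agree whenever the scalar is nonzero (forcing x == y) or zero.
        lhs = (theta(x, y), y)
        rhs = (theta(x, y), x)
        assert lhs == rhs or theta(x, y) == 0
    # 3c: theta(Delta(x)) = theta(x ⊗ x) = eps(x) 1_A
    assert theta(x, x) == 1
print("theta(x ⊗ y) = delta_{x,y} 1_A is a coseparability map for C = kX")
```

This is the same map ϑ that reappears in Proposition 101 below for kX-graded modules.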
4.6 Modules graded by G-sets

Let G be a group, and X a right G-set. Let A be a G-graded k-algebra: A = ⊕_{σ∈G} A_σ, with A_σ A_τ ⊂ A_{στ} for all σ, τ ∈ G. Then kG is a Hopf algebra, A is a right kG-comodule algebra, and kX is a right kG-module coalgebra, and we have (kG, A, kX) ∈ DK••(k). The entwining map ψ : kX ⊗ A → A ⊗ kX is given by

ψ(x ⊗ a_σ) = a_σ ⊗ xσ

for x ∈ X and a_σ ∈ A_σ homogeneous of degree σ. As we have seen, the corresponding category of entwined modules M(kG)^{kX}_A is isomorphic to the category gr-(G, A, X) of right A-modules graded by kX. This means that M ∈ M(kG)^{kX}_A ≅ gr-(G, A, X) can be written as M = ⊕_{x∈X} M_x, with M_x A_σ ⊂ M_{xσ} for all x ∈ X and σ ∈ G.

Proposition 101. With notation as above, the forgetful functor F : M(kG)^{kX}_A → M_A is separable.

Proof. This is a direct application of Theorem 38. Consider ϑ : kX ⊗ kX → A defined by

ϑ(x ⊗ y) = δ_{x,y} 1_A

ϑ satisfies (3.49) and (3.50). For a_σ ∈ A_σ and x, y ∈ X, the right hand side of (3.49) is

a_σ ϑ(xσ ⊗ yσ) = δ_{xσ,yσ} a_σ 1_A = δ_{x,y} a_σ 1_A = ϑ(x ⊗ y) a_σ

and this is the left hand side of (3.49). (3.50) takes the form

ϑ(x ⊗ y) ⊗ y = ψ(x ⊗ ϑ(x ⊗ y))

and this holds since deg(ϑ(x ⊗ y)) = e. Finally ϑ(∆(x)) = ϑ(x ⊗ x) = ε(x)1_A.
Let us now investigate the separability of the adjoint functor

G : M_A → M(kG)^{kX}_A

Recall that the separability of G is determined by the elements of the k-module

W ≅ W5 = {z = a^1 ⊗ c^1 ∈ A ⊗ kX | aa^1 ⊗ c^1 = a^1 a_ψ ⊗ c^1_ψ}

Write z under the form

z = Σ_{x∈X} z(x) ⊗ x

with only a finite number of the z(x) different from 0. z ∈ W5 if and only if for all σ ∈ G and a_σ ∈ A_σ, we have

Σ_{x∈X} a_σ z(x) ⊗ x = Σ_{x∈X} z(x) a_σ ⊗ xσ

The right hand side equals Σ_{x∈X} z(xσ^{-1}) a_σ ⊗ x. Using the fact that X is a free basis for kX, we conclude that W5 consists of families (z(x))_{x∈X} ⊂ A such that z(x) = 0 for all but a finite number of x ∈ X and

a_σ z(x) = z(xσ^{-1}) a_σ

for all x ∈ X, σ ∈ G and a_σ ∈ A_σ. From the second part of Theorem 38, we can now conclude the following result, which was originally proved in [145] (for X = G) and [159] (in general); the present proof appeared in [41].

Proposition 102. Let G be a group, X a right G-set, and A a G-graded k-algebra. The functor G = • ⊗ kX : M_A → M(kG)^{kX}_A is separable if and only if there exists a family (z(x))_{x∈X} ⊂ A, with all but a finite number of the z(x) equal to zero, such that

a_σ z(x) = z(xσ^{-1}) a_σ  and  Σ_{x∈X} z(x) = 1_A
for all σ ∈ G and a_σ ∈ A_σ.

Corollary 32. With notation as in Proposition 102, we have
1. If there exists a finite G-subset X′ ⊂ X such that |X′| is invertible in k, then the functor G = • ⊗ kX is separable.
2. Conversely, if A is G-strongly graded, and G is separable, then there exists a finite G-subset X′ ⊂ X.

Proof. 1. Put

z(x) = |X′|^{-1} 1_A  if x ∈ X′
z(x) = 0              if x ∉ X′
2. Assume that (z(x))_{x∈X} ⊂ A satisfies the conditions of Proposition 102. We claim that

X′ = {x ∈ X | z(x) ≠ 0}

is a finite G-subset of X. A is strongly graded, so for every σ ∈ G, A_σ A_{σ^{-1}} = A_e, and there exist b_1, ..., b_m ∈ A_σ and b′_1, ..., b′_m ∈ A_{σ^{-1}} such that

Σ_{j=1}^m b_j b′_j = 1

Now for every x ∈ X, we have

z(x) = Σ_{j=1}^m b_j b′_j z(x) = Σ_{j=1}^m b_j z(xσ) b′_j

so z(xσ) = 0 implies z(x) = 0; hence z(x) ≠ 0 implies z(xσ) ≠ 0, and xσ ∈ X′.

Corollary 33. Let k be a field of characteristic zero, G a group, X a right G-set, and A a strongly G-graded k-algebra. The following statements are equivalent:
1. The functor G = • ⊗ kX : M_A → gr-(G, X, A) is separable;
2. There exists a finite G-subset X′ of X;
3. There exists x ∈ X with finite G-orbit O(x).

Remarks 8. 1. If A is not strongly graded, then the separability of the functor G does not imply that X has a finite G-subset. For example, take X any G-set without finite orbit, and A an arbitrary k-algebra with trivial G-grading. For any fixed y ∈ X, the family z(x) = δ_{x,y} 1_A meets the requirements of Proposition 102, so G is separable.
2. Let X = G = Z, and A a strongly G-graded k-algebra. Then the functor G : M_A → gr-A is not separable.
Now take X = G, where the action is the usual multiplication of G. Then the category gr-(G, G, A), also denoted by gr-A, is the category of G-graded A-modules, and we obtain the following properties:
1. if F : M_A → gr-A is separable then G is finite;
2. if G is finite and |G| is invertible in k, then F is a separable functor.
Let us now examine when (F, G) is a Frobenius pair.

Theorem 54. [49] Let G be a group, X a right G-set, and A a G-graded ring. The functor F : M(kG)^{kX}_A → M_A and its right adjoint G = • ⊗ kX form a Frobenius pair if and only if X is finite.
Proof. First assume that (F, G) is a Frobenius pair. From Theorem 36, we know that C = A ⊗ kX is finitely generated and projective as a left A-module, and this is only possible if X is finite.
To prove the converse, we apply part III of Theorem 38. We have already seen that the map

ϑ : kX ⊗ kX → A ;  ϑ(x ⊗ y) = δ_{x,y} 1_A

belongs to V5, and that z = Σ_{x∈X} 1_A ⊗ x ∈ W5. It is immediate to verify that ϑ and z satisfy (3.54).
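The separability family of Proposition 102 and Corollary 32(1) can be checked mechanically in a toy case. The sketch below is our own illustration (the choices A = Q[t, t^{-1}] with its usual Z-grading, and the finite Z-set X = Z/3Z with Z acting by translation, are ours, not taken from the text):

```python
from fractions import Fraction

# G = Z acts on the finite G-set X = Z/3Z by x . sigma = (x + sigma) mod 3;
# A = Q[t, t^{-1}] is strongly Z-graded (deg t^sigma = sigma).  Corollary 32(1)
# proposes the family z(x) = |X|^{-1} 1_A for every x in X.
n = 3
X = range(n)
z = {x: Fraction(1, n) for x in X}

# A homogeneous a_sigma is c * t^sigma with c a scalar.  Since each z(x) is a
# central scalar, the condition a_sigma z(x) = z(x . sigma^{-1}) a_sigma of
# Proposition 102 reduces to z(x) = z((x - sigma) % n), which we verify:
for sigma in range(-5, 6):
    for x in X:
        assert z[x] == z[(x - sigma) % n]

# normalization: the z(x) sum to 1_A
assert sum(z.values()) == 1
print("z(x) = 1/3 satisfies the separability conditions of Proposition 102")
```

Here the check is trivial precisely because the family is constant and central; for a noncommutative strongly graded A, the condition a_sigma z(x) = z(x sigma^{-1}) a_sigma is a genuine constraint.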
4.7 Two-sided entwined modules revisited

Let H be a Hopf algebra. In [161], Schauenburg has shown that the category of Yetter-Drinfeld modules is equivalent to a category of two-sided two-cosided Hopf modules. Surprisingly, this category is not a special case of the two-sided categories of entwined modules that we introduced in Section 2.6, and that played an important role in the development of the theory of Chapter 3. The left-right compatibility conditions are different: in fact Schauenburg needs the Hopf module compatibility conditions, while in Section 2.6, the left-right compatibility conditions were just Long's compatibility conditions. A common feature of the two types of two-sided modules is that they can both be viewed as one-sided Doi-Hopf modules (and a fortiori as entwined modules, see [14] and [162]).

Compatible entwining structures

Let (A, C, ψ) and (B, C, λ) be two right-right entwining structures, and consider the map

θ = (I_A ⊗ λ) ◦ (ψ ⊗ I_B) : C ⊗ A ⊗ B → A ⊗ B ⊗ C

that is, θ(c ⊗ a ⊗ b) = a_ψ ⊗ b_λ ⊗ c^{ψλ}. We say that (A, C, ψ) and (B, C, λ) are compatible if (A ⊗ B, C, θ) is a right-right entwining structure.

Proposition 103. (A, C, ψ) and (B, C, λ) are compatible if and only if

a_ψ ⊗ b_λ ⊗ c^{ψλ} = a_ψ ⊗ b_λ ⊗ c^{λψ}    (4.51)

for all a ∈ A, b ∈ B and c ∈ C.

Proof. Assume first that (4.51) holds. Then

((a ⊗ b)(a′ ⊗ b′))_θ ⊗ c^θ = (aa′ ⊗ bb′)_θ ⊗ c^θ
 = (aa′)_ψ ⊗ (bb′)_λ ⊗ c^{ψλ}
 = a_ψ a′_Ψ ⊗ b_λ b′_Λ ⊗ c^{ψΨλΛ}    (by (2.1))
 = a_ψ a′_Ψ ⊗ b_λ b′_Λ ⊗ c^{ψλΨΛ}    (by (4.51))
 = (a ⊗ b)_θ (a′ ⊗ b′)_Θ ⊗ c^{θΘ}

This means that θ satisfies (2.1). The other requirements (2.2-2.4) are obvious. Conversely, if (A ⊗ B, C, θ) is a right-right entwining structure, then θ satisfies (2.1), and the above computation shows that this means that

a_ψ a′_Ψ ⊗ b_λ b′_Λ ⊗ c^{ψΨλΛ} = a_ψ a′_Ψ ⊗ b_λ b′_Λ ⊗ c^{ψλΨΛ}

After we take a = 1_A and b′ = 1_B, we find (4.51).

Example 17. Let (H, A, C) and (K, B, C) be right-right Doi-Koppinen structures. If C is a right H ⊗ K-module coalgebra, then the corresponding entwining structures (A, C, ψ) and (B, C, λ) are compatible.

Proof. Recall that

ψ(c ⊗ a) = a[0] ⊗ c a[1]  and  λ(c ⊗ b) = b[0] ⊗ c b[1]

We easily find that

a_ψ ⊗ b_λ ⊗ c^{ψλ} = a[0] ⊗ b[0] ⊗ (c a[1]) b[1] = a[0] ⊗ b[0] ⊗ (c b[1]) a[1] = a_ψ ⊗ b_λ ⊗ c^{λψ}

If (A, C, ψ) and (B, C, λ) are compatible, then the category M(θ)^C_{A⊗B} consists of k-modules together with a right C-coaction and compatible right A- and B-actions such that M ∈ M(ψ)^C_A and M ∈ M(λ)^C_B.
Now take (A, C, ψ) ∈ •E•(k), and (B, C, λ) ∈ E••(k), respectively a left-right and a right-right entwining structure. From Proposition 14, we know that (A^op, C, ψ ◦ τ) ∈ E••(k), and we call (A, C, ψ) and (B, C, λ) compatible if and only if (A^op, C, ψ ◦ τ) and (B, C, λ) are compatible. We then consider the category

_A M(ψ, λ)^C_B

consisting of k-modules M with a right C-coaction and an (A, B)-bimodule structure such that M ∈ _A M(ψ)^C and M ∈ M(λ)^C_B
200
4 Applications
Example 18. (A, C, IA⊗C ) ∈ • E• (k) is compatible with every right-right entwining structure (B, C, λ) ∈ E•• (k). The corresponding modules are (A, B)bimodules, right-right (B, C, λ) entwined modules, and left-right (A, C)-Long dimodules. The same game can be played with the coalgebras: (A, C, ψ), (A, D, κ) ∈ E•• (k) are called compatible if θ = (ψ ⊗ ID ) ◦ (IC ⊗ κ) : C ⊗ D ⊗ A → A ⊗ C ⊗ D makes (A, C ⊗ D, θ) into an object of E•• (k). The proof of the next result is left to the reader. Proposition 104. (A, C, ψ), (A, D, κ) ∈ E•• (k) are compatible if and only if aψκ ⊗ cψ ⊗ dκ = aκψ ⊗ cψ ⊗ dκ
(4.52)
for all a ∈ A, c ∈ C and d ∈ D. (A, C, ψ) ∈ • E• and (A, D, κ) ∈ E•• (k) are called compatible if (A, C cop , τ ◦ ψ) and (A, D, κ) are compatible. We can then consider the category C
ψ M( κ)D B
Example 19. Take (H, A, C) ∈ • DK• (k) and (K, A, D) ∈ DK•• (k). If A is an (H, C)-bicomodule algebra, then the associated entwining structures are compatible. Our next step is to combine the two constructions. Consider (A, C, ψ), (B, D, ϕ), (B, C, λ), (A, D, κ) ∈ E•• (k) These 4 structures are pairwise compatible if and only if θ = (IA ⊗ λ ⊗ ID ) ◦ (ψ ⊗ ϕ) ◦ (IC ⊗ κ ⊗ IB ) : C ⊗ D ⊗ A ⊗ B → A ⊗ B ⊗ C ⊗ D makes (A ⊗ B, C ⊗ D, θ) into an object of E•• (k). Using left-right arguments as before, let (A, C, ψ) ∈ •• E(k) ; (B, D, ϕ) ∈ E•• (k) (B, C, λ) ∈ • E• (k) ; (A, D, κ) ∈ • E• (k) We can then consider the category D λ ψ ϕ κ B
C AM
4.7 Two-sided entwined modules revisited
201
Examples 7. 1. Let κ = IA⊗C , λ = IB⊗C . Then we recover the two-sided entwined modules from Section 2.6. 2. Let H, K, L, M be bialgebras, and – – – –
A an (H, K)-comodule algebra; B an (L, M )-comodule algebra; C an (H, L)-module coalgebra; D a (K, M )-module coalgebra.
Then we have Doi-Koppinen structures (H, A, C) ∈ •• DK(k) ; (M, B, D) ∈ DK•• (k) (L, B, C) ∈ • DK• (k) ; (K, A, D) ∈ • DK• (k) and the corresponding entwining structures are compatible. We obtain the following category of two-sided Doi-Hopf modules C AM
H
L M K
D B
3. Let us consider some particular case of Example 2); these will be of some use in the Proposition 105 and 105. Let H be a bialgebra. Via left and right multiplication an comultiplication, H is an (H, H)-bicomodule algebra and an (H, H)-bimodule coalgebra, and we can consider the category H HM
H
H H H
H H
Its objects are k-modules with a left and right H-action and H-coaction, such that they are at the same time left-left, left-right, right-left and right-right Hopf modules. Now k viewed as a bialgebra acts and coacts trivially on H, and we find that H is an (H, k)-bicomodule algebra, a (k, H)-bicomodule algebra, an (H, k)bimodule coalgebra, and a (k, H)-bimodule coalgebra, and we can consider the category H k H H M H H k H Its objects are left-left and right-right Hopf modules, and left-right, right-left Long dimodules. Finally, consider the category H HM
k H k k
H H
Its objects are left-left Hopf modules, and left-right, right-left, and right-right Long dimodules.
202
4 Applications
Proposition 105. [161] Let H be Hopf algebra, and assume that H is flat as a k-module. We have pairs of inverse equivalences between the following categories: k k H MH and H M H H H H H H k MH and H HM H k k H H H H YD(H, H)H and H M H H H H Proof. The Fundamental Theorem for Hopf modules (see Proposition 59) tells us that we have an equivalence between the categories M and H H M(H). coH (•). The left structure Recall that F = H ⊗ • : M → H H M(H) and G = on F (M ) = H ⊗ M is induced by the structure on H. 1. Take M ∈ MH . On H ⊗ M , we then define a right H-action as follows: (h ⊗ m)k = hk(1) ⊗ mk(2) Easy computations show that H ⊗ M is now an H-bimodule, and also an object of H M(H)H , since ρl ((h ⊗ m)k) = ρl (hk(1) ⊗ mk(2) ) = h(1) k(1) ⊗ h(2) k(2) ⊗ mk(3) = h(1) k(1) ⊗ (h(2) ⊗ m)k(2) and this shows that
H ⊗M ∈
k Conversely, for N ∈ H HM H H H coH
H HM
k H H H
k
k H
, we define a right H-action on G(N ) = H
N by n · h = S(h(1) )nh(2)
We have to show that ρl (n · h) = 1 ⊗ n · h: ρl (n · h) = ρl (S(h(1) )nh(2) ) = S(h(2) )n[−1] h(3) ⊗ S(h(1) )n[0] h(4) = 1 ⊗ S(h(1) )nh(2) = 1 ⊗ n · h 2. For M ∈ MH , we define a right H-coaction on H ⊗ M as follows: ρr (h ⊗ m) = (h(1) ⊗ m[0] ) ⊗ h(2) m[1] It can be verified easily that H ⊗ M is an (H, H)-bicomodule, and a left-right Hopf module, and consequently
4.7 Two-sided entwined modules revisited
H ⊗M ∈H HM H
H Conversely, for N ∈ H HM H k k
H k k
203
H k
H , the right H-coaction on N restricts to a k
right H-coaction on G(N ). 3. Let N be a right-right Yetter-Drinfeld module. We already know that H H H ⊗ M is an object of H H M(H), M(H)H , H M(H) , and that is an (H, H)bimodule and bicomodule. We are left to show that it is a right-right Hopf module. ρr ((h ⊗ m)k) = ρr (hk(1) ⊗ mk(2) ) = (hk(1) )(1) ⊗ (mk(2) )[0] ⊗ (hk(1) )(2) (mk(2) )[1] = h(1) k(1) ⊗ m[0] k(4) ⊗ h(2) k(2) S(k(3) )m[1] k(5) = h(1) k(1) ⊗ m[0] k(2) ⊗ h(2) m[1] k(3) = (h ⊗ m)[0] h(1) ⊗ (h ⊗ m)[1] h(2) as needed. Conversely, let N ∈
H HM
HH HH
H . We have seen above that H
G(N ) is a right H-module and a right H-comodule. Let us show that it is a Yetter-Drinfeld module. For n ∈ coH N , we have ρr (n · h) = ρr (S(h(1) )nh(2) ) = S(h(2) )n[0] h(3) ⊗ S(h(1) )n[1] h(4) = n[0] · h(2) ⊗ S(h(1) )n[1] h(3) as needed. We have similar results for the categories of Hopf modules and Long dimodules: they are equivalent to categories of two-sided Hopf modules, but with different compatibility conditions. Proposition 106. Let H be Hopf algebra, and assume that H is flat as a k-module. We have pairs of inverse equivalences between the following categories: H k H H and M H M(H)H H H k H H k H LH H and H M H k k H Proof. We proceed as in Proposition 105. For M ∈ MH , we now define the following action on H ⊗ M : (h ⊗ m)k = h ⊗ mk It is clear that this action makes H ⊗ M into an (H, H)-bimodule, and a right-left H-dimodule. Similarly, if M is a right H-comodule, then H ⊗ M
204
4 Applications
is a right H-comodule, with coaction IH ⊗ ρM . This makes H ⊗ M into an (H, H)-bicomodule, and a left-right H-dimodule. If M is a right-right Hopf H k module, then we find that H ⊗ M is an object of H , and if M HM H k H H is a right-right Long dimodule, then we find that H ⊗ M is an object of H k H M H k . Further details are left to the reader. H k H
4.8 Corings and descent theory Effective descent morphisms Let i : B → A be a ring homomorphism. It can be verified easily that C = A ⊗B A, with structure maps ∆C : A⊗B A → (A⊗B A)⊗A (A⊗B A) ∼ = A⊗B A⊗B A and εC : A⊗B A → A given by ∆C (a ⊗B b) = (a ⊗B 1) ⊗A (1 ⊗B b) = a ⊗B 1 ⊗B b εC (a ⊗B b) = ab is an A-coring. C is called the canonical coring associated to the ring morphism i. A right C-comodule consists of a right A-module M together with a right A-module map ρM : M → M ⊗A (A ⊗B A) ∼ = M ⊗B A We will use the Sweedler-Heyneman notation ρM (m) = m[0] ⊗B m[1] ∈ M ⊗B A The coassociativity condition and the counit condition then take the form m[0][0] ⊗B m[0][1] ⊗B m[1] = m[0] ⊗B 1 ⊗B m[1]
(4.53)
m[0] m[1] = m
(4.54)
and We have a functor, called the comparison functor K = • ⊗B A : MB → MC where the C-comodule structure on N ⊗B A is the following: ρN ⊗B A (n ⊗B a) = n ⊗B 1 ⊗B a i is called an effective descent morphism if K is an equivalence of categories. In this situation, a right A-module M is isomorphic to N ⊗B A for some right B-module N if and only if we can define a right C-comodule structure on M .
4.8 Corings and descent theory
205
In the situation where A and B are commutative, there is an isomorphism between the category of comodules over the canonical coring, and the category of descent data, as introduced by Knus and Ojanguren in [109]. Recall from [109, II.3.1] that a descent datum consists of a pair (M, g), with M ∈ MA , and g : A ⊗B M → M ⊗B A an A ⊗B A-module homorphism such that g2 = g3 ◦ g1 : A ⊗B A ⊗B M → M ⊗B A ⊗B A
(4.55)
µM (g(1 ⊗B m)) = m
(4.56)
and for all m ∈ M . Here gi is defined by applying IA to the i-th tensor position, and g to the two other ones. A morphism of two descent data (M, g) and (M , g ) consists of an A-module homomorphism f : M → M such that the diagram g A ⊗B M - M ⊗B A IA ⊗B f
f ⊗B IA
? ? g A ⊗B M - M ⊗B A commutes. Desc(A/B) will be the category of descent data. Theorem 55. Let i : B → A be a morphism of commutative rings. We have an isomorphism of categories Desc(A/B) ∼ = MA⊗B A Proof. For a right C-comodule (M, ρM ), we define g : A ⊗B M → M ⊗B A by g(a ⊗B m) = m[0] a ⊗B m[1] It is easy to see that g is an A ⊗B A-module map, and that (g3 ◦ g1 )(a ⊗B b ⊗B m) = g3 (a ⊗B m[0] b ⊗B m[1] = m[0] a ⊗B m[1] b ⊗B m[2] (4.53)
= m[0] a ⊗B b ⊗B m[1] = g2 (a ⊗B b ⊗B m)
From (4.54), it follows that µM (g(1 ⊗B m)) = m[0] εC (1 ⊗B m[1] ) = m[0] m[1] = m and we see that (M, g) is a descent datum. Conversely, if (M, g) is a descent datum, then the map ρM : M → M ⊗B A ; ρM (m) = g(1 ⊗B m) makes M into a right C-comodule. All other details are left to the reader.
206
4 Applications
Corollary 34. Let i : B → A be a morphism of commutative rings. For M ∈ MA and g : A ⊗B M → M ⊗B A, (M, g) is a descent datum if and only if (4.53) holds and g is an isomorphism. Proof. First assume that (M, g) is a descent datum, and let ρM be the associated C-comodule structure on M . For all a ∈ A and m ∈ M , we compute (τ ◦ g ◦ τ ◦ g)(a ⊗ m) = (τ ◦ g)(m[1] ⊗B m[0] a) = τ g(m[1] ⊗B m[0] )(1 ⊗B a) = τ m[0] m[2] ⊗B m[1] a = m[1] a ⊗B m[0] m[2] = a ⊗B m and it follows that τ ◦ g ◦ τ is a (left and right) inverse of g. Conversely, assume that g is bijective. We can still consider the associated map ρM , and we know that ρM satisfies (4.53). We are done if we can show that it also satisfies (4.54). Multiplying the second and third factor in (4.53), we see that m[0] ⊗B m[1] m[2] = m[0] ⊗B m[1] or g(1 ⊗B m[0] m[1] ) = g(1 ⊗B m[0] )m[1] = g(1 ⊗B m) −1
Applying g to both sides, and then multiplying the two tensor factors, we see that m[0] m[1] = m, as needed. In the situation where A and B are not necessarily commutative, descent data have been introduced by Cipolla [59]. Cipolla’s descent data are exactly comodules over the canonical coring. Nuss [149] has proposed alternative descriptions of the category of descent data. The “faithfully flat descent theorem” states that a sufficient condition for a morphism i : B → A to be an effective descent morphism is that A is faithfully flat as a B-module. Proposition 107. Let i : B → A be a morphism of rings. Then the comparison functor K : MB → MC has a right adjoint R. Proof. R is defined as follows R : MC → MB ; R(M ) = M coC = {m ∈ M | ρM (m) = m ⊗B 1} The unit and counit of the adunction are given by ηN : N → (N ⊗B A)coC ; ηN (n) = n ⊗B 1 εM : M coC ⊗B A → M ; εM (m ⊗B a) = ma for all N ∈ MA and M ∈ MC .
4.8 Corings and descent theory
207
Proposition 108. Let i : B → A be a morphism of rings, and assume that A is flat as a left B-module. Then the right adjoint R of K is fully faithful. Proof. M coC is the coequalizer of the maps 0
- M coC
- M ⊗B A
ρM
- M
IM ⊗B i
A is flat as a left B-module, so we have an exact sequence - M coC ⊗B A
0
- M ⊗B A
ρM ⊗B IA
- M ⊗B A ⊗B A
IM ⊗B i⊗B IA
(4.57)
From the coassociativity of ρM , it now follows that ρr (m) ∈ M coC ⊗B A ⊂ M ⊗B A ∼ = M ⊗A (A ⊗B A), for all m ∈ M , and we have a map ρM : M → M coC ⊗B A. From the counit property, it follows that εM ◦ ρM = IM . For m ∈ M coC and a ∈ A, we have ρM (εM (m ⊗B a)) = ρM (ma) = ρM (m)a = (m ⊗B 1)a = m ⊗B a Thus the counit εM has an inverse, for all M , and R is fully faithful. Proposition 109. Let i : B → A be a morphism of rings, and assume that A is faithfully flat as a left B-module. Then i is an effective descent morphism. Proof. We allready know that K has a fully faithful right adjoint R. It remains to be shown that K itself is fully faithful, i.e. ηN is an isomorphism, for all N ∈ MB . We first show that the sequence 0
- N ⊗B A
ρN ⊗B A ⊗B IA
- N ⊗B A ⊗B A ⊗B A
IN ⊗B i⊗B IA
- N ⊗B A ⊗B A
IN ⊗B A ⊗B i⊗B IA
is exact. Indeed, if i ni ⊗B ai ⊗B bi ∈ Ker (ρN ⊗B A ⊗B IA −IN ⊗B A ⊗B i⊗B IA ), then ni ⊗B ai ⊗B 1 ⊗B bi = ni ⊗B 1 ⊗B ai ⊗B bi i
i
Multiplying the third and fourth tensor factor, we find ni ⊗B ai ⊗B bi = ni ⊗B 1 ⊗B ai bi ∈ Im (IN ⊗B i ⊗B IA ) i
i
Now A/B is faithfully flat, and we find that the sequence 0
- N
IN ⊗B i
- N ⊗B A
ρN ⊗B A
- N ⊗B A ⊗B A
IN ⊗B A ⊗B i
is exact, and this means exactly that ηN = IN ⊗B i : N → (N ⊗B A)coC is an isomorphism.
208
4 Applications
It is somewhat surprising that the converse of Proposition 109 does not hold. Let B be a commutative ring. A morphism f : M → M of B-modules is called pure if for any B-module N the morphism f ⊗B IN : M ⊗B N → M ⊗B N is monic. The following result is due to Joyal and Tierney (unpublished). For detail, we refer the reader to [132] and [97]. Theorem 56. Let i : B → A be a morphism of commutative rings. The following assertions are equivalent: 1. i is an effective descent morphism; 2. K is fully faithful; 3. i is pure as a morphism of B-modules. Galois type corings As before let i : B → A be a morphism of rings, and let C = A ⊗B A be the canonical coring. We also consider a second A-coring D. Homcoring (C, D) will be the k-module consisting of all A-coring maps from C to D. Also G(D) = {d ∈ D | ∆D (d) = d ⊗A d and εD (d) = 1A } is the set of grouplike elements of D. Clearly 1A ⊗B 1A ∈ G(C). Proposition 110. With notation as above, we have an isomorphism of sets Homcoring (C, D) ∼ = G(D)B = {d ∈ G(D) | bd = db, for all b ∈ B} Proof. Take a coring homomorphism can : C → D. Then d = can(1 ⊗B 1) ∈ G(D)B . can is completely determined by d, because can is an A-bimodule map: can(a ⊗B b) = acan(1 ⊗B 1)b = adb (4.58) For d ∈ G(D), we define can : C → D using (4.58). Then can is an A-bimodule map, and a coring morphism since ∆D (can(a ⊗B b)) = ∆D (adb) = ad ⊗A db = (can ⊗A can)((a ⊗B 1) ⊗A (1 ⊗B b)) and εD (can(a ⊗B b)) = εD (adb) = ab = εC (a ⊗B b) Proposition 111. Let D be an A-coring. Then G(D) ∼ = {ρ : A → A ⊗A D ∼ = D | ρ makes A into a D-comodule}
4.8 Corings and descent theory
209
Proof. For d ∈ G(D), we define ρ : A → D by ρ(a) = 1 ⊗A da = da. It is then clear that ρ is right A-linear. (A, ρ) is a right D-comodule since (ρ ⊗A ID )ρ(a) = 1 ⊗A d ⊗A da = 1 ⊗A ∆D (d)a = (IA ⊗A ∆D )ρ(a) and (IA ⊗A εD )ρ(a) = 1 ⊗A εD (da) = 1 ⊗A a = a Conversely, if A is a D-comodule, then ρ(1A ) = d is grouplike. Corollary 35. An A-coring D is isomorphic to the canonical coring C if and only if there exists a grouplike element d ∈ G(D)B such that the map can : C → D, can(a ⊗B b) = adb is bijective, Take d ∈ G(D), and let ρ be the corresponding D-comodule structure on A. We define AcoD = {a ∈ A | ρ(a) = a ⊗A d = ad} = {a ∈ A | ad = da} For D equal to the canonical coring C and d = 1 ⊗B 1, we find AcoC = {a ∈ A | a ⊗B 1 = 1 ⊗B a} Lemma 24. Let D be an A-coring, d ∈ G(D)B , and ρ the corresponding D-comodule structure on A. Then i(B) ⊂ AcoC ⊂ AcoD . If C and D are isomorphic as corings, then AcoC = AcoD . Proof. i(B) ⊂ A is clear. If a ∈ AcoC , then a ⊗B 1 = 1 ⊗B a, and consequently ad = can(a ⊗B 1) = can(1 ⊗B a) = da If can is bijective, then the converse also holds. Let C be the canonical coring associated to a ring morphism i : B → A. An A-coring is said to be of Galois type if there exists a grouplike d ∈ G(D)B such that the corresponding map can : C → D is an isomorphism, and B∼ = i(B) = AcoC = AcoD Now assume that i : B → A is a morphism of k-algebras, and that (A, C, ψ) is a right-right entwining structure. As we have seen in Section 2.7, we can associate an A-coring D = A ⊗ C to it. Assume that this coring is isomorphic to the canonical coring C: C = A ⊗B A ∼ =D =A⊗C We will write
210
4 Applications
d=
ai ⊗ ci ∈ G(A ⊗ C)B
i
From Proposition 111, we know that A is a right D-comodule, and therefore an entwined module (see Theorem 17), and a fortiori a right C-comodule. The right C-coaction is ρ(a) = a[0] ⊗ a[1] = da = ai aψ ⊗ cψ i i
and can can be rewritten in terms of the coaction: can(a ⊗B b) = adb = ab[0] ⊗ b[1] and we conclude that 1. A is a right C-comodule; 2. can : A ⊗B A → A ⊗ C, can(a ⊗B b) = ab[0] ⊗ b[1] , is an isomorphism; 3. for all b ∈ B: ρ(i(b)) = i(b)ρ(1). Conversely, assume that i : B → A is a morphism of k-algebras, and that C is a k-coalgebra such that the three above conditions hold. can is bijective, so the coring structure on A ⊗B A induces a coring structure on A ⊗ C. We will show that this coring structure comes from an entwining structure (A, C, ψ). To this end, we apply Theorem 16. We have to verify (2.87-2.89). It is clear that the natural left A-module structure on A ⊗ C makes can into a left A-linear map, so (2.87) holds. The right A-module structure on A ⊗ C induced by can is given by (b ⊗ c)a = can(can−1 (b ⊗ c)a) Since can−1 (1[0] ⊗ 1[1] ) = 1 ⊗B 1, we have (1[0] ⊗ 1[1] )a = can(1 ⊗ a) = a[0] ⊗ a[1]
(4.59)
The comultiplication ∆ on A ⊗ C is given by ∆(a ⊗ c) = (can ⊗A can)∆C (can−1 (a ⊗ c)) ∈ (A ⊗ C) ⊗A (A ⊗ C), for all a ∈ A and c ∈ C. can is bijective, so we can find ai , bi ∈ A such that can( ai ⊗B bi ) = ai bi[0] ⊗B bi[1] = a ⊗ c i
i
and we compute that
∆(a ⊗ c) = (can ⊗A can)∆C ( =
i
ai ⊗B bi )
i
can(ai ⊗B 1) ⊗A can(1 ⊗B bi )
4.8 Corings and descent theory
=
211
(ai 1[0] ⊗ 1[1] ) ⊗A (bi[0] ⊗ bi[1] )
i
=
(ai 1[0] ⊗ 1[1] )bi[0] ⊗A (1 ⊗ bi[1] ) i
(4.59)
(ai bi[0] ⊗ bi[1] ) ⊗A (1 ⊗ bi[2] ) = i
= (a ⊗ c(1) ) ⊗A (1 ⊗ c(2) ) proving (2.88). (2.89) can be proved as follows: ai ⊗ bi ) = ai bi ε(a ⊗ c) = εC ( =
i
i
ai bi[0] εC (bi[1] ) = aεC (c)
i
Coalgebra Galois extensions have been introduced in [30] (see also [25] and [32] ). Let i : B → A be a morphism of k-algebras, and C a k-coalgebra. A is called a C-Galois extension of B if the following conditions hold: 1. A is a right C-comodule; 2. can : A ⊗B A → A ⊗ C, can(a ⊗B a ) = aa[0] ⊗B a[1] is an isomorphism; 3. B = {a ∈ A | ρ(a) = aρ(1)}. Collecting the arguments above, we find Proposition 112. Let i : A → B be a morphism of k-algebras, and C a k-coalgebra. A is C-Galois extension of B for some right C-coaction on A if and only if there exists a right-right entwining structure (A, C, ψ) such that A ⊗ C is an A-coring of Galois type. Now consider the special case where C = H is a bialgebra, A is a right H-comodule algebra, d = 1A ⊗ 1H and ψ : H ⊗ A → A ⊗ H, ψ(h ⊗ a) = a[0] ⊗ ha[1] i.e. (A, C, ψ) comes from a Doi-Koppinen datum (H, A, H). Now can : A ⊗B A → A ⊗ H, can(a ⊗B a ) = aa[0] ⊗B a[1] and AcoH = {a ∈ A | ρ(a) = a ⊗ 1}. A is then an H-Galois extension of B if and only if can is an isomorphism, and AcoH = B, which means that A is a Hopf-Galois extension of B in the sense of Section 4.2. Corings and comonads Let D be a category. A comonad on D is a threetuple T = (T, ε, ∆), where T : D → D is a functor, and ε : T → 1D and ∆ : T → T ◦ T are natural transformations, such that T (∆M ) ◦ ∆M = ∆T (M ) ◦ ∆M
212
4 Applications
and T (εM ) ◦ ∆M = εT (M ) ◦ ∆M = IT (M ) for all M ∈ D. A morphism between two D-comonads T = (T, ε, ∆) and T = (T , ε , ∆ ) consists of a natural transformation α : T → T such that ε ◦ α = ε and (α ∗ α) ◦ ∆ = ∆ ◦ α Here ∗ is the Godement product: (α ∗ α)M = αT (M ) ◦ T (αM ) for all M ∈ D. Comonad(D) will be the category of comonads on D. For T ∈ Comonad(D), a T-coalgebra is a pair (M, ξ), with M ∈ D, and ξ : M → T (M ) a morphism in D such that εM ◦ ξ = IM and ∆M ◦ ξ = T (ξ) ◦ ξ A morphism between (M, ξ) and (M ξ ) consists of a morphism f : M → M in D such that T (f ) ◦ ξ = ξ ◦ f The category of T-coalgebras is denoted by DT . Monads and algebras over a monad are defined in a similar way, in fact a monad on a category is a comonad on the dual category. Now let F : C → D be a functor having a right adjoint G, and denote the unit and counit of the adjunction by η : 1C → GF and ε : F G → 1D We can associate a monad on C and a comonad on D to this adjunction. The comonad on D can be described as follows: T = F G : D → D ; ∆ = 1F ∗ η ∗ 1G : F G → F GF G i.e. ∆M = F (ηG(M ) ), and ε is the counit of the adjunction. Monads and comonads are the right tools to develop categorical descent theory, see [22, Ch. 4] or [123, Ch. 6] for a detailed discussion. Let us explain how comonads are related to corings. Proposition 113. For a ring A, we have a full and faithful functor i : A-Coring → Comonad(MA ) If B → A is a morphism of rings, then the comonad on MA associated to the restriction of scalars functor and its adjoint corresponds to the canonical coring A ⊗B A. For any A-coring C, we have an isomorphism of categories i(C)
MA
∼ = MC
4.8 Corings and descent theory
213
Proof. An A-coring C is an A-bimodule, so we have a functor T : • ⊗A C : MA → MA . We define i(C) = (T, ε, ∆) with εM = IM ⊗A εC : T (M ) = M ⊗A C → M ∆M = IM ⊗A ∆C : T (M ) = M ⊗A C → T (T (M )) = M ⊗A C ⊗A C for all M ∈ MA . It is straightforward to verify that (T, ε, ∆) is a comonad, and all other verifications are left to the reader.
5 Yetter-Drinfeld modules and the quantum Yang-Baxter equation
he study of the quantum Yang-Baxter equation (QYBE) R12 R13 R23 = R23 R13 R12 was one of the stimuli for the development of the theory of quantum groups ([108]). We will prove that special types of Hopf algebras (quasitriangular or co-quasitriangular) play a major role in solving this equation. The main result of this Chapter is the famous FRT theorem due to Faddeev, Reshetikhin, and Takhtajan ([85]). The alternative version of the FRT theorem (using Yetter-Drinfeld modules) proven by Radford in [155] is included, and will be the key in the unification schedule as described in the Preface. On the other hand, some recent results and new directions for studying the QYBE (for instance at set-theoretical level) are also included.
5.1 Notation In this Section, we introduce some notation that will be used in the subsequent Chapters. Throughout part II, k will be a commutative field and vector spaces are taken over k. Let A be a k-algebra, and R ∈ A ⊗ A. We write R = R1 ⊗ R2 where the summation is implicitly understood. We write R12 = R1 ⊗ R2 ⊗ 1A , R13 = R1 ⊗ 1A ⊗ R2 , R23 = 1A ⊗ R1 ⊗ R2 ∈ A⊗3 We will use this in particular in the situation where A = Endk (M ), with M a finite dimensional vector space. For R ∈ Endk (M )⊗Endk (M ) ∼ = Endk (M ⊗2 ), 12 13 23 ⊗3 we then obtain R , R , R ∈ Endk (M ). If M is infinite dimensional, then Endk (M ) ⊗ Endk (M ) is no longer isomorphic to Endk (M ⊗2 ), but the above notation still makes sense for R ∈ Endk (M ⊗2 ). Let M be finite dimensional, fix a basis {m1 , · · · , mn } of M , and {p1 , · · · , pn } the corresponding dual basis of the dual module M ∗ , such that pi , mj = δji
(5.1)
for all i, j ∈ {1, · · · , n}. δji is the Kronecker symbol. Then {eij = pi ⊗mj | i, j = 1, · · · , n} and {cij = mj ⊗ pi | i, j = 1, · · · , n} are free bases for respectively
S. Caenepeel, G. Militaru, and S. Zhu: LNM 1787, pp. 217–243, 2002. c Springer-Verlag Berlin Heidelberg 2002
218
5 The quantum Yang-Baxter equation
Endk (M ) ∼ = M ∗ ⊗M and Endk (M ∗ ) ∼ = M ⊗M ∗ . The isomorphisms are given by the rules eij (mk ) = δki mj and cij (pk ) = δjk pi (5.2) Endk (M ) ∼ = M ∗ ⊗ M is isomorphic to the n × n-matrix algebra Mn (k), and the algebra structure M ∗ ⊗ M is given by the rules eij ekl
=
δli ekj
and
n
eii = 1
(5.3)
i=1
Endk (M ∗ ) ∼ = M ⊗ M ∗ is isomorphic to the n × n-comatrix coalgebra Mn (k), and the comultiplication on M ⊗ M ∗ is given by cik ⊗ ckj and ε(cij ) = δji (5.4) ∆(cij ) = k
We obtain the matrix algebra Mn (k) and the comatrix coalgebra Mn (k) after we take M = k n and the canonical basis. eij is then identified with the elementary matrix having 1 in the (j, i)-position and 0 elsewhere. A linear map R : M ⊗ M → M ⊗ M can be described by its matrix X, with n4 entries xij uv ∈ k, where i, j, u, v range from 1 to n. This means ij R(mk ⊗ ml ) = xkl mi ⊗ mj (5.5) ij
or R=
k l xij kl ei ⊗ ej
(5.6)
ijkl
We will often identify R and its matrix, i.e., we will write R = xij kl Also we will use the Einstein summation convention. In a summation, all indices run from 1 to n. If in an expression an index occurs twice, namely once as an upper index and once as a lower index, then it is understood implicitly that we take the sum where this index runs from 1 to n. Indices that occur only once are not summation indices, and an index is not allowed to occur more than twice in one expression. For example, (5.6) is rewritten as k l R = xij kl ei ⊗ ej
5.2 The quantum Yang-Baxter equation and the braid equation Definition 5. Let M be a vector space and R ∈ Endk (M ⊗ M ).
5.2 The quantum Yang-Baxter equation and the braid equation
219
1. R is called a solution of the quantum Yang-Baxter equation (QYBE) if R12 R13 R23 = R23 R13 R12
(5.7)
in Endk (M ⊗3 ). 2. R is a solution of the braid equation (BE) if R12 R23 R12 = R23 R12 R23
(5.8)
in Endk (M ⊗3 ). Some authors call (5.8) the quantum Yang-Baxter equation. We will see in Proposition 114 that the two equations are equivalent. Both can be equations stated in a more general context: let A be a k-algebra. R = R1 ⊗R2 ∈ A⊗A is a solution of the quantum Yang-Baxter equation (resp. the braid equation) if the equation (5.7) (resp. (5.8)) holds in A ⊗ A ⊗ A. Proposition 114. Let R ∈ Endk (M ⊗ M ). The following statements are equivalent: 1. 2. 3. 4.
R is a solution of the BE; Rτ is a solution of the QYBE; τ R is a solution of the QYBE; τ Rτ is a solution of the BE.
Proof. 1. ⇔ 2. Let T = Rτ , and observe that T 12 T 13 T 23 = τ 13 R23 R12 R23 ,
T 23 T 13 T 12 = τ 13 R12 R23 R12
The equivalence of 1. and 2. now follows from the fact that τ 13 is an automorphism of M ⊗M ⊗M . 1. ⇔ 3. can be proved in a similar way, and 1. ⇔ 4. follows from the equivalence of 1. and 2. and the fact that τ R = τ Rτ τ . If M is finite dimensional, then we can write the QYBE in matrix form. Using the notation introduced in Section 5.1, we find ∈ Proposition 115. Let M be a n-dimensional vector space. R = xij kl Endk (M ⊗ M ) is a solution of the QYBE if and only if pi rj ij pu rv xvu lk xrv xgu = xvu xrk xql
(5.9)
for all i, j, k, l, p, q = 1, · · · , n. Proof. A direct computation that is left to the reader. Observe that (5.9) is a system of n6 simultaneous equations in n4 indeterminates. We will now give some standard solutions of the QYBE. In the finite dimensional case, we keep on using the notation introduced in Section 5.1.
220
5 The quantum Yang-Baxter equation
Examples 8. 1. The identity map IM ⊗M and the switch map τM are solutions of both QYBE and BE. More generally, if {aij | i, j = 1, · · · , n} is a family of scalars and dim(M ) = n, then R : M ⊗ M → M ⊗ M,
R(vi ⊗ vj ) = aij vj ⊗ vi
is a solution of the QYBE. 2. Suppose that R ∈ Endk (M ⊗ M ) is bijective. Then R is a solution of the QYBE if and only if R−1 is a solution of the QYBE. 3. Let M be a finite dimensional vector space and u an automorphism of M . If R is a solution of the QYBE then u R = (u ⊗ u)R(u ⊗ u)−1 is also a solution of the QYBE. 4. Let f , g ∈ Endk (V ) such that f g = gf . Then R = f ⊗ g is a solution of the QYBE since R12 R13 R23 = f 2 ⊗ gf ⊗ g 2 and R23 R13 R12 = f 2 ⊗ f g ⊗ g 2 . For example, take dim(M ) = 2, and let f , g ∈ Endk (M ) be given by their matrices with respect to the fixed basis {m1 , m2 }: a 1 b c f= g= 0 a 0 b where a, b, c ∈ k. The matrix of R = f ⊗ g with respect to the basis {m1 ⊗ m1 , m1 ⊗ m2 , m2 ⊗ m1 , m2 ⊗ m2 } is
ab
0 R= 0 0
ac
b
c
ab
0
0
ab
b ac
0
0
ab
(5.10)
As f g = gf , R is a solution for the QYBE. 5. Let a, b two non-zero scalars of k. Then
a
0 R= 0 0
b
0
0
−a
2a
0
a
0 0
0
0
a
is a solution of the QYBE. 6. Let q be a scalar of k, q = 0, and M a n-dimensional vector space. Then Rq : M ⊗ M → M ⊗ M given by
5.2 The quantum Yang-Baxter equation and the braid equation
qvi ⊗ vi Rq (vi ⊗ vj ) = vi ⊗ vj vi ⊗ vj + (q − q −1 )vj ⊗ vi
221
if i = j if i < j
(5.11)
if i < j
is a solution of the QYBE. Rq is called the classical Yang-Baxter operator on M . 7. Let A be a k-algebra acting on M . Solutions of QYBE on M can be constructed in a natural way from solutions of the QYBE in A⊗3 : if R = R1 ⊗ R2 ∈ A ⊗ A, then R = R(R,M,·) : M ⊗ M → M ⊗ M,
R(v ⊗ w) = R · (v ⊗ w) = R1 · v ⊗ R2 · w
is a solution of the QYBE in End(M ⊗ M ⊗ M ) since R12 R13 R23 (x ⊗ y ⊗ z) = R12 R13 R23 · (x ⊗ y ⊗ z) = R23 R13 R12 · (x ⊗ y ⊗ z) = R23 R13 R12 (x ⊗ y ⊗ z) for all x, y, z ∈ V . Of course we can take A = Mn (k). A acts on M = k n , and then we obtain the previous identification of solutions on M ⊗ M and in A ⊗ A. We give a particular example. For a fixed index i ∈ {1, · · · , n}, Ri =
n
eij ⊗ eji
j=1
is a solution of the braid equation in Mn (k) ⊗ Mn (k) ⊗ Mn (k). V = M n is a left Mn (k)-module via f · v = f (v), for all f ∈ Endk (M ) and v ∈ M . It follows that Ri : M ⊗ M → M ⊗ M,
Ri (v ⊗ w) =
n
eij · v ⊗ eji · w
j=1
is a solution of the braid equation and hence τ Ri and Ri τ are solution of the QYBE. 8. The above example can be generalized as follows. Let A be an algebra and R = R1 ⊗ R2 ∈ A ⊗ A be an A-central element, that is aR1 ⊗ R2 = R1 ⊗ R2 a for all a ∈ A. This means that R ∈ W1 in the terminology of Section 3.2. We will prove in Proposition 144 that R is a solution of the FS-equation R12 R23 = R23 R13 = R13 R12 in A⊗A⊗A and a fortiori of the braid equation. We have seen in Theorem 27 that any separable or Frobenius algebra gives us an R ∈ W1 , and therefore a
222
5 The quantum Yang-Baxter equation
solution of the braid equation. We will come back to this in Chapter 8. 9. If A is an algebra, then we have the following interesting solution of the braid equation (cf. [148], [149]): R = R(A,·,1A ) : A ⊗ A → A ⊗ A,
R(x ⊗ y) = xy ⊗ 1 + 1 ⊗ xy − x ⊗ y (5.12)
for all x, y ∈ A, is a solution of the braid equation and hence Rτ is a solution of QYBE. A generalization of this solution was recently given in [64]: let A be a k-algebra of dimension greater than one and a, b, c ∈ k. Then R = Ra,b,c : A ⊗ A → A ⊗ A,
R(x ⊗ y) = axy ⊗ 1 + b1 ⊗ xy − cx ⊗ y (5.13)
is a bijective solution of the braid equation if and only if one of the following conditions holds: 1. a = c = 0 and b = 0 2. b = c = 0 and a = 0 3. a = b = 0 and c = 0 −1 = Rb−1 ,a−1 ,c−1 (in case 1) and 2)), and The inverse of Ra,b,c is given by Ra,b,c −1 R0,0,c = R0,0,c−1 (in case 3)). Indeed, for x, y, z ∈ A, we find that
R12 R23 R12 (x ⊗ y ⊗ z) = R23 R12 R23 (x ⊗ y ⊗ z) if and only if a(a − c)(b − c)(x ⊗ yz ⊗ 1 − xy ⊗ 1 ⊗ z) = b(a − c)(b − c)(1 ⊗ xy ⊗ z − x ⊗ 1 ⊗ yz) and the “if part” follows. It remains to prove the “only if” part. As dimk (A) > 1, we can take y = 1 and x, z ∈ A\k1A . Then the last equation is equivalent to the following two equations a(a − c)(b − c) = 0,
b(a − c)(b − c) = 0
and the conclusion follows. Dually, solutions of the QYBE can be constructed using a coalgebra structure on the vector space M . Let C be a coalgebra. Then R : C ⊗ C → C ⊗ C,
R(c ⊗ d) = ε(c)d(1) ⊗ d(2) + ε(d)c(1) ⊗ c(2) − d ⊗ c
for all c, d ∈ C is a bijective solution of the QYBE. 10. Let H be a Hopf algebra with a bijective antipode. Then the maps
and
R : H ⊗ H → H ⊗ H,
R(g ⊗ h) = h(3) gS −1 (h(2) ) ⊗ h(1)
R : H ⊗ H → H ⊗ H,
R(g ⊗ h) = h(3) S −1 (h(1) )g ⊗ h(2)
for all g, h ∈ H are solutions of the QYBE. This can be proved by a long but routine computation but a more efficient approach is to use Theorem 59 for the Verma Yetter-Drinfeld structures given in Remark 15.
5.2 The quantum Yang-Baxter equation and the braid equation
223
We have seen that for an algebra A the map R(A,·,1A ) : A ⊗ A → A ⊗ A,
R(A,·,1A ) (x ⊗ y) = xy ⊗ 1 + 1 ⊗ xy − x ⊗ y (5.14)
is a solution of the braid equation; moreover R(A,·,1A ) is involutory, i.e. 2 R(A,·,1 = IA⊗A . In Theorem 57 we will describe all involutory solutions A) of the braid equation that can be obtained in this way. Theorem 57. (Nichita [148]) Let k be a field such that char(k) = 2, M an n-dimensional vector space and R ∈ End(M ⊗ M ) an involutory solution of the braid equation. The following statements are equivalent: 1. there exists an algebra structure (M, ·, 1M ) on M such that R = R(M,·,1M ) ; 2. the following two conditions holds: a. rankk (R + Id) ≤ n; b. there exists 0 = x0 ∈ M such that R(x0 ⊗ x) = x ⊗ x0 for all x ∈ M . Proof. 2. ⇒ 1. As R is involutory it follows from 2b) that R(x ⊗ x0 ) = x0 ⊗ x
(5.15)
for all x ∈ M . Hence, for x ∈ M (R + Id)(x0 ⊗ x) = x ⊗ x0 + x0 ⊗ x
(5.16)
Consider the n-dimensional vector space W = {x ⊗ x0 + x0 ⊗ x | x ∈ M }. Then W ⊆ (R + Id)(x0 ⊗ M ) ⊆ (R + Id)(M ⊗ M ) It follows from here and 2a) that (R + Id)(M ⊗ M ) = W. Hence there exists a k-linear map mM : M ⊗ M → M (this will be the multiplication on M ) such that (R + Id)(v ⊗ w) = mM (v ⊗ w) ⊗ x0 + x0 ⊗ mM (v ⊗ w)
(5.17)
for all v, w ∈ M . We denote mM (v ⊗ w) = vw. From (5.17) we obtain that R = R(M,mM ,1M =x0 ) . We still have to prove that (M, mM , 1M = x0 ) is an algebra structure on M . Using (5.16) and (5.17) we obtain that (x0 x) ⊗ x0 + x0 ⊗ (x0 x) = x ⊗ x0 + x0 ⊗ x for all x ∈ M , and x0 x = x for any x ∈ M . In a similar way, we obtain that xx0 = x for all x ∈ M , using (5.15) and (5.17), and it follows that x0 is a unit of M . The associativity of mM will follow from the braid condition. For a, b, c ∈ M the braid equation
224
5 The quantum Yang-Baxter equation
R12 R23 R12 (a ⊗ b ⊗ c) = R23 R12 R23 (a ⊗ b ⊗ c) takes the form 2(ab)c ⊗ x0 ⊗ x0 + 2x0 ⊗ (ab)c ⊗ x0 + 2x0 ⊗ x0 ⊗ (ab)c = 2a(bc) ⊗ x0 ⊗ x0 + 2x0 ⊗ a(bc) ⊗ x0 + 2x0 ⊗ x0 ⊗ a(bc) As char(k) = 2 we obtain that (ab)c = a(bc), hence (M, mM , 1M = x0 ) is an algebra structure on M . The proof of the other implication 1. ⇒ 2. is straightforward. Let A be a k-algebra and R ∈ A⊗A a solution of the QYBE. Let F ∈ A⊗A be ˜ = τ (F −1 )RF , where τ switches the two tensor components. invertible and R If F = u ⊗ u, where u is an invertible element of A, then τ (F −1 )RF = (u−1 ⊗ u−1 )R(u ⊗ u) is again a solution of the QYBE, but for general F this ˜ is a solution of the QYBE then we call it a twist of is no longer true. If R ˜ remains a R. To find out a necessary and sufficient condition such that R solution of QYBE is not easy. We give a sufficient condition in Theorem 58. We will use the following notation: for x = x1 ⊗ x2 ⊗ x3 ∈ A ⊗ A ⊗ A, we write x123 = x, x231 = x2 ⊗ x3 ⊗ x1 , etc. Theorem 58. (Kulish, Mudrov [112]) Let A be a k-algebra, R ∈ A ⊗ A an invertible solution of the QYBE in A ⊗ A ⊗ A and F an invertible element of A ⊗ A. Assume that there exists invertible elements U , V ∈ A ⊗ A ⊗ A such that U F 12 = V F 23 R12 U = U 213 R12
(5.18)
R23 V = V 132 R23
(5.20)
(5.19)
˜ := τ (F −1 )RF is a solution on the QYBE. Then R Proof. In the computations presented below, bars are used to denote the inverse, for example F = F −1 . Using (5.18), (5.19) and (5.20) we find 12
F V
312
R13 R23 U 123 F 12 12
312
12
312
=F V
R13 R23 V 123 F 23
12 312 ˜ 23 = F V R13 V 132 R23 F 23 = F V R13 V 132 F 32 R 12 312 13 132 13 23 12 312 312 13 13 23 ˜ =F V U R F R ˜ =F V R U F R 12 312 312 31 13 23 12 312 312 12 13 23 ˜ R ˜ =F V V F R ˜ R ˜ =F V U F R 13 ˜ 23 ˜ =R R
Using this formula we obtain
5.3 Hopf algebras versus the QYBE
225
˜ 12 )−1 ˜ 12 R ˜ 13 R ˜ 23 (R R 21
12
= F R12 F 12 F V 21
= F R12 V 21
312
312
12
12
R13 R23 U 123 F 12 F R F 21 12
21
R13 R23 U 123 R F 21 = F V
321
12
= F V R23 R13 U 213 F 21 = τ 12 (F V ˜ 13 R ˜ 23 R ˜ 23 ) = R ˜ 13 = τ 12 (R
312
321
12
R12 R13 R23 R U 213 F 21
R13 R23 U 123 F 12 )
˜ is a solution of the QYBE. and R Remarks 9. 1. In fact, the pair (U, V ) satisfying conditions (5.18-5.20) is determined by an invertible G ∈ A ⊗ A ⊗ A such that 21
R12 G = G213 F R12 F 12 and
32
R23 G = G132 F R23 F 23 . 12
23
Indeed, let us denote G = U F 12 = V F 23 . Then U = GF , V = GF and the equations (5.19)-(5.20) are written in terms of G as above. 2. An important example of pairs (F, G) can be obtained using Drinfeld’s construction [76]. Let H be a Hopf algebra and F ∈ H ⊗ H an invertible element. Now, we deform the comultiplication on H using F and define ∆F (h) := F −1 ∆(h)F , for all h ∈ H. Drinfeld proved that if F satisfy the twist equation (∆ ⊗ Id)(F )F 12 = (Id ⊗ ∆)(F )F 23 then HF , which is equal to H as an algebra, and with comultiplication ∆F , is a Hopf algebra. Now it is easy to see that (F, G := (∆ ⊗ Id)(F )F 12 ) satisfy the conditions of Theorem 58. Consequently, if F is a twist in the sense of Drinfeld and ˜ = τ (F −1 )RF is a solution on R ∈ H ⊗ H is a solution of the QYBE, then R the QYBE. A method of constructing twists for H = kG, where G is a nonabelian group was given in [81].
5.3 Hopf algebras versus the QYBE In this Section, we discuss how we can construct solutions of the QYBE starting from a Hopf algebra. Quasitriangular bialgebras Definition 6. A quasitriangular bialgebra (resp. Hopf algebra) is a pair (H, R), where H is a biagebra (resp. a Hopf algebra) and R = R1 ⊗ R2 ∈ H ⊗ H such that (QT 1) ∆(R1 ) ⊗ R2 = R13 R23 ;
226
(QT 2) (QT 3) (QT 4) (QT 5)
5 The quantum Yang-Baxter equation
ε(R1 )R2 = 1; R1 ⊗ ∆(R2 ) = R13 R12 ; R1 ε(R2 ) = 1; ∆cop (h)R = R∆(h), for all h ∈ H.
Remarks 10. 1. If (H, R) is a quasitriangular Hopf algebra, then R is invertible and R−1 = S(R1 ) ⊗ R2 . In this case, the condition (QT5) tell us that ∆cop is a twist of ∆ in the sense of Drinfeld. A quasitriangular Hopf algebra (H, R) is called triangular if R−1 = τ (R). 2. Let H be a finite dimensional bialgebra and R = R1 ⊗ R2 ∈ H ⊗ H. We define λR : H ∗ → H cop , λR (f ) = f, R1 R2 for all f ∈ H ∗ . Then R satisfies (QT1)-(QT4) if and only if λR is a bialgebra map. 3. Let (H, R) be a quasitriangular Hopf algebra and M , N left H-modules. Then the map σM,N : M ⊗ N → N ⊗ M,
σM,N (m ⊗ n) = R2 n ⊗ R1 m
for all m ∈ M , n ∈ N , is an isomorphism of left H-modules. The inverse of it is given by −1 σM,N : N ⊗ M → M ⊗ N,
−1 σM,N (n ⊗ m) = S(R1 )m ⊗ R2 n
for all m ∈ M , n ∈ N . Examples 9. 1. Let H be a cocommutative Hopf algebra. Then (H, R = 1⊗1) is a quasitriangular Hopf algebra. Conversely, assume that (H, R) is a quasitriangular Hopf algebra such that R ∈ Im(∆). Then H is cocommutative. Indeed, let h ∈ H such that R = ∆(h). Using (QT1) we obtain (h = h): h(1) ⊗ h(2) ⊗ h(3) = h(1) ⊗ h(1) ⊗ h(2) h(2) and
h = h(1) h(2) S(h(3) ) = h(1) h(1) S(h(2) h(2) ) = ε(h)ε(h )1H .
and we conclude that h = a1H for some a ∈ k. From (QT2), we get that a = 1 and therefore, R = ∆(1) = 1 ⊗ 1. Now, (QT5) gives us ∆cop = ∆, i.e. H is cocommutative. 2. The most important example of quasitriangular Hopf algebra is probably the Drinfeld double D(H) of a finite dimensional Hopf algebra. We have discussed the Drinfeld double already in Section 4.4. Here we will use the representation D(H) = H H ∗ . The bialgebra structure is then given by the formulas
5.3 Hopf algebras versus the QYBE
(h f )(h f ) =
227
h(2) h f ∗ f , S −1 h(3) ?h(1)
∆D(H) (h f ) = (h(1) f(2) ) ⊗ (h(2) f(1) ) εD(H) = εH ⊗ εH ∗ cop
for h, h ∈ H and f, f ∈ H ∗ . Let {ei , e∗i } be a finite dual basis of H and R be the canonical element R= (ei ε) ⊗ (1 e∗i ). i
Then, by a routine computation using the multiplication rule of D(H) and the fact that {ei , e∗i } is a dual basis, we can prove that (D(H), R) is a quasitriangular Hopf algebra ([125]). Moreover, as H → D(H), h → h ε, is a Hopf algebra map we conclude that any finite dimensional Hopf algebra can be embeded into a quasitriangular one. Now, using Proposition 118 we obtain that the above canonical element R is a solution of the QYBE in D(H) ⊗ D(H) ⊗ D(H). Sometimes it is possible to describe all quasitriangular structures on a given Hopf algebra. Consider the Hopf algebra E(n) over a field k with characteristic different from 2, with generators g and x1 , · · · , xn and relations g 2 = 1,
x2i = 0,
xi g = −gxi ,
xi xj = −xj xi
and the coalgebra structure is given by ∆(g) = g ⊗ g,
∆(xi ) = xi ⊗ g + 1 ⊗ xi ,
ε(g) = 1,
ε(xi ) = 0
for all i, j = 1, · · · n. This Hopf algebra can be obtained using the Ore extension method, as described in [13], and plays a role in the classification theory of pointed Hopf algebras (cf. [36]). For n = 1, we recover Sweedler’s four dimensional Hopf algebra, which was the first example of a noncommutative noncocommutative Hopf algebra. The following result is due to Panaite and Van Oystaeyen [150]. Proposition 116. Let k be a field of characteristic different from 2. There is a bijective correspondence between all quasitriangular structures on E(n) and the n × n-matrices with entries in k. Proof. We refer to [150] for full detail, and restrict ourselves to giving the description of the quasitriangular structure RA corresponding to a given A ∈ Mn (k). For two subsets P and F of {1, · · · , n} with the same cardinality, we consider the matrix AP,F = (aij )i∈P,j∈F
228
5 The quantum Yang-Baxter equation
If P = {i1 , · · · , is } ⊂ {1, · · · , n} such that i1 < i2 < · · · < is , we denote xP = xi1 · · · xis . For a matrix A ∈ Mn (k) let |P |(|P |−1) 2 (−1) det(AP,F )× RA = 2−1 1 ⊗ 1 + g ⊗ 1 + 1 ⊗ g − g ⊗ g +2−1 |P |=|F |
× xP ⊗ g |P | xF + gxP ⊗ g |P | xF + xP ⊗ g |P |+1 xF − gxP ⊗ g |P |+1 xF Then, taking into account that {g j xP |P ⊂ {1, · · · , n}, j = 0, 1} is a basis of E(n), we can prove that (E(n), RA ) is a quasitriangular Hopf algebra. As we observed above, E(1) = H4 , Sweedler’s Hopf algebra. The quasitriangular structures on H4 are given by Ra = 2−1 1⊗1+g ⊗1+1⊗g −g ⊗g +2−1 a x⊗x+x⊗gx+gx⊗gx−gx⊗x where a ranges over k. There exist Hopf algebras without quasitriangular structure. The easiest example is the dual of a nonabelian finite group ring. Proposition 117. For any finite nonabelian group G, (kG)∗ has no quasitriangular structure. Proof. Fist we recall the coalgebra structure of (kG)∗ : let {pg | g ∈ G} be the dual basis of {g | g ∈ G}, that is pg (h) = δg,h for all h ∈ G. Then px ⊗ px−1 g , and pg ph = δg,h pg ∆(pg ) = x∈G
for all g, h ∈ G. As G is non-abelian we can pick g, h ∈ G such that gh = hg; let M = kph and N = kpg . If we can show that M ⊗ N and N ⊗ M are not isomorphic as (kG)∗ -modules, then it follows from Remark 10 3) that there is no R such that ((kG)∗ , R) is quasitriangular. We have px ph ⊗ px−1 gh pg = 0, pgh · (M ⊗ N ) = pgh · (ph ⊗ pg ) = x∈G
and, on the other hand pgh · (N ⊗ M ) = pgh · (pg ⊗ ph ) =
px pg ⊗ px−1 gh ph = pg ⊗ ph .
x∈G
Hence, pgh ∈ AnnkG∗ (M ⊗ N ) \ AnnkG∗ (N ⊗ M ). Proposition 118. Let (H, R) be a quasitriangular bialgebra. Then R is a solution of the QYBE in H ⊗ H ⊗ H. In particular, for any left H-module M , the homotety R = RR,V : M ⊗ M → M ⊗ M,
R(v ⊗ w) = R · (v ⊗ w) = R1 · v ⊗ R2 · w
is a solution of the QYBE in End(M ⊗ M ⊗ M ).
5.3 Hopf algebras versus the QYBE
229
Proof. It follows from (QT1) and (QT5) that 1 1 ∆cop (R1 ) ⊗ R2 = R(2) ⊗ R(1) ⊗ R2 = (τ ⊗ Id)(∆(R1 ) ⊗ R2 )
= (τ ⊗ Id)(R13 R23 ) = R23 R13 and we have proved the formula (QT 1 ) R23 R13 = ∆cop (R1 ) ⊗ R2 . Using this formula and then (QT5) and (QT1) we obtain (r = R): R23 R13 R12 = (∆cop (r1 ) ⊗ r2 )R12 = ∆cop (r1 )R ⊗ r2 = R∆(r1 ) ⊗ r2 = R12 (∆(r1 ) ⊗ r2 ) = R12 R13 R23 i.e. R is a solution of the QYBE. Remark 13. Using Proposition 118 we obtain a rich class of solutions of the QYBE. However, not every solution is of this form: Radford [155] constructed a counterexample, namely the classical Yang-Baxter operator Rq constructed in (5.11), in the case where q is not a root of unity. Coquasitriangular bialgebras Let H be a bialgebra. Recall that a k-linear map σ : H ⊗ H → k is called convolution invertible if it has an inverse σ −1 in the convolution algebra Hom(H ⊗ H, k), that is, σ −1 : H ⊗ H → k satisfies σ(x(1) ⊗ y(1) )σ −1 (x(2) ⊗ y(2) ) = σ −1 (x(1) ⊗ y(1) )σ(x(2) ⊗ y(2) ) = ε(x)ε(y)1H for all x, y ∈ H. In the sequel, σ12 , σ13 , σ23 : H ⊗ H ⊗ H → k will be the k-linear maps defined by σ12 (x ⊗ y ⊗ z) = ε(z)σ(x ⊗ y), σ13 (x ⊗ y ⊗ z) = ε(y)σ(x ⊗ z), σ23 (x ⊗ y ⊗ z) = ε(x)σ(y ⊗ z), for all x, y, z ∈ H. Definition 7. A coquasitriangular (or braided) bialgebra (resp. Hopf algebra) is a pair (H, σ), where H is a biagebra (resp. Hopf algebra) and σ : H ⊗H → k is a k-linear map such that (B1) σ(x(1) ⊗ y(1) )y(2) x(2) = σ(x(2) ⊗ y(2) )x(1) y(1) ; (B2) σ(x ⊗ 1) = ε(x); (B3) σ(x ⊗ yz) = σ(x(1) ⊗ y)σ(x(2) ⊗ z); (B4) σ(1 ⊗ x) = ε(x); (B5) σ(xy ⊗ z) = σ(y ⊗ z(1) )σ(x ⊗ z(2) ), for all x, y, z ∈ H. Remarks 11. 1. The two notions of quasitriangular bialgebra and coquasitriangular bialgebra are dual: a finite dimensional bialgebra H is coquasitriangular if and only if H ∗ a quasitriangular. The correspondence between R and
230
5 The quantum Yang-Baxter equation
σ is the following: assume that (H ∗ , R = i fi ⊗ gi ) is quasitriangular, and define σ = σR by the formula σ(h ⊗ g) = fi (g)gi (h) i
for all g, h ∈ H. Then (H, σ) is coquasitriangular. Conversely, assume that (H, σ) is coquasitriangular. As H is finite dimensional, we can identify H ∗ ⊗ H ∗ and (H ⊗ H)∗ , and we take R ∈ H ∗ ⊗ H ∗ equal to σ ∈ (H ⊗ H)∗ . Then (H ∗ , R) is a quasitriangular bialgebra. In particular, the duals of all the finite dimensional quasitriangular bialgebras discussed in Example 9 are coquasitriangular. 2. Coquasitriangular bialgebras can be viewed as generalizations of commutative bialgebras. In particular, if H is commutative, then (H, εH ⊗ εH ) is coquasitriangular. We remark that the condition (B1) is then noting else then the comutativity condition yx = xy. 3. If (H, σ) is a coquasitriangular Hopf algebra then σ is convolution invertible with inverse σ −1 (x ⊗ y) = σ(x ⊗ S(y)). Sometimes it is possible to describe all possible coquasitriangular structures on a Hopf algebra. An elementary example is given in Proposition 119 Proposition 119. [53, Lemma 1.2] Let H be a finite dimensional commutative and cocommutative bialgebra. Then we have a bijective correspondence between coquasitriangular structures σ on H and bialgebra maps θ : H → H ∗ . Proof. If H is commutative and cocommutative, then (B1) holds for every σ. The canonical isomorphism (H ⊗ H)∗ ∼ = Hom(H, H ∗ ) sending σ ∈ (H ⊗ H)∗ to θ : H → H ∗ given by θ(h), k = σ(h ⊗ k) restricts to an isomorphism between the vector space consisting of all σ satisfying (B2)-(B5) and bialgebra maps H → H ∗ . Example 20. Assume that H = kCn , and let g be a generator of the cyclic group Cn . A Hopf algebra map θ : H → H ∗ is completely determined by θ(g), g = λ. Since θ(g n ) = ε, it follows that λn = 1, and we find that coquasitriangular structures on kCn are in bijective correspondence with the n-th roots of 1 in k. Our next result is the dual version of Proposition 118. Proposition 120. Let (H, σ) be a coquasitriangular bialgebra. Then σ12 ∗ σ13 ∗ σ23 = σ23 ∗ σ13 ∗ σ12
5.3 Hopf algebras versus the QYBE
231
in the convolution algebra Hom(H ⊗3 , k). In particular, for any right H-comodule (M, ρ), the k-linear map R = R(σ,M,ρ) : M ⊗M → M ⊗M,
R(u⊗v) = σ(u[1] ⊗v[1] )u[0] ⊗v[0] (5.21)
is a solution of the QYBE in End(M ⊗ M ⊗ M ). Proof. Let x, y, z ∈ H. A direct calculation shows that (σ23 ∗ σ13 ∗ σ12 )(x ⊗ y ⊗ z) = σ(y(1) ⊗ z(1) )σ(x(1) ⊗ z(2) )σ(x(2) ⊗ y(2) ) On the other hand (σ12 ∗ σ13 ∗ σ23 )(x ⊗ y ⊗ z) = σ(x(1) ⊗ y(1) )σ(x(2) ⊗ z(1) )σ(y(2) ⊗ z(2) ) (B3) = σ x ⊗ σ(y(2) ⊗ z(2) )y(1) z(1) (B1) = σ x ⊗ σ(y(1) ⊗ z(1) )z(2) y(2) = σ(y(1) ⊗ z(1) )σ(x ⊗ z(2) y(2) ) (B3) = σ(y(1) ⊗ z(1) )σ(x(1) ⊗ z(2) )σ(x(2) ⊗ y(2) ) hence we have proved that σ is a solution of the QYBE σ12 ∗ σ13 ∗ σ23 = σ23 ∗ σ13 ∗ σ12 in the convolution algebra Hom(H ⊗ H ⊗ H, k). The second statement then follows from the formulas R12 R13 R23 (u ⊗ v ⊗ w) = σ12 ∗ σ13 ∗ σ23 (u[1] ⊗ v[1] ⊗ w[1] )u[0] ⊗ v[0] ⊗ w[0] R23 R13 R12 (u ⊗ v ⊗ w) = σ23 ∗ σ13 ∗ σ12 (u[1] ⊗ v[1] ⊗ w[1] )u[0] ⊗ v[0] ⊗ w[0] for all u, M , w ∈ V . Remarks 12. 1. Let (H, σ) be a finite dimensional coquasitriangular bialgebra and (M, ρ) a right H-comodule. Then (H ∗ , Rσ ) is a quasitriangular bialgebra, M has a structure · of left H ∗ -module and R(σ,M,ρ) = R(Rσ ,M,·) . 2. We have a remarkable difference between quasitriangular and coquasitriangular bialgebras. In the next Section, we will prove the FRT-Theorem: if R : M ⊗ M → M ⊗ M is a solution of the QYBE, with M finite dimensional, then we can find a coquasitriangular bialgebra H coacting on M in such a way that R = R(σ,M,ρ) . We do not have a similar result for quasitriangular bialgebras, as is shown by Radford’s example, see Remark 13. Apparently this is in contradiction with the duality between Propositions 120 and 118. This is explained by the fact that the coquasitriangular bialgebra H constructed in the FRT Theorem is not necessarily finite dimensional (even if M is finite dimensional). Yetter-Drinfeld modules Let H be a bialgebra. We recall from Section 4.4 that a (left-right) Yetter-Drinfeld module over H is a threetuple (M, ·, ρ), where (M, ·) is a left H-module, (M, ρ) is a right H-comodule and
232
5 The quantum Yang-Baxter equation
h(1) · m[0] ⊗ h(2) m[1] = (h(2) · m)[0] ⊗ (h(2) · m)[1] h(1)
(5.22)
for all h ∈ H, m ∈ M . H YDH is the category of Yetter-Drinfeld module over H and H-linear H-colinear maps. As we have seen in Section 4.4, the compatibility relation (5.22) can be rewritten in the case where H has a twisted antipode S, and then it takes the form ρ(h · m) = h(2) · m[0] ⊗ h(3) m[1] S(h(1) )
(5.23)
Example 21. Let G be a group and H = kG. Using (5.23) we easily obtain that M ∈ kG YDkG if and only if M is a crossed G-module , that is M is a left k[G]-module and there exists {Mσ | σ ∈ G} a family of k-subspaces of M such that M = ⊕σ∈G Mσ and g · Mσ ⊆ Mgσg−1 for all g, σ ∈ G. The following Theorem generalizes and unifies Propositions 118 and 120. Theorem 59. (Yetter [189]) Let H be a bialgebra and (M, ·, ρ) ∈ be a left-right Yetter-Drinfeld module. Then the map R = R(M,·,ρ) : M ⊗ M → M ⊗ M,
H YD
H
R(m ⊗ n) = n[1] · m ⊗ n[0]
for all m, n ∈ M , is a solution of the QYBE. Futhermore, if H is a Hopf algebra then R is bijective. Proof. For l, m, n ∈ M we have
R23 R13 R12 (l ⊗ m ⊗ n) = R23 R13 m[1] · l ⊗ m[0] ⊗ n = R23 n[1] m[1] · l ⊗ m[0] ⊗ n[0] = n[2] m[1] · l ⊗ n[1] · m[0] ⊗ n[0]
and
R12 R13 R23 (l ⊗ m ⊗ n) = R12 R13 l ⊗ n[1] · m ⊗ n[0] = R12 n[1] · l ⊗ n[2] · m ⊗ n[0] = (n[2] · m)[1] · (n[1] · l) ⊗ (n[2] · m)[0] ⊗ n[0] = (n[1](2) · m)[1] n[1](1) ·l ⊗ (n[1](2) · m)[0] ⊗ n[0] (5.22) = n[2] m[1] · l ⊗ n[1] · m[0] ⊗ n[0]
i.e. R is a solution of the QYBE. Now, if S is the antipode of H then the map R−1 : M ⊗ M → M ⊗ M, for all m, n ∈ M is the inverse of R.
R−1 (m ⊗ n) = S(n[1] ) · m ⊗ n[0]
5.3 Hopf algebras versus the QYBE
233
Remark 14. If (M, ·, ρ) ∈ H YDH then the map R(M,·,ρ) := τ Rτ given by : M ⊗ M → M ⊗ M, R(M,·,ρ)
R(M,·,ρ) (m ⊗ n) = m[0] ⊗ m[1] · n
is also a solution of the QYBE. We will prove in Section 5.4 that Theorem 59 has a converse in the situation where M is finite dimensional. The problem will be to find a Yetter-Drinfeld module structure on M . We end this Section presenting two different ways to construct Yetter-Drinfeld modules. First, any representation (resp. corepresentation) of a quasitriangular (resp. coquasitriangular) bialgebra gives us a Yetter-Drinfeld module. This also explains why Propositions 118 and 120 are special cases of Theorem 59. Proposition 121. Let (H, R) be a quasitriangular bialgebra and (M, ·) be a left H-module. Then M is an object in H YDH via the right H-coaction ρ = ρR : M → M ⊗ H,
ρ(m) = R2 · m ⊗ R1
for all m ∈ M . Furthermore, R(R,M,·) = R(M,·,ρ) . Proof. First we show that (M, ρ = ρR ) is a right H-comodule. A direct calculation gives us that (r = R): (ρ ⊗ Id)ρ(m) = r2 R2 · m ⊗ r1 ⊗ R1 , and (Id ⊗ ∆)ρ(m) = R2 · m ⊗ ∆(R1 ) for all m ∈ M . Applying τ to (QT1), we obtain R2 ⊗ ∆(R1 ) = r2 R2 ⊗ r1 ⊗ R1 and (ρ ⊗ Id)ρ = (Id ⊗ ∆)ρ. Taking (QT2) into account, we have ε(m[1] )m[0] = ε(R1 )R2 · m = m i.e. (M, ρ) is a right H-comodule. Let us prove now the compatibility condition (5.22). For h ∈ H and m ∈ M we have h(1) · m[0] ⊗ h(2) m[1] = h(1) R2 · m ⊗ h(2) R1 and (h(2) · m)[0] ⊗ (h(2) · m)[1] h(1) = R2 h(2) · m ⊗ R1 h(1) . Applying τ to (QT5) we find h(1) R2 ⊗ h(2) R1 = R2 h(2) ⊗ R1 h(1) hence (M, ·, ρR ) ∈ H YDH . Finally R(M,·,ρR ) (m ⊗ n) = n[1] · m ⊗ n[0] = R1 · m ⊗ R2 · n = R(R,M,·) (m ⊗ n) which finishes our proof.
234
5 The quantum Yang-Baxter equation
The dual result is the following; we leave the details as an exercise to the reader. Proposition 122. Let (H, σ) be a coquasitriangular bialgebra and (M, ρ) be a right H-comodule. Then M is an object in H YDH via the left H-action h · m = σ(m[1] ⊗ h)m[0] for all h ∈ H and m ∈ M . Furthermore R(σ,M,ρ) = R(M,·,ρ) . Recall from Chapters 2 and 4 that the forgetful functors H YDH → H M and H YDH → MH respectively have a right and left adjoint. This gives us a second method of constructing Yetter-Drinfeld modules: start with an Hmodule or an H-comodule, and apply the appropriate adjoint functors. We describe this explicitly in the next Proposition. Proposition 123. Let H be a bialgebra with a twisted antipode. 1. For any left H-module M , M ⊗ H ∈ H YDH via the following structures h · (m ⊗ k) = h(2) m ⊗ h(3) kS −1 (h(1) ) and ρ(m ⊗ h) = m ⊗ h(1) ⊗ h(2) for all h, k ∈ H and m ∈ M . The functor • ⊗ H : H M → H YDH is a right adjoint of the forgetful functor H YDH → H M. 2. For any right H-comodule N , H ⊗N ∈ H YDH via the following structures h · (k ⊗ n) = hk ⊗ n and ρ(h ⊗ n) = h(2) ⊗ n[0] ⊗ h(3) n[1] S −1 (h(1) ) for all h, k ∈ H and n ∈ N . The functor H ⊗ • : MH → H YDH is a left adjoint of the forgetful functor H YDH → MH . Proof. This is a special case of Theorem 15, Example 13 and Example 14. Remark 15. k is a left H-module and a right H-comodule, the structure maps are the trivial ones. It follows that H can be viewed as an Yetter-Drinfeld module in two ways, namely h · k = h(2) kS −1 (h(1) ) and ρ = ∆ and
h · k = hk and ρ(h) = h(2) ⊗ h(3) S −1 (h(1) )
for all h, k ∈ H. These Yetter-Drinfeld modules are sometimes called the Verma Yetter-Drinfeld modules [83].
5.4 The FRT Theorem
235
5.4 The FRT Theorem First, we recall some well-known Lemmas. Lemma 25. Let M be a finite dimensional vector space with basis {m1 , · · · , mn }, and let C be a coalgebra. Let {cvl | v, l = 1, · · · , n} be a family of elements in C, and consider the k-linear map ρ : M → M ⊗ C, ρ(ml ) = mv ⊗ cvl (M, ρ) is a right C-comodule if and only if the matrix (cvl ) is comultiplicative, i.e. ∆(cjk ) = cju ⊗ cuk and ε(cjk ) = δkj (5.24) for all j, k = 1, · · · , n. Writing B = (cvl ), we can formally rewrite (5.24) as ∆(B) = B ⊗ B and ε(B) = In Lemma 26. Let (C, ∆, ε) be a coalgebra. On the tensor algebra (T (C), M, u), there exists a unique bialgebra structure (T (C), M, u, ∆, ε) such that ∆(c) = ∆(c) and ε(c) = ε(c) for all c ∈ C. In addition, the inclusion i : C → T (C) is a coalgebra map. Furthermore, if M is a vector space and µ : C ⊗ M → M , µ(c ⊗ m) = c · m is a linear map, then there exists a unique left T (C)-module structure on M , µ : T (C) ⊗ M → M , such that µ(c ⊗ m) = c · m, for all c ∈ C, m ∈ M . Proof. This is an immediate application of the universal property of the tensor algebra T (C). Lemma 27. Let H be a bialgebra generated as an algebra by G = (gij ) and let σ : H ⊗ H → k be a k-linear map satisfying (B3) and (B5). If σ satisfies (B1) for any x, y ∈ G, then (B1) holds for all elements of H. Proof. Left to the reader. Now, we can prove the main results of this chapter. Theorem 60. Let M be a finite dimensional vector space and R ∈ Endk (M ⊗ M ) a solution of the QYBE. Then there exists a bialgebra A(R) and a unique k-linear map σ : A(R) ⊗ A(R) → k such that (A(R), σ) is a coquasitriangular bialgebra, M has a structure of right A(R)-comodule (M, ρ), and R = R(σ,M,ρ) . Proof. First we construct the bialgebra A(R). Let {m1 , · · · , mn } be a basis of M and (xij uv ) a family of scalars of k such that R(mu ⊗ mv ) = xij uv mi ⊗ mj
(5.25)
236
5 The quantum Yang-Baxter equation
for all u, v = 1, · · · , n. Let (C, ∆, ε) = Mn (k) be the comatrix coalgebra (see Section 5.1), and let ρ : M → M ⊗ C given by ρ(ml ) = mv ⊗ cvl
(5.26)
for all l = 1, · · · , n. Then, by Lemma 25, M is a right C-comodule. We know from Lemma 26 that T (C) is a bialgebra. i : C → T (C) is a coalgebra map, so M is a right T (C)-comodule via M
- M ⊗C
ρ
- M ⊗ T (C)
I⊗i
This right T (C)-coaction will also be denoted by ρ. Now, we define the quantum Yang-Baxter obstructions ybij kl by the following formula ij u v vu i j ybij (5.27) kl = xvu ck cl − xlk cv cu for all i, j, k, l = 1, · · · , n. We can easily prove that ij ij u v i j uv ∆(ybij kl ) = ybuv ⊗ ck cl + cu cv ⊗ ybkl and ε(ybkl ) = 0
for all i, j, k, l = 1, · · · , n. Let I be the two-sided ideal of T (C) generated by all ybij kl . It follows from the above formulas that I is a biideal of T (C); hence A(R) = T (C)/I is a bialgebra and M has a right A(R)-comodule structure via the canonical projection T (C) → A(R). We conclude A(R) is the free algebra with generators cji , i, j = 1, · · · , n, and relations u v vu i j xij vu ck cl = xlk cv cu
for all i, j, k, l = 1, · · · , n. The comultiplications and the counit are given in such a way that (cij ) is a comultiplicative matrix. Now we construct σ : A(R) ⊗ A(R) → k such that (A(R), σ) is coquasitriangular and R = R(σ,M,ρ) . First, we define σ : C ⊗ C → k,
σ(civ ⊗ cju ) = xij vu
(5.28)
for all i, j, u, v = 1, · · · , n. Then we extend the map σ to the whole of T (C) using the relations (B2-B5). The new map T (C) ⊗ T (C) → k is also denoted by σ. We claim that this map factorizes to a map A(R) ⊗ A(R) → k; we have to show that σ(T (C) ⊗ I) = σ(I ⊗ T (C)) = 0 Using (B3) and (B5), this comes down to proving ij p σ(cpq ⊗ ybij kl ) = 0 and σ(ybkl ⊗ cq ) = 0
5.4 The FRT Theorem
237
We prove only first formula and leave the second one to the reader. ij p u v vu p i j σ(cpq ⊗ ybij kl ) = xvu σ(cq ⊗ ck cl ) − xlk σ(cq ⊗ cv cu )
(B3)
p u r v vu p i r j = xij vu σ(cr ⊗ ck )σ(cq ⊗ cl ) − xlk σ(cr ⊗ cv )σ(cq ⊗ cu ) pu rv vu pi rj = xij vu xrk xql − xlk xrv xqu
(5.9)
=0
We have constructed σ : A(R) ⊗ A(R) → k such that (B2)-(B5) holds. From Lemma 27 we know that it suffices to verify (B1) on the algebra generators cij . For x = cpq , y = crs , we have u v σ(x(1) ⊗ y(1) )y(2) x(2) = σ(cpv ⊗ cru )cus cvq = xpr vu cs cq
and p r σ(x(2) ⊗ y(2) )x(1) y(1) = σ(cvq ⊗ cus )cpv cru = xvu qs cp cu
These are equal in A(R) since u v vu p r ybpr xpr sq = vu cs cq − xqs cp cu = 0 u,v
in A(R). We conclude that σ satisfies (B1) and that (A(R), σ) is a coquasitriangular bialgebra. Finally R = R(σ,M,ρ) : for any two elements mu , mv , in the basis of M , we have R(σ,M,ρ) (mv ⊗ mu ) = σ(civ ⊗ cju )mi ⊗ mj = xij vu mi ⊗ mj (5.25)
= R(mv ⊗ mu )
This formula also shows that σ is unique with the condition R = R(σ,M,ρ) and this completes our proof. Theorem 60 is originally due to Faddeev, Reshetikhin, and Takhtajan (see [85]) and is now usually referred to as the “FRT” theorem. An alternative version, using Yetter-Drinfeld modules instead of coquasitriangular bialgebras, was proved by Radford in [155]. Combining Theorem 60 and Proposition 122, we obtain Radford’s version of the FRT Theorem. Theorem 61. Let M be a finite dimensional vector space and R ∈ Endk (M ⊗ M ) a solution of the QYBE. Then there exists a bialgebra A(R) and an A(R)Yetter-Drinfeld module structure (M, ·, ρ) on M such that R = R(M,·,ρ) . Remark 16. Recently Di-Ming Lu [119] showed that the bialgebra A(R) can be viewed as a special case of the bialgebras that arise from Tannaka duality.
238
5 The quantum Yang-Baxter equation
Example 22. Take a scalar q = 0 and
q
0
0
0 1 q − q −1 Rq = 0 0 1 0
0
0
0
0 0 q
the classical Yang-Baxter operator in dimension two. The bialgebra A(Rq ) from the FRT theorem is Mq (2), using the notation from quantum group theory. It has the following description: as an algebra, A(Rq ) has generators x, y, z, t, and relations xy = qyx, yt = qty,
xz = qzx,
zt = qtz,
yz = zy
xt − tx = (q − q −1 )yz
The comultiplication ∆ and the counit ε of A(Rq ) are given by xy xy xy xy ∆ = ⊗ , ε = I2 z t z t z t z t To see this, we consider the sixteen relations u v i j xij xvu vu ck cl = lk cv cu u,v
u,v
for all i, j, k, l = 1, 2. Writing c11 = x, c12 = y, c21 = z, c22 = t, we find after a lengthy calculation that the six commutation rules given above are the only independent ones among the sixteen relations. It is interesting to note that the element detq = xt − qyz is a central and grouplike element of Mq (2), called the quantum determinant of order two. In particular, taking the quotient Mq (2)/(detq − 1) we obtain the Hopf algebra SLq (2). Its antipode is given by the formula S(x) = t, S(y) = −qy, S(z) = −q −1 z, S(t) = x This construction can be generalized for an arbitrary n leading to the bialgebras (resp. Hopf algebras) Mq (n) (resp. SLq (n)).
5.5 The set-theoretic braid equation The quantum Yang-Baxter equation and the braid equation can be considered on the level of sets. Let S be a set, and R : S × S → S × S a (bijective) map. We can still define the maps
5.5 The set-theoretic braid equation
239
R12 , R13 , R23 : S 3 → S 3 and we say that S is a solution of the quantum Yang-Baxter equation (resp. the braid equation) if (5.7) (resp. (5.8)) is an equality for functions S 3 → S 3 . As in the vector space case, R is a solution of the QYBE if and only if τ R is a solution of the braid equation; now the map τ : S × S → S × S is the transposition τ (x, y) = (y, x). Example 23. [92] Let : S × S → S, (x, y) → x y be a function. Then R : S × S → S × S,
R(x, y) = (x y, x)
is a solution of the braid equation if and only if x (y z) = (x y) (x z) for all x, y, z ∈ S. A pair (S, ) is called a rack if the above equation holds and the map x• : S → S, y → xy, is a bijection for all x ∈ S. Racks served as a tool for the construction of invariants of knots and links and recently for the construction of finite dimensional pointed Hopf algebras. For a further study and examples of racks we refer to [6] and the references therein. The problem now is to find solutions of the set theoretic QYBE. First we will construct solutions in the case that S = G is a group. The study of the braid equation on sets was started recently in [84] and then continued in [121], [160]. Definition 8. Let G be a group and let ξ : G × G → G,
ξ(x, y) = x y
be a left action of G on itself, and η : G × G → G,
η(x, y) = xy
be a right action of G on itself. (ξ, η) is called a compatible pair of actions if xy = (x y)(xy )
(5.29)
for all x, y ∈ G. In a pair of compatible actions, the left action is completely determined by the right action, and vice versa. So we could reformulate the definition in terms of one of the two actions, but this is not very appropriate since the conditions for the remaining action is not very transparent. An immediate example of a pair of compatible actions is (ξ, η) where ξ is trivial and η is the conjugation, i.e. x y = y and y x = x−1 yx
240
5 The quantum Yang-Baxter equation
Theorem 62. (Lu, Yan, Zhu [121]) Let (ξ, η) be a compatible pair of actions of a group G on itself. Then the map R = R(ξ,η) : G × G → G × G,
R(x, y) = (x y, xy )
(5.30)
is a bijective solution of the braid equation on the set G. Proof. We write R12 R23 R12 (x, y, z) = (x1 , y1 , z1 ),
R23 R12 R23 (x, y, z) = (x2 , y2 , z2 ).
It follows from the compatibility condition (5.29) that x1 y1 z1 = x2 y2 z2 , so it suffices to show that x1 = x2 and z1 = z2 . x y A direct calculation tells us that x1 = ( y)(x ) z and x2 = (xy) z and using (5.29) once more, we find that x1 = x2 . In a similar way we can prove that z1 = z2 . We still have to prove that R is bijective. First we claim that y −1
y −1 = (x
)
(x y)−1 ,
(x y)−1 x−1 = (xy )−1
(5.31)
for all x, y ∈ G. Indeed, we have y y −1 (5.29) (5.29) x (y −1 ) x = x (y −1 ) (xy )y = xy y −1 = (x y)−1 x, proving the first formula follows; the proof of the second formula is similar. Let i be the bijection i : G × G → G × G, i(x, y) = (y −1 , x−1 ). For all x, y ∈ G, we compute that (i ◦ R)2 (x, y) = i
(xy )−1
(x y)−1 (5.31) (x y)−1 , (xy )−1 = i(y −1 , x−1 ) = (x, y)
This means that (i ◦ R)2 = Id, and R is bijective. In Theorem 63, we will provide two alternative descriptions of the class of solutions given by compatible pairs. First we introduce braiding operators on groups, which can be viewed as set theoretic analogs of coquasitriangular Hopf algebras. Definition 9. Let G be a group with multiplication m. A braiding operator on G is a bijective map σ : G×G → G×G satisfying the following conditions: (BO1) σ(m × Id) = (Id × m)σ12 σ23 ; (BO2) σ(Id × m) = (m × Id)σ23 σ12 ; (BO3) σ(1, x) = (x, 1) and σ(x, 1) = (1, x), for all x ∈ G; (BO4) mσ = m.
5.5 The set-theoretic braid equation
Remarks 13. 1. The map σ(x, y) = (y, y^{-1}xy) is a braiding operator on any group G, called the conjugate braiding.
2. (BO4) can be viewed as a generalized commutativity condition.
3. In Section 2.3, we have discussed the smash product of algebras. A similar construction can be carried out in the category of groups: given two groups G and H, consider the product set G × H, a map σ : H × G → G × H, and the following multiplication on G × H:

m_{G ×_σ H} = (m × m)(Id × σ × Id)

The group theoretic analog of Theorem 7 tells us that (G × H, m_{G ×_σ H}, (1_G, 1_H)) is a group if and only if (BO1)-(BO3) are satisfied (in Definition 9 these conditions are stated in the case G = H, but they still make sense in the case G ≠ H). This can also be restated in terms of the factorization problem for groups. A group X factorizes through G and H if and only if there exist group morphisms i : G → X and j : H → X such that m_X ◦ (i × j) : G × H → X is a bijection. One can show that there is a one-to-one correspondence between group structures on G × H for which the canonical inclusions are group homomorphisms and maps σ : H × G → G × H satisfying (BO1)-(BO3).
Next we recall the notion of 1-cocycle.

Definition 10. Let G, A be groups, and assume that G acts as a group of automorphisms on A. A bijective 1-cocycle of G with coefficients in A is a bijection π : G → A such that

π(xy) = π(x)(x · π(y))
(5.32)
for all x, y ∈ G. We will next show that there is a close relation between the three concepts introduced above, namely compatible pairs of actions, braiding operators, and bijective 1-cocycles.

Theorem 63. (Lu, Yan, Zhu [121]) Let G be a group. There are bijective correspondences between the following three sets:
1. CA, consisting of all compatible pairs of actions (ξ, η) of G on itself;
2. BO, consisting of all braiding operators σ : G × G → G × G;
3. BC, consisting of all triples (A, ·, π), where A is a group on which G acts as a group of automorphisms, and π : G → A is a bijective 1-cocycle.

Proof. First, let (ξ, η) ∈ CA be a compatible pair of actions and define σ = σ_{(ξ,η)} by the formula

σ(x, y) = (^x y, x^y)
(5.33)
We have to prove that σ is a braiding operator. We write

σ(m × Id)(x, y, z) = σ(xy, z) = (x_1, y_1),   (Id × m)σ^{12}σ^{23}(x, y, z) = (x_2, y_2).

It follows from the compatibility condition (5.29) that x_1 y_1 = xyz = x_2 y_2, so it is enough to prove that x_1 = x_2. A direct computation gives x_1 = ^{(xy)}z and x_2 = ^x(^y z),
which are equal because ξ is a left action, and (BO1) follows. (BO2) can be proved in a similar way. Taking x = 1 in (5.29), we find that 1^y = 1 for any y ∈ G, so σ(1, y) = (^1 y, 1^y) = (y, 1) and the first equality of (BO3) follows; the second one is obtained in a similar way. Finally, the equality (BO4) is exactly the compatibility condition (5.29), and we can conclude that σ is a braiding operator.

Now let σ be a braiding operator and define (ξ, η) = (ξ, η)_σ using (5.33). Looking at the first component of (BO1), we obtain ^{(xy)}z = ^x(^y z); the first equality in (BO3) implies ^1 x = x. This proves that ξ is a left action. Similarly, we can show that η is a right action. The compatibility condition (5.29) then follows directly from (BO4). We now have well-defined maps CA → BO and BO → CA, and it is clear that they are each other's inverses.

Take a compatible pair (ξ, η) ∈ CA, and define (A, ·, π) = (A, ·, π)_{(ξ,η)} as follows: A = G as a set, with new multiplication
x ∗ y = x(^{x^{-1}} y)   (5.34)
for all x, y ∈ G. Replacing y by ^x y in (5.34), we find xy = x ∗ (^x y), so π = Id : G → A is a bijective 1-cocycle. We still have to show that A = (G, ∗) is a group and that G acts as a group of automorphisms on A. First, it is obvious that 1 is a left unit, and that ^x(x^{-1}) is a right inverse of x with respect to the multiplication ∗. Using (5.29), we find

x ∗ y = x(^{x^{-1}} y) = y((x^{-1})^y)^{-1}   (5.35)

and it follows that 1 is also a right unit, and that (x^{(x^{-1})})^{-1} is a left inverse of x. Now let us show that G acts on A. Using (5.29), we find

^x(yz) · x^{(yz)} = xyz = (^x y)(x^y)z = (^x y)(^{(x^y)}z)((x^y)^z)   (5.36)

Now η is a right action, so x^{(yz)} = (x^y)^z, and therefore

^x(yz) = (^x y)(^{(x^y)}z)   (5.37)
Let t ∈ G and put z = ^{y^{-1}}t in (5.37), so that yz = y ∗ t. We obtain

^x(y ∗ t) = ^x(yz) = (^x y)(^{(x^y)}z) = (^x y)(^{(x^y)y^{-1}}t)
= (^x y)(^{(^x y)^{-1}x}t)   (by (5.29))
= (^x y)(^{(^x y)^{-1}}(^x t)) = (^x y) ∗ (^x t)   (5.38)
and we have shown that G acts on A. Finally, ∗ is associative, since

x ∗ (y ∗ t) = x(^{x^{-1}}(y ∗ t)) = x((^{x^{-1}}y) ∗ (^{x^{-1}}t))   (by (5.38))
= x(^{x^{-1}}y)(^{(^{x^{-1}}y)^{-1}}(^{x^{-1}}t)) = (x ∗ y)(^{(x ∗ y)^{-1}}t) = (x ∗ y) ∗ t.

Our next step is to construct the inverse of the map (ξ, η) → (A, ·, π)_{(ξ,η)}. Take (A, ·, π) ∈ BC. We use the bijection π to identify G and A, write ∗ for the product on G induced by the product on A, and ξ for the left action of G on G induced by the action · of G on A. Then (5.32) is equivalent to (5.34). Finally, we define η using (5.29), so that (5.35) also holds. It remains to be shown that η is a right action. Since ξ is a left action, we have 1 ∗ 1 = 1(^{1^{-1}}1) = 1, and it follows that 1 is also a unit with respect to ∗. Taking x = 1 in (5.35), we obtain 1^y = 1, for all y ∈ G. On the other hand, by the compatibility condition (5.29), we still have the equality (5.36). Moreover, since ξ acts as automorphisms of (G, ∗), we have (with z = ^{y^{-1}}t, so that yz = y ∗ t)

^x(yz) = ^x(y ∗ t) = (^x y) ∗ (^x t) = (^x y)(^{(x^y)}z),

where, in the last equality, we used part of the computations in (5.38). It follows from (5.36) that x^{(yz)} = (x^y)^z, i.e. η is a right action.

Combining Theorem 62 and Theorem 63, we obtain the following Corollary.

Corollary 36. Any braiding operator σ : G × G → G × G is a bijective solution of the braid equation.

Recall that a map R : S × S → S × S is called non-degenerate if the maps R(x, −), R(−, x) : S → S are bijective, for all x ∈ S. A remarkable result, due to Lu, Yan, and Zhu [121], is the following set-theoretic version of the FRT Theorem.

Theorem 64. Let R : S × S → S × S be a bijective and non-degenerate solution of the braid equation. Then there exists a group G = G(S, R), a map i : S → G and a braiding operator σ : G × G → G × G on G such that σ(i × i) = (i × i)R. Furthermore, (G, σ) is universal with this property.

Proof. (Sketch; we refer to [121] for full detail.) G is the group generated by the set S, subject to the relations xy = uv whenever R(x, y) = (u, v). The hard part of the proof is then to extend R to σ. An open problem is the following: is i : S → G injective? If R is involutory, then it can be shown that i is injective, see [84].
6 Hopf modules and the pentagon equation
We study the Hopf equation R12 R23 = R23 R13 R12 ; this equation is equivalent to the pentagon equation R12 R13 R23 = R23 R12 . It is older ([122]) than the QYBE, and, in a sense, more basic, as it is an expression of an associativity constraint rather than a commutativity constraint. The pentagon equation plays a key role in the theory of duality for von Neumann algebras ([174], [10]). In this Chapter we study these equivalent equations from a purely algebraic point of view. This brings us into the framework of Chapter 5, where we studied the quantum Yang-Baxter equation. We will see that the category of Hopf modules plays (via an FRT type theorem) the same role in the study of the Hopf equation and the pentagon equation as the category of Yetter-Drinfeld modules plays in the study of the QYBE. We will apply our results to construct new examples of noncommutative noncocommutative bialgebras, different from those arising from the FRT theorem for the QYBE. One of the main applications is that our theory leads to a structure Theorem and classification results for finite dimensional Hopf algebras ([137]).
6.1 The Hopf equation and the pentagon equation

We will start with the following definition.

Definition 11. Let M be a vector space and R ∈ End(M ⊗ M).
1. R is called a solution of the Hopf equation if

R^{23}R^{13}R^{12} = R^{12}R^{23}
(6.1)
2. R is called a solution of the pentagon equation if R12 R13 R23 = R23 R12
(6.2)
In the literature (cf. [170]), the pentagon equation is sometimes called the fusion equation. The Hopf equation (resp. the pentagon equation) can be obtained from the QYBE R23 R13 R12 = R12 R13 R23
S. Caenepeel, G. Militaru, and S. Zhu: LNM 1787, pp. 245–300, 2002. c Springer-Verlag Berlin Heidelberg 2002
by deleting the middle term from the right (resp. left) hand side. The Hopf (resp. pentagon) equation can be defined more generally for elements R ∈ A ⊗ A, where A is a k-algebra. In this case the Equations 6.1 and 6.2 should hold in A ⊗ A ⊗ A. Let us now show that the Hopf equation is equivalent to the pentagon equation. Proposition 124. Let M be a vector space and R ∈ End(M ⊗ M ). 1. R is a solution of the Hopf equation if and only if W = τ Rτ is a solution of the pentagon equation. 2. If R ∈ End(M ⊗M ) is bijective, then R is a solution of the Hopf equation if and only if R−1 is a solution of the pentagon equation. Proof. 1. follows from the formulas: W 12 W 13 W 23 = R23 R13 R12 τ 13 ,
W 23 W 12 = R12 R23 τ 13
2. is straightforward. Remark 17. Let R ∈ End(M ⊗ M ) and put T = Rτ . Then T 23 T 12 T 23 = R23 R13 R12 τ 13 ,
T 12 τ 23 T 12 = R12 R23 τ 13 .
and it follows that R is a solution of the Hopf equation if and only if T is a solution of the equation T 12 τ 23 T 12 = T 23 T 12 T 23
(6.3)
In [170], this equation is called the 3-cocycle equation. Similar arguments show that R is a solution of the pentagon equation if and only if τR is a solution of the 3-cocycle equation.

In the next Proposition, we present the Hopf equation in matrix form.

Proposition 125. We fix a basis {v_1, · · · , v_n} of a finite dimensional vector space M. Let R, S ∈ End(M ⊗ M) be given by their matrices

R(v_k ⊗ v_l) = x^{ij}_{kl} v_i ⊗ v_j,   S(v_k ⊗ v_l) = y^{ij}_{kl} v_i ⊗ v_j,

for all k, l = 1, · · · , n, where (x^{ij}_{uv}), (y^{ij}_{uv}) are two families of scalars in k (summation over repeated indices is understood). Then

R^{23}S^{13}S^{12} = S^{12}R^{23}

if and only if

x^{ij}_{vu} y^{pu}_{βk} y^{βv}_{ql} = x^{αj}_{lk} y^{pi}_{qα}

for all i, j, k, l, p, q = 1, · · · , n. In particular, R is a solution of the Hopf equation if and only if

x^{ij}_{vu} x^{pu}_{βk} x^{βv}_{ql} = x^{αj}_{lk} x^{pi}_{qα}   (6.4)

for all i, j, k, l, p, q = 1, · · · , n.

Proof. For k, l, q = 1, · · · , n we have:

R^{23}S^{13}S^{12}(v_q ⊗ v_l ⊗ v_k) = R^{23}S^{13}(y^{βv}_{ql} v_β ⊗ v_v ⊗ v_k)
= R^{23}(y^{pu}_{βk} y^{βv}_{ql} v_p ⊗ v_v ⊗ v_u)
= x^{ij}_{vu} y^{pu}_{βk} y^{βv}_{ql} v_p ⊗ v_i ⊗ v_j

and

S^{12}R^{23}(v_q ⊗ v_l ⊗ v_k) = S^{12}(x^{αj}_{lk} v_q ⊗ v_α ⊗ v_j) = x^{αj}_{lk} y^{pi}_{qα} v_p ⊗ v_i ⊗ v_j
and the result follows. Now we will see how solutions of the Hopf equation can be used to construct bialgebras without counit. Proposition 126. Let A be an algebra and R ∈ A ⊗ A an invertible solution of the Hopf equation. The comultiplication ∆l : A → A ⊗ A,
∆l (a) = R(1 ⊗ a)R−1
is coassociative and an algebra map, i.e. (A, ·, 1A , ∆l ) is a bialgebra without counit. The same result holds for the comultiplication ∆r : A → A ⊗ A,
∆r (a) = R−1 (a ⊗ 1)R
Proof. For a ∈ A, we compute

(I ⊗ ∆_l)∆_l(a) = R^{23}R^{13}(1 ⊗ 1 ⊗ a)(R^{23}R^{13})^{-1},   (∆_l ⊗ I)∆_l(a) = R^{12}R^{23}(1 ⊗ 1 ⊗ a)(R^{12}R^{23})^{-1}.

R is a solution of the Hopf equation, so (R^{13})^{-1}(R^{23})^{-1}R^{12}R^{23} = R^{12}, and it follows that ∆_l is coassociative if and only if (1 ⊗ 1 ⊗ a)R^{12} = R^{12}(1 ⊗ 1 ⊗ a). Both sides are equal to R ⊗ a, and it follows that ∆_l is coassociative. It is obvious that ∆_l is an algebra map.

Solutions of the pentagon equation in operator algebras are given in [10]. We mention one example, playing a key role in the classification of commutative multiplicative unitaries, and then we turn to purely algebraic examples. Let G be a locally compact group and dg a right Haar measure on G. Then V_G, given by (V_G ξ)(s, t) = ξ(st, t), is a solution of the pentagon equation.
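As a hedged illustration (this code block is not part of the original text): the Takesaki operator (V_G ξ)(s, t) = ξ(st, t) is the pullback of the set map φ(s, t) = (st, t). Since pullback reverses composition, the point map φ satisfies the Hopf-equation form φ^{23}φ^{13}φ^{12} = φ^{12}φ^{23}, which we can check on a small cyclic group:

```python
# Point-level companion of the Takesaki operator V_G: phi(s, t) = (st, t).
# We verify phi^{23} phi^{13} phi^{12} = phi^{12} phi^{23} on Z/5Z (written additively).
n = 5

def phi(s, t):
    return ((s + t) % n, t)

def f12(x): a, b = phi(x[0], x[1]); return (a, b, x[2])
def f13(x): a, b = phi(x[0], x[2]); return (a, x[1], b)
def f23(x): a, b = phi(x[1], x[2]); return (x[0], a, b)

for s in range(n):
    for t in range(n):
        for u in range(n):
            x = (s, t, u)
            assert f23(f13(f12(x))) == f12(f23(x))
print("phi(s, t) = (st, t) satisfies the Hopf-type relation on points")
```

The induced operator on functions on G × G then satisfies the pentagon equation; the same check works verbatim for a nonabelian group.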
Examples 10. 1. The identity map Id_{M⊗M} is a solution of the Hopf equation.
2. Let M be a finite dimensional vector space and u an automorphism of M. If R is a solution of the Hopf equation, then ^u R = (u ⊗ u)R(u ⊗ u)^{-1} is also a solution of the Hopf equation. Indeed, as End(M ⊗ M) ≅ End(M) ⊗ End(M), we can write R = Σ f_i ⊗ g_i with f_i, g_i ∈ End(M). Then ^u R = Σ u f_i u^{-1} ⊗ u g_i u^{-1}, and

(^u R)^{12}(^u R)^{23} = (u ⊗ u ⊗ u)R^{12}R^{23}(u ⊗ u ⊗ u)^{-1},
(^u R)^{23}(^u R)^{13}(^u R)^{12} = (u ⊗ u ⊗ u)R^{23}R^{13}R^{12}(u ⊗ u ⊗ u)^{-1},

hence ^u R is also a solution of the Hopf equation.
3. Let f, g ∈ End(M) be such that f² = f, g² = g and fg = gf. Then R = f ⊗ g is a solution of the Hopf equation. A direct computation shows that

R^{23}R^{13}R^{12} = f² ⊗ fg ⊗ g²,   R^{12}R^{23} = f ⊗ gf ⊗ g,

so the above conclusion follows. With this example in mind, we can view solutions of the Hopf equation as generalizations of idempotent endomorphisms of M, since R = f ⊗ I (or R = I ⊗ f) is a solution of the Hopf equation if and only if f² = f.

Let M be a two dimensional vector space with basis {m_1, m_2}, and identify End(M) with M_2(k). Consider

f_q = ( 1  q )
      ( 0  0 )   (6.5)

where q is a scalar in k. Then f_q² = f_q. Moreover, g_q = I_M − f_q is also an idempotent endomorphism of M and g_q f_q = f_q g_q. Thus we obtain that R_q = f_q ⊗ g_q is a solution of the Hopf equation. The matrix of R_q with respect to the basis {m_1 ⊗ m_1, m_1 ⊗ m_2, m_2 ⊗ m_1, m_2 ⊗ m_2} is

R_q = ( 0  −q  0  −q² )
      ( 0   1  0   q  )
      ( 0   0  0   0  )
      ( 0   0  0   0  )
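As a hedged numerical check (not part of the original text), we can verify with NumPy that R_q = f_q ⊗ g_q satisfies the Hopf equation R^{23}R^{13}R^{12} = R^{12}R^{23} and that its matrix in the ordered basis {m_1⊗m_1, m_1⊗m_2, m_2⊗m_1, m_2⊗m_2} is the 4 × 4 matrix given in the text:

```python
# Verify the Hopf equation for R_q = f_q ⊗ g_q from Example 3.
import numpy as np

q = 3.0                                   # any scalar works
f = np.array([[1.0, q], [0.0, 0.0]])      # f_q, idempotent
g = np.eye(2) - f                         # g_q = I - f_q, idempotent, commutes with f_q
Rq = np.kron(f, g)                        # matrix of f_q ⊗ g_q in the kron basis

I2 = np.eye(2)
R12 = np.kron(Rq, I2)                     # R acting on tensor factors 1, 2 of M⊗M⊗M
R23 = np.kron(I2, Rq)                     # R acting on tensor factors 2, 3
P23 = np.kron(I2, np.eye(4)[[0, 2, 1, 3]])  # permutation swapping factors 2 and 3
R13 = P23 @ R12 @ P23                     # R acting on tensor factors 1, 3

assert np.allclose(R23 @ R13 @ R12, R12 @ R23)   # Hopf equation
expected = np.array([[0, -q, 0, -q**2],
                     [0,  1, 0,  q   ],
                     [0,  0, 0,  0   ],
                     [0,  0, 0,  0   ]])
assert np.allclose(Rq, expected)
print("R_q satisfies the Hopf equation")
```

Here the ordering of np.kron matches the lexicographic basis ordering used in the text.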
Now let g = I_M and R'_q = f_q ⊗ I_M. The matrix of R'_q is

R'_q = ( 1  0  q  0 )
       ( 0  1  0  q )
       ( 0  0  0  0 )
       ( 0  0  0  0 )
and R'_q is also a solution of the Hopf equation.
4. Let A be a k-algebra, a, b ∈ A, and R ∈ A ⊗ A given by

R = 1 ⊗ 1 + a ⊗ b.

Then R is a solution of the pentagon equation if and only if

a ⊗ (ab − ba − 1) ⊗ b = a ⊗ a ⊗ b² + a² ⊗ b ⊗ b + a² ⊗ ba ⊗ b²   (6.6)

Moreover, if a or b is nilpotent, then R is invertible. Now if we take a and b such that

a² = b² = 0,   ab − ba = 1   (6.7)

or

a² = 0,   b² = b,   ab − ba = a + 1   (6.8)

then (6.6) holds and R is a solution of the pentagon equation. We give some explicit examples of this type in the case where A = M_n(k), with char(k) | n. Assume that char(k) = 2 and let n = 2q, where q is a positive integer. Then

a = e_{21} + e_{43} + · · · + e_{2q,2q−1},   b = e_{12} + e_{34} + · · · + e_{2q−1,2q}

is a solution of (6.7), and hence

R = I_n ⊗ I_n + Σ_{i,j=1}^{q} e_{2i,2i−1} ⊗ e_{2j−1,2j}

is an invertible solution of the pentagon equation. On the other hand, for an arbitrary invertible matrix X ∈ M_q(k), the matrices a, b ∈ M_n(k) given by

a = ( I_q  X^{-1} )
    ( X    I_q    ),   b = e_{11} + e_{22} + · · · + e_{qq}

form a solution of (6.8), and hence

R = R_X = I_n ⊗ I_n + Σ_{i=1}^{q} a ⊗ e_{ii}
is an invertible solution of the pentagon equation.
5. R_q and R'_q from Example 3 are also solutions of the QYBE, because they are of the form f ⊗ g with fg = gf. Let us now construct a solution of the Hopf equation which is not a solution of the QYBE. Let G be a group and M a G-graded left k[G]-module such that g · M_σ ⊆ M_{gσ} for all g, σ ∈ G. The map
R : M ⊗ M → M ⊗ M,   R(u ⊗ v) = Σ_σ σ · u ⊗ v_σ   (6.9)
is a solution of the Hopf equation and is not a solution of the quantum Yang-Baxter equation. To prove that R is a solution of the Hopf equation, it suffices to show that (6.1) holds for homogeneous elements of M^{⊗3}. For u_σ ∈ M_σ, u_τ ∈ M_τ and u_θ ∈ M_θ, we compute

R^{23}R^{13}R^{12}(u_σ ⊗ u_τ ⊗ u_θ) = R^{23}R^{13}(τ · u_σ ⊗ u_τ ⊗ u_θ)
= R^{23}(θτ · u_σ ⊗ u_τ ⊗ u_θ)
= θτ · u_σ ⊗ θ · u_τ ⊗ u_θ
= R^{12}(u_σ ⊗ θ · u_τ ⊗ u_θ) = R^{12}R^{23}(u_σ ⊗ u_τ ⊗ u_θ)

and R is a solution of the Hopf equation. A similar calculation shows that

R^{12}R^{13}R^{23}(u_σ ⊗ u_τ ⊗ u_θ) = θτθ · u_σ ⊗ θ · u_τ ⊗ u_θ

and we see that R is not a solution of the QYBE.
6. Let G be a group and M a G-crossed module, that is, a k[G]-Yetter-Drinfeld module. This means that M is G-graded and a left k[G]-module such that g · M_σ ⊆ M_{gσg^{-1}} for all g, σ ∈ G. Then R given by (6.9) is a solution of the QYBE but not a solution of the Hopf equation.
7. Take q ∈ k with q ≠ 0, 1, and consider the classical two dimensional Yang-Baxter operator

R = ( q  0  0        0 )
    ( 0  1  q − q^{-1}  0 )
    ( 0  0  1        0 )
    ( 0  0  0        q )

R is a solution of the QYBE and is not a solution of the Hopf equation. Indeed, the element in the (1, 1)-position of R^{23}R^{13}R^{12} is q³, while the element in the (1, 1)-position of R^{12}R^{23} is q².
8. For a bialgebra H, the map R : H ⊗ H → H ⊗ H, R(g ⊗ h) = Σ h_{(1)}g ⊗ h_{(2)}, for all g, h ∈ H, is a solution of the Hopf equation. This operator was introduced by Takesaki in [174] for a Hopf-von Neumann algebra (A, ∆). On the other hand, the map
W : H ⊗ H → H ⊗ H,   W(g ⊗ h) = Σ g_{(1)} ⊗ g_{(2)}h
for all g, h ∈ H, is a solution of the pentagon equation.
9. Let H be a Hopf algebra with antipode S. Then H/k is a Hopf-Galois extension ([140]), i.e. the canonical map β : H ⊗ H → H ⊗ H, β(g ⊗ h) = Σ gh_{(1)} ⊗ h_{(2)}, is bijective. β is a solution of the Hopf equation. Furthermore, R' : H ⊗ H → H ⊗ H, R'(g ⊗ h) = Σ g_{(1)} ⊗ S(g_{(2)})h, is also a solution of the Hopf equation.

We have already mentioned that the pentagon equation is in a certain sense more basic than the QYBE. By this we mean that the pentagon equation expresses an associativity constraint (cf. [122], [170]), while the QYBE expresses a commutativity constraint. The following two results explain this point of view. Proposition 127 is in fact an elementary observation. Theorem 65 is a more recent result due to Davydov [65].

Proposition 127. Let M be a vector space, let m : M ⊗ M → M, m(v ⊗ w) = v · w, be a k-linear map, and define R = R_m : M ⊗ M → M ⊗ M by
R(v ⊗ w) = v ⊗ w · v
for all v, w ∈ M. Then R_m is a solution of the pentagon equation if and only if (M, m) is an associative algebra.

Proof. For u, v, w ∈ M we have

R^{12}R^{13}R^{23}(u ⊗ v ⊗ w) = u ⊗ v · u ⊗ (w · v) · u

and

R^{23}R^{12}(u ⊗ v ⊗ w) = u ⊗ v · u ⊗ w · (v · u).

Hence R is a solution of the pentagon equation if and only if m is associative.

Let R ∈ End(M ⊗ M) be an invertible solution of the pentagon equation. We introduce the following product □_M on M_k:

U_1 □_M U_2 = U_1 ⊗ M ⊗ U_2

For any U_1, U_2, U_3 ∈ M_k, we have an isomorphism

ϕ_{U_1,U_2,U_3} = R^{24} : U_1 ⊗ M ⊗ U_2 ⊗ M ⊗ U_3 → U_1 ⊗ M ⊗ U_2 ⊗ M ⊗ U_3

One can verify that this makes (M_k, □_M, ϕ) into a monoidal category (without identity object). In fact, we have
Theorem 65. There exists a bijective correspondence between all monoidal structures on the category of k-vector spaces (without identity object) and all bijective solutions of the pentagon equation on various k-vector spaces.

Proof. Let M_k be the category of k-vector spaces. By functoriality, any tensor product functor □ : M_k × M_k → M_k is determined by the vector space M := k □ k; let □_M be the tensor product functor corresponding to a given vector space M. For arbitrary vector spaces U_1, U_2, we have that U_1 □_M U_2 = U_1 ⊗ M ⊗ U_2. We have to explain what an associativity constraint means for the tensor product □_M. Using naturality, any associativity constraint ϕ for □_M is determined by the automorphism R = R_ϕ = ϕ_{k,k,k} ∈ Aut(M^{⊗2}) that arises from the sequence of isomorphisms

M ⊗ M = k □_M M = k □_M (k □_M k) −→ (k □_M k) □_M k = M □_M k = M ⊗ M

where the middle arrow is ϕ_{k,k,k}. For arbitrary vector spaces U_1, U_2, U_3, the associativity constraint

ϕ_{U_1,U_2,U_3} : U_1 □_M (U_2 □_M U_3) → (U_1 □_M U_2) □_M U_3

is given in terms of R as follows: identifying both sides with U_1 ⊗ M ⊗ U_2 ⊗ M ⊗ U_3, we have ϕ_{U_1,U_2,U_3} = R^{24}, where R^{24} acts on the two copies of M. In particular, we obtain

ϕ_{k □_M k,k,k} = ϕ_{M,k,k} = R^{23},   ϕ_{k,k □_M k,k} = ϕ_{k,M,k} = R^{13},   ϕ_{k,k,k □_M k} = ϕ_{k,k,M} = R^{12}

On the other hand,

I_k □_M ϕ_{k,k,k} = R^{23},   ϕ_{k,k,k} □_M I_k = R^{12}.

Now the commutativity of the pentagon diagram for ϕ from the definition of a monoidal category is equivalent to the pentagon equation for R:

R^{12}R^{13}R^{23} = R^{23}R^{12}

and the proof is complete.
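As a hedged numerical illustration of Proposition 127 (this code block is not part of the original text), we can take the 2-dimensional algebra k² with pointwise multiplication — an assumption made only for the example — build the operator R_m(v ⊗ w) = v ⊗ w · v as an n² × n² matrix, and check the pentagon equation R^{12}R^{13}R^{23} = R^{23}R^{12}:

```python
# Proposition 127 for the associative algebra k^2 with pointwise multiplication.
import numpy as np

n = 2

def mult(v, w):               # e_i . e_j = delta_ij e_i (pointwise product)
    return v * w

# Build R_m on the kron basis: R_m(e_i ⊗ e_j) = e_i ⊗ (e_j . e_i)
E = np.eye(n)
R = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        R[:, n * i + j] = np.kron(E[i], mult(E[j], E[i]))

I = np.eye(n)
R12 = np.kron(R, I)
R23 = np.kron(I, R)
S = np.eye(n * n)[[0, 2, 1, 3]]   # flip of the two factors of k^2 ⊗ k^2 (n = 2 only)
P23 = np.kron(I, S)               # swaps tensor factors 2 and 3 of the triple product
R13 = P23 @ R12 @ P23

assert np.allclose(R12 @ R13 @ R23, R23 @ R12)   # pentagon equation
print("R_m solves the pentagon equation for the associative algebra k^2")
```

Replacing mult by a non-associative product makes the assertion fail, in line with the "if and only if" of Proposition 127.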
Definition 12. Let M be a vector space and R ∈ End(M ⊗ M).
1. R is called commutative if R^{12}R^{13} = R^{13}R^{12}.
2. R is called cocommutative if R^{13}R^{23} = R^{23}R^{13}.

Remarks 14. 1. Let R ∈ End(M ⊗ M). Then R is a commutative solution of the Hopf equation if and only if W = τRτ is a cocommutative solution of the pentagon equation. Indeed, R^{12}R^{13} = R^{13}R^{12} if and only if

τ^{12}W^{12}τ^{12}τ^{13}W^{13}τ^{13} = τ^{13}W^{13}τ^{13}τ^{12}W^{12}τ^{12}.   (6.10)

Using the formulas

τ^{12}τ^{13} = τ^{23}τ^{12},   τ^{13}τ^{12} = τ^{12}τ^{23},   W^{12}τ^{23} = τ^{23}W^{13},
τ^{12}W^{13} = W^{23}τ^{12},   W^{13}τ^{12} = τ^{12}W^{23},   τ^{23}W^{12} = W^{13}τ^{23}

we find that equation (6.10) is equivalent to

τ^{12}τ^{23}W^{13}W^{23}τ^{12}τ^{13} = τ^{13}τ^{12}W^{23}W^{13}τ^{23}τ^{12}.

The conclusion follows after we observe that τ^{12}τ^{13}τ^{12}τ^{23} = τ^{23}τ^{12}τ^{13}τ^{12} = Id.
2. Suppose that R ∈ End(M ⊗ M) is bijective. Then R is a cocommutative solution of the Hopf equation if and only if τR^{-1}τ is a commutative solution of the Hopf equation.
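A hedged numerical check of Remarks 14.1 (not part of the original text), using the operator R_q = f_q ⊗ g_q of Example 3 above: R_q is a commutative solution of the Hopf equation, and W = τR_qτ is a cocommutative solution of the pentagon equation:

```python
# Remarks 14.1 for R_q = f_q ⊗ g_q: commutative vs. cocommutative.
import numpy as np

q = 2.0
f = np.array([[1.0, q], [0.0, 0.0]])
R = np.kron(f, np.eye(2) - f)                # R_q, a solution of the Hopf equation

S = np.eye(4)[[0, 2, 1, 3]]                  # the flip tau on M ⊗ M, dim M = 2
W = S @ R @ S                                # W = tau R tau

I = np.eye(2)
def legs(A):
    A12, A23 = np.kron(A, I), np.kron(I, A)
    P23 = np.kron(I, S)
    return A12, P23 @ A12 @ P23, A23         # A^{12}, A^{13}, A^{23}

R12, R13, _ = legs(R)
_, W13, W23 = legs(W)
assert np.allclose(R12 @ R13, R13 @ R12)     # R_q is commutative
assert np.allclose(W13 @ W23, W23 @ W13)     # W is cocommutative
print("Remarks 14.1 verified for R_q")
```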
6.2 The FRT Theorem for the Hopf equation

We begin with the following Proposition, which can be viewed as an analog of Theorem 59. It explains the role of Hopf modules in the theory of the Hopf equation.

Proposition 128. Let H be a bialgebra and (M, ·, ρ) ∈ _H M^H an H-Hopf module. Then the natural map

R_{(M,·,ρ)}(m ⊗ n) = n_{[1]} · m ⊗ n_{[0]}

is a solution of the Hopf equation. If H is commutative, then R_{(M,·,ρ)} is a commutative solution of the Hopf equation.
Proof. Let R = R_{(M,·,ρ)}. For l, m, n ∈ M we have

R^{23}R^{13}R^{12}(l ⊗ m ⊗ n) = R^{23}R^{13}(m_{[1]} · l ⊗ m_{[0]} ⊗ n)
= R^{23}(n_{[1]}m_{[1]} · l ⊗ m_{[0]} ⊗ n_{[0]})
= n_{[2]}m_{[1]} · l ⊗ n_{[1]} · m_{[0]} ⊗ n_{[0]}
= n_{[1](2)}m_{[1]} · l ⊗ n_{[1](1)} · m_{[0]} ⊗ n_{[0]}
= (n_{[1]} · m)_{[1]} · l ⊗ (n_{[1]} · m)_{[0]} ⊗ n_{[0]}
= R^{12}(l ⊗ n_{[1]} · m ⊗ n_{[0]}) = R^{12}R^{23}(l ⊗ m ⊗ n)

and R is a solution of the Hopf equation. We have

R^{12}R^{13}(l ⊗ m ⊗ n) = m_{[1]}n_{[1]} · l ⊗ m_{[0]} ⊗ n_{[0]}

and

R^{13}R^{12}(l ⊗ m ⊗ n) = n_{[1]}m_{[1]} · l ⊗ m_{[0]} ⊗ n_{[0]}.

If H is commutative, then it follows that R^{12}R^{13} = R^{13}R^{12}.
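As a hedged concrete instance of Proposition 128 (this code block is not part of the original text): take H = k[G] and M = k[G] itself as an H-Hopf module, with action given by left multiplication and coaction ρ(h) = h ⊗ h. On basis elements, R_{(M,·,ρ)}(g ⊗ h) = hg ⊗ h, so the Hopf equation can be checked directly on the group; we use S3 with permutations as tuples:

```python
# Proposition 128 for H = M = k[G], G = S3: R(g ⊗ h) = hg ⊗ h on basis elements.
from itertools import permutations

def mul(p, q):                 # group multiplication: (p q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def R(g, h):                   # R_{(M,.,rho)} on basis elements
    return (mul(h, g), h)

def r12(x): a, b = R(x[0], x[1]); return (a, b, x[2])
def r13(x): a, b = R(x[0], x[2]); return (a, x[1], b)
def r23(x): a, b = R(x[1], x[2]); return (x[0], a, b)

G = list(permutations(range(3)))
for g in G:
    for h in G:
        for l in G:
            x = (g, h, l)
            assert r23(r13(r12(x))) == r12(r23(x))   # Hopf equation
print("R(g, h) = (hg, h) satisfies the Hopf equation on S3")
```

Since S3 is nonabelian, this also illustrates that commutativity of H is genuinely needed for the last assertion of Proposition 128, not for the Hopf equation itself.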
R(M,·,ρ) (m ⊗ n) = m[0] ⊗ m[1] · n
is a solution of the pentagon equation. Before stating the FRT Theorem, we state two Lemmas; the proofs are straighforward. Lemma 28. Let H be a bialgebra, (M, ·) a left H-module and (M, ρ) a right H-comodule. Then the set {h ∈ H | ρ(h · m) = h(1) · m[0] ⊗ h(2) m[1] , ∀m ∈ M } is a subalgebra of H. Lemma 29. Let H be a bialgebra, (M, ·) a left H-module and (M, ρ) a right H-comodule. If I is a biideal of H such that I · M = 0, then, with the natural structures, (M, · ) is a left H/I-module, (M, ρ ) a right H/I-comodule and R(M,· ,ρ ) = R(M,·,ρ) Theorem 66. Let M be a finite dimensional vector space and R ∈ End(M ⊗ M ) be a solution of the Hopf equation. 1. There exists a bialgebra B(R) such that M has a structure of B(R)-Hopf module (M, ·, ρ) and R = R(M,·,ρ) .
2. The bialgebra B(R) is universal with respect to this property: if H is a bialgebra such that (M, · , ρ ) ∈ H MH and R = R(M,· ,ρ ) then there exists a unique bialgebra map f : B(R) → H such that ρ = (I ⊗ f )ρ. Furthermore, a · m = f (a) · m, for all a ∈ B(R), m ∈ M . 3. If R is commutative, then there exists a commutative bialgebra B(R) such that M has a structure of B(R)-Hopf module (M, · , ρ ) and R = R(M,· ,ρ ) . Proof. 1. The proof will be given in several steps. Let {m1 , · · · , mn } be a basis for M and (xji uv ) be the matrix of R, i.e. R(mv ⊗ mu ) = xij vu mi ⊗ mj
(6.11)
for all u, v = 1, · · · , n. Let (C, ∆, ε) = Mn (k), be the comatrix coalgebra of order n. As in Section 5.1, {cij | i, j = 1, · · · , n} is the canonical basis of C. Recall that ∆(cij ) = ciu ⊗ cuj ,
ε(cij ) = δji
(6.12)
for all j, k = 1, · · · , n. Let ρ : M → M ⊗ C given by ρ(ml ) = mv ⊗ cvl
(6.13)
for all l = 1, · · · , n. It follows from Lemma 25 that M is a right C-comodule. Let T(C) be the bialgebra structure on the tensor algebra T(C) which extends ∆ and ε (using Lemma 26). As the inclusion i : C → T(C) is a coalgebra map, M has a right T(C)-comodule structure via the composition

M → M ⊗ C → M ⊗ T(C)

of ρ with I ⊗ i.
The right T (C)-comodule structure on M will also be denoted by ρ. We will now define a left T (C)-module structure on M in such a way that R = R(M,·,ρ) . First we define µ : C ⊗ M → M,
µ(cju ⊗ mv ) = xij vu mi
for all j, u, v = 1, · · · n. From lemma 26, there exists a unique left T (C)module structure on (M, ·) such that cju · mv = µ(cju ⊗ mv ) = xij vu mi for all j, u, v = 1, · · · , n. We then have for all u, v = 1, · · · , n that R(M,·,ρ) (mv ⊗ mu ) = cju · mv ⊗ mj = xij vu mi ⊗ mj = R(mv ⊗ mu )
and (M, ·, ρ) has a structure of left T (C)-module and right T (C)-comodule such that R = R(M,·,ρ) . Now, we define the obstructions χ(i, j, k, l), measuring how far away M is from being a T (C)-Hopf module. Keeping in mind that T (C) is generated as an algebra by (cij ) and using Lemma 28 we compute h(1) · m[0] ⊗ h(2) m[1] − ρ(h · m) for h = cjk , and m = ml , for j, k, l = 1, · · · , n. We have u v h(1) · m[0] ⊗ h(2) m[1] = cju · mv ⊗ cuk cvl = mi ⊗ xij vu ck cl and ρ(h · m) = ρ(cjk · ml ) = xαj lk (mα )[0] ⊗ (mα )[1] αj i i = xαj m ⊗ c = m ⊗ x c i i α α lk lk Let αj i ij u v χij kl = xvu ck cl − xlk cα
(6.14)
for all i, j, k, l = 1, · · · , n. Then h(1) · m[0] ⊗ h(2) m[1] − ρ(h · m) = mi ⊗ χ(i, j, k, l)
(6.15)
Let I be the two-sided ideal of T (C) generated by all χij kl , i, j, k, l = 1, · · · , n. The following assertion is the key point of our proof: I is a bi-ideal of T (C) and I · M = 0. We first prove that I is a coideal; this will result from the following formula: ij pj a b i ∆(χij kl ) = χab ⊗ ck cl + cp ⊗ χkl
Indeed, we have: αj ij u v i ∆(χij kl ) = xvu ∆(ck )∆(cl ) − xlk ∆(cα ) αj i u v a b p = xij vu ca cb ⊗ ck cl − xlk cp ⊗ cα αj p u v a b i = xij vu ca cb ⊗ck cl − cp ⊗ xlk cα γj i a b = χij + x c ab ba γ ⊗ck cl pj r s − cip ⊗ −χpj kl + xsr ck cl pj a b i = χij ab ⊗ ck cl + cp ⊗ χkl
where in the last equality we use the fact that i a b pj i r s xγj ba cγ ⊗ ck cl = xsr cp ⊗ ck cl
(6.16)
Hence, the formula (6.16) holds. On the other hand ij ij ε χij ) kl = xlk − xlk = 0 so we proved that I is a coideal of T (C). The proof of the fact that I · M = 0 relies on the fact that R is a solution of the Hopf equation. For z ∈ M , j, k = 1, · · · , n, we have the following formula: R23 R13 R12 − R12 R23 (z ⊗ mk ⊗ mj ) = χ(r, s, j, k) · z ⊗ mr ⊗ ms (6.17) Indeed:
R23 R13 R12 (z ⊗ mk ⊗ mj ) = R23 R13 (cα k · z ⊗ m α ⊗ mj ) = R23 cβj cα k · z ⊗ m α ⊗ mβ ) β α = xrs αβ cj ck · z ⊗ mr ⊗ ms
and
R12 R23 (z ⊗ mk ⊗ mj ) = R12 (z ⊗ csj · mk ⊗ ms ) = R12 (z ⊗ xαs kj mα ⊗ ms ) r = xαs kj cα · z ⊗ mr ⊗ ms
so we obtain R23 R13 R12 − R12 R23 (z ⊗ mk ⊗ mj ) β α αs r rs = xrs αβ cj ck − xkj cα ·z ⊗ mr ⊗ ms = χjk · z ⊗ mr ⊗ ms proving (6.17). But R is a solution of the Hopf equation, hence χrs jk · z = 0, for all z ∈ M , j, k, r, s = 1, · · · , n, and we conclude that I · M = 0. Now we define B(R) = T (C)/I M has a right B(R)-comodule structure via the canonical projection T (C) → B(R) and a left B(R)-module structure as I · M = 0. The (cij ) generate B(R) B(R) and χij kl = 0 in B(R), so using (6.15) we find that (M, ·, ρ) ∈ B(R) M and, using Lemma 29, R = R(M,·,ρ) . This finishes the proof of Part 1) of the Theorem. 2. Let H be a bialgebra and suppose that (M, · , ρ ) ∈ H MH and R = R(M,· ,ρ ) . Let (cij ) be a family of elements in H such that ρ (ml ) = mv ⊗ cv l Then
R(m_v ⊗ m_u) = c'^j_u · m_v ⊗ m_j
and

c'^j_u · m_v = x^{ij}_{vu} m_i = c^j_u · m_v.

Let

χ'^{ij}_{kl} = x^{ij}_{vu} c'^u_k c'^v_l − x^{αj}_{lk} c'^i_α
The universal property of the tensor algebra T (C) implies that there exists a unique algebra map f1 : T (C) → H such that f1 (cij ) = ci j , for all i, j = ij H 1, · · · , n. As (M, · , ρ ) ∈ H M we get that χkl = 0, and hence f1 (χij kl ) = 0, for all i, j, k, l = 1, · · · , n. So the map f1 factorizes through a map f : B(R) → H,
f (cij ) = ci j
Of course, for ml an arbitrary element of the given basis of M , we have (I ⊗ f )ρ(ml ) = mv ⊗ f (cvl ) = mv ⊗ cv l = ρ (ml )
Conversely, the relation (I ⊗ f )ρ = ρ necessarily implies f (cij ) = ci j , which proves the uniqueness of f . This completes the proof of part 2) the theorem. 3. For z ∈ M and j, k = 1, · · · , n we have the formula: R12 R13 − R13 R12 (z ⊗ mk ⊗ mj ) = crk csj − csj crk ·z ⊗ mr ⊗ ms (6.18) Let I be the two-sided ideal of T (C) generated by I and all [crk , csj ]. It follows from the formula ∆ [crk , csj ] = [cra , csb ] ⊗ cbj cak + cra csb ⊗ [cak , cbj ] that I is also a coideal of T (C) and from equation (6.18) we get that I ·M = 0. Define now B(R) = T (C)/I. Then B(R) is a commutative bialgebra, M has a structure of B(R)-Hopf module (M, · , ρ ) and R = R(M,· ,ρ ) . Combining Proposition 128 and Theorem 66, we obtain the following result. Corollary 37. Let M be a finite dimensional vector space and R ∈ End(M ⊗ M ). Then R is a solution of the Hopf equation if and only if there exists a bialgebra B(R) such that M has a structure of left-right B(R)-Hopf module (M, ·, ρ) and R = R(M,·,ρ) . Remarks 15. 1. The obstruction elements χij kl constructed in Theorem 66 are different from the homogenous elements ybij kl defined in 5.27 which correspond to the quantum Yang-Baxter equation. Despite that, as in the case of the quantum Yang-Baxter equation, the elements χij kl play the same role: the
two-sided ideal generated by them is also a coideal which annihilates M , which is the key point of the proof. 2. All commutative bialgebras B(R) which come from a commutative solution of the Hopf equation are quotients for various bialgebra structures which can be given on a polynomial algebra k[Y1 , · · · , Yn ] in commutative variables Y1 , · · · , Yn . Let (M, ·, ρ) ∈ H MH be a Hopf module over a bialgebra H. In Remark 18, we have seen that the map R = R(M,·,ρ) : M ⊗ M → M ⊗ M,
R(M,·,ρ) (m ⊗ n) = m[0] ⊗ m[1] · n (6.19)
is a solution of the pentagon equation. Furthermore, if H has an antipode, then R is invertible with inverse R−1 (m ⊗ n) = m[0] ⊗ S(m[1] )n We have seen in Proposition 124 that the Hopf equation and the pentagon equation are equivalent; thus it is no surprise that we have also an FRT Theorem for the pentagon equation. Theorem 67. Let M be a finite dimensional vector space and R ∈ End(M ⊗ M ). R is a solution of the pentagon equation if and only if there exists a bialgebra P (R) such that M has a structure of left-right P (R)-Hopf module (M, ·, ρ) and R = R(M,·,ρ) . Proof. The proof is completely similar to the one of Theorem 66. Keeping the same notation, the bialgebra P (R) is defined as follows: • P (R) is the free algebra generated by (cij )i,j=1,···,n and the relations u v ia j xij uv ck cl = xkl ca
(6.20)
for all i, j, k, l = 1, · · · , n. • the comultiplication and the counit is defined in such a way that the matrix (cij ) is comultiplicative. The structure of M as an object in P (R) MP (R) is the following: M is a right P (R)-comudule via ρ : M → M ⊗ P (R),
ρ(mu ) = mj ⊗ cju
and has a unique left P (R)-module structure · such that R = R(M,·,ρ) given by cjv · mu = xji vu mi
for all j, u, v = 1, · · · , n. Furthermore, looking at the obstruction (6.14), we see that P (R) = B(τ Rτ ).
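As a hedged numerical check of Proposition 124.1 (this code block is not part of the original text): R solves the Hopf equation if and only if W = τRτ solves the pentagon equation. We test this on the solution R_q = f_q ⊗ g_q of Example 3:

```python
# Proposition 124.1: R Hopf  <=>  W = tau R tau pentagon, tested on R_q.
import numpy as np

q = 2.0
f = np.array([[1.0, q], [0.0, 0.0]])
R = np.kron(f, np.eye(2) - f)                 # R_q, a solution of the Hopf equation

S = np.eye(4)[[0, 2, 1, 3]]                   # tau: the flip on M ⊗ M, dim M = 2
W = S @ R @ S                                 # W = tau R tau

I = np.eye(2)
def legs(A):
    A12 = np.kron(A, I)
    A23 = np.kron(I, A)
    P23 = np.kron(I, S)
    return A12, P23 @ A12 @ P23, A23          # A^{12}, A^{13}, A^{23}

R12, R13, R23 = legs(R)
W12, W13, W23 = legs(W)
assert np.allclose(R23 @ R13 @ R12, R12 @ R23)      # Hopf equation for R
assert np.allclose(W12 @ W13 @ W23, W23 @ W12)      # pentagon equation for W
print("W = tau R tau solves the pentagon equation")
```

The same legs() helper can be reused to test any other finite dimensional example in this chapter.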
We have seen in Chapter 5 that the FRT Theorem 61 for the QYBE is a special case of Theorem 60, stating that every solution R of the QYBE has the form R = R(σ,M,ρ) , for a coquasitriangular bialgebra (A(R), σ) and a right A(R)-comodule structure ρ on M . We will prove a similar result for the Hopf equation; the first step is to introduce a new class of bialgebras, that will play same role for the Hopf equation as coquasitriangular algebras for the QYBE. Definition 13. Let H be a bialgebra and C be a subcoalgebra of H. A klinear map σ : C ⊗ H → k is called a Hopf function if: (H1) σ(c(1) ⊗ h(1) )h(2) c(2) = σ(c(2) ⊗ h)c(1) (H2) σ(c ⊗ 1) = ε(c) (H3) σ(c ⊗ hk) = σ(c(1) ⊗ h)σ(c(2) ⊗ k) for all c ∈ C, h, k ∈ H. In this case we will say that (H, C, σ) is a bialgebra with a Hopf function. Remarks 16. 1. An immediate question is the following: why is σ only defined on C ⊗ H, and not on the whole of H ⊗ H (as in Definition 7)? The answer is that the bialgebra with a Hopf function (H, H, σ) is (k, k, Ik ), and this is due to the condition (H1). This can be easily seen: taking c = 1H in (H1), we find σ(1H ⊗ h)1H = σ(1H ⊗ h(1) )h(2) (6.21) and Tσ : H → k, Tσ (h) := σ(1H ⊗ h) is a right integral in H ∗ , and it is also an algebra map, by (H3), implying H = k. 2. (H2) and (H3) are exactly (B2) and (B3), of course with the restriction that they hold only on C ⊗ H. Also the left hand side of (H1) is the same as the left hand side of (B1), but their right hand sides are considerably different. 3. Let (H, C, σ) be a bialgebra with a Hopf function. If σ is right convolution invertible in Hom(C ⊗ H, k), then (H2) follows from (H3). Indeed, for c ∈ C we have: σ(c ⊗ 1) = σ(c(1) ⊗ 1)ε(c(2) ) = σ(c(1) ⊗ 1)σ(c(2) ⊗ 1)σ −1 (c(3) ⊗ 1) = σ(c(1) ⊗ 1)σ −1 (c(2) ⊗ 1) = ε(c) If H has an antipode S, then σ is invertible and σ −1 (c ⊗ h) = σ(c ⊗ S(h)), for all c ∈ C, h ∈ H. Now we will explain the relation between (H1) and the concept of right integral in H ∗ . 
6.2 The FRT Theorem for the Hopf equation

Let H be a bialgebra and C a subcoalgebra of H. If T ∈ H∗ is a right integral, then the map

σT : C ⊗ H → k,  σT(c ⊗ h) = ε(c)T(h),  for all c ∈ C, h ∈ H

satisfies (H1). Conversely, if 1H ∈ C and σ : C ⊗ H → k satisfies (H1), then the map

Tσ : H → k,  Tσ(h) := σ(1H ⊗ h),  for all h ∈ H

is a right integral. Assume that H has an antipode and that (H2) holds. Then Tσ(1H) = 1k and, by Maschke's Theorem for Hopf algebras, H is cosemisimple. As TσT = T, we obtain that the map

p : {σ : C ⊗ H → k | σ satisfies (H1)} → ∫^r_{H∗},  σ ↦ Tσ

is a projection, with section

i : ∫^r_{H∗} → {σ : C ⊗ H → k | σ satisfies (H1)},  T ↦ σT
We summarize our results in the next Proposition.

Proposition 129. Let H be a bialgebra and C a subcoalgebra of H.
1. If T ∈ H∗ is a right integral on H, then the map σT : C ⊗ H → k satisfies (H1).
2. If 1H ∈ C and σ : C ⊗ H → k satisfies (H1), then the map Tσ : H → k is a right integral on H. Furthermore, if (H2) holds and H has an antipode, then H is cosemisimple.

Proposition 130. Let H be a bialgebra, C a subcoalgebra of H and σ : C ⊗ H → k a k-linear map satisfying (H3). Suppose that (H1) holds on a basis of C and on a system of algebra generators of H. Then (H1) holds for all c ∈ C and h ∈ H.

Proof. Let c ∈ C be an element of the given basis of C and x, y ∈ H two of the algebra generators of H. We are done if we can show that (H1) holds for (c, xy). This is a straightforward computation:

σ(c(2) ⊗ xy)c(1) = σ(c(2)(1) ⊗ x)σ(c(2)(2) ⊗ y)c(1)
= σ(c(2) ⊗ y)σ(c(1)(2) ⊗ x)c(1)(1)
= σ(c(2) ⊗ y)σ(c(1)(1) ⊗ x(1))x(2)c(1)(2)
= σ(c(1) ⊗ x(1))x(2)σ(c(2)(2) ⊗ y)c(2)(1)
= σ(c(1) ⊗ x(1))x(2)σ(c(2)(1) ⊗ y(1))y(2)c(2)(2)
= σ(c(1) ⊗ x(1))σ(c(2) ⊗ y(1))x(2)y(2)c(3)
= σ(c(1) ⊗ x(1)y(1))x(2)y(2)c(2)
= σ(c(1) ⊗ (xy)(1))(xy)(2)c(2)
Examples 11. 1. Let H = k[G] be a group algebra and C an arbitrary subcoalgebra of H. Then there exists no Hopf function σ : C ⊗ k[G] → k. Indeed, any subcoalgebra of k[G] has the form k[F], where F is a subset of G. Suppose that σ : k[F] ⊗ k[G] → k is a Hopf function. From (H2) we get σ(f ⊗ 1) = 1 for all f ∈ F. Now let g ∈ G, g ≠ 1, and f ∈ F. From (H1) we obtain σ(f ⊗ g)gf = σ(f ⊗ g)f, i.e. σ(f ⊗ g) = 0 for all f ∈ F. But then, using (H3), we find a contradiction:

1 = σ(f ⊗ 1) = σ(f ⊗ gg^{-1}) = σ(f ⊗ g)σ(f ⊗ g^{-1}) = 0.

2. Let H = k_q⟨x, y | xy = qyx⟩ be the quantum plane, with

∆(x) = x ⊗ x,  ∆(y) = y ⊗ 1 + x ⊗ y,  ε(x) = 1,  ε(y) = 0.

Then C = kx is a subcoalgebra of H. For a ∈ k, let σa : C ⊗ H → k be given by

σa(x ⊗ 1) = 1,  σa(x ⊗ x) = 0,  σa(x ⊗ y) = a
and extend σa to the whole of C ⊗ H using (H3). Then σa is a Hopf function. By Proposition 130, it suffices to check that (H1) holds for h ∈ {x, y}. For h = x, (H1) reads σa(x ⊗ x)x² = σa(x ⊗ x)x, which holds if and only if σa(x ⊗ x) = 0. For h = y, (H1) takes the form σa(x ⊗ y)x + σa(x ⊗ x)yx = σa(x ⊗ y)x, which is true, as σa(x ⊗ x) = 0. In fact, we have also proved the converse: if (H, C, σ) is a bialgebra with a Hopf function, then σ = σa for some a ∈ k.

3. Let M be a monoid and N = {n ∈ M | xn = n, for all x ∈ M}. Let H = k[M] and C = k[F], with F ⊂ N. Let σ : k[F] ⊗ k[M] → k be such that σ(f ⊗ 1) = 1 and σ(f, •) : M → (k, ·) is a morphism of monoids for all f ∈ F. Then σ is a Hopf function. We give a concrete example: let a ∈ k and Fa(k) = {u : k → k | u(a) = a}. Then (Fa(k), ◦) is a monoid, F = {Ik} ⊂ N, and

σ : k[F] ⊗ k[Fa(k)] → k,  σ(Ik ⊗ u) = 1

is a Hopf function.

Let H be a bialgebra, C ⊂ H a subcoalgebra, and σ : C ⊗ H → k a k-linear map. As usual, σ12, σ13, σ23 : C ⊗ C ⊗ H → k are defined by

σ12(c ⊗ d ⊗ x) = ε(x)σ(c ⊗ d),  σ13(c ⊗ d ⊗ x) = ε(d)σ(c ⊗ x),  σ23(c ⊗ d ⊗ x) = ε(c)σ(d ⊗ x)

for all c, d ∈ C, x ∈ H.
Proposition 131. Let (H, C, σ) be a bialgebra with a Hopf function σ : C ⊗ H → k. Then

σ23 ∗ σ13 ∗ σ12 = σ12 ∗ σ23    (6.22)

in Hom(C ⊗ C ⊗ H, k). If (M, ρ) is a right C-comodule, then the map

R = R(σ,M,ρ) : M ⊗ M → M ⊗ M,  R(m ⊗ n) = σ(m[1] ⊗ n[1])m[0] ⊗ n[0]

is a solution of the Hopf equation.

Proof. For c, d ∈ C and x ∈ H, we have

(σ12 ∗ σ23)(c ⊗ d ⊗ x) = ε(x(1))σ(c(1) ⊗ d(1))ε(c(2))σ(d(2) ⊗ x(2)) = σ(c ⊗ d(1))σ(d(2) ⊗ x)

and

(σ23 ∗ σ13 ∗ σ12)(c ⊗ d ⊗ x)
= ε(c(1))σ(d(1) ⊗ x(1))ε(d(2))σ(c(2) ⊗ x(2))ε(x(3))σ(c(3) ⊗ d(3))
= σ(c(1) ⊗ x(2))σ(c(2) ⊗ d(2))σ(d(1) ⊗ x(1))
= σ(c ⊗ x(2)d(2))σ(d(1) ⊗ x(1))
= σ(c ⊗ σ(d(1) ⊗ x(1))x(2)d(2))
= σ(c ⊗ σ(d(2) ⊗ x)d(1))
= σ(c ⊗ d(1))σ(d(2) ⊗ x)

proving (6.22). The fact that R is a solution of the Hopf equation follows from (6.22) and from the formulas

R12R23(u ⊗ v ⊗ w) = R12(σ(v[1] ⊗ w[1])u ⊗ v[0] ⊗ w[0])
= σ(u[1] ⊗ v[0][1])σ(v[1] ⊗ w[1])u[0] ⊗ v[0][0] ⊗ w[0]
= σ(u[1] ⊗ v[1](1))σ(v[1](2) ⊗ w[1])u[0] ⊗ v[0] ⊗ w[0]
= (σ12 ∗ σ23)(u[1] ⊗ v[1] ⊗ w[1])u[0] ⊗ v[0] ⊗ w[0]

and

R23R13R12(u ⊗ v ⊗ w) = R23R13(σ(u[1] ⊗ v[1])u[0] ⊗ v[0] ⊗ w)
= R23(σ(u[0][1] ⊗ w[1])σ(u[1] ⊗ v[1])u[0][0] ⊗ v[0] ⊗ w[0])
= R23(σ(u[1] ⊗ w[1])σ(u[2] ⊗ v[1])u[0] ⊗ v[0] ⊗ w[0])
= σ(v[0][1] ⊗ w[0][1])σ(u[1] ⊗ w[1])σ(u[2] ⊗ v[1])u[0] ⊗ v[0][0] ⊗ w[0][0]
= σ(v[1] ⊗ w[1])σ(u[1] ⊗ w[2])σ(u[2] ⊗ v[2])u[0] ⊗ v[0] ⊗ w[0]
= σ(u[1](1) ⊗ w[1](2))σ(u[1](2) ⊗ v[1](2))σ(v[1](1) ⊗ w[1](1))u[0] ⊗ v[0] ⊗ w[0]
= (σ23 ∗ σ13 ∗ σ12)(u[1] ⊗ v[1] ⊗ w[1])u[0] ⊗ v[0] ⊗ w[0]

We will now present the analog of Theorem 60 for the Hopf equation. The bialgebra B(R) and the coaction ρ of B(R) on M are as in Theorem 66.

Theorem 68. Let M be a finite dimensional vector space and R ∈ End(M ⊗ M) a solution of the Hopf equation. Let C be the subcoalgebra of B(R) generated by the (c^i_j).
1. There exists a unique Hopf function σ : C ⊗ B(R) → k such that R = R(σ,M,ρ).
2. If R is bijective and commutative, then σ is invertible in the convolution algebra Hom(C ⊗ B(R), k).

Proof. 1. First we prove that σ is unique. Let σ : C ⊗ B(R) → k be a Hopf function such that R = R(σ,M,ρ), and let u, v = 1, ..., n. Then

R(σ,M,ρ)(m_v ⊗ m_u) = σ((m_v)[1] ⊗ (m_u)[1])(m_v)[0] ⊗ (m_u)[0] = σ(c^i_v ⊗ c^j_u)m_i ⊗ m_j

and the fact that R(σ,M,ρ)(m_v ⊗ m_u) = R(m_v ⊗ m_u) implies

σ(c^i_v ⊗ c^j_u) = x^{ij}_{vu}    (6.23)

As B(R) is generated as an algebra by the (c^i_j), the relations (6.23) together with (H2) and (H3) imply the uniqueness of σ.

Our next goal is to prove the existence of σ. First we define σ0 : C ⊗ C → k using (6.23). Then we extend σ0 to a map σ1 : C ⊗ T(C) → k such that (H2) and (H3) hold. In order to prove that σ1 factorizes through a map σ : C ⊗ B(R) → k, we have to show that σ1(C ⊗ I) = 0, with I the two-sided ideal of T(C) generated by all χ^{ij}_{kl}. This follows from the fact that

σ1(c^p_q ⊗ χ^{ij}_{kl}) = x^{ij}_{vu}σ1(c^p_q ⊗ c^u_k c^v_l) − x^{αj}_{lk}σ1(c^p_q ⊗ c^i_α)
= x^{ij}_{vu}σ1(c^p_β ⊗ c^u_k)σ1(c^β_q ⊗ c^v_l) − x^{αj}_{lk}x^{pi}_{qα}
= x^{ij}_{vu}x^{pu}_{βk}x^{βv}_{ql} − x^{αj}_{lk}x^{pi}_{qα} = 0
We have thus constructed σ : C ⊗ B(R) → k such that (H2) and (H3) hold and R = R(σ,M,ρ). We are left to prove (H1). By Proposition 130, it suffices to check (H1) for c = c^i_l and h = c^j_k, for all i, j, k, l = 1, ..., n. We have

σ(c(1) ⊗ h(1))h(2)c(2) = Σ_{u,v} σ(c^i_v ⊗ c^j_u)c^u_k c^v_l = Σ_{u,v} x^{ij}_{vu} c^u_k c^v_l

and

σ(c(2) ⊗ h)c(1) = Σ_α σ(c^α_l ⊗ c^j_k)c^i_α = Σ_α x^{αj}_{lk} c^i_α
so

σ(c(1) ⊗ h(1))h(2)c(2) − σ(c(2) ⊗ h)c(1) = χ^{ij}_{kl} = 0

and (H1) also holds, as needed.

2. Assume that R is bijective and let S = R^{-1}. Let (y^{ij}_{vu}) be the matrix of S, i.e.

S(m_v ⊗ m_u) = y^{ij}_{vu} m_i ⊗ m_j

As RS = SR = Id_{M⊗M}, we have

x^{pi}_{αβ}y^{αβ}_{qj} = δ^p_q δ^i_j,  y^{pi}_{αβ}x^{αβ}_{qj} = δ^p_q δ^i_j

We define

σ̄0 : C ⊗ C → k,  σ̄0(c^i_v ⊗ c^j_u) = y^{ij}_{vu}

and we extend σ̄0 to a map σ̄1 : C ⊗ T(C) → k in such a way that σ̄1 satisfies (H2) and (H3). First we prove that σ̄1 is the convolution inverse of σ1:

σ1((c^p_q)(1) ⊗ (c^i_j)(1))σ̄1((c^p_q)(2) ⊗ (c^i_j)(2)) = σ1(c^p_α ⊗ c^i_β)σ̄1(c^α_q ⊗ c^β_j) = x^{pi}_{αβ}y^{αβ}_{qj} = δ^p_q δ^i_j = ε(c^p_q)ε(c^i_j)

and

σ̄1((c^p_q)(1) ⊗ (c^i_j)(1))σ1((c^p_q)(2) ⊗ (c^i_j)(2)) = σ̄1(c^p_α ⊗ c^i_β)σ1(c^α_q ⊗ c^β_j) = y^{pi}_{αβ}x^{αβ}_{qj} = δ^p_q δ^i_j = ε(c^p_q)ε(c^i_j)

To show that σ : C ⊗ B(R) → k is also convolution invertible, we have to show that σ̄1 factorizes through a map σ̄ : C ⊗ B(R) → k. We will prove that this happens if and only if R12R13 = R13R12, i.e. R is commutative. σ̄1 factorizes if and only if σ̄1(c^p_q ⊗ χ^{ij}_{kl}) = 0, or, equivalently,

x^{ij}_{vu}σ̄1(c^p_q ⊗ c^u_k c^v_l) = x^{αj}_{lk}σ̄1(c^p_q ⊗ c^i_α)

which is equivalent to

x^{ij}_{vu}σ̄1(c^p_β ⊗ c^u_k)σ̄1(c^β_q ⊗ c^v_l) = x^{αj}_{lk}y^{pi}_{qα}

and to

x^{ij}_{vu}y^{pu}_{βk}y^{βv}_{ql} = x^{αj}_{lk}y^{pi}_{qα}
From Proposition 125, it follows that this is equivalent to

R23 S13 S12 = S12 R23

Since S = R^{-1}, this last equation turns into

R12 R23 = R23 R12 R13    (6.24)

Finally, R is a bijective solution of the Hopf equation, i.e. R12R23 = R23R13R12, and we see that (6.24) is equivalent to R12R13 = R13R12, completing the proof of the Theorem.

Remark 19. There is a major difference between part 2) of Theorem 68 and the corresponding statement for the quantum Yang-Baxter equation. In the QYBE case, the bijectivity of a solution R implies that the map σ : A(R) ⊗ A(R) → k making A(R) coquasitriangular is convolution invertible. Behind this is the observation that R is a solution of the QYBE if and only if R^{-1} is a solution of the QYBE. In the Hopf equation situation, things are complicated by the fact that a bijective R is a solution of the Hopf equation if and only if R^{-1} is a solution of the pentagon equation.

Now we introduce the dual notion of a bialgebra with a Hopf function (H, C, σ). It corresponds to the concept of a quasitriangular bialgebra.

Definition 14. Let H be a bialgebra and A a subalgebra of H. An element R = R¹ ⊗ R² ∈ A ⊗ H is called a Hopf element if

(HE1) ∆(R¹) ⊗ R² = R13R23
(HE2) ε(R¹)R² = 1
(HE3) ∆cop(a)R = R(1 ⊗ a) for all a ∈ A.

We will say that (H, A, R) is a bialgebra with a Hopf element.

Remarks 17. 1. (HE1) and (HE2) are exactly (QT1) and (QT2), up to the fact that R ∈ A ⊗ H in the Hopf case, while R ∈ H ⊗ H in the quasitriangular case. (HE3) is obtained by modifying the right hand side of (QT5) in such a way that an integral type condition appears. Let us explain this in detail. Take t = R¹ε(R²) ∈ A. (HE3) can be rewritten as

a(2)R¹ ⊗ a(1)R² = R¹ ⊗ R²a    (6.25)

for all a ∈ A. Applying I ⊗ ε to (6.25), we find at = ε(a)t for all a ∈ A. Hence, if A is a subbialgebra of H, then t is a left integral in A. Applying I ⊗ ε ⊗ ε to (HE1) we get t² = t. It follows that t = tt = ε(t)t, hence ε(t) = 1. Using the Maschke theorem for Hopf algebras, we conclude that: if (H, A, R) is a bialgebra with a Hopf element and A is a finite dimensional subbialgebra of H with an antipode, then A is semisimple.
Conversely, if t is a left integral in A, then R = t ⊗ 1 satisfies (HE3).

2. Let H be a bialgebra and A a subalgebra of H. Then R = 1 ⊗ 1 is a Hopf element if and only if A = k. Indeed, if R = 1 ⊗ 1, then (HE3) takes the form ∆cop(a) = 1 ⊗ a for all a ∈ A. Hence a = ε(a)1H for all a ∈ A, i.e. A = k.

3. Let (H, A, R) be a bialgebra with a Hopf element and suppose that H has an antipode S. Then R is invertible and R^{-1} = S(R¹) ⊗ R². Moreover, u = S(R²)R¹ ∈ H satisfies the condition S(a)u = ε(a)u for all a ∈ A; this formula is obtained by applying mHτ(I ⊗ S) to (6.25). We observe that if A ≠ k, then u is not invertible (if u were invertible, then R^{-1} = S(R¹) ⊗ R² = ε(R¹) ⊗ R² = 1 ⊗ 1, i.e. A = k).

Proposition 132. If (H, A, R) is a bialgebra with a Hopf element, then

R23R13R12 = R12R23    (6.26)

in A ⊗ H ⊗ H. If (M, ·) is a left H-module, then the map

R = R(R,M,·) : M ⊗ M → M ⊗ M,  R(m ⊗ n) = R¹ · m ⊗ R² · n

is a solution of the Hopf equation.

Proof. (HE1) is equivalent to

(HE1′) ∆cop(R¹) ⊗ R² = R23R13

Writing r = R, we compute

R23R13R12 = (∆cop(R¹) ⊗ R²)(r¹ ⊗ r² ⊗ 1) = ∆cop(R¹)R ⊗ R² = R(1 ⊗ R¹) ⊗ R² = r¹ ⊗ r²R¹ ⊗ R² = R12R23

The second statement follows from (6.26), since

R23R13R12(l ⊗ m ⊗ n) = (R23R13R12) · (l ⊗ m ⊗ n)  and  R12R23(l ⊗ m ⊗ n) = (R12R23) · (l ⊗ m ⊗ n)

for all l, m, n ∈ M.
6.3 New examples of noncommutative noncocommutative bialgebras

Using Theorem 66 we can construct new examples of noncommutative noncocommutative bialgebras. These are different from the ones that appear in
quantum group theory, because the relations that we factor out are not always homogeneous.

If M is a vector space with basis {m_1, ..., m_n}, then the matrix of a k-linear map R : M ⊗ M → M ⊗ M is an n² × n²-matrix. We will write this matrix with respect to the basis {m_i ⊗ m_j | i, j = 1, ..., n}, in the lexicographic order. For example, (6.27) is the matrix of R with respect to the basis {m_1 ⊗ m_1, m_1 ⊗ m_2, m_2 ⊗ m_1, m_2 ⊗ m_2}.

Our first example is a solution of the Hopf equation in characteristic two, giving rise to a five dimensional noncommutative noncocommutative bialgebra B(R).

Proposition 133. Let k be a field and R the matrix in M_4(k) given by

      1 0 0 0
R =   0 1 1 0      (6.27)
      0 0 1 0
      0 0 0 1

1. R is a solution of the Hopf equation if and only if k has characteristic two. In this case R is commutative.
2. If char(k) = 2, then the bialgebra B(R) is the algebra with generators x, y, z and relations

x² = x,  y² = z² = yx = yz = 0,  xy = y,  xz = zx = z

The comultiplication ∆ and the counit ε are given by:

∆(x) = x ⊗ x + y ⊗ z,  ∆(y) = x ⊗ y + y ⊗ x + y ⊗ zy,  ∆(z) = z ⊗ x + x ⊗ z + zy ⊗ z,
ε(x) = 1,  ε(y) = ε(z) = 0.
Furthermore, dim_k(B(R)) = 5.

Proof. 1. A direct computation tells us that

          1 0 0 0 0 0 0 0
          0 1 1 0 0 0 0 0
          0 0 1 0 1 0 0 0
R12R23 =  0 0 0 1 0 1 1 0
          0 0 0 0 1 0 0 0
          0 0 0 0 0 1 1 0
          0 0 0 0 0 0 1 0
          0 0 0 0 0 0 0 1
and

             1 0 0 0 0 0 0 0
             0 1 1 0 α 0 0 0
             0 0 1 0 1 0 0 0
R23R13R12 =  0 0 0 1 0 1 1 0
             0 0 0 0 1 0 0 0
             0 0 0 0 0 1 1 0
             0 0 0 0 0 0 1 0
             0 0 0 0 0 0 0 1
where α = 1 + 1. Hence, R is a solution of the Hopf equation if and only if char(k) = 2. In this case we also have that

                    1 0 0 0 0 0 0 0
                    0 1 0 0 1 0 0 0
                    0 0 1 0 1 0 0 0
R12R13 = R13R12 =   0 0 0 1 0 1 1 0
                    0 0 0 0 1 0 0 0
                    0 0 0 0 0 1 0 0
                    0 0 0 0 0 0 1 0
                    0 0 0 0 0 0 0 1
i.e. R is commutative.

2. Suppose that char(k) = 2. Among the scalars (x^{kl}_{ij}) that define R via the formula (6.11), the only nonzero ones are

x^{11}_{11} = x^{22}_{22} = x^{12}_{12} = x^{12}_{21} = x^{21}_{21} = 1.
Now, the relations χ^{ij}_{kl} = 0, written in lexicographic order, are:

c^1_1c^1_1 = c^1_1,  c^1_1c^1_2 = c^1_2,  c^1_2c^1_1 = 0,  c^1_2c^1_2 = 0,
c^2_1c^1_1 + c^1_1c^2_1 = 0,  c^2_1c^1_2 + c^1_1c^2_2 = c^1_1,
c^2_2c^1_1 + c^1_2c^2_1 = c^1_1,  c^2_2c^1_2 + c^1_2c^2_2 = c^1_2,
c^1_1c^2_1 = c^2_1,  c^1_1c^2_2 = c^2_2,  c^1_2c^2_1 = 0,  c^1_2c^2_2 = 0,
c^2_1c^2_1 = 0,  c^2_1c^2_2 = c^2_1,  c^2_2c^2_1 = c^2_1,  c^2_2c^2_2 = c^2_2.
Now, writing c^1_1 = x, c^1_2 = y, c^2_1 = z, c^2_2 = t, and using that char(k) = 2, we get the following relations:

x² = x,  y² = z² = yx = yz = 0,  xy = y,  xz = zx = z,
t² = t,  xt = t,  tx = x,  yt = 0,  ty = y,  zt = tz = z,  zy = x + t.

So t lies in the algebra generated by x, y, z, namely t = zy − x. If we substitute t in all the relations in which t is involved, then these become identities, and the relations given in the statement of the proposition remain. The formula for ∆ follows, as the matrix

( x  y )
( z  t )

is comultiplicative.

We will now prove that dim_k(B(R)) = 5. More precisely, we will show in an elementary way (without using the Diamond Lemma) that {1, x, y, z, zy} is a k-basis of B(R). From the relations which define B(R) we obtain:

x(zy) = zy,  (zy)² = (zy)x = y(zy) = (zy)y = z(zy) = (zy)z = 0.

These relations tell us that {1, x, y, z, zy} generates B(R) as a vector space over k. We are done if we can show that {1, x, y, z, zy} is linearly independent. Let a, b, c, d, e ∈ k be such that

a + bx + cy + dz + e(zy) = 0

Left multiplication by y gives a = 0. Right multiplication by z gives b = 0, and then right multiplication by x gives d = 0. Finally, left multiplication by z gives c = 0, and e = 0 follows automatically. Hence {1, x, y, z, zy} is a k-basis of B(R).

Remarks 18. 1. The proof also tells us that B(R) can be described as follows:
– As a vector space, B(R) is five dimensional with basis {1, x, y, z, t}.
– The multiplication is given by:
x² = x,  y² = z² = 0,  t² = t,  xy = y,  yx = 0,  yz = 0,  xz = zx = z,
xt = t,  tx = x,  yt = 0,  ty = y,  zt = tz = z,  zy = x + t.

– The comultiplication ∆ and the counit ε are defined in such a way that the matrix

( x  y )
( z  t )

is comultiplicative.

Now, let C be the four dimensional subcoalgebra of B(R) with k-basis {x, y, z, t}. The map σ : C ⊗ B(R) → k defined by

σ(x ⊗ 1) = 1, σ(x ⊗ x) = 1, σ(x ⊗ y) = 0, σ(x ⊗ z) = 0, σ(x ⊗ t) = 1,
σ(y ⊗ 1) = 0, σ(y ⊗ x) = 0, σ(y ⊗ y) = 0, σ(y ⊗ z) = 1, σ(y ⊗ t) = 0,
σ(z ⊗ 1) = 0, σ(z ⊗ x) = 0, σ(z ⊗ y) = 0, σ(z ⊗ z) = 1, σ(z ⊗ t) = 0,
σ(t ⊗ 1) = 1, σ(t ⊗ x) = 1, σ(t ⊗ y) = 0, σ(t ⊗ z) = 0, σ(t ⊗ t) = 0

is a Hopf function. As R is bijective and commutative, we obtain that σ is convolution invertible. Since k has characteristic two, R^{-1} = R and σ^{-1} = σ.

The bialgebra B(R) is not a Hopf algebra; localizing B(R), we obtain a Hopf algebra isomorphic to the group algebra kC2. Indeed, let S be a potential antipode. Then

S(x)x + S(y)z = 1  and  S(x)y + S(y)t = 0

If we multiply the second equation on the right by z, we get S(y)z = 0, so S(x)x = 1. But x² = x, so x = 1, and then y = 0, t = 1. We obtain the Hopf algebra k⟨z | z² = 0⟩, with ∆(z) = z ⊗ 1 + 1 ⊗ z, ε(z) = 0. If we denote g = z + 1, then g² = 1, ∆(g) = g ⊗ g, ε(g) = 1, hence the localization of B(R) is isomorphic to the Hopf algebra kG, where G = {1, g} is a group with two elements.

2. R is commutative, so we can construct the bialgebra

B̄(R) = k[X, Z]/(X² − X, Z², XZ − Z)

Its coalgebra structure is given by

∆(X) = X ⊗ X,
∆(Z) = X ⊗ Z + Z ⊗ X
ε(X) = 1,
ε(Z) = 0.
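The characteristic-two dichotomy of Proposition 133 can be spot-checked numerically. The sketch below is an illustration, not part of the original text; the helper names `lift` and `matmul` are ours. It builds the 4×4 matrix R of (6.27), embeds it into the three legs of (k²)⊗³, and compares both sides of the Hopf equation over ℤ: they differ in exactly one entry, by 2 (the α = 1 + 1 above), so R solves the Hopf equation precisely in characteristic two, while the commutativity R12R13 = R13R12 needs no reduction.

```python
# Illustrative check of Proposition 133 (not from the original text).
# R is the matrix (6.27), on the lexicographic basis of k^2 ⊗ k^2.
R = [[1, 0, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lift(R, p, q):
    """Embed a map of (k^2)⊗(k^2) into legs p, q of (k^2)⊗3 (identity on the other leg)."""
    out = [[0] * 8 for _ in range(8)]
    for r in range(8):
        for c in range(8):
            rb = [(r >> s) & 1 for s in (2, 1, 0)]   # row index bits, legs 0,1,2
            cb = [(c >> s) & 1 for s in (2, 1, 0)]   # column index bits
            other = ({0, 1, 2} - {p, q}).pop()
            if rb[other] == cb[other]:
                out[r][c] = R[2 * rb[p] + rb[q]][2 * cb[p] + cb[q]]
    return out

R12, R13, R23 = lift(R, 0, 1), lift(R, 0, 2), lift(R, 1, 2)
lhs = matmul(R12, R23)                # R12 R23
rhs = matmul(matmul(R23, R13), R12)   # R23 R13 R12
diff = [lhs[i][j] - rhs[i][j] for i in range(8) for j in range(8)]
commutes = matmul(R12, R13) == matmul(R13, R12)
```

Reducing the single nonzero entry of `diff` modulo 2 kills it, confirming the statement of the proposition.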
In the next Propositions we will construct the bialgebras that arise from the solutions of the Hopf equation given in Example 10 3). Taking quotients of one of these bialgebras, we will be able to construct a noncommutative noncocommutative bialgebra of dimension 2n + 1, for any positive integer n and any field k. First we will construct the bialgebra B(Rq ), which can be associated to the solution Rq = fq ⊗ fq , where fq is defined in Example 10. It is worthwhile to note that these bialgebras do not depend on q ∈ k \ {0}.
Proposition 134. Let q ∈ k and let Rq be the solution of the Hopf equation given by

       1 q q q²
Rq =   0 0 0 0
       0 0 0 0
       0 0 0 0

Let E_q²(k) be the bialgebra B(Rq).

1. If q = 0, then E_0²(k) is the algebra with generators x, y, z and relations x² = x, xz = zx = z² = 0. The comultiplication ∆ and the counit ε are given by

∆(x) = x ⊗ x,  ∆(y) = y ⊗ y,  ∆(z) = x ⊗ z + z ⊗ y,
ε(x) = ε(y) = 1,  ε(z) = 0.

2. If q ≠ 0, then E_q²(k) is the algebra with generators A, B and relation

B³ = B²

The comultiplication ∆ and the counit ε are given by:

∆(A) = A ⊗ A,  ∆(B) = B ⊗ A + B² ⊗ (B − A),  ε(A) = ε(B) = 1
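Before the structural proof below, the claim that Rq solves the Hopf equation can be spot-checked numerically; the sketch is ours, not the text's, and uses the sample value q = 3 together with the factorization Rq = fq ⊗ fq, where fq(m_1) = m_1, fq(m_2) = q·m_1 (the map fq of Example 10), which reproduces the matrix above.

```python
# Illustrative check of Proposition 134 for the sample value q = 3 (an assumption;
# q is a generic scalar in the text).
q = 3
f = [[1, q],
     [0, 0]]            # f_q: m1 -> m1, m2 -> q*m1; note f is idempotent

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

I2 = [[1, 0], [0, 1]]
Rq  = kron(f, f)            # the 4x4 matrix of Proposition 134
R12 = kron(Rq, I2)          # acts on legs 1,2 of (k^2)⊗3
R23 = kron(I2, Rq)          # acts on legs 2,3
R13 = kron(kron(f, I2), f)  # f on leg 1, identity on leg 2, f on leg 3
lhs = matmul(R12, R23)
rhs = matmul(matmul(R23, R13), R12)
```

Since f² = f, both sides reduce to f ⊗ f ⊗ f, so the equality holds for every q.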
Proof. We proceed as in Proposition 133. The (x^{ij}_{vu}) that are different from zero are:

x^{11}_{11} = 1,  x^{11}_{21} = x^{11}_{12} = q,  x^{11}_{22} = q².

Now the relations χ^{ij}_{kl} = 0 are

c^1_1c^1_1 + qc^2_1c^1_1 + qc^1_1c^2_1 + q²c^2_1c^2_1 = c^1_1
c^1_1c^1_2 + qc^2_1c^1_2 + qc^1_1c^2_2 + q²c^2_1c^2_2 = qc^1_1
c^1_2c^1_1 + qc^2_2c^1_1 + qc^1_2c^2_1 + q²c^2_2c^2_1 = qc^1_1
c^1_2c^1_2 + qc^2_2c^1_2 + qc^1_2c^2_2 + q²c^2_2c^2_2 = q²c^1_1

while the remaining relations reduce to 0 = 0, 0 = c^2_1, 0 = qc^2_1 and 0 = q²c^2_1. Hence c^2_1 = 0. If we denote c^1_1 = x, c^2_2 = y, c^1_2 = z, then we obtain the following description of E_q²(k) as a bialgebra:
– as an algebra, E_q²(k) has generators x, y, z and relations

x² = x,  xz + qxy = qx,  zx + qyx = qx,  z² + qyz + qzy + q²y² = q²x.

– the comultiplication ∆ and the counit ε are given by the equations

∆(x) = x ⊗ x,  ∆(y) = y ⊗ y,  ∆(z) = x ⊗ z + z ⊗ y    (6.28)

and

ε(x) = ε(y) = 1,  ε(z) = 0.    (6.29)

For q = 0, we obtain the relations for E_0²(k). If q ≠ 0, then x lies in the algebra generated by y and z:

x = y² + q^{-1}zy + q^{-1}yz + q^{-2}z² = (y + q^{-1}z)².

Substituting x in the other three relations, the only remaining defining relation is

(y + q^{-1}z)³ = (y + q^{-1}z)²

the other two being linearly dependent on it. Writing A = y and B = y + q^{-1}z, we obtain the desired description of E_q²(k).

Remarks 19. 1. Let C be the three dimensional subcoalgebra of E_q²(k) with k-basis {x, y, z}. Then σ : C ⊗ E_q²(k) → k defined by

σ(x ⊗ 1) = 1, σ(x ⊗ x) = 1, σ(x ⊗ y) = 0, σ(x ⊗ z) = q,
σ(y ⊗ 1) = 1, σ(y ⊗ x) = 0, σ(y ⊗ y) = 0, σ(y ⊗ z) = 0,
σ(z ⊗ 1) = 0, σ(z ⊗ x) = q, σ(z ⊗ y) = 0, σ(z ⊗ z) = q²

is a Hopf function.

2. If q ≠ 0, then B² is a grouplike element of E_q²(k), since

∆(B²) = B² ⊗ A² + B² ⊗ (AB − A²) + B² ⊗ (BA − A²) + B⁴ ⊗ (B² − BA − AB + A²)
= B² ⊗ A² + B² ⊗ (AB − A²) + B² ⊗ (BA − A²) + B² ⊗ (B² − BA − AB + A²)
= B² ⊗ B²

3. Let n ≥ 2 be a positive integer. The two-sided ideal I of E_0²(k) generated by yⁿ − y, zy, xy − x and yx − x is a biideal (cf. [96]), and B_{2n+1}(k) = E_0²(k)/I is a (2n+1)-dimensional noncommutative noncocommutative bialgebra. The bialgebra B_{2n+1}(k) can be described as follows:
– B_{2n+1}(k) is the algebra with generators x, y, z and relations

x² = x,  yⁿ = y,  xz = zx = z² = zy = 0,  xy = yx = x.

– The comultiplication ∆ and the counit ε are given by

∆(x) = x ⊗ x,  ∆(y) = y ⊗ y,  ∆(z) = x ⊗ z + z ⊗ y,
ε(x) = ε(y) = 1,  ε(z) = 0.

4. Observe that y does not appear in the relations of E_0²(k). As ∆(y − 1) = (y − 1) ⊗ y + 1 ⊗ (y − 1) and ε(y − 1) = 0, we get that the two-sided ideal generated by y − 1 is also a coideal. We can add the new relation y = 1 to the definition of E_0²(k), and we obtain a three dimensional noncocommutative bialgebra T(k). More explicitly:
– {1, x, z} is a basis of T(k) as a vector space.
– The multiplication is given by

x² = x,  xz = zx = z² = 0.

– The comultiplication ∆ and the counit ε are given by

∆(x) = x ⊗ x,  ∆(z) = x ⊗ z + z ⊗ 1,  ε(x) = 1,  ε(z) = 0.
In [103], Kaplansky gives two examples of three dimensional bialgebras over a field of characteristic zero, both of them commutative and cocommutative. The only difference between our T(k) and one of Kaplansky's bialgebras is the relation ∆(z) = x ⊗ z + z ⊗ 1 (in [103]: ∆(z) = 1 ⊗ z + z ⊗ 1). This minor change of ∆ (in our case k being a field of arbitrary characteristic) makes T(k) noncocommutative.

The proofs of Propositions 135 and 136 are left to the reader, since they are similar to the proofs of Propositions 133 and 134.

Proposition 135. Let q ∈ k and let Rq be the solution of the Hopf equation given by

       0 −q 0 −q²
Rq =   0  1 0  q
       0  0 0  0
       0  0 0  0

Let B_q²(k) be the bialgebra B(Rq).

1. If q = 0, the bialgebra B_0²(k) has generators x, y, z and relations

yx = x,  yz = 0
The comultiplication ∆ and the counit ε are given by

∆(x) = x ⊗ x,  ∆(y) = y ⊗ y,  ∆(z) = x ⊗ z + z ⊗ y,
ε(x) = ε(y) = 1,  ε(z) = 0.

2. If q ≠ 0, the bialgebra B_q²(k) has generators A, B and relation

A²B = AB

The comultiplication ∆ and the counit ε are given by:

∆(A) = A ⊗ A,  ∆(B) = q^{-1}AB ⊗ B + (B − AB) ⊗ A,  ε(A) = 1,  ε(B) = q.
Remark 20. B_q²(k) is not a Hopf algebra, but we can localize it to obtain one. As ∆(x) = x ⊗ x, ∆(y) = y ⊗ y and ε(x) = ε(y) = 1, we should add new generators which make x and y invertible. But then y = 1 and z = q(x − 1). It follows that the localization of the bialgebra B_q²(k) is the usual Hopf algebra k[X, X^{-1}], with ∆(X) = X ⊗ X, ε(X) = 1, and antipode S(X) = X^{-1}.

Example 24. Let C be the three dimensional subcoalgebra of B_0²(k) with basis {x, y, z}. An easy but long and boring computation shows that σ : C ⊗ B_0²(k) → k is a Hopf function if and only if there exist a, b ∈ k with ab = 0 such that

σ(x ⊗ 1) = 1, σ(x ⊗ x) = 0, σ(x ⊗ y) = a, σ(x ⊗ z) = b,
σ(y ⊗ 1) = 1, σ(y ⊗ x) = 0, σ(y ⊗ y) = 0, σ(y ⊗ z) = 0,
σ(z ⊗ 1) = 0, σ(z ⊗ x) = 0, σ(z ⊗ y) = 0, σ(z ⊗ z) = 0.

Proposition 136. For q ∈ k, let Rq be the solution of the Hopf equation given by

       1 0 q 0
Rq =   0 1 0 q
       0 0 0 0
       0 0 0 0

Write D_q²(k) = B(Rq). The bialgebra D_0²(k) has generators x, y, z and relations

x² = x = yx,  zx = xz = z² = yz = 0

The comultiplication ∆ and the counit ε are given by:
∆(x) = x ⊗ x,  ∆(y) = y ⊗ y,  ∆(z) = x ⊗ z + z ⊗ y,  ε(x) = ε(y) = 1,  ε(z) = 0.
For q ≠ 0, D_q²(k) has generators A, B and relations

A³ = A²,  BA = 0.

The comultiplication ∆ and the counit ε are given by:

∆(A) = A ⊗ A + q^{-1}(A² − A) ⊗ B,  ∆(B) = A² ⊗ B + B ⊗ A − q^{-1}B ⊗ B,  ε(A) = 1,  ε(B) = 0.
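As a sanity check on the matrix of Proposition 136 as reconstructed above (it is the matrix of fq ⊗ Id, with fq as in Proposition 134), the sketch below — ours, not the text's, with sample value q = 5 — verifies numerically that it satisfies both the Hopf equation and the quantum Yang-Baxter equation.

```python
# Illustrative check for Proposition 136 (sample value q = 5 is an assumption;
# the matrix used is our reconstruction Rq = f_q ⊗ Id).
q = 5
f  = [[1, q], [0, 0]]    # idempotent f_q
I2 = [[1, 0], [0, 1]]

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

Rq  = kron(f, I2)            # 4x4 matrix of Proposition 136 (reconstruction)
R12 = kron(Rq, I2)
R23 = kron(I2, Rq)
R13 = kron(kron(f, I2), I2)  # f on leg 1, identity on legs 2 and 3
hopf_lhs = matmul(R12, R23)
hopf_rhs = matmul(matmul(R23, R13), R12)
qybe_lhs = matmul(matmul(R12, R13), R23)
qybe_rhs = matmul(matmul(R23, R13), R12)
```

Both identities reduce to f² = f, so they hold for any q.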
Remarks 20. 1. Rq is also a solution of the quantum Yang-Baxter equation. The bialgebra A(Rq) from Theorem 60 has generators x, y, z, t and relations

xy − yx = qyz,  zx = xz = zy = z² = zt = 0,  xy + qxt = qx²,
y² + qyt = qxy,  xt − tx = qtz,  ty + qt² = qxt.

The comultiplication ∆ and the counit ε are such that the matrix

( x  y )
( z  t )

is comultiplicative.

2. The bialgebra D_0²(k) is the quotient of B_0²(k) by the two-sided ideal (which is also a coideal) generated by

x² − x,  zx,  xz,  z².

D_0²(k) is also the quotient of E_0²(k) by the two-sided ideal generated by

yx − x,  yz.

3. Let n ≥ 2 be a positive integer. The bialgebras B_0²(k), D_0²(k) and E_0²(k) constructed in the previous Propositions can be generalized to B_0ⁿ(k), D_0ⁿ(k) and E_0ⁿ(k). We will describe B_0ⁿ(k). Let π1 : kⁿ → kⁿ be the projection of kⁿ onto the first component, i.e. π1((x_1, x_2, ..., x_n)) = (x_1, 0, ..., 0), and let π̄1 = Id_{kⁿ} − π1 be the projection of kⁿ onto the hyperplane x_1 = 0, that is, π̄1((x_1, x_2, ..., x_n)) = (0, x_2, ..., x_n). Then π1 ⊗ π̄1 is a solution of the Hopf equation, and the bialgebra B_0ⁿ(k) = B(π1 ⊗ π̄1) can be described as follows:
– B_0ⁿ(k) has generators (c^i_j)_{i,j=1,...,n} and relations

c^i_1 = 0,  c^j_k c^1_l = δ^j_k δ^1_l c^1_1

for all i, j ≥ 2 and k, l ≥ 1. As before, δ^u_v is the Kronecker symbol.
– The comultiplication ∆ and the counit ε are such that the matrix (c^i_j) is comultiplicative.

The proof is similar to the one of Proposition 134. Among the elements (x^{ij}_{vu}) which define π1 ⊗ π̄1, the only nonzero ones are

x^{1t}_{1t} = 1,  for all t ≥ 2.

If i ≥ 2, all the relations χ^{ij}_{kl} = 0 reduce to 0 = 0, with the exception of the relations χ^{ij}_{j1} = 0 for j ≥ 2, which give us 0 = c^i_1 for all i ≥ 2. The relations χ^{1j}_{kl} = 0 give us c^j_k c^1_l = δ^j_k δ^1_l c^1_1 for all j ≥ 2 and k, l ≥ 1. Other new examples of bialgebras can be constructed starting from projections of kⁿ onto different intersections of hyperplanes.

We end this section with one more example, communicated to us by G. Mititica.

Example 25. Let n be a positive integer and k a field such that n is invertible in k. Let R = (x^{kl}_{ij}) be given by

x^{kl}_{ij} = n^{-1} if i + j ≡ k + l (mod n),  x^{kl}_{ij} = 0 if i + j ≢ k + l (mod n)    (6.30)

It is easy to show that R = (x^{kl}_{ij}) is a solution of the Hopf equation. For n = 2, the bialgebra B(R) from Theorem 66 is given as follows: B(R) has generators x and y and relations

x² + y² = x,  xy + yx = y.

The comultiplication ∆ and the counit ε are given by:

∆(x) = x ⊗ x + y ⊗ y,  ∆(y) = x ⊗ y + y ⊗ x,  ε(x) = 1,  ε(y) = 0.
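For n = 2, the claim of Example 25 that R solves the Hopf equation can be verified by exact rational arithmetic. The sketch below is an illustration, not part of the original text; `lift` and `matmul` are our helper names.

```python
# Illustrative check of Example 25 for n = 2, over the rationals.
from fractions import Fraction

half = Fraction(1, 2)
idx = [(i, j) for i in range(2) for j in range(2)]   # lexicographic basis of k^2 ⊗ k^2
# x^{kl}_{ij} = 1/2 if i+j ≡ k+l (mod 2), else 0; rows indexed by (k,l), columns by (i,j).
R = [[half if (i + j + k + l) % 2 == 0 else Fraction(0)
      for (i, j) in idx] for (k, l) in idx]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lift(R, p, q):
    """Embed a map of (k^2)⊗(k^2) into legs p, q of (k^2)⊗3 (identity on the other leg)."""
    out = [[Fraction(0)] * 8 for _ in range(8)]
    for r in range(8):
        for c in range(8):
            rb = [(r >> s) & 1 for s in (2, 1, 0)]
            cb = [(c >> s) & 1 for s in (2, 1, 0)]
            other = ({0, 1, 2} - {p, q}).pop()
            if rb[other] == cb[other]:
                out[r][c] = R[2 * rb[p] + rb[q]][2 * cb[p] + cb[q]]
    return out

R12, R13, R23 = lift(R, 0, 1), lift(R, 0, 2), lift(R, 1, 2)
lhs = matmul(R12, R23)
rhs = matmul(matmul(R23, R13), R12)
```

Incidentally, R is idempotent (R² = R), which is visible in the computation as well.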
6.4 The pentagon equation versus the structure and the classification of finite dimensional Hopf algebras

In this section we will present a fundamental construction related to the pentagon equation, which associates a finite dimensional Hopf algebra to any solution of the pentagon equation. This construction is originally due to Baaj and Skandalis (see [10]) in the case of unitary multiplicative operators R ∈ L(K ⊗ K), where K is a separable Hilbert space, and to Davydov (see [65]) for vector spaces K over arbitrary fields. We will follow [137], leading us to the structure and the classification of finite dimensional Hopf algebras. A key role is played by the canonical element of the Heisenberg double of a Hopf algebra. Let M be a finite dimensional vector space and
ϕ : M ⊗ M∗ → End(M),  ϕ(v ⊗ v∗)(w) = ⟨v∗, w⟩v

the canonical isomorphism. The element R = ϕ^{-1}(Id_M) is called the canonical element of M ⊗ M∗. Of course

R = Σ_{i=1}^{n} e_i ⊗ e∗_i

where {e_i, e∗_i | i = 1, ..., n} is a dual basis; R is independent of the choice of the dual basis.

Throughout this Section, A will be a finite dimensional algebra and R ∈ A ⊗ A will be an invertible solution of the pentagon equation

R12R13R23 = R23R12    (6.31)
For later use, we remark that if R ∈ A ⊗ A is a solution of the pentagon equation and f : A → B is an algebra map, then (f ⊗ f)(R) is a solution of the pentagon equation in B ⊗ B ⊗ B. We can thus define the category Pent of pentagon objects: the objects are pairs (A, R), where A is a finite dimensional algebra and R ∈ A ⊗ A is an invertible solution of the pentagon equation. A morphism f : (A, R) → (B, T) between two pentagon objects (A, R) and (B, T) is an algebra map f : A → B such that (f ⊗ f)(R) = T. Pent is a monoidal category under the product (A, R) ⊗ (B, T) = (A ⊗ B, R13T24). The following is the pentagon equation version of Proposition 126.

Proposition 137. Let A be an algebra and R ∈ A ⊗ A an invertible solution of the pentagon equation. Consider the comultiplications ∆l, ∆r : A → A ⊗ A given by

∆r(a) = R^{-1}(1A ⊗ a)R = U¹R¹ ⊗ U²aR²    (6.32)
∆l(a) = R(a ⊗ 1A)R^{-1} = R¹aU¹ ⊗ R²U²    (6.33)

where U = U¹ ⊗ U² = R^{-1}. Then Ar = (A, ·, ∆r) and Al = (A, ·, ∆l) are bialgebras without a counit.

Proof. It is obvious that ∆r and ∆l are algebra maps. For a ∈ A we have

(Id ⊗ ∆r)∆r(a) = (R23)^{-1}(R13)^{-1}(1A ⊗ 1A ⊗ a)R13R23

and

(∆r ⊗ Id)∆r(a) = (R12)^{-1}(R23)^{-1}(1A ⊗ 1A ⊗ a)R23R12

so ∆r is coassociative if and only if

R23R12(R23)^{-1}(R13)^{-1}(1A ⊗ 1A ⊗ a) = (1A ⊗ 1A ⊗ a)R23R12(R23)^{-1}(R13)^{-1}
Using the pentagon equation (6.31), we find that this is equivalent to

R12(1A ⊗ 1A ⊗ a) = (1A ⊗ 1A ⊗ a)R12

and this equality holds for any a ∈ A. In a similar way we can prove that ∆l is also coassociative.

It follows from Proposition 137 that we can put two different algebra structures (without unit) on A∗: the multiplications are the convolutions ∗l and ∗r, the dual maps of ∆l and ∆r, i.e.

⟨ω ∗l ω′, a⟩ = ⟨ω, R¹aU¹⟩⟨ω′, R²U²⟩,  ⟨ω ∗r ω′, a⟩ = ⟨ω, U¹R¹⟩⟨ω′, U²aR²⟩

for all ω, ω′ ∈ A∗ and a ∈ A.

We have seen in Example 9 2) that the canonical element of the Drinfeld double is a solution of the QYBE. In the case of the pentagon equation, a similar result holds for the Heisenberg double H(L) of a Hopf algebra L (cf. Section 4.1). Let L be a Hopf algebra. Recall that L∗ is a left L-module algebra in the usual way,

⟨h · g∗, h′⟩ = ⟨g∗, h′h⟩

and the Heisenberg double is by definition the smash product H(L) = L#L∗, with multiplication given by

(h#h∗)(g#g∗) = h(2)g # h∗ ∗ (h(1) · g∗)

Recall also that the maps

iL : L → H(L),  iL(l) = l#εL  and  iL∗ : L∗ → H(L),  iL∗(l∗) = 1L#l∗

are injective algebra maps. The Heisenberg double H(L) satisfies the following universal property: given a k-algebra A and algebra maps u : L → A and v : L∗ → A such that

u(l)v(l∗) = v(l(1) · l∗)u(l(2))    (6.34)

there exists a unique algebra map F : H(L) → A (given by F(l#l∗) = v(l∗)u(l), for all l ∈ L, l∗ ∈ L∗) such that the following diagram commutes
L  --iL-->  H(L)  <--iL∗--  L∗
    \         |          /
     u        F         v
      \       |        /
       `----> A <-----'

that is, F ∘ iL = u and F ∘ iL∗ = v.
The Heisenberg double H(L) presented above differs from the Heisenberg double H′(L) introduced in [140, Example 4.1.10], where H′(L) = L#′L∗, with multiplication given by

(h#′h∗)(g#′g∗) = ⟨h∗(1), g(2)⟩ hg(1) #′ (h∗(2) ∗ g∗)

for all h, g ∈ L, h∗, g∗ ∈ L∗. However, [140, Corollary 9.4.3] and Proposition 138 show that the two descriptions H(L) and H′(L) of the Heisenberg double are isomorphic as algebras, both of them being isomorphic to the matrix algebra Mn(k), where n = dim(L).

Proposition 138. Let L be a finite dimensional Hopf algebra. Then there exists an algebra isomorphism H(L) ≅ Mdim(L)(k).

Proof. As L is finite dimensional, the functor

T : LML → MH(L), T(M) = M,

where the right H(L)-action on M is given by

m • (l#l∗) = ⟨l∗, m⟨−1⟩⟩ m⟨0⟩ · l,

is an equivalence of categories (see Theorem 8). As the antipode of L is bijective ([172]), we have the following equivalences of categories

MH(L) ≅ LML ≅ Mk

i.e. H(L) is Morita equivalent to k. It follows from Morita theory (see, for instance, [2], p. 265) that there exists an algebra isomorphism H(L) ≅ Mn(k). Taking into account that dim(H(L)) = dim(L)², we obtain that n = dim(L).

Remark 21. Let L be a finite dimensional Hopf algebra. Kashaev ([106]) proved that the Drinfeld double D(L) can be realized as a subalgebra of the tensor product of two Heisenberg doubles H(L) ⊗ H(L∗). This can be proved
6.4 The structure of finite dimensional Hopf algebras
281
immediately using Proposition 138: if dim(L) = n then dim(D(L)) = n² and hence D(L) ⊂ Mn²(k) ≅ Mn(k) ⊗ Mn(k) ≅ H(L) ⊗ H(L) ≅ H(L) ⊗ H(L∗). The following theorem is [181, Theorem 5.2] and [106, Theorem 1]. In [181], the Heisenberg double does not appear explicitly, and in [106] the Heisenberg double is described in terms of structure constants, and not as a smash product.

Theorem 69. Let L be a finite dimensional Hopf algebra and {ei, e∗i | i = 1, · · · , n} a dual basis of L. Then the canonical element

R = Σ_i (ei#ε) ⊗ (1#e∗i) ∈ H(L) ⊗ H(L)

is an invertible solution of the pentagon equation in H(L) ⊗ H(L) ⊗ H(L). Consequently, if A is an algebra and f : H(L) → A an algebra map, then (f ⊗ f)(R) is an invertible solution of the pentagon equation in A ⊗ A ⊗ A.

Proof. Taking into account the multiplication rule of H(L) we find

R23R12 = Σ_{i,j} (ej#ε) ⊗ (ei(2)#ei(1) · e∗j) ⊗ (1#e∗i)

and

R12R13R23 = Σ_{a,b,c} (ea eb#ε) ⊗ (ec#e∗a) ⊗ (1#e∗b ∗ e∗c)
so we have to prove the equality

Σ_{i,j} ej ⊗ ei(2) ⊗ ei(1) · e∗j ⊗ e∗i = Σ_{a,b,c} ea eb ⊗ ec ⊗ e∗a ⊗ e∗b ∗ e∗c    (6.35)

in L ⊗ L ⊗ L∗ ⊗ L∗. Fix indices x, y, z, t ∈ {1, · · · , n}, and evaluate (6.35) at e∗x ⊗ e∗y ⊗ ez ⊗ et. (6.35) is then equivalent to

⟨e∗y, et(2)⟩⟨e∗x, ez et(1)⟩ = Σ_b ⟨e∗x, ez eb⟩⟨e∗b ∗ e∗y, et⟩

Applying the definition of the convolution product, ⟨e∗b ∗ e∗y, et⟩ = ⟨e∗b, et(1)⟩⟨e∗y, et(2)⟩, and the dual basis formula

Σ_b eb ⟨e∗b, et(1)⟩ = et(1)

we find that (6.35) holds, as needed. We will now prove that
U = Σ_i (S(ei)#ε) ⊗ (1#e∗i)

is the inverse of R, where S is the antipode of L. As H(L) ⊗ H(L) is isomorphic to the n² × n²-matrix algebra over k, it is enough to prove that RU = 1 ⊗ 1. Indeed,

RU = Σ_{i,j} (ei S(ej)#ε) ⊗ (1#e∗i ∗ e∗j)

Hence, we have to prove the formula

Σ_{i,j} ei S(ej) ⊗ e∗i ∗ e∗j = 1 ⊗ ε

which holds, as for indices x, y = 1, · · · , n we have

Σ_{i,j} ⟨e∗x, ei S(ej)⟩⟨e∗i ∗ e∗j, ey⟩ = Σ_{i,j} ⟨e∗x, ei S(ej)⟩⟨e∗i, ey(1)⟩⟨e∗j, ey(2)⟩
= ⟨e∗x, Σ_{i,j} ei ⟨e∗i, ey(1)⟩ S(ej ⟨e∗j, ey(2)⟩)⟩
= ⟨e∗x, ey(1)S(ey(2))⟩ = ⟨e∗x, 1⟩⟨ε, ey⟩

From now on let R = R¹ ⊗ R² ∈ A ⊗ A be an invertible solution of the pentagon equation on a finite dimensional algebra A. The subspaces

AR,l = {a ∈ A | R(a ⊗ 1A) = a ⊗ 1A} and AR,r = {a ∈ A | (1A ⊗ a)R = 1A ⊗ a}

are called the spaces of left, respectively right, R-invariants of A.

R(l) = {⟨a∗, R²⟩R¹ | a∗ ∈ A∗} and R(r) = {⟨a∗, R¹⟩R² | a∗ ∈ A∗}

are called the spaces of left, respectively right, coefficients of R. We will denote them as follows:

P = P(A, R) = R(l); H = H(A, R) = R(r).

Assume now that R = Σ_{i=1}^m ai ⊗ bi, where m is as small as possible. Then m is called the length of R and will be denoted l(R) = m. From the choice of m, the sets {ai | i = 1, · · · , m} and {bi | i = 1, · · · , m} are linearly independent in A, and hence bases of R(l), respectively R(r). In particular, dim(R(l)) = dim(R(r)) = l(R). Two elements R and S ∈ A ⊗ A are called equivalent (we will write R ∼ S) if there exists an invertible element u ∈ U(A) of A such that S = uR := (u ⊗ u)R(u ⊗ u)⁻¹. If R ∼ S then l(R) = l(S). Indeed, let R = Σ_{i=1}^m ai ⊗ bi, where m = l(R). Then S = Σ_{i=1}^m uaiu⁻¹ ⊗ ubiu⁻¹ and hence l(S) ≤ l(R); in a similar way we obtain that l(R) ≤ l(S). In particular,
if {ai | i = 1, · · · , m} is a basis of R(l), then {uai u⁻¹ | i = 1, · · · , m} is a basis of S(l) = uR(l). Now consider a∗i ∈ P∗ and b∗i ∈ H∗ defined by ⟨a∗i, aj⟩ = δij = ⟨b∗i, bj⟩, i.e. {ai, a∗i} and {bi, b∗i} are dual bases of P and H. Extend a∗i : P → k and b∗i : H → k to ωi : A → k and λi : A → k, respectively. We then have

Σ_{i=1}^m ⟨ωk, ai⟩ bi = bk and Σ_{j=1}^m aj ⟨λk, bj⟩ = ak    (6.36)

for all k = 1, · · · , m. We will use two different notations for R:

R = Σ_{i=1}^m ai ⊗ bi = Σ_{j=1}^m aj ⊗ bj
when we are interested in the basis elements of P and H, and the generic notation

R = R¹ ⊗ R² = r¹ ⊗ r²; U = U¹ ⊗ U² = R⁻¹.

Theorem 70. Let A be a finite dimensional algebra, R = R¹ ⊗ R² ∈ A ⊗ A an invertible solution of the pentagon equation and P = P(A, R) = R(l), H = H(A, R) = R(r) the subspaces of coefficients of R.
1. P and H are unitary subalgebras of A and Hopf algebras with comultiplication given by

∆P(x) = ∆r(x) = R⁻¹(1A ⊗ x)R and ∆H(y) = ∆l(y) = R(y ⊗ 1A)R⁻¹    (6.37)

for all x ∈ P, y ∈ H. Furthermore, the k-linear map

f : P∗ → H, f(p∗) = ⟨p∗, R¹⟩R²    (6.38)

is an isomorphism of Hopf algebras.
2. The k-linear map

F : H(P) → A, F(p#p∗) = ⟨p∗, R¹⟩R²p

is an algebra map and R = (F ⊗ F)(R), where R ∈ H(P) ⊗ H(P) is the canonical element associated to the Heisenberg double.
3. The multiplication on A defines isomorphisms

AR,r ⊗ P ≅ A (resp. H ⊗ AR,l ≅ A)

of right P-modules (resp. left H-modules). In particular, A is free as a right P-module and as a left H-module, and

dim(P) = dim(H) = dim(A)/dim(AR,l) = dim(A)/dim(AR,r) = l(R)
4. If f : (A, R) → (B, S) is an isomorphism in Pent, then the Hopf algebras P(A, R) and P(B, S) are isomorphic. Consequently, if S ∈ A ⊗ A is equivalent to R, then the Hopf algebras P(A, R) and P(A, S) are isomorphic.

Proof. 1. We will use the notation introduced above. First we will prove that P (resp. H) is a unitary subalgebra of A and a subcoalgebra of Ar = (A, ∆r) (resp. Al = (A, ∆l)). This will follow from the formulas:

ap aq = Σ_{j=1}^m ⟨λp ∗l λq, bj⟩ aj ∈ P    (6.39)
∆r(ap) = Σ_{i,j=1}^m ⟨λp, bi bj⟩ ai ⊗ aj ∈ P ⊗ P    (6.40)

and

bp bq = Σ_{j=1}^m ⟨ωp ∗r ωq, aj⟩ bj ∈ H    (6.41)
∆l(bp) = Σ_{i,j=1}^m ⟨ωp, ai aj⟩ bi ⊗ bj ∈ H ⊗ H    (6.42)
for all p, q = 1, · · · , m. We prove (6.39) and (6.40), leaving (6.41) and (6.42) to the reader:

Σ_{j=1}^m aj ⟨λp ∗l λq, bj⟩ = Σ_{j=1}^m aj ⟨λp, R¹bjU¹⟩⟨λq, R²U²⟩
= (Id ⊗ λp ⊗ λq)(Σ_{j=1}^m aj ⊗ R¹bjU¹ ⊗ R²U²)
= (Id ⊗ λp ⊗ λq)(R23R12(R23)⁻¹)
(6.31)= (Id ⊗ λp ⊗ λq)(R12R13)
= (Id ⊗ λp ⊗ λq)(Σ_{j,k=1}^m aj ak ⊗ bj ⊗ bk)
= Σ_{j,k=1}^m aj ⟨λp, bj⟩ ak ⟨λq, bk⟩ = ap aq
i.e. P is a subalgebra of A. On the other hand

∆r(ap) (6.36)= ∆r(Σ_{j=1}^m aj ⟨λp, bj⟩)
(6.32)= Σ_{j=1}^m U¹R¹ ⊗ U²ajR² ⟨λp, bj⟩
= (Id ⊗ Id ⊗ λp)(Σ_{j=1}^m U¹R¹ ⊗ U²ajR² ⊗ bj)
(6.31)= (Id ⊗ Id ⊗ λp)((R12)⁻¹R23R12) = (Id ⊗ Id ⊗ λp)(R13R23)
= (Id ⊗ Id ⊗ λp)(Σ_{i,j=1}^m ai ⊗ aj ⊗ bibj)
= Σ_{i,j=1}^m ai ⊗ aj ⟨λp, bibj⟩
and P is a subcoalgebra of (A, ∆r). A similar computation yields (6.41) and (6.42), proving that H is a subalgebra of A and a subcoalgebra of (A, ∆l). Moreover R ∈ P ⊗ H, so for any positive integer t there exist scalars αij ∈ k such that

Rt = Σ_{i,j=1}^m αij ai ⊗ bj    (6.43)

We will now prove that 1A ∈ P and 1A ∈ H. As A is finite dimensional, A can be embedded into a matrix algebra A ⊂ Mn(k), where n = dim(A). We view

R ∈ A ⊗ A ⊂ Mn(k) ⊗ Mn(k) ≅ Mn²(k)

R is invertible, and it follows from the Cayley-Hamilton Theorem that the identity matrix In² can be represented as a linear combination of powers of R. Hence, using (6.43), we obtain in A ⊗ A a linear combination

1A ⊗ 1A = Σ_{i,j=1}^m γi,j ai ⊗ bj
for some γi,j ∈ k. Let p : A → k be the projection of A onto the one dimensional subspace spanned by 1A. Then

1A = Σ_{i,j=1}^m γi,j ai ⟨p, bj⟩ = Σ_{i,j=1}^m γi,j ⟨p, ai⟩ bj ∈ P ∩ H

i.e. P and H are unitary subalgebras of A, and hence we can view U = R⁻¹ ∈ P ⊗ H. The counit and the antipode of P and H are defined by the formulas:

εP : P → k, εP(ak) = ⟨b∗k, 1A⟩, SP : P → P, SP(ak) = U¹⟨b∗k, U²⟩    (6.44)

and

εH : H → k, εH(bk) = ⟨a∗k, 1A⟩, SH : H → H, SH(bk) = ⟨a∗k, U¹⟩U²    (6.45)

for all k = 1, · · · , m. We will prove that P is a Hopf algebra; the fact that H is a Hopf algebra is proved in a similar way. First we remark that, as H is a subalgebra of A, (6.40) can be rewritten as

∆r(ap) = Σ_{i,j=1}^m ⟨b∗p, bi bj⟩ ai ⊗ aj    (6.46)
Now, for p = 1, · · · , m we have

(Id ⊗ εP)∆r(ap) (6.46)= Σ_{i,j=1}^m ai ⟨b∗p, bi bj⟩⟨b∗j, 1A⟩ = Σ_{i=1}^m ai ⟨b∗p, bi⟩ = ap
i.e. (IP ⊗ εP)∆r = Id. A similar computation shows that (εP ⊗ IP)∆r = Id, and εP is a counit. Using (6.46), we find that SP is a right convolution inverse of IP:

(Id ⊗ SP)∆r(ap) = Σ_{i,j=1}^m ai ⟨b∗p, bi bj⟩ U¹⟨b∗j, U²⟩
= Σ_{i,j=1}^m ai U¹ ⟨b∗p, bi bj⟩⟨b∗j, U²⟩
= Σ_{i=1}^m ai U¹ ⟨b∗p, bi U²⟩ = (Id ⊗ b∗p)(RR⁻¹)
= 1A ⟨b∗p, 1A⟩ = εP(ap)1A
From the fact that P is finite dimensional, it follows that SP is an antipode of P. Let us prove now that f : P∗ → H is an isomorphism of Hopf algebras. We have

f(a∗j) = Σ_{i=1}^m ⟨a∗j, ai⟩ bi = bj

so f is an isomorphism of vector spaces. Let us prove that f is an algebra map:

f(1P∗) = f(εP) = Σ_{i=1}^m ⟨εP, ai⟩ bi = Σ_{i=1}^m ⟨b∗i, 1A⟩ bi = 1A
(6.41) can be rewritten as

bp bq = Σ_{j=1}^m ⟨a∗p ∗r a∗q, aj⟩ bj

which means that

f(a∗p)f(a∗q) = f(a∗p ∗r a∗q)

for all p, q = 1, · · · , m, i.e. f is an algebra isomorphism. Let us prove now that f is also a coalgebra map. We recall the definition of the comultiplication ∆P∗: ∆P∗(a∗p) = Σ X¹ ⊗ X² ∈ P∗ ⊗ P∗ if and only if

⟨a∗p, xy⟩ = Σ ⟨X¹, x⟩⟨X², y⟩

for all x, y ∈ P. It follows that

(f ⊗ f)∆P∗(a∗p) = Σ f(X¹) ⊗ f(X²) = Σ ⟨X¹, R¹⟩R² ⊗ ⟨X², r¹⟩r²
= Σ ⟨a∗p, R¹r¹⟩ R² ⊗ r² = Σ_{i,j=1}^m ⟨a∗p, ai aj⟩ bi ⊗ bj
(6.42)= ∆H(bp) = (∆H ◦ f)(a∗p)
i.e. f is also a coalgebra map. Hence, we have proved that f is an isomorphism of bialgebras, and as P and H are finite dimensional it is also an isomorphism of Hopf algebras (see [172]).
2. We remark that

F(ai#a∗j) = Σ_{t=1}^m ⟨a∗j, at⟩ bt ai = bj ai

for all i, j = 1, · · · , m. The fact that F is an algebra map can be proved directly using this formula; another way to proceed is to use the universal property of the Heisenberg double H(P) for the diagram
iP : P → H(P), iP∗ : P∗ → H(P), F : H(P) → A, with F ◦ iP = u and F ◦ iP∗ = v. Here u : P → A is the usual inclusion and v : P∗ → A is the composition v = f ◦ j, where f : P∗ → H is the isomorphism from part 1) and j : H → A is the usual inclusion. We only have to prove that the compatibility condition (6.34) holds, i.e. hv(g∗) = v(h(1) · g∗)h(2) for any h ∈ P and g∗ ∈ P∗, which turns out to be

h⟨g∗, R¹⟩R² = ⟨g∗, R¹h(1)⟩R²h(2)

or, equivalently
R¹ ⊗ hR² = R¹h(1) ⊗ R²h(2).

This equation holds, as ∆P(h) = R⁻¹(1A ⊗ h)R for any h ∈ P. Now let R = Σ_{i=1}^m (ai#εP) ⊗ (1A#a∗i) be the canonical element of H(P) ⊗ H(P). Then

(F ⊗ F)(R) = Σ_{i,t=1}^m ⟨εP, at⟩ bt ai ⊗ bi = Σ_{i,t=1}^m ⟨b∗t, 1A⟩ bt ai ⊗ bi = Σ_{i=1}^m ai ⊗ bi = R.
3. Consider the map

ρ = ρP : A → P ⊗ A, ρ(a) = (1A ⊗ a)R = R¹ ⊗ aR² = Σ_{i=1}^m ai ⊗ abi

for all a ∈ A. We will show that (A, ·, ρP) ∈ PMP is a right-left P-Hopf module, where the right P-module structure is just the multiplication · of A. Indeed, for a ∈ A we have

(Id ⊗ ρ)ρ(a) = R¹ ⊗ ρ(aR²) = R¹ ⊗ r¹ ⊗ aR²r² = (1A ⊗ 1A ⊗ a)R13R23
(6.31)= (1A ⊗ 1A ⊗ a)(R12)⁻¹R23R12 = U¹r¹ ⊗ U²R¹r² ⊗ aR²
= R⁻¹(1A ⊗ R¹)R ⊗ aR² = ∆P(R¹) ⊗ aR² = (∆P ⊗ Id)ρ(a)
and

Σ_{i=1}^m ⟨εP, ai⟩ abi = Σ_{i=1}^m ⟨b∗i, 1A⟩ abi = a
so (A, ρ) is a left P-comodule. We still have to prove the compatibility relation

ρ(a)∆P(ai) = (1A ⊗ a)RR⁻¹(1A ⊗ ai)R = (1A ⊗ aai)R = ρ(aai)

for all i = 1, · · · , m. Hence (A, ·, ρP) ∈ PMP and the coinvariants

Aco(P) = {a ∈ A | ρ(a) = 1 ⊗ a} = AR,r

are the right R-invariants of A. From the right-left version of the Fundamental Theorem of Hopf modules it follows that the multiplication of A,

µ : AR,r ⊗ P → A, µ(a ⊗ x) = ax,

defines an isomorphism of P-Hopf modules, and, in particular, of right P-modules. We recall that AR,r ⊗ P is a right P-module via (a ⊗ x) · y = a ⊗ xy, for all a ∈ AR,r, x, y ∈ P. It follows that A is free as a right P-module and dim(A) = dim(P)dim(AR,r). In a similar way we can show that (A, ·, ρH) ∈ HMH, where · is the multiplication of A and

ρH : A → A ⊗ H, ρH(a) = R(a ⊗ 1A) = R¹a ⊗ R²

for all a ∈ A. Moreover, Aco(H) = AR,l. If we apply once again the Fundamental Theorem of Hopf modules (this time the left-right version) we obtain the other part of the statement.
4. Let f : A → B be an algebra isomorphism such that S = (f ⊗ f)(R) = Σ_{i=1}^n f(ai) ⊗ f(bi). Then S⁻¹ = (f ⊗ f)(U) = f(U¹) ⊗ f(U²). It follows that {f(ai) | i = 1, · · · , n} is a basis of P(B, S), and hence the restriction of f to P(A, R) gives an algebra isomorphism between P(A, R) and P(B, S) that is also a coalgebra map, since

(f ⊗ f)∆P(A,R)(ai) = f(U¹)f(R¹) ⊗ f(U²)f(ai)f(R²) = S⁻¹(1 ⊗ f(ai))S = ∆P(B,S)(f(ai))

for all i = 1, · · · , n. The last statement is obtained by taking B = A and f : A → A, f(x) = uxu⁻¹, for all x ∈ A.
Using Theorems 69 and 70, we obtain the following Corollary, which is a purely algebraic version of [10, Theorem 4.7]: the role of the operator VG on a locally compact group G is played by the canonical element of the Heisenberg double.
Corollary 38. Let A be a finite dimensional algebra and R ∈ A ⊗ A an invertible element. Then R is a solution of the pentagon equation if and only if there exists a finite dimensional Hopf algebra L and an algebra map F : H(L) → A such that R = (F ⊗ F)(R), where R is the canonical element associated to the Heisenberg double H(L).

Remarks 21. 1. Part 3. of Theorem 70 is a Lagrange type theorem, useful to evaluate the dimension of the Hopf algebra P(A, R) coming from a solution of the pentagon equation. Let (A, R) ∈ Pent. It follows from Corollary 38 and Proposition 138 that there exists an algebra map F : Mn(k) → A, where n = l(R). As Mn(k) is a simple algebra, F is injective. Hence aij := F(e^j_i) ≠ 0 ∈ A, for i, j = 1, · · · , n; then aij akl = δjk ail and 1A = Σ_{i=1}^n aii. It follows from the Reconstruction Theorem of the matrix algebra ([113, Theorem 17.5]) that there exists an algebra isomorphism A ≅ Mn(B), where B = {x ∈ A | x aij = aij x, ∀ i, j = 1, · · · , n}. Hence A is a matrix algebra if R is non-trivial (l(R) > 1 or, equivalently, R ≠ 1A ⊗ 1A). Furthermore, dim(A) = n² dim(B) and hence l(R)² | dim(A).
2. We can compute the space of integrals on P (resp. on H); we have to use the space AR,r of right R-invariants of A (resp. AR,l). Let a ∈ AR,r and χ : A → H be an arbitrary right H-linear map. Then

ϕ : P → k, ϕ(ai) = ⟨b∗i, χ(a)⟩

is a right integral on P. Indeed, a ∈ AR,r, hence a ⊗ 1A = Σ_{i=1}^n abi ⊗ ai. As χ is right H-linear we obtain

χ(a) ⊗ 1A = Σ_{i=1}^n χ(a)bi ⊗ ai    (6.47)
Now, for ap ∈ P we have

Σ ϕ((ap)(1)) ⊗ (ap)(2) = Σ_{i,j} ⟨b∗p, bi bj⟩⟨b∗i, χ(a)⟩ ⊗ aj
= Σ_j ⟨b∗p, χ(a)bj⟩ ⊗ aj
(6.47)= ⟨b∗p, χ(a)⟩ ⊗ 1A = ϕ(ap) ⊗ 1A
γ(bi ) = a∗i , χ(b)
6.4 The structure of finite dimensional Hopf algebras
291
Theorem 71. Let L be a finite dimensional Hopf algebra. Then there exists an isomorphism of Hopf algebras L∼ = P (H(L), R) where R is the canonical element of the Heisenberg double H(L). Proof. Let {ei | i = 1, · · · , m} be a basis of L, {e∗i | i = 1, · · · , m} the dual basis of L∗ , and R=
m
(ei #εL ) ⊗ (1L #e∗i ) ∈ H(L) ⊗ H(L)
i=1
the canonical element. We have to prove that the Hopf algebra P (H(L), R) extracted from part 1. of Theorem 70 is isomorphic to L, with the initial structure of Hopf algebra. Of course, iL : L → H(L),
iL (l) = l#εL
is an injective algebra map. We identify L∼ = Im(iL ) = L#εL From the construction, P (H(L), R) is the subalgebra of H(L) having {ei #εL | i = 1, · · · , m} as a basis; i.e. there exists an algebra isomorphism L ∼ = Im(iL ) = P (H(L), R). It remains to prove that the coalgebra structure (resp. the antipode) of P (H(L), R) extracted from Theorem 70 is exactly the original coalgebra structure (resp. the antipode) of L. As the counit and the antipode of a Hopf algebra are uniquely determined by the multiplication and the comultiplication, the only thing left to show is the fact that, via the above identification, ∆P = ∆L . This means that ∆L (ei #εL ) = R−1 (1H(L) ⊗ ei #εL )R or, equivalently, R∆L (ei #εL ) = (1L #εL ) ⊗ (ei #εL ) R Now we compute m (ej #εL ) ⊗ (1L #e∗j ) (1L #εL ) ⊗ (ei #εL ) R = (1L #εL ) ⊗ (ei #εL )
j=1
=
m j=1
On the other hand
(ej #εL ) ⊗ (ei(2) #ei(1) · e∗j )
R∆L (ei #εL ) =
m
(ej #εL ) ⊗ (1L #e∗j )
(ei(1) #εL ) ⊗ (ei(2) #εL )
j=1
=
m
(ej ei(1) #εL ) ⊗ (ei(2) #e∗j )
j=1
Hence, we have to show m
ej ei(1) ⊗ ei(2) ⊗
e∗j
=
j=1
m
ej ⊗ ei(2) ⊗ ei(1) · e∗j
(6.48)
j=1
For indices a, b, k ∈ {1, · · · , m}, evaluate (6.48) at e∗a ⊗ e∗b ⊗ ek . (6.48) is then equivalent to e∗a , ek ei(1) e∗b , ei(2) =
m
e∗a , ej e∗b , ei(2) ei(1) · e∗j , ek
j=1
and this is easily verified: m
e∗a , ej e∗b , ei(2) ei(1) · e∗j , ek =
j=1
=
m j=1 m
e∗a , ej e∗b , ei(2) e∗j , ek ei(1) e∗a , ej e∗j , ek ei(1) e∗b , ei(2)
j=1
= ⟨e∗a, ek ei(1)⟩⟨e∗b, ei(2)⟩. It follows that ∆L = ∆P and L ≅ P(H(L), R) as Hopf algebras.
Let L be a finite dimensional Hopf algebra. Proposition 138 shows that there exists an algebra isomorphism H(L) ≅ Mn(k), where n = dim(L). Via this isomorphism the canonical element R ∈ H(L) ⊗ H(L) is viewed as an element of Mn(k) ⊗ Mn(k), or as a matrix in Mn²(k). We will now give the data which show how any finite dimensional Hopf algebra is constructed. Let R = Σ_{i=1}^m Ai ⊗ Bi ∈ Mn(k) ⊗ Mn(k) be an invertible solution of the pentagon equation such that the sets of matrices {Ai | i = 1, · · · , m} and {Bi | i = 1, · · · , m} are linearly independent over k. Let {B∗i | i = 1, · · · , m} be the dual basis of {Bi | i = 1, · · · , m} and write U = R⁻¹ = U¹ ⊗ U². The Hopf algebra P(Mn(k), R) is described as follows:
– as an algebra, P(Mn(k), R) is the subalgebra of the n × n-matrix algebra Mn(k) with {Ai | i = 1, · · · , m} as a k-basis;
– the coalgebra structure and the antipode of P(Mn(k), R) are given by the following formulas:
∆ : P(Mn(k), R) → P(Mn(k), R) ⊗ P(Mn(k), R), ∆(Ai) = R⁻¹(In ⊗ Ai)R    (6.49)
ε : P(Mn(k), R) → k, ε(Ai) = ⟨B∗i, In⟩    (6.50)
S : P(Mn(k), R) → P(Mn(k), R), S(Ai) = U¹⟨B∗i, U²⟩    (6.51)
for all i = 1, · · · , m. Theorems 70 and 71 imply the following Structure Theorem for finite dimensional Hopf algebras.

Theorem 72. L is a finite dimensional Hopf algebra if and only if there exist a positive integer n and an invertible solution of the pentagon equation R ∈ Mn(k) ⊗ Mn(k) ≅ Mn²(k) such that L ≅ P(Mn(k), R). Furthermore,

dim(L) = l(R) = n² / dim(Mn(k)R,r)
where Mn(k)R,r is the subspace of right R-invariants of Mn(k).

Remark 22. Let L be a Hopf algebra with comultiplication ∆ and R ∈ L ⊗ L an invertible element. On the algebra L, Drinfeld ([75]) introduced a new comultiplication ∆R given by ∆R(l) = R⁻¹∆(l)R for all l ∈ L. Let LR := L as an algebra, with ∆R as comultiplication. If LR is a Hopf algebra, it is called a twist of L. It was proved in [75] that if R is a Harrison cocycle¹, i.e.

(∆ ⊗ Id)(R)(R ⊗ 1) = (Id ⊗ ∆)(R)(1 ⊗ R), (ε ⊗ Id)(R) = (Id ⊗ ε)(R) = 1    (6.52)
then LR is a Hopf algebra, i.e. a twist of L. The twist construction plays a crucial role in the classification theory of finite dimensional triangular semisimple Hopf algebras ([82]). Let Mn(k) be the matrix algebra with the trivial bialgebra structure (without counit)

∆ : Mn(k) → Mn(k) ⊗ Mn(k),
∆(x) = In ⊗ x
for all x ∈ Mn(k). Any subalgebra of Mn(k) is a subbialgebra. Theorem 72 and the comultiplication (6.37) show that any finite dimensional Hopf algebra L, viewed as a subalgebra of the matrix algebra, is obtained as a twist in the sense of Drinfeld: the trivial bialgebra structure of Mn(k) is twisted by an invertible element R. An important difference with the previous situation is that R is not a Harrison cocycle in the sense of (6.52) (R is not a solution of the equation R23R12 = R13R23) but a solution of the pentagon equation.
¹ The fact that (6.52) is a Harrison cocycle condition is shown in [37]; in the literature (see e.g. [82]), (6.52) is called the twist equation.
Let n be a positive integer. We have proved that an n-dimensional Hopf algebra L is isomorphic to some P(Mn(k), R), for R ∈ Mn(k) ⊗ Mn(k) an invertible solution of the pentagon equation such that l(R) = n. We are now going to prove the Classification Theorem for finite dimensional Hopf algebras. Let Pentn be the set

Pentn = {R ∈ Mn(k) ⊗ Mn(k) | (Mn(k), R) ∈ Pent and l(R) = n}.

Theorem 73. Let n be a positive integer. Then there exists a one to one correspondence between the set of types of n-dimensional Hopf algebras and the set of orbits of the action

GLn(k) × Pentn → Pentn,
(u, R) → (u ⊗ u)R(u ⊗ u)−1 .
(6.53)
Proof. In part 4. of Theorem 70 we proved that there exists a Hopf algebra isomorphism P(Mn(k), R) ≅ P(Mn(k), uR) for any u ∈ GLn(k), which means that all the Hopf algebras associated to the elements of an orbit of the action (6.53) are isomorphic. We will now prove the converse. First we show that two finite dimensional Hopf algebras L1 and L2 are isomorphic if and only if (H(L1), RL1) and (H(L2), RL2) are isomorphic as objects in Pent. Indeed, let f : L1 → L2 be a Hopf algebra isomorphism. Then f∗ : L∗2 → L∗1, f∗(l∗) = l∗ ◦ f, is an isomorphism of Hopf algebras and f̃ : H(L1) → H(L2),
f˜(h#h∗ ) := f (h)#(f ∗ )−1 (h∗ ) = f (h)#h∗ ◦ f −1
for all h ∈ L1, h∗ ∈ L∗1, is an algebra isomorphism. Indeed, let h, g ∈ L1 and h∗, g∗ ∈ L∗1; using the fact that f is an algebra map, we have

f̃((h#h∗)(g#g∗)) = f(h(2))f(g) # (h∗ ∗ (h(1) · g∗)) ◦ f⁻¹

and, as f is a coalgebra map,

f̃(h#h∗)f̃(g#g∗) = f(h(2))f(g) # (h∗ ◦ f⁻¹) ∗ (f(h(1)) · (g∗ ◦ f⁻¹)).

It follows that f̃ is an algebra map, since for any l ∈ L2 we have

⟨(h∗ ◦ f⁻¹) ∗ (f(h(1)) · (g∗ ◦ f⁻¹)), l⟩ = ⟨h∗ ◦ f⁻¹, l(1)⟩⟨g∗ ◦ f⁻¹, l(2)f(h(1))⟩
= ⟨h∗, f⁻¹(l(1))⟩⟨g∗, f⁻¹(l(2))h(1)⟩ = ⟨h∗ ∗ (h(1) · g∗), f⁻¹(l)⟩ = ⟨(h∗ ∗ (h(1) · g∗)) ◦ f⁻¹, l⟩.

On the other hand, if {ei, e∗i} is a dual basis of L1, then {f(ei), e∗i ◦ f⁻¹} is a dual basis of L2 and hence (f̃ ⊗ f̃)(RL1) = RL2, and this proves
that f̃ is an isomorphism in Pent. Let ni = dim(Li), i = 1, 2. Using Proposition 138 we obtain that (H(L1), RL1) ≅ (H(L2), RL2) if and only if (Mn1(k), R1) ≅ (Mn2(k), R2) in Pent, where Ri is the image of RLi under the algebra isomorphism H(Li) ≅ Mni(k). Now, the two matrix algebras Mn1(k) and Mn2(k) are isomorphic if and only if n1 = n2, and the Skolem-Noether theorem tells us that any automorphism g of the matrix algebra Mn1(k) is inner: there exists u ∈ GLn1(k) such that g(x) = gu(x) = uxu⁻¹. Hence we obtain that (Mn1(k), R1) ≅ (Mn2(k), R2) in Pent if and only if n1 = n2 and there exists u ∈ GLn1(k) such that R2 = (gu ⊗ gu)(R1) = (u ⊗ u)R1(u ⊗ u)⁻¹, i.e. R2 is equivalent to R1, as needed.
We conclude with a few examples, illustrating our general method to determine invertible solutions of the pentagon equation R ∈ Mn(k) ⊗ Mn(k). Let A = (aij), B = (bij) ∈ Mn(k). We recall that, via the canonical isomorphism Mn(k) ⊗ Mn(k) ≅ Mn²(k), A ⊗ B viewed as a matrix of Mn²(k) is given by the Kronecker product

A ⊗ B = (aij B)i,j=1,n, the n × n block matrix with block (i, j) equal to aij B.    (6.54)

Let (e^j_i)i,j=1,n be the canonical basis of Mn(k). An element R ∈ Mn(k) ⊗ Mn(k) can be written as

R = Σ_{i,j=1}^n e^j_i ⊗ Aij    (6.55)

for some matrices Aij ∈ Mn(k). Using formula (6.54), R viewed as a matrix in Mn²(k) is given by the block matrix

R = (Aij)i,j=1,n, with block (i, j) equal to Aij.    (6.56)
and we can quickly check if R is invertible (det(R) ≠ 0). A large class of invertible R is given by choosing (Aij) such that R is upper triangular (i.e. Aij = 0 for all i > j) and Aii is invertible in Mn(k) for all i = 1, · · · , n. The next Proposition clarifies the condition for R = (Aij)i,j=1,n ∈ Mn²(k) ≅ Mn(k) ⊗ Mn(k) to be a solution of the pentagon equation.

Proposition 139. Let n be a positive integer and R = (Aij)i,j=1,n ∈ Mn²(k) ≅ Mn(k) ⊗ Mn(k), Aij ∈ Mn(k), be an invertible matrix. Then R is a solution of the pentagon equation if and only if
Σ_{j=1}^n Aij ⊗ Ajp = R(Aip ⊗ In)R⁻¹    (6.57)
for all i, p = 1, · · · , n.

Proof. Taking into account the multiplication rule for elementary matrices, we find

R12R13R23 = Σ_{i,j,p,r,s=1}^n e^p_i ⊗ Aij e^s_r ⊗ Ajp Ars

and

R23R12 = Σ_{a,b,i,p=1}^n e^p_i ⊗ e^b_a Aip ⊗ Aab.
Hence, R is a solution of the pentagon equation if and only if

Σ_{a,b=1}^n e^b_a Aip ⊗ Aab = Σ_{j,r,s=1}^n Aij e^s_r ⊗ Ajp Ars    (6.58)
or, equivalently,

R(Aip ⊗ In) = (Σ_{j=1}^n Aij ⊗ Ajp)R    (6.59)
for all i, p = 1, · · · , n. Viewing R as a matrix in Mn²(k) (see (6.56)) and using (6.54), we find that the pentagon equation can be rewritten as (6.59) in Mn²(k). As R is invertible, (6.59) is equivalent to

Σ_{j=1}^n Aij ⊗ Ajp = R(Aip ⊗ In)R⁻¹ = ∆l(Aip)
which means that the matrix (Aij) is comultiplicative with respect to ∆l.

Example 26. In this example we will show that the two constructions of bialgebras that follow from the solutions of the pentagon equation using Theorems 67 and 70 give very different objects. Let R ∈ M4(k) ≅ M2(k) ⊗ M2(k) be given by

R = ( 1 0 0 0 ; 0 1 1 0 ; 0 0 1 0 ; 0 0 0 1 )

(rows separated by semicolons).
Viewing R as an element of M2 (k) ⊗ M2 (k), we have
R = I2 ⊗ I2 + e12 ⊗ e21, where {eij} is the canonical basis of the matrix algebra M2(k) (see Section 5.1). It follows from Proposition 133 that R is an invertible solution of the pentagon equation if and only if char(k) = 2, and in this case R⁻¹ = R and P(R) = B(τ◦R◦τ) is a five dimensional noncommutative noncocommutative bialgebra. It is easy to see that the Hopf algebra P(M2(k), R), obtained by applying Theorem 70 to R, is the group ring kC2, where C2 = {1, g} is the group with two elements. Indeed, from the construction it follows that P(M2(k), R) is the subalgebra of M2(k) with {I2, e21} as a basis. If we denote g = I2 − e21, then g² = I2 and P(M2(k), R) and kC2 are isomorphic as Hopf algebras (using the fact that char(k) = 2 in the formula for ∆P).

Example 27. Let n be a positive integer and

A = e12 + e23 + · · · + e_{n−1,n} + e_{n1} ∈ Mn(k)

Let R ∈ Mn(k) ⊗ Mn(k) be given by

R = e11 ⊗ In + e22 ⊗ A + · · · + enn ⊗ A^{n−1}
(6.60)
A routine computation shows that R is an invertible solution of the pentagon equation with inverse R−1 = e11 ⊗ In + e22 ⊗ An−1 + · · · + enn ⊗ A.
(6.61)
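The "routine computation" can be delegated to a quick numerical check. The sketch below is our own illustration, not part of the text: it builds R of (6.60) for n = 3, realizing A as the cyclic-shift matrix sending the basis vector e_j to e_{j+1} (with the pentagon convention R12R13R23 = R23R12 and the Kronecker identification (6.54), the orientation of the shift matters, and this is the orientation that works), and verifies the pentagon equation, the formula (6.61) for R⁻¹, and that the length is l(R) = n, in agreement with dim P(Mn(k), R) = n.

```python
import numpy as np

def swap23(n):
    """Permutation matrix on (k^n)^(x3) exchanging the 2nd and 3rd tensor legs."""
    P = np.zeros((n**3, n**3))
    for i in range(n):
        for j in range(n):
            for l in range(n):
                P[(i*n + j)*n + l, (i*n + l)*n + j] = 1.0
    return P

def is_pentagon(R, n):
    """Check the pentagon equation R12 R13 R23 = R23 R12 in M_{n^3}(k)."""
    I = np.eye(n)
    R12, R23 = np.kron(R, I), np.kron(I, R)
    P = swap23(n)
    R13 = P @ R12 @ P
    return np.allclose(R12 @ R13 @ R23, R23 @ R12)

def length(R, n):
    """l(R): the minimal m with R = sum_{i=1}^m a_i (x) b_i, computed as a matrix rank."""
    T = R.reshape(n, n, n, n)                      # T[i,k,j,l] = R[(i,k),(j,l)]
    M = T.transpose(0, 2, 1, 3).reshape(n*n, n*n)  # rows index the a-side, columns the b-side
    return np.linalg.matrix_rank(M)

n = 3
A = np.roll(np.eye(n), 1, axis=0)   # cyclic shift e_j -> e_{j+1}, so A**n = I
E = [np.diag([1. if j == i else 0. for j in range(n)]) for i in range(n)]

R    = sum(np.kron(E[i], np.linalg.matrix_power(A, i))     for i in range(n))  # (6.60)
Rinv = sum(np.kron(E[i], np.linalg.matrix_power(A, n - i)) for i in range(n))  # (6.61)

assert np.allclose(R @ Rinv, np.eye(n*n))
assert is_pentagon(R, n)
assert length(R, n) == n   # so dim P(M_n(k), R) = n, as in Theorem 72
```

The `length` helper computes l(R) as the rank of the realigned matrix of R, since l(R) is exactly the tensor rank of R over Mn(k).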
Then P(Mn(k), R) ≅ (kG)∗, the Hopf algebra of functions on a cyclic group G with n elements.

Proof. We will prove that H = H(Mn(k), R) ≅ kG, the group algebra of G, and then use the duality between P(Mn(k), R) and H(Mn(k), R) given by Theorem 70. We remark that R is already written in the form R = Σ_{i=1}^n Ai ⊗ Bi, with (Ai) and (Bi) linearly independent. Then H(Mn(k), R) is the commutative subalgebra of Mn(k) with basis {In, A, A², · · · , A^{n−1}}. We also note that A^n = In. Using (6.61), (6.37), and (6.45), we obtain that the comultiplication, the counit and the antipode of H are given by ∆H(A) = A ⊗ A,
εH(A) = 1,
SH (A) = An−1 = A−1
i.e. H ≅ kG.

Example 28. Let R ∈ M16(k) ≅ M4(k) ⊗ M4(k) be the upper triangular block matrix given by

R = ( I4 0 B 0 ; 0 A 0 C ; 0 0 A 0 ; 0 0 0 I4 )

where

A = ( 0 1 0 0 ; 1 0 0 0 ; 0 0 0 1 ; 0 0 1 0 ),
B = ( 0 0 0 0 ; 0 0 0 0 ; 1 0 0 0 ; 0 −1 0 0 ),
C = ( 0 0 0 0 ; 0 0 −1 0 ; 0 0 0 0 ; 1 0 0 0 )
If we view R ∈ M4 (k) ⊗ M4 (k), then R is given by R = (e11 + e44 ) ⊗ I4 + (e22 + e33 ) ⊗ (e21 + e12 + e43 + e34 ) + e31 ⊗ (e13 − e24 ) + e42 ⊗ (e14 − e23 )
(6.62)
R is an invertible solution of the pentagon equation, H(M4(k), R) ≅ H4, and hence P(M4(k), R) ≅ H4∗ ≅ H4, where H4 is Sweedler's four dimensional Hopf algebra.

Proof. Similar to the proof in Example 27. The inverse of R is

R⁻¹ = (e11 + e44) ⊗ I4 + (e22 + e33) ⊗ (e21 + e12 + e43 + e34) + e31 ⊗ (e14 − e23) + e42 ⊗ (e24 − e13)

and therefore the Hopf algebra H = H(M4(k), R) is the four dimensional subalgebra of M4(k) with k-basis

{I4, e21 + e12 + e43 + e34, e14 − e23, e13 − e24}

Now, writing x = e13 − e24 and g = e21 + e12 + e43 + e34, we find that x² = 0,
g 2 = I4 ,
gx = −xg = e14 − e23
On the other hand, the formula for the comultiplication of H given by (6.37), namely ∆(A) = R(A ⊗ I4)R⁻¹ for all A ∈ H, gives, using the expression of R⁻¹,

∆(g) = g ⊗ g, ∆(x) = x ⊗ g + I4 ⊗ x

i.e. H ≅ H4, the Sweedler four dimensional Hopf algebra (see Example 9 4)).
Theorem 73 opens a new road for describing the isomorphism types of Hopf algebras of a given dimension. The first step, and the most important, is the development of a new Jordan type theory (we call it restricted Jordan theory). From the point of view of actions, the classical Jordan theory gives the most elementary description of the representatives of the orbits of the action

GLn(k) × Mn(k) → Mn(k),
(U, A) → U AU −1 .
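The action in the Problem below can be experimented with numerically. The following sketch is our own illustration (the particular R and u are arbitrary choices): it checks that the GLn(k)-action (u, R) ↦ (u ⊗ u)R(u ⊗ u)⁻¹ of Theorem 73 carries invertible pentagon solutions to invertible pentagon solutions, so that searching each orbit for a representative with many zeros stays inside Pent.

```python
import numpy as np

def swap23(n):
    """Permutation matrix on (k^n)^(x3) exchanging the 2nd and 3rd tensor legs."""
    P = np.zeros((n**3, n**3))
    for i in range(n):
        for j in range(n):
            for l in range(n):
                P[(i*n + j)*n + l, (i*n + l)*n + j] = 1.0
    return P

def is_pentagon(R, n):
    """Check the pentagon equation R12 R13 R23 = R23 R12 in M_{n^3}(k)."""
    I = np.eye(n)
    R12, R23 = np.kron(R, I), np.kron(I, R)
    P = swap23(n)
    R13 = P @ R12 @ P
    return np.allclose(R12 @ R13 @ R23, R23 @ R12)

n = 2
# an invertible pentagon solution in M2(k) (x) M2(k) (Example 27 with n = 2)
X = np.array([[0., 1.], [1., 0.]])
R = np.kron(np.diag([1., 0.]), np.eye(2)) + np.kron(np.diag([0., 1.]), X)
assert is_pentagon(R, n)

u = np.array([[1., 2.], [0., 1.]])   # any u in GL_2(k)
V = np.kron(u, u)                    # u (x) u as a 4x4 matrix
S = V @ R @ np.linalg.inv(V)         # the action (6.53)
assert is_pentagon(S, n)             # the orbit stays inside Pent
assert not np.allclose(S, R)         # u genuinely moves R within its orbit
```

Since conjugation by u ⊗ u is an algebra automorphism of Mn(k) ⊗ Mn(k) applied leg by leg, the pentagon equation is preserved on each orbit; the check above confirms this for one concrete orbit point.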
The restricted Jordan theory refers to the following open problem: Problem: Describe the orbits of the action GLn (k)×(Mn (k)⊗Mn (k)) → Mn (k)⊗Mn (k), (U, R) → (U ⊗U )R(U ⊗U )−1 .
We recall that the canonical Jordan form JA of a matrix A is the matrix equivalent to A which has the greatest number of zeros. For practical reasons, in the restricted Jordan theory we are in fact interested in finding the elements of each orbit that have the greatest number of zeros. Of these, we retain only those which are invertible solutions of the pentagon equation. The set of types of n-dimensional Hopf algebras consists of the Hopf algebras associated (using Theorem 72) to the solutions of length n (or, equivalently, those for which the space of invariant elements is n-dimensional); all other Hopf algebras will have a dimension that is a divisor of n². We mention that, as a general rule, the set of types of n-dimensional Hopf algebras is infinite (this was proved recently in [7], [12], [90]). If however we limit ourselves to classifying certain special types of Hopf algebras, then this set can be finite. For instance, the set of types of n-dimensional semisimple and cosemisimple Hopf algebras is finite ([168]). The restricted Jordan theory is also involved in the theory of classification of separable algebras (see the last chapter). This has to do with those orbits whose representatives R = R¹ ⊗ R² ∈ Mn(k) ⊗ Mn(k) are solutions of the separability equation

R12R23 = R23R13 = R13R12 and R¹R² = In.

A new interesting direction related to the study of the pentagon equation is a general representation theory, whose objects are defined below. Let (M, R) be a pair, where M is a vector space and R ∈ End(M ⊗ M) is a solution of the pentagon equation. A representation of (M, R), or an (M, R)-module, is a pair (V, ψV), where V is a vector space and ψV : M ⊗ V → V ⊗ M is a k-linear map such that
as maps M ⊗ M ⊗ V → V ⊗ M ⊗ M . A morphism of two (M, R)-modules (V, ψV ), (W, ψW ) is a k-linear map f : V → W such that (f ⊗ IdM )ψV = ψW (IdM ⊗ f ). (M,R) M will be the monoidal category of (M, R)-modules, where the monoidal structure is given by 23 12 ψV ). (V, ψV ) ⊗ (W, ψW ) = (V ⊗ W, ψW
We will see that the category of representations of a Hopf algebra, or more generally of an algebra, is a subcategory of (M,R)M.

Examples 12. 1. Let L be a Hopf algebra and R_L : L ⊗ L → L ⊗ L the canonical solution of the pentagon equation, R_L(g ⊗ h) = g(1) ⊗ g(2)h, for all g, h ∈ L. Then any left L-module (V, ·) has a natural structure of (L, R_L)-module with the map ψ_V : L ⊗ V → V ⊗ L,
ψ_V(g ⊗ v) = g(2) · v ⊗ g(1)

Indeed, for g, h ∈ L and v ∈ V, we have

ψ_V^{12} τ_{L,V}^{23} (τ_{L,L}R_L)^{12}(g ⊗ h ⊗ v) = (τ_{L,L}R_L)^{23} ψ_V^{12} ψ_V^{23}(g ⊗ h ⊗ v) = g(3)h(2) · v ⊗ g(2)h(1) ⊗ g(1)
6 Hopf modules and the pentagon equation
2. Now let A be a k-algebra and R = RA : A ⊗ A → A ⊗ A,
R(a ⊗ b) = a ⊗ ba
the solution of the pentagon equation constructed in Proposition 127. Let V be a vector space and · : V ⊗ A → V a k-linear map. We define ψV = ψ(V,·) : A ⊗ V → V ⊗ A,
ψV (a ⊗ v) = v · a ⊗ a
Then (V, ψ(V,·)) is an (A, R_A)-module if and only if (V, ·) is a right A-module in the usual sense. Indeed, the statement follows from the formulas

(τ_{A,A}R_A)^{23} ψ_V^{12} ψ_V^{23}(a ⊗ b ⊗ v) = (v · b) · a ⊗ ba ⊗ a

and

ψ_V^{12} τ_{A,V}^{23} (τ_{A,A}R_A)^{12}(a ⊗ b ⊗ v) = v · (ba) ⊗ ba ⊗ a
for all a, b ∈ A, v ∈ V. Hence the category (M,R)M generalizes the usual category of modules over an algebra A.
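The compatibility computation of Example 12 2) can be checked numerically for a small algebra. The sketch below is our own illustration (not part of the text): it takes A = k[x]/(x²) with V = A under the right regular action, encodes ψ_V and τR_A as matrices, and verifies the (A, R_A)-module condition; all helper names are ours.

```python
import numpy as np

n = 2  # A = k[x]/(x^2), basis e0 = 1, e1 = x
# multiplication tensor: e_i * e_j = sum_k m[i, j, k] e_k
m = np.zeros((n, n, n))
for i in range(n):
    for j in range(n):
        if i + j < n:
            m[i, j, i + j] = 1.0

I = np.eye(n)
# the swap map tau on A (x) A, as an n^2 x n^2 matrix
P = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        P[j * n + i, i * n + j] = 1.0

# (tau R_A)(a (x) b) = ba (x) a   and   psi(a (x) v) = v.a (x) a  (with V = A)
tauR = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            tauR[k * n + i, i * n + j] += m[j, i, k]
psi = tauR.copy()  # for V = A with the right regular action, psi coincides with tau R_A

# module condition: psi^12 tau^23 (tau R)^12 = (tau R)^23 psi^12 psi^23 on A (x) A (x) V
lhs = np.kron(psi, I) @ np.kron(I, P) @ np.kron(tauR, I)
rhs = np.kron(I, tauR) @ np.kron(psi, I) @ np.kron(I, psi)
assert np.allclose(lhs, rhs)  # both sides send a(x)b(x)v to v.(ba) (x) ba (x) a
```

The assertion holds precisely because the action is associative, which is the content of the equivalence stated above.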
7 Long dimodules and the Long equation
In this Chapter, we will show that the nonlinear equation R^{12}R^{23} = R^{23}R^{12} (called the Long equation) can be associated to the category H LH of Long dimodules over a bialgebra H. The Long equation is obtained from the quantum Yang-Baxter equation by deleting the middle term from both sides. Our theory is similar to the one developed in Chapters 5 and 6, where we discussed how Yetter-Drinfeld modules and Hopf modules are connected to, respectively, the quantum Yang-Baxter equation and the pentagon equation. A different approach to solving the Long equation, in a general monoidal category, is given in [16].
7.1 The Long equation

Definition 15. Let M be a vector space over a field k. R ∈ End(M ⊗ M) is called a solution of the Long equation if

R^{12}R^{23} = R^{23}R^{12}
(7.1)
in End(M ⊗ M ⊗ M). For later use, we rewrite the Long equation in matrix form. The proof is left to the reader.

Proposition 140. Let {m_1, ..., m_n} be a basis of M, and let R and S ∈ End(M ⊗ M) be given by their matrices (x^{ij}_{uv}) and (y^{ij}_{uv}), i.e.

R(m_u ⊗ m_v) = x^{ij}_{uv} m_i ⊗ m_j    and    S(m_u ⊗ m_v) = y^{ij}_{uv} m_i ⊗ m_j

Then R^{23}S^{12} = S^{12}R^{23} if and only if

x^{ij}_{vk} y^{pv}_{ql} = x^{αj}_{lk} y^{pi}_{qα}    (7.2)

In particular, R is a solution of the Long equation if and only if

x^{ij}_{vk} x^{pv}_{ql} = x^{αj}_{lk} x^{pi}_{qα}    (7.3)

S. Caenepeel, G. Militaru, and S. Zhu: LNM 1787, pp. 301–316, 2002.
© Springer-Verlag Berlin Heidelberg 2002
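Criterion (7.3) can be cross-checked against the operator form of the Long equation. The following numerical sketch is ours (the einsum index letters are chosen to match (7.3)); it uses the diagonal family R(m_u ⊗ m_v) = a_{uv} m_u ⊗ m_v, which will reappear below as a basic family of solutions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
a = rng.standard_normal((n, n))

# x[i, j, u, v] = coefficient of m_i (x) m_j in R(m_u (x) m_v);
# diagonal family: R(m_u (x) m_v) = a_{uv} m_u (x) m_v
x = np.zeros((n, n, n, n))
for u in range(n):
    for v in range(n):
        x[u, v, u, v] = a[u, v]

# matrix criterion (7.3): x^{ij}_{vk} x^{pv}_{ql} = x^{aj}_{lk} x^{pi}_{qa}
lhs = np.einsum('ijvk,pvql->ijkpql', x, x)
rhs = np.einsum('ajlk,piqa->ijkpql', x, x)
assert np.allclose(lhs, rhs)

# operator form: R^{12}R^{23} = R^{23}R^{12} on M (x) M (x) M
R = x.reshape(n * n, n * n)   # rows indexed by (i, j), columns by (u, v)
I = np.eye(n)
R12, R23 = np.kron(R, I), np.kron(I, R)
assert np.allclose(R12 @ R23, R23 @ R12)
```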
In Proposition 141, we give some other nonlinear equations that are equivalent to the Long equation.

Proposition 141. Let M be a vector space and R ∈ End(M ⊗ M). The following statements are equivalent:
1. R is a solution of the Long equation;
2. T = Rτ is a solution of the equation T^{12}T^{13} = T^{23}T^{13}τ^{(123)};
3. U = τR is a solution of the equation U^{13}U^{23} = τ^{(123)}U^{13}U^{12};
4. W = τRτ is a solution of the equation τ^{(123)}W^{23}W^{13} = W^{12}W^{13}τ^{(123)}.

Proof. 1. ⇔ 2. As R = Tτ, R is a solution of the Long equation if and only if

T^{12}τ^{12}T^{23}τ^{23} = T^{23}τ^{23}T^{12}τ^{12}    (7.4)

Now τ^{12}T^{23}τ^{23} = T^{13}τ^{13}τ^{12} and τ^{23}T^{12}τ^{12} = T^{13}τ^{12}τ^{13}, so (7.4) is equivalent to T^{12}T^{13}τ^{13}τ^{12} = T^{23}T^{13}τ^{12}τ^{13}. The equivalence of 1. and 2. follows since τ^{12}τ^{13}τ^{12}τ^{13} = τ^{(123)}.
1. ⇔ 3. R = τU, so R is a solution of the Long equation if and only if

τ^{12}U^{12}τ^{23}U^{23} = τ^{23}U^{23}τ^{12}U^{12}    (7.5)

Using the fact that τ^{12}U^{12}τ^{23} = τ^{23}τ^{13}U^{13} and τ^{23}U^{23}τ^{12} = τ^{23}τ^{12}U^{13}, we find that (7.5) is equivalent to U^{13}U^{23} = τ^{13}τ^{12}U^{13}U^{12}, and we are done since τ^{13}τ^{12} = τ^{(123)}.
1. ⇔ 4. R = τWτ is a solution of the Long equation if and only if

τ^{12}W^{12}τ^{12}τ^{23}W^{23}τ^{23} = τ^{23}W^{23}τ^{23}τ^{12}W^{12}τ^{12}    (7.6)

Using the formulas

τ^{12}τ^{23} = τ^{13}τ^{12},    τ^{23}τ^{12} = τ^{13}τ^{23},
τ^{12}W^{12}τ^{13} = τ^{12}τ^{13}W^{23},    τ^{12}W^{23}τ^{23} = W^{13}τ^{12}τ^{23},
τ^{23}W^{23}τ^{13} = τ^{23}τ^{13}W^{12},    τ^{23}W^{12}τ^{12} = W^{13}τ^{23}τ^{12},
we find that (7.6) is equivalent to τ^{12}τ^{13}W^{23}W^{13}τ^{12}τ^{23} = τ^{23}τ^{13}W^{12}W^{13}τ^{23}τ^{12}, proving the equivalence of 1. and 4., since τ^{13}τ^{23}τ^{12}τ^{13} = τ^{23}τ^{12}τ^{23}τ^{12} = τ^{(123)}.

Examples 13. 1. If R ∈ End(M ⊗ M) is bijective, then R is a solution of the Long equation if and only if R^{−1} is also a solution of the Long equation.
2. Let (m_i)_{i∈I} be a basis of M and (a_{ij})_{i,j∈I} a family of scalars of k. Then R : M ⊗ M → M ⊗ M, R(m_i ⊗ m_j) = a_{ij} m_i ⊗ m_j, for all i, j ∈ I, is a solution of the Long equation. In particular, the identity map Id_{M⊗M} is a solution of the Long equation.
3. Let M be a finite dimensional vector space and u an automorphism of M. If R is a solution of the Long equation, then ^{u}R = (u ⊗ u)R(u ⊗ u)^{−1} is also a solution of the Long equation.
4. Let R ∈ M_4(k) be given by

    ( a 0 0 0 )
R = ( 0 b c 0 )
    ( 0 d e 0 )
    ( 0 0 0 f )
A direct computation shows that R is a solution of the Long equation if and only if c = d = 0. In particular, if q ∈ k, q ≠ 0, q ≠ 1, the two dimensional Yang-Baxter operator R_q is a solution of the QYBE but is not a solution of the Long equation.
5. Let G be a group and M a left kG-module. Assume also that M is G-graded, i.e. M = ⊕_{σ∈G} M_σ, where the M_σ are subspaces of M. If the M_σ are kG-submodules of M, then the map

R : M ⊗ M → M ⊗ M,    R(n ⊗ m) = Σ_{σ∈G} σ · n ⊗ m_σ,    ∀n, m ∈ M    (7.7)

(where m = Σ_σ m_σ is the decomposition of m into homogeneous components)
is a solution of the Long equation. If G is non-abelian, then R is not a solution of the QYBE.
It suffices to show that (7.1) holds for homogeneous elements. Let m_σ ∈ M_σ, m_τ ∈ M_τ and m_θ ∈ M_θ. Then

R^{23}R^{12}(m_σ ⊗ m_τ ⊗ m_θ) = R^{23}(τ · m_σ ⊗ m_τ ⊗ m_θ) = τ · m_σ ⊗ θ · m_τ ⊗ m_θ

and

R^{12}R^{23}(m_σ ⊗ m_τ ⊗ m_θ) = R^{12}(m_σ ⊗ θ · m_τ ⊗ m_θ) = τ · m_σ ⊗ θ · m_τ ⊗ m_θ

and it follows that R is a solution of the Long equation. On the other hand,

R^{12}R^{13}R^{23}(m_σ ⊗ m_τ ⊗ m_θ) = τθ · m_σ ⊗ θ · m_τ ⊗ m_θ

and

R^{23}R^{13}R^{12}(m_σ ⊗ m_τ ⊗ m_θ) = θτ · m_σ ⊗ θ · m_τ ⊗ m_θ

and we see that R is not a solution of the QYBE if G is not abelian.
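The claim of Example 13 4) can be confirmed numerically at sample parameter values. The following sketch is our own (the helper names are ours); it checks the Long equation for the 4 × 4 matrix of Example 13 4), once with c = d = 0 and once with (c, d) ≠ (0, 0).

```python
import numpy as np

def long_sides(R):
    # R acts on M (x) M with dim M = 2; lift to M (x) M (x) M
    I = np.eye(2)
    R12, R23 = np.kron(R, I), np.kron(I, R)
    return R12 @ R23, R23 @ R12

def example4(a, b, c, d, e, f):
    return np.array([[a, 0, 0, 0],
                     [0, b, c, 0],
                     [0, d, e, 0],
                     [0, 0, 0, f]], dtype=float)

# with c = d = 0 the Long equation holds ...
L, Rr = long_sides(example4(1, 2, 0, 0, 3, 5))
assert np.allclose(L, Rr)

# ... and fails at generic parameters with (c, d) != (0, 0)
L, Rr = long_sides(example4(1, 2, 3, 5, 7, 11))
assert not np.allclose(L, Rr)
```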
7.2 The FRT Theorem for the Long equation

We reconsider the Long dimodules introduced in Section 4.5, in the situation where the underlying algebra A and coalgebra C are equal to a given bialgebra H. For the sake of completeness, we recall the Definition. In the case where H is commutative and cocommutative, it is due to Long [118].

Definition 16. Let H be a bialgebra. A (left-right) Long H-dimodule is a triple (M, ·, ρ), where (M, ·) is a left H-module and (M, ρ) is a right H-comodule such that

ρ(h · m) = h · m[0] ⊗ m[1]
(7.8)
for all h ∈ H and m ∈ M. The category of H-dimodules and H-linear H-colinear maps will be denoted by H LH.

Examples 14. 1. Let G be a group. A left-right kG-dimodule M is a kG-module M, together with a family {M_σ | σ ∈ G} of kG-submodules of M such that M = ⊕_{σ∈G} M_σ (cf. Example 13 5)). Indeed, we know that M is a right kG-comodule if M = ⊕_{σ∈G} M_σ, with every M_σ a subspace of M. The compatibility relation (7.8) means exactly that the M_σ are kG-submodules of M.
Let us now suppose that M is a decomposable representation of G with Long length smaller than or equal to the cardinality of G. Let X be a subset of G
and {M_x | x ∈ X} a family of indecomposable k[G]-submodules of M such that M = ⊕_{x∈X} M_x. Then M is a right comodule over k[X] and, as X ⊆ G, M can be viewed as a right k[G]-comodule. Hence M ∈ k[G] Lk[G]. We obtain that the category of representations of G with Long length smaller than or equal to the cardinality of G can be viewed as a subcategory of k[G] Lk[G].
2. For any left H-module (N, ·), N ⊗ H ∈ H LH with structure

h · (n ⊗ l) = h · n ⊗ l,    ρ(n ⊗ l) = n ⊗ l(1) ⊗ l(2)

for all h, l ∈ H, n ∈ N. Thus we have a functor

G = • ⊗ H : H M → H LH

which is a right adjoint of the forgetful functor

F : H LH → H M

3. For any right H-comodule (M, ρ), H ⊗ M ∈ H LH with structure

h · (l ⊗ m) = hl ⊗ m,    ρ_{H⊗M}(l ⊗ m) = l ⊗ m[0] ⊗ m[1]

for all h, l ∈ H, m ∈ M. We obtain a functor

F = H ⊗ • : MH → H LH

which is a left adjoint of the forgetful functor

G : H LH → MH

4. Let (N, ·) be a left H-module. Then, with the trivial structure of right H-comodule, ρ : N → N ⊗ H, ρ(n) = n ⊗ 1, for all n ∈ N, we have (N, ·, ρ) ∈ H LH.
5. Let (M, ρ) be a right H-comodule. Then, with the trivial structure of left H-module, h · m = ε(h)m, for all h ∈ H, m ∈ M, we have (M, ·, ρ) ∈ H LH.

Remarks 22. 1. Let M, N ∈ H LH. M ⊗ N is also an object in H LH; the structure maps are given by

h · (m ⊗ n) = h(1) · m ⊗ h(2) · n,    ρ(m ⊗ n) = m[0] ⊗ n[0] ⊗ m[1]n[1]

for all h ∈ H, m ∈ M, n ∈ N. Also k ∈ H LH, with the trivial structure

h · a = ε(h)a,    ρ(a) = a ⊗ 1

for all h ∈ H, a ∈ k. It is easy to show that (H LH, ⊗, k) is a monoidal category.
2. We have a natural functor

H LH → H⊗H∗ M
Any Long dimodule M is a left H ⊗ H∗-module; the left H ⊗ H∗-action is given by

(h ⊗ h∗) · m := <h∗, m[1]> h · m[0]

for all h ∈ H, h∗ ∈ H∗ and m ∈ M. If H is finite dimensional, then the categories H LH and H⊗H∗ M are isomorphic.
The next Proposition is a generalization of Example 13 5).

Proposition 142. Let H be a bialgebra and (M, ·, ρ) a Long H-dimodule. Then the natural map

R_{(M,·,ρ)}(m ⊗ n) = n[1] · m ⊗ n[0]

is a solution of the Long equation.

Proof. Let R = R_{(M,·,ρ)}. For l, m, n ∈ M we have

R^{12}R^{23}(l ⊗ m ⊗ n) = R^{12}(l ⊗ n[1] · m ⊗ n[0]) = (n[1] · m)[1] · l ⊗ (n[1] · m)[0] ⊗ n[0]
= m[1] · l ⊗ n[1] · m[0] ⊗ n[0] = R^{23}(m[1] · l ⊗ m[0] ⊗ n) = R^{23}R^{12}(l ⊗ m ⊗ n)

and it follows that R is a solution of the Long equation.

Lemma 30. Let H be a bialgebra, (M, ·) a left H-module and (M, ρ) a right H-comodule. Then the set

{h ∈ H | ρ(h · m) = h · m[0] ⊗ m[1], ∀m ∈ M}

is a subalgebra of H.

Proof. Straightforward.

As a direct consequence of Lemma 30, we find that a vector space M on which H acts and coacts is a Long dimodule if and only if the compatibility condition (7.8) is satisfied for m running through a basis of M and h running through a set of algebra generators of H.
Now we will present the FRT Theorem for the Long equation: in the finite dimensional case, every solution R of the Long equation is of the form R = R_{(M,·,ρ)}, where (M, ·, ρ) is a Long dimodule over a bialgebra D(R). As one might expect, the proof is similar to the corresponding proofs of the FRT Theorems for the quantum Yang-Baxter equation and the pentagon equation.

Theorem 74. Let M be a finite dimensional vector space and R ∈ End(M ⊗ M) a solution of the Long equation. Then
1. There exists a bialgebra D(R) acting and coacting on M (with structure maps · and ρ) such that (M, ·, ρ) ∈ D(R) LD(R) and R = R_{(M,·,ρ)}.
2. The bialgebra D(R) is universal with respect to this property: if H is a bialgebra acting and coacting on M (with structure maps ·′ and ρ′) such that (M, ·′, ρ′) ∈ H LH and R = R_{(M,·′,ρ′)}, then there exists a unique bialgebra map f : D(R) → H such that ρ′ = (I ⊗ f)ρ and a · m = f(a) ·′ m, for all a ∈ D(R), m ∈ M.

Proof. 1. As usual, let {m_1, ..., m_n} be a basis for M and (x^{ij}_{uv}) the matrix of R, i.e.

R(m_u ⊗ m_v) = x^{ij}_{uv} m_i ⊗ m_j    (7.9)

Let (C, ∆, ε) = Mn(k) be the comatrix coalgebra of order n. The tensor algebra T(C) has a unique bialgebra structure, with comultiplication and counit extending ∆ and ε. Following the arguments in the proof of Theorem 66, we find a left T(C)-action and a right T(C)-coaction on M such that R = R_{(M,·,ρ)}. The action and coaction are given by the formulas

ρ(m_l) = m_v ⊗ c^v_l    (7.10)
c^j_u · m_l = x^{ij}_{lu} m_i    (7.11)

We now define the obstructions o^{ij}_{kl}, measuring how far M is from being a Long dimodule over T(C). Keeping in mind the fact that T(C) is generated as an algebra by the c^i_j, and using Lemma 30, we restrict ourselves to computing h · m[0] ⊗ m[1] − ρ(h · m) for h = c^j_k and m = m_l:

h · m[0] ⊗ m[1] = c^j_k · m_{l[0]} ⊗ m_{l[1]} = c^j_k · m_v ⊗ c^v_l = x^{ij}_{vk} m_i ⊗ c^v_l = m_i ⊗ x^{ij}_{vk} c^v_l

and

ρ(h · m) = ρ(c^j_k · m_l) = x^{αj}_{lk} m_{α[0]} ⊗ m_{α[1]} = m_i ⊗ x^{αj}_{lk} c^i_α

Writing

o^{ij}_{kl} = x^{ij}_{vk} c^v_l − x^{αj}_{lk} c^i_α    (7.12)

we find

h · m[0] ⊗ m[1] − ρ(h · m) = m_i ⊗ o^{ij}_{kl}    (7.13)

Let I be the two-sided ideal of T(C) generated by all the o^{ij}_{kl}. We claim that I is a bi-ideal of T(C) and that I · M = 0. The fact that I is also a coideal will result from the following formula:
∆(o^{ij}_{kl}) = o^{ij}_{ku} ⊗ c^u_l + c^i_u ⊗ o^{uj}_{kl}    (7.14)

First we observe that (7.12) can be rewritten as

o^{ij}_{kl} = x^{ij}_{vk} c^v_l − x^{vj}_{lk} c^i_v

so

∆(o^{ij}_{kl}) = x^{ij}_{vk} c^v_u ⊗ c^u_l − x^{vj}_{lk} c^i_u ⊗ c^u_v
= x^{ij}_{vk} c^v_u ⊗ c^u_l − c^i_u ⊗ x^{vj}_{lk} c^u_v
= (o^{ij}_{ku} + x^{vj}_{uk} c^i_v) ⊗ c^u_l − c^i_u ⊗ (−o^{uj}_{kl} + x^{uj}_{vk} c^v_l)
= o^{ij}_{ku} ⊗ c^u_l + c^i_u ⊗ o^{uj}_{kl}

proving that (7.14) holds. Now

ε(o^{ij}_{kl}) = x^{ij}_{lk} − x^{ij}_{lk} = 0

so we have proved that I is a coideal of T(C). I · M = 0 will follow from the fact that R is a solution of the Long equation. For any n ∈ M, we compute

R^{23}R^{12}(n ⊗ m_k ⊗ m_j) = R^{23}(c^α_k · n ⊗ m_α ⊗ m_j) = c^α_k · n ⊗ c^s_j · m_α ⊗ m_s
= c^α_k · n ⊗ x^{rs}_{αj} m_r ⊗ m_s = x^{rs}_{αj} c^α_k · n ⊗ m_r ⊗ m_s

and

R^{12}R^{23}(n ⊗ m_k ⊗ m_j) = R^{12}(n ⊗ c^s_j · m_k ⊗ m_s) = R^{12}(n ⊗ x^{αs}_{kj} m_α ⊗ m_s)
= x^{αs}_{kj} c^r_α · n ⊗ m_r ⊗ m_s

so it follows that

(R^{23}R^{12} − R^{12}R^{23})(n ⊗ m_k ⊗ m_j) = (x^{rs}_{αj} c^α_k − x^{αs}_{kj} c^r_α) · n ⊗ m_r ⊗ m_s = o^{rs}_{jk} · n ⊗ m_r ⊗ m_s    (7.15)

Now R is a solution of the Long equation, hence o^{rs}_{jk} · n = 0, for all n ∈ M, and so I · M = 0. Now we define

D(R) = T(C)/I

D(R) coacts on M via the canonical projection T(C) → D(R), and D(R) acts on M since I · M = 0. The c^i_j generate D(R) and o^{ij}_{kl} = 0 in D(R), so
we find from (7.13) that (M, ·, ρ) ∈ D(R) LD(R) and R = R_{(M,·,ρ)}.
2. Let H be a bialgebra and suppose that (M, ·′, ρ′) ∈ H LH is such that R = R_{(M,·′,ρ′)}. Let (c′^i_j)_{i,j=1,...,n} be a family of elements in H such that

ρ′(m_l) = m_v ⊗ c′^v_l

Then R(m_v ⊗ m_u) = c′^j_u ·′ m_v ⊗ m_j, and

c′^j_u ·′ m_v = x^{ij}_{vu} m_i = c^j_u · m_v

Let

o′^{ij}_{kl} = x^{ij}_{vk} c′^v_l − x^{αj}_{lk} c′^i_α

From the universal property of the tensor algebra T(C), it follows that there exists a unique algebra map f_1 : T(C) → H such that f_1(c^i_j) = c′^i_j, for all i, j = 1, ..., n. Now (M, ·′, ρ′) ∈ H LH, so 0 = o′^{ij}_{kl} = f_1(o^{ij}_{kl}), for all i, j, k, l = 1, ..., n. So the map f_1 factorizes through a map

f : D(R) → H,    f(c^i_j) = c′^i_j

Obviously we have, for any l ∈ {1, ..., n},

(I ⊗ f)ρ(m_l) = m_v ⊗ f(c^v_l) = m_v ⊗ c′^v_l = ρ′(m_l)

so ρ′ = (I ⊗ f)ρ. Conversely, (I ⊗ f)ρ = ρ′ necessarily implies f(c^i_j) = c′^i_j, proving the uniqueness of f. This completes the proof of the theorem.

Remark 23. In the graded algebra T(Mn(k)) the obstruction elements o^{ij}_{kl} are of degree one, i.e. they are elements of the comatrix coalgebra Mn(k). This will lead us, in the next section, to the study of some special functions defined only for a coalgebra, which will also play an important role in solving the Long equation.

Examples 15. 1. Let a, b, c ∈ k and R ∈ M4(k) be given by equation (5.10). R is then a solution of the quantum Yang-Baxter equation and of the Long equation. We will describe the bialgebra D(R) obtained by considering R as a solution of the Long equation. If (b, c) = (0, 0) then R = 0 and D(R) = T(M4(k)). Suppose now that (b, c) ≠ (0, 0), and write

R(m_u ⊗ m_v) = Σ_{i,j=1}^{2} x^{ij}_{uv} m_i ⊗ m_j

The only x^{ij}_{uv} that are different from zero are
x^{11}_{11} = ab,    x^{11}_{12} = ac,    x^{11}_{21} = b,
x^{21}_{11} = ab,    x^{21}_{12} = c,     x^{12}_{22} = b,    (7.16)
x^{12}_{12} = ab,    x^{21}_{22} = ac,    x^{22}_{22} = ab
The sixteen relations o^{ij}_{kl} = 0 are the following ones, written in lexicographical order in (i, j, k, l).
0 = 0,    abc11 + bc21 = abc11,    abc12 + bc22 = bc11 + abc12,    acc11 + cc21 = acc11,
acc12 + cc22 = cc11 + acc12,    0 = 0,    abc21 = abc21,    0 = 0,
abc11 + bc21 = abc11,    abc22 = abc22,    0 = 0,    abc12 + bc22 = bc11 + abc12,
acc21 = acc21,    abc21 = abc21,    acc22 = cc21 + acc22,    abc22 = abc22 + bc21
The only nontrivial relations are

bc21 = 0,    bc22 = bc11,    cc21 = 0,    cc22 = cc11

As (b, c) ≠ (0, 0), these reduce to

c21 = 0,    c22 = c11
Now, if we denote c11 = x, c12 = y we obtain that D(R) can be described as follows: – As an algebra D(R) = k < x, y >, the free algebra generated by x and y. – The comultiplication ∆ and the counit ε are given by ∆(x) = x ⊗ x,
∆(y) = x ⊗ y + y ⊗ x,
ε(x) = 1,
ε(y) = 0.
We observe that the bialgebra D(R) does not depend on the parameters a, b, c. 2. Take q ∈ k, and consider the solution Rq ∈ M4 (k) of the Hopf equation constructed in Proposition 135. Rq is also a solution of the Long equation, as Rq has the form f ⊗ g with f g = gf . In Proposition 135 we have described the bialgebra B(Rq ), using the FRT Theorem for the Hopf equation. The description of D(Rq ), using the FRT Theorem for the Long equation is much simpler, and does not depend on the parameter q. – As an algebra D(Rq ) = k < x, y >, the free algebra generated by x and y. – x and y are grouplike elements. This describes the coalgebra structure. This construction follows from a computation similar to the one in the previous example. The only surviving obstruction relations are c21 = 0,
c12 = q(c11 − c22 )
We obtain the above construction after we put c11 = x and c22 = y.
7.3 Long coalgebras
311
7.3 Long coalgebras In this Section we will define Long maps on coalgebras: if C is a coalgebra and I a coideal of C, then a Long map is a k-linear map σ : C⊗C/I → k satisfying (7.17). This condition ensures that, for any right C-comodule (M, ρ), the natural map R(σ,M,ρ) is a solution of the Long equation. Conversely, over a finite dimensional vector space M , any solution of the Long equation arises in this way. Hence Long maps on a coalgebra may be viewed as a Long equation analog of coquasitriangular bialgebras. The image of c ∈ C in the quotient coalgebra will be denoted by c. If (M, ρ) is a right C-comodule, then then (M, ρ) is a right C/I-comodule via ρ(m) = m[0] ⊗ m[1] , for all m ∈ M . Definition 17. Let C be a coalgebra and I be a coideal of C. A k-linear map σ : C ⊗ C/I → k is called a Long map if σ(c(1) ⊗ d) c(2) = σ(c(2) ⊗ d) c(1)
(7.17)
for all c, d ∈ C. If I = 0, then σ is called proper Long map and (C, σ) is called a Long coalgebra. Examples 16. 1. If C is cocommutavive then any k-linear map σ : C ⊗ C/I → k is a Long map. In particular, any cocommutative coalgebra is a Long coalgebra for any σ : C ⊗ C → k. 2. Let C be a coalgebra, I a coideal, and f ∈ Homk (C/I, k). Then σf : C ⊗ C/I → k,
σf (c ⊗ d) = ε(c)f (d)
for all c, d ∈ C, is a Long map. In particular, any coalgebra has a trivial structure of Long coalgebra via σ : C ⊗ C → k, σ(c ⊗ d) = ε(c)ε(d). 3. Let C = Mn (k) be the comatrix coalgebra of order n. For any a ∈ k, the map σ : C ⊗ C → k, σ(cij ⊗ cpq ) = δji a is a proper Long map. Indeed, for c = cij , d = cpq , we have σ(c(1) ⊗ d)c(2) = σ(cit ⊗ cpq )ctj = acij , and σ(c(2) ⊗ d)c(1) = σ(ctj ⊗ cpq )cit = acij , Proposition 143. Let σ : C ⊗ C/I → k a Long map, and consider a right C-comodule (M, ρ). Then the map R(σ,M,ρ) : M ⊗ M → M ⊗ M,
R(σ,M,ρ) (m ⊗ n) = σ(m[1] ⊗ n[1] )m[0] ⊗ n[0]
is a solution of the Long equation.
312
7 Long dimodules and the Long equation
Proof. A straightforward computation. We write R = R(σ,M,ρ) . For all l, m, n ∈ M , we have R12 R23 (l ⊗ m ⊗ n) = R12 σ(m[1] ⊗ n[1] )l ⊗ m[0] ⊗ n[0] = σ(m[2] ⊗ n[1] )σ(l[1] ⊗ m[1] )l[0] ⊗ m[0] ⊗ n[0] (7.17)
= σ(l[1] ⊗ m[2] )σ(m[1] ⊗ n[1] )l[0] ⊗ m[0] ⊗ n[0] = R23 σ(l[1] ⊗ m[1] )l[0] ⊗ m[0] ⊗ n = R23 R12 (l ⊗ m ⊗ n)
so R is a solution of the Long equation. Theorem 75. Let M be an n-dimensional vector space and R ∈ End(M ⊗M ) a solution of the Long equation. 1. There exists a coideal I(R) of the comatrix coalgebra Mn (k), a coaction ρ of Mn (k) on M , and a unique Long map σ : Mn (k) ⊗ Mn (k)/I(R) → k such that R = R(σ,M,ρ) . Furthermore, if R is bijective, σ is convolution invertible in Homk (Mn (k) ⊗ Mn (k)/I(R), k). 2. If R is commutative or Rτ = τ R, then there exists a Long coalgebra (L(R), σ ˜ ) and a coaction ρ˜ of L(R) on M such that R = R(˜σ,M,ρ) ˜ . Proof. 1. As before, let {m1 , · · · , mn }, and write R in matrix form R(mu ⊗ mv ) = xij uv mi ⊗ mj
(7.18)
The comatrix coalgebra Mn (k) coacts on M : ρ(ml ) = mv ⊗ cvl Let I(R) be the k-subspace of Mn (k) generated by the oij kl . We know from (7.14) that I(R) is a coideal of Mn (k). We will first prove that σ is unique. Let σ : Mn (k) ⊗ Mn (k)/I(R) → k be a Long map such that R = Rσ . Then Rσ (mv ⊗ mu ) = σ (mv )[1] ⊗ (mu )[1] (mv )[0] ⊗ (mu )[0] = σ(civ ⊗ cju )mi ⊗ mj and it follows using (7.18) that σ(civ ⊗ cju ) = xij uv and σ is completely determined. We will now prove the existence of σ. First define
(7.19)
7.3 Long coalgebras
σ0 : Mn (k) ⊗ Mn (k) → k,
313
σ0 (civ ⊗ cju ) = xij uv
We have to show that σ0 factorizes through a map σ : Mn (k) ⊗ Mn (k)/I(R) → k To this end, it suffices to show that σ0 (Mn (k) ⊗ I(R)) = 0. This can be seen as follows ij αj p v p i σ0 (cpq ⊗ oij kl ) = xvk σ0 (cq ⊗ cl ) − xlk σ0 (cq ⊗ cα ) pv αj pi = xij vk xql − xlk xqα = 0
We still have to show that σ is a Long map. For c = cij and d = cpq , we find v σ(c(1) ⊗ d) c(2) = σ(civ ⊗ cpq ) cvj = xip vq cj
and p αp i i σ(c(2) ⊗ d) c(1) = σ(cα j ⊗ cq ) cα = xjq cα
so σ(c(1) ⊗ d) c(2) − σ(c(2) ⊗ d) c(1) = oip qj = 0 and it follows that σ is a Long-map. ij ) be a family of Suppose now that R is bijective and let S = R−1 . Let (yuv scalars of k such that ij S(mu ⊗ mv ) = yuv mi ⊗ m j ,
S is the inverse of R, so αβ pi αβ p i xpi αβ yqj = δq δj = yαβ xqj
We define σ0 : Mn (k) ⊗ Mn (k) → k,
ij σ0 (civ ⊗ cju ) = yvu
First we prove that σ0 is a convolution inverse of σ0 . We easily compute that σ0 (cpq )(1) ⊗ (cij )(1) σ0 (cpq )(2) ⊗ (cij )(2) β pi αβ p i i p = σ0 (cpα ⊗ ciβ )σ0 (cα q ⊗ cj ) = xαβ yqj = δq δj = ε(cj )ε(cq )
and
σ0 (cpq )(1) ⊗ (cij )(1) σ0 (cpq )(2) ⊗ (cij )(2) β pi αβ p i i p = σ0 (cpα ⊗ ciβ )σ0 (cα q ⊗ cj ) = yαβ xqj = δq δj = ε(cj )ε(cq )
We next show that σ0 factorizes through a map σ : Mn (k)⊗Mn (k)/I(R) → k. We have
314
7 Long dimodules and the Long equation ij pv αj pi σ0 (cpq ⊗ oij kl ) = xvk yql − xlk yqα
S = R−1 and R is a solution of the Long equation, so R23 S 12 = S 12 R23 . Using (7.2) we obtain that σ0 (cpq ⊗ oij kl ) = 0, hence σ0 factorizes through a map σ . σ is a convolution inverse of σ. 2. Suppose first that Rτ = τ R. Then ij xji uv = xvu
(7.20)
Let L(R) = Mn (k)/I(R) The rest of the proof is similar to the proof of part 1), we only have to prove that the map σ : Mn (k) ⊗ Mn (k)/I(R) → k factorizes through a map σ ˜ : L(R) ⊗ L(R) → k. This can be seen as follows p ij p αj p v i σ(oij kl ⊗ cq ) = xvk σ(cl ⊗ cq ) − xlk σ(cα ⊗ cq ) vp αj pi = xij vk xlq − xlk xαq
(7.20)
pv αj pi = xij vk xql − xlk xqα = 0
If R is commutative, then we can prove that R12 R13 = R13 R12 if and only if vp αj ip xij vk xlq = xlk xαq
Thus, p ij vp αj ip σ(oij kl ⊗ cq ) = xvk xlq − xlk xαq = 0
and again σ factorizes through a map σ ˜ : L(R) ⊗ L(R) → k. As an immediate consequence, we obtain Corollary 39. Let M be a finite dimensional vector space and R ∈ End(M ⊗ M ). The following statements are equivalent: 1. R is a solution of the system 12 13 R R = R13 R12 R12 R23 = R23 R12
(7.21)
in End(M ⊗ M ⊗ M ); 2. there exists a Long coalgebra (L(R), σ) and a structure of right L(R)comodule (M, ρ) such that R = R(σ,M,ρ) . Remark 24. If R is a commutative solution of the Long equation, then R satisfies the integrability condition [R12 , R13 + R23 ] = 0 which appears in the study of the Knizhnik-Zamolodchikov equation (see [108], [166]).
7.3 Long coalgebras
315
Examples 17. 1. Let C = M4 (k) and I the two dimensional k-subspace of C with k-basis {c21 , c22 − c11 }. Then I is a coideal of C. Take scalars a, b, c ∈ k and let (xij uv ) be given by the formulas (7.16). Then σ : M4 (k) ⊗ M4 (k)/I → k,
σ(ciu ⊗ cjv ) = xij uv
is a Long map. 2. Let C = M4 (k), q ∈ k and I the two-dimensional coideal of C with k-basis {c21 , c12 + qc22 − qc11 }. Let x11 12 = −q,
x12 12 = 1,
2 x11 22 = −q ,
x12 22 = q
and (xij uv ) = 0 in all other situations. Then σ : M4 (k) ⊗ M4 (k)/I → k,
σ(ciu ⊗ cjv ) = xij uv
is a Long map. 3. Let C = M4 (k) and I the two-dimensional coideal of C with basis {c12 , c21 }. Let a, b ∈ k and σ : M4 (k) ⊗ M4 (k)/I → k such that σ(c11 ⊗ c11 ) = a,
σ(c11 ⊗ c22 ) = b
and all others are zero. Then σ is a Long map.
4. Let n be a positive integer and φ : {1, ..., n} → {1, ..., n} a function with φ² = φ. It is easy to see that R = (x^{kl}_{ij}) given by

x^{ji}_{uv} = δ_{uv} δ_{φ(i)v} δ_{φ(j)v},    for all i, j, u, v = 1, ..., n

is a commutative solution of the Long equation. Hence we can construct the Long coalgebra L(R). By a long but trivial computation we can show that the coalgebra L(R) is a quotient of the comatrix coalgebra Mn(k) through the coideal I generated by the elements

c_{φ(j)l},    for all l ≠ φ(j)
Σ_{α∈φ⁻¹(φ(j))} c_{iα},    for all φ(i) ≠ φ(j)    (7.22)
c_{φ(i)φ(i)} − Σ_{α∈φ⁻¹(φ(i))} c_{iα},    for all i = 1, ..., n

We will now construct a specific example. Let n = 4 and φ given by

φ(1) = 1,
φ(2) = φ(3) = φ(4) = 2.
Then I is the coideal generated by (cf. (7.22)): c_{1l}, c_{2j}, c_{i1}, c_{32} + c_{33} + c_{34} − c_{22}, c_{42} + c_{43} + c_{44} − c_{22}, for all l ≠ 1, j ≠ 2, i ≠ 1. Now, if we denote c_{11} = x_1, c_{22} = x_2, c_{32} = x_3, c_{33} = x_4, c_{42} = x_5, c_{44} = x_6, we obtain the description of the corresponding Long coalgebra L(R): L(R) is the six dimensional vector
space with {x_1, ..., x_6} as a basis. The comultiplication ∆ and the counit ε are given by

∆(x_1) = x_1 ⊗ x_1,    ∆(x_2) = x_2 ⊗ x_2,
∆(x_3) = x_3 ⊗ x_2 + x_4 ⊗ x_3 + (x_2 − x_3 − x_4) ⊗ x_5,
∆(x_4) = x_4 ⊗ x_4 + (x_2 − x_3 − x_4) ⊗ (x_2 − x_5 − x_6),
∆(x_5) = x_5 ⊗ x_2 + (x_2 − x_5 − x_6) ⊗ x_3 + x_6 ⊗ x_5,
∆(x_6) = (x_2 − x_5 − x_6) ⊗ (x_2 − x_3 − x_4) + x_6 ⊗ x_6,
ε(x_1) = ε(x_2) = ε(x_4) = ε(x_6) = 1,
ε(x3 ) = ε(x5 ) = 0.
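The coalgebra axioms for this six dimensional L(R) can be verified mechanically. The sketch below is ours: it encodes ∆ as a 6 × 6 × 6 array D (with ∆(x_c) = Σ D[a, b, c] x_a ⊗ x_b) and checks coassociativity and the counit property.

```python
import numpy as np

D = np.zeros((6, 6, 6))  # basis x1..x6, indices 0..5

def add(c, pairs):
    # pairs: list of (coeff, a, b) contributing coeff * x_a (x) x_b to Delta(x_c)
    for coeff, a, b in pairs:
        D[a, b, c] += coeff

def prod(u, v):
    # tensor product of two linear combinations, given as (coeff, index) lists
    return [(cu * cv, a, b) for cu, a in u for cv, b in v]

def one(i):
    return [(1, i)]

y = [(1, 1), (-1, 2), (-1, 3)]   # x2 - x3 - x4
z = [(1, 1), (-1, 4), (-1, 5)]   # x2 - x5 - x6

add(0, prod(one(0), one(0)))                                     # Delta(x1)
add(1, prod(one(1), one(1)))                                     # Delta(x2)
add(2, prod(one(2), one(1)) + prod(one(3), one(2)) + prod(y, one(4)))
add(3, prod(one(3), one(3)) + prod(y, z))
add(4, prod(one(4), one(1)) + prod(z, one(2)) + prod(one(5), one(4)))
add(5, prod(z, y) + prod(one(5), one(5)))

eps = np.array([1, 1, 0, 1, 0, 1], dtype=float)

# coassociativity: (Delta (x) id)Delta = (id (x) Delta)Delta
assert np.allclose(np.einsum('abm,mcd->abcd', D, D),
                   np.einsum('amd,bcm->abcd', D, D))
# counit: (eps (x) id)Delta = id = (id (x) eps)Delta
assert np.allclose(np.einsum('m,mbd->bd', eps, D), np.eye(6))
assert np.allclose(np.einsum('amd,m->ad', D, eps), np.eye(6))
```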
8 The Frobenius-Separability equation
We introduce and study the Frobenius-separability equation (or FS-equation) R12 R23 = R23 R13 = R13 R12 ; we will see that it implies the braid equation, in the sense that all solutions of the FS-equation are solutions of the braid equation. The FS-equation can be used to determine the structure of separable algebras and Frobenius algebras. Given a solution R of the FS-equation satisfying a certain normalizing condition, we construct a Frobenius or a separable algebra A(R) that can be described using generators and relations. Furthermore, any finite dimensional Frobenius or separable algebra is isomorphic to such an A(R). It is remarkable that the same equation can be used to describe two different kinds of algebras, namely separable algebras and Frobenius algebras. We had a similar phenomenon in Chapter 3, where we gave a categorical explanation for the relation between separability properties and Frobenius type properties. Here also we will see that the difference lies in the normalizing properties.
8.1 Frobenius algebras and separable algebras

Let A be a k-algebra. An element e = e^1 ⊗ e^2 ∈ A ⊗ A (summation implicitly understood) will be called A-central if for any a ∈ A we have

a · e = e · a
(8.1)
where A ⊗ A is viewed as an A-bimodule in the usual way a1 · (b ⊗ c) · a2 = a1 b ⊗ ca2 for all a1 , a2 , b, c ∈ A. Of course, there exists a bijection between the set of all A-central elements and the set of all A-bimodule maps ∆ : A → A ⊗ A. Recall that A is called a separable algebra if there exists a separability idempotent, that is an A-central element e = e1 ⊗ e2 ∈ A ⊗ A satisfying the normalizing separability condition e1 e2 = 1.
(8.2)
A is separable if and only if the multiplication map m : A ⊗ A → A splits in the category A MA of A-bimodules.
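For A = Mn(k), a standard separability idempotent is e = Σ_i E_{i1} ⊗ E_{1i}, where the E_{ij} are the matrix units. The sketch below is our own numerical check of the centrality condition (8.1) and the normalization (8.2) for this element:

```python
import numpy as np

n = 3
def E(i, j):
    # matrix unit E_{ij} in A = M_n(k) (0-indexed)
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

# e = sum_i E_{i1} (x) E_{1i}, stored as a list of (e^1, e^2) summands
e = [(E(i, 0), E(0, i)) for i in range(n)]

rng = np.random.default_rng(2)
a = rng.standard_normal((n, n))

# A-centrality (8.1): a.e = e.a in A (x) A, checked as 4-index arrays
ae = sum(np.einsum('pq,rs->pqrs', a @ e1, e2) for e1, e2 in e)
ea = sum(np.einsum('pq,rs->pqrs', e1, e2 @ a) for e1, e2 in e)
assert np.allclose(ae, ea)

# normalization (8.2): e^1 e^2 = 1_A
assert np.allclose(sum(e1 @ e2 for e1, e2 in e), np.eye(n))
```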
S. Caenepeel, G. Militaru, and S. Zhu: LNM 1787, pp. 317–343, 2002. c Springer-Verlag Berlin Heidelberg 2002
Remark 25. We have proved in Proposition 12 that any separable k-algebra is finite dimensional over k, and in Proposition 13 we have shown that any separable algebra is semisimple. Also recall that a k-coalgebra C is called coseparable if there exists a coseparability idempotent, that is, a k-linear map σ : C ⊗ C → k such that

σ(c ⊗ d(1))d(2) = σ(c(2) ⊗ d)c(1)    and    σ(c(1) ⊗ c(2)) = ε(c)

for all c, d ∈ C.
For a k-algebra A, the dual A∗ = Hom(A, k) is an (A, A)-bimodule, with A-actions given by the formula (a · f · b)(x) = f(bxa), for all a, b, x ∈ A and f ∈ A∗. We recall that a k-algebra A is called a Frobenius algebra if A is finite dimensional over k and there exists an isomorphism of left A-modules A ≅ A∗. Furthermore, A is called a symmetric algebra if A ≅ A∗ as A-bimodules. Of course, a symmetric algebra is always a Frobenius algebra.

Remark 26. Using Wedderburn's Theorem, Eilenberg and Nakayama proved that any semisimple k-algebra is symmetric (cf. e.g. [113, p. 443]). In particular, any separable k-algebra is Frobenius. There exist necessary and sufficient conditions for a Frobenius algebra to be separable; for a detailed discussion, we refer to [100]. We will see below that a Frobenius algebra is separable if its characteristic element ω_A is invertible.

Proposition 144. Let A be a k-algebra, ∆ : A → A ⊗ A an A-bimodule map and e = ∆(1_A).
1. In A ⊗ A ⊗ A, we have the equality

e^{12}e^{23} = e^{23}e^{13} = e^{13}e^{12}
(8.3)
2. ∆ is coassociative;
3. If (A, ∆, ε) is a coalgebra structure on A, then A is finite dimensional over k.

Proof. 1. From the fact that ∆ is an A-bimodule map, it follows immediately that e is A-central. Write E = E^1 ⊗ E^2 = e. Then

e^{12}e^{23} = (e^1 ⊗ e^2 ⊗ 1)(1 ⊗ E^1 ⊗ E^2) = e^1 ⊗ e^2E^1 ⊗ E^2 = E^1e^1 ⊗ e^2 ⊗ E^2 = e^{13}e^{12}

and

e^{12}e^{23} = e^1 ⊗ e^2E^1 ⊗ E^2 = e^1 ⊗ E^1 ⊗ E^2e^2 = e^{23}e^{13}
2. follows from 1. and the formulas

(∆ ⊗ I)∆(a) = e^{12}e^{23} · (1_A ⊗ 1_A ⊗ a),    (I ⊗ ∆)∆(a) = e^{23}e^{13} · (1_A ⊗ 1_A ⊗ a)

3. Let a ∈ A. Applying ε ⊗ I_A and I_A ⊗ ε to (8.1), we obtain, using the fact that ε is a counit map,

a = ε(ae^1)e^2 = e^1ε(e^2a)

and it follows that {e^1, ε(e^2 •)} (or {e^2, ε(• e^1)}) are dual bases of A as a vector space over k.
The next Corollary is a special case of item 3) of Theorem 27; the equivalence 1) ⇔ 3) has been proved by Abrams (see [4, Theorem 2.1]).

Corollary 40. For a k-algebra A, the following statements are equivalent:
1. A is a Frobenius algebra;
2. there exist e = e^1 ⊗ e^2 ∈ A ⊗ A and ε ∈ A∗ such that e is A-central and the normalizing Frobenius condition

ε(e^1)e^2 = e^1ε(e^2) = 1_A
(8.4)
is satisfied; (ε, e) is called a Frobenius pair;
3. there exists a coalgebra structure (A, ∆_A, ε_A) on A such that the comultiplication ∆_A : A → A ⊗ A is an A-bimodule map.

Proof. 1. ⇔ 2. is a special case of Theorem 27. 2. ⇔ 3.: observe that an A-bimodule map ∆ : A → A ⊗ A is completely determined by e = ∆(1_A), and that there is a bijective correspondence between the set of all A-central elements and the set of all A-bimodule maps ∆ : A → A ⊗ A, and use the second statement in Proposition 144. It is easy to see that the counit property is satisfied if and only if (8.4) holds.

Let A be a Frobenius algebra over a field k. The element ω_A = (m_A ◦ ∆)(1_A) ∈ A is called the characteristic element of A; it generalizes the classical Euler class e(X) of a connected orientated compact manifold X ([5]). If the characteristic element ω_A is invertible in A, then A is a separable k-algebra. Indeed, let ∆(1_A) = e^1 ⊗ e^2. Then ae^1 ⊗ e^2 = e^1 ⊗ e^2a for all a ∈ A. Hence ω_A ∈ Z(A), the center of A. It follows that its inverse ω_A^{−1} is also an element of Z(A). Now,

R = ω_A^{−1}(e^1 ⊗ e^2)
is a separability idempotent, so A is a separable k-algebra. Our next result is the coalgebra version of Corollary 40.
Theorem 76. For a coalgebra C over a field k, the following statements are equivalent: 1. C is a co-Frobenius coalgebra, i.e. the forgetful functor F : C M → k M is Frobenius; 2. C is finite dimensional and there exists an associative and unitary algebra structure (C, mC , 1C ) on C such that the multiplication mC : C ⊗C → C is left and right C-colinear.
8.2 The Frobenius-separability equation

Proposition 144 leads us to the following definition.

Definition 18. Let A be a k-algebra and R = R1 ⊗ R2 ∈ A ⊗ A.
1. R is called a solution of the FS-equation (or Frobenius-separability equation) if

R12 R23 = R23 R13 = R13 R12   (8.5)

in A ⊗ A ⊗ A.
2. R is called a solution of the S-equation (or separability equation) if R is a solution of the FS-equation and the normalizing separability condition holds:

R1 R2 = 1A.   (8.6)

3. (R, ε) is called a solution of the F-equation (or Frobenius equation) if R is a solution of the FS-equation, and ε ∈ A* is such that the normalizing Frobenius condition holds:

ε(R1)R2 = R1 ε(R2) = 1A.   (8.7)
4. Two solutions R and S of the FS-equation are called equivalent if there exists an invertible element u ∈ A such that S = (u ⊗ u)R(u^{-1} ⊗ u^{-1}).

Remarks 23. 1. The FS-equation appeared first in [15, Lemma 3.6]. If R is a solution of the FS-equation, then R is also a solution of the braid equation

R12 R23 R12 = R23 R12 R23.   (8.8)
2. In many applications, the algebra A is of the form A = End(M ), where M is a finite dimensional vector space. Then we can view R as an element of End(M ⊗ M ), using the canonical isomorphism End(M ) ⊗ End(M ) ∼ = End(M ⊗ M ). (8.5) can then be viewed as an equation in End(M ⊗ M ⊗ M ). This is what we will do in Proposition 145 and in most of the examples in this Section. 3. We will prove now that the FS-equation is at the same time an associativity and coassociativity constraint. First, let H be a Hopf algebra and let
R = β : H ⊗ H → H ⊗ H,   R(g ⊗ h) = gh(1) ⊗ h(2)
be the canonical map that shows that H is a Hopf-Galois extension of k. We have proved in Examples 10 that R is a solution of the Hopf equation. Moreover, the comultiplication ∆H and the multiplication mH of H can be recovered from R, namely

∆H(h) = R(1H ⊗ h)   and   mH = (Id ⊗ ε)R.

We generalize this construction as follows. Let M be a vector space, 1M ∈ M, ε ∈ M*, and R ∈ End(M ⊗ M). We define

∆ = ∆R : M → M ⊗ M,   ∆(x) = R(1M ⊗ x)

for all x ∈ M, and

m = mR : M ⊗ M → M,   m = (Id ⊗ ε)R.

Then we have

(∆ ⊗ Id)∆(x) = (R12 R23)(1M ⊗ 1M ⊗ x),   (Id ⊗ ∆)∆(x) = (R23 R13)(1M ⊗ 1M ⊗ x)

for all x ∈ M, and

m(Id ⊗ m) = (Id ⊗ ε ⊗ ε)R12 R23,   m(m ⊗ Id) = (Id ⊗ ε ⊗ ε)R13 R12.
We conclude that, if R is a solution of the FS-equation, then ∆R is coassociative and mR is associative.
The FS-equation can be rewritten in matrix form. We follow the notation introduced in Section 5.1; summation over repeated indices is understood. Fix a basis {m1, · · · , mn} of M. A linear map R : M ⊗ M → M ⊗ M can be described by its matrix X = (x^{ij}_{uv}). We have

R(mu ⊗ mv) = x^{ij}_{uv} mi ⊗ mj   (8.9)

and

R = x^{ij}_{uv} e^u_i ⊗ e^v_j.   (8.10)

It is then straightforward to compute

R12 R23 (mu ⊗ mv ⊗ mw) = x^{ij}_{uk} x^{kl}_{vw} mi ⊗ mj ⊗ ml,   (8.11)
R23 R13 (mu ⊗ mv ⊗ mw) = x^{jl}_{vk} x^{ik}_{uw} mi ⊗ mj ⊗ ml,   (8.12)
R13 R12 (mu ⊗ mv ⊗ mw) = x^{il}_{kw} x^{kj}_{uv} mi ⊗ mj ⊗ ml.   (8.13)
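Identities of this type are easy to test numerically. The following sketch (an illustration added here, not part of the original text; it assumes NumPy and uses the switch map, a trivial solution by Examples 18.1 below) builds the three "legs" R12, R23, R13 of an endomorphism of M ⊗ M and checks the FS-equation (8.5):

```python
import numpy as np

n = 3
I = np.eye(n)

# switch map tau(m_u ⊗ m_v) = m_v ⊗ m_u; entries tau[(i,j),(u,v)] = delta_{iv} delta_{ju}
tau = np.einsum('iv,ju->ijuv', I, I).reshape(n * n, n * n)

def legs(R, n):
    """R12, R23, R13 in End(M ⊗ M ⊗ M) for R in End(M ⊗ M), M = k^n."""
    In = np.eye(n)
    swap = np.einsum('iv,ju->ijuv', In, In).reshape(n * n, n * n)
    R12 = np.kron(R, In)                 # R on factors 1, 2
    R23 = np.kron(In, R)                 # R on factors 2, 3
    R13 = np.kron(In, swap) @ R12 @ np.kron(In, swap)  # conjugate by the swap of factors 2 and 3
    return R12, R23, R13

R12, R23, R13 = legs(tau, n)
# the FS-equation (8.5): R12 R23 = R23 R13 = R13 R12
assert np.allclose(R12 @ R23, R23 @ R13)
assert np.allclose(R23 @ R13, R13 @ R12)
```

Replacing `tau` by any candidate matrix gives a direct test of (8.5) in End(M ⊗ M ⊗ M).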
Proposition 145. For R ∈ End(M ⊗ M ), we have, with notation as above.
1. R is a solution of the FS-equation if and only if

x^{ij}_{uk} x^{kl}_{vw} = x^{jl}_{vk} x^{ik}_{uw} = x^{il}_{kw} x^{kj}_{uv}   (8.14)

for all i, j, l, u, v, w ∈ {1, · · · , n}.
2. R satisfies (8.6) if and only if

x^{kj}_{ik} = δ^j_i   (8.15)

for all i, j ∈ {1, · · · , n}.
3. Let ε be the trace map. Then (R, ε) satisfies (8.7) if and only if

x^{kj}_{ki} = x^{jk}_{ik} = δ^j_i   (8.16)
for all i, j ∈ {1, · · · , n}.
Proof. 1. follows immediately from (8.11-8.13). 2. and 3. follow from (8.10), using the multiplication rule e^j_i e^l_k = δ^l_i e^j_k and the formula for the trace ε(e^j_i) = δ^j_i.
As a consequence of Proposition 145, we can show that R ∈ End(M ⊗ M) is a solution of the equation R12 R23 = R13 R12 if and only if a certain multiplication on M ⊗ M is associative.

Corollary 41. Let {m1, · · · , mn} be a basis of M and R ∈ End(M ⊗ M), given by (8.9). Then R is a solution of the equation R12 R23 = R13 R12 if and only if the multiplication on M ⊗ M given by

(mk ⊗ ml) · (mr ⊗ mj) = x^{ak}_{jl} mr ⊗ ma   (k, l, r, j = 1, · · · , n)

is associative. In this case M is a left M ⊗ M-module with structure

(mk ⊗ ml) • mj = Σ_a x^{ak}_{jl} ma
for all k, l, j = 1, · · · , n.
Proof. Write mkl = mk ⊗ ml. Then

mpq · (mkl · mrj) = x^{ak}_{jl} mpq · mra = x^{ak}_{jl} x^{ip}_{aq} mri

and

(mpq · mkl) · mrj = x^{ap}_{lq} mka · mrj = x^{ik}_{ja} x^{ap}_{lq} mri,

such that

(mpq · mkl) · mrj − mpq · (mkl · mrj) = (x^{ik}_{ja} x^{ap}_{lq} − x^{ak}_{jl} x^{ip}_{aq}) mri,

and the right hand side is zero for all indices p, q, k, l, r and j if and only if (8.14) holds; in Proposition 145, we have seen that (8.14) is equivalent to R12 R23 = R13 R12. The last statement follows from

(mpq · mkl) • mj − mpq • (mkl • mj) = (x^{ik}_{ja} x^{ap}_{lq} − x^{ak}_{jl} x^{ip}_{aq}) mi = 0,

where we used (8.14) at the last step.

Examples 18. 1. The identity I_{M⊗M} and the switch map τM are trivial solutions of the FS-equation in End(M ⊗ M ⊗ M).
2. Let A = Mn(k). Then

Rj = Σ_{i=1}^n e^i_j ⊗ e^j_i

is a solution of the S-equation, and (R = Σ_{j=1}^n Rj, trace) is a solution of the F-equation.
3. Let R ∈ A ⊗ A be a solution of the FS-equation and u ∈ A invertible. Then

uR = (u ⊗ u)R(u^{-1} ⊗ u^{-1})   (8.17)
is also a solution of the FS-equation. Let FS(A) be the set of all solutions of the FS-equation, and U(A) the multiplicative group of invertible elements in A. Then (8.17) defines an action of U(A) on FS(A).
4. If a ∈ A is an idempotent, then a ⊗ a is a solution of the FS-equation.
5. Let A be a k-algebra, and e ∈ A ⊗ A an A-central element. Then for any left A-module M, the map R = Re : M ⊗ M → M ⊗ M given by

R(m ⊗ n) = e1 · m ⊗ e2 · n   (8.18)

(m, n ∈ M) is a solution of the FS-equation in End(M ⊗ M ⊗ M). This is an easy consequence of (8.3). Moreover, if e is a separability idempotent (respectively (e, ε) is a Frobenius pair), then R is a solution of the S-equation (respectively a solution of the F-equation).
6. Let G be a finite group, and A = kG. Then e = Σ_{g∈G} g ⊗ g^{-1} is an A-central element and (e, p1) is a Frobenius pair (p1 : kG → k is the map defined by p1(g) = δ_{1,g}, for all g ∈ G). Hence, kG is a Frobenius algebra. Furthermore, if (char(k), |G|) = 1, then ẽ = |G|^{-1} e is a separability idempotent.
7. Using a computer, Bogdan Ichim computed for us that FS(M2(Z2)) (resp. FS(M2(Z3))) consists of exactly 38 (resp. 187) solutions of the FS-equation. We will present only two of them. Let k be a field of characteristic 2 (resp. 3). Then
R = [an explicit 4 × 4 matrix with entries in {0, 1}]   (resp. R = [an explicit 4 × 4 matrix with entries in {0, 1, 2}])
are solutions of the FS-equation.
8. Let {m1, m2, · · · , mn} be a basis of M, a^j_u scalars in k, and let R be given by

R(mu ⊗ mv) = a^j_u mv ⊗ mj.

Thus x^{ij}_{uv} = δ^i_v a^j_u, with notation as in (8.9). An immediate verification shows that R is a solution of the FS-equation. If ε is the trace map, then (R, ε) is a solution of the F-equation if and only if a^j_i = δ^j_i. R is a solution of the S-equation if and only if n is invertible in k and n a^j_i = δ^j_i.
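Example 18.8 can be checked directly in coordinates. The sketch below (an added illustration, not from the original text; it assumes NumPy and uses random scalars a^j_u) verifies condition (8.14) for x^{ij}_{uv} = δ^i_v a^j_u:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
a = rng.standard_normal((n, n))      # arbitrary scalars a^j_u, stored as a[j, u]
I = np.eye(n)

# x^{ij}_{uv} = delta^i_v a^j_u, stored as x[i, j, u, v] (upper indices first)
x = np.einsum('iv,ju->ijuv', I, a)

# the three expressions in (8.14), with the summation index k contracted
lhs = np.einsum('ijuk,klvw->ijluvw', x, x)   # x^{ij}_{uk} x^{kl}_{vw}
mid = np.einsum('jlvk,ikuw->ijluvw', x, x)   # x^{jl}_{vk} x^{ik}_{uw}
rhs = np.einsum('ilkw,kjuv->ijluvw', x, x)   # x^{il}_{kw} x^{kj}_{uv}
assert np.allclose(lhs, mid) and np.allclose(lhs, rhs)
```

With a equal to n^{-1} times the identity matrix, the same array also satisfies the separability condition (8.15).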
If H is a finite dimensional unimodular involutory Hopf algebra, and t is a two-sided integral in H, then R = t(1) ⊗ S(t(2) ) is a solution of the quantum Yang-Baxter equation (cf. [114, Theorem 8.3.3]). In our next Proposition, we will show that, for an arbitrary Hopf algebra H, R is a solution of the FS-equation and the braid equation. Proposition 146. Let H be a Hopf algebra and t ∈ H a left integral of H. Then R = t(1) ⊗ S(t(2) ) ∈ H ⊗ H is H-central, and therefore a solution of the FS-equation and the braid equation. Proof. For all h ∈ H, we have that ht = ε(h)t and, subsequently, h(1) t ⊗ h(2) = t ⊗ h h(1) t(1) ⊗ S(h(2) t(2) ) ⊗ h(3) = t(1) ⊗ S(t(2) ) ⊗ h Multiplying the second and the third factor, we obtain ht(1) ⊗ S(t(2) ) = t(1) ⊗ S(t(2) )h proving that R is H-central. Remarks 24. 1. If ε(t) = 1, then R is a separability idempotent and H is separable over k, and we recover Maschke’s Theorem for Hopf algebras (see [1]). 2. If t is a right integral, then a similar argument shows that S(t(1) ) ⊗ t(2) is an H-central element. Let H be a Hopf algebra and σ : H ⊗ H → k be a normalized Sweedler 2-cocycle, i.e. σ(h ⊗ 1H ) = σ(1H ⊗ h) = ε(h)
σ(k(1) ⊗ l(1) )σ(h ⊗ k(2) l(2) ) = σ(h(1) ⊗ k(1) )σ(h(2) k(2) ⊗ l)
(8.19)
for all h, k, l ∈ H. The crossed product algebra Hσ is equal to H as a k-module, and the (associative) multiplication is given by

g · h = σ(g(1) ⊗ h(1)) g(2) h(2)

for all g, h ∈ Hσ = H.

Proposition 147. Let H be a cocommutative Hopf algebra over a commutative ring k, t ∈ H a right integral, and σ : H ⊗ H → k a normalized convolution invertible Sweedler 2-cocycle. Then

Rσ = σ^{-1}(S(t(2)) ⊗ t(3)) S(t(1)) ⊗ t(4) ∈ Hσ ⊗ Hσ   (8.20)

is Hσ-central, and a solution of the FS-equation. Consequently, if H is a separable algebra, then Hσ is also a separable algebra.

Proof. The method of proof is the same as in Proposition 146, but the situation is more complicated. For all h ∈ H, we have th = ε(h)t, and

h ⊗ t = h(1) ⊗ th(2)

and

h ⊗ S(t(1)) ⊗ S(t(2)) ⊗ t(3) ⊗ t(4) = h(1) ⊗ S(h(2))S(t(1)) ⊗ S(h(3))S(t(2)) ⊗ t(3)h(4) ⊗ t(4)h(5).

We now compute easily that

h · Rσ = σ^{-1}(S(t(2)) ⊗ t(3)) h · S(t(1)) ⊗ t(4)
= σ^{-1}(S(h(3))S(t(2)) ⊗ t(3)h(4)) h(1) · (S(h(2))S(t(1))) ⊗ t(4)h(5)
= σ^{-1}(S(h(3))S(t(2)) ⊗ t(3)h(4)) σ((h(1))(1) ⊗ (S(h(2))S(t(1)))(1)) (h(1))(2) (S(h(2))S(t(1)))(2) ⊗ t(4)h(5)
= σ^{-1}(S(h(3))S(t(3)) ⊗ t(4)h(4)) σ(h(1) ⊗ S(h(2))S(t(2))) S(t(1)) ⊗ t(5)h(5).

On the other hand,

Rσ · h = σ^{-1}(S(t(2)) ⊗ t(3)) S(t(1)) ⊗ t(4) · h = σ^{-1}(S(t(2)) ⊗ t(3)) σ(t(4) ⊗ h(1)) S(t(1)) ⊗ t(5)h(2).

In order to prove that Rσ is Hσ-central, it suffices to show that, for all g, h ∈ H:

σ^{-1}(S(h(3))S(g(2)) ⊗ g(3)h(4)) σ(h(1) ⊗ S(h(2))S(g(1))) = σ^{-1}(S(g(1)) ⊗ g(2)) σ(g(3) ⊗ h)   (8.21)
So far, we have not used the cocommutativity of H. If H is cocommutative, then we can omit the Sweedler indices, since they contain no information: we can write ∆(h) = h ⊗ h. The cocycle relation (8.19) can then be rewritten as

σ(h ⊗ kl) = σ^{-1}(k ⊗ l) σ(h ⊗ k) σ(hk ⊗ l),   (8.22)
σ(hk ⊗ l) = σ^{-1}(h ⊗ k) σ(k ⊗ l) σ(h ⊗ kl).   (8.23)

Using (8.22), (8.23) and the fact that σ is normalized, we compute

σ^{-1}(S(h)S(g) ⊗ gh) σ(h ⊗ S(h)S(g))
= σ(S(h) ⊗ S(g)) σ^{-1}(S(g) ⊗ gh) σ^{-1}(S(h) ⊗ S(g)gh) σ^{-1}(S(h) ⊗ S(g)) σ(h ⊗ S(h)) σ(hS(h) ⊗ S(g))
= σ(g ⊗ h) σ^{-1}(S(g) ⊗ g) σ^{-1}(S(g)g ⊗ h)
= σ^{-1}(S(g) ⊗ g) σ(g ⊗ h),

proving (8.21). We also used that σ(h ⊗ S(h)) = σ(S(h) ⊗ h), which follows from the cocycle condition and the fact that σ is normalized.
Finally, if H is separable, then we can find a right integral t such that ε(t) = 1, and we easily see that mHσ(Rσ) = 1, proving that Rσ is a solution of the S-equation.

In [84], solutions of the braid equation are constructed starting from 1-cocycles on a group G. The interesting point in this construction is that, at the set-theoretic level, any "nondegenerate symmetric" solution of the braid equation arises in this way (see [84, Theorem 2.9]). Now, taking G a finite group and H = kG in Proposition 147, we find a large class of solutions to the braid equation, arising from 2-cocycles σ : G × G → k*. These solutions R can be described using a family of scalars (x^{ij}_{uv}), as in Proposition 145, where the indices now run through G. Let n = |G|, and write Mn(k) ≅ MG(k), with entries indexed by G × G.

Corollary 42. Let G be a finite group of order n, and σ : G × G → k* a normalized 2-cocycle. Then Rσ = (x^{ij}_{uv})_{i,j,u,v∈G} given by

x^{ij}_{uv} = δ_{j, ui^{-1}v} σ^{-1}(iu^{-1}, ui^{-1}) σ(iu^{-1}, u) σ(ui^{-1}, v)   (8.24)

(i, j, u, v ∈ G) is a solution of the FS-equation. If n is invertible in k, then n^{-1}Rσ is a solution of the S-equation.
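For the trivial cocycle σ ≡ 1 and G = Zn written additively, (8.24) reads x^{ij}_{uv} = δ_{j, u−i+v}. The sketch below (an added illustration, not from the original text; it assumes NumPy) checks (8.14) and the separability condition (8.15) for n^{-1}Rσ in this special case:

```python
import numpy as np

n = 5                                  # G = Z_n, additive notation; sigma ≡ 1
idx = np.arange(n)
i, j, u, v = np.ix_(idx, idx, idx, idx)
# x^{ij}_{uv} = delta_{j, u - i + v (mod n)}, stored as x[i, j, u, v]
x = ((j - (u - i + v)) % n == 0).astype(float)

lhs = np.einsum('ijuk,klvw->ijluvw', x, x)
mid = np.einsum('jlvk,ikuw->ijluvw', x, x)
rhs = np.einsum('ilkw,kjuv->ijluvw', x, x)
assert np.allclose(lhs, mid) and np.allclose(lhs, rhs)   # the FS-equation (8.14)

# (8.15) for n^{-1} R_sigma: sum_k (x/n)^{kj}_{ik} = delta^j_i
assert np.allclose(np.einsum('kjik->ij', x) / n, np.eye(n))
```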
Proof. The twisted group algebra kσG is the k-module with basis G, and multiplication given by g · h = σ(g, h)gh, for any g, h ∈ G. t = Σ_{g∈G} g is a left integral in kG, and the element Rσ defined in (8.20) takes the form

Rσ = Σ_{g∈G} σ^{-1}(g^{-1}, g) g^{-1} ⊗ g.

Using the multiplication rule on kσG, we find that the map

R̃σ : kσG ⊗ kσG → kσG ⊗ kσG,   R̃σ(u ⊗ v) = Rσ · (u ⊗ v)

is given by

R̃σ(u ⊗ v) = Σ_{i∈G} σ^{-1}(iu^{-1}, ui^{-1}) σ(iu^{-1}, u) σ(ui^{-1}, v) i ⊗ ui^{-1}v.

If we write R̃σ(u ⊗ v) = Σ_{i,j∈G} x^{ij}_{uv} i ⊗ j, then x^{ij}_{uv} is given by (8.24).
We will now present a coalgebra version of Example 18.3. First we adapt an old definition of Larson ([116]). Definition 19. Let C be a k-coalgebra. A map σ : C ⊗ C → k is called an FS-map if σ(c ⊗ d(1) )d(2) = σ(c(2) ⊗ d)c(1) . (8.25) If, in addition, σ satisfies the normalizing condition σ(c(1) ⊗ c(2) ) = ε(c)
(8.26)
then σ is called a coseparability idempotent. If there exists an f ∈ C such that the FS-map σ satisfies the normalizing condition σ(f ⊗ c) = σ(c ⊗ f ) = ε(c) (8.27) for all c ∈ C, then we call (σ, f ) an F-map. The following Corollary is a special case of Theorem 35. Corollary 43. Let C be a coalgebra. 1. The forgetful functor F : MC → Mk is separable (or equivalently C is a coseparable coalgebra) if and only if there exists a coseparability idempotent σ. 2. Let G = − ⊗ C : Mk → MC be the right adjoint of F . Then (F, G) is a Frobenius pair if and only if there exists an F -map (σ, f ).
Using FS-maps, we can construct solutions of the FS-equation.

Examples 19. 1. The comatrix coalgebra Mn(k) is coseparable, and

σ : Mn(k) ⊗ Mn(k) → k,   σ(c^j_i ⊗ c^l_k) = δ^j_k δ^l_i

is a coseparability idempotent.
2. Let k be a field of characteristic zero, and consider the Hopf algebra C = k[Y], with ∆(Y) = Y ⊗ 1 + 1 ⊗ Y and ε(Y) = 0. Then there is only one FS-map σ : C ⊗ C → k, namely the zero map. Indeed, σ is completely described by σ(Y^i ⊗ Y^j) = a_{ij}. We will show that all a_{ij} = 0. Using the fact that ∆(Y^n) = ∆(Y)^n, we easily find that (8.25) is equivalent to

Σ_{j=0}^{m} \binom{m}{j} a_{n,m−j} Y^j = Σ_{i=0}^{n} \binom{n}{i} a_{n−i,m} Y^i   (8.28)
for all positive integers n and m. Taking n > m and identifying the coefficients of Y^n, we find a_{nm} = 0. If m > n, we also find a_{nm} = 0, now identifying coefficients of Y^m. We can now write a_{nm} = a_n δ_{nm}. Take m > n. The right-hand side of (8.28) is then zero, while the left-hand side is

\binom{m}{m−n} a_n Y^{m−n}.

It follows that a_n = 0 for all n, and σ = 0.

Proposition 148. Let C be a coalgebra, σ : C ⊗ C → k an FS-map and M a right C-comodule. Then the map

Rσ : M ⊗ M → M ⊗ M,   Rσ(m ⊗ n) = σ(m[1] ⊗ n[1]) m[0] ⊗ n[0]   (8.29)

is a solution of the FS-equation in End(M ⊗ M ⊗ M).

Proof. Write R = Rσ and take l, m, n ∈ M.

R12 R23 (l ⊗ m ⊗ n) = R12 (σ(m[1] ⊗ n[1]) l ⊗ m[0] ⊗ n[0]) = σ(m[2] ⊗ n[1]) σ(l[1] ⊗ m[1]) l[0] ⊗ m[0] ⊗ n[0].

Applying (8.25), with m = c, n = d, we obtain

R12 R23 (l ⊗ m ⊗ n) = σ(m[1] ⊗ n[1]) σ(l[1] ⊗ n[2]) l[0] ⊗ m[0] ⊗ n[0] = R23 (σ(l[1] ⊗ n[1]) l[0] ⊗ m ⊗ n[0]) = R23 R13 (l ⊗ m ⊗ n).

Applying (8.25), with m = d, l = c, we obtain
R12 R23 (l ⊗ m ⊗ n) = σ(l[1] ⊗ n[1]) σ(l[2] ⊗ m[1]) l[0] ⊗ m[0] ⊗ n[0] = R13 (σ(l[1] ⊗ m[1]) l[0] ⊗ m[0] ⊗ n) = R13 R12 (l ⊗ m ⊗ n),

proving that R is a solution of the FS-equation.

Remark 27. If C is finite dimensional and A = C* is its dual algebra, then there is a one-to-one correspondence between FS-maps σ : C ⊗ C → k and A-central elements e ∈ A ⊗ A. The correspondence is given by the formula σ(c ⊗ d) = ⟨c, e1⟩⟨d, e2⟩. In this situation, the map Re from Example 18.5 is equal to Rσ. Indeed,

Rσ(m ⊗ n) = σ(m[1] ⊗ n[1]) m[0] ⊗ n[0] = ⟨m[1], e1⟩⟨n[1], e2⟩ m[0] ⊗ n[0] = e1 · m ⊗ e2 · n = Re(m ⊗ n).

We will now present two more classes of solutions of the FS-equation.

Proposition 149. Take a ∈ k, X = {1, . . . , n}, and θ : X^3 → X a map satisfying

θ(u, i, j) = v ⟺ θ(v, j, i) = u   (8.30)
θ(i, u, j) = v ⟺ θ(j, v, i) = u   (8.31)
θ(i, j, u) = v ⟺ θ(j, i, v) = u   (8.32)
θ(i, j, k) = θ(u, v, w) ⟺ θ(j, i, u) = θ(k, w, v)   (8.33)
1. R = (x^{uv}_{ij}) ∈ M_{n²}(k) given by

x^{uv}_{ij} = a δ^u_{θ(i,v,j)}   (8.34)
is a solution of the FS-equation. 2. Assume that n is invertible in k, and take a = n−1 . 2a. R is a solution of the S-equation if and only if θ(k, k, i) = i for all i, k ∈ X. 2b. Let ε be the trace map. (R, ε) is a solution of the F-equation if and only if θ(i, k, k) = i for all i, k ∈ X.
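Conditions 2a and 2b are easy to test for the map of Examples 20.1 below, θ(i, j, k) = ij^{-1}k. The sketch below (an added illustration, not from the original text; it assumes NumPy, takes G = Zn written additively so that θ(i, j, k) = i − j + k, and a = n^{-1}) checks (8.14), (8.15) and (8.16):

```python
import numpy as np

n = 6
idx = np.arange(n)
u, v, i, j = np.ix_(idx, idx, idx, idx)
# x^{uv}_{ij} = a * delta^u_{theta(i,v,j)}, theta(i,v,j) = i - v + j (mod n), a = 1/n;
# stored as x[u, v, i, j] (upper indices first)
x = ((u - (i - v + j)) % n == 0).astype(float) / n

lhs = np.einsum('ijuk,klvw->ijluvw', x, x)
mid = np.einsum('jlvk,ikuw->ijluvw', x, x)
rhs = np.einsum('ilkw,kjuv->ijluvw', x, x)
assert np.allclose(lhs, mid) and np.allclose(lhs, rhs)   # (8.14): FS-equation

assert np.allclose(np.einsum('kjik->ij', x), np.eye(n))  # (8.15): S-equation, theta(k,k,i) = i
assert np.allclose(np.einsum('kjki->ij', x), np.eye(n))  # (8.16), first half: theta(i,k,k) = i
assert np.allclose(np.einsum('jkik->ij', x), np.eye(n))  # (8.16), second half
```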
Proof. 1. We have to verify (8.14). Using (8.32), we compute

x^{ij}_{uk} x^{kl}_{vw} = a² δ^i_{θ(u,j,k)} δ^k_{θ(v,l,w)} = a² δ^k_{θ(j,u,i)} δ^k_{θ(v,l,w)} = a² δ^{θ(j,u,i)}_{θ(v,l,w)}.   (8.35)

In a similar way, we find

x^{jl}_{vk} x^{ik}_{uw} = a² δ^{θ(l,v,j)}_{θ(w,i,u)}   (8.36)

x^{il}_{kw} x^{kj}_{uv} = a² δ^{θ(i,w,l)}_{θ(u,j,v)}   (8.37)

Using (8.33), we find that (8.35), (8.36), and (8.37) are equal, proving (8.14).
2a. We easily compute that

x^{kj}_{ik} = n^{-1} δ^k_{θ(i,j,k)} = n^{-1} δ^j_{θ(k,k,i)},

and it follows that R is a solution of the S-equation if and only if θ(k, k, i) = i for all i and k.
2b. We compute

x^{kj}_{ki} = n^{-1} δ^k_{θ(k,j,i)} = n^{-1} δ^j_{θ(i,k,k)},   x^{jk}_{ik} = n^{-1} δ^j_{θ(i,k,k)},

and it follows that (R, ε) is a solution of the F-equation if and only if θ(i, k, k) = i for all i, k.

Examples 20. 1. Let G be a finite group. Then the map θ : G × G × G → G, θ(i, j, k) = ij^{-1}k satisfies conditions (8.30-8.33).
2. Let G be a group of order n acting on X = {1, 2, · · · , n}, and assume that the action of G is transitive and free, which means that for every i, j ∈ X, there exists a unique g ∈ G such that g(i) = j. Then the map θ : X × X × X → X defined by θ(i, j, k) = g^{-1}(k), where g ∈ G is such that g(i) = j, satisfies conditions (8.30-8.33).

Proposition 150. Let n be a positive integer, φ : {1, · · · , n} → {1, · · · , n} a function with φ² = φ and {m1, · · · , mn} a basis of M. Then

Rφ : M ⊗ M → M ⊗ M,   Rφ(mi ⊗ mj) = δ_{ij} Σ_{a,b∈φ^{-1}(i)} ma ⊗ mb   (8.38)
is a solution of the FS-equation.
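Proposition 150 can also be sanity-checked numerically. The sketch below (an added illustration, not from the original text; it assumes NumPy and one particular idempotent φ on a four-element set, written 0-indexed) builds Rφ as an n² × n² matrix and verifies the FS-equation:

```python
import numpy as np
from itertools import product

n = 4
phi = [1, 1, 3, 3]                       # a sample idempotent map: phi(phi(i)) = phi(i)
assert all(phi[phi[i]] == phi[i] for i in range(n))

# R_phi(m_i ⊗ m_j) = delta_{ij} * sum over a, b in phi^{-1}(i) of m_a ⊗ m_b
R = np.zeros((n * n, n * n))
for i in range(n):
    pre = [a for a in range(n) if phi[a] == i]
    for a, b in product(pre, repeat=2):
        R[a * n + b, i * n + i] = 1.0

I = np.eye(n)
swap = np.einsum('iv,ju->ijuv', I, I).reshape(n * n, n * n)
R12, R23 = np.kron(R, I), np.kron(I, R)
R13 = np.kron(I, swap) @ R12 @ np.kron(I, swap)  # R on factors 1 and 3
assert np.allclose(R12 @ R23, R23 @ R13)
assert np.allclose(R23 @ R13, R13 @ R12)
```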
Proof. Write R = Rφ, and take p, q, r ∈ {1, . . . , n}. Then

R12 R23 (mp ⊗ mq ⊗ mr) = R12 (δ_{qr} Σ_{a,b∈φ^{-1}(q)} mp ⊗ ma ⊗ mb)
= δ_{qr} Σ_{a,b∈φ^{-1}(q)} δ_{ap} Σ_{c,d∈φ^{-1}(p)} mc ⊗ md ⊗ mb
= δ_{qr} δ_{φ(p)q} Σ_{b∈φ^{-1}(q)} Σ_{c,d∈φ^{-1}(p)} mc ⊗ md ⊗ mb.

If φ^{-1}(p) is nonempty (take x ∈ φ^{-1}(p)) and φ(p) = q, then q = φ(p) = φ²(x) = φ(x) = p, so we can write

R12 R23 (mp ⊗ mq ⊗ mr) = δ_{pqrφ(p)} Σ_{a,b,c∈φ^{-1}(p)} ma ⊗ mb ⊗ mc.

In a similar way, we can compute that

R23 R13 (mp ⊗ mq ⊗ mr) = R13 R12 (mp ⊗ mq ⊗ mr) = δ_{pqrφ(p)} Σ_{a,b,c∈φ^{-1}(p)} ma ⊗ mb ⊗ mc.

Now we will generalize Example 18.3 to algebras without unit. Recall that a left A-module M is called unital (or regular) if the natural map A ⊗_A M → M is an isomorphism.

Proposition 151. Let M be a unital A-module, and f : A → A ⊗ A an A-bimodule map. Then the map R : M ⊗ M → M ⊗ M mapping m ⊗ a · n to f(a)(m ⊗ n) is a solution of the FS-equation.

Proof. Observe first that it suffices to define R on elements of the form m ⊗ a · n, since the map A ⊗_A M → M is surjective. R is well-defined since

R(m ⊗ a · (b · n)) = f(a)(m ⊗ b · n) = f(a)(IM ⊗ b)(m ⊗ n) = f(ab)(m ⊗ n).

Write f(a) = a¹ ⊗ a², for all a ∈ A. Then f(ab) = a¹ ⊗ a²b = ab¹ ⊗ b². Now

R12 (R23 (m ⊗ b · n ⊗ a · p)) = R12 (m ⊗ a¹b · n ⊗ a² · p) = a¹b¹ · m ⊗ b² · n ⊗ a² · p
= R13 (b¹ · m ⊗ b² · n ⊗ a · p) = R13 (R12 (m ⊗ b · n ⊗ a · p)).

In a similar way, we prove that R12 R23 = R23 R13.
8.3 The structure of Frobenius algebras and separable algebras

The first statement of Proposition 144 can be restated as follows: for a k-algebra A, any A-central element R ∈ A ⊗ A is a solution of the FS-equation. Over a field k, we can prove the converse: any solution of the FS-equation arises in this way.

Theorem 77. Let A be a k-algebra and R = R1 ⊗ R2 ∈ A ⊗ A a solution of the FS-equation.
1. There exists a k-subalgebra A(R) of A such that R ∈ A(R) ⊗ A(R) and R is A(R)-central.
2. If R ∼ S are equivalent solutions of the FS-equation, then A(R) ≅ A(S).
3. (A(R), R) satisfies the following universal property: if (B, mB, 1B) is an algebra, and e ∈ B ⊗ B is a B-central element, then any algebra map α : B → A with (α ⊗ α)(e) = R factors through an algebra map α̃ : B → A(R).
4. If R ∈ A ⊗ A is a solution of the S-equation (resp. the F-equation), then A(R) is a separable (resp. Frobenius) algebra.

Proof. 1. Let A(R) = {a ∈ A | a · R = R · a}. Obviously A(R) is a k-subalgebra of A and 1A ∈ A(R). We also claim that R ∈ A(R) ⊗ A(R). To this end, we observe first that A(R) = Ker(ϕ), with ϕ : A → A ⊗ A^{op} defined by the formula ϕ(a) = (a ⊗ 1A − 1A ⊗ a)R. A is flat as a k-module, so A(R) ⊗ A = Ker(ϕ ⊗ IdA). Now, writing r1 ⊗ r2 for a second copy of R,

(ϕ ⊗ IA)(R) = r1 R1 ⊗ R2 ⊗ r2 − R1 ⊗ R2 r1 ⊗ r2 = R13 R12 − R12 R23 = 0,

and it follows that R ∈ A(R) ⊗ A. In a similar way, using that R12 R23 = R23 R13, we get that R ∈ A ⊗ A(R), and we find that R ∈ A(R) ⊗ A(R): indeed, k is a field, so

A(R) ⊗ A(R) = (A ⊗ A(R)) ∩ (A(R) ⊗ A).

Hence, R is an A(R)-central element of A(R) ⊗ A(R).
2. Let u ∈ U(A) be such that S = (u ⊗ u)R(u^{-1} ⊗ u^{-1}). Then fu : A(R) → A(S),
fu (a) = uau−1
for all a ∈ A(R) is a well-defined isomorphism of k-algebras. 3. Let b ∈ B. If we apply α ⊗ α to the equality (b ⊗ 1B )e = e(1B ⊗ b) we find
that the image of α is contained in A(R), and the universal property follows.
4. The first statement follows from the definition of separable algebras, and the second one follows from 3) of Corollary 40.

Remark 28. The converse of 2. does not hold: let A be a separable k-algebra and R ∈ A ⊗ A a separability idempotent. Then R and S = 0 ⊗ 0 are solutions of the FS-equation, A(R) = A(S) = A and, of course, R and S are not equivalent.

If A is finite dimensional, then we can describe the algebra A(R) using generators and relations. Let {m1, m2, . . . , mn} be a basis of a finite dimensional vector space M. We have seen that an endomorphism R ∈ End(M ⊗ M) can be written in matrix form (see (8.9) and (8.10)). Suppose that R is a solution of the FS-equation. Identifying End(M) and Mn(k), we will write A(n, R) for the subalgebra of Mn(k) corresponding to A(R). An easy computation shows that

A(n, R) = {(a^i_j) ∈ Mn(k) | x^{ij}_{αv} a^α_u = x^{iα}_{uv} a^j_α, for all i, j, u, v = 1, · · · , n}   (8.39)

where R is a matrix satisfying (8.14). We can now prove the main result of this Chapter.

Theorem 78. For an n-dimensional k-algebra A, the following statements are equivalent:
1. A is a Frobenius (resp. separable) algebra.
2. There exists an algebra isomorphism A ≅ A(n, R), where R = (x^{ij}_{uv}) ∈ Mn(k) ⊗ Mn(k) ≅ End(A) ⊗ End(A) is a solution of the Frobenius (resp. separability) equation.

Proof. 1. ⇒ 2. Both Frobenius and separable algebras are characterized by the existence of an A-central element with some normalizing properties. Let e = e1 ⊗ e2 ∈ A ⊗ A be such an A-central element. Then the map R = Re : A ⊗ A → A ⊗ A,
R(a ⊗ b) = e1 a ⊗ e2 b
(a, b ∈ A), is a solution of the FS-equation. Here we view Re ∈ Endk(A ⊗ A) ≅ Endk(A) ⊗ Endk(A) (A is finite dimensional over k). Consequently, we can construct the algebra A(R) ⊆ Endk(A). We will prove that A and A(R) are isomorphic when A is a Frobenius algebra or a separable algebra. First we consider the injection i : A → Endk(A), with i(a)(b) = ab, for a, b ∈ A. The image of i is contained in A(R). Indeed,

A(R) = {f ∈ Endk(A) | (f ⊗ IA) ◦ R = R ◦ (IA ⊗ f)}.
Using the fact that e is an A-central element, it follows easily that (i(a) ⊗ IA) ◦ R = R ◦ (IA ⊗ i(a)), for all a ∈ A, proving that Im(i) ⊆ A(R). If f ∈ A(R), then (f ⊗ IA) ◦ R = R ◦ (IA ⊗ f), and, evaluating this equality at 1A ⊗ a, we find

f(e1) ⊗ e2 a = e1 ⊗ e2 f(a).   (8.40)

Now assume that A is a Frobenius algebra. Then there exists ε : A → k such that ε(e1)e2 = e1 ε(e2) = 1A. Applying ε ⊗ IA to (8.40), we obtain f(a) = (ε(f(e1))e2)a for all a ∈ A. Thus f = i(ε(f(e1))e2). This proves that Im(i) = A(R), and the corestriction of i to A(R) is an isomorphism of algebras.
If A is separable, then e1 e2 = 1A. Applying mA to (8.40), we find f(a) = (f(e1)e2)a for all a ∈ A. Consequently f = i(f(e1)e2), proving again that A and A(R) are isomorphic.
2. ⇒ 1. This is the last statement of Theorem 77.

Remark 29. Over a field k that is algebraically closed or of characteristic zero, the structure of finite dimensional separable k-algebras is given by the classical Wedderburn-Artin Theorem: a finite dimensional algebra A is separable if and only if it is semisimple, if and only if it is a direct product of matrix algebras.
As we have seen in Lemma 27, the dual A* of a separable finite dimensional algebra is a coseparable finite dimensional coalgebra. Thus we obtain a structure Theorem for coseparable coalgebras, by using duality arguments. More precisely, let C be a coseparable k-coalgebra of dimension n over k. Then there exists an FS-map σ : C ⊗ C → k satisfying the normalizing condition (8.26). Let A = C* and identify A ⊗ A ≅ C* ⊗ C*. Then σ is an A-central element of A ⊗ A, satisfying the normalizing condition (8.6). It follows from Theorem 78 and (8.39) that A ≅ A(n, σ), and therefore C ≅ A(n, σ)*. Now A(n, σ)* can be described as a quotient of the comatrix coalgebra: A(n, σ)* = Mn(k)/I, where I is the coideal of Mn(k) that annihilates A(n, σ); I is generated by

{o^{ij}_{uv} = c^α_u x^{ij}_{αv} − x^{iα}_{uv} c^j_α | i, j, u, v = 1, . . . , n}

(summation over α). This coalgebra will be denoted by C(n, R) = Mn(k)/I. We have obtained the following

Theorem 79. For an n-dimensional coalgebra C, the following statements are equivalent.
335
1. C is a co-Frobenius (resp. coseparable) coalgebra. 2. There exists a coalgebra isomorphism A∼ = C(n, R), ∼ where R = (xij uv ) ∈ Mn (k) ⊗ Mn (k) = Endk (A) ⊗ Endk (A) is a solution of the Frobenius (resp. separability) equation. Examples 21. 1. Let M be a n-dimensional vector space and R = IM ⊗M . Then A(R) = {f ∈ End(M ) | f ⊗ IM = IM ⊗ f } = k 2. Now let R = τM be the switch map. For all f ∈ End(M ) and m, n ∈ M , we have ((f ⊗ IM ) ◦ τ )(m ⊗ n) = f (n) ⊗ m = (τ ◦ (IM ⊗ f ))(m ⊗ n) and, consequently, A(τ ) ∼ = Mn (k). 3. Let M be a finite dimensional vector space over a field k, and f ∈ End(M ) an idempotent. Then A(f ⊗ f ) = {g ∈ End(M ) | f ◦ g = g ◦ f = αf for some α ∈ k} Indeed, g ∈ A(f ⊗ f ) if and only if g ◦ f ⊗ f = f ⊗ f ◦ g. Multiplying the two factors, we find that g ◦ f = f ◦ g. Now g ◦ f ⊗ f = f ⊗ g ◦ f implies that g ◦ f = αf for some α ∈ k. The converse is obvious. In particular, assume that M has dimension 2 and let {m1 , m2 } be a basis of M . Let f be the idempotent endomorphism with matrix 1 − rq q r(1 − rq)
rq
Assume first that rq = 1 and r = 0, and take g ∈ End(M ) with matrix a b c
d
g ∈ A(f ⊗ f ) if and only if α = a + br
;
c + dr = r(a + br)
;
br(1 − rq) = qc
The two last equations can be easily solved for b and c in terms of a and d, and we see that A(f ⊗ f ) has dimension two. We know from the proof of Theorem 77 that f ∈ A(f ⊗ f ). Another solution of the above system is IM , and we find that A(f ⊗ f ) is the two-dimensional subalgebra of End(M ) with basis {f, IM }. Put f = IM − f . Then {f, f } is also a basis for A(f ⊗ f ), and A(f ⊗ f ) = k × k. C(f ⊗ f ) is the grouplike coalgebra of dimension two. We find the same result if rq = 1 or q = 0.
336
8 The Frobenius-Separability equation
4. Let R ∈ Mn2 (k) given by equation (8.34) as in Proposition 149. Then the algebra A(n, R) is given by A(n, R) = { aij ∈ Mn (k) | auθ(j,v,i) = ajθ(u,i,v) , (∀) i, j, u, v ∈ X} (8.41) Proposition 149 tells us when this algebra is separable or Frobenius over k. Assume now that G is a finite group with |G| = n invertible in k and θ is given as in Example 20.2. Then the above algebra A(n, R) equals A = { aij ∈ Mn (k) | aig(i) = ajg(j) , (∀) i, j ∈ X and g ∈ G} Indeed, if (aij ) ∈ A, then (aij ) ∈ A(n, R), since g(θ(j, v, i)) = u implies that g(j) = θ(u, i, v). Conversely, for (aij ) ∈ A(n, R), we choose i, j ∈ X and g ∈ G. Let u = g(i) and v = g(j). Then θ(j, v, u) = i, hence aig(i) = aθ(j,v,u) = ajθ(u,u,v) = ajv = ajg(j) , u showing that A = A(n, R). From the fact that G acts transitively, it follows that a matrix in A(n, R) is completely determined by its top row. For every g ∈ G, we define Ag ∈ A(n, R) by (Ag )1i = δg(1),i . Then A(n, R) = {Ag | g ∈ G}, and we have an algebra isomorphism f : A(n, R) → kG,
f (Ag ) = g
For example, take the cyclic group of order n, G = Cn . x2 · · · xn x1 x x · · · x n 1 n−1 A(n, R) = xn−1 xn · · · xn−2 | x1 , · · · , xn ∈ k ∼ = kCn · · ··· · x2 x3 · · · x1 5. Let G be a finite group of order n and σ : G × G → k∗ a 2-cocycle. Let Rσ be the solution of the F S-equation given by (8.24). We then obtain directly from (8.39) that A(n, Rσ ) consists of all G × G-matrices (aij )i,j∈G satisfying the relations ajv u
−1
i
σ −1 (ui−1 vj −1 , jv −1 )σ(vj −1 , jv −1 i)σ(jv −1 , v) =
ajui−1 v σ −1 (iu−1 , ui−1 )σ(iu−1 , u)σ(ui−1 , v) for all i, j, u, v ∈ G. This algebra is separable if n is invertible in k. We will now present some new classes of examples, starting from the solution Rφ of the F S-equation discussed in Proposition 150. In this case, we find easily that
8.3 The structure of Frobenius algebras and separable algebras
337
xij uv = δuv δφ(i)u δφ(j)v
and, according to (8.39), A(Rφ ) consists of matrices aij satisfying n
ij aα u xαv
=
α=1
n
j xiα uv aα
α=1
or avu δφ(i)v δφ(j)v =
δuv δφ(i)u ajα
(8.42)
α∈φ−1 (v)
for all i, j, v, u = 1, . . . , n. The left hand side of (8.42) is nonzero if and only if φ(i) = φ(j) = v. If φ(i) = φ(j) = v = u, then (8.42) amounts to φ(i) aφ(i) = aiα . {α|φ(α)=φ(i)}
If φ(i) = φ(j) = v = u, then (8.42) takes the form auφ(i) = 0. Now assume that the left hand side of (8.42) is zero. If φ(i) = φ(j), then the right hand side of (8.42) is also zero, except when u = v = φ(i). Then (8.42) yields ajα = 0. {α|φ(α)=φ(i)}
If φ(i) = φ(j) = u, then (8.42) reduces to 0 = 0. We summarize our results as follows. Proposition 152. Consider an idempotent map φ : {1, . . . , n} → {1, . . . , n}, and the corresponding solution Rφ of the F S-equation. Then A(Rφ ) is the subalgebra of Mn (k) consisting of all matrices (aij ) satisfying φ(i)
aφ(i) =
aiα
(i = 1, . . . , n)
(8.43)
(φ(i) = j)
(8.44)
(φ(i) = φ(j))
(8.45)
{α|φ(α)=φ(i)} φ(i)
aj
=
0=
0
ajα
{α|φ(α)=φ(i)}
A(Rφ ) is a separable k-algebra if and only if (Rφ , ε = trace) is a solution of the F-equation if and only if φ is the identity map. In this case, A(Rφ ) is the direct sum of n copies of k. Proof. The first part was done above. A(R) is separable if and only if (8.15) holds. This comes down to
338
8 The Frobenius-Separability equation
δju =
n
δuv δφ(u)u δφ(j)v = δφ(u)u δφ(j)u
v=1
and this implies that φ(u) = u for all u. (8.43,8.44,8.45) reduce to aij = 0 for i = j, and A(RI ) consists of all diagonal matrices. (Rφ , ε = trace) is a solution of the F-equation if and only if (8.16) holds, and a similar computation shows that this also implies that φ is the identity. Examples 22. 1. Take n = 4, and φ given by φ(1) = φ(2) = 2, φ(3) = φ(4) = 4 (8.43,8.44,8.45) take the following form a21 and
a22 = a11 + a12 a44 = a33 + a34 2 2 4 = a2 = a4 = 0 a1 = a42 = a43 = 0 a13 = −a14 a31 = −a32
x 0 A(4, Rφ ) = v 0
y−x u y
0
−v
z
0
0
−u
0 | x, y, z, t, u, v ∈ k t−z t
The dual coalgebra can also be described easily. Write xi = cii (i = 1, . . . , 4), x5 = c13 and x6 = c31 . Then C(Rφ ) is the six dimensional coalgebra with basis {x1 , . . . , x6 } and ∆(x1 ) = x1 ⊗ x1 + x5 ⊗ x6 , ∆(x2 ) = x2 ⊗ x2 , ∆(x3 ) = x3 ⊗ x3 + x6 ⊗ x5 , ∆(x4 ) = x4 ⊗ x4 , ∆(x5 ) = x1 ⊗ x5 + x5 ⊗ x3 , ∆(x6 ) = x6 ⊗ x1 + x3 ⊗ x6 , ε(xi ) = 1 (i = 1, 2, 3, 4), ε(xi ) = 0 (i = 5, 6). 2. Again, take n = 4, but let φ be given by the formula φ(1) = 1, φ(2) = φ(3) = φ(4) = 2 Then (8.43,8.44,8.45) reduce to a22 = a32 + a33 + a34 = a42 + a43 + a44 a12 = a13 = a14 = a21 = a23 = a24 = a31 = a41 = 0, hence
8.4 The category of FS-objects
x 0 φ A(4, R ) = 0 0
0
0
y
0
u
z
v
y−t−v
0
339
| x, y, z, t, u, v ∈ k y − z − u t 0
Putting cii = xi (i = 1, . . . , 4), x5 = c32 and x6 = c42 , we find that C(Rφ ) is the six dimensional coalgebra with structure maps ∆(x1 ) = x1 ⊗ x1 , ∆(x2 ) = x2 ⊗ x2 , ∆(x3 ) = x3 ⊗ x3 + (x2 − x3 − x5 ) ⊗ (x2 − x4 − x6 ), ∆(x4 ) = x4 ⊗ x4 + (x2 − x4 − x6 ) ⊗ (x2 − x3 − x5 ), ∆(x5 ) = x5 ⊗ x2 + x3 ⊗ x5 + (x2 − x3 − x5 ) ⊗ x6 , ∆(x6 ) = x6 ⊗ x2 + x4 ⊗ x6 + (x2 − x4 − x6 ) ⊗ x5 , (i = 1, 2, 3, 4), ε(xi ) = 0 (i = 5, 6). ε(xi ) = 1
8.4 The category of FS-objects

We have seen in Corollary 41 that the equation R12 R23 = R13 R12 is equivalent to the fact that a certain multiplication on M ⊗ M is associative. We will now prove that the other equation, namely R12 R23 = R23 R13, is equivalent to the fact that a certain comultiplication is coassociative.

Proposition 153. Let (A, mA, 1A) be an algebra, R = R1 ⊗ R2 ∈ A ⊗ A and δ : A → A ⊗ A,
δ(a) = R1 ⊗ R2 a
for all a ∈ A. The following statements are equivalent: 1. (A, δ) is a coassociative coalgebra (not necessarily with counit); 2. R12 R23 = R23 R13 in A ⊗ A ⊗ A. In this case any left A-module (M, ·) has a structure of left comodule over the coalgebra (A, δ) via ρ : M → A ⊗ M,
ρ(m) = R1 ⊗ R2 · m
for all m ∈ M.

Proof. The equivalence of 1. and 2. follows from the formulas

(δ ⊗ I)δ(a) = R12 R23 · (1A ⊗ 1A ⊗ a),
(I ⊗ δ)δ(a) = R23 R13 · (1A ⊗ 1A ⊗ a),

for all a ∈ A. The final statement follows from
(δ ⊗ I)ρ(m) = R12 R23 · (1A ⊗ 1A ⊗ m),
(I ⊗ ρ)ρ(m) = R23 R13 · (1A ⊗ 1A ⊗ m),

for all m ∈ M.

Suppose now that (A, mA, 1A) is an algebra over k and let R ∈ A ⊗ A be an A-central element. Then R12 R23 = R23 R13 in A ⊗ A ⊗ A. It follows that (A, ∆R = δ cop) is also a coassociative coalgebra, where

∆R(a) = δ cop(a) = R2 a ⊗ R1

for all a ∈ A. We remark that ∆R is in general not an algebra map, i.e. (A, mA, ∆R) is not a bialgebra. Any left A-module (M, ·) has a structure of right comodule over the coalgebra (A, ∆R) via ρR : M → M ⊗ A,
ρR (m) = R2 · m ⊗ R1
for all m ∈ M. Moreover, for any a ∈ A and m ∈ M we have that

ρR(a · m) = a(1) · m ⊗ a(2) = m[0] ⊗ am[1].

Indeed, from the definition of the coaction on M and the comultiplication on A, we immediately have

ρR(a · m) = R2 a · m ⊗ R1 = a(1) · m ⊗ a(2).

On the other hand,

m[0] ⊗ am[1] = R2 · m ⊗ aR1 = R2 a · m ⊗ R1,

where in the last equality we used that R is an A-central element. These considerations lead us to the following definition.

Definition 20. Let (A, mA, ∆A) be at once an algebra and a coalgebra (but not necessarily a bialgebra). An F S-object over A is a k-module M that is at once a left A-module and a right A-comodule such that

ρ(a · m) = a(1) · m ⊗ a(2) = m[0] ⊗ am[1]
(8.46)
for all a ∈ A and m ∈ M. The category of F S-objects and A-linear A-colinear maps will be denoted by A FS A. This category measures how far A is from being a bialgebra. If A does not have a unit (resp. a counit), then the objects in A FS A will be assumed to be unital (resp. counital).

Proposition 154. If (A, mA, 1A, ∆A, εA) is a bialgebra with unit and counit, then the forgetful functor F : A FS A → k M is an isomorphism of categories.
Proof. Define G : k M → A FS A as follows: G(M) = M as a k-module, with trivial A-action and A-coaction: ρ(m) = m ⊗ 1A and a · m = εA(a)m for all a ∈ A and m ∈ M. It is clear that G(M) ∈ A FS A. Now, assume that M is an F S-object over A. Applying I ⊗ εA to (8.46), we find that

a · m = m[0] εA(a)εA(m[1]) = εA(a)m.

Taking a = 1A in (8.46), we find ρ(m) = 1A · m ⊗ 1A = m ⊗ 1A. Hence, G is an inverse for the forgetful functor F : A FS A → k M.

Definition 21. A triple (A, mA, ∆A) is called a weak Frobenius algebra (W F -algebra, for short) if (A, mA) is an algebra (not necessarily with unit), (A, ∆A) is a coalgebra (not necessarily with counit) and (A, mA, ∆A) ∈ A FS A, that is,

∆(ab) = a(1) b ⊗ a(2) = b(1) ⊗ ab(2) (8.47)

for all a, b ∈ A.

Remarks 25. 1. Assume that A is a W F -algebra with unit, and write ∆(1A) = e2 ⊗ e1. From (8.47), it follows that

∆(a) = e2 a ⊗ e1 = e2 ⊗ ae1
(8.48)
and this implies that ∆cop(1A) = e1 ⊗ e2 is an A-central element. Conversely, if A is an algebra with unit, and e = e1 ⊗ e2 is an A-central element, then e12 e23 = e23 e13 (see (8.3)), and it is easy to prove that this last statement is equivalent to the fact that ∆ : A → A ⊗ A given by ∆(a) = e2 a ⊗ e1 is coassociative. Thus A is a W F -algebra. We have proved that W F -algebras with unit correspond to algebras with unit together with an A-central element.

2. From (8.47), it follows immediately that f := ∆cop : A → A ⊗ A is an A-bimodule map. Conversely, if f is an A-bimodule map, then it is easy to prove that ∆ = τ ◦ f defines a coassociative comultiplication on A, making A into a W F -algebra. Now, using Corollary 40, we obtain that a finitely generated projective and unitary k-algebra (A, mA, 1A) is Frobenius if and only if A is a unitary and counitary W F -algebra. Thus, we can view W F -algebras as a generalization of Frobenius algebras.

Proposition 155. Let (A, mA, 1A, ∆A) be a W F -algebra with unit. Then the forgetful functor F : A FS A → A M is an isomorphism of categories.
Proof. We define a functor G : A M → A FS A as follows: G(M) = M as an A-module, with right A-coaction given by the formula

ρ(m) = e2 · m ⊗ e1,

where ∆(1A) = e2 ⊗ e1. ρ is a coaction because e12 e23 = e23 e13, and, using (8.48), we see that

ρ(a · m) = e2 a · m ⊗ e1 = a(1) · m ⊗ a(2) = e2 · m ⊗ ae1 = m[0] ⊗ am[1],

as needed. Now G and F are each other's inverses.

We will now give the coalgebra version of Proposition 155. Consider a W F -algebra (A, mA, ∆A, εA) with a counit εA, and consider σ = ε ◦ mA ◦ τ : A ⊗ A → k, that is, σ(c ⊗ d) = ε(dc) for all c, d ∈ A. Now

∆(cd) = c(1) d ⊗ c(2) = d(1) ⊗ cd(2),

so

σ(d ⊗ c(1))c(2) = (ε ⊗ I)(∆(cd)) = (I ⊗ ε)(∆(cd)) = σ(d(2) ⊗ c)d(1),

and σ is an F S-map. Conversely, let (A, ∆A) be a coalgebra with counit, and assume that σ : A ⊗ A → k is an F S-map. A straightforward computation shows that the formula

c · d = σ(d(2) ⊗ c)d(1)

defines an associative multiplication on A and that (A, ·, ∆A) is a W F -algebra. Thus W F -algebras with counit correspond to coalgebras with counit together with an F S-map.

Proposition 156. Let (A, mA, ∆A, εA) be a W F -algebra with counit. Then the forgetful functor F : A FS A → MA is an isomorphism of categories.

Proof. The inverse of F is the functor G : MA → A FS A defined as follows: G(M) = M as an A-comodule, with A-action given by

a · m = σ(m[1] ⊗ a)m[0]

for all a ∈ A, m ∈ M. Further details are left to the reader.
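The W F -algebra structure coming from an A-central element can be tested on a concrete example. The following sketch is ours, not from the text: the choice A = M2(Q) and the Casimir-type element e = Σ_i e_{i1} ⊗ e_{1i} are illustrative assumptions. It verifies A-centrality of e and the identity (8.48), ∆(a) = e2 a ⊗ e1 = e2 ⊗ ae1, by expanding tensors in the basis of matrix units:

```python
# Illustration (ours) of Remark 25.1: in A = M_2(Q) the element
# e = sum_i e_{i1} (x) e_{1i} is A-central, and the induced
# comultiplication satisfies Delta(a) = e^2 a (x) e^1 = e^2 (x) a e^1.

from itertools import product

n = 2

def E(i, j):                      # matrix unit e_{ij} (0-indexed)
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def mul(a, b):                    # matrix multiplication in M_n
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

# e = sum_i e_{i1} (x) e_{1i}, stored as a list of (e^1, e^2) pairs
e = [(E(i, 0), E(0, i)) for i in range(n)]

def flat2(pairs):                 # expand a sum of u (x) v into coefficients
    out = {}
    for u, v in pairs:
        for r, c, s, d in product(range(n), repeat=4):
            coeff = u[r][c] * v[s][d]
            if coeff:
                out[(r, c, s, d)] = out.get((r, c, s, d), 0) + coeff
    return {k: w for k, w in out.items() if w}

a = [[1, 2], [3, 4]]              # an arbitrary test matrix

# (1) A-centrality: a.e = e.a in A (x) A
assert flat2([(mul(a, e1), e2) for e1, e2 in e]) == \
       flat2([(e1, mul(e2, a)) for e1, e2 in e])

# (2) (8.48): e^2 a (x) e^1 = e^2 (x) a e^1
assert flat2([(mul(e2, a), e1) for e1, e2 in e]) == \
       flat2([(e2, mul(a, e1)) for e1, e2 in e])
```

By Remark 25.1 these two facts already make M2(Q) with this e a W F -algebra; note that this particular ∆ need not admit a counit.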
As an immediate consequence of Proposition 155 and Proposition 156 we obtain the following generalization of Abrams' result [4, Theorem 3.3].

Corollary 44. Let (A, mA, 1A, ∆A, εA) be a W F -algebra with unit and counit. Then we have an equivalence of categories

A FS A ≅ A M ≅ MA.
Let us finally show that Proposition 155 also holds over W F -algebras that are unital as modules over themselves.

Proposition 157. Let A be a W F -algebra that is unital as a module over itself. We have an equivalence between the categories A M and A FS A.

Proof. For a unital A-module M, we define F(M) as the A-module M with A-coaction given by

ρ(a · m) = a(1) · m ⊗ a(2).

It is clear that ρ defines an A-coaction. One equality in (8.46) is obvious, and the other one follows from (8.47): for all a, b ∈ A and m ∈ M, we have that

(b · m)[0] ⊗ a(b · m)[1] = b(1) · m ⊗ ab(2) = (ab)(1) · m ⊗ (ab)(2) = ρ(a · (b · m)).

It follows that F(M) is an F S-object, and F defines the desired category equivalence.
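The correspondence between coalgebras with counit and F S-maps described above can also be made concrete. The following toy instance is ours, not from the text: take the grouplike coalgebra C = k^n with ∆(xi) = xi ⊗ xi, ε(xi) = 1, and σ(xi ⊗ xj) = δij. One checks that σ is an F S-map, and the induced product c · d = σ(d(2) ⊗ c)d(1) is the diagonal multiplication xi · xj = δij xj, so (C, ·, ∆) is a W F -algebra with counit; the sketch verifies associativity and (8.47):

```python
# Toy instance (ours): the grouplike coalgebra k^n with the FS-map
# sigma(x_i (x) x_j) = delta_ij induces the diagonal product, and (8.47)
# holds: Delta(ab) = a_(1) b (x) a_(2) = b_(1) (x) a b_(2).
# Elements of C are coefficient lists; elements of C (x) C are n x n grids.

n = 3

def mul(c, d):                      # induced product: x_i . x_j = delta_ij x_j
    return [c[i] * d[i] for i in range(n)]

def delta(c):                       # Delta(c) as an n x n coefficient grid
    return [[c[i] if i == j else 0 for j in range(n)] for i in range(n)]

def tensor_mul_left(c, d):          # a_(1) b (x) a_(2) for a = c, b = d
    out = [[0] * n for _ in range(n)]
    for i in range(n):              # Delta(a) = sum_i c_i x_i (x) x_i
        part = mul([c[i] if k == i else 0 for k in range(n)], d)
        for k in range(n):
            out[k][i] += part[k]
    return out

def tensor_mul_right(c, d):         # b_(1) (x) a b_(2)
    out = [[0] * n for _ in range(n)]
    for j in range(n):              # Delta(b) = sum_j d_j x_j (x) x_j
        part = mul(c, [d[j] if k == j else 0 for k in range(n)])
        for k in range(n):
            out[j][k] += part[k]
    return out

a, b, c = [1, 2, 3], [4, 0, -1], [2, 5, 7]
assert mul(mul(a, b), c) == mul(a, mul(b, c))          # associativity
assert delta(mul(a, b)) == tensor_mul_left(a, b) == tensor_mul_right(a, b)
```

This is of course the simplest possible case; it only illustrates how the F S-map condition forces (8.47), not the general theory.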
Index
A-central 317
adjoint pair 25
algebroid 137
alternative Doi-Koppinen structure 42
antipode 6
augmentation map 4
bialgebra 6
bijective 1-cocycle 241
braid equation 219
braiding operator 240
Brauer-Long group 193
canonical coring 204
canonical element 278
Casimir element 30
characteristic element 105, 319
classical Yang-Baxter operator 221
coalgebra 4
coalgebra Galois extension 211
coalgebra over a comonad 212
coassociative 4
cocommutative coalgebra 4
coinvariants 12
comodule algebra 21
comodule coalgebra 21
comonad 211
comparison functor 204
compatible entwining structures 198
compatible pair of actions 239
comultiplication 4
comultiplicative map 5
conjugate braiding 241
convolution 5
coquasitriangular bialgebra 229
coring 71
coseparable coalgebra 136, 318
counit 4
crossed G-module 232
crossed module 182
crossed product 325
derivation 28
descent datum 205
diagonal map 4
distinguished element 116
DK structure 41
Doi-Hopf modules 49
Doi-Koppinen Hopf modules 49
Doi-Koppinen structure 41
Drinfeld double 186
effective descent morphism 204
entwined module 48
entwining structure 39
factorization structure 50
Frobenius algebra 32, 318
Frobenius bimodule 117
Frobenius functor 89
Frobenius pair 34, 89, 101
Frobenius pair of the second kind 91
Frobenius-separability equation 320
FS-map 327
FS-object 340
fusion equation 245
Galois type coring 209
Godement product 22
Harrison cocycle 293
Heisenberg double 165, 279
Hochschild cohomology 29
Hopf algebra 6
Hopf element 266
Hopf equation 245
Hopf function 260
Hopf-Galois extension 169
integral 109, 179
Knizhnik-Zamolodchikov equation 314
Koppinen smash 57
Long coalgebra 311
Long dimodule 48, 193, 304
Long equation 301
Long map 311
Maschke functor 97
module algebra 8
module coalgebra 8
monoidal entwining structure 81
naturally faithful functor 92
paratrophic matrix 33
pentagon equation 245
pentagon objects 278
pure morphism 208
quantum determinant 238
quantum Yang-Baxter equation 219
quasitriangular bialgebra 225
rack 239
relative Hopf module 49
relative injective object 96
relative projective object 96
right twisted smash coproduct 79
separability idempotent 30, 96, 317
separable algebra 29, 317
separable bimodule 117
separable functor 91
smash coproduct structure 59
smash product 50
smash product structure 50
symmetric algebra 318
T-generator 146
3-cocycle equation 246
total integral 159, 179
transfer map 90
triangular Hopf algebra 226
twist 224
twist equation 225, 293
two-sided entwined module 68
unified Hopf modules 49
unimodular Hopf algebra 116
weak Frobenius algebra 341
Yetter-Drinfeld module 48, 181
Yetter-Drinfeld structure 181