Lecture Notes in Control and Information Sciences Editor: M. Thoma
254
Springer London Berlin Heidelberg New York Barcelona Hong Kong Milan Paris Singapore
Tokyo
Barbara Hammer
Learning with Recurrent Neural Networks With 24 Figures
Springer
Series Advisory Board: A. Bensoussan • M.J. Grimble • P. Kokotovic • A.B. Kurzhanski • H. Kwakernaak • J.L. Massey • M. Morari
Author: Barbara Hammer, Dr. rer. nat., Dipl.-Math.
Department of Mathematics and Computer Science, University of Osnabrück, D-49069 Osnabrück, Germany
ISBN 1-85233-343-X Springer-Verlag London Berlin Heidelberg

British Library Cataloguing in Publication Data
Hammer, Barbara
Learning with recurrent neural networks. - (Lecture notes in control and information sciences)
1. Neural networks (Computer science) 2. Machine learning
I. Title
006.3'2
ISBN 185233343X

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.

© Springer-Verlag London Limited 2000
Printed in Great Britain

The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use.

The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Typesetting: Camera ready by author
Printed and bound at the Athenaeum Press Ltd., Gateshead, Tyne & Wear
69/3830-543210 Printed on acid-free paper SPIN 10771572
To Sagar and Volker.
Preface
A challenging question in machine learning is the following task: Is it possible to combine symbolic and connectionistic systems in some mechanism such that it contains the benefits of both approaches? A satisfying answer to this question does not exist up to now. However, approaches which tackle small parts of the problem exist. This monograph constitutes another piece in the puzzle which eventually may become a running system. It investigates so-called folding networks - neural networks dealing with structured, i.e., symbolic inputs. Classifications of symbolic data in a connectionistic way can be learned with folding neural networks which are the subject of this monograph. This capability is obtained in a very natural way via enlarging standard neural networks with appropriate recurrent connections. Several successful applications in classical symbolic areas exist - some of which are presented in the second chapter of this monograph. However, the main aim of this monograph is a precise mathematical foundation of the ability to learn with folding networks. Apart from the in-principle guarantee that folding networks can succeed in concrete tasks, this investigation yields practical consequences: Bounds on the number of neurons which are sufficient to represent the training data are obtained. Furthermore, explicit bounds on the generalization error in a concrete learning scenario can be derived. Finally, the complexity of training is investigated. Moreover, several results of general interest in the context of neural networks and learning theory are included in this monograph since they form the basis of the results for folding networks: Approximation results for discrete time recurrent neural networks, in particular explicit bounds on the number of neurons and a short proof of the super-Turing universality of sigmoidal recurrent networks, are presented. Several contributions to distribution-dependent learnability, an answer to an open question posed by Vidyasagar, and a generalization of the luckiness framework are included. The complexity of standard feed-forward networks is investigated and several new results on the so-called loading problem are derived in this context. Large parts of the research reported in this monograph were performed while I was preparing my Ph.D. thesis in Theoretical Computer Science at the University of Osnabrück. I am pleased to acknowledge Thomas Elsken,
Johann Hurink, Andreas Küchler, Michael Schmitt, Jochen Steil, and Matei Toma for valuable scientific discussions on the topics presented in this volume. Besides, I had the opportunity to present and improve parts of the results at several institutes and universities, in particular during visits at Rutgers University (U.S.A.) and the Center for Artificial Intelligence and Robotics (Bangalore, India). I am deeply indebted to Bhaskar Dasgupta, Eduardo Sontag, and Mathukumalli Vidyasagar for their hospitality during my stay at Rutgers University and CAIR, respectively. Furthermore, I gratefully acknowledge the people whose encouragement was of crucial support, in particular Grigoris Antoniou, Ute Matecki, and, of course, Manfred. Special thanks go to Teresa Gehrs who improved the English grammar and spelling at the same rate as I introduced new errors, to Prof. Manfred Thoma as the Editor of this series, and Hannah Ransley as the engineering editorial assistant at Springer-Verlag. Finally, I would like to express my gratitude to my supervisor, Volker Sperschneider, who introduced me to the field of Theoretical Computer Science. He and Mathukumalli Vidyasagar, who introduced me to the field of learning theory - first through his wonderful textbook and subsequently in person - laid the foundation for the research published in this volume. I would like to dedicate this monograph to them.
Osnabrück, April 2000.
Table of Contents

1. Introduction
2. Recurrent and Folding Networks
   2.1 Definitions
   2.2 Training
   2.3 Background
   2.4 Applications
       2.4.1 Term Classification
       2.4.2 Learning Tree Automata
       2.4.3 Control of Search Heuristics for Automated Deduction
       2.4.4 Classification of Chemical Data
       2.4.5 Logo Classification
3. Approximation Ability
   3.1 Foundations
   3.2 Approximation in Probability
       3.2.1 Interpolation of a Finite Set of Data
       3.2.2 Approximation of a Mapping in Probability
       3.2.3 Interpolation with σ = H
   3.3 Approximation in the Maximum Norm
       3.3.1 Negative Examples
       3.3.2 Approximation on Unary Sequences
       3.3.3 Noisy Computation
       3.3.4 Approximation on a Finite Time Interval
   3.4 Discussion and Open Questions
4. Learnability
   4.1 The Learning Scenario
       4.1.1 Distribution-dependent, Model-dependent Learning
       4.1.2 Distribution-independent, Model-dependent Learning
       4.1.3 Model-free Learning
       4.1.4 Dealing with Infinite Capacity
       4.1.5 VC-Dimension of Neural Networks
   4.2 PAC Learnability
       4.2.1 Distribution-dependent Learning
       4.2.2 Scale Sensitive Terms
       4.2.3 Noisy Data
       4.2.4 Model-free Learning
       4.2.5 Dealing with Infinite Capacity
   4.3 Bounds on the VC-Dimension of Folding Networks
       4.3.1 Technical Details
       4.3.2 Estimation of the VC-Dimension
       4.3.3 Lower Bounds on fat_ε(F)
   4.4 Consequences for Learnability
   4.5 Lower Bounds for the LRAAM
   4.6 Discussion and Open Questions
5. Complexity
   5.1 The Loading Problem
   5.2 The Perceptron Case
       5.2.1 Polynomial Situations
       5.2.2 NP-Results
   5.3 The Sigmoidal Case
   5.4 Discussion and Open Questions
6. Conclusion
Bibliography
Index
Chapter 1
Introduction
In many areas of application, computers far outperform human beings as regards both speed and accuracy. In the field of science this includes numerical computations or computer algebra, for example, and in industrial applications hardly a process runs without an automatic control of the machines involved. However, some tasks exist which turn out to be extremely difficult for computers, whereas a human being can manage them easily: comprehension of a spoken language, recognition of people or objects from their image, driving cars, playing football, answering incomplete or fuzzy questions, to mention just a few. The factor linking these tasks is that there is no exact description of how humans solve the problems. In contrast to the area in which computers are used very successfully, here a detailed and deterministic program for the solution does not exist. Humans apply several rules which are fuzzy, incomplete, or even partially contradictory. Apart from this explicit knowledge, they use implicit knowledge which they have learned from a process of trial and error or from a number of examples. The lack of exact rules makes it difficult to tell a robot or a machine to behave in the same way. Exact mathematical modeling of the situation seems impossible or at least very difficult and time-consuming. The available rules are incomplete and not useful if they are not accompanied by concrete examples. Such a situation is addressed in the field of machine learning. Analogously to the way in which humans solve these tasks, machines should be able to learn a desired behavior if they are aware of incomplete information and a number of examples instead of an exact model of the situation. Several methods exist to enable learning to take place: Frequently, the learning problem can be formulated as the problem of estimating an unknown mapping f if several examples (x, f(x)) for this mapping are given. If the input and output values are discrete, e.g., symbolic values, a symbolic approach like a decision tree can be applied. Here the input for f consists of several features which are considered in a certain order depending on the actual input. The exact number and order of the considered attributes is stored in a tree structure, and the output for a specific input can be found at the leaves of such a decision tree. Learning means finding a decision tree which represents a mapping such that the examples are mapped correctly and the entire tree has the simplest structure. The learning process is successful if not only
the known examples but even the underlying regularity is represented by the decision tree. In a stochastic learning method, the function class which is represented by the class of decision trees in the previous learning method is substituted by functions which correspond to a probability distribution on the set of possible events. Here the output in a certain situation may be the value such that the probability of this output, given the actual input, is maximized. Learning means estimating an appropriate probability which mirrors the training data. Neural networks constitute another particularly successful approach. Here the function class is formed by so-called neural networks which are motivated by biological neural networks, that is, the human brain. They compute complex mappings within a network of single elements, the neurons, which each implement only very simple functions. Different global behavior results from a variation of the network structure, which usually forms an arbitrary acyclic graph, and a variation of the single parameters which describe the exact behavior of the single neurons. Since the function class of neural networks is parameterized, a learning algorithm is a method which chooses an appropriate number of parameters and appropriate values for these parameters such that the function specified in this way fits the given examples. Artificial neural networks mirror several aspects of biological networks like a massive parallelism improving the computation speed, redundancy in the representation in order to allow fault tolerance, or the implementation of a complex behavior by means of a network of simple elements, but they remain a long way away from the power of biological networks. Artificial neural networks are very successful if they can classify vectors of real numbers which represent appropriately preprocessed data. In image processing the data may consist of feature vectors which are obtained via Fourier transformation, for example. But when confronted with the raw image data instead, neural networks have little chance of success. (From a theoretical point of view they should even succeed with such complex data assuming that enough training examples are available - but it is unlikely that success will come about in practice in the near future.) The necessity of intense preprocessing so that networks can solve the learning task in an efficient way is of course a drawback of neural networks. Preprocessing requires specific knowledge about the area of application. Additionally, it must be fitted to neural networks, which makes a time-consuming trial and error process necessary in general. But this problem is not limited to the field of neural networks: Any other learning method also requires preprocessing and adaptation of the raw data. Up to now, no single method is known which can solve an entire complex problem automatically instead of only a small part of the problem. This difficulty suggests the use of several approaches simultaneously, each solving just one simple part of the learning problem. If different methods are used for the different tasks because various approaches seem suitable for the different subproblems, a unified interface between the single methods
is necessary such that they can exchange their respective data structures. Additionally, the single methods should be able to deal with data structures which are suitable to represent the entire learning methods involved. Then they can control parts of the whole learning process and partially automate the modularization of the original task. Here a problem occurs if neural networks are to be combined with symbolic approaches. Standard neural networks deal with real vectors of a fixed dimension, whereas symbolic methods work with symbolic data, e.g., terms, formulas, ..., i.e., data structures of an in principle unbounded length. If standard networks process these data, either because they classify the output of another learning method or because they act as global control structures of the learning algorithms involved, it becomes necessary to encode the structured data in a real vector. But universal encoding is not generally fitted to the specific learning task and to the structure of the artificial network which uses the encoded values as inputs. In an encoding process an important structure may be hidden in such a way that the network can hardly use the information; on the contrary, redundant encoding can waste space with superfluous information and slow down the training process and generalization ability of the network. An alternative to encoding structured data is to alter the network architecture. Recurrent connections can be introduced such that the network itself selects the useful part of the structured data. Indeed, recurrent networks are able to work directly on data with a simple structure: lists of a priori unlimited length. Therefore, encoding such data into a vector of fixed dimension becomes superfluous if recurrent networks are used instead of standard feed-forward neural networks. A generalization of the recurrence to arbitrary tree structures leads to so-called folding networks. These networks can be used directly in connection with symbolic learning methods, and are therefore a natural tool in any scenario where both symbolic data and real vectors occur. They form a very promising approach concerning the integration of symbolic and subsymbolic learning methods. In the following, we will consider recurrent and folding networks in more detail. Despite successful practical applications of neural networks - which also exist for recurrent and folding networks - the theoretical proof of their ability to learn has contributed to the fact that neural networks are accepted as standard learning tools. This theoretical investigation implies two tasks: a mathematical formalization of learnability and a proof that standard neural networks fit into this definition. Learnability can be formalized by Valiant's paradigm of 'probably approximately correct' or PAC learnability. The possibility of learning with standard neural networks in this formalism is based on three properties: They are universal approximators, which means that they are capable of approximating anything that they are intended to approximate; they can generalize from the training examples to unseen data such that a trained network mirrors the underlying regularity that has to be
learned, too; and fitting the network to the training examples is possible with an efficient training algorithm. In this monograph, we will examine the question of approximation capability, learnability, and complexity for folding networks. A positive answer to these questions is a necessary condition for the practical use of this learning approach. In fact, these questions have not yet been entirely solved for recurrent or standard feed-forward networks and even the term of 'probably approximately correct' learnability leaves some questions concerning the formalization of learnability unsolved. Apart from the results which show the theoretical possibility of learning with folding networks we will obtain results which are of interest in the field of recurrent networks, standard feed-forward networks, or learnability in principle, too. The volume consists of 5 chapters which are as independent as possible of each other. Chapter 2 introduces the general notation and the formal definition of folding networks. A brief description of their in-principle use and some concrete applications are mentioned. Chapters 3 to 5 each examine one of the above mentioned topics: the approximation ability, learnability, and complexity of training, respectively. They each start with a specification of the problems that will be considered in the respective chapter and some bibliographical references and end with a summary of the respective chapter. The monograph is concluded with a discussion of several open questions.
Chapter 2
Recurrent and Folding Networks
This chapter contains the formal definition of folding networks, the standard training algorithm, and several applications. Folding networks constitute a general learning mechanism which enables us to learn a function from a set of labeled k-trees with labels in some finite dimensional vector space into a finite dimensional vector space. By labeled k-trees we address trees where the single nodes are equipped with labels in some alphabet and every node has at most k successors. This sort of data occurs naturally in symbolic areas, for example. Here the objects one is dealing with are logical formulas or terms, the latter consisting of variables, constants, and function symbols, the former being representable by terms over some enlarged signature with function symbols corresponding to the logical symbols. Terms have a natural representation as k-trees, k being the maximum arity of the function symbols which occur: The single symbols are enumerated and the respective values can be found in the labels of a tree. The tree structure mirrors the structure of a term such that subtrees of a node correspond to subterms of the function symbol which the respective node represents. Hence folding networks enable us to use connectionistic methods in classical symbolic areas. Other areas of application are the classification of chemical data, graphical objects, or web data, to name just a few. We will show how these data can be encoded by tree structures in the following. But first we formally define folding networks which constitute a generalization of standard feed-forward and recurrent networks.
2.1 Definitions

One characteristic of a neural network is that a complex mapping is implemented via a network of single elements, the neurons, which in each case implement a relatively simple function.

Definition 2.1.1. A feed-forward neural network consists of a tuple F = (N, →, w, θ, f, I, O). N is a finite set, the set of neurons or units, and (N, →) is an acyclic graph with edges in N × N. We write i → j if neuron i is connected to neuron j in this graph. Each connection i → j is equipped with
Fig. 2.1. Example for a feed-forward network with a multilayered structure (input neurons, hidden neurons, output neurons, activation function); given an input, the neurons successively compute their respective activation.
a weight w_ij ∈ ℝ; w is the vector of all weights. The neurons without predecessor are called input neurons and constitute the set I. All other neurons are called computation units. A nonempty subset of the computation units is specified and called output units, denoted by O. All computation units which are not output neurons are called hidden neurons. Each computation unit i is equipped with a bias θ_i ∈ ℝ and an activation function f_i : ℝ → ℝ. θ and f are the corresponding vectors. We assume w.l.o.g. that N ⊂ ℕ and that the input neurons are {1, ..., m}. A network with m inputs and n outputs computes the function

f : ℝ^m → ℝ^n,  f(x_1, ..., x_m) = (o_{i_1}, ..., o_{i_n}),

where i_1, ..., i_n are the output units and o_i is defined recursively for any neuron i by

o_i = x_i                               if i is an input unit,
o_i = f_i(∑_{j→i} w_ji · o_j + θ_i)     otherwise.

The term ∑_{j→i} w_ji · o_j + θ_i is called the activation of neuron i. An architecture is a tuple F = (N, →, w′, θ′, f, I, O) as above, with the difference that we require w′ and θ′ to specify a weight w′_ij for only some of the connections i → j, or a bias θ′_i for some of the neurons i, respectively.
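As a small illustration of Definition 2.1.1 (a sketch with invented weights and neuron numbering, not code from the monograph), the following Python fragment evaluates such a network: every computation unit applies its activation function to the weighted sum of the outputs of its predecessors plus its bias.

import math

# A toy feed-forward network: neurons are numbered, input neurons have no
# predecessors, every computation unit i has a bias theta[i], an activation
# function act[i], and weighted connections w[(j, i)] from predecessors j.

def sgd(x):                      # standard sigmoidal function
    return 1.0 / (1.0 + math.exp(-x))

def evaluate(inputs, w, theta, act, outputs):
    """Compute the network function; 'inputs' maps input neurons to values."""
    o = dict(inputs)             # o_i = x_i for input neurons

    def out(i):                  # recursive computation of o_i
        if i not in o:
            s = sum(w[(j, k)] * out(j) for (j, k) in w if k == i) + theta[i]
            o[i] = act[i](s)     # activation function applied to the activation
        return o[i]

    return [out(i) for i in outputs]

# Example: inputs 1, 2; hidden unit 3; output unit 4 with identity activation.
w = {(1, 3): 0.5, (2, 3): -1.0, (3, 4): 2.0}
theta = {3: 0.0, 4: -1.0}
act = {3: sgd, 4: lambda x: x}
print(evaluate({1: 1.0, 2: 0.5}, w, theta, act, outputs=[4]))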
Fig. 2.2. Several activation functions: perceptron activation H, standard sigmoidal function sgd, and semilinear activation lin; all three functions are squashing functions.

In an obvious way, an architecture stands for the set of networks that results from the above tuple by completing the specification of the weight vector and biases. By an abuse of notation, the set of functions that result from these networks is sometimes called an architecture, too. Since these functions all have the same form, we sometimes denote the architecture by a symbol which typically denotes such a function. For a concrete network F the corresponding architecture refers to the architecture where no w_ij or θ_i are specified. A network computes a mapping which is composed of several simple functions computed by the single neurons. See Fig. 2.1 as an example. The activation functions f_i of the single computation units are often identical for all units or all but the output units, respectively. We drop the subscript i in these cases. The following activation functions will be considered (see Fig. 2.2): The identity id : ℝ → ℝ, id(x) = x, which is often used as an output activation function in order to implement a scaling or shifting of the desired output domain; the perceptron activation H : ℝ → ℝ,

H(x) = 0   if x < 0,
H(x) = 1   if x ≥ 0,

which is a binary valued function occurring mostly in theoretical investigations of networks. In practical applications the activation functions are similar to the perceptron activation for large or small values and have a smooth transition between the two binary values. The standard sigmoidal function sgd(x) = (1 + e^(−x))^(−1) and the scaled version tanh(x) = 2 · sgd(2x) − 1 are common functions of this type. A more precise approximation of these sigmoidal activations which is sometimes considered in theoretical investigations is the semilinear function

lin(x) = 1   if x > 1,
lin(x) = x   if x ∈ ]0, 1[,
lin(x) = 0   if x ≤ 0,
which fits the asymptotic behavior of sgd and the linearity at the point 0. For technical reasons we will consider the square activation x ↦ x². By a squashing activation we mean any function which is monotonous with function values that tend to 1 for large inputs x and to 0 for small input values x, respectively. A function is C^n if it is n times continuously differentiable. A property of a function holds locally if it is valid in the neighborhood of at least one point. For technical reasons we will consider multiplying units, which simply compute a product of the output values of their predecessor units instead of a weighted sum, i.e., o_i = ∏_{j→i} o_j for multiplying units i. The connection structure → often has a special form: the neurons decompose into several groups N_0, ..., N_{h+1}, where N_0 are the input neurons, N_{h+1} are the output neurons, and i → j if and only if i ∈ N_k, j ∈ N_{k+1} for some k. Such a network is called a multilayer feed-forward network, or MLP for short, with h hidden layers; the neurons in N_i constitute the hidden layer number i for i ∈ {1, ..., h}. Feed-forward networks can handle real vectors of a fixed dimension. More complex objects are trees with labels in a real vector space. We consider trees where any node has a fixed fan-out k, which means that any nonempty node has at most k successors, some of which may be the empty tree. Hence, a tree with labels in a set Σ is either the empty tree, which we denote by ⊥, or it consists of a root which is labeled with some value a ∈ Σ and k subtrees t_1, ..., t_k, some of which may be empty. In the latter case we denote the tree by a(t_1, ..., t_k). The set of trees which can be defined as above is denoted by Σ_k^*. In the following, Σ is a finite set or a real vector space. One can use the recursive nature of trees to construct an induced mapping which deals with trees as inputs from any vector valued mapping with appropriate arity:
Definition 2.1.2. Assume R is a set. Any mapping g : Σ × R^k → R and initial context y ∈ R induce a mapping g̃_y : Σ_k^* → R, which is defined recursively as follows:

g̃_y(⊥) = y,
g̃_y(a(t_1, ..., t_k)) = g(a, g̃_y(t_1), ..., g̃_y(t_k)).
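Definition 2.1.2 translates almost literally into a recursive program. In the following sketch (the tree representation and all names are assumptions made for the example), a k-tree is either None, standing for the empty tree ⊥, or a pair of a label and a list of k subtrees.

# A k-tree over labels is either None (the empty tree) or (label, [t_1, ..., t_k]).

def induced(g, y):
    """Return the mapping induced by g and initial context y (Definition 2.1.2)."""
    def g_tilde(t):
        if t is None:                      # empty tree: encoded by the context y
            return y
        label, subtrees = t
        return g(label, *[g_tilde(s) for s in subtrees])
    return g_tilde

# Example with k = 2 and real-valued codes: count the nodes of a binary tree.
count = induced(lambda label, left, right: 1 + left + right, y=0)
t = ('a', [('b', [('d', [None, None]), None]),
           ('c', [('e', [None, None]), ('f', [None, None])])])
print(count(t))   # 6 nodes: a, b, c, d, e, f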
This definition can be used to formally define recurrent and folding networks:

Definition 2.1.3. A folding network consists of two feed-forward networks which compute g : ℝ^(m+k·l) → ℝ^l and h : ℝ^l → ℝ^n, respectively, and an initial context y ∈ ℝ^l. It computes the mapping

h ∘ g̃_y : (ℝ^m)_k^* → ℝ^n.

A folding architecture is given by two feed-forward architectures, one with m + k·l inputs and l outputs for g and one with l inputs and n outputs for h, and an only partially defined initial context y′ in ℝ^l.
Fig. 2.3. Example for the unfolding process if a concrete value is computed with a very simple folding network (compact notation: recursive part g with encoding layer and context neurons, feed-forward part h; an input tree leads to the computation induced by g); the folding network is formally unfolded according to the structure of the input tree; for every subtree one copy of the recursive part of the folding network can be found in the computation.
The input neurons m + 1, ..., m + k·l of g are called context neurons. g is referred to as the recursive part of the network, h is the feed-forward part. The input neurons of a folding network or architecture are the neurons 1, ..., m of g, the output units are the output neurons of h. If k = 1 we call folding networks recurrent networks.
Specifying all weights and biases in the architectures for g and h and all coefficients in y′ leads to a class of folding networks. As before, we sometimes identify the function class which can be computed by these folding networks with the folding architecture. To understand how a folding network computes a function value one can think of the recursive part as an encoding part: a tree is encoded recursively into a real vector in ℝ^l. Starting at the empty tree ⊥, which is encoded by the initial context y, a leaf a(⊥, ..., ⊥) is encoded via g as g(a, y, ..., y) using the code of ⊥. Proceeding in the same way, a subtree a(t_1, ..., t_k) is
Fig. 2.4. A simple recurrent neural network: here the recurrent connection is a one-to-one correspondence due to the linearity of the input structure.
encoded via g using the already computed codes of the k subtrees t_1, ..., t_k. The feed-forward part maps the encoded tree to the desired output value. We refer to l as the encoding dimension. A folding network which takes binary trees with real valued nodes as inputs is depicted in Fig. 2.3. In a concrete computation, part g is applied several times according to the structure of the input. Recurrent links in part g indicate the implicit recurrence which depends on the input structure. The input tree a(b(d(⊥, ⊥), ⊥), c(e(⊥, ⊥), f(⊥, ⊥))), for example, is mapped to the value h(g(a, g(b, g(d, y, y), y), g(c, g(e, y, y), g(f, y, y)))). Note that for each fixed structure of input trees one can find an equivalent feed-forward network which computes the same output by simply unfolding the folding network. According to the recurrence, several weights are identical in this unfolded network. For k = 1, the inputs consist of sequences of real vectors in ℝ^m. Here the recurrent connections have a one-to-one correspondence since the unfolding process is linear (see Fig. 2.4). In the case k = 1 we will drop the subscript k. Then a tree a_1(a_2(a_3(...(a_n(⊥))...))) is denoted by [a_n, ..., a_2, a_1]. Any mappings we consider are total mappings unless stated otherwise. A mapping between real vector spaces is measurable or continuous, respectively, if it is measurable with respect to the standard Borel σ-algebra or continuous with respect to the standard topology, respectively. A mapping f : (ℝ^m)_k^* → ℝ^n is measurable or continuous if and only if any restriction of f to trees of a fixed structure is measurable or continuous as a mapping between real vector spaces. This latter requirement defines the topology and σ-algebra, respectively, on the set of trees. In particular, the set which consists of all trees of one fixed structure is open and measurable. If we restrict our consideration to trees of a fixed structure, a set is open or measurable if and only if it is
open or measurable, respectively, with respect to the standard topology or σ-algebra on a real vector space. In the following, we denote by Σ
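To illustrate Definition 2.1.3 and the encoding view described above, the following sketch (hypothetical dimensions and random weights, not taken from the text) processes the example tree of Fig. 2.3 with a folding network h ∘ g̃_y whose parts g and h are single-layer networks; for k = 1 the same scheme processes a list like a simple recurrent network.

import numpy as np

rng = np.random.default_rng(0)
m, k, l, n = 1, 2, 3, 1            # label dim., fan-out, encoding dim., output dim.

# Recursive part g : R^(m+k*l) -> R^l and feed-forward part h : R^l -> R^n,
# both single-layer networks with tanh / linear activation (hypothetical weights).
Wg = rng.normal(size=(l, m + k * l)); bg = rng.normal(size=l)
Wh = rng.normal(size=(n, l));         bh = rng.normal(size=n)
y0 = np.zeros(l)                      # initial context encoding the empty tree

def g(label, *codes):
    return np.tanh(Wg @ np.concatenate([np.atleast_1d(label), *codes]) + bg)

def g_tilde(t):                       # induced encoding of a k-tree (Def. 2.1.2)
    if t is None:
        return y0
    label, subtrees = t
    return g(label, *[g_tilde(s) for s in subtrees])

def folding_net(t):                   # the folding network h o g_tilde_y
    return Wh @ g_tilde(t) + bh

# The tree a(b(d(_,_), _), c(e(_,_), f(_,_))) with numeric labels a=1, ..., f=6.
tree = (1.0, [(2.0, [(4.0, [None, None]), None]),
              (3.0, [(5.0, [None, None]), (6.0, [None, None])])])
print(folding_net(tree))

# For k = 1 a sequence [a_n, ..., a_1] is the degenerate tree a_1(a_2(...a_n(_)...)),
# so the very same code realizes a simple recurrent network.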
2.2 Training
In a concrete learning task several examples (x_i, f(x_i))_{i=1}^p, the training set, of an unknown function are available. Here the x_i ∈ (ℝ^m)_k^* are vectors, sequences, or trees, depending on the learning task, and f : (ℝ^m)_k^* → ℝ^n is a vector valued function which is to be learned. For a fixed folding architecture h ∘ g̃_y with m inputs and n outputs, where all weights and biases are unspecified, we can define the quadratic error

E(w) = ∑_{i=1}^p |f(x_i) − h ∘ g̃_y(x_i)|²
which depends on the weights and biases w. Often, the initial context y is chosen as some fixed vector, the origin, for example. Minimizing E(w) with respect to the weights yields a folding network which outputs on data x_i some value as similar as possible to the desired output f(x_i). Since the activation functions used in practice are usually differentiable, the error minimization can be performed with a simple gradient descent method, i.e., some procedure of the form:

w := some initial values
while E(w) is large
    w := w − η · ∇E(w)
12
2. Recurrent and Folding Networks
number of neurons such that the capability of bringing the quadratic error to zero or the in-principle possibility of valid generalization to unseen examples, respectively, is guaranteed. Hence the search space for an appropriate architecture becomes finite. After choosing several architectures, the quality of each architecture is estimated as follows: Each architecture is trained on one part of the training set with (a modification of) the above training method. Afterwards, the quadratic error on the remaining part of the training set is used to estimate the quality of the architecture. Obviously, one cannot use the quadratic error on the training set because in general, it will be smaller than the value one is interested in, the deviation of the network function from the function that is to be learned. Hence one uses the quadratic error on a set not used for training as an estimation of f If(x) - h o ~ y ( X ) [ 2 d p , P denoting the probability measure on the input trees in accordance to which the input patterns are chosen. Unfortunately, this method yields a large variance. The variance can be reduced if the single architectures are trained and tested several times on different divisions of the training data using so-called cross-validation [125]. Hence we get a ranking of the different architectures and can use the best architecture for the final network training as described above. This roughly describes the in-principle learning method. Note that this yields a training method, but up to now no theoretical justification of its quality. For feed-forward networks the training method is well founded because their universal approximation capability and information theoretical and complexity theoretical learnability are (with some limitations) well understood. These theoretical properties do not transfer to folding networks immediately. The necessary investigations are the topic of this volume and altogether establish the in-principle possibility of learning with the method as described above. However, a couple of refinements and modifications of the training algorithm exist for folding networks as for standard feed-forward networks. We mention just a few methods, the ideas of which can be transferred immediately: integration of prior knowledge via a penalty term in the quadratic error which penalizes solutions contradicting the prior information or regularization of the network via weight restriction [21, 96]; modification of the architecture during the training or simplification of the final network via pruning less important weights and neurons [77, 92]. One problem is to be dealt with, while training recurrent and folding architectures, which does not occur while training feed-forward networks: the problem of long-term dependencies [16]. Roughly, the problem tells us that the training scenario is numerically ill-behaved if a function is to be learned where entries at the very beginning of long sequences or labels near the leaves of deep trees determine the output of the function. Hence several approaches try to modify the architecture, the training algorithm, or both
2.3 Background
13
[14, 40, 54, 67]. This problem has not yet been satisfactorily solved; we will consider some correlated theoretical problems in Chapter 5.
2.3 B a c k g r o u n d The above argumentation tells us that a large variety of training methods are available in order to use folding networks in practice. Before describing several areas of application in more detail, we want to mention the origin of the approach. Despite the success of neural networks in several different areas dealing with continuous data [25, 121], their ability to process symbolic data was doubted in [34], for example. One of the main criticisms is that a natural representation of symbolic data in a finite dimensional vector space does not exist because symbolic data is highly structured and consists of a n a priors unlimited length. Since standard networks only deal with distributed representations of the respective data in a finite dimensional vector space, they cannot be used for processing symbolic data. In reaction to the criticism [34], several approaches including the RAAM and LRAAM [101, 122, 124], tensor constructions [117], BoltzCONS [128], and holographic reduced representations [100} were proposed as mechanisms to deal with some kind of structured data. One key problem is to find a mechanism that maps the structured data into a subsymbolic representation, i.e., a real vector of fixed dimension. For this purpose, connectionistic models are equipped with an appropriate recurrent dynamics in the above approaches. Actually, the RAAM and LRAAM already contain the same dynamics as folding networks, but training proceeds in a different way. They consist of two parts, the first of which is used for recursive encoding of trees into a finite dimensional vector space, the second of which performs a recursive decoding of finite dimensional vectors to trees. Formally, the LRAAM consists of two networks ] and g with input dimension m + k. n or n, respectively, and output dimension n or m + k -n, respectively. It computes h y o ~ where h r : •n __+ (Rm)~ is defined by hy(x) = { _l_ ho(x) (hy (hi ( x ) ) , . . . , hy (hk (x)))
x e Y otherwise,
where Y C R '~ a n d h = ( h o , h l , . . . , h j , ) . Hence ho outputs the label, hi, 9.., hk compute codes for the k subtrees, h performs a dual computation compared to ~. The LRAAM performs some kind of universal encoding of tree structures. The two networks g and h are trained simultaneously such that the composition yields the identity on k-trees. For this purpose, some kind of truncated gradient descent method is used [101]. Afterwards, one can address any learning task that deals with tree structures as inputs or outputs with a standard network, since the composition of a standard network with one or two parts of the LRAAM reduces the learning task to the learning of a mapping between finite dimensional vector spaces. See Fig.2.5 as an illustration.
14
2. Recurrent and Folding Networks encoding
decoding
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
.!i
a
A b
A
d
c
e
\
i
a
b f
c
I i
d
I
~
e
f
-
may be composed with a standard network Fig. 2.5. The LRAAM performs both encoding and decoding. Hence composition of these parts with a standard network can be used to learn arbitrary mappings with trees as input or output. Folding networks deal with only one part of the problem: mappings from tree structures into a real vector space. Hence they use only the encoding part of the LRAAM and combine it directly with a standard network. The training is fitted to the specific learning task. This method allows them to store only the valuable information that is necessary for the specific task and makes the efficient processing of the desired information possible. Additionally, it will turn out in Chapter 4 that folding networks are more likely to succeed from a theoretical point of view compared to the LRAAM. The other approaches mentioned above have in common the fact that complex data is encoded into a representation as a real vector which is universal and not fitted to the specific learning problem, either. The tensor construction, BoltzCONS, and the holographic reduced representation deal with a fixed encoding which is not learned or modified at all. As already mentioned, RAAM and LRAAM learn the encoding, but only the structure of the input data is relevant for this process - the classification task is not taken into account. In contrast, recurrent and folding networks fit the encoding to the specific learning task. Actually, this method is already used very successfully in the special case of linear trees, i.e., lists. Recurrent networks are a natural tool in any domain where time plays a role, such as speech recognition and production, control, or time series prediction, to name just a few [39, 43, 80, 87, 91,126, 140]. They are also used for the classification of symbolic data as DNA sequences [102]. Elman networks are a special case [29, 30], with the main difference that the learning algorithm is only a truncated gradient descent and hence training is commonly less efficient compared to standard methods used for training recurrent networks [54]. Both architectures compete with feed-forward net-
2.4 Applications
15
works which cut the maximum length of an input sequence to a fixed value and can therefore only deal with an a priori limited time context [15, 57, 87]. Folding networks have been designed to enable neural networks to classify classical terms, logic formulas, and tree structures so that they can be used in symbolic domains, for the control of search heuristics in automated deduction [41, 111], for example. Furthermore, successful applications exist in various areas.
2.4 Applications In the following, we have a closer look at some of these applications focusing on the method of how the respective data is encoded by a tree structure.
2.4.1 Term Classification The set of terms over a finite signature and finite set of variables can be represented by trees in the following way: Denote by k the maximum arity of the function symbols, by c some injective function which maps the symbols, variables, constants, and function symbols, to real numbers. A variable X is encoded by the k-tree C ( X ) = c ( X ) ( • • a term f ( t l , . . . ,tkl) (kl <_ k) is encoded by the k-tree C ( f ( t l , . . . , fk~)) = c ( f ) ( C ( t l ) , . . . , C(tkl), • • Reasonable training tasks are to learn mappings f : {t E I~ It = C ( T ) for some term T} --~ {0,1} where f ( C ( T ) ) = 1 ~:~ T ~ is a subterm of T for some fixed term T', or f ( C ( T ) ) = 1 r T is an instance of a fixed term T' containing variables, or f ( C ( T ) ) = 1 ~ a Boolean combination of similar characteristics as above holds, . . . . Results for these kinds of training problems can be found in [42, 73]. Most of the considered tasks can be solved with folding networks h o ~y where h and g do not possess a hidden layer, with standard training methods, and more than 93% correct classified data on a test set.
2.4.2 Learning Tree A u t o m a t a A (finite, bottom-up, deterministic) tree automaton is a tuple (,U, Q, b, F, 6) where ,U, the input alphabet, and Q, (Q n ~: = 0), the set of states, are finite sets, b E Q is the initial state, F C Q is the nonempty set of final states, and 6 : ~7 • Qk ~ Q is the transition function. A tree t E Z~ is accepted by the automaton if and only if $b(t) E F. Assuming ,U C ~ natural tasks for a folding architecture are to learn the function f : (,Uk)* ~ {0, 1} with f ( t ) = 1 if and only if the tree t is accepted by some specified tree automaton. In [72], folding networks are successfully trained on tasks of this kind with standard training methods; in all cases an accuracy of nearly 100% is obtained on a test set. Furthermore, their in-principle ability of simulating
i6
2. Recurrent and Folding Networks
tree a u t o m a t a is established. Moreover, training turns out to be successful (i.e., more than 94% accuracy on a test set) for languages that are not recognizable by tree a u t o m a t a either. [72] reports very good results for the recognition of the language {t E {f, a, b}~ I t contains an instance of the term .f(X, X) as subterm}. We will deal with correlated problems concerning the computational capability of folding networks in Chapter 3. 2.4.3 C o n t r o l o f S e a r c h H e u r i s t i c s for A u t o m a t e d D e d u c t i o n Several calculi in a u t o m a t e d theorem proving, for example, the resolution calculus or model elimination [81, 104], try to resolve a goal ~ successively with a set of formulas M in order to see whether ~ follows from M or not. The proof process can be modeled by a search tree with nodes labeled with the actual states of the theorem prover and sons of the respective nodes which represent the possible proof steps. If ~ follows from M, then at least one path in this tree leads to a valid proof. See Fig. 2.6 as an example for such a proof tree. Obviously, the success of the theorem prover depends on the order according to which the single possible states are explored. Since the fan-out of the nodes in the search tree is typically larger than one at every proof step, a naive search leads to an exponential amount of time. In order to prevent this fact, one can try to find heuristics telling us which state is to be visited first. Formally, we want to find a mapping from the single nodes of the search tree into the real numbers such that a search visiting the states in accordance with the induced ranking leads to a proof in a short timeThe single states of the theorem prover consist of a finite set of logical formulas and hence can be represented by a finite set of trees in a natural way. Therefore folding networks can be used in order to learn the mapping of finite sets of trees into the real numbers which represents an appropriate ranking. In [41], this method is applied to several word problems from group theory. The training d a t a is obtained from several different (truncated) proof trees where the states on a proof path are mapped to high values and states on a failure path are mapped to low values in a first approximation. Since the same state may occur on a proof path as well as on a failure path, this naive approach does not work. Hence in [41] the quadratic error is modified such that the error is small if at least one state on every failure path is ranked with a lower value than all states on at least one proof path. The use of this modified error function training yields very good results: 17 of 19 proofs which could not be obtained within reasonable time without a ranking could be performed with the ranking which wa~ learned with a folding network and standard training methods. 2.4.4 Classification o f C h e m i c a l D a t a In [110], folding networks are used compared to standard feed-forward networks for the task of mapping chemical data (here: triazines) to their activity
2.4 Applications
Set of formulas: l(a,b), l(b,c), l(e,c). L(X,X). L(X,Y):-L(Y,X).
17
L(X,Y):-I(X,Z),L(Z,Y).
The goal L(a,e) leads to the following search tree: ?-L(a,e). 7-h(a,e). 9 ..
?-h(c,a). o ' " a - "
infinite path! . ~
.~L(c,a). ?-L(a,c). "'"
'
'
~s2~'
?-L(e,c). / ?-L(c,e). failure!
?-L(b,c).
/ ?-L(c,b). "'"
~ ?-L(c,c).
L(c,c).
L(e,e).
[]
?-L(/).~,c []
success! F i g . 2.6. Example for a search tree of an a u t o m a t e d theorem prover: The question whether L(a, e) follows from the given set leads to a search tree with several proof paths, but several failure and infinite paths as well.
(here: their ability to inhibit the reproduction of cancer cells).For this purpose the chemical data is represented by terms in an appropriate way. A typical structure is depicted in Fig. 2.7. T w o basic phenyl rings can be found in every structure of the training set, the second of which possesses at positions 3 and 4 variable structures: atoms, basic molecules, another ring, or a bridge, as an example. Hence a representation by a binary symbol ring(_,_) which captures the basic ring structure is possible. The two places indicate that at positions 3 and 4 variable structures may be found. These are represented by real values ifwe deal with atoms or basic molecules, by the symbol ring(_,_),ifanother ring isfound, or bridge(_, _),ifa bridge takes place. In the latter case, the firstplace represents the structure of the bridge itselfand the second place represents the free position. Hence we obtain a term representation, for example, the term ring(CL,bridge(CH2CH2, _)) for the structure in Fig. 2.7. In addition to the single symbols, physio-chemical attributes likepolarizabilitymay be encoded in the labels of the corresponding tree. Training on these data with standard methods yields a folding network such that the spear-man's correlation (i.e.,the correlation of the ranking induced by the network) is improved from 0.48 for feed-forward networks to 0.57 for folding networks on a test set. Note that both the labels and the structure encode valuable information in this example if physio-chemical attributes are added in the trees representation.
18
2. Recurrent and Folding Networks
CL
variable ~176176
C H 2C H
N Fig. 2.7. Structure of a triazine: Two phenyl rings form the basic structure, the second of which may possess different structures at positions 3 and 4.
2.4.5 Logo Classification Images of logos that are subject to noise are to be recognized in an approach presented in [24]. The logos are represented by trees in the following way: First, classical edge detection algorithms extract edges. These are encoded via features like their curvature, length, center point, ... and constitute the labels of the nodes. The nodes are arranged in a tree such that a node becomes the son of another node if the edge represented by the latter node surrounds the former edge by at least 270 degrees. Since this method yields trees with a large fan-out, the approach in [24] applies a method which substitutes the trees by binary trees first. For this data, the approach in [24] reports a test set accuracy of at least 88% for several different settings which are trained with folding networks and standard training algorithms. Other possible areas of application are reported in [36, 38]. Hence folding networks have turned out to be successful in practice in several different areas. In the following, we investigate a theoretical justification of this fact.
Chapter 3
Approximation Ability
In this chapter we deal with the ability of folding networks to approximate a function from structured data into a real vector space in principle. In a concrete learning task some empirical data (xi, .f(xi)) of an unknown function f which is to be learned is presented. We want to find a folding network that maps the data correctly. T h a t is, the inputs xi are mapped by the network to values approximating .f(xi). Furthermore, this approximating folding network should represent the entire function ] underlying the d a t a correctly. T h a t is, it should map any tree x to a value approximating f ( x ) even if x is different from all training examples. But we have no chance of finding such a network unless any finite set of d a t a can be correctly approximated by a folding network and, more significantly, unless any function we want to approximate can be approximated in some way by a folding network with an appropriate number of neurons. Of course, the notation of approximation is to be made more precise. A function can be approximated in the maximum norm, i.e., the network's output differs for any input at most a small value e from the desired output. In particular, trees of arbitrary height are to be predicted correctly. This means that we want to classify any tree correctly even if the tree is so high such that it will rarely occur. However, situations exist where approximation in the maximum norm seems appropriate, e.g., if the long-term behavior of a dynamic system is to be approximated. Alternatively, we could restrict the approximation to trees that seem reasonable, excluding, for example, trees of a large height. Formally, we demand only that the probability of trees where the network's output differs significantly from the desired o u t p u t can be made arbitrarily small. This demand corresponds to an approximation in probability. Since trees with more than a certain height become nearly impossible in any probability measure, this notation of approximation restricts the consideration to inputs with only a restricted recurrence. It would be nice to not only ensure the approximation capability for some maybe very large - network but to find explicit bounds on the number of hidden layers and neurons that are necessary in such an architecture. Bounds on the number of hidden layers and neurons limit a priori the architectures -
20
3. Approximation Ability
we have to consider unless a correct representation of the empirical data is found. The following questions are worth considering as a consequence: 1. Can any finite set of data be approximated or interpolated, respectively, by a folding network? 2. Can any reasonable function be approximated in probability? 3. Can any reasonable function be approximated in the maximum norm? 4. Can the number of neurons and hidden layers that ensure the approximation capability be limited? The first two questions are answered with 'yes' in section 3: Any finite set of data can be interpolated and any measurable function can be approximated in probability. Furthermore, to interpolate a set of p patterns, it is sufficient to use a number of neurons which is quadratic in p for an activation function like the sigmoidal function. It is sufficient to consider folding networks with only a constant number of layers to approximate a measurable function. On the contrary, the third question is answered with 'no' in general in section 4. But first we introduce some notation.
3.1 Foundations We start with a formal definition of the term 'approximation' that we consider here. D e f i n i t i o n 3.1.1. A s s u m e X is a set and Y C R n /or some n. A function f : X - r Y is approximated by a function 9 : X ~ Y in the maximum norm w/th accuracy e if sup If(z) - g(z)[ < e. zEX
A s s u m e X is equipped with a a-algebra and P is a probability measure on X . A measurable function f : X ~ Y is approximated by a measurable function g : X --r Y in probability with accuracy e and confidence ~ if P(x E X [If(x)-g(x)l
> e) < 5.
The situation where a finite set of data, i.e., X C Rn is finite, is approximated or even interpolated, i.e., e = 0, is of particular interest when dealing with a learning algorithm which tries to fit a network such that it represents some empirical data. In the following, X C Rm will be an arbitrary subset or X C (R m)~ will contain sequences or trees if f is a recursive mapping. Y will be the real vector space Rn or a compact subset of this space. Any result that leads to a positive or negative fact concerning these two formalisms is called an approximation result in the following. For feed-forward networks it is shown in [59] that any continuous function R m ~ Rn with compact domain can be approximated arbitrarily well
3.1 Foundations
21
in the maximum norm, and any Borel-measurable function can be approximated arbitrarily well in probability with a network with only one hidden layer with squashing activation function and linear output. Additionally, it is shown in [58] that it is possible to approximate a continuous function in the maximum norm or a measurable function in probability, respectively, with single hidden layer networks and bounded weights if the activation function of the hidden nodes is locally Riemann integrable and nonpolynomial. Furthermore, to interpolate p points with outputs in R exactly by such a network with unbounded weights it is sufficient to use only p neurons in the hidden layer [119]. For an input dimension m, o u t p u t dimension n, and continuously differentiable activation with some other conditions, this bound is improved to 2 p m / ( n + m ) hidden neurons, which are sufficient for the interpolation of p points in general position [28]. We will use these results in our constructions in order to show some universal approximation property for folding networks. The respective feed-forward networks will be accompanied by a recursive network. The recursive network encodes the respective data in a finite dimensional vector space and hence reduces the approximation problem to a problem for feed-forward networks. However, as a consequence of these results folding networks with appropriate activation functions can approximate any function of the special form g o ]y : K
22
3. Approximation Ability
data line WlW2W 3 0 0 ... 0
data line 0 0 0 0 ... 0 f([w I w2w3] )
indicator line
indicator line 0000...01
1 1 1 00...0 I I
Fig. 3.1. Recurrent network as a computational model; both, input and output are encoded by two lines indicating the fact that data is present and the data, respectively
mation of some functions with outputs in a finite but not necessarily binary alphabet. Again, this approach only leads to the approximation of functions in a very special form. In particular, the functions a priori have a recursive structure. This property is not known a priori in general term classification tasks. Apart from this fact, the results allow the approximation of recursive inputs with unlimited height. Furthermore, the argumentation gives rise to the question as to whether the simulation of finite automata can be generalized to a simulation of general Turing machines with recurrent networks, too. In order to answer this question it is necessary to define what a computation with a recurrent network looks like. In [116], for example, two possible formalisms are presented. The first formalism operates on binary words w of length at least 1, i.e., on w E {0, 1} +. We refer to the single letters in w with the symbols wi. D e f i n i t i o n 3.1.2. A recurrent neural network computes a function f : {0, 1} + --r {0, 1} (which may be a partial function) on on-line inputs /f the network computes h o/~y : (R2) * --~ R 2 such that the following holds: - For every word w 9 {0, 1} + where f ( w ) is defined, a number t 9 N, the computation time, exists with
h o ~y([(Wl, 1), (w~, 1 ) , . . . , (wn, 1), (0, 0 ) , . . . , (0, 0)]) = ( f ( w ) , 1) t-lwl~imes and for every prefix of this input string with length in [Iwh t[ the network
outputs (0, 0). -
If f ( w ) is not defined,
h o ~y([(wl, 1), (w2, 1 ) , . . . , (wn, 1), (0, 0 ) , . . . , (0, 0~) = (0, 0) t times for all t E N.
3.1 Foundations
23
Consequently, two types of input and output neurons exist: one where the input or output values, respectively, of function f can be found, and one which only indicates with a value 1 that data is present and with a value 0 that the network is still computing. See Fig. 3.1. Another formalization of a computation with a recurrent network gets the input information from the activation value of one specified neuron which encodes the input w. After reading this input, the network is allowed to compute for some time unless the output can be found analogous to the first formalism. Since no further input information is necessary, but the computation may take some time, such a network formally works on sequences with elements in a 0 dimensional vector space R~ . We denote the dummy element in R~ b y T . D e f i n i t i o n 3.1.3. Assume c : {0, 1} + ~ I~ is an injective function which serves as an encoding function, f : {0, 1} + --~ {0, 1} is computed by a recurrent network on off-line inputs which are encoded via c if a recurrent architecture h o (~(_,y) exists, where only one coefficient of the initial context is not specified, such that for every word w E {0, 1} + and the function h o ~(c(w),y) : (IR~ * - r R2 the following holds: -
For any word w E {0, 1} +, where f ( w ) is defined, a number t E N, the computation time, exists with h o O(c(w),y)([T, .~., TJ) = ( f ( w ) , 1) and for
t times every prefix of this input string with shorter length the network h o g(c(w),y) outputs (0, 0). - I l l ( w ) is not defined, ho[t(c(w),y)(~T,...,T~) = (0,0) for every t E N. t times As in the first formalism two output neurons exist: one for the data and one that indicates whether data is present. But the input is encoded in the initial value of one neuron directly using c. All input neurons of the recurrent network are dropped, c can be chosen as c([wl . . . . , Wn]) = Y]n=a (2wi + 1)/(4i), for example. This encoding is used in [116]. Obviously, the demand for an output of exactly 0 or 1, respectively, can be modified to demand values near 0 or 1. This is appropriate in the case of real valued outputs if we deal with a sigmoidal output activation, for example. Taking t as the computation time, the notion of a computation in linear, polynomial, or exponential time is defined in both of the above formalisms. In [116] it is shown that any Turing machine can be simulated by a recurrent network with semilinear activation in the on-line mode and in the off-line mode. The simulation only leads to a linear time delay. Therefore it follows that any mapping which can be computed by a Turing machine in constant time can be interpolated in the maximum norm by a recurrent network with semilinear activation. The number of neurons which is sufficient for such an approximation depends on the Turing machine which computes the mapping. A simulation result for the standard sigmoidal function also exists [66]. But
24
3. Approximation Ability
no approximation results can be derived from this simulation because it only works for off-line inputs and requires exponential computation time. Another approach addressing the computational capability of recurrent networks is [115]. There it is shown that nonuniform Boolean circuits can be simulated by recurrent networks with semilinear activation. Nonuniform Boolean circuits here mean an arbitrary family (Bn),,eN of Boolean circuits Bn, where Bn has n input nodes and computation nodes with a function from {AND, OR, NOT} from which one node is specified as the output. The computation time of a simulating network is polynomially correlated to the depth of the circuits. The number of neurons necessary for the simulation is fixed. This simulation demonstrates that recurrent networks as a computational model can compute every function in exponential time, because every function can be computed by appropriate nonuniform Boolean circuits. Furthermore, in [115] the other direction, the simulation of recurrent networks with nonuniform Boolean circuits is also considered. It is shown that any recurrent network with an activation function with bounded range that is Lipschitz continuous in a neighborhood of every point, e.g., the standard sigmoidal function, can be computed by nonuniform Boolean circuits with resources polynomially correlated to the computation time of the network. In particular, any function that cannot be computed by nonuniform circuits with polynomial resources cannot be approximated by a recurrent network in the maximum norm - a negative result concerning our third question at the beginning of this chapter. Another demonstration of the computational capability of recurrent networks is given in [114]. Another proof of the Turing universality can be found in [68]. Although all these simulations demonstrate the computational power of recurrent networks, they depend on the fact that the activations of the neurons are real numbers where an arbitrary amount of data can be stored and the computation can be performed with arbitrary precision. Otherwise, e.g., for simple perceptron networks, the internal stack would be limited. In order to take into account that in real computations the internal stack representation may be partially disturbed, recurrent networks are investigated in [84], where the computation is subject to some noise. If the noise is limited the approximation of finite automata is still possible with noisy networks but Turing machines can no longer be simulated. Any network affected by a piecewise equicontinuous noise process can only recognize regular languages. The situation becomes even worse if the noise is not limited, e.g., Gaussian. [85] deals with this scenario. It is shown that any language L that can be accepted by such a network has the property that the fact w E L only depends on the last k values in the word w for a fixed number k. This result reduces the power to deal with recursive data considerably. In fact, it shows that simple feedforward networks with an a p r i o r i restricted recurrence of the inputs have the same computational capabilities as recurrent networks if the computation is noisy.
3.2 Approximation in Probability
25
3.2 Approximation in Probability As already mentioned in the previous section, approximation results for some special functions which are written a priori in a recursive form exist. In contrast, we will consider the situation as it usually occurs in recursive learning tasks where symbolic data is to be classified: An arbitrary function f : ( Rm )i ~ Rn is to be learned if some examples of f are present. Consequently, an arbitrary function is to be approximated by a folding network. 3.2.1 I n t e r p o l a t i o n o f a F i n i t e Set o f D a t a First we show that every finite set of data can be approximated with accuracy e = 0, hence it can be interpolated. In symbolic learning tasks the complex terms are made up of symbols from a finite alphabet. Therefore, a finite number of terms from E~ with a finite alphabet ,U is to be interpolated in this case. We can assume that the elements of 5: are encoded by natural numbers. L e m m a 3.2.1. Assume S = { a l , . . . , a , } C 1~{0,1} is a finite alphabet, k E N, tl, . . . , tp E S~ is a finite set o f p terms, and f : LT,~ --+ R" is a function. There exists a folding network h o go : S~ --+ R n which interpolates the finite set of data, i.e., y(t~) = h o ~o(t~) holds for all ti. The network can be chosen as follows: The part g : 27 • R2k ~ R2 consists of 2(k - 1) multiplying hidden neurons and 2 outputs with linear activation function, the part h : R2 ~ R n is an M L P with one hidden layer with n 9 p neurons with squashing or locally Riemann integrable and nonpolynomial activation and linear output neurons. Proof. The rough idea is as follows: The trees ti are first encoded into R2 via g0 and then mapped to f ( t i ) via h. If g0 is injective on tl, . . . , tp, the possibility of interpolating with h follows from interpolation results concerning MLPs. Assume each of the numbers ai needs exactly d digits for a representation with maybe leading digits 0. A unique representation of a tree t is the string representation 0...0 St
if t is the empty tree .l_,
d digits O~Q~Ol a i s t l 9 8th d digits
if t is a i ( t l , . . . ,tk).
The scaled number O.st can be computed by a recursive function as follows: For two strings (0.sl, (0.1)length(s1)), (0.82, (0.1) length(s2)) the concatenation is
(0.8182, (0.1) length(*1"2)) : (0.81 + (0-1) length(sx) " 0-S2, (0-1) length(s1) " (0.1)length(m2)),
26
3. Approximation Ability
consequently (0.st,(0.1)length(st)-d) can be computed as (0,0) ift is the empty tree, and it can be computed by ( 0 . 0 . . . 0 1 + (0.1)2d 9ai + (0.1) 2d. O.stl
0.8t2 + (0.1)2dWlength(a'l)+length(st2 ) . 0.St s
+ (0.1)2d+length(', I ) .
"Jl- "'"
-4- (0.1)2d+length(stl)+...+length(stk-i (0.1)2d+length(st 1 )+...+length( Jt k )-d)
) . O.8tj,,
if t = a i ( t l , . . . ,tk). This mapping equals g(o,o), where g : ~ • R 2k --~ IR2 is defined by g(x, x l , Y l , . . . ,xk,yk) = ( Z l , Z 2 ) with zx
=
(0.1) a . (1 + (0.1)dx + (0.1)dxl + (0.1)2dylx2 -t9-. + (0-1)kdYx . - - y k - l X k ) ,
Z2
=
(0.1)a(k+l)Yl.-.Yk,
which can be computed by a network as stated in the lemma. g0 embeds the trees tl, . . . , tp injectively into R 2. The function which m a p s the finite set of images in R 2 to the values f(ti) can be completed to a continuous function and therefore be approximated arbitrarily well by an MLP with one hidden layer with activation functions as stated above [58, 59]. From [119] it follows t h a t even np hidden neurons are sufficient for an exact interpolation of the p images in R n . [] As a consequence, a network can be found which maps the empirical d a t a correctly in any concrete learning task. Explicit bounds on the number of neurons limit the search space for an interpolating architecture. If a concrete algorithm does not m a n a g e to find appropriate weights such t h a t a network with an architecture as described in L e m m a 3.2.1 produces a small error on the data, this is not due to the limited capacity of the architecture but to the weakness of the learning algorithm. In particular, situations to test the in principle capability of an algorithm of minimizing the empirical error can be derived from these bounds. However, the architecture in the recursive part is not a common one. But one can substitute it by a standard M L P with a number of neurons correlated to k. This is due to the fact t h a t every activation function we used in the previous construction can be a p p r o x i m a t e d in an appropriate way by a standard activation function. In the following we will use the fact t h a t the possibility of approximating an activation function enables us to a p p r o x i m a t e the entire folding network, too, in a formal notation:
L e m m a 3.2.2. Assume f : R m+~'" --r R n is a function which is the compo-
sition of continuous functions fl : R nl=rn+k'n _~ Rn2, f2 : Rn2 ~ I]~ns, . . . , ft : R na ~ R n, and every function fi can be uniformly approximated by f ~ (~ ~ O) on compact sets Ci C R '~' w/th f,(Ci) C Ci+x, ft(Ct) C C2x where
3.2 Approximation in Probability
x I .~-.. X2
fl
~
x:.<
f2 ~
fl ='''I
xl i .x:2.
',f~
If~
X22
,
,
=
f2
)
,'
27
u n foldi n g for t--2
-'---
I ....
ft I =
X2+l
9
9
~
Fig. 3.2. Unfolding of a recursive mapping for an input tree of height 2: The network is copied for each node in the tree C1 = C~ x (C~) k, such that the respective image of C~ under fi has a positive distance to the boundary of Ci+l or C~, respectively. Assume that T E N. Then the function ]y can be approximated arbitrarily well in the maximum I~
28
3. Approximation Ability
O ( k ) neurons with an activation f u n c t i o n a that is locally C 2 with a " ~ O. y depends on a. Proof. The mapping g in Lemma 3.2.1 is to be substituted by a mapping that can be computed by an MLP with activation function a. Note that 9 contains the terms Yl, Yl "Y2, . . - , Yl "... "Yk, which can be computed in O(lg k) layers with O(k) neurons in each layer, each unit computing the product of at most two predecessors. Adding k + 1 units in each layer which simply copy the input values x, Xl, . . . , xk of g, adding one layer with k + 1 multiplying units, and one layer with two units with identical activation function, g can be computed in an MLP with O(lg k) layers each containing O(k) units. Since x . y = ((x + y)2 _ x2 _ y2)/2 we can substitute the multiplying units by units with square activation functions. The order of the bounds remains the same. Since a is locally C 2 with a " ~ 0, points x0 and xl exist with
lim a(xo + e x ) - a ( x o ) = x lim a(xx + ex) + O ' ( X l - - e X ) ,-,0 e2 "(Zl)
--
and
2 a ( x l ) = x2
for all x. The convergence is uniform on compact intervals; in particular, we can choose 9 such that the recursive mapping which results if the square and identical activation functions in g are substituted by the above terms is still injective on the ti's. Note that the substituting terms consist of an afline combination of a, consequently an MLP with activation function a results: the coefficients in the affine combination are considered to be part of the weights of the preceding or following layer, respectively, of the corresponding unit. In the last layer, of course, no following layer exists in g; but due to the recurrence we can put the factors into the weights of the first hidden layer of g and h. It is necessary to change the initial context from (0, 0) to (a(xo), a ( x o ) ) . Formally, this method uses the identity (A o f l ) y = A o (]2)y, which holds every mappings A and f l of appropriate arity, vectors y = A(y'), and f2(x, Y l , . . . , Yk) = f l (x, A ( y l ) , . . . , A(yk)). We have constructed a mapping that is injective on the ti's and can be computed with an MLP with activation function a. Hence a feed-forward network h which maps the images of the t~ to the desired outputs as stated in the theorem exists. []
Consequently, even with a standard architecture the approximation of a finite set of data is possible. In the case of sequences, i.e., k = 1, the resources necessary for an interpolation can be further restricted. L e m m a 3.2.4. A s s u m e 27 C 1~{0, 1} is finite, f : 27* ~ R n is a function, and t l , . . . , tp G 27". T h e n a folding network ho~o exists with ho~o(ti ) = f ( t i ) for all i. g : 27 • R --r R is a network without hidden units and one output
3.2 Approximation in Probability
29
with identical activation function, h : R --+ R n is a network with one hidden layer with np hidden neurons with squashing or locally Riemann integrable and nonpolynomial activation function and linear outputs. The activation function in g can be substituted by any activation function a which is locally C 1 with a ~ ~ O. In this case the initial context is a(Xo), where Xo is some point with a'(xo) ~ O.
Proof. We assume that exactly d digits are necessary for a representation of ai E 57. The injective mapping .q0 : S* --+ IR,
[ail,ai~,... ,ai,] ~-+ 0.ai, ai,_x . . . a i l
is induced by g : 57 x R ~ R, g ( x l , x 2 ) = (0.1)dxl -t- (0.1)dx2. g can be computed by one neuron with linear activation function. The existence of an appropriate h in the feed-forward part of the recurrent network approximating f follows directly from the approximation and interpolation results in [58, 59, 119]. The linear activation function in g can be substituted by the term (a(xo + ex) - a ( x o ) ) / ( e a ' ( x o ) ) , which is approximately x for small e. The additional scaling and shifting is considered to be part of the weights of the neuron in the recursive part and h. The initial context is to be changed from 0 to a(xo). [] The labels often come from a real vector space instead of a finite alphabet if dealing with hybrid data instead of purely symbolic data. The above results can be adapted in this case. However, one additional layer is necessary in order to substitute the real valued inputs by appropriate symbolic values. T h e o r e m 3.2.1. A s s u m e f : (R m)~ -+ IRn is a function and t l , . . . , t p E (Rm)~. Then a folding network h o [?y exists such that h o ~y(ti) = f ( t i ) for all i. h : R ~ R" if k = 1 or h : R2 --r Rn if k = 2, respectively, is a network with linear output neurons and one hidden layer with np hidden neurons. The activation function of these neurons is either a squashing activation function or some function which is locally R i e m a n n integrable and nonpolynomial. I f k = 1 then g : R m • R --+ R is a network with one hidden layer with O(p 2) neurons with a squashing activation function which is locally C x with nonvanishing derivative and one output neuron with a locally C 1 activation function with nonvanishing derivative. I f k > 2 then g : R m • R 2k --r R 2 is a network with O(lgk) layers. The first hidden layer contains O(p 2) neurons with a squashing and locally C 1 activation function with nonvanishing derivative. The remaining hidden layers contain O(k) neurons with a locally C 2 activation function with nonvanishing second derivative. Proof. The trees h , - - - , tp are different either because of their structure or because of their labels. We want to substitute the real vectors in the single
30
3. Approximation Ability
labels by symbolic values such that the resulting trees are still mutually different. Consider trees t l , . 9 tp, of the same structure. At least one coefficient in some label exists in tl which is different from the coefficient in the same label in t2; the same is valid for tl and t3, tl and t4, . . . , tp,-- 1 and tp,. Hence we can substitute all coefficients arbitrarily for all but at most (pl + 1)2 different values such that the resulting trees are still mutually different. Denote the real values which distinguish the trees t l , . . . , t p by a~, j denoting the coefficient in which the value occurs. Note that the values add up to at most P = (p + 1) 2. The number of coefficients occurring in dimension i is denoted by Pi. Now a layer is added to the function g in Lemma 3.2.3 or 3.2.4, respectively, which substitutes the labels by a symbolic value in { 2 , . . . , p m + 1}, m being the dimension of the labels. More precisely, g is combined with gl where rn P~ g l ( X l , . . - , X m ) ----2 + ~'~ P ' - ' ~--~(j - 1)-lx,=~, (x,). i=l
j=l
gl computes a representation of the coefficients restricted to the values ai to base P. Combined with the function g in Lemma 3.2.1 or 3.2.4, respectively,
i.e., g2(2~1,..., Xrn, Yl, Z l , . ' . , Yk, Zk) ---- g(gl ( Z l , ' ' - ,
Zrn), yl, Z l , . . . , Ym, Zm)
or
g 2 ( x l , . . . , xm, y) = g(gl ( x l , . . . , xm), y), respectively, this yields a prefix representation of the trees such that the images of the trees ti are mutually different, gl can be computed by one hidden layer with the perceptron activation function and linear neurons copying Yl, ... or y, respectively, and one linear output neuron: for this purpose, lx~=~ is substituted by H(xi - ai) + H ( - x i + aj) - 1. The linear outputs of gl can be integrated into the first layer of g. Since we are dealing with a finite number of points, the biases in the perceptron neurons can be slightly changed such that no activation coincides with 0 on the inputs t~. Hence the perceptron neurons can be approximated arbitrarily well by a squashing function a because of H(x) = lim~_,~ a(x/e) for x # 0. The remaining activation functions can be approximated by difference quotients using activation functions as described in the theorem. The approximation can be performed such that the function is injective on the inputs ti. Hence a function h mapping the images of the ti to the desired outputs as described in the theorem exists. [-]
3.2.2 Approximation of a Mapping in Probability These results show that for standard architectures interpolation of a finite number of points is possible. This interpolation capability is a necessary condition for a learning algorithm which minimizes the empirical error to succeed.
3.2 Approximation in Probability
31
However, if we want to approximate the entire function f the question arises as to whether the entire function can be represented by such a network in some way. A universal approximation capability of folding networks is a necessary condition for a positive answer to this question. Of course, the simple possibility of representing the function in some way does not imply that any function with small empirical error is a good approximation of f as well, it is only a necessary condition; we will deal with sufficient conditions for a learning algorithm to produce the correct function in the next chapter. The capability of approximating a finite number of points implies the capability of approximating any mapping f : X~ --+ I~n in probability immediately if 2: is a finite alphabet. T h e o r e m 3.2.2. A s s u m e ,U C l~{0, 1} is a finite alphabet and P is a probability measure on E~. For any 5 > 0 and function f : ~ ~ ~n there exists a folding n e t w o r k h o ~ y : S ~ --~ R n such t h a t P ( x E E ~ [ f ( x ) # holy(X)) < 5. y depends on the activation function of the recursive part. h can be chosen as an M L P with input dimension 1 if k = 1 or input dimension 2 if k > 2 with linear outputs and a squashing or locally R i e m a n n integrable and nonpolynomial activation function in the hidden layers. g can be chosen as a network with only 1 neuron with an activation function which is locally C 1 with a' ~ 0 in the case k = 1. For k > 2 the function g can be chosen as a network with O(k) neurons with linear and multiplying units or as an M L P with O(lg k) layers, each containing O(k) units, and an activation function which is locally C 2 with a" ~ O. Proof. Since S~ can be decomposed into a countable union of subsets Si, where Si contains only trees of one fixed structure, we can write 1 = P(57~) = ~-]~iP ( S i ) . Consequently, we can find a finite number of tree structures such that the probability of the complement is smaller than 5. Any folding network which interpolates only the finite number of trees of these special structures approximates f in probability with accuracy 0 and confidence 5. A folding network interpolating a finite number of trees with an architecture as stated in the theorem exists because of the approximation lemmata we have already proven. []
We have shown that folding networks have the capacity to handle mappings that take symbolic terms as inputs. However, recurrent networks are often used for time series prediction where the single terms in the time series are real data, e.g., market prices or precipitation. Even when dealing with trees it is reasonable that some of the data is real valued, especially when symbolic as well as sub-symbolic data are considered, e.g., arithmetical terms which contain variables and are only partially evaluated to real numbers, or image data which may contain the relative position of the picture objects as well as the mean grey value of the objects. Consequently, it is an interesting question whether mappings with input trees with labels in a real vector space can be approximated as well.
32
3. Approximation Ability
T h e o r e m 3.2.3. Assume P is a probability measure on (Rm)~, f : (Rm)~ --+ R n is a measurable function, and J and e are positive values. Then a folding network ho[ly : (IRm)~ ~ Rn exists such that P ( x E (Rm)'kllf(x)-ho9y(X)l > e) < 6. h and y can be chosen as in Theorem 3.2.2. Compared to Theorem 3.2.2, g contains one additional layer with a squashin 9 activation function which is locally C 1 with a' ~ O. The number of neurons in this layer depends on the function f . Proof. It follows from [59] that any measurable function fl{trees of a fixed structure with i nodes} can be approximated with accuracy e/2 and confidence (6. (0.5)i+1)/(2.A(i)) in the probability induced by P, where A(i) is the finite number of different tree structures with exactly i nodes. Consequently, the entire mapping f can be approximated with accuracy e/2 and confidence 6/2 by a continuous mapping. Therefore we can assume w.l.o.g, that f itself is continuous. We will show that f can be approximated by the composition of a function fl : R rn --~ 5: which scans each real valued label into a finite alphabet 5: C N and therefore induces an encoding of the entire tree to a tree in S~ and a function f2 : 5:~ -+ Rn which can be approximated using Theorem 3.2.2. Since we are dealing with an infinite number of trees the situation needs additional argumentation compared to Theorem 3.2.1. A height T exists such that trees higher than T have a probability smaller than 6/4. A positive value B exists such that trees which contain at least one label outside ] - B, B[ n have a probability smaller than 6/4. Each restriction f l S of f to a fixed structure S of trees of height at most T and labels in [ - B , B] n is equicontinuous. Consequently, we can find a constant es for any of these finite number of structures such that I(flS)(tl) - (fIS)(t2)l < e/2 for all trees ty and t2 of structure S with ]tl -t21 < es. Take e0 = mins{es}. Decompose ] - B, B[ into disjoint intervals
/1 = ] - B, bl[, 12 = ] b ~ , ~ [ , . . . , I q
= ]bq_~, B[
of diameter at most e0/mv/-m-2Y such that P ( t [ t is a tree with a label with at least one coefficient bi) < J/4. Note that for two trees of the same structure, where for any label the coefficients of both trees are contained in the same interval, the distance is at most e0, i.e., the distance of the images under f is at most e/2. The mapping f l : R'n -~ 5: = { 2 , . . . , q m + 1}, rn
q
( x l , . . . , x m ) ~ 2 + Z q '-1 Z ( j - 1). ll#(Xi) i=1
j=l
encodes the information about which intervals the coefficients of x belong to. Here lli is the characteristic function of the interval Ij. f l gives rise to a mapping (Rm)~: -~ Z~, where each label in a tree is mapped to 5: via f l . Therefore we can define f2 : 2:~ --~ R", which maps a tree t E 5:~ to f(t~),
3.2 Approximation in Probability
33
where t ~ E (Rm)~ is a fixed tree of the same structure as t with labels in the corresponding intervals that are encoded in t via f l . f2 can be interpolated on trees up to height T with a mapping h o gl y : ~ -~ R n, as described in Lemma 3.2.1. Because of the construction, the mapping h o l y : (Rm)~ --~ I~n, where g(x, y l , . . . ,yk) = g l ( f l ( x ) , y l , . . . ,yk) for x E I~m, yi E R 2 differs at most e/2 from f for a set of probability at least 1 -3`5/4. The characteristic function 1i, in f l can be implemented by two neurons with perceptron activation function because l h ( x ) = H(x - bi-1) + H(bi - x) - 1, where bo = - B and bq = B. Consequently, we can approximate the identity activation in g via the formula x = lim~,-,0(a(x0 + e x l ) - a ( x o ) ) / ( e l a ' ( x o ) ) for a locally C 1 activation function a and the perceptron activation in g via the formula H(x) = lim~ 1-40 a(X/el ) for a squashing activation function a. The latter convergence is uniform, except for values near 0. The feed-forward function h approximates a continuous function in the maximum norm such that we obtain a tolerance for the relevant inputs of h such that input changes within this tolerance change the output by at most e/2. Therefore, the approximation process in g results in a mapping which differs at most e on trees of probability at least 1 - `5 from the mapping f . Obviously, an analogous argumentation based on Lemma 3.2.4 leads to the smaller encoding dimension 1 if k = 1. [] We can conclude that general mappings on trees with real labels can be approximated. Unfortunately, the results lead to bounds on the number of hidden layers, but not on the number of neurons required in the recursive or feed-forward part in the general situation. The number of neurons depends somehow on the smoothness of the function to be approximated. This is due to the fact that the number of representative input values for f , i.e., intervals we have to consider in a discretization, depends on the roughness of f . In fact, by using this general approximation result the bound for the number of hidden layers can be further improved. Together with the universal approximation we have constructed an explicit recursive form with continuous transformation function for any measurable mapping on recursive data. The approximation results for feed-forward networks can be used to approximate the transformation function, and the result of [120] follows in fact for any measurable function with trees as inputs which is not written a priori in a recursive form. C o r o l l a r y 3.2.1. Assume P is a probability measure on (Rm)~. For any measurable function f : (IRm ) i -'~ Rn and positive values e and `5 there exists a folding n e t w o r k h o O y : (IR'n)~ ~ R n with P ( x ] If(x) - hoOy(z)l > e) < ,5. h can be chosen as a linear mapping, g can be chosen as a neural network without hidden layer and a squashing or locally Riemann integrable and nonpolynomial activation function.
34
3. Approximation Ability
Proof. As already shown, f can be approximated by a folding network fl o f2y with continuous functions fl and f~ computed by feed-forward networks with sigmoidal activation function, for example. For an approximation in probability it is sufficientto approximate fl o f2y only on trees up to a fixed height with labels in a compact set. Adding n neurons, which compute the output values of fl, to the outputs of f2 in the encoding layer if necessary, we can assume that fl is linear, f2 can be approximated arbitrarily well by a feed-forward network with one hidden layer with squashing or locally Riemann integrable and nonpolynomial activation function on any compact set in the maximum norm [58, 59]. In particular, f2 can be approximated by some g of this form such that the resulting recursive function ~y differs at most e on the trees up to a certain height with labels in a compact set. We can consider the linear outputs of this approximation to be part of the weights in the first layer or the function fl, respectively, which leads to a change of the initial context y. With the choice h = f l , a network of the desired structure results. [] This result minimizes the number of layers necessary for an approximation and is valid for any squashing function even if it is not differentiable, e.g., the perceptron function. But the encoding dimension increases if the function that is to be approximated becomes more complicated. A limitation of the encoding dimension is possible if we allow hidden layers in the recursive and feed-forward part. C o r o l l a r y 3.2.2. For any measurable ~nction f : (Rm)~ -4 R n a folding network h o l y exists which approximates f in probability. The encoding dimen-
sion can be chosen as 2. h and g can be chosen as multilayer networks with one hidden layer, locally Riemann integrable and nonpolynomial or squashing activation functions in the hidden layer of g and h, linear outputs in h and a locally homeomorphic output activation function in g (i.e., there exists a nonempty open set U such that alU is continuous, alU : U --~ a(U) is invertible, and the inversion is continuous, too). Proof. As before, f is approximated with fl o ]2y. A shift and scaling is introduced in the output layer of f2 such that the relevant range of ]2y which was a neighborhood of (0, 0) by construction - now coincides with the range a(U), where the output activation function a is homeomorphic. As before, fl and ((a[a(U)) -1, (a[a(U)) -1) o f2 can be approximated in the maximum norm with feed-forward networks h and ~ such that h and g = (a, a) o ~ fulfill the conditions as stated in the corollary. [] Here again, we can choose the encoding dimension as 1 if k = 1. Furthermore, we can limit the weights in the approximating networks in Corollaries 3.2.1 and 3.2.2 by an arbitrary constant B > 0 if the activation functions in the networks are locally Riemann integrable, because in this case the weights in the corresponding feed-forward networks can be restricted [58]. The possi-
3.2 Approximation in Probability
35
bility of restricting the output biases in the feed-forward and recursive part depends on the respective activation function. For standard activation functions like sgd a restriction is possible. 3.2.3 Interpolation
with ~ :
H
Note that although Corollary 3.2.1 also holds for the perceptron activation function, no bounds on the encoding dimension can be derived. It would be nice to obtain an explicit bound for the number of neurons in the perceptron case, too, at least if only a finite number of examples has to be interpolated. Unlike in the real valued case, the number of neurons in the encoding part necessarily increases with the number of patterns since the information that can be stored in a finite set of binary valued neurons is limited. In fact, a brute force method, which simply encodes the trees directly with an appropriate number of neurons, leads to a bound which requires the same number of neurons as nodes in the trees in the encoding part. T h e o r e m 3.2.4. Assume E is a finite alphabet, tl, . . . , tp E E L are input trees, and f : ~ -~ •n is a function. There exists a folding network h o ~y : 2Y~ --r ]Rn which interpolates f on these examples ti. h can be chosen as an M L P with one hidden layer with np hidden neurons with squashing activation and linear outputs, g can be chosen as an M L P with squashing activation, one hidden layer and one output layer, each with a number of neurons that is exponential in the maximum height of the input trees and in k if k > 2 and with a number of neurons that is linear in the length of the sequences for k = 1. Proof. Assume that T is the maximum height of the trees tl, . . . , tp. Assume 2Y C N has s elements. Trees up to height T can be encoded with d = kT(4+s) binary numbers in the following way: The labels are encoded in s digits in a unary notation ai ~-~ 0 . . . 010... 0, the empty tree is encoded as 01110, and any other tree is encoded as a i ( t l , . . . , tk) ~ 0110code(ai) code(t1) ... code(tk).
In a network implementation we additionally store the length in a unary way in d digits: 1 ... 1 0 . . . 0. Starting with the empty tree length
(011100...0,111110...0) the encoding is induced by g : 1~1• {0, 1} 2kd ~ {0, 1} ~d, (al, y l , i x , . . . , yk, i k) ~-> ( O , l , l , O , l , , = a l , . . . , l , ~ = a . , ~ , 1 , . . . , y k , code of 4 + s + ll + . . . + lk), where I i is the decimal number corresponding to the unary representation Ii, and yi are the first l i coefficients of yi. i.e., the j t h neuron in the output of g
36
3. Approximation Ability
computes for j ' = j - 4 - s > 0 the formula yJ, V (yl2/Xl I = j ' - 1)V (y~ A l
j'-2) V...V(y2AI 1 =j'-d)
V(y~Al 1+12 = j ' - l )
I :
V(y3Al 1 + I ~ =
j ' - 2 ) V . . . V ( y ~ A I 1 + . . . + l k-1 = j ' - d ) . The test 1 1 + . . . + 1 i = j can be performed - with brute force again - just testing every possible tuple (l I = 1 A l 2 = 1 A . . . A I i = j - ( i - 1 ) ) V . . . , where l i = j means to scan the pattern 10 at the places j, j + 1 in the unary representation 1i of l i. The same brute force method enables us to compute the new length 11 + . . . + l ~ with perceptron neurons, g can be computed with a network with perceptron neurons where the number is exponential in the maximum height T. Of course, the perceptron functions can be substituted by a squashing activation function a using the equation H(x) = lim~o__,oa(x/eo) for x ~ 0. The encoding can be simplified in the case k = 1: A sequence of the form [all, a i 2 , . . . , ai,] is encoded as c o d e ( a i . ) . . , code(ail). This is induced by a mapping which scans the code of the label into the first s units and shifts the already encoded part exactly s places. A network implementation only requires a number of neurons which is linear in T. The existence of an appropriate h follows because of [58, 59, 119] [] This direct implementation gives bounds on the number of neurons in the perceptron case, too, although they increase exponentially in T, as expected. However, this enables us to construct situations where the possibility of a training algorithm of minimizing the empirical error can be tested. At least for an architecture as described above a training algorithm should be able to produce small empirical error. If the error is large the training gets stuck in a local optimum.
3.3 Approximation
in the
Maximum
Norm
All approximation results in the previous section only address the interpolation of a finite number of points or the approximation of general mappings in probability. Hence the height of the trees that are to be considered is limited. Consequently the recurrence for which approximation takes place is restricted. In term classification tasks this is appropriate in many situations: In automated theorem proving a term with more than 500 symbols will rarely occur, for example.
3.3.1 Negative Examples However, when dealing with time series we may be interested in approximating the long-term behavior of a series. When assessing entire proof sequences with neural networks we may be confronted with several thousand steps. Therefore it is interesting to ask whether mappings can be approximated for inputs of arbitrary length or height, i.e., in the maximum norm as well.
3.3 Approximation in the Maximum Norm
37
In general, this is not possible in very simple situations. The following example is due to [31].
Example 3.3.1. Assume a finite set of continuous activation functions is specified. Assume e > 0. Even for ,U = { 1} a function f : 5:* ~ II~ exists which cannot be approximated with accuracy e in the maximum norm by a recurrent network with activation functions from the above set regardless of the network's architecture. Proof. ] is constructed by a standard diagonalization argument; in detail, the construction goes as follows: For any number i E N there exists only a finite number of recurrent architectures with at most i neurons and activation functions from the above set. For any fixed architecture A and an input x = [ 1 , . . . , 1] of length i the set s~ = {y [ y is the output of x via a network of architecture A with weights absolutely bounded by i} is a compact set. Therefore we can find for any i E N a number yi such that the distance of Yi to UA ha~ at most i n e u r o n s SA is at least e. The mapping f : 1 , ~ . , ~ ~ Yi i times
cannot be approximated by any recurrent network in the maximum norm with accuracy e and activation function from the specified set. [] Note that the function f constructed above is computable if an appropriate approximation of the activation functions can be computed. This holds for all standard activations like the sigmoidal function or polynomials. However, the function we have constructed does not seem likely to occur in practical applications. We can prevent the above argumentation from occurring by considering only functions with bounded range. But even here, some functions cannot be approximated, although we have to enlarge the alphabet ,U to a binary alphabet in the case k = 1.
Example 3.3.2. Assume an activation function is specified which is for any point Lipschitz continuous in some neighborhood of the point and which has a bounded range. Assume e > 0. Even for ~7 = {0, 1} there exists a function f : ,~* -4 ,U that cannot be approximated by a recurrent network with the above activation function. Proof. As already mentioned, it is shown in [115] that any computation with a recurrent network with activation function as specified above can be simulated by a nonuniform Boolean circuit family with polynomially growing resources. An approximation of a mapping between input sequences of a binary alphabet and binary outputs can be seen as a very simple computation in linear time and therefore can be simulated by circuits as well. It remains to show that some function f : ,U* ~ Z' exists, which cannot be computed by such a circuit family. It can be shown by a standard counting argument: Any Boolean circuit with n inputs and b gates can be obtained as one of the following circuits: Enumerate the b gates. Choose for any gate one of the 3 activations in {AND, OR, NOT} and up to b - 1 § n predecessors
38
3. Approximation Ability
of the gate. Choose one of the gates as the output. This leads to at most
p(b, n) = 3 b. (b + n) b(b-l+n) 9b different circuits. On the contrary, there exist 22" functions ,Un --+ ~:. We can find numbers nl, n2, ... that tend for any i a function f~ : ~7m --~ ~7 such that p(n~,ni) < 22"~ and be implemented by a circuit with n~ gates. Any 9r with f i r '~ = fi implemented by a circuit with polynomial resources and therefore implemented by a recurrent network as specified above either.
to oo and fi cannot cannot be cannot be []
In fact, it is not necessary to use the simulation via Boolean circuits. We can argue with networks directly in the same way because the number of mappings a fixed architecture can implement is limited in a polynomial way for common activation functions.
Example 3.3.3. Assume ,U = {0, 1}. Assume an activation function a is specified such t h a t a is locally C 1 with a ' ~ 0. Assume the number of input sequences in 2:* of length at most T on which any mapping to {0, 1} can be approximated in the m a x i m u m norm with accuracy < 0.5 by an a p p r o p r i a t e choice of the weights in a recurrent architecture with n neurons is limited by a function p(T, n) such t h a t p is polynomial in T. Then there exists a function f : ,U* ~ ,U which cannot be approximated in the m a x i m u m norm by a recurrent network with activation function a. Proof. Note that for any finite set of sequences and recurrent networks with at most N neurons we can find a recurrent architecture with N 2 neurons which approximates each single network by an appropriate choice of the weights on the input sequences. This is due to the fact t h a t feed-forward networks with N nodes can be simulated with a standard MLP architecture with N 2 nodes such that the input and output units can be taken as identical for each simulation. Furthermore, the identity can be approximated by (a(Xo + ex) Now we consider input sequences in (0, 1}* and construct outputs such that the number of neurons of a minimal network which approximates the values increases in each step. Assume that we have constructed the m a p p i n g up to sequences of length To, and let No be the minimum number of neurons such that a network exists which maps the sequences correctly. There exist 2 T different sequences in {0, 1) T. Any network with No neurons can be simulated in a uniform manner by a network with No2 neurons which can approximate any m a p p i n g on at most p(T, N 2) input sequences of length T. Take T such t h a t T > To and 2 T > p(T, N2). Then there exists at least one mapping on sequences of length T that cannot be approximated by a network with No neurons. [] The function f we have constructed is computable if it can be decided whether a given finite set of binary sequences and desired outputs can be m a p p e d correctly by an architecture with activation a and an appropriate choice of the weights. This is valid for piecewise polynomial activations be-
3.3 Approximation in the Maximum Norm
39
cause the arithmetic of the real numbers is decidable and it is shown for the sigmoidal function as well in [86] modulo the so-called Schanuel conjecture in number theory. The m a x i m u m number of points which can be mapped to arbitrary values approximating {0, 1} by an architecture is limited by the so-called pseudodimension of the architecture. This dimension measures the capacity of the architecture and is limited by a polynomial in the input length for piecewise polynomial functions or the sigmoidal function, respectively [64, 86]. In fact, since it plays a key role when considering the generalization capability of a network we will have a closer look at it in the next chapter. Of course, the above examples transfer directly to the case k > 2 because the situation k -- 1 results when the inputs to a folding network are restricted to trees where any node has at most one nonempty successor. For k ~ 2 the situation in the last two examples can be further modified. C o r o l l a r y 3.3.1. Assume the activation is as in one of the previous two examples. Assume k > 2. Then there exists a mapping f : {1}i ~ {0, 1} which cannot be approximated by a folding network in the maximum norm. Proof. Each binary input sequence [ a l , . . . , an] can be substituted by a tree with unary labels of the form l(~n, 1(fin-l, 1 ( . . . , l(fil, 1(_1_,_1_))...))), for k = I(.L,J_) a i = l , [] 2 where fii = .1_ a i = O.
3.3.2 Approximation on Unary Sequences Consequently even very restricted and computable mappings exist that cannot be approximated in the maximum norm by a folding architecture. On the contrary, if we consider recurrent networks as a computational model, then the functions which can be computed by a recurrent network contain the functions which can be computed by a Turing machine as a proper subclass [114, 115]. Another demonstration of the computational capability of recurrent networks is the following result:
Theorem 3.3.1. Assume a is a continuous squashing function. Then any function f : {1}* -+ [0, 1] can be approximated in the maximum norm by a recurrent network without hidden layer, activation function a in the recursive part, and one linear neuron in the feed-forward part. The number of neurons which is sufficient for an approximation can be limited with respect to the accuracy e. Proof. Choose n E N, n even with 1/n <_ e. Define xi = 1/(2n) + (i - 1)/n for i = 1 , . . . , n , / , = [x, - 1/(4n), xi + 1/(4n)]. Since a is a squashing function we can choose K > 0 such that a(Kx)
> 1 - 1 / ( 8 n 2) < 1/(8n 2)
ifx>l/(4n), if x < - 1 / ( 4 n ) .
40
3. Approximation Ability
I
I
i
i
I
I
I
I
I
I
'
I
I
I
I
I
I1
12 13 14
Fig. 3.3. Function 9 with the property g(lj) D U~=lli for j = 1, , 4. Such a function leads to a universal approximation property when applied recursively to unary input sequences The function g(x) = ~
a ( ( - 1 ) i K 9(x - xi)) - (n12 - 1)
i=1
has the property g(Ij) D [.Jinx Ii for any j = 1 , . . . , n because of the continuity of g and g(xj - 1/(4n))
<
~-'~-i<j,ieven1 + Z i < j , i o d d 1/(sn2) -Jr 1/(8n2) + ~'~i>#,ieven 1/(8n2) + ~i>#,iodd 1 -- (n/2 -- 1)
_< (n + 1)/(8n ~) _ l/(4n) i f j is even, g(xj - 1/(4n)) > 1 - 1/(4n) i f j is odd, g(xj + 1/(4n)) > 1 - 1/(4n) if j is even, and g(xj + 1/(4n)) < 1/(4n) if j is odd (see Fig. 3.3.2). g is trivially expanded to inputs from R z by just ignoring the first component of the input. The function gv : R* --r R can be implemented by a recurrent network with n neurons with activation function a in the recursive function part and a linear output in the feed-forward part. The linearity in g as defined above is considered to be part of the weights in the networks g and h, respectively. Furthermore, gv can approximate any function f : {1}* -r [0, 1] in the maximum norm with accuracy 9 by an appropriate choice of y. It is sufficient to choose an initial context in NiEN(gi)-l(Ikl) if f ( ~ ) E [X~, -- 1/(2n),xk, + 1/(2n)]. Note that such a value exists i times
because Ik~ is compact, g is continuous, and for any finite number i0 the intersection Ni_
3.3 Approximation in the Maximum Norm
41
in the desired interval Iio-1, ho-2, -.-- Since in a network implementation the recursive part consists of only one layer with squashing activation, this value y has to be changed to an initial context ( y + n / 2 - 1 , 0 , . . . , 0) according to the function g. [7 The construction can be expanded to show that a sigmoidal network can compute any mapping on off-line inputs in exponential time. An equivalent result is already known in the case of the semilinear activation function [115], but for the standard sigmoidal function only the Turing capability with an exponential increase of time is established in [66]. Since we deal with the sigmoidal function the outputs are only required to approximate the binary values {0, 1}. C o r o l l a r y 3.3.2. Assume the encoding function code : {0, 1} + -} R is computed by c o d e ( [ x l , . . . , XT]) = (3-)-']~Tffilz,2'--1 + 1)/2. Then a recurrent architecture exists where only the initial context is not specified with the standard sigmoidal activation function, a fixed number of neurons, and the property: For any possibly partial function f : {0, 1} + ~ {0, 1} ex/sts an initial context y such that for any input sequence x the recurrent network with initial context (code(z), y) computes f (x ) on off-line inputs. Proof. It follows from Theorem 3.3.1 that a recurrent network hi o(Ol)y exists with a linear unit hi, a sigmoidal network gl, and the property hi o ( ~ l ) y ( [ T , . . . , T))
nplaces
< 0.1 9 ]0.45, 0.55[ ~> 0.9
if J(x) = O, if f ( x ) is not defined, if f ( x ) = 1,
where n = ~ i xi 2i and x = [ X l , . . . , XT]. Only the initial context y depends on f . The other weights can be chosen as fixed values. Additionally, there exists a network h2 o (gZ)eode(z) with a sigmoidal network g2 and a linear unit h2 such that < 0.15
h2 o (~2)r
if n < ~ i xi 2i,
9 10.2, 0.4[ if n = ~ i xiT, n places
~> 0 . 7
if n > ~ i xi2 ~
with x = [xl,. 99 XT], as we will show later. The simultaneous computation of hi o (01)y and h2 o (g2)eode(z) with outputs Ol and o2, respectively, is combined with the computation (Ol > 0.9) A o2 9 ]0.2, 0.4[ for the output line and ((Ol > 0.9) V (Ol < 0.1)) Ao2 9
42
3. Approximation Ability
for the line indicating whether output is present. This latter computation can be approximated arbitrarilywell in the feed-forward part of a sigmoidal network because the identity, the perceptron activation, and Boolean connections can be approximated. Therefore, the entire construction leads to a sigmoidal architecture that computes f on off-lineinputs. Only the part y of the initialcontext depends on f. It remains to show that a mapping h2 o g2 with the demanded properties exists. The recursive mapping induced by x ~-+ 3x computes on the initial context 3- ~~- =~2~-1 the value 3 - ~-~'~,=~2~-1+n for an input of length n. This mapping is combined with the function tanh which approximates the identity for small values. In fact, the function induced by g(x) = tanh(3x) computes on the initial context 3 - ~ =,2'-1 and inputs of length n a value y with y
E
(1 - 32(- ~"]~z'2'-l+n))3- Z z ' 2 ' - l + " , (1 + 32(- ~ ='2'-1+"))3- ~ =,2'-1+n [
otherwise.
Since [1 - tanh(x)/x[ < x2/3 for x ~ 0 this can be seen by induction: If ~3-k (Tn) e ](1 - 32(-'+n))3 - ` + " , (1 + 32(-'+n))3-k+n[ for k > n + 1, then
g3-' (T"+I) =
tanh(3.~3-b(Tn))
9
]3.~3-,,(Tn) (1 - 3.~3-,,(T") 2) ,3~s-'-(T n) (1 + 3~3-,,(Tn)2) [
c
]3 -'+"+1 (1 - 32(-'+-)) (1 - 3 (3-'+-)2(1 - 32(-'+"))2), 3 -`+"+1 (I 4- 32(-k+")) (1 4- 3 (3-k+n+')2(1 4- 32(-'+n))2) [
c 33-,+-+1 (1- 32(-'+-+')),3 -k+-+' (1 + 32(-k+-+') [. For k < n + 1 we obtain ~a-h(T "+1) > tanh(3(3-1(1 - 3-2))) = tanh(8/9) > 0.7. Here T n denotes the sequence of length n with elements T. The function h2 o g2 with the desired properties can be obtained by exchanging tanh(x) with 2. sgd(2x) - 1. [] Consequently even very small recurrent networks with sigmoidal activation function can compute any function in exponential time. The proof can be transferred to more general squashing functions. T h e o r e m 3.3.2. Assume a is a continuous squashing function which is three times continuously differentiable in the neighborhood of at least one point xo such that a'(Xo) ~ O, a'(xo) = O. Then a recurrent architecture h o ~_ with unspecified initial context exists such that for every possibly partial function
f : {0,1}^+ → {0,1} some y can be found such that h ∘ g̃_y computes f on off-line inputs. The encoding function is code([x_1, ..., x_T]) = (3^{-∑_{i=1}^T x_i 2^{i-1}} + 1)/2.
Proof. Because of Theorem 3.3.1 a recurrent network h_1 ∘ (g̃_1)_y exists which outputs a value smaller than 0.1 for inputs of length n if f(x) is 0, x being the sequence corresponding to the number n, which outputs a value of at least 0.9 if f(x) is 1, and a value in the interval ]0.45, 0.55[ if f(x) is not defined. Additionally, a network which outputs a value of at most 0.15, or in ]0.2, 0.4[, or at least 0.7 for an input of length shorter than n, equal to n, or longer than n, respectively, exists, as we will show later. Hence the combination of the two networks with the function (o_1 > 0.9) ∧ o_2 ∈ ]0.2, 0.4[ for the output line and ((o_1 > 0.9) ∨ (o_1 < 0.1)) ∧ o_2 ∈ ]0.2, 0.4[ for the line indicating whether output is present, o_1 and o_2 denoting the outputs of the two recurrent networks, yields the desired result. Since the identity and perceptron activation can be approximated, a network with activation function σ results.
In order to construct g_2, choose ε such that |ε² σ'''(x)/σ'(x_0)| < 2 for all |x - x_0| < ε. The function

σ̃(x) = (σ(εx + x_0) - σ(x_0)) / (ε σ'(x_0))

fulfills the property |1 - σ̃(x)/x| < x²/3 for every x ≠ 0, |x| < 1, because

σ̃(x) = x + σ'''(ξ) ε² x³ / (6 σ'(x_0))

for |x| < 1 and some point ξ between x_0 and x_0 + εx, and hence

|1 - σ̃(x)/x| = |σ'''(ξ) ε² x²| / (6 |σ'(x_0)|) < x²/3.

Hence the function induced by g(x) = σ̃(3x) computes on the initial context 3^{-∑ x_i 2^{i-1}} and inputs of length n a value y with

y > 0.7 if n ≥ ∑_i x_i 2^{i-1} + 1, and
y ∈ ](1 - 3^{2(-∑ x_i 2^{i-1} + n)}) 3^{-∑ x_i 2^{i-1} + n}, (1 + 3^{2(-∑ x_i 2^{i-1} + n)}) 3^{-∑ x_i 2^{i-1} + n}[ otherwise,

as can be seen by induction. The affine mapping in g can be integrated in the weights of the recursive mapping. Hence a function g_2 with the desired properties results. □

However, the proof mainly relies on the fact that the amount of data that can be stored in the internal states is unlimited and the computation is performed with perfect accuracy. If the internal stack is limited, e.g., because
the activation function has a finite range like the perceptron activation function, it can be seen immediately that the computational power is restricted to finite automata. Furthermore, the number of neurons necessarily increases if the function to be computed becomes more complex.
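The counting mechanism behind these constructions is easy to check numerically. The following sketch (not taken from the text; the variable names are ours) iterates y ↦ tanh(3y) on the initial context 3^{-k} and shows that the iterate stays close to 3^{n-k} for n < k and then saturates above 0.7, exactly as used in the proof of Corollary 3.3.2.

```python
import math

def iterate_counter(k, steps):
    """Iterate y -> tanh(3*y), starting from the initial context 3**(-k)."""
    y = 3.0 ** (-k)
    for n in range(steps + 1):
        if n < k:
            # While n < k the iterate stays close to 3**(n - k), with relative
            # error bounded by 3**(2 * (n - k)) as in the induction of the proof.
            print(f"n={n:2d}  y={y:.6f}  3^(n-k)={3.0 ** (n - k):.6f}")
        else:
            print(f"n={n:2d}  y={y:.6f}  saturated above 0.7: {y > 0.7}")
        y = math.tanh(3.0 * y)

iterate_counter(k=6, steps=8)
```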
3.3.3 Noisy Computation

Even in the case of a real valued smooth activation function the accuracy of the computation may be limited because the input data is noisy or the operations on the real numbers are not performed with perfect reliability. To address this situation, computations that are subject to some limited smooth noise are considered in [84]. It turns out that the capability of recurrent networks also reduces to finite automata for sigmoidal activation functions. The argumentation transfers directly to folding networks, indicating that in practical applications they will only approximate mappings produced by Mealy tree automata correctly when dealing with very large trees. In order to obtain explicit bounds on the number of states of a tree automaton which is equivalent to a noisy folding network we briefly carry out a modification of [84] for folding networks.
We restrict the argumentation to the following situation: We consider a folding network h ∘ g̃_y, where h and g are measurable, which computes a {0,1}-valued function on (R^m)^{*k}. We can drop the feed-forward part h by assuming that a tree is mapped to 1 if and only if the image under g̃_y lies in a distinguished set of final states F, the states which are mapped to 1 using h. Now the computation g̃_y is affected with noise, i.e., it is induced by a network function g : R^{m+kn} → R^n composed with a noise process Z : R^n × B(R^n) → [0,1] such that Z(q, A) describes the probability that a state q is changed into some state contained in A by the noise. B(R^n) are the Borel measurable sets in R^n. Assume that the internal states are contained in a compact set Ω ⊂ R^n, e.g., [0,1]^n; assume that Z can be computed by Z(q, A) = ∫_{q'∈A} z(q, q') dμ for some measure μ on Ω and a measurable function z : Ω² → [0, ∞[. The probability that one computation step in g̃_y on the initial states (q_1, ..., q_k) and input a results in a state q' can be described by

π_a(q_1, ..., q_k, q') = z(g(a, q_1, ..., q_k), q').

The probability that a computation on an input tree t with root a and subtrees t_1, ..., t_k transfers initial states q_1, ..., q_k, where the dimensions depend on the structure of the t_i, to a state q' can be recursively computed by

π_t(q_1, ..., q_k, q') = ∫_{q'_1, ..., q'_k ∈ Ω} π_{t_1}(q_1, q'_1) · ... · π_{t_k}(q_k, q'_k) · π_a(q'_1, ..., q'_k, q') dμ^k.
The probability that an entire computation on a tree t starting with the initial context y leads to the value 1 is described by the term ∫_{q∈F} π_t(y, ..., y, q) dμ. A tree is mapped to 1 by a noisy computation if this probability is at least 0.5 + ρ for a fixed positive value ρ. It is mapped to 0 if this probability is at most 0.5 - ρ. We refer to a noisy computation by g̃_y^noise. A final assumption is that the noise process is piecewise equicontinuous, i.e., one can find a division of Ω into parts Ω_1, ..., Ω_l such that

∀ε > 0 ∃δ > 0 ∀p ∈ Ω ∀q', q'' ∈ Ω_j (|q' - q''| < δ ⟹ |z(p, q') - z(p, q'')| < ε).

This situation is fulfilled, for example, if the state q is affected with some clipped Gaussian noise.
Under these assumptions, a folding network behaves like a tree automaton. Here a tree automaton is a tuple (I, S, δ, s, F), where I is a set, the input alphabet, S is a finite set with S ∩ I = ∅, the set of internal states, δ : I × S^k → S is the transition function, s ∈ S is the initial state, and F ⊂ S is a nonempty set of final states. A tree automaton computes the mapping f : I^{*k} → {0,1}, f(t) = 1 ⟺ δ̃_s(t) ∈ F.

Theorem 3.3.3. Assume that under the above conditions a noisy computation g̃_y^noise computes a mapping (R^m)^{*k} ⊇ B → {0,1}. Then the same mapping
can be computed by a tree automaton (I, S, δ, s, F).

Proof. Define the equivalence relation t_1 ≡ t_2 for trees t_1 and t_2 if and only if for all trees t'_1 and t'_2 the following holds: Assume t'_1 is a tree with subtree t_1, and t'_2 is the same tree except for t_1, which is substituted by t_2. Then g̃_y^noise(t'_1) = 1 ⟺ g̃_y^noise(t'_2) = 1. Assume that only a finite number of equivalence classes [t] exists for this relation. Then g̃_y^noise can be computed by the tree automaton with states {[t] | t is a tree}, the initial state [⊥], the final set {[t] | g̃_y^noise(t) = 1}, and the transition function which maps the states [t_1], ..., [t_k] under an input a to [a(t_1, ..., t_k)]. Therefore it remains to show that only a finite number of equivalence classes exists.
Assume t_1 and t_2 are trees such that

∫_{q∈Ω} |π_{t_1}(y, ..., y, q) - π_{t_2}(y, ..., y, q)| dμ < ρ.
Then t_1 and t_2 are equivalent. Otherwise, this fact would yield a contradiction, because the following can be computed for some trees t^0_1 and t^0_2, respectively, which differ only in one subtree equal to t_1 or t_2, respectively. (We assume w.l.o.g. that the differing subtrees of t^0_1 and t^0_2 are in each layer the leftmost subtrees, i.e., t^i_1 = a_{i+1}(t^{i+1}_1, s^{i+1}_2, ..., s^{i+1}_k) and t^i_2 = a_{i+1}(t^{i+1}_2, s^{i+1}_2, ..., s^{i+1}_k) for i = 0, ..., l-1 and some l ≥ 1, labels a_{i+1}, and trees s^{i+1}_j, t^{i+1}_1, t^{i+1}_2 with t^l_1 = t_1 and t^l_2 = t_2.)

2ρ ≤ |∫_{q∈F} π_{t^0_1}(y, ..., y, q) dμ - ∫_{q∈F} π_{t^0_2}(y, ..., y, q) dμ|
   = |∫_{q∈F} ∫_{(q^1_1, ..., q^1_k)∈Ω^k} (π_{t^1_1}(y, ..., y, q^1_1) - π_{t^1_2}(y, ..., y, q^1_1)) · π_{s^1_2}(y, ..., y, q^1_2) · ... · π_{s^1_k}(y, ..., y, q^1_k) · π_{a_1}(q^1_1, ..., q^1_k, q) dμ^{k+1}|
   = ...
   = |∫_{q∈F} ∫_{q^i_j∈Ω} (π_{t_1}(y, ..., y, q^l_1) - π_{t_2}(y, ..., y, q^l_1)) · π_{s^l_2}(y, ..., y, q^l_2) · ... · π_{s^l_k}(y, ..., y, q^l_k) · ... · π_{s^1_k}(y, ..., y, q^1_k) · π_{a_l}(q^l_1, ..., q^l_k, q^{l-1}_1) · ... · π_{a_1}(q^1_1, ..., q^1_k, q) dμ^{lk+1}|
   ≤ ∫_{q∈Ω} |π_{t_1}(y, ..., y, q) - π_{t_2}(y, ..., y, q)| dμ ≤ ρ.
π_t(y, ..., y, ·) is piecewise uniformly continuous for any t ∈ B with the same constants as z because

|π_t(y, ..., y, p) - π_t(y, ..., y, q)| ≤ ∫_{p_i∈Ω} π_{t_1}(y, ..., p_1) · ... · π_{t_k}(y, ..., p_k) · |π_a(p_1, ..., p_k, p) - π_a(p_1, ..., p_k, q)| dμ^k < ε

for |p - q| < δ, p, q ∈ Ω_i, t = a(t_1, ..., t_k). But only a finite number of functions exists which are uniformly continuous on the Ω_i with constant δ corresponding to ε = ρ/(4μ(Ω)) in the assumption made on z, and which have a distance of at least ρ from each other with respect to dμ. The number can be bounded explicitly using the same argumentation as in [84], Theorem 3.1: the Ω_i are covered with a lattice of points of distance at most δ, and the values at these points are contained in one of a finite set of intervals with diameter at most ρ/(2μ(Ω)) which cover [0,1]. An upper bound ∏_i (2μ(Ω)/ρ)^{(√n · diam(Ω_i)/δ)^n} results, where diam(Ω_i) is the diameter of the component Ω_i of Ω. □
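For reference, the noise-free tree automaton of the above definition can be evaluated bottom-up. The following sketch is only an illustration of the mapping f(t) = 1 ⟺ δ̃_s(t) ∈ F; the binary alphabet, the transition table, and the example tree are hypothetical.

```python
# Trees are tuples (label, subtree_1, ..., subtree_k); the empty tree is None
# and evaluates to the initial state s.
def evaluate(tree, delta, s):
    if tree is None:
        return s
    label, *subtrees = tree
    states = tuple(evaluate(sub, delta, s) for sub in subtrees)
    return delta[(label,) + states]

# Hypothetical automaton with k = 2, input alphabet {'a', 'b'} and states {0, 1}:
# a tree is accepted iff its root label is 'a' and both subtrees evaluate to state 0.
delta = {(label, q1, q2): 1 if (label == 'a' and q1 == q2 == 0) else 0
         for label in 'ab' for q1 in (0, 1) for q2 in (0, 1)}
s, F = 0, {1}

t = ('a', ('b', None, None), None)
print(evaluate(t, delta, s) in F)   # True: delta maps ('a', 0, 0) to the final state 1
```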
3.3.4 A p p r o x i m a t i o n on a Finite T i m e Interval So far, we have only considered mappings with discrete inputs which we wanted to approximate in the maximum norm. All negative results transfer to the case of continuous labels, of course. But here, an additional question arises: Can any continuous mapping be approximated in the maximum norm with a folding network on restricted inputs? Note that approximation in
the maximum norm on restricted inputs is a special case of approximation in probability if we only consider symbolic, i.e., discrete domains. For continuous labels, the following result shows that an approximation is possible. However, the encoding dimension necessarily increases for realistic networks and some functions that are to be approximated, for increasing input height.

Theorem 3.3.4. Choose T ∈ N and a compact set B ⊂ R^m. For any continuous mapping f defined on the trees over B of height at most T and any ε > 0, a folding network h ∘ g̃_y exists such that |h ∘ g̃_y(t) - f(t)| < ε for all such trees t. g can be a feed-forward network without hidden layer and h a single hidden layer feed-forward network with linear outputs. The other activation functions are locally Riemann integrable and nonpolynomial or squashing. If g is continuous and the interior of B is not empty, the encoding dimension increases at least exponentially with T for k ≥ 2 and linearly with T for k = 1, for ε < 1 and some real valued and continuous f, regardless of the number of hidden layers in g.
Proof. Without a restriction on the encoding dimension it is easy to construct an encoding g such that g̃_y simply writes the single labels of an input tree of height T into one real vector of dimension d = k^T (m+1) + 1 or d = T (m+1) + 1 for k = 1, respectively. If the actual number of places which are already used, p, is encoded into the last dimension as (0.1)^p, and s is a real number which is not contained in any label in B, an encoding is induced by the recursive mapping

(x, x^1, (0.1)^{l_1}, ..., x^k, (0.1)^{l_k}) ↦ (s, x, ..., (0.1)^{l_1 + ... + l_k + m + 1}),

where the remaining coefficients, which copy x^1, ..., x^k into the appropriate places, can be computed with a finite gain using l_1, l_2, .... Since the finite gain can be approximated, for example, with a sigmoidal network, the entire mapping g can be approximated with a single hidden layer network with linear outputs. The linearity can be integrated into the part h and the connections in g. h can be chosen such that it approximates the continuous mapping on these codes in R^d [58].
Surprisingly, this brute force method is in some way the best possible encoding. Let an encoding dimension l(T) be given which does not increase exponentially or linearly if k = 1, respectively, with T. Assume g is continuous. Choose x ∈ B and ε > 0 such that the ball of radius ε with center x is contained in B. Choose T with m d_0 > l(T) k, where d_0 = k^{T-1} or T-1 if k = 1, respectively. Assume a^{11}, ..., a^{m d_0} are different points in B. We consider the following mapping f with images in [-1, 1]: We scale all coefficients of the leaves with a factor such that one value becomes ±1. Then we output one coefficient depending on the label of the root. Formally, the image of a tree with height T, root a^{ij}, and leaves a^1, ..., a^{d_0} is (a^j_i - x_i) / max{|a^l_k - x_k| | k, l} if the point (a^1, ..., a^{d_0}) is not contained in the ball of radius ε/2 and center (x, ..., x) in B^{d_0}. Otherwise, f is an arbitrary continuation of this function. If k = 1, a mapping f is considered
which maps the sequences of length T of the form [a^1, ..., a^{d_0}, a^{ij}] to an output (a^j_i - x_i) / max{|a^l_k - x_k| | k, l} or the value of a continuous completion, respectively, in the same manner. The approximation h ∘ g̃_y on these trees of height T with variable leaves and root and fixed interior labels, or on sequences of length T, respectively, decomposes into a mapping g̃ : B^{d_0} → R^{l(T) k} and h ∘ g : B × R^{l(T) k} → R, where necessarily antipodal points in the sphere of radius ε and center (x, ..., x) in B^{d_0} exist which are mapped by g̃ to the same value because of the theorem of Borsuk-Ulam [1]. Consequently, at least one value of h ∘ g̃_y differs from the desired output by at least 1. □
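The brute-force encoding used in the first part of the proof can be made concrete for sequences (k = 1): every label of dimension m gets its own block of a vector of dimension T(m+1)+1, prefixed by a marker value s not occurring in B, and the last coordinate stores 0.1^p for the number p of blocks already used. The sketch below is a flat, non-recursive illustration of this idea, not the network construction itself; the concrete marker value and the test data are assumptions.

```python
def encode_sequence(labels, T, s=-1.0):
    """Write a sequence of at most T labels (each a list of m reals) into one
    vector of dimension T*(m+1)+1: each label gets its own block prefixed by the
    marker s, and the last coordinate stores 0.1**p for the p blocks already used."""
    m = len(labels[0])
    code = [0.0] * (T * (m + 1) + 1)
    for p, x in enumerate(labels):
        block = p * (m + 1)
        code[block] = s
        code[block + 1: block + 1 + m] = x
    code[-1] = 0.1 ** len(labels)
    return code

print(encode_sequence([[0.2, 0.5], [0.7, 0.1]], T=3))
```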
3.4 Discussion and Open Questions
We have shown in the first part of this chapter that folding networks are universal approximators in the sense that any measurable function on trees, even with real labels, to a real vector space can be approximated arbitrarily well in probability. One can derive explicit bounds on the number of layers and neurons that are required for such an approximation. In particular, only a finite number of neurons is needed in the recursive part if the labels are elements of a finite alphabet. This situation takes place in symbolic domains, for example. Furthermore, only a finite number of neurons is necessary in the entire network in order to interpolate a finite set of data exactly; this situation occurs whenever an empirical pattern set is dealt with in a concrete learning scenario. The main relevance of these results for practical applications is twofold: First, the in principle approximation capability of folding networks is a necessary condition for any practical application where an unknown function is to be learned. If this capability was limited, no training algorithm, however complicated, would succeed in approximation tasks dealing with functions that cannot be represented by folding networks. Second, the explicit bounds on the number of layers and neurons enable us to limit the search space for an appropriate architecture in a concrete algorithm. Furthermore: The explicit bounds on the number of neurons necessary to interpolate a finite set of points make it possible to create test situations for the in principle capability of a concrete learning algorithm of minimizing the empirical error. We have used an encoding method for symbolic data and, additionally, a discretization process when dealing with continuous data in the proofs. The discretization process controls the number of neurons which is necessary
in the recursive part of a network and which is a priori unlimited for an arbitrary function. A consideration of the smoothness, i.e., the derivatives of the mapping that has to be approximated allows us to limit the number of neurons that are necessary in the recursive part. This is due to the fact that a bound on the derivative also limits the constants that appear in a continuity requirement. Therefore we can limit the diameter of the intervals the input space has to be divided into a priori. Unfortunately, the bounds on the number of neurons that are obtained in this way are rather large compared to the case of finite inputs. Can they be improved in the case of real labels, too? All approximation results in the first part lead to approximation in probability. If a continuous function is to be approximated in the maximum norm for restricted inputs, the encoding is necessarily a trivial encoding in the worst case. This fact indicates that an approximation of functions that are very sensitive to small changes of the input labels may lead to problems in practical application. Considering approximation in the maximum norm for purely symbolic inputs touches on computability questions. It has been shown in the second part of this chapter that functions exist that cannot be approximated in the maximum norm with a recurrent network. Indeed, under realistic assumptions, i.e., the presence of noise, only Mealy tree automata can be approximated. As a practical consequence the capability of using recurrent networks in applications dealing with very large trees or very long sequences is restricted. However, it is an interesting fact that from a theoretical point of view even noncomputable functions can be computed by a recurrent network if the computation on the real values is performed with perfect reliability. In fact, we have shown that any function can be computed with a sigmoidal recurrent network and a fixed number of neurons in exponential time on off-line inputs. But the question is still open as to whether it is possible to simulate simple Turing machines with recurrent networks with the standard sigmoidal activation function and only a polynomial increase in time.
Chapter 4
Learnability
Frequently, a finite set of training examples (x_1, y_1), ..., (x_m, y_m) is available in a concrete learning task. We choose a folding architecture with an appropriate number of layers and neurons - knowing that the architecture can represent the data perfectly and can in principle approximate the regularity in accordance to which the inputs x_i are mapped to the y_i. Then we start to fit the parameters of the architecture. Assuming that we manage the task of minimizing the error on the given data, we hope that the network we have found represents both the empirical data and the entire mapping correctly. That means that the network's output should be correct even for those data x which are different from the x_i used for training. Here the question arises as to whether there exists any guarantee that a network generalizes well to unseen data when it has been trained such that it has small empirical error on the training set. That is, the question arises as to whether a finite set of empirical data includes sufficient information such that the underlying regularity can be learned with a folding architecture.
Again, a positive answer to this question is a necessary condition for the success of any learning algorithm for folding networks. A negative answer would yield the consequence that a trained network may remember the empirical data perfectly - but no better than a table-lookup, and the network may show unpredictable behavior on unseen data.
Of course the question of learnability of a function class from an information theoretical point of view can be made more precise in many different ways dealing with different scenarios. Here we consider the classical PAC setting proposed by Valiant [129] and some of its variants: We assume that the empirical data is given according to an - in general unknown - probability distribution. The underlying function can be learned if we can bound the probability of poor generalization of our learning algorithm in dependence on the number of examples, the number of parameters in our network, and maybe some other quantities that occur in the algorithm. We require that the probability of training samples where our learning algorithm outputs a function that differs significantly from the underlying regularity becomes small if the number of examples increases. Explicit bounds on the number of examples necessary for valid generalization, such that we can limit a priori the amount of data that we need, are of special interest. Since the computing
time of a learning algorithm directly depends on the number of examples that have to be considered, these bounds should be polynomial in the various parameters of the training algorithm.
Even the demand for a probably good generalization with an increasing number of examples can be specified in several ways: Usually we deal with an unknown but fixed distribution of the data, such that learnability with respect to one fixed distribution is sufficient. Maybe we are aware of additional information about the data and can restrict the consideration to a special class of distributions. We can, for example, limit the maximum height of input trees if large trees rarely occur. On the other hand, we may be interested in distribution-independent bounds because we do not know the special circumstances of our learning scenario, but we only want to train the network once with a number of examples that is sufficient for every scenario.
Of course one algorithm that generalizes well is sufficient. Therefore we may ask whether at least one good algorithm exists. But although such an algorithm might exist, this fact could be useless for us because this training method has high computational costs. On the other hand, we may ask for a guarantee that any algorithm that produces small empirical error or satisfies other properties generalizes well. Such a result allows us to improve the performance of our training method without losing the generalization capability - as long as we take care that the algorithm produces small empirical error. Due to the computational costs, noisy data, or some other reasons, perfect minimization of the empirical error may be impossible. In these cases a result in which the empirical error is representative of the real error would be interesting.
Finally, we can specify the learning scenario itself in different ways: We may be interested in the classification of data or in the interpolation of an entire real function. The function that is to be learned may be of the same form as the network architecture we want to fit to the data or it may have an entirely different and unknown form. Moreover, the data may be subject to some noise such that we have to learn a probability distribution instead of a simple function. The above questions arise in all these settings:
1. Does a learning algorithm exist that generalizes well?
2. Can the generalization ability of every algorithm with small empirical error be established?
3. In which cases is the empirical error representative of the real error?
4. What amount of data is necessary in the above questions? Do explicit, preferably distribution-independent bounds exist?
We start this chapter with a formal definition of the learning scenario and an overview of well known results which will be used later on. Here we mainly use the notation of [132] and refer to the references therein. Afterwards we have a closer look at the distribution-dependent setting and add
some general results to this topic. The so-called VC-dimension and generalizations, respectively, play a key role for concrete bounds. The VC-dimension of folding networks depends on several network parameters and, in the most interesting cases, even on the maximum height of the input trees. Consequently, explicit bounds on the number of examples can be derived in all of the above questions for the distribution-dependent setting. Distribution-independent bounds cannot exist in general. Examples can be constructed in which the amount of data increases exponentially compared to the learning parameters, due to the considered distribution. But even without any explicit prior knowledge about the distribution, valid generalization can be guaranteed in some way - in particular, when the data turns out to behave well a posteriori, that is, if the height of the trees in the training set is limited. This consideration fits into the framework of so-called luckiness functions.
4.1 The Learning Scenario
Learning deals with the possibility of learning an abstract regularity given a finite set of data. We fix an input space X (for example, the set of lists or trees) which is equipped with a σ-algebra. We fix a set F of functions from X to [0,1] (a network architecture, for example). An unknown function f : X → [0,1] is to be learned with F. For this purpose a finite set of independent, identically distributed data x = (x_1, ..., x_m) is drawn according to a probability distribution P on X. A learning algorithm is a mapping

h : ∪_{m=1}^∞ (X × [0,1])^m → F

which selects a function in F for any pattern set such that this function - hopefully - nearly coincides with the function that is to be learned. We write h_m(f, x) for h_m(x_1, f(x_1), ..., x_m, f(x_m)). Since we are only interested in the information theoretical part in this chapter and will consider computational issues in the next chapter, an algorithm reduces to a mapping as defined above without taking care about whether and how it can be implemented.
An algorithm tries to output a function of which it knows only a number of training examples. If the algorithm yields valid generalization, the real error d_P(f, h_m(f, x)), where

d_P(f, g) = ∫_X |f(x) - g(x)| dP(x),

will highly probably be small. Note that d_P defines a pseudometric on the class of measurable functions. Of course, this error is unknown in general since the probability P and the function f that has to be learned are unknown.
A concrete learning algorithm often simply minimizes the empirical error d_m(f, h_m(f, x), x), where

d_m(f, g, x) = (1/m) ∑_{i=1}^m |f(x_i) - g(x_i)|.
For example, a standard training algorithm for a network architecture fits the weights by means of a gradient descent on the surface representing the empirical error in dependence on the weights.
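The two error notions can be contrasted directly in a small Monte Carlo experiment. The distribution, the target f, and the hypothesis g below are arbitrary illustrative choices, and d_P is only approximated by sampling.

```python
import random

def d_m(f, g, sample):
    """Empirical distance of f and g on the sample x_1, ..., x_m."""
    return sum(abs(f(x) - g(x)) for x in sample) / len(sample)

def d_P_estimate(f, g, draw, n=100_000):
    """Monte Carlo estimate of d_P(f, g) = integral of |f - g| with respect to P."""
    return sum(abs(f(x) - g(x)) for x in (draw() for _ in range(n))) / n

draw = random.random                     # P = uniform distribution on [0, 1]
f = lambda x: 1.0 if x > 0.5 else 0.0    # target concept
g = lambda x: 1.0 if x > 0.6 else 0.0    # hypothesis produced by some algorithm

sample = [draw() for _ in range(50)]
print("empirical error d_m :", d_m(f, g, sample))
print("real error d_P (MC) :", d_P_estimate(f, g, draw))   # close to 0.1
```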
4.1.1 Distribution-dependent, Model-dependent Learning

In the distribution-dependent setting we consider only one fixed probability P on X. We first assume that the function f that is to be learned is itself contained in F. Hence the class F is a model for f. Since f ∈ F, a concrete algorithm can produce empirical error 0 for every function f and training sample x; such an algorithm is called consistent. A class with outputs in the finite set {0,1} is called a concept class. A concept has an extremely simple form, corresponding to a classification into two sets. A generalization of concepts are functions with values in a finite alphabet, which correspond to a classification of the space X into a finite set of classes. We denote by P^m the product measure induced by P on X^m. Both here and in the following, all functions and sets we consider have to be measurable. See, e.g., [4, 51] for conditions that guarantee this property.
Definition 4.1.1. A function set F is said to be probably approximately correct or PAC learnable if an algorithm h exists (which is then called PAC, too) such that for any ε > 0 and δ > 0 a number m_0 ∈ N exists such that for every m ≥ m_0

sup_{f∈F} P^m(x | d_P(f, h_m(f, x)) > ε) ≤ δ.

F is called probably uniformly approximately correct or PUAC learnable if an algorithm h exists (which is then called PUAC, too) such that for any ε > 0 and δ > 0 a natural number m_0 exists such that for every m ≥ m_0

P^m(x | sup_{f∈F} d_P(f, h_m(f, x)) > ε) ≤ δ.

F is said to be consistently PAC learnable if any consistent algorithm is PAC. F is said to be consistently PUAC learnable if any consistent algorithm is PUAC.
F has the property of uniform convergence of empirical distances, or UCED for short, if for any ε > 0 and δ > 0 a number m_0 ∈ N exists such that for every m ≥ m_0

P^m(x | ∃f, g ∈ F : |d_m(f, g, x) - d_P(f, g)| > ε) ≤ δ.

If a fixed ε is considered, this parameter is referred to as the accuracy. If a fixed δ is considered, this δ is called the confidence.
PAC learnability can be seen as the weakest condition for efficient learnability. The number of examples that are necessary for a probably good output of an algorithm is limited independently of the (unknown) function that has to be learned. If such a uniform bound does not exist, the amount of data necessary for valid generalization cannot be determined a priori. In this case, a correct output takes more examples and consequently more time than expected for at least some situations. Of course, efficient learnability additionally requires that the number of examples is only polynomial in the required accuracy and confidence and that the algorithm runs in polynomial time in dependence on the data. In the original work of Valiant the question of the complexity of learning is included in the PAC framework [129]. In this chapter we only consider the information theoretical point of learning, and ask the question of computational complexity in the next chapter.
Obviously, the UCED property implies consistent PUAC learnability, which itself implies consistent PAC learnability. But the UCED property is a stronger condition than consistent PUAC learnability (see [132], Example 6.4). As already stated, consistent learnability is desirable such that we can use any efficient algorithm with minimum empirical error. The UCED property is interesting if we do not manage to minimize the empirical error due to high computational costs, because this property leads to the fact that the empirical error of the training algorithm is representative of the real error. Finally, the UCED property does not refer to the notion of a learning algorithm. It would be desirable to obtain equivalent characterizations for the other terms, too, such that the characterizations do not refer to the notion of a learning algorithm and can be tested more easily if only F is known. For this purpose several quantities are introduced.

Definition 4.1.2. For a set S with pseudometric d the covering number N(ε, S, d) denotes the smallest number n such that n points x_1, ..., x_n in S exist such that the closed balls with respect to d with radius ε and center x_i cover S. The packing number M(ε, S, d) is the largest number n such that n points x_1, ..., x_n in S exist that are ε-separated, that is, d(x_i, x_j) > ε for i ≠ j.

It follows immediately that M(2ε, X, d) ≤ N(ε, X, d) ≤ M(ε, X, d). Learnability can be characterized in terms of the covering number of F as follows:

Lemma 4.1.1. If the covering number N(ε, F, d_P) is finite for every ε, F is PAC learnable. One can construct the so-called minimum risk algorithm, for which a number of 8/ε² · ln(N(ε/2, F, d_P)/δ) examples is sufficient for a function class and a number of 32/ε · ln(N(ε/2, F, d_P)/δ) examples for a concept class to PAC learn F with accuracy ε and confidence δ. If F is a concept class, finiteness of the covering number for every ε > 0 is equivalent to PAC learnability. Any algorithm h which is PAC with accuracy ε and confidence δ requires at least lg(M(2ε, F, d_P)(1 - δ)) examples.
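The minimum risk algorithm referred to in Lemma 4.1.1 is easy to sketch once a finite ε/2-cover of F is available: it simply returns the element of the cover with the smallest empirical error on the labelled sample. In the sketch below the cover is passed as an explicit list of candidate functions, which is an assumption of this illustration.

```python
def minimum_risk(cover, sample):
    """Return the element of the finite cover with minimal empirical error
    on the labelled sample (x_i, y_i)."""
    def empirical_error(f):
        return sum(abs(f(x) - y) for x, y in sample) / len(sample)
    return min(cover, key=empirical_error)

# Illustration: a cover of the threshold concepts on [0, 1] by a grid of thresholds.
cover = [lambda x, t=t / 10: 1.0 if x >= t else 0.0 for t in range(11)]
sample = [(0.05, 0.0), (0.2, 0.0), (0.45, 1.0), (0.8, 1.0)]
best = minimum_risk(cover, sample)
print([best(x) for x, _ in sample])   # reproduces the sample labels
```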
See [132] (Theorems 6.3, 6.4, and 6.5). Considering real valued function classes, it is possible to specify the function that has to be learned uniquely by one output value, even if the function class is not countable. Due to this possibility, examples of PAC learnable function classes with infinite covering number exist, e.g., [132] (Example 6.11). But the necessity of a finite covering number holds even for functions with outputs in a finite but not necessarily binary set [132] (Theorem 6.7).
For both functions and concepts the property of consistent PUAC learnability can be characterized as follows:

Lemma 4.1.2. The fact that F is consistently PUAC learnable is equivalent to the shrinking width property, that is, for any ε > 0 and δ > 0 a number m_0 ∈ N exists such that for every m ≥ m_0

P^m(x | ∃f, g (d_m(f, g, x) = 0 ∧ d_P(f, g) > ε)) ≤ δ.

See [132] (Theorem 6.2). Finally, even for the UCED property one can find more appropriate characterizations. In a first step, the distances of two functions can be substituted by the mean of only one function, that is, the distance between the function and the constant function 0. The convergence of the empirical mean can also be correlated to covering numbers.

Lemma 4.1.3. Assume that F has the property of uniform convergence of empirical means, or UCEM for short, that is, for any ε > 0 and δ > 0 a number m_0 ∈ N exists with

P^m(x | sup_{f∈F} |d_P(f, 0) - d_m(f, 0, x)| > ε) ≤ δ

for every m ≥ m_0. Then it has the UCED property, too.
The UCEM property is equivalent to the condition that for any ε > 0 and δ > 0 some m_0 ∈ N exists such that for every m ≥ m_0

E_{P^m}(lg(N(ε, F|_x, d_m))) / m ≤ δ,

where d_m is a short notation for the pseudometric measuring the empirical distance of two functions, d_m(f, g, x) = (1/m) ∑_{i=1}^m |f(x_i) - g(x_i)| for functions f and g in F|_x, and E_{P^m} denotes the expected value with respect to P^m. The following inequality holds:

P^m(x | sup_{f,g∈F} |d_P(f, g) - d_m(f, g, x)| > ε) ≤ 2 E_{P^{2m}}(2 N(ε/16, F|_x, d_{2m})²) e^{-mε²/32}.
See [132] (Example 5.5, Corollary 5.6, Theorem 5.7). Obviously, the UCEM property follows from the UCED property if the constant function 0 is contained in F. These results establish properties that guarantee a positive answer to the questions 1-3 at the beginning of this chapter. But the bounds are still distribution-dependent, and for the computation of these bounds it is necessary to estimate the covering number, which depends on P.
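On a concrete sample one can at least estimate such covering numbers empirically. The following heuristic sketch performs a greedy packing with respect to the empirical pseudometric d_m; its size lower-bounds the packing number M(ε, F|x, d_m) and, via M(2ε, ·) ≤ N(ε, ·), is related to the covering number. The finite pool of threshold functions is a hypothetical stand-in for the restricted class F|x.

```python
def d_m(f, g, sample):
    return sum(abs(f(x) - g(x)) for x in sample) / len(sample)

def greedy_packing(functions, sample, eps):
    """Greedily collect functions that are pairwise eps-separated w.r.t. d_m;
    the size of the result lower-bounds the packing number M(eps, F|x, d_m)."""
    packing = []
    for f in functions:
        if all(d_m(f, g, sample) > eps for g in packing):
            packing.append(f)
    return packing

functions = [lambda x, t=t / 20: 1.0 if x >= t else 0.0 for t in range(21)]
sample = [i / 50 for i in range(50)]
print(len(greedy_packing(functions, sample, eps=0.1)))
```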
4.1.2 Distribution-independent, Model-dependent Learning

In the distribution-independent setting nothing is known about the underlying probability measure P, but a priori bounds on the number of examples necessary for valid generalization are to be established as well. Fortunately, one can limit the above distribution-dependent covering numbers by combinatorial bounds. These are independent of P such that the above inequalities hold even if the terms in the definition of PAC, PUAC, or UCED, respectively, are prefixed by a sup_P, which means that the obtained inequalities hold regardless of the special probability measure P on X. For this purpose a quantity that measures the capacity of a concept or function class is introduced.
Definition 4.1.3. Let F be a concept class. A set of points {x_1, ..., x_m} ⊂ X is said to be shattered by F if for every mapping d : {x_1, ..., x_m} → {0,1} some f ∈ F exists with f|_{{x_1,...,x_m}} = d. The Vapnik-Chervonenkis dimension VC(F) is the largest size of a set (maybe ∞) that is shattered by F.
Let F be a function class. A set of points {x_1, ..., x_m} ⊂ X is said to be ε-fat shattered by F if real values r_1, ..., r_m exist such that for every mapping d : {x_1, ..., x_m} → {0,1} some function f ∈ F exists with d(x_i) = H(f(x_i) - r_i) and |f(x_i) - r_i| > ε for every i. For ε = 0 we say 'shattered', too. The ε-fat shattering dimension fat_ε(F) is the largest size of a set that is ε-fat shattered by F. For ε = 0 this quantity is called the pseudo-dimension PS(F).

For d = PS(F) or d = VC(F), respectively, d ≥ 2, and ε < 0.94 it holds that

M(ε, F, d_P) ≤ 2 ((2e/ε) ln(2e/ε))^d

for an arbitrary P [132] (Theorem 4.2). Consequently, a finite VC- or pseudo-dimension, respectively, ensures learnability. Concrete bounds on the VC- or pseudo-dimension enable bounds on the convergence of a learning algorithm. Furthermore, for d = VC(F) and ε < 1/d a probability measure P exists such that M(ε, F, d_P) ≥ 2^d; take the uniform distribution on d points that are shattered, for example. The argumentation can be improved such that a lower bound on M(ε, F, d_P) of order e^d arises for ε < 0.5 [132] (Lemma 7.2). In particular, finiteness of the VC-dimension is both necessary and sufficient for distribution-independent PAC learnability of a concept class. In [13] it is
shown that finiteness of the pseudo-dimension is necessary for the learnability of function classes with a finite number of outputs, too. If d = VC(F) or PS(F) is finite, one can bound the covering number N(ε, F|_x, d_m) in Lemma 4.1.3 by 2((2e/ε) ln(2e/ε))^d ([132] Corollary 4.2). This leads to a number of

O((d ln(1/ε) ln(ln(1/ε)) + ln(1/δ)) / ε)

examples which are sufficient such that every consistent algorithm is PUAC with accuracy ε and confidence δ [132] (Theorem 7.5). Furthermore, the UCED property is also guaranteed. In particular, the three terms PAC, PUAC, and UCED coincide in the distribution-independent setting for a concept class and can be characterized by the VC-dimension.
One can further bound the expected covering number by the inequality

E_{P^m}(N(ε, F|_x, d_m)) ≤ 2 (4m/ε²)^{d ln(2em/(dε))}

for function learning, where d = fat_{ε/4}(F) is the fat shattering dimension [2]. In fact, even the number sup_x N(ε, F|_x, d_m) is bounded by the above term. Consequently, the weaker condition of a finite fat shattering dimension is sufficient for the learnability of functions. A number of O(d/ε² · ln(ln(d/ε)) + 1/ε² · ln(1/δ)) examples is sufficient for the UCEM property, where d = fat_{ε/24}(F) [2]. See [2] (Example 2.1) for a function class with infinite pseudo-dimension but finite fat shattering dimension for all ε. One can show that this condition is necessary for the UCEM property [2], and necessary for function learning under the presence of noise which fulfills some regularity conditions [7]. Because of this fact, the fat shattering dimension characterizes distribution-independent learnability of function classes under realistic conditions. Both here and in the concept case PUAC learnability and the UCED property come for free.

4.1.3 Model-free Learning

We have assumed that the function that has to be learned is itself contained in F. This assumption is in general unrealistic since we know nothing about the underlying regularity. Therefore one can weaken the definition of learnability in the model-free setting as follows: Assume the unknown function f_0 is contained in a set F_0, which may be different from F. Then we can try to find a function in F approximating f_0 best. The minimum error achievable in F is of size

J_P(f_0) = inf_{f∈F} d_P(f, f_0).

Given a sample (x_i, f_0(x_i))_i the minimum empirical error achievable in F has the size
J_m(f_0, x) = inf_{f∈F} (1/m) ∑_i |f_0(x_i) - f(x_i)|.

We define that (F_0, F) is PAC if an algorithm h exists such that for any ε > 0 and δ > 0 a number m_0 ∈ N exists such that for all m ≥ m_0

sup_{f_0∈F_0} P^m(x | d_P(f_0, h_m(f_0, x)) - J_P(f_0) > ε) ≤ δ.

PUAC learnability is characterized by the property that for any ε > 0 and δ > 0 a number m_0 ∈ N can be found such that for all m ≥ m_0

P^m(x | sup_{f_0∈F_0} (d_P(f_0, h_m(f_0, x)) - J_P(f_0)) > ε) ≤ δ.

The definition of the UCEM property need not be changed. As before, finiteness of the covering number of the approximating class F ensures PAC learnability [132] (Theorem 6.9). The UCEM property of F ensures in the distribution-independent case that every algorithm with empirical error converging to J_m(f_0, x) is PAC [132] (Theorems 3.2 and 5.11). The definition of UCEM is not different in the model-free case, hence this property can be correlated to the VC-, pseudo-, or fat shattering dimension, leading to distribution-independent bounds for learnability in the model-free setting, too.
Note that this definition of model-free learning, which is a special case of the notation used in [132], fits only a restricted setting. In practical applications one usually deals with noisy data, that is, probability distributions rather than simple functions have to be learned. In this situation the empirical error of a probability P_f on X × [0,1] compared to a function g may be defined as d_P(P_f, g) = ∫_{X×[0,1]} |y - g(x)| dP_f(x, y). Furthermore, this measure may be not appropriate since some deviations of the output from the function that has to be learned may be worse than others. For example, a substitution of the Euclidean distance by the quadratic error punishes large deviations more than small ones. All these modifications fit into the framework of agnostic learning, as established in the work of Haussler [51], who has generalized the approach of Vapnik and Chervonenkis [12, 131] about concept classes to real valued functions. Note that the explicit bounds on the number of examples can be improved for concept learning [132]. Furthermore, other useful modifications of the entire learning scenario exist which we will not consider in the following, see [5, 52, 74, 75, 94, 95, 133].
As a consequence of these results, it is appropriate to first consider the VC-, pseudo-, or fat shattering dimension when examining the learnability of a function class. If this dimension is finite, learnability can be guaranteed and explicit bounds on the number of examples can be established. If this dimension is infinite or very large, so that prohibitive bounds on the number of examples result, one can either restrict the function class to a class with smaller VC-dimension or try to estimate the covering number for the special distribution - maybe these methods lead to better bounds.
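Definition 4.1.3 can be tested directly in small cases: a point set is shattered if every 0/1 labelling is realized by some function of the class. The brute-force check below works only for a finite pool of candidate functions and small point sets and is meant purely as an illustration of the definition; the interval concepts are a classical example with VC-dimension 2.

```python
from itertools import product

def is_shattered(points, functions):
    """Check whether the finite function pool realizes every 0/1 labelling of the points."""
    realized = {tuple(1 if f(x) > 0 else 0 for x in points) for f in functions}
    return all(tuple(d) in realized for d in product((0, 1), repeat=len(points)))

# Intervals [a, b] on the real line (as indicator functions) shatter any 2 points
# but no 3 points (the labelling 1, 0, 1 cannot be realized), so their VC-dimension is 2.
intervals = [lambda x, a=a / 10, b=b / 10: 1 if a <= x <= b else 0
             for a in range(11) for b in range(11) if a <= b]
print(is_shattered([0.15, 0.55], intervals))        # True
print(is_shattered([0.15, 0.55, 0.85], intervals))  # False
```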
4.1.4 Dealing with Infinite Capacity In fact, when considering the entire story one often deals with a class of infinite VC-dimension. This is the natural setting if a function with an unknown structure is to be learned. To fit a model to such a function it is necessary to start with a class which has some kind of universal approximation capability - otherwise the approximation process would fail in at least some situations. We have proved the universal approximation capability of folding networks in the previous chapter and it has also been established for standard feed-forward networks [59]. Since any function on vectors, lists, or trees, respectively, can be represented by the class of feed-forward, recurrent, or folding networks these classes obviously have infinite capacity. One usual way to overcome the problem of infinite VC-dimension in a concrete learning scenario is due to the fact that the function class is structured in a natural way by the number of parameters that can be fitted. In neural network learning first an architecture is chosen and then the network parameters are trained. Generally speaking, when approximating an unknown function one can first estimate the complexity and rough structure of the function so that the maximum number of parameters can be limited. Afterwards, we can learn with this class with a finite number of parameters and finite capacity and obtain guarantees for the generalization ability, as already described in this chapter. In fact, this procedure first minimizes the so-called structural risk by restricting the capacity - fixing the neural architecture and it afterwards minimizes the empirical risk by fitting the parameters to the data - back-propagation, for example. The error of the approximation is then bounded by the sum of the structural risk, that is, the deviation of the real error and the training error, and the empirical risk, that is, the training error [130]. The empirical risk can be measured directly, we have to evaluate the output function of the training algorithm at all training examples. The structural risk can be estimated by the bounds which depend on the VC- or pseudo-dimension or related terms we have considered in this chapter. Note that these two minimization procedures, the empirical risk minimization and the structural risk minimization, are contradictory tasks because the structural risk is smaller for a parameterized subset with small capacity and a large amount of training data, whereas the empirical risk becomes worse in this situation. Therefore an appropriate balance between these two tasks has to be found. Instead of the above-mentioned method a component which further reduces the structural risk is often added to the empirical risk minimization in neural network learning: A weight decay or some other penalty term is added to the empirical error which has to be minimized, for example, and corresponds to a regularization of the function [18]. Methods even exist where the structural risk minimization is performed after the empirical risk minimization, as in Vapnik's support vector method [23, 130]. To take this method
into account some argumentation has to be added to the theoretical analysis, for example, as we will now discuss, the luckiness framework.
Now, when dealing with recurrent networks the above method - estimating the number of parameters and fitting the data afterwards - is not applicable because the hierarchy described by the number of parameters collapses when considering the corresponding complexity of the classes, as we will see in the next section. It will be shown that even neural architectures with a finite and actually very small number of parameters have unlimited capacity. Consequently, one has to add a component to the theoretical analysis to ensure learnability in this case, too. The hierarchy defined by the number of parameters has to be further refined in some way.
Two approaches in the latter case may be useful: Assume the VC-dimension of a concept class F is infinite, but the input space X can be written as a union of subspaces ∪_{i=1}^∞ X_i such that X_i ⊂ X_{i+1} and VC(F|_{X_i}) = i for all i. Then every consistent algorithm is PAC and requires only a polynomial number of examples, provided that 1 - P(X_i) = O(i^{-β}) for some β > 0 [3]. The division of the input space will turn out to be rather natural when dealing with recursive data. One can consider the sets of trees where the maximum height is restricted, for example. But at least some prior information about the probability measure - the probability of high trees - is necessary.
Another approach deals with the special output of a learning algorithm on the real data and guarantees a certain accuracy that is dependent on the concept class as well as on the luckiness of the function which is the concrete output of the algorithm. For this purpose a luckiness function L : X^m × F → R^+ is given. This luckiness function measures the luckiness of the function approximating the data which is the output of the learning algorithm. The luckiness may measure the margin or the number of support vectors in the case of classification with one hyperplane, for example. The luckiness tells us whether the output function lies in a function class with small capacity, that is, small VC- or pseudo-dimension. For technical reasons the luckiness of a function on a double sample has to be estimated on the first half of the sample in the following sense: Functions η and φ : N × R^+ × R^+ → R^+ exist such that for
any δ > 0

sup_{f∈F} P^{2m}(xy | ∃g ∈ F (d_m(f, g, x) = 0 ∧ ∀ x'y' ⊂_η xy : |{h|_{x'y'} | h ∈ F ∧ L(x'y', h) ≥ L(x'y', g)}| > φ(m, L(x, g), δ))) ≤ δ,

where x'y' ⊂_η xy refers to any vector which results if a fraction of η = η(m, L(x, g), δ) coefficients are deleted in the part x and in the part y of the vector xy. The condition requires some kind of smoothness of the luckiness function, i.e., if we know how lucky we are in a certain training situation then we can estimate the number of situations which would be at least as lucky even if more training data were available. Now if an algorithm outputs on m i.i.d. examples a concept h = h_m(f, x), which is consistent with the examples such that φ(m, L(x, h), δ) ≤ 2^{i+1}, then with probability of at least 1 - δ the following inequality holds for any probability P:
d_P(f, h) ≤ (2/m) (i + 1 + lg(4/(p_i δ))) + 4 η(m, L(x, h), δ/4) lg(4m),

where the p_i are positive numbers satisfying ∑_{i=1}^{2m} p_i = 1 [113]. Note that this approach leads to posterior bounds, although the p_i represent in some way the confidence of getting an output with a certain luckiness. This approach will be used to estimate the probability of a certain height of the input trees.
4.1.5 VC-Dimension of Neural Networks

We want to apply these theoretical results to neural network learning. For this purpose, bounds on the VC-dimension of neural architectures are needed. Denote by W the number of weights in an architecture including biases, and by N the number of neurons. For feed-forward architectures with activation function σ the following upper, lower, or sharp bounds, respectively, have been established for the VC- or pseudo-dimension d:
d = O(W ln W)              if σ = H,
    O(W² N²)               if σ = sgd,
    Ω(W N)                 if σ = sgd,
    O(W N ln q + W h ln d) if σ is piecewise polynomial, where q is the maximum number of pieces, d is the maximum degree, and h is the depth of the architecture.
See [12, 64, 69, 82, 83, 118, 132]. In the last three cases weight sharing is allowed and W denotes only the number of different adjustable parameters. Note that in the last estimation the term dh is an upper bound for the maximum degree of a polynomial with respect to the weights which occurs in a formula corresponding to the network computation. If the activation function σ is linear this degree is h, which leads to an improvement of this factor to ln h.
Consequently, in interesting cases good upper bounds exist and learnability is established for feed-forward architectures with a standard activation function. Furthermore, the approach [86] proves the finiteness of the VC-dimension and consequently the learnability if the activation function is an arbitrary algebraic function. On the contrary, an activation function σ = cos can lead to an infinite VC-dimension. One can even construct activation functions which look very similar to the standard sigmoidal function such that very small architectures have an infinite VC-dimension due to a hidden oscillation of the activation function [119]. But in both cases a restriction of the weights limits the capacity, too, because in this case the oscillation is limited.
The above bounds on the VC-dimension of neural architectures can be improved if networks with a finite input set or a fixed number of layers are
considered [8, 9]. Furthermore, in [10] the fat shattering dimension of feedforward architectures is examined. It turns out that good upper bounds can be derived for networks with small weights and depths. Some approaches deal with the VC-dimension of recurrent architectures where the corresponding feed-forward architecture has depth 1 and derive the bounds
d = O(W² t)              if σ is piecewise polynomial,
    O(W t)               if σ is a polynomial,
    O(W ln t)            if σ is linear,
    O(W N + W ln(W t))   if σ = H,
    O(W² N² t²)          if σ = sgd,
    Ω(W ln(t/W))         if σ = H or σ = id,
    Ω(W t)               if σ = sgd or σ is a nonlinear polynomial,
where t is the maximum length of an input sequence [27, 70]. Since the lower bounds depend on t and become infinite for arbitrary t, distribution-independent PAC learnability of recurrent architectures cannot be guaranteed in general. The hierarchy which is defined by the number of parameters collapses when considering the corresponding capacities.
4.2 PAC Learnability
In this section we present some general results concerning PAC learnability which deal with the learning of recursive data and are therefore of interest when training folding architectures, or which are interesting for the possibility of learning under certain conditions in general.
4.2.1 Distribution-dependent Learning

First of all, we consider the distribution-dependent setting. Here the term PUAC has been introduced by Vidyasagar [132]. The stronger condition of uniform convergence is fulfilled automatically in the distribution-independent case if only PAC learnability is required. In [132] (Problem 12.4) the question is posed whether these terms coincide in the distribution-dependent case, too. Indeed, this is not the case, as can be seen by example:
Example 4.2.1. A concept class exists that is PAC, but not PUAC learnable in the distribution-dependent setting.

Proof. Consider X = [0,1], P = the uniform distribution, and F = {f : [0,1] → {0,1} | f is constant almost surely}. Since F has a finite covering for every ε > 0, it is PAC learnable. Assume an algorithm h exists that is PUAC. Assume that for a sample x and function f the function h_m(f, x) is almost surely 0. Then for the function g, which equals f on x and is almost surely 1,
the distance d_P(g, h_m(g, x)) is 1 because h_m(f, x) = h_m(g, x). An analogous situation holds if h produces a function that is almost surely 1. Therefore P^m(x | sup_{f∈F} d_P(f, h_m(f, x)) > ε) = 1. □

What is the advantage of the requirement of PUAC learnability in distribution-dependent learning? One nice property is that with PUAC learnability we get consistent PUAC learnability for free.

Theorem 4.2.1. F is PUAC learnable if and only if F is consistently PUAC
learnable. Proof. Assume that not every consistent algorithm is PUAC and therefore the shrinking width property is violated. For an arbitrary learning algorithm h, a sample x and functions f and g with dm(f,g,x) = 0, it is valid that dp(f,g) < dp(f, hr,(f,x)) + de(g,h,,(g,x)). If dR(/,g) > e, at least one of the functions f and g has a distance of at least e/2 from the function produced by the algorithm. Therefore {x I 3f, g (din(f, g, x) = 0 A dR(f, g) > e)) C {xl suPle ~- dR(f, hm(f, x)) > e/2}; the probability of the latter set is at least as large as the probability of the first one. That is, both probabilities do not tend to 0 for increasing m because the shrinking width property is violated. [] As a consequence any consistent algorithm is appropriate if the PUAC property holds for the function class. Furthermore, a characterization of PUAC learnability which does not refer to the notion of a concrete learning algorithm is given by the shrinking width property. On the contrary, PAC learnability and consistent PAC learnability are different concepts, as can be seen by the following example:
Example 4.2.2. A PAC learnable concept class exists for which not every consistent algorithm is PAC. Proof. Consider the following scenario: X = [0, 1], P = uniform distribution, ~" = {f : X -4 {0, 1} If(x) = 0 almost surely or f(x) = 1 for all x}. Consider the algorithm which produces the function 1 until a value x with image 0 is an element of the sample. The algorithm is consistent and PAC because for any function except 1 the set with image 1 has the measure 0. However, the consistent algorithm which produces the function with image 1 on only the corresponding elements of the sample and 0 on any other point is not PAC because the constant function 1 cannot be learned. [] That means, PAC learnability ensures the existence of at least one good learning algorithm but some consistent algorithms may fail to learn a function correctly. Of course, the difference between PAC and PUAC follows from this abstract property, too. It would be nice to obtain a characterization of consistent PAC learnability which does not refer to the notion of a learning algorithm as well. The
The following condition requires the possibility that any function can be characterized by a finite set of points:

Theorem 4.2.2. A function class is consistently PAC learnable if and only
if it is finitely characterizable, that is, for all ε > 0 and δ > 0 a number m_0 ∈ N exists such that for all m ≥ m_0

sup_{f∈F} P^m(x | ∃g (d_m(f, g, x) = 0 ∧ d_P(f, g) > ε)) ≤ δ.

Proof. Assume that F is finitely characterizable and the algorithm h is consistent. The following inequality bounds the error of the algorithm h:
sup_f P^m(x | d_P(f, h_m(f, x)) > ε)
≤ sup_f P^m(x | d_m(f, h_m(f, x), x) ≠ 0) + sup_f P^m(x | d_P(f, h_m(f, x)) > ε and d_m(f, h_m(f, x), x) = 0)
≤ sup_f P^m(x | d_m(f, h_m(f, x), x) ≠ 0) + sup_f P^m(x | ∃g (d_P(f, g) > ε and d_m(f, g, x) = 0)).
Example 4.2.3. A concept class exists which is consistently PAC learnable but not PUAC learnable.
Prool. Consider
3={
f : [0,1] ~ {0,1} I ( f is 1 almost surely on [0, 0.5[ and f is 1 on [0.5, 1]) or ( f is 0 on [0, 0.5[ and f is 0 almost surely on [0.5, 1]) }
and the uniform distribution on [0,1]. s u p f P m ( x l 3 g ( d r a ( f , g , x ) = 0A d p ( f , g ) > e)) --~ 0 is valid because for a function which is 1 almost surely, for example, the set of points in [0,0.5[ where f is 1 has measure 0.5. These points characterize f. On the contrary, P m ( x l 3 f , g ( d m ( f , g , x ) =
66
4. Learnability
0 A d p ( f , g ) > e)) -- 1 because we can find for any x functions f and g which are 1 or 0 almost surely, respectively, but f l ( { X l , . . . , x,n} N [0, 0.5[) = g l ( ( x l , . . . ,xm} N [0,0.5[) = 0 and analogous for [0.5, 1]. Consequently, ~" is consistently PAC learnable but not PUAC. [] 4.2.2 Scale S e n s i t i v e Terms
The above characterizations are not fully satisfactory since a concrete learning algorithm often only produces a solution which merely minimizes the empirical error rather than bringing the error to 0. This can be due, for example, to the complexity of the empirical error minimization. In the case of model-free learning, which is considered later, it is possible that a minimum simply does not exist. Therefore the above terms are weakened in the following definition. D e f i n i t i o n 4.2.1. An algorithm h is asymptotically e-consistent if for any 6 > 0 a number mo E N exists such that for all rn > mo sup P m ( x [ dm(f, h m ( f , x ) , x ) > e) < 6. IcY" h is asymptotically uniformly e-consistent if for any 6 > 0 a number mo E N exists such that for all m >>_mo Pro( x I sup dm(f, h m ( f , x ) , x ) > e) _< 6. A function class Y: is finitely el-e2-characterizable if for any 6 > 0 some mo e N can be found such that for all m > m o sup P m ( x l qg(cl,n(f,g,x) <_ el A d p ( f , g ) > e2)) _< 6. IcY: Y: fulfills the el-e2-shrinking width property if for any 6 > 0 some mo 9 N exists such that for all m >_mo p m ( x l 3f, g ( d m ( f , g , x ) <<_e I A d p ( f , g ) > e2) ) <<6. 3: is el-consistently PAC learnable with accuracy e2 if any asymptotically el-consistent algorithm is PA C with accuracy e2. Y: is el-consistently PUAC learnable with accuracy e2 if any asymptotically uniformly el-consistent algorithm is P U A C with accuracy e2. In the case e = 0 and el = 0 the previous definitions result with the difference that here, the algorithms are only required to have small or zero error in the limit. Now we consider the question as to whether an algorithm which minimizes the empirical error with a certain degree is PAC with a certain accuracy. Analogous to the case of a consistent algorithm the following theorem holds.
4.2 PAC Learnability
67
T h e o r e m 4.2.3. )- is el -consistently
PAC learnable with accuracy e~ if and only if Jr is finitely ex -e2-characterizable. Y: is el-consistently PUAC learnable with accuracy e2 if and only if J: fulfills the el-e2-shrinking width property. Proof. Assume that ~ is finitely eve2-characterizable and the algorithm h is el-consistent. The following inequality bounds the error of the algorithm h: sup! Pm(x l dp(f, hm(f,x),x ) > e2) _< s u p ! P m ( x I dm(f, hm(f,x),x) > el) + supI pm(x I dp(f, hra(f,x),x) > e2 and dm(f, hm(f,x),x) <_el)
<_ supfPm(xl[l,n(f, hm(f,x),x) > el) + s u p y P m ( x l 3 g ( d p ( f , g , x ) > e2 and a~m(f,g,x) < el)). If conversely, the condition of finite el-e2-characterizability is violated, some 5 > 0, numbers nl, n~,... -~ (x), and functions fl, f~,.-, exist such that
Pn'(xl qg(dn,(fi,g,x) < el A dp(fi,g) > e2) > 5. Choose for every x from the support of the above set one g~r such that the properties dn, (fi,g~, x) < el and dp(fi, g~:) ) e2 hold. The partial mapping (x, f/(x)) ~-+ g~r can be completed to a el-consistent learning algorithm h which is not PAC with accuracy e2. This shows the first half of the theorem. In the uniform case, assume that ~- possesses the el-e2-shrinking width property and the algorithm h is el-consistent. The following inequality bounds the error of the algorithm h:
Pm(x[ 3f dp(f, hm(f,x),x) > e2) _< Pm(x 13fd, n(f, hm(f,x),x) > ~1) + P ~ ( x 13f (dp(f, hr,,(f,x),x) > e2 and ~l,~(.f, hm(f,x),x) < el)) <_ P m ( x [ 3 f d m ( f , h , ~ ( f , x ) , x ) > el) + P m ( x l 3 f , g(dp(f,g,x) > e2 and d,n(f,g,x) < el)). If conversely, the el-e2-shrinking width property is violated, some 5 > 0 and numbers nl,n2,... -~ oo exist such that
Pn'(xl 3f, g(dn,(f,g,x) <_el A dp(f,g) > e2) > 5. Choose for every x from the support of the above set f~ and g~r such that the properties 0],, (f~, g~r x) _< el and dp(f~, g~) > e2 hold. The partial mapping (x, f~(x)) ~-~ g~ can be completed to a el-consistent learning algorithm h which is not PUAC with accuracy e2. [] Note that the above argumentation is very similar to the proof of Theorem 4.2.2. In fact, we could substitute the property dR(f, g) > e: occurring in the definition of PAC or PUAC, respectively, by some abstract property E1 and the property dm(f,g, x) _< el which occurs in the definition of consistency
68
4. Leaxnability
by some abstract property E2. Assume we want to guarantee that for any algorithm which fulfills Ea(f, hm(f, x)) with high probability automatically E l ( f , hm(f, x)) does not hold with high probability. The above argumentation shows that this implication is true if and only if the probability that E1 and E2 hold in common is small. Note that the latter characterization is independent of the notion of a learning algorithm. Depending on whether the probability is uniform or not the above characterizations: 'finitely characterizable' and 'shrinking width' result in our case. The only thing we have to take care of is that any partial learning algorithm which fulfills E2 can be completed to a learning algorithm which still fulfills E2. The PUAC property no longer guarantees that any asymptotically elconsistent algorithm is PUAC. An additional condition is needed. T h e o r e m 4.2.4. An algorithm is called el -e2-stable if for all ~ > 0 a number mo E N exists such that for all m >_rno
Pm(x l 3f, g(dm(f,g,x) <_el A dp(hm(f,x),hm(g,x)) > e2)) ~ 5. A function class Y: is el-consistently PUAC learnable with accuracy 3e2 if some PUAC algorithm with accuracy e2 exists which is el-e2-stable. Conversely, if Yr is el-consistently PUAC learnable with accuracy e2 then any PUAC algorithm is el-3e2-stable. Proof. For any learning algorithm h the following holds:
C U
{x [ Sf, g (dm(f,g,x) <_el A dR(/,g) > 3e2)} {x I 3f, g (Jim(f, g, x) _< el A dp(hm(f, x), hm(g, x)) > e2)} {x I supydp(f,h,~(f,x)) >e2}.
For a PUAC and stable algorithm h the probability of the latter two sets tends to 0. Conversely,
(x [ 3f, g(dm(f,g,x) < el A dp(hm(f,x),hm(g,x)) > 3e2)} c (x I Sf, g (d,.,,(f,g,x) <_ e~ A d p ( f , g ) > e2)}
u {x I sups dP(f,h,-(f,x)) > e~}, therefore the second statement also follows.
[]
The stability criterion requires that small deviations in the input of an algorithm only lead to small deviations in the output function of this algorithm. The stability property is automatically fulfilled for any PUAC algorithm of a function class which has the UCED property because this property guarantees that a small empirical error is representative for the real error. But concept classes exist which are e-consistently PUAC learnable for every accuracy but do not have the UCED property. Consider, for example, the class
4.2 PAC Learnability
69
Y = {f : [0, 1] --~ {0, 1} [ ] is constant 0 almost surely} and the uniform distribution on [0, 1]. The empirical error din(f, 0, x) may be much larger than the real error dR(f, 0), which is always 0. Let us examine the relation between the other terms that have been introduced, too. The following diagram results where el, e2, and e3 are assumed to be positive: ~" is PAC learnable Y is consistently PAC learnable \/e23el such that Y is elconsistently PAC learnable learnable with accuracy e2
Y is PUAC learnable
r
r
Y is consistently PUAC learnable Ve23el such that Y is el-consistently PUAC with accuracy e2 Ve2Ve33e1 s.t. some el-e3 stable learning algorithm PUAC learns Y with accuracy e2
# Y possesses the UCED property All inclusions are strict. Furthermore, PUAC and e-consistent PAC learnability are incomparable terms (which we refer to as (4)). The strictness in (1)-(4) still has to be shown. Note that the counterexamples for the other inequalities have only used concept classes. Additionally, the concept class in Example 4.2.3 is even e-consistently PAC learnable for every 0 < e < 0.5 with any accuracy 0 < el < 1, which implies inequality (3) and one direction of (4). The remaining inequalities follow from the next example.
Example 4.2.4. A function class exists which is PUAC but not e-consistently PAC learnable for any e > 0 and any accuracy 0 < el < 1. Proof. Consider the function class ~" --- Ui~l Yi (AY'i,where Yi = { f : [0, 1] -~ [0,1]If(x) e {0,(1 + e-i)-1}, f is 0 almost surely} ~-'i = { f : [0,1] -~ [0, 1] I f ( x ) e {1,1 - (1 -{- e - i ) - l } , f is 1 almost surely}, and consider the uniform distribution on [0, 1]. These functions are PUAC learnable since the function values uniquely determine whether the function is 0 or 1 almost surely. On the contrary, for any function f , sample x, and e > 0 a function exists with distance 1 and empirical distance smaller than e from f . []
70
4. Learnability
Unfortunately, this example uses a function class with real outputs and the possibility of encoding a function uniquely into a single output value. Of course, this argument cannot be transferred to concepts. Considering concept classes the following example can be constructed: Example 4.2.5. For any positive e a concept class exists which is PUAC but not e-consistently PAC learnable for any accuracy el < 1. Proof. Consider ~" = { f : [0, 1] ~ {0, 1} I f is 1 or (fl[0, 1 - e/2[ is 0 almost surely and f][1 - e/2, 1] is 0)} and the uniform distribution. Since for large samples nearly a fraction (/2 of the example points is contained in [1 - e / 2 , 1], and therefore determine the function with high probability, this concept class is PUAC. On the contrary, we can find a function with empirical distance at most e and real distance 1 from the constant function 1 for any x where only a fraction e of the examples is contained in [1 - e/2, 1]. [] This example does not answer the question as to whether a positive e exists for any concept class which is PUAC learnable such that the class is e-consistently PAC learnable, too. The example only shows that such an e cannot be identical for all concept classes.
4.2.3 Noisy Data The possibility that for function classes the entire function can be encoded in one real output value causes some difficulties in characterizing PAC learnability in general and prohibits an exact characterization of PAC learnable function classes. Of course, in practical applications the real numbers are only presented with a bounded accuracy, due to the bounded computational capacity. Furthermore, the presence of noise is an inherent property of a learning scenario which prohibits the exact encoding of functions. It has been shown that under the presence of noise, the learnability of a function class in the distribution-independent case reduces to the learnability of a finite valued class [7]. The argumentation can immediately be transferred to the distribution-dependent setting as follows: Let us first introduce some noise into the learning scenario.
Definition 4.2.2. A randomized learning algorithm for a function class ~ is a mapping h : U~=I ( X x Y • z ) "~ --~ Jr together with a probability distribution P z on the measurable space Z. A randomized algorithm is PAC with accuracy e and confidence 5 on rn examples if s u p P 'n x P ~ n ( ( x , z ) I d p ( h , n ( f , x , z ) , f ) > e) ~ 5. Icy ~ The definitions of PUAC, consistent, ... transfer to a randomized algorithm in a similar way.
4.2 PAC Learnability
71
The purpose is that, together with the examples, a randomized algorithm can use random patterns t h a t are taken according to Pz. It may, for example, use a tossed coin if two or more functions of the class fit well to the data. At least for finite valued function classes this notation does not lead to a different concept of learnability because randomized PAC learnability is characterized in analogy to simple PAC learnability by the finiteness of the covering number. L e m m a 4.2.1. If a function class J: with outputs in ( 0 , . . . , B} is random-
ized PA C learnable with accuracy e and confidence 5 on m examples then Jr has a finite 2e-covering number. Pro@ The proof is a direct adaption of [132] ([,emma 6.4). As already mentioned, N(e, jz, dR) (_ M(e, ~, dR). Assume that f l , . . . , fk are 2e separated functions and h is a randomized PAC algorithm with accuracy e and confidence 5 on m examples. Define g : { 1 , . . . , k } x X m x Z m x { 0 , . . . , B } m --+ {0, 1},
1 i f d p ( f j , h m ( y , x , z ) ) <_e, g(j,x,z,y) =
0
otherwise,
where h m ( y , x , z ) = h m ( f , x , z ) for any function f with values y on x. For fixed x, z, and y at most one index j exists where g outputs 1, as a consequence k
Z
ye{o ..... B} m
'~ j = l
On the other hand, the above t e r m evaluates as
>
E~=l fx-, fz,, E y e { 0 ..... B}" g(J, x, z, y)dP~'(z)dPm(x) k Y~=, f x , , fz-, g ( j , x , z , f ( x ) ) d P ~ ( z ) d p m ( x ) >- ~ j = ~ (1 - 5)
because of the PAC property. As a consequence, k < (B + 1)m/(1 - 5).
[]
The purpose of the following definition is to deal with training d a t a which are corrupted by some noise. D e f i n i t i o n 4.2.3. A function class Jr is PAC learnable with accuracy e and confidence 6 on m examples with noise from a set of probability distributions
D on X with variance a 2 if a learning algorithm h exists such that sup
sup
pm x Dm((x,n) ldp(hm( f + n,x),f)
> e) _< 5,
]E~" DE'D,var(D) = a 2
where hm(f + n , x ) is a short notation for hm(xl, f ( X y ) + n l , . . . ,xm, f ( x m ) + nfrj ) .
72
4. Learnability
Obviously, learnability of a function class with data which is corrupted by some noise according to a distribution D corresponds to learning exact data with a randomized algorithm which first adds some noise to the data. But one can even show that learning noisy data is almost the same as learning exact d a t a in an appropriately quantized class with a finite output set if the noise fulfills some regularity conditions: The regularity conditions are fulfilled, for example, for Gaussian or uniform noise and demand that the distributions have zero mean, finite variance, and are absolutely continuous, and the corresponding densities have a total variation, which can be bounded in dependence on the variance. In this case, D is called admissible for short. Under these conditions the following has been shown in [7]: Any PAC algorithm which learns ~ with accuracy e and confidence ~ on m examples with noisy data gives rise to a probabilistic PAC algorithm which learns a quantized version ~ a of j r with accuracy 2e and confidence 2~ on m examples. The noise of the algorithm depends on the noise in 19. j r is the class with outputs in { 0 , a , 2 a , . . . , 1 } , which is obtained if all outputs of f E ~" in [ka - a / 2 , k a + a/2[ are identified with ka. a depends on e, ~, D, and the number of examples. Lemma 5 in [7] is stated for distribution-independent PAC learnability. But the proof is carried out for any specific distribution P as well. Now from Lemma 4.2.1 it follows that ~'a is randomized PAC learnable only if it has a finite covering number. Obviously, this leads to a finite covering number of ~', too, since the distance of any function to its quantized version is at most a / 2 . As a consequence, PAC learnability in the distributiondependent setting under realistic conditions, i.e., the presence of noise, implies the finiteness of the covering number. It may happen that we do not know whether the data is noisy, but in any case we are aware of the special form of our algorithm, for example, it may be PUAC and stable. Stability of an algorithm allows some kind of disturbances of the data, and it can indeed be shown that any stable PUAC algorithm learns j r with noisy data, too, if the variance of the noise is limited in some way. The probability distributions D have light tails if positive constants co and So exist such that D(nI[n[ >_ s/2) < coe - ' / ~ for all distributions D E 19 with variance a s and s > soa. This is fulfilled for Gaussian noise, for example. T h e o r e m 4.2.5. Assume that :7: is learnable with a P U A C algorithm with accuracy e2 and confidence ~ on m examples. Assume that the algorithm is el-e2-stable with confidence ~ on m examples. Then g: is P U A C learnable with accuracy 3e2 and confidence 3~ on m examples with noise from a class 19 with light tails and variance a s bounded according to the confidence ~. Proof. We are given D E 19 with variance 0-2 and h as stated in the theorem, define a learning algorithm h on the noisy data as follows: Given a sample (xi, f(xi)+n~)~ the algorithm chooses a sample (xi, fl (xi))i such that d i n ( f + n, f l , x) is minimal for .fl E Jr and takes this exact sample as an input for h. For h the following inequality is valid:
4.2 PAC Learnability
73
fl f2 u ....
.....
f3
...........
f4
Fig. 4.1. Function class with infinite covering number which is PUAC learnable pm x Dm((x, n) [ sup! dp(hm(f + n, x), f) > 3e2) -< Dm(nl ~-']~iIn~l >__mel/2) + Pm(x l 3f dp(hm(f,x), f) > e2) + Pm(x [ 3f, g (din(f, g, x) < el A dp(h,n(f, x), hm(g, x)) > e2)), where the second probability is bounded because of the PUAC property, the third probability is bounded due to the stability of h, and the first probability is bounded for distributions with light tails and sufficiently small variance as follows: D'n(~-']~i Inil > me1/2)
< =
Eo- (4(E Inil)2)/(m2e~) 4/e~. (varcInl I)/~ + E(Im I)=)
<_ 4,~=/(dm) + 4/e~ (.o,~/2 + f.T./~coe-~'/'d:~) 2
= ~r~lei (41m + (so +
coe-'~
9
[] This result implies that the covering number of a function class is finite if the class is e-consistently PUAC learnable. The demand for some robustness of the learning algorithm prohibits an exact encoding of the functions in the output space. On the contrary, learnability at the PUAC level may only be due to the possibility of encoding functions in real values. One example of this method is the following function class:
Example ~.~.6. A function class exists which is PUAC learnable and has an infinite covering number. Proof. Consider the function class {f : [0, 1] -~ [0, 1]l 3l > 1 f = fl}, where 1 - ( l + e - t ) -1 i f x 9 ft(x) =
(1 + e-Z) -1
21_1_1
otherwise
.
l
[2,/2,(2i+1)/21],
74
4. Learnability
(see Fig. 4.1). Consider the uniform distribution. The output values uniquely determine the function such that any consistent algorithm is PUAC but two functions of the class have a distance of at least 0.2. [] Now the question arises as to whether finiteness of the covering number is still guaranteed if we do not consider uniform learnability, and only a tolerance with respect to the inputs is given. The following result shows that even at the level of e-consistent PAC learnability an exact encoding of functions into real values is no longer possible. T h e o r e m 4.2.6. Assume Y: is finitely el-e2-characterizable. Then J: is PAC learnable with accuracy e2 and any noise D, where the support is bounded according to el. Proof. If ~" is finitely characterizable, any probabilistic algorithm with small empirical error is PAC, too, because sup/e.r_ p,n x P~n((X, z) ldp(f, hm(f, x, z)) > e2) <_ supley:P m • P ~ n ( ( x , z ) I d m ( L h ~ ( L x , z),x) > ex) + sup/ey P'~(x r 3g(d,~(f,g,x) < el ^ dp(f,g) > e2)). As already mentioned, learning on noisy data with an algorithm h is the same as learning on exact data with a probabilistic algorithm which first adds some noise to the data and afterwards applies h. If the support of the noise is bounded such that the probability of events with elements contained in { n l l n I > el/2} is 0, then an algorithm h on the noisy data, choosing one function with smallest empirical distance of the training data, gives rise to an el-consistent probabilistic learning algorithm on the exact data. This algorithm as well as the original one is PAC. [] 4.2.4 M o d e l - f r e e Learning Up to now, all results in this chapter have been stated for the modeldependent case where the form of the function that are to be learned is well known and the function itself is contained in Y. If the function that is to be learned is not contained in Y and the special form is unknown, still most of the previous notations and questions make sense in the model-free case. The terms PAC and PUAC have already been expanded to this case, the generalization of the terms 'consistent', 'finitely characterizable', ... to the model-free case are immediate: From all empirical distances we subtract the minimum empirical error Jm (], x), from all real distances we subtract the minimum real distance Jm(f), where f is the function that has to be learned. The only thing we have to take care of is that all functions denoted by .f in the definitions are from the class ~-o of functions that has to be learned, and the functions denoted by g, the approximating functions, are elements of Y.
4.2 PAC Learnability
75
If we scan the theorems of this paragraph it turns out that the proofs which relate PUAC learnability to consistent PUAC learnability no longer hold in the model-free case. They rely on the symmetric role of g and f, which is no longer guaranteed if these functions are taken from different classes. But the argumentation which shows the equivalence of ~l-consistently PAC or PUAC learnability with accuracy ~2 and finitely (1-~2-characterizability or the scaled shrinking width property transfer to the model-free case - provided that we can verify that any partial mapping we constructed in the proofs can be completed to a learning algorithm with several properties. In the special situations this condition reads as the existence of a consistent or asymptotically ~l-consistent learning algorithm. The existence of a consistent algorithm is not guaranteed for a function class where the term Jm(f, x) is an infimum and not a minimum. Therefore the guarantee of any consistent algorithm being PAC or PUAC does not imply anything in this case. But here, the scale sensitive versions of the above terms turn out to be useful because obviously a nearly consistent learning algorithm can be found in any case.
4.2.5 Dealing with Infinite Capacity We conclude the discussion of learnability with a generalization of the two approaches to function classes that allow a stratification of the learning problem, as mentioned in the previous section. First of all, we consider a class of functions ~- with inputs in X = I.Ji~0 Xi where Xi C Xi+l for the subspaces Xi. Denote the probability of Xi by pi and the probability measure which is induced on Xi by Pi. Denote the restriction of ~ to inputs Xi by ~'i and the restriction of one single function by f[Xi. First of all, Jr is PAC learnable if and, in the case of noisy data, only if the covering number of jr is finite. A useful criterion to test this finiteness is to bound the capacity of the single classes ~'i. This method uses the natural stratification of Jr via the input set and leads to a weaker condition than the finiteness of the VC- or pseudo-dimension of the whole class ~-. One can derive explicit bounds as follows: T h e o r e m 4.2.7. Take i such thatpi > 1-~. Assume that VC(Jri) or PS(Yri) is finite. Then yr is PA C learnable with accuracy ~ and confidence 5 with an algorithm which uses 32(1n4
(
4epi In k ( + Pi -
examples if .T is a concept class and
8(ln 2
In (
4epi +~/~- 1
ln(~ 4epi
76
4. Leaxnability
examples if .7: is a function class. This is polynomial in lie and 1/6 if p ( X \ X i ) < d7~~ .for di = l;C(Y:i) in the concept case and di = PS(Y:i) in the function ease and some positive 8. Proof. For f , g E jr we find
dp(.f,g) < (1 -pi) +pidp,(.flX~,glXO, therefore
M(,,.T, dp) < M (e + P_A- a,jri,dp,) \ Pi for e > 1 - pi. This term is bounded by 2 ( e +2epipi- 1 l n ( e +-~i-2ep'1 ) ) a' with d i = ])C(Jri) or d i = 79S(Jri), respectively. The minimum risk algorithm is PAC and takes the number of examples as stated above [132] (Theorems 6.3 and 6.4). These numbers are polynomial in 1/e and 1/6 ifdi is polynomial, too, which is fulfilled ifpi > 1 - (di) -a for some f~ > 0 because this inequality implies Pi > 1 - ~ for all di > e -1/~. 1~ If we want to use an arbitrary algorithm which minimizes the empirical error instead of the minimum risk algorithm we can use the following result as a guarantee for the generalization ability. T h e o r e m 4.2.8. Any function class if:, where all J: i have finite VC- or ,fat shattering dimension, respectively, has the UCEM property.
Proof. The empirical covering number of ~- can be bounded as follows: < <
E p - (lg N(e, ~'[x, d,,))/m (P(at most m(1 - e/2) points are contained in Xi) lg (2/e) m + Ep~-.(lg M(e/2, ~-,[x, d,~)))/m lg(2/e) 9 16pi(1 - Pi)l(m~ 2) + Ep-, (lg N(el4, J:dx, d,.))/m
for Pi _> 1 - e/4 due to the Tschebyschev inequality. For a concept class the last term in the sum is bounded by 1 (1 + 1;C(~'i) lg ( ~ - In ( ~ - ) ) )
9
For a function class an upper bound is l"(llg+(d4i"lln- (- ~6 -m/ ~' ~em
e2
]))
with di = fut~/ls(:Fi). Obviously, these terms tend to 0 for m ~ oo.
[]
These arguments ensure that with some prior knowledge about the probability distribution even the UCEM property of ~" can be guaranteed if only the capacity of the single subclasses ~'i is restricted.
4.2 PAC Learnability
77
In contrast, the luckiness framework allows us to introduce an arbitrary hierarchy which may even depend on the actual training data. Furthermore, no prior knowledge about the membership in one subclass of the hierarchy is needed. The fact that the learning scenario belongs to a certain subclass can be decided a posteriori and is substituted only by some confidence about the expected difficulty of the learning task. Assume that ~" is a [0, 1J-valued function class with inputs in X and L : X'~ x ~ ' - + R + is a function, the so-called luckiness function. This function measures some quantity, which allows a stratification of the entire function class into subclasses with some finite capacity. If L outputs a small value, we are lucky in the sense that the concrete output function of a learning algorithm is contained in a subclass of small capacity which needs only few examples for correct generalization. Define the corresponding function
l : X m • jr • ~ + ~ R + ,
l(x, i , o) = l{glx l g e Jr, L ( x , g) >_ L ( x , s }~l,
which measures the number of functions that are at least as lucky as f on x. Note that we are dealing with real outputs, consequently a quantization according to some value o of the outputs is necessary to ensure the finiteness of the above number: As defined earlier, ~-a refers to the function class with outputs in {0, o, 2 o , . . . } , which is obtained if all outputs of f E ~ in [ko 0/2, k o + 0 / 2 [ are identified with ko for k E N. The luckiness function L is smooth with respect to r1 and ~, which are both mappings 1~ x (R +)3 --r R+ if P 2 ' n ( x y I 3g E ~-Vx'y' C , x y l ( x ' y ' , g , a ) > ~ ( m , L ( x , g ) , g , a ) )
<_ ~,
where x ' y ' C , x y indicates that a fraction r / = r/(m, L(x, g), ~, a) is deleted in x and in y to obtain x' and y'. This is a stronger condition than the smoothness requirement in [113] because the consideration is not restricted to functions g that coincide on x. Since we want to get results for learning algorithms with small empirical error, but which are not necessarily consistent, this generalized possibility of estimating the luckiness of a double sample knowing only the first half is appropriate in our case. Now in analogy to [113] we can state the following theorem which guarantees some kind of UCEM property if the situation has turned out to be lucky in a concrete learning task. T h e o r e m 4.2.9. Suppose p~ (i E N) are positive numbers with ~':.i pi = 1,
and L is a luckiness function for a class if: which is smooth with respect to T! and ~. Then the inequality infinf / E .~"P"(x I u ( ~ ( m , L ( x , h r , ( f , x ) ) , 6 , a) < 2/+1 Id,~(.f, h m ( L x ) , x ) - d p ( L h . ~ ( Y , x ) ) l < e ( m , i , ~ , a ) ) ) > 1 - 5
78
4. Learnability
is valid for any learning algorithm h, real values 5, a > O, and 2 7 / I n ( m ) + - -1 ( ( i + 3 ) l n 2 + in 4_~
e(m,i,5, a ) = 4 a + 8 ~ } + 4
m
pib]
where y = zl(m, L(x, hm(f , x)), (pi5)/4, a). Proof. For any f E Jr we can bound the probability p m ( x [ qh3i (~(m, L(x, h), 5, a) < 2 i+1 ^ [din(f, h, x) - dR(f, h)[ > e)) <_ 2p2m(xy [ qh3i (~(m,L(x,h),5, a) <_2 i+1 ^ [ d , ~ ( / , h , x ) - d , , ( f , h , y ) [ > e/2)) for m _> 2/e 2 which is fulfilled for e as defined above [132] (Theorem 5.7, step 1). It is sufficient to bound the probability of the latter set for each single i by (pi5)/2. Intersecting such a set for a single i with a set that occurs at the definition of the smoothness of I and its complement, respectively, we obtain the bound 2p2m(xy [ Sx'y' C , x y Bh
(l(x'y',h, ct) <_ 2 ~+1 A [dm(f,h,x) - [lm(f,h,y)[ > e/2)) + where 7/= ~/(m, L(x, h), (pi5)/4, a). Denote the above event by A. Consider the uniform distribution U on the group of permutations in { 1 , . . . , 2m} that only swap elements j and j + m for some j. It can be found that
pzr.(xy [ x y q A)
= f x ~ U(a [ ( x y ) " E A)dP2"(xy) _< SUpxy U(a I (xY) ~ e A), where x ~ is the vector obtained by applying the permutation a [132] (Theorem 5.7, step 2). The latter probability can be bounded by SUPxy ~'~.x,y, Cnxy U(o" [ ~]h (l((x'y') a, h, a) _< 2i+lA
[m'dm,(fa,ha,x') - m'dm,(fa,ha,y')[ > re(e~2 - 2a - 27/))), where m' = m(1 - r/) is the length of x' and y'. fa denotes the quantized version of f, where outputs in [ka - a/2, ka + a/2[ are identified with ka. Now denote the event of which the probability U is measured by B, and define equivalence classes C on the permutations such that two permutations belong to the same class if they map all indices to the same values unless x' and y' both contain this index. We find that
U(al(x'y') ~ e B)
=
<
~'~c P(C)U({a [ (x'Y') ~ E B}[C) supc U({a I (x'Y') a E B}IC).
If we restrict the events to C we definitely consider only permutations which swap elements in x' and y' such that we can bound the latter probability by
4.3 Bounds on the VC-Dimension of Folding Networks 2 '+1 suph
I Im'dm, (fa, ha, (x') ") -
79
m'd,n, (fa, ha, (Y')a)l > m(e/2
-
2a
-
2y))
where U r denotes the uniform distribution on the swappings of the common indices of x t and y~. The latter probability can be bounded using Hoeffding's inequality for random variables z with values in {+(error on x ~ - error on y~)} by the term
2e-m(~/2-2a-4~)2/(2(l-~)).
In total, we can therefore obtain the desired bound if we choose e such that '~ >_ 2 ( m ~ 22i+12e_m(~/2_2a_oT)2/(2(i_,)) \qm]
,
which is fulfilled for e_>4a+8~?+4 l~-~-~-i
(i+3)ln2m + 2~?ln(m) + ln(4/(p~J)) /'a []
Note that the bound e tends to 0 for m -~ oo if a and 77are decreasing, 77in such a way that ~?Inm becomes small. Furthermore, we have obtained bounds on the difference between the real and empirical error instead of dealing only with consistent algorithms as in [113]. We have considered functions instead of concept classes which cause an increase in the bound by a due to the quantization, and a decrease in the convergence because we have used Hoeffding's inequality in the function case. Furthermore, a dual formulation with an unluckiness function L ~ is possible, too. This corresponds to a substitution of _< by > in the definition of l; the other formulas hold in the same manner.
4.3 Bounds
on the VC-Dimension
of Folding Networks
The VC- or fat shattering dimension plays a key role for the concrete computation of the various bounds, hence we first want to estimate these dimensions for folding networks.
4.3.1 Technical Details At the beginning we prove some lemmata showing that several restrictions of the network architecture do not alter the dimension by more than a constant factor.
80
4. Learnability
L e m m a 4.3.1. Let h o l y be a recursive architecture where h and g are mul-
tilayered feed-forward architectures. Then an architecture ]y, exists with only one layer and the same number of weights and computation neurons such that it has the same behavior as the original one if a time delay according to the number of layers is introduced to the inputs9 Proof. The idea of the proof is simple: Instead of computing the activation of several layers in a multilayer feed-forward architecture in one time step we compute the activation in several recursive steps such that the activation of only one layer is to be computed at each time step (see Fig. 4.2 as an example) 9 Denote the number of neurons in the single computation layers by n + k. l, i l , . . . , 2hg,l, Z l , ' " , ""~ i~h~, m, where hh is the number of hidden layers in h, and hg is the number of hidden layers in g. Instead of previously k. l context neurons we now take k. (l + il + . . . + iha + it + - " + i~hh+ m) context neurons such that a corresponding neuron in the context layer of fy, exists for each computation neuron in h o ~y. The connections in fy, directly correspond to the connections in h o ~y. To be more precise, denote the context neurons in the input layer of fy, by n~ (i = 1 , . . . , 1 + i l + . . . + m , j = 1 , . . . , k , one can find k sets of context neurons according to the fan-out of the input trees), and the corresponding neurons in the computational part by n~. Then the connections of the k sets of neurons n~ . . . . , n~ and of the input neurons of the entire architecture to the neurons n~+l, . . . , n~+il correspond to the connections in the first layer of g, the connections between n~+l, - . . , nlt+il to n~t+i~+l, . . . , n~+i~+i 2 correspond to the connections in the second layer of g, analogous for the layers 3 to hg in g, n lI_ b i l _ b . . . h _ i h a _ l h _ l , . . 9 , n lh_ilh_...Tiaa 1 are connected to n 1~, .. ., n t~ corresponding to the last layer in g, n~, .. ., n~ are connected to n~+i~+...+i% +1, . - . , n~t+il+...+i~o +i t corresponding to layer 1 in h, 1 1 nlWQ+...+iha+l, 9 ' nlWQW...+iha+i~ are connected to n I +t i l " ~ - . . , + i h g ~-itl + 1 ' " " " ' n~+ia+...+ih,+i,l+i,~ corresponding to layer 2 in h, analogous for the layers 3, . .
ha in h, n~+il+...+iaa+i~+...+ikh_l+x, ...
9
1
, nl+il+...+iha
+i~ +...+ik
h are
con-
nected to the outputs n~+il +...+ih a +i~ +...+i'ha + 1 , " ' , n~+i~+...+ia a +r +...+ik h +m corresponding to the last layer in h. All other connections are 0. See Fig. 4.2 as an example. Obviously, the number of computation neurons and connections are identical in j~, and h o .qy-Furthermore, ,~, has the same behavior as the original function, provided that the concrete weight values are the same, y~ = ( y , 0 , . . . ,0), and a time delay is introduced to the inputs. This means that any subtree a ( t l , . . . , t k ) in an input tree is recursively substituted by a(... a(tx,... , t k ) . . . ) and afterwards, the entire tree a ( t l , . . . , tk) is substiha
4.3 Bounds on the VC-Dimension of Folding Networks
81
() 7
I . I . r
.
.
.
. .
.
.
.
. .
. .
.
. .
.
. .
. .
. .
. .
. .
.
. .
.
.
.
.
.
.
.
.
. .
.
. .
. .
. .
. .
.
. .
.
.
. I
~ I
I I
'6
1
)DO000 . . . . . . . . . . . . . . . . . .
L
. . . .
-! . . . . . . . . . .
I
I
I I
.4 . . . . . . . . . . . . . . . . . . . . .
L
. . . . . . . . . . . . . . . .
I
Fig. 4.2. Simulating hidden layer in a folding architecture: Instead of the neurons in the single layers, several new context neurons are introduced; the links from the original network connect the context neurons. tuted by a ( . . . a ( t l , . . . , t k ) . . . ) . The delay in the single subtrees just enables hh-F1
us to perform one computation step in g via h 9 recursive computation steps in ]y,, the delay at the root enables us to perform the final computation in h via hh + 1 recursive steps in ]y,. rq As a consequence of this argumentation a restriction of the architecture to one layer only leads to a decrease in the various dimensions by a constant factor compared to a multilayered architecture. Due to the time delay the input length has to be substituted by a multiple of this value. Note that this argumentation transfers to arbitrary architectures, too, if the activation function is capable of approximating the identity in an appropriate way. Then an architecture with arbitrary connection structure can be approximated on each finite set by a layered architecture, which can be simulated as described above. A second observation is that we can drop all biases, too. L e m m a 4.3.2. Given a bias vector O, any architecture h o ~?y can be simulated by an architecture with bias vector O. This requires one additional input neuron which receives constant input 1, one additional context neuron, and an additional number of weights corresponding to the biases. Proof. It is well known that biases in feed-forward architectures can be simulated with a connection to an additional input neuron which receives constant input x E R different from 0. Just choose the additional weight wi of neuron
82
4. Learnability
i as (8~ - 9 i ) / x for any bias 0~. We can proceed in the same way to simulate biases in the recursive part ~y. However, since direct connections from an input neuron to neurons in h are not allowed by definition we have to add one additional output neuron to g, whose only predecessor is the additional input neuron. This neuron stores a constant value and can play the role of an additional constant neuron for part h to simulate the biases in h. [] The same is valid for the initial context: L e m m a 4.3.3. Given an initial context yr, any architecture h O~y where the components of the initial context y are restricted to elements of the range of the activation function of the outputs of g can be simulated by an architecture with initial context yr. The simulating architecture requires an additional input neuron, additional weights according to y, and an expansion of the input trees by one time step. Proof. We expand the input dimension by 1. In all input trees for h o ~y the input labels are expanded by one component 0 for the additional input neuron. Furthermore, all leaves a are substituted by a ( 0 , . . . , 0, 1). This additional time step just writes the images of the additional weights plus a term depending on the bias into the context neurons. If the components of y are contained in the range of the activation function the additional weights can be chosen such that this first computed context vector coincides with y. I-1 This argumentation enables us to expand any input tree to an equivalent tree with full range of some fixed height t. Here full range means that in levels 1 to t all labels are filled with vectors and empty subtrees can only be found at the level t + 1. We will use this fact to argue that we can restrict the VC-analysis of recurrent architectures and inputs with restricted height to the analysis of inputs of one fixed structure, and therefore the analysis of one feed-forward architecture which is obtained by unfolding. L e m m a 4.3.4. Any architecture h o l y can be modified to an architecture with the same behavior such that any input o] height at most t can be expanded to an equivalent input of height exactly t with full range. The number of weights and neurons is o] the same order. Proof. First, we assume that the activation fulfills a(0) = 0. We can modify the architecture such that the biases and the initial context are 0. Then any expansion of an input tree by subtrees with labels 0 does not change the output of the tree. If a(0) r 0 we fix the biases of the neurons in the previous architecture to - p a ( 0 ) , where p is the number of predecessors of the corresponding neuron, not counting the input neurons of the architecture, and we fix the initial context to 0. Then we expand any input tree with entries 0 to a tree of full range. The bias corresponds to a translation of the activation function such that 0 is mapped to 0, with the only difficulty that at the first computation
4.3 Bounds on the VC-Dimension of Folding Networks
83
step the neurons in the first layer receive the additional activation - p a ( 0 ) instead of 0. This difficulty can be overcome by introducing one additional input unit with a constant weight pa(0) to all neurons in the first layer. This input receives an activation 1 at the leaves of the expanded input tree and an activation 0 at all other time steps. [] It follows from these technical lemmata that several restrictions of the architecture do not affect the VC-dimension by more than a constant factor. Furthermore, the last argumentation enables us to restrict the consideration to inputs of a fixed structure if we want to estimate the VC-dimension of an architecture with inputs of a limited height. The possibility of comparing the outputs with arbitrary values instead of 0 in the definition of the pseudo-dimension causes some difficulties in obtaining upper bounds and can in fact simply be dropped because of the equality P $ ( Y ) -- VC(~'e), where ~'e = { r e : X x R -4 {O, 1 } [ f e ( x , y ) = H ( f ( x ) - y) for some f E Y} [86]. At least for standard architectures the consideration of ~-e instead of ~- only causes a slight constant increase in the number of parameters. For a concept class which is parameterized by a set Y, that means ~" = {f~ : X -4 {0,1} ] y E Y}, we can define the dual class ~ v : ( f . ( x ) : Y -4 { 0 , 1 } , ( f . ( x ) ) ( y ) = f y ( x ) l x e X } . It is well known that ];C(~-) > [lgl;C(~'v)J [76]. Since neural architectures are parameterized by the weights, this fact is a useful argument to derive bounds for the VCdimension of neural architectures. The same argumentation as in [76] shows that an analogous inequality is valid for uniform versions of the pseudodimension and fat shattering dimension, too. Here uniform means that in the definition of the fat shattering dimension all reference points for the classification are required to be equal for the points that are shattered. Obviously, this can only decrease the fat shattering dimension. The inequality for the fat shattering dimension is not valid for the nonuniform version, as can be seen in the following example. E x a m p l e 4.3.1. A function class ~" exists with lg~ata(.~)J > fata(~'v). Proof. Consider the sets X = { i / n [ i = -n+ 1,...,n1} and Y = { - 1 , 1 } 2n-1 and function F : X x Y -4 [-1,1], ( i / n , e l , . . . , e 2 , _ l ) ~-~ (2i + e i ) / ( 2 n ) . F induces the classes .7" = { F v : X -4 [ - 1 , 1] [ y E Y ) with .rata(Y) = 2n - 1 and .T v = {Fz : Y -4 [ - 1 , 1][ x E X} with f a t a ( ~ "v) = 1 for a < 1/(2n). []
4.3.2 E s t i m a t i o n o f t h e V C - D i m e n s i o n We first derive upper bounds for the various dimensions which measure the capacity of recurrent and folding architectures. In a first step all inputs are expanded to input trees with full range and height exactly t to obtain upper
84
4. Learnability
bounds on the VC- or pseudo-dimension of folding architectures with N neurons, W weights including biases, and inputs of height at most t. Afterwards, the dimensions of the corresponding feed-forward architectures obtained via unfolding are limited. This unfolding technique has already been applied by Koiran and Sontag to simple recurrent architectures [70]. For folding architectures with a fan-out k _> 2 and activation function a the following bounds can be obtained for d = 1)C(JzIXt) or PS(:7:IXt), respectively, if Xt denotes the set of input trees of height at most t:
I O(ktWln(ktW)) O(W2k2t N 2) O(WNk t lnq + Wthlnd) d= O(W ln(th) )
ifa if a if a q is d is and if a
= H, = sgd, is piecewise polynomial, the maximum number of pieces, the maximum degree, h is the depth of the network, is linear, h denotes the depth.
For large t the bound in the case a = H can be improved with the same techniques as in [70]. T h e o r e m 4.3.1. I:C(~TXt )
= O ( N W k + W t l n k + W l n W ) i.fa = H, k > 2.
Proof. We assume that the architecture consists of only one recursive layer; maybe we have to introduce a time delay as constructed in Lemma 4.3.1. Assume that s inputs of height at most t are shattered. Then the different transition functions transforming an input and context to a new context, which each correspond to a recursive network, act on a domain of size D = 2Nk 9skt (= different activations of the context neurons 9 maximum number of different inputs). Note that the latter term is not affected by a time delay. Each neuron with W, weights implements at most 2D W~ different mappings on the input domain [132] (Theorem 4.1, Example 4.3), consequently, the entire hidden layer implements at most 2ND W different transition functions. Now s inputs are shattered, therefore 2 R < 2N(2Nkskt) W, which leads to the bound s = O(NWk + tWIn k + W I n W). f-] For large t this is a better bound than the previous one since the VCdimension only grows linearly with increasing height. In particular, the same argumentation shows that for inputs from a finite alphabet 2Y the VCdimension is finite, too, since in this case the term sk t, which describes the different inputs, reduces to the term I•l Nh.
If the size of the input alphabet E is finite, the bound VC(Jz) = O(NkW(ln IEI + 1)) is valid for a = H. C o r o l l a r y 4.3.1.
But unfortunately, the bounds depend in most interesting cases on the maximum input height t and therefore become infinite if the input set consists of trees with unbounded height. The lower bounds on the VC-dimension obtained by Koiran and Sontag [70] indeed show that the dependence of these
4.3 Bounds on the VC-Dimension of Folding Networks
85
bounds on t is necessary, and the order of the dependence corresponding to t is tight in the case k = 1, ~r = H. But since Koiran and Sontag restrict their considerations to networks with only one input neuron the lower bounds can be slightly improved for general networks. T h e o r e m 4.3.2. /f a = H and k = 1 then ~C(Y:lXt) = I 2 ( W ln(Wt)).
Proof. In [70] a recurrent architecture shattering f l ( W l n ( t / W ) )
points is constructed, in [82] the existence of a feed-forward architecture shattering I2(W In W) points is shown. Any feed-forward architecture can be seen as a special recurrent network where the context units have no influence on the computation, and the corresponding weights are 0. We can take one architecture with W weights shattering I2(W l n ( t / W ) ) points, two architectures each with Wweights and shattering I2(W In W) points and combine them in a single architecture with 3W + 28 weights which shatters JT(W ln(tW)) points as desired. The technical details of the combination are described in the next lemma. [] L e m m a 4.3.5. Forfeed-forward architectures f l : lRnl+k _~ {0, 1}, . . . , f t : /Rnr+k ..4 {0, 1}. with w l , . . . , wl weights such that the corresponding induced architectures f~ shatter ti input trees, a folding architecture with wl + . . . + wt + 91 + 1 weights can be constructed shattering tl + ... + tt input trees of the same height as before. The architecture consists of a combination of the single architectures with some additional neurons which are equipped with the perceptron activation function.
Proof. Here we denote the architectures by the symbols of a typical function which they compute. Define f : R nl+'''+n'+x+kt ~ {0, 1} t,
f(Xl,...,371,Z, yl,..-,yl)
:
(fX(zl,yl) A x = b l , . . . , f t(xt,yt) A x = bt) for palrwise different bi G R. f can be implemented with the previous wl + ... + wt weights and 81 additional weights in 3l additional perceptron units. Now if we expand each label a in a tree t of the ti trees that are shattered by ]d to a vector ( 0 , . . . , 0 , a, 0 , . . . , 0 , bi), the induced architecture nl+...-~-ni-i
ni+l+...+nl
fo computes the output ( 0 , . . . , rio(t),..., 0). Consequently, the folding architecture which combines fo with an OR-connection shatters the tl + ... + tt input trees obtained by expanding the inputs shattered by the single architectures f l , . . . , ft in the above way. The height of the trees remains the same, and this architecture has wa + . . . + wt + 91 + 1 weights. See Fig. 4.3. [] Obviously, the lower bounds for k = 1 hold for the cases k _> 2, too, but except for a linear or polynomial activation function, the upper bounds differ from these lower bounds by an exponential term in t. We can improve the lower bound for the perceptron activation function as follows:
86
4. Learnability
- - - ~ s u ~ ~
.....
-~. . . . .
~
NN~ /
indicator
~//~~t2
~--
~. . . . .
_A____
Fig. 4.3. Combining architectures to a single architecture with corresponding VCdimension T h e o r e m 4.3.3. /] a = H and k >_ 2 then ~2C(~IX~ ) = f l ( W t + W I n W).
Pro@ We assume k = 2, the generalization to k > 2 follows directly. We write the 2t-1 dichotomies of t - 1 numbers as lines in a matrix and denote the ith column by ei. ei gives rise to a tree of height t as follows: In the 2t-x leaves we write the components of ei + (2, 4, 6 , . . . , 2t); the other labels are filled with 0. The function g : {0, 1} 1+2 --+ {0, 1}, g(xl,x2,x3) = (xt E [2j + 1/2, 2j +3/2]) Vx2 Vx3 can be computed by a feed-forward network with activation function H with 11 weights including biases. 9 induces a function go which computes just the component j of ei for the input tree corresponding to ei. Since any dichotomy of the t - 1 trees can be found as a line of the matrix with columns ei the trees corresponding to the vectors ei can be shattered by the corresponding architecture. For any w E N a feed-forward architecture h exists with activation function H and w weights which shatter f2(w In w) points [82]. We can trivially expand the input dimension of this architecture by 2 such that - denoting this architecture by the same symbol - the induced mapping h0 coincides with the original mapping h on input trees of height one. We want to combine several copies of go and h0 such that all sets shattered by each single architecture are shattered by only one new architecture. Using Lemma 4.3.5 we obtain a folding architecture which shatters # ( W t + W In W) trees of height at most t and which has no more than W parameters if we combine [(W - 20)/40J architectures induced by g each shattering t - 1 trees of height t and one architecture induced by h with [W/2J weights shattering J2(W In W) points. Note that the input dimension could be reduced because
4.3 Bounds on the VC-Dimension of Folding Networks
87
the shattered inputs have labels equal to 0 at most places. In fact, the architectures corresponding to t)0 could share their inputs. [] Note that the order of this bound is tight for t > W. In the sigmoidal case we are unfortunately only aware of a slight improvement of the lower bound such that it still differs from the upper bound by an exponential term. T h e o r e m 4.3.4. For k > 2 and a = sgd an architecture exists shattering
J2(t~W) points. Proof. As before we restrict the argumentation to the case k = 2. Consider the t(t + 1)/2 trees of depth t + 1 in (R~)~ which contain all binary numbers 0 . . . 0 to 1 . . . 1 of length t in the first component of the labels t
t
of the leaves, all binary numbers of length t - 1 in the labels of the next layer, . . . , the numbers 0 and 1 in the first layer, and the number 0 in the root. In the tree tij (i E { 1 , . . . , t } , j E { 1 , . . . , i } ) the second component of the labels is 0 for all except one layer i + 1 where it is 1 at all labels where the already defined coefficient has a 1 as the j t h digit, t2,1 is the tree (0,0)((0,0)((00,0), (01,0)), (1,0)((10, 1), (11, 1))) if the depth t + 1 is 3, for example. The purpose of this definition is that the coefficients which enumerate all binary strings are used to extract the bits number 1, . . . , t ( t + 1)/2 in an efficient way from the context vector: We can simply compare the context with these numbers. If the first bits correspond, we cut this prefix by subtracting the number from the context and obtain the next bits for the next iteration step. The other coefficient of the labels specify the digit of the context vector which is responsible for the input tree tij, namely the 1 + . . . + i - 1 + j t h digit. With these definitions a recursive architecture can be constructed which just outputs for an input tij the responsible bit of the initial context and therefore shatters these trees by an appropriate choice of the initial context. To be more precise, the architecture is induced by the mapping f : IR2+6 -~
f ( X l , X2, Y l , Y2, Y3, Z l , Z2,
Z3) =
(max{lylyZ-ZlE[0,1[ 9(YIY2 -- Xl), lzlz:-zlE[0,1[ " (ZlZ2 -- Xl)}, 0.1 9Y2,Y3 V z3 V (x2 A YIY2 -- X l ~- [0, 1D V (x 2 A Z l Z 2 - x 1 ~_ [0,1D), which leads to a mapping which computes in the third component the responsible bit of y for tij with an initial context (y = O.yly2 ... Yt(t-1)/2, (10) t, 0). The role of the first context neuron is to store the remaining bits of the initial context, at each recursive computation step the context is shifted by multiplying it by Y2 and dropping the first bits by subtracting an appropriate label of the tree in the corresponding layer. The second context neuron computes the value 10 height of the remaining tree. Of course, we can substitute this value by a scaled version which is contained in the range of sgd. The third context neuron stores the bit responsible for tij: To obtain an output 1 the first bits
88
4. Learnability
))
( ~ perceptron activation ",
O0 input context
input
context
0 input
~ d d
',";', identity default link: 1 default bias: 0 " link: 0 recurrent link
context
Fig. 4.4. Networks with infinite VC-dimension. Left: Sigmoidal network. Middle: Perceptron network. Right: Recurrent cascade correlation network of an appropriate context have to coincide with a binary number which has an entry 1 at the position that is responsible for the tree. This position is indicated by x2. f can be approximated arbitrarily well by an architecture with the sigmoidal activation function with a fixed number of neurons. It shatters t(t + 1)/2 trees. By first simulating the initial context with additional weights, as described in Lemma 4.3.3, and adding W of these architectures, as described in Lemma 4.3.5 where we approximate the perceptron activation by the sigmoidal function, we obtain a folding architecture shattering 12(t2W) trees. [] The same argumentation holds for any activation function which can approximate the perceptron activation function, the identity, and square activation function; for example, a piecewise polynomial function with two constant pieces and at least one quadratic piece. Some aspects occurring in the constructions of the other lower bounds for recurrent networks are worth mentioning: The network depicted in Fig. 4.4 on the left with starting vector 0.yl ... y , l - 0.5, Yi E {0, 5} computes the output 0.yi .. 9y , 1 - 0 . 5 in the ith recurrent step. Therefore the corresponding sigmoidal recurrent architecture with only 3 computation neurons, which approximates the perceptron activation by sgd(x/e) (e -+ oo), and the identity by (sgd(ex) - 0.5)/(e" sgd'(0)) (e - r 0), shatters any input set where each input has a different height. This input set occurs if the input patterns are taken from one time series so that the context length increases for any new pattern. T h a t is, in the sigmoidal case very small architectures shatter inputs which occur in practical learning tasks. In the case of the perceptron activation function the architectures with infinite VC-dimension are also very small, but the set that is shattered stores the possible dichotomies in the sequence entries, that is, it has a special form and does not necessarily occur in practical learning tasks. The same is valid if we argue with the dual VC-dimension. But nevertheless, an argumentation with the dual VC-dimension can shed some light on other aspects: The class ~" = {]a : R* - r {0, 1} [ f~(x) = 1 r162 a occurs in the sequence x, a E IR}
4.3 Bounds on the VC-Dimension of Folding Networks
89
can be implemented by a recurrent perceptron architecture with 4 computation neurons, as depicted in Fig. 4.4 in the center. We can consider ~as a function class which is parameterized by a. jrv restricted to inputs of length t has VC-dimension ~ ( t ) . Consequently, recurrent networks with 4 computation neurons and perceptron activation function have VC-dimension ~ ( l n t). Note that we need only one self-recurrence, therefore the same construction works for a recurrent cascade architecture [33]. In a recurrent cascade network the feed-forward function h may depend on the inputs, too. The recurrent part does not contain hidden neurons. The function computing the state si(t) at time t of the ith context neuron has the form a(gi(si(t - 1), 8i-1 ( t ) , . . . , sl(t), input)) with a linear function gi and an activation function a (see Fig. 4.4 on the right). It is quite natural to assume that a function class used for time series prediction, where the last few values are not the important ones and a moving window technique and standard feed-forward networks are not applicable, can at least tell whether a special input has occurred or not. As a consequence any appropriate function class in this learning scenario contains the class ~we defined above as a subclass - and has therefore infinite VC-dimension. In other words, even very restricted function classes lead to an infinite VCdimension due to the recursive structure of the inputs. 4.3.3 Lower Bounds
on
fat~(~)
One solution of this dilemma can be to restrict the number of different possible events to a finite number which leads to a finite VC-dimension in the perceptron case, as already mentioned. In the sigmoidal case we can try another escape from the dilemma: As stated earlier, the interesting quantity in learning tasks concerning real valued functions is the fat shattering dimension and not the pseudo-dimension. The fat shattering dimension is limited by the pseudo-dimension, therefore the upper bounds transfer to this case. But function classes exist with infinite pseudo-dimension and finite fat shattering dimension which are efficiently used as a learning tool, for example, in boosting techniques [44]. For arbitrary weights the fat shattering dimension of networks coincides in fact with the pseudo-dimension - we can simply scale the output weights to obtain an arbitrary classification margin. We therefore restrict the weights. In a learning procedure this is implemented using weight decay, for example. At least in the linear case it is necessary to restrict the inputs, too, since a division of the weights by a real number x corresponds to a scaling of the input at time i by x t-i+1 if the maximum length is t and the initial context is 0 in recurrent networks. Of course, in the perceptron case a restriction of the weights and inputs has no effect since the test as to whether a unit activation is > 0 is computed with arbitrary precision. This assumption may be unrealistic and is dropped if we approximate the perceptron function by a continuous one.
90
4. Learnability
th I
f
I
9 input Fig. 4.5. Left: sigmoidal network with infinite fat shattering dimension. Right: function leading to an infinite fat shattering dimension if applied recursively to an appropriate initial context. The constructions leading to the lower bounds use an approximation process in the sigmoidal case resulting in unlimited weights, and they use arbitrarily growing inputs in the linear case [27, 70]. Now we assume that the inputs are taken from [-1, 1] and the weights are absolutely bounded by B. We fix the parameter for the fat shattering as e > 0. Unfortunately one can find other architectures with limited weights and inputs, but infinite fat shattering dimension for unlimited input length t. T h e o r e m 4.3.5. The e-fat shattering dimension for e < 0.4 of sigmoidal recurrent networks with three computation neurons, inputs in [-1, 1], and weights absolutely bounded by B > 30, is at least linear in the length of the
input sequences. Proof. The mapping f ( x ) = 1 - 2. sgd(15x - 10) - 2. s g d ( - 1 5 x - 10) maps the interval [-0.9, -0.4] to an interval containing [-0.9, 0.9], and in the same way [0.4, 0.9]. Therefore for any sequence el,. 9 en of signs we can find a real value y E [-1, 1] so that f i ( y ) E
[[0.4, - 0 . 90.9] ,-0.4]
ifei if ei = = + -
g
We start with
a value according to the sign of en. Afterwards, we recursively choose one of the at least two inverse images of this value. One sign of the inverse images corresponds to the preceding ei. The iterated mapping f can be simulated by a recurrent network with 2 context neurons, as shown in Fig. 4.5. All weights and inputs are bounded and any set of sequences of different height can be shattered by the corresponding architecture since we can choose the signs ei in the procedure to find the starting vector y according to the dichotomy. In the network in Fig. 4.5 the initial context is changed to, e.g., ((1 - y)/2, 0) instead of y which is still bounded by B. [::] For W weights a lower bound f l ( t W ) can be derived by composing a constant fraction of W of such architectures, as described in Lemma 4.3.5. Since the equality which is used in the proof of 4.3.5 is to be approximated with a sigmoidal network, only a finite number of networks can be combined in such a way without increasing the weights.
4.3 Bounds on the VC-Dimension of Folding Networks
91
Since only the length of the input sequences is important the shattered set occurs in practical learning tasks as before. Furthermore, the same argumentation is valid for any function where a linear combination leads to a similar graph as in Fig. 4.5, with the property that a positive and a negative interval P and N exist with N tJ P C f ( N ) fl f ( P ) . In particular, this argumentation holds with possibly different weight restrictions for any activation function which is locally C 2 with a nonvanishing second derivative. It seems that the possibility of chaotic behavior is responsible for the infiniteness of the fat shattering dimension since, regardless of the special input, any sequence of signs can be produced by an appropriate choice of the initial context. But even in the case of linear activation the fat shattering dimension remains infinite. We argue with the dual class, consequently, the shattered set is more special in this case and stores in some sense the dichotomies we want to implement.
T h e o r e m 4.3.6. The e-fat shattering dimension for e < 0.3 of recurrent architectures with linear activation and 2 computation units is fl(ln 2 t) if t denotes the maximum input length, the weights are absolutely bounded by 2, and the inputs are contained in [-1, 1]. Proof. For an angle ~ = 7r/n the sequence sin(i~ + 7r/(2n)), i = 0, 1 , . . . is the solution of the difference equation 3rr , xi+2 = 2 C O S ( q 0 ) X i + 1 x~ = sin (2-~) , xl = sin (~nn)
-- :Ti.
The difference equation can be implemented by a recurrent neural architecture with bounded weights and 2 computation units, that is, the output for an arbitrary input sequence of length i is sin(iqo + 7r/(2n)). But then we can produce for any natural number n a sequence of n consecutive positive values, n consecutive negative values, and so on. We fix t networks which produce these alternating signs for n = 1, 2, 4, 8 , . . . , 2 t - l . Any dichotomy of these t networks appears as an o u t p u t vector of the t networks at one of the time steps 1 , . . . , 2 t. Therefore these networks can be shattered if one chooses an input of accurate length - the dual class of the linear recurrent architecture has VC-dimension of at least t for inputs of length 2 t even if the weights and inputs are bounded. We are interested in the fat shattering dimension. For growing n in the previous construction the signs are correct but the value sin(iTr/n + 7r/(2n)) becomes small if i equals multiples of n. Therefore we have to modify the construction. We simply consider longer input sequences and ignore the outputs that are too small: We requested Ioutputl _> e _< 0.3. Since 0.3 < sin(Tr/8 - 1/16) for a sign sequence of n = 2 i consecutive equal signs for i _> 3 the first 2i/8 values are too small, and even so the same number before and after any sign change. For an even number t we consider the 3t/2 networks producing alternating signs for n = 1, 2 , . . . , 23t/2-1 as above. We
92
4. Learnability I
OlOlOlOlOd
I
I
I
I
I
I
I
0'1 0'1 0'1 0'1 0'1 0'1 0'1 0 1 0 1 0 1 0 1 I
I
I
I
I
I
I
l
00110011818
oog'l 11 000000001,1
I
l
I
I
I
I
I
I
I
I
I
I
i
I
I
i
I
i
I
0',0"~ 1 1~'6,00~V[,I 1,1,,0"00 0,'fl 1 t I I
1,1 h i 1,10~)O,O0,OOtO 1 1 1 1 1 1 1 1
8 8 8 8 G G G G S G OiOOiOOiO iil iil iil iil i i i i i i i i I
I
I
I
I
I
i
I
m ovv-o oy o:o qo o:o op o:o o:o o:o .
.
.
.
remaining places
Fig. 4.6. Example of output canceling: The time steps where an output of a network is too small are canceled in the consideration. drop the networks number 2, 5, 8 , . . . , 3t/2 - 1 and in the remaining networks the outputs corresponding to the time steps 1 , . . . , 23i/8 before and after any sign change in the network number 3i for i = 1 , . . . , t/2. This means that in any three steps we cancel half the number of outputs balanced around the sign changes in the actual network. Fig. 4.6 shows an example. In the networks number 1,4, 7 , . . . we revert the signs. Still any dichotomy of the networks is stored in the networks' outputs at one time step in the remaining t networks and output sequences with original length 23t/2 with 2 t single outputs that are not canceled. These outputs have an absolute value of at least e. This follows by induction on i _< t: In the network 3i there are 22i of any 23i outputs not canceled since in the 3(i + 1)st step of 8 consecutive runs of 2 si outputs with equal sign, the first, fourth, fifth, and last run is dropped, that is, there are 8 - 2 2 i - 4.22i = 22(i+0 outputs left. In the 3ith step all sign sequences with n _< 22i-1 can be found since from step 3i to 3(i + 1) the old sign sequences are not disturbed, and additionally, the sequence with 7 sign changes in 230+1) original outputs leads to the reverted sequence with n = 22~, the sequence with one sign change leads to the sequence with n = 22i+1. The absolute values of the outputs are correct since if we drop the values 1 , . . . ,n/4 of n consecutive signs, n/8 <_n/4 and the same is valid for 2- n instead of n on the left side. We have found t networks where any dichotomy of the networks correspond to one output at time < 23t/2 of the networks. Therefore the dual class, that is, the recurrent architecture with inputs of length at most t has e-fat shattering dimension E2(ln2 t). The additional log-term is due to the fact that we considered the dual VC dimension. [] We can combine W of such architectures to obtain a lower bound E2(Wln 2 t) as follows: We simulate the initial context by two additional weights and input neurons which receive the input value 1 at the first step and 0 at all other steps (compare Lemma 4.3.3). We take W of these architec-
4.4 Consequences for Learnability
93
tures and combine the single output neurons with one linear output neuron. If we consider inputs where the values which simulate the initial contexts are 0 at any time step for all but one architecture, then only the output of this architecture differs from 0. Consequently, all single architectures can be simulated within this combination by an appropriate choice of the additional inputs. Here the reasons for the infiniteness are, generally, that even for very uniform outputs as - l n l n . . . , for example, one can find points where any output value is possible in an appropriate sequence. The information t h a t can be stored in the input sequences such that a network can use it in an easy way is not bounded if the length of the sequences is not bounded. As a consequence even the fat shattering analysis leads to lower bounds which depend on the input length in all interesting cases.
4.4
Consequences
for Learnability
It follows from the bounds on the various dimensions t h a t distributionindependent learnability cannot be guaranteed for folding architectures under realistic conditions. The natural hierarchy of the class described by the number of p a r a m e t e r s collapses if the corresponding capacity is considered. As a consequence it is necessary to find other guarantees for the generalization capability. The learning process and structural risk minimization has to take some different stratification into account. When dealing with distribution-dependent learnability, bounds on the number of examples which guarantee valid generalization can be found. This is due to the fact t h a t a natural stratification of the learning scenario is given via the input set. Since the capacity of folding architectures with restricted inputs is bounded in dependence on the number of parameters and the input length, we can apply Theorem 4.2.8 to this situation and obtain bounds which depend on the special distribution.
Corollary
4.4.1. Assume Pt is the probability of inputs of height at most t. Then any learning algorithm for a folding architecture which produces small empirical error is PA C. The number of examples necessary for valid generalization for the minimum risk algorithm is polynomial in the desired accuracy if pt is of the order 1 - d r a for some/3 > O. dt is the VC- or pseudo-dimension of the folding architecture restricted to inputs of height at most t.
As a consequence we obtain exponential bounds if the probability of high trees vanishes with increasing height in a less than logarithmical manner in the case of a = H, k = 1 or a = id and k > 1, or in a less than polynomial manner in the case of a = H, k > 2 or a a polynomial, k > 1 or a = sgd, k = 1, or in a less than exponential m a n n e r in the case a = sgd, k > 2. But this observation does not necessarily lead to the consequence t h a t any concrete example exists where the training set increases exponentially.
94
4. Learnability
Indeed, the existence of a uniform bound may be impossible for a class of distributions, although each single distribution guarantees polynomial learnability, as can be seen in the following example:
Example $.4.1. There exists a class ~Dof probability distributions and a function class ~- such that for any learning algorithm no uniform upper bound m = re(e, 5) exists with inf sup P'n(x ] dp(f,h,n(f,x))
> e) < 5,
PET) fEY
but for any single distribution P 9 l) PAC learnability with a polynomial number of examples is guaranteed.
Proof. Define Ft = { f l , . . . ,ft} with 2 a-1 - - I
fl : [O, 1] --+ {O, 1},
ft(x) = I 0 if x 9
(
2t,
21
i----0
1 otherwise.
These are almost the same functions as in Example 4.2.6, except that a perceptron activation function is added at the outputs. For the uniform distribution P~ on [0, 1] it holds that dp~ (fi, fj) = 0.5 for i ~t j. For x E [0, 1] define a tree tt(x) of height t as follows: The ith leaf (i = 1 , . . . , 2t - l ) contains the label (x/(2i - 1), x/(2i)) E R2, any other label is (0, 0). Any function ft 9 Ft can be computed as the composition of the injective mapping tt with g0 where g : R2 x {0,1} 2 ~ {0, 1}, g(xl,x2,x3,xa) -- x3 Vx4 V (xl > 1/2 zAx2 < 1/2 z) and l is chosen according to the index of fl. Let ~" denote the folding architecture corresponding to g. Then M(e,Y[X~, dR,) > t for e < 0.5, if X~ denotes the set of trees of exactly height t and Pt denotes the probability on X~ induced by tt and the uniform distribution P~ on [0, 1]. Now D contains all distributions Dr, where Dt equals Pt on X~ and assigns 0 to all other trees. Since M(e, :F, dDt) >_ t, at least lg(t(1 - 5)) examples are needed to learn the functions correctly with respect to Dt and confidence 5. Obviously, this number exceeds any uniform bound for m. On the contrary, any single distribution D, ensures PAC learnability with a polynomial number of examples of the order d/eln(1/e), where d = VC(~[Xt) for any consistent learning algorithm. [] Since the bounds which are obtained for each single distribution Dt in the above example are not uniform with respect to t the above example is not really surprising. A very sophisticated example for a class of probability distributions such that each single distribution allows learnability with uniform bounds but the entire class of probabilities is not learnable can be found in [132] (Example 8.1). However, we can use the above construction to obtain a witness for another situation: It is of special interest whether one single
4.4 Consequences for Leaxnability
95
distribution can be found where the sample size necessarily grows exponentially. Note that in distribution-independent learning such a situation cannot exist since concept classes are either distribution-independent learnable and have finite VC-dimension, which implies polynomial bounds, or they are not distribution-independent learnable at all. The following construction is an example of an exponential scenario. This in particular answers a question which Vidyasagar posed in Problem 12.6 [132].
Example 4.,~.2. A concept class exists where the number of examples necessary for valid generalization with accuracy e is growing exponentially with
1/~. Proof. Using Lemma 4.1.1 it is sufficient to show that the covering number of the function class to be learned grows more than exponentially in 1/e. The idea of the construction is as follows: The input space X is a direct sum of subspaces, i.e., the trees of height t. On these subspaces we use the construction from the previous example which corresponds to function sets with a growing covering number. But then we can shift the probabilities of the single subspaces so that trees with rapidly increasing heights have to be considered to ensure a certain accuracy. Define Y, Xg, and Pt as in the previous example. Since X is the disjoint union of X~ for t = 1 , . . . , oc, we can define a probability P on X by setting P[X~ = Pt and choosing P(X~) = Pt with 6/(n27r )
Pt =
0
if t = 22" for n > 1, otherwise.
Since dR(f, g) > ptdp, (flX~, gIX~) it is
M(e,.T',dp) >_ M(V"e,.T'IX~,dp, ) >_ t for 1/2 > V~ and Pt >_ V ~. Assume e = 1/n. Then Pt >_ v ~ is valid for t = 22" , m g vf6/rr n 1/4. As a consequence
M ( 1 / n , Y , dR) >_>22c~'/4 for a positive constant c. This number is growing more than exponentially in n. [--I Unfortunately, the argumentation of Corollary 4.4.1 needs prior information about the underlying probability. Only if the probability of high trees is restricted and therefore the recurrence is limited can a priori correct generalization be guaranteed. In contrast, the luckiness framework allows us to train situations with arbitrary probabilities, estimate the probability of high trees a posteriori from the training data, and output a certain confidence of the learning result which depends on the concrete data.
96
4. Learnability
C o r o l l a r y 4.4.2. Assume ~r is a [0, 1]-valued function class on the trees, P is a probability distribution, dt -- 7~S(.TI trees of height < t), and Pi are positive numbers with )"~ip i = 1, then the unluckiness function L'(x, f ) = max{height of a tree in x} leads to a bound sup P ( x
I Idm(f, h m ( f , x), x) - dp(f, hm(f, x)) I _< e) _> 1 -
6
/or =
- m
+
--
21g
lnm+(3+i)ln2+ln
4
where i >_dL'(x,h.,(l,x)) lg (4em/c~ 9ln(4em/a)) and a > O. Proof. L' is smooth with respect to ~(m, h, 6, a) = 2 ( 4 e m / a . ln(4em/c~)) a, where d = P S ( ~ I trees of height < U(x, hm(f,x))) and r/(m,h, 6, a) = ln(1/6)/(m In 2), as can be seen as follows: <
P2m(xy [ 3 f E U V x ' y ' C o xy l(x'y', f, or) > ~(m, L'(x, h), 6, a)) P2m(xy ] m i n : , y , c , : y max{height in x'y'} > max{height in x})
because the number of functions in I{flx'y' Ff E 2-}~f, where x ' y " s height is at most L'(x, hm(f,x)), can be bounded by ~ ( m , L ' ( x , h ) , ~ , a ) because the number is bounded by M(a/(2m), (2]x'y')~, d,,,,) where m' = length of x ' y ' . The latter probability equals
fx~-, U(a l (xy) ~ E A)dp2m(xy), where U is the uniform distribution on the swapping permutations of 2m elements and A is the above event. We want to bound the number of swappings of x y such that on the first half no tree is higher than a fixed value t, whereas on the second half at least mr/trees are higher than t. We may swap at most all but mr/indices arbitrarily. Obviously, the above probability can be bounded by 2 -ran, which is at most 6 for I/_> lg(1/g)/ra. Furthermore, the condition ~(rn, L(x, hm(f,x)),cf, a) _< 2 i+I leads to a bound for i. Now we can insert these values into the inequalities obtained at the luckiness framework and get the desired bound. D Obviously, the bound for ~ tends to 4ct for large m. But we still need a good prior estimate of the probabilities Pi for good bounds. Furthermore, we have not allowed dropping a fraction in x, either, when measuring the maximum height of the example trees. If the sample size increases this would be useful since the fraction of large trees is described by the probability of those trees. This probability may be small but not exactly 0 in realistic cases. Unfortunately, allowing to drop a fraction r/h in x when measuring L' would lead to a lower bound r/h for 7/. Then the factor r/In m in ~ would tend to cr for increasing m.
4.5 Lower Bounds for the LRAAM
4.5 L o w e r B o u n d s
97
for t h e L R A A M
The VC analysis of recurrent networks has an interesting consequence for the LRAAM. It allows us to derive lower bounds on the number of neurons that are necessary for appropriate decoding with the dynamics, as introduced in section 2.3. Assume a concrete architecture is capable of appropriately encoding and decoding trees of height t such that the composition yields the identity of those trees. We restrict ourselves to symbolic data, i.e., the labels are contained in the alphabet {0, 1 }. Instead of exact decoding we only require approximate decoding. Hence labels > 0.5 are identified with 1, labels < 0.5 are identified with 0. Then the possibility of proper encoding yields the following result: T h e o r e m 4.5.1. Assume points in R m exist which are approximately decoded to all binary trees of height at most t with labels in {0, 1} with some ~ty, h -- (ho, hi, h2) : ~m _~ ~l+m+m, y C ~m. If h is a feed-forward neural network, then the number of neurons is lower bounded by 2 s)(w) if the acti-
vation function is the standard sigmoidal activation function dr a piecewise polynomial activation. Proof. h = (ho, hi, h2) : R m --+ R• if{m • R m gives rise to a recurrent network ~y:R" -~RxR'n,g:RxRxR m - ~ R x R m,
(xo, zl, z2) ~ (ho(zl), (1 -
xo) "hi (z2) + zo . h2(z2)).
If h y maps value z to some tree t, then ~rl o~(0,z) maps any binary sequence of length i to some node in the i th level of the tree t, 7rl being the projection to the first component; the exact number of the node depends on the sequence: [ 0 , . . . , 0] is mapped to the leftmost node in the i th level, [ 1 , . . . , 1] is mapped to the rightmost node, the other sequences lead to the nodes in between. The last component of the sequences is not relevant; see Fig. 4.7. If points in R m exist that are approximately mapped to all trees of height t in {0, 1}~ with h y , the neural architecture 7rl o g(0,_) shatters all binary sequences of length t with the last component 1: One can simply choose the second part of the initial context corresponding to a vector z which encodes a tree of height t and leaves according to the dichotomy. g can be computed by adding a constant number of neurons with some at most quadratic activation function to h and it can be approximated arbitrarily well adding a constant number of sigmoidal units to h. Consequently, the VC-dimension of g(0,_), restricted to inputs of height at most t, is limited by O(NaTln(qd)) if the activation function in h is piecewise polynomial with at most q pieces and degree at most d >_ 2. The VC-dimension is limited by O ( N 4 T 2) if the activation function is the standard sigmoidal function. In both cases, N denotes the number of neurons in h. The lower bound 2 T-1 for the VC-dimension leads to bound N = 2 ~(T) for the neurons in h. [] Note that the way in which the trees are encoded in R m is not important for
98
4. Learnability
r . . . . . . . . . . . . . . . . . . .
I
I
I
z
l input [0,1,'1 leads to /
hI
h2 ~---~bAc
d
A
ef
A
hI
h2
ho ~ h1 h2
e
g
Fig. 4.7. An appropriate input to the recurrent network as defined in Theorem 5 restores a path of the tree h y ( z ) , the length of the input sequence indicates the length of the path, entries 0 and 1 stand for the left or right subtree, respectively, ho yields the output label. the lower bound. Furthermore, a more sophisticated decoding of the single binary nodes or using other standard activation functions leads to the same lower bound since, for any reasonable modification, the VC-dimension of the recurrent network as defined in the proof is still bounded by some polynomial in the number of nodes and maximum input height. Consequently, the decoding formalism requires an increasing amount of resources even for purely symbolic data. Hence a formalism like the LRAAM can deal only with restricted situations, i.e., almost linear trees or limited height, whereas these restrictions do not apply to methods which merely focus on encoding, like recurrent and folding networks.
4.6 Discussion
and
Open
Questions
In this chapter we have considered the possibility of using folding networks as a learning tool from an information theoretical point of view in principle. It is necessary to investigate the question as to whether usual data sets contain enough information such that the underlying regularity can be approximated with a folding network - or it can be seen that no such regularity exists before we can start training. If learnability is not guaranteed, no learning algorithm, however complicated, could succeed in general. Apart from guarantees of the possibility of learning in principle we have obtained concrete bounds on the number of examples sufficient for valid generalization which tell us which network size and number of patterns we should use in a concrete learning task. In the first part of this chapter we have discussed several possibilities of formalizing PAC learnability from an information theoretical point of view. Depending on the task that only the existence of one good learning algorithm is to be proved or every consistent algorithm shall perform correct generalization, one gets the term PAC or consistent PAC learnability. If the
4.6 Discussion and Open Questions
99
generalization is to be uniform with respect to every function the term PAC is substituted by PUAC. Furthermore, it makes sense to consider nearly consistent algorithms with small empirical error, too, which leads to scaled versions of the above-mentioned terms. We have introduced these scale sensitive versions and examined the connection between these terms, answering in particular Problem 12.4 in [132]. One question that remains open in this context is whether a concept class exists which is PUAC learnable, but not e-consistently PAC learnable for all e > 0. Furthermore, it is possible to find for (almost) all concepts equivalent characterizations that do not rely on the notion of a learning algorithm. Such characterizations are of special interest for practical applications. They lead to necessary and sufficient conditions that can be tested before the design of a concrete learning algorithm. They establish the capability of any learning algorithm - maybe with some conditions - to perform correct generalization in one of the concrete forms as specified above in principle. The well-known characterization of a finite covering number only forms a sufficient but not necessary condition for PAC learnability of function classes, due to the possibility of encoding functions uniquely in real values. Here we have shown that the argumentation no longer holds if the scaled versions of the above terms are considered because these formulations imply a certain robustness with respect to observation noise. Considering the same topic it is an interesting question as to whether the addition of noise to a learning algorithm itself causes an increase in the number of function classes that can be learned with such an algorithm. For concept classes it has been shown that all randomized algorithms can be substituted by deterministic algorithms and it can be shown that even in function learning some kind of noise, for example, a tossed coin can be simulated within a deterministic algorithm. The argument of [52], which simulates a tossed coin within a deterministic algorithm, transfers to the function case. It remains open as to whether other (reasonable) kind of noise increases the number of function classes which are PAC learnable compared to the deterministic setting. We have briefly discussed which results remain valid if the practically interesting case of model-free learning is dealt with. This is interesting if no prior knowledge about the form of the function to be learned is available. We cited several results that deal with the distribution-independent case, where the underlying probability in accordance to which the examples are chosen is entirely unknown. Of course, there are as many open questions here as further approaches to deal with several learning scenarios that fit various kinds of practical applications. But in concrete tasks concerning neural networks, learnability is most often established by simply examining the capacity of the function class. Finiteness of the VC- or pseudo-dimension guarantees learnability in all of the above cases.
100
4. Learnability
Therefore, we have estimated the VC- or pseudo-dimension of folding architectures. Here the upper and lower bounds in the sigmoidal case unfortunately differ by an order of 2 t, where t is the maximum input height, leaving as an open problem the real order of the dimension. Furthermore, some kind of weight-sharing argument in the perceptron case would improve the bounds in the perceptron case, too. However, the VC-, pseudo-, and fat shattering dimension depend on the maximum input height even for restricted weights and inputs in most interesting cases. Furthermore, even very small architectures with only a few recurrent connections lead to such results. The input set that is shattered in the sigmoidal case, for example, occurs naturally in time series prediction. As a consequence distribution-independent bounds for valid generalization cannot be obtained for folding architectures without further restrictions on the inputs or the function class. Furthermore, the structure of the input patterns and minimum architectures that shatter these patterns constitute situations that are likely to occur in practical learning tasks. Taking the particular probability into account we have constructed situations where an exponential number of examples is needed for valid generalization answering Problem 12.6 in [132]. On the contrary, we have obtained natural conditions on the probability of the inputs such that polynomial bounds are guaranteed. For this purpose an approach of [3] has been generalized to function classes. But the limitation of the generalization error requires an a p r i o r i limitation of the probability of high trees. Another possibility of guaranteeing valid generalization without explicit prior knowledge is to introduce a luckiness function which defines a posterior stratification of the learning scenario - here via the maximum input height in a concrete training set. For this purpose we have generalized the approach of [113] to function classes and arbitrary, not necessarily consistent, learning algorithms. Unfortunately, the bounds that are obtained in this way are rather conservative. It is not possible to substitute the maximum height of the inputs by a number such that only almost all example trees are of smaller height - which would be appropriate in our situation. From a technical point of view, this problem is due to the factor r/In m in the obtained bounds, which seems too conservative. Another problem with the luckiness function is the usage of the numbers Pi which measure an a p r i o r i confidence for an output of the algorithm with a certain luckiness. It would be natural to consider values pi which are increasing since the confidence of obtaining an output corresponding to il, which is less lucky than another output corresponding to i2, should be at least as high as the confidence of an output i2. But for technical reasons the Pi have to add up to 1. However, we have obtained a guarantee for valid generalization in the distribution-dependent case and an a posteriori guarantee for any situation where the input trees are restricted. In both cases the bounds should be further improved by an appropriate restriction or stratification of folding architectures. Since the reasons leading
4.6 Discussion and Open Questions
101
to infinite VC-dimension are twofold - the possibility of chaotic behavior in the sigmoidal case and the possibility of encoding arbitrary information in the input strings if they have unlimited length - a restriction of the architectures such that they fulfill certain stability criteria and a restriction of the inputs such that they have limited storage capacity may lead to better bounds. One simple limitation of the storage capacity is given by the accuracy of the computation, of course, which reduces any network to a finite automaton or tree automaton in a realistic computation. A more sophisticated argument for the limitation of the storage can be found if noise is taken into account. As already mentioned in the last chapter, the computational capability of folding networks reduces to tree automata if the computation is affected with noise. Of course, this leads to a finite VC-dimension, too, as carried out in [84] for recurrent networks. Apart from the generalization capability bounds on the VC-dimension of a class which solves a specified task characterize the minimum number of resources required for a concrete implementation. This argumentation led to lower bounds for the encoding part of the LRAAM.
Chapter 5
Complexity
We have seen that the network we have chosen will be able to generalize well to unseen data; in our case the reason may be simple: Prior knowledge about the input distribution may be available telling us that high trees will rarely occur. We choose a neural architecture with an appropriate number of neurons. We start to train this architecture with one of the standard algorithms, say, back-propagation through structure. Now we wait until the algorithm has produced a network with small empirical error, hoping that the training procedure takes at most a few minutes until it succeeds. Unfortunately, the first training process gets stuck in a local minimum such that we have to start the training again. The next try suffers from a wrong choice of the training parameters and does not succeed either. The next try produces a very unstable behavior of the weight changes and stops because of a weight overflow. Now at last, we ask whether the training algorithm we use, a gradient descent method, is appropriate for the training of folding networks. Perhaps a simple gradient descent method has numerical problems when minimizing the empirical error of a network architecture. Such a result would motivate us to introduce some heuristics or more sophisticated methods into our gradient descent or to use a minimization procedure which is based on a different idea. Furthermore, we may investigate the complexity of training folding networks in principle. Results about the complexity of the learning task tell us whether any learning algorithm can succeed in adequate time, no matter whether it is based on a simple gradient descent method and therefore perhaps suffering from numerical problems, or whether it is based on other, more sophisticated tools. If we get results showing the NP completeness of the learning task we want to deal with, for example, we definitely know that any learning algorithm, however complicated, can take a prohibitive amount of time in some situations - unless P = NP, which most people do not believe. We may ask the question about the difficulty of learning in different situations. One situation is the following setting: We fix an architecture and a training set and try to minimize the empirical error for this single architecture. The architecture may have several layers and use a certain activation function, in practical applications usually a sigmoidal function. Then of
104
5. Complexity
course, it is sufficient to find a training algorithm which succeeds in adequate time for this single architecture and training set. On the contrary, we may be searching for an optimum architecture. Therefore we train several architectures with a varying number of layers and neurons, and afterwards use the architecture with minimum generalization error. For this purpose we need a training algorithm which works well for architectures with a different number of hidden layers and units, but the training set, and in particular the number of input neurons, is fixed. Furthermore, the training algorithms which are commonly used for the empirical risk minimization are uniform. This means that the standard training algorithms do not take into account the special structure, the number of neurons and hidden layers of the architectures. Therefore it would be nice if the training complexity scales well with respect to all parameters, including the size of the architecture, the number of input neurons, and the size of the training set. If this general training problem, where several parameters are allowed to vary, turns out to be NP hard this gives us a theoretical motivation to keep certain parameters as small as possible when designing the training task. An increase in these parameters would lead to an excessive increase in the training time for the standard uniform learning algorithms. The situations may be even more complicated than described above. One reasonable task is to search for a network with small empirical error in an entire family of architectures. For example, we may search for a network with small empirical error which may have an arbitrary architecture, where only the maximum number of neurons is limited in order to limit the structural risk. This question occurs if the training algorithms are allowed to modify the architecture and to insert or delete appropriate units, which is valid, for example, in cascade correlation, pruning methods, or an application of genetic algorithms to neural networks [20, 33, 40, 77, 92, 123]. Here one extreme situation is to allow the use of an arbitrary number of neurons. Since any pattern set can be implemented with a standard architecture and a number of neurons, which equals the size of the training set plus a constant number, the training problem becomes trivial. Of course, this result is useless for concrete training tasks because we cannot expect adequate generalization from such a network. On the contrary, it turns out in this chapter that the training problem is NP hard for tasks where the number of neurons is fixed and not correlated to the number of patterns. Of course, it is of special interest to see what happens between these two extreme positions. Concerning this question, one practically relevant case is to bound the number of neurons using the results from the PAC setting which guarantee adequate generalization. In the following we mainly deal with a somewhat different problem, which is a decision problem correlated to the learning task in one of the above settings. Instead of finding an optimum architecture and adequate weights we only ask whether a network of the specified architecture and concrete weights exist such that the empirical error is small for this network. The difference
5.1 The Loading Problem
105
is that for our task an algorithm answering only 'yes' or 'no' is sufficient, whereas a training task even asks for the concrete architecture and weights if the answer is 'yes'. Since this decision task is a simpler problem than the training task all results concerning the NP completeness transfer directly to the practically relevant training problem. The results we obtain which state that the decision problem can be solved polynomially in certain situations lead to a solution for the corresponding training problem, too, because the proof methods are constructive. To summarize the above argumentation we ask one single question which we deal with in this chapter: 1. When is the following problem solvable in polynomial time: Decide whether a pattern set can be loaded correctly by a neural architecture with a specified architecture. This question turns out to be NP complete if a fixed multilayer feed-forward architecture with perceptron activation and at least two nodes in the first hidden layer is considered where the input dimension is allowed to vary. The problem is NP complete even if the input dimension is fixed, but the number of neurons may vary in at least two hidden layers. Furthermore, it is NP complete if the number of neurons in the hidden layer is allowed to vary depending on the number of input patterns. But for any fixed folding architecture with perceptron activation the training problem is solvable in polynomial time without any restriction on the maximum input height. In the third section we deal with the sigmoidal activation function instead of the perceptron function and prove that the classical loading problem to decide whether a pattern set can be classified correctly with a sigmoidal three node architecture is NP hard if two additional, but realistic restrictions are fulfilled by the architecture. First the in principle loading problem will be defined.
5.1 The
Loading
Problem
The loading problem is the following task: Let ~- = ( F I I I E N} be a family of function classes which are represented in an adequate way. Given as an input a class F I E ~- and a finite set of points P = ((x~, y~) ] i = 1 , . . . , m}, where Fl and P may have to fulfill some additional properties, decide whether a function f E Fl exists such that f(x~) = y / f o r all i. Since we are interested in the complexity of this problem, it is necessary to specify for any concrete loading problem the way in which Ft and P are to be represented. In tasks dealing with neural network learning, Fl often forms the set of functions that can be implemented by a neural architecture of a specified form. In this case we can specify FI by the network graph. P is the training set for the architecture. It can be chosen in the following settings dealing with NP complexity as a set of vectors with elements in Q, which are
106
5. Complexity
represented by some numerator and denominator without common factors in a decimal notation. Another possibility could be the assumption that the patterns consist of integers. This does not in fact affect the complexity of the problem if we deal with the perceptron activation function. Here any finite pattern set with rational input coefficients can be substituted by a pattern set with coefficients from Z. This set requires the same space for a representation and can be loaded if and only if the original set can be loaded. It is obtained by just multiplying the coefficients with the smallest common denominator. Now the simplest feed-forward neural network consists of only one computation neuron. One loading problem in this context is given if Ft equals the class of functions computed by an architecture with one computation unit, unspecified weights and biases, and input dimension I. If the architecture is used for classification tasks and is equipped with the perceptron activation function, then this loading problem, where P is taken as a pattern set in Q" x {0, 1}, is solvable in polynomial time if the patterns are encoded in a decimal representation. It can be solved through linear programming [63, 65]. We may consider larger feed-forward networks. Assume these networks all have the same fixed architecture with a fixed input dimension and any computation unit has at most one successor unit. Then it is shown in [83] that the loading problem for such a function class is solvable in polynomial time, provided that the activation functions are piecewise polynomial function. The situation changes if architectural parameters are allowed to vary. Judd considers the problem where Ft is given by the class of network functions implemented by a special feed-forward architecture with perceptron activation [62]. Here the number of computation and output neurons and the number of connections is allowed to vary if I varies. The NP completeness of this loading problem is proven by means of architectures which correspond to the SAT problem. The construction is valid even if the input dimension in any Fl is restricted to two, each neuron has at most three predecessors, and the network depth is limited by two. As a result, the construction of uniform learning algorithms which minimize the empirical error of arbitrary feed-forward architectures turns out to be a difficult task. However, the connection structure in the architectures that are considered in [62] is of a very special form and not likely to appear in practical applications. Furthermore, the output dimension is allowed to vary, whereas in practical applications it is usually restricted to only a few components. A more realistic situation is addressed in [19]: It is shown that the loading problem is NP complete if Ft is the class of functions which are computed by a feed-forward perceptron network with l inputs, two hidden nodes, and only one output. The variation of the input dimension l seems appropriate to model realistic problems. A large number of features is available, for example, in image processing tasks [60, 88]. The features encode relevant information of the image but lead to an increase in the computational costs of the training, apart from the decrease in the generalization ability. The result of Blum and
5.1 The Loading Problem
107
Rivest is generalized to architectures with a fixed number k _> 2 of hidden neurons and an output unit which computes an AND in [6]. Lin and Vitter show a similar result as Blum and Rivest for a cascade architecture with one hidden node [78]. In [108] the question is examined whether a restriction of the training set, such that the single patterns have limited overlap, and a restriction of the weights make the problems easier. However, all these results deal with very restricted architectures and cannot be transformed to realistic architectures with more than one hidden layer directly. One reason for the difficulty of the loading problem can be given by the fact that the approximation capability of the function class is too weak. The use of a more powerful class possibly makes the loading problem easier. This problem is discussed in [109]. Another drawback of these results is that they deal with the perceptron activation function. In contrast, the most common activation function in concrete learning tasks is a sigmoidal function. Therefore much effort has been made to obtain results for a sigmoidal, or at least continuous activation, instead of the perceptron function. In [26] Dasgupta et.al, succeed in generalizing the NP result of Blum and Rivest to a three node architecture, where the hidden nodes have a semilinear activation. One price they pay is that the input biases are canceled. Of course, we are particularly interested in results concerning the loading problem for sigmoidal architectures. Considering a fixed sigmoidal architecture it is expected that the loading problem is at least decidable [86]. The statement relies on the assumption that the so called Schanuel conjecture in number theory holds. But the complexity of training fixed networks is not exactly known in the sigmoidal case. One problem in this context is that polynomial bounds for the weights of a particular network which maps a finite data set correctly are not known. When considering architectures with varying architectural parameters the sigmoidai function is addressed in [134]. Here the NP hardness of loading a three node network with varying input dimension, sigmoidal hidden nodes, and a hard limiter as output is proved provided that one additional so-called output separation condition is fulfilled. This condition is automatically fulfilled for any network with output bias 0. Unfortunately, canceling the output bias prohibits a transformation of the result from the standard sigmoidal function to the function tanh with the usual classification reference 0. Hoeffgen proves the NP completeness of the loading problem if ~" is an architecture with varying input dimension, two sigmoidal hidden nodes, and a linear output, which means that the network is used for interpolation instead of classification purposes. One crude restriction in this approach is that the weights have to be taken from {-1, 1} [55]. Interesting results on sigmoidal networks are obtained if the question of simple classification is tightened to the task of an approximate interpolation or a minimization of the quadratic empirical error of the network. In this
108
5. Complexity
context, Jones proves that it is NP hard to train a sigmoidal three node network with varying input dimension and linear output with bias 0 such that the quadratic error on a training set is smaller than a given constant [61]. This argumentation even holds for more general activation functions in the hidden layer. This result is generalized to a sigmoidal architecture with k > 2 hidden nodes in [135]. Finally, modifying the problem of classification to the more difficult task of a minimization of the empirical error even adds some interesting aspects to the case we have started with, the simple perceptron unit. It is proved in [55] that it is not possible to find near optimum weights for a single perceptron in a real learning task in reasonable time unless RP = NP. All results we have cited so far are stated for simple feed-forward networks, but obviously transfer directly to NP hardness results for recurrent and folding networks. Here training does not become easier since training feedforward networks is a special case of training folding architectures. Therefore this subproblem is to be solved if we deal with the training of general folding networks, too. But when training a fixed folding architecture an additional parameter that may be allowed to vary in a function class ~" occurs: the maximum input height of a training set. In [27] the loading problem for fixed recurrent networks with linear activation where the input length is allowed to vary is considered and proved to be solvable in polynomial time. The argument that the loading problem is easier than a training task, and therefore NP hardness results of the first problem transfer directly to the latter case, motivated us at the beginning of this chapter to mainly deal with the loading problem instead of considering practical learning tasks. Actually, the connection between training and loading can be made more precise as, for example, in [26, 56, 99]. First, the possibility of training efficiently in a learning task or to learn in polynomial time is defined formally. One possibility is the following modification of a definition in [99]. Definition 5.1.1. Assume 1: = Un~176 F I is a function class, where the input set of the functions in Fz is Xi and the output is contained in Y = [0, 1]. ~" /s polynomially learnable if an algorithm
h: U {l}• i=1
(X, x Y ) " m=l
exists with the following properties: h(l, _) or h t for short maps to ~ . h runs in time polynomial in l and the representation of the inputs (xi, Yi)i. For all e , 6 > 0 and l > 1 a numbermo = mo(e,6,l) exists which is polynomial in l, 1/e, and 1/~i such that for all m > m o and for all probability distributions P on Xt and functions f : Xt --+ Y the inequality p m ( x I d p ( f , h ~ ( f , x ) ) - m i n d p ( f , g ) > e) < 6 gG Ft
is valid.
5.1 The Loading Problem
109
Actually, this notation only adds to the term of distribution-independent model-free PAC learnability the possibility of stratifying the function class and allowing a polynomial increase in the number of examples in accordance to this stratification - a definition which is useful if recursive architectures are considered as we have already seen, but which may be applied to uniform algorithms to train neural architectures with a varying number of inputs or hidden units as well. Furthermore, the limitation of the information theoretical complexity via the sample size m is accompanied by the requirement of polynomial running time of the algorithm h, as in the original setting of Valiant [129]. Note that sometimes a learning algorithm is defined such that it uses the parameters e and 5, too [129]. This enables the algorithm to produce only a solution which is el-consistent with some el sufficient to produce the actual accuracy, for example. In this case the algorithm is required to be also polynomial in 1/e and 1/(i. Even with this more general definition the next Theorem holds. The loading problem we deal with in the remainder of this chapter is not affected. In analogy to the argumentation in [99] the following result can be shown: T h e o r e m 5.1.1. Assume J: = {Ft I l E N} is a family of function classes, where Ft maps Xt to Y C ~ with [YI < cr Assume the representation of Ft takes at least space l. Assume the loading problem for jc and arbitrary patterns P is N P hard. Then J: is not polynomially learnable unless R P = NP.
Proof. Assume ~" is polynomially learnable. Then we can define an RP algorithm for the loading problem as follows: For an instance (Ft, P) of the loading problem define U as the uniform distribution on the examples in P and e = Co/(1 + IPI), 5 = 0.5, where e0 is the minimum distance of two points in Y. The algorithm computes m = rn(e, 5, l), chooses m examples according to U, and starts the polynomial learning algorithm h on this sample. The entire procedure runs in time polynomial in the representation length of Ft and P, and outputs with probability 0.5 a function f such that the values of f are e-close to the values of any function in Fi approximating the pattern set P best. Because of the choice of e the values of f are identical to the values of a best approximator of P. Hence we have either found with probability 0.5 a function in F! which exactly coincides with the pattern set P if it exists and can answer 'yes', or we have unfortunately missed this function with probability 0.5, or it does not exist. In both cases we answer 'no', which is wrong with probability 0.5. This leads to an RP algorithm for the loading problem which contradicts the inequality RP#NP. [] From this argumentation it follows that the class of three node perceptron networks stratified according to their input dimension, for example, is not polynomially learnable due to complexity problems. In contrast, the results which deal with the complexity of minimizing the quadratic empirical error
110
5. Complexity
limit the capability of special learning algorithms which try to minimize this error directly. Since many common training algorithms for neural networks are based on some kind of gradient descent method, it is appropriate to consider the complexity of training with this special method in more detail. The examination of the drawbacks that occur when feed-forward networks are trained with gradient descent has led to remarkable improvements of the original algorithm [103, 141]. In particular, training networks with a large number of hidden layers is very difficult for simple back-propagation due to an exponential decrease in the error signals that are propagated through the layers. Several heuristics prevent the decrease in the signals and speed up the learning considerably. Unfortunately, the same drawback can be found in recurrent and folding networks since they can be seen as feed-forward networks with a large number of hidden layers and shared weights. Here the same improved methods from the feed-forward case cannot be applied directly due to the weight sharing. Several other approaches have been proposed to overcome this difficulty, like learning based on an EM approach [14], some kind of very tricky normalization and recognition of the error signals in LSTM [54], or substituting an additional penalty term for the weight sharing property [67]. The detailed dynamics of gradient descent are studied, for example, in [16, 71]. Several approaches even characterize situations in which gradient descent behaves well in contrast to the general situation or try to find regions where the network behaves nicely, is stable, for example, and where training may be easier [17, 35, 791.
5.2 The
Perceptron
Case
In this section we consider only networks with the perceptron activation function. However, the architectures themselves we deal with, for example, in NP results, are realistic since they contain several hidden layers and an arbitrary number of hidden units.
5.2.1 Polynomial Situations First of all it is possible to train any fixed feed-forward perceptron architecture in polynomial time. One training algorithm can be obtained directly by a recursive application of an algorithm which is proposed by Meggido for k-polyhedral separating of points [89], and which is already applied in [26] to a perceptron architecture with one hidden layer. The following is a precise formulation of an analogous algorithm for arbitrary feed-forward perceptron architectures.
5.2 The Perceptron Case
ill
Theorem 5.2.1. For any fixed feed-forward architecture with perceptron activation function there exists a polynomial algorithm which chooses appropriate weights for a pattern set if it can be loaded and outputs 'no' otherwise.

Proof. Enumerate the N neurons of the architecture in such a way that all predecessors of neuron j are contained in {1, ..., j-1}. Assume p points are to be classified by such an architecture. Each neuron in the network defines a hyperplane separating the input patterns of this neuron that occur during the computation of the p different outputs. We can assume that no input pattern of this neuron lies directly on the separating hyperplane. Therefore the behavior of the neuron is described by p inequalities of the form w^t · input_p + θ ≥ 1 or w^t · input_p + θ ≤ -1, where input_p is the input of the neuron when computing the output value of the entire network on point p, w are the weights of this neuron, and θ is the bias. But as in [89] (Proposition 9/10), an equivalent hyperplane can be uniquely determined by the choice of at most d + 1 of the above inequalities (d = input dimension of the neuron) for which an exact equality holds instead of a ≤ or ≥. For each choice of possible equalities it is sufficient to solve a system of linear equations to find concrete weights for the neuron. This leads to a recursive computation of the weights as follows: Assume 1, ..., n are the input neurons. Set o_j^i = p_j^i for j = 1, ..., n, i = 1, ..., p, where p^i is the ith input pattern. Compute rec(n+1, {o_j^i | i = 1, ..., p, j = 1, ..., n}), where rec is the procedure:

    rec(k, {o_j^i | i = 1, ..., p, j = 1, ..., k-1}): {
        Assume k_1, ..., k_l are the predecessors of neuron k.
        For all choices of ≤ l+1 points of {(o_{k_1}^i, ..., o_{k_l}^i) | i = 1, ..., p}
        and a separation into positive and negative points:
            The choice defines weights and a bias of neuron k via the corresponding
            system of linear equations; compute the outputs o_k^i of neuron k.
            If k = N: Output the actual weights.
            If k ≠ N: Compute rec(k+1, {o_j^i | i = 1, ..., p, j = 1, ..., k}).
    }

This procedure outputs one weight vector for every possible hyperplane setting. The running time is limited by the product of the possible choices of at most l+1 of the p input points for any neuron and the separation of these points into two sets. It is limited by the term
∏_{i=n+1}^{N} ( Σ_{j ≤ l_i+1} C(p, j) · 2^{l_i+1} ) ≤ (3p)^{l_{n+1} + ... + l_N + N - n},

where l_i is the number of predecessors of neuron i.
[]
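To illustrate the single-neuron step of this procedure, the following short Python sketch (a hypothetical helper, not taken from the text) enumerates the candidate weight settings of one perceptron neuron with d inputs and p candidate input patterns: every relevant hyperplane is determined by at most d + 1 patterns for which the equalities w^t x + θ = ±1 hold, and these can be found by solving the corresponding linear systems. The recursion over the whole architecture and the consistency test against the training labels are omitted.

    import numpy as np
    from itertools import combinations, product

    def candidate_weights(patterns):
        """Enumerate candidate (weights, bias) pairs for one perceptron neuron.

        patterns: array of shape (p, d) with the p input vectors this neuron sees.
        Each candidate is fixed by <= d+1 patterns x with w.x + theta = +1 or -1."""
        p, d = patterns.shape
        candidates = []
        for size in range(1, d + 2):                       # at most d+1 defining points
            for idx in combinations(range(p), size):
                for signs in product((-1.0, 1.0), repeat=size):
                    A = np.hstack([patterns[list(idx)], np.ones((size, 1))])
                    b = np.array(signs)
                    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
                    if np.allclose(A @ sol, b):            # keep exact solutions only
                        candidates.append((sol[:d], sol[d]))
        return candidates

    # toy usage: candidate hyperplanes for four points in the plane
    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    print(len(candidate_weights(pts)), "candidate weight settings")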
Unfortunately, this bound is exponential in the number of neurons and the number of predecessors of the single neurons. Therefore it does not lead to good bounds if architectural parameters are allowed to vary. Furthermore, it cannot be applied to recurrent architectures because here the number N would increase with increasing input length. Fortunately, another argumentation shows that recurrence does not add too much complexity to the learning task:

Theorem 5.2.2. For any fixed folding architecture with perceptron activation it can be decided in polynomial time whether a pattern set can be loaded correctly, even if the input height is not restricted.
Proof. Assume a folding architecture with perceptron activation function and an input set P are given, where the number of different labels which occur in any tree in P adds up to L. Note that at a computation of the activation of one neuron which has i input neurons among its l predecessors, the output value is uniquely determined by the value of the inputs - one of L^i possible input vectors - and the value of the other predecessors - one of 2^{l-i} possible vectors. The activation of any neuron with l predecessors without input neurons is determined by the value of the l predecessors - one of 2^l possible vectors. If we write the different linear terms that may occur at the computation of the values on P into one vector, then we obtain a vector of length Σ_{i=1}^{N} 2^{l_i} L^{i_i}, where N is the number of neurons, l_i is the number of predecessors of neuron i which are computation or context neurons, and i_i is the number of predecessors which are input units. This is a vector of polynomial length containing linear polynomials in the weights if the architecture is fixed. We can determine the output of the network if the signs of each polynomial in this vector are known. We can check in polynomial time whether one sign vector corresponds to a computation in the folding architecture which leads to a correct classification. (See Fig. 5.1 for an example.) Therefore the following procedure can solve the loading problem: Find all possible sign vectors which occur if the polynomials are computed for some concrete weights. Afterwards, test whether at least one of these sign vectors describes a correct classification. One algorithm which finds all possible sign vectors of a vector of polynomials and which is polynomial in the length of the vector and the coefficients, but exponential in the number of parameters, is described in [11, 27]. Since the number of parameters is fixed in a fixed architecture, the algorithm runs in polynomial time. []
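As a small numerical aside (an illustrative helper, not from the text), the length of this vector of linear terms can be computed directly from the architecture description, which makes it easy to check that it stays polynomial for a fixed architecture:

    def num_linear_terms(num_labels, predecessors):
        """Length of the vector of distinct linear terms, sum_i 2^{l_i} * L^{i_i},
        for a fixed folding architecture.

        num_labels:   L, the number of different labels occurring in the input trees
        predecessors: list of pairs (l_i, i_i) per neuron, where l_i counts
                      computation/context predecessors and i_i counts input units."""
        return sum((2 ** l) * (num_labels ** i) for l, i in predecessors)

    # toy architecture with three neurons and L = 4 labels (values are made up)
    print(num_linear_terms(4, [(2, 1), (3, 0), (2, 2)]))   # -> 16 + 8 + 64 = 88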
Fig. 5.1. Linear terms occurring at a computation of a concrete input for a folding architecture.

As a consequence, we can efficiently learn a perceptron architecture from a computational point of view if we fix the structure. This fact implies polynomial learnability for feed-forward perceptron architectures with a fixed structure, because the number of examples necessary for valid generalization increases only polynomially with 1/ε and 1/δ. On the contrary, folding architectures with perceptron activation function do not lead to polynomial learnability because of information theoretical problems, as we have seen in the last chapter. But the complexity of learning does not increase in a more than polynomial way.

5.2.2 NP-Results

However, the question arises in the feed-forward as well as in the recurrent scenario as to how the complexity scales with respect to architectural parameters. The argumentation in Theorems 5.2.1 and 5.2.2 leads to bounds which are exponential in the number of neurons. Such a scaling prohibits polynomial learning in the sense of Definition 5.1.1, where the stratification is given by the number of network parameters: An algorithm as in Theorems 5.2.1 and 5.2.2 leads to an enormous increase in the computation time if the input class F_i becomes more complex. But note that the loading problem is contained in NP for feed-forward as well as for recurrent architectures with the perceptron activation function, since we can guess the weights - they are appropriately limited because they can be obtained as a solution of a system of linear equations, as follows from Theorem 5.2.1 - and test whether these weights are correct.

In the following we derive several NP-completeness results for the feed-forward case. Of course, the recurrent case is at least as complex as the
feed-forward scenario; therefore the training of folding architectures is NP-hard in comparable situations, too. Since we consider multilayer feed-forward architectures where the weights and biases are mostly unspecified, a somewhat different representation than the network graph is useful: We specify the function class F_n computed by a certain feed-forward architecture by the parameters (n, n_1, ..., n_h, 1), where n is the number of inputs, h is the number of hidden layers, and n_1, ..., n_h are the numbers of hidden units in the corresponding layers. We assume that the architecture has only one output neuron. In a concrete loading problem, F is defined by a family of such architectures where some of the parameters, for example the input dimension n, are allowed to vary. It is appropriate to assume that the numbers (n, n_1, ...) are denoted in a unary representation, because the size n is a lower bound on the representation of the training set and the numbers n_1, ... are a lower bound on the representation of concrete weights of such a network. Therefore any training algorithm takes at least n_1 + ... + n_h time steps. Of course, NP-hardness results for unary numbers are stronger than hardness results for a decimal notation. Consequently, instances of any loading problem in this section have the form (n, n_1, ..., n_h, 1), P, where the architecture is contained in a specified class F as mentioned above, and P is a finite training set with patterns in Q^n × {0,1} which is denoted in a decimal notation.

The result of Blum and Rivest [19] shows that an increase of the input dimension is critical and leads to NP-completeness results. But the architecture considered in [6, 19, 108] is restricted to a single hidden layer with a restricted output unit. Here we generalize the argumentation to general multilayer feed-forward architectures.

Theorem 5.2.3. Assume F = {F_n | n ∈ N}, where F_n is given by a feed-forward perceptron architecture (n, n_1 ≥ 2, n_2, ..., n_h, 1) with h ≥ 1. In particular, only n is allowed to vary. Then the loading problem is NP-complete.

Proof. In [6] it is shown that for fixed h = 1 and n_1 ≥ 2, but varying input dimension n and binary patterns, the loading problem is NP-complete if the output unit computes o(x_1, ..., x_{n_1}) = x_1 ∧ ... ∧ x_{n_1} for x_i ∈ {0,1}. We reduce this loading problem to the loading problem as stated in the theorem. One instance given by the architecture (n, n_1, 1), n_1 fixed and o = AND, is mapped to an instance given by the architecture (ñ, n_1, ..., n_h, 1) with fixed h, n_2, ..., n_h as above, and ñ = n + n_1 + 1. Assume a pattern set P = {(x_i, y_i) ∈ Q^n × {0,1} | i = 1, ..., m} and an architecture of the first type are given. We enlarge P to guarantee that a bigger architecture necessarily computes an AND in the layers following the first hidden layer. Define

P̃ = {((x_i, 0, ..., 0), y_i) ∈ R^ñ × {0,1} | i = 1, ..., m}
    ∪ {((0, ..., 0, ẑ_i, 1), 0), ((0, ..., 0, z̃_i, 1), 1) | i = 1, ..., n_1(n_1 + 1)}
    ∪ {((0, ..., 0, p_i, 1), q_i) | i = 1, ..., 2^{n_1}},
where ẑ_i, z̃_i, p_i ∈ R^{n_1} and q_i ∈ {0,1} are constructed as follows: Choose n_1 + 1 points with positive coefficients and zero ith coefficient on each hyperplane H_i = {x ∈ R^{n_1} | the ith component of x is 0}, such that n_1 + 1 of these points lie on one hyperplane if and only if they lie on one H_i. Denote the points by z_1, z_2, .... Define z̃_i ∈ R^{n_1} such that z̃_i equals z_i except for the ith component, which equals a small positive value ε. Define ẑ_i in the same way, but with ith component -ε. ε can be chosen such that if one hyperplane in R^{n_1} separates at least n_1 + 1 pairs (ẑ_i, z̃_i), these pairs coincide with the n_1 + 1 pairs corresponding to the n_1 + 1 points on one hyperplane H_i, and the separating hyperplane nearly coincides with H_i (see Fig. 5.2). This is due to the fact that n_1 + 1 points z_{i_1}, ..., z_{i_{n_1+1}} do not lie on one hyperplane if and only if the determinant of the matrix with columns (1, z_{i_1}), ..., (1, z_{i_{n_1+1}}) does not vanish. If ε is small enough, the inequality holds if z_i is substituted by any point on the line from ẑ_i to z̃_i. The p_i are all points in {-1,1}^{n_1}, and q_i = 1 if and only if p_i = (1, ..., 1). After decreasing the above ε if necessary, we can assume that no p_i lies on any hyperplane which separates n_1 + 1 pairs ẑ_i and z̃_i.

Fig. 5.2. Additional points in P̃ which determine the behavior of large parts of the neural architecture.

Assume P can be loaded with a network N of the first architecture. We construct a solution of P̃ with a network Ñ of the second architecture: For each of the neurons in the first hidden layer of Ñ, choose the first n weights and the bias as the weights or bias, respectively, of the corresponding neuron in N. The next n_1 weights are 0, except for the ith weight in the ith neuron of the first hidden layer of Ñ, which is 1, and the (n + n_1 + 1)st weight, which is -θ_i if θ_i is the bias of the ith neuron in the hidden layer of N. The neurons in the other layers of Ñ compute an AND. This network maps P̃ correctly.

Assume P̃ can be loaded with a network Ñ of the second architecture. The points (..., z̃_i, 1) and (..., ẑ_i, 1) are mapped differently; therefore each pair is separated by at least one of the hyperplanes defined by the neurons
in the first hidden layer. Because n_1(n_1 + 1) points are to be separated, the hyperplanes defined by these neurons nearly coincide in the dimensions n + 1, ..., n + n_1 with the hyperplanes H_i we have used in the construction of ẑ_i and z̃_i, where the dimension n + n_1 + 1 serves as an additional bias for inputs with corresponding coefficient 1. We can assume that the point (..., z̃_i, 1) is mapped to 1 by the neuron corresponding to H_i. Maybe we have to change the sign of the weights and the biases beforehand, assuming w.l.o.g. that no activation coincides with zero exactly. Then the values of {(0, ..., 0, p_i, 1) | i = 1, ..., 2^{n_1}} are mapped to the entire set {0,1}^{n_1} by the neurons of the first hidden layer, and the remaining part of the network necessarily computes a function which equals the logical function AND on these values. Consequently, the network N maps P correctly if the weights in the first hidden layer equal the weights of Ñ restricted to dimension n, and if the output unit computes an AND. []

As a consequence of this result, the complexity of training a realistic perceptron architecture increases rapidly if the number of input neurons increases. This fact gives us a theoretical motivation to design any concrete learning task in such a way that the input dimension becomes small. Apart from an improved generalization capability, this leads to short training times. However, this argumentation is not valid for every pattern set. In fact, we have used a special pattern set in the above proof where the patterns are highly correlated. The situation may be different if we restrict the patterns to binary or even more special pattern sets. For example, if the patterns are orthogonal, the loading problem becomes trivial: Any orthogonal pattern set can be loaded. Therefore one often uses a unary instead of a binary encoding, although such an encoding increases the input dimension in practical applications [112]. This method may speed up the learning because the patterns become uncorrelated and need fewer hyperplanes, which means fewer computation units, for a correct classification.

Another problem that occurs if we have fixed the input representation is choosing an appropriate architecture. One standard method is to try several different architectures and afterwards to choose the architecture with minimum generalization error. But this method needs an efficient training algorithm for network architectures with a varying number of hidden units in order to be efficient. Unfortunately, even here the loading problem is NP-complete and may therefore lead to an increasing amount of time which is necessary to find an optimum architecture with the above method.

Theorem 5.2.4. The loading problem for architectures from the set F = {(n, n_1, ..., n_h, 1) | n_1, ..., n_h ∈ N} is NP-complete for any fixed h ≥ 2 and n ≥ 2.

Proof. In [89] it is shown that the problem of deciding whether two sets of points R and Q in Q² can be separated by k lines, i.e., each pair p ∈ R and q ∈ Q lies on different sides of at least one line, is NP-complete. Consider
the training set

P = {(x_i, y_i) ∈ Q² × {0,1} | (x_i ∈ R ∧ y_i = 1) ∨ (x_i ∈ Q ∧ y_i = 0)}
for two sets of points R and Q. P can be loaded by a network with structure (2, k, |R|, 1) if and only if R and Q are separable: Assume P can be loaded. The hidden nodes in the first hidden layer define k lines in R². Each point p ∈ R is separated from each point q ∈ Q by at least one line, because otherwise the corresponding patterns would be mapped to the same value. Assume R and Q are separable by k lines. Define the weights of the neurons in the first hidden layer according to these lines. Let R = {p_1, ..., p_{|R|}}. Let the jth hidden unit in the second hidden layer compute (x_1, ..., x_k) ↦ (¬)x_1 ∧ ... ∧ (¬)x_k, where the ¬ takes place at x_i if p_j lies on the negative side of the ith hyperplane. In particular, the unit maps p_j to 1 and q to 0 for all q ∈ Q. Consequently, if the output unit computes an OR, the pattern set P is mapped correctly. This argumentation can be directly transferred to h > 2 and n > 2. []

Obviously, the same problem becomes trivial if we restrict the inputs to binary patterns. One immediate consequence of the NP results is that we can find a finite pattern set for arbitrarily large networks such that it cannot be implemented by the network. As a consequence, any kind of universal approximation argument for feed-forward networks necessarily has to take the number of patterns into account. The number of parameters necessary for an approximation depends in general on this parameter. Here another interesting question arises: What is the complexity of training if we allow the number of parameters to vary, but only in dependence on the number of training points? As already mentioned, one extreme position is to allow a number of hidden units that equals the size of the training set plus a constant. Then the loading problem becomes trivial because any set can be implemented. Other situations are considered in the following. One problem that plays a key role in the argumentation is the set splitting problem, which is NP-complete.

Definition 5.2.1. The k-set splitting problem (k-SSP) is the following task: A set of points C = {c_1, ..., c_n} and a set of subsets of these points S = {s_1, ..., s_m} are given. Decide whether a decomposition of C into k subsets S_1, ..., S_k exists such that every set s_i is split by this decomposition. In a formal notation: Do S_1, ..., S_k ⊆ C exist with S_i ∩ S_j = ∅ for all i ≠ j and ∪_{i=1}^{k} S_i = C such that s_i ⊄ S_j for all i and j?

The k-set splitting problem is NP-complete for any k ≥ 2, and it remains NP-complete for k = 2 if S is restricted such that any s_i contains exactly 3 elements [37].
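Since the set splitting problem is the combinatorial core of the following reductions, a brute-force decision procedure may help to make the definition concrete. The sketch below is illustrative only; it is exponential in |C| and therefore no contradiction to the NP-completeness of the problem.

    from itertools import product

    def is_set_splitting(C, S, k):
        """Brute-force decision procedure for the k-set splitting problem.

        C: list of elements, S: list of subsets (as Python sets) of C.
        Returns True iff C can be partitioned into k classes such that
        no subset s in S is contained in a single class."""
        for assignment in product(range(k), repeat=len(C)):
            classes = {c: assignment[i] for i, c in enumerate(C)}
            if all(len({classes[c] for c in s}) > 1 for s in S):
                return True
        return False

    # toy instance: C = {1,2,3,4}, S = {{1,2,3},{2,3,4}}, k = 2
    print(is_set_splitting([1, 2, 3, 4], [{1, 2, 3}, {2, 3, 4}], 2))  # True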
The set splitting problem is reduced to a loading problem for a neural architecture in [19], showing the NP-completeness of the latter task. We use it in the next theorem, too.

Theorem 5.2.5. For F = {(n, n_1, 1) | n, n_1 ∈ N} and instances (n, n_1, 1), P with the restriction |P| ≤ 2·(n_1 + 1)², the loading problem is NP-complete.
Proof. We reduce an instance C = {c_1, ..., c_n}, S = {s_1, ..., s_m} with |s_i| = 3 for all i of the 2-SSP to an instance (n' + 3, n + m, 1) of the loading problem, where the pattern set P consists of 2n² + m² + 3mn + n + 3m + 3 points. In particular, |P| ≤ 2(n + m + 1)². The 2-SSP (C, S) has a solution if and only if the k = n + m-SSP

C' = {c_1, ..., c_n, c_{n+1}, ..., c_{n'=n+k-2}},
S' = {s_1, ..., s_m, s_{m+1}, ..., s_{m'=m+(n+k-3)(k-2)}}
is solvable, where {s_{m+i} | i = 1, ..., m' - m} = {{c_j, c_k} | j ∈ {n+1, ..., n+k-2}, k ∈ {1, ..., n+k-2}\{j}}. Note that |s_i| ≤ 3 for all i. The new points and sets in C' and S', respectively, ensure that k - 2 of the k splitting sets have the form {c_i}, where i > n, and the remaining two sets correspond directly to a solution of the original 2-set splitting problem. Such an instance of the k-set splitting problem is reduced to an instance of the (n' + 3, k, 1) loading problem as follows: The training set P consists of two parts. One part corresponds to the SSP directly, the other part determines that the output unit computes an AND or a function which plays an equivalent role in a feed-forward perceptron network. The first part consists of the following points:

- The origin (0, ..., 0) is mapped to 1,
- a vector (0, ..., 0, 1, 0, ..., 0, 1, 0, ..., 0) for any set s_i ∈ S', which equals 1 at the coefficient j if and only if c_j ∈ s_i, is mapped to 1,
- the unit vectors (0, ..., 0, 1, 0, ..., 0) with an entry 1 at the place i for i ∈ {1, ..., n'} are mapped to 0.

Consequently, we have constructed a training pattern corresponding to any point c_i and set s_j, respectively, in the first part of the pattern set. The patterns in the second part have an entry 0 at the first n' places, an entry 1 at the (n' + 3)rd place in order to simulate an additional bias, and the remaining two components n' + 1, n' + 2 are constructed as follows: Define the points x^{ij} = (4(i-1) + j, j(i-1) + 4((i-2) + ... + 1)) for i ∈ {1, ..., k}, j ∈ {1, 2, 3}. These 3k points have the property that if three of them lie on one line, then we can find an i such that the three points coincide with x^{i1}, x^{i2}, and x^{i3}. Now we divide each point into a pair p^{ij} and n^{ij} of points which are obtained by a slight shift of x^{ij} in a direction that is orthogonal to the line [x^{i1}, x^{i3}] (see Fig. 5.3). Formally, p^{ij} = x^{ij} + εn_i and n^{ij} = x^{ij} - εn_i, where n_i is a normal vector of the line [x^{i1}, x^{i3}] with a positive second coefficient and ε is a small positive value. ε can be chosen such that the following holds:
Fig. 5.3. Construction of the points p^{ij} and n^{ij}: the points result by dividing each point x^{ij} on the lines into a pair.

Assume one line separates three pairs (n^{i_1 j_1}, p^{i_1 j_1}), (n^{i_2 j_2}, p^{i_2 j_2}), and (n^{i_3 j_3}, p^{i_3 j_3}); then the three pairs necessarily correspond to the three points on one line, which means i_1 = i_2 = i_3. This property is fulfilled if the determinant of all triples of points (1, y_1^{ij}, y_2^{ij}), where (y_1^{ij}, y_2^{ij}) is some point in [x^{ij} + εn_i, x^{ij} - εn_i] and the indices i are different in such a triple, does not vanish. Using Proposition 6 of [89] it is sufficient for this purpose to choose ε < 1/(24 · k(k-1) + 6) if n_i is a vector of length 1. Consequently, the representation of the points n^{ij} and p^{ij} is polynomial in n and m. The patterns (0, ..., 0, p_1^{ij}, p_2^{ij}, 1) are mapped to 1, the patterns (0, ..., 0, n_1^{ij}, n_2^{ij}, 1) are mapped to 0.

Now assume that the SSP (C', S') is solvable and let S_1, ..., S_k be a solution. Then the corresponding loading problem can be solved with the following weights: The jth weight of neuron i in the hidden layer is chosen as -1 if c_j ∈ S_i and 2 otherwise; the bias is chosen as 0.5. The weights (n' + 1, n' + 2, n' + 3) of the ith neuron are chosen as (-i + 1, 1, -0.5 + 2i(i-1)), which corresponds
to the line through the points x^{i1}, x^{i2}, and x^{i3}. The output unit has the bias -k + 0.5 and weights 1, i.e., it computes an AND. With this choice of weights one can compute that all patterns are mapped correctly. Note that the point corresponding to s_i ∈ S' is mapped correctly because s_i is not contained in any S_j and s_i contains at most 3 elements.

Assume, conversely, that a solution of the loading problem is given. We can assume that the activations of the neurons do not exactly coincide with 0 when the outputs on P are computed. Consider the mapping which is defined by the network on the plane {(0, ..., 0, x_{n'+1}, x_{n'+2}, 1) | x_{n'+1}, x_{n'+2} ∈ R}. The points p^{ij} and n^{ij} are contained in this plane. Because of the different outputs, each pair (p^{ij}, n^{ij}) has to be separated by at least one line defined by the hidden neurons. A number 3k of such pairs exists. Therefore, each of the lines defined by the hidden neurons necessarily separates three pairs (p^{ij}, n^{ij}) with j ∈ {1, 2, 3} and nearly coincides with the line defined by [x^{i1}, x^{i3}]. Denote the output weights of the network by w_1, ..., w_k and the output bias by θ. We can assume that the ith neuron nearly coincides with the ith line and that the points p^{ij} are mapped by the neuron to the value 0. Otherwise we change all signs of the weights and the bias in neuron i, change the sign of the weight w_i, and increase θ by w_i. But then the points p^{i2} are mapped to 0 by all hidden neurons, and the points n^{i2} are mapped to 0 by all but one hidden neuron. This means that θ > 0 and θ + w_i < 0 for all i, and therefore θ + w_{i_1} + ... + w_{i_l} < 0 for all i_1, ..., i_l ∈ {1, ..., k} with l ≥ 1. This means that the output unit computes the function (x_1, ..., x_k) ↦ ¬x_1 ∧ ... ∧ ¬x_k on binary values. Define a solution of the SSP as S_i = {c_j | the jth unit vector is mapped to 1 by the ith hidden neuron}\(S_1 ∪ ... ∪ S_{i-1}). This definition leads to a decomposition of C' into S_1, ..., S_k. Any set s_i is split, because the situation in which all points corresponding to some c_j in s_i are mapped to 1 by one hidden neuron would lead to the consequence that the vector corresponding to s_i is mapped to 1 by the hidden neuron, too, because its bias is negative due to the classification of the zero vector. Then the pattern corresponding to s_i would be mapped to 0 by the network, a contradiction. []
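The geometric part of this reduction is easy to reproduce numerically. The following sketch (hypothetical code, not part of the proof; it fixes ε to an arbitrary small value instead of the bound derived above) generates the points x^{ij} and the perturbed pairs p^{ij}, n^{ij}:

    import numpy as np

    def reduction_points(k, eps=1e-3):
        """Generate the 3k points x^{ij} and the pairs p^{ij}, n^{ij} (i = 1..k, j = 1..3).

        Uses x^{ij} = (4(i-1)+j, j(i-1) + 2(i-1)(i-2)), where the second summand
        equals 4*((i-2)+...+1) from the construction above."""
        pos, neg = [], []
        for i in range(1, k + 1):
            xs = [np.array([4 * (i - 1) + j,
                            j * (i - 1) + 2 * (i - 1) * (i - 2)], dtype=float)
                  for j in (1, 2, 3)]
            # unit normal of the line [x^{i1}, x^{i3}] with positive second coefficient
            direction = xs[2] - xs[0]
            normal = np.array([-direction[1], direction[0]])
            if normal[1] < 0:
                normal = -normal
            normal /= np.linalg.norm(normal)
            for x in xs:
                pos.append(x + eps * normal)   # p^{ij}, mapped to 1
                neg.append(x - eps * normal)   # n^{ij}, mapped to 0
        return pos, neg

    p_pts, n_pts = reduction_points(k=3)
    print(len(p_pts), len(n_pts))   # 9 pairs for k = 3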
If we are interested in an analogous result where the input dimension is fixed, we can have a closer look at the NP-completeness proof in [89] and obtain the following result:

Theorem 5.2.6. The loading problem for architectures from the set F = {(2, n_1, n_2, 1) | n_1, n_2 ∈ N} with instances (2, n_1, n_2, 1) and P with |P| ≤ n_1³ as input is NP-complete.

Proof. Megiddo reduces in [89] a 3-SAT problem with m clauses and n variables to a problem of k-polyhedral separability with k = 8(u + 1) + nm lines and m + nm² + 16(u + 1)(m + nm²) positive and m + nm² + 32(u + 1)(m + nm²) negative points, with coefficients that are polynomial in n and m and with a polynomial value u. This problem can be reduced as in the proof of Theorem
5.2.4 to a loading problem with the same number of points and n_1 = k, n_2 = m + nm² + 16(u + 1)(m + nm²) hidden units. In particular, (8(u + 1) + nm)³ ≥ (m + nm²)(2 + 16(u + 1) + 32(u + 1)) for n ≥ 3, m ≥ 3. []
One interesting question to ask is which of the above NP-completeness results remain valid if we restrict the weights of a possible solution. This means that an input for the loading problem consists of the functions described by (n, n_1, ..., n_h, 1), with the additional condition that in a network all weights and biases are absolutely bounded by some constant B. Here we assume that the weights are integers, since for real or rational values an arbitrary restriction would be possible by an appropriate scaling of the weights. This question is relevant since in practical applications a weight decay term is often minimized parallel to the empirical error minimization. It may be the case that training is NP-complete, but only due to the fact that we sometimes have to find a solution with extreme weights which is hard to find. A weight restriction would lead to the answer 'no' in this case. Note that the formulations of the above theorems can be transferred directly to an analogous situation in which the weights are restricted. When considering the proofs, Theorem 5.2.3 holds with a weight restriction B = k. A slight modification of the proof allows the stronger bound B = 2: The SSP is NP-complete even if the sets s_i ∈ S contain at most 3 elements. Therefore, the biases of the first hidden layer are limited. Furthermore, we can substitute an AND by the function (x_1, ..., x_n) ↦ ¬x_1 ∧ ... ∧ ¬x_n in the network via a change of all weight signs in the first hidden layer. This function can be computed with weights and biases restricted by B. Theorem 5.2.4 does not hold for restricted weights, since a restriction of the weights would lead to a finite number of different separating hyperplanes in R² which correspond to the neurons in the first hidden layer. Therefore a weight restriction reduces the loading problem to a finite problem which is trivial. The same is valid for Theorem 5.2.6. In Theorem 5.2.5 the weights of a possible solution are unbounded in order to solve the more and more complicated situation which we have constructed in the last three input dimensions. But unfortunately, we are not aware of any argument which shows that the loading problem becomes easier for restricted weights in this situation.

We conclude this section with a remark on the complexity of algorithms which are allowed to change their architecture during the training process. If the changes are not limited, then the training may produce a network with an arbitrary number of neurons. This does not generalize in the worst case but can approximate any pattern in an appropriate way: Here again, the loading problem becomes trivial. In order to restrict the architecture in some way it is possible to formulate the learning task as the question as to whether an architecture with at most a limited number of neurons exists which loads the given data correctly. But the number of neurons does not necessarily coincide exactly with the maximum
number for a concrete output. In a formal notation this corresponds to an input F_l which consists of the functions given by all architectures (n, n_1, ..., n_{h'}, 1) whose total number of neurons is bounded in terms of l.
5.3 The Sigmoidal Case
Of course, the strongest drawback of the results in the previous section is that the activation function is a step function. In realistic applications one deals with a sigmoidal or at least continuous activation. In this section we derive one complexity result for the sigmoidal case which can be seen as a transfer of the complexity result for the 3-node perceptron architecture in [19] to the sigmoidal activation. In contrast to the work of [61, 135], we do not consider the task of minimizing the empirical error, but we use a sigmoidal network as a classifier. Since a classification is easier to obtain than an approximation, NP-hardness results for a classification are more difficult to prove. In contrast to [134] we substitute the output separation condition by a condition which seems more natural and makes it possible to transfer the result to an arbitrarily scaled or shifted version of the standard sigmoidal function, for example the tanh. Furthermore, our result even holds for functions that are only similar to the sigmoidal function. We deal with a feed-forward architecture of the form (n, 2, 1) where the input dimension n is allowed to vary, the two hidden nodes have a sigmoidal activation σ instead of a perceptron activation, and the output function is the following modification of the perceptron activation:
H_ε(x) = 0 if x ≤ -ε,  undefined if -ε < x < ε,  and 1 otherwise.
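As a small illustration (a hypothetical sketch, not part of the text), the classifier computed by such an (n, 2, 1) architecture with the H_ε output rule can be written as follows, where None encodes the 'undefined' band:

    import numpy as np

    def sgd(x):
        """Standard sigmoidal (logistic) activation."""
        return 1.0 / (1.0 + np.exp(-x))

    def three_node_output(x, a, a0, b, b0, alpha, beta, gamma, eps):
        """Output of the (n, 2, 1) sigmoidal architecture with the H_eps output rule.

        Returns 0, 1, or None (None = 'undefined': the activation falls into (-eps, eps))."""
        value = alpha * sgd(np.dot(a, x) + a0) + beta * sgd(np.dot(b, x) + b0) + gamma
        if value <= -eps:
            return 0
        if value >= eps:
            return 1
        return None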
The purpose of this definition is to ensure that any classification is performed with a minimum accuracy. Output values that are too small are simply rejected. It is necessary to restrict the output weights, too, since otherwise any classification accuracy could be obtained by an appropriate scaling of the output values. Altogether this leads to a loading problem of the following form:
Definition 5.3.1. The loading problem for a 3-node architecture with varying input dimension, activation function σ, accuracy ε > 0, and weight restriction B > 0 is the loading problem given by F = {(n, 2, 1) | n ∈ N}, where the architecture (n, 2, 1) has the activation function σ in the hidden nodes and the activation function H_ε in the output node, and fulfills the additional condition that the output weights are absolutely bounded by B. The output bias may be arbitrary. The pattern set P can be any set in Q^n × {0,1}.

As already mentioned, the additional restrictions only ensure a certain accuracy of the classification. Now we can state the following theorem for a sigmoidal network.

Theorem 5.3.1. The above loading problem given by F = {(n, 2, 1) | n ∈ N} with activation function σ = sgd of the hidden nodes, output activation H_ε, and weight restriction B of the output weights is NP-hard for any accuracy ε ∈ ]0, 0.5[ and weight restriction B ≥ 2.
Proof. First of all we take a closer look at the geometric situations which may occur in such a classification. A 3-node network computes the function x ↦ H_ε(α sgd(a^t x + a_0) + β sgd(b^t x + b_0) + γ) for some weights α, β, γ, a, b, a_0, b_0, where |α| ≤ B and |β| ≤ B. Assume a 3-node architecture classifies a pattern set correctly with accuracy ε and weight restriction B. The set of parameters such that the patterns are mapped correctly is an open set in R^{2n+5}; therefore, after a slight shift of the parameters, if necessary, we can assume that γ ≠ 0, α + γ ≠ 0, β + γ ≠ 0, α + β + γ ≠ 0, α ≠ 0, and β ≠ 0. Furthermore, we can assume that a and b are linearly independent and the same is valid for all restrictions of a and b to at least two dimensions. We are interested in the boundary that is defined by

(*)  α sgd(a^t x + a_0) + β sgd(b^t x + b_0) + γ = 0.

This is empty or forms an (n-1)-dimensional manifold M of the following form: If x ∈ M, then x + v ∈ M for any v orthogonal to a and b. Consequently, M is constant in the directions orthogonal to a and b; to describe M it is sufficient to describe the curve which is obtained if M is intersected with a plane containing a and b (see Fig. 5.4). Here we are only interested in the geometric form. Therefore, we can assume a^t x + a_0 = x_1, where x_1 is the first component of x after a rotation, translation, and scaling, if necessary, which does not affect the principal form. Then the curve which describes M can be parameterized by x_1. A normal vector can be parameterized by n(x_1) = α sgd'(x_1) · a + β sgd'(b^t x + b_0) · b, where the term sgd'(b^t x + b_0) can be substituted using (*), which means

n(x_1) = α sgd'(x_1) · a + (-γ - α sgd(x_1)) · (1 - (-γ - α sgd(x_1))/β) · b.
Fig. 5.4. Manifold M which forms the boundary of the classification: M can be entirely determined by the curve which is obtained if M is intersected with a plane containing the vectors a and b.

Define ñ(x_1) = n(x_1)/|n(x_1)|. Considering the four values γ, γ + α, γ + β, and γ + α + β, several cases result:

- All values are positive or all values are negative: M is empty.
- One value is positive, the other three are negative: Since sgd(-x) = 1 - sgd(x) we can assume that γ > 0. Maybe we have to change the sign of the weights and the biases beforehand. If, for example, α + γ is positive, we substitute α by -α, γ by α + γ, and the weights of the first hidden neuron by their negative values without changing the entire mapping. Consequently, α < -γ and β < -γ. Dividing (*) by γ we obtain γ = 1, α < -1, and β < -1. The curve describing M looks like that depicted in Fig. 5.5; in particular, the region classified positively is convex, as can be seen as follows: For sgd(x_1) → -1/α the normal vector is ñ(x_1) → -a/|a|. For sgd(x_1) → 0 it is ñ(x_1) → -b/|b|. In general, ñ(x_1) = λ_1(x_1)a + λ_2(x_1)b for appropriate functions λ_1 and λ_2. Assume that the curve is not convex. Then there would exist at least two points on the curve with identical ñ, identical λ_1/λ_2, and, consequently, at least one point x_1 with (λ_1/λ_2)'(x_1) = 0. But one can compute (λ_1/λ_2)'(x_1) = C(x_1) · (-β - 1 + sgd(x_1)(2β + 2) + sgd²(x_1)(2α + α² + αβ)) with some factor C(x_1) = αβ sgd'(x_1)/((-1 - α sgd(x_1))²(1 + β + α sgd(x_1))²) ≠ 0. If (λ_1/λ_2)'(x_1) was 0, then α = β = -1 or
(**)  sgd(x_1) = (-β - 1)/(α(α + β + 2)) ± √( (1 + β)((α + 1)² + β(α + 1)) / (α²(α + β + 2)²) ),
where the term under the square root is negative except for α = -1 or β = -1, because (1 + α), (1 + β), and (1 + α + β) are negative.
Fig. 5.5. Left: classification in case 2, precisely 1 value is positive; right: classification in case 3, precisely 2 values are positive.
- Exactly two values are positive: We can assume that the positive values are γ and β + γ. Maybe we have to change the role of the two hidden nodes or the signs of the weights and the biases beforehand. Note that γ and α + β + γ cannot both be positive in this situation. If sgd(x_1) → -γ/α or sgd(x_1) → (-γ - β)/α, then it holds that ñ(x_1) → -a/|a|. Arguing as before, we can assume γ = 1, α < -1, β > -1, and α + β < -1. The curve describing M has an S-shaped form (see Fig. 5.5) because there exists at most one point on the curve where (λ_1/λ_2)'(x_1) vanishes: This point is sgd(x_1) = 0.5 if α + β + 2 = 0, the point is the solution (**) with the positive sign if α + β + 2 < 0, and the solution (**) with the negative sign if α + β + 2 > 0.
- Exactly three values are positive: This case is dual to case 2.
To summarize, the classification can have one of the following four forms: the boundary M is empty, the positively classified region is convex, the boundary is S-shaped, or the negatively classified region is convex.
Now we reduce the 2-set splitting problem to a loading problem for this architecture.

Reduction: For an SSP (C, S) with C = {c_1, ..., c_n} and S = {s_1, ..., s_m}, where we can assume |s_i| = 3 for all i [37], the following m + n + 15 patterns in R^{n+5} can be loaded if and only if the SSP is solvable. Positive examples, i.e., the output is to be 1, are
- (0, ..., 1, ..., 1, ..., 1, ..., 0, 0, 0, 0, 0) with an entry 1 at the places i, k, and l for any s_j = {c_i, c_k, c_l} in S,
- (0, ..., 0, 0, 0, 0, 0, 0),
- (0, ..., 0, 1, 1, 0, 0, 0),
- (0, ..., 0, 0, 1, 1, 0, 0),
- (0, ..., 0, 0, 0, 0, -0.5, 0.5),
- (0, ..., 0, 0, 0, 0, 0.5, 0.5),
- (0, ..., 0, 0, 0, 0, c, c),
- (0, ..., 0, 0, 0, 0, -c, c),

c being a constant with c > 1 + 4B/ε · (sgd^{-1}(1 - ε/(2B)) - sgd^{-1}(ε/(2B))).
Negative examples, i.e., the output is to be 0, are

- (0, ..., 0, 1, 0, ..., 0, 0, 0, 0, 0, 0) with an entry 1 at place i for i ≤ n,
- (0, ..., 0, 1, 0, 0, 0, 0),
- (0, ..., 0, 0, 1, 0, 0, 0),
- (0, ..., 0, 0, 0, 1, 0, 0),
- (0, ..., 0, 1, 1, 1, 0, 0),
- (0, ..., 0, 0, 0, 0, -1.5, 0.5),
- (0, ..., 0, 0, 0, 0, 1.5, 0.5),
- (0, ..., 0, 0, 0, 0, 1 + c, c),
- (0, ..., 0, 0, 0, 0, -1 - c, c) with c as above.
Assume that the SSP is solvable with a partition C = S_1 ∪ S_2 where S_1 ∩ S_2 = ∅. Consider the weights α = β = -1, γ = 0.5,

a = k · (a_1, ..., a_n, 1, -1, 1, 1, -1),
b = k · (b_1, ..., b_n, -1, 1, -1, -1, -1),
a_0 = -0.5 - k,  b_0 = -0.5 - k,

where k is a positive constant, a_i = 1 if c_i ∈ S_1 and a_i = -2 otherwise, and b_i = 1 if c_i ∈ S_2 and b_i = -2 otherwise. For appropriate k, this solves the loading problem with accuracy ε < 0.5 because sgd(-x) → 0 and sgd(x) → 1 for x → ∞.
Assume that the loading problem is solvable. First, cases 3 and 4 are excluded. Then a solution of the SSP is constructed using the convexity of the positive region in case 2. Obviously, case 1 can be excluded directly.
Assume the classification is of case 3: We only consider the last two dimensions, where the following problem is included (we drop the first n + 3 coefficients, which are 0): The points (-0.5, 0.5), (0.5, 0.5), (c, c), (-c, c) are mapped to 1 and the points (-1.5, 0.5), (1.5, 0.5), (1 + c, c), (-1 - c, c) are mapped to 0 (see Fig. 5.6a). Define p_0 := sgd^{-1}(ε/(2B)) and p_1 := sgd^{-1}(1 - ε/(2B)).
Fig. 5.6. a) Classification problem in the last two dimensions; b) outside the b-relevant region.

The sets {x | p_0 ≤ a^t x + a_0 ≤ p_1} and {x | p_0 ≤ b^t x + b_0 ≤ p_1} are called the a- and b-relevant region, respectively. Outside, sgd(a^t x + a_0) or sgd(b^t x + b_0), respectively, can be substituted by a constant; the difference of the output is at most ε/2. Now the argumentation proceeds as follows: First it is shown that three points forming a triangle are contained in the a-relevant region. This leads to a bound for the absolute value of a. Second, if three points forming a triangle are contained in the b-relevant region, the same argumentation leads to a bound for the absolute value of b. Using these bounds it can be seen that neighboring points cannot be classified differently. Third, if no such triangle is contained in the b-relevant region, the part b^t x + b_0 does not contribute to the classification of neighboring points outside the b-relevant region, and the two points cannot be classified differently.

First step: Since the points with second component 0.5 cannot be separated by one hyperplane, one point (x, 0.5) with x ∈ [-1.5, 1.5] exists inside the a- and b-relevant region, respectively. If the points (c, c) and (1 + c, c) were both outside the a-relevant region, then they would be separated by any hyperplane with normal vector b which intersects the separating manifold outside the a-relevant region, because the part given by a does not contribute to the classification (see Fig. 5.6b). The normal vector of the manifold is approximately -a/|a| for large and small b^t x + b_0, respectively. Therefore we can find a hyperplane where both points are located on the same side. Contradiction. The same argumentation holds for (-c, c) and (-1 - c, c). Therefore the diameter of the a-relevant region restricted to the last two dimensions is at least c - 1. Consequently, a ≤ (p_1 - p_0)/(c - 1) < ε/(4B), where a = |(a_{n+4}, a_{n+5})|.

Second step: If one of the points (c, c) and (1 + c, c) and one of the points (-c, c) and (-1 - c, c) is contained in the b-relevant region, it follows that b < ε/(4B) for b = |(b_{n+4}, b_{n+5})|. This leads to a contradiction: For the points x_1 = (0, ..., 0, c, c) and x_2 = (0, ..., 0, 1 + c, c) we can compute
|α sgd(a^t x_1 + a_0) + β sgd(b^t x_1 + b_0) + γ - α sgd(a^t x_2 + a_0) - β sgd(b^t x_2 + b_0) - γ| ≤ |α| · |a^t x_1 - a^t x_2| + |β| · |b^t x_1 - b^t x_2| ≤ ε, because |α|, |β| ≤ B and |sgd(x) - sgd(x + δ)| ≤ δ for δ > 0.

Third step: If both points (c, c) and (1 + c, c) or both points (-c, c) and (-1 - c, c) are outside the b-relevant region, the difference of the values sgd(b^t x + b_0) with corresponding x is at most ε/(2B). The same contradiction results.

Fig. 5.7. Classification problem; projection of the classification to the a/b-plane: at least one negative point is not classified correctly.
Assume the classification is of case 4: The classification includes, in the dimensions n + 1 to n + 3, the problem depicted in Fig. 5.7. The negative points are contained in a convex region; each positive point is separated from all negative points by at least one tangential hyperplane of the separating manifold M. Consider the projection to a plane parallel to a and b. Following the convex curve which describes M, the signs of the coefficients of a normal vector can change at most once. But a normal vector which separates a positive point and is oriented in such a way that it points to the positive points necessarily has the signs (+, +, -) for (1, 1, 0), (-, +, +) for (0, 1, 1), and (-, -, -) for (0, 0, 0) in the dimensions n + 1 to n + 3. Contradiction.
Solution of the SSP: Consequently, the classification is of case 2. We can assume γ = -1, α > 1, and β > 1. Define S_1 = {c_i | a_i is positive} and S_2 = C\S_1. Assume s_t = {c_i, c_j, c_k} exists such that all three coefficients a_i, a_j, and a_k are positive. In the components i, j, k the classification (1, 0, 0), (0, 1, 0), (0, 0, 1) ↦ 0 and (0, 0, 0), (1, 1, 1) ↦ 1 is contained. The positive points are contained in a convex region; each negative point is separated by at least one tangential hyperplane of the separating manifold M. We project to a plane parallel to a and b. Following the curve which describes M, the normal vector, oriented to the positive region, is ≈ -a/|a|, then the sign of each component of the normal vector changes at most one time, and finally it is ≈ -b/|b|. But a vector where the three signs in dimensions i, j, and k are equal cannot separate a negative point, because (0, 0, 0) and (1, 1, 1) are mapped to 1. Furthermore,
the sign in dimension i has to be negative if c_i is separated, and the same is valid for j and k. Contradiction. The same argumentation shows that not all three coefficients a_i, a_j, and a_k can be negative. []

Note that we have not used the special form of the sigmoidal function, but only some of its properties. Consequently, the same result, with perhaps different accuracy ε and weight restriction B, is valid for any activation function σ : R → R which is continuous, piecewise continuously differentiable, symmetric, squashing, and where the boundary limits a convex region in cases 2 and 4. Furthermore, the request for a certain accuracy enables us to generalize the NP-hardness result to functions which can be approximated by the sigmoidal function, but are not identical.

Corollary 5.3.1. The loading problem for the 3-node architecture with accuracy ε ∈ ]0, 1/3[, weight restriction 2, and varying input dimension is NP-hard for any activation function σ which can be approximated by the standard sigmoidal activation in the following way: For all x ∈ R, |σ(x) - sgd(x)| ≤ ε/8 holds.

Proof. Consider the reduction in the main proof with the function sgd, weight restriction 2, and accuracy ε/2. If the SSP is solvable, we can not only find a solution of the corresponding loading problem with accuracy ε/2, but accuracy 3ε/2 < 0.5. This leads to a solution of the loading problem for the activation σ with weight restriction 2 and accuracy ε. Conversely, any solution of the loading problem with activation σ, weight restriction 2, and accuracy ε can be transformed into a solution with activation sgd, weight restriction 2, and accuracy ε/2. This leads to a solution of the SSP. []

Corollary 5.3.2. The loading problem for the 3-node architecture with accuracy ε ∈ ]0, 0.5[, weight restriction B > 0, and varying input dimension is NP-hard for any activation function σ which can be written as a·σ(bx + c) + d = sgd(x) for all x ∈ R and real numbers a, b, c, and d with |a| ≤ B/2.

Proof. Consider a reduction of the SSP to a loading problem with accuracy ε and weight restriction B/|a| ≥ 2 for the activation sgd. A solution for the activation function sgd with weight restriction B/|a| directly corresponds to a solution for the activation function σ with weight restriction B. []

In particular, this result holds for any function which is a shifted or scaled version of the sigmoidal function, like the hyperbolic tangent. Note that the concrete choice of ε and B in our main proof is not the only possibility. An increase of B leads to the possibility of increasing ε, too. In fact, any value B ≥ 2 and ε ≤ B/4 is a possible choice. This leads to corresponding values
for ε and B in Corollary 5.3.1, too, but it does not affect the distance of σ and sgd, which is at most 1/24.
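For example, the hyperbolic tangent fits the condition of Corollary 5.3.2 via the identity sgd(x) = 0.5 · tanh(0.5x) + 0.5, i.e., with a = d = 0.5, b = 0.5, c = 0; a quick numerical check:

    import numpy as np

    # Check that 0.5 * tanh(0.5 * x) + 0.5 coincides with the standard sigmoidal
    # function sgd(x) = 1 / (1 + exp(-x)); the maximal deviation is at machine precision.
    xs = np.linspace(-20.0, 20.0, 1001)
    sgd = 1.0 / (1.0 + np.exp(-xs))
    rescaled_tanh = 0.5 * np.tanh(0.5 * xs) + 0.5
    print(np.max(np.abs(sgd - rescaled_tanh)))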
5.4 Discussion and Open Questions
In this chapter we have examined the complexity of training folding networks in a very rough setting. This means that we have first figured out several situations which seem of practical relevance: The training of a fixed architecture or the training of architectures where architectural parameters are allowed to vary in order to get universal training algorithms for these situations. Then we have focused on the question as to whether this task can be solved in polynomial time or is NP hard. We have substituted a correlated decision problem, the loading problem, for the learning problem. It has been pointed out that the NP hardness of this decision problem leads to a complexity theoretical barrier to learning such architectures polynomially unless RP = NP.
It has been shown that the training of a fixed folding architecture can be done in polynomial time if the activation function is the perceptron activation. For the standard sigmoidal function this is an unsolved problem. When architectural parameters vary, it has been shown in the perceptron case that an increasing number of input neurons may lead to an enormous increase in the training time for standard multilayer feed-forward perceptron networks. As a practical consequence any learning task should be designed in such a way that the number of inputs is small. Although this generalizes the result of [19] to realistic architectures, we have, in contrast to [19], considered arbitrary patterns which are not necessarily binary patterns. The complexity of training may be much smaller if the patterns only have a binary form or - this seems even more promising - if the patterns have limited correlation. An attempt in this direction is made, for example, in [108]. Furthermore, our proof does not transfer to neural architectures with general connections. Since more general connection structures, like additional direct links from the input units to the outputs, for example, are used in practical applications, results concerning a general structure would be interesting. The consideration of arbitrary patterns instead of binary ones leads to the question of the complexity of training networks with varying architectures (compare Problem 12.13 in [132]). We have shown the NP completeness of this question, too. Unfortunately, we have used at least two hidden layers where the number of neurons varies. The problem which deals with the complexity of training a single hidden layer architecture where the number of neurons may vary remains unsolved. We have considered situations which take into account training procedures that correlate the number of neurons and patterns or that may manipulate the architecture during the training via pruning or insertion of neurons, too. In this context we have proved several NP completeness results dealing
with realistic architectures. Note that NP completeness results for architectures where the number of neurons and patterns are correlated are of special interest for concrete bounds on the number of neurons necessary for an interpolation of a finite set of points. The situations where NP completeness can be proven lead to lower bounds for the approximation scenario. Here it would be nice to modify the theorems in such a way that the number of neurons is allowed to increase linearly in the number of patterns. When considering sigmoidal networks it does not seem likely that training becomes easier. But note that there is no theoretical motivation for this assumption and the complexity results do not transfer directly to the sigmoidal case. In fact, using the sigmoidal function instead of a step activation has enabled the training of networks via back-propagation [32, 136] in practice, whereas the lack of a reasonable training procedure for perceptron networks with more than one computation unit several years earlier had significantly reduced the interest in neural networks [90]. Therefore a theoretical investigation of the sigmoidal case is necessary, too. Our NP results in the sigmoidal case are unfortunately restricted to the 3-node architecture and use arbitrary, not necessarily binary inputs. However, our modification of the classification task such that a minimum accuracy is guaranteed makes it possible to transfer the result to functions which nearly coincide with the standard sigmoidal function. Pathological examples of activation functions where a hidden oscillation of the function leads to a large capacity as in [119] are in some way excluded, since nearly invisible oscillations can only have an effect if they are either reinforced with large weights or arbitrarily small outputs are allowed - both possibilities are excluded in our case. Of course, it would be nice to obtain NP results for realistic sigmoidal architectures. Furthermore, a precise analytic characterization of the activation functions for which such an NP result holds would be interesting. Finally, results on whether the training of sigmoidal architectures is contained in NP are missing, because polynomial bounds on the weights are not known, in contrast to the perceptron case. Despite these NP results an investigation of the complexity of the concrete learning algorithms used for the training of recurrent and folding networks is necessary. Here it is adequate to consider the number of training steps that are used in the worst case or in general settings, and furthermore, the complexity of one single step has to be taken into account [98, 107, 139]. An analysis of numerical problems that may occur is interesting, in particular when dealing with recurrent or folding networks, and may give rise to the preference of some learning algorithms like LSTM [54] compared to others, e.g., the simple gradient descent method.
Chapter 6
Conclusion
In this volume, folding networks have been examined which form a very promising approach concerning the integration of symbolic and subsymbolic learning methods. It has been proven that they are suitable as a learning method in principle. For this purpose, we have investigated the approximation capability, the learnability, and the complexity of learning for folding architectures. Apart from the argumentation specific for folding networks, we have obtained results that are interesting for conventional recurrent networks, standard feed-forward networks, or learning theory as well. In the first part of the mathematical investigation, folding networks have been shown to be universal approximators if a measurable mapping is to be approximated in probability. This transfers to recurrent networks. In both cases, bounds on the number of neurons are given if a finite number of examples are to be interpolated. On the contrary, several restrictions exist if a mapping is to be approximated in the maximum norm with a recurrent or folding network. However, we have shown that as a computational model, recurrent networks with the standard sigmoidal activation function can compute any mapping on off-line inputs in exponential time. Concerning learnability we have first contributed several results to the topic of distribution-dependent learnability: In [132] the term PUAC learnability is introduced as a uniform version of PAC learnability. We have shown that this is a different concept to PAC learnability, answering Problem 12.4 of [132]. In analogy to consistent PUAC learnability in [132] we have introduced the term of consistent PAC learnability and scale sensitive versions of these terms. We have established characterizations of these properties which do not refer to the notion of a learning algorithm, and are therefore of interest if the property is to be proved for a concrete function class. Additionally, we have examined the relations between these terms. We have discussed at which level learning via an encoding of an entire real valued function into a single value is no longer possible. Usually, learnability in any of the above formalisms is guaranteed by means of the finiteness of the capacity of a function class. Since it has turned out that the capacity of folding architectures with arbitrary inputs is in some sense unlimited, it has been necessary to take a closer look at the situation. For this purpose we have generalized two approaches which guarantee learnability even in the case of infinite capacity to function classes.
Since the capacity of a function class plays a key role in the generalization ability, we then estimated the capacity of folding architectures with restricted inputs. For recurrent networks, upper bounds already exist in the literature but the lower bounds in the literature deal with the pseudodimension, the finiteness of which is a sufficient but not necessary condition for learnability. We have substituted these bounds by bounds for the fat shattering dimension, a measure characterizing learnability under realistic conditions. In all cases the lower bounds depend on the maximum input height. Consequently, distribution-independent learnability cannot be guaranteed in general. But if the probability of high trees or the maximum height in a training set is limited, guarantees for correct generalization can be found. However, we have constructed a situation in which the number of examples necessary for valid generalization increases at least in a more than polynomial way with the required accuracy, which answers in particular Problem 12.6 of [132]. When dealing with the complexity of learning we have shown that efficient learning is possible for a fixed folding architecture with perceptron activation. All other results have shown the NP hardness of the loading problem - a problem which is even more simple than training - for a feed-forward architecture if several architectural parameters are allowed to vary. Since the training of a folding architecture is not easier, the results transfer directly to this case, too, but they are interesting concerning general neural network learning as well. For the perceptron activation function we have generalized the classical NP completeness result of Blum and Rivest [19] to realistic multilayer feed-forward architectures. We have considered the case where the number of hidden neurons is allowed to increase depending on the number of training examples. A result addressing Problem 12.13 in [132] has been added which is interesting if an optimal architecture is to be found for a concrete learning task. Furthermore, a generalization of the loading problem in [19] under two additional realistic conditions to the standard sigmoidal activation function has been presented, compare Problem 12.12 in [132]. All these results are of interest for neural network learning in principle, since they show which quantities should be kept small in the design of a learning problem, and which quantities do not slow down the training process too much. Of course, several problems concerning the above topics remain open. We have already listed some of them in the conclusions of the respective chapters. Two problems seem of particular interest: Although we have obtained guarantees for the learning capability of folding architectures from an information theoretical point of view, the argumentation refers to the fact that inputs with an extensive recurrence become less probable. This is in some way contradictory to the original idea of recurrent or folding networks, since one advantage of these approaches is that they can deal with input data of a priori unlimited length or height, i.e., with some kind of infiniteness. Therefore it would be interesting to see whether a guarantee for learnability can be found employing a specific property of the network instead, for example
6. Conclusion
135
a stability criterion. Additionally, such a criterion may be useful for the efficiency of the learning process itself. A reduction of the search space for the weights to well behaved regions may prohibit an instability of the learning algorithm. Another interesting question is the complexity of training sigmoidal recurrent or folding architectures. Although this problem has not yet been answered in the feed-forward case, efficient algorithms for the training of fixed sigmoidal feed-forward architectures exist. Therefore, this problem is expected to be of equal difficulty as training feed-forward perceptron networks. On the contrary, training recurrent sigmoidal architectures poses several problems in practice and some theoretical investigation of these problems also exist, as we have already mentioned. Additionally, the fundamental behavior of recurrent networks with a sigmoidal or perceptron activation, respectively, is entirely different, as we have seen in the chapter dealing with approximation capabilities. Therefore it is interesting to know, whether this difference leads to a different complexity level of training, too. Although folding networks are an interesting and well performing approach, a lot of work has to be done before learning machines will be able to solve such complex tasks as football playing or understanding spoken language. For an efficient use of subsymbolic learning methods in all domains, the structure of networks needs further modifications. If we intend to integrate symbolic learning methods and neural networks, we need networks that can produce symbolic data as outputs instead of simple vectors as well. The dual dynamics proposed in the LRAAM unfortunately needs increasing resources even for purely symbolic data. Additionally, more complex data structures like arbitrary graphs occur frequently in applications. Folding networks are usually not adapted to these tasks. For this it is necessary to design different and more complex architectures and training methods and establish the basic theoretical properties of these approaches, too.
Chapter 6
Conclusion
In this volume we have examined folding networks, which form a very promising approach to the integration of symbolic and subsymbolic learning methods. It has been proven that they are suitable as a learning method in principle. For this purpose, we have investigated the approximation capability, the learnability, and the complexity of learning for folding architectures. Apart from the arguments specific to folding networks, we have obtained results that are of interest for conventional recurrent networks, standard feed-forward networks, and learning theory as well.

In the first part of the mathematical investigation, folding networks have been shown to be universal approximators if a measurable mapping is to be approximated in probability. This result transfers to recurrent networks. In both cases, bounds on the number of neurons are given if a finite number of examples is to be interpolated. In contrast, several restrictions apply if a mapping is to be approximated in the maximum norm with a recurrent or folding network. However, we have shown that, as a computational model, recurrent networks with the standard sigmoidal activation function can compute any mapping on off-line inputs in exponential time.

Concerning learnability, we have first contributed several results to the topic of distribution-dependent learnability. In [132] the term PUAC learnability is introduced as a uniform version of PAC learnability; we have shown that this concept is distinct from PAC learnability, answering Problem 12.4 of [132]. In analogy to consistent PUAC learnability in [132], we have introduced the notion of consistent PAC learnability, together with scale-sensitive versions of these terms. We have established characterizations of these properties which do not refer to the notion of a learning algorithm and which are therefore of interest if the property is to be proved for a concrete function class. Additionally, we have examined the relations between these terms and discussed at which point learning via an encoding of an entire real-valued function into a single value is no longer possible.

Usually, learnability in any of the above formalisms is guaranteed by means of the finiteness of the capacity of a function class. Since it has turned out that the capacity of folding architectures with arbitrary inputs is in some sense unlimited, a closer look at the situation has been necessary. For this purpose we have generalized to function classes two approaches which guarantee learnability even in the case of infinite capacity.
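All of these statements concern the induced recursive mapping computed by a folding architecture. As a purely illustrative reference point, the following minimal sketch (Python with NumPy; the binary fan-out, the concrete dimensions, the logistic activation, and the untrained random weights are assumptions made for the example, not the constructions used in the proofs) shows how such a network processes a labeled tree: the recursive part encodes the tree bottom-up, starting from a fixed initial context for the empty tree, and the feed-forward part is applied to the code of the root.

import numpy as np

# Minimal sketch of a folding network on labeled binary trees.
# All sizes and the random weights are illustrative assumptions.
rng = np.random.default_rng(0)
LABEL_DIM = 3   # dimension of the node labels (assumed)
STATE_DIM = 5   # dimension of the recursively computed codes (assumed)
OUT_DIM = 1     # dimension of the network output (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# recursive part g : label x code x code -> code (fan-out 2)
W_g = rng.standard_normal((STATE_DIM, LABEL_DIM + 2 * STATE_DIM))
b_g = rng.standard_normal(STATE_DIM)
# feed-forward part h applied to the code of the root
W_h = rng.standard_normal((OUT_DIM, STATE_DIM))
b_h = rng.standard_normal(OUT_DIM)
# fixed initial context representing the empty tree
y0 = np.zeros(STATE_DIM)

def encode(tree):
    """Induced recursive mapping: a tree is None (empty) or a triple
    (label, left_subtree, right_subtree); it is encoded bottom-up."""
    if tree is None:
        return y0
    label, left, right = tree
    children = np.concatenate([encode(left), encode(right)])
    return sigmoid(W_g @ np.concatenate([label, children]) + b_g)

def folding_network(tree):
    """Complete folding network: output part applied to the root code."""
    return sigmoid(W_h @ encode(tree) + b_h)

# example: a small tree a(b, c) with random real-valued labels
leaf_b = (rng.standard_normal(LABEL_DIM), None, None)
leaf_c = (rng.standard_normal(LABEL_DIM), None, None)
root_a = (rng.standard_normal(LABEL_DIM), leaf_b, leaf_c)
print(folding_network(root_a))

The approximation and capacity results discussed in this chapter refer to mappings of exactly this compositional form; in particular, the height of the input tree enters only through the depth of the recursion.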
Since the capacity of a function class plays a key role in its generalization ability, we have then estimated the capacity of folding architectures with restricted inputs. For recurrent networks, upper bounds already exist in the literature, but the lower bounds available there concern the pseudodimension, whose finiteness is a sufficient but not a necessary condition for learnability. We have replaced these bounds by bounds on the fat shattering dimension, a measure which characterizes learnability under realistic conditions. In all cases the lower bounds depend on the maximum input height. Consequently, distribution-independent learnability cannot be guaranteed in general. But if the probability of high trees, or the maximum height in a training set, is limited, guarantees for correct generalization can be found. However, we have also constructed a situation in which the number of examples necessary for valid generalization increases more than polynomially with the required accuracy, which in particular answers Problem 12.6 of [132].

When dealing with the complexity of learning, we have shown that efficient learning is possible for a fixed folding architecture with perceptron activation. All other results establish the NP-hardness of the loading problem - a problem which is even simpler than training - for feed-forward architectures in which several architectural parameters are allowed to vary. Since the training of a folding architecture is not easier, these results transfer directly to folding networks; at the same time, they are of interest for general neural network learning. For the perceptron activation function we have generalized the classical NP-completeness result of Blum and Rivest [19] to realistic multilayer feed-forward architectures, considering in particular the case where the number of hidden neurons is allowed to increase with the number of training examples. A result addressing Problem 12.13 in [132] has been added, which is of interest if an optimal architecture is to be found for a concrete learning task. Furthermore, a generalization of the loading problem in [19] to the standard sigmoidal activation function, under two additional realistic conditions, has been presented; compare Problem 12.12 in [132]. All these results are of interest for neural network learning in general, since they show which quantities should be kept small in the design of a learning problem and which quantities do not slow down the training process too much.
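For orientation, the way in which the fat shattering dimension controls the sample size can be summarized by a bound of the following general form, as it appears in the scale-sensitive learning literature (cf. [2, 7]); the constants c_1, c_2 > 0 and the precise scale at which the dimension is evaluated differ between the individual results and are not the specific values derived in this monograph:

\[
  m(\varepsilon,\delta) \;\le\; \frac{c_1}{\varepsilon^{2}}\left(\mathrm{fat}_{\mathcal{F}}(c_2\,\varepsilon)\,\ln^{2}\frac{1}{\varepsilon} \;+\; \ln\frac{1}{\delta}\right).
\]

A sample of this size suffices for generalization with accuracy epsilon and confidence 1 - delta whenever the fat shattering dimension at scale c_2 epsilon is finite; for folding architectures, this finiteness is precisely what the restriction of the input height, or of the probability of high trees, provides.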
Of course, several problems concerning the above topics remain open; we have already listed some of them in the conclusions of the respective chapters. Two problems seem of particular interest. First, although we have obtained guarantees for the learning capability of folding architectures from an information-theoretical point of view, the argumentation relies on the fact that inputs with extensive recurrence become less probable. This is in some way contradictory to the original idea of recurrent or folding networks, since one advantage of these approaches is precisely that they can deal with input data of a priori unlimited length or height, i.e., with some kind of infiniteness. Therefore it would be interesting to see whether a guarantee for learnability can be found that employs a specific property of the network instead, for example a stability criterion. Additionally, such a criterion might be useful for the efficiency of the learning process itself: a reduction of the search space for the weights to well-behaved regions may prevent instabilities of the learning algorithm.

The second interesting question is the complexity of training sigmoidal recurrent or folding architectures. Although this problem has not yet been answered in the feed-forward case, efficient algorithms for training fixed sigmoidal feed-forward architectures exist. Therefore this problem is expected to be of the same difficulty as training feed-forward perceptron networks. In contrast, training recurrent sigmoidal architectures poses several problems in practice, and some theoretical investigations of these problems exist as well, as we have already mentioned. Additionally, the fundamental behavior of recurrent networks with a sigmoidal activation differs entirely from that of networks with a perceptron activation, as we have seen in the chapter dealing with approximation capabilities. Therefore it would be interesting to know whether this difference leads to a different complexity level of training, too.

Although folding networks are an interesting and well-performing approach, a lot of work remains to be done before learning machines will be able to solve such complex tasks as playing football or understanding spoken language. For an efficient use of subsymbolic learning methods in all domains, the structure of the networks needs further modification. If we intend to integrate symbolic learning methods and neural networks, we also need networks that can produce symbolic data as outputs instead of simple vectors. The dual dynamics proposed in the LRAAM unfortunately requires increasing resources even for purely symbolic data. Additionally, more complex data structures such as arbitrary graphs occur frequently in applications, and folding networks are usually not adapted to these tasks. For this it is necessary to design different and more complex architectures and training methods, and to establish the basic theoretical properties of these approaches as well.
Bibliography
1. P. Alexandroff and H. Hopf. Topologie, volume 1. Springer, 1974.
2. N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. In Proceedings of the 34th IEEE Symposium on Foundations of Computer Science, pp. 292-301, 1993.
3. M. Anthony and J. Shawe-Taylor. A sufficient condition for polynomial distribution-dependent learnability. Discrete Applied Mathematics, 77:1-12, 1997.
4. M. Anthony. Uniform convergence and learnability. Technical report, London School of Economics, 1991.
5. M. Anthony. Probabilistic analysis of learning in artificial neural networks: The PAC model and its variants. Neural Computing Surveys, 1:1-47, 1997.
6. M. Anthony and N. Biggs. Computational Learning Theory. Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 1992.
7. P. L. Bartlett, P. Long, and R. Williamson. Fat-shattering and the learnability of real-valued functions. In Proceedings of the 7th ACM Conference on Computational Learning Theory, pp. 299-310, 1994.
8. P. L. Bartlett, V. Maiorov, and R. Meir. Almost linear VC dimension bounds for piecewise polynomial networks. Neural Computation, 10(8):2159-2173, 1998.
9. P. Bartlett and R. Williamson. The VC dimension and pseudodimension of two-layer neural networks with discrete inputs. Neural Computation, 8(3):653-656, 1996.
10. P. L. Bartlett. For valid generalization, the size of the weights is more important than the size of the network. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems, volume 9. The MIT Press, pp. 134-141, 1996.
11. S. Basu, R. Pollack, and M.-F. Roy. A new algorithm to find a point in every cell defined by a family of polynomials. Journal of the ACM, 43:1002-1045, 1996.
12. E. B. Baum and D. Haussler. What size net gives valid generalization? Neural Computation, 1(1):151-165, 1989.
13. S. Ben-David, N. Cesa-Bianchi, D. Haussler, and P. Long. Characterizations of learnability for classes of {0,...,n}-valued functions. Journal of Computer and System Sciences, 50:74-86, 1995.
14. Y. Bengio and P. Frasconi. Credit assignment through time: Alternatives to backpropagation. In J. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems, volume 6. Morgan Kaufmann, pp. 75-82, 1994.
15. Y. Bengio and F. Gingras. Recurrent neural networks for missing or asynchronous data. In M. Mozer, D. Touretzky, and M. Perrone, editors, Advances in Neural Information Processing Systems, volume 8. The MIT Press, pp. 395-401, 1996.
16. Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.
17. M. Bianchini, S. Fanelli, M. Gori, and M. Maggini. Terminal attractor algorithms: A critical analysis. Neurocomputing, 15(1):3-13, 1997.
18. C. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
19. A. Blum and R. Rivest. Training a 3-node neural network is NP-complete. Neural Networks, 9:1017-1023, 1988.
20. H. Braun. Neuronale Netze. Springer, 1997.
21. W. L. Buntine and A. S. Weigend. Bayesian back-propagation. Complex Systems, 5:603-643, 1991.
22. M. Casey. The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction. Neural Computation, 8(6):1135-1178, 1996.
23. C. Cortes and V. Vapnik. Support vector network. Machine Learning, 20:1-20, 1995.
24. F. Costa, P. Frasconi, and G. Soda. A topological transformation for hidden recursive models. In M. Verleysen, editor, European Symposium on Artificial Neural Networks. D-facto publications, pp. 51-56, 1999.
25. I. Croall and J. Mason. Industrial Applications of Neural Networks. Springer, 1992.
26. B. DasGupta, H. T. Siegelmann, and E. D. Sontag. On the complexity of training neural networks with continuous activation. IEEE Transactions on Neural Networks, 6(6):1490-1504, 1995.
27. B. DasGupta and E. D. Sontag. Sample complexity for learning recurrent perceptron mappings. IEEE Transactions on Information Theory, 42:1479-1487, 1996.
28. A. Eliseeff and H. Paugam-Moisy. Size of multilayer networks for exact learning. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems, volume 9. The MIT Press, pp. 162-168, 1996.
29. J. L. Elman. Finding structure in time. Cognitive Science, 14:179-211, 1990.
30. J. L. Elman. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7:195-225, 1991.
31. T. Elsken. Personal communication.
32. S. E. Fahlman. An empirical study of learning speed in back-propagation networks. In Proceedings of the 1988 Connectionist Models Summer School. Morgan Kaufmann, 1988.
33. S. E. Fahlman. The recurrent cascade-correlation architecture. In R. Lippmann, J. Moody, D. Touretzky, and S. Hanson, editors, Advances in Neural Information Processing Systems, volume 3. Morgan Kaufmann, pp. 190-198, 1991.
34. J. Fodor and Z. Pylyshyn. Connectionism and cognitive architecture: A critical analysis. Cognition, 28:3-71, 1988.
35. P. Frasconi, M. Gori, S. Fanelli, and M. Protasi. Suspiciousness of loading problems. In IEEE International Conference on Neural Networks, 1997.
36. P. Frasconi, M. Gori, and A. Sperduti. A general framework for adaptive processing of data sequences. IEEE Transactions on Neural Networks, 9(5):768-786, 1997.
37. M. Garey and D. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, 1979.
38. C. L. Giles and M. Gori, editors. Adaptive Processing of Sequences and Data Structures. Springer, 1998.
39. C. L. Giles, G. M. Kuhn, and R. J. Williams. Special issue on dynamic recurrent neural networks. IEEE Transactions on Neural Networks, 5(2), 1994.
40. C. L. Giles and C. W. Omlin. Pruning recurrent neural networks for improved generalization performance. IEEE Transactions on Neural Networks, 5(5):848-851, 1994.
41. C. Goller. A connectionist approach for learning search control heuristics for automated deduction systems. PhD thesis, Technische Universität München, 1997.
42. C. Goller and A. Küchler. Learning task-dependent distributed representations by backpropagation through structure. In Proceedings of the IEEE Conference on Neural Networks, pp. 347-352, 1996.
43. M. Gori, M. Mozer, A. C. Tsoi, and R. L. Watrous. Special issue on recurrent neural networks for sequence processing. Neurocomputing, 15(3-4), 1997.
44. L. Gurvits and P. Koiran. Approximation and learning of convex superpositions. In 2nd European Conference on Computational Learning Theory, pp. 222-236, 1995.
45. B. Hammer. On the learnability of recursive data. Mathematics of Control, Signals, and Systems, 12:62-79, 1999.
46. B. Hammer. On the generalization of Elman networks. In W. Gerstner, A. Germond, M. Hasler, and J.-D. Nicoud, editors, Artificial Neural Networks - ICANN'97. Springer, pp. 409-414, 1997.
47. B. Hammer. On the approximation capability of recurrent neural networks. In M. Heiss, editor, Proceedings of the International Symposium on Neural Computation. ICSC Academic Press, pp. 512-518, 1998.
48. B. Hammer. Some complexity results for perceptron networks. In L. Niklasson, M. Bodén, and T. Ziemke, editors, Proceedings of the 8th International Conference on Artificial Neural Networks. Springer, pp. 639-644, 1998.
49. B. Hammer. Training a sigmoidal network is difficult. In M. Verleysen, editor, European Symposium on Artificial Neural Networks. D-facto publications, pp. 255-260, 1998.
50. B. Hammer. Approximation capabilities of folding networks. In M. Verleysen, editor, European Symposium on Artificial Neural Networks. D-facto publications, pp. 33-38, 1999.
51. D. Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100:78-150, 1992.
52. D. Haussler, M. Kearns, N. Littlestone, and M. Warmuth. Equivalence of models for polynomial learnability. Information and Computation, 95:129-161, 1991.
53. J. Hertz, A. Krogh, and R. Palmer. Introduction to the Theory of Neural Computation. Addison-Wesley, 1991.
54. S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
55. K.-U. Höffgen. Computational limitations on training sigmoidal neural networks. Information Processing Letters, 46(6):269-274, 1993.
56. K.-U. Höffgen, H.-U. Simon, and K. S. VanHorn. Robust trainability of single neurons. Journal of Computer and System Sciences, 50:114-125, 1995.
57. B. G. Horne and C. L. Giles. An experimental comparison of recurrent neural networks. In G. Tesauro, D. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7. The MIT Press, pp. 697-704, 1995.
58. K. Hornik. Some new results on neural network approximation. Neural Networks, 6:1069-1072, 1993.
59. K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2:359-366, 1989.
60. B. Jaehne. Digitale Bildverarbeitung. Springer, 1989.
61. L. Jones. The computational intractability of training sigmoidal neural networks. IEEE Transactions on Information Theory, 43(1):167-173, 1997.
62. S. Judd. Neural Network Design and the Complexity of Learning. MIT Press, 1990.
63. N. Karmarkar. A new polynomial time algorithm for linear programming. Combinatorica, 4(4):373-395, 1984.
64. M. Karpinski and A. Macintyre. Polynomial bounds for the VC dimension of sigmoidal neural networks. In Proceedings of the 27th Annual ACM Symposium on the Theory of Computing, pp. 200-208, 1995.
65. L. Khachiyan. A polynomial algorithm for linear programming. Soviet Mathematics Doklady, 20:191-194, 1979.
66. J. Kilian and H. T. Siegelmann. The dynamic universality of sigmoidal neural networks. Information and Computation, 128:48-56, 1996.
67. L. Kindermann. An addition to backpropagation for computing functional roots. In M. Heiss, editor, International Symposium on Neural Computation. ICSC Academic Press, pp. 424-427, 1998.
68. P. Koiran, M. Cosnard, and M. Garzon. Computability with low-dimensional dynamical systems. Theoretical Computer Science, 132:113-128, 1994.
69. P. Koiran and E. D. Sontag. Neural networks with quadratic VC dimension. Journal of Computer and System Sciences, 54:223-237, 1997.
70. P. Koiran and E. D. Sontag. Vapnik-Chervonenkis dimension of recurrent neural networks. In Proceedings of the 3rd European Conference on Computational Learning Theory, pp. 223-237, 1997.
71. C.-M. Kuan, K. Hornik, and H. White. A convergence result in recurrent neural networks. Neural Computation, 6(3):420-440, 1994.
72. A. Küchler. On the correspondence between neural folding architectures and tree automata. Technical report, University of Ulm, 1998.
73. A. Küchler and C. Goller. Inductive learning in symbolic domains using structure-driven neural networks. In G. Görz and S. Hölldobler, editors, KI-96: Advances in Artificial Intelligence. Springer, pp. 183-197, 1996.
74. S. R. Kulkarni, S. K. Mitter, and J. N. Tsitsiklis. Active learning using arbitrary binary queries. Machine Learning, 11:23-35, 1993.
75. S. R. Kulkarni and M. Vidyasagar. Decision rules for pattern classification under a family of probability measures. IEEE Transactions on Information Theory, 43:154-166, 1997.
76. M. C. Laskowski. Vapnik-Chervonenkis classes of definable sets. Journal of the London Mathematical Society, 45:377-384, 1992.
77. Y. LeCun, J. Denker, and S. Solla. Optimal brain damage. In D. Touretzky, editor, Advances in Neural Information Processing Systems, volume 2. Morgan Kaufmann, pp. 598-605, 1990.
78. J.-H. Lin and J. Vitter. Complexity results on learning by neural networks. Machine Learning, 6:211-230, 1991.
79. T. Lin, B. Horne, P. Tino, and C. L. Giles. Learning long-term dependencies is not as difficult with NARX recurrent neural networks. Technical report, University of Maryland, 1995.
80. R. Lippmann. Review of neural networks for speech recognition. Neural Computation, 1:1-38, 1989.
81. D. W. Loveland. Mechanical theorem-proving for model elimination. Journal of the ACM, 15(2):236-251, 1968.
82. W. Maass. Neural nets with superlinear VC-dimension. Neural Computation, 6:877-884, 1994.
83. W. Maass. Agnostic PAC-learning of functions on analog neural nets. Neural Computation, 7(5):1054-1078, 1995.
84. W. Maass and P. Orponen. On the effect of analog noise in discrete-time analog computation. Neural Computation, 10(5):1071-1095, 1998.
85. W. Maass and E. Sontag. Analog neural nets with Gaussian or other common noise distributions cannot recognize arbitrary regular languages. Neural Computation, 11:771-782, 1999.
86. A. Macintyre and E. D. Sontag. Finiteness results for sigmoidal 'neural' networks. In Proceedings of the 25th Annual Symposium on the Theory of Computing, pp. 325-334, 1993.
87. M. Masters. Neural, Novel & Hybrid Algorithms for Time Series Prediction. Wiley, 1995.
88. U. Matecki. Automatische Merkmalsauswahl für Neuronale Netze mit Anwendung in der pixelbezogenen Klassifikation von Bildern. PhD thesis, University of Osnabrück, 1999.
89. N. Megiddo. On the complexity of polyhedral separability. Discrete and Computational Geometry, 3:325-337, 1988.
90. M. Minsky and S. Papert. Perceptrons. MIT Press, 1988.
91. M. Mozer. Neural net architectures for temporal sequence processing. In A. Weigend and N. Gershenfeld, editors, Predicting the Future and Understanding the Past. Addison-Wesley, pp. 143-164, 1993.
92. M. Mozer and P. Smolensky. Skeletonization: A technique for trimming the fat from a network via relevance assessment. In D. Touretzky, editor, Advances in Neural Information Processing Systems, volume 1. Morgan Kaufmann, pp. 107-115, 1989.
93. K. S. Narendra and K. Parthasarathy. Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks, 1(1):4-27, 1990.
94. B. K. Natarajan. Machine Learning: A Theoretical Approach. Morgan Kaufmann, 1991.
95. A. Nobel and A. Dembo. A note on uniform laws of averages for dependent processes. Statistics and Probability Letters, 17:169-172, 1993.
96. S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight sharing. Neural Computation, 4(4):473-493, 1992.
97. C. Omlin and C. Giles. Constructing deterministic finite-state automata in recurrent neural networks. Journal of the ACM, 43(2):937-972, 1996.
98. B. A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks: A survey. IEEE Transactions on Neural Networks, 6(5):1212-1228, 1995.
99. L. Pitt and L. Valiant. Computational limitations on learning from examples. Journal of the Association for Computing Machinery, 35(4):965-984, 1988.
100. T. Plate. Holographic reduced representations. IEEE Transactions on Neural Networks, 6(3):623-641, 1995.
101. J. Pollack. Recursive distributed representations. Artificial Intelligence, 46(1-2):77-106, 1990.
102. M. Reczko. Protein secondary structure prediction with partially recurrent neural networks. SAR and QSAR in Environmental Research, 1:153-159, 1993.
103. M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation: The RPROP algorithm. In Proceedings of the Sixth International Conference on Neural Networks. IEEE, pp. 586-591, 1993.
104. J. A. Robinson. A machine-oriented logic based on the resolution principle. Journal of the ACM, 12(1):23-41, 1965.
105. D. Rumelhart, G. Hinton, and R. Williams. Learning representations by back-propagating errors. Nature, 323:533-536, 1986.
106. D. Rumelhart, G. Hinton, and R. Williams. Learning internal representations by back-propagating errors. In Neurocomputing: Foundations of Research. MIT Press, pp. 696-700, 1988.
107. J. Schmidhuber. A fixed size storage O(n³) time complexity learning algorithm for fully recurrent continually running networks. Neural Computation, 4(2):243-248, 1992.
108. M. Schmitt. Komplexität neuronaler Lernprobleme. Peter Lang, 1996.
109. M. Schmitt. Proving hardness results of neural network training problems. Neural Networks, 10(8):1533-1534, 1997.
110. T. Schmitt and C. Goller. Relating chemical structure to activity with the structure processing neural folding architecture. In Engineering Applications of Neural Networks, 1998.
111. S. Schulz, A. Küchler, and C. Goller. Some experiments on the applicability of folding architectures to guide theorem proving. In Proceedings of the 10th International FLAIRS Conference, pp. 377-381, 1997.
112. T. Sejnowski and C. Rosenberg. Parallel networks that learn to pronounce English text. Complex Systems, 1:145-168, 1987.
113. J. Shawe-Taylor, P. L. Bartlett, R. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. Technical report, NeuroCOLT, 1996.
114. H. T. Siegelmann. The simple dynamics of super Turing theories. Theoretical Computer Science, 168:461-472, 1996.
115. H. T. Siegelmann and E. D. Sontag. Analog computation via neural networks. Theoretical Computer Science, 131:331-360, 1994.
116. H. T. Siegelmann and E. D. Sontag. On the computational power of neural networks. Journal of Computer and System Sciences, 50:132-150, 1995.
117. P. Smolensky. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence, 46(1-2):159-216, 1990.
118. E. D. Sontag. VC dimension of neural networks. In C. Bishop, editor, Neural Networks and Machine Learning. Springer, pp. 69-95, 1998.
119. E. D. Sontag. Feedforward nets for interpolation and classification. Journal of Computer and System Sciences, 45:20-48, 1992.
120. E. D. Sontag. Neural nets as systems models and controllers. In 7th Yale Workshop on Adaptive and Learning Systems, pp. 73-79, 1992.
121. F. Soulie and P. Gallinari. Industrial Applications of Neural Networks. World Scientific, 1998.
122. A. Sperduti. Labeling RAAM. Connection Science, 6(4):429-459, 1994.
123. A. Sperduti. On the computational power of recurrent neural networks for structures. Neural Networks, 10(3):395, 1997.
124. A. Sperduti and A. Starita. Dynamical neural networks construction for processing of labeled structures. Technical report, University of Pisa, 1995.
125. M. Stone. Cross-validatory choice and assessment of statistical predictions (with discussion). Journal of the Royal Statistical Society B, 36:111-147, 1974.
126. J. Suykens, B. DeMoor, and J. Vandewalle. Static and dynamic stabilizing neural controllers applicable to transition between equilibrium points. Neural Networks, 7(5):819-831, 1994.
127. P. Tino, B. Horne, C. L. Giles, and P. Collingwood. Finite state machines and recurrent neural networks - automata and dynamical systems approaches. In Neural Networks and Pattern Recognition. Academic Press, pp. 171-220, 1998.
128. D. Touretzky. BoltzCONS: Dynamic symbol structures in a connectionist network. Artificial Intelligence, 46:5-46, 1990.
129. L. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
130. V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
131. V. Vapnik and A. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264-280, 1971.
132. M. Vidyasagar. A Theory of Learning and Generalization. Springer, 1997.
133. M. Vidyasagar. An introduction to the statistical aspects of PAC learning theory. Systems and Control Letters, 34:115-124, 1998.
134. J. Šíma. Back-propagation is not efficient. Neural Networks, 9(6):1017-1023, 1996.
135. V. H. Vu. On the infeasibility of training neural networks with small squared error. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems, volume 10. The MIT Press, pp. 371-377, 1998.
136. P. Werbos. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. PhD thesis, Harvard University, 1974.
137. P. Werbos. Backpropagation through time: What it does and how to do it. Proceedings of the IEEE, 78(10):1550-1560, 1990.
138. P. Werbos. The Roots of Backpropagation. Wiley, 1994.
139. R. Williams and D. Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. In Y. Chauvin and D. Rumelhart, editors, Back-propagation: Theory, Architectures and Applications. Erlbaum, pp. 433-486, 1995.
140. W. H. Wilson. A comparison of architectural alternatives for recurrent networks. In Proceedings of the Fourth Australian Conference on Neural Networks, pp. 189-192, 1993.
141. A. Zell. Simulation Neuronaler Netze. Addison-Wesley, 1994.
Index
1_S, characteristic function of S, 32
S₁\S₂, difference of S₁ and S₂, 118
Ω(·), lower bound of order of magnitude, 62
Σ*, see tree
Σ^{≤t}, trees of height ≤ t, 11
Σ^t, trees of height t, 11
Θ(·), exact bound of order of magnitude, 62
⊥, empty tree, 8
⊤, dummy element in R°, 23
wᵗx, scalar product, 111
|S|, cardinality of S, 109
[a_n, ..., a_2, a_1], see sequence
a(t_1, ..., t_k), see tree

accuracy, 20, 54, 122
activation, 6
activation function, 6
activity prediction of chemical structures, 16
agnostic learning, see learnability
σ-algebra, 10
approximation, 19
- in probability, 19, 20, 31
- in the maximum norm, 19, 20, 36, 39, 46
- inapproximability results, 36
- real data, 31
- restricted length/height, 46
- symbolic data, 31
- universal, 3, 21, 31, 48, 60
- with feed-forward networks, 21
architecture, 6
- bounds, 19, 25, 27, 28
- corresponding, 7
- folding, 8
automated deduction, 16

back-propagation, 11
- through structure, 11
- through time, 11
bias, 6, 81
BoltzCONS, 13
Boolean circuits
- nonuniform, 24, 37
Borsuk
- Theorem of, 48

Cⁿ, n times continuously differentiable, 8
cascade correlation, 104
chemical data, 16
complexity, 103
- training feed-forward networks, 106, 110
- training folding networks, 112
computation, 22
- on off-line inputs, 23, 41
- on on-line inputs, 22
- subject to noise, 24, 44, 45
- time, 22, 23
concept class, 54
confidence, 20, 54
consistent, 54
- asymptotically ε, 66
- asymptotically uniformly ε, 66
continuity, 10
covering number, 55, 71
- bounds, 57, 58
- more than exponential, 95
cross-validation, 12

d_m(f, g, x), empirical distance of f and g, 54
d_P(f, g), distance of f and g, 53
decision tree, 1
det, determinant, 115
DNA sequence, 14
dual class, 83
dual VC dimension, 83

E_P(·), expectation w.r.t. P, 57
e, base of natural logarithm, 58
Elman network, 14
EM algorithm, 110
empirical risk, 60
empirical risk minimization, 60, 104
encoding dimension, 10, 34, 47
equivalence relation, 45
error
- empirical, 54
- quadratic, 11
- real, 53

F~, see quantized function class
F^∨, see dual class
f~, see quantized function class
F|S, {f|S : f ∈ F}, 57
f|S, restriction of f, 57
fat_ε(F), see fat shattering dimension
fat shattering dimension, 57, 58, 89
- recurrent network
- - identity, 91
- - sigmoidal function, 90
feed-forward part, 9
finite automaton, 21, 44
finitely characterizable, 65
- scale sensitive term, 66
folding architecture, 8
folding network, 3, 8

g_y, 8
generalization, 3
gradient descent, 11

H, see perceptron function
h_y, 13
h_m(x, f), see learning algorithm
hidden layer, 8, 34
Hoeffding's inequality, 79
holographic reduced representation, 13

id, see identity
identity, 7, 91
image classification, 18
induced recursive mapping, 8, 13
infinite capacity, 75
initial context, 8, 82
interpolation, 20, 35
- real data, 29
- sequences, 28
- symbolic data, 25

J_P(f_0), 58
J_m(f_0, x), 58

L'(x, f), see unluckiness function
L(x, f), see luckiness function
learnability, 51
- agnostic, 59
- consistent PAC, 54
- consistent PUAC, 54, 56
- distribution-dependent, 54
- distribution-independent, 57
- efficient, 55
- model-free, 58, 74
- polynomial, 108, 109
- probably approximately correct, 54
- probably uniformly approximately correct, 54
- randomized, 70
- randomized PAC, 70
- scale sensitive terms, 65, 66
- with noise, 71, 72, 74
learning algorithm, 53
- consistent, 54
lg, logarithm to base 2, 27
light tail, 72
lin, see semilinear activation
linear programming, 106
ln, logarithm to base e, 76
loading problem, 105, 122
- MLP with varying hidden layers, 116, 120
- MLP with varying input, 114
- sigmoidal network, 122, 129
local minimum, 103
locally, 8
logo, 18
long-term dependencies, 12
LRAAM, see recursive autoassociative memory
LSTM, 131
luckiness function, 61, 77
- smooth, 96

M(ε, F, d_P), see packing number
machine learning, 1
manifold, 123
maximum norm, 19
measurability, 10
minimum risk algorithm, 55
MLP, see neural network
model elimination calculus, 16

N(ε, F, d_P), see covering number
neural network, 2
- feed-forward, 5
- folding, 8
- multilayer, 8
- recurrent, 8, 9
neuron, 5
- computation, 6
- context, 9
- hidden, 6
- input, 6, 9
- multiplying, 8
- output, 6, 9
3-node architecture, 122, 129
noise, 24, 44, 70
- admissible, 72
normal vector, 123
NP, complexity class NP, 103
NP completeness, 103, 113, 118, 120

O(·), upper bound of order of magnitude, 27
orthogonal patterns, 116

P, complexity class P, 103
PAC learnability, 3, 54, 65, 69, 104
- ε-consistent, 66
- consistent, 69
- distribution-dependent, 55
- distribution-independent, 57
- folding networks, 93
- model-free, 59
- not consistent PAC, 64
- not PUAC, 63
packing number, 55
perceptron function, 7, 35, 84, 110
permutation, 78
polyhedral separability, 110, 116
prior knowledge, 12, 77, 95, 103
pruning, 12, 104
PS(F), see pseudo-dimension
pseudo-dimension, 57
- dual, 83
- feed-forward networks, 62
- folding networks, 79, 84
- recurrent networks, 63
PUAC learnability, 54, 69
- ε-consistent, 66
- consistent, 69
- distribution-independent, 57
- equivalence to consistent PUAC learnability, 64
- model-free, 59

quadratic error, 11
quantized function class, 72, 78, 79

RAAM, see recursive autoassociative memory
randomized algorithm, 70
recurrent cascade correlation, 89
recurrent network, 3, 8, 9
recursive autoassociative memory, 13, 97, 135
- bounds, 97
- labeled, 13
recursive part, 9
regularization, 60
resolution, 16
RP, complexity class RP, 108

sample size
- PAC learning, 55, 76
- - folding networks, 93
- PUAC, 58
- UCED, 56, 78
SAT, 106, 120
Schanuel conjecture, 107
search heuristics, 16
semilinear function, 7
sequence, 10
set splitting problem, 117, 125
sgd, see sigmoidal function
shatter, 57
shrinking width property, 56
sigmoidal function, 7, 87, 90, 122
sign vector, 112
Spearman's correlation, 17
square activation, 8
squashing function, 8, 34
SSP, see set splitting problem
stable, 68
stochastic learning method, 2
stratification, 109
structural risk, 60
subsymbolic representation, 13, 135
support vector machine, 60
swapping, 78
symbolic data, 13, 25, 135

tanh, hyperbolic tangent, 122
tensor construction, 13
term classification, 15
theorem proving, 16
topology, 10
training, 11
- complexity, 4, 103
- decidability, 39, 107
- folding networks, 11
training set, 11
tree, 8, 15
tree automaton, 15, 21, 45, 46
triazine, 16
Tschebyschev inequality, 76
Turing machine, 22
- simulation, 23, 41
- super-Turing capability, 41

UCED property, 54, 69
- distribution-independent, 57
- folding networks, 96
UCEM property, 56, 76, 77
unfolding, 10, 27
uniform convergence of empirical distributions, see UCED property
uniform convergence of empirical means, see UCEM property
unit, see neuron
unluckiness function, 79, 96

Vapnik-Chervonenkis dimension, see VC dimension
VC(F), see VC dimension
VC dimension, 57
- dual, 83, 88
- feed-forward networks, 62
- folding networks, 79, 84
- - perceptron function, 84, 85
- - sigmoidal function, 86
- recurrent network
- - perceptron function, 89
- recurrent networks, 63

weight decay, 60, 89
weight restriction, 122
weights
- restricted, 34, 121