Reviews of Nonlinear Dynamics and Complexity Volume 3
Edited by Heinz Georg Schuster
WILEY-VCH Verlag GmbH & Co. KGaA
Related Titles E. Schöll, H.G. Schuster (Eds.)
Handbook of Chaos Control 2008 ISBN: 978-3-527-40605-0
B.K. Chakrabarti, A. Chakraborti, A. Chatterjee (Eds.)
Econophysics and Sociophysics Trends and Perspectives Hardcover ISBN: 978-3-527-40670-8
B. Schelter, M. Winterhalder, J. Timmer (Eds.)
Handbook of Time Series Analysis Recent Theoretical Developments and Applications Hardcover ISBN: 978-3-527-40623-4
L.V. Yakushevich
Nonlinear Physics of DNA 2004 ISBN: 978-3-527-40417-9
M. Kantardzic
Data Mining Concepts, Models, Methods, and Algorithms 2003 ISBN: 978-0-471-22852-3
S. Bornholdt, H.G. Schuster (Eds.)
Handbook of Graphs and Networks From the Genome to the Internet 2003 ISBN: 978-3-527-40336-3
The Editor Prof. Dr. Heinz Georg Schuster University of Kiel
[email protected]
All books published by Wiley-VCH are carefully produced. Nevertheless, authors, editors, and publisher do not warrant the information contained in these books, including this book, to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details or other items may inadvertently be inaccurate. Library of Congress Card No.: applied for
Editorial Board Christoph Adami California Institute of Technology Pasadena Stefan Bornholdt University of Bremen Wolfram Just Queen Mary University of London Kunihiko Kaneko University of Tokyo Ron Lifshitz Tel Aviv University Ernst Niebur Johns Hopkins University Baltimore Günter Radons Technical University of Chemnitz Eckehard Schöll Technical University of Berlin Hong Zhao Xiamen University
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library. Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form – by photoprinting, microfilm, or any other means – nor transmitted or translated into a machine language without written permission from the publishers. Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.
Printed in the Federal Republic of Germany Printed on acid-free paper Typesetting Uwe Krieg, Berlin Printing and Bookbinding Strauss GmbH, Mörlenbach ISBN: 978-3-527-40945-7
Contents

Preface XI
List of Contributors XIII

1 The Chaos Computing Paradigm 1
William L. Ditto, Abraham Miliotis, K. Murali, and Sudeshna Sinha
1.1 Brief History of Computers 1
1.2 The Conceptualization, Foundations, Design and Implementation of Current Computer Architectures 2
1.3 Limits of Binary Computers and Alternative Approaches to Computation: What Lies Beyond Moore's Law? 3
1.4 Exploiting Nonlinear Dynamics for Computations 4
1.5 General Concept 5
1.6 Continuous-Time Nonlinear System 8
1.7 Proof-of-Principle Experiments 10
1.7.1 Discrete-Time Nonlinear System 10
1.7.2 Continuous-Time Nonlinear System 13
1.8 Logic from Nonlinear Evolution: Dynamical Logic Outputs 16
1.8.1 Implementation of Half- and Full-Adder Operations 17
1.9 Exploiting Nonlinear Dynamics to Store and Process Information 18
1.9.1 Encoding Information 19
1.9.2 Processing Information 21
1.9.3 Representative Example 24
1.9.4 Implementation of the Search Method with Josephson Junctions 25
1.9.5 Discussions 28
1.10 VLSI Implementation of Chaotic Computing Architectures: Proof of Concept 30
1.11 Conclusions 32
References 34

2 How Does God Play Dice? 37
Jan Nagler and Peter H. Richter
2.1 Introduction 37
2.2 Model 38
2.2.1 Bounce Map with Dissipation 40
2.3 Phase Space Structure: Poincaré Section 41
2.4 Orientation Flip Diagrams 46
2.5 Bounce Diagrams 53
2.6 Summary and Conclusions 56
2.7 Acknowledgments 57
References 58

3 Phase Reduction of Stochastic Limit-Cycle Oscillators 59
Kazuyuki Yoshimura
3.1 Introduction 59
3.2 Phase Description of Oscillator 61
3.3 Oscillator with White Gaussian Noise 62
3.3.1 Stochastic Phase Equation 63
3.3.2 Derivation 65
3.3.3 Steady Phase Distribution and Frequency 68
3.3.4 Numerical Examples 69
3.4 Oscillator with Ornstein–Uhlenbeck Noise 72
3.4.1 Generalized Stochastic Phase Equation 72
3.4.2 Derivation 75
3.4.3 Steady Phase Distribution and Frequency 77
3.4.4 Numerical Examples 78
3.4.5 Phase Equation in Some Limits 81
3.5 Noise Effect on Entrainment 85
3.5.1 Periodically Driven Oscillator with White Gaussian Noise 85
3.5.2 Periodically Driven Oscillator with Ornstein–Uhlenbeck Noise 87
3.5.3 Conjecture 88
3.6 Summary 89
References 90

4 Complex Systems, Numbers and Number Theory 91
Lucas Lacasa, Bartolo Luque, and Octavio Miramontes
4.1 A Statistical Pattern in the Prime Number Sequence 93
4.1.1 Benford's Law and Generalized Benford's Law 93
4.1.2 Are the First-Digit Frequencies of Prime Numbers Benford Distributed? 95
4.1.3 Prime Number Theorem Versus Size-Dependent Generalized Benford's Law 98
4.1.4 The Primes Counting Function L(N) 99
4.1.5 Remarks 101
4.2 Phase Transition in Numbers: the Stochastic Prime Number Generator 101
4.2.1 Phase Transition 105
4.2.1.1 Network Image and Order Parameter 105
4.2.1.2 Annealed Approximation 107
4.2.1.3 Data Collapse 110
4.2.2 Computational Complexity 111
4.2.2.1 Worst-Case Classification 112
4.2.2.2 Easy-Hard-Easy Pattern 113
4.2.2.3 Average-Case Classification 116
4.3 Self-Organized Criticality in Number Systems: Topology Induces Criticality 117
4.3.1 The Division Model 118
4.3.2 Division Dynamics and SOC 118
4.3.3 Analytical Developments: Statistical Physics Versus Number Theory 121
4.3.4 A More General Class of Models 124
4.3.5 Open Problems and Remarks 125
4.4 Conclusions 125
References 126

5 Wave Localization Transitions in Complex Systems 131
Jan W. Kantelhardt, Lukas Jahnke, and Richard Berkovits
5.1 Introduction 131
5.2 Complex Networks 133
5.2.1 Scale-Free and Small-World Networks 134
5.2.2 Clustering 137
5.2.3 Percolation on Networks 138
5.2.4 Simulation of Complex Networks 139
5.3 Models with Localization–Delocalization Transitions 142
5.3.1 Standard Anderson Model and Quantum Percolation 142
5.3.2 Vibrational Excitations and Oscillations 144
5.3.3 Optical Modes in a Network 146
5.3.4 Anderson Model with Magnetic Field 148
5.4 Level Statistics 149
5.4.1 Random Matrix Theory 149
5.4.2 Level Statistics for Disordered Systems 151
5.4.3 Corrected Finite-Size Scaling 153
5.4.4 Finite-Size Scaling with Two Parameters 155
5.5 Localization–Delocalization Transitions in Complex Networks 156
5.5.1 Percolation Networks 157
5.5.2 Small-World Networks without Clustering 158
5.5.3 Scale-Free Networks with Clustering 159
5.5.4 Systems with Constant and Random Magnetic Field 161
5.6 Conclusion 163
References 165

6 From Deterministic Chaos to Anomalous Diffusion 169
Rainer Klages
6.1 Introduction 169
6.2 Deterministic Chaos 170
6.2.1 Dynamics of Simple Maps 171
6.2.2 Ljapunov Chaos 173
6.2.3 Entropies 178
6.2.4 Open Systems, Fractals and Escape Rates 185
6.3 Deterministic Diffusion 192
6.3.1 What is Deterministic Diffusion? 193
6.3.2 Escape Rate Formalism for Deterministic Diffusion 197
6.3.2.1 The Diffusion Equation 197
6.3.2.2 Basic Idea of the Escape Rate Formalism 198
6.3.2.3 The Escape Rate Formalism Worked out for a Simple Map 200
6.4 Anomalous Diffusion 205
6.4.1 Anomalous Diffusion in Intermittent Maps 206
6.4.1.1 What is Anomalous Diffusion? 206
6.4.1.2 Continuous Time Random Walk Theory 209
6.4.1.3 A Fractional Diffusion Equation 213
6.4.2 Anomalous Diffusion of Migrating Biological Cells 216
6.4.2.1 Cell Migration 216
6.4.2.2 Experimental Results 217
6.4.2.3 Theoretical Modeling 219
6.5 Summary 223
References 224
Color Figures 229
Index 241
Preface

Following the appearance of the first two very successful volumes of Reviews of Nonlinear Dynamics and Complexity, it is now my pleasure to introduce the third volume, beginning with an outline of the aims and purpose of this new series. Nonlinear behavior is ubiquitous in nature and ranges from fluid dynamics, via neural and cell dynamics, to the dynamics of financial markets. The most prominent feature of nonlinear systems is that small external disturbances can induce large changes in behavior. This can be, and has been, used for effective feedback control in many systems, from lasers to chemical reactions and the control of nerve cells and heartbeats. A new hot topic involves nonlinear effects that appear on the nanoscale. Nonlinear control of the atomic force microscope has improved its accuracy by orders of magnitude. The nonlinear electromechanical oscillations of nanotubes, the turbulence and mixing of fluids in nano-arrays and the nonlinear effects in quantum dots are further examples. Complex systems consist of large networks of coupled nonlinear devices. The observation that scale-free networks describe the behavior of the internet, cell metabolisms, financial markets and economic and ecological systems has led to new discoveries concerning their behavior, such as damage control, optimal spread of information, or the detection of new functional modules that are pivotal for their description and control. This shows that the field of Nonlinear Dynamics and Complexity consists of a large body of theoretical and experimental work with many applications, which is nevertheless governed and held together by some very basic principles, such as control, networks and optimization. The individual topics are definitely interdisciplinary, which makes it difficult for researchers to discover the new solutions – which
could be most relevant for them – that have been found by their scientific neighbors. Therefore, it seems that there is an urgent need to provide Reviews of Nonlinear Dynamics and Complexity, where researchers or newcomers to the field can find the most important recent results, described in a fashion which breaks down the barriers between the disciplines. This third volume contains new topics ranging from chaotic computing, via random dice tossing and stochastic limit-cycle oscillators, to a number-theoretic example of self-organized criticality, wave localization in complex networks and anomalous diffusion. I would like to thank all the authors for their excellent contributions. If readers take some inspiration for their further research from these interdisciplinary reviews, then this volume will have fully served its purpose. I am grateful to all members of the Editorial Board, and the staff of Wiley-VCH, for their excellent help, and would like to invite my colleagues to contribute to the next volumes. Kiel, January 2010
Heinz Georg Schuster
List of Contributors

Richard Berkovits Minerva Center and Department of Physics Bar Ilan University Ramat-Gan 52900 Israel
[email protected] William L. Ditto Arizona State University Harrington Department of Bioengineering Tempe, AZ 85287-9309 USA and Control Dynamics Inc. 1662 101st Place SE Bellevue, WA 98004 USA
[email protected]fl.edu
Lukas Jahnke Martin-Luther-Universität Halle-Wittenberg Institut für Physik von-Seckendorff-Platz 1 06120 Halle (Saale) Germany
Jan W. Kantelhardt Martin-Luther-Universität Halle-Wittenberg Institut für Physik von-Seckendorff-Platz 1 06120 Halle (Saale) Germany
[email protected] Rainer Klages Queen Mary University of London School of Mathematical Sciences Mile End Road London E1 4NS UK
[email protected] Lucas Lacasa Universidad Politécnica de Madrid Departamento de Matemática Aplicada y Estadística ETSI Aeronáuticos Plaza Cardenal Cisneros 3 28040, Madrid Spain
Bartolo Luque Universidad Politécnica de Madrid Departamento de Matemática Aplicada y Estadística ETSI Aeronáuticos Plaza Cardenal Cisneros 3 28040, Madrid Spain Abraham Miliotis University of Florida Department of Biomedical Engineering Gainesville, FL 32611-6131 USA K. Murali Anna University Department of Physics Chennai 600 025 India Octavio Miramontes Vidal Universidad Nacional Autónoma de México Instituto de Física Circuito de la Investigación Científica Ciudad Universitaria CP 04510, México, D.F. Mexico octavio@fisica.unam.mx
Jan Nagler Max-Planck-Institute for Dynamics and Self-Organization Bunsenstraße 10 37073 Göttingen Germany and Georg-August-University Göttingen Institute for Nonlinear Dynamics Bunsenstrasse 10 37073 Göttingen Germany
[email protected] Peter H. Richter University of Bremen Institute for Theoretical Physics Otto-Hahn-Allee 28334 Bremen Germany Sudeshna Sinha The Institute of Mathematical Sciences CIT Campus Taramani Chennai 600 113 India Kazuyuki Yoshimura NTT Communication Science Laboratories 2-4, Hikaridai Seika-cho, Soraku-gun Kyoto 619-0237 Japan
[email protected]
1 The Chaos Computing Paradigm

William L. Ditto, Abraham Miliotis, K. Murali, and Sudeshna Sinha
1.1 Brief History of Computers
The timeline of the history of computing machines can probably be traced back to early calculation aids, varying in sophistication from pebbles or notches carved in sticks to the abacus, which was used as early as 500 B.C.! Throughout the centuries computing machines became more powerful, progressing from Napier's Bones and the slide rule, to mechanical adding machines and on to the modern day computer revolution. The 'first generation' of modern computers were based on wired circuits containing vacuum valves and used punched cards as the main storage medium. The next major step in the history of computing was the invention of the transistor, which replaced the inefficient valves with a much smaller and more reliable component. Transistorized (still bulky) computers, normally referred to as the 'second generation', dominated the late 1950s and early 1960s. The explosion in the use of computers began with 'third generation' computers, which relied on the integrated circuit or microchip. Large-scale integration of circuits led to the development of very small processing units. Fourth generation computers followed, using a microprocessor to concentrate much of the computer's processing abilities on a single (small) chip, allowing computers to be smaller and faster than ever before. Although processing power and storage capacities have increased beyond all recognition since the 1970s, the underlying technology of LSI (large-scale integration) or VLSI (very-large-scale integration) microchips has remained basically the same, so it is widely regarded that most of today's computers still belong to the fourth generation.
One common thread in the history of computers, be it the abacus or Charles Babbage's mechanical 'analytical engine' or modern microprocessors, is this: computing machines reflect the physics of the time and are driven by progress in the understanding of the physical world.
1.2 The Conceptualization, Foundations, Design and Implementation of Current Computer Architectures
Computation can be defined as finding a solution to a problem from given inputs by means of an algorithm. This is what the theory of computation, a subfield of computer science and mathematics, deals with. For thousands of years computing was done with pen and paper, or chalk and slate, or mentally, sometimes with the aid of tables. The theory of computation began early in the twentieth century, before modern electronic computers had been invented. One of the far-reaching ideas in the theory is the concept of a Turing machine, which stores characters on an infinitely long tape, with one square at any given time being scanned by a read/write head. Basically, a Turing machine is a device that can read input strings, write output strings and execute a set of stored instructions, one at a time. The Turing machine demonstrated both the theoretical limits and the potential of computing systems, and is a cornerstone of modern day digital computers. The first computers were hardware-programmable: to change the function computed, one had to reconnect the wires or even build a new computer. John von Neumann suggested using Turing's Universal Algorithm: the function computed can then be specified by just giving its description (program) as part of the input, rather than by changing the hardware. This was a radical idea which changed the course of computing. Modern day computers still largely implement binary digital computing, which is based on Boolean algebra: the logic of true and false. Boolean algebra shows how you can calculate anything (within some epistemological limits) with a system of two discrete values. Boolean logic became a fundamental component of modern computer architecture, and is remarkable for its sheer conceptual simplicity. For instance, it can be rigorously shown that any logic gate can be obtained by adequate connection of NOR or NAND gates (i.e. any Boolean circuit can be built using NOR/NAND gates alone). This implies that the
capacity for universal computing can simply be demonstrated by the implementation of the fundamental NOR or NAND gates [1].
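As a concrete illustration of this universality, here is a minimal sketch (our own, not part of the original text) that builds the remaining basic gates from NAND alone, using the standard constructions, and checks them exhaustively:

```python
def nand(a, b):
    # NAND outputs 1 unless both inputs are 1
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)              # NOT(a) = NAND(a, a)

def and_(a, b):
    return not_(nand(a, b))        # AND = NOT of NAND

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: OR = NAND of the negations

def xor(a, b):
    t = nand(a, b)                 # classic four-NAND XOR construction
    return nand(nand(a, t), nand(b, t))

# exhaustive check against Python's own Boolean operators
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor(a, b) == (a ^ b)
print("NOT, AND, OR, XOR all recovered from NAND alone")
```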
1.3 Limits of Binary Computers and Alternative Approaches to Computation: What Lies Beyond Moore’s Law?
The operation of any computing machine is necessarily a physical process, and this crucially determines the possibilities and limitations of the computing device. For the past 20 years the throughput of digital computers has increased at an exponential rate. Fuelled by (seemingly endless) improvements in integrated-circuit technology, the exponential growth predicted by Moore's law has held true. But Moore's law will come to an end, as chipmakers will hit a wall when it comes to shrinking the size of transistors, one of the chief methods of making chips that are smaller, more powerful and cheaper than their predecessors. As conventional chip manufacturing technology runs into physical limits in the density of circuitry and signal speed, which sets limits to binary logic switch scaling, alternatives to semiconductor-based binary digital computers are emerging. Apart from analogue VLSI, these include bio-chips, which are based on materials found in living creatures; optical computers that live on pure light; and quantum computers that depend on the laws of quantum mechanics in order to perform, in theory, tasks that ordinary computers cannot. Neurobiologically inspired computing, quantum computing and DNA computing differ in many respects, but they are similar in that their aim, unlike that of conventional digital computers, is to utilize at the basic level some of the computational capabilities inherent in the basic, analogue, laws of physics. Further, understanding of biological systems has triggered the question: what lessons do the workings of the human mind offer for computationally hard problems? Thus the attempt is to create machines that benefit from the basic laws of physics and which are not just constrained by them. Here we review another emerging computing paradigm: one which exploits the richness and complexity inherent in nonlinear dynamics. This endeavour also falls into the above class, as it seeks to extend the possibilities of computing machines by utilizing the physics of the device.
1.4 Exploiting Nonlinear Dynamics for Computations
We would now like to paraphrase the classic question 'What limits do the laws of classical physics place on computation?' to read 'What opportunities do the laws of physics offer computation?'. It was proposed in 1998 that chaotic systems might be utilized to design computing devices [2]. In the early years the focus was on proof-of-principle schemes that demonstrated the capability of chaotic elements to do universal computing. The distinctive feature of this alternative computing paradigm was that it exploited the sensitivity and pattern formation features of chaotic systems. In subsequent years there has been much research activity to develop this paradigm [3–17]. It was realized that one of the most promising directions of this computing paradigm was its ability to exploit a single chaotic element to reconfigure into different logic gates through a threshold-based morphing mechanism [3, 4]. In contrast to a conventional field programmable gate array element [18], where reconfiguration is achieved through switching between multiple single-purpose gates, reconfigurable chaotic logic gates (RCLGs) are comprised of chaotic elements that morph (or reconfigure) into logic gates through the control of the pattern inherent in their nonlinear element. Two-input RCLGs have recently been realized and shown to be capable of reconfiguring between all logic gates in discrete circuits [5–7]. Additionally, such RCLGs have been realized in prototype VLSI circuits (0.13 μm CMOS, 30 MHz clock cycles). Further, reconfigurable chaotic logic gate arrays (RCGAs), which morph between higher-order functions such as those found in a typical arithmetic logic unit (ALU), have also been designed [17]. In this review we first recall the theoretical concept underlying the reconfigurable implementation of all fundamental logical operations utilizing nonlinear dynamics [3]. We also describe specific realizations of the theory in chaotic electrical circuits. Then we present recent results of a method for obtaining logic output from a nonlinear system using the time evolution of the state of the system. Finally we discuss a method for storing and processing information by exploiting nonlinear dynamics. We conclude with a brief discussion of some ongoing technological implementations of these ideas.
1.5 General Concept
We outline below a theoretical method for obtaining all basic logic gates with a single nonlinear system. The broad aim here is to use the rich temporal patterns embedded in a nonlinear time series in a controlled manner to obtain a computing device that is flexible and reconfigurable. Consider a chaotic element (our chaotic chip or chaotic processor) whose state is represented by a value x. In our scheme all the basic logic gate operations (NAND, NOR, XOR, AND, OR, XNOR and NOT) involve the following steps:

1) Inputs: x → x0 + X1 + X2 for 2-input logic operations, such as the NAND, NOR, XOR, AND, OR and XNOR operations, and x → x0 + X for 1-input operations, such as the NOT operation. Here x0 is the initial state of the system, and X = 0 when I = 0 and X = Vin when I = 1, where Vin is a positive constant.

2) Dynamical update, i.e. x → f(x), where f(x) is a nonlinear function.

3) Threshold mechanism to obtain output Z:

Z = 0          if f(x) ≤ E,   and
Z = f(x) − E   if f(x) > E

where E is a monitoring threshold.
This is interpreted as logic output 0 if Z = 0 and logic output 1 if Z > 0 (with Z ∼ Vin). Since the system is strongly nonlinear, in order to specify the initial x0 accurately one needs a controlling mechanism. Here we will employ a threshold controller [19, 20] to set the initial x0. Namely, we will use the clipping action of the threshold controller to achieve the initialization, and subsequently to obtain the output as well. Note that in our implementation we demand that the input and output have equivalent definitions (i.e. one unit is the same quantity for input and output), as well as among the various logical operations. This requires that the constant Vin assumes the same value throughout a network, and this will allow the output of one gate element to couple easily to another gate element as input, so that gates can be wired directly into gate arrays implementing compounded logic operations. In order to obtain all the desired input-output responses of the different gates, we need to satisfy the conditions enumerated in Table 1.1 simultaneously. So, given a dynamics f(x) corresponding to the physical device in actual implementation, one must find values of the threshold and initial state which satisfy the conditions derived from the truth tables to be implemented (see Table 1.2).

Table 1.1 Truth table of the basic logic operations for a pair of inputs I1, I2 [1]. The 1-input NOT gate is given by: NOT(0) is 1; NOT(1) is 0.

I1  I2  NAND  NOR  XOR  AND  OR  XNOR
0   0   1     1    0    0    0   1
0   1   1     0    1    0    1   0
1   0   1     0    1    0    1   0
1   1   0     0    0    1    1   1
A representative example is given in Table 1.3, which shows the exact solutions of the initial x0 and threshold E which satisfy the conditions in Table 1.2 when the dynamical evolution is governed by the prototypical logistic equation:

f(x) = 4x(1 − x)

The constant Vin = 1/4 is common to both input and output and to all logical gates.
Table 1.2 Necessary and sufficient conditions, derived from the logic truth tables, to be satisfied simultaneously by the nonlinear dynamical element, in order to have the capacity to implement the logical operations AND, OR, XOR, NOR, NAND and NOT (cf. Table 1.1) with the same computing module.

Logic Operation   Input Set (I1, I2)   Output   Necessary and Sufficient Condition
AND               (0,0)                0        f(x0) < E
                  (0,1)/(1,0)          0        f(x0 + Vin) < E
                  (1,1)                1        f(x0 + 2Vin) − E = Vin
OR                (0,0)                0        f(x0) < E
                  (0,1)/(1,0)          1        f(x0 + Vin) − E = Vin
                  (1,1)                1        f(x0 + 2Vin) − E = Vin
XOR               (0,0)                0        f(x0) < E
                  (0,1)/(1,0)          1        f(x0 + Vin) − E = Vin
                  (1,1)                0        f(x0 + 2Vin) < E
NOR               (0,0)                1        f(x0) − E = Vin
                  (0,1)/(1,0)          0        f(x0 + Vin) < E
                  (1,1)                0        f(x0 + 2Vin) < E
NAND              (0,0)                1        f(x0) − E = Vin
                  (0,1)/(1,0)          1        f(x0 + Vin) − E = Vin
                  (1,1)                0        f(x0 + 2Vin) < E
NOT               0                    1        f(x0) − E = Vin
                  1                    0        f(x0 + Vin) < E
Above, we have explicitly shown how one can select temporal responses, corresponding to different logic gate patterns, from a nonlinear system, and this ability allows us to construct flexible hardware. Contrast our use of nonlinear elements here with the possible use of linear systems on one hand and stochastic systems on the other. It is not possible to extract all the different logic responses from the same element in the case of linear components, as the temporal patterns are inherently very limited. So linear elements do not offer much flexibility or versatility. Stochastic elements on the other hand have many different temporal sequences. However, they are not deterministic and so one cannot use them to design components. Only nonlinear dynamics enjoys both richness of temporal behavior as well as determinism.
Table 1.3 One specific set of solutions of the conditions in Table 1.2 which yield the logical operations AND, OR, XOR, NAND and NOT, with Vin = 1/4. Note that these theoretical solutions have been fully verified in a discrete electrical circuit emulating a logistic map [5].

Operation   AND   OR     XOR   NAND   NOT
x0          0     1/8    1/4   3/8    1/2
E           3/4   11/16  3/4   11/16  3/4
Also note that, while nonlinearity is absolutely necessary for implementing all the logic gates, chaos may not always be necessary. In the representative example of the logistic map presented in Table 1.3, solutions for all the gates exist only in the fully chaotic limit, but the degree of nonlinearity necessary for obtaining all the desired logic responses will depend on the system at hand and on the specific scheme employed to obtain the input-output mapping. It may happen that certain nonlinear systems will allow a wide range of logic responses without actually being chaotic.
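To make this concrete, the following minimal sketch (our own check, not from the original text) runs the three-step scheme of this section with the logistic map f(x) = 4x(1 − x) and Vin = 1/4, and verifies that the (x0, E) pairs of Table 1.3 reproduce the required truth tables:

```python
def f(x):
    # prototypical logistic map at its fully chaotic limit
    return 4.0 * x * (1.0 - x)

V_IN = 0.25  # common input/output unit, Vin = 1/4

def gate_output(x0, E, I1, I2):
    # step 1: inputs shift the initial state; step 2: one dynamical update;
    # step 3: threshold to get the excess Z (logic 1 iff Z > 0)
    y = f(x0 + V_IN * I1 + V_IN * I2)
    Z = y - E if y > E else 0.0
    return 1 if Z > 0 else 0

# (x0, E) from Table 1.3 together with the truth table each must yield
gates = {
    "AND":  (0.0,   3 / 4,   [0, 0, 0, 1]),
    "OR":   (1 / 8, 11 / 16, [0, 1, 1, 1]),
    "XOR":  (1 / 4, 3 / 4,   [0, 1, 1, 0]),
    "NAND": (3 / 8, 11 / 16, [1, 1, 1, 0]),
}
for name, (x0, E, truth) in gates.items():
    got = [gate_output(x0, E, a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))]
    assert got == truth, name
    print(f"{name}: x0 = {x0}, E = {E} -> {got}")

# the 1-input NOT gate, with x0 = 1/2 and E = 3/4
assert [gate_output(0.5, 0.75, I, 0) for I in (0, 1)] == [1, 0]
```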
1.6 Continuous-Time Nonlinear System
We now present a somewhat different method for obtaining logic responses from a continuous-time nonlinear system. Our processor is now a continuous-time system described by the evolution equation dx/dt = F(x, t), where x = (x1, x2, . . . , xN) are the state variables and F is a nonlinear function. In this system we choose a variable, say x1, to be thresholded. Whenever the value of this variable exceeds a threshold E it resets to E, i.e. when x1 > E then (and only then) x1 = E. Now the basic 2-input 1-output logic operation on a pair of inputs I1, I2 in this method simply involves the setting of an inputs-dependent threshold, namely

E = VC + I1 + I2
where VC is the dynamic control signal determining the functionality of the processor. By switching the value of VC one can switch the logic operation being performed. Again, I1/I2 has the value 0 when the logic input is 0 and the value Vin when the logic input is 1. So the threshold E is equal to VC when the logic inputs are (0, 0), VC + Vin when the logic inputs are (0, 1) or (1, 0), and VC + 2Vin when the logic inputs are (1, 1). The output is again interpreted as a logic output 0 if x1 < E, i.e. the excess above threshold V0 = 0. The logic output is 1 if x1 > E, and the excess above threshold V0 = (x1 − E) ∼ Vin. The schematic diagram of this method is displayed in Figure 1.1.
Figure 1.1 Schematic diagram for implementing a morphing 2-input logic cell with a continuous-time dynamical system. Here VC determines the nature of the logic response, and the two inputs are I1 and I2.
Now, for a NOR gate implementation (VC = VNOR) the following must hold true (cf. truth table in Table 1.1):
• when the input set is (0, 0), the output is 1, which implies that for threshold E = VNOR, the output is V0 = (x1 − E) ∼ Vin;
• when the input set is (0, 1) or (1, 0), the output is 0, which implies that for threshold E = VNOR + Vin, x1 < E so that the output is V0 = 0;
• when the input set is (1, 1), the output is 0, which implies that for threshold E = VNOR + 2Vin, x1 < E so that the output is V0 = 0.
For a NAND gate (VC = VNAND) the following must hold true (cf. truth table in Table 1.1):
• when the input set is (0, 0), the output is 1, which implies that for threshold E = VNAND, the output is V0 = (x1 − E) ∼ Vin;
• when the input set is (0, 1) or (1, 0), the output is 1, which implies that for threshold E = VNAND + Vin, the output is V0 = (x1 − E) ∼ Vin;
• when the input set is (1, 1), the output is 0, which implies that for threshold E = VNAND + 2Vin, x1 < E so that the output is V0 = 0.
In order to design a dynamic NOR/NAND gate one has to find values of VC that will satisfy all the above input-output associations in a robust and consistent manner.
1.7 Proof-of-Principle Experiments

1.7.1 Discrete-Time Nonlinear System
In this section, we describe an iterated map whose nonlinearity has a simple (i.e. minimal) electronic implementation. We then demonstrate explicitly how all the different fundamental logic gates can be implemented and morphed using this nonlinearity. These gates provide the full set of gates necessary to construct a general-purpose, reconfigurable computing device. Consider an iterated map governed by the following equation:

x_{n+1} = α x_n / (1 + x_n^β)   (1.1)
where α and β are system parameters. Here we will consider α = 2 and β = 10, where the system displays chaos. In order to realize the chaotic map above in circuitry, one needs two sample-and-hold circuits (S/H): the first S/H circuit holds an input signal (x_n) in response to a clock signal CK1. The output from this sample-and-hold circuit is fed as input to the nonlinear device for subsequent mapping, f(x_n). A second sample-and-hold (S/H) circuit takes the output from the nonlinear device in response to a clock signal CK2. In lieu of control, the output from the second S/H circuit (x_{n+1}) closes the loop as the input to the first S/H circuit. The main purpose of the two sample-and-hold circuits is to introduce discreteness into the system and, additionally, to set the iteration speed. To implement a control for nonlinear dynamical computing, the output from the second sample-and-hold circuit is input to a threshold controller, described by:

x_{n+1} = f(x_n)   if f(x_n) < E
x_{n+1} = E        if f(x_n) ≥ E   (1.2)
where E is a prescribed threshold. The output from this threshold controller then becomes the input to the first sample-and-hold circuit. In the circuit, the notations x_n and x_{n+1} denote voltages. A simple nonlinear device is produced by coupling two complementary (n-channel and p-channel) junction field-effect transistors (JFETs) [13], mimicking the nonlinear characteristic curve f(x) = 2x/(1 + x^10). The circuit diagram is shown in Figure 1.2. The voltage across resistor R1 is amplified by a factor of five using operational amplifier U1 in order to scale the output voltage back into the range of the input voltage, a necessary condition for a circuit based on a map.
Figure 1.2 Circuit diagram of the nonlinear device. Left: Intrinsic (resistorless), complementary device made of two (n-type and p-type) JFETs. Q1: 2N5457, Q2: 2N5460. Right: Amplifier circuitry to scale the output voltage back into the range of the input voltage. R1: 535 Ω, U1: AD712 op-amp, R2: 100 kΩ and R3: 450 kΩ. Here Vin = xn and V0 = xn+1 .
The resulting transfer characteristics of the nonlinear device are depicted in Figure 1.3. In Figure 1.2, the sample-and-hold circuits are realized with National Semiconductor's sample-and-hold IC LF398, triggered by delayed timing clock pulses CK1 and CK2 [13]. Here a clock rate of either 10 or 20 kHz may be used. The threshold controller circuit, as shown in Figure 1.4, is realized with an AD712 operational amplifier, a 1N4148 diode, a 1 kΩ series resistor and the threshold control voltage.
Figure 1.3 Nonlinear device characteristics.
Figure 1.4 Circuit diagram of the threshold controller. Vin and V0 are the input and output, D is a 1N4148 diode, R = 1 kΩ, and U2 is an AD712 op-amp. The threshold level E is given by the controller input voltage Vcon .
Now, in order to implement all the fundamental logic operations NOR, NAND, AND, OR and XOR with this nonlinear system, we have to find a range of parameters for which the necessary and sufficient conditions displayed in Table 1.2 are satisfied. These inequalities have many possible solutions depending on the size of Vin. By setting Vin = 0.3 we can easily solve the equations for the different x0 that each gate requires. The specific x0 values for the different logical operations are listed in Table 1.4; a numerical check follows the table.

Table 1.4 One specific solution of the conditions in Table 1.2 which yields the logical operations NOR, NAND, AND, OR and XOR, with Vin = 0.3 and threshold Vcon equal to 1 (cf. Figure 1.4). These values are in complete agreement with hardware circuit experiments.

Operation   NOR      NAND     AND      OR       XOR
x0          0.9138   0.6602   0.0602   0.3602   0.45
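A minimal numerical check of these values (our own sketch; it idealizes the circuit as the bare map f(x) = 2x/(1 + x^10), with the threshold control voltage Vcon = 1 playing the role of E and a logic 1 read out whenever f(x) exceeds E) is:

```python
def f(x):
    # nonlinear characteristic of the complementary JFET device
    return 2.0 * x / (1.0 + x ** 10)

V_IN, E = 0.3, 1.0   # logic unit and threshold control voltage

def gate_output(x0, I1, I2):
    # initialize with the two logic inputs, map once, then threshold
    return 1 if f(x0 + V_IN * (I1 + I2)) > E else 0

x0_values = {"NOR": 0.9138, "NAND": 0.6602, "AND": 0.0602,
             "OR": 0.3602, "XOR": 0.45}
truth = {"NOR":  [1, 0, 0, 0], "NAND": [1, 1, 1, 0],
         "AND":  [0, 0, 0, 1], "OR":   [0, 1, 1, 1],
         "XOR":  [0, 1, 1, 0]}
for name, x0 in x0_values.items():
    got = [gate_output(x0, a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))]
    assert got == truth[name], (name, got)
    print(f"{name}: x0 = {x0} -> {got}")
```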
Thus we have presented a proof-of-principle device that demonstrates the capability of this nonlinear map to implement all the fundamental computing operations. It does this by exploiting the nonlinear responses of the system. The main benefit is its ability to exploit a single chaotic element to reconfigure into different logic gates through a threshold-based morphing mechanism. Contrast this with a conventional field programmable gate array element, where reconfiguration is achieved through switching between multiple single-purpose gates. This latter type of reconfiguration is both slow and wasteful of space on an integrated circuit.

1.7.2 Continuous-Time Nonlinear System
A proof-of-principle experiment of the method using the continuous-time chaotic systems described in Section 1.6 was realized with the double scroll chaotic Chua's circuit, given by the following set of (rescaled) three coupled ODEs [21]:

ẋ1 = α(x2 − x1 − g(x1))   (1.3)
ẋ2 = x1 − x2 + x3          (1.4)
ẋ3 = −βx2                  (1.5)
where α = 10 and β = 14.87, and the piecewise linear function is g(x) = bx + (1/2)(a − b)(|x + 1| − |x − 1|) with a = −1.27 and b = −0.68. We used the ring structure configuration of the classic Chua's circuit [21]. In the experiment we implemented minimal thresholding on the variable x1 (this is the part in the 'control' box in the schematic figure). We clipped x1 to E, if it exceeded E, only in (1.4). This has a very easy implementation, as it avoids modifying the value of x1 in the nonlinear element g(x1), which is harder to do. So all we need to do is to implement ẋ2 = E − x2 + x3 instead of (1.4) when x1 > E; there is no controlling action if x1 ≤ E. A representative example of a dynamic NOR/NAND gate can be obtained in this circuit implementation with the parameter Vin = 2 V. The NOR gate is realized around VC = 0 V (see Figure 1.6). At this value of the control signal we have the following (using E = VC + I1 + I2): for input (0,0) the threshold level is at 0 V, which yields V0 ∼ 2 V; for inputs (1,0) or (0,1) the threshold level is at 2 V, which yields V0 ∼ 0 V; and for input (1,1) the threshold level is at 4 V, which yields V0 = 0, as the threshold is beyond the bounds of the chaotic attractor. The NAND gate is realized around VC = −2 V. This control signal yields the following: for input (0,0) the threshold level is at −2 V, which yields V0 ∼ 2 V; for inputs (1,0) or (0,1) the threshold level is at 0 V, which yields V0 ∼ 2 V; and for input (1,1) the threshold level is at 2 V, which yields V0 = 0 [6]. In the example above, knowledge of the dynamics allowed us to design a control signal that can select out the temporal patterns emulating the NOR and NAND gates [7]. So, as the dynamic control signal VC switches between 0 V and −2 V, the module first yields the NOR and then the NAND logic response. Thus one can obtain a dynamic logic gate capable of switching between two fundamental logic responses, namely the NOR and NAND.
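A rough simulation sketch of this dynamic NOR/NAND gate follows. It is our own construction rather than the authors' circuit: plain Euler integration of (1.3)–(1.5) with the clipping applied only in (1.4), reading out the peak excess x1 − E after transients and comparing it against an illustrative 1 V decision level; the step size, run length and initial conditions are likewise illustrative assumptions.

```python
ALPHA, BETA = 10.0, 14.87
A, B = -1.27, -0.68

def g(x):
    # piecewise linear Chua nonlinearity
    return B * x + 0.5 * (A - B) * (abs(x + 1.0) - abs(x - 1.0))

def peak_excess(E, dt=1e-3, steps=200_000):
    """Euler-integrate the thresholded Chua system; return the largest
    excess x1 - E seen in the second half of the run (0 if never exceeded)."""
    x1, x2, x3 = 0.1, 0.0, 0.0
    v0 = 0.0
    for n in range(steps):
        x1_used = min(x1, E)                 # clipped x1 enters only (1.4)
        dx1 = ALPHA * (x2 - x1 - g(x1))
        dx2 = x1_used - x2 + x3
        dx3 = -BETA * x2
        x1, x2, x3 = x1 + dt * dx1, x2 + dt * dx2, x3 + dt * dx3
        if n > steps // 2:
            v0 = max(v0, x1 - E)
    return max(v0, 0.0)

V_IN = 2.0
for vc, name in ((0.0, "NOR"), (-2.0, "NAND")):
    outs = []
    for i1, i2 in ((0, 0), (0, 1), (1, 1)):
        E = vc + V_IN * (i1 + i2)            # inputs-dependent threshold
        outs.append(1 if peak_excess(E) > 1.0 else 0)
    print(f"{name} responses for (0,0), (0,1)/(1,0), (1,1): {outs}")
```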
Figure 1.5 Circuit diagram with the threshold control unit in the dotted box.
Figure 1.6 Timing sequences from top to bottom: (a) First input I1, (b) Second input I2, (c) Output VT (cf. Figure 1.5), (d) Output V0 (cf. Figure 1.5) and (e) Recovered Output (RT) obtained by thresholding, corresponding to NOR ( I1 , I2 ).
1.8 Logic from Nonlinear Evolution: Dynamical Logic Outputs
Now we describe a method for obtaining logic output from a nonlinear system using the time evolution of the state of the system. Namely, our concept uses the nonlinear characteristics of the time dependence of the state of the dynamical system to extract different responses from the system. The highlight of this method is that a single system can yield complicated logic operations very efficiently. As before, we have:

1) Inputs: x → x0 + X1 + X2 for 2-input logic operations, such as the NOR, NAND, AND, OR, XOR and XNOR operations, and x → x0 + X for 1-input logic operations, such as the NOT operation. Here x0 is the initial state of the system, and X = 0 when I = 0 and X = Vin when I = 1 (where Vin is a positive constant).

2) Nonlinear evolution over n time steps, i.e. x → f^n(x), where f(x) is a nonlinear function.

3) Threshold mechanism to obtain the output: if f^n(x) ≤ E the logic output is 0, and if f^n(x) > E the logic output is 1, where E is the threshold.

So the inputs set up the initial state x0 + I1 + I2. The system then evolves over n iterative time steps to the updated state xn. The evolved state is compared to a monitoring threshold E. If the state is greater than the threshold, a logical 1 is the output; if the state is less than the threshold, a logical 0 is the output. This process is repeated for subsequent iterations (see Figure 1.7 for a representative example).
Table 1.5 Necessary and sufficient conditions to be satisfied by a chaotic element in order to implement the logical operations NAND, AND, NOR, XOR and OR during different iterations.

Logic                NAND                    AND         NOR         XOR         OR
Iteration n          1                       2           3           4           5
Input (0,0)          x1 = f(x0) > E          f(x1) < E   f(x2) > E   f(x3) < E   f(x4) < E
Input (0,1)/(1,0)    x1 = f(x0 + Vin) > E    f(x1) < E   f(x2) < E   f(x3) > E   f(x4) > E
Input (1,1)          x1 = f(x0 + 2Vin) < E   f(x1) > E   f(x2) < E   f(x3) < E   f(x4) > E
Table 1.6 Updated state of the chaotic element satisfying the conditions in Table 1.5 in order to implement the logical operations NAND, AND, NOR, XOR and OR during different iterations, with x0 = 0.325, Vin = 1/4 and E = 0.6.

Operation                              NAND     AND     NOR    XOR     OR
Iteration n                            1        2       3      4       5
State of the system (xn)               x1       x2      x3     x4      x5
Logic input (0,0), x0 = 0.325          0.88     0.43    0.98   0.08    0.28
Logic input (0,1)/(1,0), x0 = 0.575    0.9775   0.088   0.33   0.872   0.45
Logic input (1,1), x0 = 0.825          0.58     0.98    0.1    0.34    0.9
1.8.1 Implementation of Half- and Full-Adder Operations
Now the ubiquitous bit-by-bit arithmetic addition (half-adder) involves two logic gate outputs: namely AND (to obtain the carry) and XOR (to obtain the first digit of the sum). Using the scheme above we can obtain this combinational operation in consecutive iterations, with a single one-dimensional chaotic element.
Figure 1.7 Template showing the different logic patterns for the range of x0 (0–0.5) versus iteration n (0–10). Here E = 0.75 for 1 ≤ n ≤ 4 and E = 0.4 for n > 4. Vin is fixed at 0.25.
Further, the typical full-adder requires two half-adder circuits and an extra OR gate. So in total, the implementation of a full-adder requires five different gates (two XOR gates, two AND gates and one OR gate). However, using the dynamical evolution of a single logistic map, we require only three iterations to implement the full-adder circuit. So this method allows combinational logic to be obtained very efficiently.
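A minimal sketch of this iterative read-out (our own check, using the logistic map with the Table 1.6 settings x0 = 0.325, Vin = 1/4 and E = 0.6) implements the half-adder by reading the carry off the second iterate (the AND) and the sum off the fourth (the XOR):

```python
def f(x):
    return 4.0 * x * (1.0 - x)   # logistic map

X0, V_IN, E = 0.325, 0.25, 0.6   # settings of Table 1.6

def half_adder(i1, i2):
    # the two logic inputs set the initial condition ...
    x = X0 + V_IN * (i1 + i2)
    iterates = []
    for _ in range(4):
        x = f(x)
        iterates.append(x)
    # ... and successive iterates are thresholded: per Tables 1.5/1.6,
    # iteration 2 acts as AND (the carry) and iteration 4 as XOR (the sum)
    carry = 1 if iterates[1] > E else 0
    total = 1 if iterates[3] > E else 0
    return total, carry

for i1, i2 in ((0, 0), (0, 1), (1, 0), (1, 1)):
    s, c = half_adder(i1, i2)
    assert (s, c) == (i1 ^ i2, i1 & i2)
    print(f"{i1} + {i2} -> sum {s}, carry {c}")
```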
1.9 Exploiting Nonlinear Dynamics to Store and Process Information
Information storage is a fundamental function of computing devices. Computer memory is implemented by computer components that retain data for some interval of time. Storage devices have progressed from punch cards and paper tape to magnetic, semiconductor and optical disc storage, by exploiting different natural physical phenomena to achieve information storage. For instance, the most prevalent memory element in electronics and digital circuits is the flip-flop or bistable multivibrator, which is a pulsed digital circuit capable of serving as a one-bit memory, namely storing the value 0 or 1. More meaningful information is obtained by combining consecutive bits into larger units.
Now we consider a different direction in designing information storage devices. Namely, we will implement data storage schemes based on the wide variety of controlled patterns that can be extracted from nonlinear dynamical systems. Specifically, we will demonstrate the use of arrays of nonlinear elements to stably encode and store various items of information (such as patterns and strings) to create a database. Further, we will demonstrate how this storage method also allows one to efficiently determine the number of matches (if any) to specified items of information in the database. So the nonlinear dynamics of the array elements will be utilized for flexible-capacity storage, as well as for preprocessing data for exact (and inexact) pattern matching tasks. We give below the specific details of our method and demonstrate its efficacy with explicit examples.

1.9.1 Encoding Information
We consider encoding N data elements (labeled as j = 1, 2, . . . , N), each comprised of one of M distinct items (labeled as i = 1, 2, . . . , M). N can be arbitrarily large and M is determined by the kind of data being stored. For instance, for storing English text one can consider the letters of the alphabet to be the natural distinct items building the database, namely M = 26. Or, for the case of data stored in decimal representation, M = 10, and for databases in bioinformatics comprised typically of the symbols A, T, C, G, one has M = 4. One can also consider strings and patterns as the items. For instance, for English text one can also consider the keywords as the items, and this will necessitate larger M as the set of keywords is large. Now we demonstrate a method which utilizes nonlinear dynamical systems, in particular chaotic systems, to store and process data through the natural evolution of the dynamics. The abundance of distinct behaviors of a chaotic system gives it the ability to represent a large set of items. We also demonstrate how one can process data stored in such systems by utilizing specific dynamical patterns. We start with a database of size N which is stored by N chaotic elements, with states X_n^i[j] (j = 1, 2, . . . , N). Each dynamical element stores one element of the database, encoding any one of the M items comprising our data. Now, in order to hold information one must confine the dynamical system to a fixed point behavior, i.e. a state that is stable and constant throughout the dynamical evolution of the system. We
employ the threshold mechanism mentioned above to achieve this. It works as follows. Whenever the value of a state variable of the system, X_n^i[j], exceeds a prescribed threshold T^i[j] (i.e. when X_n^i[j] > T^i[j]), the variable X_n^i[j] is reset to T^i[j]. This simple mechanism is capable of extracting a wide range of stable regular behaviors from chaotic systems under different threshold values [19, 20]. Typically, a large window of threshold values can be found where the system is confined on fixed points, namely, the state of the chaotic element under thresholding is stable at T^i[j] (i.e. X_n^i[j] = T^i[j] for all times n). So each element is capable of yielding a continuous range of fixed points [19]. As a result it is possible to have a large set of thresholds T^1, T^2, . . . , T^M, each having a one-to-one correspondence with a distinct item of our data. So the number of distinct items that can be stored in a single dynamical element is typically large, with the size of M limited only by the precision of the threshold setting. In particular, consider a collection of storage elements that evolve in discrete time n according to the tent map,

f(X_n^i[j]) = 2 min(X_n^i[j], 1 − X_n^i[j])   (1.6)
with each element storing one element of the given database (j = 1, . . . , N). Each element can hold any one of the M distinct items indicated by the index i. As described above, a threshold will be applied to each dynamical element to confine it to the fixed point corresponding to the item to be stored. For this map, thresholds ranging from 0 to 2/3 yield fixed points, namely X_n^i[j] = T^i[j] for all time, when the threshold satisfies 0 < T^i[j] < 2/3. This can be obtained exactly from the fact that f(T^i[j]) > T^i[j] for all T^i[j] in the interval (0, 2/3), implying that the subsequent iterate of a state at T^i[j] will always exceed T^i[j], and thus get reset to T^i[j]. So X_n^i[j] will always be held at the value T^i[j]. In our encoding, the thresholds are chosen from the interval (0, 1/2), namely a subset of the fixed point window (0, 2/3). For specific illustration, with no loss of generality, consider each item to be represented by an integer i in the range [1, M]. Defining a resolution r between each integer as

r = 1 / (2(M + 1))   (1.7)
gives a lookup map from the encoded number to the threshold, namely relating the integers i in the range [1, M] to thresholds T^i[j] in the range [r, 1/2 − r], by

T^i[j] = i · r   (1.8)
Therefore, we obtain a direct correspondence between a set of integers ranging from 1 to M, where each integer represents an item, and a set of M threshold values. So we can store N database elements by setting appropriate thresholds (via (1.8)) on N dynamical elements. Clearly, from (1.7), if the threshold setting has more resolution, namely smaller r, then a larger range of values can be encoded. Note, however, that precision is not a restrictive issue here, as different representations of data can always be chosen in order to suit a given precision of the threshold mechanism.
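A minimal encoding sketch follows (our own construction): it maps letters to thresholds via (1.7) and (1.8), and confirms the self-holding property, namely that under threshold clipping the tent map stays pinned at T for any T inside the chosen window:

```python
M = 26                       # distinct items: letters of the alphabet
r = 1.0 / (2 * (M + 1))      # resolution, eq. (1.7)

def threshold_for(letter):
    # eq. (1.8): item i = 1, ..., M maps to threshold T = i * r
    i = ord(letter) - ord('a') + 1
    return i * r

def tent(x):
    return 2.0 * min(x, 1.0 - x)

def clipped_update(x, T):
    # one tent-map step followed by the threshold (clipping) control
    y = tent(x)
    return T if y > T else y

# storing the letter 'q': the state stays pinned at its threshold forever
T = threshold_for('q')
x = T
for _ in range(100):
    x = clipped_update(x, T)
assert x == T
print(f"'q' stored as the stable fixed point T = {T:.6f}")
```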
1.9.2 Processing Information

Once we have a given database stored by setting appropriate thresholds on N dynamical elements, we can query for the existence of a specific item in the database using one global operational step. This is achieved by globally shifting the states of all elements of the database up by the amount that represents the item searched for. Specifically, the state X_n^i[j] of all the elements (j = 1, . . . , N) is raised to X_n^i[j] + Q^k, where Q^k is a search key given by

Q^k = 1/2 − T^k   (1.9)
where k is the number being queried for. So the value of the search key is simply 1/2 minus the threshold value corresponding to the item being searched for. This addition shifts the interval that the database elements can span from [r, 1/2 − r] to [r + Q^k, 1/2 − r + Q^k], where Q^k is the globally applied shift. See Figure 1.8 for a schematic of this process. Notice that the information item being searched for is coded in a manner 'complementary' to the encoding of the items in the database (much like a key that fits a particular lock), namely Q^k + T^k adds up to 1/2. This guarantees that only the element matching the item being queried for will have its state shifted to 1/2. The value of 1/2 is special
Figure 1.8 Schematic of the database held in an array of dynamical systems and of the parallelized query operation.
in that it is the only state value that, on the subsequent update, will reach the value of 1, which is the maximum state value for this system. So only the elements holding an item matching the queried item will reach the extremal value 1 on the dynamical update following a search query. Note that the important feature here is the nonlinear dynamics that maps the state 1/2 to 1, while all other states (both higher and lower than 1/2) get mapped to values lower than 1 (see Figure 1.9). The unique characteristic of the point 1/2 that makes this work is the fact that it acts as the 'pivot' point for the folding that will occur on the interval [r + Q^k, 1/2 − r + Q^k] upon the next update. This provides us with a single global monitoring operation to push the states of all the elements matching the queried item to the unique maximal point, in parallel. The crucial ingredient here is the nonlinear evolution of the state, which results in folding. Chaos is not strictly necessary here. It is evident, though, that for unimodal maps higher nonlinearities allow larger operational ranges for the search operation, and also enhance the resolution in the encoding. For the tent map, specifically, it can be shown that the minimal nonlinearity necessary for the above search operation to work is in the chaotic region. Another specific feature of the tent map is that its piecewise linearity allows the encoding and search key operation to be very simple indeed. To complete the search we now must detect the maximal state at 1. This can be accomplished in a variety of ways. For example, one can simply employ a level detector to register all elements at the maximal state. This will directly give the total number of matches, if any. So the total search process is rendered simpler, as the state with the matching pattern is selected out and mapped to the maximal value, allowing easy
Figure 1.9 Schematic representation of the state of an element (i) matching a queried item, (ii) higher than the queried item, (iii) lower than the queried item. The top left panel shows the state of the system encoding a list element. Three distinct elements are depicted: the state of the first element is held at 0.1, the second at 0.25 and the third at 0.4. These are shown as lines of proportional lengths on the x-axis in (a). (b)–(d) show each of these elements with the search key added to their states. Here the queried-for item is encoded by 0.25, so Q^k = 1/2 − 0.25 = 0.25. After the addition of the search key, the subsequent dynamical update yields the maximal state 1 only for the element holding 0.25. The ones with states higher and lower than the matching state (namely 0.1 and 0.4) are mapped to lower values. See also the color figure on page 230.
detection. Further, by relaxing the detection level by a prescribed ‘tolerance’, we can check for the existence within our database of numbers or patterns that are close to the number or pattern being searched for.
So nonlinear dynamics works as a powerful 'preprocessing' tool, reducing the determination of matching patterns to the detection of maximal states, an operation that can be accomplished by simple means, in parallel (see [23]).

1.9.3 Representative Example
Consider the case where our data is English language text, encoded as described above by an array of tent maps. In this case the distinct items are the letters of the English alphabet. As a result M = 26, we obtain r = 0.0185185. . . from (1.7), and the appropriate threshold levels for each item are obtained via (1.8). More specifically, consider as our database the line 'the quick brown fox jumps over the lazy dog'; each letter in this sentence is an element of the database, and can be encoded using the appropriate threshold, as in Figure 1.10(a). Now the database, as encoded above, can be queried regarding the existence of specific items in the database. Figure 1.10 presents the example of querying for the letter 'd'. To do so, the search key value corresponding to the letter 'd' (1.9) is added globally to the state of all elements (b). Then, through their natural evolution, upon the next time step, the state of the element(s) containing the letter 'd' is maximized (c). In Figure 1.11 we performed an analogous query for the letter 'o', which is present four times in our database, in order to show that multiple occurrences of the same item can be detected. Finally, in Figure 1.12 we consider a modified database (encoding the line 'a quick brown fox jumped over a lazy dog') and query for an item that is not part of the given database, namely the letter 'h'. As expected, Figure 1.12(c) shows that none of the elements are maximized. Further, by relaxing the detection level by a prescribed 'tolerance', we can check for the existence within our list of numbers or patterns that are close to the number or pattern being searched for. For instance, in the example above, by lowering the detection level to the value 1 − 2r, we can detect whether items adjacent to the queried one are present. Specifically, in the example in Figure 1.12, we can detect that the neighboring letters 'g' and 'i' are contained in our database, though 'h' is not.
Figure 1.10 From top to bottom: (a) threshold levels encoding the sentence 'the quick brown fox jumps over the lazy dog'; (b) the search key value for the letter 'd' is added to all elements; (c) the elements update to the next time step. For clarity we mark with a dot any elements that reach the detection level.
However, if we had chosen our representation such that the ordering put T and U before and after Y (as is the case on a standard QWERTY keyboard), then our inexact search would find spellings of bot or bou when boy was intended. Thus 'nearness' is defined by the choice of the representation, and can be chosen advantageously depending on the intended use. Also note that the system given by (1.1), realizable with the electronic circuit described in Figure 1.2, can also be utilized in a straightforward fashion to implement this storage and information processing method [13]. A short code sketch reproducing the query of Figure 1.10 is given below.
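This sketch is our own reconstruction, not the authors' code: the sentence is encoded as thresholds, the search key (1.9) is added globally, one tent-map update is applied, and a level detector counts the elements driven to the maximal state. The tol argument implements the relaxed detection level 1 − 2r discussed above.

```python
M = 26
r = 1.0 / (2 * (M + 1))                 # eq. (1.7)

def tent(x):
    return 2.0 * min(x, 1.0 - x)

def encode(text):
    # one tent-map element per letter, held at the threshold i * r, eq. (1.8)
    return [(ord(c) - ord('a') + 1) * r for c in text if c.isalpha()]

def query(states, letter, tol=0):
    key = 0.5 - (ord(letter) - ord('a') + 1) * r    # search key, eq. (1.9)
    updated = [tent(x + key) for x in states]       # one global update
    level = 1.0 - 2.0 * r * tol                     # (relaxed) detection level
    return sum(1 for y in updated if y >= level - 1e-12)

db = encode("the quick brown fox jumps over the lazy dog")
print("matches for 'd':", query(db, 'd'))           # 1
print("matches for 'o':", query(db, 'o'))           # 4
db2 = encode("a quick brown fox jumped over a lazy dog")
print("matches for 'h':", query(db2, 'h'))          # 0: 'h' is absent
print("near-matches for 'h':", query(db2, 'h', tol=1))  # finds 'g' and 'i'
```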
1.9.4 Implementation of the Search Method with Josephson Junctions

The equations modelling a resistively shunted Josephson junction [22] with current bias and rf drive are as follows:

C dV/dt + V/R + Ic sin φ = Idc + Irf sin(ωt)   (1.10)
Figure 1.11 From top to bottom: (a) threshold levels encoding the sentence 'the quick brown fox jumps over the lazy dog'; (b) the search key value for the letter 'o' is added to all elements; (c) the elements update to the next time step. For clarity we mark with a dot any elements that reach the detection level.
where 2eV = ħφ̇, C is the Josephson junction capacitance, V is the voltage across the junction, R is the shunting resistance, Ic is the critical current of the junction, φ is the phase difference across the junction, Idc is the current drive, and Irf is the amplitude of the rf-current drive. If we scale currents to be in units of Ic and time to be in units of ωp^(−1), where ωp = (2eIc/ħC)^(1/2) is the plasma frequency, we obtain the scaled dynamical equations:

dv/dt = βc^(−1/2) [idc + irf sin(Ωt) − v − sin φ]
dφ/dt = βc^(1/2) v   (1.11)
Figure 1.12 From top to bottom: (a) threshold levels encoding the sentence 'a quick brown fox jumps over a lazy dog'; (b) the search key value for the letter 'h' is added to all elements; (c) the elements update to the next time step. No elements reach the detection level, as 'h' does not occur in the encoded sentence.
where $\beta_c = 2eI_cR^2C/\hbar$ is the McCumber parameter. Here we choose representative values $\omega_p \sim 36$ GHz, Ω = 0.11, $\beta_c = 4$, $i_{rf} = 1.05$, $i_{dc} = 0.011$. Using an additional external bias current (added to $i_{dc}$) to encode an item, one obtains an inverted 'tent-map'-like relation between the absolute value of the Josephson-junction voltage and the biasing input (essentially a broadened tent map). So, exactly as before, a section of this (broadened) 'map' can be used for encoding, and a complementary key can be chosen. The output is a match if it drops below a certain voltage (for instance, the one shown by the line in Figure 1.13).
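As a rough numerical illustration, the scaled equations (1.11) can be integrated directly, sweeping an extra bias added to $i_{dc}$ as the encoding input. The parameter values below are those quoted in the text; the integration time, transient cut and averaging window are assumptions of this sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the scaled RSJ equations (1.11); sweep a bias added to i_dc
# and record the average |v|, the quantity plotted in Figure 1.13.
beta_c, Omega, i_rf, i_dc = 4.0, 0.11, 1.05, 0.011   # values from the text

def rsj(t, y, bias):
    v, phi = y
    dv = beta_c**-0.5 * (i_dc + bias + i_rf * np.sin(Omega * t)
                         - v - np.sin(phi))
    dphi = beta_c**0.5 * v
    return [dv, dphi]

def mean_abs_v(bias, t_end=2000.0):
    sol = solve_ivp(rsj, (0.0, t_end), [0.0, 0.0], args=(bias,),
                    max_step=0.1)
    tail = sol.t > 0.5 * t_end      # discard the transient (assumption)
    return np.abs(sol.y[0][tail]).mean()

for bias in np.linspace(0.0, 0.3, 7):   # encoding input (assumed range)
    print(f"bias = {bias:.2f}   <|v|> = {mean_abs_v(bias):.4f}")
```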
Figure 1.13 Absolute voltage |v| of the Josephson junction vs the input (bias current), obtained from (1.11).
1.9.5 Discussions
A significant feature of this scheme is that it employs a single, simple global shift operation and does not entail accessing each item separately at any stage. It also uses a nonlinear folding to select out the matched item, and this nonlinear operation is the result of the natural dynamical evolution of the elements. So the search effort is considerably reduced, as it utilizes the native processing power of the nonlinear dynamical processors. One can then think of this as a natural application, at the machine level, in a computing machine consisting of chaotic modules. It is equally potent as a special-applications 'search chip', which can be added on to regular circuitry, and should prove especially useful in machines which are repeatedly employed for selection/search operations. In terms of the time scales of the processor, the search operation requires one dynamical step, namely one unit of the processor's intrinsic update time. The principal point here is the scope for parallelism that exists in our scheme. This is due to the selection process occurring through one global shift, which implies that there is (in principle) no scale-up with size N. Additionally, this search does not need an ordered set, further reducing operational time. Regarding information storage capacity, note that we employ an M-state encoding, where M can in principle be very large. This offers much gain in encoding capacity. As in the example we present above,
the letters of the alphabet are encoded by one element each; binary coding would require much more hardware to do the same. Specifically, consider the illustrative example of encoding a list of names and then searching the list for the existence of a certain name. In the current ASCII encoding technique, each letter is encoded into two hexadecimal digits, or 8 bits. Assuming a maximum name length of k letters, this implies that one has to use 8 × k binary bits per name, so typically the search operation scales as O(8kN). Consider, in comparison, what our scheme offers. If base 26 ('alphabetical' representation) is used, each letter is encoded into one dynamical system (an 'alphabit'). As mentioned before, the system is capable of this dense encoding as it can be controlled onto 26 distinct fixed points, each corresponding to a letter. Again assuming a maximum length of k letters per name, one needs to use k 'alphabits' per name, so the search effort scales as kN. Namely, the storage is eight times more efficient and the search can be done roughly eight times faster as well! If base S encoding is employed, where S is the set of all possible names (size(S) ≤ N), then each name is encoded into one dynamical system with S fixed points (a 'superbit'). So one needs just one 'superbit' per name, implying that the search effort scales simply as N, i.e. 8k times faster than the binary encoded case. In practice, the final step of detecting the maximal values can conceivably be performed in parallel. This would reduce the search effort to two time steps (one to map the matching item to the maximal value and another to detect the maximal value simultaneously). In that case the search effort would be 8kN times faster than the binary benchmark. Alternative ideas for implementing the increasingly important problem of search have included the use of quantum computers [26]. However, the method here has the distinct advantage that the enabling technology for practical implementation need not be very different from conventional silicon devices. Namely, the physical design of a dynamical search chip should be realizable through conventional CMOS circuitry. Implemented at the machine level, this scheme can perform unsorted database searches efficiently. CMOS circuit realizations of chaotic systems, like the tent map, operate in the region of 1 MHz. Thus a complete search for an item, comprising search-key addition, update, threshold detection and database restoration, should be able to be performed at 250 kHz, regardless of the size of the database. Commercial efforts are underway to construct VLSI circuitry in GHz ranges and are showing promising results in terms of power, size and speed.
Finally, regarding the general reach of the scheme: nonlinear systems are abundant in nature, so embodiments of this concept are conceivable in many different physical systems, ranging from fluids to electronics to optics. Potentially good candidates for physical realization of the method include nonlinear electronic circuits and optical devices, which have distributed degrees of freedom [24]. Also, systems such as single-electron tunneling junctions [25], which are naturally piecewise-linear maps, can conceivably be employed to make such search devices. All this underscores the general scope of this concept.
1.10 VLSI Implementation of Chaotic Computing Architectures: Proof of Concept
We are currently developing a VLSI implementation of chaotic computing in a demonstration integrated circuit chip. The demonstration chip has a parallel read/write interface, built from standard logic gates, to communicate with a microcontroller. The read/write interface responds to a range of addresses to give access to internal registers, and the internal registers interface with the demonstration chaotic computing circuits. For the demonstration we selected circuits that were based upon known experimental discrete-component implementations and, as such, the circuits are larger than necessary in this first generation of chip. Currently, the TSMC 0.18 μm process is the IC technology chosen for the development. This process was chosen to demonstrate that the chaotic elements work in smaller geometries, and the extra metal layers in this process provide a margin of safety for any routing issues that might develop. For our proof of concept, the VLSI chip implements a small ALU (Arithmetic Logic Unit) with three switchable functions: two arithmetic functions (adder, multiplier, divider, barrel shifter or others) and a scratchpad memory function. The ALU switches between at least two arithmetic functions and a completely different function such as a small FIFO (First-In, First-Out memory buffer). This experiment takes a significant step toward showing the possibilities for future configurable computing. The three functions are combined into a single logic array controlled through a microcontroller interface. The micro-
controller can switch functions and then write data to the interface and read the results back from the interface. Figure 1.14 shows the simplified representation of this experiment [17].
Figure 1.14 Simplified schematic of the proof of concept VLSI implementation of an ALU which can switch between at least two arithmetic functions, and a completely different function such as a small FIFO (First-In, First-Out memory buffer).
Recently, ChaoLogix Inc. designed and fabricated a proof of concept chip that demonstrates the feasibility of constructing reconfigurable chaotic logic gates, henceforth ChaoGates, in standard CMOS-based VLSI (0.18 μm TSMC process operating at 30 MHz, with a 3.1 × 3.1 mm die size and a 1.8 V digital core voltage). The basic building block ChaoGate is shown schematically in Figure 1.15. ChaoGates were then incorporated into a ChaoGate Array in the VLSI chip to demonstrate higher-order morphing functionality, including the following:

1. A small Arithmetic Logic Unit (ALU) that morphs between higher-order arithmetic functions (multiplier and adder/accumulator) in less than one clock cycle. An ALU is a basic building block of computer architectures.

2. A Communications Protocols (CP) Unit that morphs between two different complex communications protocols in less than one clock cycle: the Serial Peripheral Interface (SPI, a synchronous serial data link) and an Inter-Integrated Circuit control bus implementation (I2C, a multi-master serial computer bus).

While the design of the ChaoGates and ChaoGate Arrays in this proof of concept VLSI chip was not optimized for performance, it clearly
Figure 1.15 (a) Schematic of a two-input, one-output morphable ChaoGate. The gate logic functionality (NOR, NAND, XOR, …) is controlled (morphed), in the current VLSI design, by global thresholds connected to VT1, VT2 and VT3 through analog multiplexing circuitry. (b) A size comparison between the current ChaoGate circuitry implemented in the ChaoLogix VLSI chaotic computing chip and a typical NAND gate circuit. (Courtesy of ChaoLogix Inc.)
demonstrates that ChaoGates can be constructed and organized into reconfigurable chaotic logic gate arrays capable of morphing between higher-order computational building blocks. Current efforts are focused upon optimizing the design of a single ChaoGate to the point where it is comparable to or smaller than a single NAND gate in terms of power and size, yet capable of morphing between all gate functions in under a single computer clock cycle. Preliminary designs indicate that this goal is achievable and that all gates currently used to design computers may be replaced with ChaoGates to provide added flexibility and performance.
1.11 Conclusions
In summary, we have demonstrated the direct and flexible implementation of all the basic logic gates utilizing nonlinear dynamics. The richness of the dynamics allows us to select out all the different gate responses from the same processor by simply setting suitable threshold levels. These threshold levels are known exactly from theory and
are thus available as a look-up table. Arrays of such logic gates can conceivably be programmed on the run (for instance, with a stream of threshold values being sent in by an external program) to be optimized for the task at hand. For example, such a morphing device may serve flexibly as an arithmetic processing unit or a unit of memory, and can be swapped, as the need demands, to be one or the other. Thus architectures based on such logic implementations may serve as ingredients of a general-purpose reconfigurable computing device more powerful and fault-tolerant [11] than statically wired hardware.

Further, we have demonstrated the concept of using nonlinear dynamical elements to store information efficiently and flexibly. We have shown how a single element can store M items, where M can be large and can vary to best suit the nature of the data being stored and the application at hand. So we obtained information storage elements of flexible capacity, which are capable of naturally storing data in different bases, in different alphabets, or in multilevel logic. This cuts down space requirements significantly, and can find embodiment in many different physical contexts.

Further, we have shown how this method of storing information can be naturally exploited for processing as well. In particular, we have demonstrated a method to determine the existence of an item in the database. The method involves a single global shift operation applied simultaneously to all the elements comprising the database; this operation, after one dynamical step, pushes the element(s) storing the matching item (and only those) to a unique, maximal state. This extremal state can then be detected by a simple level detector, thus directly giving the number of matches. So nonlinear dynamics works as a powerful 'preprocessing' tool, reducing the determination of matching patterns to the detection of maximal states. The method can also be extended to identify inexact matches. Since the method involves just one parallel procedural step, it is naturally set up for parallel implementation on existing and future chaos-based computing hardware, ranging from conventional CMOS-based VLSI circuitry to more esoteric chaotic computing platforms such as magneto-based circuitry [27] and high-speed chaotic photonic integrated circuits operating in the GHz frequency range [28].
References
1 Mano, M.M., Computer System Architecture, 3rd edition, Prentice Hall, Englewood Cliffs, 1993; Bartee, T.C., Computer Architecture and Logic Design, McGraw-Hill, New York, 1991.
2 Sinha, S. and Ditto, W.L., Phys. Rev. Lett. 81 (1998) 2156.
3 Sinha, S., Munakata, T. and Ditto, W.L., Phys. Rev. E 65 (2002) 036214; Munakata, T., Sinha, S. and Ditto, W.L., IEEE Trans. Circ. and Systems 49 (2002) 1629; Munakata, T. and Sinha, S., Proc. of COOL Chips VI, Yokohama (2003) 73.
4 Sinha, S. and Ditto, W.L., Phys. Rev. E 59 (1999) 363; Sinha, S., Munakata, T. and Ditto, W.L., Phys. Rev. E 65 (2002) 036216; Ditto, W.L., Murali, K. and Sinha, S., Proceedings of the IEEE Asia-Pacific Conference on Circuits and Systems (APCCAS06), Singapore (2006) pp. 1835–38.
5 Murali, K., Sinha, S. and Ditto, W.L., Proceedings of the STATPHYS-22 satellite conference 'Perspectives in Nonlinear Dynamics', special issue of Pramana 64 (2005) 433.
6 Murali, K., Sinha, S. and Ditto, W.L., Int. J. Bif. and Chaos (Letts) 13 (2003) 2669; Murali, K., Sinha, S. and Raja Mohamed, I.R., Phys. Letts. A 339 (2005) 39.
7 Murali, K., Sinha, S. and Ditto, W.L., Proceedings of the Experimental Chaos Conference (ECC9), Brazil (2006), published in Phil. Trans. of the Royal Soc. of London (Series A) (2007); Murali, K., Sinha, S. and Ditto, W.L., Proceedings of the IEEE Asia-Pacific Conference on Circuits and Systems (APCCAS06), Singapore (2006) pp. 1839–42.
8 Prusha, B.S. and Lindner, J.F., Phys. Letts. A 263 (1999) 105.
9 Cafagna, D. and Grassi, G., Int. Sym. Signals, Circuits and Systems (ISSCS 2005) 2 (2005) 749.
10 Chlouverakis, K.E. and Adams, M.J., Electronics Lett. 41 (2005) 359.
11 Jahed-Motlagh, M.R., Kia, B., Ditto, W.L. and Sinha, S., Int. J. of Bif. and Chaos 17 (2007) 1955.
12 Murali, K. and Sinha, S., Phys. Rev. E 75 (2007) 025201.
13 Miliotis, A., Murali, K., Sinha, S., Ditto, W.L. and Spano, M.L., Chaos, Solitons & Fractals 42 (2009) 809–819.
14 Miliotis, A., Sinha, S. and Ditto, W.L., Int. J. of Bif. and Chaos 18 (2008) 1551–59; Miliotis, A., Sinha, S. and Ditto, W.L., Proceedings of the IEEE Asia-Pacific Conference on Circuits and Systems (APCCAS06), Singapore (2006) pp. 1843–46.
15 Murali, K., Miliotis, A., Ditto, W.L. and Sinha, S., Phys. Letts. A 373 (2009) 1346–51.
16 Murali, K., Sinha, S., Ditto, W.L. and Bulsara, A.R., Phys. Rev. Lett. 102 (2009) 104101; Sinha, S., Cruz, J.M., Buhse, T. and Parmananda, P., Europhys. Lett. 86 (2009) 60003.
17 Ditto, W., Sinha, S. and Murali, K., US Patent Number 07096347 (August 22, 2006).
18 Taubes, G., Science 277 (1997) 1935.
19 Sinha, S. and Biswas, D., Phys. Rev. Lett. 71 (1993) 2010; Glass, L. and Zheng, W., Int. J. Bif. and Chaos 4 (1994) 1061; Sinha, S., Phys. Rev. E 49 (1994) 4832; Sinha, S., Phys. Letts. A 199 (1995) 365; Sinha, S. and Ditto, W.L., Phys. Rev. E 63 (2001) 056209; Sinha, S., Phys. Rev. E 63 (2001) 036212; Sinha, S., in Nonlinear Systems, Eds. Sahadevan, R. and Lakshmanan, M.L. (Narosa, 2002) 309–28; Ditto, W.L. and Sinha, S., Phil. Trans. of the Royal Soc. of London (Series A) 364 (2006) 2483.
20 Murali, K. and Sinha, S., Phys. Rev. E 68 (2003) 016210.
21 Maddock, R.J. and Calcutt, D.M., Electronics: A Course for Engineers, Addison Wesley Longman Ltd. (1997) p. 542; Dimitriev, A.S. et al., J. Comm. Tech. Electronics 43 (1998) 1038.
22 Cronemeyer, D.C. et al., Phys. Rev. B 31 (1985) 2667.
23 For instance, content-addressable memory (CAM) is a special type of computer memory used in certain very-high-speed searching applications, such as routers. Unlike standard computer memory (random access memory, or RAM), in which the user supplies a memory address and the RAM returns the data word stored at that address, a CAM is designed such that the user supplies a data word and the CAM searches its entire memory to see if that data word is stored anywhere in it. What we attempt to design here is a CAM-like device.
24 Sukow, D.W. et al., Chaos 7 (1997) 560; Blakely, J.N., Illing, L. and Gauthier, D.J., Phys. Rev. Lett. 92 (2004); Blakely, J.N., Illing, L. and Gauthier, D.J., IEEE Journal of Quantum Electronics 40 (2004) 299.
25 Yang, T. and Chua, L.O., Int. J. of Bif. and Chaos 10 (2000) 1091.
26 Grover, L.K., Phys. Rev. Lett. 79 (1997) 325.
27 Koch, R., Scientific American 293(2) (2005) 56.
28 Yousefi, M., Barbarin, Y., Beri, S., Bente, E.A.J.M., Smit, M.K., Nötzel, R. and Lenstra, D., Phys. Rev. Lett. 98 (2007) 044101.
2 How Does God Play Dice? Jan Nagler and Peter H. Richter
2.1 Introduction
Since childhood almost everybody is familiar with board games such as ludo, backgammon, or Monopoly, where a player starts his move by throwing cubic dice. Besides the Platonic four-, six-, eight-, twelve- and twenty-sided dice, role-playing gamers find it appealing to use fairly unusual dice geometries such as 7-sided, 16-sided, 34-sided, or even 100-sided dice [1], all of which are assumed to be perfect random number generators. Even Einstein seemed to take it for granted that dice tossing has random outcomes. In 1926 he objected to the view that the basic laws of nature incorporate randomness when he wrote in a letter to Born 'I, at any rate, am convinced that He does not play dice' [2]. After the concept of deterministic chaos [3, 4] was introduced in the seventies, it became conceivable that randomness may be generated by purely classical mechanical systems – such as bouncing dice. Experiments to test fair dice and coins have been reported in [5]. Theoretical work has mainly focused on simulations of simple two-sided or four-sided dice models. Vulović and Prange compared the basins of attraction of the two possible final configurations of a homogeneous rod [6] with those calculated by Keller for a simplified coin-tossing model [7], whereas Feldberg et al. focused on a rolling square and the corresponding final states for varying initial conditions [8]. Ford and Kechen studied throws of a homogeneous disk [9, 10]. Recently, coin tossing has been studied in model and experiment by Strzalko and co-workers [11]. The probability of edge-landing of a coin had been addressed earlier by Murray and Teare [12]. Interestingly, the odds for an American nickel landing on edge are about 1 in 6000.
A simple home (or office) experiment will support the expectation that the degree of randomness in dice throwing depends on the circumstances, notably on initial conditions. The reader may try to drop a cube with the same side up a few times from a small altitude of the order of the cube’s side length. This sort of experiment demonstrates that the outcome is, to a certain extent, predictable rather than completely random. In this article we analyze the implications of this idea in terms of explicit calculations. In our barbell model, apparent chaos is generated in the succession of free-flight episodes and bounces off the ground. It is admittedly only a caricature of a cubic die, yet it has the advantage of being more amenable to numerical and analytical investigation. Most importantly, we believe we capture the essential features of dice throwing by studying such a minimal model.
2.2 Model
We consider a barbell with two point masses m₁ and m₂ in a plane, see Figure 2.1. The point masses are connected through a massless rod of unit length; their positions are (x₁, y₁) and (x₂, y₂), respectively. We assume gravity to pull in the negative y-direction, the floor being at y = 0. The barbell has three degrees of freedom: two for translation and one for rotation. It is convenient to use the center-of-mass coordinates (x, y) and the angle φ (see Figure 2.1). Their connection to the coordinates of the mass points is

$x_1 = x + \beta_2\cos\varphi$  (2.1)
$x_2 = x - \beta_1\cos\varphi$  (2.2)
$y_1 = y + \beta_2\sin\varphi$  (2.3)
$y_2 = y - \beta_1\sin\varphi$  (2.4)

The β₁,₂ are the mass ratios β₁ = m₁/(m₁ + m₂) and β₂ = m₂/(m₁ + m₂). Since β₁ + β₂ = 1, we have only a single mass parameter. Throughout the article we consider the case β := β₁ ≥ 1/2. In addition, we use dimensionless units, so that the Lagrangian reads

$L = \tfrac{1}{2}(\dot x^2 + \dot y^2) + \tfrac{1}{2}\beta_1\beta_2\dot\varphi^2 - y$  (2.5)
Figure 2.1 An example trajectory and its symbolic characterization. We define the symbols 0 and 1 for bounces relative to the initial orientation of the barbell. Two possible orientations are distinguished, mass 1 being to the left or to the right of mass 2. A bounce is labeled 0 when the orientation at time t = t_bounce is the same as initially at t = 0; a bounce where the barbell has flipped its orientation compared to the initial one is labeled 1. Note that the symbolic code does not reveal which ball bounces. In addition, when during a flight the barbell passes the vertical axis clockwise, we denote the passage by the letter R; a counterclockwise passage is denoted by L. No further symbol is added to the code when the barbell has too little energy for vertical passages, that is, when the motion is restricted to either infinitely many type-0 or type-1 bounces. We also omit repeating symbols at the end of the code and do not consider the (singular) case of the barbell hopping vertically with vanishing angular momentum. The symbol code for the example is R1L00R111L0.
Rather than introducing a reflecting potential at the floor, the reflections at y₁ = 0 and y₂ = 0 will be given special attention in the following derivation of the bounce map. In contrast to what Figure 2.1 may suggest, we assume no force to act in the x-direction, either upon reflection or during the free-flight episodes. This is possible because ẋ is a constant of the motion, which we take to be 0, together with x. Note that the system has effectively only two degrees of freedom, represented by the coordinates y and φ. During free fall, both the energy

$E = \tfrac{1}{2}\dot y^2 + \tfrac{1}{2}\beta_1\beta_2\dot\varphi^2 + y$  (2.6)

and the angular momentum

$L = \beta_1\beta_2\dot\varphi$  (2.7)

are conserved. We now pay special attention to what happens during reflection.
2.2.1 Bounce Map with Dissipation
Consider the case when mass 1 hits the floor, y₁ = 0, at an angle π < φ < 2π. As we will see, it is convenient to use the coordinates y₁, ẏ₁ instead of y, ẏ, and to express the energy (2.6) as

$E = \frac{1}{2}\,\frac{\beta_1}{\beta_1+\beta_2\cos^2\varphi}\,\dot y_1^2 + \frac{1}{2}\,\beta_2\,(\beta_1+\beta_2\cos^2\varphi)\left(\dot\varphi - \frac{\cos\varphi}{\beta_1+\beta_2\cos^2\varphi}\,\dot y_1\right)^2 - \beta_2\sin\varphi$  (2.8)

At given φ the expression

$\tilde{\dot\varphi} := \dot\varphi - \frac{\cos\varphi}{\beta_1+\beta_2\cos^2\varphi}\,\dot y_1$  (2.9)

is proportional to the tangential component of the momentum, which is assumed to be conserved during the collision (no friction parallel to the floor). Hence it follows that φ and φ̃̇ do not change during reflection,

$(\varphi, \tilde{\dot\varphi}) \to (\varphi', \tilde{\dot\varphi}') = (\varphi, \tilde{\dot\varphi})$  (2.10)

As to ẏ₁, consider first the elastic collision where E is constant. Then (2.8) tells us that at given (φ, φ̃̇) and E there are two possible values of ẏ₁ which differ only in sign; the negative value corresponds to the incoming trajectory, the positive to the outgoing, and the reflection condition is ẏ₁ → ẏ₁′ = −ẏ₁. In the general case we assume the simplest version of an inelastic bounce,

$\dot y_1 \to \dot y_1' = -(1-f)\,\dot y_1$  (2.11)

where the restitution coefficient is 1 − f, with 0 ≤ f ≤ 1. A vanishing value of f represents elastic reflection, and f = 1 corresponds to the case where all vertical momentum is dissipated. From (2.10) and (2.11) we obtain the reflection law

$(\varphi, \tilde{\dot\varphi}, \dot y_1) \to (\varphi', \tilde{\dot\varphi}', \dot y_1') = (\varphi, \tilde{\dot\varphi}, -(1-f)\dot y_1) \qquad (y_1 = y_1' = 0,\ \pi < \varphi < 2\pi)$  (2.12)
the change in energy being

$\Delta E = E - E' = \frac{f(2-f)}{2}\,\frac{\beta_1\,\dot y_1^2}{\beta_1+\beta_2\cos^2\varphi}$  (2.13)
The energy loss is strongest, $\Delta E = \tfrac{1}{2}f(2-f)\dot y_1^2$, when the bounce is head-on, φ = 3π/2. In the case when mass 2 bounces off the floor, y₂ = 0 and 0 < φ < π, the formulas (2.8) through (2.13) remain the same except for the replacements $y_1 \to y_2$, $\beta_{1,2} \to \beta_{2,1}$, $\sin\varphi \to -\sin\varphi$ and $\cos\varphi \to -\cos\varphi$. The complete motion of the barbell is described by

$\ddot y = -1 \quad\text{and}\quad \ddot\varphi = 0$  (2.14)

as long as both y₁ > 0 and y₂ > 0, and by the corresponding reflection laws if either y₁ = 0 or y₂ = 0.
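The flight rule (2.14) and the reflection law (2.12) combine readily into a crude simulator, which later sections of this chapter will be read against. The sketch below uses naive fixed-step time stepping; the step size, final time, and the example parameter values are assumptions of this sketch, and a production version would locate bounce times by root finding.

```python
import numpy as np

beta1, beta2 = 0.8, 0.2      # mass ratios, beta1 + beta2 = 1 (example values)
f = 0.2                      # dissipation parameter; restitution is 1 - f

def tip_heights(y, phi):
    # y1 and y2 from (2.3) and (2.4)
    return y + beta2 * np.sin(phi), y - beta1 * np.sin(phi)

def reflect(y, ydot, phi, phidot):
    """Reflection law (2.10)-(2.12) applied at whichever tip is lower."""
    y1, y2 = tip_heights(y, phi)
    mass1 = y1 <= y2
    # for a mass-2 bounce apply the replacements beta1 <-> beta2, cos -> -cos
    c = np.cos(phi) if mass1 else -np.cos(phi)
    b1, b2 = (beta1, beta2) if mass1 else (beta2, beta1)
    lever = beta2 if mass1 else -beta1     # dy_tip = dy + lever*cos(phi)*dphi
    ytip = ydot + lever * np.cos(phi) * phidot
    denom = b1 + b2 * c * c
    phitilde = phidot - c * ytip / denom   # conserved tangential part (2.9)
    ytip_new = -(1.0 - f) * ytip           # inelastic vertical bounce (2.11)
    phidot_new = phitilde + c * ytip_new / denom
    ydot_new = ytip_new - lever * np.cos(phi) * phidot_new
    return ydot_new, phidot_new

def simulate(y0, phi0, phidot0, t_max=200.0, dt=1e-3):
    """Free flight (2.14) by exact parabolic steps; bounce on tip contact."""
    y, ydot, phi, phidot = y0, 0.0, phi0, phidot0
    for _ in range(int(t_max / dt)):
        y, ydot = y + ydot * dt - 0.5 * dt * dt, ydot - dt
        phi += phidot * dt
        if min(tip_heights(y, phi)) <= 0.0:
            ydot, phidot = reflect(y, ydot, phi, phidot)
    return phi % (2.0 * np.pi)             # final orientation angle
```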
2.3 Phase Space Structure: Poincaré Section

The barbell lives in a 4D phase space, with configuration space $S(\varphi) \times \mathbb{R}(y)$ and momenta $(p_\varphi, p_y) = (\beta_1\beta_2\dot\varphi, \dot y) \in \mathbb{R}^2$. Except for zero friction (where E = const) there exists no constant of motion. The points of minimum energy E = 0, $(\varphi, \dot\varphi, y, \dot y) = (0,0,0,0)$ and $(\pi,0,0,0)$, are the two possible final states. Together they attract the entire phase space, except for the boundary between their basins of attraction. This boundary is formed by the 3D stable manifolds of the two unstable fixed points $(\varphi, \dot\varphi, y, \dot y) = (\pi/2, 0, \beta_1, 0)$ with energy E = β₁, and $(3\pi/2, 0, \beta_2, 0)$ with energy E = β₂. These points correspond to the barbell standing upright with either mass 2 or mass 1 touching the ground, 'indecisive' as to which way to fall over. Their stable manifolds consist of the (y, ẏ)-planes defined by (φ, φ̇) = (π/2, 0), with E > β₁, and (φ, φ̇) = (3π/2, 0) with E ≥ β₂, plus the sets of initial conditions which are drawn towards these planes. Our analysis aims to understand how these manifolds partition the phase space into the two basins of attraction. A convenient way of studying this phase space structure is in terms of suitably chosen Poincaré sections, thereby reducing the system's dimension by one. Note that a section condition is 'suitable' if, first, every orbit intersects it sufficiently many times to reveal its nature and final
state, and secondly the surface of section can be represented in a one-to-one projection. A natural choice of this kind is to consider the barbell's motion at the moments immediately after it bounces off the floor, i.e. at y₁ = 0 with ẏ₁ > 0 (π < φ < 2π) and y₂ = 0 with ẏ₂ > 0 (0 < φ < π). This condition is met infinitely many times by every orbit, hence it is complete in the sense defined in [14] and produces a 3D surface of section through the 4D phase space. But which coordinates to choose to represent this surface is a delicate matter. We might think of (φ, φ̇, E), but note from (2.8) that a given set (φ, φ̇, E) together with y₁ = 0 (π < φ < 2π) would not in general allow us to determine a unique ẏ₁ > 0 (there may be two such values, or none). However, if we use the coordinates (φ, φ̃̇, E), then (2.8) becomes

$E = \frac{1}{2}\,\frac{\beta_1}{\beta_1+\beta_2\cos^2\varphi}\,\dot y_1^2 + \frac{1}{2}\,\beta_2\,(\beta_1+\beta_2\cos^2\varphi)\,\tilde{\dot\varphi}^2 - \beta_2\sin\varphi$  (2.15)
which shows that ẏ₁ > 0 is indeed uniquely determined by (φ, φ̃̇, E). The corresponding initial conditions (φ, φ̇, y, ẏ) right after the bounce are then given as

$\dot\varphi = \tilde{\dot\varphi} + \frac{\cos\varphi}{\beta_1+\beta_2\cos^2\varphi}\,\dot y_1, \qquad y = -\beta_2\sin\varphi, \qquad \dot y = \dot y_1 - \beta_2\,\dot\varphi\cos\varphi$  (2.16)
and similarly for bounces of mass 2 if 0 < φ < π. The 3D surface of section $\mathcal{P}$ is that part of $S(\varphi)\times\mathbb{R}(\tilde{\dot\varphi})\times\mathbb{R}(E)$ where E ≥ 0 and $\dot y_{1,2}^2 \ge 0$; using (2.15) and the corresponding equation for reflection of mass 2, we find that the allowed values of φ̃̇, at given E, are restricted by

$0 \le \tilde{\dot\varphi}^2 \le \frac{2\,(E+\beta_2\sin\varphi)}{\beta_2\,(\beta_1+\beta_2\cos^2\varphi)} \qquad (\pi < \varphi < 2\pi)$
$0 \le \tilde{\dot\varphi}^2 \le \frac{2\,(E-\beta_1\sin\varphi)}{\beta_1\,(\beta_2+\beta_1\cos^2\varphi)} \qquad (0 < \varphi < \pi)$  (2.17)
The Poincaré map $P: \mathcal{P} \to \mathcal{P}$ is the mapping

$(\varphi, \tilde{\dot\varphi}, E) \;\to\; (\varphi'', \tilde{\dot\varphi}'', E'') = P(\varphi, \tilde{\dot\varphi}, E) = R \circ F\,(\varphi, \tilde{\dot\varphi}, E) = R\,(\varphi', \tilde{\dot\varphi}', E')$  (2.18)
where F describes the flight (2.14) to the next bounce, and R the reflection (2.12). Notice from the previous subsection that the new coordinates (φ′, φ̃̇′) are determined by the flight F alone, while the energy changes only in the reflection, according to (2.13).

The stable equilibria $(\varphi, \tilde{\dot\varphi}, E) = (0,0,0) =: S_1$ and $(\pi,0,0) =: S_2$ belong to $\mathcal{P}$, as do the unstable equilibria $(\varphi, \tilde{\dot\varphi}, E) = (\pi/2, 0, \beta_1) =: U_1$ and $(3\pi/2, 0, \beta_2) =: U_2$. The stable manifolds $W_{1,2} \subset \mathcal{P}$ of $U_{1,2}$ are 2D surfaces which contain the lines $(\pi/2, 0, E \ge \beta_1)$ and $(3\pi/2, 0, E \ge \beta_2)$, plus all points which are attracted to these lines under P.

Let us think of $\mathcal{P}$ as being made up of (φ, φ̃̇)-slices at constant E. In the case when the collision is elastic, these slices are invariant planes, and we may study the Poincaré map at fixed energy. Since P is not smooth at φ = 0 or π, the points (0, 0, E) and (π, 0, E) do not exhibit the typical elliptic character of stable points in analytic maps. However, the unstable points $(\pi/2, 0, E) =: P_1$ (E > β₁) and $(3\pi/2, 0, E) =: P_2$ (E > β₂) are typical hyperbolic points, and we may consider the linearized map in their neighborhoods. To do so we start with an initial condition

$(\varphi(0), \tilde{\dot\varphi}(0), E) = (3\pi/2 + \alpha,\ \omega,\ E)$  (2.19)

assuming α and ω to be infinitesimally small, and determine α′, ω′ in

$(\varphi(t'), \tilde{\dot\varphi}(t'), E) = (3\pi/2 + \alpha',\ \omega',\ E)$  (2.20)

to linear order in α, ω, where t′ is the time of the next bounce. Using (2.15) we obtain $\dot y_1(0) = \sqrt{2(E-\beta_2)} + O(2)$, where O(2) means second order in (α, ω), and with (2.16) we get the initial values

$\dot\varphi(0) = \omega + \frac{\alpha}{\beta_1}\sqrt{2(E-\beta_2)} + O(2), \qquad y(0) = \beta_2 + O(2), \qquad \dot y(0) = \sqrt{2(E-\beta_2)} + O(2)$  (2.21)

The flight between the bounces is then given by $y(t) = y(0) + \dot y(0)\,t - \tfrac{1}{2}t^2$ and $\varphi(t) = \varphi(0) + \dot\varphi(0)\,t$. The time t′ of the next bounce is determined from $y(t) = -\beta_2\sin\varphi(t)$, which we solve by expanding the
r.h.s. to second order in t: $t' = 2\dot y(0) + O(2)$. This implies (omitting the higher orders)

$\varphi(t') = \varphi(0) + 2\dot y(0)\,\dot\varphi(0) \;\Rightarrow\; \alpha' = \alpha + 2\dot y(0)\,\dot\varphi(0),$
$\dot y(t') = -\dot y(0) = -\sqrt{2(E-\beta_2)} = \dot y_1(t'),$
$\tilde{\dot\varphi}(t') = \dot\varphi(0) - \frac{\alpha'}{\beta_1}\,\dot y_1(t') \;\Rightarrow\; \omega' = \dot\varphi(0) + \frac{\alpha'}{\beta_1}\,\dot y(0).$  (2.22)
Inserting $\dot\varphi(0) = \omega + \dot y(0)\,\alpha/\beta_1$ on the r.h.s. of the equations for α′ and ω′, we finally obtain the mapping

$\begin{pmatrix} \alpha' \\ \omega' \end{pmatrix} = \begin{pmatrix} 1 + \dfrac{4}{\beta_1}(E-\beta_2) & 2\sqrt{2(E-\beta_2)} \\[4pt] \dfrac{2}{\beta_1}\sqrt{2(E-\beta_2)}\left(1 + \dfrac{2}{\beta_1}(E-\beta_2)\right) & 1 + \dfrac{4}{\beta_1}(E-\beta_2) \end{pmatrix} \begin{pmatrix} \alpha \\ \omega \end{pmatrix}$  (2.23)

Since both α and ω do not change upon reflection, this already describes the full Poincaré map (as E′ = E). Nothing in these arguments needs to be modified if dissipation is taken into account. The only change happens when energy is lost in the reflection process; using (2.13) we see that (2.23) needs to be complemented by

$E' = \beta_2 + (1-f)^2\,(E - \beta_2)$  (2.24)
which means that the Poincaré map takes us from the energy slice E to the energy slice E′, which approaches β₂ in a geometric manner.

The eigenvalues $\lambda_{s,u}$ (stable and unstable, respectively) and eigenvectors of the mapping (2.23) are

$\lambda_{s,u} = 1 + \frac{4}{\beta_1}(E-\beta_2) \mp \frac{2}{\beta_1}\sqrt{2(E-\beta_2)\,\big(\beta_1 + 2(E-\beta_2)\big)}$  (2.25)

$\begin{pmatrix} \alpha \\ \omega \end{pmatrix} \propto \begin{pmatrix} \beta_1 \\ \mp\sqrt{\beta_1 + 2(E-\beta_2)} \end{pmatrix}$  (2.26)

the upper (lower) sign referring to the stable (unstable) direction. They do not depend on the friction parameter f but on the energy. As E → β₂ they are (up to a factor) $(\alpha, \omega) = (1, \mp 1/\sqrt{\beta_1})$, and as E → ∞ we have $(\alpha, \omega) \to (\beta_1, \mp\sqrt{2E})$. According to the stable and unstable manifold theorem, the manifolds $W^{s,u}_{1,2}$ are tangent to these eigenvectors.
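A quick numerical cross-check of the linearized map (2.23) and its eigenvalues (2.25) is straightforward; the values of β₁ and E below are example choices of ours.

```python
import numpy as np

# Check that the matrix in (2.23) has unit determinant (area preservation
# of the elastic map) and reproduces the eigenvalues (2.25).
beta1, beta2, E = 0.8, 0.2, 1.0    # example values
e = E - beta2
M = np.array([
    [1 + 4*e/beta1,                            2*np.sqrt(2*e)],
    [2*np.sqrt(2*e)/beta1 * (1 + 2*e/beta1),   1 + 4*e/beta1],
])
lam_num = np.sort(np.linalg.eigvals(M))
lam_ana = np.sort(1 + 4*e/beta1
                  + np.array([-1, 1]) * (2/beta1) * np.sqrt(2*e*(beta1 + 2*e)))
print(np.linalg.det(M))     # -> 1.0
print(lam_num, lam_ana)     # -> matching stable/unstable eigenvalues
```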
2.3 Phase Space Structure: Poincaré Section
can be obtained by forward iteration of little segments on the unstable s by backward iteration of eigendirections and the stable manifolds W1,2 u exlittle segments on the stable eigendirections. The manifolds W1,2 tend down to the stable fixed points at energy E = 0 whereas the stable s exist only at energies E > β , respectively. The stable manifolds W1,2 1,2 manifolds represent the boundaries between the basins of attraction of the two final states. Given the complexity of the motion, it appears impossible to give a rendering of the three-dimensional surface of section P and its divis,u sion by the two-dimensional manifolds W1,2 . Therefore, we propose to consider orbit flip diagrams. They are derived from the Poincaré section condition y˙ = 0 and y¨ < 0, i. e. we consider those points of a trajectory where the center of mass reaches a local maximum. The 3D section of phase space so defined may be parameterized by the coordi˙ y), and from (2.6) we see that the 2D subsets E = const are nates ( ϕ, ϕ, ˙ y) plane, independent of ϕ except for the fact that parabolas in the ( ϕ, the bouncing conditions y1 > 0 and y2 > 0 require y > − β 2 sin ϕ (π < ϕ < 2π) and y > β 1 sin ϕ (0 < ϕ < π). A given point, ˙ y) (together with y˙ = 0) immediately defines an initial condition ( ϕ, ϕ, and the Poincaré map is defined by three steps: (i) downward flight ϕ(t) = ϕ(0) + ϕ˙ (0)t and y(t) = y(0) − 12 t2 ; (ii) reflection at the bottom y1 = 0 or y2 = 0; (iii) upward flight until the condition y˙ (t) = 0 is ˙ E), a similar analysis might be met again. If we took coordinates ( ϕ, ϕ, performed in terms of ( ϕ, ϕ˙ )-slices at constant energy, with a decrease of E according to (2.13), depending on where the bounce took place. ˙ y) However, the physically more directly appealing coordinates ( ϕ, ϕ, suggest to look at ( ϕ, ϕ˙ )-slices with fixed y = h. This would not be suitable for the definition of a meaningful Poincaré map because the values y change at each iteration, even at zero dissipation f = 0, and if f > 0 there is a general tendency for the maximum height to decrease so that a given slice would only be met a finite number of times. This argument does not prevent us from looking at final state diagrams in a ( ϕ, ϕ˙ )-slice with initial value y = h. Here we consider a set of initial conditions where the barbell starts at height h with no vertical velocity, but any combination of ( ϕ, ϕ˙ ) ∈ S( ϕ) × R ( ϕ˙ ); the energy E ≥ h as given by (2.6) gets arbitrarily large as ϕ˙ 2 increases. In order to see ˙ y) = (π/2, 0, β 1 ) how the invariant manifolds of the lines above ( ϕ, ϕ, and (3π/2, 0, β 2 ) intersect these slices, we consider an initial condition
(φ, φ̇, y) = (3π/2 + ᾱ, ω̄, h) with infinitesimal (ᾱ, ω̄) and determine where the trajectory

$\alpha(t) = \bar\alpha + \bar\omega\,t, \qquad y(t) = h - \tfrac{1}{2}t^2$  (2.27)

intersects the Poincaré surface y₁ = 0, ẏ₁ > 0. An elementary calculation to linear order in (ᾱ, ω̄) gives $t' = \sqrt{2(h-\beta_2)}$ and, with $\tilde{\dot\varphi} := \omega$ as before,

$\begin{pmatrix} \alpha(t') \\ \omega(t') \end{pmatrix} = \begin{pmatrix} 1 & \sqrt{2(h-\beta_2)} \\ (1/\beta_1)\sqrt{2(h-\beta_2)} & 1 + (2/\beta_1)(h-\beta_2) \end{pmatrix} \begin{pmatrix} \bar\alpha \\ \bar\omega \end{pmatrix}$  (2.28)

From (2.26) we know the tangent vectors to the invariant manifolds in the Poincaré surface y₁ = 0, ẏ₁ > 0. Therefore, inverting (2.28) with (α(t′), ω(t′)) along the eigendirections, we obtain the tangent vectors to the invariant manifolds in the (ᾱ, ω̄)-plane:

$\begin{pmatrix} \bar\alpha \\ \bar\omega \end{pmatrix} \propto \begin{pmatrix} \sqrt{\beta_1 + 2(h-\beta_2)} \\ \mp 1 \end{pmatrix}$  (2.29)

As for (2.26), again the upper sign refers to the stable, the lower to the unstable, direction. In addition, if the bounce occurs at y₂ = 0, β₁ and β₂ must be exchanged.

2.4 Orientation Flip Diagrams
We analyze the system in the following manner. At a given height h above ground, the barbell is released with zero vertical velocity ẏ but arbitrary orientation φ and angular velocity φ̇. This makes for a two-dimensional set of initial conditions (φ₀, φ̇₀), the energy $E_0 = h + \tfrac{1}{2}\beta_1\beta_2\dot\varphi_0^2$ increasing with φ̇₀. In our orientation flip diagrams (OFD), see Figure 2.2, a fine grid of these initial conditions is scanned, and for each initial point the motion is computed until the final state is determined. Then we choose the color according to whether the orientation in the final state is the same as the initial one (yellow, state 0) or different (red, state 1), see Figure 2.1. If both masses are equal, this introduces a symmetry under φ → φ + π into the orbit flip diagrams, see Figure 2.2. For β ≠ 1/2 this symmetry is broken; compare Figure 2.2(c) and Figure 2.3(b). In Figure 2.2 OFDs for four different
drop altitudes are compared. With increasing altitude h the stable and unstable eigendirections become less flat, as expected from (2.29). On the other hand, for increasing initial energy, finely scaled regions become more pronounced.
Figure 2.2 Orientation flip diagrams for β = 0.5 and f = 0.1, for four drop altitudes: (a) h = 0.6, (b) h = 0.8, (c) h = 1.0, (d) h = 1.2. Each OFD displays, in the plane of initial angles and angular velocities, the final outcome relative to the initial orientation of the throw when the barbell has been dropped from a given altitude h above ground. Yellow points indicate no orientation flip (state 0); red marks points with a flipped final state 1. The brightness of the color codes for the number of bounces before the barbell can no longer change its orientation; the darker the color, the more bounces the system needs to fall below the critical energy value Ec = 1 − β = 0.5. The diagonal lines indicate the stable (white) and unstable (black) directions of the linearized invariant manifolds of the hyperbolic points A and B. See also color figure on page 231.
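A diagram of this kind can be scanned directly with the simulate routine from the earlier sketch. The grid resolution, ranges, and final-state test below are illustrative assumptions, far coarser than those used for the figures.

```python
import numpy as np

# Scan of an orientation flip diagram, reusing simulate() from the
# bounce-map sketch above; grid and ranges are illustrative assumptions.
h = 1.0
phis = np.linspace(0.0, 2.0*np.pi, 100)
phidots = np.linspace(-6.0, 6.0, 100)
flip = np.zeros((phidots.size, phis.size), dtype=bool)

for i, pd in enumerate(phidots):
    for j, p in enumerate(phis):
        phi_final = simulate(y0=h, phi0=p, phidot0=pd)
        # state 1 (flipped) when mass 1 ends on the other side of mass 2,
        # i.e. cos(phi) has changed sign relative to the initial throw
        flip[i, j] = np.cos(phi_final) * np.cos(p) < 0.0
```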
The extent to which this happens is determined by the intersection of the plane of initial conditions with the stable and unstable manifolds of the hyperbolic planes (φ, φ̇) = (π/2, 0) with energy E ≥ β₁, and (φ, φ̇) = (3π/2, 0) with energy E ≥ β₂. These intersections are the white (stable manifolds) and black lines (unstable manifolds) in Figure 2.2. The stable manifolds are the boundaries between the basins of attraction; the unstable manifolds are their mirror images under (φ, φ̇) → (φ, −φ̇). The linear approximation to these manifolds in the neighborhood of the hyperbolic planes depends on the initial height h but not on the dissipation parameter f (see (2.29)); therefore, the white and black lines are omitted in the lower two panels of Figure 2.3. It is remarkable how far the linear approximation extends from the hyperbolic points. The dependence on β has the effect of turning the central rhomboid at β = 1/2 into a deltoid at β > 1/2.

One might be tempted to interpret C and D in Figure 2.2 as heteroclinic points, but that becomes meaningful only in the limit of vanishing f: their forward images tend to lie in slices with lower values of y, whereas the backward images tend to have larger y. Nevertheless, it is obvious that the diagrams reveal some degree of heteroclinic entanglement. Its depth in scale, however, is limited by friction [13].

The implication for 'playing the loaded barbell' is evident: choosing the initial condition such that the angle φ₀ is near π/2 (the heavy mass on top), rather than near 3π/2, makes it more probable that the barbell will end up in its original orientation. However, this statement only holds for relatively small φ̇₀. At larger initial velocities (and low friction) the mixing of yellow and red points rapidly becomes too intricate for the loading to be helpful. With larger friction, the patterns look more complicated but also more promising for a player who cares to practice a lot.

While Figure 2.3 shows the dependence of OFDs on the dissipation parameter f at given mass β = 0.8, we show in the left parts of Figures 2.4 and 2.5 how the pattern changes with β, at fixed f = 0.2. The asymmetry in the slope of the invariant manifolds becomes more and more pronounced, and the very chaotic mixing of colors is shifted to higher and higher values of the initial angular velocities.

It is interesting to ask how the probability Pflip(E₀) of final state 1 (orientation has flipped) depends on the initial energy E₀. To answer that question we sampled trajectories for random initial conditions at fixed initial energy E₀. For each trajectory we chose the center-of-mass velocity to be zero, ẏ(t₀) = 0, the initial orientation φ(t₀) to be random and uniformly distributed between 0 and 2π, and the initial altitude h also
Figure 2.3 Orientation flip diagrams for β = 0.8 and four friction values: (a) f = 0.05, (b) f = 0.1, (c) f = 0.2, (d) f = 0.4. Each inset displays the decomposition of the corresponding OFD into state 0 (gray); black regions represent initial conditions where the barbell ends up standing almost sliding. (a) Small friction strength f = 0.05. (b) Friction strength f = 0.1; for friction strengths in that range the intersections of the lines for the stable and unstable directions define a deltoid which approximately delineates the separation of order from chaos. (c) f = 0.2; the white lines are boundaries of orbit-type classes with symbol length up to 6; symbol sequences of some simple orbit-type classes are displayed. (d) f = 0.4, which corresponds to a realistic friction strength. See also color figure on page 232.
to be random and uniformly distributed between 1 and E₀, whence the initial angular velocity must be $\dot\varphi(t_0) = +\sqrt{2(E_0-h)/(\beta_1\beta_2)}$ (the other sign would give identical results). For friction strengths between f = 0.1 and f = 0.5 the results are shown in Figure 2.6 (β = 0.8) and in Figure 2.7 (β = 0.5).
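This sampling rule translates directly into a Monte Carlo estimate. The sketch below reuses simulate() and the parameters from the earlier sketch, with a much smaller sample size than the 10⁵ trajectories per point used for the figures.

```python
import numpy as np

# Monte Carlo estimate of P_flip(E0), following the sampling rule above;
# n is reduced from the 1e5 per point used for Figures 2.6 and 2.7.
def p_flip(E0, n=1000, rng=np.random.default_rng(0)):
    flips = 0
    for _ in range(n):
        phi0 = rng.uniform(0.0, 2.0*np.pi)
        h = rng.uniform(1.0, E0)
        phidot0 = np.sqrt(2.0*(E0 - h) / (beta1*beta2))
        phi_f = simulate(y0=h, phi0=phi0, phidot0=phidot0)
        flips += np.cos(phi_f) * np.cos(phi0) < 0.0
    return flips / n

for E0 in (2.0, 4.0, 8.0):
    print(E0, p_flip(E0))
```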
Figure 2.4 Orientation flip diagrams (left) in comparison with corresponding bounce diagrams (right) for (a) β = 0.5 and (b) β = 0.6; h = 1.0. The grayscale of the OFDs in the insets is the same as for those in Figure 2.3. The bounce diagrams (right) display, for the same range of initial conditions, which mass bounces more often. Inset: when mass 1 bounces more often a point is gray, otherwise black. For a colored representation for higher values of β see Figure 2.5. The grayscale codes for the number of bounces before the barbell can no longer change its orientation (as in the OFD to the left). See also color figure on page 233.
As expected, all curves start from Pflip = 0 at E0 = 1 and approach the asymptotic value Pflip → 1/2 as E0 → ∞. More interestingly, at moderate initial energies they perform striking oscillations; see insets in the figures. These oscillations depend on both the friction f and mass ratio β; they are a consequence of dynamical resonances. The barbell can lose three quarters of the initial energy at the first bounce if f = 0.5.
Figure 2.5 The same as Figure 2.4 but in color and for (a) β = 0.7, (b) β = 0.8, and (c) β = 0.9; h = 1.0. When mass 1 bounces more often a point is white, otherwise red. The brightness of the color codes for the number of bounces before the barbell can no longer change its orientation (as in the OFD to the left). See also color figure on page 234.
Figure 2.6 Fraction of initial configurations for which the barbell has flipped (final state 1), in dependence on the initial energy E₀, for five friction parameters from f = 0.1 to f = 0.5. The graphs are evaluations of 10⁷ trajectories with random initial conditions (10⁵ for each of 100 points in every graph), with β = 0.8. Inset: blow-up of the oscillations.
Figure 2.7 The same as Figure 2.6 but for β = 0.5.
Hence, even for E₀ = 8, it may perform only two or three bounces before it can no longer flip over. Depending on the state before the last flip, for given E₀, slightly higher initial energies may or may not increase the flip probability. Since the final configuration is evaluated relative to the initial one, a larger E₀ can therefore either decrease or increase Pflip(E₀). As a consequence, Pflip(E₀) depends non-monotonically on E₀, and so the 'phase' and 'frequency' of the oscillations depend on both β and f.

2.5 Bounce Diagrams
In the course of our computer experimentation we noticed that, with unequal masses, the light and heavy barbell tips tend to have different numbers of bounces before the final state is determined, with an increasing preference for the heavy mass as β grows. This may not be relevant for the ordinary tossing game, because this property is independent of which final state is reached, but it does reflect an interesting aspect of the loading and might be used for another kind of game. As it may not be obvious that the two masses can have different numbers of bounces on average, let us demonstrate this fact with a simple calculation for the case where both of them stay close to the floor, i.e. for small energy E ≪ 1 and |φ| ≪ 1 (the case φ close to π would give the same result). Using the approximation sin φ ≈ φ we obtain

$y \approx y_1 - \beta_2\varphi = y_2 + \beta_1\varphi \;\Rightarrow\; \varphi \approx y_1 - y_2 \;\Rightarrow\; \dot\varphi \approx \dot y_1 - \dot y_2$  (2.30)

Together with the energy formula (2.6), we get

$E \approx \tfrac{1}{2}(\beta_1\dot y_1 + \beta_2\dot y_2)^2 + \tfrac{1}{2}\beta_1\beta_2(\dot y_1 - \dot y_2)^2 + \beta_1 y_1 + \beta_2 y_2 = E_1(y_1, \dot y_1) + E_2(y_2, \dot y_2)$  (2.31)

where

$E_{1,2}(y_{1,2}, \dot y_{1,2}) = \tfrac{1}{2}\beta_{1,2}\,\dot y_{1,2}^2 + \beta_{1,2}\,y_{1,2}$  (2.32)
This shows that the Hamiltonian separates into that of two independent freely falling mass points. If ẏ₀ is the initial velocity of any of them, right after a bounce, then T = 2ẏ₀ is the time at which the next bounce occurs, and $A = \dot y_0^2/2$ the amplitude reached in between. Combining this with the friction model $\dot y \to \dot y' = (1-f)\dot y$, we see that each
mass point will perform oscillations with geometrically decreasing amplitude and period,

$A \to (1-f)^2 A, \qquad T \to (1-f)\,T$  (2.33)
If the initial T is T₁,₂ for the two masses, then they come to complete rest, after infinitely many bounces, at times T₁/f and T₂/f, respectively. If we stop counting at some earlier time, then the mass with the smaller Tᵢ and amplitude Aᵢ has undergone more bounces than the other.

It seems intuitively clear that the heavier mass dominates the relaxation process: its reflection has a bigger effect on the lighter mass than vice versa. Therefore one expects the larger mass to come to rest earlier than the smaller, and hence to exhibit more bounces before the final state is determined. This is indeed what the bounce diagrams in the right parts of Figures 2.4 and 2.5 show. The same (φ, φ̇)-plane as in the OFDs is scanned, but now we monitor the number of bounces of the two masses before the orientation can no longer change. If mass 1 bounces more often, the color is white to gray; if mass 2 bounces more often, the color is red to gray; the brightness reflects the number of bounces, as indicated in the color code bars at the upper right of each panel. At β = 0.5 the numbers on the whole are equal but, as β grows, bounces of the heavy mass become more and more dominant.

In Figure 2.8 we plot the fraction Fh of initial configurations in the BDs for which the heavy mass bounces more often, for three different friction strengths. The data points are determined by pixel counting in diagrams like those of Figures 2.4 and 2.5, for the respective values of β. The result of this numerical evaluation is that Fh is a little lower than β. It appears that the deviations from Fh = β originate from the low-energy part of the bounce diagrams.

In order to gain some understanding of the linear law we assume a sort of Boltzmann statistics in the spirit of Levin [15]. Consider the two unstable upright configurations of the barbell, with energies $E_A = E_{crit} = 1-\beta$ (at φ = φ_A = 3π/2, heavy mass at bottom) and $E_B = \beta$ (at φ = φ_B = π/2, light mass at bottom), respectively. These states are the centers of channels on the barbell's route from E = E₀ to the final state at E = 0. When the barbell is in the upright configuration A with the heavy mass touching the floor, the heavy mass obviously bounces more frequently than the light mass. Hence, we estimate the fraction of white points in the BDs, Figure 2.4 and Figure 2.5, as $F_h(\beta) \approx P_A(\beta)$, where $P_A(\beta)$ is the probability that the final state is
Figure 2.8 Fraction Fh of the initial configurations for which the heavier mass bounces more often, in dependence on the mass ratio β. The graphs are evaluations by pixel counting of bounce diagrams, for three friction parameters as indicated in the legend. The gray straight line is Fh = β. The straight line in black is the Boltzmann approximation (2.36).
reached through channel A. A rough approximation to $P_A(\beta)$ may be derived from the assumption of a heat bath in which the barbell's energy fluctuates on a 'thermal' energy scale kT ≈ (E_A + E_B)/2 = 1/2. This implies Boltzmann factors

$P_A(\beta) \sim e^{-(1-\beta)/kT} = e^{-2(1-\beta)} \quad\text{and}\quad P_B(\beta) \sim e^{-\beta/kT} = e^{-2\beta}$  (2.34)

Together with

$Z = e^{-2(1-\beta)} + e^{-2\beta}$  (2.35)

we obtain

$P_A(\beta) = \frac{e^{-2(1-\beta)}}{Z} = \frac{1}{1 + e^{2(1-2\beta)}}$  (2.36)
An expansion of (2.36) around β = 1/2 yields PA ( β) = β. In Figure 2.8 the gray line represents this simple behavior whereas the black
line follows (2.36). The agreement with the data is satisfactory, given the roughness of the argument. A slightly more involved argument would take into account the finite size of the two channels and integrate over certain ranges around φ_A and φ_B, say of size π/2. This would lead to the assumption that

$P_A(\beta) \sim \int_{-\pi/4}^{\pi/4} e^{-(1-\beta)\cos x/kT}\,dx \quad\text{and}\quad P_B(\beta) \sim \int_{-\pi/4}^{\pi/4} e^{-\beta\cos x/kT}\,dx$  (2.37)
The numerical evaluation with kT = 1/2 gives a curve which is very similar to the black curve in Figure 2.8.
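Both estimates are easy to evaluate numerically; the sketch below compares the point-channel formula (2.36) with the finite-channel integrals (2.37) at kT = 1/2.

```python
import numpy as np
from scipy.integrate import quad

kT = 0.5

def p_A_point(beta):
    # (2.36): point-channel Boltzmann approximation
    return 1.0 / (1.0 + np.exp(2.0 * (1.0 - 2.0*beta)))

def p_A_channel(beta):
    # (2.37): integrate over windows of size pi/2 around phi_A and phi_B
    wA, _ = quad(lambda x: np.exp(-(1.0 - beta)*np.cos(x)/kT), -np.pi/4, np.pi/4)
    wB, _ = quad(lambda x: np.exp(-beta*np.cos(x)/kT), -np.pi/4, np.pi/4)
    return wA / (wA + wB)

for beta in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(f"{beta:.1f}  {p_A_point(beta):.3f}  {p_A_channel(beta):.3f}")
```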
2.6 Summary and Conclusions
We have investigated a simple model for throwing loaded dice in terms of the behavior of a barbell with point masses at its tips. The equations of motion, and hence the phase space structure, of this simple model depend on only one parameter, the ratio of the two masses. The dynamics can be reduced to two degrees of freedom, so that the phase space has four dimensions.

We used two kinds of diagrams to exhibit the typical features. Both of them live in the (φ, φ̇)-plane of initial conditions, where we start with the barbell's center of mass at a given height and with zero vertical velocity. They are final-state diagrams in the sense that each initial condition is given a color according to what happens in the long run. The orientation flip diagrams, which code for the relative orientation of the final and initial states, reveal the position and extent of different types of motion, for which we introduced a symbolic orbit classification. We could attribute a particular organizing role, for an understanding of the system's complexity, to the stable and unstable invariant manifolds of the unstable equilibria, and to their intersections. As a consequence, we were able to delineate roughly a border between orderly regions of predictable behavior and chaos, where the dependence on initial conditions is sensitive. This works particularly well in the limit of low dissipation, or almost Hamiltonian dynamics.
For realistic friction strengths f ≈ 0.5 [6], the typical number of bounces before the barbell can no longer change its orientation is about 5. Hence, if dice throwing may be taken as a process to generate random numbers, this is primarily because of the gambler's inability to reproduce initial conditions sufficiently well to ensure similar trajectories – and not so much because of an inherently strongly chaotic dynamics.

The orbit flip diagrams show that the flip probability is an intricate feature. Its overall behavior, as shown in Figures 2.6 and 2.7, is to increase from 0 to 0.5 as the initial energy increases, with complicated oscillations superimposed which depend on both friction and mass ratio. These oscillations can be bigger for β = 0.5 than for β > 0.5, so an experienced player may benefit more from an unloaded than from a slightly loaded barbell, by choosing an optimal initial energy. On the other hand, the diagrams of Figures 2.4 and 2.5 exhibit a symmetry breaking with respect to the angles φ and φ + π which increases with β; from this point of view, a player may benefit from a loaded die by choosing the optimal initial angle.

Our bounce diagrams display a different effect of loading in terms of unequal masses. With increasing mass difference, there is a strong tendency for the heavier mass to bounce more frequently than the lighter one. This feature is unrelated to the final state, but interesting in itself.
2.7 Acknowledgments
We gratefully acknowledge helpful comments by T. Geisel, B. Kriener, H. W. Gutch, and R. S. Shaw. We also thank K. Bröking and G. Martius for technical help.
References
1 See the Wikipedia article on dice at http://en.wikipedia.org/wiki/Dice.
2 Born, M., Physics in my Generation, Springer, New York, 1969, p. 113.
3 Schuster, H. G., Deterministic Chaos: An Introduction, 3rd ed., VCH, Weinheim, Germany, 1995.
4 Tél, T. and Gruiz, M., Chaotic Dynamics: An Introduction Based on Classical Mechanics, Cambridge Univ. Press, 2006.
5 Diaconis, P., Holmes, S. and Montgomery, R., SIAM Rev. 49, 211 (2007); see also http://www.geocities.com/dicephysics.
6 Vulović, V. Z. and Prange, R. E., Phys. Rev. A 33, 576 (1986).
7 Keller, J. B., Amer. Math. Monthly 93, 191 (1986).
8 Feldberg, R., Szymkat, M., Knudsen, C. and Mosekilde, E., Phys. Rev. A 42, 4493 (1990).
9 Ford, J., Phys. Today 36, 40 (1983).
10 Zhang, K.-C., Phys. Rev. A 41, 1893 (1990).
11 Strzalko, J., Grabski, J., Stefanski, A., Perlikowski, P. and Kapitaniak, T., Physics Reports 469, No. 2, 59 (2008).
12 Murray, D. B. and Teare, S. W., Phys. Rev. E 48, 2547 (1993).
13 Nagler, J. and Richter, P. H., Phys. Rev. E 78, 036207 (2008); featured in 'Research Highlights', Nature 455, 434 (2008).
14 Dullin, H. R. and Wittek, A., J. Phys. A 28, 7157 (1995).
15 Levin, E. M., Am. J. Phys. 51, 149 (1983).
3 Phase Reduction of Stochastic Limit-Cycle Oscillators Kazuyuki Yoshimura
3.1 Introduction
A variety of physical systems exhibit oscillatory dynamics in nature and technology. Such systems are as diverse as electrical circuits, lasers, chemical reaction systems and neuronal networks. These systems are mathematically modeled as limit-cycle oscillators, which are described by nonlinear differential equations. It is well known that limit-cycle oscillators can exhibit various types of ordered motion, and one of the fundamental themes of nonlinear physics is to clarify their dynamics; a large number of studies have been carried out along this line. It is in general difficult to investigate the full equations of an oscillator analytically because of their nonlinearity. Therefore, an approximation method for effectively simplifying the equations is essential in the study of oscillator dynamics. A fundamental theoretical technique for this purpose is the phase-reduction method, which allows one to approximately describe the oscillator dynamics by a simpler equation for the phase variable only (e.g., [1, 2]). This method has been widely and successfully applied to networks of coupled oscillators and to oscillators subjected to external periodic signals, revealing the nature of various types of entrainment phenomena.

The dynamics of oscillators subjected to noise have also been attracting much interest (e.g., [1, 2]). Recent works have shown that not only a periodic signal but also a noise signal can give rise to ordered motion via entrainment in an ensemble of independent oscillators [3–8]. The results of these works show the importance of exploring the role of noise in the emergence of order in oscillator systems. An approach using the phase-reduction method is expected to be useful for studying the dynamics of noisy oscillators. It has been commonly believed that the phase-reduction method gives a good approximation for any type of
weak signal, including noise. Therefore, this method has been applied in a conventional way also to the stochastic differential equations which describe oscillators subjected to white Gaussian noise. However, it has been pointed out recently that the phase equation obtained in such a way is not a proper approximation in the sense that it cannot properly describe the dynamics of the original full oscillator system even in the weak-noise limit [9]. In order to facilitate studies for exploring the roles of noise in noisy oscillators, it is essential to develop the phase-reduction theory applicable to their stochastic differential equations. The simplest case is an oscillator subjected to white Gaussian noise. The modified stochastic phase equation valid in this case has been proposed in [9]. In the real world, due to memory effects, noise has a finite correlation time. Therefore, it is also important to generalize the stochastic phase equation to the case of oscillators subjected to colored noise in order to properly understand the effects of real noise in oscillatory systems. This generalization has been carried out recently, assuming Ornstein–Uhlenbeck noise as a simple colored noise [10]. The method for deriving the phase equation in [10] can be extended to a more general class of colored noise. The stochastic phase-reduction theory proposed in [9, 10] provides the theoretical basis for studying dynamics of various noisy oscillator models and the corresponding real systems. In this review, we describe this stochastic phase-reduction theory. In addition, we apply the present theory and discuss the effects of noise on oscillator dynamics, focusing on the entrainment property of oscillators. The present review is organized as follows. In Section 3.2, we briefly review the idea of the phase description of an oscillator and the phase equation commonly used so far. In Section 3.3, we describe the phasereduction theory for oscillators with white Gaussian noise, which is the most fundamental and important case. In Section 3.4, we generalize the theory to the case of colored noise. As a simple and important case, we deal with the Ornstein–Uhlenbeck noise. In Section 3.5, we discuss effects of noise on the entrainment property of oscillators. A summary is given in Section 3.6.
3.2 Phase Description of Oscillator
We briefly describe the basic idea of the phase description of oscillators and the conventional phase equation (for details, see [1]). Let x = (x₁, ..., x_N) ∈ R^N be a state variable vector and consider the N-dimensional differential equation

dx/dt = F(x)   (3.1)
where F is a smooth vector function representing the vector field. Equation (3.1) is assumed to have a linearly stable limit-cycle solution x₀(t) with period T, i.e. x₀(t + T) = x₀(t). We denote the point set of this solution in the phase space by Γ, i.e. Γ = {x₀(t) ∈ R^N; t ∈ R}. A phase coordinate φ can be defined on Γ by associating a scalar φ with each point x ∈ Γ in such a way that the time evolution of φ on Γ is described by the equation

dφ(x)/dt = ω,   x ∈ Γ   (3.2)
where ω is the natural frequency, given by ω = 2π/T. The phase φ is regarded as φ ∈ [0, 2π) by taking φ mod 2π. When we consider a weakly perturbed oscillator, the state point slightly leaves Γ. Therefore, it is useful to extend the definition of φ to a neighborhood U of Γ. A convenient way to extend it is to define φ as a smooth function of x such that grad_x φ(x) · F(x) = ω holds at any point x ∈ U, where grad_x φ(x) = (∂φ/∂x₁, ..., ∂φ/∂x_N). Under this definition of the phase, the time evolution of φ is described by an equation of the same form as (3.2) in U, i.e.

dφ(x)/dt = ω,   x ∈ U   (3.3)
Once the phase coordinate is defined in U, one can consider the (N − 1)-dimensional hypersurface I(φ), which consists of the points having the same value of φ. The hypersurface I(φ) is called the isochron. Let us consider the perturbed oscillator

dx/dt = F(x) + p(x, t)   (3.4)
where p(x, t) stands for a small perturbation, which in general depends on x and t. If we use the phase coordinate φ defined above and take account of (3.3), we obtain the equation

dφ/dt = ω + grad_x φ(x) · p(x, t)   (3.5)
This equation is still exact but does not have a closed form with respect to φ, since the second term on the right-hand side depends on the precise position of x. Since the limit-cycle solution is linearly stable, the phase point x is expected to stay close to Γ, provided that the perturbation is weak enough. Thus, x in (3.5) may be approximated by the point x₀(φ) having the same φ, for a certain class of perturbations p. If we apply this approximation, we have the closed phase equation

dφ/dt = ω + grad_x φ(x)|_{x=x₀(φ)} · p(x₀(φ), t)   (3.6)
Equation (3.6) is a good approximation and has been applied successfully, at least in the cases where the perturbation p is an external periodic signal or weak coupling with other oscillators. However, the class of perturbations for which equation (3.6) is a proper approximation has not yet been clarified, although it has been believed to be a good approximation for any type of weak perturbation. In the following sections, we show that the phase equation in general has to be modified when the perturbation is given by noise. Moreover, in the case of noisy perturbations, we clarify the condition under which (3.6) recovers its validity.
3.3 Oscillator with White Gaussian Noise
We describe the phase-reduction theory for an oscillator subjected to white Gaussian noise and numerically demonstrate that the resulting phase equation properly approximates the dynamics of the original noisy oscillator. The white Gaussian noise case is the most fundamental and important one; indeed, the phase equation in this case provides the basis for extending the theory to colored noise, as described in the next section.
3.3.1 Stochastic Phase Equation
Let x = (x₁, ..., x_N) ∈ R^N be a state variable vector and consider the N-dimensional stochastic differential equation

ẋ = F(x) + G(x)ξ(t)   (3.7)
where F is an unperturbed smooth vector field, G is a smooth vector function, and ξ(t) is white Gaussian noise such that ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(s)⟩ = 2D δ(t − s), where ⟨·⟩ denotes averaging over the realizations of ξ and δ is Dirac's delta function. We call the constant D > 0 the noise intensity. The unperturbed system ẋ = F(x) is assumed to have a limit-cycle solution with frequency ω. We employ the Stratonovich interpretation for (3.7); in this interpretation, the ordinary rules of variable transformation in differential equations can be applied. Equation (3.7) is formally of the same form as (3.4), where the perturbation is given by p(x, t) = G(x)ξ(t).

Consider the unperturbed system ẋ = F(x) and let x₀(t) be its limit-cycle solution. We define the smooth phase coordinate φ in a neighborhood U of x₀ in phase space so that grad_x φ · F(x) = ω holds at any point in U. We can define the other N − 1 smooth coordinates r = (r₁, ..., r_{N−1}) in U. We assume that r = 0 on the limit cycle. Figure 3.1 illustrates the (φ, r) coordinates in the case of a two-dimensional oscillator. If we perform the transformation (x₁, ..., x_N) → (φ, r₁, ..., r_{N−1}) in (3.7), we have

φ̇ = ω + h(φ, r)ξ(t)   (3.8)

ṙ_i = f_i(φ, r) + g_i(φ, r)ξ(t)   (3.9)
where i = 1, ..., N − 1. The functions h, f_i, and g_i are defined as follows: h(φ, r) = grad_x φ · G(x(φ, r)), f_i(φ, r) = grad_x r_i · F(x(φ, r)), g_i(φ, r) = grad_x r_i · G(x(φ, r)), where the gradients are evaluated at the point x(φ, r). They are 2π-periodic functions of φ. The Stratonovich stochastic differential equations (3.8) and (3.9) can be converted into equivalent Ito stochastic differential equations [11]. The φ component of this Ito-type equation is obtained as follows:

φ̇ = ω + D[ (∂h(φ, r)/∂φ)h(φ, r) + ∑_{i=1}^{N−1} (∂h(φ, r)/∂r_i)g_i(φ, r) ] + h(φ, r)ξ(t)   (3.10)
Figure 3.1 Illustration of (φ, r ) coordinates.
In the case of weak noise, 0 < D ≪ 1, the deviation of r from 0 is expected to be small. Thus, we can use the approximation r = 0 in (3.10) and arrive at

φ̇ = ω + D[ Z(φ)Z′(φ) + Y(φ) ] + Z(φ)ξ(t)   (3.11)

where Z(φ) and Y(φ) are given by

Z(φ) = h(φ, 0),   Y(φ) = ∑_{i=1}^{N−1} (∂h(φ, 0)/∂r_i) g_i(φ, 0)   (3.12)
Since h and g_i are 2π-periodic, Z(φ + 2π) = Z(φ) and Y(φ + 2π) = Y(φ) hold. The Ito-type phase equation for the noise-driven oscillator (3.7) is given by (3.11). The Stratonovich stochastic differential equation equivalent to (3.11) is given by

φ̇ = ω + DY(φ) + Z(φ)ξ(t)
(3.13)
The conventional procedure of phase reduction, described in Section 3.2, consists in substituting r = 0 in (3.8). This procedure leads to the Stratonovich-type phase equation

φ̇ = ω + Z(φ)ξ(t)
(3.14)
which has the same form as (3.6) since
grad_x φ(x)|_{x=x₀(φ)} · p(x₀(φ), t) = grad_x φ(x)|_{x=x₀(φ)} · G(x(φ, 0))ξ(t) = h(φ, 0)ξ(t)
Comparison of (3.13) and (3.14) clearly shows that the term DY(φ) is dropped in the conventional phase equation (3.14). This term is of O(D) and in general is not negligible. Thus, (3.14) does not correctly describe the original oscillator dynamics even in the lowest-order approximation. It should be noted that, to obtain the proper phase equation, the approximation r = 0 has to be performed in the Ito-type equation for φ, not in the Stratonovich-type one, since the term DY(φ), which originates from the correlation between the fluctuations in r and ξ, has to be included.
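To see the practical consequence of the Y-term, the following minimal sketch (ours, not from the chapter) integrates the Ito-form phase equation (3.11) by the Euler–Maruyama method, with and without Y(φ), and estimates the mean frequency. The functions Z(φ) and Y(φ) are those derived later for the Stuart–Landau oscillator with G₂ = (x, 0) in (3.29); step size, horizon, trajectory count, and seed are arbitrary illustrative choices.

```python
import numpy as np

# Euler-Maruyama integration of the Ito phase equation (3.11), with and
# without the Y-term; Z and Y are the SL/G2 expressions quoted from (3.29).
# A sketch: all parameter and discretization choices are illustrative.
rng = np.random.default_rng(0)
omega, c, D = 1.0, 1.0, 0.02
dt, nsteps, ntraj = 1e-3, 100_000, 400        # horizon T = 100 per trajectory

Z  = lambda p: -np.cos(p)*(np.sin(p) + c*np.cos(p))
Zp = lambda p: -np.cos(2*p) + c*np.sin(2*p)   # dZ/dphi
Y  = lambda p: c*np.cos(p)**2*(-np.cos(2*p) + c*np.sin(2*p))

for use_Y in (True, False):
    phi = np.zeros(ntraj)
    for _ in range(nsteps):
        drift = omega + D*(Z(phi)*Zp(phi) + (Y(phi) if use_Y else 0.0))
        # noise increment has variance 2*D*dt because <xi xi> = 2D delta
        phi += drift*dt + Z(phi)*np.sqrt(2*D*dt)*rng.standard_normal(ntraj)
    print("with Y-term:" if use_Y else "without:   ",
          f"Omega = {phi.mean()/(nsteps*dt):.5f}")

print("prediction with Y-term:", omega - c*D/4)   # NIFS, cf. (3.23) below
```

With the Y-term the estimate lands near ω − cD/4 = 0.995, whereas the conventional reduction gives Ω ≈ ω; this is precisely the discrepancy that will be visible in Figure 3.3(b).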
3.3.2 Derivation

We now describe the derivation of (3.11) or (3.13). For simplicity, we consider the case N = 2 and denote r_i, f_i, and g_i in (3.9) by r, f, and g, respectively, omitting the index. The limit cycle is located at r = 0. Generalization to larger N is straightforward. The Fokker–Planck equation for (3.8) and (3.9) is given by

∂Q/∂t = −(∂/∂φ)[{ω + D(h_φ h + h_r g)}Q] + D ∂²[h²Q]/∂φ²
    − (∂/∂r)[{f + D(g_φ h + g_r g)}Q] + 2D ∂²[hgQ]/∂φ∂r + D ∂²[g²Q]/∂r²   (3.15)
where Q(t, φ, r) is the time-dependent probability distribution and the subscripts φ and r stand for partial derivatives with respect to φ and r, respectively. The periodic boundary condition Q(t, 0, r) = Q(t, 2π, r) is assumed. When D = 0, the steady distribution is given by Q₀(φ, r) = (2π)⁻¹δ(r), where δ is Dirac's delta function. For small D > 0, the steady distribution Q₀ still localizes near r = 0 and rapidly decreases with increasing |r| because of the asymptotic stability of the limit cycle. To confirm this localization property, we roughly estimate the profile of Q₀ in the r direction. Since Q₀ is a 2π-periodic function of φ, it can be expanded in the Fourier series Q₀(φ, r) = w₀(r) + ∑_{n=1}^{∞}[w_{c,n}(r) cos nφ + w_{s,n}(r) sin nφ]. We note that the first term w₀ becomes dominant and the other Fourier coefficients w_{c,n} and w_{s,n} for n ≥ 1 become negligible as D → 0, because Q₀(φ, r) = (2π)⁻¹δ(r) for D = 0, which is independent of φ. In (3.15), each function consisting of h and/or g, such as g_φ h or g², can also be expanded as a Fourier series in φ. As for f, it has the form
f = −λ(φ)r + o(r) because of the linear stability of the limit cycle, where λ(φ) is a function of φ and its average λ̄ = (2π)⁻¹ ∫₀^{2π} λ(φ) dφ is positive. Since we estimate Q₀ in a region of small r, we make the approximation f ≈ −λ(φ)r. The equation for Q₀ is obtained by setting ∂Q/∂t = 0 in (3.15). If we expand each function of φ in this equation in a Fourier series and neglect all the terms including w_{c,n} and w_{s,n}, then we obtain the equation for w₀ as follows:

D (d²/dr²)[⟨g²⟩w₀] + (d/dr)[{λ̄r − D(⟨g_φ h⟩ + ⟨g_r g⟩)}w₀] = 0
(3.16)
where ⟨g²⟩, ⟨g_φ h⟩, and ⟨g_r g⟩ are averages over φ ∈ [0, 2π) and are still functions of r. Since D is small, we neglect D⟨g_φ h⟩ and D⟨g_r g⟩ in (3.16), as they are small compared with λ̄r. Moreover, we approximate ⟨g²⟩ by its value at r = 0, which we denote by a; in general, a = ⟨g²⟩|_{r=0} > 0. Then, we have the equation

d²w₀/dr² + (λ̄/Da)(d/dr)[rw₀] = 0
(3.17)
Although this is an approximate equation for small r, we solve it over R and use the solution in a region of small r. Equation (3.17) has the Gaussian solution

w₀(r) = (1/2π) × √(λ̄/2πDa) exp(−λ̄r²/2Da)   (3.18)

This solution shows that Q₀ ≈ w₀(r) localizes well near r = 0 for small D. Let ρ be a small constant such that the region {(φ, r) ∈ R²; −ρ ≤ r ≤ ρ} is contained in the neighborhood U. Equation (3.18) shows that the values of Q₀ and ∂Q₀/∂r at r = ±ρ are of higher order than Dⁿ for any positive integer n. Since we will approximate the Fokker–Planck equation up to O(D), we may set Q₀ = 0 and ∂Q₀/∂r = 0 at r = ±ρ. Hereafter, assume that t is sufficiently large. Then, the localization property also holds for Q(t, φ, r), since Q converges to Q₀. Thus, Q = 0 and ∂Q/∂r = 0 approximately hold at r = ±ρ for small D.
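The localization scale predicted by (3.18) is easy to check numerically: linearizing the radial dynamics gives ṙ ≈ −λ̄r + √a ξ(t), whose stationary variance should be Da/λ̄. The following minimal sketch (parameter values are arbitrary illustrations, not from the chapter) integrates this linear SDE by Euler–Maruyama and compares the ensemble variance with the prediction.

```python
import numpy as np

# Check of the Gaussian profile (3.18): for the linearized radial dynamics
# r' = -lam_bar*r + sqrt(a)*xi with <xi(t)xi(s)> = 2D delta(t-s), the
# stationary variance is D*a/lam_bar.  Parameter values are illustrative.
rng = np.random.default_rng(0)
lam_bar, a, D = 2.0, 0.5, 0.02
dt, nsteps, ntraj = 1e-3, 20_000, 10_000     # T = 20 >> 1/lam_bar

r = np.zeros(ntraj)
for _ in range(nsteps):
    # Euler-Maruyama step; noise increment has variance 2*D*dt
    r += -lam_bar*r*dt + np.sqrt(a)*np.sqrt(2*D*dt)*rng.standard_normal(ntraj)

print("empirical variance   :", round(r.var(), 6))
print("prediction D*a/lam   :", D*a/lam_bar)      # = 0.005 here
```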
We introduce the marginal distribution P(t, φ) ≡ ∫_{−ρ}^{ρ} Q(t, φ, r) dr, neglecting the small probability in the region |r| > ρ. We integrate (3.15) with respect to r over the interval I = [−ρ, ρ] to obtain an approximate Fokker–Planck equation for P. The last three terms in (3.15), which include the derivative ∂/∂r, vanish after the integration due to the two conditions Q = 0 and ∂Q/∂r = 0 at r = ±ρ; for example,

∫_I (∂/∂r)[{f + D(g_φ h + g_r g)}Q] dr = [{f + D(g_φ h + g_r g)}Q]_{r=−ρ}^{r=ρ} = 0

Therefore, after integrating (3.15), we have

∂P/∂t = −(∂/∂φ) ∫_I (ω + DK₁)Q dr + D (∂²/∂φ²) ∫_I K₂Q dr   (3.19)
where K₁ and K₂ are functions of φ and r given by K₁ = h_φ h + h_r g and K₂ = h². The functions K₁ and K₂ can be expanded in the forms K₁ = h_φ(φ, 0)h(φ, 0) + h_r(φ, 0)g(φ, 0) + rR₁(φ, r) and K₂ = h(φ, 0)² + rR₂(φ, r), where R₁ and R₂ are functions of O(r⁰) or higher with respect to r. Since Z(φ) = h(φ, 0) and Y(φ) = h_r(φ, 0)g(φ, 0) from (3.12), K₁ and K₂ are rewritten as K₁ = Z(φ)Z′(φ) + Y(φ) + rR₁(φ, r) and K₂ = Z(φ)² + rR₂(φ, r).

Consider the integrals ∫_I rR_i Q dr, i = 1, 2. Recall that the steady distribution Q₀ of (3.15) satisfies lim_{D→0} Q₀(φ, r) = (2π)⁻¹δ(r). Since Q(t, φ, r) ≈ Q₀(φ, r) holds, the profile of Q(t, φ, r) in r may be approximated by δ(r) in the limit D → 0. If we use this approximation and note that rR_i is of order O(r) or higher, we have lim_{D→0} ∫_I rR_i Q dr = 0. This implies that

lim_{D→0} (∂/∂φ) ∫_I rR₁Q dr = (∂/∂φ) lim_{D→0} ∫_I rR₁Q dr = 0

and

lim_{D→0} (∂²/∂φ²) ∫_I rR₂Q dr = (∂²/∂φ²) lim_{D→0} ∫_I rR₂Q dr = 0

Therefore, we have D(∂/∂φ)∫_I rR₁Q dr = o(D) and D(∂²/∂φ²)∫_I rR₂Q dr = o(D). If we substitute the expansions of K₁ and K₂ into (3.19) and use these facts, we obtain the approximate Fokker–Planck equation up to O(D) as follows:
∂P/∂t = −(∂/∂φ)[{ω + D(ZZ′ + Y)}P] + D (∂²/∂φ²)[Z²P]
(3.20)
The Ito stochastic differential equation equivalent to (3.20) is given by (3.11); the corresponding equation in the Stratonovich
interpretation is given by (3.13). Thus, we may conclude that (3.11), or equivalently (3.13), is the proper phase equation for system (3.7).

3.3.3 Steady Phase Distribution and Frequency
We calculate the steady probability distribution P₀(φ) of the phase variable and the mean frequency Ω. We will compare these two quantities between a two-dimensional oscillator model and its reduced phase model. Let us consider (3.20) with the boundary condition P(t, 0) = P(t, 2π), which is equivalent to (3.11) or (3.13). The steady solution P₀(φ) is obtained by setting ∂P/∂t = 0 in (3.20). If we construct an asymptotic solution for P₀ in powers of D, then up to O(D) we obtain

P₀(φ) = 1/2π + (D/2πω)[ Z(φ)Z′(φ) − Y(φ) + Ȳ ] + o(D)
(3.21)
where Ȳ is the average defined by

Ȳ = (1/2π) ∫₀^{2π} Y(φ) dφ
(3.22)
The mean frequency Ω of the oscillator is defined by Ω = lim_{T→∞} T⁻¹ ∫₀^T φ̇(t) dt. This can be calculated by replacing the time average with the ensemble average, i.e. Ω = ⟨φ̇⟩. Equation (3.11) is useful for calculating the ensemble average, since there is no correlation between φ and ξ in the Ito equation. If we take the ensemble average of (3.11), we have Ω = ω + D⟨Z(φ)Z′(φ) + Y(φ)⟩, where we used the fact that ⟨Z(φ)ξ(t)⟩ = ⟨Z(φ)⟩⟨ξ(t)⟩ = 0. For an arbitrary function A(φ), the ensemble average can be calculated using P₀, i.e. ⟨A⟩ = ∫₀^{2π} A(φ)P₀(φ) dφ. With the use of (3.21), we obtain Ω up to O(D) as follows:

Ω = ω + DȲ + o(D)
(3.23)
Since white Gaussian noise has no characteristic frequency, one might intuitively expect that it causes no change in the oscillator frequency. However, this is not the case: equation (3.23) shows that white Gaussian noise does change Ω. We call this phenomenon the noise-induced frequency shift (NIFS). Whether Ω increases or decreases with increasing noise intensity depends on the sign of Ȳ.
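Both O(D) predictions are cheap to evaluate by quadrature. The sketch below (ours; it again uses the SL/G₂ functions quoted ahead from (3.29) of Section 3.3.4) computes Ȳ, the predicted Ω, and checks that the distribution (3.21) remains normalized.

```python
import numpy as np

# Quadrature evaluation of (3.21)-(3.23) for the SL/G2 example of
# Section 3.3.4 (Z and Y quoted from (3.29)); a sketch, not from the chapter.
omega, c, D = 1.0, 1.0, 0.02
phi = np.linspace(0.0, 2*np.pi, 4001)

Z  = -np.cos(phi)*(np.sin(phi) + c*np.cos(phi))
Zp = -np.cos(2*phi) + c*np.sin(2*phi)                    # dZ/dphi
Y  = c*np.cos(phi)**2*(-np.cos(2*phi) + c*np.sin(2*phi))

Ybar = np.trapz(Y, phi)/(2*np.pi)                        # (3.22); equals -c/4 here
P0 = 1/(2*np.pi) + D/(2*np.pi*omega)*(Z*Zp - Y + Ybar)   # (3.21)

print("Ybar         =", round(Ybar, 6))                  # -0.25
print("Omega (3.23) =", omega + D*Ybar)                  # 0.995
print("norm of P0   =", round(np.trapz(P0, phi), 6))     # ~1
```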
Equations (3.21) and (3.23) show that the term Y(φ) in (3.11) significantly affects both P₀(φ) and Ω at first order in D. In particular, as shown by (3.23), the first-order frequency shift is determined solely by Y(φ). Therefore, it is crucially important to include the term Y(φ) in the phase equation. It is clear that the conventional phase equation (3.14), which lacks Y(φ), cannot give proper approximations for P₀(φ) and Ω.

3.3.4 Numerical Examples
In order to validate (3.13), we compare P₀(φ) and Ω between theoretical and numerical results. As an example, we use the noise-driven Stuart–Landau (SL) oscillator [1]:

ẋ = F_SL(x) + G(x)ξ(t)
(3.24)
where x = (x, y), F_SL is the vector field of the SL oscillator, G is a vector function, and ξ is white Gaussian noise with the properties ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(s)⟩ = 2D δ(t − s). The vector field F_SL is given by

F_SL(x) = ( (1/2)λx − ((1/2)λc + ω)y − (1/2)λ(x² + y²)(x − cy),
            ((1/2)λc + ω)x + (1/2)λy − (1/2)λ(x² + y²)(cx + y) )   (3.25)

where c, ω, and λ are constants and λ is positive. The noise-free SL oscillator has the limit-cycle solution x₀(t) = (cos ωt, sin ωt) with natural frequency ω. If we define the coordinates (φ, r) by x = (1 + r) cos[φ + c ln(1 + r)] and y = (1 + r) sin[φ + c ln(1 + r)], then φ gives the phase variable and the limit cycle is represented by r = 0. In these coordinates, equation (3.24) is rewritten as

φ̇ = ω − (1/(1 + r)) { (G_x + cG_y) sin[φ + c ln(1 + r)] + (cG_x − G_y) cos[φ + c ln(1 + r)] } ξ(t)   (3.26)

ṙ = −λr − (3/2)λr² − (1/2)λr³ + { G_y sin[φ + c ln(1 + r)] + G_x cos[φ + c ln(1 + r)] } ξ(t)   (3.27)

where G_x and G_y represent the x and y components of G, respectively. As indicated by (3.27), λ stands for the rate of attraction to the limit
cycle when ξ = 0. These equations are of the form of (3.8) and (3.9), so the phase-reduction method of Section 3.3.1 can be applied. We use two types of G: G₁ = (1, 0) and G₂ = (x, 0). For G₁, Z(φ) and Y(φ) are given by

Z(φ) = −(sin φ + c cos φ),   Y(φ) = (1/2)(1 + c²) sin 2φ   (3.28)

For G₂, they are given by

Z(φ) = −cos φ (sin φ + c cos φ),   Y(φ) = c cos²φ (−cos 2φ + c sin 2φ)   (3.29)
Approximations for P₀(φ) and Ω can be obtained by substituting these expressions for Z(φ) and Y(φ) into (3.21) and (3.23). In Figures 3.2(a)–(d), numerical and theoretical results for P₀(φ) are compared: the filled circles and solid lines represent P₀(φ) obtained by
Figure 3.2 Steady probability distribution P0 (φ) of phase for SL oscillator subjected to white Gaussian noise. Numerical result (•), analytical result of (3.21) (solid line), and that obtained from (3.14) (dashed line) are shown. Parameters are D = 0.02, λ = 2, and ω = 1. (a) G1 and c = 0; (b) G1 and c = 1; (c) G2 and c = 1; (d) G2 and c = −1.
numerically solving the SL oscillator equation (3.24) and that given by (3.21), respectively. The theoretical predictions of (3.14), obtained simply by setting Y = 0 in (3.21), are also shown by dashed lines. Figures 3.2(a) and (b) are for G₁, while Figures 3.2(c) and (d) are for G₂. The parameters are set to D = 0.02, λ = 2, and ω = 1. It is clear that the present phase equation (3.13) gives precise approximations in all the cases; the agreement is excellent. In contrast, the conventional phase equation (3.14) does not give proper approximations at all, in spite of the weak noise intensity. Figures 3.3(a) and (b) show the mean frequency Ω plotted against the noise intensity D for G₁ and G₂, respectively, where λ = 2 and ω = 1. The numerical results obtained by solving (3.24) are shown by filled or open circles, and the theoretical estimations given by (3.23) are shown by solid or dashed lines. For G₁, the theoretical estimation is Ω = ω + o(D), which is constant up to O(D). In Figure 3.3(a), the numerically obtained Ω is almost constant for c = 0, which coincides with this theoretical estimation. For c = 1, the numerically obtained Ω is not constant but increases with increasing D; however, this increase is not linear in D but of higher order, as shown in the inset. In this sense, agreement between the numerical and theoretical results is confirmed up to O(D). In the case of G₂, the theoretical estimation
Figure 3.3 Mean frequency Ω vs D for (a) G1 and (b) G2 . Numerical result (symbol) and analytical result of (3.23) (line) are shown. Parameters are λ = 2 and ω = 1. (a) c = 0 (• dashed line) and c = 1 (◦ dashed line); (b) c = 1 (• dashed line) and c = −1 (◦ solid line). The inset in (a) is a logarithmic plot of Ω − ω vs D for c = 1, where the reference line for the scaling law D1 is also shown.
is given by Ω = ω − (c/4)D + o(D), which has a nonvanishing term of O(D) except for c = 0. This indicates that Ω can either increase or decrease, depending on the sign of c. In Figure 3.3(b), this estimation agrees well with the numerical results in both cases, c = 1 and c = −1. If we use (3.14) instead of (3.13), we obtain the estimation Ω = ω + o(D) for G₂, in which the O(D) term vanishes; this estimation clearly disagrees with the numerical results. Figures 3.2 and 3.3 demonstrate that (3.13) precisely approximates the dynamics of oscillators with weak white Gaussian noise, and that the conventional equation (3.14) is erroneous.
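To reproduce such a comparison from the original system rather than from the reduced equation, one can integrate (3.24) directly. The sketch below uses the stochastic Heun scheme (consistent with the Stratonovich interpretation) for G₂ = (x, 0) and estimates Ω from the winding of the geometric angle; since the isochron phase and the geometric angle differ by a bounded amount, their long-time winding rates coincide. All discretization choices and the seed are ours.

```python
import numpy as np

# Direct simulation of the noisy SL oscillator (3.24) with G2 = (x, 0);
# Stratonovich SDE integrated with the stochastic Heun scheme.  A sketch.
rng = np.random.default_rng(1)
lam, omega, c, D = 2.0, 1.0, 1.0, 0.02
dt, nsteps, ntraj = 2e-3, 250_000, 50          # T = 500 per trajectory

def F(x, y):
    r2 = x*x + y*y
    fx = 0.5*lam*x - (0.5*lam*c + omega)*y - 0.5*lam*r2*(x - c*y)
    fy = (0.5*lam*c + omega)*x + 0.5*lam*y - 0.5*lam*r2*(c*x + y)
    return fx, fy

x, y = np.ones(ntraj), np.zeros(ntraj)         # start on the limit cycle
phi = np.zeros(ntraj)                          # unwrapped geometric angle
for _ in range(nsteps):
    dW = np.sqrt(2*D*dt)*rng.standard_normal(ntraj)   # <xi xi> = 2D delta
    fx, fy = F(x, y)
    xp, yp = x + fx*dt + x*dW, y + fy*dt       # predictor (noise acts on x only)
    fxp, fyp = F(xp, yp)
    xn = x + 0.5*(fx + fxp)*dt + 0.5*(x + xp)*dW      # Heun corrector
    yn = y + 0.5*(fy + fyp)*dt
    phi += np.arctan2(x*yn - y*xn, x*xn + y*yn)       # angle increment
    x, y = xn, yn

print("numerical Omega :", round(phi.mean()/(nsteps*dt), 5))
print("theory (3.23)   :", omega - c*D/4)      # 0.995 for these parameters
```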
3.4 Oscillator with Ornstein–Uhlenbeck Noise
We generalize the stochastic phase-reduction theory to the case of oscillators subjected to colored noise. In the real world noise has a finite correlation time, and white noise should be regarded as a simplification. Therefore, this generalization is important for understanding the dynamics of various real oscillatory systems with noise. In this section, we deal with the Ornstein–Uhlenbeck (OU) noise [11], a simple colored noise often used in the literature, and derive a generalized stochastic phase equation for this case, based on the theory developed in Section 3.3. This generalized phase equation reduces to (3.13) or (3.14) in certain limits. In addition, using the generalized phase equation, we clarify the effects of the finite correlation time on the oscillator frequency.

3.4.1 Generalized Stochastic Phase Equation
Consider an oscillator subjected to OU noise, described by the stochastic differential equations

ẋ = F(x) + G(x)η   (3.30)

η̇ = −γη + γξ(t)
(3.31)
where x = (x₁, ..., x_N) ∈ R^N, F is an unperturbed smooth vector field, and G is a smooth vector function. The unperturbed system ẋ = F(x) is assumed to have a limit-cycle solution. The OU noise η is generated via (3.31), where γ > 0 is a constant and ξ(t) is white Gaussian noise such that ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(s)⟩ = 2D δ(t − s). The average and
correlation function of η(t) are given by ⟨η(t)⟩ = 0 and ⟨η(t)η(s)⟩ = Dγ exp[−γ|t − s|], respectively [11]. We call the constant D > 0 the noise intensity, since the variance of η is given by Dγ. The correlation function of η indicates that γ represents the correlation decay rate, and the correlation time τ_η can be defined by τ_η = γ⁻¹. We rewrite (3.30) and (3.31) as follows:

Ẋ = F̃(X) + G̃ξ(t)   (3.32)

where X = (x, η), F̃ = (F(x) + G(x)η, −γη), and G̃ = (0, γ). The unperturbed system Ẋ = F̃(X) is a limit-cycle oscillator itself, because lim_{t→+∞} η(t) = 0 when ξ = 0. We call it the extended oscillator. A key point for obtaining the phase equation is that (3.32) takes the form of an oscillator subjected to white Gaussian noise. Thus, introducing a phase variable θ for the extended oscillator and applying the method described in Section 3.3.1, we can obtain the phase equation for (3.32), which is of the form (3.13). We emphasize that, using the extended oscillator approach, it is also possible to obtain phase equations for the whole class of colored noise generated from white Gaussian noise via differential equations, although we deal only with the OU noise case here.

In order to perform the phase reduction, we introduce new coordinates. Let x₀(t) be the limit-cycle solution of the original unperturbed system ẋ = F(x) with frequency ω. We define the smooth phase coordinate φ so that grad_x φ · F(x) = ω holds at any point in U, where U is a neighborhood of x₀ in the original phase space R^N. We define the other N − 1 smooth coordinates r = (r₁, ..., r_{N−1}) in U such that r = 0 on the limit cycle. For simplicity, suppose that the Floquet matrix of the linearized equation v̇ = DF(x₀(t))·v, where v is the variation and DF is the Jacobian matrix, has positive eigenvalues and is diagonalizable. We take the coordinates r_i in the directions of the Floquet eigenvectors other than dx₀(φ)/dφ, which is the Floquet eigenvector tangential to x₀. Since the eigenvalues are real, each r_i is a real variable. If we perform the transformation (x₁, ..., x_N) → (φ, r₁, ..., r_{N−1}) in (3.30), we can obtain equations of the form

φ̇ = ω + h(φ, r)η
(3.33)
ṙ_i = −λ_i r_i + f_i(φ, r) + g_i(φ, r)η
(3.34)
where i = 1, ..., N − 1 and the λ_i are positive constants, which represent the rates of attraction to the limit cycle. The characteristic time scale τ_{r,i}
of attraction in r_i can be defined by τ_{r,i} = λ_i⁻¹. The functions h, f_i, and g_i are defined as follows: h(φ, r) = grad_x φ · G(x(φ, r)), −λ_i r_i + f_i(φ, r) = grad_x r_i · F(x(φ, r)), g_i(φ, r) = grad_x r_i · G(x(φ, r)), where the gradients are evaluated at the point x(φ, r). They are 2π-periodic functions of φ. The function f_i satisfies f_i(φ, r) = o(r). Considering the expansions of h and g_i with respect to r_i, we denote the functions of φ in their coefficients by h^(0)(φ) = h(φ, 0), h_i^(1)(φ) = ∂h(φ, 0)/∂r_i, and g_i^(0)(φ) = g_i(φ, 0). Let F_c be the operator defined by

F_c[u] = (1/(1 − e^{−2πc/ω})) ∫₀^{2π} u(φ + x) (c/ω) e^{−cx/ω} dx   (3.35)

where c is a constant and u is a 2π-periodic function. Using this operator, we introduce the following functions:

A(φ) = F_γ[h^(0)]   (3.36)

B_i(φ) = (γ/(γ + λ_i)) F_{γ+λ_i}[h_i^(1)]   (3.37)

C(φ) = F_{2γ}[ h^(0)A′ + ∑_{i=1}^{N−1} g_i^(0)B_i ]   (3.38)
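Before using these functions, note that the operator (3.35) is easy to evaluate numerically. The sketch below (function names and grid sizes are ours) applies F_c by direct quadrature to a test function and checks the limit lim_{c→+∞} F_c[u] = u(φ), which is used in Section 3.4.5.

```python
import numpy as np

# Numerical evaluation of the filtering operator (3.35); a minimal sketch.
def F_c(u, phi, c, omega=1.0, n=40_000):
    """Apply F_c to a 2*pi-periodic callable u at the points phi."""
    x = np.linspace(0.0, 2*np.pi, n)
    kernel = (c/omega)*np.exp(-c*x/omega)
    vals = np.trapz(u(np.add.outer(phi, x))*kernel, x, axis=-1)
    return vals/(1.0 - np.exp(-2*np.pi*c/omega))

u = lambda p: np.sin(p) + 0.3*np.cos(2*p)   # arbitrary 2*pi-periodic test function
phi = np.linspace(0.0, 2*np.pi, 8, endpoint=False)

print(F_c(u, phi, c=2.0))                   # smoothed, phase-shifted version of u
# white-noise limit F_c[u] -> u(phi); small residual is quadrature error
print(np.max(np.abs(F_c(u, phi, c=1e3) - u(phi))))
```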
Let θ(φ, r, η) be the phase variable of the extended oscillator, i.e. a smooth function of (φ, r, η) satisfying θ(φ, r, 0) = φ and such that θ̇ = ω holds when ξ = 0 (D = 0). The extended oscillator's phase θ is illustrated for N = 2 in Figure 3.4, where the φ-axis is the limit cycle x₀, I(θ) is the isochron of the extended oscillator, and I(φ) is that of the original oscillator. Since θ(φ, r, 0) = φ, I(φ) coincides with I(θ) on the (φ, r) plane. If we apply the phase-reduction method of Section 3.3.1 to (3.32), we can obtain the Stratonovich-type phase equation for θ as follows:

θ̇ = ω + DY_γ(θ) + Z_γ(θ)ξ(t)   (3.39)

where Y_γ(θ) and Z_γ(θ) are given by

Z_γ(θ) = A(θ),   Y_γ(θ) = C(θ) − A(θ)A′(θ)
(3.40)
It should be emphasized that (3.39) is valid for any γ and λ_i, provided that D is small enough. That is, (3.39) applies to OU noise with an arbitrary correlation time and to any oscillator with arbitrary rates of attraction. In this sense, (3.39) is a generalized stochastic phase equation.
Figure 3.4 Illustration of isochron I (θ ) of the extended oscillator.
Remark. We assumed positive eigenvalues of the Floquet matrix in order to keep the discussion within real variables. However, there is no essential difference when the Floquet matrix has pairs of complex conjugate eigenvalues and/or some negative eigenvalues, although the corresponding r_i variables then become complex. In this case, the phase equation is still given by (3.39).
3.4.2 Derivation
We describe the derivation of (3.39). For simplicity, we consider the case N = 2 and denote λ_i, r_i, f_i, and g_i in (3.34) by λ, r, f, and g, respectively, omitting the index. Generalization to larger N is straightforward. In order to determine the phase θ of the extended oscillator as a function of (φ, r, η), we assume the expansion

θ(φ, r, η) = φ + u₁(φ)η + u₂(φ)rη + u₃(φ)η² + ⋯   (3.41)

where the u_i(φ) are 2π-periodic functions of φ. This expansion does not include powers of r alone, since θ(φ, r, 0) = φ. We note that the inverse of (3.41) up to first order is given by φ(θ, r, η) = θ − u₁(θ)η + ⋯. The phase θ can be determined from the condition (grad_X θ) · F̃(X) = ω. In the coordinates (φ, r, η), this condition is rewritten as

(∂θ/∂φ)φ̇ + (∂θ/∂r)ṙ + (∂θ/∂η)η̇ = ω
Thus, we have

(∂θ/∂φ)(ω + hη) + (∂θ/∂r)(−λr + f + gη) + (∂θ/∂η)(−γη) = ω
(3.42)
If we substitute (3.41) into (3.42) and set the coefficient of each power of r and η to zero, we obtain differential equations for the u_i(φ). Solving these equations under the periodic condition u_i(2π) = u_i(0), we obtain

u₁ = A(φ)/γ,   u₂ = B(φ)/γ,   u₃ = C(φ)/(2γ²)   (3.43)

where

A(φ) = F_γ[h^(0)],   B(φ) = (γ/(γ + λ)) F_{γ+λ}[h^(1)]

with h^(1) = ∂h(φ, 0)/∂r, and C(φ) = F_{2γ}[h^(0)A′ + g^(0)B]. This set of A(φ), B(φ), and C(φ) is the particular case of (3.36), (3.37), and (3.38) for N = 2. Equation (3.42) reduces to ∂θ/∂η = 0 in the white-noise limit γ → +∞, which implies θ = φ in this limit. In the coordinates (θ, r, η), (3.32) is written as

θ̇ = ω + H(θ, r, η)γξ(t)
(3.44)
ṙ = −λr + f(φ(θ, r, η), r) + g(φ(θ, r, η), r)η
(3.45)
η̇ = −γη + γξ(t)
(3.46)
where H(θ, r, η) is defined by

H = ∂θ(φ, r, η)/∂η |_{φ=φ(θ,r,η)}
(3.47)
That is, H is the function obtained by differentiating θ as a function of (φ, r, η) with respect to η and then substituting φ(θ, r, η) for φ. To obtain (3.44), we recalled the extended oscillator equations in the (φ, r, η) coordinates, which are given by (3.31), (3.33), and (3.34), and used the relation

θ̇ = ω + (∂θ/∂φ, ∂θ/∂r, ∂θ/∂η) · (0, 0, γξ) = ω + (∂θ/∂η)γξ
3.4 Oscillator with Ornstein–Uhlenbeck Noise
Using (3.41) and the inverse φ(θ, r, η) = θ − u₁(θ)η + ⋯, we can approximate H up to first order as

H = u₁(θ) + u₂(θ)r + [2u₃(θ) − u₁(θ)u₁′(θ)]η + ⋯   (3.48)

Equations (3.44)–(3.46) can be regarded as an oscillator subjected to the white Gaussian noise ξ. Therefore, applying the formulas given by (3.12) and (3.13), we can obtain the Stratonovich-type phase equation. Noting that ξ does not appear in (3.45), we have

θ̇ = ω + Dγ² (∂H(θ, 0, 0)/∂η) + γH(θ, 0, 0)ξ(t)
(3.49)
If we substitute (3.43) and (3.48) into (3.49), we arrive at (3.39).

3.4.3 Steady Phase Distribution and Frequency
In order to validate (3.39), we compare theoretical and numerical results for the steady probability distribution P₀(θ) of the phase variable and the mean frequency Ω. Approximations for P₀ and Ω have been obtained for phase equations of the form (3.13); they are given by (3.21) and (3.23), respectively. Those approximations apply to (3.39), since it has the same form as (3.13): equation (3.39) is obtained by replacing Y and Z with Y_γ and Z_γ, respectively. An asymptotic solution for P₀ in powers of D is given by

P₀(θ) = 1/2π + (D/2πω)[ Z_γ(θ)Z_γ′(θ) − Y_γ(θ) + Ȳ_γ ] + o(D)
(3.50)
where Ȳ_γ is defined by

Ȳ_γ = (2π)⁻¹ ∫₀^{2π} Y_γ(θ) dθ   (3.51)

The mean frequency Ω = lim_{T→∞} T⁻¹ ∫₀^T θ̇(t) dt is given by

Ω = ω + DȲ_γ + o(D)   (3.52)
This indicates that OU noise also causes a frequency shift, i.e. the NIFS. Whether Ω increases or decreases with increasing noise intensity D depends on the sign of Ȳ_γ. It should be noted that Ω depends on γ and the λ_i, since Ȳ_γ depends on them.
3.4.4 Numerical Examples
We show some numerical results for P₀(θ) and Ω to validate the generalized stochastic phase equation (3.39). We employ the noise-driven SL oscillator model

ẋ = F_SL(x) + G(x)η(t)
(3.53)
where x = (x, y), F_SL is the vector field given by (3.25), G is a vector function, and η is the OU noise with the properties ⟨η(t)⟩ = 0 and ⟨η(t)η(s)⟩ = Dγ exp[−γ|t − s|]. We introduce the (φ, r) coordinates by x = (1 + r) cos[φ + c ln(1 + r)] and y = (1 + r) sin[φ + c ln(1 + r)], as in Section 3.3.4. In these coordinates, (3.53) is written in the form of (3.33) and (3.34). Then, the phase-reduction method described in Section 3.4.1 is applicable, and a phase equation of the form (3.39) is obtained for an appropriately defined phase variable θ. We use two types of G: G₁ = (1, 0) and G₂ = (x, 0). For G₁, the phase equation of the form (3.39) is obtained with Z_γ(θ) and Y_γ(θ) given by

Z_γ(θ) = −γ[ (γ − cω) sin θ + (ω + cγ) cos θ ] / (ω² + γ²)   (3.54)

Y_γ(θ) = [ (1 + c²)γ / (2(ω² + γ²){ω² + (γ + λ)²}) ]
    × [ −ωλ(2γ + λ) + γ(γ² + γλ − ω²) sin 2θ + ωγ(2γ + λ) cos 2θ ]   (3.55)
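These closed forms can be cross-checked against the general recipe (3.36)–(3.40). The sketch below evaluates F_c spectrally, using the fact that, by (3.35), F_c multiplies the n-th Fourier harmonic by c/(c − inω); the inputs h^(0) and g^(0) for G₁ are read off from (3.26) and (3.27), and h^(1) = (1 + c²) sin φ is obtained by differentiating (3.26) with respect to r at r = 0 (our computation). All names and grid sizes are ours.

```python
import numpy as np

# Cross-check of (3.54)-(3.55) against the definitions (3.36)-(3.40);
# F_c acts on harmonic n as multiplication by c/(c - i*n*omega).  A sketch.
omega, c, gamma, lam = 1.0, 1.0, 2.0, 1.0
n = 256
theta = 2*np.pi*np.arange(n)/n
k = np.fft.fftfreq(n, d=1.0/n)                 # integer harmonic numbers

def Fc(u_vals, cpar):
    return np.real(np.fft.ifft(np.fft.fft(u_vals)*cpar/(cpar - 1j*k*omega)))

def dd(u_vals):                                # spectral d/dtheta
    return np.real(np.fft.ifft(1j*k*np.fft.fft(u_vals)))

# SL with G1 = (1, 0): h^(0), g^(0) from (3.26)-(3.27); h^(1) by hand.
h0 = -(np.sin(theta) + c*np.cos(theta))
h1 = (1 + c**2)*np.sin(theta)
g0 = np.cos(theta)

A = Fc(h0, gamma)                                        # (3.36)
B = gamma/(gamma + lam)*Fc(h1, gamma + lam)              # (3.37)
C = Fc(h0*dd(A) + g0*B, 2*gamma)                         # (3.38)
Yg = C - A*dd(A)                                         # (3.40)

pref = (1 + c**2)*gamma/(2*(omega**2 + gamma**2)*(omega**2 + (gamma + lam)**2))
Yg_cf = pref*(-omega*lam*(2*gamma + lam)
              + gamma*(gamma**2 + gamma*lam - omega**2)*np.sin(2*theta)
              + omega*gamma*(2*gamma + lam)*np.cos(2*theta))        # (3.55)
Zg_cf = -gamma*((gamma - c*omega)*np.sin(theta)
                + (omega + c*gamma)*np.cos(theta))/(omega**2 + gamma**2)  # (3.54)

print(np.max(np.abs(A - Zg_cf)), np.max(np.abs(Yg - Yg_cf)))   # both ~ 1e-15
```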
We do not present the expressions of Z_γ(θ) and Y_γ(θ) for G₂, since they are lengthy. Approximations for P₀(θ) can be obtained by substituting the expressions for Z_γ(θ) and Y_γ(θ) into (3.50). Approximations for Ω up to first order are obtained from (3.52) as follows:

Ω = ω − D(1 + c²)ωλγ(2γ + λ) / [ 2(ω² + γ²){ω² + (γ + λ)²} ]   (3.56)

for G₁ and

Ω = ω − ( Dγ / [ 4(4ω² + γ²){4ω² + (γ + λ)²} ] )
    × [ cγ³ + (cλ + 2ω)γ² + 4ω{(1 + c²)λ + cω}γ + 2ω{(1 + c²)λ² + 2cωλ + 4ω²} ]   (3.57)
for G₂, respectively. Equations (3.56) and (3.57) indicate that Ω depends on the parameters ω, λ, and γ. These parameters represent the characteristic time scales of the oscillator and the noise; that is, (3.56) and (3.57) indicate that these time scales strongly influence Ω. Numerical and theoretical results for P₀(θ) are compared in Figures 3.5(a) and (b): symbols and lines represent P₀(θ) obtained by numerically solving (3.53) and that given by (3.50), respectively. Note that the horizontal axis represents θ, not φ; in the numerical results, the phase θ was calculated from (φ, r, η) via (3.41). Figures 3.5(a) and (b) show that the profile of P₀(θ) varies depending on γ. It is confirmed that (3.39) precisely approximates the numerical results in all the cases.
Figure 3.5 Steady probability distribution P0 (θ ) for SL oscillator subjected to OU noise. Numerical (symbols) and analytical (lines) results are shown. Parameters are λ = 1, ω = 1, c = 1 and D = 0.02. (a) G1 and γ = 0.5 (• solid line), 1 (+ dashed line), and 10 (◦ dotted line); (b) G2 and γ = 0.5 (• solid line), 2 (+ dashed line), and 10 (◦ dotted line).
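Before turning to the frequency results, note that the internal consistency of the formulas is easy to check numerically: averaging (3.55) over θ and inserting the result into (3.52) must reproduce the closed form (3.56). A minimal sketch with illustrative parameter values:

```python
import numpy as np

# Averaging Y_gamma of (3.55) over theta and inserting into (3.52)
# should reproduce the closed form (3.56); a numerical sketch.
omega, c, D, lam = 1.0, 1.0, 0.02, 1.0
theta = np.linspace(0.0, 2*np.pi, 4001)

for gamma in (0.5, 1.0, 10.0):
    pref = (1 + c**2)*gamma/(2*(omega**2 + gamma**2)*(omega**2 + (gamma + lam)**2))
    Yg = pref*(-omega*lam*(2*gamma + lam)
               + gamma*(gamma**2 + gamma*lam - omega**2)*np.sin(2*theta)
               + omega*gamma*(2*gamma + lam)*np.cos(2*theta))
    Om_avg = omega + D*np.trapz(Yg, theta)/(2*np.pi)                   # (3.52)
    Om_cf = omega - D*(1 + c**2)*omega*lam*gamma*(2*gamma + lam) \
            / (2*(omega**2 + gamma**2)*(omega**2 + (gamma + lam)**2))  # (3.56)
    print(f"gamma = {gamma:5.1f}:  {Om_avg:.6f}  vs  {Om_cf:.6f}")
```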
Figures 3.6(a) and (b) show the mean frequency Ω plotted against γ for G₁ and G₂, respectively. The numerical results and the theoretical estimations given by (3.52) are shown by symbols and lines, respectively. Their agreement is excellent in both figures. In Figure 3.6(a), the results for G₁ are shown for three different λ values. It is clearly seen that Ω strongly depends on γ and λ. In the case of G₁, lim_{γ→+∞} Ω = ω follows from (3.56), and thus the NIFS does not occur in the white-noise limit. However, the NIFS does occur when the correlation decay rate γ is finite, as shown in Figure 3.6(a). In Figure 3.6(b), the results for G₂ are shown for three different c values. A strong dependence of Ω on γ is also observed in these results. In the white-noise limit, Ω for each c converges to lim_{γ→+∞} Ω = ω − (c/4)D, which follows from (3.57). Note
Figure 3.6 Mean frequency Ω vs γ for SL oscillator subjected to OU noise. Numerical (symbols) and analytical (lines) results are shown. (a) G1 , c = 1 and λ = 0.5 (• solid line), 1 ( dashed line), and 10 (◦ dotted line); (b) G2 , λ = 1 and c = 1 (• solid line), 0 ( dashed line), and −1 (◦ dotted line). The other parameters are ω = 1 and D = 0.02.
that this limit value coincides with that mentioned in Section 3.3.4. In addition, the γ-dependence of Ω is qualitatively different for different c values: Ω < ω = 1 holds over the whole range of γ for c = 1, while for c = −1, Ω < ω holds in the small-γ region up to γ ≈ 3 and then Ω increases monotonically and satisfies Ω > ω. Figures 3.7(a)–(d) show contour plots of Ω in the (γ, λ) plane for the noise-driven SL oscillator with G₂. The results obtained by numerically integrating (3.53) and the theoretical estimations given by (3.57) for c = 1 are shown in Figures 3.7(a) and (b), respectively; the numerical and theoretical results for c = −1 are shown in Figures 3.7(c) and (d). The other parameters are ω = 1 and D = 0.02. The dependence of Ω on the two parameters (γ, λ) is not simple but rather complicated, as expected from (3.57). A good agreement between the numerical and theoretical results is confirmed. The results in Figures 3.6 and 3.7 clearly demonstrate that the frequency Ω changes significantly depending on the characteristic time scales of the oscillator and the noise, i.e. the noise correlation time τ_η = γ⁻¹ and the time scale of attraction τ_r = λ⁻¹. This fact shows the importance of appropriately describing the effects of these characteristic time scales. The generalized stochastic phase equation (3.39) properly describes this dependence.
Figure 3.7 Contour plot of mean frequency Ω as a function of γ and λ for SL oscillator subjected to OU noise with G2 . (a) Numerical result for c = 1; (b) analytical result for c = 1; (c) numerical result for c = −1; (d) analytical result for c = −1. The other parameters are ω = 1, D = 0.02.
3.4.5 Phase Equation in Some Limits
Equation (3.39) depends on the parameters γ and λ_i, which define the noise correlation time τ_η and the time scales of attraction τ_{r,i} as τ_η = γ⁻¹ and τ_{r,i} = λ_i⁻¹, respectively. Since (3.39) applies to any γ > 0 and λ_i > 0, it is possible to take an arbitrary limit τ_η → 0 and/or τ_{r,i} → 0. In this section, we discuss the form of the stochastic phase equation in several such limits. In particular, we show that (3.39) reduces to (3.13) in an appropriate limit, while the conventional phase equation is recovered in another particular limit. As a simple and typical case, we assume N = 3 in the following discussion; the control parameters are then τ_η and τ_{r,i}, i = 1, 2. We consider seven types of limits. Figure 3.8 illustrates the limit point set of each limit, labeled (i)–(vii), in the parameter space (τ_{r,1}, τ_{r,2}, τ_η).

(i) τ_η → 0 and τ_{r,i} = const. Consider the limit τ_η → 0 with constant τ_{r,i} > 0, i = 1, 2. This is the case when white Gaussian noise acts on an oscillator whose τ_{r,i} are finite and not necessarily small. The limit point is in the set {(τ_{r,1}, τ_{r,2}, τ_η) ∈ R³; τ_{r,1} > 0, τ_{r,2} > 0, τ_η = 0}. In terms of γ
Figure 3.8 Illustration of various limits in (τr,1 , τr,2 , τη ) space.
and λ_i, this limit corresponds to γ → +∞ with λ_i = const. Since lim_{c→+∞} (c/ω)e^{−c|x|/ω} = 2δ(x), where δ is Dirac's delta function, we have lim_{c→+∞} F_c[u] = u(φ). Using this formula, we find that lim_{γ→+∞} A(φ) = h^(0)(φ), lim_{γ→+∞} B_i(φ) = h_i^(1)(φ), and

lim_{γ→+∞} C(φ) = h^(0)(φ){h^(0)(φ)}′ + ∑_{i=1}^{N−1} g_i^(0)(φ)h_i^(1)(φ)

provided that λ_i > 0, i = 1, 2, are fixed. Therefore, if we take the limit γ → +∞ in (3.39) using these results and recall that θ = φ holds in this limit, we recover (3.13). This indicates that (3.13) is a good approximation when 0 < τ_η ≪ min{τ_{r,1}, τ_{r,2}}. Since case (i) is just the situation dealt with in Section 3.3, this result is quite reasonable.

(ii) τ_η = const. and τ_{r,i} → 0. Consider the limit τ_{r,i} → 0, i = 1, 2, with constant τ_η > 0, that is, the limit where the oscillator's limit cycle has strong stability. The limit point set is the τ_η-axis except the origin, i.e. {(τ_{r,1}, τ_{r,2}, τ_η) ∈ R³; τ_{r,1} = τ_{r,2} = 0, τ_η > 0}. In terms of γ and λ_i, this limit corresponds to γ = const. and λ_i → +∞, i = 1, 2. It is easy to see that lim_{λ_i→+∞} B_i(θ) = 0, provided that γ > 0 is fixed. Therefore, in this limit we have A(θ) = F_γ[h^(0)], B_i(θ) = 0, and C(θ) = F_{2γ}[h^(0)A′]. The phase equation is given by

θ̇ = ω + DŶ_γ(θ) + Ẑ_γ(θ)ξ(t)
(3.58)
where Ẑ_γ(θ) = F_γ[h^(0)] and Ŷ_γ(θ) = F_{2γ}[h^(0)A′] − F_γ[h^(0)]{F_γ[h^(0)]}′. Note that g_i^(0) does not appear in (3.58). This fact indicates that the
fluctuation in r_i has no influence on the phase dynamics. In fact, (3.58) can be regarded as the phase equation determined from the equations

φ̇ = ω + h(φ, 0)η
(3.59)
η̇ = −γη + γξ(t)
(3.60)
by applying the extended oscillator approach. Equation (3.59) is the conventional phase equation for (3.30) with the OU noise perturbation (cf. (3.14)). Equation (3.58) is equivalent to (3.59), although the new phase θ is used and the perturbation term is written in terms of the white Gaussian noise. Therefore, the conventional phase equation (3.59) also works well in this limit, and it remains a good approximation when 0 < max{τ_{r,1}, τ_{r,2}} ≪ τ_η. The result in this case suggests that the conventional phase equation may be a good approximation for general weak noisy perturbations other than OU noise, as long as the limit cycle has sufficiently short time scales of attraction.

(iii) τ_η = const., τ_{r,1} = const. and τ_{r,2} → 0. Consider the limit τ_{r,2} → 0 with constant τ_η > 0 and τ_{r,1} > 0. The limit point is in the set {(τ_{r,1}, τ_{r,2}, τ_η) ∈ R³; τ_{r,1} > 0, τ_{r,2} = 0, τ_η > 0}. In terms of γ and λ_i, this limit corresponds to γ = const., λ₁ = const., and λ₂ → +∞. As in case (ii), we obtain lim_{λ₂→+∞} B₂(θ) = 0, provided that γ and λ₁ are fixed. Then, in this limit we have A(θ) = F_γ[h^(0)], B₁(θ) = (γ/(γ + λ₁)) F_{γ+λ₁}[h₁^(1)], B₂(θ) = 0, and C(θ) = F_{2γ}[h^(0)A′ + g₁^(0)B₁]. The phase equation is given by (3.39) with Z_γ(θ) and Y_γ(θ) determined from them. This result indicates that the amplitude fluctuation in a strongly stable direction, i.e. in r₂, has no influence on the phase dynamics.
(iv) τ_η = const., τ_{r,1} → 0 and τ_{r,2} = const. This case is essentially the same as case (iii). The phase equation is given by (3.39) with Z_γ(θ) and Y_γ(θ) determined from A(θ) = F_γ[h^(0)], B₁(θ) = 0, B₂(θ) = (γ/(γ + λ₂)) F_{γ+λ₂}[h₂^(1)], and C(θ) = F_{2γ}[h^(0)A′ + g₂^(0)B₂].
(v) τ_η → 0 and τ_{r,i} → 0. Let us consider the limit where τ_η → 0 and τ_{r,i} → 0 are taken simultaneously, that is, the case when (τ_{r,1}, τ_{r,2}, τ_η)
approaches the origin. The origin is singular in the sense that the limit form of the phase equation is not unique but depends on the direction along which the limit is taken. In terms of γ and λ_i, this limit corresponds to taking γ → +∞ and λ_i → +∞, i = 1, 2, simultaneously. Let α_i be the ratio between λ_i and γ, i.e. α_i = λ_i/γ. If we substitute λ_i = α_i γ into (3.39) and take the limit γ → +∞ keeping each α_i constant, then we obtain a phase equation parametrized by the α_i. Using the formula lim_{c→+∞} (c/ω)e^{−c|x|/ω} = 2δ(x), we find that

lim_{γ→+∞} A(φ) = h^(0)(φ),   lim_{γ→+∞} B_i(φ) = h_i^(1)(φ)/(1 + α_i)

and

lim_{γ→+∞} C(φ) = h^(0)(φ){h^(0)(φ)}′ + ∑_{i=1}^{N−1} g_i^(0)(φ)h_i^(1)(φ)/(1 + α_i)

If we take the limit γ → +∞ in (3.39) using these results and note that θ = φ in this limit, we have the Stratonovich-type phase equation

φ̇ = ω + DY_α(φ) + Z(φ)ξ(t)   (3.61)

where Z(φ) = h(φ, 0) and Y_α(φ) is given by

Y_α(φ) = ∑_{i=1}^{N−1} (1/(1 + α_i)) (∂h(φ, 0)/∂r_i) g_i(φ, 0)   (3.62)
Comparison between (3.13) and (3.61) shows that Y(φ) is modified into Y_α(φ). Equations (3.61) and (3.62) clearly show that the limit form of the phase equation depends on the ratios α_i. Suppose that the noise has a small nonzero correlation time 0 < τ_η ≪ 1, so that it may be well approximated by white Gaussian noise. Equation (3.61) is still a good approximation in such a case, and the approximation depends on the ratios α_i = τ_η/τ_{r,i}. We note that (3.61) reduces to (3.13) or (3.14) in two extreme cases, 0 < τ_η ≪ min{τ_{r,1}, τ_{r,2}} and 0 < max{τ_{r,1}, τ_{r,2}} ≪ τ_η, to which the limits of (i) and (ii) are relevant, respectively. If 0 < τ_η ≪ min{τ_{r,1}, τ_{r,2}}, it reduces to (3.13) because α_i ≈ 0 and Y_α ≈ Y; this is consistent with case (i). On the other hand, if 0 < max{τ_{r,1}, τ_{r,2}} ≪ τ_η, it reduces to (3.14) because α_i ≫ 1 and Y_α ≈ 0; this is consistent with case (ii), since (3.58) and also (3.59) become (3.14) in the limit γ → +∞.
(vi) τ_η → 0, τ_{r,1} = const. and τ_{r,2} → 0. Consider the limit where τ_η → 0 and τ_{r,2} → 0 are taken simultaneously while τ_{r,1} > 0 is kept constant. The limit point set is the τ_{r,1}-axis except the origin, i.e. {(τ_{r,1}, τ_{r,2}, τ_η) ∈ R³; τ_{r,2} = τ_η = 0, τ_{r,1} > 0}. The τ_{r,1}-axis is singular, similarly to the origin: the limit form of the phase equation depends on the direction along which the limit is taken. In terms of γ and λ_i, this limit corresponds to taking γ → +∞, λ₁ = const., and λ₂ → +∞ simultaneously. We introduce the ratio α₂ = λ₂/γ and take the limit γ → +∞ in (3.39), keeping α₂ and λ₁ constant. The result is simply given by (3.61) with Y_α evaluated at α₁ = 0.

(vii) τ_η → 0, τ_{r,1} → 0 and τ_{r,2} = const. This case is essentially the same as case (vi). If we introduce the ratio α₁ = λ₁/γ, the phase equation in the limit is given by (3.61) with Y_α evaluated at α₂ = 0.

3.5 Noise Effect on Entrainment

3.5.1 Periodically Driven Oscillator with White Gaussian Noise
Equation (3.13) or (3.39) reveals that noise in general induces a shift in the frequency of an oscillator. The oscillator frequency is well known to be an important characteristic parameter in various types of entrainment phenomena. Therefore, the NIFS can be expected to significantly influence the entrainment properties of the oscillator. In this section, we focus on the entrainment of a noisy oscillator under periodic driving and discuss the effects of the NIFS on the entrainment property. Let us consider the SL oscillator with both noise and a periodic signal, described by

ẋ = F_SL(x) + G(x)ξ(t) + K cos ω₀t
(3.63)
where x = (x, y), F_SL is the vector field given by (3.25), G is a vector function, ξ(t) is white Gaussian noise such that ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(s)⟩ = 2Dδ(t − s), K is a constant vector, and ω₀ is a constant.
The corresponding phase equation is obtained by simply adding the periodic term to (3.13), i.e.

φ̇ = ω + DY(φ) + Z(φ)ξ(t) + K(φ) cos ω₀t   (3.64)

where K(φ) is defined by K(φ) = grad_x φ(x)|_{x=x₀(φ)} · K. We assume K = (κ, 0) in our numerical simulations, for which K(φ) = −κ(sin φ + c cos φ). As for the vector G, we employ G₂ = (x, 0), and then Z(φ) and Y(φ) are given by (3.29). We define the detuning Δω by Δω = ω₀ − ω, where ω is the natural frequency of the SL oscillator. The parameters in F_SL are set to λ = 2, ω = 1, and c = 1 in the simulation. Figure 3.9(a) shows frequency-locking regions in the (Δω, κ) plane for D = 0 and 0.02. Since ω = 1 is fixed, Δω is varied by varying ω₀: a larger Δω corresponds to a larger ω₀. The locking condition |Ω_p − ω₀| < 10⁻³ holds in a wedge-shaped region between the two boundaries shown by the dashed or solid lines, where Ω_p represents the mean frequency under the periodic forcing and noise. These boundary curves are obtained by numerically solving (3.63). The locking region is centered at Δω = 0 when D = 0. In contrast, when D = 0.02, the center of the locking region clearly shifts in the negative direction, as if the oscillator had a smaller natural frequency. That is, the resonance frequency of the oscillator shifts.
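A locking test of this kind can be set up directly from the phase equation. The sketch below integrates the Ito form of (3.64) (cf. (3.11)) by Euler–Maruyama for a few detunings and reports the residual |Ω_p − ω₀| estimated from the second half of each run; the value of κ, the horizon, and the seed are our illustrative choices, not the exact settings behind Figure 3.9.

```python
import numpy as np

# Locking test for the forced noisy phase equation (3.64), SL/G2 case;
# integrated in Ito form (cf. (3.11)) by Euler-Maruyama.  A sketch.
rng = np.random.default_rng(2)
omega, c, D, kappa = 1.0, 1.0, 0.02, 0.03

Z  = lambda p: -np.cos(p)*(np.sin(p) + c*np.cos(p))
Zp = lambda p: -np.cos(2*p) + c*np.sin(2*p)
Y  = lambda p: c*np.cos(p)**2*(-np.cos(2*p) + c*np.sin(2*p))
K  = lambda p: -kappa*(np.sin(p) + c*np.cos(p))

def mean_freq(omega0, T=600.0, dt=2e-3, ntraj=50):
    n, half = int(T/dt), int(T/dt)//2
    phi = np.zeros(ntraj)
    for k in range(n):
        if k == half:
            phi_mid = phi.copy()       # discard the transient first half
        drift = omega + D*(Z(phi)*Zp(phi) + Y(phi)) + K(phi)*np.cos(omega0*k*dt)
        phi += drift*dt + Z(phi)*np.sqrt(2*D*dt)*rng.standard_normal(ntraj)
    return (phi.mean() - phi_mid.mean())/((n - half)*dt)

for dw in (-0.005, 0.0, 0.02):         # detuning Delta-omega = omega0 - omega
    omega0 = omega + dw
    print(f"detuning {dw:+.3f}: |Omega_p - omega0| = {abs(mean_freq(omega0) - omega0):.2e}")
```

With these settings the residual should be smallest near Δω ≈ −0.005 rather than at Δω = 0, in line with the shifted locking region described above.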
Figure 3.9 (a) Frequency locking region of SL oscillator with white Gaussian noise and periodic signal. Parameters are λ = 2, ω = 1 and c = 1. Locking regions for D = 0 (dashed line) and 0.02 (solid line) are shown. Regions for D = 0.02 obtained by (3.64) (•) and the corresponding conventional phase equation (◦) are also shown. (b) ψ vs t for cases A and B, where D = 0.02.
We compare the locking region of (3.63) for D = 0.02 with those obtained from phase equations. For comparison, we use two types of phase equations: one is (3.64) and the other is the conventional phase equation, given by (3.64) with Y(φ) = 0. The locking regions obtained by numerically solving these phase equations are also shown by symbols. Equation (3.64) properly describes the resonance-frequency shift, showing good agreement with the region of (3.63). In contrast, the conventional phase equation, which lacks the term Y(φ) and cannot describe the NIFS, does not describe the resonance-frequency shift, and the disagreement is apparent. The amount of the resonance-frequency shift in Δω is −0.005, and this value coincides with the amount of NIFS estimated by the theoretical formula Ω − ω = −(c/4)D with ω = 1, c = 1, and D = 0.02. Thus, we may conclude that the resonance-frequency shift of the locking region is an effect of the NIFS. In Figure 3.9(b), the phase difference ψ = φ − ω₀t obtained by numerical integration of (3.63) is plotted against t for the two cases indicated in Figure 3.9(a) by the labels 'A' and 'B'. The average laminar time is much longer in case A than in case B, and a better-quality locking is achieved in case A, although case B has a smaller original detuning and a better-quality locking would be expected from the conventional phase equation. These results clearly show that the original detuning ω₀ − ω is not the relevant parameter for entrainment in an oscillator with noise. Instead, the effective detuning ω₀ − Ω is the important parameter, where Ω is the oscillator frequency under the action of noise, given by (3.23). The effective detuning ω₀ − Ω characterizes the nature of entrainment in a noisy oscillator; it is essential to use (3.13) to describe this point.

3.5.2 Periodically Driven Oscillator with Ornstein–Uhlenbeck Noise
As has been shown, the NIFS is not peculiar to white Gaussian noise but is observed also in the case of OU noise. Therefore, a similar effect on the entrainment property is expected. Let us consider the SL oscillator with both OU noise and a periodic signal,

ẋ = F_SL(x) + G(x)η(t) + K cos ω₀t
(3.65)
where η(t) is the OU noise such that ⟨η(t)⟩ = 0 and ⟨η(t)η(s)⟩ = Dγ exp[−γ|t − s|]. This is just the OU-noise version of the previous model: ξ in (3.63) has been replaced by η. We use K = (κ, 0) and G = G₁ =
(1, 0) in the simulation. According to (3.39), the corresponding phase equation is obtained as

θ̇ = ω + DY_γ(θ) + Z_γ(θ)ξ(t) + K(θ) cos ω₀t
(3.66)
where Z_γ(θ) and Y_γ(θ) are given by (3.54) and (3.55), respectively, and K(θ) = −κ(sin θ + c cos θ). Figure 3.10(a) shows frequency-locking regions in the (Δω, κ) plane for different sets of γ and λ. The boundary curves, obtained by numerically solving (3.65), are shown by solid, dotted, and dashed lines. The parameters are set to ω = 1, c = 1, and D = 0.02. The locking condition |Ω_p − ω₀| < 10⁻³ holds in each wedge-shaped region, where Ω_p represents the mean frequency under the periodic forcing and noise. The center of each locking region clearly shifts in the negative direction, as if the oscillator had a smaller natural frequency; this is the same behavior as in the white Gaussian noise case. The locking regions obtained by numerically solving (3.66) are also shown by symbols, and they are in good agreement with those of (3.65). It should be noted that the amount of the resonance-frequency shift depends on (γ, λ). Comparison with Figure 3.6(a) confirms that the amount of the resonance-frequency shift in Δω for each locking region coincides with the amount of NIFS. In Figure 3.10(b), the phase difference ψ = φ − ω₀t obtained by numerical integration of (3.66) is plotted against t for two cases with (γ, λ) = (1, 1), indicated in Figure 3.10(a) by the labels 'A' and 'B'. A better-quality locking is achieved in case A than in case B. Based on the above results, it is clear that, in the case of OU noise, the effective detuning ω₀ − Ω remains the important quantity characterizing the nature of entrainment in a noisy oscillator, where Ω is the frequency under the action of noise, given by (3.52). It has been demonstrated that the generalized stochastic phase equation precisely describes the resonance-frequency shift; it may thus provide a useful theoretical tool for investigating entrainment in various real physical systems subjected to colored noise.

3.5.3 Conjecture
Figure 3.10 (a) Frequency locking region of SL oscillator with OU noise and periodic signal. The regions obtained from (3.65) (lines) and (3.66) (symbols) are shown. Parameters are ω = 1, c = 1, D = 0.02 and (γ, λ) = (2, 0.5) (• solid line), (1, 1) (+ dotted line), and (1, 10) (◦ dashed line). (b) ψ vs t for cases A and B, where (γ, λ) = (1, 1) and D = 0.02.

It is expected that the effective detuning, which takes the NIFS into account, is also important for characterizing the entrainment transition in two mutually coupled oscillators, or in an ensemble of many coupled oscillators, when they are subjected to noise. Theoretical studies including the NIFS effect could reveal new scenarios for entrainment in noisy oscillator systems. For instance, consider an ensemble of many nonidentical noisy oscillators, in which the amount of NIFS differs from oscillator to oscillator. It could happen that the natural-frequency and effective-frequency distributions are qualitatively different; for example, the latter could have a double-peak profile while the former has a single-peak one. The conventional theory, lacking the NIFS effect, describes the entrainment transition scenario on the basis of the natural-frequency distribution. However, the transition may be dominated by the effective-frequency distribution. In such a case, the actual transition scenario may be qualitatively different from that predicted by the conventional theory, and the present theory can reveal it.
3.6 Summary
We have presented a review of the phase-reduction theory for stochastic limit-cycle oscillators. The phase equations for oscillators subjected to white Gaussian noise or OU noise were presented, and their derivations were described. These phase equations contain an additional term, which we call the Y-term; this is the difference from the conventional phase equation. The present phase equations are precise approximations
to the original noisy oscillators. The phase-reduction method based on the extended oscillator approach is applicable to various types of colored noise generated from white Gaussian noise via differential equations. An important effect of noise is that it induces a shift in the oscillator frequency. The amount of this noise-induced frequency shift is proportional to the noise intensity and, moreover, depends on the other parameters of the oscillator itself and of the noise. In particular, it strongly depends on two characteristic time scales: the noise correlation time and the time scale of attraction to the limit cycle. The noise-induced frequency shift results in the resonance-frequency shift in the entrainment of a noisy oscillator driven by a periodic signal. The present phase equations describe well both the noise-induced frequency shift and the resonance-frequency shift via the Y-term, whereas the conventional phase equation fails to describe them. The present phase-reduction theory establishes a basis for investigating the dynamics of noisy oscillators. Theoretical studies based on the present phase equations may reveal a new role of noise for entrainment in noisy oscillator systems.

References

1 Kuramoto, Y. Chemical Oscillations, Waves, and Turbulence. Springer-Verlag, Tokyo, 1984.
2 Pikovsky, A. S., Rosenblum, M., Kurths, J. Synchronization. Cambridge University Press, Cambridge, 2001.
3 Teramae, J. and Tanaka, D. Physical Review Letters, 93:204103, 2004.
4 Goldobin, D. S. and Pikovsky, A. S. Physica A, 351:126–32, 2005.
5 Goldobin, D. S. and Pikovsky, A. Physical Review E, 71:045201(R), 2005.
6 Teramae, J. and Tanaka, D. Progress of Theoretical Physics Supplement, 161:360–63, 2006.
7 Nakao, H., Arai, K., Kawamura, Y. Physical Review Letters, 98:184101, 2007.
8 Yoshimura, K., Davis, P. and Uchida, A. Progress of Theoretical Physics, 120:621–33, 2008.
9 Yoshimura, K. and Arai, K. Physical Review Letters, 101:154101, 2008.
10 Yoshimura, K. (submitted for publication).
11 Gardiner, C. W. Handbook of Stochastic Methods. Springer, New York, 1997.
4 Complex Systems, Numbers and Number Theory
Lucas Lacasa, Bartolo Luque, and Octavio Miramontes
All is number. Pythagoras
Much ink has been spilled in the historical discussion of what mathematics is. Does it follow the scientific method? Is it just a collection of recipes, formulas, and algorithms? Is it a science or a tool for other sciences to use? Is it both? Carl Friedrich Gauss, one of the greatest mathematicians of all time, called it the Queen of Sciences, but it was Eric Temple Bell who named it Queen and Servant of Science in the title of one of his most celebrated books [1]. To some, this is an adequate way of expressing the dichotomy in the nature of mathematics and its practical role with respect to the other sciences [2]. There is, however, an aspect often neglected in this discussion, namely mathematics seen as a natural phenomenon, and thus susceptible to study by other sciences, such as physics. Do numbers, along with their arithmetic operations, behave as physical systems? More specifically, can the modern theory of nonlinear physics address number arithmetic as a complex physical phenomenon? Since the remarkable coincidence in 1972 between H. Montgomery's work on the statistics of the spacings between zeta zeros and F. Dyson's analogous work on the eigenvalues of random matrices, we have seen, somewhat unexpectedly, how number theory and physics have built bridges between each other. These connections range from the reinterpretation of the Riemann zeta function as a partition function [3] or the approach to the Riemann Hypothesis via quantum chaos [4–6], to multifractality in the distribution of primes [7] or computational phase transitions in the number partitioning problem [8], to cite but a few (see [9] for an extensive bibliography).
The application of number-theoretical techniques in physics is not casual. As a matter of fact, theoretical physicists are, in their great majority, Pythagorean: we tend to believe that the universe can be described by mathematical relations and numbers. Indeed, the extraordinary efficacy of mathematics as a formal language for unveiling how nature works is impressive, and we give it full credit. In the same way, many physicists are delighted by the elegance, simplicity and mathematical beauty of a physical theory. Such criteria are sometimes closer to pure mathematics, or even aesthetics, than to the experimental sciences. Recently, an alternative stream within Complexity Science has approached Number Theory along the lines of this latter epistemological focus. Complex systems, where emergent behavior ranging from critical to self-organized or adaptive phenomena is likely to take place, are, abstractly speaking, systems composed of many individual elements interacting locally and in a nonlinear fashion. Such systems typically appear in nature and society (physical as well as technological, biological or even social systems). But since Number Theory is the field that studies elements (numbers) and the relations between them, doesn't it merit a complex-systems approach as well? Indeed, number relations (arithmetic and number-theoretic properties) can be understood as local nonlinear interactions. Isn't a system made of numbers then a natural setting for complex behavior to take place? Can we find signatures of emergent phenomena, such as pattern formation, phase transitions or self-organized criticality, in systems whose elements are numbers and whose local interaction rules are precisely their number-theoretic properties? If so, what information regarding the nature of numbers can be derived? What lessons can we extract from a complexity-science perspective? In this chapter we present some examples along this line and try to convince the reader that interesting connections can be built between Complexity Science and Number Theory, connections that may give new answers and pose new questions within these apparently separate fields. First we comment on a novel pattern in the distribution of the first-digit frequencies of prime numbers; then we analyze a stochastic algorithm for generating primes that exhibits a phase transition, while having interesting properties regarding computational complexity; finally, a model of integer division is presented, along with its self-organization towards criticality induced by the scale-free topology of the underlying integer network.
4.1 A Statistical Pattern in the Prime Number Sequence
God may not play dice with the universe, but something strange is going on with the prime numbers. Attributed to P. Erdős, referring to the famous quote of Einstein. 1)

1) D. Mackenzie, Mathematics: Homage to an Itinerant Master, Science 275:759, 1997.
The location of individual prime numbers within the integers seems to have no apparent order; however, their global distribution exhibits an amazing regularity [10]. Certainly, this intriguing property contrasting local randomness and global evenness has made the distribution of primes, since ancient times, a fascinating open problem for mathematicians [11] and, more recently, for physicists as well (see for instance [5, 9, 12–15]). The Prime Number Theorem, which establishes the global smoothness of the counting function π(n) providing the number of primes less than or equal to the integer n, was the first indication of such regularity [16]. Some other prime patterns have been found and advanced so far, from the visual Ulam Spiral [17] to arithmetic progressions of primes [18]. Other patterns remain conjectures, such as the global gap distribution between primes or the twin-prime distribution [16], enhancing the enigmatic interplay between apparent randomness and hidden regularity. There are indeed many open problems still to be solved, and the prime number distribution is yet to be understood [19–21]. For instance, deep connections exist between the prime number sequence and the nontrivial zeros of the Riemann zeta function [9, 22]. The celebrated Riemann Hypothesis, one of the most important open problems in mathematics, states that the nontrivial zeros of the complex-valued Riemann zeta function ζ(s) = ∑_{n=1}^{∞} 1/n^s are all complex numbers with real part 1/2, their location being intimately connected with the prime number distribution [22, 23].

4.1.1 Benford's Law and Generalized Benford's Law
The leading digit of a number is its nonzero leftmost digit. For instance, the leading digit of the prime 8703 is 8. The most well-known leading-digit distribution is the so-called Benford's Law [24], after physicist Frank Benford (1938), who empirically found that, in many disparate natural data sets and mathematical sequences, the leading digit d is not uniformly distributed, as might be expected, but instead has the biased probability

P(d) = log10(1 + 1/d)   (4.1)
where d = 1, 2, . . . , 9. This empirical law was first discovered by the astronomer Simon Newcomb in 1881 [25]; however, it is popularly known as Benford's Law or, alternatively, as the Law of Anomalous Numbers. Several disparate data sets, such as stock prices, freezing points of chemical compounds or physical constants (see Figure 4.1), exhibit this pattern, at least empirically. While originally only a curious pattern [26], practical implications began to emerge in the 1960s in the design of efficient computers (see for instance [27]). In recent years, goodness-of-fit tests against Benford's Law have been used to detect possibly fraudulent data, by analyzing the deviations of accounting data, corporation incomes, tax returns, scientific experimental data and even election results from the theoretical Benford predictions [28–32].
Figure 4.1 Leading digit histogram (black bars) of a list of 201 physical constants (in the International System). Grey bars represent Benford’s Law (4.1).
Several mathematical insights regarding Benford's Law have been put forward so far [26, 33–35], and [36] proved a Central-Limit-like Theorem which states that random entries picked from random distributions form a sequence whose first-digit distribution tends towards Benford's Law, thereby explaining its ubiquity. This law has for a long time been practically the only distribution that could explain the presence of skewed first-digit frequencies in generic data sets. Recently, the physicist Pietronero and collaborators [37] proposed a generalization of Benford's Law (GBL) based on multiplicative processes. It is well known that a stochastic process with probability density 1/x generates data which are Benford; therefore, series generated by power-law distributions P(x) ∼ x^{−α} with α ≠ 1 have a first-digit distribution that follows a so-called Generalized Benford's Law [37]:

P(d) = C ∫_d^{d+1} x^{−α} dx = [ (d+1)^{1−α} − d^{1−α} ] / ( 10^{1−α} − 1 )   (4.2)

where the prefactor C is fixed for normalization to hold and α is the exponent of the original power-law distribution (observe that for α → 1 the GBL reduces to Benford's Law, while for α = 0 it reduces to the uniform distribution).
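As a quick numerical illustration of (4.2) – ours, not part of the original analysis; the helper names gbl, sample_power_law and first_digit are hypothetical – the following Python sketch draws data from a power-law density by inverse-CDF sampling and compares the empirical first-digit frequencies with the GBL prediction:

```python
# Minimal sketch: sample P(x) ~ x^(-alpha) on [1, xmax] and compare the
# empirical first-digit frequencies with the GBL of Eq. (4.2).
import math
import random

def gbl(d, alpha):
    """Generalized Benford's Law, Eq. (4.2), valid for alpha != 1."""
    return ((d + 1) ** (1 - alpha) - d ** (1 - alpha)) / (10 ** (1 - alpha) - 1)

def sample_power_law(alpha, xmax, n, rng=random):
    """Inverse-CDF sampling of P(x) ~ x^(-alpha) on [1, xmax], alpha != 1."""
    e = 1.0 - alpha
    return [(1.0 + rng.random() * (xmax ** e - 1.0)) ** (1.0 / e)
            for _ in range(n)]

def first_digit(x):
    """Nonzero leftmost digit of x, read off its scientific notation."""
    return int(f"{x:e}"[0])

alpha, n = 0.5, 200_000
counts = [0] * 10
for x in sample_power_law(alpha, 1e6, n):
    counts[first_digit(x)] += 1
for d in range(1, 10):
    print(d, round(counts[d] / n, 4), round(gbl(d, alpha), 4))
```

For α = 0.5 the empirical frequencies should approach the GBL values as the sample grows, while for α → 0 they flatten towards the uniform distribution, as stated above.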
4.1.2 Are the First-Digit Frequencies of Prime Numbers Benford Distributed?

Although the prime numbers are rigidly determined, they somehow feel like experimental data. T. Gowers, Mathematics: A Very Short Introduction.
Many mathematical sequences, such as (n^n)_{n∈N} and (n!)_{n∈N} [39], binomial arrays (n choose k) [41], geometric sequences, or sequences generated by recurrence relations [26, 35], to mention a few, have been proved to conform to Benford's Law. So one may wonder whether this is the case for the primes. In Figure 4.2 we have plotted the rate of appearance of the leading digit d for the prime numbers in the interval [1, N] (black bars), for different sizes N. Note that the intervals [1, N] have been chosen such that N = 10^D, D ∈ N, in order to assure an unbiased sample in which all
possible first digits are a priori equiprobable (see [40] and references therein for a deeper discussion of this point). Benford's Law states that the first digit of a datum extracted at random is 1 with a frequency of 30.1%, and 9 with a frequency of only about 4.6%. Note in Figure 4.2, however, that the primes seem to approach uniformity in their first digit. Indeed, the more we increase the interval under study, the closer we come to uniformity (in the sense that all integers 1, . . . , 9 tend to be equally likely as a first digit). As a matter of fact, [41] proved that the primes are not Benford distributed, since their first significant digit is asymptotically uniformly distributed. The direct question arises: how does the prime
Figure 4.2 Leading digit histogram of the prime number sequence. Each plot represents, for the set of prime numbers comprised in the interval [1, N], the relative frequency of the leading digit d (black bars). Sample sizes are: 5 761 455 primes for N = 10^8, 50 847 534 primes for N = 10^9, 455 052 511 primes for N = 10^10 and 4 118 054 813 primes for N = 10^11. Grey bars represent the fit to a Generalized Benford Distribution (4.2) with a given exponent α(N). (Adapted from [38]).
sequence reach this uniform behavior in the infinite limit? Is there any pattern in its trend towards uniformity or, on the contrary, does the first-digit distribution lack any structure for finite sets? Although [41] showed that the leading digit of primes is distributed uniformly in the infinite limit, there is a clear bias from uniformity for finite sets, as we can see in Figure 4.2. In these pictures we have also plotted (grey bars) the fit to a GBL. Note that in each of the four intervals that we present there is a particular value of the exponent α for which an excellent agreement holds (see [40] for fitting methods and statistical tests). More specifically, given an interval [1, N], there exists a particular value α(N) for which a GBL fits, with extremely good accuracy, the first-digit distribution of the primes appearing in that interval. Observe that α depends only on the interval's upper bound: once this bound is fixed, α is constant in that interval. Interestingly, the value of the fitting parameter α decreases as the interval upper bound N, and hence the number of primes, increases. In Figure 4.3 we have plotted this size dependence, showing that a functional relation between α and N seems to hold:

α(N) = 1 / (log N − a)   (4.3)

where a = 1.1 ± 0.1 is the best fit.

Figure 4.3 Size-dependent parameter α(N). The dots represent the exponent α(N) for which the first significant digit of the prime number sequence fits a Generalized Benford Law in the interval [1, N]. The line corresponds to the least-squares fit α(N) = 1/(log N − 1.10). (Adapted from [38]).
Notice that lim_{N→∞} α(N) = 0, so this size-dependent GBL reduces asymptotically to the uniform distribution, consistent with previous theory [41]. Despite the local randomness of the prime number sequence, its first-digit distribution converges smoothly to uniformity in a very precise way: as a GBL with a size-dependent exponent α(N).

4.1.3 Prime Number Theorem Versus Size-Dependent Generalized Benford's Law
Why does the prime number sequence exhibit this unexpected pattern in the leading-digit distribution? What originates it? While the prime number distribution is deterministic, in the sense that precise rules determine whether or not an integer is prime, its apparent local randomness has suggested several stochastic interpretations. In particular, Cramér [42] (see also [16]) defined the following model. Assume that we have a sequence of urns U(n), n = 1, 2, . . ., and place black and white balls in each urn such that the probability of drawing a white ball from the k-th urn goes as 1/log k. Then, in order to generate a sequence of pseudo-random prime numbers, we only need to draw a ball from each urn: if the ball drawn from the k-th urn is white, then k is labelled a pseudo-random prime. The prime number sequence can indeed be understood as a concrete realization of this stochastic process, where the chance of a given integer x being prime is 1/log x. We have repeated all the statistical tests within the stochastic Cramér model and have found that a statistical sample of pseudo-random prime numbers in the interval [1, 10^11] is also GBL distributed and reproduces all the statistical results previously found in the actual primes (see [40] for an in-depth analysis). This result strongly suggests that a density 1/log x, which is nothing but the mean local density of the primes by virtue of the Prime Number Theorem, is likely to be responsible for the GBL pattern. Recently, it has been shown that disparate distributions, such as the Lognormal, the Weibull or the Exponential distribution, can generate standard Benford behavior [43] for particular values of their parameters. In this sense, a similar phenomenon could be taking place with GBL: can different distributions generate GBL behavior? One should thus switch the emphasis from the examination of data sets that obey GBL to probability distributions that do so, other than power laws.
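To make the Cramér construction concrete, here is a minimal Python sketch (our illustration; cramer_primes is a hypothetical helper name) that generates pseudo-random primes with local density 1/log k and tabulates their first-digit frequencies, which should display the same GBL bias as the true primes:

```python
# Minimal sketch of Cramér's urn model: each integer k >= 3 is declared a
# pseudo-random prime with probability 1/log k; we then inspect the
# first-digit frequencies of the resulting sequence.
import math
import random

def cramer_primes(n_max, seed=1):
    rng = random.Random(seed)
    return [k for k in range(3, n_max) if rng.random() < 1.0 / math.log(k)]

pseudo = cramer_primes(10**6)
counts = [0] * 10
for k in pseudo:
    counts[int(str(k)[0])] += 1  # leading (leftmost) digit of k
for d in range(1, 10):
    print(d, round(counts[d] / len(pseudo), 4))
```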
The prime counting function π(N) provides the number of primes in the interval [1, N] [16] and, up to normalization, stands as the cumulative distribution function of the primes. While π(N) is a stepped function, a nice asymptotic approximation to it is the offset logarithmic integral

π(N) ∼ ∫_2^N dx / log x = Li(N)   (4.4)
(one of the formulations of the Riemann Hypothesis actually states that |Li(n) − π(n)| < c √n log n for some constant c [22]). We can interpret 1/log x as an average prime density; the lower bound of the integral is set to 2 to avoid the singularity of the integrand at x = 1. Following Leemis [43], we can perform a chi-squared goodness-of-fit test of the conformance between the first-digit distribution generated by π(N) or Li(N) and a GBL with exponent α(N). In both cases there is remarkably good agreement, and we cannot reject the hypothesis that the primes are size-dependent GBL distributed.

4.1.4 The Primes Counting Function L(N)
Statistical arguments indicate that distributions other than x^{−α}, such as 1/log x, can generate GBL behavior: can we provide analytical arguments that support this fact? Suppose that a given sequence has a power-law-like density x^{−α} (whose first significant digits are consequently GBL). One can derive from this density a counting function L(N) that provides the number of elements of the sequence appearing in the interval [1, N]. A first option is to assume a local density of the shape x^{−α(x)}, such that L(N) ∼ ∫_2^N x^{−α(x)} dx. Note that this option implicitly assumes that α varies smoothly in [1, N], which is not the case in light of the numerical relation (4.3), which implies that α depends only on the upper bound of the interval. Indeed, x^{−α(x)} is not a good approximation to 1/ln x on any given interval. This drawback can be overcome by defining L(N) as follows:

L(N) = e α(N) ∫_2^N x^{−α(N)} dx   (4.5)
where the prefactor is fixed so that L(N) fulfills the prime number theorem, and consequently

lim_{N→∞} L(N) / (N/log N) = 1   (4.6)
Observe that what we are claiming is that, for a fixed interval [1, N], x^{−α(N)} acts as a good approximation to the primes' mean local density 1/ln x in that interval. In order to prove this, let us compare the counting functions derived from both densities. First, Li(N) = ∫_2^N (1/ln x) dx possesses the following asymptotic expansion:

Li(N) = (N/log N) [ 1 + 1/log N + 2/log^2 N + O(1/log^3 N) ]   (4.7)

On the other hand, we can asymptotically expand L(N) as follows:

L(N) = ( α(N) e / (1 − α(N)) ) N^{1−α(N)}
     = ( N / (log N − (a+1)) ) exp( −a/(log N − a) )
     = (N/log N) [ 1 + (a+1)/log N + (a+1)^2/log^2 N + O(1/log^3 N) ] × [ 1 − a/(log N − a) + a^2/(2(log N − a)^2) + O(1/(log N − a)^3) ]
     = (N/log N) [ 1 + 1/log N + (1 + a − a^2/2)/log^2 N + O(1/log^3 N) ]   (4.8)

Comparing (4.7) and (4.8), we conclude that Li(N) and L(N) are compatible cumulative distributions within an error

E(N) = (N/log N) [ 2/log^2 N − (1 + a − a^2/2)/log^2 N + O(1/log^3 N) ]   (4.9)

which is indeed minimal for a = 1, consistent with our previous numerical results for the fitting value of a in (4.3). Hence, within that error, we can conclude that the primes obey a GBL with α(N) following (4.3): primes follow a size-dependent Generalized Benford's Law.
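The agreement claimed above can be checked numerically. The following sketch (our illustration, assuming a = 1 as in (4.3); function names are hypothetical) compares π(N), the offset logarithmic integral (4.4) evaluated by simple quadrature, and the closed form of L(N) from (4.5):

```python
# Minimal sketch: compare pi(N) with Li(N) of Eq. (4.4) and with the
# closed form of L(N) of Eq. (4.5), taking a = 1 in Eq. (4.3).
import math

def pi_exact(n):
    """Prime counting via a plain sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

def li(n, steps=200_000):
    """Offset logarithmic integral on [2, n], midpoint rule."""
    h = (n - 2) / steps
    return h * sum(1.0 / math.log(2 + (i + 0.5) * h) for i in range(steps))

def l_count(n, a=1.0):
    """Closed form of Eq. (4.5): e*alpha*(N^(1-alpha) - 2^(1-alpha))/(1-alpha)."""
    alpha = 1.0 / (math.log(n) - a)
    return math.e * alpha * (n ** (1 - alpha) - 2 ** (1 - alpha)) / (1 - alpha)

N = 10**6
print(pi_exact(N), round(li(N)), round(l_count(N)))
# pi(10^6) = 78498; Li(N) and L(N) should both land within about 1% of it.
```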
4.1.5 Remarks
In this first section we have addressed a pattern in the primes that relates the distribution of their first digits to their mean density. Some concluding remarks can be made. First, we have seen that Generalized Benford distributions in the first digit are related to stochastic processes with power-law densities. In other words, there is a close link between complex systems with correlations that decay as power laws (usually called long-range correlations) and GBL, something that may suggest alternative approaches to characterizing such correlations in real systems. Second, the prime number theorem is indeed responsible for the statistical pattern found in the primes: for a fixed interval [1, N], we can approximate the mean local density of this sequence by a power-law distribution with good accuracy. The mean density of the primes, 1/log N, can be understood as the strength of the correlations in such a sequence. Hence, reasoning as before, we can deduce that complex systems with long-range correlations that decay even more slowly than a power law – concretely, as 1/log N – are likely to evidence size-dependent GBL behavior, just like the primes. The question is thus straightforward: which natural systems evidence this kind of correlation?

4.2 Phase Transition in Numbers: the Stochastic Prime Number Generator
In mathematics most things are abstract, but I have some feeling that I can touch the primes, as if they are made of a really physical material. To me, the integers as a whole are like physical particles. Quoted in K. Sabbagh’s Dr. Riemann’s Zeros.
I sometimes have the feeling that the number system is comparable with the universe that the astronomer is studying . . . The number system is something like a cosmos. M. Jutila, quoted in K. Sabbagh, Beautiful Mathematics.
Prime numbers are mostly found using the classical sieve of Eratosthenes and its recent improvements [44]. Additionally, several methods able to generate probable primes have been put forward [20]. In
this section we study an algorithm somewhat different from those mentioned above, which generates primes by means of stochastic integer division, and we analyze whether hints of collective behavior take place in it. Suppose [45, 46] that we have a pool of positive integers {2, 3, . . . , M}, from which we randomly extract a certain number N of them (this set constitutes the system under study). Note that the chosen numbers can be repeated, and that the integer 1 is not taken into account. Now, given two numbers n_i and n_j taken from the system of N numbers, the division rules of the algorithm are the following (see the sketch after Figure 4.4 for a concrete implementation):

Rule 1: if n_i = n_j, there is no division, and the numbers are not modified.

Rule 2: if the numbers are different (say n_i > n_j), a division takes place only if n_j is a divisor of n_i, i.e. if n_i mod n_j = 0. The algorithm's outcome is then schematized as n_i ⊕ n_j −→ n_k ⊕ n_j, where n_k = n_i/n_j.

Rule 3: if n_i > n_j but n_i mod n_j ≠ 0, no division takes place.

The result of a division is the extinction of the number n_i and the introduction of a smaller one, n_k. The algorithm goes as follows. After randomly extracting from the pool {2, 3, . . . , M} a set of N numbers, we pick at random two numbers n_i and n_j from the set and apply the division rules. In order to have parallel updating, we establish N repetitions of this process (N Monte Carlo steps) as one time step. Note that the algorithm rules tend to reduce numbers; hence, when iterated, this dynamic may generate prime numbers in the system. We say that the system has reached stationarity when no more divisions can be achieved, whether because every number has become prime or because Rule 2 cannot be satisfied by any pair – a 'frozen state'. The algorithm then stops. This algorithm clearly tends to generate as many primes as possible: when it stops, one may expect the system to contain a large number of primes, or at least to be frozen in a state of nondivisible pairs. A first indicator that evaluates this feature properly is the ratio of primes r that a given system of N numbers reaches at stationarity [48]. In Figure 4.4 we present the results of Monte Carlo
simulations calculating the steady values of r as a function of N, for a concrete pool size M = 2^14. Every simulation is averaged over 2 × 10^4 realizations in order to smooth out fluctuations. We can clearly distinguish two phases in Figure 4.4: a first one where r is small and a second one where the prime number concentration reaches unity.
Figure 4.4 Numerical simulation of the steady values of r versus N, for a pool size M = 2^14. Each run is averaged over 2 × 10^4 realizations in order to smooth out fluctuations. Note that the system exhibits a phase transition which distinguishes a phase where every element of the system becomes a prime in the steady state from a phase with low prime density. (Adapted from [47]).
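A minimal implementation of the generator is straightforward; the sketch below (our illustration; function names are hypothetical, and stationarity is detected by a cheap stochastic proxy – a full sweep with no successful division – rather than an exhaustive pair check) reproduces the qualitative behavior of Figure 4.4:

```python
# Minimal sketch of the stochastic prime number generator defined by
# Rules 1-3 above: N integers drawn with repetition from {2,...,M}
# evolve until frozen, and the steady-state ratio of primes r is reported.
import random

def is_prime(n):
    return n > 1 and all(n % p for p in range(2, int(n ** 0.5) + 1))

def steady_prime_ratio(N, M, max_steps=5_000, rng=random):
    system = [rng.randint(2, M) for _ in range(N)]
    for _ in range(max_steps):
        divided = False
        for _ in range(N):                    # one time step = N Monte Carlo steps
            i, j = rng.randrange(N), rng.randrange(N)
            a, b = system[i], system[j]
            if a != b and max(a, b) % min(a, b) == 0:   # Rule 2
                if a > b:
                    system[i] = a // b
                else:
                    system[j] = b // a
                divided = True
        if not divided:                       # proxy for the frozen state
            break
    return sum(map(is_prime, system)) / N

M = 2**14
for N in (10, 50, 110, 300):
    r = sum(steady_prime_ratio(N, M) for _ in range(20)) / 20
    print(N, round(r, 2))
```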
This is the portrait of a phase transition, where N stands as the control parameter and r as the order parameter. In the phase with small r, the average steady-state distribution of the N elements is plotted in Figure 4.5(a): the distribution is uniform (note that the vertical scale is zoomed in such a way that, if we rescaled it to [0, 1], we would see a horizontal line), which corresponds to a homogeneous state. In this regime, every number has the same probability of appearing in the steady state. On the other hand, the average steady-state distribution of the N numbers in the phase of high r is plotted in Figure 4.5(b): the distribution is now a power law, which corresponds to a biased, inhomogeneous state. In this regime, the probability of having a composite
Figure 4.5 (a) The steady-state distribution (averaged over 2 × 10^4 realizations) of the N elements, for N = 10 and M = 10^4 (phase with low r): this is a uniform distribution U(2, M) (note that the distribution is not normalized). (b) The same plot for N = 110 and M = 10^4 (phase where r reaches unity): this is a power law P(x) ∼ 1/x. (Adapted from [47]).
number in the steady state is practically null, and the probability of having the prime x is in turn proportional to 1/x [49]. The breaking of this symmetry between steady distributions leads us to assume an order–disorder phase transition, the phase with a small proportion of primes being the disordered phase and the one where r tends to one being the ordered phase. A second feature worth investigating is the size dependence of the transition. In Figure 4.6 we plot r versus N for a set of different pool sizes M. Note that the qualitative behavior is size invariant; however, the transition point increases with M. This size dependence will be considered in a later section. As a third preliminary insight, we shall study the temporal evolution of the system. For example, we can compute and study the evolving value r(t) for several N, as well as the cumulative number of divisions that a system of N numbers needs to make in order to reach stationarity. In the disordered phase we see that the system freezes rapidly: the algorithm is not efficient in producing primes and r remains asymptotically small. In the ordered phase the system needs more time to reach stationarity: this is because the algorithm is producing many primes, as the evolving value of r reaches unity. It is, however, in a
Figure 4.6 Plot of r versus N, for different pool sizes M (each simulation is averaged over 2 × 10^4 realizations). (Adapted from [47]).
neighborhood of the transition that the system takes the longest time to reach the steady state: there the system is producing many divisions, but not that many primes. This fact can be related to a critical-slowing-down phenomenon. It is worth noting in Figures 4.4 and 4.6 that in the disordered phase the order parameter does not vanish, as it ideally should. This is due to the fact that, in a pool of M numbers, following the prime number theorem, one finds on average M/log(M) primes [50]. Thus there is always a residual contribution of 1/log(M) to the ratio, not related to the system's dynamics, which only becomes relevant for small values of N, when the algorithm is not able to produce primes.

4.2.1 Phase Transition

4.2.1.1 Network Image and Order Parameter
Let us now see how this phase transition can be understood as a dynamical process embedded in a network having integer numbers as the nodes. Consider two numbers of that network, say a and b (a > b).
These numbers are connected (a → b) if they are exactly divisible, that is to say, if a/b = c with c an integer. The topology of similar networks has been studied in [51–53]; concretely, in [53] it is shown that this network exhibits scale-free topology [54]: the degree distribution is P(k) ∼ k^{−λ} with λ = 2. In our system, fixing N is equivalent to selecting a random subset of nodes in this network. If a and b are selected, they eventually can give a/b = c. In terms of the network this means that the path between nodes a and c is traveled thanks to the 'catalytic' presence of b. We may say that our network is indeed a catalytic one [55, 56] where there are no cycles as attractors but two different stationary phases: (i) for large values of N all resulting paths sink into prime numbers, and (ii) if N is small only a few paths are traveled and no primes are reached. Notice that, in this network representation, primes are the only nodes that have input links but no output links (by definition, a prime number is only divisible by one and by itself, and thus acts as an absorbing node of the dynamics). When the temporal evolution of this algorithm is explored for small values of N, we observe that the steady state is reached very quickly. As a consequence, only a few paths are traveled over the network and, since N is small, the probability of catalysis is small as well; hence the paths ending in prime nodes are not traveled. We say in this case that the system freezes in a disordered state. In contrast, when N is large enough, many divisions take place and the network is traveled at large. Under these circumstances, an arbitrary node may be catalyzed by any of the other N − 1 numbers, its probability of reaction being high. Thus, on average, all numbers can follow network paths towards the prime nodes: we say that the system reaches an ordered state. In the light of the preceding arguments, it is meaningful to define a new order parameter P as the probability that, for given (N, M), the system reduces every one of its N numbers to a prime, that is to say, reaches an ordered state. In practice, P is calculated in the following way: given (N, M), for each realization we check, once stationarity has been reached, whether all the elements are primes, and we then count the fraction of runs in which all the remaining numbers are prime in the steady state. In Figure 4.7 we plot P versus N, for different pool sizes M. The phase transition that the system exhibits now has a clear meaning: when P = 0, the probability that the system is able to reduce the whole set to primes is null (disordered state), and vice versa when P ≠ 0. In each case, Nc(M), the critical value separating the phases P = 0 and P ≠ 0, can now be
Figure 4.7 Order parameter P versus N, for the same pool sizes as in Figure 4.6 (averaged over 2 × 10^4 realizations). Note that P is now a well-defined order parameter, since P ∈ [0, 1]. Again, Nc depends on the pool size M. (Adapted from [47]).
defined. Observe in Figure 4.7 that Nc increases with the pool size M: N is not an intensive variable of the system. In order to describe this size dependence, we need to find an analytical argument by means of which to define the system's characteristic size. As we will see shortly, this will not be M, as one might expect at first.

4.2.1.2 Annealed Approximation
The system shows highly complex dynamics: correlations between the N numbers of the system build up at each time step in a nontrivial way, and finding an analytical solution to the full problem is hard. However, an annealed approximation can still be performed. The main idea is to neglect these two-time correlations, assuming that at each time step the N elements are randomly generated. In this way we can calculate, given N and M, the probability q that, at a single time step, no pair of the N numbers is divisible. Thus, 1 − q is the probability that there exists at least one reacting pair. Note that 1 − q will
Figure 4.8 Numerical simulations calculating the probability 1 − q( N, M ) (as explained in the text) versus N , for different values of pool size M, in the annealed approximation. (Adapted from [47]).
somehow play the role of the order parameter P in this oversimplified system. As a first step, we can calculate the probability p(M) that two numbers randomly chosen from the pool [2, M] are divisible:

p(M) = ( 2/(M − 1)^2 ) ∑_{x=2}^{⌊M/2⌋} ⌊ (M − x)/x ⌋ ≈ 2 log M / M   (4.10)
where the floor brackets stand for the integer part function. Obviously, 1 − p(M) is the probability that two randomly chosen numbers are not divisible. Now, in a system composed of N numbers we can form N(N − 1)/2 distinct pairs. However, these pairs are not independent in the present case, so the probability q(N, M) is not simply (1 − p(M))^{N(N−1)/2}: correlations between pairs must somehow be taken into account. At this point, we can make the following ansatz:
q(N, M) ≈ ( 1 − 2 log M / M )^{N^{1/α}}   (4.11)
Figure 4.9 Scaling of Nc versus the system's characteristic size in the annealed approximation. The plot is log–log: the slope of the straight line provides the exponent α = 0.48 of (4.12). (Adapted from [47]).
where α characterizes the degree of independence of the pairs. The relation 1 − q(N, M) versus N is plotted in Figure 4.8 for different values of the pool size M. Note that, for a given M, the behavior of 1 − q(N, M) is qualitatively similar to that of P, the order parameter of the real system. For convenience, in this annealed approximation we define the threshold Nc as the value for which q(Nc, M) = 0.5, i.e. the value for which half of the configurations reach an ordered state. This procedure is usual, for instance, in percolation processes, since the choice of the percolation threshold, related to the definition of a spanning cluster, is somewhat arbitrary in finite-sized systems [57]. Taking logarithms in (4.11) and expanding up to first order, we easily find a scaling relation between Nc and M that reads

Nc ∼ ( M / log M )^α   (4.12)

This relation suggests that the system's characteristic size is not M, as one would expect at first, but M/log(M). In Figure 4.9 we plot, in log–log, the scaling between Nc and the characteristic size M/log(M) that
can be extracted from Figure 4.8. The best fit provides a relation of the shape (4.12) with α = 0.48 ± 0.01 (note that the scaling is quite good, which lends consistency to the leading-order approximations assumed in (4.12)).
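The first ingredient of the annealed estimate is easy to verify numerically; the following sketch (our illustration; p_exact is a hypothetical helper name) evaluates the exact pairwise divisibility probability of (4.10) against its asymptotic form 2 log M/M:

```python
# Minimal sketch: exact divisibility probability p(M) of Eq. (4.10)
# versus the asymptotic estimate 2*log(M)/M.
import math

def p_exact(M):
    # sum over x of floor((M - x)/x), as in Eq. (4.10)
    total = sum((M - x) // x for x in range(2, M // 2 + 1))
    return 2 * total / (M - 1) ** 2

for M in (2**10, 2**12, 2**14):
    print(M, round(p_exact(M), 6), round(2 * math.log(M) / M, 6))
```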
4.2.1.3 Data Collapse

The annealed approximation introduced in the preceding section suggests that the characteristic size of the system is not M, as one would expect, but rather M/log(M). This is quite reasonable if we bear in mind that the number of primes in a pool of M integers is, on average, M/log(M) [50]: the quantity of primes does not grow linearly with M. In order to test whether this conjecture also applies to the prime number generator, in Figure 4.10 we represent (in log–log) the values of Nc (obtained numerically from the values where P(N, M) becomes non-null for the first time) as a function of M/log(M). We find the same scaling relation as for the annealed system (4.12), but with a different value α = 0.59 ± 0.05, due to the neglect of correlations in the annealed approximation.
Figure 4.10 Scaling of the critical point Nc versus the characteristic system size M/log(M) in the prime number generator, for pool sizes M = 2^10 – 2^18. The plot is log–log: the slope of the curve provides an exponent α = 0.59. (Adapted from [47]).
Figure 4.11 Data collapse of the curves (N, P) for different values of M, assuming the scaling relation (4.12). The collapse is very good; the scaling relation seems to be consistent. (Adapted from [47]).
In Figure 4.11 we have collapsed all the curves P(N, M) of Figure 4.7 according to the preceding relations and standard finite-size scaling techniques. Note that the collapse is excellent, which lends consistency to the full development.

4.2.2 Computational Complexity
Computer science and physics, although in essence different disciplines, have been closely linked since the birth of the former. More recently, computer science has met statistical physics in the so-called combinatorial problems and their relation to phase transitions and computational complexity (see [58] for a compendium of recent works). For instance, Erdős and Rényi, in their pioneering work on graph theory [59], found the existence of zero–one laws in their study of cluster generation. These laws have a clear interpretation in terms of phase transitions, which appear extensively in many physical systems. The computer science community has recently detected this behavior in the context of algorithmic problems [60–68]. The so-called threshold
phenomenon [58] distinguishes zones in the phase space of an algorithm where the problem is, computationally speaking, either tractable or intractable. It is plausible that these three phenomena – zero–one laws, phase transitions and the threshold phenomenon – can be understood as a single concept, so that building bridges between them is an appealing idea. Related to the concept of a phase transition is the task of classifying combinatorial problems. The theory of computational complexity distinguishes problems which are tractable, that is to say, solvable in polynomial time by an efficient algorithm, from those which are not. The so-called NP class gathers the problems that can be solved in polynomial time by a nondeterministic Turing machine [61]. This class generally includes many hard or eventually intractable problems, although this classification is a worst-case one, that is to say, rather pessimistic, since the situations that involve long computations can be rather rare. Over recent years, numerical evidence has suggested the presence of the threshold phenomenon in NP problems. These phase transitions may in turn characterize the average-case complexity of the associated problems, as pointed out recently [63]. As also pointed out in [64], phase transitions quite similar to the former, such as percolation processes for instance, can easily be related to search problems. In the case under study, we can redefine the system as a decision problem in the following terms: when is the clause 'every number of the system is prime when the algorithm reaches stationarity' satisfied? It is clear that, from this viewpoint, the prime number generator can be understood as a SAT-like problem [58], since there is an evident parallelism between the satisfiability of the preceding clause and our order parameter P. Therefore, in order to study the system from the viewpoint of computational complexity theory, we must address the following questions: what is the algorithmic complexity of the system, and how is the observed phase transition related to the problem's tractability?

4.2.2.1 Worst-Case Classification
The algorithm under study is related to both the primality-testing and the integer-decomposition problems. Although primality testing was only known to belong to the so-called NP problems [72] (solvable in nondeterministic polynomial time), it has recently been shown to be in P [69]: there exists at least one efficient deterministic algorithm that tests whether a number is prime in polynomial time. The integer decomposition problem is in turn a harder problem, and finding an algorithm that factorizes numbers in polynomial time is an unsolved problem of computer science.
Furthermore, exploring the computational complexity of the problem at hand could eventually shed light on these aspects. For this task, let us determine how the search space grows when we increase N. At a given time step, the search space corresponds to the set of configurations that must be checked in order to solve the decision problem: this is simply the number of different pairings that can be formed using N numbers. Applying basic combinatorics, the number of different configurations G for N elements and N/2 pairs is:

G(N) = N! / ( 2!^{N/2} (N/2)! ) = (N − 1)!!   (4.13)
We find that the search space increases with N as (N − 1)!!. On the other hand, note that the decision problem is rapidly checked (in polynomial time) if we provide a candidate set of N numbers to the algorithm. These two features lead us to conclude that the problem at hand belongs, in a worst-case classification [58], to the NP complexity class. Observe that this is not surprising: the preceding sections led us to the conclusion that the process is embedded in a (dynamical) scale-free, nonplanar network [70]. Now, it has been shown that nonplanarity in this kind of problem usually leads to NP-completeness [71] (for instance, the two-dimensional Ising model is, when the underlying network topology is nonplanar, in NP).

4.2.2.2 Easy-Hard-Easy Pattern
An ingredient which is quite universal in algorithmic phase transitions is the so-called easy-hard-easy pattern [58]. Deep in either phase, the computational cost of the algorithm (the time that it requires to find a solution, that is, to reach stationarity) is relatively small. However, in a neighborhood of the transition this computational time exhibits a sharp maximum. In terms of search or decision problems, this fact has a clear interpretation: the problem is relatively easy to solve as long as the input is clearly in one phase or the other, but not in between. In the system under study, the algorithm is relatively fast in reaching an absorbing state of low prime concentration for small N, because the probability of successful divisions is small. On the other hand, the algorithm is also quick to reach an absorbing state of high prime concentration for high N, because the system has enough 'catalytic candidates' at each time step, so the probability of successful divisions is high. In the transition's vicinity the system is critical. Divisions can be achieved;
however, the system needs to make an exhaustive search of the configuration space in order to find these divisions, and in this region the algorithm requires much more time to reach stationarity. Note that this easy-hard-easy pattern is related, in second-order phase transitions, to the phenomenon of critical slowing down, where the relaxation time in the critical region diverges [58]. We have already seen that the system reaches the steady state in a different manner depending on the phase in which the process is located. More accurately, when N ≪ Nc (disordered phase), the system rapidly freezes, practically without achieving any reaction. When N ≫ Nc (ordered phase), the system takes more time to reach the steady state, but it is in the regime N ∼ Nc that this time is maximal. In order to compare these three regimes properly, let us define a characteristic time τ of the system as the average number of time steps that the algorithm needs in order to reach stationarity. Remember that we defined a time step t as N Monte Carlo steps (N operations). Thus, normalizing over the set of numbers considered, it is straightforward to define the characteristic time as

τ(N) = t / N   (4.14)

Note that τ can be understood as a measure of the algorithm's time complexity [61]. In Figure 4.12 we plot τ versus N for a set of different pools M = 2^10 . . . 2^14 (simulations are averaged over 2 × 10^4 realizations). Note that, given a pool size M, τ reaches a maximum in a neighborhood of the transition point Nc(M), as can be checked against Figure 4.7. As expected, the system exhibits an easy-hard-easy pattern, since the characteristic time τ required by the algorithm to solve the problem has a clear maximum in the neighborhood of the phase transition. Moreover, the location of the maximum shifts with the system's size according to the critical-point scaling found in (4.12). On the other hand, this maximum also scales as

τmax( M/log(M) ) ∼ ( M/log(M) )^δ   (4.15)

where the best fit provides δ = 0.13 ± 0.1. Note that, in the thermodynamic limit, the characteristic time would diverge in the neighborhood of the transition. It is straightforward to relate this parameter to the relaxation time of a physical phase transition. According to these relations, we can collapse the curves τ(N, M) of Figure 4.12 onto a single universal one. In Figure 4.13 this collapse is shown: its goodness supports the validity of the scaling relations.
Figure 4.12 Characteristic time τ (as defined in the text) versus N, for different pool sizes, from left to right: M = 2^10, 2^11, 2^12, 2^13, 2^14. Every simulation is averaged over 2 × 10^4 realizations. Note that, for each curve and within the finite-size effects, τ(N) reaches a maximum in a neighborhood of its transition point (this can easily be checked against Figure 4.7). (Adapted from [47]).
Figure 4.13 Data collapse of τ for the curves of Figure 4.12. The goodness of the collapse validates the scaling relations. (Adapted from [47]).
4.2.2.3 Average-Case Classification
While computational complexity theory has generally focused on worst-case classification, one may readily generalize this study to different complexity definitions. Average-case analysis is understood as the classification of algorithms according to the resource usage needed to solve them on average (for instance, the execution time of an algorithm often depends on its inputs, and consequently one can define an average execution time). This classification seems more relevant for practical purposes than the worst-case one. As a matter of fact, although NP-complete problems are generally thought of as being computationally intractable, some are indeed easy on average (however, some remain complete in the average case, indicating that they are still difficult in randomly generated instances). The system under study has been interpreted in terms of a search problem belonging to the NP class in a worst-case classification. An average-case behavior, which is likely to be more useful for classifying combinatorial problems, turns out to be harder to describe. In [63], Monasson et al. showed that, where NP problems exhibit phase transitions (related to dramatic changes in the computational hardness of the problem), the order of the phase transition is in turn related to the average-case complexity of the problem. More specifically, second-order phase transitions are related to polynomial growth of the resource requirements, whereas first-order phase transitions are associated with exponential growth. We have shown that the system exhibits a continuous transition and an easy-hard-easy pattern. Following Monasson et al. [63], while our prime generator is likely to belong to the NP class, its average-case complexity class is only polynomial. This means that, as the pool size M grows, the execution time that the algorithm needs in order to solve the problem (to reach the steady state) increases only polynomially on average (on average means over the ensemble of all possible initial configurations, i.e. considering many realizations with initial random configurations). We may argue that one of the reasons for this hardness reduction is that the algorithm does not perform an exhaustive search but a stochastic one: the search space is not exhaustively explored. Thereby, the average behavior of the system, and thus the average decision problem, can easily be solved by the algorithm, at the cost of the probabilistic character of the solution.
4.3 Self-Organized Criticality in Number Systems: Topology Induces Criticality
Too complicated! Per Bak (1948–2002)

Self-Organized Criticality (SOC) is a concept introduced in the 1980s [73, 74] to explain how multicomponent systems may evolve into barely stable, self-organized critical structures without the need for external 'tuning' of parameters. This concept motivated a large theoretical and experimental research effort in many areas of physics and interdisciplinary science. As a consequence, several natural phenomena were claimed to exhibit SOC [75–77]. Nevertheless, there was neither a generally accepted definition of SOC nor an acceptable explanation of the conditions under which it is likely to arise. In order to disentangle the mechanism of self-organization towards criticality it is necessary to focus on rather 'simple' models, and in this sense Flyvbjerg introduced the so-called 'simplest SOC model' along with a workable definition of the phenomenon [78, 79], namely 'a driven, dissipative system consisting of a medium through which disturbances can propagate, causing a modification of the medium, such that eventually the disturbances are critical and the medium is modified no more – in the statistical sense'. On the other hand, in recent years it has been realized that the dynamics of processes taking place on networks depends strongly on the network's topology [80, 81]. Concretely, there is a current interest in the possible relations between SOC behavior and scale-free networks [81], characterized by power-law degree distributions P(k) ∼ k^{−γ}, and in how self-organized critical states can emerge when coupling topology and dynamics [82–85]. In this last section of the present chapter we introduce a rather simple and general mechanism by which the onset of criticality in the dynamics of self-organized systems is induced by the scale-free topology of the underlying network of interactions. To illustrate this mechanism we present a simple model, called the division model from now on, based solely on division between integers. We show that this model complies with Flyvbjerg's definition of SOC and, to our knowledge, constitutes the simplest SOC model advanced so far that is also analytically solvable.
4.3.1 The Division Model
A primitive set of N integers is defined in Number Theory as one in which no element exactly divides any other element [19, 86, 87]. Consider an ordered set of M − 1 integers {2, 3, 4, . . . , M} (notice that zero and one are excluded and that integers are not repeated), which we will call the pool from now on. Suppose that we have extracted N elements from the pool to form a primitive set. The division model then proceeds by drawing integers at random from the remaining elements of the pool and introducing them into the set. Suppose that at time t the primitive set contains N(t) elements. The algorithm's updating rules are the following:

(R1) Perturbation: an integer a is drawn from the pool at random and introduced into the primitive set.

(R2) Dissipation: if a divides and/or is divided by, say, s elements of the primitive set, then we say that an instantaneous division-avalanche of size s takes place, and these latter elements are returned to the pool, such that the set remains primitive but with a new size N(t + 1) = N(t) + 1 − s.

This process is then iterated, and we expect the primitive set to vary in size and composition accordingly. The system is driven and dissipative, since integers are constantly introduced into it and removed from it, the temporal evolution of its size being characterized by N(t).

4.3.2 Division Dynamics and SOC
In order to unveil the dynamics of the model, we have performed several Monte Carlo simulations for different values of the pool size M. In Figure 4.14(a) we have represented, for illustration purposes, a concrete realization of N(t) for M = 10^4 and N(0) = 0. Note that, after a transient, N(t) self-organizes around an average stable value Nc, fluctuating around it. In the inner part of Figure 4.14(b) we have plotted, in log–log, the power spectrum of N(t): the system evidences f^{−β} noise, with β = 1.80 ± 0.01. These fluctuations are related to the fact that at each time step a new integer extracted from the pool enters the primitive set (external driving R1).
Figure 4.14 (a) Single realization of the division model showing the time evolution of the primitive set size N(t) for a pool size M = 10^4 and N(0) = 0. Notice that, after a transient, N(t) self-organizes around an average stable value Nc, fluctuating around it. (b) (black dots) Scaling behavior of the average stable value Nc as a function of the system's characteristic size M/log M. The best fit provides Nc ∼ (M/log M)^γ, with γ = 1.05 ± 0.01. (squares) Scaling of Nc as predicted by (4.23). Inner figure: log–log plot of the power spectrum of N(t), showing f^{−β} noise with β = 1.80 ± 0.01 (this latter value is the average over 10^5 realizations of N(t) for 4096 time steps after the transient and M = 10^4). (Adapted from [88]).
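Before turning to the analytical treatment, rules R1 and R2 can be simulated directly; the following Python sketch (our illustration; division_model is a hypothetical name) records N(t) and the avalanche sizes, whose steady-state statistics reproduce the phenomenology of Figures 4.14 and 4.15:

```python
# Minimal sketch of the division model: R1 perturbs a primitive set with
# a random integer from the pool {2,...,M}; R2 returns to the pool the s
# elements that divide or are divided by it (a division-avalanche).
import random

def division_model(M, steps, seed=0):
    rng = random.Random(seed)
    primitive = set()
    sizes, avalanches = [], []
    for _ in range(steps):
        a = rng.choice([n for n in range(2, M + 1) if n not in primitive])  # R1
        hit = {b for b in primitive if a % b == 0 or b % a == 0}            # R2
        primitive -= hit
        primitive.add(a)
        sizes.append(len(primitive))
        avalanches.append(len(hit))
    return sizes, avalanches

sizes, avalanches = division_model(M=10_000, steps=5_000)
# N(t) should settle near M/(2 log M), cf. Eq. (4.18); avalanches are heavy-tailed.
print("final N(t):", sizes[-1], "largest avalanche:", max(avalanches))
```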
Eventually (according to rule R2), a division-avalanche can propagate and cause a modification of the size and composition of the primitive set. These avalanches constitute the disturbances of the system. In Figure 4.15(a) we have represented an example of the evolution of the avalanche sizes in time. In the same figure (b) we show the probability P(s) that a division-avalanche of size s takes place, for different pool sizes M. These distributions are power laws P(s) ∼ s^{−τ} exp(−s/s₀) with τ = 2.0 ± 0.1:
Figure 4.15 (a) Single realization of the division model showing the time distribution of division-avalanches. (b) Probability distribution P(s) that a division-avalanche of size s takes place in the system, for different pool sizes M = 2^10 (triangles), M = 2^11 (inverted triangles), M = 2^12 (diamonds) and M = 2^13 (circles). In every case we find P(s) ∼ s^{−τ} exp(−s/s₀) with τ = 2.0 ± 0.1. Note that the power-law relation evidences an exponential cut-off at particular values s₀ due to finite-size effects. Inner figure: scaling of the cut-off value s₀ as a function of the system's characteristic size M/log M, with an exponent ω = 1.066 ± 0.003. (Adapted from [88]).
disturbances are thus critical. Observe that the power-law relation suffers a crossover to exponential decay at a cut-off value s₀ due to finite-size effects (the pool is finite), and that the location of these cut-offs scales with the system's characteristic size, s₀ ∼ (M/log M)^ω with ω = 1.066 ± 0.003, which is typical of a finite-size critical state [75] (this characteristic size will be explained later in the text). We can conclude that, according to Flyvbjerg's definition [78], the division model exhibits SOC.
Division-avalanches lead the system to different marginally stable states, which are simply primitive sets of different sizes and compositions. Accordingly, for a given pool [2, M], these time fluctuations generate a stochastic search in the configuration space of primitive sets.

4.3.3 Analytical Developments: Statistical Physics Versus Number Theory
In the following we discuss analytical insights into the problem. Consider the divisor function [90] that provides the number of divisors of n, excluding the integers 1 and n:

d(n) = ∑_{k=2}^{n−1} ( ⌊n/k⌋ − ⌊(n−1)/k⌋ )   (4.16)
where ⌊·⌋ stands for the integer part function. The average number of divisors of a given integer in the pool [2, M] is then

(1/(M−1)) ∑_{n=2}^{M} d(n) ≈ log M + 2(γ − 1) + O(1/√M)   (4.17)

where γ is the Euler–Mascheroni constant.
Accordingly, the mean probability that two numbers a and b taken at random from [2, M] are divisible is approximately P = Pr(a|b) + Pr(b|a) ≈ 2 log M/M. Moreover, if we assume that the N elements of the primitive set are uncorrelated, the probability that a new integer generates a division-avalanche of size s is on average (2 log M/M) N. We can consequently build a mean-field equation for the system's evolution, describing that at each time step an integer is introduced into the primitive set and a division-avalanche of mean size (2 log M/M) N takes place:
N(t + 1) = N(t) + 1 − (2 log M / M) N(t)   (4.18)

whose fixed point Nc = M/(2 log M), the stable value around which the system self-organizes, scales with the system's size as

Nc(M) ∼ M / log M   (4.19)
Therefore, we can conclude that the system's characteristic size is not M (the pool size), as one would expect at first, but M/log M. This scaling behavior has already been noticed in other number-theoretic models evidencing collective phenomena [48, 89]. In Figure 4.14 we have plotted (black dots) the values of Nc as a function of the characteristic size M/log M, provided by Monte Carlo simulations of the model for different pool sizes M = 2^8, 2^9, . . . , 2^15 (Nc has been estimated by averaging N(t) in the steady state). Note that the scaling relation (4.19) holds. However, the exact numerical values Nc(M) are underestimated by (4.18). This is reasonable, since we have assumed that the primitive set elements are uncorrelated, which is obviously not the case: observe, for instance, that any prime number p > M/2 introduced into the primitive set will remain there forever. Fortunately, this drawback of our mean-field approximation can be overcome by considering the function D(n) that defines the exact number of divisors that a given integer n ∈ [2, M] has, i.e. the amount of numbers in the pool that divide or are divided by n:

D(n) = d(n) + ⌊M/n⌋ − 1   (4.20)
D (n) 1 1 − pn (t) (4.21) p n ( t + 1) = 1 − pn (t) + M − N (t) M − N (t) that leads to a stationary survival probability in the primitive set: p∗n =
1 1 + D (n)
(4.22)
In Figure 4.16 we depict the stationary survival probability of the integer n (black dots) obtained through numerical simulations for a system with M = 50, while the squares represent the values of p*_n as obtained from (4.22). Note that there is a remarkable agreement. We can now proceed to estimate the critical size values Nc(M) as

Nc(M) ≈ ∑_{n=2}^{M} p*_n = ∑_{n=2}^{M} 1 / ( 1 + D(n) )   (4.23)
In Figure 4.14 we have represented (squares) the values of Nc ( M) predicted by (4.23), showing good agreement with the numerics (black dots).
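Equations (4.20), (4.22) and (4.23) translate directly into code; the sketch below (our illustration; the helper names are hypothetical) computes the predicted critical sizes and compares them with the cruder mean-field value M/(2 log M) of (4.18):

```python
# Minimal sketch: D(n) from Eq. (4.20), the survival probability of
# Eq. (4.22) and the critical-size estimate of Eq. (4.23).
import math

def d_small(n):
    """Divisors of n excluding 1 and n, cf. Eq. (4.16)."""
    return sum(1 for k in range(2, n) if n % k == 0)

def big_d(n, M):
    """Pool members that divide or are divided by n, Eq. (4.20)."""
    return d_small(n) + M // n - 1

def n_c(M):
    """Critical primitive-set size, Eq. (4.23)."""
    return sum(1.0 / (1 + big_d(n, M)) for n in range(2, M + 1))

for M in (2**8, 2**10, 2**12):
    print(M, round(n_c(M), 1), "mean-field:", round(M / (2 * math.log(M)), 1))
```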
Figure 4.16 Inset: Histogram of the number of integers in [2, 10⁶] that have D divisors. The histogram has been smoothed (binned) to reduce scatter. The best fit provides a power law P(D) ∼ D^{−τ} with τ = 2.01 ± 0.01, in agreement with P(s) (see the text). The black dots indicate the stationary survival probability of integer n in a primitive set for a pool size M = 50, obtained from Monte Carlo simulations of the model over 10⁶ time steps (a preliminary transient of 10⁴ time steps was discarded). The squares indicate the theoretical prediction of these survival probabilities according to (4.22). (Adapted from [88]).
Finally, the preceding calculations make a statement about the system's fluctuations: the distribution of division-avalanches P(s) should be proportional to the fraction of integers having s divisor links. In order to test this conjecture, in Figure 4.16 (inset) we have plotted a histogram of the number of integers having a given number of divisors, obtained from the computation of D(n) for M = 10⁶. The tail of this histogram follows a power law with exponent τ = 2.0. This can be proved analytically as follows. The numbers responsible for the tail of the histogram are those which divide many others, i.e. rather small ones (n ≪ M). A small number n typically divides D(n) ≈ M/n others. Now, how many 'small numbers' share the same value of D(n)? The answer is n, n − 1, . . . , n − z, where

⌊M/n⌋ = ⌊M/(n−1)⌋ = . . . = ⌊M/(n−z)⌋    (4.24)
The maximum value of z fulfills

M/(n−z) − M/n = 1

that is, z ≈ n²/M. The frequency of D(n) is thus fr(D(n)) = n²/M, but since s ≡ D(n) ≈ M/n, we get fr(s) ∼ M s⁻², and finally, after normalizing, P(s) ∼ s⁻².
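This tail exponent can be reproduced with a few lines of code. The sketch below (illustrative Python; the fitting window is an arbitrary choice of ours) tallies how many integers in the pool have a given number of divisor links D and estimates τ by a crude log-log least-squares fit:

    import math
    from collections import Counter

    def tail_exponent(M=100_000, fit_min=10, fit_max=300):
        # Histogram of D(n) over the pool [2, M]; its tail should follow
        # P(D) ~ D^(-tau) with tau close to 2 (cf. Figure 4.16, inset).
        D = [0] * (M + 1)
        for k in range(2, M // 2 + 1):
            for m in range(2 * k, M + 1, k):
                D[m] += 1
                D[k] += 1
        hist = Counter(D[2:])
        # Crude least-squares fit of log(count) vs. log(D) over the tail.
        pts = [(math.log(d), math.log(c)) for d, c in hist.items()
               if fit_min <= d <= fit_max]
        n = len(pts)
        sx = sum(x for x, _ in pts)
        sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts)
        sxy = sum(x * y for x, y in pts)
        return -(n * sxy - sx * sy) / (n * sxx - sx * sx)

    print("tau ~", tail_exponent())

For a cleaner estimate one would log-bin the histogram, as was done for the figure, but even the raw fit gives a slope close to 2.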
4.3.4 A More General Class of Models

Returning to Flyvbjerg's definition of SOC, what is the medium in the division model? Observe that the process can be understood as embedded in a network whose nodes are the integers, two nodes being linked if one exactly divides the other. The primitive set hence constitutes a subset of this network, which is dynamically modified according to the algorithm's rules. The degree of node n is D(n), and consequently the degree distribution P(k) ∼ k⁻² is scale-free. Hence the SOC behavior, which arises due to the divisibility properties of the integers, can be understood as a sort of anti-percolation process taking place in this scale-free network. Observe that the division model is a particular case of a more general class of self-organized models: a network with M nodes having two possible states (on/off) with the following dynamics.

(R1) Perturbation: at each time step a node in the state off is randomly chosen and switched on.
(R2) Dissipation: the s neighbors of the perturbed node that were in the state on at that time step are switched off, and we say that an instantaneous avalanche of size s has taken place.

N(t) measures the number of nodes in the state on as a function of time. Its evolution follows a mean-field equation which generalizes (4.18):
N(t+1) = N(t) + 1 − (⟨k⟩/M) N(t)    (4.25)

where ⟨k⟩ is the network's mean degree. Accordingly, in every case N(t) will self-organize around an average value N_c(M). Within regular or random networks, fluctuations (avalanches) around N_c(M) will follow a binomial or Poisson distribution, respectively. However, when the network is scale-free with degree distribution P(k) ∼ k^{−γ}, fluctuations will follow a power-law distribution P(s) ∼ s^{−τ} with τ = γ, and the dynamics will consequently be SOC. In this sense, we claim that scale-free topology induces criticality.
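The generalized model is only a few lines of code. The sketch below (an illustrative Python implementation of rules R1–R2; networkx is an assumed dependency and the graph sizes are example values) runs the dynamics on an arbitrary graph and records the avalanche sizes, whose distribution should be narrow on random graphs and power-law on scale-free ones:

    import random
    import networkx as nx  # assumed dependency for graph generation

    def run_soc(G, steps=50000, seed=0):
        # (R1): switch a random 'off' node on; (R2): switch off all of its
        # 'on' neighbors; the number switched off is the avalanche size s.
        rng = random.Random(seed)
        nodes = list(G.nodes)
        on = set()
        avalanches = []
        for _ in range(steps):
            n = rng.choice(nodes)
            while n in on:                    # (R1): pick an 'off' node
                n = rng.choice(nodes)
            burned = [m for m in G[n] if m in on]
            for m in burned:                  # (R2): dissipation
                on.remove(m)
            on.add(n)
            avalanches.append(len(burned))
        return avalanches

    # Scale-free (Barabasi-Albert) versus Erdos-Renyi topology:
    s_sf = run_soc(nx.barabasi_albert_graph(2000, 2))
    s_er = run_soc(nx.erdos_renyi_graph(2000, 0.002))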
4.3.5 Open Problems and Remarks
Some questions concerning this new mechanism can be posed. (i) What is the relation between the specific topology of scale-free networks and the power spectra of the system's dynamics? (ii) Which physical or natural systems exhibit this behavior? With regard to the division model, the bridge between statistical physics and number theory should also be investigated in depth. This includes possible generalizations of the model to other related sets, such as (i) k-primitive sets [91], where every number divides or is divided by at least k others (k acting as a threshold parameter), (ii) relatively primitive sets [92] and (iii) cross-primitive sets [87] (which would introduce coupled SOC models). From the computational viewpoint [58], the properties of the model as a primitive set generator should also be studied. Of special interest is the task of determining the maximal size of a k-primitive set [87, 91], something that can be studied within the division model by extreme-value theory [76].
4.4 Conclusions
The extreme specialization of mathematicians and physicists, along with their distinct jargon and expository styles, keeps the two fields apart. Physicists find it extremely tedious to work through the arid exposition of mathematical papers, while mathematicians feel confused – and sometimes shocked – by the less rigorous and more heuristic style of physics papers. Nevertheless, the two communities have recently been moving closer together. The term experimental mathematics has been coined, gathering researchers who address mathematical problems through computer simulations, non-exact analytical developments, and so on. Physicists, for their part, look at mathematics as if it were the ultimate physical system, and use their intuition in this playground. Within statistical mechanics, the numbers can be understood as a less redundant complex system, where emergent behavior is likely to take place according to the number-theoretical properties of the system. The preceding sections are just some examples of this.
References

1 Bell, E.T., Mathematics: Queen and Servant of Science, The Mathematical Association of America, 1996 (first edition 1951).
2 Bos, H.J.M., Lectures in the History of Mathematics, AMS and London Mathematical Society, 1993.
3 Julia, B.L., Statistical Theory of Numbers: From Number Theory and Physics, Springer-Verlag, 1990.
4 Berry, M.V., Riemann zeta function: A model for quantum chaos? In Quantum Chaos and Statistical Nuclear Physics, Selegman, T.H. and Nishioka, H., eds., Springer-Verlag, Berlin-New York, 1986.
5 Berry, M.V. and Keating, J.P., The Riemann zeta-zeros and eigenvalue asymptotics, SIAM Review 41:236–266, 1999.
6 Bogomolny, E., Riemann zeta functions and quantum chaos, Progress in Theoretical Physics Supplement 166:19–44, 2007.
7 Wolf, M., Multifractality of prime numbers, Physica A 160:236–266, 1989.
8 Mertens, S., Phase transition in the number partitioning problem, Phys. Rev. Lett. 81:4281–4284, 1998.
9 Watkins, M.R., Number Theory and Physics archive, http://secamlocal.ex.ac.uk/~mwatkins/zeta/physics.htm, retrieved 2009.
10 Zagier, D., The first 50 million primes, Mathematical Intelligencer 0:7–19, 1977.
11 Dickson, L.E., History of the Theory of Numbers, Volume I: Divisibility and Primality, Dover Publications, New York, 2005.
12 Liboff, R.L. and Wong, M., Quasi-chaotic property of the prime-number sequence, Inter. J. of Theor. Phys. 37:3109–3117, 1998.
13 Kriecherbauer, T., Marklof, J. and Soshnikov, A., Random matrices and quantum chaos, Proc. Natl. Acad. Sci. USA 98:10531–10532, 2001.
14 Bonanno, C. and Mega, M.S., Toward a dynamical model for prime numbers, Chaos, Solitons & Fractals 20:107–118, 2004.
15 Szpiro, G.G., The gaps between the gaps: some patterns in the prime number sequence, Physica A 341:607–617, 2004.
16 Tenenbaum, G. and France, M.M., The Prime Numbers and Their Distribution, American Mathematical Society, 2000.
17 Stein, M.L., Ulam, S.M. and Wells, M.B., A visual display of some properties of the distribution of primes, The American Mathematical Monthly 71:516–520, 1964.
18 Green, B. and Tao, T., The primes contain arbitrarily long arithmetic progressions, Ann. Math. 167:481–547, 2008.
19 Guy, R.K., Unsolved Problems in Number Theory, 3rd ed., Springer, New York, 2004.
20 Ribenboim, P., The Little Book of Bigger Primes, 2nd ed., Springer, New York, 2004.
21 Caldwell, C., The Prime Pages, available at http://primes.utm.edu/, retrieved 2009.
22 Edwards, H.M., Riemann's Zeta Function, Academic Press, New York-London, 1974.
23 Chernoff, P.R., A pseudo zeta function and the distribution of primes, Proc. Natl. Acad. Sci. USA 97:7697–7699, 2000.
24 Hill, T.P., The first-digit phenomenon, Am. Sci. 86:358–363, 1996.
25 Newcomb, S., Note on the frequency of use of the different digits in natural numbers, Amer. J. Math. 4:39–40, 1881.
26 Raimi, R.A., The first digit problem, Amer. Math. Monthly 83:521–538, 1976.
27 Knuth, D., The Art of Computer Programming, Vol. 2: Seminumerical Algorithms, Addison-Wesley, 1997.
28 Nigrini, M.J., Digital Analysis Using Benford's Law, Global Audit Publications, Vancouver, BC, 2000.
29 Nigrini, M.J. and Miller, S.J., Benford's law applied to hydrological data: results and relevance to other geophysical data, Mathematical Geology 39:469–490, 2007.
30 Mebane Jr., W.R., Detecting attempted election theft: vote counts, voting machines and Benford's law, Annual Meeting of the Midwest Political Science Association, April 20–23 2006, Palmer House, Chicago. Available at http://macht.arts.cornell.edu/wrm1/mw06.pdf, retrieved 2009.
31 Alvarez, R.M., Hall, T.E. and Hyde, S.D. (eds.), Election Fraud: Detecting and Deterring Electoral Manipulation, Brookings Institution Press, 2008.
32 Battersby, S., Statistics hint at fraud in Iranian election, New Scientist 2714, 24 June 2009.
33 Hill, T.P., Base-invariance implies Benford's law, Proc. Am. Math. Soc. 123:887–895, 1995.
34 Pinkham, R.S., On the distribution of first significant digits, Ann. Math. Statistics 32:1223–1230, 1961.
35 Miller, S.J. and Takloo-Bighash, R., An Invitation to Modern Number Theory, Princeton University Press, Princeton, NJ, 2006.
36 Hill, T.P., A statistical derivation of the significant-digit law, Statistical Science 10:354–363, 1995.
37 Pietronero, L., Tossati, E., Tossati, V. and Vespignani, A., Explaining the uneven distribution of numbers in nature: the laws of Benford and Zipf, Physica A 293:297–304, 2001.
38 Luque, B. and Lacasa, L., The first digit frequencies of primes and Riemann zeta zeros tend to uniformity following a size-dependent generalized Benford's law, arXiv:0811.3302v1 [math.NT], 2008.
39 Benford, F., The law of anomalous numbers, Proc. Amer. Philos. Soc. 78:551–572, 1938.
40 Luque, B. and Lacasa, L., The first-digit frequencies of prime numbers and Riemann zeta zeros, Proceedings of the Royal Society A 465:2197–2216, 2009.
41 Diaconis, P., The distribution of leading digits and uniform distribution mod 1, The Annals of Probability 5:72–81, 1977.
42 Cramér, H., Prime numbers and probability, Skand. Mat.-Kongr. 8:107–115, 1935.
43 Leemis, L.M., Schmeiser, W. and Evans, D.L., Survival distributions satisfying Benford's law, The American Statistician 54:236–241, 2000.
44 Riesel, H., Prime Numbers and Computer Methods for Factorization, Progress in Mathematics, Birkhäuser, Boston, 1994.
45 Dittrich, P., Banzhaf, W., Rauhe, H. and Ziegler, J., Macroscopic and microscopic computation in an artificial chemistry, Proceedings of the Second German Workshop on Artificial Life, 19–22, 1998.
46 Dittrich, P., Ziegler, J. and Banzhaf, W., Artificial chemistries – a review, Artificial Life 7:225–275, 2001.
47 Lacasa, L., Luque, B. and Miramontes, O., Phase transition and computational complexity in a stochastic prime number generator, arXiv:0712.3137v1 [cs.CC], 2007.
48 Luque, B., Lacasa, L. and Miramontes, O., Phase transition in a stochastic prime number generator, Phys. Rev. E 76, 010103(R), 2007.
49 This is something quite intuitive if we take into account that there are typically N/2 multiples of two, N/3 multiples of three and so on: the probability that the prime x appears in a random composite number is on average 1/x.
50 Schroeder, M.R., Number Theory in Science and Communication, Springer, Berlin, 1997.
51 Corso, G., Families and clustering in a natural numbers network, Phys. Rev. E 69, 036106, 2004.
52 Achter, J.D., Comment on 'Families and clustering in a natural numbers network', Phys. Rev. E 70, 058103, 2004.
53 Zhou, T., Wang, B.H., Hui, P.M. and Chan, K.P., Topological properties of integer networks, Physica A 367:613–618, 2006.
54 Albert, R. and Barabási, A.L., Statistical mechanics of complex networks, Rev. Mod. Phys. 74:47–97, 2002.
55 Kauffman, S.A., The Origins of Order, Oxford University Press, 1993.
56 Jain, S. and Krishna, S., Autocatalytic sets and the growth of complexity in an evolutionary model, Phys. Rev. Lett. 81:5684–5687, 1998.
57 Gould, H. and Tobochnik, J., An Introduction to Computer Simulation Methods, Addison-Wesley, 1996.
58 Percus, A., Istrate, G. and Moore, C. (eds.), Computational Complexity and Statistical Physics, Oxford University Press, 2006.
59 Erdős, P. and Rényi, A., On random graphs I, Publ. Math. Debrecen 6:290–297, 1959.
60 Cheeseman, P., Kanefsky, B. and Taylor, W.M., Computational complexity and phase transitions, Workshop on Physics and Computation (PhysComp '92), ISBN 0-8186-3420-0, IEEE, 1992.
61 Mertens, S., Computational complexity for physicists, arXiv:cond-mat/0012185.
62 Kirkpatrick, S. and Selman, B., Critical behavior in the satisfiability of random Boolean expressions, Science 264:1297–1301, 1994.
63 Monasson, R., Zecchina, R., Kirkpatrick, S., Selman, B. and Troyansky, L., Computational complexity from 'characteristic' phase transitions, Nature 400:133–137, 1999.
64 Hogg, T., Huberman, B.A. and Williams, C.P., Phase transitions and the search problem, Artificial Intelligence 81:1–15, 1996.
65 Istrate, G., Computational complexity and phase transitions, 15th Annual IEEE Conference on Computational Complexity, 2000.
66 Mertens, S., Phase transition in the number partitioning problem, Phys. Rev. Lett. 81:4281–4284, 1998.
67 Biroli, G., Cocco, S. and Monasson, R., Phase transitions and complexity in computer science: An overview of the statistical physics approach to the random satisfiability problem, Physica A 306:381–394, 2002.
68 Mézard, M., Parisi, G. and Virasoro, M.A., Spin Glass Theory and Beyond, World Scientific, Singapore, 1987.
69 Agrawal, M., Kayal, N. and Saxena, N., PRIMES is in P, Annals of Mathematics 160:781–793, 2004.
70 Bollobás, B., Modern Graph Theory, Springer, Berlin, 2002.
71 Istrail, S., Statistical mechanics, three-dimensionality and NP-completeness, Proceedings of the 32nd ACM Symposium on the Theory of Computing, ACM Press, 87–96, 2000.
72 Pratt, V.R., Every prime has a succinct certificate, SIAM J. Comput. 4:214–220, 1975.
73 Bak, P., Tang, C. and Wiesenfeld, K., Self-organized criticality: An explanation of the 1/f noise, Phys. Rev. Lett. 59:381–384, 1987.
74 Bak, P., Tang, C. and Wiesenfeld, K., Self-organized criticality, Phys. Rev. A 38:364–374, 1988.
75 Jensen, H.J., Self-Organized Criticality: Emergent Complex Behavior in Physical and Biological Systems, Cambridge University Press, 1998.
76 Sornette, D., Critical Phenomena in Natural Sciences: Chaos, Fractals, Selforganization and Disorder, Concepts and Tools, Springer Series in Synergetics, Berlin, 2004.
77 Turcotte, D.L., Self-organised criticality, Rep. Prog. Phys. 62:1377–1429, 1999.
78 Flyvbjerg, H., Simplest possible self-organized critical system, Phys. Rev. Lett. 76:940–943, 1996.
79 Flyvbjerg, H., Self-organized critical pinball machine, Physica A 340:552–558, 2004.
80 Newman, M.E.J., The structure and function of complex networks, SIAM Rev. 45:167–256, 2003.
81 Albert, R. and Barabási, A.L., Statistical mechanics of complex networks, Rev. Mod. Phys. 74:47–97, 2002.
82 Goh, K.I., Lee, D.S., Kahng, B. and Kim, D., Sandpile on scale-free networks, Phys. Rev. Lett. 91, 148701, 2003.
83 Fronczak, P., Fronczak, A. and Holyst, J.A., Self-organized criticality and coevolution of network structure and dynamics, Phys. Rev. E 73, 046117, 2006.
84 Bianconi, G. and Marsili, M., Clogging and self-organized criticality in complex networks, Phys. Rev. E 70, 035105(R), 2004.
85 Garlaschelli, D., Capocci, A. and Caldarelli, G., Self-organized network evolution coupled to extremal dynamics, Nature Phys. 3:813–817, 2007.
86 Erdős, P., On the density of some sequences of numbers, J. Lond. Math. Soc. 10:128–136, 1935.
87 Ahlswede, R. and Khachatrian, L.H., in The Mathematics of Paul Erdős, Vol. I: Algorithms and Combinatorics, Springer-Verlag, New York-Berlin, pp. 104–116, 1997.
88 Luque, B., Miramontes, O. and Lacasa, L., Number theoretic example of scale-free topology inducing self-organized criticality, Phys. Rev. Lett. 101, 158702, 2008.
89 Lacasa, L., Luque, B. and Miramontes, O., Phase transition and computational complexity in a prime number generator, New Journal of Physics 10, 023009, 2008.
90 Schroeder, M.R., Number Theory in Science and Communication, Springer, Berlin, Heidelberg, 1997.
91 Vijay, S., On the largest k-primitive subset of [1,n], Integers: Electronic Journal of Combinatorial Number Theory 6, A01, 2006.
92 Nathanson, M.B., Affine invariants, relatively prime sets, and a phi function for subsets of {1,2,...,n}, Integers: Electronic Journal of Combinatorial Number Theory 7, A01:1–7, 2007.
5 Wave Localization Transitions in Complex Systems

Jan W. Kantelhardt, Lukas Jahnke, and Richard Berkovits
5.1 Introduction
Phase transitions between localized and extended modes remain in the focus of research on disordered systems even though half a century has passed since the localization phenomenon was first reported in the context of electron transport through disordered metals [1, 2]. Such transitions have also been suggested and verified in many physically very different systems, such as light in strongly scattering media [3, 4] or photonic crystals [5, 6], acoustical vibrations in glasses [7] or percolation systems [8] and, very recently, atomic Bose–Einstein condensates in an aperiodic optical lattice [9, 10]. The main idea is that a phase transition from extended to localized eigenstates exists as a function of the disorder in the system. This Anderson transition will be manifested by a change in the transport through the system, e.g. from metallic to insulating or from transparent to reflecting. The transition is conceptually different from the canonical explanation for the existence of insulators (i.e. the Fermi energy is in the gap between two energy bands) since, for an Anderson insulator, there are many available states at the Fermi level. Nevertheless, since these eigenstates are localized due to the disorder, for electronic systems no current can pass through. In the case of vibrational excitations, propagating phonon modes are replaced by localized excitations, which reduces heat conduction through the material. Localization of light is correspondingly defined, but most difficult to observe experimentally. The reason is that it is hard to distinguish systems with localized light modes from those with strong light absorption, since both effects lead to an exponential decay [3, 4]. This is a motivation for the study of light localization in artificially designed optical systems.
Another relevant phase transition occurring in disordered systems is the percolation transition. Although both localization–delocalization transitions (Anderson transitions) and percolation transitions are caused by the disorder of the considered system, they have different origins and can be clearly distinguished. The percolation transition is a purely geometrical transition. When more and more sites become unoccupied or bonds are broken (i.e. the occupation probability is reduced), the large percolation cluster breaks into pieces and direct paths between two edges of the sample disappear. Transport is thus interrupted below the critical percolation threshold, since there are no more paths available. In the Anderson transition, on the other hand, such paths do exist. Nevertheless, the probability of waves traversing the sample is exponentially small due to constructive interference between time-reversed paths and destructive interference between paths with an identical beginning and end. Thus, the Anderson transition takes into account the wave (or quantum) nature of the modes traveling through the sample, while the percolation transition is purely classical in nature.

Since the Anderson transition depends on interference effects between time-reversed paths, there is an interesting dependence on the dimensionality of the system. The lower critical dimension, at and below which the system is localized for all strengths of disorder, is believed to be two [11]. The reason is that the probability of returning to the origin stays nonzero in the limit of infinite system size for d ≤ 2. The upper critical dimension, with extended modes remaining for any strength of disorder, is still uncertain, although it is generally believed to be infinity [12–16].

In order to elucidate the causes of the dimensionality dependence of the Anderson transition, studies considering more complex topologies have recently been undertaken. The reason is that several properties of the system (like the probability of short or long paths returning to the origin) can be modified systematically in such systems, in addition to normal on-site or bond disorder. Recently, Anderson transitions in small-world networks [17–19], Cayley trees [20], random regular graphs, Erdős–Rényi graphs and scale-free networks [21, 22] have been studied. In addition to justifying this interest by the light shed on the general properties of the transition, we would like to emphasize that complex networks could actually be realized in real-world situations such as optical networks (see below).

The chapter is organized as follows. We begin with a description of complex networks in Section 5.2, focusing particularly on scale-free
networks with and without clustering and with and without additional percolation. Our considered models for transport on the systems, i.e. electronic wave functions with and without magnetic field, vibrational modes, and coherent optical modes, are given in Section 5.3. Then we describe how one can determine the properties of an Anderson transition based on the statistical properties of the eigenvalues (levels) of the Hamiltonians or dynamical matrices using level statistics and finite-size scaling in Section 5.4. Results for the Anderson transition in different complex networks are finally reported in Section 5.5 with particular emphasis on effects of clustering. We show that new complex topologies lead to novel physics; specifically, increasing clustering may lead to localization even without additional disorder. We summarize and conclude in Section 5.6.

5.2 Complex Networks
Complex networks can be found in a variety of dynamical systems, such as protein folding, food webs, social contacts, phone calls, collaboration networks in science and business, the world wide web (WWW) and the Internet. In a network, one typically refers to nodes which are linked (connected) through edges. For example, in social networks the nodes are humans and the edges are different types of social connections. In the Internet, the nodes are hubs and servers, while the edges are connections between them. The number of edges a node participates in is its degree k. The structure of disordered physical materials can also be described in terms of a network. Prominent examples are the SiO2 network in a glass or the structures of amorphous materials. Here, the fluctuating value of k represents the number of bonds (edges) for each of the atoms (nodes). The main distinction between a general complex network and structural networks in materials is the absence of long-range links in materials. In optical systems, coherent propagation of light over long distances can be realized with optical fibers (see Section 5.3.3).
5.2.1 Scale-Free and Small-World Networks
In general it is possible to group complex networks using only a few characteristic properties, although the networks can be found in very different applications. A very important characteristic is the distribution of degrees, P(k). In a random graph with N nodes, each node is connected to exactly k0 random neighbors [23]; thus P(k) = 1 for k = k0 and zero otherwise. In Erdős–Rényi networks, where N nodes are connected randomly with a probability p, the degree distribution is given by a Poisson distribution peaking at k0 = ⟨k⟩ = p(N − 1) [23, 24],

P(k) = k0^k e^{−k0} / k!    (5.1)

For another important class of networks, so-called scale-free networks, one finds [25]

P(k) = a k^{−λ}  for m ≤ k ≤ K, with λ > 2    (5.2)
where a = (λ − 1) m^{λ−1}. The upper cut-off value K = m N^{1/(λ−1)} depends on the system size N [26]. Such networks are called scale-free because the second moment, ⟨k²⟩, diverges in the limit of infinite system size (N → ∞). Hence, no intrinsic scale k0 can be defined. Scale-free networks can be found in natural, sociological and technological networks. For example, the Internet is characterized by λ ≈ 2.6 [27].

Figure 5.1 shows representative simulated scale-free networks with λ = 5. Note that sometimes not all nodes are linked to the largest cluster (giant component): finite clusters also appear, even if no edges are cut. However, the degree distribution P(k) is practically identical for the giant component and the whole network, as shown in Figure 5.2(a) for scale-free networks with λ = 4. We note that a localization–delocalization transition is well defined only on the giant component, since the other clusters do not grow with system size; only the giant component becomes infinite for infinite system size. For the finite clusters, a localized state with a localization length larger than the cluster size cannot be distinguished from an extended state.
Figure 5.1 Representative pictures of scale-free networks (degree distribution exponent λ = 5) (a) without and (b,c) with clustering (C0 = 0.4, 0.6). All three networks have the same size of N = 250. The giant component has a size of (a) Ng = 250, (b) Ng = 203, and (c) Ng = 145. The actual global clustering coefficient is (a) C = 8 × 10⁻⁴, (b) C = 0.34, and (c) C = 0.53. The global clustering coefficient of the giant component is (b) Cg = 0.10 and (c) Cg = 0.17, since there are several small clusters with larger C. The logarithmically scaled coloring presents the intensity of an optical mode with E ≈ 0.45, red indicating the highest, yellow intermediate, and blue the lowest intensities. See also color figure on page 235.
Another important network property is the average path length ⟨l⟩. The path length l_{n,m} between nodes n and m is defined as the number of nodes along the shortest path between them. Another way to define the topological size of a network is the diameter d, which is the maximal distance between any pair of nodes. For random graphs and Erdős–Rényi graphs the average path length ⟨l⟩ and the diameter d can be calculated analytically and are given by the logarithm of the system size, ⟨l⟩ ∝ d ∝ ln N [23]. The average path length and the diameter of scale-free networks depend on the exponent λ. For λ > 3 one finds the same results as in the Erdős–Rényi case, whereas for λ < 3, d ∝ ln ln N [28], and for λ = 3, d ∝ ln N / ln ln N [29].
Figure 5.2 (a) Degree distribution P(k) and (b) degree-dependent clustering coefficients C̄(k) for scale-free networks with λ = 4 in (5.2), C0 = 0.65, α = 1 according to (5.5), and N = 15 000 nodes, averaged over 120 configurations. Circles with lines: distributions for the whole network; grey squares: the giant component (with N1 = 11 906 nodes; shifted vertically by a factor of 2 for the degree distribution). (Figure redrawn after [22]).
Since the average distance between the nodes is very small, such complex networks are referred to as small-world objects. Moreover, scale-free networks with λ < 3 are referred to as ultra-small-world objects. Examples of real complex networks are listed in Table 5.1. All presented networks are small-world objects with an average path length around three. This average path length is similar for equivalent random networks. However, lattices and other structures without long-range edges do not have short path lengths. For a d-dimensional hypercubic lattice the average path length scales as N^{1/d}, i.e. it increases much faster with N.

Table 5.1 Characteristic quantities of real-world networks: size of considered (sub-)network N, average degree ⟨k⟩, average path length ⟨l⟩, average path length of an equivalent random network l_rand, clustering coefficient C, and clustering coefficient of an equivalent random network C_rand. (Data taken from [23]).

Network                   | N          | ⟨k⟩      | ⟨l⟩   | l_rand   | C          | C_rand
WWW, site level           | 153 127    | 35.21    | 3.1   | 3.35     | 0.1078     | 0.00023
Internet, domain level    | 3015–6209  | 3.5–4.1  | 3.7   | 6.4–6.2  | 0.18–0.3   | 0.001
Movie actor               | 225 226    | 61       | 3.65  | 2.99     | 0.79       | 0.00027
Co-authorship             | 56 627     | 173      | 4.0   | 2.12     | 0.726      | 0.003
Co-occurrence of words    | 460 902    | 70.13    | 2.67  | 3.03     | 0.437      | 0.0001
5.2.2 Clustering
Another property observed in real networks is clustering. In particular, social networks tend to contain cliques. These are circles of friends or acquaintances where every member knows every other member. Such behavior can be quantified by the clustering coefficient C. If the neighbors of a node n with k_n edges are grouped in a clique where everybody is connected with each other, this clique has a total of k_n(k_n − 1)/2 edges. The clustering coefficient can then be defined as the ratio of the actual number of edges between these nodes, T_n, and the maximum number of possible edges [30],

C_n = 2 T_n / [k_n (k_n − 1)]    (5.3)
Here T_n is also the number of triangles passing through vertex n. Since each triangle represents a very short loop in the network, waves in networks with high clustering will have a high probability of returning to the same node and interfering destructively. Such interferences are the main reason for quantum localization. One may thus expect that strong clustering in complex networks can induce localization. Although clustering is by definition a local topological quantity, it is possible to define a global clustering coefficient C by averaging over C_n. Such a global approach cannot take into account special properties of a network. For instance, different degree-degree correlations, also known as assortativity, can lead to equal C although the topological structures of the networks are fundamentally different [31–34]. A practical ansatz is a degree-dependent clustering coefficient [35, 36],

C̄(k) = (1/N_k) ∑_{n∈Γ(k)} C_n    (5.4)
where N_k is the number of vertices of degree k and Γ(k) is the set of such vertices. It is not possible to achieve all functional dependences of C̄(k) on k. To achieve high clustering for the higher degrees k, the assortativity of the network has to be strong. When nodes with large degrees are connected to nodes with lower degrees, they cannot achieve high clustering because the nodes with lower degrees do not have enough connections to participate in a large number of triangles.
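The definitions (5.3) and (5.4) translate directly into code. The following minimal Python sketch (networkx is an assumed dependency and the function names are ours) computes C_n by counting edges among neighbors and averages over all vertices of equal degree; networkx's built-in nx.clustering can serve as a cross-check:

    from collections import defaultdict
    import networkx as nx  # assumed dependency

    def local_clustering(G, n):
        # C_n = 2 T_n / (k_n (k_n - 1)), equation (5.3)
        nbrs = list(G[n])
        k = len(nbrs)
        if k < 2:
            return 0.0
        # T_n: number of edges among the neighbors (= triangles through n)
        T = sum(1 for i in range(k) for j in range(i + 1, k)
                if G.has_edge(nbrs[i], nbrs[j]))
        return 2.0 * T / (k * (k - 1))

    def degree_dependent_clustering(G):
        # C(k) = (1/N_k) sum of C_n over nodes of degree k, equation (5.4)
        acc = defaultdict(list)
        for n in G:
            acc[G.degree(n)].append(local_clustering(G, n))
        return {k: sum(v) / len(v) for k, v in acc.items()}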
Depending on the strength of the degree-degree correlation one finds an upper limit which can be approximated by

C̄(k) = C0 (k − 1)^{−α}    (5.5)
with α between 1 for no and 0 for high assortativity. Although assortativity is definitely an interesting subject, here we focus on the effect of clustering. To prevent interference between the two, we keep the degree-degree correlation as low as possible, restricting ourselves to α = 1. The measured clustering coefficients of real networks reported in Table 5.1 are much higher than those of the equivalent random networks, where almost no cliques emerge. This is because, in a random network, the probability of a connection between two neighbors of a node equals that between any two randomly chosen nodes [23],

C_rand = ⟨k⟩/N    (5.6)
decaying to zero for large N. The larger C values in Table 5.1 indicate the presence of many loops on short length scales [23, 37, 38]. Furthermore, C is independent of the network size N. This observation bears similarity to a lattice, where clustering is also size independent, depending only on the coordination number.

5.2.3 Percolation on Networks
Percolation is a standard model for disordered systems. Its applications range from transport in amorphous and porous media and composites to the properties of branched polymers, gels and complex ionic conductors. Because of universality the results do not depend on the specific model, and general scaling laws can be deduced. For site percolation on a lattice, each site is occupied randomly with probability p (or it is empty with probability q = 1 − p). For a small p all clusters of neighboring occupied sites are finite and transport through the system is impossible. A so-called infinite cluster emerges if p exceeds the critical concentration pc , where the system undergoes a geometrical phase transition. For p ≥ pc transport through the system is possible and extended modes can occur on the infinite cluster. Besides site percolation, other variants exist, e.g. percolation in continuum space or bond percolation. Similar to lattices, percolation can be defined on networks. The occupation probability p is equivalent to the probability to connect an
edge or to introduce a node. The critical occupation probability pc of a network depends on its degree distribution. A randomly connected network with arbitrary degree distribution has an infinite cluster if [26, 39, 40]

⟨k²⟩/⟨k⟩ > 2    (5.7)
Random removal of nodes and edges with probability q = 1 − p changes the degree distribution, reducing ⟨k²⟩/⟨k⟩. The critical probability pc is reached when ⟨k²⟩/⟨k⟩ = 2. Scale-free networks are particularly interesting, since it is nearly impossible to break them, i.e. pc > 0.99 is found for exponents 2 < λ < 3 [26, 40, 41]. Equation (5.7) is only true for uncorrelated networks with zero clustering. The influence of degree-degree correlations on the stability of networks, however, has not been investigated in much detail. For uncorrelated networks with clustering coefficient C0, defined in (5.5), an infinite cluster exists if [32]

⟨k²⟩/⟨k⟩ > 2 + C0    (5.8)
Therefore, clustering shifts the percolation threshold, such that the network breaks up earlier, i.e. at occupation probabilities p at which an unclustered network would still percolate. Recently it was found that the network can also be broken by a mere change of the clustering index C without tampering with the degree distribution, i.e. without removing nodes or bonds [22].
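The Molloy–Reed-type criterion (5.7) is straightforward to evaluate numerically. The sketch below (illustrative Python; the dilution loop is our own minimal construction and networkx is an assumed dependency) removes nodes at random and reports the fraction of surviving nodes at which ⟨k²⟩/⟨k⟩ first drops below 2, a rough estimate of pc:

    import random
    import networkx as nx  # assumed dependency

    def molloy_reed(G):
        # <k^2>/<k>; criterion (5.7): > 2 for a giant cluster to exist
        degs = [d for _, d in G.degree()]
        return sum(d * d for d in degs) / max(sum(degs), 1)

    def dilute_until_broken(G, seed=0):
        # Remove nodes at random; return the surviving fraction p at
        # which <k^2>/<k> first falls below 2 (estimate of p_c).
        rng = random.Random(seed)
        G = G.copy()
        N0 = G.number_of_nodes()
        order = list(G.nodes)
        rng.shuffle(order)
        for n in order:
            G.remove_node(n)
            if molloy_reed(G) <= 2:
                return G.number_of_nodes() / N0
        return 0.0

    p_c = dilute_until_broken(nx.barabasi_albert_graph(5000, 2))

For the scale-free example graph, p_c comes out very small, in line with the robustness of scale-free networks discussed above.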
5.2.4 Simulation of Complex Networks

The simplest way to create networks with high clustering but small ⟨l⟩ was proposed by Watts and Strogatz in 1998 [30]. In this model one starts with an ordered ring lattice where each node is connected to the K nodes closest to it. In this ordered case, the clustering coefficient is

C = 3(K − 2) / [4(K − 1)]    (5.9)
converging to 3/4 in the limit of large K. In the second step, the edges are rewired with a probability pr without allowing self-connections and duplicate edges. Therefore, pr NK/2 long-range edges are introduced, connecting to other neighborhoods. Using this model, it is possible to numerically calculate the dependence of clustering C ( pr ) and average
path length ⟨l⟩(p_r) on the rewiring probability p_r. The transition to a small-world object is rather fast, with a nearly constant clustering coefficient. As a result, the small-world phenomenon is almost undetectable at the local level. A coexistence of small-world characteristics and high clustering is thus possible, as observed in real-world networks (see Table 5.1). However, the Watts–Strogatz model cannot generate scale-free networks, since the degree distribution P(k) decays exponentially.

In the pioneering work of Barabási and Albert [42] it is argued that real networks have two generic mechanisms missing in random graphs. First, real networks grow from a small number of nodes by including new nodes and edges, e.g. the WWW is growing exponentially by new links and additional pages, and co-authorship connections in scientific publications are growing steadily by new publications. Second, nodes are not connected independently of their degree, i.e. nodes with high degree are more likely to be linked with new nodes. For example, well-known web pages, having a large number of links, will be linked more often than unknown ones. In the same way, papers with a large number of citations are more likely to be cited again. This phenomenon is known as preferential attachment in network science. Both ingredients, growth and preferential attachment, lead to a model which dynamically reproduces scale-free degree distributions [42]. However, the Barabási–Albert model fails to reproduce the strong clustering observed in real complex networks (see Table 5.1).

Therefore, we have used an algorithm suggested recently by Serrano and Boguñá [36] which can generate complex small-world networks with predefined degree distribution P(k) and predefined degree-dependent clustering coefficients C̄(k). Figure 5.1 shows three representative pictures of the resulting scale-free networks. They all have the same number N of nodes; C0 increases from (a) to (c). One can clearly see that an increasing number of nodes disintegrate from the giant component for higher clustering. The reason is that for nodes with low degree it is easier to achieve higher clustering in separate small clusters. For example, nodes with a degree of two can achieve the highest clustering in triangles. Figure 5.2(a) confirms that the algorithm can achieve practically identical scale-free degree distributions P(k) for the whole network as well as the giant component. The degree-dependent clustering index C̄(k) for the whole network shown in Figure 5.2(b) also follows very closely (5.5) with α = 1. However, the values of C̄(k) for the giant component are drastically smaller by approximately a factor between two and
three, leading to a reduced global clustering index Cg of the giant component compared with the global clustering index C of the whole network. It is a priori not clear whether C or Cg is the better order parameter for studying phase transitions upon increasing clustering. The ambiguity can be resolved by analyzing the dependence of C and Cg on C0. An illustrative result is shown in Figure 5.3. Both C and Cg depend linearly on C0 if C0 ≤ 0.9. We also found no system-size dependence, which is important since we want to use a finite-size scaling formalism to analyze the phase transition. The value of C0 fed into the network generation algorithm can thus be used as an order parameter for studying transitions upon clustering. However, values of C0 ≥ 0.85 should be avoided since the algorithm fails to achieve such high clustering.
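For readers who want to reproduce the qualitative behavior of the Watts–Strogatz construction described above, the following short Python sketch (networkx assumed; N, K and the p_r values are arbitrary example choices) generates rewired ring lattices and tracks how clustering and average path length depend on the rewiring probability:

    import networkx as nx  # assumed dependency

    # Watts-Strogatz ring lattice (N nodes, K nearest neighbors), rewired
    # with probability p_r; C(p_r) drops much later than l(p_r).
    N, K = 1000, 10
    for p_r in [0.0, 0.001, 0.01, 0.1, 1.0]:
        G = nx.connected_watts_strogatz_graph(N, K, p_r, seed=1)
        C = nx.average_clustering(G)
        l = nx.average_shortest_path_length(G)
        print(f"p_r={p_r:<6} C={C:.3f}  l={l:.2f}")

Already for p_r of order 10⁻² the path length collapses to small-world values while the clustering stays close to its ordered value (5.9).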
Figure 5.3 Test of the relation C = C0 Δ(λ) (line) for scale-free networks with λ = 4 and m = 2. The circles (squares) are results for the full network (giant component). The different gray scales represent different system sizes from N = 2000 (black) to N = 20 000 (white). The dashed (dash-dotted) line is a linear fit for the full network (giant component). Inset: Results of the fits for Δ(λ) (same symbols). The full line marks the theoretical maximum Δ for the whole network, while the dashed line marks the average Δ = 0.34 for the giant component.
The relation between C and C0 is characterized by a factor Δ depending mainly on λ, i.e. C = C0 Δ(λ) with Δ = ∑_{k=m}^{K} (k − 1)^{−1} P(k), following from (5.5) with α = 1. The theoretically possible maximum of Δ can be calculated exactly using the basic network parameters (m, K, λ, and N) as well as (5.2). As shown in the inset of Figure 5.3, this maximal clustering is very closely reached for a wide range of exponents λ in the degree distribution (compare
circles and full line). For the giant component, the prefactor Δg linking Cg with C0 cannot be calculated analytically. It is significantly smaller than the Δ for C, but apparently rather independent of λ, as shown in the inset of Figure 5.3. We obtained similar, but less reliable, results when generating networks using the algorithm of Volz [43], fixing C instead of C̄(k). The problem with this algorithm is that it is not possible to control the assortativity. Therefore, the assortativity can change when C is altered, making it difficult to decide whether two networks with different C are comparable.
5.3 Models with Localization–Delocalization Transitions
In this section, models for electrons, vibrational modes and optical modes in disordered systems are described. They are all believed to be in the same universality class and could be supplemented by spin waves. The last part deals with electrons in an additional magnetic field, where the universality class is changed from orthogonal to unitary.

5.3.1 Standard Anderson Model and Quantum Percolation
To study localization effects one can consider the standard Anderson Hamiltonian [1],

H = ∑_n ε_n â†_n â_n + ∑_{n,δ} t_{n+δ,n} â†_{n+δ} â_n    (5.10)
with the second sum running over all neighboring pairs of nodes n and n + δ. The operator â†_n (â_n) creates (annihilates) a particle at position R_n. The first part of (5.10) represents an on-site (node) potential and the second part describes the transfer between the linked pairs of nodes (n + δ, n). Disorder can be introduced by parametrization of the potential landscape ε_n (diagonal disorder) or of the hopping-matrix elements t_{n+δ,n} (off-diagonal disorder). The equation is equivalent to the tight-binding equation for the wave-function coefficients ψ_{n,E} and energy eigenvalues E,
E ψ_{n,E} = ε_n ψ_{n,E} + ∑_δ t_{n+δ,n} ψ_{n+δ,E}    (5.11)
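As a concrete illustration, the sketch below (illustrative Python with numpy and networkx as assumed dependencies; not the authors' code) assembles the tight-binding matrix of (5.11) for an arbitrary graph, with uniform on-site disorder of width W as defined in the next paragraph, and diagonalizes it:

    import numpy as np
    import networkx as nx  # assumed dependency

    def anderson_matrix(G, W, t=1.0, seed=0):
        # Tight-binding matrix of (5.11): random on-site energies in
        # (-W/2, W/2) on the diagonal, hopping t on every edge of G.
        rng = np.random.default_rng(seed)
        nodes = {n: i for i, n in enumerate(G.nodes)}
        N = len(nodes)
        H = np.zeros((N, N))
        H[np.diag_indices(N)] = rng.uniform(-W / 2, W / 2, size=N)
        for u, v in G.edges:
            H[nodes[u], nodes[v]] = H[nodes[v], nodes[u]] = t
        return H

    # Example: a 2D lattice mapped onto a graph; eigenvalues/eigenvectors
    G = nx.grid_2d_graph(30, 30, periodic=True)
    E, psi = np.linalg.eigh(anderson_matrix(G, W=5.0))

The same routine works unchanged for complex networks, since only the edge list of G enters.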
Here, we focus on diagonal disorder characterized by a homogeneous uncorrelated distribution of the on-site potentials, −W/2 < ε_n < W/2. For vanishing disorder strength W and an ordered lattice, the energy eigenvalues form energy bands of finite width and the eigenfunctions correspond to periodic Bloch functions spreading over the entire system. Electrons in such extended states are highly mobile and contribute to charge transport. For strong disorder (large W), large fluctuations of the potential energy lead to backscattering and interference effects, and the wave function localizes. It decays roughly exponentially in space, although the shape also depends on the averaging procedure (see [44] for details). Therefore, electrons are bound at impurities and cannot contribute to transport.

Starting from an ordered lattice, the density of states broadens with increasing disorder and localized states appear near the band edges. They are separated from the extended states in the band center by a critical energy ±Ec called the mobility edge [2]. At a critical disorder Wc the mobility edges merge at E = 0 (the band center), and all extended states disappear. At this point a quantum phase transition occurs, known as the metal–insulator transition. For a one-dimensional system Wc = 0 can be derived analytically [2]. For higher dimensions, there are no analytical approaches, and Wc needs to be extracted from numerical simulations. As a result, the properties of two-dimensional systems are discussed somewhat controversially, although extended states are usually believed not to exist (Wc = 0). In three-dimensional systems the phase transition from extended to localized states is observed at Wc ≈ 16.5 [2, 45, 46]. Higher accuracy and a large numerical effort are needed to determine the value of the critical exponent ν characterizing the power-law divergence of the correlation length ξ of the wave functions near the phase transition point, ξ ∼ |W − Wc|^{−ν}. Therefore, the reported values of ν for the standard Anderson model changed as accuracy was improved; the range is from ν = 1.2 ± 0.3 [2] to ν = 1.56 ± 0.02 [46].

In the percolation model, disorder is introduced via the (nondiagonal) hopping terms t_{n+δ,n}, which are 1 only if both neighboring sites n + δ and n are occupied and 0 otherwise. The diagonal terms ε_n are usually set to zero. Therefore, the geometrical structure of the considered percolation cluster determines the localization properties of the wave functions. For p < pc all clusters are finite, and the eigenfunctions are always localized. The same is true at pc due to the fractal structure of the infinite cluster, resembling a highly disordered system. For p > pc
the infinite cluster becomes more and more dense, until an ordered lattice is reached at p = 1 with extended plane waves as eigenfunctions of the tight-binding Hamiltonian. In between, a critical occupation probability pq with pc < pq ≤ 1 must exist, where extended wave functions appear for the first time in some part of the energy spectrum. The threshold pq is known as the quantum percolation threshold.
5.3.2 Vibrational Excitations and Oscillations

To model the propagation of vibrational excitations on networks we assume that equal masses M are placed on each node. Directly connected nodes n and n + δ are coupled by equal (scalar) force constants D_{n,n+δ}. In this case, the components of the displacements decouple, and we obtain the same equation of motion for all excitations u_n(t),

M d²u_n(t)/dt² = ∑_δ D_{n,n+δ} [u_{n+δ}(t) − u_n(t)]    (5.12)
where again the sum runs over all k nodes n + δ having common edges with node n. The standard ansatz u_n(t) = u_{n,ω} exp(−iωt) leads to the corresponding time-independent vibration equation. Setting M = D_{n,n+δ} = 1 (which determines the unit of the frequency) we obtain

ω² u_{n,ω} = ∑_δ (u_{n,ω} − u_{n+δ,ω})    (5.13)
which is an eigenvalue equation and has to be solved numerically by diagonalizing the dynamical matrix. Note that (5.13) is identical to (5.11) if we choose t_{n+δ,n} = −1 for all neighboring nodes and ε_n = −∑_δ t_{n+δ,n} = k_n, where k_n is again the number of edges at node n (so that the dynamical matrix coincides with the graph Laplacian). For illustration, Figure 5.4 shows representative vibrations of percolation clusters with varying occupation probabilities. One can observe a transition from the localized mode at the geometrical phase transition point p = pc towards more extended-looking modes for larger values of p. Clearly, vibrational modes can be localized even if an infinite cluster exists. This is exactly the same as for electronic wave functions in quantum percolation, and electronic and vibrational modes are believed to be in the same universality class. The only difference concerns the dispersion relation. For vibrations, the wavelength diverges in the limit ω → 0, such that modes with very large wavelengths always exist.
Figure 5.4 Vibrational modes of percolation clusters on a square lattice at concentrations (a) p = pc, (b) 0.61, (c) 0.65, (d) 0.75, (e) 0.90, and (f) 0.99. The black points are unoccupied sites and finite clusters. The vibrational amplitudes |u_{n,ω}| of the selected eigenmodes of (5.13) are color coded with red for maximum amplitude, followed by yellow, green and blue down to white for very small amplitudes. A transition from localized behavior at p ≈ pc to extended-looking behavior for large p seems to occur. However, all modes are expected to be localized in the limit of infinite system size according to the single-parameter scaling theory [11], confirmed by numerical studies of level statistics [8]. Lattice size and frequency are 200 × 200 and ω² ≈ 0.01 D/M, respectively. See also color figure on page 236.
By selecting a sufficiently large wavelength, one can always find a mode extending over the whole length of the system even if the occupation probability is arbitrarily close to the geometrical threshold pc. There is thus no equivalent of the quantum percolation threshold pq for vibrations. However, note that the wavelength and the correlation length of the modes are not identical [8]. Figure 5.4 also illustrates that looking at modes (eigenfunctions) may be misleading in studies of localization–delocalization transitions. None of the modes shown there is actually believed to be extended asymptotically (i.e. in the limit of infinite system size). This is a consequence of the universality of vibrational and electronic excitations and the single-parameter scaling theory [2, 11], which excludes extended modes in dimensions d ≤ 2. However, the modes shown in Figure 5.4(e) and (f) do indeed look extended, because their correlation lengths ξ are much larger than the considered lattice size. In particular, for the largest value of p, p = 0.99, where just one percent of the sites are unoccupied, the mode is dominated by the boundary conditions and is not much different from a common vibrational mode of a square plate with an appropriately chosen frequency. The asymptotically localized nature of the vibrational modes on a two-dimensional lattice can be shown numerically by the application of level statistics (see Section 5.4) [8].
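Numerically, solving (5.13) amounts to diagonalizing the graph Laplacian. A minimal numpy/networkx sketch (our own illustration; the lattice size and occupation probability are example values) for vibrations on a site-percolation cluster:

    import numpy as np
    import networkx as nx  # assumed dependency

    def vibrational_modes(G):
        # Dynamical matrix of (5.13) with M = D = 1 is the graph Laplacian;
        # its eigenvalues are omega^2, its eigenvectors the patterns u.
        L = nx.laplacian_matrix(G).toarray().astype(float)
        omega2, u = np.linalg.eigh(L)
        return omega2, u

    # Site percolation on a square lattice at occupation probability 0.75:
    rng = np.random.default_rng(1)
    G = nx.grid_2d_graph(40, 40)
    G.remove_nodes_from([n for n in list(G) if rng.random() > 0.75])
    G = G.subgraph(max(nx.connected_components(G), key=len))  # largest cluster
    omega2, u = vibrational_modes(G)

Plotting |u| for low-frequency eigenvectors at different occupation probabilities reproduces the qualitative picture of Figure 5.4.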
5.3.3 Optical Modes in a Network

A different perspective on the relevance of complex networks to real-world localization arises from considering optical networks. There is a long history of using optical (or microwave) systems as a tool to analyze localization behavior. In 1982 Shapiro [47] generalized a model proposed originally by Anderson et al. [48] to describe localization in disordered systems. Instead of a tight-binding description of a lattice (see (5.10)), Shapiro considered a model in which nodes are represented by beam splitters or cavities and edges by optical fibers or waveguides. From a theoretical point of view, this description is convenient since it yields a description of the system in the scattering-matrix formalism. This approach was taken a step further by Edrei et al. [49], who applied it to the dynamics of wave propagation (for example, acoustic or light waves) in a disordered medium. Such a description can form the basis of a real network built in a laboratory. For example, a network of beam splitters and optical fibers may be constructed and its localization properties studied experimentally.
Since optical fibers have a very low loss rate, there is no essential difference between connecting neighboring nodes or nodes far away from each other. Moreover, since in realistic optical set-ups the length of any optical fiber is much larger than the wavelength of the light, the phase difference gained by a wave as it traverses from one node to the next is always a fixed random number characteristic of this link (edge). Any form of complex network (Cayley tree, random regular graph, Erdős–Rényi graph or scale-free network) can thus be constructed and studied on an optical bench. Although this seems experimentally feasible for small networks, to the best of our knowledge it has not been done. Furthermore, transitions in the transport properties of coherent waves on complex networks with long-range links might become relevant to typical real-world communication networks [27, 35]. Alternatively, one might consider a network of waveguides on the nanoscale similar to photonic lattices [5, 6]. Similar to the analogy between electronic and vibrational systems, the analogy between electronic and optical systems is quite general. The scalar wave equation is a good approximation for the propagation of an optical wave in an inhomogeneous medium as long as polarization effects are not important. It may be written as

∇²Ψ − (ε(r)/c²) ∂²Ψ/∂t² = 0    (5.14)
where ε(r) = 1 + δε(r) describes the local fluctuations of the dielectric constant and c is the speed of light. Assuming a monochromatic wave, one may write Ψ(r, t) = ψ(r) exp(iωt), where ω is the frequency. Inserting Ψ(r, t) into (5.14) yields
−∇²ψ − (ω/c)² δε(r) ψ = (ω/c)² ψ    (5.15)
When this is compared with the stationary Schrödinger equation with a varying potential U (r ) = U0 + δU (r ),
−∇²ψ + (2m/ℏ²) δU(r) ψ = (2m/ℏ²) (E − U0) ψ    (5.16)
it can be seen that the Schrödinger equation and the scalar wave equation in random media are nearly identical up to constants. Thus, one may use techniques developed in the field of electronic localization in order to study the properties of optical networks. Specifically for optical waves in a complex network of beam splitters and fibers, one can employ (5.11) with the coefficients tn+δ,n =
exp(iϕ_{n+δ,n}) for connected nodes and t_{n+δ,n} = 0 for disconnected nodes. Here, ϕ_{n+δ,n} denotes the optical phase (modulo 2π) accumulated along the link. For simplicity, one can restrict t_{n+δ,n} to random values ±1 to keep the Hamiltonian in the orthogonal symmetry class. The extension to unitary symmetry is straightforward. In this scenario, the on-site disorder W of the coefficients ε_n results from variations in the optical units (beam splitters) located at the nodes. Figure 5.1 shows the calculated intensities of three optical modes on complex networks with different clustering indices and no disorder in the beam splitters, W = 0.
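A sketch of this optical tight-binding model (illustrative Python; the random ±1 hopping keeps the orthogonal symmetry class, as described above, and the network choice is an example):

    import numpy as np
    import networkx as nx  # assumed dependency

    def optical_modes(G, seed=0):
        # (5.11) with t = +/-1 on each edge (fixed random link phases
        # 0 or pi) and no beam-splitter disorder (epsilon_n = 0, W = 0).
        rng = np.random.default_rng(seed)
        idx = {n: i for i, n in enumerate(G.nodes)}
        H = np.zeros((len(idx), len(idx)))
        for u, v in G.edges:
            t = rng.choice([-1.0, 1.0])
            H[idx[u], idx[v]] = H[idx[v], idx[u]] = t
        E, psi = np.linalg.eigh(H)
        return E, psi, idx

    # Mode intensities |psi_n|^2 on a scale-free network, cf. Figure 5.1:
    E, psi, idx = optical_modes(nx.barabasi_albert_graph(500, 2))
    intensity = psi[:, np.argmin(np.abs(E - 0.45))] ** 2  # mode near E = 0.45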
5.3.4 Anderson Model with Magnetic Field

If an additional magnetic field is applied to electrons in a disordered solid, time-reversal invariance is broken, also yielding unitary symmetry. In the standard Anderson model with an additional magnetic field the critical disorder Wc is shifted to larger values [50, 51]. The phenomenological theory assumes that two length scales determine the behavior of the system. The first is the correlation (or localization) length as a function of the disorder parameter, ξ(W) ∝ |W − Wc|^{−ν}. The second relevant length scale is the magnetic length given by L_B = √(ℏ/eB), where B is the magnetic field. Rewriting the usual single-parameter scaling theory of the conductivity [11] with the ratio of these two length scales, Larkin and Khmel'nitskii [50] obtained

σ(B, W) ∝ (e²/(ℏ L_B)) Φ([L_B/ξ(W)]^{1/ν})    (5.17)

where Φ(x) is a scaling function equal to zero when x is of the order of unity. The conductivity will thus be zero for ξ ≈ L_B, or, using the definition of the correlation length, the field-dependent critical disorder Wc(B) is related to Wc(0) by Wc(B) − Wc(0) ∝ L_B^{−1/ν} ∝ B^{1/(2ν)} for small B. Replacing the magnetic field B by the magnetic flux φ and inserting a constant factor W_φ yields a power-law behavior

Wc(φ) − Wc(0) = W_φ φ^{β/ν}  with β = 1/2    (5.18)
In the Schrödinger Hamiltonian the magnetic field is introduced by substituting the momentum operator p with p − eA [52]. In classical electrodynamics the vector potential is introduced as a convenient mathematical aid, but all fundamental equations are expressible in terms of fields. This is no longer true in quantum mechanics, where it is necessary to introduce the vector potential A into the Hamiltonian. Therefore, A is relevant for quantum objects, even if B = ∇ × A is zero. Here we chose a Landau gauge for the vector potential, A = B(0, x, 0), to simplify the problem. Therefore, the (nondiagonal) hopping matrix elements of the tight-binding Hamiltonian (5.10) with R_{n+δ} = R_n ± e_y read t_{n+δ,n} = t exp(±2πi φ_{n+δ,n} m), where m = R_n · e_x, and φ_{n+δ,n} = ϕ_{n+δ,n}/ϕ0 is the ratio of the flux ϕ_{n+δ,n} = a² B_{n+δ,n} through a lattice cell of size a² to one flux quantum ϕ0 = h/e. For bonds in the x or z direction the nondiagonal matrix elements are t_{n+δ,n} = t. The results for φ_{n+δ,n} = ε and φ_{n+δ,n} = 1/4 + ε are invariant under the shift ε → −ε. The range φ_{n+δ,n} = 0 to φ_{n+δ,n} = 1/4 thus represents the full flux spectrum from the flux-free case to the largest possible flux. A constant magnetic field is described by a constant flux φ_{n+δ,n} = φ through each plaquette. On the other hand, a spatially random magnetic flux is described by random phases φ_{n+δ,n}, such that the t_{n+δ,n} are random complex numbers with absolute value t. Note that this is the only option for a complex network, where there is no real space and directions are arbitrary. Since hopping terms of the form t_{n+δ,n} = exp(iϕ_{n,m}) (i.e. with t = 1) also occur for optical networks, the Anderson model with random magnetic fields is equivalent to this case.
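In a tight-binding simulation the field enters through such Peierls phases on the hopping terms. The sketch below (our illustrative numpy construction following the Landau-gauge prescription above; lattice size, W and φ are example values) builds the Hamiltonian for a cubic lattice with flux φ per plaquette; on a complex network one would instead draw every phase at random, as described in the text:

    import numpy as np

    def anderson_with_flux(Lx, Ly, Lz, W, phi, seed=0):
        # Landau gauge: bonds in the y direction at column x = m carry a
        # Peierls phase exp(2*pi*i*phi*m); x and z bonds keep t = 1.
        # (For strictly consistent periodic boundaries phi should be a
        # multiple of 1/Lx.)
        rng = np.random.default_rng(seed)
        N = Lx * Ly * Lz
        idx = lambda x, y, z: (x * Ly + y) * Lz + z
        H = np.zeros((N, N), dtype=complex)
        H[np.diag_indices(N)] = rng.uniform(-W / 2, W / 2, size=N)
        for x in range(Lx):
            for y in range(Ly):
                for z in range(Lz):
                    n = idx(x, y, z)
                    H[n, idx((x + 1) % Lx, y, z)] = 1.0
                    H[n, idx(x, (y + 1) % Ly, z)] = np.exp(2j * np.pi * phi * x)
                    H[n, idx(x, y, (z + 1) % Lz)] = 1.0
        H = H + H.conj().T - np.diag(H.diagonal())  # hermitize, diagonal once
        return H

    E = np.linalg.eigvalsh(anderson_with_flux(8, 8, 8, W=16.0, phi=0.1))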
5.4 Level Statistics
Next we describe the technique of level statistics, which can be used to study localization–delocalization transitions of the modes in all four models introduced in the previous section by analyzing eigenvalues instead of eigenfunctions.

5.4.1 Random Matrix Theory
Level statistics was first used in nuclear physics [53] to understand the complex excitation spectra of heavy nuclei. It is not feasible to calculate the distribution of these energy spectra using elementary quantum physics, but a qualitative description using random matrix theory [54] succeeded. In the eighties, random matrix theory was introduced into solid-state physics to understand the chaotic eigenvalue spectra of disordered solid systems [55, 56]. The connections to metal–insulator transitions were realized a few years later, and level-statistical methods were then used for numerical investigations of the Anderson transition [57–59]; see [60–62] for review articles.

In random matrix theory one studies 2 × 2 matrices with random numbers as matrix elements An,m. In the case of the Gaussian orthogonal ensemble (GOE) the matrix elements A1,1, A1,2 = A2,1, and A2,2 can be transformed by orthogonal transformations. Using infinitesimal transformation techniques, the diagonal matrix elements can be shown to be Gaussian distributed with mean E0 and standard deviation σ, while the nondiagonal elements have zero mean and standard deviation σ/√2. After calculating the two eigenvalues A1 and A2 of the random matrices, one determines their normalized distances s = |A1 − A2|/(√π σ) with mean spacing ⟨s⟩ = 1. The distribution of these level spacings s follows the parameter-free Wigner distribution

P_{\mathrm{GOE}}(s) = \frac{\pi}{2} \, s \, \exp\!\left( -\frac{\pi}{4} s^2 \right) \qquad (5.19)

Because P_GOE(s) → 0 for s → 0, the probability of degenerate or nearly degenerate eigenvalues is suppressed; this is known as eigenvalue repulsion. For large level spacings the Wigner distribution has a Gaussian tail.

Hermitian matrices with complex random matrix elements and invariance under unitary transformations represent the Gaussian unitary ensemble (GUE). In this case the Wigner distribution of the normalized level spacings s is similar to (5.19),

P_{\mathrm{GUE}}(s) = \frac{32}{\pi^2} \, s^2 \, \exp\!\left( -\frac{4}{\pi} s^2 \right) \qquad (5.20)

If rotational symmetry is also broken, we obtain the Gaussian symplectic ensemble (GSE). The corresponding distribution P_GSE(s) shows an even stronger level repulsion, increasing as P_GSE(s) ∝ s⁴ for small s. Although they are exact only for 2 × 2 matrices, P_GOE(s), P_GUE(s), and P_GSE(s) are good approximations for the level-spacing distributions of large random matrices with the corresponding symmetry. Extensive numerical investigations have shown that this holds even for sparsely filled matrices. Consequently, the level-spacing distributions of the tight-binding Hamiltonian (5.10) can be described by random matrix theory.
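The 2 × 2 construction above is easy to verify numerically. The following Python sketch (an illustration of mine, not part of the chapter) samples GOE matrices with the stated element distributions and compares the normalized eigenvalue spacings to the Wigner distribution (5.19).

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, sigma = 200_000, 1.0

# 2x2 GOE matrices: Gaussian diagonal elements with standard deviation
# sigma, off-diagonal element with standard deviation sigma/sqrt(2).
a11 = rng.normal(0.0, sigma, n_samples)
a22 = rng.normal(0.0, sigma, n_samples)
a12 = rng.normal(0.0, sigma / np.sqrt(2), n_samples)

# Eigenvalue spacing of [[a11, a12], [a12, a22]] in closed form,
# normalized to unit mean spacing.
s = np.sqrt((a11 - a22) ** 2 + 4.0 * a12 ** 2)
s /= s.mean()

# Compare the histogram with the Wigner surmise (5.19).
hist, edges = np.histogram(s, bins=50, range=(0.0, 4.0), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
p_goe = 0.5 * np.pi * mid * np.exp(-0.25 * np.pi * mid ** 2)
print("max |histogram - P_GOE| =", np.abs(hist - p_goe).max())
```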
5.4.2 Level Statistics for Disordered Systems
There are two cases for which the Wigner distributions given in the previous subsection are not correct. (i) The matrices are not random, i.e. not disordered. The eigenvalues of ordered or nearly ordered systems are nearly equidistant, and some are degenerate; hence they show no level repulsion. (ii) The eigenstates are localized. In this case the eigenfunctions do not overlap and the eigenvalues are therefore independent. The level spacings of independent eigenvalues (or the spacings between random numbers in general) are characterized by a Poisson distribution

P_{\mathrm{P}}(s) = \exp(-s) \qquad (5.21)
In this case the levels do not repel, and degeneracy is probable, P_P(0) = 1. The tail of the Poisson distribution is a simple exponential, decreasing much more slowly than the Wigner distributions (5.19) and (5.20). Consequently, level-spacing distributions of tight-binding Hamiltonians can be used for determining metal–insulator transitions [57–59].

The distributions P_P(s) for localized states and P_GOE(s) or P_GUE(s) for extended states are valid asymptotically in the limit of infinite system size. For finite systems, the numerical P(s) is intermediate, approaching the limiting distributions for increasing system sizes, since finite-size corrections (for example due to boundary conditions) become smaller with increasing system size. The numerical P(s) thus changes towards a Poisson distribution if the eigenfunctions are localized, and towards a Wigner distribution if they are extended. Only at the transition point (the critical disorder) is the numerical P(s) independent of the system size. However, the exact form of the critical distribution Pc(s) cannot be determined analytically. It depends on the topology of the system and even on the boundary conditions.

Figure 5.5 shows the numerical level-spacing distributions P(s) for optical modes on large scale-free networks without diagonal disorder (W = 0) and varied clustering index C0 (see Figure 5.1 for pictures of the structures and the modes; see Section 5.2.2 for clustering in complex networks). The two limiting cases P_GOE(s) (see (5.19)) and P_P(s) (see (5.21)) are plotted for comparison. One can clearly see that the shape of the numerical P(s) changes from Wigner to Poisson with increasing C0.
Figure 5.5 Level-spacing distribution P(s) for optical modes on scale-free networks with degree-distribution exponent λ = 5, system size N = 12 500 and no disorder, W = 0. A clear transition from Wigner (dashed red curve) to Poisson (dash-dotted blue curve) behavior is observed as a function of the clustering coefficient prefactor C0 increasing from C0 = 0.0 (continuous red curve) to C0 = 0.90 (continuous blue curve). Inset: localization parameter γ (see (5.22)) versus C0 for networks with N = 5000 (red), N = 7500 (light green), N = 10000 (green), N = 12500 (blue), and N = 15000 (purple). A transition from extended modes for small C0 to localized modes for large C0 is observed at C0,q ≈ 0.69. The results are based on eigenvalues around |E| = 0.2 and 0.5. (Adapted from [22]). See also color figure on page 237.
The same type of plot is found when studying a localization–delocalization transition in the standard Anderson model by considering a normal lattice and increasing the diagonal disorder strength W from W < Wc to W > Wc. Clustering in complex networks can thus cause an Anderson-like localization–delocalization transition, although there is no on-node disorder W and no change in the degree distribution P(k). The actual transition point can be determined more accurately by analyzing the system-size (L) dependence of [58]

\gamma(L) = \frac{\int_2^\infty P(s)\, ds \;-\; \int_2^\infty P_{\mathrm{GOE}}(s)\, ds}{\int_2^\infty P_{\mathrm{P}}(s)\, ds \;-\; \int_2^\infty P_{\mathrm{GOE}}(s)\, ds} \qquad (5.22)

where γ → 0 if P(s) approaches the Wigner distribution, and γ → 1 if it approaches the Poisson distribution. The inset of Figure 5.5 shows γ for five system sizes versus the clustering strength C0. Indeed, γ decreases with system size for small values of C0, while it increases with size for large values of C0.
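Since the tail integrals of the limiting distributions in (5.22) have closed forms, e^(−π) for (5.19) and e^(−2) for (5.21), γ can be estimated from an eigenvalue sample in a few lines. The sketch below is illustrative only; it normalizes nearest-neighbor spacings to unit mean instead of performing a proper spectral unfolding.

```python
import numpy as np

def gamma_parameter(eigvals):
    """Estimate gamma of (5.22) from an eigenvalue sample, using
    nearest-neighbor spacings normalized to unit mean."""
    s = np.diff(np.sort(eigvals))
    s /= s.mean()
    frac_tail = np.mean(s > 2.0)      # estimate of int_2^inf P(s) ds
    tail_goe = np.exp(-np.pi)         # int_2^inf P_GOE(s) ds from (5.19)
    tail_poisson = np.exp(-2.0)       # int_2^inf P_P(s) ds from (5.21)
    return (frac_tail - tail_goe) / (tail_poisson - tail_goe)

# Sanity check: uncorrelated (Poisson) levels should give gamma ~ 1.
levels = np.cumsum(np.random.default_rng(1).exponential(size=50_000))
print(gamma_parameter(levels))
```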
Figure 5.6 The localization parameter I0 = ⟨s²⟩/2 versus disorder W for the standard three-dimensional Anderson model with linear sizes L = 14 (red circles), 17 (yellow squares), 20 (green diamonds), 23 (light blue stars), 30 (blue pluses), and 40 (pink crosses) and hard-wall boundary conditions. The lines correspond to fits of (5.29). Insets: the region around the crossing point zoomed in. In (b) the I0 values are corrected by subtraction of the irrelevant scaling variables. See also color figure on page 238.
One can thus observe the phase transition at the critical value C0,q ≈ 0.69 by the crossing of the five curves, indicating a system-size independent critical value of γc ≈ 0.76. Alternatively, the full level-spacing distribution P(s) can be parametrized by its second moment,

I_0(L) = \frac{1}{2} \langle s^2 \rangle = \frac{1}{2} \int_0^\infty s^2 P(s)\, ds \qquad (5.23)
which converges to 1 for localized modes (Poisson limit) and to 0.637 or 0.589 for extended modes (GOE or GUE Wigner limit, respectively). Figure 5.6 exemplifies the determination of Wc for the metal–insulator transition based on I0 for different system sizes and different W in the standard three-dimensional Anderson model with diagonal disorder and hard-wall boundary conditions.
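The parameter I0 is even simpler to evaluate numerically. A minimal sketch (again using mean-normalized spacings as a crude substitute for unfolding):

```python
import numpy as np

def i0_parameter(spacings):
    """Second-moment parameter I0 = <s^2>/2 of (5.23) for spacings
    normalized to unit mean; limits: 1 (Poisson), 0.637 (GOE), 0.589 (GUE)."""
    s = spacings / spacings.mean()
    return 0.5 * np.mean(s ** 2)

rng = np.random.default_rng(2)
print(i0_parameter(rng.exponential(size=100_000)))   # ~1, the localized limit
```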
5.4.3 Corrected Finite-Size Scaling

In some cases, finite-size effects can become quite strong, so that there is no unique crossing point of the γ or I0 curves for different system sizes L. One example is shown in Figure 5.6(a); see the magnification of the crossing region in the inset. In such cases, the effects of irrelevant scaling variables must be subtracted [46].
To describe the procedure, we assume that a parameter Γ characterizing the localization properties depends only on the ratio of the linear system size L and the characteristic (correlation) length scale ξ ∼ |W − Wc|^(−ν),

\Gamma(L, W) = F[L/\xi(W)] \qquad (5.24)

Here, Γ can stand for either γ or I0 introduced in the previous section, while W represents any parameter characterizing the disorder. In particular, W might be replaced by C0 if effects of clustering rather than diagonal disorder are considered, or by p if effects of percolation are considered. Although Γ(L, W) is a discontinuous function of W for L = ∞, it is analytic for finite L, and thus F can be expanded around the critical point, F(x) = a + b x^(1/ν) + c x^(2/ν) + … (x → 0), leading to

\Gamma(L, W) \approx \Gamma(L, W_c) + R \, |W - W_c| \, L^{1/\nu} \qquad (5.25)

with constant R. Here, irrelevant scaling variables Ξ, decaying with the system size L, are not included. Hence the scaling formula (5.24) has to be extended to include relevant (Υ) and irrelevant (Ξ) scaling variables,

\Gamma = F(\Upsilon L^{1/\nu}, \Xi L^{y}) \qquad (5.26)
If irrelevant scaling variables do not introduce discontinuities, F is expandable not only around the relevant scaling variable Υ up to order nR, but also around the irrelevant scaling variable Ξ up to order nI,

\Gamma = \sum_{n=0}^{n_R} \Upsilon^n L^{n/\nu} F_n(\Xi L^{y}) \quad \text{with} \quad F_n(\Xi L^{y}) = \sum_{m=0}^{n_I} \Xi^m L^{m y} F_{nm} \qquad (5.27)
The scaling variables are analytic functions of the dimensionless disorder w = (Wc − W)/Wc and are therefore equally expandable,

\Upsilon(w) = \sum_{n=1}^{m_R} b_n w^n \quad \text{and} \quad \Xi(w) = \sum_{n=0}^{m_I} c_n w^n \qquad (5.28)
up to order mR for the relevant and mI for the irrelevant variables, respectively. The relevant scaling variable vanishes at criticality, thus b0 = 0, whereas c0 is finite, introducing a size-dependent error. A size-independent Γc at criticality is reestablished by subtracting the terms containing the irrelevant variables. The absolute scales of the arguments are unknown. However, they can be fixed by choosing, e.g., F01 = F10 = 1 in (5.27).
The total number of fitting parameters is thus Np = (nI + 1)(nR + 1) + mR + mI + 2. Figure 5.6 depicts a typical example for the three-dimensional Anderson model with hard-wall boundary conditions. I0 is expanded up to orders nI = 1, nR = 3, mR = 1 and mI = 0, such that

I_0(W, L) = F_0(\Upsilon L^{1/\nu}) + \Xi L^{y} F_1(\Upsilon L^{1/\nu}) \qquad (5.29)
In Figure 5.6(a) I0 is shown without the corrections. No well-defined crossing point exists, i.e. I0 is still L dependent at criticality. In Figure 5.6(b) the corrected quantity I0 − Ξ L^y F1 is shown. The lines cross at one well-defined point, as seen in the inset. The critical disorder for the three-dimensional Anderson model with hard-wall boundary conditions is given by Wc = 16.57 ± 0.13, in agreement with the most recent numerical result Wc = 16.54 [46]. For periodic boundary conditions the effect of the irrelevant scaling variables is negligible.

5.4.4 Finite-Size Scaling with Two Parameters
To analyze the localization–delocalization transition for networks with clustering (characterized by C0) and on-node disorder W, the finite-size scaling equations have to be extended such that two parameters can be taken into account simultaneously. Without considering irrelevant variables this yields

\Gamma(C_0, W, L) = \Gamma_c + \left( R_1 |C_0 - C_{0,q}| + R_2 |W - W_c| \right) L^{1/\nu} \qquad (5.30)

where R1 and R2 are constants. Using (5.30) one can determine the coordinates C0,q and Wc of the localization–delocalization transition point for scale-free networks with various λ. A corresponding plot involving just data with varying C0 is shown in Figure 5.7. The system size of the considered scale-free networks has to be approximated by L ∝ ln(a(C0) N), where the N dependence is well established [28] but the C0 dependence hardly explored. Our data for N up to 10^5 suggested ln a ∝ (C0 − C0,c)^(−νc), where C0,c denotes the classical transition point at which the whole network breaks into pieces through strong clustering and the giant component ceases to exist. Since C0,q < C0,c, a and thus L depend only weakly on C0 at the wave localization transition point.
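In practice, the parameters Γc, R1, R2, C0,q, Wc, and ν are obtained by a nonlinear least-squares fit of (5.30) to the measured γ values. The following sketch with scipy.optimize.curve_fit uses synthetic placeholder data, since the actual network data are not reproduced here; all numbers are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(X, gamma_c, R1, R2, C0q, Wc, nu):
    """Linearized two-parameter scaling ansatz (5.30)."""
    L, C0, W = X
    return gamma_c + (R1 * np.abs(C0 - C0q) + R2 * np.abs(W - Wc)) * L ** (1.0 / nu)

# Placeholder data: gamma for several sizes L, clustering prefactors C0
# and disorder strengths W near criticality (synthetic, for illustration).
rng = np.random.default_rng(4)
L = rng.choice([8.0, 9.0, 10.0, 11.0], size=400)
C0 = rng.uniform(0.6, 0.8, size=400)
W = rng.uniform(0.0, 2.0, size=400)
gamma = scaling_law((L, C0, W), 0.76, 2e-3, 5e-4, 0.69, 1.0, 0.5)
gamma += rng.normal(0.0, 0.002, size=400)

popt, _ = curve_fit(scaling_law, (L, C0, W), gamma,
                    p0=[0.8, 1e-3, 1e-3, 0.7, 1.2, 0.6])
print("C0q = %.3f, Wc = %.2f, nu = %.2f" % (popt[3], popt[4], popt[5]))
```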
Figure 5.7 Finite-size scaling of γ according to (5.30) for optical modes on scale-free networks with λ = 5 (as in Figure 5.5), without on-node disorder, W = 0. The considered system sizes are N = 5 000 (squares), 7 500 (circles), 10 000 (diamonds), 12 500 (triangles up), and 15 000 (triangles down). The two branches correspond to the extended regime (bottom) and localized regime (top).
5.5 Localization–Delocalization Transitions in Complex Networks
In this section we review numerical results for localization–delocalization transitions in complex networks determined by means of level statistics. Anderson and quantum percolation transitions, which seem to be in the same universality class, have been studied on different topologies including quasi-fractal structures [63], percolation networks [64–72], Cayley trees [20], and small-world networks [21]. Transitions of vibrational modes were also studied on percolation networks [8]. In all these cases, the transitions were induced either by on-site disorder or by cutting bonds (percolation) and thus by changing the degree distribution of the network. Furthermore, quantum phase transitions (and also classical phase transitions) induced by mere topological changes of the network, i.e. changes in the clustering index, have been found recently [22] even with zero on-site disorder.
5.5.1 Percolation Networks
For quantum percolation on a square lattice it is believed, though not undisputed [64, 66], that all eigenfunctions are localized (pq = 1) [65, 67, 69], as predicted by the single-parameter scaling theory [11]. All studies based on level statistics find no extended states in two-dimensional systems. For quantum percolation on a simple cubic lattice the critical occupation probabilities pq ≈ 0.44 > pc ≈ 0.312 for site percolation [68, 71] and p^b_q ≈ 0.33 > p^b_c ≈ 0.247 for bond percolation [68, 70] are reported, the study by Berkovits et al. [70] being based on level statistics. The values for the critical exponent ν are similar to those of the Anderson transition: ν ≈ 1.35 [70] and ν ≈ 1.46 [72]. This is expected, because the Anderson model and quantum percolation are in the same universality class.

The localization behavior of vibrational modes of infinite site percolation clusters above the critical concentration has been studied in two and three dimensions using level statistics [8]. While all eigenstates are localized in d = 2, clear evidence for a localization–delocalization transition was found in d = 3. However, contrary to the common view, this transition occurs for frequencies above the phonon–fracton crossover and gives rise to a regime of extended fracton states. The term 'fracton' is used for vibrational modes of a fractal structure [74] if the wavelength λ ∼ ω^(−2/dw) of a mode with frequency ω is smaller than the correlation length ξp(p) ∼ |p − pc|^(−νp) of the percolation structure. In these equations, νp denotes the critical exponent of percolation (νp = 4/3 in d = 2 and 0.875 in d = 3), while dw is the random-walk exponent (dw = 2.88 in d = 2 and 3.8 in d = 3). In the fracton regime, the density of states is characterized by g(ω) ∼ ω^(ds−1) with the spectral dimension ds = 2df/dw ≈ 4/3 (df being the fractal dimension) [74] rather than ds = d for nonfractal structures. It was found that the crossover from the fracton regime to a standard phonon regime (λ ∼ ω^(−1), g ∼ ω^(d−1)) occurs independently of the localization–delocalization transition [8]. In d = 2 there is only the fracton–phonon crossover, and all modes are asymptotically localized for nonzero disorder. In d = 3 the crossover occurs at frequencies which are larger than those of the localization–delocalization transition by approximately a factor of three [8].
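To make the quantum-percolation setup concrete, the following sketch builds the hopping Hamiltonian of a site-diluted simple cubic lattice; its eigenvalues could then be fed into the level-statistics machinery of Section 5.4. Note that, unlike the studies cited above, this toy version keeps all occupied sites (so the spectrum mixes contributions from all clusters, not only the infinite one), and the parameters are purely illustrative.

```python
import numpy as np

# Site-diluted simple cubic lattice: sites occupied with probability p,
# hopping t = 1 between occupied nearest neighbors (periodic boundaries
# via np.roll).
L, p = 8, 0.6
rng = np.random.default_rng(3)
occ = rng.random((L, L, L)) < p
idx = -np.ones((L, L, L), dtype=int)
idx[occ] = np.arange(occ.sum())

H = np.zeros((occ.sum(), occ.sum()))
for axis in range(3):
    both = occ & np.roll(occ, -1, axis=axis)  # bond needs both ends occupied
    i, j = idx[both], np.roll(idx, -1, axis=axis)[both]
    H[i, j] = H[j, i] = 1.0

eigvals = np.linalg.eigvalsh(H)
print(occ.sum(), "occupied sites, bandwidth", eigvals.max() - eigvals.min())
```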
5.5.2 Small-World Networks without Clustering
The localization behavior of electrons in random networks, Erdős–Rényi networks, and scale-free networks without clustering was studied using level statistics [21]. A clear localization–delocalization transition at particular strengths Wc of the on-node disorder has been observed for a group of networks, which are all characterized by an average degree k ≤ 3.1 and an average size of the last occupied shell ℓ ≥ 9.45. The results are summarized in Table 5.2.

Table 5.2 Small-world networks showing a transition from localized to extended states. The number of nodes on the last shell ℓ is reported for networks of size N = 1000. The critical disorder Wc and the critical exponent ν have been determined by level statistics. (Data taken from [21]).
Network                      k       ℓ        Wc            ν
Scale-free, λ = 4, m = 2     2.97    12.46    15.7 ± 0.9    0.55 ± 0.11
Random-regular               3       11.8     11.9 ± 0.26   0.66 ± 0.08
Erdős–Rényi                  3       9.45     20.5 ± 0.23   0.68 ± 0.08
For several other small-world networks with k ≥ 3.1 no transition towards localized electronic eigenfunctions could be observed [21]. Nevertheless, one should be rather careful in interpreting this observation, since larger values of k lead to smaller sizes ℓ of the network for the same number of nodes N. The data for all the networks (including those which show no clear signs of a transition) can be scaled according to their average degree k: the larger the value of k, the larger also is the value of W needed to obtain a specific value of γ (see (5.22)). For all the small-world networks showing a localization–delocalization transition, the critical exponent ν is approximately equal to 1/2. A critical index of ν = 1/2 is expected for a system of infinite dimensionality [15, 63]. The main features of the Anderson transition are thus similar for a wide range of small-world networks. However, the fact that networks with high connectivity are very compact raises the problem of identifying the transition point. It is hard to extend the usual finite-size scaling method to networks with high connectivity, since the number of sites grows very rapidly with size, while for small network sizes the crossover behavior of the γ curves is very noisy.
This results in an inability to clearly identify the Anderson transition for scale-free networks with k ≥ 3.1. The possibility of a critical connectivity above which no transition exists cannot be ruled out.

5.5.3 Scale-Free Networks with Clustering
In a recent paper [22] we studied the localization–delocalization transition for optical modes on scale-free networks with clustering. By exact diagonalization, we calculated the eigenvalues of (5.11) with random hopping elements tn+δ,n = ±1 characterizing the links between all connected nodes. The networks are characterized by the exponent λ in the scale-free degree distribution (5.2), the clustering coefficient prefactor C0 from (5.5), as well as the strength of the on-node disorder W. We applied level statistics to determine the localization behavior of the modes and to extract the phase-transition points.

Figure 5.8(a) shows the phase diagram for the transition from localized (upper right) to extended (lower left) optical modes. The horizontal axis (C0 = 0) corresponds to the case without clustering described in the previous section. Here, the critical disorder Wc depends on λ. The main result regards the transitions on the vertical axis. Without on-node disorder (W = 0), the transition to a localized phase occurs at a critical clustering C0,q that depends on λ, i.e. on the degree distribution. While even the strongest clustering C0 = 1 cannot achieve such a transition if λ < 4, values of C0,q < 1 are observed for λ > 4. The case λ = 4 seems to be the limiting one, since it represents the broadest degree distribution allowing for a localization–delocalization phase transition upon increasing clustering. If variations of C0 and W are considered, the full phase diagram can be explored. Evidently, smaller values of C0 are sufficient for localization–delocalization phase transitions if W > 0. Within our error bars the critical exponent ν corresponds to the mean-field value ν = 1/2 for infinite dimensions. This is shown in Figure 5.8(b) and is expected for an Anderson-type transition [15]. We obtained similar phase diagrams for networks with homogeneous or Erdős–Rényi-type degree distributions (not shown).
Figure 5.8 (a) Phase diagram for transitions from localized optical modes (upper right) to extended modes in parts of the spectrum (lower left) for different degree distribution exponents λ = 4 (diamonds), 4.25 (circles), and 5 (squares). (b) Critical exponent ν for different λ and C0,q. The values are consistent with the mean-field prediction ν = 1/2. (c) Quantum transitions (circles) and classical transitions (squares) as a function of the degree exponent λ. In the regime λ < 4.5 only quantum transitions occur. For W > 0 the curves move downwards, making quantum transitions possible for λ < 4. (Redrawn from [22]).
There are several similarities between the phase transitions induced by clustering and by quantum percolation. In both cases the giant component becomes smaller when the parameters C0 and p approach the quantum transition points C0,q and pq, respectively; the extended phase exists for C0 < C0,q and for p > pq. One also finds a geometrical phase transition beyond the quantum phase transition in both cases, since the network breaks apart for C0 > C0,c and p < pc. In between, there is a regime with a giant component (or an infinite cluster) but only localized modes, for C0,q < C0 < C0,c and pc < p < pq. However, the relation between the two types of transitions is still an open question. The changes in the degree distribution of the giant component are not sufficient to explain the classical transition.

To make sure that the quantum transition is induced by clustering and not by a classical phase transition, we determined the corresponding classical critical clustering coefficient C0,c. For this purpose we analyzed the size N2 of the second-largest cluster in the system, which should increase with C0 as long as the giant component exists (C0 < C0,c) and decrease for higher values of C0 once it has broken down (C0 > C0,c) [26, 40]. We found no indications of a classical transition for λ < 4.5, i.e. the giant component is not broken. For λ = 5 we found C0,c ≈ 0.85,
significantly larger than C0,q ≈ 0.69, as shown in the inset of Figure 5.5 and in Figure 5.8(c). The localization–delocalization transition for W = 0 is thus clearly different from the classical one in two ways. (i) There is no classical transition for λ < 4.5, although a quantum transition is clearly seen. (ii) For λ > 4.5, the quantum transition occurs at lower C0 values than the classical one, leaving an intermediate regime (C0,q < C0 < C0,c ≤ 1) in which all modes are localized although a spanning giant cluster exists.

Clustering thus represents a new degree of freedom which can be used to induce and study phase transitions in complex networks. Comparing systems with different clustering properties might enable one to find the most relevant cause of quantum localization. We proposed that the phenomenon should be observable experimentally and relevant in complex coherent optical networks made of fibers and beam splitters. Such experiments would directly probe the influence of complex network topology on the Anderson localization of light [3–6].
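The numerical procedure of this subsection can be sketched in a few lines. The example below uses the Holme–Kim generator from networkx as a stand-in for the C0-controlled construction of [22] (it tunes clustering through a triad-formation probability, which is a different parametrization), assigns random hopping elements t = ±1 as in (5.11), and evaluates I0 near the band center; no unfolding is performed, so the numbers are only indicative.

```python
import numpy as np
import networkx as nx

# Holme-Kim scale-free graph with tunable clustering (triad-formation
# probability p) as a stand-in for the C0-controlled networks.
G = nx.powerlaw_cluster_graph(n=2000, m=2, p=0.8, seed=42)
A = nx.to_numpy_array(G)

# Random hopping elements t = +/-1 on the existing links, as in (5.11).
rng = np.random.default_rng(0)
signs = np.triu(rng.choice([-1.0, 1.0], size=A.shape), k=1)
H = A * (signs + signs.T)

ev = np.linalg.eigvalsh(H)
s = np.diff(ev[np.abs(ev) < 0.5])      # spacings near the band center
s /= s.mean()
print("I0 =", 0.5 * np.mean(s ** 2))   # compare 0.637 (GOE) and 1 (Poisson)
```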
5.5.4 Systems with Constant and Random Magnetic Field

Finally, we look at the Anderson model with an additional magnetic field. As described in Section 5.3.4, the field destroys time-reversal invariance and leads to complex values of the hopping terms tn+δ,n in the eigenvalue equation (5.11). The model with random magnetic flux is thus equivalent to the full description of an optical network with the complete range of phase shifts for light propagation along the links (see also Section 5.3.3).

For the Anderson localization–delocalization transition on a standard three-dimensional lattice with a constant magnetic field B, a systematic increase of the critical disorder Wc with magnetic flux φ had already been predicted by Larkin and Khmel'nitskii in 1981 [50], see (5.18). The prediction was verified in several sophisticated analytical studies. Numerical work, however, concentrated on the effect of large or random magnetic fluxes [75, 76]. Very recently, this prediction was confirmed for small values of φ by extended numerical simulations based on level statistics [51]. Note that the Anderson model with hard-wall boundary conditions must be used in this case, since it is not possible to match the increases of the magnetic vector potential with periodic boundary conditions unless very large systems or very large fluxes are considered.
Table 5.3 Fitting results for the critical disorder Wc in the three-dimensional Anderson model with magnetic flux φ. Error bars of the combined fits between ±0.03 and ±0.08 have been calculated by the bootstrap method. φ = 0.0008 is the lowest nonzero flux that could be studied in the considered system sizes (up to 40³ lattice points), since at least one whole flux quantum must penetrate the system.

φ     0.0     0.0008   0.0015   0.004   0.007   0.01    0.03    0.1     0.25
Wc    16.54   17.09    17.21    17.44   17.16   17.27   17.65   18.16   18.14
The disadvantage, however, is that irrelevant scaling variables must be taken into account explicitly in the corrected finite-size scaling approach (see Section 5.4.3).

Table 5.3 shows preliminary numerical results for the critical disorder strengths Wc(φ), which do indeed increase with magnetic flux φ. The increase is already very strong for weak fluxes, the prefactor Wφ in (5.18) being surprisingly large for both small and large φ: Wφ = 4.9 ± 0.4 and 3.8 ± 0.1, respectively. This type of behavior naturally leads to the idea that the effect might be used as the basis of a very sensitive low-temperature sensor for magnetic flux. In addition, the data can be fitted by (5.18) with exponents β = 0.45 ± 0.05 and 0.60 ± 0.07 for small and large φ, respectively. The prediction of Larkin and Khmel'nitskii is thus also confirmed regarding the scaling exponent β = 1/2. The reason for dividing the fit into two regions for small and large φ stems from a slight shift in the critical level-spacing distribution Pc(s).

For strong random magnetic fields an even larger critical disorder strength Wc = 18.80 has been reported [76], based on calculations with the transfer-matrix method. We can thus conclude that both a constant and a random magnetic field lead to a significant shift of the localization–delocalization transition in systems on regular lattices. However, preliminary results show that this is not the case for quantum percolation transitions and for localization–delocalization transitions on complex networks. Table 5.4 compares the classical percolation thresholds with the quantum percolation thresholds without and with a random magnetic field. Note that a constant magnetic field is not an option for complex networks; however, the effect of magnetic fields was strongest for the random field in the Anderson model on a lattice. One can clearly see that the magnetic field (and the corresponding change from orthogonal to unitary symmetry) has no significant effect on the quantum localization–delocalization transition.
Table 5.4 Classical percolation thresholds p^b_c for bond percolation on different networks and preliminary results for the corresponding quantum bond percolation thresholds p^b_q without and with magnetic flux φ. The error bars of the pq values are at least 0.01. Therefore, no significant effect of the magnetic field on the quantum percolation thresholds is observed.

Network                              p^b_c    p^b_q (φ = 0)   p^b_q (φ > 0)
Simple cubic lattice                 0.247    0.325           0.313
Random regular network, k0 = 6       0.200    0.237           0.239
Random regular network, k0 = 4       0.333    0.398           0.395
Erdős–Rényi network, k0 = 4          0.250    0.327           0.323
5.6 Conclusion
In summary, we have shown that localization–delocalization transitions (Anderson transitions) of electronic, vibrational, and optical modes occur in disordered systems because of backscattering and interference on time-reversed paths. The disorder can be due to a random potential landscape on a lattice (standard Anderson model) or to a complex structure of the network describing the system. Focusing on the latter case, we have shown that the localization–delocalization transition of percolation systems (i.e. quantum percolation) can be clearly distinguished from the classical (geometrical) percolation transition. On even more complex networks characterized by a small distance between all nodes (small-world property), a scale-free degree distribution, and clustering (increased probability of fully connected triangles), the same type of wave localization transition is observed. It is also clearly distinct from the classical percolation transition occurring in complex networks upon cutting edges (links). We have shown that changing only the local structure of a scale-free network by increasing the clustering coefficient can drive a localization–delocalization transition, even if there is no on-node disorder and no edges are cut.

The numerical technique used in most of the reviewed studies is level statistics together with finite-size scaling. This approach can characterize the localization properties of eigenfunctions of the considered Hamiltonian or dynamical matrix without actually requiring the calculation of the eigenfunctions. Making use of random matrix theory, it is based on studying the distribution of normalized level spacings obtained from eigenvalues only. This is a significant advantage for the
computational effort, and it allows one to consider much larger systems. In systems with hard-wall boundary conditions, in particular, the finite-size scaling approach has to be supplemented by taking irrelevant scaling variables into account.

The nature of the considered wave-like excitations is not essential for the transitions, since the wave character can be due to quantum theory (electrons) or just classical mechanics (vibrational modes). Similar phenomena can be expected for spin waves in disordered solids, which yield the same type of eigenvalue equation. Studying the localization of light waves is also equivalent in theory. It is, however, more complicated experimentally, since localization effects can hardly be distinguished from absorption effects. On the other hand, laser light has a macroscopic coherence length. Studying localization effects in complex networks of fibers and beam splitters might thus become experimentally feasible on an optical table. Such experiments could directly probe the influence of complex network topology on the Anderson localization of light [3–6].

We have also considered electronic systems with additional random and nonrandom magnetic fields. Introducing the magnetic fields changes the universality class of the Hamiltonian from orthogonal to unitary. The case with a strong random magnetic field is equivalent to the full description of an optical network, in which random phase shifts occur for light waves traveling through fibers (edges) between beam splitters. We showed that a constant magnetic field has a significant effect on the critical disorder characterizing the localization–delocalization transition (metal–insulator transition) on a lattice, and we confirmed a predicted scaling law down to very low fields. On a lattice, random fields also have a strong effect. However, this does not hold for localization–delocalization transitions in complex networks like percolation networks or small-world networks. Complex networks are characterized by strong topological disorder rather than on-node potential disorder. In this case, the symmetry class of the Hamiltonian (or the dynamical matrix) seems to be much less important. If the nonzero nondiagonal terms are already distributed in a strongly disordered fashion in the matrix due to the disordered topological structure, the actual range of the values and even their type (real or complex) does not matter much. Hence, the properties of light modes, vibrational modes, and electronic modes on complex networks are even more similar than they are on regular lattices with on-site disorder.
Acknowledgment
We would like to thank Prof. Shlomo Havlin for discussions. The work has been supported by the German Research Foundation (DFG), the European Community (grant 231288, project SOCIONICAL), the Minerva Foundation, the Israel Science Foundation (grant 569/07 and National Center for Networks) and the Israel Center for Complexity Science.
References

1 Anderson, P.W.: Absence of Diffusion in Certain Random Lattices. Phys. Rev. 109, 1492–1505 (1958).
2 Kramer, B., MacKinnon, A.: Localization – Theory and Experiment. Rep. Prog. Phys. 56, 1496–1564 (1993).
3 Wiersma, D.S., Bartolini, P., Lagendijk, A., Righini, R.: Localization of Light in Disordered Medium. Nature 390, 671–673 (1997).
4 Störzer, M., Gross, P., Aegerter, C.M., Maret, G.: Observation of the Critical Regime Near Anderson Localization of Light. Phys. Rev. Lett. 96, 063904 (2006).
5 Schwartz, T., Bartal, G., Fishman, S., Segev, M.: Transport and Anderson Localization in Disordered Two-Dimensional Photonic Lattices. Nature 446, 52–55 (2007).
6 Lahini, Y., Avidan, A., Pozzi, F., Sorel, M., Morandotti, R., Christodoulides, D.N., Silberberg, Y.: Anderson Localization and Nonlinearity in One-Dimensional Disordered Photonic Lattices. Phys. Rev. Lett. 100, 013906 (2008).
7 Foret, M., Courtens, E., Vacher, R., Suck, J.B.: Scattering Investigation of Acoustic Localization in Fused Silica. Phys. Rev. Lett. 77, 3831–3834 (1996).
8 Kantelhardt, J.W., Bunde, A., Schweitzer, L.: Extended Fractons and Localized Phonons on Percolation Clusters. Phys. Rev. Lett. 81, 4907–4910 (1998).
9 Billy, J., Josse, V., Zuo, Z.C., Bernard, A., Hambrecht, B., Lugan, P., Clement, D., Sanchez-Palencia, L., Bouyer, P., Aspect, A.: Direct Observation of Anderson Localization of Matter Waves in a Controlled Disorder. Nature 453, 891 (2008).
10 Roati, G., D'Errico, C., Fallani, L., Fattori, M., Fort, C., Zaccanti, M., Modugno, G., Modugno, M., Inguscio, M.: Anderson Localization of a Non-Interacting Bose–Einstein Condensate. Nature 453, 895–898 (2008).
11 Abrahams, E., Anderson, P.W., Licciardello, D.C., Ramakrishnan, T.V.: Scaling Theory of Localization – Absence of Quantum Diffusion in 2 Dimensions. Phys. Rev. Lett. 42, 673–676 (1979).
12 Lukes, T.: Critical Dimensionality in the Anderson–Mott Transition. J. Phys. C 12, L797 (1979).
13 Kunz, H., Souillard, B.: On the Upper Critical Dimension and the Critical Exponents of the Localization Transition. J. Phys. Lett. 44, L503–L506 (1983).
14 Straley, J.P.: Conductivity Near the Localization Threshold in the High-Dimensionality Limit. Phys. Rev. B 28, 5393 (1983).
15 Efetov, K.B.: Anderson Transition on a Bethe Lattice (the Symplectic and Orthogonal Ensembles). Zh. Eksp. Teor. Fiz. 93, 1125–1139 (1987) [Sov. Phys. JETP 61, 606 (1985)].
16 Castellani, C., DiCastro, C., Peliti, L.: On the Upper Critical Dimension in Anderson Localization. J. Phys. A 19, 1099–1103 (1986).
17 Zhu, C.P., Xiong, S.-J.: Localization–Delocalization Transition of Electron States in a Disordered Quantum Small-World Network. Phys. Rev. B 62, 14780 (2000).
18 Giraud, O., Georgeot, B., Shepelyansky, D.L.: Quantum Computing of Delocalization in Small-World Networks. Phys. Rev. E 72, 036203 (2005).
19 Gong, L., Tong, P.: von Neumann Entropy and Localization–Delocalization Transition of Electron States in Quantum Small-World Networks. Phys. Rev. E 74, 056103 (2006).
20 Sade, M., Berkovits, R.: Localization Transition on a Cayley Tree via Spectral Statistics. Phys. Rev. B 68, 193102 (2003).
21 Sade, M., Kalisky, T., Havlin, S., Berkovits, R.: Localization Transition on Complex Networks via Spectral Statistics. Phys. Rev. E 72, 066123 (2005).
22 Jahnke, L., Kantelhardt, J.W., Berkovits, R., Havlin, S.: Wave Localization in Complex Networks with High Clustering. Phys. Rev. Lett. 101, 175702 (2008).
23 Albert, R., Barabási, A.L.: Statistical Mechanics of Complex Networks. Rev. Mod. Phys. 74, 47–97 (2002).
24 Erdős, P., Rényi, A.: On Random Graphs. Publ. Math. Debrecen 6, 290–297 (1959).
25 Kalisky, T., Cohen, R., ben Avraham, D., Havlin, S.: Tomography and Stability of Complex Networks. In: Ben-Naim, E., Frauenfelder, H., Toroczkai, Z. (eds.) Lecture Notes in Physics: Proceedings of the 23rd LANL-CNLS Conference, "Complex Networks", Santa Fe, 2003. Springer, Berlin (2004).
26 Cohen, R., Erez, K., ben Avraham, D., Havlin, S.: Resilience of the Internet to Random Breakdowns. Phys. Rev. Lett. 85, 4626–4628 (2000).
27 Carmi, S., Havlin, S., Kirkpatrick, S., Shavitt, Y., Shir, E.: A Model of Internet Topology Using k-Shell Decomposition. PNAS 104, 11150–11154 (2007).
28 Cohen, R., Havlin, S.: Scale-Free Networks Are Ultrasmall. Phys. Rev. Lett. 90, 058701 (2003).
29 Bollobas, B., Riordan, O.: Mathematical Results on Scale-Free Random Graphs. In: Bornholdt, S., Schuster, H.G. (eds.) Handbook of Graphs and Networks. Wiley-VCH, Berlin (2002).
30 Watts, D.J., Strogatz, S.H.: Collective Dynamics of 'Small-World' Networks. Nature 393, 440–442 (1998).
31 Newman, M.E.J.: Assortative Mixing in Networks. Phys. Rev. Lett. 89, 208701 (2002).
32 Serrano, M.A., Boguñá, M.: Percolation and Epidemic Thresholds in Clustered Networks. Phys. Rev. Lett. 97, 088701 (2006).
33 Serrano, M.A., Boguñá, M.: Clustering in Complex Networks. I. General Formalism. Phys. Rev. E 74, 056114 (2006).
34 Serrano, M.A., Boguñá, M.: Clustering in Complex Networks. II. Percolation Properties. Phys. Rev. E 74, 056115 (2006).
35 Vázquez, A., Pastor-Satorras, R., Vespignani, A.: Large-Scale Topological and Dynamical Properties of the Internet. Phys. Rev. E 65, 066130 (2002).
36 Serrano, M.A., Boguñá, M.: Tuning Clustering in Random Networks with Arbitrary Degree Distributions. Phys. Rev. E 72, 036133 (2005).
37 Dorogovtsev, S.N., Mendes, J.F.F.: Evolution of Networks – From Biological Nets to the Internet and WWW. Oxford University Press, Oxford (2003).
38 Pastor-Satorras, R., Vespignani, A.: Evolution and Structure of the Internet: A Statistical Physics Approach. Cambridge University Press, Cambridge (2004).
39 Molloy, M., Reed, B.: The Size of the Giant Component of a Random Graph with a Given Degree Sequence. Combinatorics, Probability & Computing 7, 295–305 (1998).
40 Cohen, R., Erez, K., ben Avraham, D., Havlin, S.: Breakdown of the Internet under Intentional Attack. Phys. Rev. Lett. 86, 3682–3685 (2001).
41 Cohen, R., ben Avraham, D., Havlin, S.: Percolation Critical Exponents in Scale-Free Networks. Phys. Rev. E 66, 036113 (2002).
42 Barabási, A.L., Albert, R.: Emergence of Scaling in Random Networks. Science 286, 509 (1999).
43 Volz, E.: Random Networks with Tunable Degree Distribution and Clustering. Phys. Rev. E 70, 056115 (2004).
44 Kantelhardt, J.W., Bunde, A.: Sublocalization, Superlocalization, and Violation of Standard Single-Parameter Scaling in the Anderson Model. Phys. Rev. B 66, 035118 (2002).
45 MacKinnon, A., Kramer, B.: One-Parameter Scaling of Localization Length and Conductance in Disordered Systems. Phys. Rev. Lett. 47, 1546–1549 (1981).
46 Slevin, K., Ohtsuki, T., Kawarabayashi, T.: Topology Dependent Quantities at the Anderson Transition. Phys. Rev. Lett. 84, 3915–3918 (2000).
47 Shapiro, B.: Renormalization-Group Transformation for the Anderson Transition. Phys. Rev. Lett. 48, 823–825 (1982).
48 Anderson, P.W., Thouless, D.J., Abrahams, E., Fisher, D.S.: New Method for a Scaling Theory of Localization. Phys. Rev. B 22, 3519–3526 (1980).
49 Edrei, I., Kaveh, M., Shapiro, B.: Probability-Distribution Functions for Transmission of Waves Through Random Media – A New Numerical Method. Phys. Rev. Lett. 62, 2120–2123 (1989).
50 Khmel'nitskii, D.E., Larkin, A.I.: Mobility Edge Shift in External Magnetic Field. Solid State Commun. 39, 1069 (1981).
51 Jahnke, L., Kantelhardt, J.W., Berkovits, R.: The Effect of a Small Magnetic Flux on the Metal–Insulator Transition. Preprint (2009).
52 Hofstadter, D.R.: Energy Levels and Wave Functions of Bloch Electrons in Rational and Irrational Magnetic Fields. Phys. Rev. B 14, 2239 (1976).
53 Wigner, E.P.: On a Class of Analytic Functions from the Quantum Theory of Collisions. Ann. Math. 53, 36 (1951).
54 Dyson, F.J.: Statistical Theory of the Energy Levels of Complex Systems. J. Math. Phys. 3, 140 (1961).
55 Efetov, K.B.: Supersymmetry and Theory of Disordered Metals. Adv. Phys. 32, 53 (1983).
56 Altshuler, B.L., Shklovskii, B.I.: Repulsion of Energy Levels and the Conductance of Small Metallic Samples. Sov. Phys. JETP 64, 127 (1986).
57 Altshuler, B.L., Zharekeshev, I.Kh., Kotochigova, S.A., Shklovskii, B.I.: Energy-Level Repulsion and the Metal–Insulator Transition. Sov. Phys. JETP 67, 625 (1988).
58 Shklovskii, B.I., Shapiro, B., Sears, B.R., Lambrianides, P., Shore, H.B.: Statistics of Spectra of Disordered Systems near the Metal–Insulator Transition. Phys. Rev. B 47, 11487–11490 (1993).
59 Hofstetter, E., Schreiber, M.: Relation between Energy-Level Statistics and Phase Transition and Its Application to the Anderson Model. Phys. Rev. E 49, 14726 (1994).
60 Mehta, M.L.: Random Matrices. Academic Press, Boston (1991).
61 Guhr, T., Müller-Groeling, A., Weidenmüller, H.A.: Random-Matrix Theories in Quantum Physics: Common Concepts. Phys. Rep. 299, 189 (1998).
62 Mirlin, A.D.: Statistics of Energy Levels and Eigenfunctions in Disordered Systems. Phys. Rep. 326, 260 (2000).
63 Schreiber, M., Grussbach, H.: Dimensionality Dependence of the Metal–Insulator Transition in the Anderson Model of Localization. Phys. Rev. Lett. 76, 1687–1690 (1996).
64 Meir, Y., Aharony, A., Harris, A.B.: Delocalization Transition in Two-Dimensional Quantum Percolation. Europhys. Lett. 10, 275 (1989).
65 Taylor, J.P.G., MacKinnon, A.: A Study of the Two-Dimensional Bond Quantum Percolation Model. J. Phys. Condens. Mat. 1, 9963 (1989).
66 Koslowski, T., v. Niessen, W.: Mobility Edges for the Quantum Percolation Problem in Two and Three Dimensions. Phys. Rev. B 42, 10342 (1990).
67 Soukoulis, C.M., Grest, G.S.: Localization in Two-Dimensional Quantum Percolation. Phys. Rev. B 44, 4685 (1991).
68 Soukoulis, C.M., Li, Q., Grest, G.S.: Quantum Percolation in Three-Dimensional Systems. Phys. Rev. B 45, 7724 (1992).
69 Dasgupta, I., Saha, T., Mookerjee, A.: Analysis of Stochastic Resonances in a Two-Dimensional Quantum Percolation Model. Phys. Rev. B 47, 3097 (1993).
70 Berkovits, R., Avishai, Y.: Spectral Statistics Near the Quantum Percolation Threshold. Phys. Rev. B 53, R16125–R16128 (1996).
71 Kusy, A., Stadler, A.W., Haldas, G., Sikora, R.: Quantum Percolation in Electronic Transport of Metal–Insulator Systems: Numerical Studies of Conductance. Physica A 241, 403 (1997).
72 Kaneko, A., Ohtsuki, T.: Three-Dimensional Quantum Percolation Studied by Level Statistics. J. Phys. Soc. Jpn. 68, 1488 (1999).
73 Lorenz, C.D., Ziff, R.M.: Precise Determination of the Bond Percolation Thresholds and Finite-Size Scaling Corrections for the sc, fcc, and bcc Lattices. Phys. Rev. E 57, 230–236 (1998).
74 Alexander, S., Orbach, R.: Density of States on Fractals: Fractons. J. Phys. (Paris) Lett. 43, L625 (1982).
75 Slevin, K., Ohtsuki, T.: The Anderson Transition: Time Reversal Symmetry and Universality. Phys. Rev. Lett. 78, 4083–4086 (1997).
76 Kawarabayashi, T., Kramer, B., Ohtsuki, T.: Anderson Transitions in Three-Dimensional Disordered Systems with Randomly Varying Magnetic Flux. Phys. Rev. B 57, 11842–11845 (1998).
6 From Deterministic Chaos to Anomalous Diffusion

Rainer Klages
6.1 Introduction
Over the past few decades it was realized that deterministic dynamical systems involving only a few variables can exhibit complexity reminiscent of many-particle systems if the dynamics is chaotic, as quantified by the existence of a positive Ljapunov exponent [1]. Such systems provided important paradigms for constructing a theory of nonequilibrium statistical physics from first principles, i.e. based on microscopic nonlinear equations of motion. This novel approach led to the discovery of fundamental relations characterizing transport in terms of deterministic chaos, of which formulas relating deterministic diffusion to differences between Ljapunov exponents and dynamical entropies form important examples [2–4].

More recently, scientists learned that random-looking evolution in time and space also occurs under conditions that are weaker than requiring a positive Ljapunov exponent, which means that the separation of nearby trajectories is weaker than exponential [4]. This class of dynamical systems is called weakly chaotic and typically leads to transport processes that require descriptions going beyond the standard methods of statistical mechanics. A paradigmatic example is the phenomenon of anomalous diffusion, where the mean-square displacement of an ensemble of particles does not grow linearly in the long-time limit, as in the case of ordinary Brownian motion, but nonlinearly in time. Such anomalous transport phenomena not only pose new fundamental questions to theorists, but have also been observed in a large number of experiments [5].

This review gives a tutorial introduction to all these topics in the form of three sections: Section 6.2 reminds us of two basic concepts quantifying deterministic chaos in dynamical systems, which are
Ljapunov exponents and dynamical entropies. These approaches will be illustrated by studying simple one-dimensional maps. Slight generalizations of these maps will be used in Section 6.3 in order to motivate the problem of deterministic diffusion. Their analysis will yield an exact formula expressing diffusion in terms of deterministic chaos. In the first part of Section 6.4 we further generalize these simple maps such that they exhibit anomalous diffusion. This dynamics can be analyzed by applying continuous-time random walk theory, a stochastic approach that leads to generalizations of the ordinary laws of diffusion, from which we derive a fractional diffusion equation as an example. Finally, we demonstrate the relevance of these theoretical concepts to experiments by studying the anomalous dynamics of biological cell migration.

The degree of difficulty of the material presented increases from section to section, and the style of our presentation changes accordingly: while Section 6.2 mainly elaborates on textbook material of chaotic dynamical systems [6, 7], Section 6.3 covers advanced topics that have emerged in research over the past twenty years [2, 3, 8]. Both these sections were successfully taught twice to first-year Ph.D. students in the form of five one-hour lectures. Section 6.4 covers topics that were presented by the author in two one-hour seminar talks and are closely related to recently published research articles [9–11].
6.2 Deterministic Chaos
To clarify the general setting, we start with a brief reminder about the dynamics of time-discrete one-dimensional dynamical systems. We then quantify chaos in terms of Ljapunov exponents and (metric) entropies by focusing on systems that are closed on the unit interval. These ideas are then generalized to the case of open systems, where particles can escape (in terms of absorbing boundary conditions). Most of the concepts we are going to introduce carry over, suitably generalized, to higher-dimensional and time-continuous dynamical systems.1)

1) The first two sections draw on [12], in case the reader needs a more detailed introduction.
6.2.1 Dynamics of Simple Maps
As a warm-up, let us recall the following:

Definition 1 Let J ⊆ R, xn ∈ J, n ∈ Z. Then

F : J \to J \,, \quad x_{n+1} = F(x_n) \qquad (6.1)
is called a one-dimensional time-discrete map. xn+1 = F(xn) are sometimes called the equations of motion of the dynamical system. Choosing the initial condition x0 determines the outcome after n discrete time steps, hence we speak of a deterministic dynamical system. It works as follows:

x_1 = F(x_0) = F^1(x_0)
x_2 = F(x_1) = F(F(x_0)) = F^2(x_0)
\Rightarrow \; F^m(x_0) := \underbrace{F \circ F \circ \cdots \circ F}_{m\text{-fold composed map}}(x_0) \qquad (6.2)
In other words, there exists a unique solution to the equations of motion in the form of xn = F(xn−1) = … = F^n(x0), which is the counterpart of the flow for time-continuous systems. In the first two sections we will focus on simple piecewise linear maps. The following one serves as a paradigmatic example [1, 2, 6, 13]:

Example 1 The Bernoulli shift (also shift map, doubling map, dyadic transformation)
The Bernoulli shift shown in Figure 6.1 can be defined by

B : [0, 1) \to [0, 1) \,, \quad B(x) := 2x \bmod 1 = \begin{cases} 2x \,, & 0 \le x < 1/2 \\ 2x - 1 \,, & 1/2 \le x < 1 \end{cases} \qquad (6.3)

One may think about the dynamics of such maps as follows, see Figure 6.2: Assume we fill the whole unit interval with a uniform distribution of points. We may now decompose the action of the Bernoulli shift into two steps:

1. The map stretches the whole distribution of points by a factor of two, which leads to divergence of nearby trajectories.
Figure 6.1 The Bernoulli shift.
2. Then we cut the resulting line segment in the middle due to the modulo operation mod 1, which leads to motion bounded on the unit interval.
Figure 6.2 Stretch-and-cut mechanism in the Bernoulli shift.
The Bernoulli shift thus yields a simple example for an essentially nonlinear stretch-and-cut mechanism, as it typically generates deterministic chaos [6]. Such basic mechanisms are also encountered in more realistic dynamical systems. We may remark that ‘stretch and fold’ or ‘stretch, twist and fold’ provide alternative mechanisms for generating chaotic behavior, see, e.g. the tent map. The reader may wish to play around with these ideas in thought experiments, where the sets of points are replaced by kneading dough. These ideas can be made mathematically precise in the form of what is called mixing in dynamical systems, which is an important concept in the ergodic theory of dynamical systems [2, 14].
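The stretch mechanism can be watched directly in a few lines of Python (an illustration of mine, not part of the original text): the distance between two nearby trajectories doubles in each step until it reaches the order of the interval length. (Note that in double precision every orbit of B collapses to 0 after about 53 iterations, since the map shifts mantissa bits; the first dozen steps shown here are unaffected.)

```python
def bernoulli_shift(x):
    """One step of the Bernoulli shift B(x) = 2x mod 1."""
    return (2.0 * x) % 1.0

x, y = 0.1234567, 0.1234567 + 1e-7    # two nearby initial conditions
for n in range(12):
    print(n, abs(y - x))              # separation doubles each step
    x, y = bernoulli_shift(x), bernoulli_shift(y)
```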
6.2.2 Ljapunov Chaos
In [15] Devaney defines chaos by requiring that, for a given dynamical system, three conditions have to be fulfilled: sensitivity, existence of a dense orbit, and that the periodic points are dense. The Ljapunov exponent generalizes the concept of sensitivity in the form of a quantity that can be calculated more conveniently, as we will indicate by an example:

Example 2 Ljapunov instability of the Bernoulli shift [6, 16]
Consider two points that are initially displaced from each other by δx0 := |x0′ − x0| with δx0 'infinitesimally small' such that x0, x0′ do not hit different branches of the Bernoulli shift B(x) around x = 1/2. We then have

\delta x_n := |x_n' - x_n| = 2\, \delta x_{n-1} = 2^2\, \delta x_{n-2} = \ldots = 2^n\, \delta x_0 = e^{n \ln 2}\, \delta x_0 \qquad (6.4)

We thus see that there is an exponential separation between two nearby points as we follow their trajectories. The rate of separation λ(x0) := ln 2 is called the (local) Ljapunov exponent of the map B(x). This simple example can be generalized as follows, leading to the general definition of the Ljapunov exponent for one-dimensional maps F. Consider

\delta x_n = |x_n' - x_n| = |F^n(x_0') - F^n(x_0)| =: \delta x_0 \, e^{n \lambda(x_0)} \quad (\delta x_0 \to 0) \qquad (6.5)
for which we presuppose that an exponential separation of trajectories exists.2) By furthermore assuming that F is differentiable we can rewrite this equation as

\lambda(x_0) = \lim_{n\to\infty} \lim_{\delta x_0 \to 0} \frac{1}{n} \ln \frac{\delta x_n}{\delta x_0}
            = \lim_{n\to\infty} \lim_{\delta x_0 \to 0} \frac{1}{n} \ln \frac{|F^n(x_0 + \delta x_0) - F^n(x_0)|}{\delta x_0}
            = \lim_{n\to\infty} \frac{1}{n} \ln \left| \frac{dF^n(x)}{dx} \right|_{x=x_0} \qquad (6.6)

Using the chain rule we obtain

\frac{dF^n(x)}{dx}\bigg|_{x=x_0} = F'(x_{n-1}) \, F'(x_{n-2}) \cdots F'(x_0) \qquad (6.7)

2) We emphasize that this is not always the case, see, e.g. Section 17.4 of [4].
which leads to

\lambda(x_0) = \lim_{n\to\infty} \frac{1}{n} \ln \left| \prod_{i=0}^{n-1} F'(x_i) \right| = \lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} \ln |F'(x_i)| \qquad (6.8)
This simple calculation motivates the following definition:

Definition 2 [13] Let F ∈ C¹ be a map of the real line. The local Ljapunov exponent λ(x0) is defined as

\lambda(x_0) := \lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} \ln |F'(x_i)| \qquad (6.9)

if this limit exists.3)

Remark 1
1. If F is not C¹ but piecewise C¹, the definition can still be applied by excluding single points of nondifferentiability.
2. If F′(xi) = 0 for some point of the trajectory, λ(x) does not exist. However, usually this concerns only an 'atypical' set of points.

Example 3 For the Bernoulli shift B(x) = 2x mod 1 we have B′(x) = 2 ∀ x ∈ [0, 1), x ≠ 1/2, hence trivially

\lambda(x) = \frac{1}{n} \sum_{k=0}^{n-1} \ln 2 = \ln 2 \qquad (6.10)
at these points. Note that Definition 2 defines the local Ljapunov exponent λ(x0), that is, this quantity may depend on our choice of initial condition x0. For the Bernoulli shift this is not the case, because this map has a uniform slope of two, except at the point of discontinuity, which makes the calculation trivial. Generally the situation is more complicated. One question is then how to calculate the local Ljapunov exponent, and another is to what extent it depends on the initial conditions. An answer to
6.2 Deterministic Chaos
both of these questions is provided in the form of the global Ljapunov exponent that we are going to introduce, which does not depend on initial conditions and thus characterizes the stability of the map as a whole. It is introduced by observing that the local Ljapunov exponent in Definition 2 is defined in the form of a time average, where n terms along the trajectory with initial condition x0 are summed up by averaging over n. That this is not the only possibility to define an average quantity is clarified by the following definition. Definition 3 Time and ensemble average [2, 14]
Let μ∗ be the invariant probability measure of a one-dimensional map F acting on J ⊆ R. Let us consider a function g : J → R, which we may call an ‘observable’. Then 1 n→∞ n
g( x ) := lim
n −1
∑ g( xk )
(6.11)
k =0
x = x0 , is called the time (or Birkhoff) average of g with respect to F.
g :=
J
dμ∗ g( x )
(6.12)
where, if such a measure exists, dμ∗ = ρ∗ ( x ) dx, is called the ensemble (or space) average of g with respect to F. Here ρ∗ ( x ) is the invariant density of the map, and dμ∗ is the associated invariant measure [6, 12, 17]. Note that g( x ) may depend on x, whereas g does not. If we choose g( x ) = ln | F ( x )| as the observable in (6.11) we recover Definition 2 for the local Ljapunov exponent, 1 n −1 ∑ ln | F (xk )| n→∞ n k =0
λ( x ) := ln | F ( x )| = lim
(6.13)
which we may write as λt ( x ) = λ( x ) in order to label it as a time average. If we choose the same observable for the ensemble average (6.12) we obtain λe := ln | F ( x )| :=
J
dxρ∗ ( x ) ln | F ( x )|
(6.14)
175
176
6 From Deterministic Chaos to Anomalous Diffusion
Example 4 For the Bernoulli shift we have seen that for almost every
x ∈ [0, 1) λt = ln 2. For λe we obtain λe =
1 0
dxρ∗ ( x ) ln 2 = ln 2
(6.15)
taking into account that ρ∗ ( x ) = 1, as we have seen before. In other words, time and ensemble average are the same for almost every x, λt ( x ) = λe = ln 2
(6.16)
This motivates the following fundamental definition: Definition 4 ergodicity [2, 14]4)
A system is called ergodic if for every g on J ⊆ R satisfying dynamical ∗ dμ | g( x )| < ∞5) g( x ) = g
(6.17)
for typical x. For our purpose it suffices to think of a typical x as a point that is randomly drawn from the invariant density ρ∗ ( x ). This definition implies that for ergodic dynamical systems g( x ) does not depend on x. That the time average is constant is sometimes also taken as a definition of ergodicity [2, 7]. To prove that a given system is ergodic is typically a hard task and one of the fundamental problems in the ergodic theory of dynamical systems; see [2, 14, 19] for proofs of ergodicity in the case of some simple examples. On this basis, let us return to Ljapunov exponents. For time average λt ( x ) and ensemble average λe of the Bernoulli shift we have found that λt ( x ) = λe = ln 2. Definition 4 now states that the first equality must hold whenever a map F is ergodic. This means, in turn, that for an ergodic dynamical system the Ljapunov exponent becomes a global 4) Note that mathematicians prefer to define ergodicity by using the concept of indecomposability [18]. 5) This condition means that we require g to be a Lebesgue-integrable function. In other words, g should be an element of the function space L1 ( J, A , μ∗ ) of a set J, a σ-algebra A of subsets of J and an invariant measure μ∗ . This space defines the family of all possible real-valued measurable functions g satisfying dμ∗ | g( x )| < ∞, where this integral should be understood as a Lebesgue integral [17].
quantity characterizing a given map F for a typical point x, irrespective of what value we choose for the initial condition, λ_t(x) = λ_e = λ. This observation very much facilitates the calculation of λ, as is demonstrated by the following example:
Example 5 Let us consider the map A(x) displayed in Figure 6.3.
Figure 6.3 A simple map for demonstrating the calculation of Ljapunov exponents via ensemble averages.
From the figure we can infer that

A(x) := \begin{cases} \frac{3}{2}x , & 0 \le x < \frac{2}{3} \\ 3x - 2 , & \frac{2}{3} \le x < 1 \end{cases}   (6.18)

It is not hard to see that the invariant probability density of this map is uniform, ρ∗(x) = 1. The Ljapunov exponent λ for this map is then easily calculated as

\lambda = \int_0^1 dx \, \rho^*(x) \ln |A'(x)| = \ln 3 - \frac{2}{3} \ln 2   (6.19)
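The equality of the two averages is also easy to check numerically. The following minimal sketch (in Python; all names are ours, not from the cited literature) estimates λ for A(x) as a time average (6.13) along a single orbit and compares it with the exact ensemble-average result (6.19); up to floating-point artifacts of iterating the map, both should agree for a typical initial condition.

import math
import random

def A(x):
    # the map (6.18): slope 3/2 on [0, 2/3), slope 3 on [2/3, 1)
    return 1.5 * x if x < 2/3 else 3.0 * x - 2.0

def A_prime(x):
    # |A'(x)|; the observable ln|A'(x)| of (6.13) is built from this
    return 1.5 if x < 2/3 else 3.0

def lyapunov_time_average(x0, n=10**6):
    # time average: (1/n) * sum of ln|A'(x_k)| along the orbit of x0
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(A_prime(x))
        x = A(x)
    return total / n

exact = math.log(3) - (2/3) * math.log(2)      # ensemble average (6.19)
print(lyapunov_time_average(random.random()))  # approx. 0.6365
print(exact)                                   # = 0.636514...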
By assuming that map A is ergodic (which is the case here), we can conclude that this result for λ represents the value for typical points in the domain of A. In other words, for an ergodic map the global Ljapunov exponent λ yields a number that assesses whether it is chaotic in the sense of exhibiting an exponential dynamical instability. This motivates the following definition of deterministic chaos:
Definition 5 Chaos in the sense of Ljapunov [6, 7, 13, 16]
An ergodic map F : J → J, J ⊆ R, F (piecewise) C^1, is said to be L-chaotic on J if λ > 0.
Why did we introduce a definition of chaos that differs from Devaney's definition mentioned earlier? One reason is that the largest Ljapunov exponent of a dynamical system is often easier to calculate than checking for sensitivity.6) Furthermore, the magnitude of the positive Ljapunov exponent quantifies the strength of chaos. This is why 'chaos in the sense of Ljapunov' became a very popular concept in the applied sciences.7) Note that there is no unique quantifier of deterministic chaos. Many different definitions are available, highlighting different aspects of 'chaotic behavior', all having their advantages and disadvantages. The detailed relations between them (such as the ones between Ljapunov and Devaney chaos) are usually nontrivial and a topic of ongoing research [12]. We will encounter yet another definition of chaos in the following section.
6) Note that a positive Ljapunov exponent implies sensitivity, but the converse does not hold true [12].
7) Here one often silently assumes that a given dynamical system is ergodic. To prove that a system is topologically transitive, as required by Devaney's definition, is not any easier.
6.2.3 Entropies
This section particularly builds upon the presentations in [2, 6]; for a more mathematical approach see [20]. Let us start with a brief motivation outlining the basic idea of entropy production in dynamical systems. Consider again the Bernoulli shift by decomposing its domain J = [0, 1) into J_0 := [0, 1/2) and J_1 := [1/2, 1). For x ∈ [0, 1) define the output map s by [1]

s : [0,1) \to \{0,1\} , \quad s(x) := \begin{cases} 0 , & x \in J_0 \\ 1 , & x \in J_1 \end{cases}   (6.20)

and let s_{n+1} := s(x_n). Now choose some initial condition x_0 ∈ J. According to the above rule we obtain a digit s_1 ∈ {0, 1}. Iterating the Bernoulli shift according to x_{n+1} = B(x_n) then generates a sequence of digits {s_1, s_2, . . . , s_n}. This sequence yields nothing other than the
binary representation of the given initial condition x_0 [1, 2, 6]. If we assume that we pick an initial condition x_0 at random and feed it into our map without knowing about its precise value, this simple algorithm enables us to find the number that we have actually chosen. In other words, here we have a mechanism for creating information about the initial condition x_0 by analyzing the chaotic orbit generated from it as time evolves. Conversely, if we now assume that we already knew the initial state up to, say, m digits precision and we iterate p > m times, we see that the map simultaneously destroys information about the current and future states, in terms of digits, as time evolves. So creation of information about previous states goes along with loss of information about current and future states. This process is quantified by the Kolmogorov–Sinai (KS) entropy (also called metric, or measure-theoretic entropy), which measures the exponential rate at which information is produced, or respectively lost, in a dynamical system, as we will see below. The situation is similar to the following thought experiment illustrated in Figure 6.4: Let us assume we have a gas consisting of molecules, depicted as billiard balls, which is constrained to the left half of the box as shown in (a). This is like having some information about the initial conditions of all gas molecules, which are in a more localized, or ordered, state. If we remove the piston as in (b), we observe that the gas spreads out over the full box until it reaches a uniform equilibrium steady state. We then have less information available about the actual positions of all gas molecules, that is, we have increased the disorder of the whole system. This observation lies at the heart of what is called thermodynamic entropy production in the statistical
Figure 6.4 Schematic representation of a gas of molecules in a box. In (a) the gas is constrained by a piston to the left-hand side of the box, in (b) the piston is removed and the gas can spread out over the whole box. This illustrates the basic idea of (physical) entropy production.
physics of many-particle systems which, however, is usually assessed by quantities that are different from the KS-entropy [21]. At this point we will not elaborate further on the relation to statistical physical theories. Instead, let us make precise what we mean by KS-entropy, starting from the famous Shannon (or information) entropy [6, 7]. This entropy is defined as

H_S := \sum_{i=1}^{r} p_i \ln \frac{1}{p_i}   (6.21)
where p_i, i = 1, . . . , r, are the probabilities for the r possible outcomes of an experiment. Think, for example, of a roulette game, where carrying out the experiment one time corresponds to n = 1 in the iteration of an unknown map. H_S then measures the amount of uncertainty concerning the outcome of the experiment, which can be understood as follows:
1. Let p_1 = 1, p_i = 0 otherwise. By defining p_i ln(1/p_i) := 0 for i ≠ 1, we have H_S = 0. This value of the Shannon entropy must therefore characterize the situation where the outcome is completely certain.
2. Let p_i = 1/r, i = 1, 2, . . . , r. Then we obtain H_S = ln r, thus characterizing the situation where the outcome is most uncertain because of equal probabilities.
Case (1) thus represents the situation of no information gain by doing the experiment, case (2) corresponds to maximum information gain. These two special cases must therefore define the lower and upper bounds of H_S,8)

0 \le H_S \le \ln r   (6.22)
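Both bounds are immediate to verify numerically. A minimal sketch evaluating (6.21), with the convention p_i ln(1/p_i) := 0 for vanishing probabilities implemented by simply skipping them:

import math

def shannon_entropy(p):
    # H_S of (6.21); terms with p_i = 0 contribute zero by convention
    return sum(pi * math.log(1.0 / pi) for pi in p if pi > 0)

r = 6
print(shannon_entropy([1.0] + [0.0] * (r - 1)))   # certain outcome: 0
print(shannon_entropy([1.0 / r] * r))             # most uncertain: ln r
print(math.log(r))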
This basic concept of information theory carries over to dynamical systems by identifying the probabilities p_i with invariant probability measures μ_i^* on subintervals of a given dynamical system's phase space. The precise connection is worked out in four steps [2, 6].
8) A proof employs the convexity of the entropy function and Jensen's inequality, see [22] or the Wikipedia entry of information entropy for details.
1. Partition and refinement
Consider a map F acting on J ⊆ R, and let μ∗ be an invariant probability measure generated by the map.9) Let {J_i}, i = 1, . . . , s, be a partition of J.10) We now construct a refinement of this partition as illustrated by the following example:
Example 6 Consider the Bernoulli shift displayed in Figure 6.5. Start with the partition {J_0, J_1} shown in (a). Now create a refined partition by iterating these two partition parts backwards according to B^{-1}(J_i) as indicated in (b). Alternatively, you may take the second forward iterate B^2(x) of the Bernoulli shift and then identify the pre-images of x = 1/2 for this map. In either case the new partition parts are obtained as

J_{00} := \{x : x \in J_0 , B(x) \in J_0\}
J_{01} := \{x : x \in J_0 , B(x) \in J_1\}
J_{10} := \{x : x \in J_1 , B(x) \in J_0\}
J_{11} := \{x : x \in J_1 , B(x) \in J_1\}   (6.23)
If we choose x_0 ∈ J_{00} we thus know in advance that the orbit emerging from this initial condition under iteration of the map will remain in J_0 at the next iteration. In this way, the refined partition clearly yields more information about the dynamics of single orbits. More generally, for a given map F the above procedure is equivalent to defining

\{J_{i_1 i_2}\} := \{J_{i_1} \cap F^{-1}(J_{i_2})\}   (6.24)

The next round of refinement proceeds along the same lines, yielding

\{J_{i_1 i_2 i_3}\} := \{J_{i_1} \cap F^{-1}(J_{i_2}) \cap F^{-2}(J_{i_3})\}   (6.25)

and so on. For convenience we define

\{J_i^n\} := \{J_{i_1 i_2 \ldots i_n}\} = \{J_{i_1} \cap F^{-1}(J_{i_2}) \cap \ldots \cap F^{-(n-1)}(J_{i_n})\}   (6.26)
9) If not noted otherwise, μ∗ holds for the physical (or natural) measure of the map in the following [12].
10) A partition of the interval J is a collection of subintervals whose union is J, which are pairwise disjoint except perhaps at the end points [13].
Figure 6.5 (a) The Bernoulli shift and a partition of the unit interval consisting of two parts. (b) Refinement of this partition under backward iteration.
2. H-function
In analogy with the Shannon entropy (6.21) we next define the function

H(\{J_i^n\}) := -\sum_i \mu^*(J_i^n) \ln \mu^*(J_i^n)   (6.27)

where μ∗(J_i^n) is the invariant measure of the map F on the partition part J_i^n of the nth refinement.
Example 7 For the Bernoulli shift with uniform invariant probability density ρ∗(x) = 1 and associated (Lebesgue) measure \mu^*(J_i^n) = \int_{J_i^n} dx \, \rho^*(x) = \mathrm{diam}\, J_i^n we can calculate

H(\{J_i^1\}) = -\left( \frac{1}{2} \ln \frac{1}{2} + \frac{1}{2} \ln \frac{1}{2} \right) = \ln 2
H(\{J_i^2\}) = H(\{J_{i_1} \cap B^{-1}(J_{i_2})\}) = -4 \left( \frac{1}{4} \ln \frac{1}{4} \right) = \ln 4
H(\{J_i^3\}) = \ldots = \ln 8 = \ln 2^3
\vdots
H(\{J_i^n\}) = \ln 2^n   (6.28)
3. Take the limit
We now look at what we obtain in the limit of an infinitely refined partition by

h(\{J_i^n\}) := \lim_{n \to \infty} \frac{1}{n} H(\{J_i^n\})   (6.29)

which defines the rate of gain of information over n refinements.
Example 8 For the Bernoulli shift we trivially obtain11)

h(\{J_i^n\}) = \ln 2   (6.30)
4. Supremum over partitions
We finish the definition of the KS-entropy by maximizing h({J_i^n}) over all available partitions,

h_{KS} := \sup_{\{J_i^n\}} h(\{J_i^n\})   (6.31)

The last step can be avoided if the partition {J_i^n} is generating, for which it must hold that diam J_i^n → 0 (n → ∞) [18, 20, 22].12) It is quite obvious that for the Bernoulli shift the partition chosen above is generating in that sense, hence h_KS = ln 2 for this map.
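The refinement entropies of Example 7 can also be estimated from data. Since the symbol sequence generated by the Bernoulli shift for a typical initial condition is just the binary expansion of x_0, i.e. a fair coin-toss sequence, H({J_i^n}) can be estimated from the frequencies of length-n symbol blocks. A minimal sketch (we draw the symbols as random bits rather than iterating B(x) in floating point, which would lose one binary digit per step):

import math
import random
from collections import Counter

bits = [random.randint(0, 1) for _ in range(10**5)]   # symbolic orbit s_1 s_2 ...

def block_entropy(bits, n):
    # H({J_i^n}) estimated from frequencies of length-n blocks, cf. (6.27)
    blocks = Counter(tuple(bits[k:k + n]) for k in range(len(bits) - n + 1))
    total = sum(blocks.values())
    return -sum((c / total) * math.log(c / total) for c in blocks.values())

for n in (1, 2, 4, 8):
    print(n, block_entropy(bits, n) / n)   # approaches h_KS = ln 2 = 0.6931...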
Remark 2 [1]
1. For strictly periodic motion there is no refinement of partition parts under backward iteration, hence h_KS = 0; see, for example, the identity map I(x) = x.
2. For stochastic systems all pre-images are possible, so there is immediately an infinite refinement of partition parts under backward iteration, which leads to h_KS → ∞.
These considerations suggest yet another definition of deterministic chaos:
11) In fact, this result is already obtained after a single iteration step, i.e. without taking the limit n → ∞. This reflects the fact that the Bernoulli shift dynamics, sampled in this way, is mapped onto a Markov process.
12) Note that alternative definitions of a generating partition in terms of symbolic dynamics are possible [7].
Definition 6 Measure-theoretic chaos [7]
A map F : J → J, J ⊆ R, is said to be chaotic in the sense of exhibiting dynamical randomness if h_KS > 0.
Again, one may wonder about the relation between this new definition and our previous one in terms of Ljapunov chaos. Let us look again at the Bernoulli shift:
Example 9 For B(x) we have calculated the Ljapunov exponent as λ = ln 2, see Example 4. Above we have seen that h_KS = ln 2 for this map, so we arrive at λ = h_KS = ln 2.
That this equality is not an artefact due to the simplicity of our chosen model is stated by the following theorem:
Theorem 1 Pesin's Theorem (1977) [2, 20, 23]
For closed C^2 Anosov13) systems the KS-entropy is equal to the sum of positive Ljapunov exponents.
A proof of this theorem goes considerably beyond the scope of this review [23]. In the given formulation it applies to higher-dimensional dynamical systems that are 'suitably well-behaved' in the sense of exhibiting the Anosov property. Applied to one-dimensional maps, it means that if we consider transformations which are 'closed' by mapping an interval onto itself, F : J → J, under certain conditions (which we do not further specify here), and if there is a positive Ljapunov exponent λ > 0, we can expect that λ = h_KS, as we have seen for the Bernoulli shift. In fact, the Bernoulli shift provides an example of a map that does not fulfill the conditions of the above theorem precisely. However, the theorem can also be formulated under weaker assumptions, and it is believed to hold for an even wider class of dynamical systems. In order to get a feeling for why this theorem should hold, let us look at the information creation in a simple one-dimensional map such as the Bernoulli shift by considering two orbits \{x_k\}_{k=0}^n, \{x'_k\}_{k=0}^n starting at nearby initial conditions |x_0 - x'_0| \le \delta x_0, \delta x_0 \ll 1.
13) An Anosov system is a diffeomorphism, where the expanding and contracting directions in phase space exhibit a particularly 'nice', so-called hyperbolic structure [2, 20].
Recall the encoding defined by (6.20). Under the first m iterations these two orbits will then produce the very same sequences of symbols \{s_k\}_{k=1}^m = \{s'_k\}_{k=1}^m, that is, we cannot distinguish them from each other by our encoding. However, due to the ongoing stretching of the initial displacement δx_0 by a factor of two at each iteration, eventually there will be an m such that, starting from p > m iterations, different symbol sequences are generated. Thus we can be sure that in the limit of n → ∞ we will be able to distinguish initially arbitrarily close orbits. If you like analogies, you may think of extracting information about the different initial states via the stretching produced by the iteration process as using a magnifying glass. Therefore, under iteration the exponential rate of separation of nearby trajectories, which is quantified by the positive Ljapunov exponent, must be equal to the rate of information generated, which in turn is given by the KS-entropy. This is at the heart of Pesin's theorem.
We remark that, typically, the KS-entropy is much harder to calculate for a given dynamical system than positive Ljapunov exponents. Hence, Pesin's theorem is often employed in the literature for indirectly calculating the KS-entropy. Furthermore, here we have described only one type of entropy for dynamical systems. It should be noted that the concept of the KS-entropy can straightforwardly be generalized, leading to a whole spectrum of Rényi entropies, which can then be identified with topological, metric, correlation and other higher-order entropies [7].
6.2.4 Open Systems, Fractals and Escape Rates
This section draws particularly on [2, 6]. So far we have only studied closed systems, where intervals are mapped onto themselves. Let us now consider an open system, where points can leave the unit interval, never to come back to it. Consequently, in contrast to closed systems, the total number of points is no longer conserved. This situation can be modeled by a slightly generalized example of the Bernoulli shift.
Example 10 In the following we will study the map
B_a : [0,1) \to [1 - a/2, a/2) , \quad B_a(x) := \begin{cases} ax , & 0 \le x < 1/2 \\ ax + 1 - a , & 1/2 \le x < 1 \end{cases}   (6.32)
see Figure 6.6, where the slope a ≥ 2 defines a control parameter. For a = 2 we recover our familiar Bernoulli shift, whereas for a > 2 the map defines an open system. That is, whenever points are mapped into the escape region of width Δ these points are removed from the unit interval. You may thus think of the escape region as a subinterval that absorbs any particles which are mapped onto it.
Figure 6.6 A generalization of the Bernoulli shift, defined as a parameter-dependent map Ba ( x ) modeling an open system. The slope a defines a control parameter, Δ denotes the width of the escape region.
We now wish to compute the number of points N_n remaining on the unit interval at time step n, where we start from a uniform distribution of N_0 = N points on this interval at n = 0. This can be done as follows. Recall that the probability density ρ_n(x) was defined by

\rho_n(x) := \frac{\text{number of points } N_{n,j} \text{ in the interval } dx \text{ centered around position } x_j \text{ at time step } n}{\text{total number of points } N \text{ times width } dx}   (6.33)
where N_n = \sum_j N_{n,j}. With this we have that

N_1 = N_0 - \rho_0 N \Delta   (6.34)

By observing that, for B_a(x), starting from ρ_0 = 1 points are always uniformly distributed on the unit interval at subsequent iterations, we can derive an equation for the density ρ_1 of points covering the unit interval at the next time step n = 1. For this purpose we divide the above equation by the total number of points N (multiplied by the total width of the unit interval which, however, is one). This yields

\rho_1 = \frac{N_1}{N} = \rho_0 - \rho_0 \Delta = \rho_0 (1 - \Delta)   (6.35)

This procedure can be reiterated starting now from

N_2 = N_1 - \rho_1 N \Delta   (6.36)

leading to

\rho_2 = \frac{N_2}{N} = \rho_1 (1 - \Delta)   (6.37)

and so on. For general n we thus obtain

\rho_n = \rho_{n-1} (1 - \Delta) = \rho_0 (1 - \Delta)^n = \rho_0 e^{n \ln(1 - \Delta)}   (6.38)

or correspondingly

N_n = N_0 e^{n \ln(1 - \Delta)}   (6.39)
which suggests the following definition.
Definition 7 For an open system with an exponential decrease in the number of points,

N_n = N_0 e^{-\gamma n}   (6.40)

γ is called the escape rate.
In the case of our mapping we thus identify

\gamma = \ln \frac{1}{1 - \Delta}   (6.41)

as the escape rate.
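This escape rate is easily checked by a direct simulation: iterate a uniform ensemble under the open map with a = 3 (so Δ = 1/3) and record the fraction of survivors. A minimal sketch:

import math
import random

def B3(x):
    # open map (6.32) with slope a = 3; images outside [0, 1) have escaped
    return 3.0 * x if x < 0.5 else 3.0 * x - 2.0

N0 = 10**6
points = [random.random() for _ in range(N0)]
for n in range(1, 11):
    points = [y for y in (B3(x) for x in points) if 0.0 <= y < 1.0]
    print(n, -math.log(len(points) / N0) / n)   # estimate of gamma
print(math.log(1.5))                            # gamma = ln(1/(1 - 1/3)) = ln(3/2)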
We may now consider whether there are any initial conditions that never leave the unit interval, and also wonder about the character of this set of points. The set can be constructed as exemplified for B_a(x), a = 3, in Figure 6.7.
Figure 6.7 Construction of the set C B3 of the initial conditions of the map B3 ( x ) which never leave the unit interval.
Example 11 Let us start again with a uniform distribution of points on
the unit interval. We can then see that the points which remain on the unit interval after one iteration of the map form two sets, each of length 1/3. Iterating the boundary points of the escape region backwards in time according to x_n = B_3^{-1}(x_{n+1}), we can obtain all pre-images of the escape region. We find that initial points which remain on the unit interval after two iterations belong to four smaller sets, each of length 1/9, as depicted at the bottom of Figure 6.7. Repeating this procedure infinitely many times reveals that the points which never leave the unit interval form the very special set C_{B_3}, which is known as the middle-third Cantor set.
Definition 8 Cantor set [6, 15]
A Cantor set is a closed set which consists entirely of boundary points each of which is a limit point of the set.
Let us explore some fundamental properties of the set C_{B_3} [6]:
1. From Figure 6.7 we can infer that the total length l_n of the intervals of points remaining on the unit interval after n iterations, which is identical with the Lebesgue measure μ_L of these sets, is

l_0 = 1 , \quad l_1 = \frac{2}{3} , \quad l_2 = \frac{4}{9} = \left( \frac{2}{3} \right)^2 , \quad \ldots , \quad l_n = \left( \frac{2}{3} \right)^n   (6.42)

We thus see that

l_n = \left( \frac{2}{3} \right)^n \to 0 \quad (n \to \infty)   (6.43)
that is, the total length of this set goes to zero, μ_L(C_{B_3}) = 0. However, there are also Cantor sets whose Lebesgue measure is larger than zero [6]. Note that matching l_n = exp(−n ln(3/2)) to (6.41) yields an escape rate of γ = ln(3/2) for this map.
2. By using the binary encoding (6.20) for all intervals of C_{B_3}, thus mapping all elements of this set onto all the numbers in the unit interval, it can nevertheless be shown that our Cantor set contains an uncountable number of points [2, 24].
3. By construction C_{B_3} must be the invariant set of the map B_3(x) under iteration, so the invariant measure of our open system must be the measure defined on the Cantor set, μ∗(C), C ∈ C_{B_3} [18]; see the following Example 12 for the procedure used to calculate this measure.
4. For the next property we need the following definition:
Definition 9 repeller [2, 7]
The limit set of points which never escape is called a repeller. The orbits which escape are transients, and 1/γ is their typical duration.
From this we can conclude that C_{B_3} represents the repeller of the map B_3(x).
5. Since C_{B_3} is completely disconnected, consisting only of boundary points, its topology is highly singular. Consequently, no invariant density ρ∗(x) can be defined on this set, since this concept presupposes a certain 'smoothness' of the underlying topology such that
one can meaningfully speak of 'small subintervals dx' on which one counts the number of points, see (6.33). In contrast, μ∗(C) is still well-defined,14) and we speak of it as a singular measure [2, 12].
6. Figure 6.7 shows that C_{B_3} is self-similar, in the sense that smaller pieces of this structure reproduce the entire set upon magnification [6]. Here we find that the whole set can be reproduced by magnifying the fundamental structure of two subsets with a gap in the middle by a constant factor of three. Often such a simple scaling law does not exist for these types of sets. Instead, the scaling may depend on the position x of the subset, in which case one speaks of a self-affine structure [25, 26].
7. Again we need a definition:
Definition 10 fractals, qualitatively [7, 26]
Fractals are geometrical objects that possess nontrivial structure on arbitrarily fine scales.
In the case of our Cantor set C_{B_3}, these structures are generated by a simple scaling law. However, generally fractals can be arbitrarily complicated on finer and finer scales. A famous example of a fractal in nature, mentioned in the pioneering book by Mandelbrot [25], is the coastline of Britain. An example of a structure that is trivial, hence not fractal, is a straight line. The fractality of such complicated sets can be assessed by quantities called fractal dimensions [6, 7], which generalize the integer dimensionality of Euclidean geometry. It is interesting how, in our case, fractal geometry naturally comes into play, forming an important ingredient of the theory of dynamical systems. However, here we do not further elaborate on the concept of fractal geometry and refer to the literature instead [6, 24–26].
Example 12 Let us now compute all three basic quantities that we have
introduced so far, that is: the Ljapunov exponent λ and the KS-entropy h_KS on the invariant set, as well as the escape rate γ from this set. We do so for the map B_3(x) which, as we have learned, produces a fractal repeller. According to (6.12) and (6.14) we have to calculate

\lambda(C_{B_3}) = \int_0^1 d\mu^* \, \ln |B_3'(x)|   (6.44)
14) This is one of the reasons why mathematicians prefer to deal with measures instead of densities.
However, for typical points we have B_3'(x) = 3, hence the Ljapunov exponent must trivially be

\lambda(C_{B_3}) = \ln 3   (6.45)
because the probability measure μ∗ is normalized. The calculation of the KS-entropy requires a bit more work. Recall that

H(\{C_i^n\}) := -\sum_{i=1}^{2^n} \mu^*(C_i^n) \ln \mu^*(C_i^n)   (6.46)
see (6.27), where C_i^n denotes the ith part of the emerging Cantor set at the nth level of its construction. We now proceed along the lines of Example 7. From Figure 6.7 we can infer that

\mu^*(C_i^1) = \frac{1/3}{2/3} = \frac{1}{2}

at the first level of refinement. Note that here we have renormalized the (Lebesgue) measure on the partition part C_i^1. That is, we have divided the measure by the total measure surviving on all partition parts such that we always arrive at a proper probability measure under iteration. The measure constructed in this way is known as the conditionally invariant measure on the Cantor set [7, 27]. Repeating this procedure yields

\mu^*(C_i^2) = \frac{1/9}{4/9} = \frac{1}{4}
\vdots
\mu^*(C_i^n) = \frac{(1/3)^n}{(2/3)^n} = 2^{-n}   (6.47)
from which we obtain

H(\{C_i^n\}) = -\sum_{i=1}^{2^n} 2^{-n} \ln 2^{-n} = n \ln 2   (6.48)
We thus see that, by taking the limit according to (6.29) and noting that our partitioning is generating on the fractal repeller C_{B_3} = \{C_i^\infty\}, we arrive at

h_{KS}(C_{B_3}) = \lim_{n \to \infty} \frac{1}{n} H(\{C_i^n\}) = \ln 2   (6.49)
Finally, with (6.41) and an escape region of size Δ = 1/3 for B_3(x), we get for the escape rate

\gamma(C_{B_3}) = \ln \frac{1}{1 - \Delta} = \ln \frac{3}{2}   (6.50)
as we have already seen before.15) In summary, we have γ(C_{B_3}) = ln(3/2) = ln 3 − ln 2, λ(C_{B_3}) = ln 3, and h_KS(C_{B_3}) = ln 2, which suggests the relation

\gamma(C_{B_3}) = \lambda(C_{B_3}) - h_{KS}(C_{B_3})   (6.51)

Again, this equation is no coincidence. It is a generalization of Pesin's theorem to open systems, known as the escape rate formula [29]. It holds under conditions similar to those of Pesin's theorem, which is recovered from it if there is no escape [2].
6.3 Deterministic Diffusion
We now apply the concepts of dynamical systems theory, developed in the previous section, to a fundamental problem in nonequilibrium statistical physics, which is to understand the microscopic origin of diffusion in many-particle systems. We start with a reminder of diffusion as a simple random walk on the line. Modeling such processes by suitably generalizing the piecewise linear map studied previously, we will see how diffusion can be generated by microscopic deterministic chaos. The main result will be an exact formula relating the diffusion coefficient, which characterizes macroscopic diffusion of particles, to the dynamical systems quantities introduced before. In Section 6.3.1, which draws upon Section 2.1 of [4], we explain the basic idea of deterministic diffusion and introduce our model. Section 6.3.2, which is partially based on [8, 30, 31], outlines a method for calculating the diffusion coefficient exactly for such types of dynamical systems.
15) Note that the escape rate will generally depend not only on the size but also on the position of the escape interval [28].
6.3.1 What is Deterministic Diffusion?
In order to learn about deterministic diffusion, we must first understand what ordinary diffusion is all about. Here we introduce this concept by means of a famous example, see Figure 6.8. Let us imagine that some evening a sailor wants to walk home, but he is completely drunk so that he has no control over his single steps. For the sake of simplicity let us imagine that he moves in one dimension. He starts at a lamppost at position x = 0 and then makes steps of a certain step length s to the left and to the right. Since he is completely drunk he loses all memory between single steps, that is, all steps are uncorrelated. It is like tossing a coin in order to decide whether to go to the left or to the right at the next step. We now ask for the probability of finding the sailor after n steps at position x, i.e. a distance |x| away from his starting point. Let us add a short historical note. This 'problem of considerable interest' was first formulated by Karl Pearson in a letter to Nature in 1905 [32]. He asked for a solution, which was provided by Lord Rayleigh referring to older work by himself [33]. Pearson concluded: 'The lesson of Lord Rayleigh's solution is that in open country the most probable place to find a drunken man, who is at all capable of keeping on his feet, is somewhere near his starting point' [32]. This refers to the Gaussian probability distributions for the sailor's positions, which are obtained in a suitable scaling limit from a gedankenexperiment with
Figure 6.8 The ‘problem of the random walk’ in terms of a drunken sailor at a lamppost. The space-time diagram shows an example of a trajectory for such a drunken sailor, where n ∈ N holds for discrete time and x ∈ R for the position of the sailor on a discrete lattice of spacing s.
an ensemble of sailors starting from the lamppost. Figure 6.9 sketches the spreading of such a diffusing distribution of sailors with time. The mathematical reason for the emerging Gaussianity of the probability distributions is nothing other than the central limit theorem [34].
Figure 6.9 Probability distribution functions ρn ( x ) to find a sailor after n time steps at position x on the line, calculated for an ensemble of sailors starting at the lamppost, cf. Figure 6.8. Shown are three probability densities after different numbers of iterations n1 < n2 < n3 .
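This gedankenexperiment is easy to mimic numerically. The following minimal sketch releases an ensemble of independent walkers at the lamppost (step length s = 1 assumed) and shows that the mean-square displacement of the ensemble grows linearly with the number of steps, in accordance with the Einstein formula discussed next:

import random
import statistics

def drunken_sailor(n):
    # position after n uncorrelated unit steps to the left or right
    return sum(random.choice((-1, 1)) for _ in range(n))

ensemble = 10**4
for n in (10, 100, 1000):
    msd = statistics.fmean(drunken_sailor(n)**2 for _ in range(ensemble))
    print(n, msd / (2 * n))   # <x^2> ~ n; the ratio settles at D = 1/2 here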
We may now wish to quantify the speed by which a 'droplet of sailors' starting at the lamppost spreads out. This can be done by calculating the diffusion coefficient for this system. In the case of one-dimensional dynamics the diffusion coefficient can be defined by the Einstein formula

D := \lim_{n \to \infty} \frac{\langle x^2 \rangle}{2n}   (6.52)

where

\langle x^2 \rangle := \int dx \, x^2 \rho_n(x)   (6.53)
is the variance, or second moment, of the probability distribution ρ_n(x) at time step n, also called the mean-square displacement of the particles. This formula may be understood as follows. For our ensemble of sailors we may choose ρ_0(x) = δ(x) as the initial probability distribution, with δ(x) denoting the (Dirac) δ-function, which mimics the situation that all sailors start at the same lamppost at x = 0. If our system is ergodic, the diffusion coefficient should be independent of the choice of the initial ensemble. The spreading of the distribution of sailors is then
quantified by the growth of the mean-square displacement in time. If this quantity grows linearly in time, which may not necessarily be the case but holds true if our probability distributions for the positions are Gaussian in the long-time limit [4], the magnitude of the diffusion coefficient D tells us how quickly our ensemble of sailors disperses. For further details about a statistical physics description of diffusion we refer to the literature [34].
In contrast to this well-known picture of diffusion as a stochastic random walk, the theory of dynamical systems makes it possible to treat diffusion as a deterministic dynamical process. Let us replace the sailor by a point particle. Instead of coin tossing, the orbit of such a particle starting at initial condition x_0 may then be generated by a chaotic dynamical system of the type considered in the previous sections, x_{n+1} = F(x_n). Note that defining the one-dimensional map F(x) together with this equation yields the full microscopic equations of motion of the system. You may think of these equations as a caricature of Newton's equations of motion modeling the diffusion of a single particle. Most importantly, in contrast to the drunken sailor with his memory loss after any time step, here the complete memory of a particle is taken into account, that is, all steps are fully correlated. The decisive new fact that distinguishes this dynamical process from that of a simple uncorrelated random walk is hence that x_{n+1} is uniquely determined by x_n, rather than having a random distribution of x_{n+1} for a given x_n. If the resulting dynamics of an ensemble of particles for given equations of motion has the property that a diffusion coefficient D > 0 (6.52) exists, we speak of (normal)16) deterministic diffusion [1–4, 8].
Figure 6.10 shows the simple model of deterministic diffusion that we shall study in this section. It depicts a 'chain of boxes' of chain length L ∈ N, which continues periodically in both directions to infinity, and the orbit of a moving point particle. Let us first specify the map defined on the unit interval, which we may call the box map. For this we choose the map B_a(x) introduced in Example 10. We can now periodically continue this box map onto the whole real line by a lift of degree one,

B_a(x + 1) = B_a(x) + 1   (6.54)
16) See Section 6.4.1 for another type of diffusion, where D is either zero or infinite, which is called anomalous diffusion.
Figure 6.10 A simple model for deterministic diffusion. The dashed line depicts the orbit of a diffusing particle in the form of a cobweb plot [13]. The slope a serves as a control parameter for the periodically continued piecewise linear map Ba ( x ).
for which the acronym old has been introduced [18]. Physically speaking, this means that B_a(x) continued onto the real line is translationally invariant with respect to integers. Note furthermore that we have chosen a box map whose graph is point symmetric with respect to the center of the box at (x, y) = (0.5, 0.5). This implies that the graph of the full map B_a(x) is anti-symmetric with respect to x = 0,

B_a(x) = -B_a(-x)   (6.55)

so that there is no 'drift' in this chain of boxes. The drift case with broken symmetry could be studied as well [4], but we exclude it here for the sake of simplicity.
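Deterministic diffusion in this chain of boxes can be observed directly by iterating an ensemble of particles under the lifted map. The sketch below uses a = 4, the parameter treated in detail in the next section; exact rational arithmetic is used because floating-point orbits of piecewise linear maps with integer slopes lose binary digits at every step and collapse after a few dozen iterations.

import random
from fractions import Fraction

def lifted_B4(x):
    # lifted map (6.54) with box map (6.32) for a = 4
    k = x // 1                        # box index (floor; an int for Fractions)
    u = x - k                         # position within the box, 0 <= u < 1
    y = 4 * u if u < Fraction(1, 2) else 4 * u - 3
    return k + y

q = 2**61 - 1                         # common odd denominator of the ensemble
x0 = [Fraction(random.randrange(q), q) for _ in range(2000)]
x = list(x0)
for n in range(1, 51):
    x = [lifted_B4(xi) for xi in x]
    if n % 10 == 0:
        msd = sum(float((xi - x0i)**2) for xi, x0i in zip(x, x0)) / len(x)
        print(n, msd / (2 * n))       # settles near D(4) = 1/4, cf. (6.84)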
6.3.2 Escape Rate Formalism for Deterministic Diffusion
Before we can apply this method, let us remind ourselves of the elementary theory of diffusion in the form of the diffusion equation. We then outline the basic idea underlying the escape rate formalism, which eventually yields a simple formula expressing diffusion in terms of dynamical systems quantities. Finally, we work out this approach for the deterministic model introduced before.
6.3.2.1 The Diffusion Equation
In the last section we have sketched in a nutshell what, in our setting, we mean when we speak of diffusion. This picture is made more precise by deriving an equation which exactly generates the dynamics of the probability densities displayed in Figure 6.9 [34]. For this purpose, let us reconsider for a moment the situation depicted in Figure 6.4. There, we had a gas with an initially very high concentration of particles on the left-hand side of the box. After the piston was removed, it seemed natural for the particles to spread out over the right-hand side of the box as well, thus diffusively covering the whole box. We may thus come to the conclusion that, first, there will be diffusion if the density of particles in a substance is nonuniform in space. For this density of particles and, restricting ourselves to diffusion in one dimension in the following, let us write ñ = ñ(x, t), which holds for the number of particles that we can find in a small line element dx around the position x at time t, divided by the total number of particles N.17) As a second observation, we see that diffusion occurs in the direction of decreasing particle density. This may be expressed as

j =: -D \frac{\partial \tilde{n}}{\partial x}   (6.56)
which according to Einstein's formula (6.52) may be considered as a second definition of the diffusion coefficient D. Here the flux j = j(x, t) denotes the number of particles passing through an area perpendicular to the direction of diffusion per time t. This equation is known as Fick's first law. Finally, let us assume that no particles are created or destroyed
17) Note the fine distinction between ñ(x, t) and our previous ρ_n(x), (6.33), in that here we consider continuous time t for the moment, and all our particles may interact with each other.
during our diffusion process. In other words, we have conservation of the number of particles in the form of

\frac{\partial \tilde{n}}{\partial t} + \frac{\partial j}{\partial x} = 0   (6.57)

This continuity equation expresses the fact that whenever the particle density ñ changes in time t, it must be due to a spatial change in the particle flux j. Combining this equation with Fick's first law we obtain Fick's second law,

\frac{\partial \tilde{n}}{\partial t} = D \frac{\partial^2 \tilde{n}}{\partial x^2}   (6.58)
which is also known as the diffusion equation. Mathematicians call the process defined by this equation a Wiener process, whereas physicists refer to it as Brownian motion. If we now solve the diffusion equation for the drunken sailor initial density ñ(x, 0) = δ(x), we obtain the precise functional form of our spreading Gaussians in Figure 6.9,

\tilde{n}(x, t) = \frac{1}{\sqrt{4 \pi D t}} \exp \left( -\frac{x^2}{4 D t} \right)   (6.59)

Calculating the second moment of this distribution according to (6.53) leads us to recover Einstein's definition of the diffusion coefficient (6.52). Therefore, both this definition and the one provided by Fick's first law are consistent with each other.
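As a quick consistency check of this claim, one can integrate x^2 against the Gaussian (6.59) numerically and verify that the second moment equals 2Dt, so that (6.52) indeed returns D. A minimal sketch:

import math

def n_tilde(x, t, D):
    # Gaussian solution (6.59) for the initial density delta(x)
    return math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

def second_moment(t, D, dx=0.01, cutoff=50.0):
    # crude Riemann sum for the integral of x^2 n(x, t) over the real line
    ks = range(int(-cutoff / dx), int(cutoff / dx) + 1)
    return sum((k * dx)**2 * n_tilde(k * dx, t, D) * dx for k in ks)

D = 0.5
for t in (1.0, 2.0, 5.0):
    print(t, second_moment(t, D), 2 * D * t)   # the two columns agree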
6.3.2.2 Basic Idea of the Escape Rate Formalism
We are now fully prepared to establish an interesting link between dynamical systems theory and statistical mechanics. We start with a brief outline of the concept of this theory, which is called the escape rate formalism, pioneered by Gaspard and others [2, 3, 35]. It consists of three steps:
Step 1 Solve the one-dimensional diffusion equation (6.58) derived above for absorbing boundary conditions. That is, we now consider some type of open system similar to what we have studied in the previous section. We may thus expect that the total number of particles N(t) := \int dx \, \tilde{n}(x, t) within the system decreases exponentially as time evolves, according to the law expressed by (6.40), that is,

N(t) = N(0) e^{-\gamma_{de} t}   (6.60)
It will turn out that the escape rate γ_de defined by the diffusion equation with absorbing boundaries is a function of the system size L and of the diffusion coefficient D.
Step 2 Solve the Frobenius–Perron equation

\rho_{n+1}(x) = \int dy \, \rho_n(y) \, \delta(x - F(y))   (6.61)
which represents the continuity equation for the probability density ρ_n(x) of the map F(x) [2, 6, 7], for the very same absorbing boundary conditions as in Step 1. Let us assume that the dynamical system under consideration is normal diffusive, that is, that a diffusion coefficient D > 0 exists. We may then expect a decrease in the number of particles that is completely analogous to what we have obtained from the diffusion equation. That is, if we define as before N_n := \int dx \, \rho_n(x) as the total number of particles within the system at discrete time step n, in the case of normal diffusion we should obtain

N_n = N_0 e^{-\gamma_{FP} n}   (6.62)
However, in contrast to Step 1, here the escape rate γ_FP should be fully determined by the dynamical system that we are considering. In fact we have already seen before that, for open systems, the escape rate can be expressed exactly as the difference between the positive Ljapunov exponent and the KS-entropy on the fractal repeller; cf. the escape rate formula (6.51).
Step 3 If the functional forms of the particle density ñ(x, t) of the diffusion equation and of the probability density ρ_n(x) of the map's Frobenius–Perron equation match in the limit of system size and time going to infinity – which is what one has to show – the escape rates γ_de obtained from the diffusion equation and γ_FP calculated from the Frobenius–Perron equation should be equal,

\gamma_{de} = \gamma_{FP}   (6.63)
providing a fundamental link between the statistical physical theory of diffusion and dynamical systems theory. Since γde is a function of the diffusion coefficient D, and knowing that γFP is a function of dynamical systems quantities, we should then be able to express D exactly in terms of these dynamical systems quantifiers. We will now illustrate how
this method works by applying it to our simple deterministic diffusive model introduced above.
6.3.2.3 The Escape Rate Formalism Worked out for a Simple Map
Let us consider the map Ba ( x ) lifted onto the whole real line for the specific parameter value a = 4, see Figure 6.11. With L we denote the chain length. Proceeding along the above lines, let us start with:
Figure 6.11 Our previous map Ba ( x ) periodically continued onto the whole real line for the specific parameter value a = 4. The example shown depicts a chain of length L = 3. The dashed quadratic grid indicates a Markov partition for this map.
Step 1 Solve the one-dimensional diffusion equation (6.58) for the absorbing boundary conditions

\tilde{n}(0, t) = \tilde{n}(L, t) = 0   (6.64)

which models the situation where particles escape precisely at the boundaries of our one-dimensional domain. A straightforward calculation yields

\tilde{n}(x, t) = \sum_{m=1}^{\infty} b_m \exp \left( -\left( \frac{m \pi}{L} \right)^2 D t \right) \sin \left( \frac{m \pi}{L} x \right)   (6.65)

with b_m denoting the Fourier coefficients.
Step 2 Solve the Frobenius–Perron equation (6.61) for the same absorbing boundary conditions,

\rho_n(0) = \rho_n(L) = 0   (6.66)
In order to do so, we first need to introduce a concept called Markov partition for our map B_4(x):
Definition 11 Markov partition, verbally [7, 12]
For one-dimensional maps acting on compact intervals a partition is called Markov if parts of the partition get mapped again onto parts of the partition, or onto unions of parts of the partition.
Example 13 The dashed quadratic grid in Figure 6.11 defines a Markov partition for the lifted map B_4(x).
Of course there also exists a precise formal definition of Markov partitions, but here we do not elaborate on these technical details [12]. Having a Markov partition at hand enables us to rewrite the Frobenius–Perron equation in the form of a matrix equation, where a Frobenius–Perron matrix operator acts onto probability density vectors defined with respect to this special partitioning.18) In order to see this, consider an initial density of points that covers, e.g. the interval in the second box of Figure 6.11 uniformly. By applying the map onto this density, one observes that points of this interval get mapped two-fold onto the interval in the second box again, but that there is also an escape from this box which uniformly covers the third and the first box intervals, respectively. This mechanism applies to any box in our chain of boxes, modified only by the absorbing boundary conditions at the ends of the chain of length L. Taking into account the stretching of the density by the slope a = 4 at each iteration, this suggests that the Frobenius–Perron equation (6.61) can be rewritten as

\rho_{n+1} = \frac{1}{4} T(4) \, \rho_n   (6.67)

18) Implicitly we herewith choose a specific space of functions, which are tailored to the study of the statistical dynamics of our piecewise linear maps; see [36] for mathematical details on this type of method.
where the L × L transition matrix T(4) must read

T(4) = \begin{pmatrix}
2 & 1 & 0 & 0 & \cdots & 0 & 0 & 0 \\
1 & 2 & 1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 2 & 1 & 0 & 0 & \cdots & 0 \\
\vdots & & & \ddots & & & & \vdots \\
0 & \cdots & 0 & 0 & 1 & 2 & 1 & 0 \\
0 & 0 & \cdots & 0 & 0 & 1 & 2 & 1 \\
0 & 0 & 0 & \cdots & 0 & 0 & 1 & 2
\end{pmatrix}   (6.68)
Note that in any row and in any column we have three nonzero matrix elements, except in the very first and the very last rows and columns, which reflect the absorbing boundary conditions. In (6.67) this transition matrix T(4) is applied to a column vector ρ_n corresponding to the probability density ρ_n(x), which can be written as

\rho_n = |\rho_n(x)\rangle := (\rho_n^1, \rho_n^2, \ldots, \rho_n^k, \ldots, \rho_n^L)^*   (6.69)

where '∗' denotes the transpose and ρ_n^k represents the component of the probability density in the kth box, ρ_n(x) = ρ_n^k, k − 1 < x ≤ k, k = 1, . . . , L, ρ_n^k being constant on each part of the partition. We see that this transition matrix is symmetric, hence it can be diagonalized by spectral decomposition. Solving the eigenvalue problem

T(4) \, |\phi_m(x)\rangle = \chi_m(4) \, |\phi_m(x)\rangle   (6.70)

where χ_m(4) and |φ_m(x)⟩ are the eigenvalues and eigenvectors of T(4), respectively, one obtains

|\rho_n(x)\rangle = \frac{1}{4} \sum_{m=1}^{L} \chi_m(4) \, |\phi_m(x)\rangle \langle \phi_m(x) | \rho_{n-1}(x)\rangle = \sum_{m=1}^{L} \exp \left( -n \ln \frac{4}{\chi_m(4)} \right) |\phi_m(x)\rangle \langle \phi_m(x) | \rho_0(x)\rangle   (6.71)
where |ρ_0(x)⟩ is the initial probability density vector. Note that the choice of initial probability densities is restricted by this method to functions that can be written in the vector form of (6.69). It remains to solve the eigenvalue problem (6.70) [30, 31]. The eigenvalue equation for the single components of the matrix T(4) reads

\phi_m^k + 2 \phi_m^{k+1} + \phi_m^{k+2} = \chi_m \phi_m^{k+1} , \quad 0 \le k \le L - 1   (6.72)
supplemented by the absorbing boundary conditions

\phi_m^0 = \phi_m^{L+1} = 0   (6.73)
This equation has the form of a discretized ordinary differential equation of degree two, hence we make the ansatz

\phi_m^k = a \cos(k \theta) + b \sin(k \theta) , \quad 0 \le k \le L + 1   (6.74)
The two boundary conditions lead to a = 0 and

\sin((L + 1) \theta) = 0   (6.75)

yielding

\theta_m = \frac{m \pi}{L + 1} , \quad 1 \le m \le L   (6.76)
The eigenvectors are then determined by

\phi_m^k = b \sin(k \theta_m)   (6.77)

Combining this equation with (6.72) yields as the eigenvalues

\chi_m = 2 + 2 \cos \theta_m   (6.78)
Step 3 Putting all details together, it remains to match the solution of the diffusion equation to that of the Frobenius–Perron equation: In the limit of time t and system size L to infinity, the density ñ(x, t) (6.65) of the diffusion equation reduces to the largest eigenmode,

\tilde{n}(x, t) \simeq \exp(-\gamma_{de} t) \, B \sin \left( \frac{\pi}{L} x \right)   (6.79)

where

\gamma_{de} := \left( \frac{\pi}{L} \right)^2 D   (6.80)
defines the escape rate as determined by the diffusion equation. Analogously, for discrete time n and chain length L to infinity we obtain for the probability density of the Frobenius–Perron equation, (6.71) with (6.77),
\rho_n(x) \simeq \exp(-\gamma_{FP} \, n) \, \tilde{B} \sin \left( \frac{\pi k}{L + 1} \right) , \quad k - 1 < x \le k , \quad k = 0, \ldots, L + 1   (6.81)
with an escape rate for this dynamical system given by

\gamma_{FP} = \ln \frac{4}{2 + 2 \cos(\pi/(L + 1))}   (6.82)
which is determined by the largest eigenvalue χ1 of the matrix T (4), see (6.71) with (6.78). We can now see that the functional forms of the eigenmodes of (6.79) and (6.81) match precisely.19) This allows us to match (6.80) and (6.82) leading to
D(4) = \left( \frac{L}{\pi} \right)^2 \gamma_{FP}   (6.83)
Using the right-hand side of (6.82) and expanding it for L → ∞, this formula enables us to calculate the diffusion coefficient D(4) as

D(4) = \left( \frac{L}{\pi} \right)^2 \gamma_{FP} = \frac{1}{4} \frac{L^2}{(L + 1)^2} + O(L^{-4}) \to \frac{1}{4} \quad (L \to \infty)   (6.84)
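This finite-size expression is easy to verify by diagonalizing T(4) directly, without using the analytical eigenvalues; a brief numerical sketch:

import math
import numpy as np

def D_of_L(L):
    # transition matrix (6.68): 2 on the diagonal, 1 on the off-diagonals
    T = 2.0 * np.eye(L) + np.eye(L, k=1) + np.eye(L, k=-1)
    chi1 = np.linalg.eigvalsh(T).max()   # largest eigenvalue, (6.78) with m = 1
    gamma_fp = math.log(4.0 / chi1)      # escape rate (6.82)
    return (L / math.pi)**2 * gamma_fp   # diffusion coefficient (6.83)

for L in (5, 10, 100, 1000):
    print(L, D_of_L(L))                  # approaches D(4) = 1/4 from below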
Thus we have developed a method by which we can exactly calculate the deterministic diffusion coefficient of a simple chaotic dynamical system. However, more importantly, instead of using the explicit expression for γ_FP given by (6.82), let us remind ourselves of the escape rate formula (6.51) for γ_FP,

\gamma_{FP} = \gamma(C_{B_4}) = \lambda(C_{B_4}) - h_{KS}(C_{B_4})   (6.85)
which more generally expresses this escape rate in terms of dynamical systems quantities. Combining this equation with the above equation (6.83) leads to our final result, the escape rate formula for deterministic diffusion [3, 35]

D(4) = \lim_{L \to \infty} \left( \frac{L}{\pi} \right)^2 \left[ \lambda(C_{B_4}) - h_{KS}(C_{B_4}) \right]   (6.86)
We have thus established a fundamental link between quantities assessing the chaotic properties of dynamical systems and the statistical physical property of diffusion.
19) We remark that there are discretization effects in the time and position variables, which are due to the fact that we compare a time-discrete system defined on a specific partition with time-continuous dynamics. They disappear in the limit of time to infinity by using a suitable spatial coarse graining.
Remark 3
1. Above, we have only considered the special case of the control parameter a = 4. Along the same lines, the diffusion coefficient can be calculated for other parameter values of the map B_a(x). Surprisingly, the parameter-dependent diffusion coefficient of this map turns out to be a fractal function of the control parameter [8, 30, 31]. This result is believed to hold for a wide class of dynamical systems [4].
2. The escape rate formula for diffusion holds not only for simple one-dimensional maps but can be generalized to higher-dimensional time-discrete as well as time-continuous dynamical systems [2, 3].
3. This approach can also be generalized to establish relations between chaos and transport for other transport coefficients such as viscosity, heat conduction and chemical reaction rates [3].
4. This is not the only approach connecting transport properties with dynamical systems quantities. In recent research it was found that there exist at least two other ways to establish relations that are different but of a very similar nature; see [4] for further details.
6.4 Anomalous Diffusion
In the last section we have explored diffusion for a simple piecewise linear map. One may now wonder what type of diffusive behavior is encountered if we consider more complicated models. Straightforward generalizations are nonlinear maps generating intermittency. In Section 6.4.1 we briefly illustrate the phenomenon of intermittency and introduce the concept of anomalous diffusion. We then give an outline of continuous time random walk theory, which is a powerful tool of stochastic theory that models anomalous diffusion. By using this method we derive a fractional diffusion equation, which generalizes Fick’s second law that we have encountered before for this type of anomalous diffusion. After having restricted ourselves to rather abstract, but mostly solvable, models we conclude our review by discussing an experiment which gives evidence for the existence of anomalous diffusion in a fundamental biological process. Section 6.4.2 first introduces the problem of cell migration. We then present experimental results for two different cell types and explain them by suggesting a model reproducing the observed anomalous dynamics of cell migration.
Section 6.4.1 particularly draws on [9, 10]; see also Section 6.2 of [4]. Section 6.4.2 is based on [11]. For more general introductions to the very active field of anomalous transport see, e.g. [5, 37–39].
6.4.1 Anomalous Diffusion in Intermittent Maps
6.4.1.1 What is Anomalous Diffusion?
Let us consider a simple variant of our previous piecewise linear model, which is the Pomeau–Manneville map [40]

P_{a,z}(x) = x + a x^z \mod 1   (6.87)
see Figure 6.12, where as usual the dynamics is defined by x_{n+1} = P_{a,z}(x_n). This map has two control parameters, a ≥ 1 and the exponent of nonlinearity z ≥ 1. For a = 1 and z = 1 this map just reduces to our familiar Bernoulli shift (6.3). However, for z > 1 it provides a nontrivial, nonlinear generalization of it. The nontriviality is due to the fact that, in this case, the stability of the fixed point at x = 0 becomes marginal (sometimes also called indifferent, or neutral), P'_{a,z}(0) = 1. Since the map is smooth around x = 0, the dynamics resulting from the left branch of the map is determined by the stability of this fixed point, whereas the right branch is just of Bernoulli shift-type, yielding
Figure 6.12 The Pomeau–Manneville map (6.87) for a = 1 and z = 3. Note that there is a marginal fixed point at x = 0 leading to the intermittent behavior depicted in Figure 6.13.
ordinary chaotic dynamics. There is thus a competition in the dynamics between these two different branches as illustrated by Figure 6.13: One can observe that long periodic laminar phases determined by the marginal fixed point around x = 0 are interrupted by chaotic bursts reflecting the ‘Bernoulli shift-like part’ of the map with slope a > 1 around x = 1. This phenomenology is the hallmark of what is called intermittency [1, 6].
Figure 6.13 Phenomenology of intermittency in the Pomeau–Manneville map Figure 6.12. The plot shows the time series of position xn versus discrete time step n for an orbit generated by the map (6.87), which starts at a typical initial condition x0 .
Following Section 6.3.1 it is now straightforward to define a spatially extended version of the Pomeau–Manneville map. For this purpose we just continue P_{a,z}(x) = x + a x^z, 0 ≤ x < 1/2, in (6.87) onto the real line by the translation P_{a,z}(x + 1) = P_{a,z}(x) + 1, see (6.54), under reflection symmetry P_{a,z}(−x) = −P_{a,z}(x), see (6.55). The resulting model [41, 42] is displayed in Figure 6.14. As before, we may now be interested in the type of deterministic diffusion generated by this model. Surprisingly, by calculating the mean-square displacement (6.53) either analytically or from computer simulations, one finds that for z > 2
\langle x^2 \rangle \sim n^{\alpha} , \quad \alpha < 1 \quad (n \to \infty)   (6.88)
This implies that the diffusion coefficient D := \lim_{n \to \infty} \langle x^2 \rangle / (2n) as defined by (6.52) is simply zero, despite the fact that particles can go anywhere on the real line as shown in Figure 6.14. We thus encounter a novel type of diffusive behavior classified by the following definition:
Figure 6.14 The Pomeau–Manneville map Figure 6.12, (6.87), lifted symmetrically onto the whole real line such that it generates subdiffusion.
Definition 12 Anomalous diffusion [5, 39]
If the exponent α in the temporal spreading of the mean-square displacement (6.88) of an ensemble of particles is not equal to one, one refers to anomalous diffusion. If α < 1 one says that there is subdiffusion, for α > 1 there is superdiffusion, and in the case of α = 1 one refers to normal diffusion. The constant

K := \lim_{n \to \infty} \frac{\langle x^2 \rangle}{n^{\alpha}}   (6.89)

where in the case of normal diffusion K = 2D, is called the generalized diffusion coefficient.20)
We will now discuss how K behaves as a function of a for our new model and then show how the exponent α and, in an approximation, K,
20) In detail, the definition of a generalized diffusion coefficient is a bit more subtle [10].
can be calculated analytically. This can be achieved by means of continuous time random walk (CTRW) theory, which provides a generalization of the drunken sailor's model introduced in Section 6.3.1 to anomalous dynamics.
6.4.1.2 Continuous Time Random Walk Theory
Let us first study the diffusive behavior of the map displayed in Figure 6.14 by computer simulations.21) As we will explain in detail below, stochastic theory predicts that for this map

\alpha = \begin{cases} 1 , & 1 \le z < 2 \\ \frac{1}{z - 1} , & 2 \le z \end{cases}   (6.90)

[41, 42]. For all values of the second control parameter a we indeed find excellent agreement between these analytical solutions and the results for α obtained from simulations. Consequently, α is determined by (6.90) in the following when extracting the generalized diffusion coefficient K (6.89) from simulations. While in Section 6.3 we have only discussed diffusion for a specific choice of the control parameter, we now study the behavior of K as a function of a for fixed z. Computer simulation results are displayed in Figure 6.15. Magnifications of part (a) shown in parts (b) and (c) reveal self-similar-like irregularities, indicating a fractal parameter dependence of K = K(a). This fractality is highlighted by the sub-structure identified through triangles, which is repeated on finer and finer scales. The parameter values for these symbols correspond to specific series of Markov partitions. Details are explained in [8, 30, 31] for parameter-dependent diffusion in the piecewise linear one-dimensional maps studied in Section 6.3, which exhibit quite analogous structures. Over the past few years such fractal transport coefficients have been revealed for a number of different models. They are conjectured to be a typical phenomenon if the dynamical system is deterministic, low-dimensional and spatially periodic. Their origin can be understood in terms of microscopic long-range dynamical correlations that, due to topological instabilities of dynamical systems, change in a complicated manner under parameter variation. Although it has been argued that such highly irregular behavior of transport coefficients should also occur in physically realistic systems, it has not yet clearly been observed in experiments; see [4] for a review of this line of research.
21) All simulations were performed starting from a uniform, random distribution of 10^6 initial conditions on the unit interval by iterating for n = 10^4 time steps.
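The prediction (6.90) can be checked directly. The following sketch iterates an ensemble under the lifted Pomeau–Manneville map for z = 3, with a = 8 chosen (arbitrarily) large enough that jumps between boxes occur, and extracts an effective exponent α from the growth of the mean-square displacement; finite-time corrections are substantial for intermittent maps, so the estimate is only indicative.

import numpy as np

def pm_lifted(x, a=8.0, z=3.0):
    # lifted map of Figure 6.14: (6.87) on [0, 1/2), continued onto R by
    # P(x + 1) = P(x) + 1 and the reflection symmetry P(-x) = -P(x)
    k = np.floor(x)
    u = x - k
    y = np.where(u < 0.5, u + a * u**z, u - a * (1.0 - u)**z)
    return k + y

rng = np.random.default_rng(0)
x0 = rng.random(10_000)
x, msd = x0.copy(), {}
for n in range(1, 10_001):
    x = pm_lifted(x)
    if n in (100, 10_000):
        msd[n] = np.mean((x - x0)**2)

alpha = np.log(msd[10_000] / msd[100]) / np.log(100.0)
print(alpha)   # should lie near 1/(z - 1) = 0.5, cf. (6.90)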
Figure 6.15 The generalized diffusion coefficient K (6.89) as a function of a for z = 3. The curve in (a) consists of 1200 points, the dashed-dotted line displays the CTRW result K_1, (6.100), (6.102). (b) (600 points) and (c) (200 points) show magnifications of (a) close to the onset of diffusion. The dotted line in (b) is the CTRW approximation K_2, (6.101), (6.102), the dashed line represents yet another semianalytical approximation as detailed in [9, 10]. The triangles mark a specific structure appearing on finer and finer scales. The inset in (a) depicts again the model (6.87).
Instead of elaborating on the fractality in detail, here we reproduce the coarse functional form of K(a) by using stochastic CTRW theory. Pioneered by Montroll, Weiss and Scher [43–45], it yields perhaps the most fundamental theoretical approach to explain anomalous diffusion [46–48]. In further groundbreaking works by Geisel et al. and Klafter et al., this method was then adapted to sub- and superdiffusive deterministic maps [41, 42, 49, 50]. The basic assumption of the approach is that diffusion can be decomposed into two stochastic processes characterized by waiting times and jumps, respectively. Thus one has two sequences of independent identically distributed random variables, namely a sequence of positive random
6.4 Anomalous Diffusion
dom waiting times T1 , T2 , T3 , . . . with probability density function w(t) and a sequence of random jumps ζ 1 , ζ 2 , ζ 3 , . . . with a probability density function λ( x ). For example, if a particle starts at point x = 0 at time t0 = 0 and makes a jump of length ζ n at time tn = T1 + T2 + . . . + Tn , its position is x = 0 for 0 ≤ t < T1 = t1 and x = ζ 1 + ζ 2 + . . . + ζ n for tn ≤ t < tn+1 . The probability that at least one jump is performed t within the time interval [0, t) is then 0 dt w(t ) while the probability t for no jump during this time interval reads Ψ(t) = 1 − 0 dt w(t ). The master equation for the probability density function P( x, t) to find a particle at position x and time t is then P( x, t) =
∞ −∞
dx λ( x − x )
t 0
dt w(t − t ) P( x , t ) + Ψ(t)δ( x ) (6.91)
This equation has the following probabilistic meaning: the probability density function to find a particle at position x at time t is equal to the probability density function to find it at point x' at some previous time t', multiplied with the transition probability to get from (x', t') to (x, t) and integrated over all possible values of x' and t'. The second term accounts for the probability of remaining at the initial position x = 0. The most convenient representation of this equation is obtained in terms of the Fourier–Laplace transform of the probability density function,

\hat{\tilde{P}}(k, s) = \int_{-\infty}^{\infty} dx \, e^{ikx} \int_0^{\infty} dt \, e^{−st} P(x, t)   (6.92)
where the hat stands for the Fourier transform and the tilde for the Laplace transform. This function obeys the Fourier–Laplace transform of (6.91), which is called the Montroll–Weiss equation [43–45],

\hat{\tilde{P}}(k, s) = \frac{1 − \tilde{w}(s)}{s} \, \frac{1}{1 − \hat{λ}(k) \tilde{w}(s)}   (6.93)
The Laplace transform of the mean-square displacement can be readily obtained by differentiating the Fourier–Laplace transform of the probability density function,

\tilde{\langle x^2 \rangle}(s) = \int_{-\infty}^{\infty} dx \, x^2 \tilde{P}(x, s) = − \left. \frac{\partial^2 \hat{\tilde{P}}(k, s)}{\partial k^2} \right|_{k=0}   (6.94)
In order to calculate the mean-square displacement within this theory, it thus suffices to know λ(x) and w(t) generating the stochastic process.
For one-dimensional maps of the type of (6.87), exploiting the symmetry of the map, the waiting time distribution can be calculated from the approximation

x_{n+1} − x_n \simeq \frac{dx_t}{dt} = a x_t^z, \quad x_t \ll 1   (6.95)

where we have introduced the continuous time t ≥ 0. This equation can easily be solved for x_t with respect to an initial condition x_0. Now one needs to define when a particle makes a 'jump', as will be discussed below. By inverting the solution for x_t, one can then calculate the time t a particle has to wait before it makes a jump, as a function of the initial condition x_0. This information determines the relation between the waiting time probability density w(t) and the, as yet unknown, probability density of injection points,

w(t) \simeq P_{in}(x_0) \left| \frac{dx_0}{dt} \right|   (6.96)

Making the assumption that the probability density of injection points is uniform, P_{in} \simeq 1, the waiting time probability density is straightforwardly calculated from the knowledge of t(x_0). The second ingredient that is needed for the CTRW approach is the jump probability density. Standard CTRW theory takes jumps between neighboring cells only into account, leading to the ansatz [41, 42]

λ(x) = δ(|x| − 1)   (6.97)
It turns out that a correct application of this theory to our results for K(a) requires a modification of the standard theory at three points. First, the waiting time probability density function must be calculated according to the grid of elementary cells indicated in Figure 6.15 [8, 51], yielding

w(t) = a \left( 1 + a(z − 1) t \right)^{-\frac{z}{z-1}}   (6.98)
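For completeness, the elementary calculation behind (6.98) can be spelled out as follows; the escape condition x_t = 1 marking a 'jump' is our simplifying convention here, standing in for the cell grid mentioned above:

```latex
% Separation of variables in (6.95) with initial condition x_0 gives
x_t = \left[ x_0^{1-z} - a(z-1)\,t \right]^{-1/(z-1)} .
% A particle injected at x_0 'jumps' when x_t reaches the cell boundary,
% taken here as x_t = 1; inverting for the waiting time,
t(x_0) = \frac{x_0^{1-z} - 1}{a(z-1)}
\quad\Longleftrightarrow\quad
x_0(t) = \left[ 1 + a(z-1)\,t \right]^{-1/(z-1)} .
% Inserting x_0(t) into (6.96) with P_{\mathrm{in}} \simeq 1 yields
w(t) \simeq \left| \frac{dx_0}{dt} \right|
     = a \left[ 1 + a(z-1)\,t \right]^{-z/(z-1)} ,
% in agreement with (6.98).
```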
However, this probability density function also accounts for attempted jumps to another cell, since after a step the particle may stay in the same cell with a probability of (1 − p). The latter quantity is roughly determined by the size of the escape region, p = (1 − 2x_c), with x_c as a solution of the equation x_c + a x_c^z = 1. We thus model this fact, second, by a jump length distribution in the form of

λ(x) = \frac{p}{2} δ(|x| − l) + (1 − p) δ(x)   (6.99)
Third, we introduce two definitions of a typical jump length l_i, i ∈ {1, 2}:

l_1 = \{ |M_{a,z}(x) − x| \}   (6.100)

corresponds to the actual mean displacement, while

l_2 = \{ |[M_{a,z}(x)]| \}   (6.101)
gives the coarse-grained displacement in units of elementary cells, as is often assumed in CTRW approaches. In these definitions {...} denotes both a time and ensemble average over particles leaving a box. Working out the modified CTRW approximation sketched above, by taking these three changes into account, we obtain for the generalized diffusion coefficient

K_i = p l_i^2 \begin{cases} \frac{a \sin(\pi\gamma)}{\pi \gamma^{1+\gamma}}, & 0 < \gamma < 1 \\ a \left( 1 − \frac{1}{\gamma} \right), & 1 \le \gamma < \infty \end{cases}   (6.102)

where γ := 1/(z − 1), which for z ≥ 2 is identical with α defined in (6.90). Figure 6.15 (a) shows that K_1 describes well the coarse functional form of K for large a. K_2 is depicted in Figure 6.15 (b) by the dotted line and is asymptotically exact in the limit of very small a. Hence, the generalized diffusion coefficient exhibits a dynamical crossover between two different coarse-grained functional forms for small and large a, respectively. An analogous crossover has been reported earlier for normal diffusion [8, 51] and was also found in other models [4]. Let us finally focus on the generalized diffusion coefficient at a = 12, 20, 28, ..., which corresponds to integer values of the height h = [M_{a,z}(1/2)] of the map. Simulations reproduce, within numerical accuracy, the results for K_2 by indicating that K is discontinuous at these parameter values. Due to the self-similar-like structure of the generalized diffusion coefficient it was thus conjectured that the precise K of our model exhibits infinitely many discontinuities on fine scales as a function of a, which is at variance with the continuity of the CTRW approximation [9, 10]. This highlights again that CTRW theory gives only an approximate solution for the generalized diffusion coefficient of this model.
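As a small numerical aid, the two CTRW approximations of (6.102) can be tabulated directly. The sketch below is a straightforward transcription of (6.102); the values of p and l are left as user-supplied inputs, since they follow from the map-specific escape region and the jump lengths (6.100), (6.101), and the example numbers are illustrative only:

```python
import numpy as np

def K_ctrw(a, z, p, l):
    """CTRW approximation (6.102) for the generalized diffusion coefficient.

    a, z : map parameters; p : escape probability; l : typical jump length
    (l = l_1 or l_2 from (6.100), (6.101), supplied by the caller).
    """
    gamma = 1.0 / (z - 1.0)
    if 0.0 < gamma < 1.0:
        prefactor = a * np.sin(np.pi * gamma) / (np.pi * gamma**(1.0 + gamma))
    elif gamma >= 1.0:
        prefactor = a * (1.0 - 1.0 / gamma)
    else:
        raise ValueError("gamma = 1/(z-1) must be positive")
    return p * l**2 * prefactor

# Illustrative call: z = 3 gives gamma = 0.5, i.e. the subdiffusive branch.
print(K_ctrw(a=8.0, z=3.0, p=0.8, l=1.0))
```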
6.4.1.3 A Fractional Diffusion Equation

We now turn to the probability density functions (PDFs) generated by the map (6.87). As we will show now, CTRW theory not only predicts the power γ correctly but also the form of the coarse-grained PDF
P(x, t) of displacements. Correspondingly, the anomalous diffusion process generated by our model is not described by an ordinary diffusion equation but by a generalization of it. Starting from the Montroll–Weiss equation and making use of the expressions for the jump and waiting time PDFs (6.97) and (6.98), we rewrite (6.93) in the long-time-and-space asymptotic form

s^γ \hat{\tilde{P}} − s^{γ−1} = − \frac{p l_i^2}{2 c b^γ} k^2 \hat{\tilde{P}}   (6.103)

with c = Γ(1 − γ) and b = γ/a.
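The passage from (6.93) to (6.103) is a routine small-k, small-s expansion; since the chapter skips it, we sketch the intermediate steps (for 0 < γ < 1):

```latex
% Rewriting (6.98) as w(t) = (\gamma/b)\,(1 + t/b)^{-1-\gamma} with b = \gamma/a,
% a Tauberian argument gives the small-s behavior of its Laplace transform:
\tilde{w}(s) \simeq 1 - c\, b^{\gamma} s^{\gamma} , \qquad c = \Gamma(1-\gamma) .
% The Fourier transform of the jump density (6.99) for small k reads
\hat{\lambda}(k) = p \cos(k l) + (1 - p) \simeq 1 - \tfrac{1}{2}\, p\, l^{2} k^{2} .
% Inserting both expansions into the Montroll--Weiss equation (6.93) and
% keeping only the leading terms in the denominator,
\hat{\tilde{P}}(k,s) \simeq
\frac{c\, b^{\gamma} s^{\gamma-1}}{c\, b^{\gamma} s^{\gamma} + \tfrac{1}{2}\, p\, l^{2} k^{2}} ,
% which, multiplied out, is exactly (6.103).
```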
For the initial condition P(x, 0) = δ(x) of the PDF we have \hat{P}(k, 0) = 1. Interestingly, the left-hand side of this equation corresponds to the definition of the Caputo fractional derivative of a function G,

\frac{\partial^γ G}{\partial t^γ} := \frac{1}{Γ(1 − γ)} \int_0^t dt' \, (t − t')^{−γ} \frac{\partial G}{\partial t'}   (6.104)

which in Laplace space reads [52, 53]

\int_0^{\infty} dt \, e^{−st} \frac{\partial^γ G}{\partial t^γ} = s^γ \tilde{G}(s) − s^{γ−1} G(0)   (6.105)
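Definition (6.104) is easy to probe numerically. The sketch below uses the standard L1 discretization of the Caputo derivative (a common textbook scheme, not taken from this chapter) and checks it against the exact result ∂^γ t / ∂t^γ = t^{1−γ}/Γ(2 − γ):

```python
import numpy as np
from math import gamma

def caputo_l1(g, t, gam):
    """L1 scheme for the Caputo derivative (6.104) of samples g on a uniform grid t."""
    dt = t[1] - t[0]
    d = np.zeros(len(g))
    for i in range(1, len(g)):
        k = np.arange(i)                      # past intervals [t_k, t_{k+1}]
        weights = (i - k)**(1 - gam) - (i - k - 1)**(1 - gam)
        d[i] = dt**(-gam) / gamma(2 - gam) * np.sum(weights * np.diff(g[:i + 1]))
    return d

gam = 0.5
t = np.linspace(0.0, 1.0, 2001)
numeric = caputo_l1(t, t, gam)                # take G(t) = t as a test function
exact = t**(1 - gam) / gamma(2 - gam)
print(np.max(np.abs(numeric[1:] - exact[1:])))  # tiny discretization error
```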
Thus, fractional derivatives come naturally into play as a suitable mathematical formalism whenever there are power-law memory kernels in space and/or time generating anomalous dynamics; see, e.g. [39, 54] for short introductions to fractional derivatives and [52] for a detailed exposition. Turning back now to real space, we thus arrive at the time-fractional diffusion equation

\frac{\partial^γ P(x, t)}{\partial t^γ} = D \frac{\partial^2 P}{\partial x^2}   (6.106)
with D = KΓ(1 + γ)/2, 0 < γ < 1, which is an example of a fractional diffusion equation generating subdiffusion. For γ = 1 we recover the ordinary diffusion equation. The solution of (6.106) can be expressed in terms of an M-function of Wright type [53] and reads

P(x, t) = \frac{1}{2 \sqrt{D} \, t^{γ/2}} \, M\!\left( ξ, \frac{γ}{2} \right), \qquad ξ := \frac{|x|}{\sqrt{D} \, t^{γ/2}}   (6.107)
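Equation (6.107) is straightforward to evaluate from the power series of the M-function, M(ξ, ν) = Σ_{n≥0} (−ξ)^n / [n! Γ(−νn + 1 − ν)] [53]. The following sketch uses mpmath; the truncation order is an arbitrary choice:

```python
import mpmath as mp

def wright_M(xi, nu, nterms=100):
    """Mainardi's M-function of Wright type via its power series.
    mp.rgamma = 1/Gamma is entire, so poles of Gamma simply give zero terms."""
    s = mp.mpf(0)
    for n in range(nterms):
        s += (-xi)**n / mp.factorial(n) * mp.rgamma(-nu * n + 1 - nu)
    return s

def pdf(x, t, D, gam):
    """Coarse-grained solution (6.107) of the fractional diffusion equation."""
    scale = mp.sqrt(D) * mp.mpf(t)**(gam / 2)
    return wright_M(abs(x) / scale, gam / 2) / (2 * scale)

# Consistency check: for gam = 1, M(xi, 1/2) = exp(-xi^2/4)/sqrt(pi), so
# (6.107) reduces to the Gaussian solution of the ordinary diffusion equation.
print(pdf(0.5, 2.0, 1.0, 1.0))
print(mp.exp(-0.5**2 / (4 * 1.0 * 2.0)) / mp.sqrt(4 * mp.pi * 1.0 * 2.0))
```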
Figure 6.16 demonstrates an excellent agreement between the analytical solution (6.107) and the PDF obtained from simulations of the map (6.87) if the PDF is coarse grained over unit intervals. However, it also shows that the coarse graining eliminates a periodic fine structure that is not captured by (6.107). This fine structure derives from the 'microscopic' PDF of an elementary cell (with periodic boundaries) as represented in the inset of Figure 6.16 [8]. The singularities are due to the marginal fixed points of the map, where particles are trapped for long times. Remarkably, in that way the microscopic origin of the intermittent dynamics is reflected in the shape of the PDF on the whole real line. From Figure 6.16 it can be seen that the oscillations in the PDF are bounded by two functions, the upper curve being of a stretched exponential type while the lower is Gaussian. These two envelopes correspond to the laminar and chaotic parts of the motion, respectively.22)

Figure 6.16 Comparison of the probability density obtained from simulations of the map (6.87) (oscillatory structure) with the analytical solution (6.107) of the fractional diffusion equation (6.106) (continuous line in the middle) for z = 3 and a = 8. The probability density was computed from 10^7 particles after n = 10^3 iterations. For the generalized diffusion coefficient in (6.107) the simulation result was used. The crosses represent the numerical results, coarse grained over unit intervals. The upper and the lower curves correspond to fits with a stretched exponential and a Gaussian distribution, respectively. The inset depicts the probability density function for the map on the unit interval with periodic boundaries.

22) The two envelopes shown in Figure 6.16 represent fits with the Gaussian a_0 exp(−x^2/a_1) and with the M-function b_0 M(|x|/b_1, γ/2), where a_0 = 0.0036, a_1 = 55.0183 and b_0 = 3.05, b_1 = 0.37.
6.4.2 Anomalous Diffusion of Migrating Biological Cells

6.4.2.1 Cell Migration
We start this final section with results from a famous experiment. Figure 6.17 shows the trajectories of three colloidal particles immersed in a fluid. Their motion looks highly irregular, thus reminding us of the trajectory of the drunken sailor's problem displayed in Figure 6.8. As we discussed in Section 6.3, dynamics which can be characterized by a normal diffusion coefficient is called Brownian motion. It was Einstein's achievement to understand the diffusion of molecules in a fluid in terms of such microscopic dynamics [55]. His theory actually motivated Perrin to conduct his experiment, by which Einstein's theory was confirmed [56].
Figure 6.17 Trajectories of three colloidal particles of radius 0.53 μm whose positions have been measured experimentally every 30 seconds. Single points are joined by straight lines [56].
Figure 6.18 now shows the trajectory of a very different type of process: the path of a single biological cell crawling on a substrate [11]. Nearly all cells in the human body are mobile at some point during their life cycle. Embryogenesis, wound healing, immune defense and the formation of tumor metastases are well-known phenomena that rely on cell migration. If one compares the cell trajectory with that of the Brownian particles depicted in Figure 6.17, one may find it hard to see a fundamental difference. On the other hand, according to Einstein's theory a Brownian particle is passively driven by collisions with the surrounding particles, whereas biological cells move actively by themselves, converting chemical into kinetic energy. This raises the question of whether the dynamics of cell migration can really be understood in terms of Brownian motion [57, 58] or whether more advanced concepts of dynamical modeling have to be applied [59, 60].

Figure 6.18 Overlay of a biological cell migrating on a substrate. The cell frequently changes its shape and direction during migration, as is shown by several cell contours extracted during the migration process. The inset displays phase contrast images of the cell at the beginning and towards the end of its migration process [11].

6.4.2.2 Experimental Results
The cell migration experiments which we now discuss have been performed on two transformed renal epithelial Madin–Darby canine kidney (MDCK-F) cell strains: wild-type (NHE+) and NHE-deficient (NHE−) cells.23) The cell diameter is typically 20–50 μm and the mean velocity of the cells is about 1 μm/min. The lamellipodial dynamics, that is, the fluctuations of the cell body surrounding the cell nucleus (the cytoskeleton, which drives the cell migration), takes place on time scales of the order of seconds. Thirteen cells were observed for up to 1000 minutes. Sequences of microscopic phase contrast images were taken and segmented to obtain the cell boundaries shown in Figure 6.18; see [11] for full details of the experiment.

23) NHE+ stands for a molecular sodium–hydrogen exchanger that either is present or has been blocked by chemicals, which is supposed to have an influence on cell migration [11].

As we have learned in Section 6.3, Brownian motion is characterized by a mean-square displacement (msd) proportional to t in the limit of long time, designating normal diffusion. Figure 6.19 shows that both types of cells behave differently. First of all, MDCK-F NHE− cells move less efficiently than NHE+ cells, resulting in a reduced msd for all times. As is displayed in the upper part of this figure, the msd of both cell types exhibits a crossover between three different dynamical regimes.
Figure 6.19 (a) Double-logarithmic plot of the mean-square displacement (msd) as a function of time. Experimental data points for both cell types are shown by symbols. Different time scales are marked as phases I, II and III as discussed in the text. The solid lines represent fits to the msd from the solution of our model, see (6.114). All parameter values of the model are given in [11]. The dashed lines indicate the uncertainties of the msd values according to Bayes data analysis. (b) Logarithmic derivative β(t) of the msd for both cell types.

These phases can best be identified by extracting the time-dependent exponent β of the msd ∼ t^β from the data, which can be done by using the logarithmic derivative

β(t) = \frac{d \ln \mathrm{msd}(t)}{d \ln t}   (6.108)
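In practice, (6.108) is evaluated by numerical differentiation of the measured msd in log-log coordinates; a minimal sketch (the smoothing window is an arbitrary choice, not from [11]):

```python
import numpy as np

def log_derivative(t, msd, window=5):
    """Logarithmic derivative beta(t) = d ln msd / d ln t, cf. (6.108).
    A moving-average smoothing tames the noise amplified by differentiation."""
    log_t, log_msd = np.log(t), np.log(msd)
    kernel = np.ones(window) / window
    log_msd = np.convolve(log_msd, kernel, mode="same")  # crude smoothing
    return np.gradient(log_msd, log_t)

# Synthetic example: msd ~ t^1.4 should give beta(t) ~ 1.4 away from the edges.
t = np.logspace(0, 3, 200)
msd = 2.0 * t**1.4
print(log_derivative(t, msd)[50:55])
```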
The results are shown in the lower part of Figure 6.19. Phase I is characterized by an exponent β(t) roughly below 1.8. In the subsequent intermediate phase II, the msd reaches its strongest increase with a maximum exponent β. When the cell has moved beyond a squared distance larger than its own mean-square radius (indicated by arrows in the figure), β(t) gradually decreases to about 1.4. Both cell types therefore do not exhibit normal diffusion, which would be characterized by β(t) → 1 for large times, but move anomalously, where the exponent β > 1 indicates superdiffusion.

We next show the probability that the cells reach a given position x at time t, which corresponds to the temporal development of the spatial probability distribution function P(x, t). Figure 6.20 (a) and (b) reveals the existence of non-Gaussian distributions at different times. The transition from a peaked distribution at short times to rather broad distributions at long times again suggests the existence of distinct dynamical processes acting on different time scales. The shape of these distributions can be quantified by calculating the kurtosis

κ(t) := \frac{\langle x^4(t) \rangle}{\langle x^2(t) \rangle^2}   (6.109)

which is displayed as a function of time in Figure 6.20 (c). For both cell types κ(t) rapidly decays to a constant clearly below three in the long time limit. A value of three would be the result for the spreading Gaussian distributions of the drunken sailor. These findings are another strong manifestation of the anomalous nature of cell migration.

6.4.2.3 Theoretical Modeling
We conclude this section with a short discussion of the stochastic model that we have used to fit the experimental data, as was shown in the previous two figures. The model is defined by the fractional Klein–Kramers equation [61]

\frac{\partial P}{\partial t} = − \frac{\partial}{\partial x}[vP] + \frac{\partial^{1−α}}{\partial t^{1−α}} γ \left[ \frac{\partial}{\partial v} v + v_{th}^2 \frac{\partial^2}{\partial v^2} \right] P   (6.110)
Figure 6.20 Spatio-temporal probability distributions P(x, t). (a) and (b) Experimental data for both cell types at different times in semi-logarithmic representation. The dark lines, labeled FKK, show the solutions of our model (6.110) with the same parameter set used for the msd fit. The light lines, labeled OU, depict fits by Gaussian distributions representing the theory of Brownian motion. For t = 1 min both P(x, t) show a peaked structure clearly deviating from a Gaussian form. (c) The kurtosis κ(t) of P(x, t), plotted as a function of time, saturates at a value different from that of Brownian motion (line at κ = 3). The other two lines represent κ(t) obtained from our model (6.110) [11].
Here P = P(x, v, t) is the probability distribution depending on time t, position x and velocity v in one dimension,24) γ is a damping term and v_{th}^2 = k_B T/M stands for the thermal velocity of a particle of mass M at temperature T, where k_B is Boltzmann's constant. The last term in this equation models diffusion in velocity space; that is, in contrast to the drunken sailor's problem, here the velocity is not constant but is also randomly distributed according to a probability density, which is determined by this equation. Additionally, and again in contrast to the simple diffusion equation that we have encountered in Section 6.3.2.1, this equation features two flux terms, both in velocity and in position space, see the second and the first term in this equation, respectively. What distinguishes this equation from an ordinary Klein–Kramers equation, which actually is the most general model of Brownian motion in position and velocity space [62], is the presence of a Riemann–Liouville fractional derivative of order (1 − α) in front of the last two terms, defined by

\frac{\partial^δ P}{\partial t^δ} := \begin{cases} \frac{\partial^m P}{\partial t^m}, & δ = m \\ \frac{1}{Γ(m − δ)} \frac{\partial^m}{\partial t^m} \int_0^t dt' \, \frac{P(t')}{(t − t')^{δ+1−m}}, & m − 1 < δ < m \end{cases}   (6.111)

24) No correlations between the x and y direction could be found, hence the model is only one-dimensional [11].
with m ∈ N. Note that, for α = 1, the ordinary Klein–Kramers equation is recovered. The analytical solution of this equation for the msd has been calculated in [61] as

msd(t) = 2 v_{th}^2 t^2 E_{α,3}(−κ t^α) → \frac{2 D_α}{Γ(3 − α)} t^{2−α} \quad (t → ∞)   (6.112)
with D_α = v_{th}^2/γ and the generalized Mittag–Leffler function

E_{α,β}(z) = \sum_{k=0}^{\infty} \frac{z^k}{Γ(αk + β)}, \quad α, β > 0, \; z ∈ \mathbb{C}   (6.113)
Note that E_{1,1}(z) = exp(z), hence E_{α,β}(z) is a generalized exponential function. We see that for long times (6.112) yields a power law, which reduces to the Brownian motion result in the case of α = 1. The analytical solution of (6.110) for P(x, v, t) is not known. However, for large friction γ this equation boils down to a fractional diffusion equation for which P(x, t) can be calculated in terms of a Fox function [63]. This solution is what we have used to fit the data. Our modeling is completed by adding what may be called 'biological noise' to (6.112),

msd_{noise}(t) := msd(t) + 2η^2   (6.114)
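For orientation, (6.112)–(6.114) can be evaluated by summing the series (6.113) directly; a hedged sketch, reliable for moderate arguments only, with placeholder parameter values that are not the fit values of [11]:

```python
import mpmath as mp
mp.mp.dps = 30   # extra digits guard against cancellation in the series

def mittag_leffler(alpha, beta, z, nterms=200):
    """Generalized Mittag-Leffler function (6.113) by direct summation.
    Adequate for moderate |z|; large arguments need asymptotic methods."""
    return mp.nsum(lambda k: z**k / mp.gamma(alpha * k + beta), [0, nterms])

def msd(t, vth, alpha, kappa):
    """Mean-square displacement (6.112) of the fractional Klein-Kramers model."""
    return 2 * vth**2 * t**2 * mittag_leffler(alpha, 3, -kappa * mp.mpf(t)**alpha)

def msd_noise(t, vth, alpha, kappa, eta):
    """msd corrected by 'biological noise', (6.114)."""
    return msd(t, vth, alpha, kappa) + 2 * eta**2

# Placeholder parameter values, purely for illustration:
for t in (0.1, 1.0, 10.0):
    print(t, msd_noise(t, vth=1.0, alpha=0.7, kappa=1.0, eta=0.2))
```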
This uncorrelated white noise of variance η^2 mimics both measurement errors and fluctuations of the cell cytoskeleton. The strength of this noise, as extracted from the experimental data, is larger than the measurement error and determines the dynamics at small time scales; hence at these scales we see microscopic fluctuations of the cell body in the experiment. The experimental data in Figures 6.19 and 6.20 were then consistently fitted by using the four fit parameters v_th, α, γ and η^2 in Bayesian data analysis [11].

We consider this model an interesting illustration of the usefulness of stochastic fractional equations for understanding real experimental data displaying anomalous dynamics. However, the reader may still wonder about the physical and biological interpretation of the above equation for cell migration. First of all, it can be argued that the fractional Klein–Kramers equation is approximately25) related to the generalized Langevin equation [64]

\dot{v} = − \int_0^t dt' \, γ(t − t') v(t') + \sqrt{ζ} \, ξ(t)   (6.115)

This equation can be understood as a stochastic version of Newton's law. The left-hand side holds for the acceleration of a particle, whereas the total force acting onto it is decomposed into a friction term and a random force. The latter is modeled by Gaussian white noise ξ(t) of strength √ζ, and the friction coefficient is time-dependent, obeying a power law γ(t) ∼ t^{−α}. The friction term could thus be written again in the form of a fractional derivative. Note that for γ = const. the ordinary Langevin equation is recovered, which is a standard model of Brownian motion [34, 62]. This relation suggests that, physically, the anomalous cell migration could have its origin, at least partially, in the existence of a memory-dependent friction coefficient. The latter, in turn, might be explained by anomalous rheological properties of the cell cytoskeleton, which consists of a complex biopolymer gel [65].

25) We emphasize that only the msd and the decay of velocity correlations are correctly reproduced by solving such an equation [64], whereas the position distribution functions corresponding to this equation are mere Gaussians in the long time limit and thus do not match the experimental data.
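To convey the structure of (6.115), here is a deliberately naive simulation sketch (an explicit Euler scheme with the memory integral approximated by a Riemann sum; the kernel prefactor and all parameters are ad hoc choices, and subtleties such as the fluctuation–dissipation relation are ignored — this is not the modeling used in [11]):

```python
import numpy as np

# Naive Euler discretization of the generalized Langevin equation (6.115)
# with a power-law friction kernel gamma(t) ~ t**(-alpha).
rng = np.random.default_rng(1)
alpha, zeta, dt, n = 0.5, 1.0, 0.01, 5000

t = dt * np.arange(n)
v = np.zeros(n)
for i in range(1, n):
    # Memory integral over the past trajectory as a Riemann sum; every lag
    # is >= dt, which sidesteps the kernel singularity at zero lag.
    memory = np.sum((t[i] - t[:i])**(-alpha) * v[:i]) * dt
    v[i] = v[i - 1] - memory * dt + np.sqrt(zeta * dt) * rng.standard_normal()

x = np.cumsum(v) * dt   # position trace obtained from the velocity
print(x[-1])
```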
Secondly, what could be the possible biological significance of the observed anomalous cell migration? Both the experimental data and the theoretical modeling suggest that there exists a slow diffusion on small time scales, whereas the long-time motion is much faster. In other words, the dynamics displays intermittency qualitatively similar to the one outlined in Section 6.4.1. Interestingly, there is an ongoing discussion about optimal search strategies of foraging animals, such as albatrosses, marine predators and fruit flies; see, e.g., [66] and related literature. It has been argued that Lévy flights, which define a fundamental class of anomalous dynamics [5], are typically superior to Brownian motion as a strategy for animals to find food sources. However, more recently it was shown that under certain circumstances intermittent dynamics is even more efficient than pure Lévy motion [67]. The results on anomalous cell migration presented above might thus be biologically interpreted within this context.
6.5 Summary
This review has covered a rather wide range of topics and methods. Section 6.2 introduced the concept of deterministic chaos by defining and calculating fundamental quantities characterizing chaos, such as Ljapunov exponents and dynamical entropies. These quantities were shown to be intimately related to each other as well as to properties of fractal sets. Section 6.3 started by reminding us of simple random walks on the line, their characterization in terms of diffusive properties, and the relation to elementary concepts of Brownian motion. These basic ideas were then discussed in a chaotic setting in the form of deterministic diffusion. A formula was derived that exactly expresses diffusion in terms of the chaos quantities introduced in the previous section. Section 6.4 generalized the concept of normal diffusion, leading to anomalous diffusion. A simple deterministic model generating this type of diffusion was introduced and analyzed both numerically and by means of stochastic theory. For the latter purpose, continuous time random walk theory was explained and applied, leading to a generalized, fractional diffusion equation. As an example of experimental applications of these concepts, anomalous biological cell migration was discussed. Experimental results for the mean-square displacement and for the probability distribution in position space matched nicely with the predictions of a stochastic model in the form of a fractional diffusion equation, defined both in position and velocity space. The scope of this review thus spans from very simple, abstract models to experiments
and from basic dynamical systems theory to advanced methods of stochastic analysis. Of course, this work poses a wealth of open questions. Here, we restrict ourselves to only one of them, which we consider to be of particular importance. We could understand the origin of normal diffusion in terms of microscopic deterministic chaos by applying a combination of methods from statistical physics and dynamical systems theory; but what about anomalous diffusion? To our knowledge, there is currently no analogous theory available, like the escape rate formalism for normal diffusion, which explains the origin of anomalous diffusion in terms of weak chaos. This is the reason why here we have restricted ourselves to applying numerical and stochastic methods to an anomalous deterministic model. We believe that constructing such a microscopic dynamical systems theory of anomalous deterministic transport poses a big challenge for future work in this field.

Acknowledgments
The author gratefully acknowledges the long-term collaboration with J. R. Dorfman leading to the material presented in Section 6.3. He also wishes to thank his former Ph. D. student N. Korabel for help with some of the figures and for joint work on Section 6.4.1, which formed part of his Ph. D. thesis. A. V. Chechkin contributed significantly to the same section but, in particular, introduced the author to the stochastic theory of anomalous diffusion, for which he is extremely grateful. P. Dieterich was the main driving force behind the project reviewed in Section 6.4.2, and the author thanks him for highly interesting joint work on crawling cells. Finally, he thanks his former postdoc P. Howard for help with some figures and particularly for recent first steps towards solving the question posed above.
References

1 Schuster, H.G., Deterministic Chaos, 2nd edition, VCH, Weinheim, 1989.
2 Dorfman, J.R., An Introduction to Chaos in Nonequilibrium Statistical Mechanics. Cambridge University Press, Cambridge, 1999.
3 Gaspard, P., Chaos, Scattering, and Statistical Mechanics. Cambridge University Press, Cambridge, 1998.
4 Klages, R., Microscopic Chaos, Fractals and Transport in Nonequilibrium Statistical Mechanics, Vol. 24 of Advanced Series in Nonlinear Dynamics. World Scientific, Singapore, 2007.
5 Klages, R., Radons, G. and Sokolov, I.M., Eds., Anomalous Transport. Wiley-VCH, Berlin, 2008.
6 Ott, E., Chaos in Dynamical Systems. Cambridge University Press, Cambridge, 1993.
7 Beck, C. and Schlögl, F., Thermodynamics of Chaotic Systems, Vol. 4 of Cambridge Nonlinear Science Series. Cambridge University Press, Cambridge, 1993.
8 Klages, R., Deterministic Diffusion in One-dimensional Chaotic Dynamical Systems. Wissenschaft & Technik-Verlag, Berlin, 1996.
9 Korabel, N., Chechkin, A.V., Klages, R., Sokolov, I.M. and Gonchar, V.Yu., Understanding anomalous transport in intermittent maps: From continuous time random walks to fractals. Europhys. Lett. 70, 63–69 (2005).
10 Korabel, N., Klages, R., Chechkin, A.V., Sokolov, I.M. and Gonchar, V.Yu., Fractal properties of anomalous diffusion in intermittent maps. Phys. Rev. E 75, 036213/1–14 (2007).
11 Dieterich, P., Klages, R., Preuss, R. and Schwab, A., Anomalous dynamics of cell migration. PNAS 105, 459–463 (2008).
12 Klages, R., Introduction to Dynamical Systems. Lecture notes, see http://www.maths.qmul.ac.uk/~klages/teaching/mas424, 2007.
13 Alligood, K.T., Sauer, T.S. and Yorke, J.A., Chaos – An Introduction to Dynamical Systems. Springer, New York, 1997.
14 Arnold, V.I. and Avez, A., Ergodic Problems of Classical Mechanics. W.A. Benjamin, New York, 1968.
15 Devaney, R.L., An Introduction to Chaotic Dynamical Systems, 2nd edition. Addison-Wesley, Reading, 1989.
16 Robinson, C., Dynamical Systems. CRC Press, London, 1995.
17 Lasota, A. and Mackey, M.C., Chaos, Fractals, and Noise. Springer-Verlag, 1994.
18 Katok, A. and Hasselblatt, B., Introduction to the Modern Theory of Dynamical Systems, Vol. 54 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1995.
19 Toda, M., Kubo, R. and Saitô, N., Statistical Physics, Vol. 1 of Solid State Sciences, 2nd edition, Springer, Berlin, 1992.
20 Eckmann, J.-P. and Ruelle, D., Ergodic theory of chaos and strange attractors. Rev. Mod. Phys. 57, 617–656 (1985).
21 Falcioni, M., Palatella, L. and Vulpiani, A., Production rate of the coarse-grained Gibbs entropy and the Kolmogorov–Sinai entropy: A real connection? Phys. Rev. E 71, 016118/1–8 (2005).
22 Badii, R. and Politi, A., Complexity: Hierarchical Structures and Scaling in Physics. Cambridge University Press, Cambridge, 1997.
23 Young, L.-S., What are SRB measures, and which dynamical systems have them? J. Stat. Phys. 108, 733–754 (2002).
24 Tricot, C., Curves and Fractal Dimension. Springer, Berlin, 1995.
25 Mandelbrot, B.B., The Fractal Geometry of Nature. W.H. Freeman and Company, San Francisco, 1982.
26 Falconer, K., Fractal Geometry. Wiley, New York, 1990.
27 Tél, T., Transient chaos. In: B.-L. Hao, Ed., Experimental Study and Characterization of Chaos, Vol. 3 of Directions in Chaos, World Scientific, Singapore, 1990, pp. 149–211.
28 Bunimovich, L. and Yurchenko, A., Where to place a hole to achieve a maximal escape rate. Preprint arXiv:0811.4438, 2008.
29 Kantz, H. and Grassberger, P., Repellers, semi-attractors, and long-lived chaotic transients. Physica D 17, 75–86 (1985).
30 Klages, R. and Dorfman, J.R., Simple maps with fractal diffusion coefficients. Phys. Rev. Lett. 74, 387–390 (1995).
31 Klages, R. and Dorfman, J.R., Simple deterministic dynamical systems with fractal diffusion coefficients. Phys. Rev. E 59, 5361–5383 (1999).
32 Pearson, K., The problem of the random walk. Nature 72, 294, 342 (1905).
33 Strutt, J.W. (Lord Rayleigh), The problem of the random walk. Nature 72, 318 (1905).
34 Reif, F., Fundamentals of Statistical and Thermal Physics. McGraw-Hill, Auckland, 1965.
35 Gaspard, P. and Nicolis, G., Transport properties, Lyapunov exponents, and entropy per unit time. Phys. Rev. Lett. 65, 1693–1696 (1990).
36 Jenkinson, O. and Pollicott, M., Entropy, exponents and invariant densities for hyperbolic systems: dependence and computation. In: M. Brin, B. Hasselblatt, Y. Pesin, Eds., Modern Dynamical Systems and Applications, Cambridge University Press, Cambridge, 2004, pp. 365–384.
37 Shlesinger, M.F., Zaslavsky, G.M. and Klafter, J., Strange kinetics. Nature 363, 31–37 (1993).
38 Klafter, J., Shlesinger, M.F. and Zumofen, G., Beyond Brownian motion. Phys. Today 49, 33–39 (1996).
39 Metzler, R. and Klafter, J., The random walk's guide to anomalous diffusion: A fractional dynamics approach. Phys. Rep. 339, 1–77 (2000).
40 Pomeau, Y. and Manneville, P., Intermittent transition to turbulence in dissipative dynamical systems. Commun. Math. Phys. 74, 189–197 (1980).
41 Geisel, T. and Thomae, S., Anomalous diffusion in intermittent chaotic systems. Phys. Rev. Lett. 52, 1936–1939 (1984).
42 Zumofen, G. and Klafter, J., Scale-invariant motion in intermittent chaotic systems. Phys. Rev. E 47, 851–863 (1993).
43 Montroll, E.W. and Weiss, G.H., Random walks on lattices II. J. Math. Phys. 6, 167–179 (1965).
44 Montroll, E.W. and Scher, H., Random walks on lattices IV: Continuous-time walks and influence of absorbing boundaries. J. Stat. Phys. 9, 101–133 (1973).
45 Scher, H. and Montroll, E.W., Anomalous transit-time dispersion in amorphous solids. Phys. Rev. B 12, 2455–2477 (1975).
46 Bouchaud, J.-P. and Georges, A., Anomalous diffusion in disordered media: Statistical mechanisms, models and physical applications. Phys. Rep. 195, 127–293 (1990).
47 Weiss, G.H., Aspects and Applications of the Random Walk. North-Holland, Amsterdam, 1994.
48 Ebeling, W. and Sokolov, I.M., Statistical Thermodynamics and Stochastic Theory of Nonequilibrium Systems. World Scientific, Singapore, 2005.
49 Geisel, T., Nierwetberg, J. and Zacherl, A., Accelerated diffusion in Josephson junctions and related chaotic systems. Phys. Rev. Lett. 54, 616–619 (1985).
50 Shlesinger, M.F. and Klafter, J., Accelerated diffusion in Josephson junctions and related chaotic systems – comment. Phys. Rev. Lett. 54, 2551 (1985).
51 Klages, R. and Dorfman, J.R., Dynamical crossover in deterministic diffusion. Phys. Rev. E 55, R1247–R1250 (1997).
52 Podlubny, I., Fractional Differential Equations. Academic Press, New York, 1999.
53 Mainardi, F., Fractional calculus: some basic problems in continuum and statistical mechanics. In: A. Carpinteri and F. Mainardi, Eds., Fractals and Fractional Calculus in Continuum Mechanics, Springer, New York, 1997.
54 Sokolov, I.M., Klafter, J. and Blumen, A., Fractional kinetics. Phys. Today 55, 48–54 (2002).
55 Einstein, A., Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen. Annalen der Physik 17, 549–560 (1905).
56 Perrin, J.-B., Mouvement brownien et réalité moléculaire. Ann. Chim. Phys. 19, 5–104 (1909).
57 Dunn, G.A. and Brown, A.F., A unified approach to analysing cell motility. J. Cell Sci. Suppl. 8, 81–102 (1987).
58 Stokes, C.L., Lauffenburger, D.A. and Williams, S.K., Migration of individual microvessel endothelial cells: Stochastic model and parameter measurement. J. Cell Science 99, 419–430 (1991).
59 Hartmann, R.S., Lau, K., Chou, W. and Coates, T.D., The fundamental motor of the human neutrophil is not random: Evidence for local non-Markov movement in neutrophils. Biophys. J. 67, 2535–2545 (1994).
60 Upadhyaya, A., Rieu, J.P., Glazier, J.A. and Sawada, Y., Anomalous diffusion and non-Gaussian velocity distributions of Hydra cells in cellular aggregates. Physica A 293, 549–558 (2001).
61 Barkai, E. and Silbey, R., Fractional Kramers equation. J. Phys. Chem. B 104, 3866–3874 (2000).
62 Risken, H., The Fokker–Planck Equation, 2nd edition, Springer, Berlin, 1996.
63 Schneider, W.R. and Wyss, W., Fractional diffusion and wave equations. J. Math. Phys. 30, 134–144 (1989).
64 Lutz, E., Fractional Langevin equation. Phys. Rev. E 64, 051106/1–4 (2001).
65 Semmrich, C., Storz, T., Glaser, J., Merkel, R., Bausch, A.R. and Kroy, K., Glass transition and rheological redundancy in F-actin solutions. PNAS 104, 20199–20203 (2007).
66 Edwards, A. et al., Revisiting Lévy flight search patterns of wandering albatrosses, bumblebees and deer. Nature 449, 1044–1048 (2007).
67 Bénichou, O., Coppey, M., Moreau, M. and Voituriez, R., Intermittent search strategies: When losing time becomes efficient. Europhys. Lett. 75, 349–354 (2006).
Color Figures
Chapter 1
Figure 1.9 Schematic representation of the state of an element: (1) matching a queried item; (2) higher than the queried item; (3) lower than the queried item. (a) shows the state of the system encoding a list element. Three distinct elements are depicted. The state of the first element is held at 0.1 (cyan); the second element is held at 0.25 (green) and the third element is held at 0.4 (blue). These are shown as lines of proportional lengths on the x-axis. (b)–(d) show each of these elements with the search key added to their states. Here the queried item is encoded by 0.25. So Qk = 1/2 − 0.25 = 0.25. This amount is shown in red. After the addition of the search key, the subsequent dynamical update yields the maximal state 1 only for the element holding 0.25 (green). The ones with states higher and lower than the matching state (namely 0.1 and 0.4, shown in cyan and blue) are mapped to lower values. (See p. 23.)
Chapter 2
Figure 2.2 Orientation flip diagrams for β = 0.5 and f = 0.1, for four drop altitudes: (a) h = 0.6, (b) h = 0.8, (c) h = 1.0, (d) h = 1.2. Each OFD displays, in the plane of initial angles and angular velocities, the final outcome relative to the initial orientation of the throw when the barbell has been dropped from a given altitude h above ground. Yellow points indicate no orientation flip (state 0), red color marks points with a flipped final state 1. The brightness of the color codes for the number of bounces before the barbell can no longer change its orientation; the darker the color the more bounces the system needs to fall below the critical energy value Ec = 1 − β = 0.5. The diagonal lines indicate the stable (white) and unstable (black) directions of the linearized invariant manifolds of the hyperbolic points A and B. (See p. 47.)
Figure 2.3 Orientation flip diagrams for β = 0.8 and four friction values: (a) f = 0.05, (b) f = 0.1, (c) f = 0.2, (d) f = 0.4. Each inset displays the decomposition of the corresponding OFD into state 0 (gray). Black regions represent initial conditions where the barbell ends up standing almost sliding. (a) Small friction strength f = 0.05. (b) Friction strength f = 0.1. For friction strengths in that range the intersections of the lines for the stable and unstable directions define a deltoid which approximately delineates the separation of order from chaos. (c) f = 0.2; the white lines are boundaries of orbit-type classes with symbol length up to 6; symbol sequences of some simple orbit-type classes are displayed. (d) f = 0.4, which corresponds to a realistic friction strength. (See p. 49.)
Figure 2.4 Orientation flip diagrams (left) in comparison with corresponding bounce diagrams (right) for β = 0.5 (a) and β = 0.6 (b); h = 1.0. The grayscale of the OFDs in the insets is the same as for those in Figure 2.3. Bounce diagrams (right) display, for the same range of initial conditions, which mass bounces more often. Inset: When mass 1 bounces more often a point is gray, otherwise black. For a colored representation for higher values of β see Figure 2.5. The grayscale codes for the number of bounces before the barbell can no longer change its orientation (as in the OFD to the left). (See p. 50.)
Figure 2.5 The same as Figure 2.4 but in color and for (a) β = 0.7, (b) β = 0.8, and (c) β = 0.9; h = 1.0. When mass 1 bounces more often a point is white, otherwise red. The brightness of the color codes for the number of bounces before the barbell can no longer change its orientation (as in the OFD to the left). (See p. 51.)
Chapter 5
Figure 5.1 Representative pictures of scale-free networks (degree distribution exponent λ = 5) (a) without and (b, c) with clustering (C0 = 0.4, 0.6). All three networks have the same size of N = 250. The giant component has a size of (a) Ng = 250, (b) Ng = 203, and (c) Ng = 145. The actual global clustering coefficient is (a) C = 8 × 10^−4, (b) C = 0.34, and (c) C = 0.53. The global clustering coefficient of the giant component is (b) Cg = 0.10 and (c) Cg = 0.17, since there are several small clusters with larger C. The logarithmically scaled coloring presents the intensity of an optical mode with E ≈ 0.45, red indicating the highest, yellow intermediate, and blue the lowest intensities. (See p. 135.)
Figure 5.4 Vibrational modes of percolation clusters on a square lattice at concentrations (a) p = pc, (b) 0.61, (c) 0.65, (d) 0.75, (e) 0.90, and (f) 0.99. The black points are unoccupied sites and finite clusters. The vibrational amplitudes |un,ω| of the selected eigenmodes of (5.13) are color coded with red for maximum amplitude, followed by yellow, green and blue down to white for very small amplitudes. A transition from localized behavior at p ≈ pc to extended-looking behavior for large p seems to occur. However, all modes are expected to be localized in the limit of infinite system size according to the single-parameter scaling theory [11] confirmed by numerical studies of level statistics [8]. Lattice size and frequency are 200 × 200 and ω² ≈ 0.01 D/M, respectively. (See p. 145.)
Figure 5.5 Level-spacing distribution P(s) for optical modes on scale-free networks with degree-distribution exponent λ = 5, system size N = 12 500 and no disorder, W = 0. A clear transition from Wigner (dashed red curve) to Poisson (dash-dotted blue curve) behavior is observed as a function of the clustering coefficient prefactor C0 increasing from C0 = 0.0 (continuous red curve) to C0 = 0.90 (continuous blue curve). Inset: localization parameter γ (see (5.22)) versus C0 for networks with N = 5000 (red), N = 7500 (light green), N = 10 000 (green), N = 12 500 (blue), and N = 15 000 (purple). A transition from extended modes for small C0 to localized modes for large C0 is observed at C0,q ≈ 0.69. The results are based on eigenvalues around |E| = 0.2 and 0.5. (Adapted from [22].) (See p. 152.)
Figure 5.6 The localization parameter I0 = ⟨s²⟩/2 versus disorder W for the standard three-dimensional Anderson model with linear sizes L = 14 (red circles), 17 (yellow squares), 20 (green diamonds), 23 (light blue stars), 30 (blue pluses), and 40 (pink crosses) and hard-wall boundary conditions. The lines correspond to fits of (5.29). Insets: the region around the crossing point zoomed in. In (b) the I0 values are corrected by subtraction of the irrelevant scaling variables. (See p. 153.)
Index

a algorithmic complexity 112 ALU, see arithmetic logic unit Anderson model 142, 155 annealed approximation 107, 109, 110 arithmetic logic unit (ALU) 4, 30, 31 assortativity 137 average-case classification 116 b barbell 41, 42, 45, 50–52, 54, 56 – model 38 basic logic gates 32 basins of attraction 41 Benford's law 93, 94 – generalized (GBL) 93, 95, 97–101 Bernoulli shift 171 Boltzmann statistics 54 Boolean – algebra 2 – circuit 2 bounce diagrams 53 bounce map with dissipation 40 boundary conditions 155, 161
c Cantor set 188 cell migration 216 ChaoGates 31 chaotic chip 5 chaotic computing 30 chaotic mixing 48 chaotic processor 5 chaotic systems 4 Chua's circuit 14 circuit diagram 11, 12, 15 clustering 137, 140, 159 CMOS 33 – circuitry 29 colored noise 60, 72, 88 communications protocol 31 complex system 131 computational complexity 111, 112, 116 continuous time – dynamical system 9 – nonlinear system 8, 13 – random walk 209 correlation function 73 critical exponent 143, 157, 158 critical slowing down 105 d database 22, 23
decision problem 112, 113 degree distribution 134, 140 delayed timing clock pulses 12 deterministic chaos 37 dice tossing 37 diffusion – anomalous 208 – coefficient 194 – – generalized 213 – deterministic 193 – equation 197 discrete-time nonlinear system 10 disturbances 119 division dynamics 118 division model 118, 120, 124 division-avalanche 119–121 – distribution 123 divisor function 121 dynamical logic outputs 16 e easy-hard-easy pattern 113, 114 elastic collision 40 emergent behavior 125 emergent phenomena 92 encoding information 19 entrainment transition 89 entropy – Kolmogorov–Sinai 179 – Shannon 180 ergodicity 176 escape rate 187 extreme-value theory 125 f finite-size scaling 153
first-digit frequencies 95 Floquet matrix 73, 75 Fokker–Planck equation 65, 67 fractal 190 fractional derivative 214, 221 frequency locking 86 – region 89 friction parameter 44 friction strengths 49, 57 frozen state 102 full-adder circuit 18 full-adder operations 17 g Gaussian noise 68 GBL, see Benford's law, generalized global shift 21, 28, 33 h half-adder circuits 18 Hamiltonian 53, 142 heteroclinic points 48 i inelastic bounce 40 information storage 18 invariant manifolds 46, 48 isochron 61, 75 iterated map 10 Ito equation 68 Ito-type equation 63 Ito-type phase equation 64 j Josephson junction 25, 27, 28
l level statistics 149, 156 limit-cycle oscillator 59, 73
limit-cycle solution 61, 62 Ljapunov exponent 169, 173 loaded barbell 48 loaded dice 56 localization 131, 156 localization–delocalization transition 131, 142, 151, 156, 163 locking region 88 logic gates 5 logic operations 13 logic patterns 18 logic response 9, 14 logical operations 17 logistic equation 6 logistic map 18 look-up table 33 m magnetic field 148, 161 Markov partition 201 mean degree – network 124 memory effects 60 Montroll–Weiss equation 211
n network image 105 NIFS, see noise-induced frequency shift noise effect on entrainment 85 noise-induced frequency shift (NIFS) 68, 85, 87–90 nonlinear dynamical processors 28 NP class 112 NP completeness 113 NP complexity 113
NP problem 112, 116 number theoretical – properties 125 – techniques 92 number theory 91, 125 o OFD, see orientation flip diagram operational amplifier 12 optical modes 146, 159 optical network 146 orbit flip diagrams 45, 57 order parameter 105, 107 order-disorder phase transition 104 orientation flip diagram (OFD) 46–51, 56 Ornstein–Uhlenbeck noise 60, 72–74, 79, 80, 83, 87, 89 oscillator shifts 86 OU noise, see Ornstein– Uhlenbeck noise p parallelism 28 percolation 138, 143, 144, 157 periodically driven oscillator 85, 87 phase description 61 phase equation 81, 87, 89, 90 phase reduction 59 – method 59, 70 – theory 60, 89 phase space structure 56 phase transition 92, 103, 105, 111, 112, 114, 131, 156 – in numbers 101 photonic lattice 147 Poincaré map 43–45
Poincaré section 41, 45 Poisson distribution 134, 151 Pomeau–Manneville map 206 power law 134, 143, 148 prime number – concentration 103 – generator 110, 112, 116 – sequence 93, 96 – theorem 93, 98 primes counting function 99 processing information 21 proof-of-principle device 13 q quantum percolation 144
r random matrix theory 149 random number generator 37 RCGA, see reconfigurable chaotic logic gate array RCLG, see reconfigurable chaotic logic gate reconfigurable chaotic logic gate (RCLG) 4, 31 reconfigurable chaotic logic gate array (RCGA) 4 reconfigurable computing device 33 reduced phase model 68 reflection law 40 Riemann hypothesis 99 Riemann zeta function 91, 93 s SAT-like problem 112 scale-free network 125, 134, 159 scale-free topology 92, 106 scaling theory 146, 148, 153
search effort 29 search key value 27 self-organized criticality (SOC) 92, 117, 118 – model 125 self-organized models 124 small-world network 134, 158 SOC, see self-organized criticality stable manifolds 43, 45, 48 steady phase distribution 68 steady probability distribution 70, 79 stochastic differential equations 60 stochastic limit-cycle oscillators 59, 89 stochastic phase equation 63, 78, 81 stochastic phase-reduction theory 72 stochastic prime number generator 101 storing information 33 Stratonovich-type phase equation 64, 65, 74 Stuart–Landau (SL) oscillator 69 t tent map 20, 27 threshold controller 6, 11, 12 threshold controller circuit 12 threshold levels 25–27, 32 threshold mechanism 20, 21 threshold phenomenon 112 tight-binding equation 142 truth table 6, 7, 9 Turing machine 2
u universal computing 4 universality class 138, 142, 164 unstable manifolds 48 v vector field 61 very-large-scale integration (VLSI) 1, 3 – chip 31 – circuit 4, 29, 33
– implementation 30, 31 vibrational modes 144 VLSI, see very-large-scale integration w waiting time distribution 212 weak-noise limit 60 white Gaussian noise 60, 62, 63, 69, 70, 73, 77, 85, 87–89 Wigner distribution 150