a single kink (antikink) undergoes a driven Brownian motion described by the Langevin equation [23; 152; 179]

Ẋ± = ∓2F/aM₀ + η(t),   (4.6)

where η(t) is a Gaussian, zero-mean random force with correlation function ⟨η(t)η(0)⟩ = 2(kT/aM₀)δ(t). The static forcing term F pulls the kinks φ± in opposite directions with average speed

u± = ⟨Ẋ±⟩ = ∓2F/aM₀.   (4.7)
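The driven kink diffusion of Eqs. (4.6)-(4.7) is easy to check numerically. Below is a minimal Euler-Maruyama sketch (all parameter values are illustrative, not taken from the text); the sample-averaged speed should reproduce the drift −2F/aM₀.

```python
import numpy as np

# Euler-Maruyama integration of Eq. (4.6): dX = -(2F/aM0) dt + noise,
# with <eta(t) eta(0)> = 2 (kT/aM0) delta(t).  Illustrative parameters.
rng = np.random.default_rng(0)
F, a, M0, kT = 0.1, 1.0, 1.0, 0.05
dt, n_steps, n_paths = 1e-3, 20000, 500

drift = -2.0 * F / (a * M0)                 # deterministic kink speed u_-
sigma = np.sqrt(2.0 * kT / (a * M0) * dt)   # noise increment per time step

X = np.zeros(n_paths)
for _ in range(n_steps):
    X += drift * dt + sigma * rng.standard_normal(n_paths)

t = n_steps * dt
print(np.mean(X) / t)   # sample-averaged speed, compare with Eq. (4.7)
```

Averaging over many realizations suppresses the diffusive spread, so the printed value approaches the deterministic drift of Eq. (4.7).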
The elementary mechanism which allows a φ⁴ chain to switch between its vacuum configurations is the nucleation of a kink-antikink pair (nucleus),

φ_N(x; X) = φ₀ [tanh((x + X)/d) − tanh((x − X)/d) − 1].   (4.8)
Here the center of the nucleus has been set at the origin without loss of generality. Its components experience two contrasting forces: an attractive force due to the vicinity of the nucleating partner and a repulsive force
4. Adding Spatial Dimensions
due to the external bias F. In view of Eq. (4.8), the potential function corresponding to the internal force is [180]

V_N(X) = ∫_{−∞}^{+∞} dx H[φ_N(x; X)]   (4.9)

= 6E₀ [(−2/3 + 3K − 2K²) + (X/d)(1 − 3K² + 2K³)], with K = tanh(X/d).
For X ≫ d this may be further approximated to

V_N(X) = 2E₀ [1 − 6 exp(−2X/d)].   (4.10)
The potential of the external force F can be determined by integrating the drift term of Eq. (4.6), that is, ∓2FX. The critical nucleus configuration φ_N(X, R) is attained for a relative kink-antikink distance, 2R(F), such that the two competing forces compensate each other, that is, for

2R(F) = −d ln(Fd/12E₀).   (4.11)
The critical nucleus decays through one unstable mode only, the collective variable X(t), with negative eigenvalue

λ₀ = V_N″(R)/M₀ = −4F/M₀d.   (4.12)
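The chain of results (4.10)-(4.12) can be verified numerically: with 2R(F) = −d ln(Fd/12E₀), a finite-difference second derivative of the large-X potential V_N(X) = 2E₀[1 − 6 exp(−2X/d)], evaluated at X = R, should reproduce λ₀ = −4F/M₀d. A small sketch with illustrative parameter values:

```python
import numpy as np

# Consistency check of Eqs. (4.10)-(4.12).  Parameters are illustrative.
E0, d, F, M0 = 1.0, 1.0, 0.05, 1.0

# Eq. (4.11): 2R(F) = -d ln(F d / 12 E0)
R = -0.5 * d * np.log(F * d / (12.0 * E0))

def V_N(X):
    # Eq. (4.10), valid for X >> d
    return 2.0 * E0 * (1.0 - 6.0 * np.exp(-2.0 * X / d))

h = 1e-4
lam0 = (V_N(R + h) - 2.0 * V_N(R) + V_N(R - h)) / h**2 / M0
print(lam0, -4.0 * F / (M0 * d))   # the two numbers should agree
```

The agreement follows because, at the saddle point fixed by Eq. (4.11), exp(−2R/d) = Fd/12E₀ exactly cancels the prefactors of V_N″.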
Moreover, its energy ΔE_N(F) is obtainable through Eq. (4.10) after replacing X with R(F). In the Gaussian approximation, the nucleation rate for the biased overdamped φ⁴ string reads

Γ₂ = (|λ₀|/2π) (Z_N/Z₀) exp(−ΔE_N/kT),   (4.13)
where Z₀ and Z_N denote the (subtracted) partition functions for the vacuum and the critical nucleus field configuration, respectively. The entropic factor Z_N/Z₀ accounts for both the phonon modes (with continuous spectrum), which "dress" the vacuum and nucleus configurations, and the two internal modes of φ_N with discrete (nearly degenerate) eigenvalues λ₁,₂ ≃ √3 ω₀/2 [42]. A standard calculation yields the following analytical expression for Γ₂ [180]:
Γ₂ = (ω₀/ad) (3ΔE_N/8πkT)^{1/2} exp(−ΔE_N/kT).   (4.14)
4.1 Spatiotemporal stochastic resonance
The validity of Eq. (4.14) is restricted by the condition that kT/E₀ ≪ 1 and by the requirement of weak bias; at stronger forcing the kink speed acquires a nonlinear correction,

u± → u± / √(1 + (u±/c)²),   (4.15)

where c denotes the limiting kink propagation speed along the chain.
Note that this "concavity" in the kink speed is observed experimentally (see section 4.1.5 and Fig. 4.8) and in the model (see section 4.1.6 and Fig. 4.11). The inequality Fd ≫ kT is implicit in Langer's derivation of formula (4.13). In Ref. [179] the two-body model has been solved without having recourse to the Gaussian approximation; the corrected formula for the nucleation rate in the weak bias regime reads

Γ₁ = (8F/aM₀) n₀²(T),   (4.16)

with the equilibrium kink density

n₀(T) = (1/d) (2/π)^{1/2} (E₀/kT)^{1/2} exp(−E₀/kT)   (4.17)

and ΔE_N(F) ≃ 2E₀. This second expression for Γ is compatible with the linear response theory requirements [95] and can be cast in a more suggestive form, namely

Γ₁ = 4 n₀²(T) |u±|.   (4.18)
Eq. (4.18) is the kinetic model prediction for the nucleation rate in an overdamped φ⁴ theory [179]. The reason for switching from Γ₂ to Γ₁ lends itself to a direct experimental verification (see Fig. 4.12, below). In the Gaussian approximation the nucleation mechanism is controlled essentially by thermal activation; the decay time of the critical nucleus is taken to be finite and negligible with respect to the activation time. In the weak bias regime, however, the parabolic approximation (4.12) is not accurate [23; 152]
Fig. 4.6 Each graph contains three snapshots of the 32-resonator chain demonstrating the formation and spreading of a kink-antikink pair for three different coupling strengths and open boundary conditions. The curves are staggered with time evolving downwards. The brackets indicate the extremes of the phase. The kink/antikink widths are seen to vary dramatically with coupling strength: loose coupling in (a) (Rc = 180 kΩ), tight coupling in (c) (Rc = 3.9 kΩ), and optimal coupling in (b) (Rc = 15 kΩ). The SNRs of (a) and (c) are 2.2 dB below the one achieved with configuration (b).
and the nucleus decay is better described as a steady downhill sliding motion with speed |u±|; the nucleation process is then deemed accomplished only after the nucleating partners have covered a distance of the order of the mean free path n₀⁻¹(T). Finally, for F
4.1.4 Kink nucleation in the experiment
While not intended to serve as a rigid experimental verification of the universality of the findings for the
Fig. 4.7 Shown are staggered snapshots of the 32-resonator chain demonstrating the thermal formation and subsequent spreading of a kink-antikink pair. The curves are separated by eight drive cycles with time evolving upwards. The coupling resistor is 68 kΩ.
defined as the distance between the centers of the kink/antikink, is of the order of half the chain. The corresponding SNR is significantly higher than in the cases of (a) and (c). In order to see how kink-antikink pairs are nucleated in a noisy environment, Figure 4.7 shows staggered snapshots of the thermal nucleation and subsequent spreading of a kink-antikink pair.

4.1.5 Decay rates and kink speeds
In the hope of gaining more insight into the dynamics of the hopping process of the array for various coupling and noise strengths we examine two significant time scales in more detail: (i) the speed of each component of a nucleating pair as a function of forcing amplitudes and (ii) the kink nucleation rates as a function of temperature, both for a range of coupling strengths. For the remainder of this section, the number of resonators is fixed to 32, and periodic boundary conditions are employed. In the continuum case of the overdamped
Fig. 4.8 Kink speed as a function of force for various coupling strengths. From top to bottom: Rc = 6.8 kΩ, 15 kΩ, 27 kΩ, 68 kΩ, 100 kΩ, 180 kΩ. (Note that the coupling strength is inversely proportional to the value of the coupling resistors.) The force is a measure of the asymmetry of the basin of attraction of the two phases. The curves are well fitted by a straight line. For low coupling strength and low forcing, kinks can get trapped due to discreteness effects and inhomogeneities.
The curves in Figure 4.8 corresponding to the extreme coupling values, 6.8 kΩ and 180 kΩ, should be evaluated with caution: for very weak coupling (180 kΩ) the kink would move only for relatively strong forcing, while for strong coupling (6.8 kΩ) it is impossible to nucleate a kink at all in the case of low forcing. As one would expect, the time it takes for a kink-antikink pair to travel around the loop is much shorter than one drive period. (In this case for the beat frequency of 100 Hz and the forcing amplitude used in the SNR measurements as described above.) How does this time relate to the average time it takes to nucleate the first critical nucleus, T_nuc? In the experiment we are not able to get an accurate measurement of T_nuc. Instead, we can measure average decay times, i.e., the time it takes for the whole chain to decay from its meta-stable phase into the stable phase. Assuming that
Fig. 4.9 Half-time (as defined in the text) as a function of inverse noise intensity, for various coupling strengths. From left to right: Rc = 6.8 kΩ, 15 kΩ, 27 kΩ, 68 kΩ, 100 kΩ, 180 kΩ. Two regions can be separated in which the curves are approximately linear, each with a different slope. Note that the coupling strength is inversely proportional to the value of the coupling resistor.
the decay time after a kink-antikink pair was created (depending inversely on the kink speed) is small compared to the time it takes to nucleate the first pair, a good estimate for T_nuc will be half the average (total) decay time. Of course, this approximation breaks down for high noise intensities, since the nucleation rate then becomes comparable to the "post-nucleation" decay rate. We remark that the total decay time is the multidimensional equivalent of the inverse Kramers rate for a single system. Figure 4.9 shows this half-time as a function of inverse temperature.* A step force of 2 units (see Figure 4.8) is applied to the chain of diode resonators initially in the energetically less favorable phase. We then measure the average decay rates into the stable phase and plot the (average) time it takes for half the elements to switch phase. The curves are roughly piecewise linear on the logarithmic scale, indicating an exponential law in each of the two sections. * Temperature is defined as the square of the rms voltage of the noise sources.
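The half-time measurement just described can be mimicked in a toy simulation. The sketch below uses a generic chain of overdamped, biased double-well units as a stand-in for the diode-resonator array; the on-site force x − x³ + F, the coupling K, and all parameter values are hypothetical choices, not the circuit model.

```python
import numpy as np

# Half-time measurement on a chain of overdamped, biased double-well
# units: start the whole chain in the metastable phase and record the
# time at which half of the elements have switched.
rng = np.random.default_rng(1)
N, K, F, kT, dt = 32, 0.5, 0.1, 0.15, 1e-2

def half_time(max_steps=200000):
    x = -np.ones(N)                  # metastable phase (bias F favors +1)
    for step in range(max_steps):
        lap = np.roll(x, 1) + np.roll(x, -1) - 2 * x   # periodic coupling
        x += dt * (x - x**3 + F + K * lap) \
             + np.sqrt(2 * kT * dt) * rng.standard_normal(N)
        if np.count_nonzero(x > 0) >= N // 2:
            return step * dt
    return np.inf

print(half_time())
```

Averaging `half_time()` over many runs at several noise levels reproduces the kind of Arrhenius-like (piecewise linear on a log scale) behavior plotted in Fig. 4.9.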
4.1.6 Coupled maps
We choose a chain of (symmetrically) coupled logistic maps as a computationally efficient representation of the experimental setup:
x_{n+1}^{(i)} = (1 − ε) f(x_n^{(i)}, r) + (ε/2) [f(x_n^{(i−1)}, r) + f(x_n^{(i+1)}, r)] + Dζ_n^{(i)}   (4.19)
(i ∈ [1, N], n = time, f(x, r) = rx(1 − x), x ∈ [0, 1]). The adjustable parameter D alters the intensity of the normally distributed deviate ζ_n^{(i)}, which has zero mean and unit variance [221] and is independent from site to site. These discrete-time maps were introduced in the previous chapter, and their validity is well established in previous work [123]. The different nature of the coupling in the coupled map lattice and the coupled oscillators results in significant differences in the limit of low and high coupling. A measure of the qualitatively different coupling is the maximum obtainable kink width, i.e., the number of sites that constitute the domain boundary. While, for the time-continuous coupled elements, the kink width at least in principle grows proportionally to the coupling strength, the kink width in the coupled map lattice cannot be pushed beyond eight to nine sites. The limited long-range correlations in a coupled map lattice affect the maximum achievable SNR. Since the local kink nucleation cannot involve more than sixteen to eighteen sites, the SNR is expected to saturate for more than 32-34 maps. Figure 4.10 verifies this trend and also displays two curious differences from the case of the coupled Duffing oscillators [161] and the coupled diode resonators [168]: (i) the SNR is already significantly enhanced for very weak coupling strengths (ε = 0.002 to 0.01), and (ii) after reaching a plateau which depends on the number of coupled maps, the maximum SNR does not decrease significantly over the available range of coupling values. These two issues will be subject to further investigation. The coupled map simulations verify the array enhancement; the maximum achievable SNR for a single logistic map is around 18 dB, while the SNR for 32 optimally coupled maps is between 25 and 26 dB. The resulting enhancement of 7.5 dB is significant.
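A compact implementation of Eq. (4.19) with periodic boundary conditions; ε and D are illustrative values, and the state is clipped back to [0, 1] as a crude guard against the noise kicking a map out of the unit interval.

```python
import numpy as np

# Coupled logistic-map lattice, Eq. (4.19), periodic boundaries.
# r = 3.2 puts each isolated map on a stable period-2 orbit, as in
# the text; eps and D are illustrative.
rng = np.random.default_rng(2)
N, r, eps, D, n_iter = 32, 3.2, 0.2, 0.02, 1000

def f(x, r):
    return r * x * (1.0 - x)

x = rng.uniform(0.3, 0.7, N)
for _ in range(n_iter):
    fx = f(x, r)
    x = (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1)) \
        + D * rng.standard_normal(N)
    x = np.clip(x, 0.0, 1.0)   # keep the state in [0, 1] despite the noise

print(x.round(2))
```

Tracking which sites sit on which branch of the period-2 orbit over time exposes the phase domains, kinks, and noise-induced phase slips discussed in the text.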
In order to understand the phase switching process in the coupled map lattice, we study the dynamic properties of kinks in this section. The kink speed as a function of bias for different coupling values is shown in
Fig. 4.10 The maximized SNR vs coupling for different numbers of coupled logistic maps. From top to bottom: N = 32, 15, 9, 5, 3. The parameter r = 3.2 was chosen such that each individual map is in a stable period two. The signal amplitude is sub-threshold with δr = 0.04.
Figure 4.11. Several key observations can be gathered from this graph. Each curve displays a linear region, which is expected from theory (see Eq. (4.6)). The concavity behavior for larger forces (bias) qualitatively agrees well with the nonlinear corrections obtained in the
Fig. 4.11 Same as Fig. 4.8 for the coupled map lattice. Shown are the kink speeds for different values of the coupling parameter (labeling the curves) as a function of applied bias/force.
should be regarded as an almost deterministic process. We measured the average number of iterations it takes to nucleate one pair under an applied bias as well as the average time it takes to fully decay into the stable phase afterwards. Figure 4.12 shows these two quantities as a function of noise intensity D along with the sum and the corresponding SNR. The sum will be the most accessible variable to be measured in experiments. It can be considered as the equivalent of the average waiting time for one uncoupled system. With the added spatial dimension it can be nicely dissected into two distinct components corresponding to the two processes described above. The time it takes to nucleate the first kink decreases exponentially with the noise intensity D. As can be seen in Figure 4.12, the subsequent decay time is a slowly decreasing function of the noise, which implies that
Fig. 4.12 Average time (# of iterations) it takes to nucleate the first kink (circles), then to fully decay (diamonds), the sum (squares), as well as the corresponding SNR (triangles, dashed line) versus applied noise. The minimum SNR results from the crossover of the two time scales, i.e., when the nucleation time is equal to the subsequent full decay time.
the (anti)kinks propagate faster for increasing noise. This effect is discussed and utilized in more detail in the following chapter. We plot the SNR on the same graph to illustrate a curious phenomenon: the minimum of the SNR occurs at almost exactly the noise level where the two time scales cross. The universality of this coincidence is still a topic of ongoing research. In order to stay close to the experiment, we also measured the half-time for the coupled maps. Figure 4.13 is the equivalent of Figure 4.9 for the coupled map lattice. The measurement procedure is identical to the experiment: a step force (bias) of −0.01 is applied at time zero to the coupled maps initially in the less favorable phase. We then measure the number of iterations it takes for half the maps to decay into the stable phase.
Fig. 4.13 Same as Figure 4.9 for the coupled map lattice. The coupling values (increasing upwards) are ε = 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4 (r = 3.2, δr = 0.01).
4.2 Doubly stochastic resonance
Apart from short allusions in Chapters 1 and 2 we have, so far, more or less ignored the role of multiplicative noise [54; 86; 87; 147; 242]. In the 1980s, the appearance of new maxima in the probability distribution of nonlinear systems was observed when the intensity of the noise was increased. These novel states exist only in the presence of multiplicative noise, i.e., they have no counterpart in a deterministic or purely additive noise setting. The classic reference for such noise induced transitions is the book by Horsthemke and Lefever [106]. More recent work has focused on first and second order phase transitions in spatially coupled overdamped elements [50; 51; 145; 173; 199; 275; 278]. In particular, the interplay between additive and multiplicative noise has been investigated lately. For an updated, comprehensive review we refer the reader to the book by Garcia-Ojalvo and Sancho [75]. In this section we study an N × N two-dimensional lattice of coupled Langevin equations of the form

ẋᵢ = f(xᵢ) + g(xᵢ)ξᵢ(t) + (D/4) Σ_{j∈nn(i)} (xⱼ − xᵢ) + ζᵢ(t) + A cos(ωt + φ),   (4.20)
with f and g defined as

f(x) = −x(1 + x²)²,  g(x) = a² + x².   (4.21)
The sum in Eq. (4.20) runs over all nearest neighbors nn(i) of the i-th cell, and the "flattened" index i denotes the two-dimensional cell position i = i_x + N(i_y − 1), with i_x, i_y ∈ [1, N]. For the remainder of this section we fix the "purely additive component" of the multiplicative noise function to be a = 1. The additive and multiplicative noise terms are mutually uncorrelated (in space and time), zero-mean Gaussian distributed with auto-correlation functions:
⟨ξᵢ(t)ξⱼ(t′)⟩ = σ_ξ² δᵢⱼ δ(t − t′),   (4.22)

⟨ζᵢ(t)ζⱼ(t′)⟩ = σ_ζ² δᵢⱼ δ(t − t′).   (4.23)
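A direct Euler sketch of the lattice model (4.20)-(4.21): an Ito-type update on a small 16 × 16 lattice with periodic boundaries, kept fast for illustration. The multiplicative intensity 3 and coupling D = 20 follow the DSR example below; the additive intensity, the time step count, and the hard clip at |x| = 8 (a purely numerical guard) are my own choices.

```python
import numpy as np

# Euler integration of Eq. (4.20) with f, g from Eq. (4.21) and a = 1.
rng = np.random.default_rng(3)
N, D, a = 16, 20.0, 1.0
sig_m2, sig_a2 = 3.0, 1.0            # multiplicative / additive intensities
A, omega, dt, n_steps = 0.1, 0.1, 2.5e-4, 4000

def f(x):
    return -x * (1.0 + x**2)**2

def g(x):
    return a**2 + x**2

x = 0.1 * rng.standard_normal((N, N))
for n in range(n_steps):
    lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
           + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
    xi = np.sqrt(sig_m2 / dt) * rng.standard_normal((N, N))
    zeta = np.sqrt(sig_a2 / dt) * rng.standard_normal((N, N))
    x += dt * (f(x) + g(x) * xi + 0.25 * D * lap + zeta
               + A * np.cos(omega * n * dt))
    x = np.clip(x, -8.0, 8.0)        # numerical guard only

print(abs(x.mean()))                 # instantaneous mean field |m(t)|
```

Recording `x.mean()` over many drive periods at different additive noise intensities reproduces the synchronization behavior discussed below.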
In the absence of periodic forcing, the system (4.20), which is monostable in its deterministic version, can undergo a phase transition to a noise-induced bistable state [50; 145]. Intriguingly, this noise-induced structure can act as an effective bistable potential that, in combination with the periodic drive and the additive noise, can exhibit collective stochastic resonance! This effect was termed doubly stochastic resonance [277] to emphasize that additive noise causes a resonance-like behavior in the structure, which is sustained by multiplicative noise. The following is a short summary of Ref. [277], which is the first report on doubly stochastic resonance. For A = 0, the model (4.20) can be solved analytically [75] by replacing the nearest-neighbor interaction by a global term in the Fokker-Planck equation corresponding to (4.20). The resulting steady-state probability distribution is given by

P_st(x, m) = [C(m)/√(σ_ξ² g²(x) + σ_ζ²)] exp( 2 ∫₀ˣ [f(y) − D(y − m)] / (σ_ξ² g²(y) + σ_ζ²) dy ),   (4.24)
Fig. 4.14 Transition lines between the ordered and disordered phases in the (D, σ_ξ²) plane for different intensities of additive noise: σ_ζ² = 0 (curve 1), σ_ζ² = 1 (curve 2), and σ_ζ² = 5 (curve 3). The black dot corresponds to D = 20, σ_ξ² = 3, which are the values chosen to demonstrate DSR in the text. Reprinted with permission from [277].
where C(m) is a normalization constant and m is the mean field, defined by

m = ∫_{−∞}^{+∞} x P_st(x, m) dx.   (4.25)
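The self-consistency condition (4.25) can be solved by straightforward fixed-point iteration on a grid. The sketch below uses zero additive intensity (`sig_a2 = 0`) and the coupling and multiplicative intensity of the DSR example (D = 20, `sig_m2 = 3`); the grid size, grid range, and iteration count are arbitrary choices.

```python
import numpy as np

# Fixed-point iteration of m -> integral of x * P_st(x, m),
# Eqs. (4.24)-(4.25), with f, g from Eq. (4.21) and a = 1.
D, sig_m2, sig_a2, a = 20.0, 3.0, 0.0, 1.0

xs = np.linspace(-4.0, 4.0, 4001)
dx = xs[1] - xs[0]
denom = sig_m2 * (a**2 + xs**2)**2 + sig_a2

def mean_field(m):
    integrand = (-xs * (1.0 + xs**2)**2 - D * (xs - m)) / denom
    I = np.cumsum(integrand) * dx
    I -= I[len(xs) // 2]                 # set lower integration limit to x = 0
    P = np.exp(2.0 * I - 2.0 * I.max()) / np.sqrt(denom)
    P /= P.sum() * dx                    # normalize on the grid
    return (xs * P).sum() * dx

m = 0.5
for _ in range(200):
    m = mean_field(m)
print(m)   # a nonzero fixed point signals the ordered phase
```

Scanning this iteration over D and the noise intensities traces out transition lines of the kind shown in Fig. 4.14.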
The mean field m serves as an order parameter that lets us distinguish between ordered (m ≠ 0) and disordered (m = 0) phases. In the ordered phase the system occupies one of two symmetric possible states corresponding to mean fields m₁ = −m₂ ≠ 0. When increasing the multiplicative noise intensity, the system undergoes a phase transition from a disordered to an ordered state, as shown in Figure 4.14. Figure 4.15 demonstrates the delayed transition onset and reduced magnitude of |m| when noise of growing amplitude is added. When choosing parameters such that the system remains in the disordered state for a range of additive noise powers (indicated by the black dot in Figure 4.14), the authors of [277] study its response to periodic forcing. Numerically simulating (4.20) with a time step Δt = 2.5 × 10⁻⁴,
Fig. 4.15 The order parameter |m| vs the intensity of multiplicative noise for D = 20 and σ_ζ² = 0 (curve 1), σ_ζ² = 1 (curve 2), and σ_ζ² = 5 (curve 3). Reprinted with permission from [277].
they find significantly enhanced synchronization between the forcing and the instantaneous mean field, computed as m(t) = N⁻² Σᵢ xᵢ(t), for an optimum value of the additive noise intensity. Time series of the mean field and the corresponding periodic input signal are plotted in Fig. 4.16. Besides the obvious SR-like dependence on additive noise power, we observe decreased amplitudes of hops as a result of the modified effective potential. In fact, the noise-induced bistability disappears completely for large noise intensities.
4.3 Spatial patterns
So far, we have treated spatial extent as a rather abstract additional dimension, more or less neglecting its very concrete implications in reality. We would not do justice to the expansive title of this chapter, did we not address the rich subject of stochastic and deterministic pattern formation. What exactly do we mean by pattern formation? Pattern formation is a
Fig. 4.16 Illustration of doubly stochastic resonance via the time evolution of the noisy mean field and the (smooth) periodic external force A cos(ωt + φ). The intensity of additive noise increases from top to bottom: σ_ζ² = 0.01, 1.05, and 5, respectively. The middle-row graph demonstrates optimum input/output synchronization. Parameters for all panes are N² = 18 × 18, A = 0.1, ω = 0.1, D = 20, and σ_ξ² = 3. Reprinted with permission from [277].
process by which a spatially uniform state loses stability to a non-uniform state: a pattern. Spatiotemporal patterns appear spontaneously in a wide range of physical, chemical, and biological systems when they are driven sufficiently far from thermodynamic equilibrium. The classic example is Rayleigh-Bénard convection in a fluid layer heated from below. For sufficiently strong heating, fluid motion sets in, typically in the form of convection rolls. We refer the reader to the excellent review by Cross and Hohenberg [41] for a comprehensive introduction. In this section we focus on two basic numerical models that, despite their simplicity, display many
Fig. 4.17 Upper panel: snapshots of the field x for different values of correlated additive noise for D = 1.0, σ_ξ² = 1.8, and σ_ζ² = 0. The parameter a increases from left to right: a = 0.1, a = 1.0, and a = 10. Lower panel: snapshots of the field x in the case of uncorrelated additive noise.
of the experimentally observed phenomena in spatial, two-dimensional media. The first model is a variation of the lattice investigated in the context of doubly stochastic resonance in the previous section. Zaikin and Schimansky-Geier [278] omit the sinusoidal drive (A = 0) and replace the simple diffusive spatial coupling term in Eq. (4.20) by a discretized version of the Swift-Hohenberg coupling term −D(q₀² + ∇²)²:
ℒxᵢ = −D [ q₀² − (1/Δ²) Σⱼ (1 − exp(Δ eⱼ · ∂/∂r)) ]² xᵢ.   (4.26)
4-3.
Spatial
patterns
105
Here, r is the spatial coordinate, eⱼ are the lattice unit vectors, and Δ is the lattice spacing. The crucial difference between diffusive coupling and a spatial operator such as the Swift-Hohenberg term is the selective amplification of a preferred wave number by the latter. While it is possible to observe Turing patterns in purely diffusive systems, the vast majority of pattern forming systems are driven by this fundamental selection mechanism. Theory predicts, for the system defined by equations (4.20) with their coupling term replaced by (4.26) and (4.21), the existence of a reentrant phase transition [216]. The relevant parameter is the intensity of the effective additive noise, which is the sum of the "additive component" of the multiplicative noise a²ξ and the "pure" additive noise ζ. We mention in passing that the former term is strongly correlated with the multiplicative fluctuations. The authors of Ref. [278] performed numerical simulations on a 128 × 128 square lattice with open boundary conditions x_r = 0 and n·∇x_r = 0, utilizing an Euler scheme with time step dt = 5 × 10⁻⁴. In the upper panel of Fig. 4.17 snapshots of the field (after 100 elapsed time units) have been plotted for three different values of a. Clearly, the increase of correlated additive noise first gives rise to spatial patterns and subsequently destroys them as well. Similar "noise induced patterns" are observed when varying the independent additive noise.
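The wavenumber selection performed by the Swift-Hohenberg operator can be illustrated with a stripped-down sketch: additive noise only (no multiplicative term), the local force f(x) = −x(1 + x²)², and the linear operator applied exactly in Fourier space, where −D(q₀² + ∇²)² becomes multiplication by −D(q₀² − k²)². This is not the full model of Ref. [278]; all parameter values are illustrative.

```python
import numpy as np

# Noise-driven field with a Swift-Hohenberg-type spatial operator:
# modes near |k| = q0 are damped least, so the time-averaged power
# spectrum of the field should peak near q0.
rng = np.random.default_rng(4)
N, L, D, q0, sig_a2 = 64, 64.0, 1.0, 1.0, 0.1
dt, n_steps = 0.05, 2000

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k)
k2 = kx**2 + ky**2
lin_fac = np.exp(-D * (q0**2 - k2)**2 * dt)   # exact linear propagator

x = 0.01 * rng.standard_normal((N, N))
spec = np.zeros((N, N))
for step in range(n_steps):
    x += dt * (-x * (1.0 + x**2)**2) \
         + np.sqrt(sig_a2 * dt) * rng.standard_normal((N, N))
    x = np.fft.ifft2(lin_fac * np.fft.fft2(x)).real
    if step >= 1000 and step % 10 == 0:
        spec += np.abs(np.fft.fft2(x))**2      # accumulate the spectrum

kk = np.sqrt(k2).ravel()
k_peak = kk[np.argmax(spec.ravel()[1:]) + 1]   # skip the k = 0 mode
print(k_peak)
```

Applying the linear part exactly in Fourier space keeps the scheme stable even for the strongly damped high-k modes, and the peak of the accumulated spectrum sits near the selected wavenumber q₀.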
4.3.1 Parametric surface waves
In 1831 the physicist Michael Faraday described, in an appendix to a paper for the Royal Society in London, a surface wave instability now known as Faraday waves or Faraday crispations [58]: When the upper surface of a plate vibrating so as to produce sound is covered with a layer of water, the water usually presents a beautifully crispated appearance in the neighborhood of the centers of vibration. ... Most fluids, if not all, may be used to produce these crispations, but some with particular advantages; alcohol, oil of turpentine, white of egg, ink, and milk produce them. The basic experimental setup consists of an open container of fluid, which is subjected to uniform vertical oscillations [41; 192; 204]. When the strength or amplitude of these oscillations exceeds a certain threshold, the
Fig. 4.18 Faraday waves: typical patterns observed in vertically shaken layers of liquids. Reproduced with permission from [151].
initially flat surface develops an instability and a pattern of standing waves is formed. Examples of such patterns are shown in Figure 4.18 and include stripes, squares, and hexagons.

4.3.1.1 Noise induced bistability of Faraday waves
Depending on the choice of the vibration frequency, the bifurcation from the static state to a finite amplitude oscillatory regime can be smooth (supercritical) or abrupt and hysteretic (subcritical). In the latter case, multiplicative noise in the form of fluctuations superimposed on the sinusoidal forcing can lead to noise induced bistability. Residori and coworkers [15; 224] compared the bifurcation diagram for the amplitude of the surface waves in the presence of noise with the purely deterministic case and found two new branches that involve a much larger difference in oscillation amplitude. The experimental setup is depicted in Figure 4.19. The photo detector measures the position of the reflected laser beam, which is proportional to the height of the surface. As the subharmonic pattern repeats itself only every two drive periods, the relevant variable is the spectral
Fig. 4.19 Schematic diagram of the experimental setup investigating the effect of multiplicative (parametric) noise on the onset of pattern formation. The photo detector (ph.d.) provides a voltage proportional to the height of the surface. The vertical acceleration is measured by a piezoelectric accelerometer (acc.). Reprinted with permission from [15; 224].
power of the photo detector signal at Ω/2, which we denote by a_rms. As the spatial location is randomly chosen, we expect a_rms to carry appropriate information about the entire surface pattern. The noise ξ(t) is added to the acceleration, which in the reference frame of the fluid container is simply [15; 224]

g_eff = g + a cos Ωt + ξ(t),   (4.27)
where g is the acceleration due to gravity. The measured value of a_rms, denoted by A_rms, is therefore approximately Gaussian-distributed. Time series of A_rms are displayed in Figure 4.20 for increasing sinusoidal forcing. The bifurcation without noise is subcritical; i.e., the wave amplitude
Fig. 4.20 Temporal evolution of the wave amplitude for a noise intensity of 0.1 mV²/Hz. The sinusoidal forcing corresponds to an average acceleration ⟨a_rms⟩ = 5.7, 5.9, 6.2, and 6.4 m/s² in (a), (b), (c), and (d), respectively. Reprinted with permission from [15; 224].
Fig. 4.21 Probability densities of the wave amplitude for a noise intensity of 0.1 mV²/Hz and for different values of the deterministic forcing: ⟨a_rms⟩ = 5.7, 5.9, 6.2, and 6.4 m/s² in (a), (b), (c), and (d), respectively. Reprinted with permission from [15; 224].
jumps to a finite value for an rms acceleration a_c = 5.8 ± 0.1 m/s² [15; 224]. The probability density functions corresponding to the time recordings of Figure 4.20 are shown in Figure 4.21. We observe that noise triggers the instability onset before the deterministic threshold. In addition, for intermediate noise levels the probability density of the wave amplitude has two maxima that do not correspond to any of the deterministic states. The authors stress that noise does not simply trigger transitions between two metastable states but instead truly enlarges the bistable region in the vicinity of the subcritical bifurcation. Bifurcation diagrams are furnished in Refs. [15; 224] to substantiate that claim.

4.3.1.2 Granular flow

Granular materials such as rice, sugar, grains, and sand defy the standard categorization of matter as a solid, liquid, or gas. Understanding
Fig. 4.22 Typical granular patterns observed in vertically shaken layers of granular media. From left to right: squares and stripes (top row); hexagons (peak phase) and hexagons (crater phase); coexistence of both phases of hexagons in a period-doubled layer; and 4 phases of hexagons in a period-quadrupled layer (bottom row). The middle two hexagons arise when the system is forced sinusoidally at frequency f and with a small (≈5%) component at f/2. The bottom images are forced with a single frequency, and the hexagons arise due to the kinematics of the layer and plate motion, whose effect is to drive the system with a small subharmonic component as well. The phase of this driving is degenerate by 2 (left bottom) or 4 (right bottom). Reproduced with permission from [257].
and relating macroscopic properties, such as mass and energy transport, of granular materials in motion to "microscopic" grain characteristics like friction, shape, and size are of particular relevance to many industrial processes that mix and sort granular materials. When external energy is provided to compensate for the energy losses due to inelastic collisions, granular materials exhibit fascinating behavior that is highly reminiscent of pattern formation in ordinary liquids. In recent years,
the study of these "granular equivalents" of Faraday waves in vibrated fluids has received a great deal of attention [122; 188; 189; 258; 259]. Thorough overviews can be found in recent review articles and books [44; 121; 136; 190]. Figure 4.22 provides us with a glimpse of the rich granular patterns observable when parameters such as the drive frequency and acceleration are varied. The similarity with the Faraday patterns from Figure 4.18 is quite striking. The author would like to stress that the experiments are only superficially similar. While it is tempting to derive the analog to the hydrodynamic Navier-Stokes equations from a kinetic theory for the particles, we are faced with the following obstacles:

• Unlike molecules, the granular particles interact only through a contact force. Thus the surface of, e.g., a heap of sand has no surface tension.
• The kinetic energy of the granular particles is so small compared with their mass that any kinetic definition of temperature would yield negligible deviations from absolute zero.
• The surface itself is an ill-defined concept. The density covers several orders of magnitude, going from the bottom of the pile, which is densely packed, to the upper region, which has a very low particle density; yet all of the granular material supposedly is part of the same phase.
• Due to grain-grain collisions, no equilibrium temperature can be defined.
• Local dissipation induces inhomogeneities leading to clustering and clumping.

Considering all of the above, the similarity of the patterns is thus a much more remarkable occurrence than it would at first sight appear. In the experiments of Refs. [122; 188; 189; 258; 259], the granular layer is on a horizontal plate with an upper free surface and is vertically vibrated. Holding the frequency f₀ fixed and increasing the oscillation amplitude, the onset pattern is usually a stripe pattern oscillating at half the drive frequency, f₀/2.
In order to understand this subharmonic response, let us examine the granular dynamics over the course of one drive cycle in more detail. Figure 4.23, which is actually a sequence of snapshots extracted from a beautiful and educational animation created by Chris Bizon [20], illustrates five different phases of the drive. The phases within a drive period can be roughly
Fig. 4.23 Various phases of the subharmonic pattern oscillation in a granular layer over one period of the sinusoidal drive. From top to bottom: (a) We start with an existing (stripe) pattern. (b) The bottom surface quickly accelerates upwards, flattening the spatial profile. Note that the downward pressure is greatest underneath the "hills", resulting in horizontal material flow into the "valleys". (c) Shortly after the peak amplitude, the plate, moving downward, loses contact with the granular layer. (d) During the free fall the layer rearranges itself, moving particles from the peaks to the valleys. (e) The vertical amplitude of the new pattern is greatest just before contact with the plate. (f) Same phase as in (a), but the stripe pattern has shifted by one half spatial wavelength. During the subsequent cycle (not shown), the additional phase shift would result in the identical pattern with which we started.
divided into a compression phase, during which the plate accelerates the granular layer upwards, and a free fall, during which there is no contact between the particles and the bottom plate. As illustrated in Fig. 4.23, the pattern repeats itself only after two drive periods because of a spatial shift by λ/2, where λ is the wavelength of the pattern. The physical reason for this shift is the horizontal rearrangement of granular material that occurs because of the proportionality of the vertical pressures to the local layer heights. As mentioned above, modeling and analyzing granular flow remains a great challenge to theory, mainly because of the lack of "granular-dynamic" equations. Venkataramani and Ott recently suggested a powerful minimalist model, which incorporates temporal period doubling and selective spatial amplification [263; 264]. Although the model is extremely simplified and ignores most of the physics specific to the experiments, it nevertheless displays many of the experimentally observed effects. This fact immediately suggests a certain degree of universality and physics independence of the essential pattern formation mechanisms. The model is discrete in time and continuous in space. We denote the height of the granular layer at position x and integer-valued time n by ξ_n(x). To obtain the next iterate ξ_{n+1}(x), we first apply a one-dimensional map M to ξ_n independently at each point in space and subsequently couple the dynamics of nearby x locations by a linear spatial operator L:

ξ'_n(x) = M[ξ_n(x), r] = r exp[-(ξ_n(x) - 1)^2 / 2],   (4.28)

ξ_{n+1}(x) = L[ξ'_n(x)].   (4.29)
The map in Eq. (4.28) is similar to the logistic map but remains bounded for all arguments [263]. The parameter r serves as the bifurcation parameter. In Fourier space, the spatial operator L takes on the compellingly simple form of a multiplication with a function f(k). The quantity log|f(k)| is the growth rate for the amplitude of the standing wave with wave vector k between two collisions with the plate [263]. We denote the spatial Fourier transform of ξ_n(x) by ξ_n(k) = F[ξ_n(x)]. We furthermore assume isotropy, so that f(k) depends only on k = |k|. All in all, the following three consecutive steps realize the spatial coupling: (1) compute the Fourier transform ξ'_n(k) of ξ'_n(x),
Fig. 4.24 Spatial amplification factor f(k) = Φ(k) exp[γ(k)], as defined via Eqs. (4.30) and (4.31), versus k. Due to its even symmetry, f(k) = f(-k), only the positive wave vector side is shown. The peak at k_0 results in preferred spatial patterning at a length scale k_0^{-1}. The function Φ(k) changes sign at k_c. The parameters are k_0 = 1, k_c = √5 k_0.
(2) apply the operator L:
ξ_{n+1}(k) = f(k) ξ'_n(k); (3) and lastly invert the Fourier transform, ξ_{n+1}(x) = F^{-1}[ξ_{n+1}(k)]. We refer the reader who is interested in the numerical implementation of the two-dimensional Fast Fourier Transform (FFT) to Appendix C. Spatial patterning at a preferred scale k_0^{-1} is thus easily achieved by having |f(k)| peak at k = k_0 (with |f(k_0)| > 1) and attenuating wave vectors far away from k_0. The authors of Ref. [263] chose f(k) = Φ(k) exp[γ(k)], where
γ(k) = (1/2) (k/k_0)^2 [2 - (k/k_0)^2],   (4.30)

Φ(k) = sgn(k_c^2 - k^2).   (4.31)
This even function is shown in Fig. 4.24 for positive k. The factor Φ(k) introduces a second length scale, which expands the number of possible
patterns [263]. Numerical simulations of this model yield patterns that are qualitatively very similar to the patterns observed in experiments. The relevant parameters to be varied are r and k_c/k_0, which are loosely identified with the experimental dimensionless acceleration and driving frequency, respectively. For fixed k_c/k_0, as one increases r, the observed bifurcation sequence is a period-1 flat state bifurcating into a period-2 pattern, which then becomes a period-2 flat state, and eventually a period-4 pattern [263]. Among the observed patterns are stripes, squares, hexagons, kinks, and disorder. We refer the reader to Refs. [263; 264] for pictures, bifurcation diagrams, and a thorough stability analysis. In the following we are mainly concerned with the effect of additive and multiplicative fluctuations on the observed patterns.

4.3.1.3 Intermittent pattern switching
We can render the deterministic equations (4.28) and (4.29) into a stochastic model in at least four different, immediately obvious ways, namely:

• additive local noise: ξ'_n(x_i) = M[ξ_n(x_i), r] + ε_i(n), where ⟨ε_i(n) ε_{i'}(n')⟩ = σ_ε^2 δ(i - i') δ(n - n'),
• additive global noise: ξ'_n(x_i) = M[ξ_n(x_i), r] + η(n), where ⟨η(n) η(n')⟩ = σ_η^2 δ(n - n'),
• multiplicative local noise: ξ'_n(x_i) = M[ξ_n(x_i), r + ζ_i(n)], where ⟨ζ_i(n) ζ_{i'}(n')⟩ = σ_ζ^2 δ(i - i') δ(n - n'),
• multiplicative global noise: ξ'_n(x_i) = M[ξ_n(x_i), r + ζ(n)], where ⟨ζ(n) ζ(n')⟩ = σ_ζ^2 δ(n - n'),
and of course any combination of the above. It would be beyond the scope of this book to exhaustively explore this vast parameter space and to describe in detail the phenomenology of stochastic pattern formation. Instead we focus on one particular observation, which occurs for both additive and multiplicative global noise: noise-induced pattern transitions. We fix the parameters r = 1.9 and (k_c/k_0)^2 = 5 such that in the deterministic case a stripe pattern will form. Applying moderate global noise of intensity σ_η = 0.1, we observe intermittent switching between stripes and hexagons. Figure 4.25 shows a few selected snapshots, separated by 100 iterations each, after transients have settled down.
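The iteration just described is easy to put into code. The sketch below is a one-dimensional caricature with additive global noise; the grid size, domain, random seed, and the precise shapes of the local map and of the growth exponent γ(k) are assumptions of this sketch rather than the exact choices of Ref. [263]:

```python
import numpy as np

# One-dimensional sketch of the iteration (4.28)-(4.29) with additive
# global noise. Grid size, domain length and seed are illustrative; the
# Gaussian-shaped local map and the quartic growth exponent are plausible
# stand-ins for the choices of Ref. [263].
N, r, sigma_eta = 256, 1.9, 0.1
k0, kc = 1.0, np.sqrt(5.0)                 # (kc/k0)^2 = 5 as in the text
dx = 2 * np.pi / (16 * k0)                 # resolve scales well beyond kc
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

gamma = 0.5 * (k / k0) ** 2 * (2.0 - (k / k0) ** 2)
f = np.sign(kc**2 - k**2) * np.exp(gamma)  # spatial amplification factor

def step(xi, rng):
    """One iterate: local map M, additive global noise, then the linear
    spatial operator applied as multiplication by f(k) in Fourier space."""
    xi_prime = r * np.exp(-0.5 * (xi - 1.0) ** 2)            # map M
    xi_prime = xi_prime + sigma_eta * rng.standard_normal()  # global noise
    return np.fft.ifft(f * np.fft.fft(xi_prime)).real        # operator L

rng = np.random.default_rng(0)
xi = rng.random(N)                         # random initial condition
for _ in range(500):
    xi = step(xi, rng)
print(xi.min(), xi.max())
```

Monitoring, e.g., the dominant Fourier mode of ξ_n over many iterates is one way to look for the noise-induced switching discussed above.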
Fig. 4.25 Snapshots of the granular flow model illustrating noise-induced pattern transitions by spatially correlated noise. The integer numbers correspond to the time (in units of 100 iterations) evolved since starting from a random initial condition. The first frame (7) displays coexistence of stripes and hexagons. A phase flip from (8) to (9) is clearly visible. Parameters are r = 1.9, (k_c/k_0)^2 = 5, and σ_η = 0.1.
4.4 Postscript
Stochastic resonance in arrays of monostable, underdamped oscillators was recently reported by Lindner et al. [160]. Due to its nonlinear nature, each oscillator's characteristic frequency is energy dependent. The average energy can be increased by raising the noise level, which therefore indirectly controls the frequency. SR in a single such system is attained by adjusting the noise intensity such that the resulting frequency coincides with the drive frequency [247; 248]. The coexistence of classic SR and this resonant behavior in a single underdamped bistable system was recently demonstrated by Alfonsi and coworkers [3]. The consequences of coupling N monostable oscillators into a linear array are drastically different from the array enhancement described in section 4.1. While the magnitude of the spectral response is not enlarged, intriguingly the authors of Ref. [160] find multiple resonances. In fact, the spectral response of an individual oscillator coupled into an array of N identical oscillators exhibits N - 1 such intrawell stochastic resonances [160]. A great deal of work remains to be done on stochastic pattern formation. In particular, the noise-induced bistability demonstrated in vertically shaken fluid layers in Refs. [15; 224] appears likely to be contained in the model developed in Ref. [263] as well. For that objective, the supercritical map defined in Eq. (4.28) should be replaced by the alternative map M(x, r) = -(rx + x^3) exp(-x^2/2). Utilizing this subcritical map, Venkataramani and Ott [263] find localized structures which are very similar to the so-called oscillons found in the experiments [122; 188; 189; 258; 259]. Furthermore, applying the broad theory developed for noise-induced phase transitions [106; 75] to pattern formation models such as Eqs. (4.28), (4.29) would make for a very appealing research project.
Chapter 5
Stochastic Transport Phenomena
The systems investigated in the previous chapter possess translational symmetries: apart from subtle differences at the boundaries, the importance of which dwindles with increasing system size, one element is indistinguishable from the other. In this chapter we reexamine the previously introduced systems with modified boundary conditions which break the spatial symmetry. A preferred direction of information flow can be achieved either by "asymmetrizing" the coupling or by imposing qualitatively different boundary conditions at both ends or by both measures. As a result, elements at one end of the array experience very different dynamics than at the other. We are mainly concerned with the effect of noise on front and wave propagation.
5.1 Noise-sustained structures in convectively unstable media
The focus of this section, parts of which are a summary of Chapter 2 in [169], is one-dimensional systems that possess a non-vanishing group velocity, so-called open flows. Simple prototypes arise quite naturally in fluid mechanics, where non-zero flow rates together with appropriate boundary conditions remove translational symmetries. Combined with inherent dynamic instabilities, open flow systems can display convective instability [46; 47; 48; 49]: microscopic noise is amplified while being convected through the system and results in the intermittent growth of complex, nonlinear waves. While single disturbances can only produce temporary structures, since they are eventually convected out of the (finite) system,
continuous noise will develop permanent noise-sustained structures [47; 48]. These have been found in various effectively one-dimensional experimental settings [10; 123; 124; 164; 233]. Convective instability typically manifests itself in increasingly complex dynamics going "downstream", i.e., along the preferred direction of information propagation. Examples are plane Poiseuille and channel flow, where a spatial transition from laminar flow to high-dimensional turbulence occurs further downstream. It is remarkable that while purely temporal systems can be dichotomously classified into stable and unstable regions, the definition of stability in spatial systems is more subtle. The following definitions are strictly valid only for an infinite domain x ∈ (-∞, ∞).

Definition 5.1 A solution Ψ(x, t) is referred to as absolutely unstable if for an arbitrary fixed value of x

lim_{t→∞} |Ψ(x, t)| → ∞.   (5.1)
Definition 5.2 A solution Ψ(x, t) is referred to as absolutely stable if for all values of x and for any v

lim_{t→∞} |Ψ(x + vt, t)| → 0.   (5.2)
Definition 5.3 A solution Ψ(x, t) is referred to as convectively unstable if for arbitrary fixed values of x, y and for some v

lim_{t→∞} |Ψ(x, t)| → 0 and lim_{t→∞} |Ψ(y + vt, t)| → ∞.   (5.3)
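These definitions can be made concrete with an exactly solvable toy model (not an equation from this section): the linear growth-advection-diffusion equation ψ_t = aψ - vψ_x + Dψ_xx. The parameter values below are illustrative.

```python
import numpy as np

# For psi_t = a psi - v psi_x + D psi_xx with a point-like initial
# condition, the exact Green's function is
#   psi(x, t) = exp(a t) / sqrt(4 pi D t) * exp(-(x - v t)^2 / (4 D t)),
# so the amplitude at fixed x behaves like exp[(a - v^2/(4 D)) t]: the
# state is convectively unstable for 0 < a < v^2/(4 D).
def amplitude(x, t, a, v, D):
    return np.exp(a * t) / np.sqrt(4 * np.pi * D * t) \
        * np.exp(-(x - v * t) ** 2 / (4 * D * t))

a, v, D = 0.5, 2.0, 1.0            # v^2/(4 D) = 1.0 > a: convective regime
t1, t2 = 20.0, 40.0
fixed = amplitude(5.0, t2, a, v, D) / amplitude(5.0, t1, a, v, D)
comoving = amplitude(v * t2, t2, a, v, D) / amplitude(v * t1, t1, a, v, D)
print(fixed < 1.0, comoving > 1.0)  # decays at fixed x, grows with the flow
```

The two ratios make Definition 5.3 quantitative: the perturbation shrinks at any fixed location while growing without bound in the frame moving with the flow.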
Figure 5.1 serves to illustrate the three stability regimes. Note that the sketches necessarily are simplified and suggest that the definitions of absolute (in-)stability imply zero propagation velocity, which is definitely not true. One important detail not conveyed by the figure is the constraint on the propagation velocity for convective instability: the perturbation has to move faster than it is spreading; i.e., it has to outpace its own diffusion. The modification of these definitions for finite domains is somewhat problematic. The boundary conditions then play a crucial role. For periodic boundary conditions the growing perturbations will be fed back into the system and be trapped perpetually, leading to absolute instability. The definition of convective instability, strictly speaking, is meaningful only for open boundaries that allow the perturbations to be moved out of the system. On the one hand this situation results in the ultimate decay of each
Fig. 5.1 Schematic illustration of the qualitatively different time evolutions of a bell-shaped perturbation in the absolutely stable, absolutely unstable, and convectively unstable regimes. The subtlety of convective instability lies in the fact that perturbations propagate as they grow. Consequently, a linear stability analysis at an arbitrary fixed coordinate will yield (locally) decaying perturbations.
single perturbation and hence renders the system stable in a stringent interpretation. On the other hand the role of noise in convectively unstable systems is intriguing, since continuously applied perturbations necessarily lead to "permanent" macroscopic patterns, which we refer to as noise-sustained structures. A reasonable definition of convective instability for finite domains is as follows:

Definition 5.4 A solution Ψ(x, t) on a finite domain, x ∈ (a, b), with certain boundary conditions, is called convectively unstable if Ψ itself is absolutely stable AND the corresponding solution with periodic boundary conditions Ψ^P(x, t) is absolutely unstable:

lim_{t→∞} |Ψ(x + vt, t)| → 0 and lim_{t→∞} |Ψ^P(x, t)| → ∞.   (5.4)
5.1.1 The Ginzburg-Landau equation
Turbulent fluid flow has long escaped a comprehensive description and remains far from being well understood. In the desire to attain more insight into the behavior of fluid systems, the Navier-Stokes equations are often replaced by simpler model equations that are easier both to analyze and to solve numerically. The one-dimensional complex Ginzburg-Landau equation (CGLE) [205], an amplitude equation capable of producing complex spatial patterns, is one such reduction. While the CGLE is usually written in a coordinate system moving with the (real) group velocity v, here we transform to the stationary frame, which introduces a convective term [47]:

Ψ_t = aΨ - vΨ_x + bΨ_xx - c|Ψ|^2 Ψ.   (5.5)
The amplitude Ψ as well as the constants a, b, and c are in general complex, and the real part of b is nonnegative. For positive group velocity v the convective term -vΨ_x causes information to propagate towards increasing x, therefore introducing a "mean flow" [48]. The reader is referred to the same reference for a nice physical interpretation of Eq. (5.5). The CGLE (5.5) has been shown to be convectively unstable if [47]:

a_r - v^2 b_r / (4|b|^2) < 0 and a_r > 0,   (5.6)
where the subscripts denote the real or imaginary part. The second condition guarantees the instability, while the first one constrains the group velocity to be greater than a critical value. In this regime the CGLE displays features that are reminiscent of real "open-flow" fluid systems. For the following simulations we choose the parameters to be in that regime. The boundary conditions are Ψ(0, t) = 0 and Ψ_xx(L, t) = 0, where L stands for the system length, which is 300 in our case [47]. We employed a finite-difference scheme with fourth-order space differencing, dx = 0.3, and second-order Runge-Kutta for the time advancement, with timestep dt = 0.012 [47]. Random numbers uniformly distributed in [-a, a] are added to Ψ_r and Ψ_i at all grid points at each time step. (The magnitude of a was of the order of 10^-7.) Figure 5.2 displays the system after transients have settled down, having evolved in time from the initial condition Ψ(x, t = 0) = 0. The left boundary is fixed, Ψ(x = 0, t) = 0, and the right boundary is "open", ∂Ψ(x, t)/∂x|_{x=L} = 0. Shown are snapshots of the absolute value and the real part of Ψ, as the imaginary part does not yield any qualitatively different insights. Deissler [48] observes three distinctly different regions, which are confirmed in Fig. 5.2: the linear region, where the amplitude is small enough for the nonlinearities to be neglected; the transition region; and the fully developed region, where nonlinear dynamics dominate. The correlation length ξ is observable as the distance from the left boundary, where the amplitude is kept at zero, to the point in space where the

Fig. 5.2 Numerical simulations of the complex Ginzburg-Landau equation (5.5) with fixed left and open right boundary. Shown is a snapshot after transients have settled down. The upper figure displays the modulus of Ψ while the real part is shown in the lower figure. The parameters are a = 2, v = 5.2, b_r = 1.8, b_i = -1, c_r = 0.5, c_i = 1, thereby fulfilling the requirements for convective instability (5.6). The microscopic fluctuations grow to macroscopic proportions at x ≈ 35.
microscopic noise is just beginning to grow to macroscopic proportions. A rough estimate from Fig. 5.2 gives ξ ≈ 35, which is also the extent of the linear region. As derived in [48], only a narrow band of frequencies of the broad-band noise is linearly amplified in space. The exponential growth of the waves is clearly seen in the plot of the absolute value of Ψ. Even though the real and imaginary parts fluctuate seemingly randomly in the linear growth region, the absolute value is virtually constant in time for x less than about 80. This is due to the selective amplification of mainly one frequency and the fact that the real and imaginary parts are exactly 90° out of phase. The analytic expression for the frequency that is amplified most can be found in [48]. The picture also reveals a secondary (convective) instability. A coherent structure comprised of rather regular traveling waves, which break up into turbulent patterns, can clearly be noticed. The nature of this secondary instability is investigated in detail in [47].
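The integration scheme described in this section can be sketched compactly. For brevity the sketch below uses second-order rather than fourth-order space differencing, a shorter domain, a crude zero-gradient right boundary, and an arbitrary seed; these simplifications are assumptions of the sketch, while the coefficients follow Fig. 5.2.

```python
import numpy as np

# Sketch of the stochastic CGLE integration, Eq. (5.5), with the Fig. 5.2
# parameters a = 2, v = 5.2, b = 1.8 - 1i, c = 0.5 + 1i.
a, v, b, c = 2.0, 5.2, 1.8 - 1.0j, 0.5 + 1.0j
dx, dt = 0.3, 0.012
N = 200                                   # domain length N*dx = 60 (shortened)
amp = 1e-7                                # noise amplitude

def rhs(psi):
    """a*psi - v*psi_x + b*psi_xx - c*|psi|^2*psi, zero gradient on the right."""
    p = np.concatenate((psi, [psi[-1]]))  # ghost point at the open end
    px = np.zeros(N, dtype=complex)
    pxx = np.zeros(N, dtype=complex)
    px[1:] = (p[2:] - p[:-2]) / (2 * dx)
    pxx[1:] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2
    return a * psi - v * px + b * pxx - c * np.abs(psi)**2 * psi

rng = np.random.default_rng(1)
psi = np.zeros(N, dtype=complex)
for _ in range(2000):
    k1 = rhs(psi)
    psi = psi + dt * rhs(psi + 0.5 * dt * k1)   # second-order (midpoint) RK
    psi += amp * (rng.uniform(-1, 1, N) + 1j * rng.uniform(-1, 1, N))
    psi[0] = 0.0                                # fixed left boundary
print(np.abs(psi).max())
```

After the transient, the continuously injected noise is amplified downstream and saturates nonlinearly, giving a noise-sustained structure qualitatively like Fig. 5.2; restoring the full L = 300 domain and the fourth-order stencil should bring the sketch closer to the setting of the text.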
5.1.2 Unidirectionally coupled diode resonators
The notion of convective instabilities and noise-sustained structures is not limited to spatially continuous systems. In fact, the main requirements, (i) a nonzero propagation velocity and (ii) an inherently linear instability, can be readily met by coupling chaotic or linearly unstable elements into a chain. The diode resonator is a simple electronic circuit, comprised of the series combination of a 30 mH inductance and a General Instruments 852 silicon diode. This circuit has been well studied, in particular in the context of low-dimensional chaotic dynamics. Individually, the circuits follow the period-doubling route to chaos as the drive voltage is increased, and the peak currents through the diode form a nearly one-dimensional first return map [228]. The experimental system, schematically shown in Fig. 5.3, consists of 32 coupled diode resonator circuits driven by a sinusoidal source at a frequency of 70 kHz. The diodes were matched based on their bifurcation sequences such that they all go chaotic at roughly the same drive voltage. For symmetric coupling, the mean group velocity is zero and no convective instability can be observed. Instead, a one-way coupling, proportional to the difference (V_i - V_{i-1}), where V_i is the voltage across the i-th diode, is employed by placing a buffer and resistor between neighboring diodes, as shown in Fig. 5.3. The coupling resistor R determines the coupling strength. The effect of the coupling is most readily envisioned by considering the extreme cases. For R approaching zero, the voltage across the second diode is completely slaved to the first, and so on down the line. Hence all circuits are perfectly synchronized to the first. In the other extreme, with an infinite resistance between circuits, each circuit is completely independent of all the others. The drive parameter is chosen such that each individual resonator would be chaotic; i.e., its period-1 fixed point is highly unstable. The right boundary is always "open" (coupled only to its left neighbor), but we altered the left boundary to be either fixed or open. The fixed boundary was realized by stabilizing the period-1 fixed point of the first resonator using chaos control techniques. Representative snapshots of the system dynamics for the two settings, with parameters chosen to be in the convectively unstable regime, are shown in Fig. 5.4. When the first site is allowed to evolve freely in time, a few of the following sites emulate the first chaotic map due to the synchronizing effect of the coupling. This synchrony is eventually destroyed at high enough site numbers since the spatially homogeneous solution is convectively unstable. Controlling the first resonator, one observes spatial period doubling leading to turbulent behavior at higher sites. Analogous phenomena are observed for coupled map lattices [6; 139], which will be addressed in the next section.

Fig. 5.3 The open flow circuit consisting of 32 diode resonator circuits. A buffer and a resistor R provide the unidirectional coupling.
Fig. 5.4 The convective instability of the coupled diode resonators manifests itself in (a) breakup of the synchronous state and (b) spatial period doubling. The fixed left boundary condition in (b) was achieved by stabilizing the period-1 fixed point of the first resonator using chaos control techniques. Note that no noise was added to the systems. Inherent, tiny fluctuations are amplified to macroscopic proportions while being convected "downstream".
5.1.3 Coupled maps
Nonlinear partial differential equations such as the CGLE, though already a significant simplification of the original equations of motion for fluid flow, nevertheless do not allow easy theoretical and numerical analysis. Qualitatively similar behavior can be observed in much simpler systems and models. Being discrete in time and space, coupled map lattices (CMLs) are computationally very feasible and yet able to exhibit highly complex patterns. The one-dimensional lattice of unidirectionally coupled logistic maps, first introduced by Kaneko [139], models the experimental system described in the previous section, which is discrete in space and continuous in time, quite well. Here we derive criteria that help us decide the stability type of a given coupled-map lattice [48]. Consider the linearized difference equations

x_k(t + 1) = a x_k(t) + ε [x_{k-1}(t) + γ x_{k+1}(t)],   (5.7)
where γ controls the asymmetry of the coupling (unidirectional for γ = 0 and diffusive or symmetric for γ = 1). The "spatial" index k covers a finite range, k ∈ {1, …, N}. For periodic boundary conditions, Eq. (5.7) can be rewritten as a matrix equation with the circulant N × N matrix C^p = circulant(a, εγ, 0, …, 0, ε). As we know from section 2.2.1 and Eq. (2.43), its eigenvalues are given by

λ^p_j = a + εγ e^{i2π(j-1)/N} + ε e^{i2π(j-1)(N-1)/N},   j = 1, …, N.   (5.8)
For open boundary conditions the Jacobian matrix changes into a tridiagonal matrix C^o = tridiagonal(εγ, a, ε), the eigenvalues of which are given by

λ^o_j = a + 2√γ ε cos(jπ/(N + 1)),   j = 1, …, N.   (5.9)
The only difference between C^o and C^p is the removal of the upper right-hand corner element ε and the lower left-hand corner element εγ due to the open boundary conditions. It should be noted that the transitory amplification of small perturbations leading to convective instability is of the same nature as the nonnormal linear transients discussed in section 2.2. Note that while the circulant matrix C^p is normal, for γ ≠ 1 the tridiagonal matrix C^o is
not! In general, the deviation from normality of a constant tridiagonal matrix

M = tridiagonal(a_l, a_d, a_u),   (5.10)

with diagonal a_d, superdiagonal a_u, and subdiagonal a_l, is proportional to the difference between its squared off-diagonal elements,

M^T M - M M^T = (a_u^2 - a_l^2) diag(-1, 0, …, 0, 1),   (5.11)
which in magnitude equals ε^2(1 - γ^2) for the CML (5.7). It is remarkable how clearly this relationship demonstrates that in the case of symmetric coupling, γ = 1, no transitory amplification and hence no convective instability is possible for |a| < 1! Intuitively, this is easy to understand since for diffusive coupling there is no net propagation of perturbations. Recalling definition 5.4, the CML will be convectively unstable if all eigenvalues λ^o_j have modulus less than 1 and if |λ^p_j| > 1 for at least one j. For truly unidirectional coupling, γ = 0, it is clear that this corresponds to |a| < 1 AND max_{j ∈ {0, …, N-1}} |a + ε exp(i2πj(N - 1)/N)| > 1. The latter expression can be simplified significantly: z = |a + ε exp[i2πj(N - 1)/N]|^2 = |a + ε exp[-i2πj/N]|^2 = (a + ε cos x)^2 + ε^2 sin^2 x, with x = -2πj/N. z assumes its maximum at x = 0 or -π, i.e., j = 0 or N/2, since dz/dx = -2ε(a + ε cos x) sin x + 2ε^2 sin x cos x = -2aε sin x. Depending on the sign of a, the maximum modulus of the eigenvalues λ^p_j is either |a + ε| or |a - ε|, which leaves us with the following proposition.

Proposition 5.1 The unidirectionally coupled map lattice defined in (5.7) with γ = 0 is convectively unstable if and only if

|a| < 1 and |a| + |ε| > 1.   (5.12)
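Proposition 5.1 and the nonnormality relation (5.11) are easy to check numerically; the matrix size and parameter values below are illustrative (N is kept small because eigenvalues of strongly nonnormal matrices are numerically delicate for large N):

```python
import numpy as np

# Compare the spectral radii of the CML Jacobian with open (tridiagonal C^o)
# and periodic (circulant C^p) boundary conditions.
def jacobians(a, eps, gamma, N):
    Co = (np.diag(np.full(N, a))
          + np.diag(np.full(N - 1, eps * gamma), 1)   # coupling to x_{k+1}
          + np.diag(np.full(N - 1, eps), -1))         # coupling to x_{k-1}
    Cp = Co.copy()
    Cp[0, -1] = eps                                   # wrap-around entries of
    Cp[-1, 0] = eps * gamma                           # the circulant matrix
    return Co, Cp

N, a, eps, gamma = 12, 0.5, 0.8, 0.0   # unidirectional: |a| < 1, |a|+|eps| > 1
Co, Cp = jacobians(a, eps, gamma, N)
rho_o = np.abs(np.linalg.eigvals(Co)).max()   # open flow: spectrally stable
rho_p = np.abs(np.linalg.eigvals(Cp)).max()   # periodic: unstable
dev = Co.T @ Co - Co @ Co.T                   # deviation from normality (5.11)
print(rho_o, rho_p, np.abs(dev).max())
```

With these values the open-flow lattice is spectrally stable while its periodic counterpart is unstable, i.e., the lattice is convectively unstable in the sense of definition 5.4, and the largest entry of the commutator equals ε^2(1 - γ^2).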
We will return to these results in section 6.2, which investigates spatiotemporal structures in traffic flow.
5.2 Noise sustained front transmission
While the previous sections focused on the synchronization and noise reduction characteristics of spatiotemporal SR, for the remainder of this chapter we investigate its role in communication by way of coupled, noisy elements. To date, the role of noise in the encoding and transmission of information by neurons remains an open and challenging question [45]. It has been shown by Jung et al. [135] that noise can aid and optimize the spreading of spiral and single wave fronts in a numerical model of an excitable medium. On the experimental side, Kadar et al. [137] recently reported noise-supported traveling waves in a chemical sub-excitable medium, a photosensitive Belousov-Zhabotinsky reaction. Hu et al. [279] demonstrated that, with sufficient coupling, noise can induce undamped signal transmission in a numerical model of a chain of one-way coupled bistable elements. Furthermore, Sendiña-Nadal et al. [237] introduced random spatial fluctuations in an excitable medium and studied their effects on the propagation of autowaves, while Castelpoggi and Wio [25] recently addressed the problem of local vs. global coupling in reaction-diffusion characterizations of "stochastic resonant media". This section summarizes Refs. [165; 167], which demonstrated the constructive role of noise for the propagation of a signal in a metastable medium. The experimental setup, shown in block diagram form in Fig. 5.5, should be familiar by now and is modified in the following manner: the secondary drive, which we refer to as the bias, is phase locked to the main drive at exactly half its frequency. As a consequence, there is no beat frequency as in the previous arrangement. Instead, the "input" signal is an induced phase change at one end of the chain, and a detector at the other end provides the "output". We refer to the less stable phase as the metastable state and investigate the effect of local and global noise on the propagation of a front into the more stable domain.
The resonators are initially set in the metastable state and can be kicked into the more stable phase with a sufficiently large perturbation. Unlike excitable elements, which after "firing" return to their quiescent state via a refractory period, the resonators in this setup are reset by a global bias after departure from the metastable phase. This arrangement supports single, nonrepetitive wave fronts and provides a "very clean" experimental system for studying the underlying mechanisms of noise sustained wave propagation in subexcitable biological media [67; 132].
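A numerical caricature of this mechanism, a chain of overdamped bistable elements with a tilted double-well potential rather than a model of the diode-resonator circuit, shows both pinning without noise and noise-carried propagation; all parameter values are illustrative assumptions:

```python
import numpy as np

# Caricature of noise-assisted kink propagation: each site obeys
#   dx_i = [x_i - x_i^3 + F + D * lap_i] dt + sigma dW_i,
# where the bias F tilts the double well so that x = +1 is the stable
# phase. For weak coupling D the kink is pinned in the absence of noise.
def run_chain(sigma, D=0.1, F=0.1, n=32, steps=40000, dt=0.01, seed=2):
    rng = np.random.default_rng(seed)
    x = -np.ones(n)                    # metastable phase everywhere ...
    x[0] = 1.0                         # ... except an induced flip at site 0
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = x[2:] - 2 * x[1:-1] + x[:-2]
        lap[-1] = x[-2] - x[-1]        # open right end
        x += dt * (x - x**3 + F + D * lap)
        x += sigma * np.sqrt(dt) * rng.standard_normal(n)
        x[0] = 1.0                     # hold the input site in the stable phase
    return int((x > 0).sum())          # number of sites that have switched

s0 = run_chain(sigma=0.0)              # no noise: the kink stays pinned
s1 = run_chain(sigma=0.35)             # moderate noise: switching spreads
print(s0, s1)
```

Without noise only the driven site switches, while moderate noise both carries the front along the chain and, as in the experiment, nucleates spurious flips of its own.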
Fig. 5.5 Thirty-two diode resonators are placed in the period-2 state by the drive at frequency f. An f/2 bias serves to make one phase more stable and to force that phase on to the first site. Independent noise sources are applied to each site. Coupling is provided by resistors R_C. The detector outputs the phase of the last site.
The diffusive coupling and bias are chosen such that, in the absence of random fluctuations, the system is unable to sustain any sort of information transmission. We find that the quality and speed of signal propagation can be greatly enhanced by increasing the ambient noise level. Beyond some optimal level, the "channel" is quickly corrupted by noise, resulting in spurious signals and thus poor reliability. Two major forces affect each individual element: local random fluctuations and the influence of its two nearest neighbors. It is clear that in the absence of noise, a local phase jump will lead to a "domino effect" only if the coupling is strong enough. The energetically lower phase then propagates into the metastable phase in the form of a moving kink. The speed of this moving interface depends on both the coupling strength and the amplitude of the applied bias. If the latter two parameters are chosen low enough, kinks in discrete systems will fail to propagate.* Propagation failure of signals due to discreteness of the supporting medium has previously been observed in coupled chemical reactors [153; 154], *Note that in spatially continuous theories, such as a
Fig. 5.6 Kink speed as a function of applied bias. Kinks will not propagate for bias values less than approximately 1.0 unit.
in proton transport along hydrogen-bonded chains [282], as well as in theoretical and experimental studies of cardiac tissue [35; 142] and in the context of cell differentiation [59]. The kink speed for an intermediate coupling strength [166; 168] as a function of bias amplitude is shown in Fig. 5.6. No kink propagation could be observed for bias values less than about 1.0 unit. In the experiment, two effects cause the finite cutoff: the discreteness of the system and non-identical elements. The resonators are loosely matched based on their bifurcation sequence. The variation of important parameters, such as the energy barrier between the two period-2 phases and the levels of the noise generators, is within 10%. Since the parameters differ from site to site, kinks get trapped or slowed down at some resonators. The measured kink
Fig. 5.7 Staggered snapshots (every 32 drive cycles) of a phase kink moving under the influence of noise across 32 resonators. Local variations in its speed are clearly visible, due to both spatial inhomogeneities and the random forcing. The bias amplitude is 0.55 units and the noise intensity 1.3 units.
velocities should therefore be taken as average values over the entire chain, including local variations. Stronger coupling tends to smooth out differences between elements. We employed a bias value of 0.55 units, which is roughly 1/2 of the corresponding threshold. In this parameter regime, the system is not capable of kink propagation in the absence of random fluctuations. To measure the effect of noise on signal propagation, we force the system to reside in its metastable state; i.e., in the less stable period-2 phase. At the same time we induce the resonator at site 1 to switch phase and subsequently measure the phase of all 32 elements as a function of
5.2. Noise sustained front transmission    133
time. Figure 5.7 shows a staggered space-time plot of the entire system with the addition of 1.3 units noise. The induced phase kink exhibits local variations in speed due to the random forcing and the differences in the energy barriers of individual elements. Figure 5.8 displays, for four values of applied noise, the integrated probability of a phase change at site 31 as a function of time after an induced change at site 1. The detector emits a step function which, when averaged over approximately 100 events, produces the smoothly rising curves. In order to measure which part of the output is due to noise-nucleated spurious signals, we repeat the same experiment without inducing a kink at the first site. These are shown as the dashed curves. For noise less
Time (drive periods) Fig. 5.8 Probability of a signal at site 31 with (solid line) and without (dashed curves) induced kink at site 1, for four noise levels increasing from top to bottom. The noise level of 1.1 units is too weak to nucleate any transitions.
Fig. 5.9 Maximum of the difference between the probability distributions with and without signal vs. noise intensity for thirty coupled diode resonators. The maximum information transmission is attained for noise levels 1.0-1.5 units.
than 1.0 units no signal gets through to site 31. At 1.1 units the signal always arrives but travels slowly with a wide distribution of arrival times. At 1.5 units there is a shorter transient time and a narrower distribution of arrival times, but spurious signals begin to appear. By 1.8 units about half the signal is caused by the noise. By 2.2 units the early arrival of spurious transitions - some nucleated near the end of the chain - virtually always precludes the detection of the original input signal. The derivative of an individual curve in Fig. 5.8 gives the probability distribution of arrival events: the probability that the 31st resonator changes phase at a certain time after the system is placed in the metastable state. Two competing effects corrupt the quality and reliability of signal detection in our system: for very low noise levels, the chances of successful signal propagation are
Fig. 5.10 Mode (circles) and width (crosses represent full width at half maximum) of the probability distribution of arrival events vs. noise intensity.
virtually zero, while for very high noise intensities, noise-induced, spurious events contaminate the output. We subtract from the distribution with an induced kink the distribution without induced kinks to obtain the probability (as a function of time) that the signal arriving at site 31 is due to the input signal. The integral of this quantity, where it is positive, gives the probability of successful signal transmission. This quantity, which is plotted in Fig. 5.9, displays an SR-like sharp rise and subsequent fall. The peak of this curve occurs at noise intensities between 1.0 and 1.5 units, at which values the information flow is optimized by the presence of noise. Important information can be gathered from the distributions that include an induced kink at the first site. The skewed distribution functions shift towards shorter times with increasing noise intensity, indicating a higher speed of information transmission. In the absence of noise the kink would
arrive at site 31 after approximately 1050 drive periods, as indicated by extrapolating the line in Fig. 5.6 down to a bias of 0.55 units. In Fig. 5.10 we plot the mode (probability maximum) as a measure of the arrival times to illustrate the dependence on noise. Also plotted in Fig. 5.10 are the full widths at half maximum value. The distributions become narrower, denoting a smaller spread or variance in arrival times. The author wishes to stress that the spatiotemporal SR effect described in section 4.1 is of a fundamentally different nature. While earlier the system - at least in principle - was translationally invariant, here this spatial symmetry is broken by the application of a constant bias and appropriate initial conditions. The resulting unidirectional information flow allows us to define the boundaries as "input" and "output", regarding the system as a one-way communication channel. Furthermore, the "goal" was synchronization, i.e., concurrent noise-induced hopping of the entire array. Here, the stochastic events are strictly sequential. The most evident consequence is a greatly different scaling behavior: unlike the former examples, in which the coupling entails partial noise cancellation and improved signal-to-noise ratios, the quality of signal transmission in the current setup in fact deteriorates for higher numbers of elements. The sequential nature of the process leads to error accumulation instead of noise suppression. In order to be able to predict parameter values, such as noise intensity, coupling strength, and bias amplitude, which result in satisfactory and reliable signal propagation, we have to look at the underlying mechanism in more detail. There are two relevant time scales which govern the transition rates of the individual elements.
The "unperturbed" escape rate K_u = μ exp(−E_u β) (where E_u denotes the threshold energy and β the inverse temperature or noise intensity) is valid for a resonator with both neighbors in the same, metastable phase. If the left neighbor has performed the transition to the more stable phase, it will modify the energy threshold through the coupling, making the escape more likely: K_p = μ exp(−E_p β) > K_u (with the perturbed threshold E_p < E_u). The degree to which E_u differs from E_p is a function of both the coupling and the bias amplitude; naturally, for zero bias or vanishing coupling, we have E_p = E_u. If we assume the time for the actual transition to be negligible, it is clear that on the average it will take N · K_p^{−1} time units for the kink to traverse N sites. (This also explains the increase in kink speed with growing noise intensity, since K_p^{−1} ∼ exp(E_p β).) Over this period of time, the total probability that a resonator "ahead" of the moving front performs an (undesired) transition,
is equal to K_u · K_p^{−1} · N(N − 1)/2. Thus, the signal deterioration will scale with the system size like N². From the above considerations it is evident that for a low contamination with spurious signals, the ratio K_u/K_p should be as low as possible; i.e., K_u ≪ K_p.
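The two-rate argument above can be sketched numerically. A minimal sketch, with hypothetical values for the attempt frequency μ and the barriers (chosen only so that E_p < E_u):

```python
import math

def kink_transport(N, mu, E_u, E_p, beta):
    """Mean traversal time and expected spurious-transition count for an
    N-site chain, following the two-rate argument in the text."""
    K_u = mu * math.exp(-E_u * beta)      # unperturbed escape rate
    K_p = mu * math.exp(-E_p * beta)      # perturbed (lowered-barrier) rate
    traversal_time = N / K_p              # N steps, each taking ~1/K_p on average
    # N*(N-1)/2 site-time slots ahead of the front in which an unperturbed
    # escape could fire spuriously:
    spurious = (K_u / K_p) * N * (N - 1) / 2
    return traversal_time, spurious

# Hypothetical parameters; only E_p < E_u matters for the argument.
t32, s32 = kink_transport(32, mu=1.0, E_u=4.0, E_p=2.0, beta=1.0)
t64, s64 = kink_transport(64, mu=1.0, E_u=4.0, E_p=2.0, beta=1.0)
print(t64 / t32, s64 / s32)   # traversal time doubles; contamination roughly quadruples
```

Doubling the chain doubles the traversal time but roughly quadruples the expected number of spurious transitions, which is the N² deterioration noted above.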
5.2.1 Propagation failure in cardiac tissue
Propagation failure of signals due to discreteness of the supporting medium has previously been observed in theoretical [142] and experimental [35] studies of cardiac tissue. In particular, Keener introduced a modified cable theory, which incorporates the discretizing effects of the so-called gap junctions [142]. Gap junctions, characterized by the (relatively high) intercellular resistance r_g, provide the electrical coupling between cardiac cells. Mathematically, the propagation of action potentials along cardiac cells is described by various cable theories, which are analogous to wave propagation in one-dimensional conductors (cables). Continuous cable theory either ignores the effects of the gap junctions or replaces the cytoplasmic resistance with an effective resistance; in either case the electrical resistance is assumed to be spatially homogeneous. Here, we focus on the opposite assumption that gap junctional resistance is much more important than cytoplasmic resistance. We thus neglect the dynamics within a cell and assume that the propagation of the action potential is dominated by the delay caused by the gap junctions [141]. Within the context of this discrete cable theory, we can write the current balance as [142]

C_m S dφ_n/dt = (1/r_g)(φ_{n+1} − 2φ_n + φ_{n−1}) + S I_m(φ_n),   (5.13)
where C_m denotes the membrane capacitance, S the surface area of the cell membrane, and I_m the transmembrane current. For
Fig. 5.11 Kink speed as a function of coupling strength d = 1/r_g for the continuous model Eq. (5.14) (dashed line) and the discrete model Eq. (5.13) (solid line). Note that propagation in the discrete model is impossible for r_g > r* (d < d* = 1/r*). For this simulation, C_m S = 1 and S I_m(φ) is the cubic polynomial given in the text.
simplicity, we choose a simple cubic polynomial S I_m(φ) = 12√3 φ(1 − φ)(φ − 0.5) + 0.5 [142]. Note that though similar in appearance, Eq. (5.13) is not simply a discretization of its continuous analog

C_m S ∂Φ/∂t = (L²/r_g) ∂²Φ/∂x² + S I_m(Φ),   (5.14)
(where L is the size of the cardiac cell), but stands in its own right as a spatially discrete nonlinear wave equation. The most important observation is that propagation can fail in model (5.13) if r_g is sufficiently large, but increasing resistances in Eq. (5.14) can never lead to propagation failure. Note that r_g depends on the excitability of the tissue. Figure 5.11 shows a plot of the numerically determined speed (solid line) of propagation for model (5.13) as a function of the coupling strength d = 1/r_g, as well as the analytically obtained kink speed c = (c_0/√R_m)√d (dashed line) for Eq. (5.14). (The reader is referred to Ref. [142] for an explanation of c_0 and R_m.) It is evident that propagation is impossible for r_g larger than a certain critical value r*, which turns out to be a monotonically increasing function of excitability [142].
Fig. 5.12 Arrival probabilities for the discrete model Eq. (5.13) with (solid lines) and without (dashed curves) induced kink at site 1, for five local and global noise levels increasing from top to bottom. The propagation distance spans 40 elements. The time axis is labeled in number of integration steps n. For the employed time step, dt = 0.05, the time range therefore translates from n = 0, …, 6000 to t = 0, …, 300.
Fig. 5.13 Numerically measured average kink velocities as a function of noise strength for both global and local noise. It is insightful to compare these values with Fig. 5.11. The last data point includes a large fraction of spurious kinks which results in an artificially high value for the velocity.
We have performed digital simulations of the stochastic modification of Eq. (5.13):
C_m S dφ_n/dt = ε(φ_{n+1} − 2φ_n + φ_{n−1}) + S I_m(φ_n)(1 + ξ_M(t)) + ξ_A(t),   (5.15)
using the Euler-Maruyama algorithm [76; 148] with a time-step of dt = 0.05 and coupling strength ε = 0.07 (40 elements). ξ_A(t) and ξ_M(t) are additive and multiplicative Gaussian white noise, band-limited in practice by the Nyquist frequency f_N = 1/(2dt). We quantify the noise by its variance σ² = 2D f_N, where 2D is the height of the one-sided noise spectrum. Here, we only consider the case of purely additive noise, ξ_M(t) = 0. By following a procedure analogous to the experiment, we obtain the probabilities for successful signal transmission as illustrated in Fig. 5.12. As in
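As a concrete illustration, a minimal Euler-Maruyama sketch of the additive-noise case of Eq. (5.15); the cubic S I_m, the free-end boundary handling, the clamping of site 1, and the value of σ are assumptions made for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def s_im(phi):
    # bistable transmembrane current: metastable phase near 0, stable near 1
    return 12 * np.sqrt(3) * phi * (1 - phi) * (phi - 0.5) + 0.5

def run_chain(n_sites=40, n_steps=6000, dt=0.05, eps=0.07, sigma=0.0):
    phi = np.zeros(n_sites)
    phi[0] = 1.0                        # kink injected at site 1
    for _ in range(n_steps):
        lap = np.zeros(n_sites)         # discrete Laplacian with free ends
        lap[1:-1] = phi[2:] - 2 * phi[1:-1] + phi[:-2]
        lap[0] = phi[1] - phi[0]
        lap[-1] = phi[-2] - phi[-1]
        phi = (phi + dt * (eps * lap + s_im(phi))
               + sigma * np.sqrt(dt) * rng.standard_normal(n_sites))
        phi[0] = 1.0                    # hold the injection site in the stable phase
    return phi

pinned = run_chain(sigma=0.0)           # no noise: the front stays pinned
noisy = run_chain(sigma=0.4)            # additive noise can free the front
```

With σ = 0 the deterministic front remains pinned at this coupling strength, consistent with the propagation failure discussed above; adding noise can release it.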
the experiment, global noise provides for kink propagation at much lower values of σ² than local noise does. Figure 5.13 compares the velocities of the noise-propelled kink as a function of σ² for both global and local noise. The velocity of the propagating wave front shows an approximately parabolic dependence on the noise power in both cases. For the coupling strength employed, global noise leads to speeds about 15% greater than those observed for local noise.

5.2.2 Information theoretic measures revisited: ROC curves
In previous chapters, we frequently employed the signal-to-noise ratio (SNR) as a "performance" measure. However, in practical scenarios, the SNR is often not the best measure of performance. Indeed, a nonlinear signal processor may output a signal which has infinite SNR but is useless because it has no correlation with the input signal. Furthermore, in the absence of periodic input signals, the SNR becomes entirely meaningless. We therefore need alternative measures to assess the quality of an arbitrary signal; information theory provides us with a variety of such measures. The decision whether a kink arriving at the last element corresponds to a signal injected at the first site constitutes a simple binary hypothesis-testing problem [186]. We assign the null hypothesis H_0 to "no signal injected" and the opposite to the alternative hypothesis H_1. Denote the corresponding decision D_i as the judgment that hypothesis H_i was in effect. Clearly, there are two possible errors: a so-called Type I error occurs when making the decision that H_1 was in effect while the contrary is true. Borrowing notation from radar detection, we refer to this as the probability of false alarm P_f = P(D_1|H_0). On the other hand, if H_1 was in effect to generate the data and we decide D_0, then we have committed a Type II error, which in radar is referred to as a missed detection [186]. This occurs with probability P(D_0|H_1), which is related to the probability of detection P_d = P(D_1|H_1) in an inverse fashion: P_d = 1 − P(D_0|H_1). In our experiment, there is no uniquely defined, objectively "best" noise level without first defining a decision strategy. Two suitable approaches are (i) the Neyman-Pearson strategy, in which the probability of detection P_d is maximized while specifying an upper bound for the false alarm probability P_f, and (ii) Bayes' rule, which assigns costs to the various outcomes of the decision process, such as correctly detecting a signal or being "deceived"
by a spurious kink. The optimum noise level in the latter case would be the one which minimizes the total average cost. The optimal detection scheme for both criteria (i) and (ii) is commonly expressed in the form of a likelihood ratio test: presented with a measured outcome η, decide

D_0 if p_1(η)/p_0(η) < α,  D_1 otherwise,   (5.16)
where p_{0,1}(x) are the probability distributions associated with H_{0,1} respectively. For the front propagation, these (noise dependent) distributions are the derivatives of the rise curves with (H_1) and without (H_0) signal, respectively (see Fig. 5.8). The optimum threshold α is found to be α = q/(1 − q) in the Bayesian case, while for the Neyman-Pearson strategy it is fixed by the upper bound constraint on P_f [262]. It is intuitively clear that a signal can be detected with least uncertainty when the probability distributions p_0 and p_1 are separated as much as possible. In section 2.1 we have already met the Kullback-Leibler distance as a measure of separation between distributions or densities. Much more general information-theoretic distance measures are the Ali-Silvey distances [4; 214], defined by

d(p_0, p_1) = h( ∫ f(p_1(x)/p_0(x)) p_0(x) dx ),   (5.17)
where f(x) is convex and h(x) is an increasing function. The Kullback-Leibler distance (see Def. 2.3) is recovered for f(x) = −log x and h(x) = x. Of particular practical interest is the d_s divergence

d_s(p_0, p_1) = ∫ |q − (1 − q) p_1(x)/p_0(x)| p_0(x) dx.   (5.18)

Remarkably, there exists a simple relation between this distance measure and the minimal probability of error P_e*, which is attained when the threshold in (5.16) is set to its optimum value α = q/(1 − q) [4; 214]. This one-to-one mapping is given by

P_e* = (1/2)[1 − d_s(p_0, p_1)].   (5.19)
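The mapping (5.19) can be checked numerically. Assuming the d_s distance reduces to the prior-weighted total variation, d_s(p_0, p_1) = ∫ |q p_0(x) − (1 − q) p_1(x)| dx, and using two hypothetical Gaussian densities:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
p0 = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)          # H0: standard normal
p1 = np.exp(-0.5 * (x - 2.0)**2) / np.sqrt(2 * np.pi)  # H1: shifted normal
q = 0.3                                                # prior probability of H0

# Minimal error: always decide for the larger prior-weighted density,
# so the residual error density is min(q*p0, (1-q)*p1).
pe_direct = np.sum(np.minimum(q * p0, (1 - q) * p1)) * dx
d_s = np.sum(np.abs(q * p0 - (1 - q) * p1)) * dx
pe_from_ds = 0.5 * (1.0 - d_s)
print(pe_direct, pe_from_ds)   # the two agree to grid accuracy
```

The agreement follows from the identity min(a, b) = (a + b − |a − b|)/2 together with the normalization of the two densities.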
It follows immediately that minimizing P_e* is equivalent to maximizing d_s, the separation between the probability distributions. We refer the reader
Fig. 5.14 Total probability of error P_e as a function of noise variance σ² for global (solid line) and local noise (dashed line). For a range of optimal noise strengths, P_e virtually vanishes. The qualitative behavior is robust against variations in the measurement time, which here is taken to be 6000 time steps (300 time units). We assume equal priors: q = 1 − q = 0.5.
to Refs. [120; 227] for a generalization of SR in terms of these information theoretic measures. In communication systems one is usually interested in the total probability of error P_e = q P(D_1|H_0) + (1 − q) P(D_0|H_1) = q P_f + (1 − q)(1 − P_d), where q and 1 − q are the a priori probabilities of H_0 and H_1, respectively. If one lacked any prior information, i.e., if one considered both events equally likely, one would assign equal priors q = 1 − q = 0.5. Though in the experimental setup there is no inherent time-scale, i.e. the decision when to reset the chain is rather arbitrary, in digital communication applications we would expect information bits to be sent at a constant rate. Hence, we choose a reasonable time interval at the end of which we measure the probabilities of false alarm P_f and missed detection 1 − P_d as a function of noise power. Then P_f is simply the value of the dashed line in Fig. 5.12 at time t = 300, and P_d is the corresponding value for the solid line. For local and global noise, the total probability of error P_e is displayed in Fig. 5.14; for very low and rather high values of the noise power, P_e is almost unity. However, there exists an optimal noise strength for which both P_f and 1 − P_d nearly vanish, resulting in a sharp minimum of P_e. It appears sensible at this point to introduce a simple and yet powerful
tool which is commonly referred to as the receiver operating characteristics (ROCs). While the exact definitions vary, the reader could think of an ROC curve as a visualization of the inherent tradeoff between P_d and P_f; in general one cannot be lowered without raising the other. This tradeoff is of such a fundamental and pervasive nature that virtually no diagnostic tool can avoid it. It immediately follows that offering just one number* to characterize a diagnostic system, as is frequently done by advertisements or biased reports, is not only meaningless but also misleading. A more subtle consequence of the tradeoff between P_d and P_f is the absence of an objective optimum; without further information such as the Neyman-Pearson upper limit on one probability or the definition of a cost function, there exists no absolute minimum. ROC curves quantify the commonplace notion that there is "no free lunch", and their unnecessarily technical name might as well be replaced by tradeoff graphs. John Swets [251] summarizes succinctly that an ROC "... is the only measure available that is uninfluenced by decision biases and prior probabilities, and it places the performance of diverse systems on a common, easily interpreted scale." In practice, an ROC curve is obtained by plotting either P_d or 1 − P_d versus P_f as a function of the implicit parameter α. Varying α from 0 to ∞ generates all possible (P_d, P_f) pairs; for most applications a finite subrange is sufficient to map out the graph. Figure 5.15a shows a made-up family of "typical" ROC curves. The 45° line, where the probabilities of false alarm and detection are equal, serves merely as reference; any diagnostic system can achieve this performance by chance alone. The closer the curves nudge into the upper left corner, the better the discrimination of the detector. A common measure for the quality of a detector is the area under an ROC curve, which ranges from 0.5 (worst case) to 1 (perfect discrimination).
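A sketch of how such a curve is traced out in practice, sweeping the threshold over hypothetical Gaussian detector scores (stand-ins for samples from p_0 and p_1):

```python
import numpy as np

rng = np.random.default_rng(1)
scores_h0 = rng.normal(0.0, 1.0, 5000)     # hypothetical scores under H0
scores_h1 = rng.normal(1.5, 1.0, 5000)     # hypothetical scores under H1

thresholds = np.linspace(-6.0, 8.0, 400)
pf = np.array([(scores_h0 > t).mean() for t in thresholds])  # false alarm
pd = np.array([(scores_h1 > t).mean() for t in thresholds])  # detection

# Area under the curve: 0.5 is chance performance, 1 is perfect.
order = np.argsort(pf)
auc = np.trapz(pd[order], pf[order])
print(auc)
```

For these score distributions the area under the curve comes out near 0.86; pushing the two distributions further apart drives it toward 1.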
In order to create an ROC curve corresponding to Fig. 5.12 we first need to compute p_0 and p_1. Figure 5.16 displays discretized versions of p_0 and p_1 for weak and strong noise, respectively. The bimodal nature of the histograms and the unusually good signal separation result in rather atypical ROC curves. Consistent with the behavior of the total error P_e in Fig. 5.14, the signal cannot be detected with any confidence
* e.g., "This pregnancy test has a detection rate of 99%".
Fig. 5.15 ROC curves showing probability of detection P_d versus probability of false alarm P_f for various levels of (local) noise. Panel a is a made-up family of "typical" ROCs. Generally, the discriminative power of a detector increases with the area under its corresponding ROC graph. Panel b corresponds to the probability distributions of Fig. 5.12. The 45° line corresponds to chance performance, P_d = P_f.
for low and high noise powers. However, for intermediate noise levels, the corresponding ROCs approximate the optimal curve. It is worth noting that in the case of periodic input signals, the ROC curves can be predicted directly from the output SNR. Since we do our signal detection by comparing the power in the signal bin of the FFT to a threshold, and since one bin of the FFT covers a very narrow range of frequencies, the noise spectrum across the bin and in the vicinity of the bin looks approximately constant. Therefore we can approximately model the output of our nonlinear array (which supplies the input to the optimal detector) as a sine wave in white noise with an SNR equal to the array's output SNR, R. For this input, the optimal detector's probability
Fig. 5.16 Binned probability densities of measured values of φ_40(t = 300) with and without injected signal for two different (local) noise powers σ² = 0.9 and σ² = 1.6. The left column quantifies p_1, while the right column corresponds to p_0. The bimodal nature of the distributions is clearly visible. The scaling of the y-axis is such that the sum of the bar heights times bar widths will equal 1.
of detection (for a set false alarm probability P_f) is [262]:

P_d = Q(√(2R), √(−2 ln P_f)),   (5.20)
where

Q(a, b) = ∫_b^∞ z exp(−(z² + a²)/2) I_0(az) dz   (5.21)
is Marcum's Q function, and I_0 is the modified Bessel function of the first kind and order zero. This approximating model gives highly accurate results for, e.g., the double well potential, except in a transitional noise range (at low noise strength) where both intra- and interwell motions contribute significantly to the output SNR. The response is most strongly nonlinear in this regime. For example, turning on the sine wave signal in this regime
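Equations (5.20) and (5.21) are easy to evaluate numerically; the quadrature below is a sketch (the integration grid and cutoff are ad hoc but adequate here):

```python
import numpy as np

def marcum_q1(a, b, n=20000):
    """Marcum's Q function Q(a, b) by direct quadrature of Eq. (5.21)."""
    z = np.linspace(b, b + a + 10.0, n)   # integrand is negligible beyond the cutoff
    # z*exp(-(z^2+a^2)/2)*I0(a*z), rearranged as exp(-a*z)*I0(a*z) for stability
    integrand = z * np.exp(-0.5 * (z - a) ** 2) * np.exp(-a * z) * np.i0(a * z)
    return np.trapz(integrand, z)

def p_detect(R, pf):
    """Eq. (5.20): Pd = Q(sqrt(2R), sqrt(-2 ln Pf)), with R the linear-scale SNR."""
    return marcum_q1(np.sqrt(2.0 * R), np.sqrt(-2.0 * np.log(pf)))

print(p_detect(0.0, 0.01))   # with no signal the detector fires at the false alarm rate
```

The R = 0 check exploits Q(0, b) = exp(−b²/2), which reduces Eq. (5.20) to P_d = P_f.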
(with input noise strength held constant) causes a large increase in both signal and noise output power. In this case, calculations based on just the final value of the output SNR (with the sine wave turned on) underestimate the signal detection performance.

5.3 Theory
The results of Sec. 5.2 touch upon two central properties of kink statistics in a discrete bistable chain, namely the Brownian motion of an individual kink and the nucleation of kink-antikink pairs in the presence of an external static bias (or d.c. forcing term). In the absence of a precise model for the 2-state potential that describes the phase shifts in the diode resonators or drives the transmembrane potential of Sec. 5.2.1, we speculate that the in situ bistability of our arrays can be rendered satisfactorily in terms of a Double Quadratic (DQ) potential, that is, by two parabolas displaced by a distance 2a (see Fig. 5.17a). A discrete DQ model is likely to capture, at least qualitatively, the essential features of the array dynamics investigated above, while affording substantial simplifications in its analytical treatment. The analysis of this Section can be carried over, with more mathematical effort, to the φ⁴ model of Sec. 5.2.1, as well. The DQ model has been studied both in the continuum [42] and in the discrete case [55]. In dimensionless units the DQ Hamiltonian reads
H/H_0 = Σ_n { φ̇_n²/2 + (c_0²/4l²)[(φ_n − φ_{n−1})² + (φ_n − φ_{n+1})²] + (ω_0²/2)(|φ_n| − 1)² },   (5.22)

with H_0 = ma². Each φ_n can be regarded as the displacement (in units of a) of the nth chain site with mass m, c_0 and ω_0 represent respectively the limiting speed and frequency of the phonon modes propagating along the chain, and l denotes the chain lattice constant. For instance, in Eq. (5.13) l was set to one. The ratio c_0²/l², which quantifies the effectiveness of the coupling between two adjacent bistable units, is the coupling constant of our model. The importance of the discreteness effects is measured by the discreteness parameter

γ = d/l,   (5.23)

namely the ratio of the kink length d to the chain spacing l.
Fig. 5.17 The Double Quadratic (DQ) model for ω_0 = c_0 = a = 1 and discreteness parameter γ = 1. (a) The DQ potential V[φ] of Eqs. (5.22) and (5.24); (b) The PN potential V(x, F) of Eq. (5.34) and [182] for F/F_th = 0.08; (c) Kink stationary velocity u(T) versus bias intensity F for ω_0c_0/kT = 20 (curve 1) and 50 (curve 2). At T = 0+ the limiting curve (dashed) is given by u/u_F = 0 for F < F_th and u/u_F = {(F/F_th) ln[F/(F − F_th)]}^{−1} for F > F_th.
5.3.1 The continuum limit
In the continuum limit γ → ∞, the Hamiltonian (5.22) can be expressed as the line integral of the Hamiltonian density

H[φ] = φ_t²/2 + (c_0²/2) φ_x² + V[φ],   (5.24)
where the field φ(x,t) is defined as lim_{l→0} φ_{x/l}(t) and V[φ] = (ω_0²/2)(|φ| − 1)². The continuum DQ theory has been investigated
analytically in great detail [42]. In particular, we know that the kinks (antikinks)

φ_±(x, t) = ± sgn[x − X(t)] { 1 − exp[ −|x − X(t)| / (d√(1 − u²/c_0²)) ] }   (5.25)
can be regarded as relativistic quasi-particles with size d = c_0/ω_0, mass M_0 = E_0/c_0² = 1/d (or rest energy E_0 = ω_0c_0), and center of mass X(t) = X_0 + ut. At low temperatures, kT ≪ E_0, any configuration can be represented as a linear superposition of randomly distributed kinks and antikinks floating on a phonon bath. A DQ string in equilibrium at temperature T and with boundary conditions φ(−∞,t) = φ(+∞,t) naturally bears a dilute gas of thermal kink-antikink pairs with density

n_0 ∝ (1/d) exp(−E_0/kT).   (5.26)

The qualification "thermal" underscores the fact that n_0 pairs per unit of length (with n_0 d ≪ 1) are sustained by equilibrium fluctuations alone.

5.3.2 Kink Brownian motion
The φ_±(x,t) solutions (5.25) tend to travel with arbitrary constant speed u < c_0, unless perturbed by a coupling to a heat bath or by an external field of force. The simplest heat-bath model was obtained [176; 178] by adding a viscous term −αφ_t and a Gaussian noise source ζ(x,t) to the DQ equation of motion,

φ_tt = c_0² φ_xx − V′[φ] − α φ_t + F + ζ(x, t).   (5.27)
Note that all our experiments and simulations have been carried out in the overdamped limit, α ≫ ω_0, and in the presence of an additional subthreshold force F with F < ω_0², also incorporated in Eq. (5.27). Thermalization is imposed here by choosing the noise autocorrelation function ⟨ζ(x,t)ζ(x′,t′)⟩ = 2αkT δ(t − t′) δ(x − x′).
A single kink (antikink) subjected to thermal fluctuations undergoes driven Brownian motion with a Langevin equation

Ẋ_± = ± 2F/αM_0 + η(t),   (5.28)

where η(t) is a zero-mean-valued Gaussian noise with strength D = kT/αM_0 and autocorrelation function ⟨η(t)η(0)⟩ = 2Dδ(t). As apparent from Eq. (5.28), the external bias pulls φ_± in opposite directions with average speed ±u_F, where u_F = 2F/αM_0. If the local fluctuations are spatially correlated, say

⟨ζ(x,t)ζ(x′,t′)⟩ = 2αkT δ(t − t′) e^{−|x−x′|/λ}/(2λ),   (5.29)
the noise strength D changes into a correlation-length dependent strength D(λ) [176; 178].
As speculated in Sec. 5.2, for noise correlation length λ smaller than the kink size d, possible spatial inhomogeneities become negligible; i.e., D(λ) ≃ D for λ ≪ d [184]. The global noise regime simulated numerically in Sec. 5.2.1 corresponds to the limit λ → ∞ of the source ζ(x,t) rescaled by the normalization factor √(2λ); the Langevin equation (5.28) still applies, but the relevant noise strength is now lim_{λ→∞} 2λD(λ) = 4D. This accounts for the observation that global noise sustains kink propagation more effectively than local noise. Note that the enhancement factor of 4, more exactly 4a, is nothing but twice the distance between the DQ potential minima (in dimensionless units). Another important property of global noise is that it cannot trigger the nucleation of a kink-antikink pair and, therefore, minimizes the chances of a "false alarm". For this to occur it would be necessary that a spatial deformation of a stable string configuration (vacuum state) be generated large enough for the external bias to succeed in making it grow indefinitely. Such a 2-body nucleation mechanism would require a local breach of the φ → −φ symmetry of the DQ equation (5.27), which can be best afforded in the presence of uncorrelated in situ fluctuations [179]. The nucleation rate, namely the number of kink-antikink pairs generated per unit of time and unit of length, can be easily computed by combining the nucleation theory of Ref. [179] with the analytical results of Ref. [42] for the DQ theory. For values of the string parameters relevant to Secs. 5.2
and 5.2.1, that is, for kT and Fd small compared with E_0, one obtains the nucleation rate in closed form: Eq. (5.31) for Fd ≪ kT, or Eq. (5.32) for kT ≪ Fd.
5.3.3 The Peierls-Nabarro potential
Let us go back now to the case of a discrete DQ chain. Discreteness (with parameter γ) affects the kink dynamics on two accounts: (i) The profile of a static kink (antikink) φ_±(x, 0) is deformed into [55]

[φ_±]_n = ± sgn[n − N] [1 − Z_v v^{|n−N|}],   (5.33)

with Z_v = 2v/(1 + v), N = m + 1/2, m = 0, ±1, ±2, …, and v = [√(1 + 4γ²) − 1]/[√(1 + 4γ²) + 1]. To make contact with the continuum solution φ_±(x, 0), one must replace nl with x, Nl with X_0, and take the continuum limit γ → ∞ (so that v ≃ 1 − 1/γ). Note that the spatial extent of the discrete kink solutions (5.33) is of the order of γl = d; (ii) the kink center of mass X(t) moves in a periodic Peierls-Nabarro (PN)
potential with constant l and angular frequency ω_PN, that is, [55]

α Ẋ = −ω_PN² [X − l(Int[X/l] − 1/2)] ∓ 2F/M_0 + α η(t),   (5.34)

where ω_PN ≃ √(1 + v) ω_0 and Int[X/l] denotes the integer part of X in units of l. Note that ω_PN → ω_0 and ω_PN → √2 ω_0 in the highly discrete and
continuum limits, respectively. The energy barriers of the PN potential are thus (almost) quadratic in l. The one-dimensional Langevin equation (5.34) has been studied in great detail by Risken and coworkers [226]. In the noiseless limit, η(t) = 0, the process X(t) is to be found either in a locked state with ⟨Ẋ⟩ = 0, for 4F/M_0 < ω_PN² l, or in a running state with ⟨Ẋ⟩ ≃ u_F, for 4F/M_0 > ω_PN² l. This is indeed the depinning (or locked-to-running) transition described in Fig. 5.6. At finite temperature the stationary velocity ⟨Ẋ⟩ = u(T) can be cast in a closed analytical form, Eq. (5.35), in terms of θ = 2Fl/kT and two quantities A and B that can be computed numerically with minimum effort [182]. The ratio u(T)/u_F is the rescaled kink mobility plotted in Fig. 5.17(c).
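The locked and running states of Eq. (5.34) can be illustrated with a generic overdamped washboard model; the cosine potential below is only a stand-in for the piecewise-parabolic PN potential, and all parameter values are hypothetical:

```python
import numpy as np

def mean_velocity(F, barrier=1.0, T=0.0, dt=1e-3, n_steps=100_000, seed=0):
    """Euler-Maruyama estimate of the drift of an overdamped particle in a
    tilted cosine potential: dx/dt = -barrier*sin(x) + F + sqrt(2T)*noise."""
    rng = np.random.default_rng(seed)
    x = 0.0
    for _ in range(n_steps):
        x += (-barrier * np.sin(x) + F) * dt \
             + np.sqrt(2.0 * T * dt) * rng.standard_normal()
    return x / (n_steps * dt)

v_locked = mean_velocity(F=0.5)           # below threshold, T = 0: pinned
v_running = mean_velocity(F=2.0)          # above threshold: runs at finite speed
v_thermal = mean_velocity(F=0.5, T=0.5)   # noise restores transport below threshold
```

At T = 0 the particle is pinned for F below the maximal restoring force and runs above it; finite temperature smears the depinning transition into thermally assisted creep, as in Fig. 5.17(c).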
5.4 Noise enhanced wave propagation
The noise sustained signal transmission described above is characterized by an avalanche-like propagation mechanism. By introducing a symmetry breaking global bias, kinks are forced to travel towards the right boundary. The resonators in that setup have to be reset by an opposite global bias, thus not permitting continuous signal transmission. By contrast, a two-way, unbiased bistable array is not susceptible to such avalanches and is suited for continuous information transmission. This section describes the somewhat different phenomenon of noise enhanced propagation (NEP) of a periodic signal, which was first reported in Ref. [159]. The system is
the same chain of bistable elements with two-way nearest-neighbor coupling that exhibits spatiotemporal stochastic resonance. However, instead of applying the sinusoidal forcing to all cells, we force only one end of the array. NEP occurs when stochastic resonance at intermediate sites (which are near threshold) extends the influence of the input forcing. Noise, nonlinearity, and forcing (mediated by coupling) cooperate to make this possible. In the presence of noise, applied at every site, we record a signal-to-noise ratio at each oscillator. We demonstrate that moderate noise significantly extends the propagation of the sinusoidal input, while too little or too much noise does not. Oscillators far from the input, in the region of the chain where noise extends the signal, exhibit classical stochastic resonance. Both the optimal noise and the maximum propagation length scale as the square root of the coupling. We obtain similar results with two-dimensional arrays. The simplicity of the model suggests the generality of the phenomenon. NEP may be important in biophysical and biochemical processes, especially neural networks, and may be exploited by communication and detection technologies. In order to demonstrate NEP as broadly as possible, we focus on a simple model. We study a coupled chain of noisy overdamped bistable oscillators. For n > 1, the amplitude x_n of the nth oscillator obeys

ẋ_n = −V′(x_n) + ε(x_{n+1} + x_{n−1} − 2x_n) + ξ_n(t),  ε > 0.    (5.36)
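As an aside, the dynamics of Eq. (5.36) can be explored with a few lines of code. The following Euler-Maruyama sketch is illustrative only, not the code used for the figures: the parameter values, the free-boundary handling, and the run length are assumptions made for the example.

```python
import math
import random

# Euler-Maruyama sketch of Eq. (5.36): N overdamped bistable oscillators with
# nearest-neighbor coupling eps and independent ("local") Gaussian noise of
# variance sigma2 at every site.  Free ends; illustrative parameters only.
random.seed(1)

N, eps, sigma2 = 32, 10.0, 25.0
k1, k2 = 2.37, 1.87            # V(x) = -k1*x**2/2 + k2*x**4/4 (bistable)
dt, steps = 1.0e-3, 2000
noise_std = math.sqrt(sigma2 * dt)

x = [-math.sqrt(k1 / k2)] * N  # start every site in the left well
for _ in range(steps):
    new = []
    for n in range(N):
        left = x[n - 1] if n > 0 else x[n]        # free boundaries
        right = x[n + 1] if n < N - 1 else x[n]
        drift = k1 * x[n] - k2 * x[n] ** 3 + eps * (left + right - 2 * x[n])
        new.append(x[n] + dt * drift + random.gauss(0.0, noise_std))
    x = new

print(sum(x) / N)   # chain-averaged position after the run
```

A long run of such trajectories, recorded site by site, is the raw material for the spectral analysis described below.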
The potential energy function characterizing each element is given by Eq. (3.9). As before, we choose k_1, k_2 > 0 to ensure its bistability and we remind the reader that the height and width of the energy barrier are h = k_1²/(4k_2) and w = 2√(k_1/k_2), respectively. With the end of the chain free, we force the first element sinusoidally,

ẋ_1 = −V′(x_1) + ε(x_2 − x_1) + ξ_1(t) + A sin ωt,    (5.37)
and study the propagation of this signal along the chain. ξ_n(t) is Gaussian white noise, band-limited in practice by our integration time step dt (which establishes a nonzero correlation time) to a Nyquist frequency f_N = 1/(2dt). We quantify the noise by its mean squared amplitude or noise power σ² = 2D f_N, where 2D is the height of the one-sided noise spectrum. We study the case of incoherent or local noise (uncorrelated from oscillator to oscillator) rather than coherent or global noise (identical at each oscillator). The system of stochastic differential equations
Fig. 5.18 Spatiotemporal behavior of a chain of 32 overdamped bistable oscillators, sinusoidally forced at one end and subjected to increasing incoherent noise. Parameters are h = 0.75, w = 2.25, A = 5, ω = 0.2, and ε = 10. Reproduced with permission from [159].
(5.36)-(5.37) is numerically integrated via the Euler-Maruyama algorithm [76; 148], using a time step dt = T/2^12, with the forcing period T = 2π/ω. We then numerically estimate the spectrum, or power spectral density (PSD), of a long time series of each oscillator. After averaging many such PSDs, we estimate the signal-to-noise ratio (SNR) at each oscillator as the ratio of the signal power to the noise power at the frequency of the forcing. The SNR is conventionally expressed in decibels (dB). We estimate the noise power by performing a nonlinear fit to the PSD around, but not including, the forcing frequency. We estimate the signal power by subtracting this noise background from the total power at the forcing frequency. Schematically,

SNR = 10 log_10 (signal power / noise power).    (5.38)
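The recipe behind Eq. (5.38) can be sketched in a few lines. In this illustration a synthetic sinusoid plus white noise stands in for an actual oscillator trajectory, and a flat average over neighboring frequency bins stands in for the nonlinear background fit; all sizes are assumptions made for the example.

```python
import math
import random

# Sketch of the SNR estimate of Eq. (5.38): project the time series onto the
# forcing frequency (one DFT bin), estimate the noise background from nearby
# bins, and form 10*log10(signal/noise).
random.seed(2)

M = 4096                      # samples; the forcing sits exactly on bin kf
kf = 64
series = [math.sin(2 * math.pi * kf * t / M) + random.gauss(0.0, 0.5)
          for t in range(M)]

def bin_power(s, k):
    # power in DFT bin k of series s
    re = sum(v * math.cos(2 * math.pi * k * t / M) for t, v in enumerate(s))
    im = sum(v * math.sin(2 * math.pi * k * t / M) for t, v in enumerate(s))
    return (re * re + im * im) / M

p_total = bin_power(series, kf)
# noise background: average power in nearby bins, excluding the forcing bin
p_bg = sum(bin_power(series, k) for k in range(kf - 5, kf + 6) if k != kf) / 10
snr_db = 10 * math.log10((p_total - p_bg) / p_bg)
print(round(snr_db, 1))
```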
However, our results are robust with respect to variations in this definition of SNR. Figure 5.18 illustrates the system's spatiotemporal behavior in the presence of increasing noise. Each strip represents the evolution of a chain of 32 oscillators, arrayed vertically and evolving horizontally, for 12
Fig. 5.19 Plots of SNR versus oscillator number n and SNR versus noise variance σ². Intermediate noise results in maximum propagation, and oscillators in the part of the chain where noise extends the signal exhibit stochastic resonance. Smooth curves have been fit to the data to aid the eye. Parameters are the same as in Fig. 5.18, except that ε = 4. Reproduced with permission from [159].
forcing periods. The position x_n(t) is color coded, with blue denoting the left well and red denoting the right well. The first, forced, oscillator is at the top edge of each strip. With little noise, the sinusoidal signal propagates only a short distance down the chain. Moderate noise extends the propagation, while excessive noise destroys it. Note how a noise variance of σ² = 100 enables the signal to initiate system-wide events, occasionally causing the entire chain to hop from one well to the other. In fact, we chose our initial operating parameters so that, in the absence of forcing, the spatial features are large compared to the chain length. Large segments of the chain spontaneously flop across the bistable potential barrier. Such a partially correlated medium allows noise to support, sustain, and enhance signal propagation. Typically, we parameterize the forcing with amplitude A = 0.5 and angular frequency ω = 0.2, and the bistable potential with a barrier height of h = 0.75 and barrier width of w = 2.25 (corresponding to k_1 ≈ 2.37 and k_2 ≈ 1.87). However, NEP is robust with respect to variations in these parameters. Figure 5.19 illustrates SNR versus oscillator number for increasing noise variance and SNR versus noise variance for increasing oscillator number. For oscillators near the forcing end, the SNR decreases as the noise
Fig. 5.20 Smoothed contour plots of SNR versus oscillator number n versus noise variance σ², for increasing coupling. SNR contours are accurate to ±1 dB. Other parameters same as Fig. 5.18. (Here we employ a larger integration time step of dt = T/2^10.) Reproduced with permission from [159].
increases. But for oscillators farther away, the SNR goes through a local maximum as the noise increases: the classic signature of stochastic resonance. Oscillators near the forcing do not need help from the noise, as the forcing amplitude there is large (A > h); however, oscillators far from the forcing do need help from the noise, as the signal there is attenuated. For a given noise power, the SNR decreases with oscillator number downstream along the chain. Defining the propagation length as the number of oscillators (or distance along the chain) for which the SNR exceeds a certain cutoff, say 1 dB (roughly the uncertainty in our numerics), we observe that the propagation length is longest for moderate noise. These data can be succinctly combined into a contour plot of SNR versus noise variance versus oscillator number. Figure 5.20 presents a series of such contour plots, for increasing coupling. Gray levels code SNR, with white indicating large SNR (> 40 dB) and black indicating small SNR (< 1 dB). The bottom-left corners represent the peak SNR of the noiseless first oscillator. The SNR decreases everywhere away from these corners. However, the distinct bulges in the contours, where regions of large SNR extend toward the free end of the chain, are the signatures of NEP. (The 1 dB contour traces out the propagation length as a function of noise.) At sufficiently large coupling, the SNR extensions reach the end of the chain, indicating that noise and coupling have succeeded in sustaining the signal throughout its length. Note how both the extent and the position of these SNR extensions increase with increasing coupling. Figure 5.21 summarizes the scaling of optimal noise and maximum propagation length with coupling. Both appear to scale as the square root of the coupling. This reflects the fact that the correlation length of spatiotemporal features for a local and linearly coupled array scales this way [103].
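The propagation-length measure reduces to a simple threshold count. The sketch below uses made-up SNR values purely to illustrate the bookkeeping; only the 1 dB cutoff is taken from the text.

```python
# For each noise level, count how many sites along the chain keep their SNR
# above a 1 dB cutoff, then pick the noise that maximizes this length.  The
# SNR table is illustrative invented data, not simulation output.
snr_db = {            # noise variance -> SNR (dB) at oscillators 1..8
    10:  [40, 20, 8, 3, 0.5, 0.2, 0.1, 0.0],
    100: [35, 25, 18, 12, 7, 4, 2, 0.8],
    400: [25, 12, 6, 2, 0.7, 0.3, 0.1, 0.0],
}
CUTOFF = 1.0          # dB, roughly the numerical uncertainty

def propagation_length(snrs):
    length = 0
    for s in snrs:    # walk downstream until the SNR drops below the cutoff
        if s < CUTOFF:
            break
        length += 1
    return length

lengths = {var: propagation_length(s) for var, s in snr_db.items()}
optimal_noise = max(lengths, key=lengths.get)
print(lengths, optimal_noise)   # the intermediate noise level wins
```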
As the coupling increasingly binds adjacent oscillators together in the same well, the noise variance must increase correspondingly in order to force such correlated oscillators across the bistable potential barrier. It is instructive to compare these results for two-way (bi-directional) bistable chains with the behavior of one-way (unidirectional) bistable chains recently reported by Hu et al. [279]. For a one-way bistable chain driven at one end, if forcing and noise cooperate to flip the first site (from one well to the other), sufficient coupling propagates the flip to the other end with probability unity. When the forcing is just below the threshold for the first site, the requisite noise is small enough not to interfere with the propagation of the flip. A one-way bistable chain readily propagates flips, like a line of tumbling dominoes. Consequently, for sufficiently large coupling,
Fig. 5.21 Scaling of optimal noise and maximum propagation length with coupling ε.
SR-initiated avalanches facilitate undamped signal transmission. NEP is not confined to chains of bistable elements; the effect has also been established in two-dimensional arrays. Lindner et al. studied a square array of overdamped oscillators, each of which was coupled to its four nearest neighbors. With free boundaries, a circular region of oscillators was sinusoidally forced and the propagation of the resulting circular wavefronts was monitored. As in the one-dimensional case, moderate noise extended the propagation length. Figure 5.22 summarizes the spatiotemporal behavior and SNR response for the two-dimensional array. Note how a noise of 75 dB corresponds to both maximum propagation length and significant spatiotemporal organization.
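A minimal sketch of such a two-dimensional array is given below. Grid size, run length, forced-region radius and noise level are illustrative assumptions, not the values of Ref. [159]; only the form of the four-nearest-neighbor coupling and the quarter-circle drive follow the text.

```python
import math
import random

# One Euler-Maruyama sweep of an L x L grid of overdamped bistable cells,
# each coupled to its four nearest neighbors (free boundaries).  Sites inside
# a quarter circle at the (0, 0) corner receive the sinusoidal drive.
random.seed(3)
L, eps, k1, k2 = 16, 1.0, 2.37, 1.87
A, omega, sigma2, dt = 5.0, 0.2, 25.0, 1.0e-3
radius = 4                                   # forced quarter circle

x = [[-math.sqrt(k1 / k2)] * L for _ in range(L)]

def step(x, t):
    new = [row[:] for row in x]
    for i in range(L):
        for j in range(L):
            lap = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:   # free boundaries
                    lap += x[ni][nj] - x[i][j]
            drift = k1 * x[i][j] - k2 * x[i][j] ** 3 + eps * lap
            if i * i + j * j <= radius * radius:  # forced corner sites
                drift += A * math.sin(omega * t)
            new[i][j] = (x[i][j] + dt * drift
                         + random.gauss(0.0, math.sqrt(sigma2 * dt)))
    return new

for n in range(200):
    x = step(x, n * dt)
print(x[0][0])   # state of a forced corner cell after the short run
```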
Fig. 5.22 Examples of (a) spatiotemporal behavior and (b) SNR response at (n_x, n_y) = (n, n) for a two-dimensional array of 32 × 32 bistable oscillators, with free boundaries, forced from a quarter circle of sites at the bottom left (0, 0) corner of the array. Parameters are the same as in Fig. 5.18 except that ε = 1. Reproduced with permission from [159].
Thus, we have demonstrated that noise can significantly extend the propagation of signals in arrays of two-way coupled bistable elements. Like AESR, NEP is an important generalization of SR.

5.4.1 Monostable noise enhanced propagation
In this section, we break with the paradigm of bistability and outline two very recent examples of noise aiding propagation in arrays of monostable elements. As first demonstrated by Lindner et al. [160], NEP can also be observed in arrays of monostable oscillators. The potential V(x) defined by Eq. (3.9) is monostable if the parameters are chosen such that k_1 < 0 and k_2 > 0. For finite damping, the evolution equation Eq. (5.36) becomes a set of coupled second order differential equations:

m ẍ_n + γ ẋ_n = −V′(x_n) + ε(x_{n+1} + x_{n−1} − 2x_n) + ξ_n(t),    (5.39)
where the first site is driven by a sinusoid defined in Eq. (5.37) and the white noise term is uncorrelated in time and space, ⟨ξ_n(t) ξ_{n′}(t′)⟩ = σ² δ_{nn′} δ(t − t′). For large enough driving amplitudes, oscillators near the forcing end do not benefit from the noise; on the contrary, the signal-to-noise ratio actually deteriorates with increasing noise power. On the other hand, oscillators far from the forcing need assistance from the noise, since the signal there is attenuated. These oscillators exhibit stochastic resonance: the SNR goes through a local maximum as the noise increases. For a given noise power, the SNR decreases with oscillator number downstream along the chain. Defining the propagation length as the number of oscillators for which the SNR exceeds a certain threshold, the authors of Ref. [160] observe that the propagation length is longest for moderate noise. Figure 5.23 summarizes these observations in a contour plot of SNR versus noise power versus site index. Gray levels code SNR, with white indicating large SNR and black indicating small SNR. The bottom-left corner represents the peak SNR of the first oscillator. The SNR decreases everywhere away from this corner. However, NEP is evident in the distinct bulges, where a rather large SNR value extends virtually unattenuated to the free end of the chain. For intermediate noise variance, σ² ≈ 10³, the SNR extension reaches the end of the chain! Our second example extends the phenomenon of doubly stochastic resonance, which we summarized in section 4.2. It was recently shown [276] that noise induced propagation of waves can be achieved through a similar
Fig. 5.23 Contour plot of SNR as a function of noise variance σ² and oscillator number n demonstrating noise enhanced propagation in an array of monostable nonlinear oscillators. It is clearly seen that intermediate noise optimizes the propagation of the signal. Parameters corresponding to Eq. (5.39) are m = 1, γ = 5 × 10⁻⁵, k_1 = −1, k_2 = 1, A = 1, ω = 2π × 0.45, and ε = 1. Reproduced with permission from [160].
interplay of multiplicative and additive noise. We consider the identical system (4.20) but instead of applying the periodic drive to all elements of the two-dimensional lattice, only the first three columns are driven.† Formally, this is achieved by replacing the signal amplitude A in (4.20) by a cell dependent value: A → A_i = A(δ_{i_x,1} + δ_{i_x,2} + δ_{i_x,3}). Analogously to NEP we first select a suprathreshold drive amplitude; i.e., A is large enough to

†Note that the functions f(x) and g(x) are different in [277] and [276]. However, the phenomena are quite general. For the sake of simplicity we therefore only consider the functions utilized in Ref. [277].
induce hops between the two states without the need for additive noise. As for NEP, the SNR of the first oscillators decreases when noise is added, while for distant oscillators an SR-like response can be observed. An optimal amount of additive noise exists for which propagation of the sinusoidal signal is greatest. If, on the other hand, the amplitude A is chosen small enough that no hops can be observed in the directly excited sites in the absence of additive noise, doubly SR occurs in this part of the system as described in section 4.2. This excitation actually propagates through the rest of the lattice and results in an SR-like response for all oscillators, for suitable multiplicative noise power. The authors of Ref. [276] refer to this phenomenon as spatiotemporal doubly stochastic resonance.
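Returning to the finite-damping chain of Eq. (5.39), the second-order equations can be integrated with a simple semi-implicit (symplectic) Euler split, updating velocities before positions. The deterministic parameters below loosely follow the Fig. 5.23 caption; chain length, noise level, and run length are illustrative assumptions.

```python
import math
import random

# Semi-implicit Euler sketch of Eq. (5.39): a monostable chain (k1 < 0,
# k2 > 0) with inertia m, damping gamma, and the first site driven.
random.seed(4)
N, m, gamma, eps = 16, 1.0, 5.0e-5, 1.0
k1, k2, A, omega = -1.0, 1.0, 1.0, 2 * math.pi * 0.45
sigma2, dt, steps = 1.0, 1.0e-3, 2000

x = [0.0] * N
v = [0.0] * N
for n_t in range(steps):
    t = n_t * dt
    for n in range(N):
        left = x[n - 1] if n > 0 else x[n]        # free boundaries
        right = x[n + 1] if n < N - 1 else x[n]
        det = (k1 * x[n] - k2 * x[n] ** 3
               + eps * (left + right - 2 * x[n]) - gamma * v[n])
        if n == 0:
            det += A * math.sin(omega * t)        # drive the first site only
        # m dv = det dt + dW, with <dW^2> = sigma2 dt
        v[n] += (det * dt + random.gauss(0.0, math.sqrt(sigma2 * dt))) / m
    for n in range(N):
        x[n] += v[n] * dt                         # positions use new velocities
print(max(abs(p) for p in x))
```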
5.5 Stochastic ratchets and Brownian motors
It is a well-known fact that no useful work can be extracted from thermal equilibrium fluctuations [61; 261]. Biasing Brownian motion through a clever device would violate the second law of thermodynamics. Feynman illustrated this point through his celebrated example of a microscopic "ratchet and pawl" [61]. He proved that the anisotropy of the ratchet's teeth cannot lead to a net motion if all components of this device are treated consistently. The situation is entirely different in the presence of nonequilibrium fluctuations. The last decade has witnessed a flurry of publications on thermal ratchets, which are periodic but anisotropic structures that are capable of rectifying symmetric and unbiased nonequilibrium noise into a directed current. The author cannot even begin to give credit to the various significant papers and conference proceedings but instead refers the reader to comprehensive reviews [9; 94; 127; 223]. While thermal ratchets come in different versions and under names such as molecular or Brownian motors or "rocking" and diffusion ratchets, here we narrow our scope to the so-called "flashing" ratchet and present a brief summary of the corresponding section in Ref. [9]. Consider a particle subject to viscous damping and noisy perturbations moving in the asymmetric, tilted sawtooth potential sketched in Fig. 5.24. The forces experienced by the particle are a superposition of the constant deterministic tilt F_ext, the time varying symmetric fluctuations ξ(t), and the anisotropic force −U′_saw, which periodically switches on and
Fig. 5.24 Illustration of a "flashing ratchet". The sawtooth potential U_saw is periodically switched on and off. The external gradient U_ext = −xF_ext, demarcated by the upward tilted dashed base line, is on continuously. The anisotropy of the potential is parameterized by α: on the "steep" slope of extent αL, the particle experiences a force −ΔU/(αL) + F_ext, while on the other leg the force is equal to ΔU/([1 − α]L) + F_ext. During the on phase, the particle is instantly localized in one of the multiple potential minima at nL, n = 0, ±1, ±2, .... When U_saw is off, the stochastic motion of the particle is a superposition of a deterministic drift ∝ tF_ext/γ (γ is the coefficient of viscous drag) and a diffusive delocalization as symbolized by the Gaussian probability density. The uncertainty in position grows according to a square root law ∝ √(4πDt), where D is the diffusion constant. As explained in the text, the "flashing" potential can cause an average motion to the right despite the net force to the left.
off. We only consider external forces small enough that the particle is trapped near the bottom of one of the wells when the potential is on. In the following, we denote the damping constant by γ, the diffusion constant by D,† and the period of the flashing potential by 2τ. As long as the ratchet potential is off, the particle moves diffusively according to a biased random walk. An initial Gaussian distribution centered at x_0 = x(t_off = 0) = 0

†The temperature T and the diffusion constant D are related via the fluctuation-dissipation theorem: D = kT/γ.
will therefore evolve in the following manner [9]:

P(x, t_off | x_0 = 0) = (4πD t_off)^(−1/2) exp[−(x − t_off F_ext/γ)² / (4D t_off)].    (5.40)
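The flashing mechanism can be demonstrated with a small Monte Carlo experiment: let the particle diffuse according to Eq. (5.40) while the potential is off, then snap it to the minimum of the sawtooth cell it landed in when the potential switches on (the adiabatic limit). All numerical values below are illustrative assumptions.

```python
import math
import random

# Flashing ratchet sketch: sawtooth minima sit at n*L, the steep slope
# occupies a fraction alpha of each period, and a small external load pulls
# the particle to the LEFT.
random.seed(5)
L, alpha = 1.0, 0.1           # period and asymmetry of the sawtooth
D, tau = 0.05, 1.0            # diffusion constant and off-phase duration
F_over_gamma = -0.01          # leftward drift velocity from the load

def cycle(pos):
    # off phase: biased free diffusion, per Eq. (5.40)
    pos += F_over_gamma * tau + random.gauss(0.0, math.sqrt(2 * D * tau))
    # on phase: a particle between (alpha + n - 1)L and (alpha + n)L is
    # trapped at the minimum n*L
    n = math.floor((pos - alpha * L) / L) + 1
    return n * L

trials, cycles = 2000, 50
total = 0.0
for _ in range(trials):
    pos = 0.0
    for _ in range(cycles):
        pos = cycle(pos)
    total += pos
mean_drift = total / (trials * cycles * L)
print(mean_drift)   # cells advanced per cycle; positive means rightward
```

Despite the leftward load, the seeded run yields a positive mean displacement per cycle, the rectification effect described in the text.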
Accordingly, after one complete off cycle the variance in position is Δx = √(2Dτ). When the ratchet potential is switched on, the particle gets trapped in the potential minimum at nL if it happens to be between (α + n − 1)L and (α + n)L at that time. Because the "left legs" of the anisotropic potential are shorter than the "right legs", a particle starting at nL is more likely to be trapped in the neighboring well to the right at (n + 1)L than in the well at (n − 1)L. This reasoning holds only if (i) the external bias F_ext is "small" (to be specified below), (ii) the diffusion Δx during the off phase is at least αL but does not exceed (1 − α)L, and (iii) τ is long enough for the particle to equilibrate at the bottom of the well ("adiabatic" requirement). All in all, through the interplay of stochastic diffusion and the intermittent potential, an ensemble of particles can move against an external macroscopic force! The necessary energy comes from the ratchet potential when it is switched on. At that moment the potential energy of the particle will be suddenly increased. For large ΔU/L the external force F_stop necessary to balance the opposite "entropic flow" can be computed exactly [163]:

F_stop = (ΔU/L)(α − 1/2).    (5.41)

5.6 Postscript
If the spatial distribution of the noise were constrained to a small neighborhood around the kink and zero along the rest of the chain, fast and efficient noise-supported signal transmission without false alarms would be realizable. This seemingly artificially constructed scenario can be achieved naturally by considering the case of purely multiplicative noise [54; 147]. A detailed study of noise-sustained propagation in the presence of multiplicative fluctuations is beyond the scope of this work. Further experimental evidence for SR in an array of 4 coupled Schmitt triggers is given in [229]. The coupling employed is unidirectional and
"negative", so that the antiparallel states 1010 and 0101 are stable ground states. Both AESR and spatiotemporal synchronization are demonstrated and an interesting analogy to charge density waves (CDW) is outlined. We mention in passing that the observed noise-enhanced propagation is much closer to the avalanche effect described in Ref. [279] than the NEP phenomena reported in Refs. [159; 165; 167].
Chapter 6
Sundry Topics
While the previous chapters have had a definite bias towards physical systems, this book would be seriously flawed if it were to ignore the recent flurry of activity and revived interest in modeling socioeconomic and nondeterministic complex systems. The growing, more or less successful, trend of scientists explaining economic trends and patterns through models and techniques borrowed from statistical physics has given rise to its own hybrid discipline, often referred to as econophysics [108; 175]. In this chapter we try to give the reader a flavor of the diversity of disciplines and applications in which noise and stochastic concepts lead to non-trivial, fascinating and sometimes counter-intuitive phenomena. The role of fluctuations in game theory, traffic dynamics, machine learning and optimization techniques will be touched upon. We do not claim completeness by any means; in fact this chapter constitutes a rather random medley of topics to which the author happens to have been exposed. This chapter mainly intends to serve as an appetizer and eye-opener for those readers who long to broaden their "application scope" and want to reexamine the complexity of daily life through a scientific microscope.

6.1 Minority game
The minority game (MG) in its original formulation was proposed by Challet and Zhang [31] in 1997 and has received considerable attention in the literature [110]. It was partly inspired by Arthur's "El Farol" problem [8] and applies simple and intuitive rules, dictated by the laws of supply and demand, to a group of interacting agents. The imperative that
both gives rise to the name and drives its complex dynamics declares the minority of a group of buyers and sellers to be the winning faction. While intended as a toy model of economic market mechanisms, the initial formulation in [8] offers a more social setting: "N people decide independently each week whether to go to a bar that offers entertainment on a certain night." Due to limited space, the evening is deemed enjoyable only if fewer than a certain fraction M/N of the crowd shows up. The (binary) decision of the individual whether to visit or to avoid the bar is assumed to be made according to the individual's strategies, which depend solely on previous attendances. The number of people attending in the past plays the role of global information; i.e., it is available to each individual ("agent"). From the individual's viewpoint, the problem is obviously ill-defined and forces each agent to predict the decisions of the remainder of the group. The absence of a deducible solution and the system's inherent frustration invalidate any common expectations: if everyone expects most people to show up, nobody will go, contradicting the very assumption, and vice versa [8]. Assume now that each agent possesses k strategies or predictors and updates the success history of each one of them. At any given time, each agent then employs the strategy that would have been most successful in the past (see below). Arthur showed that this so-defined dynamical system self-organizes into an equilibrium pattern in which the mean attendance always converges to M/N [8]. Setting M = (N + 1)/2 (N odd), the original minority game is formally defined as follows. N agents choose at each time step whether to "buy" or "sell". Those agents who have made the minority choice win and are rewarded; the others lose. We do not keep track of the exact bids but record a 0 if the buyers were in the minority and a 1 in the opposite case.
A "memory" of size m is hence defined as the binary sequence of length m. Evidently, there are 2TO possible memories. A strategy is defined to be the prediction table that uniquely determines the next bid given a specific history. An example of a specific strategy for m = 3 is given in table 6.1. The reader should be able to easily verify that the total number of potential strategies is j(m) = 2 2 '". If we assign a fixed order to the 2m dimensional memory, we strictly have to keep only the prediction vector since the left column of table 6.1 presents no new information. Therefore we can formally define a strategy as a binary vector R with Rj £ (0,1), j= l,...,2m. We initialize the game by randomly, but permanently, assigning s strate-
6.1.
Minority
history 000 001 010 Oil 100 101 110 111
game
169
prediction 0 1 0 0 1 0 1 1
Table 6.1 An example of a prediction table for m = 3. The strategy can be defined to either be the full table or just the right prediction column.
gies of memory m to each agent. We will consider only the nontrivial case of s > 2. We point out that while it is in principle possible for some strategies to be shared by more than one agent, the doubly exponential growth of j(rn) makes this extremely unlikely even for moderately sized m. During the course of the game all deployed strategies are updated according to their potential success. We award a point to a strategy if it would have forecasted the correct bid, ignoring whether it actually was exploited or not. At any given time, each agent selects from among his/her strategies the one with the highest number of points* to pick his/her bid. Apart from the random initial conditions, which comprise the distribution of strategies and the necessarily random first memory string, this minority game is fully deterministic. The macroscopic variable that has received by far the greatest attention is the variance of excess of buyers to sellers, which is defined as
where A(t) is the total bid at time t, A(t) = Σ_{j=1}^{N} a_j(t), and a_j(t) ∈ {0, 1} is the individual bid of agent j. The fluctuations around the mean, quantified by the variance, are referred to as volatility in financial markets. The larger

*In the unlikely case of a tie, a strategy is randomly chosen among the most successful contenders.
σ² is, the larger the total loss of the body of agents.† In a financial context zero volatility would be most desirable, as it minimizes risk and generates "wealth" among the largest number of agents. If there were no strategies and all agents played the game randomly, the volatility would simply be σ_r² = N/4, according to the law of large numbers. The volatility depends on the parameter m in a nontrivial fashion, as Fig. 6.1 (stars) illustrates: for very small memory sizes the fluctuations of the total bid are significantly larger than if the agents had acted randomly.

†This is, of course, due to the particular payoff function, which does not take into account the size of the minority at all.
Fig. 6.1 Volatility σ as a function of m (s = 2) for the model with and without memory. The horizontal line is the value σ_r = √(N/4) ≈ 5 of the random case. The number of agents is N = 101. Error bars are shown only for the model without memory, while the line just connects the points of the memory model. Reproduced with permission from [27].
For large m the volatility converges from below towards the random limit, and for intermediate memory sizes, σ(m) is actually smaller than σ_r! For the remainder of this section we fix the number of strategies to s = 2 without loss of generality. It has been shown in [32] that the minimum of the curve σ(m) persists for larger values of s but gradually grows more and more shallow. Our earlier comment on the likelihood of different agents possessing identical or similar strategies appears to hold true for the behavior depicted in Fig. 6.1. When the strategies are totally uncorrelated, which is more and more likely for increasing m, the game dynamics become more and more random, therefore approaching the variance limit σ_r. An important breakthrough in the understanding and analysis of the minority game was achieved by Cavagna [27], who demonstrated that the actual memory of the agents is irrelevant! The volatility σ(m) of the system is unchanged if we replace the real history by an invented history, which in turn can be taken to be a random sequence of bits! In Fig. 6.1, the variance as a function of m is also plotted for this memory-less setup (open circles). The two models surprisingly give the same results and therefore leave the memory size or, more accurately, the dimension of the strategy space D = 2^m as the only relevant variable. As a consequence, there is no longer any need for D to be a power of 2 [27], so that from now on we conveniently allow D to take on all positive integer values. Cavagna also showed that the dependence of σ(D) on s and N is such that all data collapse onto the same curve if we assume the scaling σ² → σ²/N and 2^m → 2^m/N = D/N. Therefore, in the following, we will always plot σ²/N (or σ/√N) vs. D/N. Summarizing, it appears that while the information content processed by the agents is extraneous, it is crucial that everyone is exposed to the same "history". In the terminology of Chapter 4, the applied "noise" is global.
In the case of local noise, i.e., if each agent received their own information uncorrelated with the others, the dynamics of the game become identical to the random case. For more insights into the controversial topic of whether the actual memory is relevant for the minority game or not, we refer the reader to the literature [155].
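The memoryless formulation of [27] makes the game easy to simulate: all agents are fed the same freshly drawn random history each step, strategies are random prediction tables, and each agent plays its currently best-scoring strategy. The sketch below is illustrative; sizes and run length are assumptions, and ties between strategies are broken by index rather than at random for simplicity.

```python
import random

# Minimal memoryless minority game: common random "history", virtual scoring
# of every strategy, best strategy played.
random.seed(6)
N, m, s, T = 101, 3, 2, 2000
D = 2 ** m

# strategies[i][k] is a prediction table: history index -> bid in {0, 1}
strategies = [[[random.randrange(2) for _ in range(D)] for _ in range(s)]
              for _ in range(N)]
points = [[0] * s for _ in range(N)]

bids_record = []
for _ in range(T):
    h = random.randrange(D)                # same random history for everyone
    bids = []
    for i in range(N):
        best = max(range(s), key=lambda k: points[i][k])  # ties: lowest index
        bids.append(strategies[i][best][h])
    A = sum(bids)                          # number of "buyers" (bid = 1)
    winner = 0 if A > N / 2 else 1         # the minority side wins (N odd)
    for i in range(N):                     # virtual scoring of ALL strategies
        for k in range(s):
            if strategies[i][k][h] == winner:
                points[i][k] += 1
    bids_record.append(A)

mean = sum(bids_record) / T
var = sum((a - mean) ** 2 for a in bids_record) / T
print(var / N)   # compare with the random benchmark sigma_r^2 / N = 1/4
```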
6.1.1 The thermal minority game
As pointed out in [28], the original version of the minority game can be modified to be both analytically more tractable as well as more realistic. We will first introduce the continuous version of the MG and then proceed
to its stochastic version, which is the key rationale for devoting a section of this book to a game theoretical model. Cavagna et al. [28] introduced the concept of a "temperature" into the game dynamics. This section is essentially a summary of their intriguing work, which we highly recommend for further reading. A departure from the binary nature of the MG is motivated by the following shortcomings: (i) Integer variables, such as the strategies and bids, are less suited for time series analysis than floating point numbers. (ii) Geometrical arguments developed for the hypercube of strategies [32; 52; 126; 280] become more natural if the strategy space is continuous. (iii) To model financial markets more realistically, both the size of the individual bids as well as the gains or losses ought to be continuous. The following continuous formulation of the MG meets all of these criticisms and demands. The space of strategies V is assumed to be the surface of a hypersphere of dimension D. A strategy R is therefore a (real) vector in ℝ^D subject to the constraint ‖R‖ = √D. The global information η(t) presented to the agents is also a vector in ℝ^D, but random and of unit length. We require η(t) to be uniformly distributed on the unit hypersphere and uncorrelated in time, ⟨η(t)·η(t + τ)⟩ = δ(τ). The (now continuous) bid* corresponding to strategy R is defined as

a(R) = R · η(t).    (6.2)
The sign of a(R) determines whether to buy (positive) or to sell (negative), and its magnitude gives the respective amount. As before, the total bid is simply the sum of all individual bids corresponding to the agents' strategies R*_j(t) with the highest number of points at that time:

A(t) = Σ_{j=1}^{N} a(R*_j(t)) = Σ_{j=1}^{N} R*_j(t) · η(t).    (6.3)
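Equations (6.2) and (6.3) translate directly into code. In this sketch each agent holds a single strategy, so the best-strategy selection is omitted; sizes are illustrative assumptions. Normalizing a vector of Gaussian components gives the required uniform distribution on the sphere.

```python
import math
import random

# Continuous MG quantities: strategies of norm sqrt(D) on a hypersphere,
# a random unit-length information vector eta, projection bids (Eq. 6.2),
# and the total bid A (Eq. 6.3).
random.seed(7)
D, N = 16, 10

def random_vector(norm):
    g = [random.gauss(0.0, 1.0) for _ in range(D)]
    scale = norm / math.sqrt(sum(v * v for v in g))
    return [v * scale for v in g]

strategies = [random_vector(math.sqrt(D)) for _ in range(N)]  # ||R|| = sqrt(D)
eta = random_vector(1.0)                                      # global info

bids = [sum(r * e for r, e in zip(R, eta)) for R in strategies]  # Eq. (6.2)
A = sum(bids)                                                    # Eq. (6.3)
print(A)   # sign: net buy/sell; magnitude: net amount
```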
All that is left for us to specify is the continuous version of the time evolution of the points P(R, t) for strategy R. In the discrete case, we simply awarded one point to each winning strategy; i.e., P(R, t + 1) =

*Note that in Ref. [28] the symbols for the individual bid are b(R) and b_i(t) respectively. We prefer to stay close to the original notation, a_i(t), from the earlier MG papers.
Fig. 6.2 The scaled variance σ/√N as a function of the reduced dimension D/N for s = 2 and N = 100. The variance achieves its minimum at a critical dimension d_c ≈ 0.5. The dashed horizontal line serves as a reference for the random case. The first t_0 = 10000 steps were discarded as transients, and we averaged over 100 samples and τ = 10000 time steps respectively.
P(R, t) + sign[−a(R)A(t)]. The natural extension to the continuous case would be

P(R, t + 1) = P(R, t) − a(R)A(t)/N,    (6.4)
so that the points P are independent of N and proportional to both the individual and total bids. As can be seen in Fig. 6.2, the main features of the binary MG model are reproduced. There is a worse-than-random phase for d < 0.2 and a better-than-random phase for d > 0.2, where d = D/N is the reduced dimension of the strategy space. The volatility σ/√N assumes its minimum value at d_c = (D/N)_c ≈ 0.5, which we refer to as the critical dimension. The continuous MG constitutes an improvement in analytical tractability, but from a statistical physics point of view a stochastic parameter, which plays the equivalent role of a temperature, appears highly desirable. As an elegant alternative to the deterministic "best-strategy rule", Cavagna et al. [28] introduced the thermal minority game (TMG). The TMG utilizes an inverse temperature β = 1/T by computing the probabilities

π_ij(t) = e^(β P_ij(t)) / Σ_{k=1}^{s} e^(β P_ik(t)),  i = 1, ..., N,  j = 1, ..., s,    (6.5)
for each agent i of choosing his/her strategy j . In the zero temperature limit, /? —> oo, the deterministic MG is recovered, while for decreasing j3 the point differences between the strategies grow less and less decisive. For p = 0 strategies are selected completely at random. To fully explore the dynamics of the TMG, we would have to continuously vary both the temperature and the reduced dimension d. Instead, we only present the dependence of a on T for fixed d, trusting that qualitatively similar trends hold for a wide range of dimensions. Fitting well into the general theme of this book, we find that increasing stochasticity can actually benefit the community of agents by reducing the volatility! From Fig. 6.2 we pick a value of D/N which, for T = 0, results in a worse-than-random volatility a > ar, e.g., D/N = 0.1. Figure 6.3 illustrates the major reduction of a(T)
[Fig. 6.3 The scaled variance σ/√N as a function of the temperature T at D/N = 0.1 for s = 2 and N = 100. Both t_0 and τ from Eq. (6.1) are taken to be 10,000. Note that the increased variance for temperatures T > 100 is an artifact of finite simulation times, as pointed out in [30].]
for a large range of temperatures. Even if d lies in the better-than-random phase, finite temperatures still improve the behavior of the system. We are obliged to mention that the apparent minimum and subsequently increasing variance for temperatures above ≈ 100 is an artifact and entirely due to finite simulation times, as pointed out by Challet et al. [29; 30]. It is shown that the integration time τ required to reach the steady state is of order NT. Therefore, for N = 100 and τ = 10,000, the results for T > 100 in Fig. 6.3 do not represent the steady state. The "true" volatility curve, shown in Fig. 1 of [29], monotonically decays with increasing temperature. We did not correct Fig. 6.3 for reasons of completeness and also because its principal message is not weakened: any nonzero temperature larger than a critical value T_c always leads to improvement as long as d is less than the critical value d_c! Intuitively, we can understand the favorable response of the system to "noise" in the following manner: if every agent always chose his/her optimal strategy, then the system would be more "unstable" or more critical, i.e., every once in a while there would be rather large
[Fig. 6.4 Plot of the average individual gains of all 100 agents as a function of the temperature (D/N = 0.1). In addition, the volatility is plotted. The opposite trend is clearly visible.]
losses or gains. The introduction of a temperature or random component "smears out" the criticality somewhat, so that there are fewer extreme events, leading to a reduced variance. Recently, the MG has been reexamined through the eyes of various learning algorithms. We refer the reader to Refs. [266] and [5] for the performance of neural networks and the Q learning algorithm on the MG, respectively. Before concluding this section we would like to stimulate the reader's quest for further research. Among the macroscopic variables, the variance of the total bid is the first interesting moment, ⟨A(t)²⟩, since the mean A(t) vanishes. Another interesting observable is the average individual gain g_i(T) = ⟨-a_i(R)A(t)/N⟩ of the agents. It is noteworthy that the temperature dependence of the individual gains is almost the exact opposite to that of the volatility. In Fig. 6.4 we have overlaid the average gains g_i(T), i = 1, ..., 100, and σ/√N. Interestingly, the individual gains are all negative, which can be understood as follows:

    g_i = -⟨a_i(t) A(t)⟩/N = -(1/N) ⟨a_i(t) Σ_{j=1}^{N} a_j(t)⟩ = -(1/N) [ ⟨a_i²(t)⟩ + Σ_{j≠i} ⟨a_i(t) a_j(t)⟩ ].    (6.6)
[Fig. 6.5 The cross correlation between the individual bids, Σ_{i≠j} ⟨a_i(t) a_j(t)⟩/(N - 1), as a function of temperature for D/N = 0.1.]
The first term in this sum is simply the negative of the autocorrelation of the individual bid a_i(t), and the second term is the average of the cross correlations between the bids! In Fig. 6.5 we therefore plot, instead of the individual bids, the quantity Σ_{i≠j} ⟨a_i(t) a_j(t)⟩/(N - 1), which is just a randomly selected individual gain, divided by N - 1. It is evident that the cross correlation between the bids crosses zero at the same temperature as the variance approaches the random value. We close this section leaving these intermediate results as food for thought.
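As one more piece of food for thought, of the computational kind, the dynamics of this section are compact enough to simulate in a few lines. The sketch below implements the continuous point update (6.4) together with the Boltzmann strategy selection (6.5); the concrete strategy representation (random unit vectors scored against a shared random information vector), all parameter values and function names are our own illustrative assumptions rather than the exact setup of [28]:

```python
import numpy as np

def simulate_mg(N=51, D=4, s=2, beta=np.inf, steps=2000, seed=0):
    """Continuous minority game with an optional TMG temperature.

    Each agent holds s random unit strategy vectors R in a D-dimensional
    information space.  Per step a common random information vector eta(t)
    is drawn; a strategy's hypothetical bid is a(R) = R . eta(t), the total
    bid is A(t) = sum of the chosen bids (Eq. 6.3), and every strategy's
    points evolve as P(R, t+1) = P(R, t) - a(R) A(t) / N (Eq. 6.4).
    Strategies are chosen with the probabilities of Eq. (6.5); beta = inf
    recovers the deterministic best-strategy rule.  Returns sigma/sqrt(N).
    """
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((N, s, D))
    R /= np.linalg.norm(R, axis=2, keepdims=True)      # unit strategy vectors
    P = np.zeros((N, s))                               # strategy points
    A_hist = []
    for _ in range(steps):
        eta = rng.standard_normal(D)
        eta /= np.linalg.norm(eta)
        bids = R @ eta                                 # a(R) for every strategy
        if np.isinf(beta):
            choice = P.argmax(axis=1)                  # deterministic MG
        else:
            w = np.exp(beta * (P - P.max(axis=1, keepdims=True)))
            prob = w / w.sum(axis=1, keepdims=True)    # Eq. (6.5)
            choice = (prob.cumsum(axis=1) > rng.random((N, 1))).argmax(axis=1)
        A = bids[np.arange(N), choice].sum()           # total bid, Eq. (6.3)
        P -= bids * A / N                              # point update, Eq. (6.4)
        A_hist.append(A)
    return np.std(A_hist) / np.sqrt(N)                 # scaled volatility
```

Sweeping beta (or D/N) in such a toy model lets the reader reproduce the qualitative trends of Figs. 6.2 and 6.3.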
6.2 Traffic dynamics
Traffic research is an interdisciplinary and rapidly evolving field. It has attracted the interest of the engineering sciences as well as that of theoretical physics, mathematics, and cybernetics. Since traffic flow is governed by high-dimensional, complex and decidedly nonlinear dynamics reminiscent of fluid or granular flow, there is a great variety in the structure and simplicity of the models that attempt to extract key features. One of the first attempts to capture the nonlinear effects inherent in realistic traffic flows was put forward by Newell [206]. More insights into the solutions of the Newell model were provided by Whitham [267]. Many important macroscopic traffic patterns were reproduced in a series of computationally highly efficient cellular automaton (CA) models first proposed by Nagel and Schreckenberg [101; 202; 232; 235]. In 1995, Bando et al. [11] pioneered a microscopic, dynamical model that capitalizes on the concept of a legal or optimal velocity V_0. In this section, we can neither afford nor strive to cover even a fraction of the existing literature on traffic modeling. We refer the interested reader, who would like to delve deeper into this fascinating and rich discipline, to a number of excellent monographs and conference proceedings [100; 156; 185; 236; 272]. In addition, an entertaining and yet accurate layman-accessible explanation of the cause of density waves and jams in everyday traffic flow can be found online [112]. As a good introduction to the active community of traffic researchers, we recommend visiting the Web site run by M. Treiber [113]. Since this book deals entirely with stochastic effects, we would like to focus on the consequences of noise on pattern formation in car-following models. In fact, this section builds directly upon the results and insights
developed in section 5.1. There, the assumption of strongly asymmetric coupling was put forward in a rather ad hoc manner. However, in the context of traffic dynamics, the notion of unidirectional information flow emerges very naturally: from the view of the individual driver, the actions of the preceding vehicle are vastly more significant than those of the following car. For the following discussion, we can safely ignore the minuscule coupling of a vehicle to rear events. Recently, Nagatani [201] as well as Mitarai and Nakanishi [109; 195; 196] have given numerical and analytic evidence for the existence of noise-sustained structures in the time-continuous and difference-equation versions of the optimal velocity model.

6.2.1 Time-continuous model
For later convenience, we summarize the optimal velocity (OV) model, more details of which can be obtained from Refs. [11; 206]. All dynamical traffic models are based on the common assumption that each driver of a vehicle responds to external stimuli by accelerating or decelerating, which is the only direct control available. We have to distinguish between two diverging conjectures for the driver's dominant strategy to avoid accidents. Follow-the-leader theories assume that each vehicle maintains a velocity-dependent safe distance from the preceding vehicle. OV models, on the other hand, postulate the objective of a legal or optimal velocity, which depends solely on the distance to the preceding vehicle. Furthermore, a linear feedback loop is assumed such that the change in velocity is proportional to the deviation from the desired velocity, v̇_n = a_n (V_0 - v_n). The proportionality constant a_n quantifies the responsiveness of the driver and is referred to as the sensitivity. To keep the model simple, one usually ignores the lengths of the vehicles as well as the individual differences in sensitivity. Denoting the position of car n at time t by x_n(t), the dynamical equation of the system can therefore be written as

    ẍ_n = a [ V_0(Δx_n) - ẋ_n ],    with Δx_n = x_{n+1} - x_n.    (6.7)
As mentioned above, the "goal" velocity of vehicle n depends solely on the distance Δx_n to the preceding vehicle, which is also referred to as the headway. A subtle aspect of this model worth pointing out is the competing flow of information. While the vehicles move "to the right", i.e., downstream toward larger x, the unidirectional coupling acts "toward the left."
Depending on which velocity is larger, information and perturbations propagate either upstream only or in both directions. A fundamental difference from the equations studied in section 5.1 is that the system of differential equations (6.7) is second order, which will necessitate different techniques for a stability analysis (see below). The exact functional form of V_0(z) is rather arbitrary but must meet the following properties:

    (i)   V_0(z) > 0
    (ii)  dV_0(z)/dz > 0
    (iii) V_0(z → ∞) → V_max < ∞
    (iv)  V_0(z → 0) → 0
These requirements quantify our expectations that for small headways the velocity needs to be reduced to avoid a collision and that for longer headways the vehicle can move faster, although it is always limited by some maximum velocity V_max. Now any monotonic, smooth function that is bounded from below and above must have a general "S" shape. Here, we adopt the optimal velocity function used by Bando et al. [11],

    V_0(Δx) = (V_max/2) [ tanh(Δx - h_c) + tanh(h_c) ],    (6.8)
where h_c is a safety distance referred to as the critical headway. Figure 6.6 shows this velocity function for a typical set of parameters. Note that V_0(Δx) possesses an inflection point at Δx = h_c which, as we will see below, leads to bistability. Eq. (6.7) has the stationary solution

    x_n^s(t) = b·n + V_0(b)·t,    (6.9)
which represents the perfectly periodic arrangement of vehicles traveling at constant velocity V_0(b), spaced b length units apart. In order to find the conditions for absolute linear instability, we linearize the equations of motion and assume periodic boundary conditions b = L/N:

    x_n(t) = x_n^s(t) + ξ_n(t)  and  x_{N+1} = x_1,    (6.10)

    ⇒  ξ̈_n = a (γ · Δξ_n - ξ̇_n) + O(ξ²),    (6.11)

with γ = V_0'(b). A common method of solving linear, second order differential equations with periodic boundary conditions such as (6.10) is to
[Fig. 6.6 Optimal velocity as function of the headway Δx as defined in (6.8) for h_c = 5 and V_max = 2.]
expand ξ_n(t) into a Fourier series

    ξ_n(t) = Σ_{j=0}^{N-1} c_j ξ_j(n,t),    with    ξ_j(n,t) = e^{iλ_j n + ωt}.    (6.12)

Due to the presumed movement of the vehicles on a finite circle of length L = N·b, the wave numbers are constrained to λ_j = 2πj/N. It is easily seen that substituting ξ_j(n,t) into (6.11) yields the dispersion relation

    ω² = aγ (e^{iλ_j} - 1) - aω,    (6.13)

    2ω = -a ± √( a² + 4aγ(e^{iλ_j} - 1) ) = 2ω_j(λ).    (6.14)

Linear stability requires the real part of ω_j(λ) to be negative which — we leave it to the reader as an exercise — is true if and only if

    a > 2γ = 2V_0'(b).    (6.15)
The so-defined stability boundary a = 2V_0'(b) = V_max sech²(h_c - b) is plotted in Fig. 6.7 as the solid line.
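The exercise left to the reader above is also easy to check numerically from the dispersion relation (6.14). The sketch below (function name and mode count are our own choices) evaluates Re ω_j(λ) on both branches of (6.14) over all discrete wave numbers:

```python
import numpy as np

def growth_rates(a, gamma, n_modes=100):
    """Real parts of omega_j(lambda) for the Fourier modes lambda_j = 2*pi*j/N,
    from Eq. (6.14): 2*omega = -a +/- sqrt(a^2 + 4*a*gamma*(exp(i*lambda) - 1)),
    where gamma = V0'(b).  The j = 0 mode (a uniform shift) is skipped."""
    lam = 2 * np.pi * np.arange(1, n_modes) / n_modes
    disc = np.sqrt(a**2 + 4 * a * gamma * (np.exp(1j * lam) - 1))
    omega = np.concatenate([(-a + disc) / 2, (-a - disc) / 2])
    return omega.real

# Eq. (6.15): the uniform flow is linearly stable iff a > 2*gamma.
stable = growth_rates(2.5, 1.0).max() < 0      # a = 2.5 > 2*gamma = 2
unstable = growth_rates(1.5, 1.0).max() > 0    # a = 1.5 < 2*gamma = 2
```

For γ = V_0'(b) = 1, all growth rates are negative for a = 2.5 > 2γ, while modes with a positive real part appear for a = 1.5 < 2γ, in agreement with (6.15).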
[Fig. 6.7 The boundaries in the (a, b) parameter space separating the three stability regimes (linearly stable, convectively unstable, absolutely unstable). The solid line demarcates the linear stability curve a = 2V_0'(b) for V_max = 2 = h_c and the dashed line is a numerical approximation of Eq. (6.17).]
To derive the conditions for convective instability we have to consider the modified dispersion relation [109; 158; 195; 196]

    ω_M(λ) = ω(λ) - iλ V_0(b)/b.    (6.16)

The convective stability boundary a_c(b) is obtained by numerically solving

    dω_M(λ)/dλ |_{λ=λ_c} = 0,    Im[ω_M(λ_c)] = 0.    (6.17)

The corresponding curve can be seen in Fig. 6.7 as the dashed line separating the three stability regimes. The derivation of (6.17) is outlined in Appendix D. The first equation of (6.17) has solutions k_c± [109; 195],

    k_c± = -i Log(z_±) + 2πm,    m = 0, ±1, ±2, ...,    (6.18)
[Fig. 6.8 Spatiotemporal density plots illustrating the evolution of a single perturbation in Eq. (6.7). Darker regions correspond to higher densities of vehicles. For both figures we have L = 204, b = 2.0 and ε = 0.1. (a) a = 1.4: the system displays convective instability and the disturbance propagates upstream out of the system boundaries. (b) a = 1.0: in the absolutely unstable regime, the stochastic impulse is amplified and spreads in both time and space. Reproduced with permission from [196].]
    z_± = ( 2V_0²(b) / (ab² V_0'(b)) ) [ 1 ± √( 1 + (ab²/(4V_0²(b))) [a - 4V_0'(b)] ) ],    (6.19)

where Log stands for the principal value of the logarithm. For completeness, we would like to present numerical results from Refs. [109; 195; 196]. Figure 6.8 clearly demonstrates the different behavior for parameter settings in the convectively and absolutely unstable regions of Fig. 6.7, respectively. To be close to actual traffic dynamics, the authors employ the following boundary conditions [196]: (i) At the upper stream end (x = 0), cars with velocity V_0(b) are being fed into the system at constant time intervals b/V_0(b). (ii) The car furthest downstream obeys the equation of motion ẍ_N = a(V_0(b) - ẋ_N) and leaves the system at x = L.
The uniform solution,

    x_n(0) = b·n + x_0  and  ẋ_n(0) = V_0(b),  for n = 0, ±1, ±2, ...,    (6.20)

is taken as the initial condition for all but the center car, the velocity of which is shifted by a small perturbation, ẋ_0(0) = V_0(b) + ε. When a is between the critical value a_c(b) acquired from (6.17) and the linear stability boundary, a_c(b) < a < 2V_0'(b), the disturbance travels upstream only (Fig. 6.8a). While for a single perturbation the uniform state is eventually recovered, continuously applied microscopic noise would result in similar noise-sustained structures as described in section 5.1. Analogous to the convectively unstable systems studied in section 5.1, the linear growth of perturbations is limited by nonlinearities and results in finite structures. The rather regular stripe pattern visible in Fig. 6.8(b) is referred to as the oscillatory flow. A more detailed view of the spatial structure is provided in Fig. 6.9, which plots the headway b_n = x_{n+1} - x_n instead of the density of the actual position. Note the difference between the sequence of jam and free flow (-550 < n < -400), which appears oscillatory as well but simply alternates between the two limiting values of the headway and is not strictly periodic. The amplitude of the oscillatory flow (-360 < n < -310) is smaller and forms the transition between the linearly unstable uniform flow and the "stop and go" pattern. The interested reader can find more details in Refs. [109; 195].
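The qualitative behavior described above can be reproduced with a crude Euler integration of Eq. (6.7). In the sketch below, the time step, the number of cars and all function names are our own illustrative choices, while the physical parameters follow Fig. 6.8 (V_max = 2, h_c = 2, b = 2, ε = 0.1); a single velocity kick is applied to one car, and the strictly unidirectional coupling guarantees that cars ahead of it are never affected while the disturbance spreads upstream:

```python
import numpy as np

def V0(dx, vmax=2.0, hc=2.0):
    """Optimal velocity function, Eq. (6.8)."""
    return 0.5 * vmax * (np.tanh(dx - hc) + np.tanh(hc))

def run(a=1.0, b=2.0, ncars=60, kick=0.1, dt=0.02, steps=4000):
    """Euler integration of x_n'' = a [V0(x_{n+1} - x_n) - x_n'], Eq. (6.7).
    The last car is the leader and keeps its velocity; the middle car gets
    a velocity kick.  Returns the final headways and the kicked index."""
    x = b * np.arange(ncars).astype(float)
    v = np.full(ncars, V0(b))
    m = ncars // 2
    v[m] += kick                        # perturb the middle car
    for _ in range(steps):
        dx = x[1:] - x[:-1]             # headways of cars 0 .. ncars-2
        acc = a * (V0(dx) - v[:-1])     # followers react to the car ahead only
        v[:-1] += acc * dt
        x += v * dt
    return x[1:] - x[:-1], m

headway, m = run()                      # a = 1.0 lies in the unstable regime
```

After the run, the headways of all cars ahead of the kicked one are still exactly b, whereas upstream the perturbation has grown into a visible structure — the one-sided propagation underlying Fig. 6.8.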
6.2.2 Discrete time dynamics
In the remainder of this section, we will investigate the stability of the difference-equation traffic model described in [201]. The discretized version of Eq. (6.7), using the forward difference operator dx(t)/dt = [x(t+τ) - x(t)]/τ, becomes

    [ x_n(t+2τ) - 2x_n(t+τ) + x_n(t) ] / τ² = a [ V_0(Δx_n(t)) - (x_n(t+τ) - x_n(t))/τ ].    (6.21)

Before proceeding, it is vital to remember that there is a direct, inverse relationship between the sensitivity a and the time scales in the system. The higher the sensitivity a, the more quickly the local dynamics equilibrate; i.e., the faster an individual vehicle approaches its optimal velocity. Therefore, if one was only interested in the macroscopic properties of the traffic flow,
[Fig. 6.9 Snapshots of the headway configuration at t = 988 with a = 1.0, b = 2.0, ε = 0.1, and L = 10000 (filled circles). The dashed lines are drawn to aid the eye. (a) The oscillatory flow region appears around -360 < n < -310, followed by an alternating sequence of jams and free flows which is found in -550 < n < -400. (b) The magnification of the oscillatory flow region from (a). Reproduced with permission from [109; 195].]
it would not be entirely unreasonable to choose τ equal to the inverse sensitivity, τ = 1/a. In that case (6.21) reduces to

    x_n(t+2τ) = x_n(t+τ) + τ · V_0(Δx_n(t)).    (6.22)

This difference equation is computationally highly efficient and generates macroscopic traffic patterns such as density waves, jams and oscillatory and freely moving flow [200; 201]. Nagatani [201] does not derive a condition for convective instability but instead distinguishes between stable, metastable and unstable flow. Linearizing Eq. (6.22) around its steady state and expanding the deviation into its Fourier basis terms ξ_j(n,t) = exp(iλ_j n + ωt)
[Fig. 6.10 The boundaries in the (τ^{-1}, b) parameter space separating the three stability regimes. The solid line demarcates the linear stability curve τ^{-1} = 3V_0'(b) = (3/2) V_max sech²(h_c - b) for V_max = 2 = h_c. The dashed line is referred to as the coexisting curve given by Eq. (6.25).]
yields the following dispersion relation

    e^{ωτ} (e^{ωτ} - 1) = V_0'(b) τ (e^{iλ_j} - 1),    (6.23)

which leads to the neutral stability condition

    1/τ = 3V_0'(b),    (6.24)

which is also referred to as the spinodal line. The uniform traffic flow is therefore unstable if 1/τ < 3V_0'(b). A nonlinear analysis shows that the uniform state is metastable if 1/τ is less than a critical value 1/τ_c defined by the coexisting curve [201],

    1/τ_c = 3a_c [ (b - h_c)² + 1 ],    (6.25)

with a_c = V_max/2. In this metastable region, 1/τ_s < 1/τ < 1/τ_c, the system displays bistability and fluctuations above a threshold lead to density waves [200; 201]. The phase diagram described by the spinodal line and the coexisting curve is shown in Fig. 6.10. We can rewrite Eq. (6.22) in terms of the headway b_n:

    b_n(t+2τ) = b_n(t+τ) + τ [ V_0(b_{n+1}(t)) - V_0(b_n(t)) ],    (6.26)
[Fig. 6.11 Spatiotemporal evolution of the headway for different values of the average speed. The parameters a = 2.0, h_c = 5.0 and noise amplitude δ = 0.5 are the same for all four graphs; only the mean velocity v_b is changed (v_b = 1.7, 1.65, 1.5, 0.35). Reproduced with permission from [201].]
in which case the boundary condition translates to

    b_{N-1}(t+2τ) = b_{N-1}(t+τ) + τ [ v_N(t+τ) - V_0(b_{N-1}(t)) ].    (6.27)

In the following we are interested in the effects of fluctuations in the velocity of the leading car. We therefore let v_N(t) fluctuate randomly around a mean value v_b:

    v_N(t) = v_b + ξ_uniform(t).    (6.28)

The perturbations ξ_uniform(t) are assumed to be uncorrelated in time and uniformly distributed on the interval [-δ, δ]. The existence of metastable states in real traffic at intermediate volumes is intuitively understandable. Small local perturbations to the flow die away. Larger ones amplify into a jam. Congested regions can be pinned to the point of disturbance, for example extending back from an exit lane, or they can travel in waves upstream. The conclusions drawn from Fig. 6.11 are intriguing: on a one-lane highway, a single car within a stream of traffic can send waves of congestion propagating down the line behind it simply as a consequence of its speed variations, even if it maintains the same average speed as the rest of the flow! Fluctuations in the leading car's velocity can trigger a whole train of density waves, which move upstream like shock waves. The situation worsens as the average speed of the flow (which is the fluctuating car's average speed too) slows. First, the waves of congested traffic grow until the traffic is more like a fully congested flow with occasional regions of low-density traffic. Then, for still slower speeds, even these "free flow" regions vanish, and a jam of congested traffic propagates backwards behind the fluctuating vehicle. For more papers on the role of noise in traffic models, we refer the reader to Refs. [230; 252].
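A minimal iteration of the headway map (6.26)-(6.28) reproduces the mechanism behind Fig. 6.11. In the sketch below, the lattice size, run length, seed and function names are our own choices, while a = 2.0, h_c = 5.0 and δ = 0.5 follow Fig. 6.11; with δ = 0 the uniform flow persists exactly, whereas the fluctuating leader excites density waves:

```python
import numpy as np

def V0(b, vmax=2.0, hc=5.0):
    """Optimal velocity function, Eq. (6.8)."""
    return 0.5 * vmax * (np.tanh(b - hc) + np.tanh(hc))

def run(vb=1.5, delta=0.5, n=200, steps=500, a=2.0, seed=1):
    """Iterate the headway map (6.26) with the fluctuating-leader boundary
    condition (6.27)-(6.28).  tau = 1/a as in Eq. (6.22).  Returns the
    final headway configuration."""
    rng = np.random.default_rng(seed)
    tau = 1.0 / a
    # uniform headway consistent with the mean leader velocity: V0(bbar) = vb
    # (inverting (6.8) with vmax = 2, hc = 5)
    bbar = 5.0 + np.arctanh(vb - np.tanh(5.0))
    b_prev = np.full(n, bbar)
    b_curr = b_prev.copy()
    for _ in range(steps):
        vN = vb + rng.uniform(-delta, delta)        # Eq. (6.28)
        b_next = np.empty(n)
        b_next[:-1] = b_curr[:-1] + tau * (V0(b_prev[1:]) - V0(b_prev[:-1]))
        b_next[-1] = b_curr[-1] + tau * (vN - V0(b_prev[-1]))   # Eq. (6.27)
        b_prev, b_curr = b_curr, b_next
    return b_curr
```

For v_b = 1.5 the chosen parameters lie in the unstable regime of Fig. 6.10 (1/τ = 2 < 3V_0'(b̄) ≈ 2.25), so the leader's fluctuations seed bounded density waves, in line with the third panel of Fig. 6.11.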
6.3 Dithering
In Chapter 1 we qualitatively discussed the unique nature of the nonlinearity generated by thresholding. Here, we are going to reexamine the simplest threshold system possible,

    Q[x] = { 0  for x < 1/2,
             1  for x > 1/2,    (6.29)
in much more detail. The specific values for the threshold (1/2) and the binary output (0, 1) are completely arbitrary and irrelevant. This section is inspired by and partially reproduces the results from the paper by Gammaitoni [70], who has shown a deeper connection between the dithering effect and stochastic resonance in threshold systems. Dithering is a technique employed in analog to digital signal conversion [213] that minimizes distortion effects by adding a judiciously chosen amount of noise to the signal before quantization. In order to convert the analog signals found in real, physical processes into digital format, the original signal is usually sampled at discrete times and stored in finite length registers. The resulting information loss, due to the inherently finite precision of the latter, is proportional to the coarseness of the discretization. For clarity of illustration, we will consider only uniform, x-independent amplitude quantization steps of size b, which is taken to be 1 in the above example. Eq. (6.29) serves as a prototype for a simple digitization scheme which could be one component of a more realistic multithreshold quantization of the form

    y(x) = { ...,  -n for -n - 1/2 < x < -n + 1/2,  ...,  0 for -1/2 < x < 1/2,  ...,  n for n - 1/2 < x < n + 1/2,  ... }.    (6.30)
For a given x, the quantization error η = y - x depends on how far from its corresponding threshold x "happens to be". Hence, any general statement about the quality or information loss of the discretization schemes (6.29) or (6.30) necessarily will be of a statistical nature. Assuming all values for x to be equally likely, the error will be uniformly distributed around zero, η ∈ [-b/2, b/2]. For the following arguments, it will be enlightening to consider the inverse problem of estimating x from y, denoted by E[x|y] (to be read as "Estimate of x given y"). It is clear that, given a particular output y, we can deduce only the interval that contains the original value x; i.e., our resolution is exactly equal to the coarseness of the discretization grid. Dithering can be considered as a way to increase the statistical resolution at the cost of decreasing confidence about a particular sample. The latter part of this tradeoff is immediately
obvious: by adding noise to the signal before quantization, x̃(t) = x(t) + ξ(t), the previously unique and deterministic inference from the output y to a specific quantization interval becomes fuzzy and probabilistic. In fact, for noise distributions with "infinite tails", such as Gaussian noise, for a given x̃ there is a nonzero probability for x̃ to originate from an arbitrary quantization interval. The situation is less complex in the case of uniform noise, where E[x|y(x̃)] is nonzero only over a finite number of intervals. The beneficial role of noise is less apparent and can be utilized only if (i) one has the luxury of coherently averaging over many noise realizations ξ(t), AND (ii) the register storing this average has a higher precision/length than the one used for the discretization. By coherent we mean averaging only those y(x̃) that originate from the same x, which evidently assumes extra knowledge about the signal! And the second point is both significant and self-evident, since a higher precision can be achieved only if more digits are available for storing the average. If both conditions are met, then dithering can indeed substantially reduce the average error of quantization defined as [70]

    D = ∫_0^1 (⟨y⟩_ξ - x)² dx,    (6.31)

where the average ⟨y⟩_ξ is to be taken as the average output of y(x + ξ) over the noise ξ for fixed x. The results of digital simulations of Eqs. (6.29) and (6.31) are displayed in Fig. 6.12. The probability densities of Gaussian and uniform noise are

    p_g(x) = (1/(σ√(2π))) e^{-x²/(2σ²)}    and    p_u(x) = { 1/L  for -L/2 < x < L/2,
                                                            0    elsewhere,    (6.32)

respectively. Note that the variance of the uniform noise is σ_u² = (1/L) ∫_{-L/2}^{L/2} x² dx = L²/12. For small noise values, the results are almost identical for the two curves. For increasing noise intensities, however, the uniform noise achieves the optimum value D = 0, whereas the residual error for Gaussian noise always remains positive. The superior performance of the uniform noise in the narrow region around its optimum of L = 1 (σ = 1/√12) is somewhat offset at larger noise values, where D_Gauss remains consistently smaller. The results from Fig. 6.12 are consistent with the engineering literature [72],
[Fig. 6.12 Average error D as defined in Eq. (6.31) for Gaussian (crosses) and uniform (circles) noise. The solid and dashed lines correspond to the analytical results 6.37 and 6.36, respectively. To allow the two curves to appear on the same scale, the relation σ_u = L/√12 was used.]
where the best choice for the dither signal is proven to be a uniform random signal with amplitude equal to the quantization step. Analytical results for D are readily available via the probability densities:

    ⟨y_{g,u}(x)⟩ = ∫_{-∞}^{+∞} p_{g,u}(ξ) Q[x + ξ] dξ = ∫_{1/2 - x}^{∞} p_{g,u}(ξ) dξ,    (6.33)

where the last identity follows from the definition (6.29) of Q[x], which is nonzero only for x > 1/2. It follows that*

    ⟨y_u(x)⟩ = { 0                   for x < 1/2 - L/2,
                 (2x + L - 1)/(2L)   for 1/2 - L/2 ≤ x ≤ 1/2 + L/2,
                 1                   for x > 1/2 + L/2,    (6.34)

*There is a minor typographical error in Eq. (5) of [70], which actually does not change any of the main results: instead of ⟨y_u(x)⟩ = 0 for x > 1/2 + L/2, it should read ⟨y_u(x)⟩ = 1.
[Fig. 6.13 Averaged system output ⟨y_u⟩ for noise intensities (1) L = 0 (no noise), (2) L = 0.25, (3) L = 0.5, (4) L = 1 and (5) L = 1.5. The number of averages is N = 2000; the discretization of x to compute the integral in Eq. (6.31) was dx = 1/500. To aid the reader's eye, the identity ⟨y_u⟩ = x is drawn as a dotted line. The linearizing quality of the dither is evident.]
and

    ⟨y_g(x)⟩ = 1/2 - erf(1/2 - x),    with    erf(y) ≡ (1/(σ√(2π))) ∫_0^y e^{-t²/(2σ²)} dt.    (6.35)

Substituting (6.34) and (6.35) into Eq. (6.31) leads to

    D_u(L) = (1 - L)²/12  for L ≤ 1,    D_u(L) = (L - 1)²/(12L²)  for L > 1,    (6.36)

and

    D_g(σ) = ∫_0^1 [ 1/2 - erf(1/2 - x) - x ]² dx.    (6.37)
The theoretical predictions (6.36) and (6.37), drawn in Fig. 6.12, are in very good agreement with the digital simulations. It is worth pointing out that the dithering noise essentially linearizes the nonlinear step function (6.29). For the optimum value L = 1, the average output is the identity
⟨y_u⟩ = x according to Eq. (6.34), which results in zero error. For all finite noise values, the average output is essentially a straight line with slope 1/L and cutoff points (1 ± L)/2. Figure 6.13 demonstrates the linearizing effect of uniform noise on the threshold dynamics (6.29). The corresponding plots for Gaussian noise are similar but exhibit an "S-shape" and therefore cannot be exactly linear. In summary, we have shown that in situations where coherent averaging is feasible, the addition of uniform noise to a signal before quantization can significantly improve the inference from the discrete output Q[x] to the original continuous signal x. It is clear that the accuracy of the estimator E[x|⟨y⟩] increases with the number of averages N. As alluded to in Chapter 1, the noise serves to "explore" the hidden features/digits of the underlying value x. We refer the interested reader to the original reference [70] for more details, in particular on the connection to stochastic resonance in threshold systems.
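The central results of this section — D = 1/12 without dither, D ≈ 0 for uniform dither of width L = 1, and a positive residual error for Gaussian dither — can be checked with a direct Monte Carlo evaluation of Eq. (6.31). In the sketch below, the grid resolution, the number of averages and the function names are our own arbitrary choices:

```python
import numpy as np

def quantizer(x):
    """Single threshold quantizer Q[x] of Eq. (6.29): threshold 1/2, outputs 0/1."""
    return (x > 0.5).astype(float)

def avg_error(noise_draw, n_avg=20000, nx=201, seed=0):
    """Monte Carlo estimate of the average quantization error D, Eq. (6.31):
    for each x on a grid in [0, 1], average Q[x + xi] over n_avg dither
    realizations, then integrate the squared deviation from x."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, nx)
    xi = noise_draw(rng, (n_avg, nx))
    y_mean = quantizer(x[None, :] + xi).mean(axis=0)   # <y>_xi for each x
    return float(((y_mean - x) ** 2).mean())           # Riemann estimate of (6.31)

uniform = lambda L: (lambda rng, shape: rng.uniform(-L / 2, L / 2, shape))
gauss = lambda sig: (lambda rng, shape: rng.normal(0.0, sig, shape))

D_none = avg_error(lambda rng, shape: np.zeros(shape))   # no dither: D = 1/12
D_uni = avg_error(uniform(1.0))                          # optimal uniform dither
D_gauss = avg_error(gauss(1.0 / np.sqrt(12.0)))          # sigma = L/sqrt(12)
```

The estimates reproduce the three regimes of Fig. 6.12: D_none ≈ 1/12, D_uni ≈ 0, and D_gauss small but strictly positive.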
6.4 Noise in neural networks
Within the general context of "data fitting", one typically attempts to estimate a deterministic function h(x) by, e.g., a neural network parameterized by parameters θ. Let us denote the output of this network by y(x, θ) and assume that we are given N training data t_n. The main goal is usually to find those parameters θ_0 which minimize a cost function, such as the sum-of-squares error

    E(t, θ) = (1/2) Σ_{n=1}^{N} || y(x_n, θ) - t_n ||²,    (6.38)

    θ_0 = argmin_θ E(t, θ).    (6.39)

The argmin_θ operator returns the argument θ that minimizes the function it operates upon. Unfortunately, given enough free parameters (in general there is no one-to-one link between the actual number of parameters and the so-called model complexity: many neural networks have more parameters than training data, but inherent constraints sharply reduce the effective degrees of freedom), any function y(x_n, θ) can be "made" to approximate the training data arbitrarily well. If the model complexity is much larger than necessary to approximate h(x), the network is likely to be finely tuned to the noise in that particular
[Fig. 6.14 The sum of squares (SS) of the residuals versus model complexity illustrating the bias-variance tradeoff. The prediction error is minimized at some optimal model complexity. For this example, the function y(x, θ) is taken to be a polynomial, and the model complexity is its degree. The training and test data are 100 random Gaussian numbers of zero mean and unit variance respectively, and h(x) is a fourth order polynomial.]
realization of the training set and will not predict well on new data. This ubiquitous pitfall in statistical learning and regression is generally referred to as over-fitting or the bias-variance tradeoff [99]. The definitions of these terms require us to consider not just one specific training set but an ensemble of them. The bias is the average error of the networks, while the variance measures the variability in the network's outputs as a function of different realizations of training data (in spirit, this is reminiscent of the stock market, where a similar tradeoff exists between the risk, i.e., fluctuations or variance, and the return on investment). In general, both errors contribute to the prediction error in a conflicting fashion, leading to an optimal model complexity as illustrated in Fig. 6.14. The underlying problem is that the minimization of (6.38) can lead to arbitrary, wildly varying functions y(x_n, θ) since there are no explicit smoothness requirements. Techniques which attempt to avoid over-fitting and to constrain the model complexity are known as regularization methods. In their most general form a penalty
term proportional to a new parameter v is added to the error function (6.38) E = E + isfl
(6.40)
6V = arg min E(t, 0, v).
(6.41)
§,v
The penalty term fl is smaller the smoother the network output y(xn,6). The parameter v controls this compromise between fitting the training data and requiring smoothness. A common regularizer that is called weight decay in the neural network community and ridge regression in ordinary regression penalizes the amplitude of the coefficients
Ω = ½ Σ_i θ_i².     (6.42)
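A minimal sketch of the effect of this penalty (an aside with assumed example data, not from the text): for a linear-in-parameters model the regularized minimizer has the closed form θ̂ = (XᵀX + νI)⁻¹Xᵀt, which shrinks the coefficient amplitudes relative to ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example: degree-8 polynomial design matrix for noisy data.
x = rng.uniform(-1.0, 1.0, 25)
t = np.sin(3 * x) + 0.1 * rng.standard_normal(25)
X = np.vander(x, 9, increasing=True)

def ridge(X, t, nu):
    """Closed-form minimizer of ||X @ theta - t||^2 + nu * ||theta||^2."""
    return np.linalg.solve(X.T @ X + nu * np.eye(X.shape[1]), X.T @ t)

theta_ols = np.linalg.lstsq(X, t, rcond=None)[0]  # unregularized fit
theta_reg = ridge(X, t, 1e-2)                     # weight decay, nu > 0

# Weight decay shrinks the coefficient vector, as Eq. (6.42) intends.
assert np.linalg.norm(theta_reg) < np.linalg.norm(theta_ols)
print(np.linalg.norm(theta_ols), np.linalg.norm(theta_reg))
```

The shrinkage is strict for any ν > 0, which is exactly the smoothing compromise controlled by ν in (6.40).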
A more direct approach would be to penalize the curvature, and therefore the second derivative,

Ω = ½ Σ_{n=1}^{N} Σ_{i=1}^{d} ( ∂²y_n / ∂x_i² )²,     (6.43)

where d refers to the dimension of the vector x = (x₁, …, x_d) and y_n = y(x_n, θ).
A third approach, which is described in Chapter 9.3 of the outstanding book on neural networks by Bishop [18] and also in [19; 238], involves the addition of noise to the training vectors. Suppose we created an infinite number of surrogate data points by adding zero-mean noise ξ with covariance matrix σI to each training vector. The new cost function would then be

Ẽ(θ, σ) = ½ Σ_{n=1}^{N} ∫ ||y(x_n + ξ, θ) − t_n||² p(ξ) dξ,     (6.44)

where p(ξ) is the probability density function of the noise. One way to proceed is to Taylor-expand y in powers of ξ:

y_k(x_n + ξ) = y_k(x_n) + Σ_i ξ_i ∂y_k/∂x_i + ½ Σ_i Σ_j ξ_i ξ_j ∂²y_k/∂x_i∂x_j + O(ξ³).     (6.45)

Neglecting the higher-order terms and substituting (6.45) into (6.44) yields
Ẽ = ½ Σ_{n=1}^{N} Σ_k ∫ [ (y_k − t_{k,n})² + 2(y_k − t_{k,n})( Σ_i ξ_i y⁽¹⁾_{k,i} + ½ Σ_i Σ_j ξ_i ξ_j y⁽²⁾_{k,ij} ) + ( Σ_i ξ_i y⁽¹⁾_{k,i} )² + ⋯ ] p(ξ) dξ,     (6.46)

with the convenient abbreviations y⁽¹⁾_{k,i} = ∂y_k/∂x_i|_{ξ=0} and y⁽²⁾_{k,ij} = ∂²y_k/∂x_i∂x_j|_{ξ=0}. The individual components ξ_i of the multivariate noise ξ are assumed to be independent, ∫ ξ_i ξ_j p(ξ) dξ = σ δ_ij, and furthermore the mean is zero, ∫ ξ_i p(ξ) dξ = 0. As a direct consequence, integrating over ξ simplifies Eq. (6.46) significantly:
Ẽ = ½ Σ_{n=1}^{N} Σ_k [ (y_k − t_{k,n})² + σ Σ_i ( (y_k − t_{k,n}) y⁽²⁾_{k,ii} + (y⁽¹⁾_{k,i})² ) ].     (6.47)
Noting that the first term is equivalent to the original cost function (6.38), we can rewrite Eq. (6.47) in the form of (6.40),

Ẽ = E + σΩ,     (6.48)

with the regularization term

Ω = ½ Σ_{n=1}^{N} Σ_k Σ_i [ (y_k − t_{k,n}) y⁽²⁾_{k,ii} + (y⁽¹⁾_{k,i})² ].     (6.49)
In [18] it is shown that the term involving the second derivatives of y can be neglected to first order in σ. Summarizing, we have seen that the addition of the right amount of noise to the training data has a regularizing effect, i.e., it can prevent over-fitting. The penalty term involves the first and second derivatives of the "learned" function y(x, θ), which is reminiscent of the curvature-driven penalty (6.43). For large noise powers, higher-order terms would enter the expression for the regularization penalty Ω (6.49).
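The equivalence between training with input noise and the explicit penalty (6.47) is easy to check numerically. The sketch below is an illustrative aside (the one-dimensional model y(x) = tanh(x), the target, and the noise variance are assumed choices, not taken from the text): it compares a Monte Carlo average of the noisy cost (6.44) with the first-order prediction E + σΩ.

```python
import numpy as np

rng = np.random.default_rng(2)

# One training point (x, t) and a fixed "network" y(x) = tanh(x).
x, t, sigma = 0.4, 0.1, 1e-3          # sigma is the noise *variance*
y   = np.tanh(x)
dy  = 1.0 - y**2                      # y'(x)
d2y = -2.0 * y * dy                   # y''(x)

# Monte Carlo average of the noisy cost, Eq. (6.44).
xi = rng.normal(0.0, np.sqrt(sigma), 1_000_000)
E_noisy = 0.5 * np.mean((np.tanh(x + xi) - t) ** 2)

# First-order prediction, Eqs. (6.47)-(6.49): E + sigma * Omega.
E     = 0.5 * (y - t) ** 2
Omega = 0.5 * ((y - t) * d2y + dy**2)
E_pred = E + sigma * Omega

# The correction accounts for essentially all of the noise-induced shift.
assert abs(E_noisy - E_pred) < 5e-5
assert abs(E_noisy - E) > abs(E_noisy - E_pred)
print(E_noisy, E_pred)
```

For larger σ the agreement degrades, consistent with the remark above that higher-order terms then enter Ω.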
Chapter 7
Afterthoughts
Arriving at the "conclusion" is always a difficult and probing instant in the course of both reading and writing a book. The author faces the peril of questioning and doubting the consistency, completeness and uniformity of the just presented, necessarily narrow excerpt of a field as wide as stochastic phenomena. As with many personal works, at this point the temptation to "start all over" and rewrite entire chapters is at its greatest. The reader, on the other hand, is likely looking for guidance on how to most efficiently digest the huge amount of information already absorbed. The main challenge for the author is therefore to concentrate on simple lessons learned and to propose unsolved problems. We begin with stochastic resonance, which is, among all the topics discussed in this book, undoubtedly by far the most prominent one in the scientific literature. Most remarkable is the number of paradigm shifts and generalizations that have occurred since its original discovery more than twenty years ago. This is most easily seen by noticing the features that were once thought to be essential to the basic phenomenon but have since proven not to be: bistability, thresholding, periodicity of the input signal, and stochasticity of the added fluctuations. An attempt to reduce SR to its statistical essence was recently put forward by Bezrukov and Vodyanoy [16], whose work suggests that statistical, noise-facilitated information transmission is a much more general feature of nonlinear systems than previously assumed. This new paradigm gains particular momentum when viewed from a neurophysiological perspective. Surprisingly little is known about the role of noise in sensory neurons. Unlike most of the systems studied hitherto, the amplitude of the noise in these neurons is enormous: a typical signal-to-noise
ratio (SNR) is of the order of 0 dB [17]! The high efficiency and gigantic information-processing capability of neural networks seems even more astonishing given these levels of noise contamination. The only possible explanation must have its origin in the large number of coupled neurons and the resulting massively parallel signal processing. We pointed out in section 4.1 that the SNR of a single stochastic resonator can be significantly increased when coupled into an array. However, the fundamental limitation of stochastic resonance as a suboptimal signal processing technique cannot be overcome by this arrangement: it is obviously always far better to adjust the potential barriers (or thresholds) to the signal amplitude than to add noise! This hard fact might be the main reason for the lack of commercial signal processing applications of SR. The silver lining of this cloud could very well be the newly discovered suprathreshold stochastic resonance (SSR) [244; 245; 246]. This term was coined by Stocks, who showed that in a summing network of N threshold devices, the dynamic range of input signals over which added noise is beneficial extends to suprathreshold strengths. Each threshold device is subject to the same input signal x(t) but to independent Gaussian noise, η_i(t), with a common standard deviation σ_η. The crucial difference between SSR and previously studied summing threshold arrays [37] is that the thresholds Θ_i are individually adjustable. Consequently the number of free parameters is much higher, of the order of N. The performance of the network for arbitrary, not necessarily periodic signals x(t) is measured by the average mutual information I, which was defined in Eq. (2.4). We consider the case where the threshold levels are uniformly distributed between the limits ±1, analogous to Eq. (6.30), and where x(t) is a random uniformly distributed signal between the limits ±L.
A maximum in the information transmission is observed for nonzero added noise strength σ_η and large, out-of-range signals, L > 4. Of course, for small, subthreshold signals, "normal" SR, as described in section 6.3 on dithering, takes place. In this context, Stocks interprets SSR as the "large signal complement of classical SR" [245]. We stress that SSR only occurs in arrays, and wish to point out its inherent benefits and multifaceted optimality:
• SSR can achieve up to 50% of the theoretical capacity computed in the absence of noise.
• SSR occurs for arbitrary signal amplitudes; of course, the optimum amount of added noise depends on the signal strength.
• SSR performs better than a conventional analog-to-digital converter when noise and signal strength are of the same order.

An exciting open question is whether sensory neurons make use of SSR to enhance information transmission [1]. Another promising area of further research within stochastic resonance is quantum effects on fluctuations at microscopic scales. Little is known about, e.g., SR in arrays of coupled quantum systems, or even just the quantum mechanical analog of simple threshold SR. The role of fluctuations in minority games has been cautiously explored in section 6.1. However, there is no general framework to treat noise in game theory, and the author expects many more interesting effects to be discovered. One recent example is Parrondo's paradox, named after the physicist Juan Parrondo [97]. The original Parrondo's games constitute a beautiful bridge between game theory and Brownian ratchets, as highlighted in Refs. [1; 97; 98]. The paradox arises when combining two losing games subsequently results in a winning strategy. The setup of these games is physically motivated by the flashing ratchet described in section 5.5. For the reader's convenience, we briefly summarize the most popularized version of the game. At time step n, each player j maintains an integer capital X_j(n), which decreases (increases) by 1 when he or she loses (wins). Game A is nothing more than the tossing of a weighted coin that has probability p of winning. Game B can be summarized as follows: if the present capital X_j(n) is a multiple of M, the chance of winning is p₁; otherwise it is p₂. For the initial analysis, we can reduce the number of free parameters to one by introducing the biasing parameter ε, e.g. by

p = ½ − ε,   p₁ = 1/10 − ε,     (7.1)

p₂ = ¾ − ε,     (7.2)
which in combination with M = 3 happens to be the original formulation [97]. For ε = 0 both games are fair; for ε > 0 both games A and B lose. Figure 7.1 displays the average cumulative gain over 100 played games when playing games A and B, as well as the effect of different switching patterns. The average finishing capital depends on the chosen switching
Fig. 7.1 Parrondo's paradox: When playing games A and B individually, the player loses on average. However, switching between the games (playing game A a times, game B b times, and so on) results in an average winning strategy. The values of a and b are shown by the vectors [a, b] on the right axis. In the case labelled "random", the player switches between the games at random. Other parameters are M = 3, ε = 0.005, and the total number of averages equals 10⁴.
sequence. The analogy between the flashing ratchet and Parrondo's games becomes clearer when studying the statistical properties of game B alone. As noted in Ref. [98], the capital tends to localize somewhere between the k·M − 1 and k·M states for some integer k, which can be seen as follows. For X(n) = k·M − 1, the capital will increase to X(n+1) = k·M with the relatively high probability p₂. However, at that state, the probability 1 − p₁ that the capital will be decreased back to X(n+2) = k·M − 1 is even larger, due to the particular choices of p₁ and p₂. This skewed distribution for game B is shown in Fig. 7.2. The dominance of the peaks at 0, −1 and 2, which correspond to k·M, k·M − 1 and (k+1)·M − 1 respectively, is evident. This localization of capital at the k·M "ceilings" is analogous to the particle localization inside the potential wells in Fig. 5.24. Including game A, which by itself simply generates a normal distribution, corresponds to switching off the localization "potential" and allows the capital to move upwards to the next ceiling, k·M → (k+1)·M, about half the time. Of course, about
Fig. 7.2 Distribution of the modulus of X(100) for game B. The localization described in the text manifests itself in the strong peaks at 0, −1 and 2, which correspond to k·M, k·M − 1 and (k+1)·M − 1 respectively, and in the suppressed occurrence of 1 and −2. The five bars add up to 5000, the total number of game sequences. Same parameters as in Fig. 7.1.
half the capital would have moved downwards, k·M → (k−1)·M, but within the next game-B phase this capital would be forced back up to the k·M ceilings [98]. The resulting net gain is similar to the reverse-gravity motion of the particles exposed to the flashing ratchet potential. In closing, I wish to express my sincere hope that, if nothing else, this book will be immensely useful for the kind of person I was ten years ago. I had always felt the need for a "one-stop reading", alleviating the painful path of collecting hundreds of articles and attempting to extract the essentials. I hope for this book to serve both as a quick look-up reference and as a thorough tutorial when read slowly. If, in addition to that modest goal, the reader might feel inspired to go ahead and pursue his or her own research on noise-sustained and induced patterns, I would be extremely pleased.
Appendix A
Normal Matrices
Due to their importance, we offer a brief summary of the main properties of normal matrices. The subsequent propositions will also prove the equivalence between the requirements (2.23) and (2.36) for immediate decay of the solutions and the normality of the Jacobian matrices. Note that the proofs exploit proposition 2.4, which states that A being normal is equivalent to the existence of a unitary matrix U and a diagonal Λ = diag(λ₁, …, λ_N) such that A = U^H Λ U, as well as the trivial identity* A^H = [U^H Λ U]^H = U^H Λ̄ U.

Property A.1 A square matrix A with eigenvalues λ₁, …, λ_N is normal if and only if the eigenvalues of A + A^H are λ₁ + λ̄₁, …, λ_N + λ̄_N.

Proof.

A is normal ⇔ A = U^H Λ U ⇔ A + A^H = U^H Λ U + U^H Λ̄ U = U^H diag(λ₁ + λ̄₁, …, λ_N + λ̄_N) U.     (A.1)

This concludes the proof. ∎
Hence, if J has negative eigenvalues and is normal, then the matrix Jᵀ + J in Eq. (2.23) is also negative definite, which is necessary and sufficient for the monotonic decay of the modulus of u(t).

*Ā denotes the complex conjugate of A. For diagonal matrices, it is clear that Λ^H = Λ̄.
Property A.2 A square matrix A with eigenvalues λ₁, …, λ_N is normal if and only if the eigenvalues of AA^H are |λ₁|², …, |λ_N|².

Proof.

A is normal ⇔ A = U^H Λ U ⇔ AA^H = U^H Λ U U^H Λ̄ U.     (A.2)

The center terms involving the unitary matrix U cancel, U⁻¹ = U^H:

⇔ AA^H = U^H Λ Λ̄ U = U^H diag(|λ₁|², …, |λ_N|²) U.     (A.3)

This concludes the proof. ∎
Hence, if J has negative eigenvalues and is normal, then the matrix JᵀJ − I in Eq. (2.36) is also negative definite, which is necessary and sufficient for the monotonic decay of the modulus of u(t). All in all, properties A.1 and A.2 immediately make clear the connection between the monotonic decay of u(t) in Eqs. (2.23) and (2.36) and the normality of the respective Jacobian.

Property A.3 A square matrix A is normal if and only if there exists a polynomial p(x) such that A^H = p(A).

Proof. It is important to realize that, due to (2.33), for an arbitrary polynomial p(x) we have p(A) = p(U^H Λ U) = U^H p(Λ) U. Therefore, we need the unique polynomial which converts the diagonal matrix Λ into its conjugate Λ̄, such that p(A) = U^H Λ̄ U = A^H:

p(λ_i) = λ̄_i, for i = 1, …, N.     (A.4)

By the fundamental theorem of polynomial interpolation, we can always find a unique polynomial of degree ≤ N − 1 that takes on the values λ̄_i at the points λ_i. This concludes the proof. ∎
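Properties A.1 and A.2 are easy to check numerically. The sketch below is an illustrative aside (the particular normal matrix, a real circulant, is an arbitrary choice): it verifies both eigenvalue statements for a normal matrix and shows that Property A.2 fails for a generic non-normal one.

```python
import numpy as np

rng = np.random.default_rng(4)

# A real circulant matrix is normal: it commutes with its transpose.
c = rng.standard_normal(4)
A = np.array([[c[(j - i) % 4] for j in range(4)] for i in range(4)])
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

lam = np.linalg.eigvals(A)

# Property A.1: eigenvalues of A + A^H are lambda_i + conj(lambda_i).
assert np.allclose(np.sort(np.linalg.eigvalsh(A + A.conj().T)),
                   np.sort((lam + lam.conj()).real))

# Property A.2: eigenvalues of A A^H are |lambda_i|^2.
assert np.allclose(np.sort(np.linalg.eigvalsh(A @ A.conj().T)),
                   np.sort(np.abs(lam) ** 2))

# For a generic (non-normal) matrix, Property A.2 fails.
B = rng.standard_normal((4, 4))
mu = np.linalg.eigvals(B)
assert not np.allclose(np.sort(np.linalg.eigvalsh(B @ B.conj().T)),
                       np.sort(np.abs(mu) ** 2))
```

The last assertion reflects the familiar fact that for non-normal matrices the singular values differ from the moduli of the eigenvalues.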
Appendix B
Integrating Colored-Noise, Coupled SDEs
For the case of colored noise, Fox [65] has proposed a true second-order algorithm for the numerical integration of stochastic differential equations (SDEs). In this section we briefly summarize the findings from [65] and outline their implementation for the case of coupled elements. A colored-noise SDE can generally be written as

ẋ = f(x) + g(x) ε(t),     (B.1)

ε̇ = −λε + λ ξ_w(t),     (B.2)

where ε(t) is the colored noise, λ = τ⁻¹ is its inverse correlation time, and ξ_w(t) is Gaussian white noise of variance D:

⟨ξ_w(t)⟩ = 0, and ⟨ξ_w(t) ξ_w(t′)⟩ = 2D δ(t − t′).     (B.3)
For simplicity we only consider constant g(x) = g; the reader is referred to Ref. [65] for a treatment of the general case. Integrating and expanding (B.1) results in the evolution equation for x(t):

x(t + Δt) = x(t) + Δt f_t⁽⁰⁾ + ½(Δt)² f_t⁽⁰⁾ f_t⁽¹⁾ + g { Γ₀(t + Δt) − Γ₀(t) + f_t⁽¹⁾ [Γ₁(t + Δt) − Γ₁(t) − Δt Γ₀(t)] },     (B.4)

plus terms of order (Δt)³. Here, we have defined

Γ₀(t + Δt) − Γ₀(t) = ∫_t^{t+Δt} ε(s) ds     (B.5)

and

Γ₁(t + Δt) − Γ₁(t) = ∫_t^{t+Δt} Γ₀(s) ds.     (B.6)
Defining the three Gaussian random numbers G₀(t, Δt), G₁(t, Δt) and G₂(t, Δt) by

G₀(t, Δt) = λ ∫_t^{t+Δt} exp[−λ(t + Δt − s)] ξ_w(s) ds,     (B.7)

G₁(t, Δt) = ∫_t^{t+Δt} {1 − exp[−λ(t + Δt − s)]} ξ_w(s) ds,     (B.8)

and

G₂(t, Δt) = (1/λ) ∫_t^{t+Δt} {λ(t + Δt − s) + exp[−λ(t + Δt − s)] − 1} ξ_w(s) ds,     (B.9)
we can explicitly solve Eqs. (B.5), (B.6) and (B.2) as follows:

ε(t + Δt) = exp(−λΔt) ε(t) + G₀(t, Δt),     (B.10)

Γ₀(t + Δt) = Γ₀(t) + (1/λ)(1 − exp(−λΔt)) ε(t) + G₁(t, Δt),     (B.11)

and

Γ₁(t + Δt) = Γ₁(t) + Δt Γ₀(t) + (1/λ²)(λΔt + exp(−λΔt) − 1) ε(t) + G₂(t, Δt).     (B.12)
It turns out that the three numbers G_i are not entirely independent. In fact we can generate them from two independent Gaussian random numbers Ψ₁ and Ψ₂ with zero means and unit variances [65] in a linear fashion:

G₀ = a √⟨G₀²⟩ Ψ₁ + b √⟨G₀²⟩ Ψ₂,     (B.13)

G₁ = c √⟨G₁²⟩ Ψ₁ + d √⟨G₁²⟩ Ψ₂,     (B.14)

G₂ = e √⟨G₂²⟩ Ψ₁ + f √⟨G₂²⟩ Ψ₂.     (B.15)

Solving for the coefficients a, …, f yields†

a = 1,   b = 0,

c = ⟨G₀G₁⟩ / [⟨G₀²⟩⟨G₁²⟩]^{1/2},   d = [1 − ⟨G₀G₁⟩² / (⟨G₀²⟩⟨G₁²⟩)]^{1/2},

e = ⟨G₀G₂⟩ / [⟨G₀²⟩⟨G₂²⟩]^{1/2},

f = [ ⟨G₁G₂⟩ / (⟨G₁²⟩⟨G₂²⟩)^{1/2} − ⟨G₀G₁⟩⟨G₀G₂⟩ / (⟨G₀²⟩ (⟨G₁²⟩⟨G₂²⟩)^{1/2}) ] · [1 − ⟨G₀G₁⟩² / (⟨G₀²⟩⟨G₁²⟩)]^{−1/2}.
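The exact update (B.10) already gives a practical way to generate the colored noise itself: from (B.7) and (B.3) one finds ⟨G₀²⟩ = Dλ(1 − e^{−2λΔt}), so ε(t) can be advanced without time-step error. The sketch below (an illustrative aside with arbitrary parameter values, derived from the equations above rather than quoted from [65]) checks the stationary variance ⟨ε²⟩ = Dλ and the exponential decay of the correlation over one correlation time 1/λ.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, D, dt, n = 2.0, 0.5, 0.01, 2_000_000

# Exact one-step update, Eq. (B.10): eps(t+dt) = e^{-lam dt} eps(t) + G0,
# with <G0^2> = D*lam*(1 - exp(-2*lam*dt)) following from (B.7) and (B.3).
decay = np.exp(-lam * dt)
g0_std = np.sqrt(D * lam * (1.0 - decay**2))

eps = np.empty(n)
eps[0] = 0.0
kicks = g0_std * rng.standard_normal(n - 1)
for i in range(n - 1):
    eps[i + 1] = decay * eps[i] + kicks[i]

eps = eps[n // 10:]                       # discard the initial transient
var = np.mean(eps**2)                     # stationary variance: D * lam
k = int(1.0 / (lam * dt))                 # lag of one correlation time
corr = np.mean(eps[:-k] * eps[k:]) / var  # should be close to e^{-1}

assert abs(var - D * lam) < 0.05 * D * lam
assert abs(corr - np.exp(-1.0)) < 0.05
print(var, corr)
```

The same moments ⟨G_i G_j⟩ feed the coefficients a, …, f above when the full second-order scheme is implemented.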
B.1 Exploiting symmetries of coupled differential equations

As pointed out in section 2.6, any ordinary (ODE) or stochastic (SDE) differential equation of degree N can always be reduced to the study of N first-order differential equations. For example, the 2nd-order SDE

ẍ = f(x, ẋ) + g ε(t)     (B.16)

can be rewritten as

ẋ₁ = x₂,     (B.17)

ẋ₂ = f(x₁, x₂) + g ε(t).     (B.18)

Here we show that, because of the symmetric arrangement of N linearly and diffusively coupled 2nd-order ODEs, the algorithm developed in [65] is modified in the following manner: we simply have to add the coupling term to the first-order (∼ Δt) term AND add a "velocity coupling term" to the 2nd-order (∼ (Δt)²) evolution term for the momentum. The evolution

†There is a minor typographical error in the original reference [65]. The numerator in the expression for d should contain the term ⟨G₀G₁⟩² instead of ⟨G₀G₂⟩² as given in Eq. (39) of [65].
equation for N coupled 2nd-order systems is

ẋ_i = f_i(x) + g_i ε_i(t),  i = 1, …, 2N,     (B.19)

with

f_i = x_{i+1}  and  g_i = 0  for i odd,
f_i = F_i(x_{i−1}, x_i) + Δ_i  and  g_i = g  for i even,

where Δ_i = Δ(x_{i−3}, x_{i−1}, x_{i+1}) = e(2x_{i−1} − x_{i−3} − x_{i+1}) is the diffusive coupling operator (later on referred to as Δ_x), and F(·) represents the main dynamics of a single element.
The derivatives are given by

f_{i,j} = δ_{i+1,j}  for i odd,

and, for i even,

f_{i,j} = F_{i,i}  for j = i,
f_{i,j} = F_{i,i−1} + Δ_{i,i−1} = F_{i,i−1} + 2e  for j = i − 1,
f_{i,j} = Δ_{i,i−3} = −e  for j = i − 3,
f_{i,j} = Δ_{i,i+1} = −e  for j = i + 1.
Therefore:

Σ_j f_{i,j} g_j [Γ₁^j(t + Δt) − Γ₁^j(t) − Δt Γ₀^j(t)] =
  Γ₁^{i+1}(t + Δt) − Γ₁^{i+1}(t) − Δt Γ₀^{i+1}(t)  for i odd,
  −γ [Γ₁^i(t + Δt) − Γ₁^i(t) − Δt Γ₀^i(t)]  for i even,
and:

Σ_j f_{i,j} f_j =
  f_{i+1}  for i odd,
  F_{i,i} f_i + F_{i,i−1} f_{i−1} + Δ_{i,i−1} f_{i−1} + Δ_{i,i−3} f_{i−3} + Δ_{i,i+1} f_{i+1}  for i even
=
  f_{i+1}  for i odd,
  F_{i,i} f_i + F_{i,i−1} f_{i−1} + e(2f_{i−1} − f_{i−3} − f_{i+1})  for i even
=
  f_{i+1}  for i odd,
  F_{i,i} f_i + F_{i,i−1} f_{i−1} + e(2x_i − x_{i−2} − x_{i+2})  for i even.

Thus, the 2nd-order (∼ (Δt)²) term is supplemented by a "velocity coupling" term Δ_v = e(2x_i − x_{i−2} − x_{i+2}). Further:
g_i [Γ₀^i(t + Δt) − Γ₀^i(t)] =
  0  for i odd,
  Γ₀^i(t + Δt) − Γ₀^i(t)  for i even.

The 2nd derivative of f (for generality, though of no concern for this algorithm, since g is constant) is given by

f_{i,jk} = 0  for i odd,

and, for i even,

f_{i,jk} = F_{i,ii}  for j = i = k,
f_{i,jk} = F_{i,i−1i−1}  for j = i − 1 = k,
f_{i,jk} = 0  otherwise.
All in all, the algorithm defined by Eq. (57) in [65] reduces to (the upper indexing scheme α, β is changed into lower indexing i, j):

x_i(t + Δt) = x_i(t) + Δt f_i + ½(Δt)² Σ_j f_{i,j} f_j + g_i [Γ₀^i(t + Δt) − Γ₀^i(t)] + Σ_j f_{i,j} g_j [Γ₁^j(t + Δt) − Γ₁^j(t) − Δt Γ₀^j(t)].     (B.20)
For odd i:

x_i(t + Δt) = x_i(t) + Δt f_i + ½(Δt)² f_{i+1} + Γ₁^{i+1}(t + Δt) − Γ₁^{i+1}(t) − Δt Γ₀^{i+1}(t)
 = x_i(t) + Δt x_{i+1} + ½(Δt)² [F_{i+1}(x_i, x_{i+1}) + Δ_{i+1}(x_{i−2}, x_i, x_{i+2})] + Γ₁^{i+1}(t + Δt) − Γ₁^{i+1}(t) − Δt Γ₀^{i+1}(t),     (B.21)

and for even i:

x_i(t + Δt) = x_i(t) + Δt f_i + ½(Δt)² [F_{i,i} f_i + F_{i,i−1} f_{i−1} + e(2x_i − x_{i−2} − x_{i+2})] + Γ₀^i(t + Δt) − Γ₀^i(t) + F_{i,i} [Γ₁^i(t + Δt) − Γ₁^i(t) − Δt Γ₀^i(t)].     (B.22)
B.2 Coupled Duffing oscillators

In the case of the Duffing oscillator, we can substitute

F_i(x_{i−1}, x_i) = −γ x_i − α x_{i−1} − β x³_{i−1} − J sin(k x_{i−1}) + A + B cos(ωt)  for even i,
F_{i+1}(x_i, x_{i+1}) = −γ x_{i+1} − α x_i − β x_i³ − J sin(k x_i) + A + B cos(ωt)  for odd i,

with the derivatives

F_{i,i} = −γ,   F_{i,ii} = 0,
F_{i,i−1} = −α − 3β x²_{i−1} − J k cos(k x_{i−1}),
F_{i,i−1i−1} = −6β x_{i−1} + J k² sin(k x_{i−1}).
+ -{At)2
+ A + Bcos(ujt) +
[-~/xi+i - axi - /3xf Ai+1(xi^2,xl,Xi+2)]
+ r\+\t + At) - r\+l(t) - Atr+l{t)
Jsm(kxi)
B.2.
Coupled Duffing oscillators
=» Xi(t + At) = Xiit) + Atxi+1
+ -(At)2
211
[F + Ax]
+T\+1 (t + At) - T\+1(t) - Atr0+1 (t)
(B.23)
and for even i: Xi(t + At) = Xi(t) + At/, + - ( A t ) 2 [ - 7 / z + {-a - 3f3x2_i -Jk cos(kxi^i))xl
+ e(2xt - x{-2 ~ xi+2)}
+r0(t + At) - r* (t) -
7
[ri(t + At) - r*(*) - Ati« (*)]
=> Xi(t + At) = Xi{t) + At/, + -(At) 2 [~1fl + Fi^-Hi + Ay] + r0(t + At) - r0(t) -
7
\T\(t + At) - rut) - Atr^(t)] (B.24)
Appendix C

Numerical Implementation of the FFT
This Appendix serves as a highly practical supplement to section 4.3.1. The reader should not expect any real science or deep insights. Nonetheless, the value and potential time/agony savings for the actual coding of the two-dimensional Fourier transform, which is required to implement the granular flow model from section 4.3.1, should not be underrated. While the intention is to remain general, we will address the specific conventions used in the routines realft() and realft3() from the outstanding book on numerical recipes [221]. While a basic understanding of the Fast Fourier Transform (FFT) and its numerical implementations is not strictly necessary, some readers might find it helpful to refresh their memory at this time. For simplicity, we assume that the number of data points N is a power of 2. It is worthwhile to start with the one-dimensional case. As the FFT is an invertible operation, N (complex) input values return N (complex) output values in frequency space.

Definition C.1 The discrete Fourier transform (DFT) H_n of the N points x_k is defined as

H_n = Σ_{k=0}^{N−1} x_k exp(2πikn/N),  n = 0, …, N − 1.     (C.1)
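As a quick sanity check on sign and normalization conventions (an aside, not part of the original text): with the positive exponent used in (C.1), the coefficients H_n coincide with NumPy's inverse transform scaled by N, since NumPy's forward fft uses the opposite exponent sign.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 8
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Direct evaluation of Eq. (C.1): H_n = sum_k x_k exp(+2*pi*i*k*n/N).
k = np.arange(N)
H = np.array([np.sum(x * np.exp(2j * np.pi * k * n / N)) for n in range(N)])

# numpy.fft.ifft uses exp(+...) with a 1/N factor, so H == N * ifft(x).
assert np.allclose(H, N * np.fft.ifft(x))

# For real input, H_{N-n} is the complex conjugate of H_n.
xr = rng.standard_normal(N)
Hr = N * np.fft.ifft(xr)
assert np.allclose(Hr[1:], Hr[1:][::-1].conj())
```

The second assertion is the conjugate symmetry exploited below to halve the storage for real data.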
It is easily seen that H_{N−n} = H_{−n} for n ≠ 0, which justifies the indexing n = 0, …, N − 1. For equally spaced sampling (in time or space), Nyquist's theorem states that the highest resolvable frequency is half of the inverse
sampling interval Δ:

f_c = 1/(2Δ).     (C.2)

This "cutoff frequency" is usually referred to as the critical Nyquist frequency. As a consequence, the frequency range is determined by the sampling frequency and extends from −f_c to f_c. Further, we can compute the frequency resolution df by "fitting" N + 1 equally spaced data points into the interval (−f_c, f_c):

f_n = n/(NΔ),  n = −N/2, …, N/2;   df = 1/(NΔ).     (C.3)
Note that at first glance it appears as if we are violating the one-to-one mapping from x_k to H_n by trying to compute the transform at N + 1 frequencies. However, due to the periodicity of H_n, there is no distinction between the positive and the negative Nyquist frequency! All in all, in the notation of (C.1) the critical frequency f_c (= −f_c) corresponds to H_{N/2}, the non-negative frequencies f_n = n/(NΔ), n = 0, …, N/2 − 1, trivially correspond to H_n, 0 ≤ n ≤ N/2 − 1, and the N/2 − 1 negative frequencies f_n = n/(NΔ), n = −N/2 + 1, …, −1, are stored in H_n, N/2 + 1 ≤ n ≤ N − 1. The FFT is an elegant method to compute the discrete Fourier transform (C.1) which utilizes certain symmetries in case N is a power of 2, and does so in O(N log₂ N) operations.* There exist countless numerical implementations of the FFT; here we follow the algorithm put forward in [221]. So far so good. Now, in numerous applications the input data are real numbers, which can be handled in two ways. The easier but less efficient approach would be to treat the real numbers as a complex array with zero imaginary parts and proceed as normal. A computationally more parsimonious algorithm would exploit the following properties.
(i) For real x_k the Fourier coefficients for negative frequencies are simply the complex conjugates of the corresponding positive-frequency coefficients: H_{N−n} = H̄_n, n = 1, …, N/2 − 1.
(ii) For real x_k the Fourier coefficients at zero and at the Nyquist frequency are real: Im(H₀) = Im(H_{N/2}) = 0.
It is therefore possible to store the DFT of N real numbers in an array of N/2 complex values. The routine realft() in [221] achieves exactly that by

*Until the mid-1960s the discrete FT was considered a regular matrix multiplication that would require O(N²) operations.
storing the positive frequencies in H_n for n = 1, …, N/2 − 1 and the zero and Nyquist frequencies in the real and imaginary part of H₀, respectively. In the two-dimensional case,

H_{n₁,n₂} = Σ_{k₁=0}^{N₁−1} Σ_{k₂=0}^{N₂−1} x_{k₁,k₂} exp(2πik₁n₁/N₁) exp(2πik₂n₂/N₂),

where n₁ = 0, …, N₁ − 1 and n₂ = 0, …, N₂ − 1, this rearranging of coefficients would be extraordinarily complicated. Instead, for the small price of introducing an extra array speq[ ] [221], which stores the Fourier components at ½N₂ + 1, the ordering of the Fourier coefficients after the first transform is easy enough for the subsequent one to be straightforward. We note that the array speq[ ] needs to store N₁ complex numbers, so that its physical dimension is 2N₁. The original real array comes back as a complex array of dimensions N₁ × (N₂/2 + 1), or physical dimensions N₁ × (N₂ + 2). Due to the additional array speq[ ] and the fact that the C language does not have a built-in complex data type, the exact correspondence between the actual frequencies and the indices of the output array can become somewhat confusing. For the reader's convenience we offer a table that shows the precise ordering of the returned coefficients data[ ][ ] and speq[ ] from the routine realft3() in [221]§. The author will be more than happy to assist with remaining coding problems and willingly share his own code if requested to do so.
§Note that the Fourier coefficients are returned in the same array data[ ][ ] that is used for the input array x_{ij}. Also, realft3() is written for 3-dimensional input data. In Table C.1 we would simply add another index before the two existing ones, which would correspond to a third zero frequency.
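For comparison, modern libraries hide this packing. The sketch below (NumPy; an aside on conventions, not part of the original text — note NumPy uses the opposite exponent sign to (C.1)) shows the half-spectrum of a real 2-D array as a genuine complex array of logical shape N₁ × (N₂/2 + 1), including the second-axis Nyquist column that, per the discussion above, realft3() keeps in the separate speq[ ] array.

```python
import numpy as np

rng = np.random.default_rng(7)
N1, N2 = 8, 16
x = rng.standard_normal((N1, N2))

H = np.fft.rfft2(x)                  # half-spectrum of a real 2-D array
assert H.shape == (N1, N2 // 2 + 1)  # logical dimensions N1 x (N2/2 + 1)

# The half-spectrum is just the first N2/2 + 1 columns of the full FFT.
full = np.fft.fft2(x)
assert np.allclose(H, full[:, : N2 // 2 + 1])

# Zero and Nyquist columns are conjugate-symmetric along the first axis.
for col in (0, N2 // 2):
    assert np.allclose(H[1:, col], H[1:, col][::-1].conj())

# The inverse needs the original second dimension to be unambiguous.
assert np.allclose(np.fft.irfft2(H, s=(N1, N2)), x)
```

The conjugate symmetries of the zero and Nyquist columns are exactly the redundancies that the C packing scheme below exploits.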
Table C.1 Ordering of the coefficients for the two-dimensional version of realft3() in [221]. The highest frequency π corresponds to the Nyquist frequency; real and imaginary parts occupy adjacent array elements.

element              f₁                   f₂                   part
data[1][1]           0                    0                    (real)
data[1][2]           0                    0                    (imag)
data[1][3]           0                    2π/N₂                (real)
data[1][4]           0                    2π/N₂                (imag)
…                    …                    …                    …
data[1][N₂−1]        0                    2π(N₂/2−1)/N₂        (real)
data[1][N₂]          0                    2π(N₂/2−1)/N₂        (imag)
speq[1]              0                    π                    (real)
speq[2]              0                    π                    (imag)
data[2][1]           2π/N₁                0                    (real)
data[2][2]           2π/N₁                0                    (imag)
…                    …                    …                    …
data[2][N₂−1]        2π/N₁                2π(N₂/2−1)/N₂        (real)
data[2][N₂]          2π/N₁                2π(N₂/2−1)/N₂        (imag)
speq[3]              2π/N₁                π                    (real)
speq[4]              2π/N₁                π                    (imag)
…                    …                    …                    …
data[N₁/2+1][1]      π                    0                    (real)
data[N₁/2+1][2]      π                    0                    (imag)
…                    …                    …                    …
speq[N₁+1]           π                    π                    (real)
speq[N₁+2]           π                    π                    (imag)
data[N₁/2+2][1]      −2π(N₁/2−1)/N₁       0                    (real)
data[N₁/2+2][2]      −2π(N₁/2−1)/N₁       0                    (imag)
…                    …                    …                    …
data[N₁][1]          −2π/N₁               0                    (real)
data[N₁][2]          −2π/N₁               0                    (imag)
…                    …                    …                    …
speq[2N₁−1]          −2π/N₁               π                    (real)
speq[2N₁]            −2π/N₁               π                    (imag)
Appendix D
Absolute and Convective Instabilities
The purpose of this chapter is mainly to derive Eq. (6.17). The analytical treatment presented here follows Chapters 34 and 62 in Ref. [158]. We first expand the perturbation from the uniform state into a Fourier integral with respect to space:

ξ_n(t) = ∫ e^{ikn} ξ_k(t) dk/(2π).     (D.1)

Inserting exp(ikn) ξ_k(t) into Eq. (6.11) yields the following linear evolution equation for ξ_k(t):

d²ξ_k(t)/dt² − a dξ_k(t)/dt = aγ (e^{ik} − 1) ξ_k(t).     (D.2)

We multiply both sides of (D.2) by exp(−iωt) and integrate with respect to time from 0 to ∞. Utilizing

∫₀^∞ (dξ_k(t)/dt) e^{−iωt} dt = [ξ_k(t) e^{−iωt}]₀^∞ + iω ∫₀^∞ ξ_k(t) e^{−iωt} dt = −ξ_k(0) + iω ξ̃_k,     (D.3)

with

ξ̃_k = ∫₀^∞ ξ_k(t) exp(−iωt) dt,     (D.4)

where we have assumed that ξ_k(∞) exp(−iω∞) = 0. Note that ξ̃_k as defined in (D.4) is simply the Laplace transformation, which is usually written as f_p = ∫₀^∞ f(t) exp(−pt) dt for a function f(t). In our context, p is replaced by the complex argument iω. From Eq. (D.3) it immediately follows that

∫₀^∞ (d²ξ_k(t)/dt²) e^{−iωt} dt = −ξ̇_k(0) + iω ∫₀^∞ (dξ_k(t)/dt) e^{−iωt} dt = −ξ̇_k(0) − iω ξ_k(0) − ω² ξ̃_k.     (D.5)

Replacing the respective terms in the integral version of (D.2) by (D.3) and (D.5) yields

ξ̃_k · [ω² + aiω + aγ(e^{ik} − 1)] = −ξ̇_k(0) + (a − iω) ξ_k(0).     (D.6)

Note that the factor in square brackets on the left side of (D.6) is identical to the dispersion function Δ(ω, λ_j) of (6.13) if we substitute ω ⇒ −iω and λ_j ⇒ k. Hence, we can rewrite (D.6) as

ξ̃_k = [−ξ̇_k(0) + (a − iω) ξ_k(0)] / Δ(ω, k).     (D.7)

The inversion of the Laplace transform (D.4) is not entirely trivial due to the zeros of the dispersion relation Δ(ω, k) in the denominator of Eq. (D.7). A common "trick" to avoid such singularities is to shift the integration path into the complex plane such that the integrand becomes analytic in the new domain. We recommend a review of the results on improper integrals in section 2.8 at this point. Clearly, the growth of the perturbation ξ_n(t) is no more rapid than exp(σt), as long as the real exponent σ is chosen large enough. The same holds true for ξ_k(t), due to the relation (D.1). Therefore, if we shift the complex integration path by −iσ, the inverse Laplace transformation

ξ_k(t) = ∫_{−∞−iσ}^{+∞−iσ} ξ̃_k e^{iωt} dω/(2π)     (D.8)

becomes meaningful. Combining equations (D.1) and (D.8) we can therefore write:

ξ_n(t) = ∫_{−∞−iσ}^{+∞−iσ} (dω/2π) e^{iωt} [ ∫ (dk/2π) e^{ikn} ξ̃_k ].     (D.9)

Now the asymptotic behavior (t → ∞) of ξ_n(t) is determined entirely by the pole ω_c of the integral in parentheses in (D.9) [ξ(ω, n) = ∫ (dk/2π) e^{ikn} ξ̃_k] with the largest negative imaginary part. According to Eq. (D.7), this pole must be a root k_c = k(ω_c) of the function Δ(ω, k) in k-space. Following the geometrical argument on page 271 of Ref. [158], the singularities of ξ(ω, n) are given by the double roots of Δ(ω, k) such that they "pinch" the integral contour of ξ(ω, n). If two singularities in the k plane approach each other from opposite sides of the k-contour as ω is varied, it is not possible to remove this contour from their neighborhood. Near a double root, the dispersion function must be of the form

Δ(ω, k) ∝ (k − k_c)²  ⟹  k − k_c ∝ ±√(ω − ω_c),     (D.10)

which necessitates that ω_c be a saddle point of ω(k):

dω/dk = 0.     (D.11)

The asymptotic behavior of ξ_n(t) for large t is therefore proportional to [158]

ξ_n(t) ∝ (1/√t) · e^{ik_c n + iω_c t}.     (D.12)

Since the definition of convective instability depends on the frame of reference, one needs to find the velocity of the moving frame in which the instability has the greatest growth rate. Let us remind the reader that changing from the laboratory frame to one moving at speed V requires us to replace ω by (ω − kV) in all expressions. The respective replacement in (D.11) gives dω/dk − V = 0, which can be separated into real and imaginary parts†:

∂Im(ω)/∂k = 0,  and  V = ∂Re(ω)/∂k.     (D.13)

Expression (D.13) is equivalent to the convective stability boundary defined by Eq. (6.17) and concludes this Appendix.
†We naturally assume that V is real.
Bibliography*
[1] D. Abbott. Overview: Unsolved problems of noise and fluctuations. CHAOS, 11:526, 2001. 199
[2] A. Bulsara and G. Schmera. Stochastic resonance in globally coupled nonlinear oscillators. Phys. Rev. E, 47:3734, 1993. 78
[3] L. Alfonsi, L. Gammaitoni, S. Santucci, and A. R. Bulsara. Intrawell stochastic resonance versus interwell stochastic resonance in underdamped bistable systems. Phys. Rev. E, 62:299, 2000. 62, 117
[4] S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. J. R. Stat. Soc., 28:131, 1966. 26, 142
[5] M. Andrecut and M. K. Ali. Q learning in the minority game. Phys. Rev. E, 64:067103, 2001. 176
[6] I. Aranson, D. Golomb, and H. Sompolinsky. Spatial coherence and temporal chaos in macroscopic systems with asymmetrical couplings. Phys. Rev. Lett., 68:3495, 1992. 125
[7] S. Arrhenius. Über die Reaktionsgeschwindigkeit bei der Inversion von Rohrzucker durch Säuren. Z. Phys. Chem. (Leipzig), 4:226, 1889. 60
[8] W. B. Arthur. Complexity in economic theory: Inductive reasoning and bounded rationality. In Am. Econ. Assoc. Papers Proc. 84, page 406, 1994. 167, 168
[9] R. D. Astumian. Thermodynamics and kinetics of a Brownian motor. Science, 276:917, 1997. 162, 164
[10] K. L. Babcock, G. Ahlers, and D. S. Cannell. Noise-sustained structure in Taylor-Couette flow with through flow. Phys. Rev. Lett., 67:3388, 1991. 120
[11] M. Bando, K. Hasebe, A. Nakayama, A. Shibata, and Y. Sugiyama. Dynamical model of traffic congestion and numerical simulation. Phys. Rev. E, 51:1035, 1995. 177, 178, 179

*Numbers in sans-serif at the end of each reference indicate the pages on which the references are cited in the text.
[12] R. Benzi, G. Parisi, A. Sutera, and A. Vulpiani. Stochastic resonance in climate change. Tellus, 34:10-16, 1982. 62
[13] R. Benzi, A. Sutera, and A. Vulpiani. The mechanism of stochastic resonance. J. Phys. A, 14:453-457, 1981. 62
[14] R. Benzi, A. Sutera, and A. Vulpiani. Stochastic resonance in the Landau-Ginzburg equation. J. Phys. A, 18:2239-2245, 1985. 77, 78
[15] R. Berthet, A. Petrossian, S. Residori, B. Roman, and S. Fauve. Effect of multiplicative noise on parametric instabilities. Physica D, 174:84, 2003. 107, 108, 109, 117
[16] S. M. Bezrukov and I. Vodyanoy. In search for a possible statistical interpretation of stochastic resonance. In D. Abbott and L. B. Kish, editors, Proceedings of the Second International Conference on Unsolved Problems of Noise and Fluctuations (UPoN'99), volume 511, page 169. AIP, New York, 2000. 197
[17] W. Bialek, M. DeWeese, F. Rieke, and D. Warland. Bits and brains: Information flow in the nervous system. Physica A, 200:581, 1993. 198
[18] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford, 1995. 194, 195
[19] C. M. Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108-116, 1995. 194
[20] Courtesy of Chris Bizon, currently at webslingerZ, Inc. The animation can be viewed at the URL given in [257]. 111
[21] J. P. Bouchaud and M. Mézard. Wealth condensation in a simple model of economy. cond-mat/0002374, 2000. 13
[22] A. R. Bulsara and L. Gammaitoni. Tuning in to noise. Physics Today, 49:39-45, March 1996. 78
[23] C. G. Callan and S. Coleman. Fate of the false vacuum. II. First quantum corrections. Phys. Rev. D, 16:1762, 1977. 87, 88, 89
[24] T. L. Carroll and L. M. Pecora. Stochastic resonance and crises. Phys. Rev. Lett., 70:576, 1993. 75
[25] F. Castelpoggi and H. Wio. Stochastic resonant media: Effect of local and nonlocal coupling in reaction-diffusion models. Phys. Rev. E, 57:5112, 1998. 129
[26] C. Cattuto, G. Costantini, T. Guidi, and F. Marchesoni. Elastic strings in solids: Discrete kink diffusion. Phys. Rev. B, 63:094308, 2001. 92, 130
[27] A. Cavagna. Irrelevance of memory in the minority game. Phys. Rev. E, 59:R3783, 1999. 170, 171
[28] A. Cavagna, J. P. Garrahan, I. Giardina, and D. Sherrington. Thermal model for adaptive competition in a market. Phys. Rev. Lett., 83:4429, 1999. 171, 172, 173
[29] A. Cavagna, J. P. Garrahan, I. Giardina, and D. Sherrington. Reply: Cavagna et al. Phys. Rev. Lett., 85:5009, 2000. 175
[30] D. Challet, M. Marsili, and R. Zecchina. Comment on: Thermal model for adaptive competition in a market. Phys. Rev. Lett., 85:5008, 2000. 174, 175
[31] D. Challet and Y.-C. Zhang. Emergence of cooperation and organization in an evolutionary game. Physica A, 246:407, 1997. 167
[32] D. Challet and Y.-C. Zhang. On the minority game: Analytical and numerical studies. Physica A, 256:514, 1998. 171, 172
[33] R. V. Churchill and J. W. Brown. Complex Variables and Applications. McGraw-Hill, 5th edition, 1990. 54
[34] D. Cigna. Wave Propagation in One- and Two-Dimensional Arrays of Bistable Electronic Elements. PhD thesis, Dept. of Physics, Ohio University, Athens, OH 45701, 1999. 52
[35] W. C. Cole, J. B. Picone, and N. Sperelakis. Gap junction uncoupling and discontinuous propagation in the heart. Biophys. J., 53:809, 1988. 131, 137
[36] J. J. Collins, C. C. Chow, and T. T. Imhoff. Aperiodic stochastic resonance in excitable systems. Phys. Rev. E, 52:R3321, 1995. 62
[37] J. J. Collins, C. C. Chow, and T. T. Imhoff. Stochastic resonance without tuning. Nature, 376:236, 1995. 78, 198
[38] J. A. Combs and S. Yip. Single-kink dynamics in a one-dimensional atomic chain: A nonlinear atomistic theory and numerical simulation. Phys. Rev. B, 28:6873, 1983. 130
[39] U.S. Committee for GARP. Understanding climatic change. Technical report, National Academy of Sciences, 1975. 62
[40] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley, New York, 1991. 25
[41] M. C. Cross and P. C. Hohenberg. Pattern formation outside of equilibrium. Rev. Mod. Phys., 65:851-1112, 1993. 103, 105
[42] J. F. Currie, J. A. Krumhansl, A. R. Bishop, and S. E. Trullinger. Statistical mechanics of one-dimensional solitary-wave-bearing scalar fields: Exact results and ideal-gas phenomenology. Phys. Rev. B, 22:477, 1980. 86, 88, 147, 149, 150
[43] P. J. Davis. Circulant Matrices. Chelsea Publishing, New York, 1994. 37, 38
[44] P. G. de Gennes. Granular matter: a tentative view. Rev. Mod. Phys., 71:S374, 1999. 111
[45] G. Deco and B. Schürmann. Information transmission and temporal code in central spiking neurons. Phys. Rev. Lett., 79:4697, 1997. 129
[46] R. J. Deissler. One-dimensional strings, random fluctuations and complex chaotic structures. Phys. Lett., 100A:451, 1984. 119
[47] R. J. Deissler. Noise-sustained structures, intermittency and the Ginzburg-Landau equation. J. Stat. Phys., 40:371, 1985. 119, 120, 122, 124
[48] R. J. Deissler. Spatially growing waves, intermittency and convective chaos in an open-flow system. Physica D, 25:233, 1987. 119, 120, 122, 123, 124, 127
[49] R. J. Deissler and K. Kaneko. Velocity-dependent Lyapunov exponents as a measure of chaos for open-flow systems. Phys. Lett., 119A:397, 1987. 119
[50] C. Van den Broeck, J. M. R. Parrondo, and R. Toral. Noise-induced nonequilibrium phase transition. Phys. Rev. Lett., 73:3395, 1994. 99, 100
[51] C. Van den Broeck, J. M. R. Parrondo, R. Toral, and R. Kawai. Nonequilibrium phase transitions induced by multiplicative noise. Phys. Rev. E, 55:4084, 1997. 99
[52] R. D'hulst and G. J. Rodgers. The Hamming distance in the minority game. Physica A, 270:514-525, 1999. 172
[53] I. Dikshtein, A. Neiman, and L. Schimansky-Geier. Stochastic resonance of front motion in inhomogeneous media. Phys. Lett. A, 246:259-266, 1998. 78
[54] P. Tchofo Dinda. Nonlinear excitations for a deformable double-well potential. Phys. Rev. B, 46:12012, 1992. 99, 164
[55] P. Tchofo Dinda, R. Boesch, E. Coquet, and C. R. Willis. Discreteness effects on the double-quadratic kink. Phys. Rev. B, 46:3311, 1992. 147, 151, 152
[56] J. K. Douglass, L. Wilkens, E. Pantazelou, and F. Moss. Noise enhancement of information transfer in crayfish mechanoreceptors by stochastic resonance. Nature (London), 365:337, 1993. 62
[57] L. E. El'sgol'ts and S. B. Norkin. Introduction to the Theory and Application of Differential Equations with Deviating Arguments. Academic Press, New York, 1973. 40
[58] M. Faraday. On the forms and states assumed by fluids in contact with vibrating elastic surfaces. Philos. Trans. R. Soc. London, 52:319, 1831. 105
[59] G. Fath and Z. Domański. Avalanche of bifurcations and hysteresis in a model of cellular differentiation. Phys. Rev. E, 60:4604, 1999. 131
[60] S. Fauve and F. Heslot. Stochastic resonance in a bistable system. Phys. Lett. A, 97:5, 1983. 71
[61] R. P. Feynman, R. B. Leighton, and M. Sands. The Feynman Lectures on Physics, Vol. 1, chapter 46. Addison-Wesley, Reading, MA, 1966. 162
[62] A. D. Fokker. Die mittlere Energie rotierender elektrischer Dipole im Strahlungsfeld. Ann. Physik, 43:810, 1914. 45
[63] R. F. Fox. Gaussian stochastic processes in physics. Physics Reports, 48:179-283, 1978. 35
[64] R. F. Fox. Contributions to the theory of multiplicative stochastic processes. J. Math. Phys., 13:1196, 1972. 46
[65] R. F. Fox. Second-order algorithm for the numerical integration of colored noise problems. Phys. Rev. A, 43:2649, 1991. 50, 205, 206, 207, 209
[66] R. F. Fox and Y. Lu. Analytic and numerical study of stochastic resonance. Phys. Rev. E, 48:3390, 1993. 67
[67] A. Fuliński. Active transport in biological membranes and stochastic resonances. Phys. Rev. Lett., 79:4926, 1997. 129
[68] P. M. Gade, R. Rai, and H. Singh. Stochastic resonance in maps and coupled map lattices. Phys. Rev. E, 56:2518, 1997. 74
[69] P. C. Gailey, A. Neiman, J. J. Collins, and F. Moss. Stochastic resonance in ensembles of nondynamical elements: The role of internal noise. Phys. Rev. Lett., 79:4701, 1997. 78, 79
[70] L. Gammaitoni. Stochastic resonance and the dithering effect in threshold physical systems. Phys. Rev. E, 52:4691, 1995. 62, 78, 188, 189, 190, 192, 225
[71] L. Gammaitoni, P. Hänggi, P. Jung, and F. Marchesoni. Stochastic resonance. Rev. Mod. Phys., 70:223-288, 1998. 62, 63, 65, 66, 71
[72] References [8-16] from [70]. 189
[73] H. Gang, G. De-chun, W. Xiao-dong, Y. Chun-yuan, Q. Guang-rong, and L. Rong. Stochastic resonance in a nonlinear system driven by an aperiodic force. Phys. Rev. A, 46:3250, 1992. 62
[74] A. Ganopolski and S. Rahmstorf. Abrupt glacial climate changes due to stochastic resonance. Phys. Rev. Lett., 88:038501, 2002. 63
[75] J. Garcia-Ojalvo and J. M. Sancho. Noise in Spatially Extended Systems. Springer, New York, 1999. 100, 117
[76] T. C. Gard. Introduction to Stochastic Differential Equations. Marcel Dekker, 1987. 47, 140, 154
[77] C. W. Gardiner. Handbook of Stochastic Methods. Springer, 1977. 44, 46, 47, 48
[78] Werke 8, 90-92; the entire collection of Gauss' letters and works can be viewed at and downloaded from http://134.76.163.65/agora_docs/38910TABLE_OF_CONTENTS.html. This magnificent service is provided by the "Göttinger Digitalisierungszentrum" (http://gdz.sub.unigoettingen.de/), which is part of the Niedersächsische Staats- und Universitätsbibliothek Göttingen. 52
[79] Z. Gingl, L. B. Kiss, and F. Moss. Non-dynamical stochastic resonance: Theory and experiments with white and arbitrarily coloured noise. Europhys. Lett., 29:191, 1995. 62
[80] L. Glass and M. C. Mackey. From Clocks to Chaos, The Rhythms of Life. Princeton University Press, 1988. 19
[81] J. Gleick. Chaos: making a new science. Penguin Books (New York), 1988. (for a popular introduction). 1, 19
[82] B. J. Gluckman, T. I. Netoff, E. J. Neel, W. L. Ditto, M. L. Spano, and S. J. Schiff. Stochastic resonance in a neuronal network from mammalian brain. Phys. Rev. Lett., 77:4098, 1996. 62
[83] N. Goldenfeld. Lectures on phase transitions and critical phenomena. Frontiers in Physics, 1992. 14
[84] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 1983. 39
[85] J. A. Gonzalez, B. A. Mello, L. I. Reyes, and L. E. Guerrero. Resonance phenomena of a solitonlike extended object in a bistable potential. Phys. Rev. Lett., 80:1361, 1998. 78
[86] R. Graham and A. Schenzle. Carleman imbedding of multiplicative stochastic processes. Phys. Rev. A, 25:1731, 1982. 99
[87] R. Graham and A. Schenzle. Stabilization by multiplicative noise. Phys. Rev. A, 26:1676, 1982. 99
[88] M. Grifoni and P. Hänggi. Coherent and incoherent quantum stochastic resonance. Phys. Rev. Lett., 76:1611, 1996. 62
[89] M. Grifoni and P. Hänggi. Nonlinear quantum stochastic resonance. Phys. Rev. E, 54:1390, 1996. 62
[90] P. Grigolini and F. Marchesoni. Basic description of the rules leading to the adiabatic elimination of fast variables. Adv. Chem. Phys., 62:29, 1985. 61
[91] S. Grossmann. The onset of shear flow turbulence. Rev. Mod. Phys., 72:603-618, 2000. 32, 33
[92] S. Guillouzic, I. L'Heureux, and A. Longtin. Small delay approximation of stochastic delay differential equations. Phys. Rev. E, 59:3970, 1999. 41
[93] S. Guillouzic, I. L'Heureux, and A. Longtin. Rate processes in a delayed, stochastically driven and overdamped system. Phys. Rev. E, 61:4906, 2000. 41
[94] P. Hänggi and R. Bartussek. Brownian rectifiers: How to convert Brownian motion into directed transport. In J. Parisi, S. C. Müller, and W. Zimmermann, editors, Nonlinear Physics of Complex Systems: Current Status and Future Trends, volume 476 of Lecture Notes in Physics, page 294. Springer, Berlin, 1996. 162
[95] P. Hänggi, F. Marchesoni, and P. Sodano. Nucleation of thermal sine-Gordon solitons: Effect of many-body interactions. Phys. Rev. Lett., 60:2563, 1988. 87, 89, 90
[96] P. Hänggi, P. Talkner, and M. Borkovec. Reaction-rate theory: fifty years after Kramers. Rev. Mod. Phys., 62:251-341, 1990. 59, 60
[97] G. P. Harmer and D. Abbott. Game theory: Losing strategies can win by Parrondo's paradox. Nature (London), 402:864, 1999. 199
[98] G. P. Harmer, D. Abbott, P. G. Taylor, and J. M. R. Parrondo. Brownian ratchets and Parrondo's games. CHAOS, 11:705, 2001. 199, 200, 201
[99] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001. 193
[100] D. Helbing, editor. Verkehrsdynamik. Springer, Berlin, 1997. 177
[101] D. Helbing and M. Schreckenberg. Cellular automata simulating experimental properties of traffic flow. Phys. Rev. E, 59:R2505, 1999. 177
[102] H. Hempel, L. Schimansky-Geier, and J. Garcia-Ojalvo. Noise-sustained pulsating patterns and global oscillations in subexcitable media. Phys. Rev. Lett., 82:3713, 1998. 78
[103] J. Hill and J. F. Lindner, personal communication. 157
[104] M. W. Hirsch and S. Smale. Differential Equations, Dynamical Systems and Linear Algebra. Academic Press, New York, 1974. 234
[105] J. H. Van't Hoff. Études de dynamique chimique. F. Mueller and Co., Amsterdam, 1884. Translated by T. Ewan as Studies in Chemical Dynamics (London, 1896). 60
[106] W. Horsthemke and R. Lefever. Noise-Induced Transitions. Springer, Berlin, 1984. 99, 117
[107] http://web.mit.edu/jsterman/www/SDG/beergame.html or the system dynamics group at http://sysdyn.mit.edu/. 17, 18
[108] http://www.unifr.ch/econophysics/. 167
[109] http://www.soc.nii.ac.jp/jps/jpsj/ is the URL for the Journal of the Physical Society of Japan. 178, 181, 182, 183, 184
[110] http://www.unifr.ch/econophysics/minority/. 167
[111] http://www.umbrars.com/sr/biblio.htm. 63
[112] http://www.amasci.com/amateur/traffic/ offers a layman introduction to the origin of traffic waves. 177
[113] http://www.trafficforum.org/. 177
[114] G. Hu, T. Ditzinger, C. Z. Ning, and H. Haken. Stochastic resonance without external periodic force. Phys. Rev. Lett., 71:807, 1993. 62
[115] E. R. Hunt. Dept. of Physics, Ohio University, Athens, OH 45701. 52
[116] E. R. Hunt. Stabilizing high-period orbits in a chaotic system: The diode resonator. Phys. Rev. Lett., 67:1953, 1991. 71
[117] M. Inchiosa and A. Bulsara. Coupling enhances stochastic resonance in nonlinear dynamic elements driven by a sinusoid plus noise. Phys. Lett. A, 200:283, 1995. 78
[118] M. Inchiosa and A. Bulsara. Nonlinear dynamic elements with noisy sinusoidal forcing: enhancing response via nonlinear coupling. Phys. Rev. E, 52:327, 1995. 78
[119] M. E. Inchiosa, A. R. Bulsara, J. F. Lindner, B. K. Meadows, and W. L. Ditto. Array enhanced stochastic resonance: Implications for signal processing. In R. A. Katz, editor, Proceedings of the Meeting on Nonlinear Dynamics and Full Spectrum Processing. AIP Conf. Proc., AIP, New York, 1995. 78, 79, 82, 84
[120] M. E. Inchiosa, J. W. C. Robinson, and A. R. Bulsara. Information-theoretic stochastic resonance in noise-floor limited systems: The case for adding noise. Phys. Rev. Lett., 85:3369, 2000. 143
[121] H. M. Jaeger, S. R. Nagel, and R. P. Behringer. Granular solids, liquids and gases. Rev. Mod. Phys., 68:1259-1273, 1996. 111
[122] H. M. Jaeger, T. Shinbrot, and P. B. Umbanhowar. Does the granular matter? Proc. Natl. Acad. Sci., 97:12959-12960, 2000. 111, 117
[123] G. A. Johnson, M. Locher, and E. R. Hunt. Stabilized spatiotemporal waves in a convectively unstable open flow system: coupled diode resonators. Phys. Rev. E, 51:R1625, 1995. 71, 74, 79, 95, 120
[124] G. A. Johnson, M. Locher, and E. R. Hunt. Stable states and kink dynamics in a system of coupled diode resonators. Physica D, 96:367-437, 1996. 71, 120
[125] J. B. Johnson. Thermal Agitation of Electricity in Conductors. Phys. Rev., 32:97, 1928. 43
[126] N. F. Johnson, M. Hart, and P. M. Hui. Crowd effects and volatility in markets with competing agents. Physica A, 269:1-8, 1999. 172
[127] F. Jülicher, A. Ajdari, and J. Prost. Modeling molecular motors. Rev. Mod. Phys., 69:1269-1282, 1997. 162
[128] P. Jung. Periodically Modulated Stochastic Processes. Phys. Rep., 234:175, 1993. 62
[129] P. Jung. Threshold devices: Fractal noise and neural talk. Phys. Rev. E, 50:2513, 1994. 62
[130] P. Jung. Stochastic resonance and optimal design of threshold detectors. Phys. Lett. A, 207:93, 1995. 62
[131] P. Jung, U. Behn, E. Pantazelou, and F. Moss. Collective response in globally coupled bistable systems. Phys. Rev. A, 46:R1709, 1991. 78
[132] P. Jung, A. Cornell-Bell, K. Madden, and F. Moss. Noise induced spiral waves in astrocyte syncytia show evidence of self-organized criticality. J. Neurophysiol., 79:1098, 1998. 129
[133] P. Jung and P. Hänggi. Stochastic nonlinear dynamics modulated by external periodic forces. Europhys. Lett., 8:505, 1989. 64
[134] P. Jung and P. Hänggi. Amplification of small signals via stochastic resonance. Phys. Rev. A, 44:8032, 1991. 62
[135] P. Jung and G. Mayer-Kress. Spatiotemporal stochastic resonance in excitable media. Phys. Rev. Lett., 74:2130, 1995. 78, 129
[136] L. Kadanoff. Built upon sand: Theoretical ideas inspired by granular flows. Rev. Mod. Phys., 71:435, 1999. 111
[137] S. Kadar, J. Wang, and K. Showalter. Noise-supported travelling waves in sub-excitable media. Nature, 391:770, 1998. 129
[138] M. H. Kalos and P. A. Whitlock. Monte Carlo Methods. Wiley, New York, 1986. 2
[139] K. Kaneko. Spatial period-doubling in open flow. Phys. Lett. A, 111:321, 1985. 125, 127
[140] K. Kaneko. Theory and Applications of Coupled Map Lattices. Wiley and Sons, New York, 1993. 39
[141] Of course, this is also an approximation. In future work we will replace the discrete cable theory with the modified cable theory [142], which incorporates both the discretizing nature of the gap junctions as well as the continuous propagation within a cell. 137
[142] J. P. Keener. The effects of discrete gap junction coupling on propagation in myocardium. J. Theor. Biol., 148:49, 1991. 131, 137, 138, 228
[143] A. Khintchine. Korrelationstheorie der stationären stochastischen Prozesse. Math. Annalen, 109:604, 1934. 43
[144] S. Kim, S. H. Park, and H.-B. Pyo. Stochastic resonance in coupled oscillator systems with time delay. Phys. Rev. Lett., 82:1620, 1999. 41, 78
[145] S. Kim, S. H. Park, and C. S. Ryu. Comment on noise-induced nonequilibrium phase transition. Phys. Rev. Lett., 78:1827, 1997. 99, 100
[146] M. Kirby. Geometric Data Analysis. John Wiley & Sons, 2001. 40
[147] Yu. S. Kivshar and B. A. Malomed. Dynamics of solitons in nearly integrable systems. Rev. Mod. Phys., 61:763, 1989. 99, 164
[148] P. E. Kloeden and E. Platen. Numerical Solution of Stochastic Differential Equations. Springer, 1992. 47, 140, 154
[149] V. B. Kolmanovskii and V. R. Nosov. Stability of functional differential equations. Academic Press, New York, 1986. 40
[150] H. A. Kramers. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica (Utrecht), 7:284, 1940. 60
[151] A. Kudrolli and J. P. Gollub. Patterns and spatiotemporal chaos in parametrically forced surface waves: a systematic survey at large aspect ratio. Physica D, 97:133, 1996. 106
[152] J. S. Langer. Statistical theory of the decay of metastable states. Ann. Phys., 54:258, 1969. 59, 87, 88, 89
[153] J. P. Laplante and T. Erneux. Propagation failure and multiple steady states in an array of diffusion coupled flow reactors. Physica A, 188:89, 1992. 130
[154] J. P. Laplante and T. Erneux. Propagation failure in arrays of coupled bistable chemical reactors. J. Phys. Chem., 96:4931, 1992. 130
[155] C. Lee. Is memory in the minority game irrelevant? Phys. Rev. E, 64:015102, 2001. 171
[156] W. Leutzbach. Introduction to the Theory of Traffic Flow. Springer, Berlin, 1988. 177
[157] J. E. Levin and J. P. Miller. Broadband neural encoding in the cricket cercal sensory system enhanced by stochastic resonance. Nature (London), 380:165, 1996. 62
[158] E. M. Lifshitz and L. P. Pitaevskii. Physical Kinetics. Pergamon, Oxford, 1981. 181, 217, 219
[159] J. Lindner, S. Chandramouli, A. R. Bulsara, M. Locher, and W. L. Ditto. Noise enhanced propagation. Phys. Rev. Lett., 81:5048, 1998. 152, 154, 155, 156, 158, 159, 165
[160] J. F. Lindner, B. J. Breen, M. E. Wills, A. R. Bulsara, and W. L. Ditto. Monostable array-enhanced stochastic resonance. Phys. Rev. E, 63:051107, 2001. 117, 160, 161
[161] J. F. Lindner, B. K. Meadows, W. L. Ditto, M. E. Inchiosa, and A. R. Bulsara. Array enhanced stochastic resonance and spatiotemporal synchronization. Phys. Rev. Lett., 75:3, 1995. 78, 80, 83, 84, 85, 95
[162] J. F. Lindner, B. K. Meadows, W. L. Ditto, M. E. Inchiosa, and A. R. Bulsara. Scaling laws for spatiotemporal synchronization and array enhanced stochastic resonance. Phys. Rev. E, 53:2081, 1996. 78, 79, 82, 84, 86
[163] D. S. Liu, R. D. Astumian, and T. Y. Tsong. Activation of Na+ and K+ pumping modes of (Na,K)-ATPase by an oscillating electric field. J. Biol. Chem., 265:7260, 1990. 164
[164] J. Liu and J. P. Gollub. Onset of spatially chaotic waves on flowing films. Phys. Rev. Lett., 70:2289, 1993. 120
[165] M. Locher, N. Chatterjee, F. Marchesoni, W. L. Ditto, and E. R. Hunt. Noise sustained propagation: Local versus global noise. Phys. Rev. E, 61:4954, 2000. 129, 165
[166] M. Locher, D. Cigna, E. R. Hunt, G. A. Johnson, F. Marchesoni, L. Gammaitoni, A. Bulsara, and M. Inchiosa. Stochastic resonance in coupled nonlinear dynamic elements. CHAOS, 8:604, 1998. 78, 79, 131
[167] M. Locher, D. Cigna, and E. R. Hunt. Noise sustained propagation of a signal in coupled bistable electronic elements. Phys. Rev. Lett., 80:5212, 1998. 129, 165
[168] M. Locher, G. A. Johnson, and E. R. Hunt. Spatiotemporal stochastic resonance in a system of coupled diode resonators. Phys. Rev. Lett., 77:4698, 1996. 78, 79, 84, 95, 131
[169] M. Loecher. Coupled Nonlinear Oscillators: a Paradigm for Spatiotemporal Chaos and Nonequilibrium Phenomena. PhD thesis, Dept. of Physics, Ohio University, Athens, OH 45701, 1997. 119
[170] R. Löfstedt and S. N. Coppersmith. Quantum stochastic resonance. Phys. Rev. Lett., 72:1947, 1994. 62
[171] R. Löfstedt and S. N. Coppersmith. Stochastic resonance: Nonperturbative calculation of power spectra and residence-time distributions. Phys. Rev. E, 49:4821, 1994. 62
[172] A. Longtin. Stochastic resonance in neuron models. J. Stat. Phys., 70:309, 1993. 62
[173] S. Mangioni, R. Deza, H. S. Wio, and R. Toral. Disordering effects of color in nonequilibrium phase transitions induced by multiplicative noise. Phys. Rev. Lett., 79:2389, 1997. 99
[174] R. Mannella and V. Palleschi. Fast and precise algorithm for computer simulations of stochastic differential equations. Phys. Rev. A, 40:3381, 1989. 50
[175] R. Mantegna and H. E. Stanley. Econophysics. Cambridge University Press, Cambridge, England, 2000. 167
[176] F. Marchesoni. Solitons in a random field of force: A Langevin equation approach. Phys. Lett. A, 115:29-32, 1986. 149, 150
[177] F. Marchesoni. Pair nucleation rate in a driven sine-Gordon chain. Ber. Bunsenges. Phys. Chem., 95:353, 1991. 89
[178] F. Marchesoni. Nucleation of kinks in 1 + 1 dimensions. Phys. Rev. Lett., 73:2394, 1994. 149, 150
[179] F. Marchesoni, C. Cattuto, and G. Costantini. Elastic strings in solids: Thermal nucleation. Phys. Rev. B, 57:7930, 1998, and references therein. 86, 87, 88, 89, 150, 151
[180] F. Marchesoni, L. Gammaitoni, and A. R. Bulsara. Spatiotemporal stochastic resonance in a φ⁴ model of kink-antikink nucleation. Phys. Rev. Lett., 76:2609, 1996. 78, 80, 84, 85, 86, 88
[181] F. Marchesoni and P. Grigolini. The Kramers model of chemical relaxation in the presence of a radiation field. Physica A, 121:269, 1983. 61
[182] The derivation of the (exact!) expression Eqn. (5.35) follows Chap. 11 in [226]. The quantities A and B are defined as follows: A = \int_0^l e^{-V(x,F)/kT} dx \int_0^x e^{V(x',F)/kT} dx' and B = \int_0^l e^{V(x,F)/kT} dx \int_0^x e^{-V(x',F)/kT} dx', with V(x, F) denoting the tilted PN potential (1/2) M_0 \omega_{PN}^2(x) - 2Fx. 148, 152
[183] Fd is essentially the work done by moving the kink through a distance of the order of its radius d. The inequality Fd ≫ kT identifies the kink as a pointlike object in comparison with thermal fluctuations. 151
[184] Diffusion of kinks in a random landscape (like the quenched noise due to resonator array dishomogeneities) has been studied, e.g., by F. Marchesoni, Europhys. Lett. 8, 83 (1989), by adding a random potential force term to the Langevin Eqs. (5.28) and (5.34). For variances of the random potential smaller than the energy fluctuations kT, like in the present investigation, the overall picture of kink dynamics does not change (the diffusion parameters, though, must be rescaled to account for spatial disorder). In the opposite limit and for a > u0, disorder may provide a more efficient kink pinning mechanism than discreteness. 150
[185] A. D. May. Traffic Flow Fundamentals. Prentice-Hall, Englewood Cliffs, N.J., 1990. 177
[186] R. N. McDonough and A. D. Whalen. Detection of Signals in Noise. Academic Press, 1995. 141
[187] B. McNamara and K. Wiesenfeld. Theory of stochastic resonance. Phys. Rev. A, 39:4854, 1989. 68
[188] F. Melo, P. Umbanhowar, and H. L. Swinney. Transition to parametric wave patterns in a vertically oscillated granular layer. Phys. Rev. Lett., 72:172, 1994. 111, 117
[189] F. Melo, P. B. Umbanhowar, and H. L. Swinney. Hexagons, kinks and disorder in oscillated granular layers. Phys. Rev. Lett., 75:3838, 1995. 111, 117
[190] A. Mehta, editor. Granular Matter: An Interdisciplinary Approach. Springer, 1994. 111
[191] M. Milankovitch. Handbuch der Klimatologie, 1. Teil. Köppen and Geiger, Berlin, 1930. 62
[192] J. Miles and D. Henderson. Parametrically forced surface waves. Annu. Rev. Fluid Mech., 22:143, 1990. 105
[193] G. N. Milshtein. Approximate integration of stochastic differential equations. Theory Prob. Appl., 19:557, 1974. 50
[194] G. N. Milshtein. Method of second-order accuracy integration of stochastic differential equations. Theory Prob. Appl., 23:396, 1978. 50
[195] N. Mitarai and H. Nakanishi. Convective instability and structure formation in traffic flow. J. Phys. Soc. Jpn., 69(11):3752-3761, 2000. 178, 181, 182, 183, 184
[196] N. Mitarai and H. Nakanishi. Spatiotemporal structure of traffic flow in a system with an open boundary. Phys. Rev. Lett., 85:1766, 2000. 178, 181, 182
[197] F. Moss. Stochastic resonance: From the ice ages to the monkey's ear. In G. H. Weiss, editor, Contemporary Problems in Statistical Physics, pages 205-253. SIAM, Philadelphia, 1994. 62
[198] F. Moss and K. Wiesenfeld. The benefits of background noise. Sci. Am., 273:50, 1995. 62
[199] R. Müller, K. Lippert, A. Kühnel, and U. Behn. First-order nonequilibrium phase transition in a spatially extended system. Phys. Rev. E, 56:2658, 1997. 99
[200] T. Nagatani. Density waves in traffic flow. Phys. Rev. E, 61:3564, 2000. 184, 185
[201] T. Nagatani. Traffic jams induced by fluctuation of a leading car. Phys. Rev. E, 61:3534, 2000. 178, 183, 184, 185, 186
[202] K. Nagel and M. Schreckenberg. A cellular automaton model for freeway traffic. J. Phys. (France) I, 2:2221, 1992. 177
[203] A. Neiman and L. Schimansky-Geier. Stochastic resonance in two coupled bistable systems. Phys. Lett. A, 197:379-386, 1995. 78
[204] A. C. Newell, T. Passot, and J. Lega. Order parameter equations for patterns. Annu. Rev. Fluid Mech., 25:399, 1993. 105
[205] A. C. Newell and J. A. Whitehead. Finite amplitude finite bandwidth convection. J. Fluid Mech., 38:279, 1969. 122
[206] G. F. Newell. Nonlinear effects in the dynamics of car following. Operations Research, 9:209, 1961. 177, 178
[207] G. Nicolis and C. Nicolis. Stochastic aspects of climatic transitions. Tellus, 33:225-234, 1981. 62
[208] H. Nyquist. Thermal Agitation of Electric Charge in Conductors. Phys. Rev., 32:110, 1928. 43
[209] T. Ohira. Oscillatory correlation of delayed random walks. Phys. Rev. E, 55:R1255, 1997. 41
[210] T. Ohira and J. G. Milton. Delayed random walks. Phys. Rev. E, 52:3277, 1995. 41
[211] T. Ohira and Y. Sato. Resonance with noise and delay. Phys. Rev. Lett., 82:2811, 1999. 41
[212] T. Ohira and T. Yamane. Delayed stochastic systems. Phys. Rev. E, 62:1247, 2000. 41
[213] A. V. Oppenheim and R. W. Schafer. Digital Signal Processing. Prentice Hall, 1975. 188
[214] G. Orsak and B. Paris. On the relationship between measures of discrimination and the performance of suboptimal detectors. IEEE Trans. Inf. Theor., 41:188, 1995. 26, 142
[215] E. Ott. Chaos in Dynamical Systems. Cambridge University Press, 1993. (for a more technical monograph). 1, 19
[216] J. M. R. Parrondo, C. Van den Broeck, J. Buceta, and F. J. de la Rubia. Noise-induced spatial patterns. Physica A, 224:153, 1996. 105
[217] R. K. Pathria. Statistical Mechanics. Pergamon Press, 1972. 44
[218] M. Peyrard and M. D. Kruskal. Kink dynamics in the highly discrete sine-Gordon system. Physica D, 14:88, 1984. 130
[219] M. Planck. Über einen Satz der statistischen Dynamik und seine Erweiterung in der Quantentheorie. Sitzber. Preuss. Akad. Wiss., page 324, 1917. 45
[220] A simple online search for the occurrence of "power-law" in all Phys. Rev. journals abstracts and titles from 1968 up to July 2001 yielded 2,853 hits. 14
[221] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C. Cambridge University Press, 1992. 39, 50, 95, 213, 214, 215, 216
[222] W. J. Rappel and S. H. Strogatz. Stochastic resonance in an autonomous system with a nonuniform limit cycle. Phys. Rev. E, 50:3249, 1994. 62
[223] P. Reimann. Brownian motors: Noisy transport far from equilibrium. Phys. Rep., 361:57, 2002. 162
[224] S. Residori, R. Berthet, B. Roman, and S. Fauve. Noise induced bistability of parametric surface waves. Phys. Rev. Lett., 88:024502, 2002. 107, 108, 109, 117
[225] L. F. Richardson. Arms and Insecurity. Boxwood, Pittsburgh, 1960. 11
[226] H. Risken. The Fokker-Planck Equation. Springer, Berlin, 1984. 9, 44, 46, 47, 152, 230
[227] J. W. C. Robinson, D. E. Asraf, A. R. Bulsara, and M. E. Inchiosa. Information-theoretic distance measures and a generalization of stochastic resonance. Phys. Rev. Lett., 81:2850, 1998. 143
[228] R. W. Rollins and E. R. Hunt. Exactly solvable model of a physical system exhibiting universal chaotic behavior. Phys. Rev. Lett., 49:1295, 1982. 71, 72, 74, 124
[229] A. C. H. Rowe and P. Etchegoin. Experimental observation of stochastic resonance in a linear electronic array. Phys. Rev. E, 64:031106, 2001. 78, 164
[230] P. S. Ruszczynski, L. B. Kish, and S. M. Bezrukov. Noise-assisted traffic of spikes through neuronal junctions. CHAOS, 11:581-586, 2001. 187
[231] J. M. Sancho, M. San Miguel, S. L. Katz, and J. D. Gunton. Analytical and numerical studies of multiplicative noise. Phys. Rev. A, 26:1589, 1982. 50
[232] A. Schadschneider and M. Schreckenberg. Car-oriented mean-field theory for traffic flow models. J. Phys. A, 26:L679, 1993. 177
[233] M. F. Schatz, R. P. Tagg, H. L. Swinney, P. F. Fischer, and A. T. Patera. Supercritical transition in plane channel flow with spatially periodic perturbations. Phys. Rev. Lett., 66:1579, 1991. 120
[234] L. Schimansky-Geier and U. Siewert. A Glauber-Dynamics Approach to Coupled Stochastic Resonators. Springer Verlag, Berlin, 1997. 78
[235] M. Schreckenberg, A. Schadschneider, K. Nagel, and N. Ito. Discrete stochastic models for traffic flow. Phys. Rev. E, 51:2939, 1995. 177
[236] M. Schreckenberg and D. E. Wolf, editors. Traffic and Granular Flow '97. Springer, Singapore, 1998. 177
[237] I. Sendina-Nadal, A. P. Muñuzuri, D. Vives, V. Pérez-Muñuzuri, J. Casademunt, L. Ramírez-Piscina, J. M. Sancho, and F. Sagués. Wave propagation in a medium with disordered excitability. Phys. Rev. Lett., 80:5437, 1998. 129
[238] J. Sietsma and R. J. F. Dow. Creating artificial neural networks that generalize. Neural Networks, 4(1):67-69, 1991. 194
[239] U. Siewert and L. Schimansky-Geier. Analytical study of coupled two-state stochastic resonators. Phys. Rev. E, 58:2843, 1998. 78
[240] E. Simonotto, M. Riani, C. Seife, M. Roberts, J. Twitty, and F. Moss. Visual perception of stochastic resonance. Phys. Rev. Lett., 78:1186, 1997. 78
[241] I. M. Sobol. The Monte Carlo Method. University of Chicago Press, 1974. 2
[242] M. L. Spano, M. Wun-Fogle, and W. L. Ditto. Experimental observation of stochastic resonance in a magnetoelastic ribbon. Phys. Rev. A, 46:5253, 1992. 99
[243] J. Sterman. Business Dynamics: Systems Thinking and Modeling for a Complex World, page 695. McGraw-Hill, 2000. 16, 17, 18
[244] N. G. Stocks. Suprathreshold stochastic resonance in multilevel threshold systems. Phys. Rev. Lett., 84:2310, 2000. 198
[245] N. G. Stocks. Information transmission in parallel threshold arrays: Suprathreshold stochastic resonance. Phys. Rev. E, 63:041114, 2001. 198
[246] N. G. Stocks and R. Mannella. Generic noise-enhanced coding in neuronal arrays. Phys. Rev. E, 64:030902(R), 2001. 198
[247] N. G. Stocks, N. D. Stein, and P. V. E. McClintock. Stochastic resonance in monostable systems. J. Phys. A, 26:L385, 1993. 62, 117
[248] N. G. Stocks, N. D. Stein, S. M. Soskin, and P. V. E. McClintock. Zero-dispersion stochastic resonance. J. Phys. A, 25:L1119, 1992. 62, 117
[249] G. Strang. Linear Algebra and Its Applications. Academic Press, 1976. 27, 28, 30, 32, 36
[250] N. Sungar, J. P. Sharpe, and S. Weber. Stochastic resonance in two-dimensional arrays of coupled nonlinear oscillators. Phys. Rev. E, 62:1413, 2000. 78, 79
[251] J. A. Swets. Measuring the accuracy of diagnostic systems. Science, 240:1285, 1988. 144
[252] S. Tadaki, M. Kikuchi, Y. Sugiyama, and S. Yukawa. Noise induced congested traffic flow in coupled map optimal velocity model. J. Phys. Soc. Jpn., 68(9):3110-3114, 1999. 187
[253] F. Takens. Detecting Strange Attractors in Turbulence, page 898. Springer-Verlag, New York, 1980. 20
[254] This is a consequence of the Poincaré-Bendixson theorem [104], which in essence limits the possible long-term solutions of a two-dimensional system of the form (1.20) to steady states and periodic solutions. (Asymptotic approaches to a figure 8 are also allowed.) The dynamics in one dimension is even more restricted. 19
[255] M. Tomochi and M. Kono. Chaotic evolution of arms races. CHAOS, 8:808, 1998. 12
[256] L. S. Tsimring and A. Pikovsky. Noise-induced dynamics in bistable systems with delay. Phys. Rev. Lett., 87:250602, 2001. 41
[257] Courtesy of Paul Umbanhowar at Northwestern University: http://super.phys.northwestern.edu/~pbu/. 110, 222
[258] P. B. Umbanhowar. News and views - patterns in the sand. Nature, 389:541, 1997. 111, 117
[259] P. B. Umbanhowar, F. Melo, and H. L. Swinney. Localized excitations in a vertically vibrated granular layer. Nature (London), 382:793, 1996. 111, 117
[260] W. G. Unruh. Instability in automobile braking. Am. J. Phys., page 52, 1984. 22, 23, 24
[261] M. v. Smoluchowski. Experimentell nachweisbare, der üblichen Thermodynamik widersprechende Molekularphänomene. Phys. Z., 13:1069, 1912. 162
[262] H. van Trees. Detection, Estimation and Modulation Theory. J. Wiley, New York, 1978. 76, 142, 146
[263] S. C. Venkataramani and E. Ott. Spatiotemporal bifurcation phenomena with temporal period doubling: patterns in vibrated sand. Phys. Rev. Lett., 80:3495, 1998. 113, 114, 115, 117
[264] S. C. Venkataramani and E. Ott. Pattern selection in extended periodically forced systems: A continuum coupled map approach. Phys. Rev. E, 63:046202, 2001. 113, 115
[265] J. M. G. Vilar and R. V. Sole. Effects of noise in symmetric two-species competition. Phys. Rev. Lett., 80:4099, 1998. 78
[266] J. Wakeling and P. Bak. Intelligent systems in the context of surrounding environment. Phys. Rev. E, 64:051920, 2001. 176
[267] G. B. Whitham. Exact solutions for a discrete system arising in traffic flow. Proc. R. Soc. Lond. A, 428:49-69, 1990. 177
[268] N. Wiener. Generalized harmonic analysis. Acta Math., 55:117-258, 1930. 43
[269] K. Wiesenfeld and F. Moss. Stochastic resonance and the benefits of noise: from ice ages to crayfish and squids. Nature, 373:33, 1995. 62
[270] K. Wiesenfeld, D. Pierson, E. Pantazelou, C. Dames, and F. Moss. Stochastic resonance on a circle. Phys. Rev. Lett., 72:2125, 1994. 62
[271] H. S. Wio. An Introduction to Stochastic Processes and Nonequilibrium Statistical Physics. World Scientific, 1994. 44, 46, 47
[272] D. E. Wolf, M. Schreckenberg, and A. Bachem, editors. Traffic and Granular Flow.
World Scientific, Singapore, 1996. 177 H. L. Yang, Z. Q. Huang, and E. J. Ding. Stabilization of the less stable orbit by a tiny near-resonance periodic signal. Phys. Rev. E, 54:R5889, 1996. 72,74 W. Yang, M. Ding, and G. Hu. Trajectory (phase) selection in multistable systems: Stochastic resonance, signal bias and the effect of signal phase. Phys. Rev. Lett, 74:3955, 1995. 72, 74 A. A. Zaikin, J. Garcia-Ojalvo, and L. Schimansky-Geier. Nonequilib-
236
[276]
[277] [278] [279]
[280] [281]
[282]
Bibliography
rium first-order phase transition induced by additive noise. Phys. Rev. E, 60:R6275, 1999. 99 A. A. Zaikin, J. Garcia-Ojalvo, L. Schimansky-Geier, and J. Kurths. Noise induced propagation in monostable media. Phys. Rev. Lett., 88:010601, 2002. 160,161,162 A. A. Zaikin, J. Kurths, and L. Schimansky-Geier. Doubly stochastic resonance. Phys. Rev. Lett, 85:227, 2000. 100, 101, 102, 103, 161 A. A. Zaikin and L. Schimansky-Geier. Spatial patterns induced by additive noise. Phys. Rev. E, 58:4355, 1998. 99, 104, 105 Y. Zhang, G. Hu, and L. Gammaitoni. Signal transmission in one-way coupled bistable systems: Noise effect. Phys. Rev. E, 58:2952, 1998. 129, 157, 165 Y.-C. Zhang. Modeling market mechanism with evolutionary games. Europhys. News, 29:51, 1998. 172 H. Zhonghuai, Y. Lingfa, X. Zuo, and X. Houwen. Noise induced pattern transition and spatiotemporal stochastic resonance. Phys. Rev. Lett., 81:2854, 1998. 78 A. V. Zolotaryuk, K. H. Spatschek, and O. Kluth. Narrow kinks in nonlinear lattices: application to the proton transport in hydrogen-bonded systems. In K. H. Spatschek and F. G. Mertens, editors, Nonlinear Coherent Structures in Physics and Biology, volume 329 of NATO ASI Series B: Physics, pages 105-114, 1994. 131
Index
Ali-Silvey distance, 142
analytic function, 54
Arms Race, 11
bias-variance tradeoff, 193
bistability
    noise induced, 107
Brownian motion, 49
Cauchy integral formula, 55
Cauchy-Goursat theorem, 54
cross entropy, 26
differential equations
    1st order, 47
    delay, 40
    stochastic, 47
diode resonator, 71
dithering, 187
econophysics, 167
eigenvalue, 28
eigenvector, 28
entropy, 25
Euler method, 48, 49
Faraday
    crispations, 105
    waves, 105
finite difference, 38
fixed point, 7
fluctuation-dissipation theorem, 43, 45
Fokker-Planck equation, 45, 46
game
    beer, 17
    minority, 167
    Parrondo's, 199
    thermal, 171
Ginzburg-Landau Equation, 122
granular materials, 109
information theory, 25
    ROC curves, 141
instability
    absolute, 120
    convective, 120
kink-antikink nucleation, 85
Kullback-Leibler distance, 26
Langevin equation, 47
Laurent series, 55
Lyapunov
    exponent, 8
    multiplier, 8
Mackey-Glass model, 19
map
    logistic, 74
matrix
    circulant, 37
    Fourier, 37
    Hermitian, 30
    normal, 30, 31, 203
    symmetric, 30
    unitary, 31
metastability, 5
Milshtein method, 50
minority game, 167
    thermal, 173
Monte Carlo integration, 2
multiplicative noise, 9, 13
mutual information, 26
noise
    enhanced propagation, 152
        in neural networks, 192
        monostable, 160
    induced bistability, 107
    multiplicative, 9, 13
    sustained propagation, 129
    sustained structures, 119
normal modes, 29
Nyquist frequency, 214
Ornstein-Uhlenbeck process, 35
oscillons, 117
parametric surface waves, 105
Pareto
    distributions, 13
    law, 13
Parrondo, 199
    game, 199
    paradox, 199
pattern
    noise induced, 105
    switching, 115
Peierls-Nabarro potential, 151
propagation failure, 137
quantile, 35
quantile-quantile plot (QQ-plot), 35
quartile, 35
random walk, 49
ratchets, 162
rate
    escape, 60
    Kramers, 60
reaction-diffusion equation, 38
receiver operating characteristics, 144
regularization, 193
Residue theorem, 55
ridge regression, 194
ROC curves, 144
Schweinezyklus, 17
singular point, 55
    isolated, 55
Singular Value Decomposition, 39
SSR, 198
stability
    absolute, 120
stability analysis
    linear, 6
stochastic resonance, 2, 61
    array enhanced, 78
    doubly, 99
    monostable, 117
    spatiotemporal, 77
    suprathreshold, 198
    two state theory, 68
supply chain dynamics, 16
traffic dynamics, 177
wealth distribution, 13
weight decay, 194
white noise, 48
Wiener process, 48
Wiener-Khintchine theorem, 43