Physics Reports 307 (1998) 1-14
The time scale test for Ω: the inverse Hubble constant compared with the age of the universe

Allan Sandage*, G.A. Tammann, A. Saha

The Observatories of the Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101, USA
Astronomisches Institut der Universität Basel, Venusstr. 7, CH-4102 Binningen, Switzerland
National Optical Astronomy Observatories, PO Box 26732, 950 N. Cherry Avenue, Tucson, AZ 85726, USA
Abstract

The status of the HST program to calibrate the absolute magnitude at maximum of non-peculiar type Ia supernovae is reviewed. Assuming, in first approximation, that SNe Ia are perfect standard candles gives an interim calibration, based on seven SNe Ia in six galaxies for which we have Cepheid distances, of ⟨M_B(max)⟩ = −19.52 ± 0.07 and ⟨M_V(max)⟩ = −19.48 ± 0.07. Applying these calibrations to the Hubble diagram of 52 fiducial SNe Ia with good photometric data, and using a correction of 0.08 mag to an earlier adopted Cepheid period-luminosity relation, gives H₀ = 55 ± 5 km s⁻¹ Mpc⁻¹. Three other methods (via the Virgo cluster distance tied to the global expansion frame, the luminosity function of field spirals calibrated via Cepheids, and the physical methods using gravitational lenses, the Sunyaev-Zeldovich effect, and expanding SN envelopes) confirm the long distance scale that is implied by this value. Second-parameter corrections to the supernova method depending on decay rate affect this solution for H₀ by, at most, 5%. A critique is given of the Key Project result that H₀ > 70 (Freedman et al., 1998). Disagreements with their precepts are discussed. The "Key Project" results, reduced to our Virgo cluster distance of 21.5 Mpc, give H₀ = 55. Adopting 13.5 Gyr for the time since the beginning of the expansion gives a timing-test value of Ω(total) = 0.44 (−0.37, +1.16). This shows that the timing test for omega, although powerful in principle because it measures the total mass, is presently impotent because the errors in H₀ and T₀ are too large. Our principal result is that there is no time scale crisis in Big Bang cosmology, sans cosmological constant, because H₀⁻¹ > T₀ decisively, using the long distance scale derived here. © 1998 Elsevier Science B.V. All rights reserved.

PACS: 98.80.−k
1. The time scale test for the cosmic matter density

The topic of this conference is the evidence for or against the presence of majority amounts of non-shining matter, either baryonic or otherwise, in the universe. It is now common knowledge

* Corresponding author. Fax: 626 795-8136.
that luminous matter can account for only, at most, about 1% of the matter density that would be necessary to close the universe, based on the fact that the famous omega factor must have a value of exactly 1 (or q₀ = 0.5) for closure and flatness. Here, q₀ is the so-called deceleration parameter (Robertson, 1955; Hoyle and Sandage, 1956) given by q₀ = −(d²R/dt²)/(RH₀²), where R(t) is the cosmic expansion factor found from the solution of the Friedmann equation, and H₀ is the present-epoch value of the Hubble constant. Hence, if Ω = 1 is required for closure, there must be about 100 times more matter in the cosmos that is dark than is shining.

The most straightforward test of this conjecture is to deduce the mass density of the universe by measuring the deceleration of the expansion directly. The method is to compare the true ("real") age of the universe, T₀, with the value of the present-day inverse Hubble constant, H₀⁻¹. These two times would be identical if there has been no deceleration due to the self-gravity of the system (a zero-mass universe where q₀ = 0). The two times will differ if deceleration has occurred. Their ratio, H₀T₀, will measure the deceleration, giving q₀ directly as related to the mass density of the universe by

mass density = (3H₀²/4πG) q₀ .   (1)

The timing test is so powerful in principle because it measures the total gravitational mass of all kinds (baryonic and possible non-baryonic). The timing equations that give q₀, once the ratio H₀T₀ is known by observation (one of the principal observational problems in practical cosmology as it is practiced at the telescope), are well known and are derived elsewhere (Sandage, 1961, Eqs. (61), (65); Narlikar, 1983, Eqs. (4.51), (4.64)). The special case of interest here of q₀ = 0.5 has the canonical solution that the inverse Hubble constant (a time) must be 3/2 times longer than the actual "age of the universe". The purpose of this report is to summarize the present-day status of the timing test.
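For scale, Eq. (1) can be evaluated directly; with the closure value q₀ = 0.5 it reduces to the usual critical density 3H₀²/8πG. A minimal sketch (the physical constants are standard values and the function name is ours, not the paper's):

```python
import math

G = 6.674e-11            # Newton's constant, m^3 kg^-1 s^-2
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec

def matter_density(h0_km_s_mpc, q0):
    """Eq. (1): rho = (3 H0^2 / 4 pi G) q0, returned in kg m^-3."""
    h0_si = h0_km_s_mpc / KM_PER_MPC          # H0 converted to s^-1
    return 3.0 * h0_si ** 2 / (4.0 * math.pi * G) * q0

# For H0 = 55 km/s/Mpc and q0 = 0.5 this gives roughly 5.7e-27 kg/m^3,
# about 100 times the ~1% luminous-matter contribution quoted above.
```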
The two principal subjects are (1) the value of the Hubble constant as determined by four different methods, each of which gives H₀ = 55 ± 5 km s⁻¹ Mpc⁻¹, or H₀⁻¹ = 18 ± 2 Gyr, and (2) the "actual age of the universe" as determined from the age dating of the Galaxy via the methods of stellar evolution that optimistically give T₀ = 13.5 ± 2.5 Gyr.

2. Results of four independent programs to determine H₀

2.1. The Hubble constant from type Ia supernovae calibrated using Cepheid variables

The most reliable route to H₀ began when it became increasingly evident that the Hubble diagram (observed apparent magnitude at maximum vs. the log of the redshift) of type Ia supernovae (SNe Ia) has a remarkably small scatter about the regression line whose slope is dm/d log v = 5.0 (Kowal, 1968; Sandage and Tammann, 1982, 1993; Cadonau et al., 1985; Branch and Tammann, 1992; Tammann and Sandage, 1995; Hamuy et al., 1995, 1996). The slope of 5 is the required value if the law of the expansion of the universe is a linear relation between velocity and distance. This is the only form for a velocity field that appears to any observer placed anywhere in the manifold to have the same functional relation and the same apparent expansion rate (the Hubble constant at any particular epoch). The rate calibrates the time scale since "the beginning" of the expansion once the dynamic history of the scale factor R(t) is known.
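The equivalence quoted above between H₀ in km s⁻¹ Mpc⁻¹ and H₀⁻¹ in Gyr is a pure unit conversion; a short sketch (the constants and function name are ours):

```python
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.1557e16  # seconds in one gigayear

def hubble_time_gyr(h0_km_s_mpc):
    """Inverse Hubble constant 1/H0, expressed in Gyr."""
    return KM_PER_MPC / (h0_km_s_mpc * SEC_PER_GYR)

# hubble_time_gyr(55.0) -> about 17.8, i.e. H0 = 55 +/- 5 corresponds to
# H0^-1 = 18 +/- 2 Gyr as quoted in the text.
```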
The linearity of the Hubble law is one of the two most securely established facts of observational cosmology (Sandage, 1975; Sandage and Hardy, 1973; Sandage et al., 1976; Tammann, 1998). The other is the Planckian energy distribution of the Alpher-Herman 3 K relic radiation.

The uncalibrated Hubble diagram of a perfect standard candle (one with zero dispersion in absolute luminosity) will show a dispersionless correlation between apparent flux (i.e. the apparent magnitude) and the redshift. However, such an uncalibrated diagram gives only distance ratios. It can be made to give absolute distances once a calibration of the intrinsic luminosity (the absolute magnitude) of the distance indicator can be achieved. Such an absolute calibration leads immediately to the Hubble constant. It is this absolute calibration that is the basis of our HST program to obtain distances to the parent galaxies that have produced type Ia supernovae. These absolute distances have been obtained by discovering Cepheid variables in the parent galaxies with HST, and then applying some version of the Cepheid period-luminosity relation, absolutely calibrated (e.g. Sandage and Tammann, 1968, 1998; Feast and Walker, 1987; Madore and Freedman, 1991).

Our HST SNe Ia calibration program has been carried out by a small collaboration composed of Tammann, Saha, Labhardt, Panagia, Macchetto, and Sandage. The program has been described in detail elsewhere by Saha et al. (1997), updating the details of the most recent results. We summarize here our interim results based on HST Cepheid distances to six parent galaxies with seven non-peculiar SNe Ia.
Because the relative Hubble diagram for non-peculiar SNe Ia (called hereafter "Branch normal" after Branch et al., 1993) has such a small dispersion in magnitude (rms < 0.25 mag per event) about the regression line of slope 5, our first approximation has been to assume that SNe Ia can be treated as perfect standard candles (meaning zero rms scatter in their absolute luminosities). Although this assumption was known not to be strictly correct (Barbon et al., 1973; Pskovskii, 1977, 1984), because the absolute magnitude of SNe Ia has a weak dependence on the decay rate of the light curve, the evidence from the tightness of the normal SNe Ia Hubble diagram was then, and remains, exceptionally strong proof that the dependence is very weak, having a three times smaller slope (Sandage and Tammann, 1995c; Parodi and Tammann, 1998) than originally claimed by Phillips (1993). The overstatement of the effect by Phillips has subsequently been confirmed by new data from Cerro Tololo (Hamuy et al., 1996), where the final decay-rate relation is between three and four times smaller than that of Phillips (1993). (It is important to note that five of the nine closed-circled points in Fig. 2 of Hamuy et al. (1996) must be removed from their calibration of the decay rate because these points are derived from two distance indicators [PN and surface brightness fluctuations] that have different pedigrees from the other data, which are from a relative Hubble diagram with an arbitrary value of H₀. Clearly the distance scale of the black points [which in fact deviate most in Fig. 2 of Hamuy et al.] differs from that of the relative Hubble diagram.)
We have shown (Saha et al., 1997; Sandage and Tammann, 1997; Parodi and Tammann, 1998) that accounting for the decay rate-absolute magnitude effect in the SNe Ia Hubble diagram, using our current calibrations, changes by not more than 5% the Hubble constant that we derive here by treating Branch-normal SNe Ia as perfect standard candles. We disagree with the conclusions of Freedman (1998) and of Filippenko et al. (1998) on this problem, for reasons set out in Section 3.
A summary of the status of our Cepheid calibration of M_{B,V}(max) using the present seven calibrators is reviewed in Saha et al. (1997) and by Sandage and Tammann (1997). Our current calibrations are

⟨M_B(max)⟩ = −19.52 ± 0.07 ,   (2)

⟨M_V(max)⟩ = −19.48 ± 0.07 ,   (3)

leading to

H₀ = 57 ± 3 km s⁻¹ Mpc⁻¹ .   (4)

Although disagreement still remains concerning Eq. (4) at levels that reach 25% (Freedman et al., 1998; Filippenko et al., 1998), considerable progress has nevertheless been made concerning earlier claims of H₀ near 100. That a factor of 2 disagreement is no longer viable can most clearly be seen by comparing Eqs. (2) and (3) with de Vaucouleurs' (1979, Appendix B) requirement that ⟨M(max)⟩ = −18.5 if his distance scale with H₀ near 100 is correct, or the requirement of Pierce (1994) that ⟨M(max)⟩ must be −18.74 if his scale of H₀ = 86 were to be correct.

The final point in our method through "Branch normal" SNe Ia (and for two of the three methods that follow) concerns the accuracy with which we know the calibration of the Cepheid period-luminosity relation upon which the absolute distances of the parent galaxies of the SNe Ia depend. The Hipparcos results (Feast and Catchpole, 1997) for a calibration of the Cepheid P-L relation, based on trigonometric parallaxes rather than open cluster main sequence fittings, show that the zero point determined from such fittings (Sandage and Tammann, 1968; Feast and Walker, 1987) is within 0.1 mag of the new Hipparcos results (Sandage and Tammann, 1998). The Hipparcos trigonometric result is therefore a stunning confirmation, to better than 5%, of the Cepheid distance scale we have been using. The basic calibrations that enter Eqs. (2) and (3) have been the Cepheid P-L relations of Madore and Freedman (1991). Correction by the 0.08 mag difference between the Madore and Freedman (1991) canonical values and the new Hipparcos calibration by Feast and Catchpole (1997, which agrees within 0.02 mag with that of Sandage and Tammann, 1968) will reduce the Hubble constant in Eq. (4) by 4% to

H₀ = 55 ± 5 (external) ,   (5)

which we adopt. For the reasons set out in Section 3, we are unconvinced that the rediscussion of our SNe Ia result by Freedman et al. (1998) is valid.
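The 4% step from Eq. (4) to Eq. (5) is just the response of H₀ to a zero-point shift: brightening the calibration by Δμ mag scales every distance up by 10^(Δμ/5) and H₀ down by the same factor. A sketch (the function name is ours):

```python
def rescale_h0(h0, delta_mu):
    """Rescale H0 after brightening the distance-scale zero point by
    delta_mu magnitudes: distances grow by 10**(delta_mu/5), so H0 shrinks
    by the inverse factor."""
    return h0 * 10.0 ** (-delta_mu / 5.0)

# rescale_h0(57.0, 0.08) -> about 54.9, the ~4% reduction taking Eq. (4)
# to the adopted value in Eq. (5).
```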
An additional point on the robustness of the Cepheid P-L relation concerns its possible dependence on metallicity. A large literature now exists on the problem. Two modern studies, in which the earlier work is also reviewed, conclude that if a metallicity effect exists, it is small. Kennicutt et al. (1998), working with the metallicity gradient across the face of M101, derive an uncertain dependence of absolute magnitude on metallicity of dM/d[Fe/H] = −0.16 ± 0.08 mag/dex. This is negligible in the context of our HST experiments. Sandage et al. (1999), from theory combined with evolutionary tracks in the HR diagram, derive gradients of 0.00 ± 0.02 mag/dex in M_bol, +0.03 ± 0.02 mag/dex in B (in the sense that lower metallicities mean brighter luminosities in B), −0.08 ± 0.02 mag/dex in V (in the opposite sense), and −0.10 ± 0.02 mag/dex in I. New stellar atmospheres were computed as functions of [Fe/H], T_eff, mass, and surface gravity in this work. Again the effect on Eq. (5) is negligible.
2.2. The Hubble constant via the Virgo cluster distance as tied to the remote cosmic velocity frame

The method to H₀ via the Virgo cluster consists of two parts. (1) The first is to determine the distance to the Virgo cluster core itself, freed from all observational selection effects of the various distance indicators used. (2) The second is to devise a method to connect the cluster to the remote velocity expansion field that circumvents all local gravitational perturbations due to the overdensity of the Virgo region complex. Said differently, we must determine the cosmic expansion velocity of the Virgo cluster core as tied to the remote cosmological frame. Both parts of the problem have been controversial over the past decade.

Our analysis of the many available data sources is that the distance to the Virgo cluster core is 21.5 ± 2 Mpc, or (m − M) = 31.66 ± 0.08 mag. This distance, when combined with the cosmic (global) velocity in the remote cosmological frame of v(cosmic) = 1142 ± 61 km s⁻¹, gives the global velocity-to-distance ratio as

H₀ = 53 ± 5 km s⁻¹ Mpc⁻¹ .   (6)

On the other hand, Freedman et al. (1998) have adopted D = 17.1 Mpc for the distance to the Virgo cluster core, and v(cosmic) = 1368 km s⁻¹. Dividing these gives H₀ = 80 in their report in this volume.

We base our distance to the Virgo core on six independent distance indicators discussed in the next paragraph, the most reliable of which is the Tully-Fisher line width-absolute magnitude relation, corrected for the severe Teerikorpi (1987, 1990) "cluster incompleteness selection bias". To avoid this bias requires a sampling of the cluster luminosity function to at least 5 magnitudes from its peak; otherwise a correction must be applied for the incomplete sampling.
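The arithmetic behind Eq. (6) is the inversion of the distance modulus, m − M = 5 log₁₀ d(pc) − 5, followed by the velocity-to-distance ratio. A sketch (the function name is ours):

```python
def modulus_to_mpc(mu):
    """Distance in Mpc from a distance modulus m - M
    (m - M = 5 log10 d(pc) - 5 = 5 log10 d(Mpc) + 25)."""
    return 10.0 ** ((mu - 25.0) / 5.0)

d_virgo = modulus_to_mpc(31.66)   # ~21.5 Mpc, the adopted Virgo distance
h0_virgo = 1142.0 / d_virgo       # cosmic velocity / distance, ~53
```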
It was because of the failure to consider this bias that the distances to a number of clusters more remote than Virgo, summarized by Aaronson and Mould (1986) as determined earlier by them and their collaborators (Aaronson et al., 1980, 1982, 1986), were in error, being too small by an average of a factor of 1.6 (Sandage et al., 1995). Teerikorpi derived his bias properties of incomplete cluster samples from a theoretical model. At the same time, the incompleteness bias was demonstrated observationally by Kraan-Korteweg et al. (1988) and was discussed in detail by Sandage et al. (1995) using the very large sample of line width data of Mathewson et al. (1992). There is no question that such a bias must exist if the luminosity function has a finite dispersion (is not a delta function). The claim by Mould et al. (1997) to the contrary is not correct.

The methods upon which we have based our adopted distance to the cluster core, summarized elsewhere (Tammann and Federspiel, 1997; Tammann, 1998), include Cepheids in the Leo Group together with the relative distance of Leo to Virgo, the Tully-Fisher relation using the complete sample of Virgo cluster spirals (Federspiel et al., 1998), eight SNe Ia in Virgo spirals (Sandage and Tammann, 1995b; Tammann, 1996), globular clusters (Sandage and Tammann, 1995a), D_n-sigma (Dressler, 1987; rediscussed by Tammann, 1988), and normal novae (Tammann, 1998) based on the data of Pritchet and van den Bergh (1987). Table 1 shows the result, giving our adopted distance as (m − M) = 31.66 ± 0.08, or D = 21.5 ± 2 Mpc, where we have assigned the generous external error. The internal error is 0.9 Mpc.

For comparison, the adopted distance of Freedman et al. (1998) is (m − M) = 31.17 ± 0.45, or D = 17.1 ± 4.1, based on the single method of averaging the Cepheid distances of 5 spirals in the extended envelope surrounding the E galaxy core. The wide range of their distance moduli,
Table 1

Method              (m − M)         Hubble type   Source
Cepheids via Leo    31.52 ± 0.21    S             Tammann (1998)
Tully-Fisher        31.58 ± 0.24    S             FTS (1998)
Globular clusters   31.67 ± 0.15    E             ST (1995a)
D_n-sigma           31.85 ± 0.19    S0, S         Dressler (1987); Tammann (1988)
Novae               31.46 ± 0.40    E             PB (1987); T (1998)

Mean                31.66 ± 0.09    (D = 21.5 ± 0.9 Mpc)
corresponding to distances from 14.9 to 25.5 Mpc, shows the large depth of the spiral galaxy envelope that makes their mean value, based on their small sample, unreliable. The Tully-Fisher method in Table 1 that uses Virgo cluster spirals relies on the complete sample of 49 spirals that form a closed envelope surrounding the Virgo core (Federspiel et al., 1998), solving to first order the problem of the back-to-front depth effect.

The second part of the route to H₀ through the Virgo cluster requires measuring the recessional velocity (relative to our Local Group) that Virgo would have had in the absence of the gravitational deceleration of the Local Group (in the Virgocentric kinematic frame) caused by the over-density of the Virgo complex. Our solution has been to tie the Virgo cluster directly to the remote global expansion frame via relative distances of distant clusters to Virgo through a relative Hubble diagram, and then to read this diagram at the intercept at zero relative distance (Sandage and Tammann, 1990; Jerjen and Tammann, 1993). This is an exceedingly powerful method because it eliminates the need to map the highly uncertain local expansion field (v < 3000 km s⁻¹) so as to "correct for the infall velocity to Virgo" (actually the retarded expansion of the Local Group from the Virgocentric frame) (Tonry and Davis, 1981; Kraan-Korteweg, 1986; Sandage and Bedke, 1985a).

Fig. 1, expanded from Jerjen and Tammann (1993) with additional points from Giovanelli (1996), shows the relative Hubble diagram using distance ratios of 31 "remote" clusters to Virgo. Reading the diagram at zero modulus difference gives v(cosmic) = 1175 ± 66 km s⁻¹. Dividing this velocity by the distance of 21.5 Mpc gives

H₀ = 55 ± 5 .   (7)

The alternate route of deriving the "infall velocity" and applying it to the observed mean redshift of the cluster gives v(cosmic) = 1142 ± 61 km s⁻¹ (Federspiel et al., 1998), with the consequence that

H₀ = 1142/21.5 = 53 ± 5 ,   (8)

but this way is less secure than the determination of v(cosmic) through the relative modulus method of Fig. 1. Freedman et al. (1994, 1998) have not used the relative modulus method. Instead they have relied on the uncertain mapping method of the local velocity field. In their early report (Freedman et al.,
Fig. 1. Hubble diagram of 31 clusters with known relative distances. Asterisks are from Jerjen and Tammann (1993). Open circles are from Giovanelli (1996). Filled circles are the average from both sources. The ordinate is log redshift reduced to the CMB frame. The abscissa is the distance modulus difference of each cluster relative to Virgo. Diagram from Sandage and Tammann (1997).
1994) they used v(cosmic) = 1404 km s⁻¹, which is 20% higher than we derive here. In the report in this volume they adopt v(cosmic) = 1365, which is again higher than our value by 16%. This difference, together with their 25% smaller distance to the cluster core, accounts for their nearly 40% higher value of H₀ than we adopt here in Eq. (5). Section 3 below critiques their summary.

2.3. The local Hubble constant from the luminosity function of nearby spirals as tied to the remote velocity frame

The third method we have used is the same as that used by Lemaitre (1927, 1931) and Robertson (1928), who predated Hubble in predicting the velocity-distance relation, and even in making an initial determination of the expansion rate (the Hubble constant). Hubble (1926) had made a first determination of the mean absolute magnitude of spiral galaxies, based on his Cepheid distances of LMC, SMC, M33, M31, NGC 6822, and a tentative distance to M101. Lemaitre, and independently Robertson, adopted Hubble's estimate that the mean absolute magnitude of "extragalactic nebulae" was ⟨M⟩ = −15.2. They used this to calibrate their first version of what has now become known as the Hubble diagram, both obtaining H₀ near 550 km s⁻¹ Mpc⁻¹, close to what Hubble (1929) obtained by the same method in his discovery paper of the velocity-distance relation a year later.

The method has been highly refined by the introduction of the relative luminosity functions of galaxies of different Hubble types, and especially by the refinement of individual luminosity functions for the van den Bergh (1960a,b) luminosity classes. In a series of applications of the method to
Table 2
Determinations of H₀ by physical methods

Method                                   H₀        Source
Radio remnant of SN 1979C (Virgo)        54 ± 20   1
Expanding photosphere and N SNe Ia       50-70     2
Expanding photosphere SNe II             60 ± 10   3
Expanding photosphere SNe II             < 50      4
Sunyaev-Zeldovich for cluster A2218      45 ± 20   5
  and 6 other clusters                   60 ± 15   6
  A 2163                                 68 ± 30   7
  2 clusters                             42 ± 10   8
Gravitational lenses QSO 0957+561        63 ± 12   9
  B 0218+357                             60        10

Sources: (1) Bartel (1991); (2) Branch et al. (1996), Hoflich and Khokhlov (1996), Hoflich et al. (1996), Ruiz-Lapuente (1996); (3) Schmidt et al. (1992); (4) Baron et al. (1995); (5) McHardy et al. (1990), Birkinshaw and Hughes (1994), Lasenby and Hancock (1995); (6) Rephaeli (1995), Herbig et al. (1995); (7) Holzapfel et al. (1996); (8) Lasenby (1996); (9) Turner (1996), Kundic et al. (1997); (10) Corbett et al. (1995), Nair (1995).
field galaxies (Sandage, 1993a,b, 1994a,b, 1996a,b; Theureau et al., 1997; Goodwin et al., 1997; Federspiel, 1998), calibrated by the Cepheid distances determined by various groups from HST programs, the mean Hubble constant from local galaxies is

H₀ = 53 ± 3 km s⁻¹ Mpc⁻¹   (9)

(Tammann, 1998). Higher values near H₀ = 80, determined by others using Tully-Fisher methods on field galaxies, have been shown to be incorrect because of neglect of the necessary corrections for observational selection bias in the presence of the finite (and large) dispersion in M(LW) of the Tully-Fisher relation (Federspiel et al., 1994; Sandage et al., 1995; Teerikorpi, 1997). Although the method determines only the local value of H₀, it is argued elsewhere (Federspiel et al., 1994) that the local kinematic region (the local bubble) is in bulk motion toward the complex Hydra, Centaurus, Antlia region (there is no Great Attractor; Federspiel et al., 1994), but is also expanding at near the global rate over the coherence region (v < 4000 km s⁻¹) of the bubble, and that this local value of H₀ is within 10% of the global rate. The details are many, but the conclusion is robust (Federspiel et al., 1994, Fig. 17).

2.4. Distance determinations from non-astronomical methods

To complete the summary, we set out in Table 2 a variety of determinations of H₀ by purely physical methods; the details are described in the cited references at the bottom of the table. Clearly, the values cluster about the long-distance-scale values, with a mean near 60. More recent values with smaller error bars are by Meyers et al. (1997) using the Sunyaev-Zeldovich effect, giving H₀ = 54 ± 14, and by Kundic et al. (1997) using the time delay in the gravitational lens PG 1115+080, giving H₀ = 52 ± 14.
3. Analysis of contrary values of H₀ by others

In view of Eqs. (5)-(9), why have other collaborations consistently obtained higher values of H₀, until recently even as high as between 80 and 110, compared with the mean near H₀ = 57 summarized here? Much of the early work on the IR Tully-Fisher relation applied to cluster galaxies (Aaronson et al., 1980, 1982, 1986) gave apparent values near H₀ = 85. Because the work was confined to clusters, the authors of much of this work believed they were immune to bias effects of observational selection. "Work with clusters avoids the thorny issue of how to properly treat the Malmquist effect, because the galaxies in a cluster are generally at the same distance — it is likely that any sort of magnitude bias can be dispensed with entirely" (Aaronson et al., 1980). "Several possible sample biases, including the Malmquist effect are considered and dismissed" (Aaronson et al., 1982). "With clusters we confine ourselves to a sample which is basically volume limited rather than magnitude limited, allowing us to circumvent the [bias] problem" (Aaronson et al., 1986).

These and many other authors dismissed discussion of the bias properties of their indicators, each necessarily arriving therefrom at a value of H₀ that is systematically too large, by about 40% (Teerikorpi, 1990, 1997; Sandage, 1994a,b; Federspiel et al., 1994; Sandage et al., 1995). The details of their failure to understand the statistics of observational selection bias are complicated. To follow this nearly unique episode of science-gone-wrong during the past two decades of controversy over H₀, the reader is referred to the cited papers for an introduction to the extensive literature.

Rather than discussing the bias problems further, even though they are arguably the principal reason for the controversy before the new HST data, we discuss in the remainder of this section the reasons why we are critical of the interim report of the "Key Project" consortium set out by Freedman et al.
(1998). The principal reason for the difference between our value of H₀ = 55 ± 5 and the H₀ = 73 ± 6 adopted by Freedman is traced directly to our larger distances to the Virgo and Fornax clusters. Much of the calibration of the secondary methods listed in Freedman's (1998) summary table depends solely on their assumed distance moduli of (m − M) = 31.16 for Virgo and (m − M) = 31.34 for Fornax, both of which are 0.5 mag smaller than our moduli of 31.66 and 31.80 for Virgo and Fornax, respectively.

We are not convinced by their small modulus for Virgo compared with the Table 1 values here. As mentioned earlier, their Virgo distance is based on 5 spirals whose relation to the back-to-front effect is unknown. We suspect that four of their five are in the foreground because they were not chosen for observation randomly, but rather on the basis of their probable resolvability (Sandage and Bedke, 1985b). The fifth (NGC 4639), with the modulus of 32.0 (Saha et al., 1997), was chosen by us for study of its Cepheids not on the basis of ease of resolution, but rather because of its parenthood of SN Ia 1990N.

The adopted modulus of the Fornax cluster of 31.34 by Freedman et al. is even more scantily based, on only the Cepheid distance of the single giant spiral NGC 1365, neglecting again, as in Virgo, the evident back-to-front ratio. They have no reliable data to determine the relation of NGC 1365 to the E galaxy core. That a large back-to-front ratio exists (or, more probably, that the spirals associated with Fornax are not coincident in space with the E and S0 members) is discussed by Tammann and Federspiel (1997). Our principal objection is that they use the single distance to
NGC 1365 (the most highly resolved of the Fornax spirals) as if it defined the distance to the Fornax cluster's E and S0 galaxy core. The two early-type Fornax galaxies, NGC 1316 and NGC 1380, have produced the three SNe Ia 1980N, 1981D, and 1992A with closely the same mean apparent magnitudes of ⟨B(max)⟩ = 12.54 and ⟨V(max)⟩ = 12.46.

Freedman et al. then assume that the distance to the spiral NGC 1365 is the same as that of the E/Sa and S0 galaxies NGC 1316 and NGC 1380, claiming thereby to have calibrated the mean absolute magnitudes at maximum of SNe Ia 1980N, 1981D, and 1992A as M_B(max) = −18.80 and M_V(max) = −18.88. These are 0.7 mag fainter than our calibrations in Eqs. (2) and (3). They then argue that the three brightest of our seven SNe Ia calibrators that make up Eqs. (2) and (3) are to be discarded (for reasons that are demonstrably incorrect), and are then to be replaced by their three fainter calibrations via their most uncertain precept concerning the distance to the Fornax cluster core. From this they derive an unsupportably steep decline rate-absolute magnitude relation for SNe Ia, as discussed by Freedman in this volume.

None of this makes sense. Their decline-rate relation is four times steeper than is found directly from the relative Hubble diagram for 52 fiducial SNe Ia (Sandage and Tammann, 1995c; Hamuy et al., 1996; Parodi and Tammann, 1998). Furthermore, the resulting rms dispersion of their arbitrarily adopted list of calibrators is three times the known rms scatter of the relative Hubble diagram of our 52 SNe Ia fiducial sample. This is because of the extraordinarily faint absolute magnitudes of SN 1980N, 1981D, and 1992A compared with the four brighter calibrators of our seven that they do keep. The problem, of course, is that their adopted modulus for the Fornax galaxies that are parents of the three Fornax SNe is too small by 0.5 mag.
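The leverage of the 0.5 mag modulus offset is fixed by the distance-modulus relation: 10^(0.5/5) ≈ 1.26, i.e. roughly a 25% change in every distance and hence in H₀. A sketch of that arithmetic (variable names ours):

```python
# Distance ratio implied by a 0.5 mag offset in distance modulus
ratio = 10.0 ** (0.5 / 5.0)    # ~1.26, i.e. distances ~25% larger
h0_rescaled = 73.0 / ratio     # the Freedman et al. mean moved to the larger distance scale, ~58
```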
We argue elsewhere the reasons that the single distance of NGC 1365 does not define the distance to the Fornax core (Tammann and Federspiel, 1997; Sandage and Tammann, 1997), and therefore that the precepts upon which the Freedman consortium have based their calibrations are unconvincing to us.

The consequences for Freedman's summary table in this volume are the following. All values in that table depend on an assumed Virgo distance of 17.1 Mpc and Fornax distance of 18.5, both of which are 25% smaller than our values derived by a variety of methods, set out for Virgo in Table 1 and for Fornax elsewhere (Federspiel and Tammann, 1997). Therefore, on the distance scale of Table 1, all entries in the Freedman summary table must be reduced by this factor, as must their mean value of H₀ = 73 ± 6, giving

H₀ = 58 ± 5 km s⁻¹ Mpc⁻¹ .   (10)

4. The age of the universe via stellar evolution

The methods of age dating the oldest stars in the Galaxy are well known and are summarized by Chaboyer in this volume and earlier by Sandage and Tammann (1997), with additional references. The most reliable method is via the absolute magnitude of the main sequence termination point of globular clusters (Schonberg and Chandrasekhar, 1942; Sandage and Schwarzschild, 1952; Schwarzschild, 1958; Sandage, 1958), but this requires knowledge of the absolute distances of the individual clusters. Two methods to determine such distances are (1) photometric fitting of observed sequences to various subdwarf sequences of appropriate metallicities
(Sandage, 1970; Sandage and Cacciari, 1990, for earlier reviews), and (2) normalizing observed globular cluster horizontal branches to absolute magnitudes via some calibration of the metallicity—absolute magnitude relation of the RR Lyrae stars that reside on the horizontal branches. Once the absolute magnitudes at the termination points of the main sequence are known, theoretical models of age vs. termination luminosity, modeled now by many independent groups, complete the calculation. Many summaries exist of the present status of the dating of the Galaxy and, by extension, by adding between 0.5 and 1 Gyr for the formation of the Galaxy out of the Big Bang, of the "age of the universe", T₀. This is the other time scale needed for the timing test for Ω₀. In addition to the summary by Chaboyer in this volume, a different summary by Sandage and Tammann (1997, Table 6), only quoted here, gives

T₀ = 13.5 (+2, −3) Gyr , (11)

which we adopt.
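The arithmetic of the timing test carried out in the next section can be checked with a short script. The closed-form expression for H₀t₀ in an open, matter-only Friedmann model is used here as a stand-in for Table 8 of Sandage (1961); the function names are ours, for illustration only:

```python
import math

MPC_KM = 3.0857e19   # kilometres per megaparsec
GYR_S = 3.1557e16    # seconds per gigayear

def hubble_time_gyr(h0):
    """Inverse Hubble constant in Gyr, for H0 in km/s/Mpc."""
    return MPC_KM / h0 / GYR_S

def h0t0_open(omega):
    """Dimensionless H0*t0 for a matter-only open universe (0 < omega < 1)."""
    x = 1.0 - omega
    return 1.0 / x - omega / (2.0 * x ** 1.5) * math.acosh(2.0 / omega - 1.0)

t_h = hubble_time_gyr(55.0)   # about 17.8 Gyr, cf. Eq. (12)
ratio = 13.5 / t_h            # H0*T0 of about 0.76, cf. Eq. (13)

# Invert H0*t0 -> omega by bisection (h0t0_open decreases monotonically
# from 1 at omega -> 0 toward 2/3 at omega -> 1).
lo, hi = 0.01, 0.99
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if h0t0_open(mid) > ratio:
        lo = mid
    else:
        hi = mid
# mid lands near 0.5, in broad accord with Eq. (14) given the large errors
```

The error limits in Eqs. (12)—(14) can be propagated the same way by evaluating the end points of the H₀ and T₀ ranges.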
5. The present status of the timing test

We adopt H₀ = 55 ± 5 km s⁻¹ Mpc⁻¹ from Eq. (5) as our best interim solution pending the completion of the final HST calibration program, noting that Eqs. (6)—(10) are also compatible with it. The inverse of Eq. (5) gives

H₀⁻¹ = 17.7 ± 1.6 Gyr (12)

for the Hubble time. The ratio of Eqs. (11) and (12) gives

H₀T₀ = 0.76 ± 0.16 , (13)

which, although close to the value of 0.67 required if Ω = 1, is not identical to it. Reading Table 8 of Sandage (1961) for q₀ as a function of H₀T₀, and recalling that Ω = 2q₀, gives

Ω_T = 0.44 (+1.16, −0.37), or 0.07 < Ω_T < 1.60 . (14)

Hence, although the time scale test is powerful in principle, because it measures the total gravitating mass of all kinds, it is presently impotent in practice because the errors on H₀ and T₀, even at their optimistic error limits in Eqs. (12) and (13), are much too large. However, the one decisive conclusion from the present status of the problem is that there is no time scale crisis in cosmology if H₀ = 55: the Hubble time H₀⁻¹ (Eq. (12)) is well larger than the age of the Galaxy, cum T₀, in Eq. (11).

References

Aaronson, M. et al., 1980. Astrophys. J. 239, 12.
Aaronson, M. et al., 1982. Astrophys. J. Suppl. Ser. 50, 241.
Aaronson, M. et al., 1986. Astrophys. J. 302, 536.
Aaronson, M., Mould, J., 1986. Astrophys. J. 303, 1.
Barbon, R., Ciatti, F., Rosino, L., 1973. Astron. Astrophys. 25, 241.
Baron, E. et al., 1995. Astrophys. J. 441, 170.
Bartel, N., 1991. In: S.E. Woosley (Ed.), Supernovae. Springer, New York.
Birkinshaw, M., Hughes, J.P., 1994. Astrophys. J. 420, 33.
Branch, D., Fisher, A., Nugent, P., 1993. Astron. J. 106, 2383.
Branch, D., Nugent, P., Fisher, A., 1996. In: R. Canal, P. Ruiz-Lapuente, J. Isern (Eds.), Thermonuclear Supernovae. Kluwer, Dordrecht.
Branch, D., Tammann, G.A., 1992. Ann. Rev. Astron. Astrophys. 30, 315.
Cadonau, R., Sandage, A., Tammann, G.A., 1985. In: Bartel, N. (Ed.), Supernovae as Distance Indicators, Lecture Notes in Physics, vol. 224. Springer, Berlin, p. 15.
Corbett et al., 1995. Preprint.
de Vaucouleurs, G., 1979. Astrophys. J. 227, 729 (Appendix B).
Dressler, A., 1987. Astrophys. J. 317, 1.
Feast, M., Catchpole, R.M., 1997. Mon. Not. R. Astron. Soc. 286, L1.
Feast, M.W., Walker, A.R., 1987. Ann. Rev. Astron. Astrophys. 25, 345.
Federspiel, M., 1998. In press.
Federspiel, M., Sandage, A., Tammann, G.A., 1994. Astrophys. J. 430, 29.
Federspiel, M., Tammann, G.A., Sandage, A., 1998. Astrophys. J. 495, 115.
Filippenko, A. et al., 1998. Phys. Rep. 307, this volume.
Freedman, W.L. et al., 1994. Nature 371, 757.
Freedman, W.L. et al., 1998. Phys. Rep. 307, this volume.
Giovanelli, R., 1996. Private communication.
Goodwin, S.P., Gribbin, J., Hendry, M.A., 1997. Astron. J. 114, 2212.
Hamuy, M. et al., 1995. Astron. J. 109, 1.
Hamuy, M. et al., 1996. Astron. J. 112, 2391.
Herbig, T., Lawrence, C.R., Readhead, A.C.S., 1995. Astrophys. J. 449, L5.
Hoflich, P. et al., 1996. In: R. Canal, P. Ruiz-Lapuente, J. Isern (Eds.), Thermonuclear Supernovae. Kluwer, Dordrecht.
Hoflich, P., Khokhlov, A., 1996. Astrophys. J. 457, 500.
Holzapfel, W.B. et al., 1996. Cited in Lasenby (1996).
Hoyle, F., Sandage, A., 1956. Pub. Astron. Soc. Pac. 68, 301.
Hubble, E., 1926. Astrophys. J. 64, 321.
Hubble, E., 1929. Proc. Nat. Acad. Sci. 15, 168.
Jerjen, H., Tammann, G.A., 1993. Astron. Astrophys. 276, 1.
Kennicutt, R.C. et al., 1998. Astrophys. J., in press.
Kowal, C.T., 1968. Astron. J. 73, 102.
Kraan-Korteweg, R.C., 1986. Astron. Astrophys. Suppl. Ser. 66, 255.
Kraan-Korteweg, R.C., Cameron, L.M., Tammann, G.A., 1988. Astrophys. J. 331, 620.
Kundic, T. et al., 1997. Astrophys. J. 482, 75.
Kundic, T., Cohen, J.G., Blandford, R.D., Lubin, L.M., 1997. astro-ph/9704109 v2.
Lasenby, A.N., 1996. Talk at Baltimore May 1996 Workshop on the Distance Scale.
Lasenby, A.N., Hancock, S., 1995. In: Sanchez, N., Zichichi, A. (Eds.), Current Topics in Astrofundamental Physics: The Early Universe. Kluwer, Dordrecht, p. 327.
Lemaitre, G., 1927. Ann. de la Societe Scientifique de Bruxelles 47A, 49.
Lemaitre, G., 1931. Mon. Not. R. Astron. Soc. 91, 483 (translation of the above).
McHardy, I.M. et al., 1990. Mon. Not. R. Astron. Soc. 242, 215.
Madore, B., Freedman, W.L., 1991. PASP 103, 933.
Myers, S.T., Baker, J.E., Readhead, A.C.S., Leitch, E.M., Herbig, T., 1997. Astrophys. J., in press.
Mathewson, D.S., Ford, V.L., Buchhorn, M., 1992. Astrophys. J. Suppl. Ser. 81, 413.
Mould, J. et al., 1997. In: Livio et al. (Eds.), The Extragalactic Distance Scale. Cambridge University Press, Cambridge.
Narlikar, J.V., 1983. Introduction to Cosmology. Jones & Bartlett, Boston, Portola Valley, pp. 120, 122.
Nair, S., 1995. Preprint.
Parodi, B., Tammann, G.A., 1998. Astrophys. J., in press.
Phillips, M.M., 1993. Astrophys. J. 413, L105.
Pierce, M., 1994. Astrophys. J. 430, 53.
Pritchet, C.J., van den Bergh, S., 1987. Astrophys. J. 318, 507.
Pskovskii, Yu.P., 1977. Sov. Astron. 21, 675.
Pskovskii, Yu.P., 1984. Sov. Astron. 28, 658.
Rephaeli, Y., 1995. Ann. Rev. Astron. Astrophys. 33, 541.
Robertson, H.P., 1928. Phil. Mag. 5, 845.
Robertson, H.P., 1955. Pub. Astron. Soc. Pac. 67, 82.
Ruiz-Lapuente, P., 1996. Preprint.
Saha, A., Sandage, A., Labhardt, L., Tammann, G.A., Macchetto, F.D., Panagia, N., 1997. Astrophys. J. 486, 1.
Sandage, A., 1958. In: O'Connell, D.J.K. (Ed.), Stellar Populations, vol. 5. Specola Vaticana, p. 41.
Sandage, A., 1961. Astrophys. J. 133, 355.
Sandage, A., 1970. Astrophys. J. 162, 841.
Sandage, A., 1975. Astrophys. J. 202, 563.
Sandage, A., 1993a. Astrophys. J. 402, 3.
Sandage, A., 1993b. Astrophys. J. 404, 19.
Sandage, A., 1994a. Astrophys. J. 430, 1.
Sandage, A., 1994b. Astrophys. J. 430, 13.
Sandage, A., 1996a. Astron. J. 111, 1.
Sandage, A., 1996b. Astron. J. 111, 18.
Sandage, A., Bedke, J., 1985a. Astron. J. 90, 1992.
Sandage, A., Bedke, J., 1985b. Astron. J. 90, 2006.
Sandage, A., Bell, R., Tripicco, M., 1999. Astrophys. J., in press.
Sandage, A., Cacciari, C., 1990. Astrophys. J. 350, 645.
Sandage, A., Hardy, E., 1973. Astrophys. J. 183, 743.
Sandage, A., Kristian, J.A., Westphal, J., 1976. Astrophys. J. 205, 688.
Sandage, A., Schwarzschild, M., 1952. Astrophys. J. 116, 463.
Sandage, A., Tammann, G.A., 1968. Astrophys. J. 151, 531.
Sandage, A., Tammann, G.A., 1982. Astrophys. J. 256, 339.
Sandage, A., Tammann, G.A., 1990. Astrophys. J. 365, 1.
Sandage, A., Tammann, G.A., 1993. Astrophys. J. 415, 1.
Sandage, A., Tammann, G.A., 1995a. Astrophys. J. 446, 1.
Sandage, A., Tammann, G.A., 1995b. In: Sanchez, N., Zichichi, A. (Eds.), Current Topics in Astrofundamental Physics: The Early Universe. Kluwer, Dordrecht, p. 403.
Sandage, A., Tammann, G.A., 1995c. Astrophys. J. 452, 16.
Sandage, A., Tammann, G.A., 1997. In: Turok, N. (Ed.), Critical Dialogues in Cosmology. World Scientific Press, Singapore, p. 130.
Sandage, A., Tammann, G.A., 1998. Mon. Not. R. Astron. Soc. 293, L23.
Sandage, A., Tammann, G.A., Federspiel, M., 1995. Astrophys. J. 452, 1.
Schonberg, M., Chandrasekhar, S., 1942. Astrophys. J. 96, 161.
Schwarzschild, M., 1958. In: O'Connell, D.J.K. (Ed.), Stellar Populations, vol. 5. Specola Vaticana, p. 207.
Schmidt, B.P., Kirshner, R.P., Eastman, R.G., 1992. Astrophys. J. 395, 366.
Tammann, G.A., 1988. In: van den Bergh, S., Pritchet, C.J. (Eds.), The Extragalactic Distance Scale, ASP Conf. Ser. No. 4, p. 282.
Tammann, G.A., 1996. Rev. Mod. Astron. 9, 139.
Tammann, G.A., 1998. In: Proc. 1998 Marcel Grossmann Conf., in press.
Tammann, G.A., Federspiel, M., 1997. In: Livio, M., Donahue, M., Panagia, N. (Eds.), The Extragalactic Distance Scale, May 1996 STScI Workshop. Cambridge University Press, Cambridge.
Tammann, G.A., Sandage, A., 1995. Astrophys. J. 452, 16.
Teerikorpi, P., 1987. Astron. Astrophys. 173, 39.
Teerikorpi, P., 1990. Astron. Astrophys. 234, 1.
Teerikorpi, P., 1997. Ann. Rev. Astron. Astrophys. 35, 101.
Theureau, G. et al., 1997. Astron. Astrophys. 322, 730.
Tonry, J.L., Davis, M., 1981. Astrophys. J. 246, 680.
Turner, E.L., 1996. Talk at the Princeton Workshop on Critical Dialogues in Cosmology.
van den Bergh, S., 1960a. Astrophys. J. 131, 215.
van den Bergh, S., 1960b. Astrophys. J. 131, 558.
Physics Reports 307 (1998) 15—22
The nature of high-redshift galaxies

Joel R. Primack *, Rachel S. Somerville, S.M. Faber, Risa H. Wechsler
Physics Department, University of California, Santa Cruz, CA 95060, USA
Racah Institute of Physics, The Hebrew University, Jerusalem 91904, Israel
UCO/Lick Observatory, University of California, Santa Cruz, CA 95060, USA
Abstract

Using semi-analytic models of galaxy formation, we investigate the properties of z ~ 3 galaxies and compare them with the observed population of Lyman-break galaxies (LBGs). In addition to the usual quiescent mode of star formation, we introduce a physical model for starbursts triggered by galaxy—galaxy interactions. We find that, with the merger rate that arises naturally in the CDM-based merging hierarchy, a significant fraction of bright galaxies identified at high redshift (z ≳ 2) are likely to be low-mass, bursting satellite galaxies. The abundance of LBGs as a function of redshift and the luminosity function of LBGs both appear to be in better agreement with the data when the starburst mode is included, especially when the effects of dust are considered. The objects that we identify as LBGs have observable properties, including low velocity dispersions, that are in good agreement with the available data. In this "Bursting Satellite" scenario, quiescent star formation at z ≳ 2 is relatively inefficient and most of the observed LBGs are starbursts triggered by satellite mergers within massive halos. In high-resolution N-body simulations, we find that the most massive dark matter halos cluster at redshift z ~ 3 much as the Lyman-break galaxies (LBGs) are observed to do. This is true for both the Ω = 1 CHDM model and the low-Ω ΛCDM and OCDM models, all of which have fluctuation power spectra P(k) consistent with the distribution of low-redshift galaxies. The Bursting Satellite scenario can resolve the apparent paradox of LBGs that cluster like massive dark matter halos but have narrow linewidths and small stellar masses. 1998 Elsevier Science B.V. All rights reserved.

PACS: 98.62.Ai; 98.62.Gq; 98.80.Es

Keywords: Galaxies: formation; Galaxies: evolution; Galaxies: clustering; Galaxies: starburst; Cosmology: theory
1. Introduction

Our window onto the high-redshift universe (z ≳ 2) has been expanded tremendously by the "Lyman-break" photometric selection technique developed by Steidel and collaborators [1,2].

* Corresponding author. E-mail: [email protected].
Similar techniques were exploited by Madau et al. [3] to identify high-redshift candidates in the Hubble Deep Field (HDF). Extensive spectroscopic follow-up work at the Keck telescope has verified the accuracy of the photometric selection technique [4,5]. The morphologies and sizes of these objects can be studied using the HDF sample [6,5], and their clustering properties can be studied using the growing sample of hundreds of Lyman-break galaxies (LBGs) with spectroscopic redshifts [7,8]. One interpretation of LBGs [2,6,7] is that they are located in the centers of massive dark matter halos (M ~ 10¹² M☉) and have been forming stars at a moderate rate over a fairly long time-scale (≳ 1 Gyr). This "Massive Progenitor" scenario supposes that the galaxies identified as LBGs at z ~ 3 are the progenitors of the centers of today's massive luminous ellipticals and spheroids, around which the outer parts later accrete. This viewpoint has been supported by semi-analytic modeling [9]. But even though the observed clustering of LBGs is very similar to that of massive dark matter halos in N-body simulations [10—12], it does not necessarily follow that the Massive Progenitor scenario is correct. Here we consider the viability of an alternative interpretation of the observations [13,14]. There is clear observational evidence that galaxy—galaxy interactions trigger "starbursts", a mode of star formation with a sharply increased efficiency over a relatively short timescale. There is also observational evidence that galaxy—galaxy interactions and starbursts are more common at high redshift than they are today [15]. The similarity between the spectra of the LBGs and those of local starburst galaxies has been noted [5,16]. It seems likely that at least some of the observed Lyman-break galaxies are relatively low-mass (~10⁹—10¹⁰ M☉) objects in the process of an intense starburst, plausibly triggered by galaxy encounters.
If a significant fraction of the objects are of this nature, this would have far-reaching implications for the interpretation of the observations. In this alternative “Bursting Satellite” scenario, we predict that the population of bright galaxies at high redshift would still be found in massive dark matter halos, but they would not necessarily evolve into the centers of elliptical or spiral galaxies; instead, many of them may form the low-metallicity Population II stellar halos that surround bright galaxies at low redshift [17,18], and they could also be a major source of enriched gas in massive dark matter halos. The objects that we identify as LBGs in our models have star formation rates, half-light radii, I—K colors, and velocity dispersions that are in good agreement with the available data. In Ref. [14] we also investigate global quantities such as the star formation rate density and cold gas and metal content of the universe as a function of z.
2. Clustering of LBGs

The Bursting Satellite scenario predicts that LBGs are mainly in massive dark matter halos. That LBGs are associated with massive dark matter halos is a natural interpretation of the strong clustering in redshift exhibited by the LBGs [7,8]. Such halos at high redshift are expected to cluster much more strongly than the underlying dark matter, since they will be located mainly where sheets and filaments of dark matter intersect. In order to investigate quantitatively whether this model agrees with the observed redshift clustering of the LBGs, we [12] assumed that the LBGs are associated with the most massive dark matter halos at z ~ 3 in a suite of high-resolution cosmological simulations [19,20] of several of the
most popular CDM-variant cosmologies: standard cold dark matter (SCDM) with Ω = 1, h = 0.5, and σ₈ = 0.67, and four COBE-normalized models: CDM with Ω = 1, h = 0.5, and σ₈ = 1.3; cold plus hot dark matter (CHDM) with Ω = 1, h = 0.5, and Ω_ν = 0.2 in N_ν = 2 species of light neutrinos; open CDM (OCDM) with Ω = 0.5 and h = 0.6; and a flat ΛCDM cosmology with Ω₀ = 0.4, Ω_Λ ≡ Λ/(3H₀²) = 1 − Ω₀, and h = 0.6. These simulations were run in 75 h⁻¹ Mpc boxes with 57 million cold particles and a large dynamic range in force resolution; they thus had adequate resolution to identify all dark matter halos with a comoving number density at z ~ 3 higher than that of the observed LBGs. Since redshifts were measured spectroscopically [7] for only about 40% of the photometric LBG candidates, we chose a mass threshold for dark matter halos in the relevant redshift interval in each simulation such that the comoving number density of the halos would equal that of the LBG candidates, and then we sampled 40% of these at random. When we compared the clustering statistics of these halos to the observed distribution of the redshifts corrected for selection, we found that the CHDM, OCDM, and ΛCDM simulations predicted that pencil beams as long and wide as the first published one [7] would have a "spike" in the redshift distribution at z ~ 3 as high as the one actually observed approximately 1/3 of the time, which appears consistent with the additional statistics now available [8]. However, the CDM model, normalized either to clusters or to COBE, had a spike this high less than 10% of the time. The distribution on the sky of these massive halos was very similar to that of the observed LBGs in all the simulations; i.e., they form extended structures approximately 10 h⁻¹ Mpc across. So the spikes are not yet clusters at z ~ 3, but we find that they evolve into at least Virgo-size clusters by z = 0 (cf. [21]).
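The selection just described (rank halos by mass, match the comoving number density of the photometric candidates, then keep the spectroscopic fraction at random) can be sketched in a few lines. This is an illustrative reconstruction; the function and argument names are ours, not anything from Ref. [12]:

```python
import numpy as np

def select_lbg_halos(masses, box_volume, n_lbg, f_spec=0.4, seed=0):
    """Keep the most massive halos until their comoving number density
    matches that of the photometric LBG candidates (n_lbg, per unit
    comoving volume), then retain a random fraction f_spec to mimic the
    ~40% spectroscopic sampling.  Returns indices into `masses`."""
    rng = np.random.default_rng(seed)
    order = np.argsort(masses)[::-1]          # most massive first
    n_keep = int(round(n_lbg * box_volume))   # halos needed to match density
    candidates = order[:n_keep]
    return candidates[rng.random(n_keep) < f_spec]

masses = np.linspace(1.0, 1000.0, 1000)       # toy halo catalogue
sel = select_lbg_halos(masses, box_volume=1.0, n_lbg=200)
```

The clustering statistics of the returned subsample are then compared with the observed, selection-corrected redshift distribution.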
The autocorrelation function ξ(r) of the massive halos that we identify with LBGs in each simulation [12] has a power-law index of about −1.5, somewhat shallower than the −1.8 to −2.0 observed [22], and the correlation lengths that we calculate are a little larger than observed. But we expect the agreement to improve when we take into account the higher autocorrelation of the more massive halos and the higher probability that these will host LBGs. We are presently doing this calculation, combining our simulations with the semi-analytic models discussed in the next section.
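For reference, ξ(r) for such a halo sample can be estimated by binned pair counts against a random catalogue. The sketch below uses the simple "natural" estimator in a periodic box; it is illustrative only, since production analyses typically use the Landy-Szalay estimator with tree-based pair counting:

```python
import numpy as np

def xi_natural(data, randoms, edges, boxsize):
    """Natural estimator xi = norm*DD/RR - 1 with periodic distances.
    `data` and `randoms` are (N, 3) position arrays; `edges` are radial
    bin edges.  Brute-force O(N^2), fine for small catalogues."""
    def pair_counts(pts):
        d = np.abs(pts[:, None, :] - pts[None, :, :])
        d = np.minimum(d, boxsize - d)            # periodic wrap
        r = np.sqrt((d ** 2).sum(axis=-1))
        iu = np.triu_indices(len(pts), k=1)       # unique pairs only
        return np.histogram(r[iu], bins=edges)[0]
    dd = pair_counts(np.asarray(data))
    rr = pair_counts(np.asarray(randoms)).astype(float)
    nd, nr = len(data), len(randoms)
    norm = (nr * (nr - 1)) / (nd * (nd - 1))
    xi = np.zeros_like(rr)
    mask = rr > 0
    xi[mask] = norm * dd[mask] / rr[mask] - 1.0
    return xi
```

Fitting log ξ against log r over the bins then gives the power-law index quoted above.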
3. Semi-analytic models of galaxy formation Semi-analytic techniques allow one to model the formation and evolution of galaxies in a hierarchical framework, including the effects of gas cooling, star formation, supernova feedback, galaxy—galaxy merging, and the evolution of stellar populations. The semi-analytic models used here are described in detail in [14,23,24]. These models are in reasonably good agreement with a broad range of local galaxy observations, including the Tully—Fisher relation, the B-band luminosity function, cold gas contents, metallicities, and colors. Our basic approach is similar in spirit to the models originally developed by the Munich [25] and Durham [26] groups, and subsequently elaborated by these groups in numerous other papers (reviewed in Refs. [23,24]). We have reproduced much of the work of these groups, and improved on it in modeling the low-redshift universe in three main ways: (1) correcting the local Tully—Fisher normalization; (2) including extinction due to dust, which is crucial to get correctly both the Tully—Fisher relation (always corrected for extinction) and the luminosity function (not corrected for
extinction); and (3) developing an improved disk—halo treatment of the energy and metals in supernova ejecta. The framework of the semi-analytic approach is the "merging history" of a dark matter halo of a given mass, identified at z = 0 or any other redshift of interest. We construct Monte Carlo realizations of the "merger trees" using an improved method [27]. Each branch in the tree represents a halo merging event. When a halo collapses or merges with a larger halo, we assume that the associated gas is shock-heated to the virial temperature of the new halo. This gas then radiates energy and cools. The cooling rate depends on the density, metallicity, and temperature of the gas. Cold gas is turned into stars using a simple recipe, with the stellar masses assumed to follow the standard Salpeter IMF, and supernova energy reheats the cold gas according to another recipe. The free parameters are set by requiring an average fiducial "reference galaxy" (the central galaxy in a halo with a circular velocity of 220 km s⁻¹) to have an I-band magnitude M_I − 5 log h = −21.7 (this requirement fixes the zero-point of the I-band Tully—Fisher relation to agree with observations), a cold gas mass m_cold = 1.25 × 10¹⁰ h⁻¹ M☉, and a stellar metallicity of about solar. The star formation and feedback processes are some of the most uncertain elements of these models, and indeed of any attempt to model galaxy formation. As in our investigation of local galaxy properties [24], we have considered several different combinations of recipes for star formation and supernova feedback (sf/fb) and also several cosmologies [14], but here we report high-redshift results only for a single choice of sf/fb recipe and cosmology (SCDM).

3.1. Modeling starbursts

Previous semi-analytic models have not systematically investigated the importance of a bursting mode of star formation, particularly its effect on the interpretation of the observations of high-redshift galaxies.
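As context for the burst mode introduced next, the quiescent mode can be caricatured in a few lines. This is purely an illustrative stand-in: the timescale, efficiency, and velocity scaling below are our own placeholder choices, not the actual recipes of Refs. [14,23,24]:

```python
def quiescent_step(m_cold, m_star, v_circ, dt, tau_star=2.0, eps_sn=0.1):
    """Advance cold gas and stellar mass by one timestep dt (Gyr).
    Stars form on a fixed timescale tau_star (Gyr); supernova feedback
    reheats cold gas with an efficiency that grows in shallower potential
    wells (v_circ in km/s).  All parameter values are placeholders."""
    dm_star = (m_cold / tau_star) * dt              # simple SF recipe
    dm_reheat = eps_sn * dm_star * (200.0 / v_circ) ** 2
    dm_reheat = min(dm_reheat, m_cold - dm_star)    # cannot reheat more gas than remains
    return m_cold - dm_star - dm_reheat, m_star + dm_star
```

Iterating such a step along each branch of a merger tree, with cooling refilling the cold reservoir, is the backbone of the quiescent mode; the burst mode of the next subsection adds a second, merger-triggered channel on top of it.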
We start with the ansatz that galaxy—galaxy mergers trigger starbursts. This premise has considerable observational support and is also supported by N-body simulations with gas dynamics and star formation [28,29]. In our models, galaxies within the same large halo may merge via two different processes. Satellite galaxies lose energy and spiral into the center of the halo on a dynamical friction time-scale. In addition, satellite galaxies orbiting within the same halo may merge with one another according to a modified mean-free-path time-scale. Our modeling of the latter process is based on the scaling formula derived in Ref. [30] to describe the results of dissipationless N-body simulations in which galaxy—galaxy encounters and mergers were simulated, covering a large region of parameter space. When any two galaxies merge, the "burst" mode of star formation is turned on, with the star formation rate during the burst modeled as a Gaussian function of time. The burst model has two adjustable parameters, the time-scale of the burst and the efficiency of the burst (the fraction of the combined cold gas reservoir of both galaxies that is turned into stars over the entire duration of the burst). The timescale and efficiency parameters that we use are based on the simulations [28,29] mentioned above, treating major (mass ratio > f ≈ 0.3) and minor mergers separately. The quiescent mode of star formation continues as well. Details are given in [14]. Fig. 1 shows the total star formation rate for all the galaxies in a large group halo (V_c = 500 km s⁻¹ at z = 0). The star formation rate is shown in models with: (1) no starbursts (quiescent star formation only), (2) bursts in major mergers only and satellite galaxies only allowed
Fig. 1. The total star formation rate for all galaxies that end up within a typical halo with V_c = 500 km s⁻¹ at z = 0. The solid line shows a model (1) with no starbursts, the dotted line shows a model (2) with mergers only between satellite and central galaxies and bursts only in major mergers, and the dashed line shows a model (3) with satellite—satellite and satellite—central mergers and bursts in major and minor mergers. The look-back time is computed for Ω = 1 and H₀ = 50 km s⁻¹ Mpc⁻¹. Note that in model (3) the peaks, representing starbursts, occur primarily at look-back times of 8—12 Gyr (redshifts z ~ 1—5).
to merge with the central galaxy on a dynamical friction time-scale (no satellite—satellite mergers), and (3) satellite—central and satellite—satellite mergers, with bursts in both major and minor mergers. The star formation rate at high redshift is considerably amplified in model (3) compared to models (1) and (2), illustrating that neglecting satellite—satellite mergers and bursts in minor mergers will considerably underestimate the importance of starbursts at high redshift. The "burst" models discussed in the remainder of this paper correspond to the maximal burst scenario, model (3) above. (The models of [9] are similar to our model (2).)

3.2. Comoving number density of LBGs

The first question is whether the models reproduce the observed number densities of objects at high z. Fig. 2 shows the comoving number density of galaxies brighter than a fixed magnitude limit as a function of redshift, over the redshift range probed by the observed LBGs. We show this function for three values of the magnitude limit: the top panel shows the abundance of bright LBGs (m < 25.5), the middle panel shows the abundance of galaxies brighter than 26.5, and the bottom panel shows galaxies brighter than 27.5. We have calculated the comoving number density for the ground-based sample of LBGs with spectroscopic redshifts using data from Ref. [22], and the comoving number density of LBGs at z ~ 3 and z ~ 4 from the HDF using the list of U and B drop-outs from Ref. [3] (see [14] for details). We note that the number density of LBGs in the HDF is considerably higher than for the ground-based sample. This may be an indication that the ground-based sample is missing a substantial number of objects, or it may
Fig. 2. The comoving number density of galaxies brighter than m_lim, where m_lim = 25.5 (top panel), 26.5 (middle panel), or 27.5 (bottom panel). The hexagon indicates the comoving number density of LBGs with spectroscopic redshifts from the ground-based sample of [22], and the stars indicate the number density of U (z ~ 3) and B (z ~ 4) drop-outs in the HDF [3]. Bold solid lines show the comoving number density of galaxies in the models with starbursts; light solid lines show the results of the no-burst models. Dashed lines show the result of reducing the flux of each galaxy by a factor of three to estimate the effects of dust extinction [16].
indicate that the small HDF volume probed by the U and B drop-outs is an unusually overdense region. Fig. 2 shows that models without bursts underpredict the number density of galaxies, especially at the brightest magnitude limit m = 25.5. Burst models reproduce or exceed the observed number densities of LBGs when dust extinction is neglected. The inclusion of starbursts causes a bigger change in the comoving number density of LBGs at higher redshifts and at brighter magnitude limits; i.e., the number densities in the burst models have flatter dependences on both redshift and magnitude. By a redshift of z ≲ 2, including starbursts has little effect on the number counts. The galaxy—galaxy merger rate is larger at high redshift because the halos are denser, and the starbursts are more dramatic because these galaxies are relatively gas-rich. The inclusion of dust is an important correction. The observed colors of the LBGs, as well as comparison of the UV to Hβ fluxes, indicate that there is almost certainly some dust in these galaxies [16,31,32]. However, the amount of dust and the resulting extinction are quite uncertain. These depend on the metallicity and age of the galaxy, the geometry and "clumpiness" of the dust, and the wavelength dependence of the attenuation law. The correction factors for the rest-frame UV luminosity suggested in [16,31,32] range from ~2 to ~7. More dramatic corrections, as large as a factor of ~15, have been suggested [33,34].
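The flux correction factors quoted above translate into magnitudes through Δm = 2.5 log₁₀(factor); a two-line helper makes the bookkeeping explicit (the function name is ours):

```python
import math

def dust_correction_mag(flux_factor):
    """Magnitudes of dimming corresponding to a flux suppression factor."""
    return 2.5 * math.log10(flux_factor)

# factor 3 (used in Fig. 2)  -> about 1.2 mag
# factors 2-7 [16,31,32]     -> about 0.75-2.1 mag
# factor 15 [33,34]          -> about 2.9 mag
```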
Our estimates of the effect of dust in Fig. 2 simply decrease the luminosity of each galaxy by a factor of three. However, according to any physical dust model, a uniform correction by a fixed factor is probably unrealistic. If dust traces metal production (and hence star formation activity), more intrinsically luminous galaxies will be more heavily extinguished. It seems unavoidable that this will further increase the deficit of bright galaxies in the no-burst models seen in Fig. 2. However, if most of the bright galaxies are starbursting objects, as in the burst models, the situation is less clear. Observations [35,36] indicate that the wavelength dependence of the attenuation due to dust is “greyer” (less steep) in the UV for local starburst galaxies than a Galactic or SMC-type extinction curve. Powerful starbursts could blow holes in the dust, especially in small objects, perhaps ejecting the dust (along with metals) out of the galaxy. On the other hand, regions of active star formation may be completely enshrouded in dust, leading to even stronger extinction. In any case, models without starbursts appear to have no hope of reproducing the observed abundance of bright LBGs with just the conservative factor of three correction for dust included in Fig. 2. Even our models including starbursts do not reproduce the observed abundance of the brightest HDF LBGs with this dust correction. Since the light observed in the visible was emitted as ultraviolet in the rest frame, changing from the assumed Salpeter IMF to one with more high-mass stars can significantly increase the predicted abundance of bright LBGs. However, even such top-heavy IMFs will probably not be sufficient to save the no-burst models, which also predict a LBG luminosity function that is too steep compared to observations [14].
3.3. Line-widths, ages, and masses

The velocity dispersions of observed LBGs can be estimated from the widths of nebular emission lines such as Hβ or [O III]. Emission lines have been detected for a few of the brightest LBGs from the ground-based sample. The velocity dispersions σ derived from the observed linewidths are σ = 60—80 km s⁻¹ for four objects and σ ~ 190 km s⁻¹ for one object [37]. These values agree well with the burst models, for which the probability distribution of σ for the stars in a disk geometry peaks at ~80 km s⁻¹, but they are in strong disagreement with the no-burst models, which peak at σ ~ 180 km s⁻¹. These measurements may be affected by several biases, which remain to be unravelled. First, the observed data refer to bright LBGs, which may have systematically higher linewidths; this effect would increase the discrepancy with the no-burst models mentioned above. The modelling of the velocity dispersion at the small radii probed by the observations (approximately the half-light radius) is also uncertain because of the uncertain morphology of the observed LBGs (disk vs. spheroid). This should be studied with high-resolution narrowband imaging. Still, overall the present σs suggest small galaxies whose brightnesses are being amplified temporarily by starbursts. The ages and stellar masses of the burst models [14] also agree well with recent estimates based on observed SEDs including IR photometry. LBG colors are well fit by young (<0.1 Gyr) stellar populations with moderate amounts of dust [34]. These young ages imply that the stellar masses are also low (~10⁹ M☉). The LBGs in our no-burst models (and also those of Ref. [9]) are systematically older and more massive than the data indicate. More photometry and spectra of LBGs will help to clarify whether our models including "Bursting Satellites" adequately describe the properties of the LBGs.
Acknowledgements JRP acknowledges support from a NASA ATP grant at UCSC; RSS, a GAANN Fellowship at UCSC and a University Fellowship from The Hebrew University; and RHW, a Cota-Robles Fellowship at UCSC.
References

[1] C.C. Steidel, D. Hamilton, Astron. J. 105 (1993) 2017.
[2] C.C. Steidel et al., Astron. J. 112 (1996) 352.
[3] P. Madau et al., Mon. Not. R. Astron. Soc. 283 (1996) 1388.
[4] C.C. Steidel et al., Astrophys. J. 462 (1996) L17.
[5] J. Lowenthal et al., Astrophys. J. 481 (1997) 673.
[6] M. Giavalisco, C.C. Steidel, D. Macchetto, Astrophys. J. 470 (1996) 189.
[7] C.C. Steidel et al., Astrophys. J. 492 (1998) 428.
[8] K.L. Adelberger et al., Astrophys. J. 505 (1998) 18.
[9] C.M. Baugh et al., Astrophys. J. 498 (1998) 504.
[10] Y.P. Jing, Y. Suto, Astrophys. J. 494 (1998) L5.
[11] J.S. Bagla, Mon. Not. R. Astron. Soc. (1997), in press (astro-ph/9707159).
[12] R.H. Wechsler et al., Astrophys. J. (1998), in press (astro-ph/9712141).
[13] R.S. Somerville, S.M. Faber, J.R. Primack, in: D. Hamilton (Ed.), Proc. Ringberg Workshop on Large Scale Structure, Sept. 1996, Kluwer Academic Publishers, Dordrecht, in press.
[14] R.S. Somerville, J.R. Primack, S.M. Faber, Mon. Not. R. Astron. Soc. (1998), in press (astro-ph/9806228).
[15] R. Guzman et al., Astrophys. J. 489 (1997) 559.
[16] M. Pettini et al., in: J.M. Shull et al. (Eds.), Origins, ASP Conference Series, in press (astro-ph/9708117).
[17] S.C. Trager et al., Astrophys. J. 485 (1997) 92.
[18] L. Searle, R. Zinn, Astrophys. J. 225 (1978) 357.
[19] M.A.K. Gross, UCSC Ph.D. thesis, 1997 (http://fozzie.gsfc.nasa.gov).
[20] M.A.K. Gross et al., Mon. Not. R. Astron. Soc. (1998), in press (astro-ph/9712142).
[21] F. Governato et al., Nature 392 (1998) 359.
[22] M. Giavalisco et al., Astrophys. J. (1998), in press (astro-ph/9802318).
[23] R.S. Somerville, UCSC Ph.D. thesis (www.fiz.huji.ac.il/~rachels/thesis.html).
[24] R.S. Somerville, J.R. Primack, Mon. Not. R. Astron. Soc. (1998), in press (astro-ph/9802268).
[25] G. Kauffmann, S.D.M. White, B. Guiderdoni, Mon. Not. R. Astron. Soc. 264 (1993) 201.
[26] S. Cole et al., Mon. Not. R. Astron. Soc. 271 (1994) 781.
[27] R.S. Somerville, T.S. Kolatt, Mon. Not. R. Astron. Soc. (1998), in press (astro-ph/9711080).
[28] J.C. Mihos, L. Hernquist, Astrophys. J. 425 (1994) L13.
[29] J.C. Mihos, L. Hernquist, Astrophys. J. 464 (1996) 641.
[30] J. Makino, P. Hut, Astrophys. J. 481 (1997) 83.
[31] M. Dickinson, in: M. Livio et al. (Eds.), The Hubble Deep Field (astro-ph/9802064).
[32] D. Calzetti, in: W. Waller (Ed.), The Ultraviolet Universe at Low and High Redshift.
[33] G. Meurer et al., Astron. J. 114 (1997) 54.
[34] M. Sawicki, H.K.C. Yee, Astron. J. 115 (1998) 1329.
[35] D. Calzetti, A.L. Kinney, T. Storchi-Bergmann, Astrophys. J. 429 (1994) 582.
[36] D. Calzetti, Astron. J. 113 (1997) 162.
[37] M. Pettini et al., Astrophys. J. (1998), in press (astro-ph/9806219).
Physics Reports 307 (1998) 23—30
The age of the universe Brian Chaboyer* Hubble Fellow, Steward Observatory, The University of Arizona, Tucson, AZ 85721, USA
Abstract A minimum age of the universe can be estimated directly by determining the age of the oldest objects in our Galaxy. These objects are the metal-poor stars in the halo of the Milky Way. Recent work on nucleochronology finds that the oldest stars are 15.2 ± 3.7 Gyr old. White dwarf cooling curves have found a minimum age for the oldest stars of 8 Gyr. Currently, the best estimate for the age of the oldest stars is based upon the absolute magnitude of the main sequence turn-off in globular clusters. The oldest globular clusters are 11.5 ± 1.3 Gyr old, implying a minimum age of the universe of t ≥ 9.5 Gyr (95% confidence level). 1998 Elsevier Science B.V. All rights reserved. PACS: 98.80.−k Keywords: Globular clusters; Cosmology; Stellar structure; White dwarfs
1. Introduction A direct estimate for the minimum age of the universe may be obtained by determining the age of the oldest objects in the Milky Way. This direct estimate for the age of the universe can be used to constrain cosmological models, as the expansion age of the universe is a simple function of the Hubble constant, the average density of the universe and the cosmological constant. The oldest objects in the Milky Way are the metal-poor stars located in the spherical halo. There are currently three independent methods used to determine the ages of these stars: (1) nucleochronology, (2) white dwarf cooling curves and (3) main sequence turn-off ages based upon stellar evolution models. In this review I will summarize recent results from these three methods, with particular emphasis on main sequence turn-off ages, as they currently provide the most reliable estimate for the age of the universe.
* E-mail: [email protected].
B. Chaboyer / Physics Reports 307 (1998) 23—30
2. Nucleochronology Conceptually, the simplest way to determine the age of a star is to use the same method which has been used to date the Earth — radioactive dating. The age of a star is derived using the abundance of a long-lived radioactive nuclide with a known half-life (see, e.g., the review [1]). The difficulty in applying this method in practice is the determination of the original abundance of the radioactive element. The best application of this method to date has been on the very metal-poor star CS 22892 [2]. This star has a measured thorium abundance (half-life of 14.05 Gyr), and, just as importantly, the abundances of the elements with 56 ≤ Z ≤ 76 are very well matched by a scaled solar system r-process abundance distribution. Thus, it is logical to assume that the original abundance of thorium in this star is given by the scaled solar system r-process thorium abundance. A detailed study of the r-process abundances in CS 22892 led to an age of 15.2 ± 3.7 Gyr for this extremely metal-poor star [2]. This, in turn, implies a 2σ lower limit to the age of the universe of t ≥ 7.8 Gyr from nucleochronology. This is not a particularly stringent constraint at present. However, the uncertainty in the derived age is due entirely to the uncertainty in the determination of the thorium abundance in CS 22892. The determination of the abundance of thorium in a number of stars with similar abundance patterns to CS 22892 will naturally lead to a reduction in the error. If 8 more stars are observed, then the error in the derived age will be reduced to ±1.2 Gyr, making nucleochronology the preferred method of obtaining the absolute ages of the oldest stars in our Galaxy.
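The arithmetic behind radioactive dating and the projected error reduction can be sketched as follows. This is a minimal illustration, not the analysis of [2]: the thorium half-life (14.05 Gyr) comes from the text above, but the example abundance ratio of 0.47 is an invented number chosen only to reproduce an age near 15 Gyr.

```python
import math

def decay_age(n_ratio, half_life_gyr=14.05):
    """Age from radioactive decay: N/N0 = exp(-lam * t),
    so t = ln(N0/N) / lam, with lam = ln 2 / t_half (all in Gyr)."""
    lam = math.log(2) / half_life_gyr   # decay constant (1/Gyr)
    return math.log(1.0 / n_ratio) / lam

# Hypothetical example: observed Th is 47% of the inferred initial
# (scaled solar r-process) abundance -> age of ~15.3 Gyr.
age = decay_age(0.47)

# Averaging N independent stars shrinks the random error like 1/sqrt(N):
# one star gives +/-3.7 Gyr; nine comparable stars give ~ +/-1.2 Gyr,
# which is the improvement anticipated in the text.
err_one, n_stars = 3.7, 9
err_n = err_one / math.sqrt(n_stars)
```

Note that a 10% error in the measured abundance ratio translates directly into an age error of about 2 Gyr at this half-life, which is why the thorium measurement dominates the uncertainty.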
3. White dwarf cooling curves White dwarfs are the terminal stage of evolution for stars less massive than ∼8 M_⊙. As white dwarfs age, they become cooler and fainter. Thus, the luminosity of the faintest white dwarfs can be used to estimate their age. This age is based upon theoretical white dwarf cooling curves [3—5]. There are a number of uncertainties associated with theoretical white dwarf models, which have been studied in some detail. However, the effects of these theoretical uncertainties are generally not included in deriving the uncertainty associated with white dwarf cooling ages. The biggest difficulty in using white dwarfs to estimate the age of the universe is that white dwarfs are very faint and so are very difficult to observe. Most studies of white dwarf ages have concentrated on the solar neighborhood, in an effort to determine the age of the local disk of the Milky Way. Even these nearby samples can be affected by completeness concerns. The age determination for these disk white dwarfs is complicated by the fact that the results are sensitive to the star formation rate as a function of time [3]. A recent study has increased the sample size of local white dwarfs and concluded that the local disk of the Milky Way has an age of t = 9.5 (+1.1, −0.8) Gyr, where the quoted errors are due to the observational uncertainties in counting faint white dwarfs [6]. This implies a 2σ lower limit to the age of the local disk of t ≥ 7.9 Gyr.
The r-process is the creation of elements heavier than Fe through the rapid capture of neutrons by seed nuclei.
Recently, with the Hubble Space Telescope it has become possible to observe white dwarfs in nearby globular clusters. These observations are not deep enough to observe the faintest white dwarfs and can only put a lower limit on the age of the white dwarfs. Observations of the globular cluster M4 found a large number of white dwarfs, with no decrease in the number of white dwarfs at the faintest observed magnitudes [7]. Based upon the luminosity of the faintest observed white dwarfs, a lower limit to the age of M4 was determined to be t ≥ 8 Gyr [7]. When the Advanced Camera becomes operational on HST (scheduled to occur in the year 2000), it will be possible to obtain considerably deeper photometry of M4, leading to an improved constraint on the age of M4 from white dwarf cooling curves.
4. Main sequence turn-off ages Theoretical models for the evolution of stars provide an independent method to determine stellar ages. These computer models are based on stellar structure theory, which is outlined in numerous textbooks [8,9]. One of the triumphs of stellar evolution theory is a detailed understanding of the preferred location of stars in a temperature—luminosity plot (Fig. 1). A stellar model is constructed by solving the four basic equations of stellar structure: (1) conservation of mass; (2) conservation of energy; (3) hydrostatic equilibrium and (4) energy transport via radiation, convection and/or conduction. These four coupled differential equations represent a two-point boundary value problem. Two of the boundary conditions are specified at the center of the star (mass and luminosity are zero), and two at the surface. In order to solve these equations, supplementary information is required. The surface boundary conditions are based on stellar atmosphere calculations. The equation of state, opacities and nuclear reaction rates must be known. The mass and initial composition of the star need to be specified. Finally, as convection can be important in a star, one must have a theory of convection which determines when a region of a star is unstable to convective motions, and if so, the efficiency of the resulting heat transport.
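In standard textbook notation [8,9], the four equations just listed may be written in Lagrangian form (with the enclosed mass m as the independent variable; here ρ is the density, P the pressure, L the luminosity, T the temperature, ε the nuclear energy generation rate, and ∇ ≡ d ln T / d ln P the local temperature gradient set by the transport mechanism):

```latex
\begin{aligned}
\frac{\partial r}{\partial m} &= \frac{1}{4\pi r^{2}\rho} , &&\text{(mass conservation)}\\[2pt]
\frac{\partial P}{\partial m} &= -\frac{G m}{4\pi r^{4}} , &&\text{(hydrostatic equilibrium)}\\[2pt]
\frac{\partial L}{\partial m} &= \epsilon , &&\text{(energy conservation)}\\[2pt]
\frac{\partial T}{\partial m} &= -\frac{G m T}{4\pi r^{4} P}\,\nabla . &&\text{(energy transport)}
\end{aligned}
```

Whether ∇ takes its radiative or convective value at each point is exactly where the convection theory discussed above enters the problem.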
Globular clusters are compact stellar systems containing ∼10^5 stars. These stars contain few heavy elements (typically 1/10 to 1/100 the ratio found in the Sun) and are spherically distributed about the Galactic center. Together, these facts suggest that globular clusters were among the first objects formed in the Galaxy. There is evidence for an age range among the globular clusters, so the tightest limits on the minimum age of the universe are found when only the oldest globular clusters are considered. These are typically the globular clusters with the lowest heavy element abundances (1/100 the solar ratio). All of the stars in a globular cluster have the same age and chemical composition. Their location in the color-magnitude diagram is determined by their mass. Higher mass stars have shorter lifetimes and evolve more quickly than low mass stars. The various evolutionary sequences have been labeled. Most stars are on the main sequence (MS), fusing hydrogen into helium in their cores (for clarity, only about 10% of the stars on the MS have been plotted). Slightly higher mass stars have exhausted their supply of hydrogen in the core, and are in the main sequence turn-off region (MSTO). After the MSTO, the stars quickly expand, become brighter and are referred to as red giant branch (RGB) stars. These stars are burning hydrogen in a shell about a helium core. Still higher mass stars have developed a helium core which is so hot and dense that helium fusion is ignited. This evolutionary phase is referred to as the horizontal branch (HB). Some stars on the horizontal branch are unstable to radial pulsations. These radially pulsating variable stars are called RR Lyrae stars, and are important distance indicators.
Fig. 1. A color-magnitude diagram of a typical globular cluster, M15 [10]. The vertical axis plots the magnitude (luminosity) of the stars in the V wavelength region, with brighter stars having smaller magnitudes. The horizontal axis plots the color (surface temperature) of the stars, with cooler stars towards the right.
Once all of the above information has been determined, a stellar model may be constructed. The evolution of a star may be followed by computing a static stellar structure model, updating the composition profile to reflect the changes due to nuclear reactions and/or mixing due to convection, and then re-computing the stellar structure model. There are a number of uncertainties associated with stellar evolution models, and hence, with age estimates based on the models. Probably the least understood aspect of stellar modeling is the treatment of convection. Numerical simulations hold promise for the future [11,12], but at present one must view properties of stellar models which depend on the treatment of convection as uncertain, and subject to possibly large systematic errors. Main sequence and red giant branch globular cluster stars have surface convection zones. Hence, the surface properties of the stellar models (such as their effective temperatures, or colors) are rather uncertain. Horizontal branch stars have convective cores, so the predicted luminosities and lifetimes of these stars are subject to possible systematic errors. Given the known uncertainties in the models, the luminosity (absolute magnitude) of the main-sequence turn-off has the smallest theoretical errors, and is the preferred method for obtaining the absolute ages of globular clusters (e.g. [13,14]). The theoretical calibration of age as a function of the luminosity of the main-sequence turn-off has changed somewhat over the last
several years. It has long been realized that diffusion (the settling of helium relative to hydrogen) could shorten the predicted main sequence lifetimes of stars [15]. However, it was not clear if diffusion actually occurred in stars, so this process had been ignored in most calculations. Recent helioseismic studies of the Sun have shown that diffusion occurs in the Sun [16,17]. The Sun is a typical main sequence star, whose structure (convective envelope, radiative interior) is quite similar to that of main sequence globular cluster stars. Thus, as diffusion occurs in the Sun, it appears likely that diffusion also occurs in main sequence globular cluster stars. Modern calculations find that the inclusion of diffusion lowers the age of globular clusters by 7% [18]. The recent use of an improved equation of state has led to a further 7% reduction in the derived globular cluster ages [19]. The equation of state now includes the effect of Coulomb interactions [20]. Helioseismic studies of the Sun find that there are no significant errors associated with the equation of state currently used in stellar evolution calculations [21]. Together, the use of an improved equation of state and the inclusion of diffusion in the theoretical models have led to a ∼2 Gyr (14%) reduction in the estimated ages of the oldest globular clusters. The excellent agreement between theoretical solar models and the Sun (see Fig. 2) suggests that future improvements in stellar models will likely lead to small (less than ∼5%) changes in the derived ages of globular cluster stars. A detailed Monte Carlo study found that the uncertainties in the theoretical models led to a 1σ error of 7% in the derived globular cluster ages [22]. This study considered errors associated with 15 different parameters used in the construction of theoretical stellar models and isochrones.
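The logic of such a Monte Carlo error budget can be sketched in a few lines. This is only a toy stand-in for the real calculation of [22]: the function below is an invented parametric caricature of a stellar-evolution age fit (the real study ran actual isochrones over 15 parameters), and all coefficients and distributions are illustrative assumptions.

```python
import random

def cluster_age(oxygen_enh, diffusion_eff, mixing_length):
    """Toy stand-in for a stellar-evolution age fit (NOT the real code):
    a 12 Gyr baseline nudged by three illustrative input parameters."""
    return 12.0 * (1.0 - 0.04 * oxygen_enh) \
                * (1.0 - 0.07 * diffusion_eff) \
                * (1.0 + 0.02 * (mixing_length - 1.5))

random.seed(42)
ages = []
for _ in range(10_000):
    # Draw each uncertain input from an assumed prior distribution.
    ages.append(cluster_age(random.uniform(-1, 1),     # [O/Fe] scale
                            random.uniform(0.5, 1.0),  # diffusion efficiency
                            random.gauss(1.5, 0.1)))   # mixing length

ages.sort()
median = ages[len(ages) // 2]
# 68% (1-sigma equivalent) interval from the 16th/84th percentiles:
lo, hi = ages[int(0.16 * len(ages))], ages[int(0.84 * len(ages))]
```

The key feature is that the output error interval automatically folds together all input uncertainties, including non-Gaussian ones, which is why the Monte Carlo approach was adopted.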
The parameter which led to the largest uncertainty in the derived age of the globular clusters was the abundance of the α-capture elements (oxygen is the most important α-capture element) in globular cluster stars. It is difficult to determine the abundance of oxygen observationally [23,24],
Fig. 2. The difference between the squared sound speed of a theoretical solar model and that of the actual Sun, (c²_model − c²_⊙)/c²_⊙, as a function of radius in the model. The sound speed of the Sun is obtained from helioseismology — observations of the solar p-modes, whose frequencies depend on the sound speed. Note that the maximum difference between the model and the Sun is less than 0.5%, a level of accuracy rarely seen in astronomy. The best solar models constructed in the early 1990s led to squared sound speed differences of order 3%.
with estimates of the oxygen abundance varying by up to a factor of 3. When extreme values for the oxygen abundance are used in the theoretical calculations, the derived globular cluster ages change by 8%. The use of the luminosity of the main sequence turn-off as an age indicator requires that the distance to the globular cluster be known. Determining distances is one of the most difficult tasks in astronomy, and is always fraught with uncertainty. The release of the Hipparcos data set of parallaxes to nearby stars [25] has suggested that a revision of the conventional globular cluster distance scale is necessary. Hipparcos did not directly determine the distance to any globular cluster, but did provide the distances to a number of nearby metal-poor main sequence stars. Assuming that globular cluster stars have identical properties to these nearby stars, the nearby stars can serve as calibrators of the intrinsic luminosity of metal-poor main sequence stars, and the distance to a globular cluster can be determined. This technique is referred to as main sequence fitting. There have been a number of papers which have used the Hipparcos data set to determine the distance to globular clusters using main sequence fitting [22,26—28]. Three of these papers conclude that globular clusters are further away than previously believed, leading to a reduction in the derived ages. The remaining paper [28] concluded that the Hipparcos data did not lead to a revision of the globular cluster distance scale. However, this work incorrectly included binary stars in the main sequence fitting [22]. When the known binaries are removed from the fit (a case which is also considered in [28]), then all four papers are in agreement — the Hipparcos data yield larger distances (and hence, younger ages) for globular clusters.
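The core of main sequence fitting is a single vertical shift in the color-magnitude diagram. The sketch below, with wholly invented magnitudes and colors, shows the idea under the simplifying assumption that the calibrating subdwarfs and cluster stars are sampled at exactly the same colors (real analyses must also correct for metallicity differences and reddening).

```python
# Main-sequence fitting sketch: the distance modulus mu = m - M is the
# vertical shift that best aligns the cluster's apparent main sequence
# with local subdwarfs of known absolute magnitude (all data invented).
calibrators = [(0.55, 5.2), (0.60, 5.6), (0.65, 6.0)]    # (B-V color, M_V)
cluster     = [(0.55, 19.3), (0.60, 19.7), (0.65, 20.1)] # (B-V color, m_V)

# With matched colors, the least-squares shift is just the mean offset.
shifts = [m - M for (_, m), (_, M) in zip(cluster, calibrators)]
mu = sum(shifts) / len(shifts)     # distance modulus (mag)

distance_pc = 10 ** (mu / 5 + 1)   # from mu = 5 log10(d / 10 pc)
```

A systematic shift of only 0.1 mag in the calibrator luminosities changes the inferred distance by 5%, which is how the Hipparcos recalibration propagated into a ∼7% change in age.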
My analysis [22] considered four distance determination techniques in addition to using the Hipparcos data, and concluded that the five independent distance estimates to globular clusters all led to younger globular cluster ages. A number of authors have recently examined the question of the absolute age of the oldest globular clusters. All of these works used the luminosity of the main sequence turn-off as the age indicator. The results are summarized in Table 1. Despite the fact that these investigators used a variety of theoretical stellar models (with differing input physics) and different methods to determine the distance to the globular clusters, the derived ages are remarkably similar, around 12 Gyr. These ages are ∼3 Gyr younger than previous determinations, due to improved input physics used in the models, and a longer distance scale to globular clusters. My work [22] considered a variety of distance indicators and included a very detailed Monte Carlo study of the possible errors associated with the theoretical stellar models. For this reason, my preferred age for the oldest globular clusters is 11.5 ± 1.3 Gyr, implying a minimum age of the universe of t ≥ 9.5 Gyr at the 95% confidence level.

Table 1
Estimates for the age of the oldest globular clusters

Age (Gyr)     Distance determination                                 Reference
11.5 ± 1.3    Five independent techniques                            [22]
12 ± 1        Main sequence fitting (Hipparcos)                      [26]
11.8 ± 1.2    Main sequence fitting (Hipparcos)                      [27]
14.0 ± 1.2    Main sequence fitting (Hipparcos) including binaries   [28]
12 ± 1        Theoretical HB & main sequence fitting                 [29]
12.2 ± 1.8    Theoretical HB                                         [30]
5. Summary A direct estimate for the minimum age of the universe can be obtained by determining the age of the oldest objects in the Galaxy. These objects are the metal-poor stars located in the halo of the Milky Way. There are currently three independent techniques which have been used to determine the ages of the metal-poor stars in the Milky Way: nucleochronology, white dwarf cooling theory, and main sequence turn-off ages. The best application of nucleochronology to date has been on the very metal-poor star CS 22892, which has an age of 15.2 ± 3.7 Gyr [2], implying a 2σ lower limit to the age of the universe of t ≥ 7.8 Gyr. White dwarf cooling theory is difficult to apply in practice, as one needs to observe very faint objects. Currently, it is impossible to observe the faintest white dwarfs in a globular cluster, so white dwarf cooling theory can only provide a lower limit to the age of a globular cluster. Based upon the luminosity of the faintest observed white dwarfs, a lower limit to the age of M4 was determined to be t ≥ 8 Gyr [7]. Absolute globular cluster ages based upon the main sequence turn-off have recently been revised due to the realization that globular clusters are farther away than previously thought. The age of the oldest globular clusters is 11.5 ± 1.3 Gyr [22], implying a minimum age of the universe of t ≥ 9.5 Gyr (95% confidence level). At the present time, main sequence turn-off ages have the smallest errors of the available age determination techniques and provide the best estimate for the age of the universe. To obtain the actual age of the universe, one must add to the above age the time it took for the metal-poor stars to form. Unfortunately, a good theory for the onset of star formation within the Galaxy does not exist. Estimates for the epoch of initial star formation range from redshifts of z ∼ 5—20. This corresponds to ages ranging from 0.1 to 2 Gyr, implying that the actual age of the universe lies in the range 9.6 ≤ t ≤ 15.4 Gyr.
Tightening the bounds of this estimate will require a better understanding of the epoch of galaxy formation, along with improved stellar models and distance estimates to globular clusters.
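The mapping from formation redshift to elapsed time depends on the assumed cosmology. As a hedged illustration only, in a flat matter-dominated (Einstein-de Sitter) model with an assumed H₀ = 65 km s⁻¹ Mpc⁻¹ the age at redshift z is t(z) = (2/3H₀)(1+z)^(−3/2), which gives ∼0.1 Gyr at z = 20 and ∼0.7 Gyr at z = 5; lower-density or Λ-dominated models give larger ages at fixed z, extending the upper end of the range toward the ∼2 Gyr quoted above.

```python
import math

def age_at_z_eds(z, h0=65.0):
    """Age of an Einstein-de Sitter universe at redshift z, in Gyr:
    t(z) = (2 / 3 H0) * (1 + z)**-1.5, with H0 in km/s/Mpc.
    (977.8 Gyr converts 1/H0 from Mpc s/km to Gyr.)"""
    hubble_time_gyr = 977.8 / h0
    return (2.0 / 3.0) * hubble_time_gyr * (1.0 + z) ** -1.5

t_z20 = age_at_z_eds(20)   # earliest assumed star formation
t_z5  = age_at_z_eds(5)    # latest assumed star formation
```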
Acknowledgements The author was supported for this work by NASA through Hubble Fellowship grant number HF-01080.01-96A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555.
References
[1] J.J. Cowan, F.-K. Thielemann, J.W. Truran, Annu. Rev. Astron. Astrophys. 29 (1991) 447.
[2] C. Sneden, A. McWilliam, G.W. Preston, J.J. Cowan, D.L. Burris, B.J. Armosky, Astrophys. J. 467 (1996) 819.
[3] M.A. Wood, Astrophys. J. 386 (1992) 539.
[4] L. Segretain, G. Chabrier, M. Hernanz, E. García-Berro, J. Isern, R. Mochkovitch, Astrophys. J. 434 (1994) 641.
[5] M. Salaris, I. Domínguez, E. García-Berro, M. Hernanz, J. Isern, R. Mochkovitch, Astrophys. J. 486 (1997) 413.
[6] T.D. Oswalt, J.A. Smith, M.A. Wood, P. Hintzen, Nature 382 (1996) 692.
[7] H.B. Richer et al., Astrophys. J. 484 (1997) 741.
[8] M. Schwarzschild, Structure and Evolution of the Stars, Dover, New York, 1958.
[9] C.J. Hansen, S.D. Kawaler, Stellar Interiors: Physical Principles, Structure and Evolution, Springer, New York, 1994. [10] P.R. Durrell, W.E. Harris, Astron. J. 105 (1993) 1420. [11] W.P. Abbett, M. Beaver, B. Davids, D. Georgobiani, P. Rathbun, R.F. Stein, Astrophys. J. 480 (1997) 395. [12] Y.-C. Kim, K.L. Chan, Astrophys. J. 496 (1998) L121. [13] A. Renzini, in: T. Shanks et al. (Eds.), Observational Tests of Cosmological Inflation, Kluwer, Dordrecht, p. 131. [14] B. Chaboyer, Nuclear Physics B (Proc. Suppl.) 51B (1996) 10. [15] P.D. Noerdlinger, R.J. Arigo, Astrophys. J. 237 (1980) L15. [16] J. Christensen-Dalsgaard, C.R. Proffitt, M.J. Thompson, Astrophys. J. 403 (1993) L75. [17] D.B. Guenther, Y.-C. Kim, P. Demarque, Astrophys. J. 463 (1996) 382. [18] B. Chaboyer, P. Demarque, A. Sarajedini, Astrophys. J. 459 (1996) 558. [19] B. Chaboyer, Y.-C. Kim, Astrophys. J. 454 (1995) 76. [20] F.J. Rogers, in: G. Chabrier, E. Schatzman (Eds.), The Equation of State in Astrophysics, IAU Coll. 147, Cambridge University Press, Cambridge, 1994, p. 16. [21] S. Basu, J. Christensen-Dalsgaard, Astron. Astrophys. 322 (1997) L5. [22] B. Chaboyer, P. Demarque, P.J. Kernan, L.M. Krauss, Astrophys. J. 494 (1998) 96. [23] P. Nissen, B. Gustafsson, B. Edvardsson, G. Gilmore, Astron. Astrophys. 285 (1994) 440. [24] S.C. Balachandran, B.W. Carney, Astron. J. 111 (1996) 946. [25] ESA, The Hipparcos and Tycho Catalogues, ESA SP-1200, 1997. [26] I.N. Reid, Astron. J. 114 (1997) 161. [27] R.G. Gratton, F. Fusi Pecci, E. Carretta, G. Clementini, C.E. Corsi, M. Lattanzi, Astrophys. J. 491 (1997) 749. [28] F. Pont, M. Mayor, C. Turon, D.A. VandenBerg, Astron. Astrophys. 329 (1998) 87. [29] F. D'Antona, V. Caloi, I. Mazzitelli, Astrophys. J. 477 (1997) 519. [30] M. Salaris, S. Degl'Innocenti, A. Weiss, Astrophys. J. 479 (1997) 665.
Physics Reports 307 (1998) 31—44
Results from the High-z Supernova Search Team Alexei V. Filippenko*, Adam G. Riess Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA
On behalf of the High-z Supernova Search Team
Abstract We review the use of Type Ia supernovae for cosmological distance determinations. Low-redshift SNe Ia (z ≲ 0.1) demonstrate that (a) the Hubble expansion is linear, (b) H₀ = 65 ± 2 (statistical) km s⁻¹ Mpc⁻¹, (c) the bulk motion of the Local Group is consistent with the COBE result, and (d) the properties of dust in other galaxies are similar to those in the Milky Way. We find that the light curves of high-redshift SNe Ia are stretched in a manner consistent with the expansion of space; similarly, their spectra exhibit slower temporal evolution (by a factor of 1 + z) than those of nearby SNe Ia. The luminosity distances of our 16 high-redshift SNe Ia are, on average, 10—15% farther than expected in a low mass-density (Ω_M = 0.2) universe without a cosmological constant. Our analysis strongly supports eternally expanding models with a positive cosmological constant and a current acceleration of the expansion. We address many potential sources of systematic error; at present, none of them reconciles the data with Ω_Λ = 0 and q₀ ≥ 0. The dynamical age of the Universe is estimated to be 14.2 ± 1.7 Gyr, consistent with the ages of globular star clusters. 1998 Published by Elsevier Science B.V. All rights reserved. PACS: 98.80.−k; 97.60.Bw
1. Introduction Supernovae (SNe) come in two main varieties (see Ref. [11] for a review). Those whose optical spectra exhibit hydrogen are classified as Type II, while hydrogen-deficient SNe are designated Type I. SNe I are further subdivided according to the appearance of the early-time spectrum: SNe Ia are characterized by strong absorption near 6150 Å (now attributed to Si II), SNe Ib lack this feature but instead show prominent He I lines, and SNe Ic have neither the Si II nor the He I lines. SNe Ia are believed to result from the thermonuclear disruption of carbon—oxygen white dwarfs,
* Corresponding author. E-mail: [email protected].
A.V. Filippenko, A.G. Riess / Physics Reports 307 (1998) 31—44
while SNe II come from core collapse in massive supergiant stars. The latter mechanism probably produces most SNe Ib/Ic as well, but the progenitor stars previously lost their outer layers of hydrogen or even helium. It has long been recognized that SNe Ia may be very useful distance indicators for a number of reasons (see Refs. [2,6], and references therein). (1) They are exceedingly luminous, with peak absolute blue magnitudes averaging −19.2 if the Hubble constant, H₀, is 65 km s⁻¹ Mpc⁻¹. (2) "Normal" SNe Ia have small dispersion among their peak absolute magnitudes (σ ≲ 0.3 mag). (3) Our understanding of the progenitors and explosion mechanism of SNe Ia is on a reasonably firm physical basis. (4) Little cosmic evolution is expected in the peak luminosities of SNe Ia, and it can be modeled. This makes SNe Ia superior to galaxies as distance indicators. (5) One can perform local tests of various possible complications and evolutionary effects by comparing nearby SNe Ia in different environments. Research on SNe Ia in the 1990s has demonstrated their enormous potential as cosmological distance indicators. Although there are subtle effects that must indeed be taken into account, it appears that SNe Ia provide among the most accurate values of H₀, q₀ (the deceleration parameter), Ω_M (the matter density), and Ω_Λ (the cosmological constant, Λc²/3H₀²). There are now two major teams involved in the systematic investigation of high-redshift SNe Ia for cosmological purposes. The "Supernova Cosmology Project" (SCP) is led by Saul Perlmutter of the Lawrence Berkeley Laboratory, while the "High-z Supernova Search Team" (HZT) is led by Brian Schmidt of the Mount Stromlo and Siding Spring Observatories. One of us (A.V.F.) has worked with both teams, although his primary allegiance is now with the HZT. In this lecture we present results from the HZT.
2. Homogeneity and heterogeneity The traditional way in which SNe Ia have been used for cosmological distance determinations has been to assume that they are perfect "standard candles" and to compare their observed peak brightnesses with those of SNe Ia in galaxies whose distances have been independently determined (e.g., with Cepheids). The rationale is that SNe Ia exhibit relatively little scatter in their peak blue luminosity (σ ≈ 0.4—0.5 mag; [4]), and even less if "peculiar" or highly reddened objects are eliminated from consideration by using a color cut. Moreover, the optical spectra of SNe Ia are usually quite homogeneous, if care is taken to compare objects at similar times relative to maximum brightness (see [46] and references therein). Branch et al. [3] estimate that over 80% of all SNe Ia discovered thus far are "normal". From a Hubble diagram constructed with unreddened, moderately distant SNe Ia (z ≲ 0.1), for which peculiar motions should be small and relative distances (as given by ratios of redshifts) are accurate, Vaughan et al. [60] find that ⟨M_B(max)⟩ = (−19.74 ± 0.06) + 5 log(H₀/50) mag. In a series of papers, Sandage et al. [52] and Saha et al. [50] combine similar relations with Hubble Space Telescope (HST) Cepheid distances to the host galaxies of six SNe Ia to derive H₀ = 57 ± 4 km s⁻¹ Mpc⁻¹.
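The standard-candle route to H₀ can be sketched with a single idealized supernova. The supernova below (z = 0.05, peak B magnitude 17.6) and the calibration M_B = −19.5 are invented illustrative numbers, not data from the surveys discussed here; the relation μ = m − M = 5 log₁₀(d/Mpc) + 25 and H₀ = cz/d (valid only at low redshift, where the Hubble law is linear) are standard.

```python
import math

C_KM_S = 299792.458  # speed of light (km/s)

def hubble_from_candle(m_peak, M_abs, z):
    """H0 from one idealized low-z standard candle:
    mu = m - M = 5 log10(d/Mpc) + 25, then H0 = c z / d (km/s/Mpc)."""
    d_mpc = 10 ** ((m_peak - M_abs - 25.0) / 5.0)
    return C_KM_S * z / d_mpc

# Hypothetical SN Ia at z = 0.05 with peak B magnitude 17.6, calibrated
# at an assumed M_B = -19.5:
h0 = hubble_from_candle(17.6, -19.5, 0.05)
```

Because μ enters as 10^(μ/5), a 0.2 mag shift in the adopted ⟨M(max)⟩ changes H₀ by about 10% — which is precisely why the Cepheid zero point dominates the disagreement over H₀.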
Over the past decade it has become clear, however, that SNe Ia do not constitute a perfectly homogeneous subclass (e.g., [10]). In retrospect this should have been obvious: the Hubble diagram for SNe Ia exhibits scatter larger than the photometric errors, the dispersion actually rises when reddening corrections are applied (under the assumption that all SNe Ia have uniform, very blue intrinsic colors at maximum; [51,59]), and there are some significant outliers whose anomalous magnitudes cannot possibly be explained by extinction alone. Spectroscopic and photometric peculiarities have been noted with increasing frequency in well-observed SNe Ia. A striking case is SN 1991T; its pre-maximum spectrum did not exhibit Si II or Ca II absorption lines, yet two months past maximum the spectrum was nearly indistinguishable from that of a classical SN Ia [13,42]. The light curves of SN 1991T were slightly broader than the SN Ia template curves, and the object was probably somewhat more luminous than average at maximum. The reigning champion of well-observed, peculiar SNe Ia is SN 1991bg [12,34,58]. At maximum brightness it was subluminous by 1.6 mag in V and 2.5 mag in B, its colors were intrinsically red, and its spectrum was peculiar (with a deep absorption trough due to Ti II). Moreover, the decline from maximum brightness was very steep, the I-band light curve did not exhibit a secondary maximum like normal SNe Ia, and the velocity of the ejecta was unusually low. The photometric heterogeneity among SNe Ia is well demonstrated by Suntzeff [54] with five objects having excellent BVRI light curves.
3. Cosmological uses 3.1. Luminosity corrections and nearby supernovae Although SNe Ia can no longer be considered perfect "standard candles", they are still exceptionally useful for cosmological distance determinations. Excluding those of low luminosity (which are hard to find, especially at large distances), most SNe Ia are nearly standard [3]. Also, after many tenuous suggestions, convincing evidence has finally been found for a correlation between light-curve shape and luminosity. Phillips [41] achieved this by quantifying the photometric differences among a set of nine well-observed SNe Ia using a parameter, Δm₁₅(B), which measures the total drop (in B magnitudes) from maximum to t = 15 days after B maximum. In all cases the host galaxies of his SNe Ia have accurate relative distances from surface brightness fluctuations or from the Tully—Fisher relation. In B, the SNe Ia exhibit a total spread of ∼2 mag in maximum luminosity, and the intrinsically bright SNe Ia clearly decline more slowly than dim ones. The range in absolute magnitude is smaller in V and I, making the correlation with Δm₁₅(B) less steep than in B, but it is present nonetheless. Using SNe Ia discovered during the Calán/Tololo survey (z ≲ 0.1), Hamuy et al. [21,23] confirm and refine the Phillips [41] correlation between Δm₁₅(B) and M_max(B, V): it is not as steep as had been claimed. Apparently the slope is steep only at low luminosities; thus, objects such as SN 1991bg skew the slope of the best-fitting single straight line. Hamuy et al. reduce the scatter in the Hubble diagram of normal, unreddened SNe Ia to only 0.17 mag in B and 0.14 mag in V; see also Tripp [56]. In a similar effort, Riess et al. [43] show that the luminosity of SNe Ia correlates with the detailed shape of the light curve, not just with its initial decline. They form a "training set" of light-curve shapes from 9 well-observed SNe Ia having known relative distances, including very peculiar
A.V. Filippenko, A.G. Riess / Physics Reports 307 (1998) 31—44
objects (e.g., SN 1991bg). When the light curves of an independent sample of 13 SNe Ia (the Calán/Tololo survey) are analyzed with this set of basis vectors, the dispersion in the V-band Hubble diagram drops from 0.50 to 0.21 mag, and the Hubble constant rises from 53 ± 11 to 67 ± 7 km s⁻¹ Mpc⁻¹, comparable to the conclusions of Hamuy et al. [21,23]. About half of the rise in H₀ results from a change in the position of the "ridge line" defining the linear Hubble relation, and half is from a correction to the luminosity of some of the local calibrators which appear to be unusually luminous (e.g., SN 1972E). By using light-curve shapes measured through several different filters, Riess et al. [45] extend their analysis and objectively eliminate the effects of interstellar extinction: a SN Ia that has an unusually red B−V color at maximum brightness is assumed to be intrinsically subluminous if its light curves rise and decline quickly, or of normal luminosity but significantly reddened if its light curves rise and decline slowly. With a set of 20 SNe Ia consisting of the Calán/Tololo sample and their own objects, Riess et al. [45] show that the dispersion decreases from 0.52 to 0.12 mag after application of this "multi-color light curve shape" (MLCS) method. Preliminary results with a very recent, expanded set of nearly 50 SNe Ia indicate that the dispersion decreases from 0.44 to 0.15 mag [49]. The resulting Hubble constant is 65 ± 2 (statistical) km s⁻¹ Mpc⁻¹, with an additional systematic and zero-point uncertainty of ±5 km s⁻¹ Mpc⁻¹. Riess et al. [45] also show that the Hubble flow is remarkably linear; indeed, SNe Ia now constitute the best evidence for linearity. Finally, they argue that the dust affecting SNe Ia is not of circumstellar origin, and show quantitatively that the extinction curve in external galaxies typically does not differ from that in the Milky Way (cf. [6,57]). Riess et al.
[44] capitalize on another use of SNe Ia: determination of the Milky Way Galaxy's peculiar motion relative to the Hubble flow. They select galaxies whose distances were accurately determined from SNe Ia, and compare their observed recession velocities with those expected from the Hubble law alone. The speed and direction of the Galaxy's motion are consistent with what is found from COBE (Cosmic Background Explorer) studies of the microwave background, but not with the results of Lauer and Postman [33].

The advantage of systematically correcting the luminosities of SNe Ia at high redshifts rather than trying to isolate "normal" ones seems clear in view of recent evidence that the luminosity of SNe Ia may be a function of stellar population. If the most luminous SNe Ia occur in young stellar populations [5,21,22], then we might expect the mean peak luminosity of high-redshift SNe Ia to differ from that of a local sample. Alternatively, the use of Cepheids (Population I objects) to calibrate local SNe Ia can lead to a zero point that is too luminous. On the other hand, as long as the physics of SNe Ia is essentially the same in young stellar populations locally and at high redshift, we should be able to adopt the luminosity correction methods (photometric and spectroscopic) found from detailed studies of nearby samples of SNe Ia.
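A decline-rate luminosity correction of the kind discussed in this section can be sketched in a few lines. This is an illustrative toy, not the published fit: the slope and fiducial decline rate below are placeholder values, whereas the real analyses fit such coefficients (and more) to well-observed nearby SNe Ia.

```python
# Toy sketch of a decline-rate ("Phillips-type") luminosity correction.
# The slope and fiducial decline rate are illustrative placeholders,
# not the published coefficients.

def corrected_peak_mag(m_B_obs, dm15, slope=0.8, dm15_ref=1.1):
    """Shift an observed B-band peak magnitude to a fiducial decline
    rate, assuming a linear relation between peak luminosity and the
    decline dm15 (mag) during the 15 days after B maximum. Slow
    decliners (small dm15) are intrinsically bright, so they are made
    fainter to match the fiducial; fast decliners are brightened."""
    return m_B_obs - slope * (dm15 - dm15_ref)

# A slow decliner (dm15 = 0.9) is shifted fainter by slope * 0.2 mag:
print(corrected_peak_mag(18.0, 0.9))  # ≈ 18.16
```

Standardizing all objects to the same fiducial decline rate in this way is what shrinks the Hubble-diagram scatter from ~0.4–0.5 mag to the 0.12–0.17 mag level quoted above.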
4. High-redshift supernovae

4.1. The search

These same techniques can be applied to construct a Hubble diagram with high-redshift SNe, from which the value of q₀ can be determined. With enough objects spanning a range of redshifts,
we can determine Ω_M and Ω_Λ independently (e.g., [18]). Contours of peak apparent R-band magnitude for SNe Ia at two redshifts have different slopes in the Ω_M–Ω_Λ plane, and the regions of intersection provide the answers we seek. Based on the pioneering work of Nørgaard-Nielsen et al. [36], whose goal was to find SNe in moderate-redshift clusters of galaxies, Perlmutter et al. [39] and our team [53] devised a strategy that almost guarantees the discovery of many faint, distant SNe Ia on demand, during a predetermined set of nights. This "batch" approach to studying distant SNe allows follow-up spectroscopy and photometry to be scheduled in advance, resulting in a systematic study not possible with random discoveries. Most of the searched fields are equatorial, permitting follow-up from both hemispheres. Our approach is simple in principle; see Ref. [53] for details, and for a description of our first high-redshift SN Ia (SN 1995K). Pairs of first-epoch images are obtained with the CTIO or CFHT 4-m telescopes and wide-angle imaging cameras during the nights just after new moon, followed by second-epoch images 3–4 weeks later. (Pairs of images permit removal of cosmic rays, asteroids, and distant Kuiper-belt objects.) These are compared immediately using well-tested software, and new SN candidates are identified in the second-epoch images. Spectra are obtained as soon as possible after discovery to verify that the objects are SNe Ia and determine their redshifts. Each team has already found over 60 SNe in concentrated batches, as reported in numerous IAU Circulars (e.g., [38]: 11 SNe with 0.16 ≲ z ≲ 0.65; [54,55]: 17 SNe with 0.09 ≲ z ≲ 0.84). Intensive photometry of the SNe Ia commences within a few days after procurement of the second-epoch images; it is continued throughout the ensuing and subsequent dark runs. In a few cases HST images are obtained. As expected, most of the discoveries are on the rise or near maximum brightness.
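The image-comparison step of the batch search can be illustrated with a toy example. Everything here is synthetic (a single-pixel "transient" and no point-spread-function matching, which any real pipeline must perform before subtracting):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic first- and second-epoch images: sky background plus noise,
# with one bright new source added at pixel (30, 40) in the second epoch.
epoch1 = rng.normal(100.0, 5.0, size=(64, 64))
epoch2 = epoch1 + rng.normal(0.0, 5.0, size=(64, 64))
epoch2[30, 40] += 200.0

# Subtract the epochs and flag residuals well above the noise as SN
# candidates; the noise level is estimated robustly from the median
# absolute deviation so the transient itself does not inflate it.
diff = epoch2 - epoch1
noise = np.median(np.abs(diff)) / 0.6745
candidates = np.argwhere(diff > 6.0 * noise)
print(candidates.tolist())  # [[30, 40]]
```

Real searches add flux scaling, PSF matching, and rejection of moving objects (hence the pairs of images at each epoch), but the core operation is exactly this subtraction.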
When possible, the SNe are observed in filters which closely match the redshifted B and V bands; this way, the K-corrections become only a second-order effect [31]. Custom-designed filters for redshifts centered on 0.35 and 0.45 are used by our team [53] when appropriate. We try to obtain excellent multi-color light curves, so that reddening and luminosity corrections can be applied [22,23,45]. Although SNe in the magnitude range 22–22.5 can sometimes be spectroscopically confirmed with 4-m class telescopes, the signal-to-noise ratios are low, even after several hours of integration. Certainly Keck is required for the fainter objects (mag 22.5–24.5). With Keck, not only can we rapidly confirm a large number of candidate SNe, but we can search for peculiarities in the spectra that might indicate evolution of SNe Ia with redshift. Moreover, high-quality spectra allow us to measure the age of a supernova: we have developed a method for automatically comparing the spectrum of a SN Ia with a library of spectra corresponding to many different epochs in the development of SNe Ia [46]. Our technique also has great practical utility at the telescope: we can determine the age of a SN "on the fly", within half an hour after obtaining its spectrum. This allows us to rapidly decide which SNe are the best ones for subsequent photometric follow-up, and we immediately alert our collaborators on other telescopes.

4.2. Results

First, we note that the light curves of high-redshift SNe Ia are broader than those of nearby SNe Ia; the initial indications of Leibundgut et al. [35] and Goldhaber et al. [17] are amply confirmed with our larger samples. Quantitatively, the amount by which the light curves are "stretched" is
Fig. 1. The upper panel shows the Hubble diagram for the low-redshift and high-redshift SN Ia samples with distances measured from the MLCS method. Overplotted are three world models: "low" and "high" Ω_M with Ω_Λ = 0, and the best fit for a flat universe (Ω_M = 0.24, Ω_Λ = 0.76). The bottom panel shows the difference between data and models with Ω_M = 0.20, Ω_Λ = 0. The open symbol is SN 1997ck (z = 0.97), which lacks spectroscopic classification and an extinction measurement. The average difference between the data and the Ω_M = 0.20, Ω_Λ = 0 prediction is 0.25 mag.
consistent with a factor of 1 + z, as expected if redshifts are produced by the expansion of space rather than by "tired light". We were also able to demonstrate this spectroscopically at the 2σ confidence level for a single object: the spectrum of SN 1996bj (z = 0.57) evolved more slowly than those of nearby SNe Ia, by a factor consistent with 1 + z [46]. More recently, we have used observations of SN 1997ex (z = 0.36) at three epochs to conclusively verify the effects of time dilation: temporal changes in the spectra are slower than those of nearby SNe Ia by roughly the expected factor of 1.36 [14]. Following our Spring 1997 campaign, in which we found a SN with z = 0.97 (the highest known for a SN, but we do not have spectroscopic confirmation that it is a SN Ia), and for which we obtained HST follow-up of three SNe, we published our first substantial results concerning the density of the Universe [15]: Ω_M = 0.35 ± 0.3 under the assumption that Ω_M + Ω_Λ = 1, or Ω_M = −0.1 ± 0.5 under the assumption that Ω_Λ = 0. Our independent analysis of 10 SNe Ia using the "snapshot" distance method (with which conclusions are drawn from sparsely observed SNe Ia) gives quantitatively similar conclusions [47]. Our more recent results [48], obtained from a total of 16 high-z SNe Ia, were announced for the first time at this meeting. The Hubble diagram (from a refined version of the MLCS method [48]) for the 10 best-observed high-z SNe Ia is given in Fig. 1, while Fig. 2 illustrates the derived confidence contours in the Ω_M–Ω_Λ plane. We confirm our previous suggestion that Ω_M is
Fig. 2. Joint confidence intervals for (Ω_M, Ω_Λ) from SNe Ia. The solid contours are results from the MLCS method applied to well-observed SN Ia light curves, together with the snapshot method [47] applied to incomplete SN Ia light curves. The dotted contours are for the same objects excluding SN 1997ck (z = 0.97). Regions representing specific cosmological scenarios are illustrated.
low. Even more exciting, however, is our conclusion that Ω_Λ is nonzero at the 3σ statistical confidence level. With the MLCS method applied to the full set of 16 SNe Ia, our formal results are Ω_M = 0.24 ± 0.10 if Ω_M + Ω_Λ = 1, or Ω_M = −0.35 ± 0.18 (unphysical) if Ω_Λ = 0. If we demand that Ω_M = 0.2, then the best value for Ω_Λ is 0.66 ± 0.21. [Our data do not provide independent, high-precision constraints on Ω_M and Ω_Λ without ancillary assumptions or the inclusion of a supernova (SN 1997ck) whose classification and extinction are uncertain.] These conclusions do not change significantly if only the 10 best-observed SNe Ia (Fig. 1) are used. The Δm₁₅(B) method yields similar results; if anything, the case for a positive cosmological constant strengthens. (For brevity, in this paper we won't quote the Δm₁₅(B) numbers; see [48].) Note that the current upper limit on Ω_Λ from studies of gravitational lenses (≤0.66 at 95% confidence; [32]) is not formally inconsistent with our results. Though not drawn in Fig. 2, the expected confidence contours from measurements of the angular scale of the first Doppler peak of the cosmic microwave background radiation (CMBR) are nearly perpendicular to those provided by SNe Ia [62]; thus, the two techniques provide complementary information. The space-based CMBR experiments in the next decade (e.g., MAP, Planck) will give very narrow ellipses, but a stunning result is already provided by existing measurements [24]: analysis done by our team after this meeting demonstrates that Ω_M + Ω_Λ = 0.94 ± 0.26, when the SN and CMBR constraints are combined [16]. The confidence contours are nearly circular, instead of highly eccentric ellipses as in Fig. 2. We eagerly look forward to future CMBR measurements of even greater precision.
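The size of the effect being measured can be checked with a short numerical integration of the luminosity distance. The sketch below uses the standard Friedmann-model formulas (in units of the Hubble length c/H₀) to compare an open Ω_M = 0.2, Ω_Λ = 0 model with a flat Ω_M = 0.24, Ω_Λ = 0.76 model at z = 0.5; the predicted offset is a couple of tenths of a magnitude, comparable to the excess faintness seen in the data.

```python
import numpy as np

def lum_dist(z, om, ol, n=4001):
    """Luminosity distance in units of the Hubble length c/H0 for a
    Friedmann model with matter density om and vacuum density ol."""
    ok = 1.0 - om - ol                      # curvature density
    zs = np.linspace(0.0, z, n)
    e = np.sqrt(om * (1 + zs) ** 3 + ok * (1 + zs) ** 2 + ol)
    f = 1.0 / e
    dc = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs))  # comoving distance
    if ok > 1e-8:                           # open geometry: sinh mapping
        dc = np.sinh(np.sqrt(ok) * dc) / np.sqrt(ok)
    return (1 + z) * dc

# Magnitude offset at z = 0.5 between the flat lambda model and an open,
# low-density model with no cosmological constant:
dmu = 5 * np.log10(lum_dist(0.5, 0.24, 0.76) / lum_dist(0.5, 0.2, 0.0))
print(round(dmu, 2))  # ~0.23 mag
```

Because the offset grows with redshift at a model-dependent rate, magnitudes at two or more redshifts pick out intersecting bands in the Ω_M–Ω_Λ plane, which is exactly the geometry of the contours in Fig. 2.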
The dynamical age of the Universe can be calculated from the cosmological parameters. In an empty Universe with no cosmological constant, the dynamical age is simply the "Hubble time" (i.e., the inverse of the Hubble constant); there is no deceleration. SNe Ia yield H₀ = 65 ± 2 km s⁻¹ Mpc⁻¹ (statistical uncertainty only), and a Hubble time of 15.1 ± 0.5 Gyr. For a more complex cosmology, integrating the velocity of the expansion from the current epoch (z = 0) to the beginning (z = ∞) yields an expression for the dynamical age. As shown in detail by Riess et al. [48], we obtain a value of 14.2 Gyr using the likely range for (Ω_M, Ω_Λ) that we measure. (The precision is so high because our experiment is sensitive to the difference between Ω_M and Ω_Λ, and the dynamical age also varies approximately this way.) Including the systematic uncertainty of the Cepheid distance scale, which may be up to 10%, a reasonable estimate of the dynamical age is 14.2 ± 1.7 Gyr. This result is consistent with ages determined from various other techniques such as the cooling of white dwarfs (Galactic disk > 9.5 Gyr; [37]), radioactive dating of stars via the thorium and europium abundances (15.2 ± 3.7 Gyr; [9]), and studies of globular clusters (10–15 Gyr, depending on whether Hipparcos parallaxes of Cepheids are adopted; [8,19]). Evidently, there is no longer a problem that the age of the oldest stars is greater than the dynamical age of the Universe.
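The dynamical-age integral just described is easy to evaluate numerically. A minimal sketch (the constant 977.8 Gyr km s⁻¹ Mpc⁻¹ simply converts 1/H₀ into Gyr):

```python
import numpy as np

H0 = 65.0                         # km/s/Mpc, the value quoted above
HUBBLE_TIME = 977.8 / H0          # 1/H0 expressed in Gyr

def dynamical_age(om, ol, zmax=1000.0, n=400001):
    """t0 = (1/H0) * integral_0^inf dz / [(1+z) E(z)], with
    E(z)^2 = om (1+z)^3 + ok (1+z)^2 + ol; the integral is truncated
    at zmax, which contributes negligibly for these models."""
    ok = 1.0 - om - ol
    z = np.linspace(0.0, zmax, n)
    e = np.sqrt(om * (1 + z) ** 3 + ok * (1 + z) ** 2 + ol)
    f = 1.0 / ((1 + z) * e)
    return HUBBLE_TIME * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))

print(round(dynamical_age(0.0, 0.0), 1))    # empty universe: ~15.0 Gyr
print(round(dynamical_age(0.24, 0.76), 1))  # flat model with a constant
```

The empty-universe case reproduces the ~15 Gyr Hubble time quoted above, as it must. Note that the 14.2 Gyr figure in the text follows from the full likelihood region of (Ω_M, Ω_Λ) explored in [48], so a single parameter pair plugged into this integral need not reproduce it exactly.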
5. Discussion

High-redshift SNe Ia are observed to be dimmer than expected in an empty Universe (i.e., Ω_M = 0) with no cosmological constant. A cosmological explanation for this observation is that a positive vacuum energy density accelerates the expansion. Mass density in the Universe exacerbates this problem, requiring even more vacuum energy. For a Universe with Ω_M = 0.2, the average MLCS distance moduli of the well-observed SNe are 0.25 mag larger (i.e., 12.5% greater distances) than the prediction from Ω_Λ = 0. The average MLCS distance moduli are still 0.18 mag bigger than required for a 68.3% (1σ) consistency for a Universe with Ω_M = 0.2 and without a cosmological constant. The derived value of q₀ is −0.75 ± 0.32, implying that the expansion of the Universe is accelerating. It appears that the Universe will expand eternally.

5.1. Comparisons

The results given here are consistent with other reported observations of high-redshift SNe Ia from our HZT [15,53], and the improved statistics of this larger sample reveal the potential influence of a positive cosmological constant. Our results are inconsistent at the ~2σ confidence level with those of Perlmutter et al. [39], who found Ω_M = 0.94 ± 0.3 (Ω_Λ = 0.06) for a flat Universe and Ω_M = 0.88 ± 0.64 for Ω_Λ ≡ 0. They are marginally consistent with those of Perlmutter et al. [40] who, with the addition of a single very high-redshift SN Ia (z = 0.83), found Ω_M = 0.6 ± 0.2 (Ω_Λ = 0.4) for a flat Universe and Ω_M = 0.2 ± 0.4 for Ω_Λ ≡ 0. They are consistent with the results presented at this meeting by Perlmutter and Goldhaber: they stated that analysis of 40 SNe Ia yields Ω_M = 0.25 ± 0.06 (Ω_Λ = 0.75) for a flat Universe and Ω_M = −0.4 ± 0.1 for Ω_Λ ≡ 0. It will be of interest to learn the reason for the monotonic shift in the results of Perlmutter's SCP.
Although the HZT experiment reported here is very similar to that performed by the SCP [39,40], there are some differences worth noting. The HZT explicitly corrects for the effects of extinction evidenced by reddening of the SN Ia colors [15,48,53]. Not correcting for extinction in the nearby and distant samples could affect the cosmological results in either direction, since we do not know the sign of the difference of the mean extinction. We also include H₀ as a free parameter in each of our fits to the other cosmological parameters. Treating the nearby sample in the same way as the distant sample is a crucial requirement of this work. Our experience observing the nearby sample aids our ability to accomplish this goal.

5.2. Systematic effects

A very important point is that the dispersion in the peak luminosities of SNe Ia (σ = 0.15 mag) is low after application of the MLCS method of Riess et al. [45,48]. With 16 SNe Ia, our effective uncertainty is 0.15/√16 ≈ 0.04 mag, less than the expected difference of 0.25 mag between universes with Ω_Λ = 0 and 0.76 (and low Ω_M); see Fig. 1. Systematic uncertainties of even 0.05 mag (e.g., in the extinction) are significant, and at 0.1 mag they dominate any decrease in statistical uncertainty gained with a larger sample of SNe Ia. Thus, our conclusions with only 16 SNe Ia are already limited by systematic uncertainties, not by statistical uncertainties. Here we explore possible systematic effects that might invalidate our results. Of those that can be quantified at the present time, none appears to reconcile the data with Ω_Λ = 0. Further details can be found in Schmidt et al. [53] and especially Riess et al. [48].

5.2.1. Evolution

The local sample of SNe Ia displays a weak correlation between light-curve shape (or luminosity) and host galaxy type, in the sense that the most luminous SNe Ia with the broadest light curves only occur in late-type galaxies.
Both early-type and late-type galaxies provide hosts for dimmer SNe Ia with narrower light curves [22]. The mean luminosity difference for SNe Ia in late-type and early-type galaxies is ~0.3 mag. In addition, the SN Ia rate per unit luminosity is almost twice as high in late-type galaxies as in early-type galaxies at the present epoch [7]. These results may indicate an evolution of SNe Ia with progenitor age. Possibly relevant physical parameters are the mass, metallicity, and C/O ratio of the progenitor [26]. We expect that the relation between light-curve shape and luminosity that applies to the range of stellar populations and progenitor ages encountered in the late-type and early-type hosts in our nearby sample should also be applicable to the range we encounter in our distant sample. In fact, the range of age for SN Ia progenitors in the nearby sample is likely to be larger than the change in mean progenitor age over the 4–6 Gyr lookback time to the high-z sample. Thus, to first order at least, our local sample should correct our distances for progenitor or age effects. We can place empirical constraints on the effect that a change in the progenitor age would have on our SN Ia distances by comparing subsamples of low-redshift SNe Ia believed to arise from old and young progenitors. In the nearby sample, the mean difference between the distances for the early-type (8 SNe Ia) and late-type hosts (19 SNe Ia), at a given redshift, is 0.04 ± 0.07 mag from the MLCS method. This difference is consistent with zero. Even if the SN Ia progenitors evolved from one population at low redshift to the other at high redshift, we still would not explain the surplus in mean distance of 0.25 mag over the Ω_Λ = 0 prediction.
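The host-type comparison above is a difference of subsample means with its standard error. A toy numerical version, with invented Hubble-diagram residuals standing in for the real measurements (the subsample sizes match the text):

```python
import numpy as np

# Invented distance-modulus residuals (mag) for SNe Ia in early-type
# and late-type hosts; only the sample sizes (8 and 19) follow the text.
rng = np.random.default_rng(1)
early = rng.normal(0.02, 0.15, size=8)
late = rng.normal(-0.02, 0.15, size=19)

# Mean offset between the subsamples and its standard error.
diff = early.mean() - late.mean()
err = np.sqrt(early.var(ddof=1) / early.size + late.var(ddof=1) / late.size)
print(f"{diff:+.2f} +/- {err:.2f} mag")  # consistent with zero
```

An offset consistent with zero at the ~0.07 mag level, as found in the real data, is far too small to absorb the 0.25 mag excess faintness of the high-redshift sample.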
Moreover, it is reassuring that initial comparisons show high-redshift SN Ia spectra to be remarkably similar to those observed at low redshift. For example, the spectral characteristics of SN 1998ai (z = 0.49) appear to be essentially indistinguishable from those of low-redshift SNe Ia [48]. We expect that our local calibration will work well at eliminating any pernicious drift in the supernova distances between the local and distant samples. However, we need to be vigilant for changes in the properties of SNe Ia at significant lookback times. Our distance measurements could be especially sensitive to changes in the colors of SNe Ia for a given light-curve shape.

5.2.2. Extinction

Our SN Ia distances have the important advantage of including corrections for interstellar extinction occurring in the host galaxy and the Milky Way. Extinction corrections based on the relation between SN Ia colors and luminosity improve distance precision for a sample of nearby SNe Ia that includes objects with substantial extinction [45]; the scatter in the Hubble diagram is much reduced. Moreover, the consistency of the measured Hubble flow from SNe Ia with late-type and early-type hosts (see above) shows that the extinction corrections applied to dusty SNe Ia at low redshift do not alter the expansion rate from its value measured from SNe Ia in low-dust environments. In practice, our high-redshift SNe Ia appear to suffer negligible extinction; their B−V colors at maximum brightness are normal, suggesting little color excess due to reddening. Riess et al. [44] found indications that the Galactic ratios between selective absorption and color excess are similar for host galaxies in the nearby (z ≤ 0.1) Hubble flow. Yet, what if these ratios changed with lookback time? Could an evolution in dust grain size descending from ancestral interstellar "pebbles" at higher redshifts cause us to underestimate the extinction?
Large dust grains would not imprint the reddening signature of typical interstellar extinction upon which our corrections rely. However, viewing our SNe through such grey interstellar grains would also induce a dispersion in the derived distances. Using the results of Hatano et al. [25], Riess et al. [48] estimate that the expected dispersion would be 0.40 mag if the mean grey extinction were 0.25 mag (the value required to explain the measured MLCS distances without a cosmological constant). This is significantly larger than the 0.21 mag dispersion observed in the high-redshift MLCS distances. Furthermore, most of the observed scatter is already consistent with the estimated statistical errors, leaving little to be caused by grey extinction. Nevertheless, if we assumed that all of the observed scatter were due to grey extinction, the mean shift in the SN Ia distances would only be 0.05 mag. With the observations presented here, we cannot rule out this modest amount of grey interstellar extinction. Grey intergalactic extinction could dim the SNe without either telltale reddening or dispersion, if all lines of sight to a given redshift had a similar column density of absorbing material. The component of the intergalactic medium with such uniform coverage corresponds to the gas clouds producing Lyman-α forest absorption at low redshifts. These clouds have low individual H I column densities [1]. However, they display low metallicities, typically less than 10% of solar. Grey extinction would require larger dust grains, which would need a larger mass in heavy elements than typical interstellar grain-size distributions to achieve a given extinction. Furthermore, these clouds reside in hard radiation environments hostile to the survival of dust
grains. Finally, the existence of grey intergalactic extinction would only augment the already surprising excess of galaxies in high-redshift galaxy surveys [29]. We conclude that grey extinction does not seem to provide an observationally or physically plausible explanation for the observed faintness of high-redshift SNe Ia.

5.2.3. Selection bias

Sample selection has the potential to distort the comparison of nearby and distant SNe. Most of our nearby (z < 0.1) sample of SNe Ia was gathered from the Calán/Tololo survey [20], which employed the blinking of photographic plates obtained at different epochs with Schmidt telescopes, and from less well-defined searches [49]. Our distant (z ≳ 0.16) sample was obtained by subtracting digital CCD images at different epochs with the same instrument setup. Although selection effects could alter the ratio of intrinsically dim to bright SNe Ia in the nearby and distant samples, our use of the light-curve shape to determine the supernova's luminosity should correct most of this selection bias on our distance estimates. Nevertheless, the final dispersion is nonzero, and to investigate its consequences we used a Monte Carlo simulation; details are given by Riess et al. [48]. The results are extremely encouraging, with recovered values of Ω_M or Ω_Λ exceeding the simulated values by only 0.02–0.03 for these two parameters considered separately. There are two reasons we find such a small selection bias in the recovered cosmological parameters. First, the small dispersion of our distance indicator (σ ≈ 0.15 mag after light-curve shape correction) results in only a modest selection bias. Second, both nearby and distant samples include an excess of brighter-than-average SNe, so the difference in their individual selection biases remains small. Future work on quantifying the selection criteria of the nearby and distant samples is needed.
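The classical magnitude-limited (Malmquist) bias underlying this worry can be demonstrated with a toy Monte Carlo. All numbers here are invented for illustration, and unlike the published simulation in [48] this sketch does not model the light-curve-shape correction that removes most of the bias in practice:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy Malmquist bias: SNe scattered about a true mean peak absolute
# magnitude are kept only if brighter than a survey limit, so the
# surviving sample is biased bright (all numbers invented).
M_true, sigma = -19.3, 0.15     # mean peak absolute magnitude, dispersion
mu = 41.0                       # distance modulus of one distant shell
m_lim = 21.8                    # survey detection limit

M = rng.normal(M_true, sigma, size=200_000)
kept = M[M + mu < m_lim]        # only SNe brighter than the limit survive

bias = kept.mean() - M_true
print(round(bias, 2))           # ~ -0.06: the kept sample is too bright
```

If nearby and distant samples suffer comparable truncation, as the text argues, most of this common offset cancels in the cosmological fit; it is the *difference* in selection between the two samples that matters.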
Although the above simulation and others bode well for using SNe Ia to measure cosmological parameters, we must continue to be wary of subtle effects that might bias the comparison of SNe Ia near and far.

5.2.4. Effect of a local void

Zehavi et al. [63] find that the SNe Ia out to 7000 km s⁻¹ may (at the 2–3σ confidence level) exhibit an expansion rate which is 6% greater than that measured for the more distant objects; see Fig. 1. The implication is that the volume out to this distance is underdense relative to the global mean density. In principle, a local void would increase the expansion rate measured for our low-redshift sample relative to the true, global expansion rate. Mistaking this inflated rate for the global value would give the false impression of an increase in the low-redshift expansion rate relative to the high-redshift expansion rate. This outcome could be incorrectly attributed to the influence of a positive cosmological constant. In practice, only a small fraction of our nearby sample is within this local void, reducing its effect on the determination of the low-redshift expansion rate. As a test of the effect of a local void on our constraints for the cosmological parameters, we reanalyzed the data discarding the seven SNe Ia within 7000 km s⁻¹ (d = 108 Mpc for H₀ = 65 km s⁻¹ Mpc⁻¹). The result was a reduction in the confidence that Ω_Λ > 0 from 99.7% (3.0σ) to 98.3% (2.4σ) for the MLCS method.
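The worst case is simple arithmetic: inferred luminosity distances scale as 1/H₀, so calibrating with an expansion rate inflated by 6% would shift all distance moduli by a fixed amount.

```python
import math

# If the local expansion rate exceeded the global one by 6% and were
# mistaken for the global value, high-redshift SNe would look too faint
# by a constant distance-modulus offset:
h_ratio = 1.06                   # H0(local) / H0(global)
dmu = 5.0 * math.log10(h_ratio)  # mag, since d_L scales as 1/H0
print(round(dmu, 2))             # ~0.13 mag
```

Even this worst case is only about half of the observed 0.25 mag effect, and, as noted above, only a small fraction of the nearby sample actually lies within the putative void.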
5.2.5. Weak gravitational lensing

The magnification and demagnification of light by large-scale structure can alter the observed magnitudes of high-redshift SNe [30]. The effect of weak gravitational lensing on our analysis has been quantified by Wambsganss et al. [61] and summarized by Schmidt et al. [53]. SN Ia light will, on average, be demagnified by 0.5% at z = 0.5 and 1% at z = 1 in a Universe with a non-negligible cosmological constant. Although the sign of the effect is the same as the influence of a cosmological constant, the size of the effect is negligible. Holz and Wald [28] have calculated the weak lensing effects on supernova light from ordinary matter which is not smoothly distributed in galaxies but rather clumped into stars (i.e., dark matter contained in massive compact halo objects). With this scenario, microlensing becomes a more important effect, further decreasing the observed supernova luminosities at z = 0.5 by 0.02 mag for Ω_M = 0.2 [27]. Even if most ordinary matter were contained in compact objects, this effect would not be large enough to reconcile the SNe Ia distances with the influence of ordinary matter alone.

5.2.6. Sample contamination

Riess et al. [48] consider in detail the possibility of sample contamination by SNe that are not SNe Ia. Of the 16 objects, 12 are clearly SNe Ia and 2 others are almost certainly SNe Ia (though the region near Si II λ6150 was poorly observed in the latter two). One object (SN 1997ck at z = 0.97) does not have a good spectrum, and another (SN 1996E) might be a SN Ic. A reanalysis with only the 14 most probable SNe Ia does not significantly alter our conclusions regarding a positive cosmological constant. However, without SN 1997ck we cannot obtain independent values for Ω_M and Ω_Λ. Riess et al. [48] and Schmidt et al. [53] discuss several other uncertainties (e.g., differences between fitting techniques; K-corrections) and suggest that they, too, are not serious in our study.
6. Conclusions

When used with care, SNe Ia are excellent cosmological probes; the current precision of individual distance measurements is roughly 5–10%, but a number of subtleties must be taken into account to obtain reliable results. SNe Ia at z ≲ 0.1 have been used to demonstrate that (a) the Hubble flow is definitely linear, (b) the value of the Hubble constant is 65 ± 2 (statistical) ± 5 (systematic) km s⁻¹ Mpc⁻¹, (c) the bulk motion of the Local Group is consistent with that derived from COBE measurements, and (d) the dust in other galaxies is similar to Galactic dust. More recently, we have used a sample of 16 high-redshift SNe Ia (0.16 ≤ z ≤ 0.97) to deduce the following. (1) The luminosity distances exceed the prediction of a low mass-density (Ω_M ≈ 0.2) Universe by about 0.25 mag. A cosmological explanation is provided by a positive cosmological constant at the 99.7% (3.0σ) confidence level, with the prior belief that Ω_M ≥ 0. We also find that the expansion of the Universe is currently accelerating (q₀ ≤ 0, where q₀ ≡ Ω_M/2 − Ω_Λ), and that the Universe will expand forever. (2) The dynamical age of the Universe is 14.2 ± 1.7 Gyr, including systematic uncertainties in the Cepheid distance scale used for the host galaxies of three nearby SNe Ia.
(3) These conclusions do not depend on the inclusion of SN 1997ck (z = 0.97; uncertain classification and extinction), nor on which of two light-curve fitting methods is used to determine the SN Ia distances. (4) The systematic uncertainties presented by grey extinction, sample selection bias, evolution, a local void, weak gravitational lensing, and sample contamination currently do not provide a convincing substitute for a positive cosmological constant. Further studies of these and other possible systematic uncertainties are needed to increase the confidence in our results. We emphasize that the most recent results of the SCP are consistent with those of our HZT. This is reassuring, but other, completely independent methods are certainly needed to verify these conclusions. The upcoming space CMBR experiments hold the most promise in this regard. Although new questions will undoubtedly arise (e.g., What is the physical source of the cosmological constant, if nonzero? Are evolving cosmic scalar fields a better alternative?), we speculate that this may be the beginning of the "end game" in the quest for the values of Ω_M and Ω_Λ.

Acknowledgements

We thank all of our collaborators on the HZT for their contributions to this work. Our supernova research at U.C. Berkeley is supported by the Miller Institute for Basic Research in Science, by NSF grant AST-9417213, and by grant GO-7505 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. A.V.F. is grateful to the Committee on Research at U.C. Berkeley for travel funds to attend this meeting.
A.V. Filippenko, A.G. Riess / Physics Reports 307 (1998) 31—44
Physics Reports 307 (1998) 45—51
Measuring the Hubble constant Wendy L. Freedman* Carnegie Observatories, 813 Santa Barbara St., Pasadena, CA 91101, USA
Abstract

Measuring the Hubble constant (H₀), the expansion rate of the universe, has proved to be a much more difficult enterprise than originally anticipated. Recent, rapid progress in measuring accurate distances to galaxies, coupled with the prospect of measuring the Hubble constant using other physical techniques, offers the promise of determining the expansion rate to 10% accuracy, including systematic errors. The notorious factor-of-two uncertainty has already been eliminated. 1998 Elsevier Science B.V. All rights reserved.

PACS: 98.80.Es

Keywords: Distances; Galaxies; Hubble constant; Expansion rate
1. Preamble It is extremely fitting that these proceedings be dedicated to the memory of David Schramm. Dave Schramm was one of those rare individuals with an exuberance and enthusiasm for life and research that is hard to exceed. His participation at these conferences is going to be sorely missed. On a personal level, I am very grateful to him for the encouragement that he gave to me over the years, and the support that he displayed recently when a colleague of mine was particularly uncollegial. It is only since his death that I have begun to learn of the extent to which he helped and encouraged other young people in the field. His shoes will be very difficult to fill.
2. Introduction

The Hubble constant impacts a very broad range of areas in extragalactic astronomy and cosmology, and the importance of establishing an accurate value for this parameter has not
diminished despite the immense progress made recently in other areas of cosmology. For example, H₀ enters into estimates of the baryon density from nucleosynthesis at early epochs in the universe, since the reaction rates are proportional to the Hubble constant at the time when nucleosynthesis of the light elements occurs. The larger the value of H₀, the larger the required component of non-baryonic dark matter; an even larger non-baryonic component is required if the universe has a critical density. The Hubble constant specifies both the time and length scales at the epoch of equality of the energy densities of matter and radiation. Both the scale at the horizon and the matter density determine the peak in the perturbation spectrum of the early universe. Hence, an accurate knowledge of the Hubble constant can provide powerful constraints on theories of the large-scale structure of galaxies. New instrumentation, the development of new techniques for measuring distances, and recent measurements with the Hubble Space Telescope (HST) have all resulted in new distances to galaxies with precisions at the ±5–20% level. The current statistical uncertainty in some methods for measuring H₀ is now only a few percent; however, including systematic errors, the total uncertainty is approaching ±10%. Hence, the historical factor-of-two uncertainty in the value of H₀ is now behind us. Below, progress on H₀ based on three different routes is briefly summarized: gravitational lenses, the Sunyaev–Zel'dovich effect, and the extragalactic distance scale.
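The dependence of the inferred baryon density on H₀ can be made concrete: primordial nucleosynthesis constrains the combination Ω_B h², where h = H₀/100 km s⁻¹ Mpc⁻¹, so the implied baryon fraction falls as H₀ rises. A minimal sketch in Python (the value 0.019 for Ω_B h² is an illustrative assumption, not a number taken from this article):

```python
# Nucleosynthesis constrains the combination omega_b = Omega_B * h^2,
# where h = H0 / (100 km/s/Mpc).  For fixed omega_b, a larger H0 implies
# a smaller baryon fraction Omega_B, so more non-baryonic dark matter is
# needed to reach any given total density.

def baryon_fraction(H0, omega_b=0.019):
    """Omega_B implied by a nucleosynthesis constraint omega_b = Omega_B h^2."""
    h = H0 / 100.0
    return omega_b / h**2

for H0 in (55.0, 73.0):
    print(f"H0 = {H0}: Omega_B = {baryon_fraction(H0):.3f}")
```

For H₀ = 55 this gives Ω_B ≈ 0.063, while H₀ = 73 gives Ω_B ≈ 0.036, which is why a larger H₀ forces a larger non-baryonic share of any fixed total density.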
3. Gravitational lenses

Refsdal [1,2] noted that the arrival times for the light from two gravitationally lensed images of a background point source depend on the path lengths and the gravitational potential traversed in each case. Hence, a measurement of the time delay and the angular separation for different images of a variable quasar can be used to provide a measurement of H₀. This method offers tremendous potential because it can be applied at great distances and it is based on very solid physical principles. There are, of course, difficulties with this method, as there are with any other. Astronomical lenses are galaxies whose underlying (luminous or dark) mass distributions are not independently known, and furthermore they may be sitting in more complicated group or cluster potentials. A degeneracy exists between the mass distribution of the lens and the value of H₀ (e.g., [3–5]). Ideally, velocity dispersion measurements as a function of position are needed to constrain the mass distribution of the lens. Such measurements are very difficult (and generally have not been available). Perhaps worse yet, the distribution of the dark matter in these systems is unknown. Unfortunately, to date, there are very few systems known which have both a favorable geometry (for providing constraints on the lens mass distribution) and a variable background source (so that a time delay can be measured). Two systems have been well studied to date; these yield values of H₀ in the approximate range of 40–70 km s⁻¹ Mpc⁻¹ [3,5,6], with an uncertainty of ~20–30%. These values assume Ω₀ = 1, and rise by 10% for low Ω₀. As the number of favorable lens systems increases (as further lenses are discovered that have measurable time delays), the prospects for measuring H₀ and its uncertainty using this technique are excellent.
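The inverse relation between the measured time delay and H₀ can be sketched in a few lines. The toy below assumes a singular isothermal sphere lens and the low-redshift linear approximation D ≈ (c/H₀)z for every angular diameter distance; both assumptions, and all the input numbers, are mine for illustration only — real analyses must model the lens potential in detail:

```python
import math

ARCSEC = math.pi / (180 * 3600)   # radians per arcsecond
KM_PER_MPC = 3.086e19             # km in one megaparsec

def hubble_from_delay(z_lens, z_src, theta1_as, theta2_as, delay_days):
    """Toy H0 (km/s/Mpc) from a two-image lens time delay.

    For a singular isothermal sphere,
        dt = (1 + z_l)/(2c) * (D_l D_s / D_ls) * (theta1^2 - theta2^2);
    with the low-z approximation D ~ (c/H0) z, the speed of light
    cancels and H0 scales as 1/dt.
    """
    th1, th2 = theta1_as * ARCSEC, theta2_as * ARCSEC
    delay_s = delay_days * 86400.0
    h0_per_s = ((1 + z_lens) * z_lens * z_src * (th1**2 - th2**2)
                / (2 * (z_src - z_lens) * delay_s))
    return h0_per_s * KM_PER_MPC   # convert 1/s to km/s/Mpc

# Invented inputs, chosen only to land in a familiar range (~60):
H0 = hubble_from_delay(z_lens=0.5, z_src=2.0,
                       theta1_as=1.5, theta2_as=0.5, delay_days=140.0)
```

The point of the sketch is the scaling: a longer measured delay, at fixed image geometry, means a lower H₀, and any error in the assumed lens mass profile maps directly into H₀.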
4. Sunyaev–Zel'dovich effect and X-ray measurements

The inverse-Compton scattering of photons from the cosmic microwave background off hot electrons in the X-ray gas of rich clusters results in a measurable decrement in the microwave background spectrum known as the Sunyaev–Zel'dovich (SZ) effect [7]. Given a spatial distribution of the SZ effect and a high-resolution X-ray map, the density and temperature distributions of the hot gas can be obtained; the mean electron temperature can be obtained from an X-ray spectrum. The method makes use of the fact that the X-ray flux is distance dependent, whereas the Sunyaev–Zel'dovich decrement in the temperature is not. Once again, the advantages of this method are that it can be applied at large distances and that, in principle, it has a straightforward physical basis. Some of the main uncertainties result from potential clumpiness of the gas (which would result in reducing H₀), projection effects (if the clusters observed are prolate, H₀ could be larger), the assumption of hydrostatic equilibrium, details of the models for the gas and electron densities, and potential contamination from point sources. To date, values of H₀ published on the basis of this method range from ~40 to 80 km s⁻¹ Mpc⁻¹ (e.g., [8–10]). The uncertainties are still large, but as more clusters are observed, and as higher-resolution (2D) maps of the decrement and X-ray maps and spectra become available, the prospects for this method are improving enormously.
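The distance-dependence argument can be written down in a few lines. Along a path of length L through gas of electron density n_e, the SZ decrement scales as n_e·L (distance independent), while the X-ray surface brightness scales as n_e²·L; eliminating n_e yields an absolute length L ∝ (ΔT)²/S_X, and the cluster's angular size then gives D_A and hence H₀. The sketch below is purely schematic — the constant k stands in for all of the gas physics, and every input is invented:

```python
C_KM_S = 2.998e5   # speed of light, km/s

def cluster_distance(dT, Sx, theta_rad, k=1.0):
    """Toy SZ + X-ray angular-diameter distance.

    dT ~ n_e * L (SZ decrement) and Sx ~ n_e**2 * L (X-ray surface
    brightness), so L = k * dT**2 / Sx is an absolute length and
    D_A = L / theta.  Here k = 1 absorbs all temperature and
    emissivity factors; units are whatever k makes them.
    """
    L = k * dT**2 / Sx
    return L / theta_rad

def hubble_from_cluster(z, dT, Sx, theta_rad, k=1.0):
    """Low-redshift toy estimate H0 ~ c z / D_A (km/s/Mpc if D_A in Mpc)."""
    return C_KM_S * z / cluster_distance(dT, Sx, theta_rad, k)
```

The scalings carry the physics: doubling the observed decrement at fixed X-ray brightness quadruples the inferred distance, while extra X-ray emission at a fixed decrement (e.g., from clumpy gas) shrinks it — which is how the gas-modelling uncertainties listed above bias the derived H₀.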
5. The Cepheid-calibrated extragalactic distance scale

Establishing accurate extragalactic distances has provided an immense challenge to astronomers since the 1920s. The situation has improved dramatically as better (linear) detectors have become available, and as several new, promising techniques have been developed. For the first time in the history of this difficult field, relative distances to galaxies are being compared on a case-by-case basis, and their quantitative agreement is being established. Several detailed reviews of this progress have been written (see, for example, the conference proceedings for the Space Telescope Science Institute meeting on the Extragalactic Distance Scale, edited by Donahue and Livio [11]). The "Key Projects" for the Hubble Space Telescope were selected by peer review to enable science to be undertaken with the telescope that might require large amounts of telescope time. The HST H₀ Key Project has been designed to measure H₀ to ±10% [12–15]. Rather than concentrating on a single method (which might be affected by unknown systematic effects), the goal of the Key Project is to undertake a comparison and a calibration of several different methods, so that cross-checks on both the absolute zero point and relative distances, and therefore on H₀, can be obtained. The underlying basis of the Key Project is a class of well-understood stars known as Cepheid variables. These stars obey a tight correlation between their periods of oscillation and their luminosities (see the reviews [16,17]). Given an absolute calibration, distances to galaxies obtained using Cepheids can, in turn, be used to calibrate other methods for distance determination that can be used beyond the reach of the Cepheids; these methods are often referred to as secondary methods.
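The way a period-luminosity relation turns into a distance can be sketched in a few lines. The slope and zero point below are illustrative values of the kind tabulated in the reviews cited above [16,17]; a real measurement also involves reddening corrections and a fit to many Cepheids, not one:

```python
import math

def cepheid_abs_mag_V(period_days, slope=-2.76, zero_point=-1.40):
    """Approximate V-band period-luminosity relation:
    M_V = slope * log10(P/days) + zero_point (illustrative coefficients)."""
    return slope * math.log10(period_days) + zero_point

def distance_mpc(apparent_mag, absolute_mag):
    """Distance from the modulus mu = m - M = 5 log10(d / 10 pc)."""
    mu = apparent_mag - absolute_mag
    return 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6   # pc -> Mpc

# A hypothetical 30-day Cepheid with mean apparent magnitude V = 26.0:
M = cepheid_abs_mag_V(30.0)     # about -5.48
d = distance_mpc(26.0, M)       # about 19.7 Mpc
```

The same two steps — predict M from the period, then invert the distance modulus — underlie every Cepheid distance quoted in this section.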
The H₀ Key Project has been designed with three primary goals: (1) to discover Cepheids in galaxies located out to distances of about 20 Mpc (where 1 Mpc = 3.09×10²² m), and thereby measure accurate distances to spiral galaxies that are suitable for the calibration of several independent secondary methods, (2) to make direct Cepheid measurements of distances to three spiral galaxies in each of the Virgo and Fornax clusters (located at approximately 16–18 Mpc), and (3) to provide a check on potential systematic errors both in the Cepheid distance scale and in the secondary methods.
6. Measurement of Cepheid distances/calibration of secondary methods

Determination of H₀ to an accuracy of 10% requires that measurements be acquired at great enough distances, and in a variety of directions, so that the average contribution from motions induced by the gravitational interaction of galaxies (known as peculiar motions) is significantly less than 10% of the overall expansion velocity. The current limit for detection of Cepheids with HST is a distance of about 25 Mpc (or about 0.01% of the visible universe). At these distances peculiar motions can still contribute 10–20% of the observed velocity. Hence, the main thrust of the Key Project is the calibration of secondary distance indicators which can be applied out to distances significantly greater than can be measured with Cepheids alone. With the database of Cepheid distances being assembled as part of the H₀ Key Project, a number of secondary indicators can be directly calibrated and tested. Several of these methods can be applied to velocity distances of 10 000 km/s or greater; these include, for example, type Ia supernovae, type II supernovae, and the Tully–Fisher relation, described in more detail in the sections that follow. Type Ia supernovae can be observed at velocity distances beyond 30 000 km/s, or ~10% of the visible universe. In the limited space available here, only the calibration of the Tully–Fisher relation and of type Ia supernovae will be discussed. For details on other results from the Key Project, the reader is referred to a recent summary [18].
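The need for depth can be quantified directly: the observed velocity of a galaxy is v = H₀d + v_pec, so a single object yields H₀ with a fractional error of order v_pec/(H₀d). Taking 300 km/s as an assumed typical peculiar velocity (my number, for illustration):

```python
# The observed velocity is v_obs = H0*d + v_pec, so one galaxy's H0
# estimate carries a fractional error ~ v_pec / (H0*d).  The 300 km/s
# default is an assumed typical peculiar velocity.

def fractional_H0_error(distance_mpc, H0=70.0, v_pec=300.0):
    """Fractional error in H0 from one galaxy whose peculiar velocity
    v_pec (km/s) contaminates its expansion velocity H0 * d."""
    return v_pec / (H0 * distance_mpc)

for d in (25.0, 150.0):   # roughly the Cepheid limit vs. a ~10,000 km/s flow
    print(f"d = {d:5.0f} Mpc: {100 * fractional_H0_error(d):4.1f}% error")
```

At the ~25 Mpc Cepheid limit this is ~17%, consistent with the 10–20% quoted above, while at ~10 000 km/s it drops below 3% — which is exactly why the secondary indicators matter.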
6.1. Calibration of the Tully–Fisher relation

One of the key elements of the HST H₀ Key Project is the Cepheid calibration of the relation between the luminosity and rotational velocity of spiral galaxies, the Tully–Fisher (TF) relation. The TF relation provides a means of obtaining relative distances to individual galaxies with an accuracy of about 20%. By measuring Cepheid distances to nearby galaxies that also have measured Tully–Fisher distances, absolute distances to spiral galaxies out to velocity distances of 10 000 km/s can then be obtained. The new HST distances increase by a factor of 4 the number of TF calibrators previously available from ground-based Cepheid searches. The status of the H₀ Key Project Tully–Fisher calibration has been reviewed recently by Mould et al. [14,19] and Madore et al. [15]. This preliminary calibration yields a value of H₀ = 73 ± 10 km s⁻¹ Mpc⁻¹. This value is in very good agreement with a recent analysis of 24 galaxy clusters with Tully–Fisher measurements by Giovanelli et al. [20]; based on a similar set of Cepheid distances, these authors find H₀ = 69 ± 5 km s⁻¹ Mpc⁻¹.
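Schematically, the TF route to H₀ works as follows; the slope and zero point here are invented placeholders (in practice the Cepheid galaxies are what pin down the zero point):

```python
import math

def tf_abs_mag(logW, slope=-7.7, zero_point=-20.0):
    """Schematic Tully-Fisher relation M = slope*(logW - 2.5) + zero_point,
    where W is the rotation width in km/s.  Coefficients are invented;
    in a real analysis the zero point is fixed by the Cepheid calibrators."""
    return slope * (logW - 2.5) + zero_point

def hubble_from_tf(velocity_kms, apparent_mag, logW):
    """H0 from one galaxy: TF distance modulus, then H0 = v / d."""
    mu = apparent_mag - tf_abs_mag(logW)
    d_mpc = 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6
    return velocity_kms / d_mpc

# An invented galaxy at cz = 3000 km/s:
H0 = hubble_from_tf(3000.0, apparent_mag=12.4, logW=2.6)   # ~70
```

Any shift in the Cepheid zero point moves every TF distance, and hence H₀, by the same fraction — the systematic coupling discussed throughout this section.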
6.2. Calibration of type Ia supernovae

One of the most promising methods for measuring relative distances to distant galaxies is based on the measurement of type Ia supernova luminosities. These supernovae are believed to result when a carbon–oxygen white dwarf is a member of a binary system in which the companion star is losing mass to the white dwarf. If the Chandrasekhar mass is exceeded, a supernova explosion ensues. Cepheid calibrators have recently become available for this method as a result of the availability of HST (e.g., [21] and references therein). Several independent studies now suggest that type Ia supernovae do not all have the same intrinsic luminosities, as was earlier suggested, but instead obey a fairly well-defined relation between the absolute magnitude (or brightness) at maximum light and the shape, or decline rate, of the supernova light curve [22–25].

As part of the Key Project, we are also undertaking a calibration of type Ia supernovae, independently of the Sandage et al. group. We have measured the distance to a spiral galaxy in the southern-hemisphere Fornax cluster, which contains the elliptical-galaxy hosts of two well-observed type Ia supernovae, in addition to another spiral-galaxy host. As a result, we have three additional type Ia supernovae to supplement the sample of seven calibrators published by Sandage et al. Applying this calibration to the distant type Ia supernovae of Hamuy [23] gives H₀ = 64–68 km s⁻¹ Mpc⁻¹ [26]. The larger value of H₀ compared to that of Sandage et al. [21] (57 km s⁻¹ Mpc⁻¹) is due to three factors: (1) we have given low weight to historical supernovae observed photographically, (2) we have allowed for the fact that supernovae do not all have the same luminosity (i.e., that there is a relationship between the luminosity of a supernova and how quickly it fades from maximum light, the decline-rate absolute-magnitude relation), and (3) we have added two new Fornax calibrators. All three factors contribute in roughly comparable proportions. We note that the addition of the Fornax calibrators changes the value of H₀ by +3 km s⁻¹ Mpc⁻¹, or less than 5%; most of the remaining difference reflects the lower weight given to the historical supernovae 1895B, 1937C, and 1961F.

6.3. Absolute calibration

Ideally, calibration of any method for measuring distances should be undertaken with a technique based on simple geometrical considerations (e.g., parallax measurements). Hence, it was anticipated that milliarcsecond parallax measurements of Cepheids using the Hipparcos satellite would yield an accurate test, or perhaps a revision, of the Cepheid distance scale. Unfortunately, however, Cepheids being rare objects in the local neighborhood, the Hipparcos parallax measurements available for Cepheids have very poor precision; their signal-to-noise ratios (π/σ_π) range from 0.3 to 5.3, at best [27,28]. The calibration of Cepheid extragalactic distances is currently undertaken relative to the nearby companion galaxy, the Large Magellanic Cloud, located at a distance of 50 ± 5 kpc (where 1 kpc = 3.09×10¹⁹ m). The new Hipparcos results are consistent at a level of 4 ± 7% with this distance, which is based on a wide range of different methods. Currently, the distance to the Large Magellanic Cloud represents one of the largest outstanding sources of systematic error in the extragalactic distance scale and in the determination of H₀. Parallax measurements from the upcoming Space Interferometry Mission (SIM) will be critical for reducing this remaining uncertainty in the calibration.
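Because every Cepheid-based distance in this chain scales linearly with the adopted LMC distance, the 50 ± 5 kpc uncertainty propagates directly into H₀ (= v/d), with the opposite sign. A two-line sketch:

```python
# Every Cepheid distance scales linearly with the adopted LMC distance,
# so H0 (= v/d) scales inversely with it.

def rescaled_H0(H0, d_lmc_adopted=50.0, d_lmc_true=50.0):
    """H0 that would result if the true LMC distance (kpc) differed
    from the adopted one."""
    return H0 * d_lmc_adopted / d_lmc_true

print(rescaled_H0(70.0, 50.0, 55.0))   # LMC 10% farther -> H0 down ~10%
print(rescaled_H0(70.0, 50.0, 45.0))   # LMC 10% closer  -> H0 up  ~11%
```

The ±10% swing from a ±5 kpc LMC error is why this zero point dominates the systematic error budget described above.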
7. Summary

Improved methods for measuring relative distances to remote galaxies developed over the past decade, in parallel with improvements to the calibrating Cepheid distance scale and the large numbers of Cepheid distances now available from HST, have led to an enormous increase in the accuracy and precision with which the expansion rate, or Hubble constant, H₀ can be measured. Unlike the earlier factor-of-two discrepancy, which persisted for about two decades, we now appear to be seeing some convergence in values of H₀, with 55 ± 5 km s⁻¹ Mpc⁻¹ being reported by Allan Sandage at this meeting, and 73 ± 6 (statistical) ± 8 (systematic) km s⁻¹ Mpc⁻¹ by our group. Moreover, recent and expected future advances in the techniques for measuring H₀ using gravitational lenses and the Sunyaev–Zel'dovich effect are very promising. There are now quantitative reasons for optimism that the extragalactic distance scale will soon be firmly established at the ±10% level.
Acknowledgements

I am very pleased to acknowledge the enormous efforts of my co-investigators on the H₀ Key Project team discussed in Sections 4 and 5: my long-term collaborator on the Cepheid distance scale, B.F. Madore, co-PIs J. Mould and R.C. Kennicutt, and co-Is L. Ferrarese, H. Ford, B. Gibson, J. Graham, M. Han, J. Hoessel, J. Huchra, S. Hughes, G. Illingworth, R. Phelps, A. Saha, S. Sakai, N. Silbermann, and P. Stetson, and graduate students F. Bresolin, P. Harding, D. Kelson, L. Macri, D. Rawson, and A. Turner. This work is based on observations with the NASA/ESA Hubble Space Telescope, obtained by the Space Telescope Science Institute, which is operated by AURA, Inc. under NASA contract No. 5-26555. Support for this work was provided by NASA through grant GO-2227-87A from STScI.
References

[1] S. Refsdal, Mon. Not. R. Astron. Soc. 128 (1964) 295.
[2] S. Refsdal, Mon. Not. R. Astron. Soc. 132 (1966) 101.
[3] Kundić et al., Astron. J. 114 (1997) 2276.
[4] C.R. Keeton, C.S. Kochanek, Astrophys. J. 495 (1997) 42.
[5] P. Schechter et al., Astrophys. J. Lett. 475 (1997) 85.
[6] C. Impey, preprint, 1998.
[7] R.A. Sunyaev, Y.B. Zel'dovich, Astrophys. Space Sci. 4 (1969) 301.
[8] M. Birkinshaw, J.P. Hughes, Astrophys. J. 420 (1994) 33.
[9] T. Herbig, C.R. Lawrence, A.C.S. Readhead, Astrophys. J. Lett. 449 (1995) 5.
[10] S. Myers et al., Astrophys. J. 485 (1997) 1.
[11] M. Donahue, M. Livio (Eds.), The Extragalactic Distance Scale, Cambridge University Press, Cambridge, 1997.
[12] W.L. Freedman et al., Nature 371 (1994) 757.
[13] R.C. Kennicutt, W.L. Freedman, J.R. Mould, Astron. J. 110 (1995) 1476.
[14] J.R. Mould et al., Astrophys. J. 441 (1995) 413.
[15] B.F. Madore et al., Astrophys. J. (1998), in press.
[16] B.F. Madore, W.L. Freedman, Publ. Astron. Soc. Pac. 103 (1991) 933.
[17] G. Jacoby et al., Publ. Astron. Soc. Pac. 104 (1992) 599.
[18] W.L. Freedman, J. Mould, R.C. Kennicutt, B.F. Madore, in: K. Sato (Ed.), Cosmological Parameters and the Evolution of the Universe, IAU Symp. 183, 1998, in press.
[19] J.R. Mould et al., in: M. Donahue, M. Livio (Eds.), The Extragalactic Distance Scale, Cambridge University Press, Cambridge, 1997.
[20] Giovanelli et al., Astrophys. J. Lett. 477 (1997) L1–L4.
[21] A. Sandage, A. Saha, G.A. Tammann, L. Labhardt, N. Panagia, F.D. Macchetto, Astrophys. J. Lett. 460 (1996) L15–L18.
[22] M. Phillips, Astrophys. J. 413 (1993) L105–L108.
[23] M. Hamuy, M.M. Phillips, J. Maza, N.B. Suntzeff, R.A. Schommer, R. Aviles, Astron. J. 109 (1995) 1–13.
[24] M. Hamuy et al., Astron. J. 112 (1996) 2398.
[25] A. Riess, W. Press, R. Kirshner, Astrophys. J. 438 (1995) L17–L20.
[26] W.L. Freedman, B.F. Madore, R.C. Kennicutt, in: M. Donahue, M. Livio (Eds.), The Extragalactic Distance Scale, Cambridge University Press, Cambridge, 1997, pp. 171–185.
[27] M.W. Feast, R.M. Catchpole, Mon. Not. R. Astron. Soc. 286 (1997) L1–L5.
[28] B.F. Madore, W.L. Freedman, Astrophys. J. 492 (1998) 110.
Physics Reports 307 (1998) 53—60
Inflation and the cosmic microwave background Andrew R. Liddle* Astronomy Centre, University of Sussex, Brighton, Sussex BN1 9QJ, UK
Abstract

Cosmological inflation is the simplest and most plausible way in which perturbations can be created in the Universe, leading to the formation of structure and also to anisotropies in the cosmic microwave background. In this article, I review what can be learnt about inflation from studies of the microwave background, and also stress the importance of inflation as a paradigm underpinning attempts to obtain precision measures of cosmological parameters from the microwave background. 1998 Elsevier Science B.V. All rights reserved.

PACS: 98.80.Cq; 98.70.Vc

Keywords: Cosmological inflation; Cosmic microwave background
(* Present address: Astrophysics Group, Imperial College, London SW 2BZ, UK.)

1. Introduction

Because of calculational simplicity, and because it provides a good fit to the observational data, an initial spectrum of adiabatic density perturbations is normally assumed responsible for all the observed structures in the Universe, such as galaxy clusters and microwave background anisotropies. A natural explanation for such a spectrum of perturbations is the inflationary cosmology, and indeed the causal generation of large-scale adiabatic perturbations requires a period of inflation [1]. Given an initial spectrum of such perturbations, one can compute their evolution throughout cosmic history. Initially, the scales of astrophysical interest are much larger than the Hubble scale, and causality dictates that they cannot evolve, regardless of the nature of physical laws which might be operating at that time. By the time the Hubble scale has grown to encompass the scales on which we can make observations, the energy scales are so low that conventional physics (primarily
gravitational, atomic and radiative physics), well tested in the laboratory, is all that is needed to study the evolution with time, at least as long as the perturbations remain small enough to be described by linear equations. The perturbations, evolved to the present, can then be compared with observations. The cosmic microwave background is a particularly powerful probe, because even at the present epoch it is described by linear theory on all scales, whereas the distribution of matter has become non-linear on scales less than about 10 Megaparsecs. Typical theoretical predictions for the anisotropies in the microwave background show considerable structure, seen in Fig. 1, which proves sensitive to the details of the cosmological model under consideration. These ‘details’ amount to a specification of a number of otherwise ill-determined parameters, such as the mass density of the Universe and its present expansion rate. Because the predicted anisotropies are sensitive to these parameters, high-quality observations are able to make accurate measurements of the parameters, and the promise is that upcoming microwave background observations can pin down our cosmology to unprecedented accuracy. On the other hand, the microwave background anisotropies are entirely a product of linear perturbation theory, and hence depend entirely on the initial perturbations one starts with. The only reason one can hope to extract cosmological parameters is that one has a suitable preconception as to what the initial perturbations might be. For example, if the initial perturbations take the form of a power law, parametrized by an amplitude and a spectral index giving the scale dependence, then those parameters can be thrown into the melting pot along with the cosmological ones and fitted by the data.
One of the delights of inflation is that the initial spectrum is indeed typically predicted to take on a simple form, and furthermore one which can readily be calculated to high precision for a given inflationary model.
Fig. 1. The microwave anisotropies predicted in a sample cosmological model, the standard cold dark matter model. The horizontal axis shows the multipole number of a spherical harmonic expansion (large angular scales to the left), while the vertical axis shows the expected mean square perturbation on that scale.
2. Models of inflation

By definition, inflation is any period of the Universe's evolution during which it experiences an accelerated expansion:

\[
\text{INFLATION} \;\Longleftrightarrow\; \ddot a > 0 \;\Longleftrightarrow\; \frac{d}{dt}\!\left(\frac{H^{-1}}{a}\right) < 0 \, , \tag{1}
\]
where a is the scale factor and H = ȧ/a the Hubble parameter. As it happens, current observations [2] suggest that our present Universe is accelerating, and hence inflating, but for the purposes of this article I am considering a period of inflation which occurred in the Universe's distant past. The final form in the above equation is best for a physical interpretation, because it features the characteristic scale of an expanding Universe, H⁻¹, and shows us that the condition for inflation is that the Hubble length is decreasing when expressed in units comoving with the expansion. Scales being carried along with the expansion may start much smaller than the Hubble radius H⁻¹, and end up much larger by the time the inflationary epoch has come to an end. This is the opposite of the usual non-inflationary behaviour, where any scale within the Hubble radius must remain there, and is the key to inflation's ability to generate adiabatic perturbations. The Universe's acceleration is governed by the equation

\[
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) \, , \tag{2}
\]
where ρ is the energy density and p the pressure, so we immediately see that something out of the ordinary, possessing a negative pressure, is required if inflation is to take place. This can be achieved by invoking a kind of matter known as a scalar field φ, used in particle physics to induce symmetry breaking. Scalar fields possess a potential energy V(φ), which measures the energy density associated with a given field configuration (much in the manner of a binding energy), and a homogeneous scalar field has energy density and pressure given by

\[
\rho_\phi = \tfrac{1}{2}\dot\phi^2 + V(\phi) \, ; \qquad p_\phi = \tfrac{1}{2}\dot\phi^2 - V(\phi) \, . \tag{3}
\]

Provided φ̇² < V, we will have inflation. It turns out that this is fairly generic. In common with normal practice, I will focus my discussion on the simplest sub-class of inflationary models, where there is a single scalar field φ rolling slowly down some potential V(φ). Since V(φ) is the only input information, the spectrum of perturbations will depend only upon it, and it is the inflationary information we are trying to constrain via observations. It would be nice if particle theory were to predict some definite form for V(φ), but that hope seems further off than the prospect of obtaining direct observational information. Although I will stick with this simplest scenario, in fact inflationary model building has developed into quite a complex art, with multiple scalar fields often cropping up. This can complicate the business of determining the spectrum which is produced, though the technology does exist to compute the perturbations in all known models.
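The condition φ̇² < V follows immediately from combining Eqs. (2) and (3); spelling out this standard step (not written out in the article itself):

```latex
\rho_\phi + 3p_\phi
  = \left(\tfrac{1}{2}\dot\phi^2 + V\right) + 3\left(\tfrac{1}{2}\dot\phi^2 - V\right)
  = 2\dot\phi^2 - 2V ,
\qquad\text{so}\qquad
\ddot a > 0 \;\Longleftrightarrow\; \rho_\phi + 3p_\phi < 0
          \;\Longleftrightarrow\; \dot\phi^2 < V(\phi) .
```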
3. Perturbations from inflation The inflationary production of perturbations relies on the fact that we live in a quantum Universe, where Heisenberg’s uncertainty principle enforces a minimum level of irregularity. During an inflationary epoch, comoving length scales (i.e. scales carried along with the expansion) are continually being stretched to scales larger than the Hubble length, as shown in Fig. 2. As this happens, quantum fluctuations in the fields become ‘frozen in’ — unable to evolve on the Hubble timescale as their wavelength is so long — and begin to act like classical perturbations. The amplitude of these perturbations is readily calculable, and has been studied in many papers. See Ref. [3] for a review.
Fig. 2. How scales evolve during and after inflation. The two panels show the same thing, the upper one in physical coordinates and the lower in comoving coordinates. Whichever way you look at it, during inflation scales can start well within the Hubble length and end up well outside. The vertical scales cover many orders of magnitude.
Because the Heisenberg uncertainty principle democratically affects everything, there are perturbations not only in the scalar field, which become adiabatic density perturbations, but also in the gravitational field which become gravitational waves. The precise form of both types of perturbation, their amplitude and scale-dependence, will depend on the potential energy »( ), it being the only input into the problem. If the inflationary expansion is sufficiently rapid, those scales on which observable perturbations are generated all cross outside the Hubble radius during a very narrow interval of time. In that event, physical conditions were clearly pretty much the same when the smallest interesting scales (corresponding to galaxies) and when the largest interesting scales (corresponding to COBE) acquired their perturbations. Hence a first guess is that the density perturbations on all scales will be the same; this is known as the scale-invariant or Harrison—Zel’dovich spectrum. Within this approximation all inflation models predict exactly the same outcome, so in particular observations could not distinguish between different models. The adequacy of that approximation depends on the quality of observations available, and in particular on whether or not the observations cover a wide enough range of scales to be able to probe any scale dependence of the perturbation amplitude. Early data sets on the galaxy correlation function were not able to say much about the scale dependence, and in fact neither is the famous data set of the COBE satellite [4] if taken in isolation. However, since COBE probes very large scales, as soon as it is combined with any other data on shorter scales it provides a strong lever arm for probing scale dependence, and it turns out that for present data the Harrison—Zel’dovich approximation is completely inadequate and one must do better. 
The next level of sophistication approximates the density perturbations and gravitational waves as power laws [3,5]; each needs an amplitude and a spectral index making four numbers in all, and these can readily be computed in any given inflation model. In fact there is a degeneracy between density perturbations and gravitational waves, the so-called consistency equation, which reduces this to three, but in practice it seems optimistic to believe that more than one piece of information about the gravitational waves can be observationally extracted, and one normally just considers a quantity r which measures the fractional contribution of gravitational waves to the COBE signal [3,5]. When the power-law approximation is adopted, different inflation models give different predictions for the spectra, in the form of these three numbers. The inflationary paradigm, as opposed to any particular model, therefore does not give a firm prediction. Nevertheless, the three parameters we have just discussed are not many to add to those already needed to describe the Universe’s global dynamics and contents, and one can readily hope to obtain them from good data. Further, that different inflation models predict different spectra gives us a way of distinguishing between inflation models, and the ability to exclude a large fraction of the models permitted by theoretical considerations alone.
4. Scale-dependence of the spectral index

A benchmark for theoretical accuracy is set by the upcoming MAP and Planck satellites, due for launch in 2000 and 2007, respectively. Since they are expected to provide data which is vastly superior to that we presently have, might even the power-law approximation, well-suited to present
data, prove inadequate? This requires one to consider scale dependence of the spectral indices. The optimal strategy appears to be to expand the log of the spectra as Taylor series in ln k, i.e.
ln δ_H²(k) = ln δ_H²(k_*) + (n_* − 1) ln(k/k_*) + (1/2) (dn/d ln k)|_{k_*} ln²(k/k_*) + ⋯ .   (4)
Further details can be found in Lidsey et al. [6]. The expansion scale k_* is arbitrary but presumably best chosen in the centre of the data. This expansion is to be truncated as soon as an adequate fit to the data is obtained. The first term corresponds to the Harrison-Zel'dovich spectrum, and the first two taken together to the power-law approximation with spectral index n (in principle evaluated at k_*, but of course constant within this approximation). To these, a general inflationary model adds a sequence of derivatives of the spectral index, evaluated at the scale k_*. In a given model, these can readily be calculated [7,8]. Typically one finds that only the first two terms are significant, but there are models where higher terms are important too [9,10]. In general, then, one might want to fit the microwave anisotropy data not just for the amplitudes and n, but also one or possibly more derivatives of n. From an inflationary point of view, this looks like a good thing, because we are saying that there is an extra piece of information available in the microwave anisotropies which we can extract from the data. However, there is a downside, which is that the extra piece of information has been stolen at the expense of all the other parameters; if we say we need to make a fit including one or more extra parameters, then the expected uncertainties on all cosmological parameters will be increased. We have examined [9] the extent to which the uncertainties are likely to increase, using the Fisher information matrix technique [11]. The expected uncertainties have some dependence on the choice of 'correct' model, and we consider the standard Cold Dark Matter model for illustration. The actual numbers aren't very important; what is interesting is the trend as extra parameters describing the initial conditions are added. The results are shown in Table 1; see Ref. [9] for full details.
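The truncation hierarchy just described can be made concrete with a short sketch in plain Python, evaluating the expansion of Eq. (4) at its first three levels. The pivot scale, amplitude, spectral index and running below are illustrative numbers, not values quoted in this article:

```python
import math

def ln_spectrum(k, k_star=0.05, ln_amp=math.log(1.9e-5),
                n_star=0.95, dn_dlnk=-0.01, order=2):
    """Taylor expansion of ln delta_H^2(k) in ln(k/k_star), truncated at `order`.

    order 0 -> Harrison-Zel'dovich (scale-invariant) spectrum,
    order 1 -> power law with spectral index n_star,
    order 2 -> power law plus the running dn/d ln k.
    All default parameter values are illustrative only.
    """
    x = math.log(k / k_star)
    result = ln_amp
    if order >= 1:
        result += (n_star - 1.0) * x
    if order >= 2:
        result += 0.5 * dn_dlnk * x * x
    return result

k = 0.5  # a scale one decade above the pivot
hz = ln_spectrum(k, order=0)   # Harrison-Zel'dovich
pl = ln_spectrum(k, order=1)   # power law
run = ln_spectrum(k, order=2)  # power law with running
print(hz, pl, run)
```

With n_* < 1 and a negative running, each extra term lowers the predicted power on scales above the pivot; the expansion is truncated as soon as the data no longer demand the next term.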
Table 1
Estimated uncertainties on parameters expected from the Planck satellite (configuration with polarized detectors), assuming the Standard Cold Dark Matter model is correct. Successive columns introduce more freedom into the description of the initial spectra. The upper block contains the cosmological parameters and the lower one the inflationary parameters. Here Ω_B is the baryon density, h the Hubble parameter, Ω_Λ a possible cosmological constant and τ the optical depth to the last-scattering surface. Ω_0 is fixed by the assumption of spatial flatness, so the second row estimates the uncertainty in h.

Parameter               Power law    + dn/d ln k    + d²n/d(ln k)²
δ(Ω_B h²)/Ω_B h²        0.007        0.008          0.009
δh/h                    0.02         0.02           0.02
δ(Ω_Λ h²)/h²            0.04         0.04           0.05
δτ                      0.002        0.002          0.002
δn                      0.004        0.004          0.006
δr                      0.04         0.04           0.04
δ(dn/d ln k)            —            0.009          0.01
δ(d²n/d(ln k)²)         —            —              0.02
We assume an experimental configuration of the Planck satellite with polarized detectors. The first column shows the results when the power-law approximation is assumed, and the successive columns each introduce an additional derivative of n. Encouragingly, we see that the cosmological parameters take only a very minor hit as extra initial condition freedom is introduced. This leads to the promising conclusion that the modelling of the initial perturbation spectra may not have much of an influence on the satellites’ abilities to constrain our cosmology. Note that unless a power-law is assumed, this increase in uncertainty applies even if the values of the derivatives are zero to within their observational uncertainties. Introducing one derivative of n hardly weakens the determination of n at all, while there is some deterioration when the second derivative is included. If these extra derivatives are measurable, then the loss in accuracy on n is likely to be overcompensated by the gain in information on higher derivatives [10]; indeed, one might expect that to be the case, as sneaking in an extra inflationary parameter is a way of transferring a small part of the information content in the microwave background away from the cosmological parameters and into the inflationary ones.
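The parameter-degradation effect described here can be illustrated with a toy Fisher-matrix calculation. This is not the actual computation of Ref. [9]: the matrices below are invented for illustration. The point is generic: the marginalized uncertainty σ_i = √((F⁻¹)_ii) can only grow (or stay the same) when an extra correlated parameter is added to the fit.

```python
# Toy Fisher-matrix illustration (invented numbers, not Planck forecasts):
# adding a correlated parameter inflates the marginalized uncertainties.

def invert(m):
    """Gauss-Jordan inverse of a small square matrix (lists of lists)."""
    n = len(m)
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]

# Fisher matrix for two parameters; then the same sub-block embedded in a
# three-parameter fit with a correlated extra parameter.
F2 = [[4000.0, 300.0],
      [ 300.0, 900.0]]
F3 = [[4000.0, 300.0, 800.0],
      [ 300.0, 900.0, 100.0],
      [ 800.0, 100.0, 600.0]]

sigma_n_2 = invert(F2)[0][0] ** 0.5  # uncertainty on parameter 1, 2-param fit
sigma_n_3 = invert(F3)[0][0] ** 0.5  # same parameter after marginalizing
print(sigma_n_2, sigma_n_3)          # the second is larger
```

Here the uncertainty on the first parameter grows once the third, correlated parameter must be marginalized over, the same trend seen in Table 1.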
5. Inflationary expectations

As to whether this scale-dependence is likely to show up in practice, we have no better guide than the current theoretical prejudice, which says:

• Most slow-roll inflation models do not give a significant scale-dependence, even by the standards of the Planck satellite.
• 'Designer' models of inflation, for example the broken scale-invariance models, do give a large effect, but not one which is adequately treated within the perturbative framework I've outlined. Such models must be confronted with observation on a model-by-model basis.
• A currently popular class of models known as hybrid inflation can give an observable effect. Partly this is due to the so-called η-problem; inflation requires that two slow-roll parameters ε and η be less than one [3,5], but on the other hand supergravity models generically predict η = 1 + 'something'. Since the 'something' is unlikely to be extremely good at cancelling the 1, such models may well not respect slow-roll very well, and this enhances the chance of getting detectable scale-dependence. The best-motivated models at the moment are those of Stewart [12].
6. Conclusions

Inflation as a paradigm is readily testable by upcoming microwave background observations. For example, the prediction of a peak structure, as seen in Fig. 1, is extremely generic for inflationary perturbations, and quite specific to the situation where perturbations begin their evolution on scales much larger than the Hubble radius. Details such as the peak spacing promise a very strong test [13], and something as simple as an observed spectrum without multiple peaks appears sufficient to rule out inflation [14]. If inflation passes these tests, then detailed fitting to the observations promises startlingly high-quality information about the inflationary mechanism.
Inflation has an important role in underpinning the entire microwave background endeavour. I stressed at the start that we can only get high-quality constraints from the present radiation power spectrum if we have a simple form, preferably motivated by theory, for the initial perturbations. Since the observations aim to be accurate at the percent level, the input information needs this accuracy too, and inflationary theory is now in a position where predictions at this level of accuracy can be made for all known models. This can be contrasted with the situation for topological defect models, where it has proven much harder to make accurate theoretical predictions. Less accurate theoretical predictions will naturally lead to much more poorly determined cosmological parameters. It has also been argued [15] that in a defect model the observed spectrum is less sensitive to the cosmological parameters, implying poorer parameter estimation even if the theoretical calculations can after all be made more accurate.
Acknowledgements

The author was supported by the Royal Society, and thanks Ed Copeland, Daniel Eisenstein, Ian Grivell, Rocky Kolb and David Lyth for discussions and collaboration.
References

[1] A.R. Liddle, Phys. Rev. D 51 (1995) R5347.
[2] S. Perlmutter et al., Nature 391 (1998) 51.
[3] A.R. Liddle, D.H. Lyth, Phys. Rep. 231 (1993) 1.
[4] C.L. Bennett et al., Astrophys. J. 464 (1996) L1.
[5] A.R. Liddle, D.H. Lyth, Phys. Lett. B 291 (1992) 391.
[6] J.E. Lidsey, A.R. Liddle, E.W. Kolb, E.J. Copeland, T. Barreiro, M. Abney, Rev. Mod. Phys. 69 (1997) 373.
[7] E.J. Copeland, E.W. Kolb, A.R. Liddle, J.E. Lidsey, Phys. Rev. D 48 (1993) 2529; 49 (1994) 1840.
[8] A. Kosowsky, M.S. Turner, Phys. Rev. D 52 (1995) 1739.
[9] E.J. Copeland, I.J. Grivell, A.R. Liddle, Mon. Not. Roy. Astron. Soc., to appear, astro-ph/9712028.
[10] E.J. Copeland, I.J. Grivell, E.W. Kolb, A.R. Liddle, Phys. Rev. D 58 (1998) 043002.
[11] G. Jungman, M. Kamionkowski, A. Kosowsky, D.N. Spergel, Phys. Rev. D 54 (1996) 1332; J.R. Bond, G. Efstathiou, M. Tegmark, Mon. Not. Roy. Astron. Soc. 291 (1997) L33; M. Zaldarriaga, D.N. Spergel, U. Seljak, Astrophys. J. 488 (1997) 1.
[12] E.D. Stewart, Phys. Lett. B 391 (1997) 34; Phys. Rev. D 56 (1997) 2019.
[13] W. Hu, M. White, Phys. Rev. Lett. 77 (1996) 1687.
[14] J.D. Barrow, A.R. Liddle, Gen. Rel. Grav. 29 (1997) 1501.
[15] U.-L. Pen, Harvard preprint astro-ph/9804083.
Physics Reports 307 (1998) 61—66
The Sloan Digital Sky Survey and large-scale structure Joshua A. Frieman NASA/Fermilab Astrophysics Center, Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510, USA Department of Astronomy and Astrophysics, The University of Chicago, 5640 S. Ellis Avenue, Chicago, IL 60637, USA
Abstract

The Sloan Digital Sky Survey (SDSS) will produce a 3D map of the universe of galaxies over a volume ∼4×10⁸ h⁻³ Mpc³. Covering 1/4 of the sky, it will comprise a photometric imaging survey of 10⁸ objects in 5 wavebands and a magnitude-limited spectroscopic (redshift) survey of 10⁶ galaxies and 10⁵ quasars. The SDSS will probe the structure of the universe to large scales with great precision and should shed light on the role of dark matter in the process of structure formation. 1998 Elsevier Science B.V. All rights reserved. PACS: 98.80.−k; 95.35.+d
1. Introduction

The study of large-scale structure formation is interesting because it involves fundamental questions, among them: Is gravitational instability the correct physical framework for structure formation? If so, what is the physical origin of the initial perturbations (e.g., inflation, topological defects, or something else)? Once perturbations evolve, how do luminous galaxies form and how are they related to the underlying mass distribution? What is the universe made of, that is, what is the dark matter(s), how much is there, and how is it distributed? How does the pattern of large-scale clustering arise and how should we quantify it? In recent years, a plausible framework for a theory of large-scale structure formation has emerged: primordial, quasi-scale-invariant perturbations from an early epoch of inflation, which subsequently grow by gravitational instability in a universe with a substantial component of cold
* Address for correspondence. E-mail: [email protected].
J.A. Frieman / Physics Reports 307 (1998) 61—66
dark matter (CDM) to form galaxies and larger structures. In the near future, new galaxy surveys, along with cosmic microwave background anisotropy measurements, will provide precise probes of this structure formation paradigm. One can think of the current prescription for large-scale structure formation as a machine with various inputs and outputs and with a number of knobs to dial that specify the models. The input is a physical model for the origin of density fluctuations: for the last fifteen years, the choice has predominantly been between inflation or topological defects (including cosmic strings, textures, and global monopoles). Both choices lead to initial fluctuations that are nearly scale-invariant, that is, for which the large-scale primordial density power spectrum scales roughly as P_i(k) ∼ kⁿ with n ≈ 1 ± 0.3. The physical mechanisms by which inflation and defects seed structure differ and lead to signatures by which they can be distinguished from each other. Once the initial perturbations are fed into the structure formation machine, they pass through a first stage which evolves them according to the linearized Einstein-Boltzmann equations (the density fluctuation amplitude is still small), leading to a processed spectrum P(k) = P_i(k) T(k). This linear processing part of the machine has knobs with which one can dial a number of cosmological parameters, such as the Hubble parameter h = H₀/(100 km s⁻¹ Mpc⁻¹), the fractional baryon density Ω_b, the densities of cold (Ω_cdm) and hot (Ω_ν) dark matter, and the cosmological constant Ω_Λ. Changing the values of these parameters affects the linear processing, i.e., the transfer function T(k), in different ways. In cold dark matter models, the overall shape of the transfer function is largely determined by the primordial spectral index n and the product Ω₀h.
In addition, P(k) will exhibit wiggles of appreciable amplitude if the ratio Ω_b/Ω₀ is sufficiently large [1]; these reflect the acoustic oscillations of the photon-baryon fluid around the time of recombination. If the initial fluctuations are Gaussian, as expected in the simplest inflation models, the power spectrum completely specifies the statistical properties of the linear density field. The linear density power spectrum is then fed into a second stage of the machine that incorporates non-linear gravitational evolution (generally modelled via N-body simulations) and hydrodynamic effects (gas shocking and cooling, and in principle cloud fragmentation, star formation, and energy injection from hot stars and supernovae). Inclusion of the latter effects via hydro codes and semi-analytic techniques is the subject of intensive on-going work by many groups. The output of the non-linear processor would ideally be an evolving three-dimensional distribution of galaxies for a typical realization of the initial conditions for a model. Ultimately, one would compare the statistical properties of the model galaxy distribution with those of galaxy catalogs. The comparison is complicated by the necessity of accounting for instrument sensitivities, observational selection functions, and possible systematic errors (e.g., variable obscuration in galaxy catalogs). To complete the loop, one could iteratively tweak the input physics and cosmological parameter knobs (within the range allowed by current observations) until a best fit is obtained.
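As a minimal sketch of the 'linear processing' stage, the snippet below uses the well-known BBKS-type fitting formula for the CDM transfer function, whose shape is controlled by a parameter Γ ≈ Ω₀h. The choice of this particular fitting formula, the amplitude, and the parameter values are assumptions made for illustration; they are not taken from this article:

```python
import math

def transfer_cdm(k, gamma=0.25):
    """BBKS-type fitting formula for the CDM transfer function T(k).
    k is in h/Mpc; gamma ~ Omega0*h is the shape parameter."""
    q = k / gamma
    if q < 1e-12:
        return 1.0  # T -> 1 on very large scales
    poly = 1.0 + 3.89*q + (16.1*q)**2 + (5.46*q)**3 + (6.71*q)**4
    return math.log(1.0 + 2.34*q) / (2.34*q) * poly ** -0.25

def linear_power(k, n=1.0, gamma=0.25, amp=1.0):
    """Processed spectrum: primordial power law times the transfer function
    (squared, as appropriate for power).  Normalization `amp` is arbitrary."""
    return amp * k**n * transfer_cdm(k, gamma) ** 2

# Larger Gamma moves the turnover in P(k) to smaller scales (larger k):
print(linear_power(0.02), linear_power(0.2))
```

Dialling the 'knobs' here amounts to changing n and Γ, which is exactly the two-parameter description of the CDM transfer function mentioned in the text.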
2. The power spectrum

The lowest order deviation from homogeneity for a random field is characterized by two-point statistics, i.e., the power spectrum and its Fourier complement, the autocorrelation function; as such, they furnish the primary observational test of structure formation models. (For a recent
Fig. 1. Estimates of the redshift-space galaxy power spectrum from several redshift surveys and one angular survey (APM), arbitrarily rescaled to coincide at k = 0.1 h Mpc⁻¹ [3].
discussion of some of the issues involved in extracting power spectra from surveys, see [2].) The current situation with respect to observational determinations of the galaxy power spectrum P_g(k) is shown in Fig. 1, from a compilation of results by Vogeley [3]. In the figure, estimates from different surveys have been vertically rescaled to agree at an intermediate scale; this focuses the comparison on the shape of the power spectrum and allows for the fact that galaxies of different properties (morphology, luminosity, etc.) are known to be relatively biased with respect to each other on small scales. Although the error bars on the results are not displayed, Fig. 1 shows that the power spectrum is not yet well determined on large scales, due to the limited volume covered by current surveys. In particular, a turn-over to the expected scale-invariant behavior (P(k) ∼ k) at small k is not well established. Nevertheless, one can make some quantitative conclusions based on current surveys. Assuming that the degree to which galaxies trace the underlying mass distribution is approximately scale-independent, i.e., that P_g(k) = b²P_m(k) with constant bias parameter b, the galaxy power spectrum on intermediate scales is reasonably well fit by a CDM spectrum with n ≈ 1 and Ω₀h = 0.2–0.25. On the largest scales probed by current surveys, there are tantalizing hints of a bump or feature in the power spectrum [4].
3. New galaxy surveys: the SDSS

In the near future, the galaxy data shown here will be substantially extended by new surveys using multifiber spectrographs to simultaneously measure many redshifts through optical fibers in
the same field, an ingenious technological development which makes large-area redshift surveys possible to greater depths in a finite survey time. Advances in multi-fiber spectroscopy were exploited by the recent Las Campanas Redshift Survey of ∼20 000 galaxies (using a system of ∼100 fibers), and by the on-going 2dF survey of 250 000 galaxies at the Anglo-Australian Telescope, which uses a robotic system to position 400 fibers over a two-degree field. The Sloan Digital Sky Survey (SDSS) [5] will measure one million galaxy redshifts over a contiguous area of π sr in the northern sky, using a dedicated 2.5 m telescope at Apache Point Observatory in southern New Mexico. The SDSS has two components: a digital imaging survey will measure angular positions and magnitudes for about 100 million galaxies (and a comparable number of stars and about 10⁵ QSOs), using a large camera with an array of 30 primary CCD chips (6 in each of 5 passbands). The effective depth of this photometric survey is about 23rd magnitude; the roughly one million brightest galaxies (r < 18.1) and 10⁵ QSOs will be targeted for follow-up spectroscopy. To accumulate redshifts at an unprecedented rate (of order 5000 per night), the SDSS will simultaneously use two 320-fiber spectrographs. The 640 fibers for each spectroscopic exposure will be plugged into pre-drilled holes in a large metal plate positioned in the focal plane of the telescope. As of May 1998, the SDSS telescope and imaging camera had been completed and were undergoing commissioning. The Survey proper is expected to start in early 1999 and should take roughly 5–6 years to complete. An estimate of the expected precision with which the SDSS will determine the large-scale power spectrum is shown in Fig. 2 [3]. The large three-dimensional volume to be surveyed should allow
Fig. 2. Predicted uncertainty in P(k) estimated from a volume-limited sample of SDSS North galaxies and for the Bright Red Galaxy (BRG) sample [3]. The errors assume that the true power spectrum is that of an Ω₀h = 0.25 CDM model. Plotted for comparison are CDM models with Ω₀h ranging from 0.2 (upper curve) to 0.5 (lowest curve), as well as the approximate range of scales probed by the MAP and COBE CMB satellites.
a precise determination of P_g(k) on large scales, including an unambiguous tracing of the turn-over to the primordial spectrum at small k. Since the theoretical predictions differ substantially on these scales, this should enable a finer distinction between CDM models to be made, as well as useful constraints on the primordial spectral index n (which is sensitive to a combination of the slope and curvature of the inflaton potential) and the baryon density Ω_b. In addition to the main Northern galaxy sample of ∼900 000 galaxies, the SDSS will include a spectroscopic sample of ∼100 000 intrinsically bright red galaxies, many of them brightest cluster members, which should provide a nearly volume-limited sample out to redshift z ∼ 0.5. As indicated in Fig. 2, they are expected to be more strongly clustered than typical galaxies.
4. Beyond the power spectrum

The Sloan Survey will contain a rich lode of statistical information about galaxy clustering beyond that captured in the power spectrum. A complete statistical description of a homogeneous random field, e.g., the density contrast field δ(r), is provided by its N-point correlation functions, ξ_N(r₁, …, r_N) = ⟨δ(r₁) ⋯ δ(r_N)⟩, or their Fourier transforms, the N-point spectra. In particular, the N = 3 point function is the lowest order statistic sensitive to phase correlations, and therefore encodes important information about the spatial coherence of the density field (e.g., the presence of coherent structures such as filaments and voids). It is often convenient to compress this information by considering instead the one-point Nth order cumulants, ⟨δ^N⟩_{R,c}, that is, the connected moments of the density field smoothed through a window of scale R. In particular, the second-order cumulant is just the variance of the smoothed density field,
⟨δ²⟩_R = σ_R² = (1/2π²) ∫ dk k² P(k) W²(kR),   (1)

where the window function W is usually taken to be the Fourier transform of a Gaussian or top-hat filter. These one-point cumulants can be straightforwardly measured in galaxy catalogs via counts in cells (see, e.g., [6]). It is useful to define the hierarchical amplitudes of the smoothed mass distribution, S_N(R) = ⟨δ^N⟩_{R,c}/σ_R^{2(N−1)}. The S_N have the convenient property that on large scales, i.e., in the weakly non-linear regime σ_R ≲ 1, and for Gaussian initial conditions, according to non-linear perturbation theory the S_N are only weakly dependent on time, scale, density, or geometry of the cosmological model [7]. As a result, they turn out to provide an excellent tool for probing bias [8,9] and for discriminating between Gaussian and non-Gaussian initial conditions [10]. The simplest interpretation of the S_N data in the APM survey [11], in particular the amplitudes and relative scale-independence of the skewness S₃ and kurtosis S₄, is that these optically selected galaxies are relatively unbiased on intermediate scales ∼10–30 h⁻¹ Mpc and that their distribution is consistent with that expected from nonlinear evolution from Gaussian initial conditions [9]. These conclusions are bolstered by recent measurement of the angular three-point function in the APM survey [12]. The SDSS will provide substantially more precise determinations of the higher order galaxy moments on large scales and therefore probe the nature of bias and the statistical properties of the initial fluctuations in finer detail.
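The counts-in-cells moment estimators described above are straightforward to implement. The sketch below uses a synthetic lognormal field as a stand-in for survey data (an assumption made for illustration) and estimates the variance and the hierarchical skewness S₃ = ⟨δ³⟩_c/σ⁴; for a zero-mean lognormal field the exact answer is S₃ = 3 + ⟨δ²⟩, close to 3 in the weakly non-linear regime:

```python
import math
import random

# Estimate the variance and hierarchical skewness S_3 of smoothed density
# contrasts delta, as one would from counts in cells.  The synthetic
# lognormal field below is illustrative, not survey data.
random.seed(42)
sigma_g = 0.2                   # width of the underlying Gaussian (weakly non-linear)
deltas = []
for _ in range(200000):
    g = random.gauss(0.0, sigma_g)
    deltas.append(math.exp(g - 0.5 * sigma_g**2) - 1.0)  # zero-mean lognormal

mean = sum(deltas) / len(deltas)
m2 = sum((d - mean)**2 for d in deltas) / len(deltas)   # variance sigma^2
m3 = sum((d - mean)**3 for d in deltas) / len(deltas)   # connected third moment
S3 = m3 / m2**2                                         # hierarchical skewness
print(m2, S3)   # S3 should come out close to 3 + sigma^2
```

The same estimators applied to galaxy counts in cells of varying size R trace S₃(R) and S₄(R), which is how the APM measurements quoted above were made.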
Acknowledgements

Thanks to E. Gaztañaga for stimulating discussions and to M. Vogeley for permission to reproduce the figures. This research was supported in part by the DOE and by NASA grant NAG5-2788 at Fermilab.
References

[1] D.J. Eisenstein, W. Hu, J. Silk, A. Szalay, Astrophys. J. 494 (1998) L1.
[2] M. Tegmark, A. Hamilton, M. Strauss, M. Vogeley, A. Szalay, astro-ph/9708020 (1997).
[3] M.S. Vogeley, in: D. Hamilton (Ed.), Proc. Ringberg Workshop on Large-Scale Structure, September 1996, Kluwer, Dordrecht, 1997.
[4] S. Landy, S. Shectman, H. Lin, R. Kirshner, A. Oemler, D. Tucker, Astrophys. J. 456 (1996) L1; F. Atrio-Barandela, J. Einasto, S. Gottlober, V. Muller, A.A. Starobinsky, JETP Lett. 66 (1997) 397; E. Gaztañaga, C. Baugh, Mon. Not. Roy. Astron. Soc. 294 (1998) 229.
[5] The SDSS is a project of the ARC Consortium; the participating institutions are Fermilab, the University of Chicago, Princeton University, the Institute for Advanced Study, the University of Washington, the U.S. Naval Observatory, and the Japan Participation Group. For a detailed overview of the project, see http://www.astro.princeton.edu/BBOOK/.
[6] E. Gaztañaga, Astrophys. J. 398 (1992) L17; Mon. Not. Roy. Astron. Soc. 268 (1994) 913; F.R. Bouchet, M. Strauss, M. Davis, K. Fisher, A. Yahil, J. Huchra, Astrophys. J. 417 (1993) 36; I. Szapudi, A. Meiksin, R.C. Nichol, Astrophys. J. 473 (1996) 15.
[7] F.R. Bouchet, R. Juszkiewicz, S. Colombi, R. Pellat, Astrophys. J. 394 (1992) L5; R. Juszkiewicz, F.R. Bouchet, S. Colombi, Astrophys. J. 412 (1993) 9; F. Bernardeau, Astrophys. J. 433 (1994) 1; Astron. Astrophys. 291 (1994) 67; J. Frieman, E. Gaztañaga, Astrophys. J. 425 (1994) 392; R. Scoccimarro, J. Frieman, Astrophys. J. Suppl. 105 (1996) 37; R. Scoccimarro, Astrophys. J. 487 (1997) 1.
[8] J.N. Fry, E. Gaztañaga, Astrophys. J. 413 (1993) 447; R. Juszkiewicz, D.H. Weinberg, P. Amsterdamski, M. Chodorowski, F.R. Bouchet, Astrophys. J. 442 (1995) 39.
[9] E. Gaztañaga, J. Frieman, Astrophys. J. 437 (1994) 13.
[10] J.N. Fry, R. Scherrer, Astrophys. J. 429 (1994) 36; E. Gaztañaga, P. Mahonen, Astrophys. J. 462 (1996) L1.
[11] E. Gaztañaga, Mon. Not. Roy. Astron. Soc. 268 (1994) 913.
[12] E. Gaztañaga, J. Frieman, 1998, to appear.
Physics Reports 307 (1998) 67—73
Determining Ω from the cluster correlation function A. Kashlinsky * NORDITA, Blegdamsvej 17, Copenhagen DK-2100, Denmark Theoretical Astrophysics Center, Juliane Maries Vej 30, Copenhagen, Denmark
Abstract

It is shown how data on the cluster correlation function can be used to reconstruct the pregalactic density field on the cluster mass scale. The method is applied to data on the dependence of the cluster correlation amplitude on richness. The spectrum of the recovered density field has the same shape as that derived from data on the galaxy correlation function, where the density field is measured as a function of linear scale. Matching the two amplitudes relates the mass to the comoving scale it contains and thereby leads to a direct determination of Ω. The resultant density parameter turns out to be Ω = 0.25. 1998 Elsevier Science B.V. All rights reserved. PACS: 98.80.−k
1. Introduction

This paper presents another way of determining Ω by comparing density fields determined from two independent datasets: APM data on the galaxy two-point correlation function and the data on the dependence of the cluster correlation function on richness. For reasons that will become clear later in the paper, I will reconstruct the quantity which is uniquely related to the correlation function, ξ(r), of the density field, or its Fourier transform, the power spectrum P(k):
Δ² ≡ ⟨(δM/M)²⟩ = 4π ∫ ξ(r) r² dr = 4π ∫ P(k) [3j₁(kr)/(kr)]² k² dk.   (1)

The discussion and the formalism are limited to scales where the present density field is linear and can therefore be assumed to reflect the initial conditions, i.e. r > r₈ ≡ 8 h⁻¹ Mpc. The outline and the idea of the paper are as follows: in Section 2 I discuss reconstruction of (1) as a function of the linear scale, r, from the APM data. In Section 3 I show how to use the data on the
* E-mail: [email protected].
A. Kashlinsky / Physics Reports 307 (1998) 67—73
cluster correlation amplitude-richness dependence in order to reconstruct (1) as a function of the cluster mass. Comparison of the two density fields gives the amount of mass contained in the given linear scale and provides a direct determination of Ω. I will show that the two density fields, although recovered in different and independent ways, give consistent results and require Ω ≈ 0.25 with a very small uncertainty. For more details the reader is referred to [1].
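Eq. (1) can be checked numerically. The sketch below filters a pure power-law spectrum P(k) ∝ kⁿ with the top-hat window 3j₁(kr)/(kr) appearing in Eq. (1), and verifies the expected scaling Δ² ∝ r^−(3+n). The spectral index n = −1.2 and the integration limits are illustrative choices, and the overall normalization is arbitrary:

```python
import math

def top_hat(x):
    """Fourier transform of a spherical top-hat window: 3 j1(x)/x."""
    if x < 1e-6:
        return 1.0
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x**3

def delta2(r, n=-1.2, kmin=1e-4, kmax=1e3, steps=20000):
    """4*pi * Int P(k) W^2(kr) k^2 dk with P(k) = k^n (trapezoid in ln k)."""
    lk0, lk1 = math.log(kmin), math.log(kmax)
    h = (lk1 - lk0) / steps
    total = 0.0
    for i in range(steps + 1):
        k = math.exp(lk0 + i * h)
        f = k**n * top_hat(k * r)**2 * k**3   # extra factor k from d(ln k)
        total += f * (0.5 if i in (0, steps) else 1.0)
    return 4.0 * math.pi * total * h

ratio = delta2(8.0) / delta2(16.0)
print(ratio, 2.0 ** 1.8)   # expect ratio close to 2^(3+n) = 2^1.8
```

Doubling the smoothing scale lowers Δ² by the factor 2^(3+n), which is the scaling used when the fitted Δ(r) and Δ(M) curves are compared later in the paper.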
2. Density field from galaxy clustering

First, I establish the rms fluctuation, Δ(r), as a function of the linear scale r from the APM data on the projected angular correlation function w(θ). For this I use the APM data on w(θ) [2] divided into six narrow magnitude bins, Δm ≈ 0.5. Galaxies located in each of the bins span a narrow(er) range of z, so with the data presented in this way one can isolate effects of possible galaxy evolution in the entire APM catalog spanning 17 < b_J < 20, or 0.07 < z < 0.2 [3]. The projected angular correlation function for each bin is related to the three-dimensional power spectrum of galaxy clustering, P(k), via the Limber equation:

w(θ) = π ∫ dz (c dt/dz)⁻¹ φ²(z) ∫ dk P(k; z) k J₀(k x(z)θ/(1+z)),   (2)
where x(z) is the comoving distance, t is the cosmic time, and the selection function is φ(z) = (dN/dz)/N_total. The latter is related to the range of magnitudes for galaxies in each bin, [m₁, m₂], and their luminosity function, Φ(L), via dN/dz = (dV/dz) ∫_{L(m₁,z)}^{L(m₂,z)} Φ(L; z) dL, with V being the comoving volume. Eq. (2) allows one to relate the data on w(θ) to the underlying power spectrum for each of the six narrow magnitude bins once all of the following are specified: the luminosity function in a given band (blue for APM), the K-correction which accounts for the shift along the galactic energy spectrum resulting from cosmic expansion, and the (possible) galaxy evolution out to the edge of the APM sample. For the numbers in the remainder of this section the luminosity function was adopted from the measurements of [4]; the K-correction was modeled from standard spectra of galaxy populations [5]. In order to estimate the importance of the (possible) evolution effects, I proceeded as follows: On very small angular scales, the data show that the angular correlation function can be described as a power law, w(θ) = A_w θ^−γ' with γ' = 0.7. On the other hand, as θ → 0, the main contribution in the Limber equation comes from very small linear scales where the spatial galaxy correlation function can be approximated as ξ(r) = (r/r_*)^−(1+γ') with r_* = 5.5 h⁻¹ Mpc. Relating the small-scale ξ(r) to w(θ) via the Limber equation leads to the following expression for A_w:

A_w = [Γ(1/2)Γ(γ'/2)/Γ((1+γ')/2)] (r_*/cH₀⁻¹)^(1+γ') ∫ φ²(z) Ψ(z) (1+z) (1+Ωz)^(1/2) [cH₀⁻¹/x(z)]^γ' dz,   (3)

where Ψ(z) accounts for the evolution of galaxy clustering, e.g. Ψ(z) ∝ (1+z)⁻¹ for a clustering pattern which is stable in comoving coordinates. Comparing the value of A_w = w(θ)θ^γ' computed from Eq. (3) with the data for each bin allows one to constrain the extent to which galaxy evolution
Fig. 1. (Left) RMS density fluctuation from APM data plotted vs. the comoving scale, in units of the density fluctuation over radius r₈. Solid lines correspond to the empirical fit from [3]; dotted lines to deprojected spectra from [6]. Three lines of each type correspond to the 1-σ uncertainty in [6] and a similar uncertainty in [3]. The thick dashed line corresponds to CDM models that fit APM data in narrow magnitude bins: Ωh = 0.2 with n = 1 or Ωh = 0.3 with n = 0.7. (Middle) Plus signs correspond to the primordial density field inverted from data on the cluster correlation amplitude-richness dependence. The solid line shows the power-law fit to the points, Δ(M) = Δ₈(M/1.7×10¹⁴ h⁻¹ M_⊙)^−α. (Right) The values of Δ(r) from the cluster data for three different values of Ω: plus signs correspond to Ω = 0.1, triangles to Ω = 1 and diamonds to Ω = 0.25. The square denotes the (by definition) value of unity of Δ(M)/Δ₈ at 8 h⁻¹ Mpc. The lines represent density fluctuations derived from the APM data, redrawn from the left panel in the same notation.
affects inversion of the APM data in terms of the underlying spectrum. The fits of the power-law galaxy correlation function to the small-scale data on w(θ) in all six narrow magnitude bins show that no galaxy evolution corrections are needed beyond the normal K-corrections and the evolution of the clustering pattern with time [1]. Analysis of the data in narrow magnitude bins without galaxy evolution shows that the power spectrum obtained by deprojection of the entire APM dataset [6] fits the data well in all magnitude bins. Neglecting the dependence on Ω_b, the CDM power spectra can be parameterized by only two parameters: the primordial power index n and the excess power parameter Ωh [7]. Fits of the CDM models to the APM data in narrow magnitude bins at all depths show that the models require Ωh = 0.2 if n = 1, or Ωh = 0.3 for tilted models with n = 0.7. The latter thus still requires low Ω in order to fit the APM data in narrow magnitude bins. CDM models with larger values of Ωh would give smaller w(θ) at large θ, where w ≲ 10⁻², whereas smaller values would overshoot the data there [1]. The left panel in Fig. 1 shows the r.m.s. fluctuation from the fits to the APM data vs. the comoving scale. It is plotted in units of the fluctuation at r₈ ≡ 8 h⁻¹ Mpc, Δ(r)/Δ₈. The reason for plotting the ratio is that for linear biasing the vertical axis in Fig. 1 is independent of the bias factor.
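The small-angle power-law behaviour invoked above can be extracted from w(θ) data by a straight-line fit in log-log space. The sketch below is purely illustrative: the amplitude, angular range and scatter are made-up numbers, not the APM values.

```python
import numpy as np

# Fit w(theta) = A_w * theta**(-0.7) to mock data by linear least squares
# in log-log space; all numbers here are assumptions for the demonstration.
rng = np.random.default_rng(0)
theta = np.logspace(-2, -0.5, 20)                      # angles in degrees
w_true = 0.05 * theta**(-0.7)                          # assumed power law
w_obs = w_true * rng.lognormal(0.0, 0.05, theta.size)  # ~5% scatter

slope, log_A = np.polyfit(np.log10(theta), np.log10(w_obs), 1)
A_w = 10**log_A
print(f"fitted slope = {slope:.2f}, A_w = {A_w:.3f}")
```

The recovered slope and amplitude, compared bin by bin with the value predicted by Eq. (3), are what constrain the allowed amount of galaxy evolution.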
3. Density field from the cluster correlation amplitude-richness dependence
In this section, I show how, using the data on the cluster correlation amplitude-richness dependence, one can reconstruct the quantity plotted in the left panel of Fig. 1 as a function of the cluster mass. Comparing the results with Δ(r) over a given range of r gives the amount of mass in the given comoving scale and leads to a direct determination of Ω.
I assume that clusters of galaxies are formed by gravitational clustering [8] and evaluate their correlation function based solely on this assumption [9,10]; i.e., I assume that clusters of galaxies are identified with regions that at some early epoch z_i had initial overdensity such that they would turn around in less than the age of the Universe. It is convenient to choose z_i to be sufficiently high that the density field on all relevant scales is in the linear regime. In that case the amplitude, δ_i, of a fluctuation at z_i needed for it to turn around by z = 0 is related to Δ_i, the amplitude which grows to Δ₀ = 1 at z = 0, via δ_i = Q_c Δ_i. The factor Q_c ≈ 1.65 for a spherical model and is almost independent of either z_i or the cosmological parameters.
I further assume that the initial density field was Gaussian. In that case the joint probability density to find density contrasts δ₁, δ₂ on scales containing masses M₁, M₂, respectively, is given by

p(δ₁; δ₂) = (2π)⁻² ∫ exp(−iq·δ) exp(−½ q·C·q) d²q .  (4)

The correlation matrix, C, is related to the spectrum of the primordial density field: its diagonal elements are the mean-square fluctuations Δ² on the scales containing masses M₁, M₂, and the non-diagonal elements are ≈ξ(r) at cluster separations r greater than the comoving scale subtended by the cluster masses (∼1-3 h⁻¹ Mpc). The probability for two such fluctuations to turn around by now and thereby form clusters of galaxies is P_{M₁M₂} = ∫∫ p(δ₁; δ₂) dδ₁ dδ₂, the integrals running above δ_i. The fraction of such pairs at the present time is f_{M₁M₂} = ∂²P_{M₁M₂}/∂M₁∂M₂. The probability for a single cluster to form by now is P_M = ∫ p(δ) dδ above δ_i; the fraction of such clusters is f_M = ∂P_M/∂M.
Now one can construct the correlation function between the present-day clusters of different masses. By definition, the 2-point correlation function of an ensemble of objects with number density n is given by the probability to find two objects in small volumes dV₁, dV₂: dP₂ = n²(1 + ξ) dV₁ dV₂.
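For a single Gaussian variable the turnaround probability reduces to a complementary error function. The sketch below evaluates P_M and the differential fraction f_M; the spectrum Δ(M) ∝ M^(−0.275) and its pivot mass are borrowed from Section 4 purely for illustration.

```python
import math

Q_C = 1.65  # spherical-model turnaround factor, as in the text

def turnaround_prob(delta_M):
    """P_M: probability that a Gaussian fluctuation on a scale with rms
    amplitude delta_M (normalized so that Delta_0 = 1 today) exceeds the
    turnaround threshold delta_i = Q_C, i.e. the tail integral of p(delta)."""
    x = Q_C / delta_M
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def cluster_fraction(mass, delta_of_M, eps=1e-4):
    """|f_M| = |dP_M/dM| by a central difference, for an assumed Delta(M)."""
    p_hi = turnaround_prob(delta_of_M(mass * (1.0 + eps)))
    p_lo = turnaround_prob(delta_of_M(mass * (1.0 - eps)))
    return abs(p_hi - p_lo) / (2.0 * mass * eps)

# illustrative spectrum with an assumed pivot mass (in h^-1 M_sun)
delta = lambda M: (M / 1.7e14) ** (-0.275)

print(turnaround_prob(delta(1.7e14)))   # P_M at the pivot mass, ~0.05
```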
Since the clusters of masses M₁, M₂ make up the fraction f_{M₁M₂} of such pairs, the probability to find such a pair is dP₂ = f_{M₁M₂} dP_i, with dP_i the corresponding probability for the underlying initial field. On the other hand, the mean number density of clusters of mass M is f_M × n, and by definition the probability of finding two such clusters is dP₂ = f_{M₁} f_{M₂} n²(1 + ξ_{M₁M₂}) dV₁ dV₂. Hence the correlation function of clusters of mass M is given by
1 + ξ_{M₁M₂} = [f_{M₁M₂}/(f_{M₁} f_{M₂})] (1 + ξ_i) .  (5)

Using expansions in terms of Hermite polynomials [11,10] leads to the following expression [1]:

ξ_{MM}(r) = A_M ξ(r) ,  (6)
where the amplification factor is

A_M(x) = (1/Q_c²) Σ_{m≥1} [ξ^(m−1)(r)/m!] C_m²(x)  (7)

with

C_m(x) = H_{m+1}(x)/x + x H_{m+1}(x)/[4(m+1)Q_c] .  (8)
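As a concrete illustration, the Hermite-series amplification factor can be evaluated numerically. The sketch below assumes the reading A_M(x) = Q_c⁻² Σ_m ξ^(m−1) C_m²(x)/m! with C_m(x) = H_{m+1}(x)/x + x H_{m+1}(x)/[4(m+1)Q_c], and treats H_m as the physicists' Hermite polynomials; the truncation order and the sample values of x and ξ are illustrative assumptions, not values from the paper.

```python
from numpy.polynomial.hermite import hermval

Q_C = 1.65  # spherical-model turnaround factor

def hermite_H(m, x):
    # physicists' Hermite polynomial H_m(x)
    return hermval(x, [0.0] * m + [1.0])

def C_m(m, x):
    # assumed form of Eq. (8); see the caveat in the lead-in above
    Hm1 = hermite_H(m + 1, x)
    return Hm1 / x + x * Hm1 / (4.0 * (m + 1) * Q_C)

def amplification(x, xi, mmax=20):
    # assumed form of Eq. (7), truncated at m = mmax (converges for xi << 1)
    total = 0.0
    factorial = 1.0
    for m in range(1, mmax + 1):
        factorial *= m
        total += xi ** (m - 1) * C_m(m, x) ** 2 / factorial
    return total / Q_C**2

# rarer (more massive) clusters have larger x and stronger amplification
for x in (2.0, 3.0, 5.0):
    print(f"x = {x}: A_M = {amplification(x, xi=0.07):.1f}")
```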
Here H_m(x) are Hermite polynomials and x = Q_c Δ₀/Δ(M), which is directly related to the spectrum of the primordial density field on scale M. Hence, the data on the cluster correlation amplitude-mass dependence [12,13] can be used to invert Eqs. (5)-(7) to obtain Δ(M) for the primordial density field directly. The accuracy of these expressions has been confirmed in recent numerical experiments [15]. Since the cluster correlation amplitude is known to depend on the cluster richness, N, one has to translate the richness into cluster mass. I assume that the two are proportional, with the coefficient of proportionality normalized to the data on the Coma cluster:
M(N) = M_Coma (N/N_Coma) ,  N_Coma = 106 ,  M_Coma = 1.45×10¹⁵ h⁻¹ M_⊙ .  (9)
Here I adopted the values for the Coma cluster and its richness following [14]. This assumes that there are no systematic variations in the galaxy luminosity function in clusters of various masses.
The data on the cluster correlation function can now be used in conjunction with this mechanism for cluster formation in order to set further constraints on the cosmological models via Eqs. (5)-(7). Indeed, the dependence of the cluster correlation function at a given richness/mass on r constrains the power spectrum on the scale of the cluster separation, whereas the dependence of the cluster correlation amplitude at a given r on the cluster richness/mass constrains the shape of the primordial power spectrum on the scale containing that mass. Eqs. (5)-(7) can be used in two ways. On the one hand, one can test cosmological paradigms, such as CDM, by assuming their power spectrum, evaluating the cluster correlation parameters, and comparing the computed numbers for both ξ_{MM}(r) vs. r and A_M vs. M with the observational data. Alternatively, one can use these expressions in conjunction with data on the cluster correlation amplitude in order to obtain the spectrum of the primordial density field on scale M directly, independently of assumed cosmological prejudices. The latter, as I show, can also be used to determine Ω.
In terms of the CDM models, I find that no CDM model whose only free parameters are (Ω, h, n) can simultaneously fit three sets of data: the APM data on the galaxy correlation, the data on the slope of the cluster correlation function with scale at a given richness, and the data on the dependence of the cluster correlation amplitude at a given separation on the cluster richness. Namely, the values of (Ω, h, n) required by fits to the APM data (Ωh ≈ 0.2 if n = 1 and Ωh ≈ 0.3 if n = 0.7) would be very different from those required by fits to either of the other two datasets, and vice versa [1].
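The richness-to-mass conversion of Eq. (9) is a one-line computation; the sketch below simply encodes the Coma normalization adopted above.

```python
# Richness-to-mass conversion of Eq. (9), normalized to the Coma cluster.
M_COMA = 1.45e15   # h^-1 solar masses, the normalization adopted in the text
N_COMA = 106       # Coma richness

def cluster_mass(N):
    """M(N) in h^-1 solar masses for a cluster of richness N."""
    return M_COMA * (N / N_COMA)

print(f"M(N = 53) = {cluster_mass(53):.2e} h^-1 M_sun")   # half of Coma's mass
```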
Accounting for a (slight) dependence on Ω_b in the CDM transfer function [16] would not change these conclusions.
In order to invert Eq. (6) in terms of x, and subsequently Δ(M), I used the data on the cluster correlation amplitude-richness dependence from [13]. The amplification coefficient for a given richness/mass in Eqs. (5)-(7) depends on the value of ξ(r) on the scale where it is evaluated. The underlying correlation function ξ(r) enters Eq. (6) via the first- and higher-order terms and can contribute up to ∼10-30% of the total amplification. I chose a linear scale r = 25 h⁻¹ Mpc at which to evaluate the amplification factor from Eq. (7). This scale is sufficiently large compared to r₀ to ensure the validity of the analysis, but at the same time ξ(r) can still be determined sufficiently accurately from observations there. In the discussion below I adopted ξ(25 h⁻¹ Mpc) = 0.07, in agreement
with the APM data (cf. [1]); the numbers that follow are not very sensitive to varying ξ(25 h⁻¹ Mpc) within reasonable limits.
Once the data on the r.h.s. of Eq. (7) for A_M vs. richness are specified, Eqs. (5)-(7) can be solved numerically in order to obtain Δ(M). The middle panel in Fig. 1 plots the results of this inversion. The top horizontal axis plots the values of N at which Δ(M)/Δ₈ has been evaluated. The bottom horizontal axis shows the mass computed according to the normalization to Coma. I emphasize again that this method directly gives the pregalactic spectrum as it was at z_i, independently of later gravitational or other effects. The plot in the middle panel of Fig. 1 shows a clearly defined slope of Δ(M) ∝ M^(−0.275), corresponding to a spectral index of n ≈ −1.3. This slope is consistent with the APM-implied power spectrum index of w(θ) ∝ θ^(−0.7).

4. Determining Ω from the two density fields
The recovered density field allows one to relate the mass of the cluster to its comoving scale, thereby directly determining Ω. The direct fit to the points in the middle panel of Fig. 1 gives

Δ(M) = Δ₈ (M/M₈)^(−α)  (10)

with α = 0.275 and M₈ = 1.7×10¹⁴ h⁻¹ M_⊙. This fit is plotted as a solid line in the middle panel. On the other hand, the mass contained in the comoving radius r₈ is M(r₈) = 6.1×10¹⁴ Ω h⁻¹ M_⊙. Equating this with M₈ leads to Ω = 0.28.
Furthermore, one can determine Ω by comparing Δ(r) from the galaxy correlation data over the entire range of the relevant r with Δ(M) derived from the cluster correlation amplitude-richness dependence. In order to do this I converted the numbers for Δ(M) to those at a given r using the fact that the mass contained in a given comoving radius in a Universe with density parameter Ω is M(r) = 1.2×10¹² (r/1 h⁻¹ Mpc)³ Ω h⁻¹ M_⊙. The right panel in Fig. 1 shows the values of Δ(r) from the cluster data for three different values of Ω: 0.1 (pluses), 0.25 (diamonds) and 1 (triangles). The square denotes the (by definition) value of unity of Δ(M)/Δ₈ at 8 h⁻¹ Mpc.
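The first Ω determination, equating the pivot mass of Eq. (10) with the mass inside r₈, is a short computation; the coefficients below are the ones quoted in the text.

```python
# Determining Omega by matching the two mass scales of this section:
# the pivot mass of the Delta(M) fit, Eq. (10), and the mass inside the
# comoving radius r_8 = 8 h^-1 Mpc.
M_PIVOT = 1.7e14   # h^-1 M_sun, pivot of the fit Eq. (10)

def mass_in_radius(r_hmpc, omega):
    """M(r) = 1.2e12 * (r / 1 h^-1 Mpc)**3 * Omega, in h^-1 M_sun."""
    return 1.2e12 * r_hmpc**3 * omega

omega = M_PIVOT / mass_in_radius(8.0, 1.0)
print(f"Omega = {omega:.2f}")   # -> 0.28
```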
The lines represent the APM data redrawn from the left panel. One can see that the spectral shape of the density field derived from the cluster data is in good agreement with that of the APM. The amplitudes of the two fields would match at all r only for Ω = 0.25.

5. Summary and conclusions
In this presentation I discussed reconstructing the density field from the cluster correlation amplitude-richness dependence. It was shown that the data can be inverted to obtain the r.m.s. density fluctuation in the pregalactic density field on scales containing the mass of the clusters. The derived density field has the same spectral shape as the density field derived as a function of the comoving scale from the APM data. Comparing the two amplitudes fixes the amount of mass in a given comoving scale and thus determines Ω. The value derived from applying the method to the data is Ω = 0.25. This value of Ω, obtained after normalizing the mass-richness relation to Coma, is in good agreement with that implied by the dynamics of the Coma cluster itself. This further argues that galaxies trace the overall mass distribution in the Universe.
References
[1] A. Kashlinsky, Astrophys. J. 492 (1998) 1.
[2] S. Maddox et al., Mon. Not. R. Astron. Soc. 242 (1990) 43p.
[3] A. Kashlinsky, Astrophys. J. Lett. 399 (1992) 1.
[4] S. Loveday et al., Astrophys. J. 390 (1992) 338.
[5] Y. Yoshii, F. Takahara, Astrophys. J. 326 (1988) 1.
[6] C. Baugh, G. Efstathiou, Mon. Not. R. Astron. Soc. 265 (1993) 145.
[7] J.M. Bardeen et al., Astrophys. J. 304 (1986) 15.
[8] W. Press, P. Schechter, Astrophys. J. 187 (1974) 425.
[9] A. Kashlinsky, Astrophys. J. 317 (1987) 19.
[10] A. Kashlinsky, Astrophys. J. 376 (1991) L5.
[11] L. Jensen, A. Szalay, Astrophys. J. 305 (1986) L5.
[12] N. Bahcall, R. Soneira, Astrophys. J. 270 (1983) 20.
[13] N. Bahcall, M. West, Astrophys. J. 392 (1992) 419.
[14] S. Kent, J. Gunn, Astron. J. 87 (1982) 945.
[15] M. Gramann, I. Sukhnonenko, Astrophys. J. (1998), submitted.
[16] N. Sugiyama, Astrophys. J. Suppl. 100 (1995) 281.
Physics Reports 307 (1998) 75—81
Do elliptical galaxies have dark matter?

Michael Loewenstein *, Raymond E. White III

NASA/Goddard Space Flight Center, Code 662, Greenbelt, MD 20771, USA
Department of Physics & Astronomy, University of Alabama, Tuscaloosa, AL 35487-0324, USA
Abstract
We present constraints on the structure of dark matter halos in elliptical galaxies from the relationship between X-ray temperature and velocity dispersion observed in an optically complete sample. We demonstrate the ubiquity of dark matter in L > L_* galaxies, present limits on the dark matter structural parameters, and discuss the scaling of these parameters with optical luminosity. We find that the dark matter is characterized by velocity dispersions that are greater than those of the luminous stars, and that the mass-to-light ratio within six half-light radii for bright elliptical galaxies has a universal value, M/L_V ≈ 25h M_⊙/L_⊙. The latter conclusion is consistent with gravitational lensing studies, but conflicts with the simplest extension of CDM theories of large-scale structure formation to galactic scales. © 1998 Elsevier Science B.V. All rights reserved.
PACS: 95.35.+d; 98.62.Gq
1. Background and motivation There is a strong consensus that dark matter dominates the mass content of spiral galaxies and galaxy groups and clusters. Although traditionally less forthcoming and more controversial, evidence for dark matter in elliptical galaxies has been rapidly accumulating in recent years from improved stellar dynamical data and modeling techniques, gravitational lensing observations, and new high-quality X-ray images and spectra from the ROSAT and ASCA satellites. Motivated by the recently published measurements of X-ray temperatures in a complete optically selected sample [1], we address the following two questions [2]: (1) Do bright elliptical galaxies have dark matter halos in general? (2) How do the dark halo properties scale with optical luminosity?
* Corresponding author. E-mail: [email protected]. Also with the University of Maryland Department of Astronomy.
0370-1573/98/$ — see front matter © 1998 Elsevier Science B.V. All rights reserved. PII: S0370-1573(98)00059-3
M. Loewenstein, R.E. White III / Physics Reports 307 (1998) 75—81
2. Observational constraints on the β parameter
We address these issues through modeling designed to explain the observed magnitude and variation of the ratio of stellar to gas temperatures,

β ≡ μm_p σ²/k⟨T⟩ ,  (1)

where μm_p is the mean mass per particle, σ the projected "core" optical velocity dispersion, and ⟨T⟩ the globally averaged gas temperature. β is an excellent diagnostic of the total mass-to-light ratio. The following characterize the observed "T-σ" relation, and must be reproduced by any successful model of the dark matter in elliptical galaxies: (1) β < 1 (i.e., the gas is always hotter than the stars; typically β ≈ 0.5), and (2) ⟨T⟩ rises more slowly than σ², so that β increases with σ [1].

3. Modeling and assumptions
(1) Stars and gas are assumed to be in hydrostatic equilibrium in the same spherically symmetric gravitational potential. (2) Stellar orbits are assumed to vary monotonically from isotropic at the center to radial at infinity. (3) The stellar density profiles and scaling relations are determined by recent HST observations and the "fundamental plane" relation between the luminosity in the visual band (L_V), the half-light radius (r_e), and the central velocity dispersion in elliptical galaxies. (4) The "NFW" dark-matter parameterization [3] is adopted,

ρ_d(r) ∝ (r/a)⁻¹ (1 + r/a)⁻² ,  (2)

where ρ_d and a are the dark matter density distribution and the dark matter scale length, respectively. β is primarily determined by the dark-to-luminous mass ratio inside the optical radius (r_opt = 6r_e, i.e., the radius enclosing ≈90% of the light) and by the dark halo concentration (i.e., the ratio of dark matter to stellar scale lengths). Since β is a global parameter, it is not sensitive to the functional form of the dark matter density distribution; the choice of Eq. (2) enables us to connect our results with numerical structure formation simulations.

4. Dark matter is universally required
β > 1.2 for models without dark matter, greater than in any observed galaxy [1]. A typical value of β = 0.5 requires a dark-matter fraction of ≈75% within r_opt for a ≈ r_e; more than half of the mass within r_opt is baryonic for models with β = 0.5 if a > r_opt. Even for extreme stellar models, β always exceeds ≈0.75 unless ellipticals have dark matter. Therefore, dark halos must be generic to L > L_* elliptical galaxies.

5. Limits on dark halo parameters
By considering the entire dark halo parameter space, we place absolute limits on the parameters required to obtain any value of β. There are lower limits on the dark-matter scale length,
Fig. 1. Minimum dark-matter scale length in units of the "break radius", 0.03 r_e.
Fig. 2. Maximum values of the baryon fractions within r_e and r_opt.
a: if the dark matter is too concentrated, σ increases relative to ⟨T⟩, raising β. The minimum value of a consistent with β ≈ 0.5 is ≈0.3r_e ≈ 2(L_V/3L_*) h⁻¹ kpc, where L_* ≈ 1.7×10¹⁰ h⁻² L_⊙ (Fig. 1). Upper limits on the baryon fraction are analogous to maximum-disc models for spiral galaxies, where the contribution to the total mass from luminous matter is maximized (Fig. 2). The minimum dark matter mass fraction is ≈30-57% within 6r_e for β = 0.4-0.7, and is <20% within r_e: dark matter need not dominate inside the half-light radius.
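For the NFW form of Eq. (2) the enclosed mass is analytic, which makes it easy to see how the scale length a controls the share of the halo mass falling inside the optical radius. The bounding radius used for normalization below is an arbitrary illustrative assumption, not a value from the paper.

```python
import math

def nfw_mass(r, a, rho0=1.0):
    """Mass enclosed within radius r for the NFW profile of Eq. (2),
    rho(r) = rho0 * (r/a)**-1 * (1 + r/a)**-2, which integrates to
    M(r) = 4*pi*rho0*a**3 * [ln(1 + r/a) - (r/a)/(1 + r/a)]."""
    x = r / a
    return 4.0 * math.pi * rho0 * a**3 * (math.log1p(x) - x / (1.0 + x))

# The more concentrated the halo (smaller a), the larger the share of its
# mass inside the optical radius r_opt = 6 r_e.  Radii are in units of r_e;
# the bounding radius r_bound is an assumption for normalization.
r_opt, r_bound = 6.0, 60.0
for a in (2.0, 6.0, 18.0):
    frac = nfw_mass(r_opt, a) / nfw_mass(r_bound, a)
    print(f"a = {a:4.1f} r_e: mass fraction inside r_opt = {frac:.2f}")
```

This is the trade-off behind the lower limit on a: pushing too much dark mass inside r_opt raises σ relative to ⟨T⟩ and hence β.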
Fig. 3. Observed (crosses) and predicted correlation of β with dimensionless luminosity (L_* = 5.2×10¹⁰ h⁻² L_⊙). The solid curve denotes the constant (within the virial radius) baryon fraction (f_b = 0.06) model with CDM scaling of the dark matter concentration; the dotted curve has f_b increasing with optical luminosity; the dashed curve has a steeper-than-CDM scaling of concentration with dark halo mass.
6. Explaining the T-σ relation
The fact that β increases with increasing σ (or, equivalently, with L_V) implies that more luminous galaxies are less dark-matter dominated within r_opt. We have extended our models to the scale of the virial radius and mass in order to investigate what sort of global dark matter scaling relation predicts such a trend. Successful scenarios (Fig. 3) include those where (1) a increases weakly with the virial mass M_vir, as predicted in CDM simulations, but less luminous galaxies have smaller baryon fractions (f_b), presumably as a result of mass loss by galactic winds (dotted curves in Figs. 3-5), and those where (2) all elliptical galaxies have the same f_b but a increases much more steeply with M_vir than in CDM models (dashed curves in Figs. 3-5). In this case, less luminous galaxies have relatively more dark matter within r_opt because of a more concentrated dark-matter distribution rather than a larger overall dark-matter fraction. Models with dark halos that scale as predicted by CDM, but with constant f_b, badly fail to reproduce the observed T-σ trend (Fig. 3).

7. How dark matter scales with optical luminosity
Our constraints at r_e and r_opt agree precisely with gravitational lensing results [4] (see Fig. 4). Integrated properties within r_opt are robust (Fig. 5): M/L_V ≈ 25h M_⊙/L_⊙ (or, equivalently,
Fig. 4. M vs. L_V within (bottom to top) r_e, r_e (stars), r_opt (total), and r_vir. Solid curves show M(r_e) and M(r_opt) inferred from statistical weak lensing. Dotted and dashed line types have the same connotations as in Fig. 3.
Fig. 5. Same as Fig. 3 for M/L_V at (bottom to top): r = 0, r_e, r_opt, r_vir. Dotted and dashed line types have the same connotations as in Fig. 3.
f_b ≈ 0.35(L_V/3L_*)). On scales both larger and smaller than r_opt, the dark-matter scaling in the two scenarios described above diverges (Figs. 4 and 5). Adding in the contribution from spiral galaxies [5] yields a total virialized mass density on galactic scales, relative to the critical value, of Ω_gal > 0.07.
Fig. 6. 1-D velocity dispersion distributions, assuming isotropic orbits, for an L_V = L_* = 5.2×10¹⁰ h⁻² L_⊙ galaxy. "Mass loss" and "steep" models are denoted by dotted and solid curves, respectively, for the stellar profiles, and by dot-dashed and dashed curves, respectively, for the dark matter profiles.
We have calculated the velocity dispersion distributions for both the dark matter and the stellar density distributions, assuming isotropic orbits. These are compared in Fig. 6 for an L_V = 5.2×10¹⁰ h⁻² L_⊙ galaxy for both of the successful scenarios described above and in Figs. 4 and 5. Both velocity dispersion profiles have maxima, since the total gravitational potential is not isothermal. The ratio (dark matter to stars) of the squares of these maxima is greater than 1.4 over the luminosity range in Figs. 4 and 5 for the two models under consideration, and is ≈2 over the range L_* < L_V < 5L_*. In fact, the minimum value of this ratio for any model that produces β < 0.7 is greater than one. In this sense the dark matter is hotter than the stars, as simply reflected by the observation that the gas temperature exceeds that of the stars.
8. Conclusions
We have constructed mass models of elliptical galaxies that are fully consistent with the fundamental-plane scaling relations and the latest published HST results on the structure of the centers of elliptical galaxies, and that have dark halos following the form indicated by simulations of the formation of large-scale structure. These models allow us to calculate the diagnostic parameter β as a function of the relative (to luminous) dark-matter mass and scale length. Comparison with the observed T-σ relation, the main features of which are that the X-ray emitting gas is always hotter than the stars, and by an amount that increases for galaxies of lower
velocity dispersion/optical luminosity, provides constraints on the properties of dark halos around elliptical galaxies. Our main results are as follows.
(1) In the absence of dark matter, β generally exceeds 1.2, with an absolute lower limit of 0.75. Since galaxies are observed to have β = 0.3-0.8, we conclude that dark halos are generic to L > L_* elliptical galaxies.
(2) The most natural explanation of the observed correlation of β with luminosity is that less luminous galaxies are more dark-matter dominated inside r_opt in such a way that the total mass-to-light ratio is nearly constant. This ratio, ≈25h M_⊙/L_⊙, is exactly what is predicted by mass models of elliptical galaxies designed to explain the gravitational shear of background field galaxies measured for a disjoint sample of elliptical galaxies.
(3) Our models can be embedded within theories of large-scale structure by specifying how the scale length of the dark matter scales with virial mass, and by linking the virial mass to the observed luminosity through a global baryon fraction. The standard CDM scaling with constant baryon fraction badly fails to reproduce the observed T-σ relation, since it predicts an increase in the dark-to-luminous ratio (inside r_opt) with luminosity. The following two successful variations are obtained by relaxing one of the two assumptions of the constant-baryon-fraction CDM scenario: (a) standard CDM scaling for the dark halos, but with smaller galaxies losing an increasingly large fraction of their initial baryonic content; or (b) a constant baryon fraction, but with the dark-matter concentration varying much more strongly with virial mass than CDM models predict, so that more luminous galaxies are less dark-matter dominated due to a relatively diffuse (rather than less massive) dark halo.
In this latter scenario, dark matter becomes increasingly important inside r_e as L_V decreases, becoming dominant for L_V < 0.6L_*, a prediction that may be testable from stellar dynamical considerations. Such a deviation from CDM predictions on galactic mass scales could conceivably be due to a relatively flat primordial fluctuation spectrum or to the effects on the dark-matter density profile of the evolution of the baryonic component.
References
[1] D.S. Davis, R.E. White III, Astrophys. J. 470 (1996) L35.
[2] M. Loewenstein, R.E. White III, Astrophys. J., submitted.
[3] J.F. Navarro, C.S. Frenk, S.D.M. White, Astrophys. J. 490 (1997) 493.
[4] R.E. Griffiths, S. Casertano, M. Im, K.U. Ratnatunga, Mon. Not. R. Astron. Soc. 282 (1996) 1159.
[5] M. Persic, P. Salucci, F. Stel, Mon. Not. R. Astron. Soc. 281 (1996) 27.
Physics Reports 307 (1998) 83—96
Dynamical constraints on dark compact objects

B.J. Carr*

School of Mathematical Sciences, Queen Mary & Westfield College, Mile End Road, London E1 4NS, UK
Abstract
Many of the baryons in the Universe are dark, and at least some of the dark baryons could be in the form of compact objects. Such objects could be in various locations — galactic discs, galactic halos, clusters of galaxies or intergalactic space — and each of these is associated with a dark matter problem. For each site we consider the various dynamical constraints which can be placed on the fraction of the dark matter in compact objects of different mass. We also apply these limits to the situation in which the compact objects are clusters of smaller objects. © 1998 Published by Elsevier Science B.V. All rights reserved.
PACS: 95.35.+d; 97.60.Lf; 98.20.−d; 98.65.−r
Keywords: Dark matter; Compact objects; Black holes
1. Introduction
Evidence for dark matter has been claimed in four different contexts. There may be local dark matter in the Galactic disc, with a mass comparable to that in visible form (M_d ∼ M_vis). There may be dark matter in the halos of galaxies, with a mass which depends on the extent of the halos (M_h ∼ 10¹² M_⊙ (R_h/100 kpc) for a halo radius R_h). There may be dark matter associated with clusters of galaxies (M_c ∼ 10¹⁴ M_⊙), as well as a lot of hot gas. Finally, there may be smoothly distributed background dark matter if one believes that the total cosmological density has the critical value which separates ever-expanding models from recollapsing ones (M ∼ 100 M_vis). In all these cases the evidence for dark matter arises because the mass inferred from dynamical measurements exceeds the mass in visible form. In this paper we explore how dynamical effects can also help constrain the nature of the dark matter.
* E-mail: [email protected].
0370-1573/98/$ — see front matter © 1998 Published by Elsevier Science B.V. All rights reserved. PII: S0370-1573(98)00053-2
B.J. Carr / Physics Reports 307 (1998) 83—96
A key question is whether the various forms of dark matter are baryonic or non-baryonic. The main argument for both baryonic and non-baryonic dark matter comes from Big Bang nucleosynthesis. This is because the success of the standard picture in explaining the primordial light-element abundances only applies if the baryon density parameter Ω_b lies in the range [18]

0.007h⁻² < Ω_b < 0.022h⁻² ,  (1)

where h is the Hubble parameter in units of 100 km s⁻¹ Mpc⁻¹. The upper limit implies that Ω_b is well below 1, which suggests that no baryonic candidate could provide the critical density required in the inflationary scenario. This conclusion also applies if one invokes inhomogeneous nucleosynthesis, since one requires Ω_b < 0.09h⁻² even in this case [36]. The standard scenario therefore assumes that the total density parameter is 1, with only the fraction given by Eq. (1) being baryonic. On the other hand, the value of Ω_b allowed by Eq. (1) almost certainly exceeds the density of visible baryons Ω_v: a careful inventory by Persic and Salucci [41] shows that the density in galaxies and cluster gas is only about 0.003 for reasonable values of h. Thus it seems that one needs both non-baryonic and baryonic dark matter.
Which of the dark matter problems identified above could be baryonic? Certainly the dark matter in galactic discs could be: even if all discs have the 60% dark component envisaged for the Galaxy by Bahcall et al. [6], this only corresponds to Ω_d ≈ 0.001, well below the nucleosynthesis bound. The more interesting question is whether the halo dark matter could be baryonic. If the Milky Way is typical, the density associated with halos would be Ω_h ≈ 0.03h⁻¹(R_h/100 kpc), so Eq. (1) implies that all the dark matter in halos could be baryonic provided R_h < 70h⁻¹ kpc, which is marginally possible for our galaxy [24]. In this case, the dark baryons could be contained in the remnants of a first generation of (pregalactic or protogalactic) "Population III" stars.
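The baryon bookkeeping above is easy to reproduce; the value h = 0.65 below is an arbitrary illustrative choice.

```python
def omega_b_range(h):
    """BBN window of Eq. (1): 0.007 h^-2 < Omega_b < 0.022 h^-2."""
    return 0.007 / h**2, 0.022 / h**2

def max_baryonic_halo_radius():
    """Largest halo radius, in h^-1 kpc, for which the halo density
    Omega_halo = 0.03 h^-1 (R_h / 100 kpc) stays within the BBN upper
    bound; the h factors cancel when R_h is quoted in h^-1 kpc."""
    return 100.0 * 0.022 / 0.03

lo, hi = omega_b_range(0.65)
print(f"Omega_b in [{lo:.3f}, {hi:.3f}] for h = 0.65")
print(f"halo dark matter can be all baryonic out to "
      f"{max_baryonic_halo_radius():.0f} h^-1 kpc")
```

The resulting radius, ≈73 h⁻¹ kpc, is the origin of the ∼70 h⁻¹ kpc figure quoted in the text.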
This corresponds to the "Massive Compact Halo Object" or "MACHO" scenario and has attracted considerable interest as a result of microlensing observations [1,4].
The cluster dark matter has a density Ω_c ≈ 0.1-0.2 and so could only be baryonic if one invoked inhomogeneous nucleosynthesis. The background dark matter would definitely need to be non-baryonic. These forms of dark matter are therefore usually assumed to comprise "Weakly Interacting Massive Particles" or "WIMPs". However, this does not preclude there being some intergalactic baryonic dark matter. For example, the usual estimate of Ω_v does not include a possible contribution from dwarf galaxies [10] or low surface brightness galaxies [37]. Indeed, it has recently been claimed that such galaxies may provide all of the missing baryons [30]. Nor does it include any baryons contained in a hot intergalactic medium [22]. Here we also emphasize the possibility that there could be baryons in the form of "Dark Intergalactic Compact Objects" or "DICOs": either the remnants of a first generation of pregalactic stars or primordial black holes which formed before cosmological nucleosynthesis.
Both the MACHO and DICO scenarios require that most of the baryons in the Universe were processed through Population III stars. Although there are no observations which unambiguously require this, the existence of galaxies and clusters of galaxies implies that there must have been density fluctuations in the early Universe and, in many scenarios, these fluctuations would also give rise to a population of bound clouds in the period between decoupling and galaxy formation [3,16,46]. The first bound clouds could face various possible fates [12,43], but one certainly needs them to fragment into stars which are very different from the ones forming today if they are to
produce a lot of dark matter. Note that the epoch of Population III formation will be very important for the relative distribution of baryonic and non-baryonic dark matter, especially if the non-baryonic dark matter is “cold” so that it can cluster in galactic halos. In this case, if the Population III stars form before galaxies, one might expect their remnants to be distributed throughout the Universe, with the ratio of the non-baryonic and baryonic densities being the same everywhere and of order 10. If they form in the first phase of protogalactic collapse, one would expect the remnants to be confined to halos and clusters, so their contribution to the halo density could be larger.
2. The nature of the compact objects
We have seen that compact dark objects could reside in galactic discs, galactic halos or the intergalactic medium. Such objects could also reside in clusters of galaxies, but these are unlikely to be a separate population; probably they would have to comprise either accreted intergalactic objects or objects which derived from disrupted halos. We now discuss the possible forms of these compact objects.
Primordial black holes. Black holes may have formed in the early Universe, either from initial inhomogeneities or at some sort of cosmological phase transition [13]. Those forming at time t after the Big Bang would have a mass of order the horizon mass at that epoch, ∼10⁵(t/s) M_⊙. Since there could not have been large-amplitude horizon-scale inhomogeneities at the epoch of cosmological nucleosynthesis (t ∼ 1 s), PBHs forming via the first mechanism are unlikely to be larger than 10⁵ M_⊙. On the other hand, since there is no phase transition after the quark-hadron era at 10⁻⁵ s, those forming via the second are unlikely to be larger than 1 M_⊙. The possibility that PBHs may have formed at the quark-hadron transition [19] has attracted considerable attention recently [31], because they would naturally have a mass comparable to that required by the microlensing candidates.
Low mass objects. This term will be used to cover all objects below 0.8 M_⊙ which have either not completed their nuclear burning phase or not passed through one at all. Stars in the range 0.08-0.8 M_⊙ are still on the main sequence but seem to be excluded by source-count constraints from providing the disc or halo dark matter [7]. Objects in the range 0.001-0.08 M_⊙ would never burn hydrogen and are termed "brown dwarfs". They represent a balance between gravity and degeneracy pressure. Objects below 0.001 M_⊙, being held together by intermolecular rather than gravitational forces, have atomic density and are here termed "snowballs". However,
However, such objects would have evaporated within the age of the Universe if they were sufficiently small [21].

Stellar remnants. Stars in the range 0.8–8 M_⊙ would leave white dwarf remnants, those between 8 M_⊙ and some mass M_BH ≈ 25 M_⊙ would leave neutron star remnants, and those in the range between M_BH and 60 M_⊙ could evolve to black holes. However, all these remnants would appear to be implausible dark matter candidates because their precursors would produce too much enrichment or background light [16]. Stars with initial mass above 100 M_⊙ (termed "VMOs") would experience the pair-instability during their oxygen-burning phase, leading to disruption below some initial mass M_c ≈ 200 M_⊙ but complete collapse above it [8]. VMO black holes may therefore be more plausible dark matter candidates than ordinary stellar black holes.
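The horizon-mass relation for primordial black holes can be checked numerically. The following minimal sketch (Python) encodes only the order-of-magnitude relation M_PBH ~ 10^5 (t/1 s) M_⊙ quoted above; the function name and the sharp linear prefactor are our illustrative assumptions, not a precise calculation.

```python
def pbh_mass_msun(t_seconds: float) -> float:
    """Approximate PBH mass (in solar masses) for formation time t after the Big Bang,
    using the order-of-magnitude horizon-mass relation M ~ 1e5 (t / 1 s) M_sun."""
    return 1e5 * t_seconds

# Nucleosynthesis epoch (t ~ 1 s): largest PBHs from initial inhomogeneities.
print(pbh_mass_msun(1.0))    # ~1e5 solar masses
# Quark-hadron transition (t ~ 1e-5 s): naturally ~1 M_sun, the microlensing scale.
print(pbh_mass_msun(1e-5))   # ~1 solar mass
```

This makes explicit why the quark-hadron transition has attracted attention: the horizon mass at that epoch lands directly in the mass range favoured by the microlensing candidates.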
B.J. Carr / Physics Reports 307 (1998) 83—96
However, in the VMO scenario the background light generated by the very luminous precursors exceeds the observational upper limits for most parameters [9,52].

Supermassive objects. Metal-free stars larger than 10^5 M_⊙ collapse directly to black holes due to general relativistic instabilities before any nuclear burning. They would therefore have no nucleosynthetic consequences and would also generate very little radiation. It should be stressed that it is very difficult for supermassive objects to collapse directly to black holes at the present epoch, especially if they have angular momentum. They are much more likely to fragment into smaller objects, with black holes only forming subsequently in the core as a result of relaxation. However, supermassive objects might collapse to black holes just after decoupling because the Compton drag of the background radiation could prevent their tidal spin-up [27,35].

Dark clusters. In many scenarios one expects the Population III stars to form in clumps of around 10^6 M_⊙ [14]. This arises naturally if the first clouds form protogalactically through some type of two-phase instability [23]. One could also envisage scenarios in which the cluster mass is well below 10^6 M_⊙ [2,20,32,39,45] or in which entire galaxies form dark clusters much larger than 10^6 M_⊙. Provided such clusters maintain their integrity, it makes very little difference to their dynamical effects what they comprise. However, in many circumstances one would expect the clusters to be disrupted by collisions or Galactic tidal effects, and this places constraints on the mass and radius of any surviving ones.
3. Dynamical constraints

All sorts of constraints can be placed on the density of compact objects in different locations and these are reviewed by Carr [12]. Here we will focus exclusively on constraints associated with dynamical effects. These constraints have been examined in detail by Carr and Sakellariadou ([15]; CS) and are summarized as upper limits on the density parameter Ω_C(M) for compact objects of mass M in Fig. 1, where the disc, halo and cluster density parameters are assumed to be 0.001, 0.1 and 0.2, respectively. Our conclusions will be largely independent of the nature of the compact objects, although for simplicity they are often assumed to be black holes.

3.1. Disc heating by halo objects

As halo objects traverse the Galactic disc, they will impart energy to the stars there. This will lead to a gradual puffing up of the disc, with older stars being heated more than younger ones. This problem was first analysed by Lacey and Ostriker [34], who argued that black holes of around 10^6 M_⊙ could provide the best mechanism for generating the observed disc-puffing. Wielen and Fuchs [51] claimed that black hole heating could also explain the dependence of the velocity dispersion upon Galactocentric distance. More recent data are probably inconsistent with this picture, and heating by spiral density waves or giant molecular clouds is now usually invoked [33]. Nevertheless, one can still use the Lacey-Ostriker argument to place an upper limit on the density in halo objects of mass M.

Lacey and Ostriker start with Chandrasekhar's [17] expressions for 2-body encounters and then use various approximations in applying them to the interactions between disc stars and halo objects. The latter are assumed to be much more massive than the stars and to have an isotropic
B.J. Carr / Physics Reports 307 (1998) 83—96
87
Fig. 1. Summary of dynamical limits on compact objects.
Maxwellian velocity distribution with velocity dispersion much larger than that of the stars. The disc velocity dispersion then evolves according to

σ(t) = (σ_0² + α G² n M² t / v_c)^{1/2} ,   (2)

where σ_0 is the initial dispersion, n the number density of the halo objects, v_c the circular velocity in the disc and α is a dimensionless number of order unity. If the observed total velocity dispersion is σ_obs, this yields a limit

n M² < σ_obs² v_c / (α t_g G²) ,   (3)

where t_g is the time for which the stars have been heated (i.e. their age). This implies a maximum mass for the halo objects of
M_heat ≈ 3×10^6 (ρ_h / 0.01 M_⊙ pc^{-3})^{-1} (σ_obs / 60 km s^{-1})² (t_g / 10^{10} yr)^{-1} M_⊙ ,   (4)

where ρ_h is the local halo density, and the general limit on the density parameter in compact objects of mass M is

Ω_C < Ω_h min[1, (M/M_heat)^{-1}] ,   (5)
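The disc-heating bound of Eqs. (4)-(5) is easy to evaluate numerically. In the sketch below (Python) the scalings follow the reconstructed formula; the exponents on the σ_obs and t_g factors are our assumptions where the printed superscripts were lost, so the code is illustrative rather than definitive.

```python
def m_heat_msun(rho_halo=0.01, sigma_obs=60.0, t10=1.0):
    """Maximum halo-object mass (M_sun) allowed by disc heating, Eq. (4).
    rho_halo in M_sun/pc^3, sigma_obs in km/s, t10 = age / 1e10 yr."""
    return 3e6 * (rho_halo / 0.01) ** -1 * (sigma_obs / 60.0) ** 2 / t10

def omega_limit(m_msun, omega_halo=0.1, m_crit=None):
    """Upper limit on Omega_C for objects of mass m_msun, Eq. (5)."""
    m_crit = m_heat_msun() if m_crit is None else m_crit
    return omega_halo * min(1.0, (m_msun / m_crit) ** -1)

print(m_heat_msun())       # 3e6 M_sun for the fiducial parameters
print(omega_limit(3e7))    # limit weakens as 1/M above M_heat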
as shown in Fig. 1. Friese et al. [25] obtain a comparable limit for the giant Sc galaxy NGC 3198. One can obtain an even stronger limit by applying the disc-heating argument to galaxies with higher dark matter density, lower stellar velocity dispersion or larger age [42]. However, there is no a priori reason to assume the compact objects in different galaxies would have the same mass.

3.2. The disruption of stellar clusters by halo objects

Many types of limits are associated with the influence that compact objects would have on bound groups of stars. This problem has been studied in a variety of astronomical contexts. For example, Spitzer [44] has studied the disruption of open clusters by interstellar clouds. Carr [11] and Wielen [47] have studied the disruption of open clusters by black holes. Wielen [48-50], Ostriker et al. [40] and Moore [38] have studied the disruption of globular clusters by black holes. Bahcall et al. [5] have studied the disruption of binary systems by objects in the Galactic disc. Gerhard and Fall [26] have studied the tidal interactions of spiral galaxies in clusters. The following analysis is general enough to cover all of these applications.

Let us assume that all the dark objects have mass M and velocity V and that all the clusters have mass M_c and radius R_c. Every time a halo object passes near a star cluster, the object's tidal field heats up the cluster and thereby reduces its binding energy. For impulsive encounters the change in energy of the cluster is

ΔE ≈ { G² M² M_c R_c² / (V² p⁴)   (p > R_c) ,
      { G² M² M_c / (V² R_c²)      (p < R_c) ,   (6)

where p is the impact parameter and V_c ≈ (G M_c / R_c)^{1/2} is the internal velocity dispersion of the cluster. If ΔE exceeds the cluster's gravitational binding energy, E_b ≈ M_c V_c², then disruption will be a one-off event, requiring a single encounter. We can express the condition for this as an upper limit on the compact object's speed:

V < { V_c (M/M_c) (R_c/p)²   (p > R_c) ,
    { V_c (M/M_c)             (p < R_c) .   (7)

If this condition is not satisfied, then the disruption of the cluster will be a cumulative effect, requiring many encounters. V/V_c is usually large and, in this case, one-off disruption can only occur for M ≫ M_c. The impulse approximation breaks down if the traversal time of the object exceeds the dynamical time of the cluster, and the condition for this is V < min[V_c, V_c (p/R_c)]. The various possibilities are summarized in Fig. 2. The timescale for one-off disruption can be shown to be

t_one ≈ M_c / (n R_c² M V_c)   for (V/V_c) < M/M_c < (V/V_c)³ .   (8)

The range of values of M arises because one only has one-off disruption for M > M_c (V/V_c) and the impulse approximation fails for M > M_c (V/V_c)³. The timescale on which clusters are disrupted by multiple encounters is

t_mult ≈ { V M_c / (G n M² R_c)    for M/M_c < (V/V_c) ,
         { M_c / (n R_c² M V_c)    for (V/V_c) < M/M_c < (V/V_c)³ ,   (9)
Fig. 2. Effect of encounters on clusters for different values of p and ».
The timescale increases exponentially for M > M_c (V/V_c)³ because of the sharp decrease in disruptive efficiency when one enters the non-impulsive regime. We note that the timescale in the second expression in Eq. (9) is exactly the same as that given by Eq. (8), so multiple and one-off encounters contribute equally to disruption in this regime. We therefore take the "effective" disruption time to be half the value indicated by Eq. (8) in this case.

By comparing the expected disruption time for clusters of mass M_c and radius R_c with the cluster lifetime t_L, one finds that the local density of compact objects of mass M must satisfy (cf. [47])

ρ_C < { M_c V / (G M t_L R_c)          for M < M_c (V/V_c) ,
      { (M_c / (G t_L² R_c³))^{1/2}     for M_c (V/V_c) < M < M_c (V/V_c)³ .   (10)

Any lower limit on t_L therefore places an upper limit on ρ_C. The crucial point is that the limit is independent of M in the single-encounter regime, so it bottoms out at a density of order (ρ_c/(G t_L²))^{1/2}, where ρ_c ≈ M_c/R_c³ is the density of the cluster itself. CS apply this limit to the disruption of binaries in the disc (taking M_c = 2 M_⊙ and R_c = 0.1 pc), open clusters in the disc (taking M_c = 10² M_⊙ and R_c = 1 pc), globular clusters in the halo (taking M_c = 10^5 M_⊙ and R_c = 10 pc) and galaxies in clusters (taking M_c = 10^{11} M_⊙ and R_c = 10 kpc). They also assume V ≈ 60 km s^{-1} for disc objects, 300 km s^{-1} for halo objects and 10³ km s^{-1} for cluster objects. This yields the limits shown in Fig. 3. Numerical calculations for the disruption of globular clusters by halo objects by Moore [38] confirm the qualitative features indicated above, but he infers a stronger upper limit of 10³ M_⊙ since his clusters are more diffuse. The limit corresponding to the CS result is shown by a broken line in Fig. 1 since it involves certain caveats.
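Equation (10) can be evaluated directly for the systems considered by CS. The sketch below (Python) uses the globular-cluster parameters quoted in the text; the branch structure follows the reconstructed equation, and the reconstruction itself (in particular the M-independent floor) is an assumption where the printed exponents were lost.

```python
import math

G = 4.47e-15   # gravitational constant in pc^3 / (M_sun yr^2)
KMS = 1.02e-6  # 1 km/s expressed in pc/yr

def rho_limit(M, M_c, R_c, V_kms, t_L=1e10):
    """Upper limit (M_sun/pc^3) on the local density of mass-M objects, Eq. (10).
    M, M_c in M_sun; R_c in pc; V_kms in km/s; t_L (cluster lifetime) in yr.
    Beyond M ~ M_c (V/V_c)^3 the impulse approximation fails and the limit weakens."""
    V = V_kms * KMS                       # relative speed, pc/yr
    V_c = math.sqrt(G * M_c / R_c)        # internal dispersion, pc/yr
    if M < M_c * (V / V_c):               # multiple-encounter regime
        return M_c * V / (G * M * t_L * R_c)
    # one-off regime: limit is independent of M
    return math.sqrt(M_c / (G * t_L**2 * R_c**3))

# Globular clusters (M_c = 1e5 M_sun, R_c = 10 pc) heated by 1e6 M_sun halo objects:
print(rho_limit(1e6, 1e5, 10.0, 300.0))
```

For these parameters the bound is of order 0.07 M_⊙ pc^{-3}, only a factor of a few above the local halo density, which illustrates why the globular-cluster limit is the marginal (broken-line) case in Fig. 1.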
3.3. The effect of dynamical friction on halo objects

Any objects in the Galactic halo will tend to lose energy to lighter objects via dynamical friction and consequently drift towards the Galactic nucleus [17]. At a given Galactocentric radius r, this means that all objects larger than some mass M_df(r) will have been dragged into the nucleus by now. Various sources of drag will act upon the halo objects: within the central few kiloparsecs the dominant source will be the spheroid stars; beyond that the effect of the disc stars will also be important; at still larger radii the halo objects themselves may provide the dominant source of drag. The dynamical effect of the spheroid stars has been discussed in detail by Carr and Lacey [14] and CS extend their analysis to the other regions.

We assume that the spheroid density scales as r^{-2} for r < r_s and falls more steeply for r > r_s, where r_s = 800 pc, and that the halo density is constant for r < r_c and scales as r^{-2} for r > r_c, where r_c is the halo core radius (probably in the range 2-8 kpc). The halo density dominates that of the spheroid beyond a radius r_h ≈ 1.8 kpc for r_c = 2 kpc. If we assume for simplicity that each halo object always moves on a circular orbit, then its orbital radius will shrink according to

dr/dt ≈ −G² M ρ(r) r / v_c(r)³ ,   (11)

where ρ(r) and v_c(r) are the density and circular velocity for the objects providing the drag. For an approximate analytic calculation, we adopt the spheroid expressions for all the terms in Eq. (11) if r < r_s and the halo expression for all the terms if r > r_h, where r_h is the radius beyond which the halo can be assumed to dominate the drag. In the intermediate regime r_s < r < r_h, we merely invoke the observed constancy of v_c(r) and do not attempt to solve the problem self-consistently. For r < r_s and r > r_h the drift timescale t_df(r) is an increasing function of r, so this is also roughly the time taken to travel from the initial radius r to the origin. For r_s < r < r_h, t_df first
Fig. 3. Constraints on dark fraction from cluster disruption.
decreases and then increases with increasing r, so the time to drift to the origin is roughly t_df(r_h) for values of r such that t_df(r) is less than this (i.e. for r < r_h). One can show that t_df is less than the age of the Galaxy t_g providing r is below

r_df ≈ { 1.0 (M/10^6 M_⊙)^{1/2} (t_g/10^{10} yr)^{1/2} kpc   (r_df < r_s) ,
       { 1.4 (M/10^6 M_⊙)^{1/2} (t_g/10^{10} yr)^{1/2} kpc   (r_s < r_df < r_h) ,
       { 1.2 (M/10^6 M_⊙)^{1/2} (t_g/10^{10} yr)^{1/2} kpc   (r_df > r_h) ,   (12)

where r_df increases discontinuously from r_s to r_h. If the halo objects all have the same mass M, then Eq. (12) immediately indicates the Galactocentric radius within which they are dragged into the nucleus by now. In the Lacey-Ostriker scenario, M = 2×10^6 M_⊙ and r_df lies between r_s and r_h, but generally M could lie in any range.

If the fraction of the halo in objects of mass M is f(M), then the mass dragged into the Galactic nucleus is just f(M) times the total halo mass within the radius r_df. This exceeds the observational upper limit of 3×10^6 M_⊙ on any central dark mass providing f is smaller than the value indicated in Fig. 4. In particular, if f = 1, one requires M to be less than

M_df ≈ 2×10^6 (r_c/2 kpc)^{4/3} (t_g/10^{10} yr)^{-1} M_⊙ .   (13)

This is stronger than the disc-heating limit, but there is an important caveat here since, once more than two black holes have accumulated in the centre of the Galaxy, they may be ejected via the "slingshot" mechanism [29]. Because limit (13) is not completely firm, it is shown broken in Fig. 1.
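The drag-in radius of Eq. (12) can be sketched numerically. In the code below (Python) the square-root dependence r_df ∝ (M t_g)^{1/2} is our assumption, reconstructed by integrating Eq. (11) for an isothermal (ρ ∝ r^{-2}) profile where the printed exponents were lost; the prefactors are those quoted in the text.

```python
def r_df_kpc(M_msun, t10=1.0, prefactor=1.2):
    """Galactocentric radius (kpc) within which halo objects of mass M_msun
    have spiralled into the nucleus by dynamical friction, Eq. (12).
    prefactor = 1.0, 1.4 or 1.2 for the inner, intermediate and halo regimes."""
    return prefactor * (M_msun / 1e6) ** 0.5 * t10 ** 0.5

# Lacey-Ostriker black holes (M = 2e6 M_sun) over 1e10 yr:
print(r_df_kpc(2e6))   # ~1.7 kpc, comparable to the spheroid/halo transition
```

Note how the (M t_g)^{1/2} scaling reproduces the stated behaviour: for the Lacey-Ostriker mass the drag-in radius lands between r_s ≈ 0.8 kpc and r_h ≈ 2 kpc.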
3.4. Constraints on intergalactic dark objects

The most interesting dynamical constraint on intergalactic compact objects comes from the fact that each galaxy should have a peculiar velocity due to its gravitational interaction with the nearest one [11]. If the objects were smoothly distributed, with number density n and density parameter Ω_IG, the typical distance between them would be

d ≈ n^{-1/3} ≈ 30 Ω_IG(M)^{-1/3} (M/10^{16} M_⊙)^{1/3} h^{-2/3} Mpc .   (14)

This would also be the expected distance of the nearest object to a typical galaxy like our own. Over the age of the Universe the nearest one will therefore induce a peculiar velocity in the Milky Way of

V_pec ≈ G M t_0 / d² ≈ 500 h^{4/3} Ω_IG(M)^{2/3} (M/10^{16} M_⊙)^{1/3} (t_0/10^{10} h^{-1} yr) km s^{-1} .   (15)

Since the microwave background dipole anisotropy shows that this velocity is only 400 km s^{-1}, one infers

Ω_IG < (M/5×10^{15} M_⊙)^{-1/2} (t_0/10^{10} h^{-1} yr)^{-3/2} h^{-2}   (16)

and this is shown in Fig. 1. Note that the requirement that there be at least one object of mass M within the current particle horizon implies a lower limit

Ω_IG > 3×10^{-8} (M/10^{16} M_⊙) h ,   (17)
Fig. 4. Dynamical friction radius and associated constraint on f .
where we have used Eq. (14) with d = 3ct_0 ≈ 10 h^{-1} Gpc (the current horizon size). This intersects Eq. (16) at a mass of order 10^{21} M_⊙, so this corresponds to the largest possible dark object within the visible Universe. The other limits shown at the bottom right in Fig. 1 correspond to there being at least one compact object of mass M within each site. We term these "incredulity" limits since the compact object scenario is uninteresting if the number of objects is less than one.

3.5. Dark clusters

We have seen that both the cluster disruption and dynamical friction constraints may be incompatible with the proposal that the halo is populated with supermassive black holes. However, it is still possible that the halo is made of supermassive objects which are themselves clusters of smaller objects. The dark cluster proposal was originally examined by Carr and Lacey [14] in an attempt to salvage the Lacey-Ostriker scenario for disc-heating by 2×10^6 M_⊙ black holes. We have seen that this scenario may no longer be plausible, but the cluster proposal itself is still viable and indeed arises very naturally in many models for Population III formation.

We assume that the dark clusters all have the same mass M_c and radius R_c. They may then be disrupted by essentially the same processes discussed in Section 3.2, except that the "compact object" in that analysis is replaced by another cluster of the same mass (i.e. M = M_c). This means that one-off disruption (which requires M ≫ M_c) never occurs, but multiple encounters will still lead to cluster disruption within the age of the Galaxy providing

R_c > 1.8 (r_c/2 kpc)² (t_g/10^{10} yr)^{-1} pc .   (18)

If this condition is satisfied, clusters will be disrupted within a Galactocentric radius

r_dis ≈ 1.5 (R_c/pc)^{1/2} (t_g/10^{10} yr)^{1/2} kpc .   (19)
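Equations (18) and (19) are related: Eq. (18) is just the requirement that the disruption radius of Eq. (19) reach the halo core radius. The sketch below (Python) makes that connection explicit; the half-power dependence on R_c and t_g is our assumption where the printed exponents were lost, chosen so that the two equations are mutually consistent.

```python
def r_dis_kpc(R_c_pc, t10=1.0):
    """Galactocentric radius (kpc) within which dark clusters of radius
    R_c_pc (pc) are disrupted by cluster-cluster encounters, Eq. (19)."""
    return 1.5 * R_c_pc ** 0.5 * t10 ** 0.5

def min_radius_pc(r_core_kpc=2.0, t10=1.0):
    """Minimum cluster radius for disruption inside the halo core, Eq. (18)."""
    return 1.8 * (r_core_kpc / 2.0) ** 2 / t10

# By construction, a cluster at the threshold radius is disrupted out to ~r_core:
print(r_dis_kpc(min_radius_pc()))   # ~2 kpc for r_core = 2 kpc
```
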
Indeed it is then a general feature of the cluster scenario, as emphasized by De Paolis et al. [20], that there is some Galactocentric radius within which the halo is broken up into its cluster components. The dynamical friction limit will be obviated providing r_dis exceeds the radius r_df given by Eq. (12) with r_df > r_h, and the condition for this is

R_c > 0.8 (M_c/10^6 M_⊙) pc .   (20)

In order to avoid the clusters evaporating as a result of 2-body relaxation within the age of the Galaxy, one requires another lower limit on the cluster radius:

R_c > 0.04 (m_*/0.01 M_⊙)^{2/3} (t_g/10^{10} yr)^{2/3} (M_c/10^6 M_⊙)^{-1/3} pc ,   (21)

where m_* is the mass of the components. An upper limit on R_c comes from requiring that the clusters do not disrupt at our own Galactocentric radius, and from Eq. (19) this implies

R_c < 30 (r_0/8 kpc)² (t_g/10^{10} yr)^{-1} pc .   (22)

Clusters at radius r_0 will also be destroyed by the Galactic tidal field unless [39]
R_c < 100 (M_c/10^6 M_⊙)^{1/3} pc .   (23)

These limits are indicated in Fig. 5, which shows that the values of M_c and R_c are constrained to a rather narrow wedge. The globular cluster limit is shown with a broken line since its interpretation is not completely clear. There is some uncertainty in the position of the dynamical friction boundary since this is sensitive to the halo core radius: the limit is shown for r_c = 2 kpc (solid line) and r_c = 8 kpc (broken line).

3.6. Direct encounter constraints

Compact objects smaller than Jupiter would have none of the disruptive effects on astronomical systems discussed above. However, if there were a large population of such objects in the disc or halo of the Galaxy, one would expect some of them to enter the Solar System or impact the Earth occasionally, and this would have observable consequences. Sufficiently small objects would resemble meteorites, sufficiently large ones would resemble comets, and those in the intermediate mass range would leave impact craters on Earth. Upper limits on the frequency of meteors and comets and on the number of impact craters therefore provide constraints on compact objects too small to be eliminated by any other type of observation. Hills [28] has studied the encounter limits in detail and CS have updated his results.

To calculate the rate at which objects hit the Earth, one must allow for the gravitational focusing of the Sun. This increases the flux at the orbital radius of the Earth by a factor 1.5 for disc objects or 1.02 for halo objects. The mass flux on Earth is then
dM_E/dt = { 1.5 π R_E² f(M) ρ V = 5×10^9 f(M) g yr^{-1}   (disc) ,
          { 1.0 π R_E² f(M) ρ V = 9×10^8 f(M) g yr^{-1}   (halo) ,   (24)

where R_E is the Earth's radius, f(M) the fraction of the dark matter in objects of mass M, ρ the dark matter density and V the average speed of the objects relative to the Earth. One expects
Fig. 5. Constraints on the mass and radius of dark clusters.
»"80 km s\ for disc objects and »"350 km s\ for halo objects. This compares to an average speed of 40 km s\ for Solar System objects. The local halo and disc densities are taken to be 0.01M pc\ and 0.15M pc\, respectively. > > The limits from observations of meteors, fireballs and impact craters are discussed in detail in CM. For M'10 g in the disc or M'10 g in the halo, one also gets a limit from the fact that no interstellar comet has been observed in telescopic surveys over the last 300 yr. Using 1AU rather R for the relevant cross-section in Eq. (24) then gives a constraint #
f(M) < { 1×10^{-3} (M/10^{18} g)   (disc) ,
       { 6×10^{-3} (M/10^{18} g)   (halo) ,   (25)
so only objects larger than 10^{21} g in the disc or 2×10^{20} g in the halo could provide all the dark matter. Naked-eye observations would have sufficed to detect still larger disc or halo objects (which would be as bright as Halley) out to several AU over the last 400 yr, and this increases these lower limits further. These limits are shown by the lines on the left in Fig. 1.
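The mass flux of Eq. (24) follows directly from the quoted densities, velocities and focusing factors. The sketch below (Python) reproduces the two numerical coefficients; the unit conversions are standard, and only the inputs quoted in the text are used.

```python
import math

M_SUN_G = 1.99e33     # solar mass in grams
PC_CM = 3.09e18       # parsec in cm
R_EARTH_CM = 6.37e8   # Earth radius in cm
YR_S = 3.15e7         # year in seconds

def mass_flux_g_per_yr(rho_msun_pc3, v_kms, focus):
    """Mass flux onto the Earth (g/yr) from dark objects of local density rho
    and mean relative speed v, including the solar focusing factor, Eq. (24)."""
    rho_cgs = rho_msun_pc3 * M_SUN_G / PC_CM ** 3
    area = math.pi * R_EARTH_CM ** 2
    return focus * area * rho_cgs * (v_kms * 1e5) * YR_S

print(mass_flux_g_per_yr(0.15, 80.0, 1.5))   # disc: ~5e9 g/yr
print(mass_flux_g_per_yr(0.01, 350.0, 1.0))  # halo: ~9e8 g/yr
```

Rescaling the cross-section from πR_E² to π(1 AU)² then gives the interstellar-comet constraint of Eq. (25).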
References

[1] C. Alcock et al., Astrophys. J. 461 (1996) 84.
[2] K.A. Ashman, Mon. Not. R. Astron. Soc. 247 (1990) 662.
[3] K.A. Ashman, B.J. Carr, Mon. Not. R. Astron. Soc. 249 (1991) 13.
[4] E. Auborg et al., Astron. Astrophys. 301 (1995) 1.
[5] J.N. Bahcall, P. Hut, S. Tremaine, Astrophys. J. 290 (1985) 15.
[6] J.N. Bahcall, C. Flynn, A. Gould, Astrophys. J. 389 (1992) 234.
[7] J.N. Bahcall et al., Astrophys. J. Lett. 435 (1995) L31.
[8] J.R. Bond, W.D. Arnett, B.J. Carr, Astrophys. J. 280 (1984) 825.
[9] J.R. Bond, B.J. Carr, C.J. Hogan, Astrophys. J. 367 (1991) 420.
[10] P. Bristow, S. Phillipps, Mon. Not. R. Astron. Soc. 267 (1994) 13.
[11] B.J. Carr, Comm. Astrophys. 7 (1978) 161.
[12] B.J. Carr, Ann. Rev. Astron. Astrophys. 32 (1994) 531.
[13] B.J. Carr, Astron. Astrophys. Trans. 5 (1994) 43.
[14] B.J. Carr, C.G. Lacey, Astrophys. J. 316 (1987) 23.
[15] B.J. Carr, M. Sakellariadou, Preprint, 1998.
[16] B.J. Carr, J.R. Bond, W.D. Arnett, Astrophys. J. 277 (1984) 445.
[17] S. Chandrasekhar, Principles of Stellar Dynamics, Dover, New York, 1960.
[18] C.J. Copi, D.N. Schramm, M.S. Turner, Phys. Rev. Lett. 75 (1995) 3981.
[19] M. Crawford, D.N. Schramm, Nature 298 (1982) 538.
[20] F. De Paolis, G. Ingrosso, Ph. Jetzer, M. Roncadelli, Astron. Astrophys. 295 (1995) 567; 299 (1995) 647.
[21] A. De Rujula, Ph. Jetzer, E. Masso, Astron. Astrophys. 254 (1992) 99.
[22] A.C. Fabian, X. Barcons, Rep. Prog. Phys. 54 (1991) 1069.
[23] S.M. Fall, M.J. Rees, Astrophys. J. 298 (1985) 18.
[24] M. Fich, S. Tremaine, Ann. Rev. Astron. Astrophys. 29 (1991) 409.
[25] V. Friese, B. Fuchs, R. Wielen, in: S.S. Holt, C.L. Bennett (Eds.), AIP Conf. Proc., vol. 336, Dark Matter, American Institute of Physics Press, New York, 1995, p. 129.
[26] O.E. Gerhard, S.M. Fall, Mon. Not. R. Astron. Soc. 203 (1983) 1253.
[27] N.Yu. Gnedin, J.P. Ostriker, Astrophys. J. 400 (1992) 1.
[28] J.G. Hills, Astron. J. 92 (1986) 595.
[29] P. Hut, M.J. Rees, Mon. Not. R. Astron. Soc. 259 (1992) 27P.
[30] C. Impey, G. Bothun, Ann. Rev. Astron. Astrophys. 35 (1997) 267.
[31] K. Jedamzik, Phys. Rev. D 55 (1997) R5871.
[32] E. Kerins, B.J. Carr, Mon. Not. R. Astron. Soc. 266 (1994) 775.
[33] C.G. Lacey, in: B. Sundelius (Ed.), Dynamics of Disk Galaxies, 1991, p. 257.
[34] C.G. Lacey, J.P. Ostriker, Astrophys. J. 299 (1985) 633.
[35] A. Loeb, Astrophys. J. 403 (1993) 542.
[36] G.J. Mathews, D.N. Schramm, B.S. Meyer, Astrophys. J. 404 (1993) 476.
[37] S. McGaugh, Nature 367 (1994) 538.
[38] B. Moore, Astrophys. J. Lett. 413 (1993) L93.
[39] B. Moore, J. Silk, Astrophys. J. Lett. 442 (1995) L5.
[40] J.P. Ostriker, J. Binney, P. Saha, Mon. Not. R. Astron. Soc. 241 (1989) 849.
[41] M. Persic, P. Salucci, Mon. Not. R. Astron. Soc. 258 (1992) 14P.
[42] H. Rix, G. Lake, Astrophys. J. 417 (1993) 1.
[43] J. Silk, Phys. Rep. 227 (1993) 143.
[44] L. Spitzer, Astrophys. J. 127 (1958) 17.
[45] I. Wasserman, E.E. Salpeter, Astrophys. J. 433 (1994) 670.
[46] S.D.M. White, M.J. Rees, Mon. Not. R. Astron. Soc. 183 (1978) 341.
[47] R. Wielen, in: J. Goodmann, P. Hut (Eds.), Dynamics of Star Clusters, Reidel, Dordrecht, 1985, p. 449.
[48] R. Wielen, Pub. Astron. Inst. Czech. Acad. Sci. 69 (1987) 157.
[49] R. Wielen, in: J.E. Grindlay, A.G.D. Philips (Eds.), Harlow Shapley Symp. on Globular Cluster Systems in Galaxies, IAU Symp. No. 126, Reidel, Dordrecht, 1988, p. 393. [50] R. Wielen, Astron. Soc. Pacific 13 (1991) 343. [51] R. Wielen, B. Fuchs, in: L. Blitz, F.J. Lockman (Eds.), Lecture Notes in Physics, vol. 306, Springer, Berlin, 1988, p. 100. [52] E.L. Wright et al., Astrophys. J. 420 (1994) 450.
Physics Reports 307 (1998) 97—106
Magellanic cloud gravitational microlensing results: What do they mean? David Bennett* Physics Department, University of Notre Dame, Notre Dame, IN 46556, USA.
Abstract

Recent results from gravitational microlensing surveys of the Large Magellanic Cloud are reviewed. The combined microlensing optical depth of the MACHO and EROS-1 surveys is τ_LMC ≈ 2.1×10^{-7}, which is substantially larger than the background of τ ≤ 0.5×10^{-7} from lensing by known stellar populations, although it is below the expected microlensing optical depth of τ = 4.7×10^{-7} for a halo composed entirely of Machos. The simplest interpretation of these results is that nearly half of the dark halo is composed of Machos with a typical mass of order 0.5 M_⊙. This could be explained if these Machos are old white dwarfs, but it is not obvious that the generation of stars that preceded these white dwarfs could have gone undetected. It is also possible that Machos are not made of baryons, but there is no compelling model for the formation of non-baryonic Machos. Therefore, a number of authors have been motivated to develop alternative models which attempt to explain the LMC microlensing results with non-halo populations. Many of these alternative models postulate previously unknown dark stellar populations which contribute significantly to the total mass of the Galaxy and are therefore simply variations of the dark matter solution. However, models which postulate an unknown dwarf galaxy along the line of sight to the LMC or a distortion of the LMC which significantly enhances the LMC self-lensing optical depth can potentially explain the LMC lensing results with only a small amount of mass, so these can be regarded as true non-dark matter solutions to the Macho puzzle. All such models that have been proposed so far have serious problems, so there is, as yet, no compelling alternative to the dark matter interpretation. However, the problem can be solved observationally with a second-generation gravitational microlensing survey that is significantly more sensitive than current microlensing surveys. 1998 Published by Elsevier Science B.V. All rights reserved.

PACS: 95.35.+d
1. Introduction In the past few years, the question of the composition of the Galactic dark matter has changed from a topic of theoretical speculation to an experimental question. A number of different * E-mail:
* E-mail: [email protected].
D. Bennett / Physics Reports 307 (1998) 97—106
experiments have or will soon have the sensitivity to probe most of the leading dark matter candidates in realistic parameter regimes. The most exciting result to come from these experiments so far has been the apparent detection of dark matter in the form of Machos by the MACHO and EROS Collaborations [1,2]. (Macho stands for MAssive Compact Halo Object and refers to dark matter candidates with masses in the planetary to stellar mass range.) The MACHO Project, in particular, has measured a gravitational microlensing optical depth that greatly exceeds the predicted background of lensing by ordinary stars. However, the timescales of the detected microlensing events indicate an average lens mass that is firmly in the range of hydrogen burning stars or white dwarfs, at ~0.5 M_⊙. Both of these possibilities appear to have other observable consequences that are not easy to reconcile with other observations. This difficulty has led a number of authors to propose alternative scenarios. One possibility is that the Machos are not baryonic despite having masses similar to those of stars [3,4]. Another possibility is that there is some population of normal stars that contributes a much higher microlensing optical depth than predicted by standard Galactic models. In this scenario, the detected microlensing events are caused by ordinary stars and have little to do with the Galactic dark matter. This possibility is particularly appealing to those searching for other dark matter candidates. In this paper, we will discuss the microlensing technique as a dark matter search tool, and we review the latest dark matter results from the MACHO and EROS collaborations. We will then explore the various models that have been proposed to explain these microlensing results, and we shall see that there is no compelling alternative to the dark matter interpretation.
Then, we will show how observations of "exotic" microlensing events which do not conform to the standard microlensing lightcurve shape can be used to help pin down the location of the lens objects along the line of sight. Finally, we will discuss the "Next Generation Microlensing Survey" which will have the sensitivity to discover a statistically significant sample of "exotic" microlensing events and should resolve the basic question of whether the lensing objects reside in the Galactic halo.
2. The microlensing technique

Gravitational microlensing was proposed as a tool to study dark matter in our galaxy by Paczyński in 1986 [5], and a few years later serious microlensing dark matter searches were initiated [6,7]. These searches are sensitive to dark matter in the mass range 10^{-7}–1 M_⊙, but the region of greatest initial interest was the 0.01–0.1 M_⊙ brown dwarf mass range.

2.1. Microlensing surveys of the Magellanic Clouds

At present there are four gravitational microlensing surveys of the Magellanic Clouds in operation. The three original microlensing surveys, EROS, MACHO, and OGLE, all discovered their first microlensing events in September 1993, and all three are still in operation. MACHO was the first project to operate a dedicated microlensing survey telescope with large format CCD cameras. They have a 64 million pixel dual color imaging system on the 1.3 m telescope at Mt. Stromlo which has been observing for the microlensing project almost exclusively since late 1992. To date, they have obtained over 70000 images of the Magellanic Clouds and the Galactic bulge. Each image covers about half a square degree on the sky. Their most recently published dark
matter results come from observations of 22 square degrees of the central regions of the LMC over a period of 2 years ending in late 1994.

The EROS group is currently operating the EROS-2 telescope at La Silla. This 1-m telescope is equipped with a 128 million pixel dual color camera which images 1 square degree, similar to the MACHO camera but with a field of view that is a factor of 2 larger. Their published work is based upon their earlier EROS-1 system, which involved digitizing Schmidt plates from the ESO Schmidt telescope to search for microlensing events lasting longer than a week, as well as a CCD survey which imaged the central region of the LMC very frequently using a 0.4 m telescope to search for short timescale microlensing events.

The other microlensing survey teams have yet to publish Magellanic Cloud microlensing survey results. The OGLE group is now observing the Clouds with the OGLE-2 system, which consists of a 1.3 m telescope with a single 2k×2k CCD camera operated in drift scan mode, but their published survey results are from the OGLE-1 survey which did not observe the Clouds. The other microlensing team observing the Magellanic Clouds is the MOA project, a New Zealand/Japanese collaboration that observes the LMC from a site in New Zealand that is far enough south that it is able to observe the LMC at every hour of the night.
3. Microlensing results from MACHO and EROS

The first LMC microlensing events were discovered in September 1993 by the EROS and MACHO collaborations [8,9]. This was the first hint that Machos might comprise a significant fraction of the Galactic dark matter, but the first quantitative measurement of a microlensing optical depth significantly above the background did not come until a few years later, when MACHO released the LMC results from their first two years of operation [1]. They found eight microlensing events which indicated an optical depth of τ_LMC = 2.9^{+1.4}_{-0.9}×10^{-7}, which significantly exceeds the expected microlensing optical depth of τ ≤ 0.54×10^{-7} from known stellar populations in the Galaxy and the LMC. Formally, the odds of obtaining a measured optical depth as high as τ_LMC in a Universe in which the true microlensing optical depth is τ ≤ 0.54×10^{-7} are about 0.04%. A detailed comparison of the LMC microlensing results from the MACHO Collaboration and the various stellar lensing backgrounds is given in Table 1. The prediction for a standard halo composed of Machos is also shown.

The EROS-1 microlensing survey has also reported an optical depth for the LMC of τ_LMC ≈ 0.82×10^{-7} [2]. Error bars were not reported by the EROS collaboration, but they can be computed assuming Poisson statistics for events of the same timescale. This value is substantially less than the MACHO 2-year result, but because of the substantial statistical uncertainties the two have been shown to be consistent with each other. Because there is little overlap between the time and spatial coverage of the MACHO 2-year and the EROS-1 experiments, the data sets are essentially independent and it makes sense to average them. The relative sensitivity of the two experiments can be determined by comparing the expected number of events for the standard halo models for an assumed Macho mass of 0.4 M_⊙. According to [1,2], EROS-1's sensitivity is about 58% of that of the MACHO 2-year data set.
With this weighting, we find a combined EROS-1 & MACHO 2-year microlensing optical depth of τ_LMC = 2.1 × 10^-7.
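The weighting just described amounts to a sensitivity-weighted mean of the two measurements. A minimal sketch, assuming the 0.58 relative weight quoted above (the EROS-1/MACHO expected-event ratio for a standard halo of 0.4 M_⊙ Machos):

```python
# Optical depths reported by the two experiments
tau_macho = 2.9e-7   # MACHO 2-year LMC result [1]
tau_eros = 0.82e-7   # EROS-1 result [2]

# Relative sensitivities: expected event counts for the same standard
# halo model; EROS-1 is ~58% as sensitive as the MACHO 2-year data set.
w_macho, w_eros = 1.0, 0.58

# Sensitivity-weighted average of the two optical depths
tau_comb = (w_macho * tau_macho + w_eros * tau_eros) / (w_macho + w_eros)
# tau_comb comes out close to the combined value of 2.1e-7 quoted above
```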
D. Bennett / Physics Reports 307 (1998) 97—106
Table 1
Microlensing by stars

Population              τ (10^-7)            ⟨t̂⟩ (days)   ⟨l⟩ (kpc)   N
Thin disk               0.15                 112          0.96        0.29
Thick disk              0.036                105          3.0         0.075
Spheroid                0.029                95           8.2         0.066
LMC center              (0.53)               93           49.8        (1.19)
LMC average             0.32                 93           49.8        0.71
Total                   0.54                 99           —           1.14
Observed                2.9^{+1.4}_{-0.9}    87           ?           8
100% Macho dark halo    4.7                  ?            14.4        ?

Note: This table shows the predicted properties for microlensing by known populations of stars for the MACHO 2-year data set. A Scalo PDMF is assumed, and the density and velocity distributions given in [10]. ⟨l⟩ is the mean lens distance. The expected number of events N includes the MACHO detection efficiency averaged over the distribution. For the LMC, two rows are shown: first at the center, and second averaged over the location of our fields; only the averaged N is relevant. For comparison, the observed values and those predicted for a halo composed entirely of Machos are shown.
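The 0.04% significance quoted above comes from a likelihood analysis of the measured optical depth; a much cruder count-based version of the same test can be sketched from the numbers in the table, assuming pure Poisson statistics on the raw event counts (8 events observed against the ≤1.14 expected from the stellar backgrounds) and ignoring the timescale information and efficiency weighting used in the full analysis:

```python
import math

def p_at_least(n_obs, mu):
    """Poisson probability of observing n_obs or more events when mu are expected."""
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k)
                     for k in range(n_obs))

# 8 LMC events observed vs. at most ~1.14 expected from known stars
p = p_at_least(8, 1.14)
```

This crude estimate also comes out well below 0.1%, consistent with the conclusion that the stellar backgrounds alone cannot plausibly produce the MACHO signal.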
3.1. The dark matter interpretation

Since the MACHO and EROS experiments were designed as dark matter detection experiments, the simplest interpretation of a signal above background is that some of the dark matter halo has been detected. As shown in Table 1, the line of sight to the LMC passes through only one known Galactic component massive enough to provide a microlensing optical depth of τ_LMC = 2.1 × 10^-7, and that is the Galactic halo. However, the typical mass of the lenses is estimated to be ~0.5 M_⊙, which is well above the hydrogen burning threshold. If these ~0.5 M_⊙ lens objects were made of ordinary hydrogen and helium, they would be bright main sequence stars, which are far too bright to be the dark matter. A more reasonable choice for Macho dark matter would be white dwarf stars, which would be too faint to be easily seen. However, white dwarfs generally form at the end of a star's life, and only a fraction of the star's initial mass ends up as a white dwarf. Thus, scenarios in which white dwarfs comprise a significant fraction of the dark matter can be constrained by limits on the brightness and heavy element production from the evolution of the stars which preceded the white dwarfs [11—15,40]. However, a population of white dwarfs which contributes substantially to the mass of the dark halo must have some unusual characteristics in order to avoid appearing as a normal old stellar population, so we must be wary of constraints which rely upon the empirically determined properties of white dwarfs, as a hypothetical dark matter population would necessarily have some differences. Another possibility is that the Machos do make up the dark matter, but they are not made of baryons. Non-baryonic Macho candidates include such things as black holes, strange stars, and neutrino balls [3,16].
These possibilities would behave as cold dark matter as far as galaxy formation scenarios are concerned, so the non-baryonic Macho option would allow Machos to comprise all of the dark matter in the Universe without posing any difficulties for the galaxy
formation scenarios favored by cosmologists. It would seem to be a bit of a coincidence that the masses of the dark matter objects would end up nearly the same as the masses of stars. One scenario that might avoid this coincidence is the possibility of black hole formation at the QCD phase transition [3], because the horizon at the phase transition contains about a solar mass of radiation. Only a tiny fraction of horizon-sized regions needs to collapse to form black holes in order to have a critical density of black holes today. If the QCD transition is first order, this might enhance horizon-sized density perturbations somewhat, but this scenario probably requires a feature in the density perturbation spectrum at the mass scale of the observed Machos, so it seems that some sort of fine tuning is required in this scenario. If the dark matter interpretation is correct, there is an additional puzzle of the composition of the remainder of the dark matter, since the observed microlensing toward the LMC appears to be less than expected for a dark halo made only of Machos. It could be that the rest of the halo is made of particle dark matter such as WIMPs, axions, or massive neutrinos, but another possibility is more massive Machos such as ~20 M_⊙ black holes, which might form from the collapse of very massive stars. These events would be 40 times less common than the observed shorter timescale events, so current microlensing searches cannot put an interesting limit on them. One microlensing event thought to be caused by a ~20 M_⊙ black hole has been seen towards the Galactic bulge, however.
The different models that have been proposed are summarized in Table 2.

4.1. Stars in the Galactic disk or spheroid

The local Galactic disk is perhaps the best understood part of the Milky Way, so it is difficult to make a modification to the standard model of the disk of the magnitude required to account for much of the observed microlensing signal. The thin disk, in particular, is very well characterized.

Table 2
Models to explain the LMC microlensing results

Lens population              τ/τ_LMC   Mass of pop.     Lens identity   Problems
Halo Machos                  1         2×10^11 M_⊙      WD or BH?       Macho formation?
Dark thick disk & spheroid   1         ~10^… M_⊙        WD or BH?       Macho formation?
Foreground gal.              1         10^…—10^… M_⊙    Stars           Stars not seen; contrived
ZL foreground gal.           <0.13     10^… M_⊙?        Stars           Stars in LMC
Warped flared disk           ≲0.5      ~10^… M_⊙        Stars           Contrived
LMC                          <0.2      10^10 M_⊙        Stars           τ too small

Note: This table compares the various models that have been proposed to explain the LMC microlensing results.
There are tight constraints on the observed density of all types of stars in the disk, as well as constraints on the total column density of the disk within 1 kpc of the Galactic plane: Σ < 80 M_⊙ pc^-2. The observed density of stars and gas in the thin disk comprises about half of this limit [17], so there is some room for additional material in a massive thick disk. However, in order to explain the observed LMC microlensing events, the bulk of the matter in this massive thick disk must be more than 1 kpc from the Galactic plane. Furthermore, this thick disk is far more massive than the thick disk that has been observed in the stellar distribution, so it must be composed of objects that are much darker than an ordinary stellar population. The Galactic spheroid is a roughly spherical distribution of stars with a density that falls as a power law at large radii. This distribution has been determined by comparison to star counts. If a standard stellar mass function is assumed, then these data can be used to estimate the microlensing optical depth, yielding the result shown in Table 1. In order to obtain a microlensing optical depth that can explain the observed signal, a dark spheroid population with 50—100 times the mass of the observed stellar population must be added. Thus, both the spheroid and thick disk models are more properly considered to be variations of the dark matter interpretation rather than lensing by the background of ordinary stars. Because the thick disk and spheroid densities drop more rapidly at large radii than the canonical dark halo models, these models predict a lower total mass in Machos for a given microlensing optical depth than a standard halo model does [18]. This might ease some of the difficulties with the white dwarf lens option, because these distributions would require fewer white dwarfs in the Galaxy.

4.2. Warped and flared disk: the Galactic Bagel

A recent paper by Evans et al.
[19] has proposed that both the thin and thick disk might be both flared and warped in the direction of the LMC. This seems like a reasonable option because warping is observed in external galaxies, and the gaseous component of our own Galactic disk does appear to be flared. Interactions with accreting dwarf galaxies like the recently discovered Sagittarius dwarf might plausibly give rise to both these effects. Evans et al. [19] investigated such models and found that they did not yield an interesting microlensing optical depth, so they turned to a more radical set of models. The class of models that Evans et al. propose has a disk column density that grows with Galactic radius beyond the solar circle. Their most extreme model is able to generate a microlensing optical depth of τ = 0.9 × 10^-7, which is below the MACHO Project's 95% confidence level lower limit on τ_LMC even when the other backgrounds listed in Table 1 are included. However, some drastic assumptions are required in order to generate this relatively modest microlensing optical depth. This model has a total disk mass of 1.5 × 10^10 M_⊙ inside the solar circle (R_0 = 8 kpc), but it has 1.39 × 10^11 M_⊙ between 8 and 24 kpc. The total mass within 50 kpc is 2.12 × 10^11 M_⊙. Thus, the mass distribution resembles that of a bagel, with most of the mass contained within a toroidal region outside of the solar circle. It is instructive to compare this mass distribution to that of an isothermal sphere with a circular rotation speed of 200 km/s. The isothermal sphere has a mass of 7.4 × 10^10 M_⊙ inside 8 kpc and 1.49 × 10^11 M_⊙ between 8 and 24 kpc. (Note that a flattened distribution of matter supports a faster rotation speed than a spherical distribution.) Thus, Evans et al.'s Galactic bagel provides only a small fraction of the mass needed to support the rotation curve at the solar circle, but it provides
virtually all of the mass needed between 8 and 24 kpc. Thus, if the dark halo provides the additional mass needed to support the Galactic rotation curve, it must have a relatively high density inside the solar circle, but essentially no mass between 8 and 24 kpc. The halo density would grow once again to comprise most of the Galactic mass beyond 24 kpc. Such a halo is virtually impossible with cold dark matter, but it might be possible to make a halo that resembles this with massive neutrinos if most of the mass inside the solar circle is in a baryonic component that is different from the Galactic disk/bagel. In short, the Evans et al. model is similar to the thick disk model discussed above in that the microlensing optical depth toward the LMC is raised by significantly increasing the total stellar mass of the Galaxy. It differs from the massive thick disk model in that the additional stars are added far from the solar circle, so that they can be brighter without violating limits on the stellar content of the solar neighborhood.

4.3. Magellanic Cloud stars

According to Table 1, lensing by ordinary stars in the LMC is the largest contribution to the microlensing background, and because the SMC is thought to be elongated along the line of sight, the SMC self-lensing optical depth is thought to be substantially larger than this [20]. Furthermore, we do not have as stringent constraints on star counts and the distance distribution of stars in the Magellanic Clouds. These facts led Sahu and Wu [21,22] to suggest that perhaps lensing by LMC stars could be responsible for most of the observed LMC microlensing events. However, Gould [23] showed that the self-lensing optical depth of a self-gravitating disk galaxy, inclined by less than 45° like the LMC, is related to its line-of-sight velocity dispersion by the formula τ ≤ 4⟨v²⟩/c², under the assumption that the stellar disk is a relaxed, virialized system.
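Gould's bound is easy to evaluate. A minimal sketch of the calculation, taking the speed of light in km/s to match the velocity units:

```python
C_KMS = 2.998e5  # speed of light, km/s

def self_lensing_tau_max(sigma_los_kms):
    """Gould's bound tau <= 4 <v^2> / c^2 on the self-lensing optical
    depth of a relaxed, virialized disk inclined by less than 45 deg."""
    return 4.0 * (sigma_los_kms / C_KMS)**2

# For the LMC's measured line-of-sight dispersion of ~20 km/s:
tau_lmc_max = self_lensing_tau_max(20.0)   # ~1.8e-8
```

With the measured LMC dispersion, the bound falls an order of magnitude short of the observed optical depth, which is why self-lensing alone cannot explain the events.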
Since the velocity dispersion of the LMC is measured to be 20 km/s, the implied self-lensing optical depth of the LMC is τ ≲ 2 × 10^-8, which is far too low to explain the LMC microlensing events. This constraint can be avoided only if the LMC has a higher velocity dispersion than current measurements indicate — perhaps in the central LMC, where measurements are rather sparse and where the microlensing search experiments have concentrated their observations. Such a model would then predict that the microlensing optical depth would be much lower in the outer LMC than in the central bar. This is contrary to the observed distribution of event locations, which seems independent of distance from the LMC center, but the current data sets are too small for a highly significant test of this effect. Another possible way to evade Gould's limit on the LMC self-lensing optical depth would be if the LMC is not a virialized, self-gravitating system.

4.4. Foreground dwarf or tidal tail

Inspired by the recent discovery of the Sagittarius Dwarf Galaxy on the far side of the Galactic bulge, Zhao proposed that there might be another, similar-sized dwarf galaxy along the line of sight toward the LMC. This possibility could neatly explain the LMC microlensing events with a normal stellar system of very small mass, and so it involves no new population of Machos or faint stars in unexpected locations. Like the Sagittarius Dwarf, such a galaxy could possibly have evaded detection because its stars would generally be confused with LMC stars. The one obvious drawback of this model is that such dwarf galaxies are quite rare. The Sagittarius Dwarf is the only known dwarf galaxy with a mass large enough to explain the LMC microlensing results, and it only
covers about one thousandth of the sky. So, the a priori probability of a chance foreground dwarf galaxy is only about 0.001. Zhao [24] also suggested that a more likely scenario might be to have a "tidal tail" of a galaxy like the LMC, or even the LMC itself, be responsible for the lensing events. These "foreground galaxy" models got a significant boost when Zaritsky and Lin [25] (hereafter ZL) reported the detection of a feature in the LMC color-magnitude diagrams which they interpreted as evidence for a population of "red clump" stars in the foreground of the LMC. However, neither Zhao's models nor ZL's interpretation of their observations has stood up very well under scrutiny. The following counter-arguments indicate that these foreground galaxy or tidal tail models are not likely to be correct:
• The MACHO Project [26] showed that there is no excess population of foreground RR Lyrae stars toward the LMC, indicating that there is no foreground dwarf galaxy with an old, metal-poor stellar population at a distance of less than 35 kpc towards the LMC.
• Beaulieu and Sackett [27] argued that the CM diagram feature seen by ZL is a feature of the LMC's giant branch rather than a foreground population.
• Bennett [28] showed that even if ZL's interpretation of the LMC CM diagram is correct, the implied microlensing optical depth is only 3—13% of the microlensing optical depth seen by MACHO.
• Gould [29] showed that the outer surface brightness contours of the LMC do not allow for a foreground galaxy or tidal tail that extends beyond the edge of the LMC.
• Johnston [30] showed that tidal debris from the LMC or a similar galaxy would not be expected to remain in front of the LMC for any significant length of time.
5. Exotic microlensing events

Observations of exotic microlensing events such as caustic crossing events, parallax events, and binary source events can provide additional information that can pin down the location of the lens. For example, the recent observations of the binary caustic crossing for the event MACHO Alert 98-SMC-1 have measured the time it takes for the projected position of the lens center of mass to cross the diameter of the source star [31—33]. Since we can make a reasonable estimate of the source star size from multicolor photometry or spectra of the source star, a reasonable estimate of the angular velocity of the lens can be obtained. In the case of MACHO 98-SMC-1, the angular velocity was low, indicating that the lens resides in the SMC, which is not a surprise because the self-lensing optical depth of the SMC is expected to be large [20]. Another type of exotic event that has so far been definitively observed only towards the Galactic bulge is the microlensing parallax effect [34], which is a light-curve deviation due to the motion of the Earth. This also yields information on the distance to the lens, although accurate photometry is required to detect the parallax effect for lenses in the halo. Also, the converse of the parallax effect (sometimes called the xallarap effect) can be observed if the source star is a binary and the effects of its orbital motion can be seen. This requires multiple follow-up spectra to characterize the binary source orbit [35]. It is generally the case that the characterization of these exotic microlensing events requires more frequent or higher accuracy photometry than can be obtained by the microlensing survey teams,
but this is now routinely made possible by the discovery of microlensing events in real time [36—38]. However, the rate at which these exotic microlensing events are detected in the LMC is only about 0.3 per year, which is not enough to build a statistically significant sample in a reasonable amount of time. The next generation of microlensing surveys [39] should have an event detection rate that is more than an order of magnitude higher than this, which should yield enough exotic events to resolve the Macho mystery.
Acknowledgements This research was supported by the Center for Particle Astrophysics through the Office of Science and Technology Centers of NSF under cooperative agreement AST-8809616 and by the Lawrence Livermore National Lab under DOE contract W7405-ENG-48.
References
[1] C. Alcock et al., Astrophys. J. 486 (1997) 697.
[2] C. Renault et al., A & A 324 (1997) 69.
[3] K. Jedamzik, Phys. Rev. D 55 (1997) 5871.
[4] J. Yokoyama, A & A 318 (1997) 673.
[5] B. Paczyński, Astrophys. J. 304 (1986) 1.
[6] E. Roulet, S. Mollerach, Phys. Rep. 279 (1997) 67.
[7] B. Paczyński, ARAA 34 (1996) 419.
[8] C. Alcock et al., Nature 365 (1993) 621.
[9] E. Aubourg et al., Nature 365 (1993) 623.
[10] C. Alcock et al., Astrophys. J. 461 (1996) 84.
[11] D. Graff, K. Freese, Astrophys. J. 456 (1996) L49.
[12] F. Adams, G. Laughlin, Astrophys. J. 468 (1996) 586.
[13] G. Chabrier, L. Segretain, D. Mera, Astrophys. J. 468 (1996) L21.
[14] B. Gibson, J. Mould, Astrophys. J. 482 (1997) 98.
[15] B. Fields, G. Mathews, D. Schramm, Astrophys. J. 483 (1997) 625.
[16] F. Weber, C. Schaab, M. Weigel, N. Glendenning, astro-ph/9609067.
[17] A. Gould, J. Bahcall, C. Flynn, Astrophys. J. 465 (1996) 759.
[18] E. Gates, G. Gyuk, G. Holder, M. Turner, Astrophys. J. 500 (1998) 145.
[19] W. Evans, G. Gyuk, M. Turner, J. Binney, Astrophys. J. 501 (1998) 45.
[20] N. Palanque-Delabrouille et al., A & A 332 (1998) 1P.
[21] K. Sahu, Nature 370 (1994) 275.
[22] X.-P. Wu, Astrophys. J. 435 (1994) 66.
[23] A. Gould, Astrophys. J. 441 (1995) 77.
[24] H.-S. Zhao, MNRAS 294 (1998) 139.
[25] D. Zaritsky, D. Lin, Astron. J. 114 (1997) 254.
[26] C. Alcock et al., Astrophys. J. 490 (1997) 59.
[27] J.-P. Beaulieu, P. Sackett, Astron. J. 116 (1998) 209.
[28] D. Bennett, Astrophys. J. 493 (1998) L79.
[29] A. Gould, Astrophys. J. 499 (1998) 728.
[30] K. Johnston, Astrophys. J. 495 (1998) 495.
[31] C. Afonso et al., A & A 337 (1998) L17.
[32] M. Albrow et al., astro-ph/9807086, Astrophys. J. Lett., in press.
[33] C. Alcock et al., astro-ph/9807163, Astrophys. J., submitted.
[34] C. Alcock et al., Astrophys. J. 454 (1995) L125.
[35] C. Han, A. Gould, Astrophys. J. 480 (1997) 196.
[36] A. Udalski et al., Acta Astronomica 44 (1994) 227.
[37] C. Alcock et al., IAUC 6068, 1994.
[38] A. Becker et al., IAUC 6935, 1998.
[39] C. Stubbs, BAAS 191 (1997) 8315.
[40] D. Graff, K. Freese, Astrophys. J. 467 (1996) L65.
Physics Reports 307 (1998) 107—115
The microlensing searches for galactic dark matter Bertrand Goldman DAPNIA/SPP, CE-Saclay, 91191 Gif-sur-Yvette Cedex, France
Abstract

For 6 years several microlensing collaborations have been monitoring stars in the Magellanic Clouds, leading to the discovery of a dozen microlensing candidates. These candidates, which are unlikely to be disk or thick disk objects, reveal a new population of Galactic objects identified with the predicted halo of Dark Matter. The increasing statistics allow for preliminary interpretation of the results, although the very nature of the experiments, for the time being, severely limits the identification of the nature of these objects. Massive objects (0.5—1 M_⊙) are favored, corresponding to a proportion of 30—100% of the halo, depending on the Galactic model. Observations towards the Galactic bulge and disk are also discussed. 1998 Published by Elsevier Science B.V. All rights reserved.

PACS: 95.35.+d

Keywords: Microlensing; Dark matter; Magellanic clouds
1. Introduction

Motivated by the search for baryonic Galactic Dark Matter (DM) and the suggestion of Bohdan Paczyński [29,30], a few collaborations dedicated themselves to detecting unseen halo compact objects by means of the gravitational microlensing effect. In 1990, when searches started effectively using Schmidt plates, no constraint had been put on the nature, mass distribution or location of this DM, which is needed to explain the flatness of the Galactic rotation curve within the usual Galactic and gravitational models. With optical detectors and reduction capacity increasing in area and power, it became possible to monitor enough stars with proper time sampling to be able to detect a significant number of light-curve amplifications.
Presently at Departamento de Astronomía, U. de Chile, Cerro Calán s/n, Casilla 36-D, Santiago. E-mail: [email protected]. Partially supported by ESO, Santiago de Chile.
B. Goldman / Physics Reports 307 (1998) 107—115
The first experiments in 1993 [6,12,36] towards the Large Magellanic Cloud (LMC) proved the validity of the search, although the noise background (variable stars, novae, unknown variable objects) remains a great concern. The bulge observations, however, with their dozens of detected events [2,7,38], addressed this issue. Following this success, some collaborations upgraded their instrumentation (OGLE: Udalski et al. [39,40]; EROS: Bauer et al. [13,14]) or were initiated (the Japan—New Zealand MOA collaboration; AGAPE: Ansari et al. [1] and Melchior et al. [27]; and the VATT/Columbia Microlensing Survey: Crotts [15] and Crotts and Tomaney [16], both using pixel lensing towards M31), while PLANET: Albrow et al. [5] and GMAN: Alcock et al. [11] perform intensive photometric follow-up observations to detect small-amplitude, brief deviations from the point-source, single-lens light curve. We shall present in the following sections the basic principles of the observations; the instrumentation used by the main working collaborations; the current results concerning Local Group microlensing and DM in the direction of the Magellanic Clouds and the Galactic bulge and disk; and briefly some prospects for the future of microlensing.
2. Description of the phenomenon

After great scepticism about the possibility of secure detection of very rare events requiring an unprecedented data flow, the microlensing phenomenon proved its efficiency in addressing the Galactic questions it was first proposed for, and in motivating new fields of study.

2.1. Basic principles

The simplest configuration (point source, single lens) was described in the frame of General Relativity by Einstein [18,19] himself, and successfully confirmed by the famous Solar eclipse observation in 1919 by Arthur Eddington et al. [17] — even if it is now considered not conclusive quantitatively. In the ideal case where the source star, the lens object and the observer are aligned, the latter will see a luminous Einstein ring of radius

R_E = sqrt(4GM L x(1−x)) / c ,    (1)
where L is the observer—source distance, and x the ratio between the lens—observer distance and that former distance. In the case where the source is slightly shifted by a distance r_0, called the impact parameter (see Fig. 1), the symmetry of the problem implies that the observer will receive two images of the source rather than a continuous ring. The image positions measured from the lens are

r_{1,2} = [r_0 ± sqrt(r_0² + 4R_E²)] / 2 ,    (2)
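These relations are straightforward to evaluate numerically. A minimal sketch using the standard point-lens microlensing formulae (the total amplification A(u) = (u² + 2)/(u·sqrt(u² + 4)) used at the end is the one discussed below):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec, m

def einstein_radius(m_lens_msun, l_kpc, x):
    """Eq. (1): R_E = sqrt(4 G M L x (1 - x)) / c, in the lens plane (meters)."""
    m = m_lens_msun * M_SUN
    return math.sqrt(4 * G * m * l_kpc * KPC * x * (1 - x)) / C

def image_positions(r0, r_e):
    """Eq. (2): radii of the two images, measured from the lens."""
    s = math.sqrt(r0**2 + 4 * r_e**2)
    return (r0 + s) / 2, (r0 - s) / 2

def amplification(u):
    """Total amplification of a point source, with u = r0 / R_E."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

# A 1 M_sun lens half way to the LMC (L = 50 kpc, x = 0.5):
r_e = einstein_radius(1.0, 50.0, 0.5)   # ~1.5e12 m, about 10 AU
a_at_ring = amplification(1.0)          # ~1.34 when r0 = R_E
```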
Fig. 1. The geometry of the gravitational lensing. The observer on Earth sees the light rays as if they were coming from I_1 and I_2 rather than the real position of the source star. Calculations in the text are done in the lens plane.
one (image 1) lying inside the Einstein ring, the other (image 2) outside. The amplification of each image is the ratio of its solid angle of emission in the lensed configuration to the unlensed solid angle:

A_{1,2} = (r_{1,2}/r_0)(dr_{1,2}/dr_0) ,  so that  A = A_1 + A_2 = (u² + 2) / [u sqrt(u² + 4)] ,    (3)

where u = r_0/R_E is the reduced impact parameter. From these equations it can be seen that the Einstein radius is the main parameter of the problem: it determines the amplification and the angular positions of the images by scaling the impact parameter r_0. Also, note that the amplification is A = 1.34 when the impact parameter equals the Einstein radius, which is a reasonable minimum for the detection of a luminosity variation. Therefore the surface of the Einstein disk can be seen as a cross-section. In practice, the value of G/c² is such that the images are resolved only for masses of order 10^6 M_⊙ at the Galactic scale, or galactic masses at the extra-Galactic scale. However, unresolved images can be detected as soon as the geometry of the problem evolves during the observations, thereby modifying the amplification. If the lens moves with respect to the line of sight with transverse velocity v_⊥, u is now a function of time t,

u(t)² = u_0² + [v_⊥(t − t_0)/R_E]² ,

and the amplification will change accordingly. The characteristic distances and masses are in this case solar masses at galactic distances, for expected lens proper motions and reasonable observing durations. For instance, a 1 M_⊙ lens, half way to the LMC, moving at 200 km/s with respect to the line of sight to the LMC, will have a characteristic amplification time of 70 days (the time for the line of sight to cross the Einstein radius). To use this effect, however, it is necessary to monitor sources in order to detect changes in their observed flux. The angular cross-section is very small, of the order of 10^-3 arcsec, since we want the event duration to be shorter than the possible observation duration. As a consequence, the probability for a star to be amplified, at a given time, by more than 1.34, which is the
probability to find a lens in a cone centered on the line of sight, with radius equal to the lens Einstein radius (this is to be integrated over all lens masses and distances), becomes small as well, of the order of 10^-7 for a standard halo, in the direction of the LMC. This probability is called the optical depth by the microlensing community, as it measures the probability of a light ray being deviated. This scarcity delayed the use of the method, as it was necessary to observe millions of stars with sufficient time sampling, and because selection criteria had to be robust enough to exclude a far higher number of variable stars, novae and indeed unknown variable stars, for few astronomical phenomena are known that concern every millionth star. Finally, this point-source, single-lens light curve has to be symmetric in time for a constant lens proper motion, and achromatic. Let us mention briefly that this is an ideal case, although no deviation is observed in most events.

2.2. Less basic amplifications

First, the source is not point-like, so that an iso-amplification pattern is actually projected on the source surface, and moves according to the lens motion. Its influence is measured by the ratio of the impact parameter to the source radius, both projected in some given plane, in the single-lens case (see Ref. [21]). If the source radius is (much) larger than the impact parameter, the effect is to smooth the standard light curve, diminish the maximum amplification and increase the apparent duration of the event. If the source radius is close to the impact parameter, the maximum amplification is increased (see Ref. [33]). Also, if there are colour or polarisation differences across the stellar surface, there is a differential amplification of these parameters. The source may also be blended by stars which are not amplified, but contribute a fixed amount of light to the reconstructed flux.
Again, the effect is to diminish the maximum amplification and increase the duration of the event. If the blending star has different colours than the lensed star, the event will be chromatic. Next, the lens may be a multiple star system. Depending on the components' separation with respect to the individual Einstein radius, the amplification may be repeated (large separation), or completely affected in a non-linear way. The former configuration may be lost because of selection criteria which tend to eliminate stars undergoing multiple variations in order to suppress variable stars; however, it represents only a small minority of all events. The physical description of the latter configuration, which includes binary star and planetary-system lenses, is more complicated, with completely altered light curves (see Ref. [31] for a review). Finally, the velocity of the lens with respect to the line of sight will be modulated by the orbital motion of the Earth, this effect being known as the parallax effect (see Ref. [20] and Section 4.1). The modulation due to the velocity of the source star when rotating in a binary system is nicely called the xallarap effect. All these dynamic perturbations affect the shape of the light curve, but preserve its achromaticity. We will now describe the instrumentation of some microlensing searches, whose characteristics are directly determined by the rarity of the microlensing phenomenon.

3. Instrumentation and observing strategy

We will mainly present the EROS 2 set-up. The other set-ups will be presented briefly at the end of the section.
3.1. The EROS 2 instrumentation

Two years after the first EROS observations in 1991, using either Schmidt plates taken at the ESO 1 m Schmidt telescope or the 0.5° CCD mosaic mounted on a 40 cm telescope at La Silla, Chile, the EROS collaboration decided to build a larger instrument. We reused the 1 m Marly telescope of the Mont Chiran Observatory, France, which is a Ritchey—Chrétien telescope, and we installed it at La Silla, Chile. The f/8 focal length has been reduced to f/5 using a 3-lens focal reducer designed at the Observatoire de Haute-Provence, France. The instrument is a two-camera detector mounted behind a dichroic beam splitter, which provides two large passband beams: one Red, centered on the Cousins I standard filter, with sensitivity up to 1 μm, and one Visible, centered on the Johnson V filter. The pixel size after the focal reducer is 0.6 arcsec; the field of view is 0.88 deg² in the Red passband and 0.95 deg² in the Visible band, due to cosmetic defects. The cameras are each equipped with eight 2k×2k thick Loral CCDs, with a physical pixel size of 15 μm. The alignment between the two mosaics is good to within 45 pixels. For a more detailed description of the instrument and its calibration, see Ref. [14].

3.2. Some other instrumentations

The MACHO set-up, which was working three years before the EROS 2 instrument, is a close precursor, with its dichroic beam-splitter and two cameras. However, the number of CCDs and the field of view are half as large, while the telescope has a 1.3 m mirror. The OGLE set-up uses a brand new 1.3 m telescope, installed at Las Campanas, Chile. It is equipped with a focal reducer and one 2k×2k thin CCD with a 0.4 arcsec pixel size. It uses standard filters, mainly I and some V. It is expected to be upgraded with a CCD mosaic. The MOA set-up uses a 1.3 m telescope as well, installed at Mount John University Observatory, New Zealand. It is equipped with a mosaic of three 4k×2k thin SITe CCDs. It covers 1.2 deg² with a 0.8 arcsec pixel size. Observations started in early 1998.
Finally, the DUO set-up used Kodak films III-aJ and III-aF, taken at the ESO 1 m Schmidt telescope at La Silla, Chile, during one bulge season, in 1995—1996. The films were scanned with the MAMA microdensitometer [35].

3.3. Observing strategy

All the mentioned collaborations but DUO use their telescope on a full-time basis, all year round. This allows for optimal observation of their different scientific programs. EROS 2 devotes around 80% of its observing time to microlensing programs. During the bulge season (end of February to end of October), 72 deg² are monitored. Only red giants and bright stars are searched for amplification, to reduce the exposure time (2 min) and to obtain good photometric quality. Red giants also give scientifically more interesting microlensing events, as bulge red giants are less subject to confusion problems and their distances are better known, so that the optical depth is better measured. Fields are observed every night if weather and elevation permit, once every second night on average. EROS 2 also observes 27 deg² in the Galactic disk in order to detect disk—disk events, for which both the lens and the source lie in the disk. These fields are observed every third night on average.
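The optics quoted in Section 3.1 can be cross-checked with a few lines of arithmetic. The sketch below (Python; the 4 × 2 chip layout of the eight-CCD mosaic is an assumption, as the text does not give the geometry) recovers both the 0.6 arcsec pixel scale and a field of view of order 1 deg² per camera.

```python
# Consistency check of the EROS 2 optics quoted in Section 3.1 (illustrative;
# the 4 x 2 chip layout of the eight-CCD mosaic is an assumption).
RAD_TO_ARCSEC = 206265.0

def plate_scale_arcsec(focal_length_m, pixel_size_um):
    """Angular size of one pixel on the sky, in arcsec."""
    return RAD_TO_ARCSEC * pixel_size_um * 1e-6 / focal_length_m

# 1 m mirror at f/5 -> 5 m effective focal length; 15 um Loral pixels.
scale = plate_scale_arcsec(5.0, 15.0)   # ~0.62 arcsec/pixel, matching the 0.6" quoted

# Eight 2k x 2k CCDs per camera, assumed arranged 4 x 2:
side_x_deg = 4 * 2048 * scale / 3600.0
side_y_deg = 2 * 2048 * scale / 3600.0
field_deg2 = side_x_deg * side_y_deg    # ~1 deg^2, close to the quoted 0.88-0.95 deg^2
```

The computed geometric field is slightly larger than the usable fields quoted in the text, consistent with the cosmetic defects mentioned there.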
Fig. 2. EROS light curves of the 1997 SMC event in the red (top) and visible (bottom) bands: logarithm of the EROS flux versus time since 1990.0. Superimposed is a standard microlensing fit (combined for the red and visible light curves).
During the Magellanic Clouds season (almost all year long), 72 deg² are monitored in the LMC and 9 deg² in the SMC. Fields are observed every fifth night on average for the LMC, every third night for the SMC. (The weather had been very bad in Chile since the beginning of the observations.)
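The "standard microlensing fit" superimposed on the light curves in Fig. 2 is the point-source, point-lens Paczyński curve. A minimal sketch of that model follows (Python; the impact parameter and peak time are illustrative values, not fitted to the event, and only the 123-day duration is taken from the text). Note the duration convention: EROS and OGLE quote the Einstein radius crossing time, MACHO the diameter crossing time, twice as long.

```python
import math

def paczynski_amplification(u):
    """Point-source, point-lens magnification at separation u (in Einstein radii)."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def separation(t, t0, u_min, t_E):
    """Lens-source separation for constant relative velocity.
    t_E is the Einstein *radius* crossing time (EROS/OGLE convention);
    MACHO quotes the diameter crossing time, i.e. 2 * t_E."""
    return math.sqrt(u_min ** 2 + ((t - t0) / t_E) ** 2)

# Illustrative (not fitted) parameters: peak at t0, impact parameter 0.5 R_E,
# and the 123-day duration quoted for the 1997 SMC event.
t0, u_min, t_E = 0.0, 0.5, 123.0
A_peak = paczynski_amplification(separation(t0, t0, u_min, t_E))           # ~2.18
A_wing = paczynski_amplification(separation(t0 + 2 * t_E, t0, u_min, t_E)) # near baseline
```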
4. Current results

4.1. SMC results

The Small Magellanic Cloud was first monitored by the EROS 1 CCD program, which measured 500 000 stars every 20 min in two bands, in its two last seasons, 1994—1996 [34]. As in the EROS 1 LMC analysis, no candidate was found in the mass range 2×10⁻⁷ M_⊙ to 0.02 M_⊙.

The 1996—1997 season revealed the first SMC candidate, for both the EROS 2 and MACHO collaborations [9,10,28] (see Fig. 2). As it has a long duration (τ = 123 d), this indicates either a massive (2.6 M_⊙ at 1σ) halo object or a low-mass SMC lens (SMC self-lensing). (EROS and OGLE use the Einstein radius crossing time, while MACHO uses the, twice longer, Einstein diameter crossing time.) This latter case is
possible only if the SMC is elongated along the line of sight by at least around 5 kpc (95% C.L.). The non-detection of a parallax effect tends to confirm these constraints (see Ref. [28] for details). After the peak, OGLE 2 started following the source star. It confirmed both the P = 2.5 d variability and the blending of the source star, and, thanks to better seeing and resolution, was able to attribute the 0.05 magnitude variability to the source star [39,40]. The uniqueness of this candidate makes it difficult to attribute it to the (somewhat elongated) SMC or to a massive Galactic halo object, but it certainly proves the interest of the SMC line of sight for constraining the Galactic halo shape and composition and the SMC structure. EROS 2 expects a dozen SMC events over its entire 6 yr running time.

4.2. LMC results

The LMC direction provides the largest optical depth and the greatest number of candidates, as it is possible to resolve many more stars than in the SMC. To date about 20 candidates have been found, mostly by the MACHO collaboration. Results from both MACHO and EROS 1 tend to eliminate objects in the mass range 2×10⁻⁷—0.002 M_⊙ as the main halo component [34], and favour the 0.1—1 M_⊙ range [9,10]. Statistics are still too small to allow for a sky-position analysis, which could attribute the lenses either to the LMC (distribution of candidates proportional to the square of the density of LMC stars) or to the (supposedly) uniform Galactic halo. The measured optical depth is of order 10⁻⁷, which corresponds to about 50% of a standard halo. However, the mass to be associated with each of these events depends strongly on the lens velocity and distance distributions, so that the error bars remain large.

4.3. Galactic bulge results

The Galactic bulge line of sight proved to be very efficient for the detection of rarer, exotic deformations of the standard Paczyński light curve, including the blending effect, double-lens amplification [2,37] and the parallax effect [8].
It also permitted the rediscovery of the Galactic bar. The interpretation in terms of optical depth and duration distribution, however, proved more difficult. The raw distribution of short durations indicates a minimum lens mass well into the brown dwarf regime, using standard mass functions and Galactic models (see Refs. [25,26]). Nevertheless, these short events may simply reflect the blending effect, which tends to diminish the reconstructed duration, in which case no brown dwarfs would be needed [3,24]. The interpretation of the longest-duration events, statistical fluctuation or stellar remnant lensing [23], is still unclear and will require higher statistics. The high-quality, weakly blended EROS 2 red giant sample should provide important information on these issues.

4.4. Galactic disk results

In 1996 EROS 2 started to monitor 30 deg² located in the Galactic disk, away from the bulge, so that no lens may lie in the bulge. Three candidates have been detected, with durations of 78—90 days, one of the sources being a binary. The corresponding optical depth is τ = 0.3—1.2×10⁻⁷, in loose
agreement with theoretical predictions. An additional difficulty of this analysis is to evaluate the distance of the source stars, and this is currently under study. On the other hand, the MACHO collaboration has reported events whose source stars lie on the disk main sequence. The very short mean duration indicates brown dwarf lenses, with a minimum mass smaller than that found for the bulge mass function [22]. Again, higher statistics and different lines of sight should help in interpreting these results.
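The duration-to-mass connection underlying these interpretations is the Einstein radius crossing time. A rough sketch (Python; the 0.4 M_⊙ lens mass, 50 kpc source distance, halfway lens position and 200 km/s transverse velocity are assumed illustrative values, not measurements) shows why LMC durations of tens of days point to roughly 0.1—1 M_⊙ halo lenses:

```python
import math

# Rough lens-mass / duration relation for LMC microlensing. The text stresses
# that the inferred mass depends strongly on the assumed lens distance and
# velocity; the values below are illustrative only.
G, c = 6.674e-11, 2.998e8   # SI units
M_SUN = 1.989e30            # kg
KPC = 3.086e19              # m
DAY = 86400.0               # s

def einstein_crossing_time(M_lens, D_source, x, v_transverse):
    """Einstein *radius* crossing time t_E = R_E / v (EROS convention).
    x = D_lens / D_source."""
    r_E = math.sqrt(4.0 * G * M_lens / c ** 2 * D_source * x * (1.0 - x))
    return r_E / v_transverse

# Assumed halo lens: 0.4 M_sun, halfway to the LMC (50 kpc), v = 200 km/s.
t_E_days = einstein_crossing_time(0.4 * M_SUN, 50.0 * KPC, 0.5, 2.0e5) / DAY
# t_E_days comes out at several tens of days, the typical LMC event duration
# that drives the preference for ~0.1-1 M_sun lenses quoted in the text.
```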
5. Near future of the microlensing searches

The coming years of microlensing searches will bring increased statistics towards the above-mentioned lines of sight, with probably better software [4] and follow-up observations. M31 has been observed and may soon provide robust candidates or constraints, with higher sensitivity to low-mass lenses, as the finite-size effect is less significant. Higher statistics are likely to allow for a mass function determination towards the Galactic bulge and the LMC, while a good measurement of the optical depth towards the SMC and M31 will constrain the halo shape. Later, adaptive optics should increase the number of observable stars in a given direction, and therefore the statistics and the available directions of search, while IR photometry may open new Galactic bulge horizons. Other rare events may become detectable, like microlensing of the Galactic bulge by open clusters with known distances, or very massive lenses with durations of several years. While the detection of Jupiter-like systems may be achieved within years using current apparatus, observations of Earth-like planetary systems will require much more powerful instrumentation than that operated today, for detection and follow-up alike. The use of larger telescopes and of recent fast read-out CCDs may allow for observations of many more stars without expanding the number of instruments and collaborations beyond present capacities. The VST project is one example of the survey telescopes to come. Eventually, future non-specific resources will provide more information, like space interferometers with microarcsecond accuracy (GAIA, SIM [32]).
References

[1] R. Ansari et al., Astron. Astrophys. 324 (1997) 843.
[2] Ch. Alard et al., Astron. Astrophys. Lett. 300 (1995) L17.
[3] Ch. Alard, Astron. Astrophys. 321 (1997) 424.
[4] Ch. Alard, R.H. Lupton, Astrophys. J. Lett. (1998), submitted; astro-ph/9712287.
[5] M. Albrow et al., in: Kochanek, Hewitt (Eds.), Astrophysical Applications of Gravitational Lensing, Proc. IAU Symp. 173, Kluwer, Dordrecht, 1996, p. 227.
[6] C. Alcock et al. (The MACHO collaboration), Nature 365 (1993) 621.
[7] C. Alcock et al. (The MACHO collaboration), Astrophys. J. 454 (1997) L125.
[8] C. Alcock et al. (The MACHO collaboration), Astrophys. J. 479 (1997) 119.
[9] C. Alcock et al. (The MACHO collaboration), Astrophys. J. 491 (1997) L11.
[10] C. Alcock et al. (The MACHO collaboration), Astrophys. J. 482 (1997) 697.
[11] C. Alcock et al. (The GMAN collaboration), Astrophys. J. (1998), submitted; astro-ph/9702199.
[12] É. Aubourg et al. (The EROS collaboration), Nature 365 (1993) 623.
[13] F. Bauer, PhD Thesis, Saclay report Dapnia-SPP 97/10, 1997.
[14] F. Bauer et al., Proc. ESO "Optical Detectors for Astronomy" Workshop, Kluwer Academic Publishers, Dordrecht, 1998.
[15] A.P.S. Crotts, Astrophys. J. Lett. 399 (1992) L43.
[16] A.P.S. Crotts, A.B. Tomaney, Astrophys. J. Lett. 473 (1996) L87.
[17] F.W. Dyson et al., Mem. R. Astron. Soc. 62 (1920) 291.
[18] A. Einstein, Ann. Phys. 35 (1911) 898.
[19] A. Einstein, Ann. Phys. 49 (1916) 769.
[20] A. Gould, Astrophys. J. 392 (1992) 442.
[21] A. Gould, Astrophys. J. Lett. 421 (1992) L71.
[22] A. Gould et al., Astrophys. J. 482 (1997) 913.
[23] C. Han, A. Gould, Astrophys. J. 447 (1995) 53.
[24] C. Han, Astrophys. J. 484 (1997) 555.
[25] D. Méra et al., Astron. Astrophys. (1998), in press; astro-ph/9801050.
[26] D. Méra et al., Astron. Astrophys. (1998), in press; astro-ph/9801051.
[27] A.-L. Melchior et al., Astron. Astrophys., submitted; astro-ph/9712236.
[28] N. Palanque-Delabrouille et al. (The EROS collaboration), Astron. Astrophys. 332 (1998) 1.
[29] B. Paczyński, Astrophys. J. 284 (1986) 670.
[30] B. Paczyński, Astrophys. J. 301 (1986) 503.
[31] B. Paczyński, ARAA 34 (1996) 419.
[32] B. Paczyński, Astrophys. J. Lett., in press; astro-ph/9708155.
[33] C. Renault, PhD Thesis, Saclay report Dapnia-SPP 97/10, 1997.
[34] C. Renault et al., Astron. Astrophys. Lett. 324 (1997) 69, and references therein.
[35] C. Soubiran, Astron. Astrophys. 259 (1992) 394.
[36] A. Udalski et al., Acta Astron. 43 (1993) 289.
[37] A. Udalski et al., Astrophys. J. Lett. 436 (1994) L103.
[38] A. Udalski et al., Acta Astron. 48 (1998) 113.
[39] A. Udalski et al., Acta Astron. 47 (1997) 319.
[40] A. Udalski et al., Acta Astron. 47 (1997) 431.

http://vst.na.astro.it/vst/scienceVST.html, part 3.
Physics Reports 307 (1998) 117—123
Creating black holes in the universe, and universes through black holes Raphael Bousso* Department of Physics, Stanford University, Stanford, CA 94305, USA
Abstract We describe the quantum creation of black holes during inflation using Euclidean quantum gravity. We argue that a single topology change can lead to the formation of multiple black holes. They evaporate and disappear, causing the universe to disintegrate into large fragments. This process repeats iteratively, leading to the proliferation of inflationary universes. 1998 Elsevier Science B.V. All rights reserved. PACS: 98.80.Bp; 04.60.−m
1. Introduction One usually thinks of black holes forming through gravitational collapse. Thus, it seems that inflation is not a good place to look for black holes, since matter is hurled apart by the rapid cosmological expansion. We show, however, that it is possible to get black holes in inflation through the quantum process of pair creation [1,2]. There are two physical motivations that might lead us to expect this: First of all, quantum fluctuations can be very large during inflation, which leads to large density perturbations. Secondly, in order to pair create any objects, whether particles or black holes, one needs a force to pull them apart. Think of electron—positron pair creation: unless there is a force pulling them apart, the virtual particles will just fall back and annihilate. But if they are in an external electric field, the field pulls them apart and provides them with the energy to become real particles. Similarly, whenever one pair creates black holes, one needs to do it on a background that will pull them apart. This could be, for example, Melvin’s magnetic universe, where oppositely charged black holes are separated by the background magnetic field, or a cosmic string, which can snap with black holes sitting on the bare terminals, pulled apart by the string
* E-mail: [email protected].
tension. For the black holes we shall consider, the necessary force will be provided by the rapid expansion of space during inflation. So this expansion, which we naively thought would prevent black holes from forming, actually enables pair creation.
2. Inflation

In quantum cosmology, one expects the universe to begin in a phase called chaotic inflation. In this era the evolution of the universe is dominated by the vacuum energy V(φ) of some inflaton field φ. V starts out at about the Planck value, and then decreases slowly while the field rolls down to the minimum of the potential. During this time the universe behaves like de Sitter space with an effective cosmological constant Λ ≈ V. Like the scalar field, Λ decreases only very slowly in time, and for the purposes of calculating the pair creation rate, we can take Λ to be fixed [1].
3. Instanton method

An instanton is a Euclidean solution of the Einstein equations, i.e., a solution with signature (++++). Instantons can be used for the description of non-perturbative gravitational effects, such as spontaneous black hole formation. What follows is a kind of kitchen recipe for this type of application. We must consider two different spacetimes: de Sitter space without black holes (i.e., the inflationary background), and de Sitter space containing a pair of black holes. For each of these two types of universes, we must find an instanton which can be analytically continued to become this particular Lorentzian universe. The next step is to calculate the Euclidean action I of each instanton. According to the Hartle—Hawking no-boundary proposal [3], a value of the wave function Ψ is assigned to each universe. In the semi-classical approximation Ψ = e^{−I}, neglecting a prefactor. P = |Ψ|² = e^{−2I^R} is then interpreted as a probability measure for the creation of each particular universe. (Note that P depends only on the real part I^R of the Euclidean action.) The pair creation rate of black holes on the background of de Sitter space is finally obtained by taking the ratio Γ = P_SdS/P_dS of the two probability measures. One can also think of Γ as the ratio of the number of inflationary Hubble volumes containing black holes to the number of empty Hubble volumes.
4. de Sitter

We begin with the simpler of the two spacetimes, an inflationary universe without black holes. In this case the space-like sections are round three-spheres. In the Euclidean de Sitter solution, the three-spheres begin at zero radius, expand and then contract in Euclidean time. Thus they form a four-sphere of radius √(3/Λ). The analytic continuation can be visualised (see Fig. 1) as cutting the four-sphere in half, and then joining to it half the Lorentzian de Sitter hyperboloid, where the three-spheres expand exponentially in Lorentzian time. The real part of the Euclidean action for this geometry comes from the Euclidean half-four-sphere only: I^R_dS = −3π/2Λ.
Fig. 1. The creation of a de Sitter universe (left) can be visualised as half of a Euclidean four-sphere joined to a Lorentzian four-hyperboloid. The picture on the right shows the corresponding nucleation process for a de Sitter universe containing a pair of black holes. In this case the space-like slices have non-trivial topology.
Correspondingly, the probability measure for de Sitter space is

P_dS = exp(3π/Λ) .   (1)
5. Schwarzschild-de Sitter

Now, we need to go through the same procedure with the Schwarzschild-de Sitter solution, which corresponds to a pair of black holes immersed in de Sitter space. The space-like sections in this case have the topology S¹×S². This can be seen by the following analogy: Empty Minkowski space has space-like sections of topology R³. Inserting a black hole changes the topology to S²×R. Similarly, if we start with de Sitter space (space-like topology S³), inserting a black hole is like punching a hole through the three-sphere, thus changing the topology to S¹×S². In general, the radius of the S² varies along the S¹. The maximum two-sphere corresponds to the cosmological horizon, the minimum to the black hole horizon. This is shown in Fig. 2. What we need is a Euclidean solution that can be analytically continued to contain this kind of space-like slice. It turns out that such a smooth instanton does not exist in general for the Lorentzian Schwarzschild-de Sitter spacetimes. The only exception is the degenerate case, where the black hole has the maximum possible size, and the radius of the two-spheres is constant along the S¹ (see Fig. 2). The corresponding Euclidean solution is just the topological product of two round two-spheres, both of radius 1/√Λ [4]. It can be analytically continued to the Lorentzian Schwarzschild-de Sitter solution by cutting one of the two-spheres in half, and joining to it the two-dimensional hyperboloid of (1+1)-dimensional Lorentzian de Sitter space, as shown in Fig. 1. In the Lorentzian regime the S¹ expands exponentially, while the two-sphere just retains its constant radius. The real part of the Euclidean action for this instanton is given by I^R_SdS = −π/Λ, and the corresponding probability measure is

P_SdS = exp(2π/Λ) .   (2)
Fig. 2. The space-like slices of Schwarzschild-de Sitter space have the topology S¹×S². In general (left), the size of the two-sphere varies along the one-sphere. If the black hole mass is maximal, however, all the two-spheres have the same size (right). Only in this case is a smooth Euclidean solution admitted.
6. Pair creation rate

Now, we can take the ratio of the two probability measures, and obtain the pair creation rate

Γ = exp(−π/Λ) .   (3)
Let us interpret this result. The cosmological constant is positive and no larger than order unity in Planck units. This means that black hole pair creation is suppressed. When Λ ≈ 1 (early in inflation), the suppression is weak and one can get a large number of very small black holes. For smaller values of Λ (which are attained later in inflation), the black holes would be larger, but their creation becomes exponentially suppressed.
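The chain of Sections 3—6 (instanton actions, probability measures, and their ratio) is compact enough to evaluate numerically. A sketch in Python, in Planck units, reproducing the suppression just discussed (the specific Λ values are illustrative):

```python
import math

# Semi-classical black hole pair creation in de Sitter space, following the
# instanton recipe in the text (Planck units; Lambda is at most of order unity).
def I_dS(Lam):
    """Real part of the Euclidean action of the de Sitter four-sphere."""
    return -3.0 * math.pi / (2.0 * Lam)

def I_SdS(Lam):
    """Real part of the Euclidean action of the S^2 x S^2 instanton."""
    return -math.pi / Lam

def pair_creation_rate(Lam):
    """Gamma = P_SdS / P_dS with P = exp(-2 I^R); working with the action
    difference avoids overflowing exp() for small Lambda."""
    return math.exp(-2.0 * (I_SdS(Lam) - I_dS(Lam)))  # = exp(-pi / Lambda)

Gamma_early = pair_creation_rate(1.0)   # exp(-pi) ~ 0.04: mild suppression
Gamma_late = pair_creation_rate(0.01)   # ~1e-137: exponentially suppressed
```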
7. Evolution

When the black hole appears, it will typically be degenerate; that is, it will have the same size as the cosmological horizon, and will be in thermal equilibrium with it. One may argue on an intuitive basis that this equilibrium would be unstable [4], with quantum fluctuations causing the black hole to be slightly smaller, and hotter, than the cosmological horizon. Then, presumably, the black hole would start to evaporate and eventually disappear. Ginsparg and Perry [4] showed that the time scale between black hole nucleations is vastly larger than the time needed for evaporation (see also Ref. [2]). In this sense de Sitter space would "perdure". In a collaboration with S. Hawking, this argument was recently confirmed [5]. We used a model that includes the quantum radiation in the s-wave and large-N limit, at one loop. We found, to our surprise, that nearly degenerate Schwarzschild-de Sitter black holes anti-evaporate. But there is a different way of perturbing the degenerate solution which leads to evaporation, and we found that this mode would always be excited when black holes nucleate spontaneously. The process of black hole creation and subsequent evaporation is shown in Fig. 3 (evolution of the space-like sections) and Fig. 4 (causal structure). Crucially, for the case of a single black hole, the topology reverts to that of the original de Sitter space after the process is completed.
Fig. 3. Evolution of space-like hypersurfaces of de Sitter space during the creation and subsequent evaporation of a single black hole. The spontaneous creation of a handle changes the spatial topology from spherical (S³) to toroidal (S¹×S²) with constant two-sphere radius. (The double-headed arrow indicates that opposite ends of the middle picture should be identified, closing the S¹.) If the quantum fluctuations are dominated by the lowest Fourier mode on the S¹, there will be one minimum and one maximum two-sphere radius, corresponding to a black hole (b) and a cosmological horizon (c). This resembles a "wobbly doughnut" with cross-sections of varying thickness. As the black hole evaporates, the thinnest cross-section decreases in size. Finally, the black hole disappears, i.e. the doughnut is pinched at its thinnest place and reverts to the original spherical topology.
Fig. 4. Penrose diagram for the process depicted in Fig. 3. The shaded region is the black hole interior. In the region marked by the square brackets the spatial topology is S¹×S², and opposite ends should be identified. The middle picture in Fig. 3 corresponds to the handle nucleation surface shown here. After the black hole evaporates, a single de Sitter universe remains.
8. Disintegration and proliferation of the inflationary universe

One may, however, consider different perturbations of the degenerate black hole solution, which correspond to higher quantum fluctuation modes. The nth mode leads to n minima and n maxima
Fig. 5. Evolution of space-like hypersurfaces of de Sitter space during the creation of a handle yielding multiple black holes (n = 2) and their subsequent evaporation. This should be compared with Fig. 3. If the quantum fluctuations on the S¹×S² handle are dominated by the second Fourier mode on the S¹, there will be two minima and two maxima of the two-sphere radius, seeding two black hole interiors (b) and two asymptotically de Sitter regions (c). This resembles a "wobbly doughnut" on which the thickness of the cross-sections oscillates twice. As the black holes evaporate, the minimal cross-sections decrease. When the black holes disappear, the doughnut is pinched at two places, yielding two disjoint spaces of spherical topology, the daughter universes.
of the two-sphere radius around the S¹. Up till now, we have considered only the n = 1 perturbation, which usually is the most important one. Occasionally, however, a higher-n mode will dominate, leading to the presence of multiple pairs of apparent black hole horizons in a single nucleation event, and eventually to the formation of several black hole interiors. One can show [6] that they evaporate, and that the universe disconnects from itself at the event where a black hole finally disappears. If more than one black hole is present, this leads to the disintegration of the universe. In Fig. 5, the evolution of the space-like sections in a process of multiple black hole creation and subsequent evaporation is shown. The corresponding causal diagram is given in Fig. 6. The process can be viewed as a sequence of topology changes. The first topology change corresponds to the creation of a handle with multiple black hole horizons. It corresponds to a local non-perturbative quantum fluctuation of the metric on the scale of a single horizon volume. It will therefore happen independently in widely separated horizon volumes of de Sitter space. While the black holes evaporate, the asymptotically de Sitter regions between the black holes grow exponentially. Thus, the second topology change, corresponding to the final disappearance of the black holes, yields a number of de Sitter spaces which already contain exponentially large space-like sections. Inside these second-generation universes the handle-creation process will again occur locally. Thus, a single de Sitter universe decomposes iteratively into an infinite number of disjoint copies of itself. According to most inflationary models, the universe is vastly larger than the present horizon. The proliferation effect I have discussed renders our position even more humble. Not only do we live in an exponentially small part of the universe, compared to its global size; but our universe is
Fig. 6. Penrose diagram for the process depicted in Fig. 5. The shaded regions are the two black hole interiors. In the region marked by the square brackets the spatial topology is S¹×S², and opposite ends should be identified. The middle picture in Fig. 5 corresponds to the horizon freezing surface shown here. After the black holes evaporate, two separate de Sitter universes remain.
only one of the exponentially many disconnected universes, all originating from the same small region in which inflation started. There is a large class of models that leads to eternal inflation; in this case, we inhabit one of an infinite number of separate universes produced from a single region.
References

[1] R. Bousso, S.W. Hawking, The probability for primordial black holes, Phys. Rev. D 52 (1995) 5659; gr-qc/9506047.
[2] R. Bousso, S.W. Hawking, Pair creation of black holes during inflation, Phys. Rev. D 54 (1996) 6312; gr-qc/9606052.
[3] J.B. Hartle, S.W. Hawking, Wave function of the Universe, Phys. Rev. D 28 (1983) 2960.
[4] P. Ginsparg, M.J. Perry, Semiclassical perdurance of de Sitter space, Nucl. Phys. B 222 (1983) 245.
[5] R. Bousso, S.W. Hawking, (Anti-)evaporation of Schwarzschild-de Sitter black holes, Phys. Rev. D 57 (1998) 2436.
[6] R. Bousso, Proliferation of de Sitter space, Phys. Rev. D 58 (1998) 083511.
Physics Reports 307 (1998) 125—131
Cosmological constraints from primordial black holes Andrew R. Liddle*, Anne M. Green Astronomy Centre, University of Sussex, Brighton BN1 9QJ, UK
Abstract Primordial black holes may form in the early Universe, for example from the collapse of large amplitude density perturbations predicted in some inflationary models. Light black holes undergo Hawking evaporation, the energy injection from which is constrained both at the epoch of nucleosynthesis and at the present. The failure as yet to unambiguously detect primordial black holes places important constraints. In this article, we are particularly concerned with the dependence of these constraints on the model for the complete cosmological history, from the time of formation to the present. Black holes presently give the strongest constraint on the spectral index n of density perturbations, though this constraint does require n to be constant over a very wide range of scales. 1998 Elsevier Science B.V. All rights reserved. PACS: 97.60.Lf; 98.80.Cq Keywords: Black holes; Inflationary cosmology
1. Introduction Black holes are tenacious objects, and any which form in the very early Universe are able to survive until the present, unless their Hawking evaporation is important. The lifetime of an evaporating black hole is given by
τ ≃ (M / 10¹⁵ g)³ × 10¹⁷ s .   (1)
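The cubic dependence of the lifetime on mass can be inverted to find which initial masses evaporate at a given cosmic time. A short sketch (Python; the 10¹⁵ g and 10¹⁷ s normalizations should be treated as order-of-magnitude assumptions):

```python
# Invert the Hawking lifetime scaling tau ~ (M / 1e15 g)^3 * 1e17 s.
# The normalizations are order-of-magnitude assumptions, not precise values.
def lifetime_s(M_grams):
    return (M_grams / 1e15) ** 3 * 1e17

def mass_evaporating_at(t_seconds):
    """Initial mass whose lifetime equals t."""
    return 1e15 * (t_seconds / 1e17) ** (1.0 / 3.0)

AGE_UNIVERSE_S = 4e17                        # rough present age of the Universe

M_now = mass_evaporating_at(AGE_UNIVERSE_S)  # ~1.6e15 g: evaporating today
M_bbn = mass_evaporating_at(1e2)             # ~1e10 g: evaporated around nucleosynthesis
```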
From this we learn that a black hole of initial mass M ~ 10¹⁵ g will evaporate at the present epoch, while for significantly heavier black holes Hawking evaporation is negligible. Another mass worthy of consideration is M ~ 10¹⁰ g, which leads to evaporation around the time of nucleosynthesis,
* Corresponding author. E-mail: [email protected].
which is well enough understood to tolerate only modest interference from black hole evaporation by-products. Several mechanisms have been proposed which might lead to black hole formation; the simplest is collapse from large-amplitude, short-wavelength density perturbations. The black holes form with approximately the horizon mass, which in a radiation-dominated era is given by

M_HOR ≃ 10¹⁸ g × (10⁷ GeV / T)² ,   (2)
where T is the ambient temperature. This tells us that any black holes for which evaporation is important must have formed during very early stages of the Universe's evolution. In particular, formation corresponds to times much earlier than nucleosynthesis (energy scale of about 1 MeV), which is the earliest time at which we have any secure knowledge concerning the evolution of the Universe. Any modelling of the evolution of the Universe before one second is speculative, and especially above the electro-weak symmetry breaking scale (about 100 GeV) many possibilities exist. Note also that although we believe we understand the relevant physics up to the electro-weak scale, the cosmology between that scale and nucleosynthesis could still be modified, say by some massive but long-lived particle. In this article we will consider the standard cosmology and two alternatives [1,2]. We define the mass fraction of black holes as

β ≡ ρ_BH / ρ ,   (3)

and will use subscript 'i' to denote the initial values. In fact, we will normally prefer to use

α ≡ β / (1 − β) = ρ_BH / (ρ − ρ_BH) ,   (4)

which is the ratio of the black hole energy density to the energy density of everything else. Black holes typically offer very strong constraints because after formation the black hole energy density redshifts away as non-relativistic matter (apart from the extra losses through evaporation). In the standard cosmology the Universe is radiation dominated at these times, and so the energy density in black holes grows, relative to the total, proportionally to the scale factor a. As the interesting black holes form so early, this factor can be extremely large, and so typically the initial black hole mass fraction is constrained to be very small. The constraints on evaporating black holes are well known, and we summarize them in Table 1. This table shows the allowed mass fractions at the time of evaporation. An additional, optional, constraint can be imposed if one imagines that black hole evaporation leaves a relic particle, as these relics must then not over-dominate the mass density of the present Universe [3]. For black holes massive enough to have negligible evaporation, the mass density constraint is the only important one (though in certain mass ranges there are also microlensing limits which are somewhat stronger). We will study three different cosmological histories in this paper, all of which are currently observationally viable. The first, which we call the standard cosmology, is the minimal scenario. It begins at some early time with cosmological inflation, which is necessary in order to produce the
Table 1
Limits on the mass fraction of black holes at evaporation

Constraint                          Range                  Reason
α_evap < 0.04                       10⁹ g < M < 10¹³ g     Entropy per baryon at nucleosynthesis [4]
α_evap < 10⁻²⁶ M/m_Pl               M ≈ 5×10¹⁴ g           γ rays from current explosions [5]
α_evap < 6×10⁻²² (M/m_Pl)^(1/2)     10⁹ g < M < 10¹³ g     n n̄ production at nucleosynthesis [6]
α_evap < 5×10⁻²⁹ (M/m_Pl)^(3/2)     10¹⁰ g < M < 10¹³ g    Deuterium destruction [7]
α_evap < 1×10⁻⁵⁹ (M/m_Pl)^(7/2)     10¹⁰ g < M < 10¹³ g    Helium-4 spallation [8]
density perturbations which will later collapse to form black holes. Inflation ends, through the preheating/reheating transition (which we will take to be short), giving way to a period of radiation domination. Radiation domination is essential when the Universe is one second old, in order for successful nucleosynthesis to proceed. Finally, radiation domination gives way to matter domination, at a redshift z_eq ≈ 24 000 Ω₀h², where Ω₀ and h have their usual meanings, to give our present Universe. The two modified scenarios replace part of the long radiation-dominated era between the end of inflation and nucleosynthesis. The first possibility is that there is a brief second period of inflation, known as thermal inflation [9]. Such a period is unable to generate significant new density perturbations, but may be desirable in helping to alleviate some relic abundance problems not solved by the usual period of high-energy inflation. The second possibility is a period of matter domination brought on by a long-lived massive particle, whose eventual decays restore radiation domination before nucleosynthesis. For definiteness, we shall take the long-lived particles to be the moduli fields of superstring theory, though the results apply for any non-relativistic decaying particle.
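The leverage described in Section 1, α growing in proportion to the scale factor during radiation domination, can be made concrete with a short sketch (Python). The formation temperature, the evaporation-epoch temperature and the evaporation-epoch limit below are assumed illustrative values, chosen only to show the orders of magnitude involved:

```python
# During radiation domination rho_BH / rho_rad grows like the scale factor a,
# i.e. roughly as 1/T (changes in g_* are ignored in this sketch).
def growth_factor(T_form_GeV, T_end_GeV):
    return T_form_GeV / T_end_GeV

# Assumed example: formation at T ~ 1e10 GeV, constraint applied at
# nucleosynthesis temperatures T ~ 1e-3 GeV.
factor = growth_factor(1e10, 1e-3)   # ~1e13

# An (assumed) evaporation-epoch limit alpha_evap < 1e-8 then translates into
# an initial-fraction limit alpha_i < 1e-8 / 1e13 ~ 1e-21, illustrating why
# the permitted initial mass fractions are so tiny.
alpha_i_max = 1e-8 / factor
```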
2. The standard cosmology

Once the cosmology is fixed, the limits on the mass fraction at evaporation shown in Table 1, along with the constraints from the present mass density, are readily converted into limits on the initial mass fraction [1,10]. The limits in different mass ranges are shown in Fig. 1; typically no more than about 10^{-20} of the mass of the Universe can go into black holes at any given epoch. This limits the size of density perturbations on the relevant mass scale. These limits apply down to the lightest black hole which is able to form, which is governed by the horizon size at the end of inflation.
The black hole constraint limits density perturbations on scales much shorter than conventional measures of the density perturbation spectrum using large-scale structure and the microwave
Fig. 1. Here α is the initial mass fraction of black holes permitted to form. The dotted line assumes black hole evaporation leaves a Planck-mass relic, and is optional.
background (though the scales corresponding to black hole formation are similar to those on which gravitational waves may be probed by the LIGO, VIRGO and GEO interferometer projects [11]). However, an interesting application of the black hole constraints shown in Fig. 1 can be made if one has a definite form for the power spectrum. The simplest possibility is a power-law spectrum, whose spectral index n is assumed to remain constant across all scales, the interesting case being n > 1, the so-called blue spectrum. The constancy of n is in fact predicted by some hybrid inflation models, which are the most natural way of obtaining a blue spectrum. With n > 1 the shortest-scale perturbations dominate, and the black hole constraints were explored by Carr et al. [12], who found that n was limited to be less than around 1.4-1.5.
We have redone their analysis and corrected two significant errors. First, they used an incorrect scaling of the horizon-crossing amplitude during the radiation era, which should read

σ_hor(M) = σ_hor(M₀) (M/M₀)^{(1−n)/4} .  (5)

Secondly, their normalization of the spectrum to the COBE observations, to fix the long-wavelength behaviour, was incorrect (too low) by a factor of around twenty. With these corrections, the constraint on n tightens considerably, to n ≲ 1.25 [1]. Ignoring for the time being uncertainties in cosmological modelling, this should be regarded as a very hard limit; it is not useful to think of it as representing some confidence level. Because the density perturbations are assumed Gaussian, the formation rate of black holes is extremely sensitive to the amplitude of perturbations on the scale under consideration. Hence a very modest change in n gives a huge change in the predicted black hole abundance, which makes a rapid transition between totally negligible and enormously excessive as n crosses the quoted limit. In fact, to obtain a black hole density near the present limits requires considerable fine-tuning, as we saw
in Fig. 1 that only about 10^{-20} or so of the mass of the Universe may be channelled into black holes. However, the previous paragraph did not take into account uncertainties in cosmological modelling, and it is these that we quantify in the remainder of this article. We will also note that lifting the Gaussianity assumption, presently a controversial topic [13,14], makes little change.
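The extreme sensitivity to n can be exhibited in a toy calculation, combining the horizon-crossing amplitude scaling σ(M) ∝ M^{(1−n)/4} used in the text with a Gaussian collapse fraction. The COBE-scale normalization (10⁻⁵ on a ~10⁵⁶ g horizon mass), the 10¹⁵ g evaluation scale and the threshold 1/3 are illustrative assumptions, not the paper's calibrated pipeline:

```python
import math

# Toy model: sensitivity of the PBH abundance to the spectral index n.
# sigma(M) ~ M^((1-n)/4), normalized to sigma = 1e-5 on a COBE-scale
# horizon mass of ~1e56 g; delta_c = 1/3 is the usual radiation-era
# collapse threshold. All numbers are illustrative assumptions.

def beta(n, m_pbh=1e15, m_cobe=1e56, sigma_cobe=1e-5, delta_c=1/3):
    sigma = sigma_cobe * (m_pbh / m_cobe) ** ((1 - n) / 4)
    return sigma * math.exp(-delta_c**2 / (2 * sigma**2))

# A shift of only 0.02 in n changes beta by many orders of magnitude:
print(beta(1.37) / beta(1.35))
```

This is the rapid transition between "totally negligible" and "enormously excessive" described above.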
3. With thermal inflation

Thermal inflation is a brief period of inflation occurring at an intermediate energy scale. We model it as occurring from T ≈ 10^{7} GeV down to the supersymmetry scale T ≈ 10^{3} GeV, followed by reheating, which is the standard thermal inflation scenario. It drives ln(10^{7} GeV/10^{3} GeV) ≈ 10 e-foldings of inflation. As we have seen [Eq. (2)], most of the interesting mass region contains black holes forming before thermal inflation begins, which implies that the constraints are affected by it. Three effects are important:
- Dilution of black holes during thermal inflation.
- A change in the correspondence of scales: COBE scales leave the horizon closer to the end of inflation.
- A mass range which enters the horizon before thermal inflation, but leaves again during it. No new perturbations are generated on these scales during thermal inflation, so from the horizon mass formula we find a missing mass range in which black holes will not form. Thermal inflation at a higher energy scale would exclude correspondingly lower masses.
The dilution effect is shown in Fig. 2; typical constraints are now weaker by roughly e^{3N} ≈ 10^{13}. Taking all the effects into account, the constraint on the spectral index weakens to n ≲ 1.3 [1].
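The dilution can be quantified directly: pre-existing black holes redshift as a⁻³ while the vacuum energy driving thermal inflation stays constant, so N e-folds suppress the black hole fraction of the total by e^{3N}. A one-line sketch:

```python
import math

# Dilution of pre-existing black holes by N e-folds of thermal inflation:
# rho_BH ~ a^-3 while the thermal-inflation vacuum energy is constant,
# so the black hole fraction of the total energy density drops by e^(3N).

def dilution_factor(n_efolds):
    return math.exp(-3 * n_efolds)

# For the ~10 e-folds of the standard thermal inflation scenario,
# constraints weaken by roughly thirteen orders of magnitude:
print(dilution_factor(10))
```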
Fig. 2. Black hole constraints modified to include thermal inflation.
Fig. 3. Black hole constraints modified for prolonged moduli domination.
4. Cosmologies with moduli domination

A prolonged early period of matter domination is another possible modification to the standard cosmology [2]. For example, moduli fields may dominate, and in certain parameter regimes they decay before nucleosynthesis. Various assumptions are possible; here we will assume that the moduli dominate as soon as they begin to oscillate. Part of the interesting range of black hole masses then forms during moduli domination rather than radiation domination. Fig. 3 shows the constraints in this case; with moduli domination the limit on n again weakens to n ≲ 1.3 [2].
5. Conclusions

Although black hole constraints are an established part of modern cosmology, they are sensitive to the entire cosmological evolution. In the standard cosmology, a power-law spectrum is constrained to n ≲ 1.25, presently the strongest observational constraint on n from any source. Alternative cosmological histories can weaken this to n ≲ 1.30, and worst-case non-Gaussianity [13] can weaken it by another 0.05 or so, though the hybrid models which give constant n produce Gaussian perturbations. Finally, we note that while the impact of the cosmological history on the density perturbation constraint is quite modest, owing to the exponential dependence of the formation rate on the perturbation amplitude, the change can be much more significant for other formation mechanisms, such as cosmic strings, where the black hole formation rate is a power law in the mass per unit length Gμ. After all, the permitted initial mass density of black holes does increase by many orders of magnitude in these alternative cosmological models.
Acknowledgements ARL was supported by the Royal Society and AMG by PPARC. We thank Toni Riotto for collaboration on the analysis of the moduli-dominated cosmology, and Bernard Carr and Jim Lidsey for discussions.
References
[1] A.M. Green, A.R. Liddle, Phys. Rev. D 56 (1997) 6166.
[2] A.M. Green, A.R. Liddle, A. Riotto, Phys. Rev. D 56 (1997) 7559.
[3] J.D. Barrow, E.J. Copeland, A.R. Liddle, Phys. Rev. D 46 (1992) 645.
[4] S. Miyama, K. Sato, Prog. Theor. Phys. 59 (1978) 1012; B.V. Vainer, P.D. Nasselskii, Astron. Zh. 55 (1978) 231 [Sov. Astron. 22 (1978) 138].
[5] J.H. MacGibbon, Nature 329 (1987) 308; J.H. MacGibbon, B. Carr, Astrophys. J. 371 (1991) 447.
[6] Ya.B. Zel'dovich, A.A. Starobinsky, M.Yu. Khlopov, V.M. Chechetkin, Pis'ma Astron. Zh. 3 (1977) 308 [Sov. Astron. Lett. 3 (1977) 110].
[7] D. Lindley, Mon. Not. R. Astron. Soc. 193 (1980) 593.
[8] B.V. Vainer, D.V. Dryzhakova, P.D. Nasselskii, Pis'ma Astron. Zh. 4 (1978) 344 [Sov. Astron. Lett. 4 (1978) 185].
[9] D.H. Lyth, E.D. Stewart, Phys. Rev. Lett. 75 (1995) 201.
[10] B.J. Carr, in: J.L. Sanz, L.J. Goicoechea (Eds.), Observational and Theoretical Aspects of Relativistic Astrophysics and Cosmology, World Scientific, Singapore, 1985.
[11] E.J. Copeland, A.R. Liddle, J.E. Lidsey, D. Wands, Phys. Rev. D 58 (1998) 063508.
[12] B.J. Carr, J.H. Gilbert, J.E. Lidsey, Phys. Rev. D 50 (1994) 4853.
[13] J.S. Bullock, J.R. Primack, Phys. Rev. D 55 (1997) 7423.
[14] P. Ivanov, Phys. Rev. D 57 (1998) 7145.
Physics Reports 307 (1998) 133—139
Formation of primordial black holes in the inflationary universe Jun’ichi Yokoyama* Department of Physics, Stanford University, Stanford, CA 94305, USA and Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan
Abstract We propose a new double inflation model containing only one inflaton scalar field, in which new inflation follows chaotic inflation. It is shown that the large density fluctuation generated at the beginning of new inflation can result in significant formation of primordial black holes on astrophysically interesting mass scales. 1998 Elsevier Science B.V. All rights reserved. PACS: 98.80.-k
If an overdensity of order unity exists in the hot early universe, a black hole can be formed when the perturbed region enters the Hubble radius [1]. While the properties of the primordial black holes (hereafter PBHs) thus produced were a subject of extensive study decades ago, there was no observational evidence of their existence, and only observational constraints on their mass spectrum were obtained [2,3]. Recently, however, possibilities of their existence have been raised by a number of astrophysical and cosmological considerations, for example, in attempts to explain the origin of MACHOs [4] or a class of gamma-ray bursts [5].
It is unfortunately difficult to realize a desired spectrum of density fluctuations for PBH formation in inflationary cosmology [6-8], because usual inflation models predict a scale-invariant spectrum [9] whose amplitude has been normalized to O(10^{-5}) by the observed anisotropy of the cosmic microwave background radiation (CMB) [10]. In order to produce PBHs on some specific scale, we must instead prepare a density perturbation whose amplitude has a high peak of O(10^{-2}) on the corresponding scale.
In the present paper, we propose a new scenario of multiple inflation as a model to produce the desired spectrum of density fluctuation for PBH formation. Unlike previous double inflation
* E-mail: [email protected].
J. Yokoyama / Physics Reports 307 (1998) 133—139
models [11], our model contains only one source of inflation. We assume Einstein gravity and employ an inflaton scalar field, φ, with a simple potential which has an unstable local maximum at the origin. This is the same setup as the new inflation scenario [7], but chaotic inflation [8] is also possible if φ initially has a sufficiently large amplitude. In fact, Kung and Brandenberger [12] studied the initial distribution of a scalar field with such a potential and concluded that chaotic inflation would be much more likely than new inflation. Thus, we also start with chaotic inflation, but we show that new inflation is also possible when φ evolves towards the origin after chaotic inflation. In fact, if the parameters of the potential are appropriately chosen, the scalar field acquires the right amount of kinetic energy after chaotic inflation and climbs up the potential hill to near the origin, where it starts slow rollover. Hence, in this model the initial condition for new inflation is realized not through high-temperature symmetry restoration nor for a topological reason [13], but by dynamical evolution of the field, which has already become sufficiently homogeneous during the first stage of chaotic inflation. We shall refer to this succession of inflationary stages simply as chaotic new inflation.
With an appropriate shape of the potential, density fluctuations generated during new inflation can have a larger amplitude than those generated during chaotic inflation. Furthermore, since the power spectrum of fluctuations generated during new inflation can be tilted, it can have a peak on the comoving Hubble scale at the time the inflaton enters the slow-rollover phase during new inflation. If the peak amplitude is sufficiently large, it results in formation of PBHs on the horizon mass scale when the corresponding comoving scale reenters the Hubble radius during radiation domination. We shall show below that such a scenario is indeed possible with a simple potential of the inflaton field.
We adopt the following potential:

V[φ] = −(m²/2)φ² + (λ/4)φ⁴ [ln(φ²/v²) − 1/2] + V₀ ,  (1)

that is, a typical one-loop effective potential with a nonvanishing mass term at the origin. A potential of this type, but with a positive mass-squared at the origin, was employed in the original inflation scenario [6]; for our purpose we adopt a negative mass term.
The potential (1) has four parameters, but one of them, V₀, is fixed by the requirement that the vacuum energy density vanish at the potential minima φ = ±φ_m. Another parameter, say λ, can be fixed from the amplitude of large-scale CMB fluctuations using the COBE data, as in chaotic inflation. Hence, we are essentially left with two free parameters, v and m. While v mainly controls the speed of φ around the origin and its fate, i.e. to which minimum it falls, and m mainly governs the duration of new inflation, the entire dynamics is determined by a complicated interplay of the three parameters. For example, we cannot determine λ until we calculate the duration of new inflation, which itself depends on λ for fixed values of m and v. Hence, we must solve the equations of motion numerically and iteratively to find appropriate values of the parameters to produce PBHs at the right scale in the right amount.
In the new inflation regime the slow rollover classical solution, φ(t), and the scale factor, a(t), read

φ(t) = φ_s exp[ m²(t − t_s)/(3H₁) ] ,  a(t) ∝ e^{H₁t} ,  H₁² = 8πV₀/(3M_P²) ,  (2)
where the subscript s stands for the onset of slow-roll new inflation. The linear curvature perturbation on the comoving scale l = 2π/k is given by

Φ(l) = f H₁²/(2π|φ̇(t_k)|) = 3f H₁³/(2πm²φ(t_k)) ≡ f Δ(φ(t_k)) ,  (3)

where t_k is the epoch when the k-mode left the Hubble radius during inflation. The above expression for Φ(l) is valid until the scale l reenters the Hubble radius, with f = 2/3 (1/2) in the matter (radiation) dominated era. We find the spectral index of the power-law density fluctuation is given by n = 1 − 2m²/(3H₁²); thus the power spectrum can be significantly tilted. The linear perturbation has a peak amplitude

Φ_max = f Δ_s ,  Δ_s ≡ 3H₁³/(2πm²φ_s) ,  (4)

on the comoving scale l_s ≡ 2π/k_s, where k_s ≡ a(t_s)H₁. It can be large if φ_s turns out to be small.
We apply the stochastic inflation method [14] to estimate the probability distribution function (PDF) of curvature fluctuations. Since our potential can be approximated as V[φ] = V₀ − (m²/2)φ² in the new inflation regime, the PDF of the coarse-grained inflaton field φ̂(x,t), P[φ̂ = φ, t], can be calculated analytically by solving the Fokker-Planck equation

∂P/∂t = (1/3H₁) ∂/∂φ (V′[φ]P) + (H₁³/8π²) ∂²P/∂φ² .  (5)

Taking the initial condition P[φ, t_s] = δ(φ − φ_s), its solution is given by the Gaussian

P[φ, t] = (2πσ²(t))^{-1/2} exp[ −(φ − φ_cl(t))²/(2σ²(t)) ] ,  σ²(t) = (3H₁⁴/8π²m²){ exp[2m²(t − t_s)/(3H₁)] − 1 } ,  (6)

where φ_cl(t) is the classical solution (2). From this distribution function we estimate the PDF of the metric perturbation. In order to incorporate nonlinear effects at least partially, we write the coarse-grained metric in the quasi-isotropic form

ds² = −dt² + â²(φ̂(t,x)) dx² ,  (7)

following Ivanov [15], where the scale factor â now depends on the coarse-grained spatial coordinate through φ̂(x,t). We quantify the metric perturbation in terms of ĥ ≡ â(t,x)/a(t) − 1, with a(t) being the average scale factor. In order to estimate the abundance of PBHs produced, we calculate the PDF of ĥ, P[ĥ = h], from Eq. (6). We are interested in the metric perturbations on the comoving scales that leave the Hubble radius in the period between t = t_s and t ≈ t_s + H₁^{-1}, when the classical solution has rolled down to φ(t_s + H₁^{-1}) = φ_s exp(m²/3H₁²) ≡ φ_+, because these scales correspond to the peak of the power spectrum and dominate the formation of PBHs when they reenter the Hubble radius during radiation domination. The desired PDF is approximately given by

P[ĥ = h] ≈ P_φ[φ_s(1 + h)^{-c}] · cφ_s(1 + h)^{-c-1} ,  c ≡ m²/(3H₁²) .  (8)
Explicitly, the probability that h exceeds a threshold value h₁ follows from the Gaussian tail of Eq. (6),

P[h > h₁] ≈ (σ/√(2π)Δφ) exp[ −Δφ²/(2σ²) ] ,  Δφ = φ_cl − φ_s(1 + h₁)^{-c} ,

where we have defined c ≡ m²/(3H₁²) and c₂ ≡ cΔ_s²/2. The criterion for black hole formation has been investigated numerically by Nadezhin et al. [16] and by Bicknell and Henriksen [17]. Although it depends on the shape of the perturbed region, the generic value of the threshold reads h₁ ≈ 0.75-0.9. Here we take the black hole threshold as h₁ = 0.75. Then we can express the initial volume fraction of the region collapsing into PBHs as β(M₁) = P[h > 0.75], where the typical black hole mass, M₁, is equal to the horizon mass when the comoving scale l_s reenters the Hubble radius. Note that since our model predicts a density fluctuation which is highly peaked on the comoving scale l_s, the resultant mass function of PBHs is also sharply peaked at masses around M₁.
Let us now consider a specific example: formation of MACHO-PBHs [18]. For this purpose we must realize a peak with β ~ 10^{…} on the comoving scale leaving the Hubble radius 35 e-folds of expansion before the end of new inflation, that is, we must have N ≡ H₁(t_f − t_s) = 35, where t_f is the time new inflation terminates. After some iterative trials we have chosen λ = 3×10^{…} and m = 6×10^{…}M_P, and then solved the equation of motion for various values of v. For this choice of λ and m we find that new inflation lasts for more than ten e-folds of expansion if we take v in the range v = 0.2131M_P - 0.2147M_P. Hence, we do not need much fine-tuning of the model parameters to realize a new inflationary stage itself. We also find that φ settles down to φ_m if v ≥ 0.21384364M_P and to −φ_m if v ≤ 0.21384363M_P.
We can obtain the appropriate spectrum of fluctuations for MACHO-PBHs if we take v = 0.21384360M_P. Figs. 1 and 2 depict the evolution of the inflaton φ and the scale factor, respectively, with the initial condition a_i = 1 at φ_i = 3.5M_P. The chaotic inflation ends at φ ≈ 0.89M_P and the slow rollover new inflation starts at φ_s = −4.03×10^{…}M_P. In this case we find c = 0.300590 and the abundance of the PBHs at formation reads

β = 0.888Δ exp(−0.131072Δ^{-2}) .  (9)

One can also obtain an approximate shape of the mass spectrum of the PBHs using Eq. (9), with Δ replaced by Δ = H₁²/(2π|φ̇|) at the different epochs corresponding to different black hole masses. More specifically, the mass of black holes, M, and their initial fraction, β(M), can be written as implicit functions of t_k as

M = exp(2[H₁(t_f − t_k) − 35]) M₁ ,  β(M) ≈ 0.888Δ(φ(t_k)) exp(−0.131072Δ^{-2}(φ(t_k))) .  (10)

Fig. 3 depicts the mass spectrum of black holes obtained from Eq. (10). Thus, the PBH abundance is sharply peaked. Note, however, that the shape of the large-mass tail, which corresponds to the regime where the slow-roll solution is invalid, is not exactly correct. Nonetheless, this figure correctly describes the location of the peak up to a factor of order unity. A more proper analysis of the mass function [21], based on a newer numerical calculation of PBH formation [22], would change its shape slightly.
Fig. 1. Evolution of the inflaton φ in chaotic new inflation. Time and φ are displayed in units of the Planck time and M_P, respectively.
Fig. 2. Evolution of the scale factor in chaotic new inflation.
We can also apply our model to the formation of PBHs with different masses and abundances. For example, we may produce PBHs with M ≈ 10^{…}M_⊙, which could act as the central engines of AGNs, with a current density of, say, n ~ 10^{…} Mpc^{-3}, corresponding to β ~ 10^{…} at formation [19]. From the first equation of Eq. (10) we find that this mass corresponds to N = 44, and the desired spectrum is realized for λ = 3.7×10^{…}, m = 6.1×10^{…}M_P, and v = 0.21532324M_P under the COBE normalization [10]. Another interesting possibility is to produce a tiny amount of PBHs
Fig. 3. Expected mass spectrum of PBHs in chaotic new inflation (Eq. (10)). Mass is displayed in units of the solar mass.
which are evaporating right now [20], with initial mass M ≈ 10^{15} g. With a current abundance Ω ≈ 10^{…}, or β ≈ 10^{…} at formation, they may explain a class of gamma-ray bursts [5]. In this case we need only a short period of slow-roll new inflation, N = 14. We find β = 2×10^{…} at the right mass scale if we choose λ = 3×10^{…}, m = 5×10^{…}M_P, and v = 0.16557604828M_P.
In summary, we have proposed a new double inflation model in which chaotic inflation is followed by new inflation, with a large density fluctuation generated at the beginning of the latter regime. We have applied it to the formation of PBHs and shown that we can choose values of the model parameters such that significant numbers of PBHs are produced on mass scales of astrophysical interest [23].
The author is grateful to Professor Andrei Linde for his hospitality at Stanford University, where this work was done. This work was partially supported by the Monbusho.
References
[1] Ya.B. Zel'dovich, I.D. Novikov, Sov. Astron. 10 (1967) 602; S.W. Hawking, Mon. Not. R. Astron. Soc. 152 (1971) 75.
[2] B.J. Carr, Astrophys. J. 201 (1975) 1.
[3] B.J. Carr, Astrophys. J. 206 (1976) 8; S. Miyama, K. Sato, Prog. Theor. Phys. 59 (1978) 1012; I.D. Novikov, A.G. Polnarev, A.A. Starobinsky, Ya.B. Zel'dovich, Astron. Astrophys. 80 (1979) 104.
[4] C. Alcock et al., Nature 365 (1993) 621; Phys. Rev. Lett. 74 (1995) 2867; Astrophys. J. 486 (1997) 697; E. Aubourg et al., Nature 365 (1993) 623; Astron. Astrophys. 301 (1995) 1.
[5] D. Cline, D.A. Sanders, W. Hong, Astrophys. J. 486 (1997) 169.
[6] A.H. Guth, Phys. Rev. D 23 (1981) 347; K. Sato, Mon. Not. R. Astron. Soc. 195 (1981) 467.
[7] A.D. Linde, Phys. Lett. B 108 (1982) 389; A. Albrecht, P.J. Steinhardt, Phys. Rev. Lett. 48 (1982) 1220.
[8] A.D. Linde, Phys. Lett. B 129 (1983) 177.
[9] S.W. Hawking, Phys. Lett. B 115 (1982) 295; A.A. Starobinsky, Phys. Lett. B 117 (1982) 175; A.H. Guth, S.-Y. Pi, Phys. Rev. Lett. 49 (1982) 1110.
[10] C.L. Bennett et al., Astrophys. J. Lett. 464 (1996) 1.
[11] L.F. Kofman, A.D. Linde, A.A. Starobinsky, Phys. Lett. B 157 (1985) 361; J. Silk, M.S. Turner, Phys. Rev. D 35 (1987) 419; D. Polarski, A.A. Starobinsky, Nucl. Phys. B 385 (1992) 623.
[12] J.H. Kung, R.H. Brandenberger, Phys. Rev. D 42 (1990) 1008.
[13] A.D. Linde, Phys. Lett. B 327 (1994) 208; A. Vilenkin, Phys. Rev. Lett. 72 (1994) 3137.
[14] A.A. Starobinsky, in: H.J. de Vega, N. Sanchez (Eds.), Field Theory, Quantum Gravity, and Strings, Lecture Notes in Physics, vol. 246, Springer, Berlin, 1986, p. 107.
[15] P. Ivanov, Phys. Rev. D 57 (1998) 7145.
[16] D.K. Nadezhin, I.D. Novikov, A.G. Polnarev, Sov. Astron. 22 (1978) 129.
[17] G.V. Bicknell, R.N. Henriksen, Astrophys. J. 232 (1979) 670.
[18] J. Yokoyama, Astron. Astrophys. 318 (1997) 673.
[19] E.L. Turner, Astron. J. 101 (1991) 5.
[20] S.W. Hawking, Nature 248 (1974) 30; Commun. Math. Phys. 43 (1975) 199.
[21] J. Yokoyama, Phys. Rev. D 58 (1998) 107502.
[22] J.C. Niemeyer, K. Jedamzik, Phys. Rev. Lett. 80 (1998) 5481.
[23] J. Yokoyama, Phys. Rev. D 58 (1998) 083510.
Physics Reports 307 (1998) 141—154
Cosmic rays from primordial black holes and constraints on the early universe B.J. Carr *, J.H. MacGibbon School of Mathematical Sciences, Queen Mary & Westfield College, Mile End Road, London E1 4NS, UK Code SN3, NASA Johnson Space Center, Houston, Texas 77058, USA
Abstract The constraints on the number of evaporating primordial black holes imposed by observations of the cosmological gamma-ray background do not exclude their making a significant contribution to the Galactic flux of cosmic ray photons, electrons, positrons and antiprotons. Even if this contribution is small, cosmic ray data place important limits on the number of evaporating black holes and thereby on models of the early Universe. Evaporating black holes are unlikely to be detectable in their final explosive phase unless new physics is invoked at the QCD phase transition. 1998 Published by Elsevier Science B.V. All rights reserved. PACS: 98.80.Cq; 97.60.Lf; 98.70.Vc Keywords: Cosmic rays; Primordial black holes; Early universe
1. Introduction

It is well known that primordial black holes (PBHs) could have formed in the early Universe [25,66]. A simple comparison of the cosmological density at time t with the density associated with a black hole shows that PBHs forming at time t would have of order the horizon mass:

M_H(t) ≈ c³t/G ≈ 10^{15} (t/10^{-23} s) g .  (1)
PBHs could thus span an enormous mass range: those formed at the Planck time (10^{-43} s) would have the Planck mass (10^{-5} g), whereas those formed at 1 s would be as large as 10^{5}M_⊙,
* Corresponding author. E-mail: [email protected].
B.J. Carr, J.H. MacGibbon / Physics Reports 307 (1998) 141—154
comparable to the mass of the holes thought to reside in galactic nuclei. PBHs would most naturally form from initial inhomogeneities, but they might also form through other mechanisms at a cosmological phase transition [8].
The realization that small PBHs might exist prompted Hawking to study their quantum properties. This led to his famous discovery [26] that black holes radiate thermally with a temperature of order 10^{-7}(M/M_⊙)^{-1} K and evaporate completely on a timescale of order 10^{64}(M/M_⊙)³ yr. Indeed, only black holes of primordial origin could be small enough for this effect to be important. Despite the conceptual importance of this result, it was bad news for PBH enthusiasts, since PBHs with a mass of 10^{15} g, which evaporate at the present epoch, would have a temperature of order 100 MeV, and the observational limit on the γ-ray background density at 100 MeV immediately implied that the density of such holes could not exceed 10^{-8} times the critical density [53]. Not only did this render PBHs unlikely dark matter candidates, it also implied that there was little chance of detecting black hole explosions at the present epoch [55].
Despite this conclusion, it was realized that PBH evaporations could still have interesting cosmological consequences. In particular, they might generate the microwave background [67] or modify the standard cosmological nucleosynthesis scenario [39,48,50,59] or contribute to the cosmic baryon asymmetry [3]. There was also interest in whether PBH evaporations could account for the electrons and positrons observed in cosmic rays [9], the annihilation-line radiation coming from the Galactic centre [51], or the unexpectedly high fraction of antiprotons in cosmic rays [35,62]. Renewed efforts were also made to look for black hole explosions after the realization that, due to the interstellar magnetic field, these might appear as radio rather than γ-ray bursts [58].
Even if PBHs had none of these consequences, studying such effects leads to strong upper limits on how many PBHs could ever have formed, and thereby constrains models of the early Universe [10]. Later many of these calculations had to be modified when it was realized that the usual assumption, namely that particles are emitted with a black-body spectrum as soon as the temperature of the hole exceeds their rest mass, is too simplistic. If one adopts the conventional view that all particles are composed of a small number of fundamental point-like constituents (quarks and leptons), it is natural to assume that it is these fundamental particles, rather than the composite ones, which are emitted directly once the temperature exceeds the QCD confinement scale of 250 MeV. One can therefore envisage a black hole as emitting relativistic quark and gluon jets which subsequently fragment into hadrons. On a somewhat longer timescale these hadrons themselves decay into photons, neutrinos, gravitons, electrons, positrons, protons and antiprotons. The results of such a calculation are very different from those of a simple direct emission calculation [41,42].
In an earlier paper we considered how these effects modify the cosmological consequences of evaporating PBHs and, in particular, their contribution to cosmic rays ([43]; MC). However, in recent years much more cosmic ray data have accrued, so it is timely to update these calculations. The results are described more fully elsewhere [12].
The plan of the present paper is as follows. In Section 2 we show why limits on the number of PBHs constrain models of the early Universe. In Section 3 we review the physics of black hole evaporation, with particular emphasis on processes above the QCD temperature. In Section 4 we make a detailed comparison with the cosmic ray data. In Section 5 we consider the possibility of detecting PBH explosions, and in Section 6 we draw some general conclusions.
2. Constraints on the early Universe

One of the most important reasons for studying the cosmological effects of PBHs is that it enables one to place limits on the spectrum of density fluctuations in the early Universe. This is because, if the PBHs form directly from density perturbations, the fraction of regions undergoing collapse at any epoch is determined by the root-mean-square amplitude ε of the fluctuations entering the horizon at that epoch and the equation of state p = γρ (0 < γ < 1). One usually expects a radiation equation of state (γ = 1/3) in the early Universe. In order to collapse against the pressure, an overdense region must be larger than the Jeans length at maximum expansion, which is just √γ times the horizon size. This implies that the density fluctuation must exceed γ at the horizon epoch, so, providing the fluctuations have a Gaussian distribution and are spherically symmetric, one can infer that the fraction of regions of mass M which collapse is [8]

β(M) ~ ε(M) exp[ −γ²/(2ε(M)²) ] ,  (2)
where ε(M) is the value of ε when the horizon mass is M. The PBHs can have an extended mass spectrum only if the fluctuations are scale-invariant (i.e. with ε independent of M).
In some situations Eq. (2) would fail. For example, PBHs would form more easily if the equation of state of the Universe were ever soft (γ ≪ 1). This might apply if there was a phase transition which channelled the mass of the Universe into non-relativistic particles or which temporarily reduced the pressure. In particular, this might happen at the quark-hadron era [34]. In this case, only those regions which are sufficiently spherically symmetric at maximum expansion can undergo collapse; the dependence of β on ε would then be weaker than indicated by Eq. (2), but there would still be a unique relationship between the two parameters [36].
The fluctuations required to make the PBHs may either be primordial or they may arise spontaneously at some epoch. One natural source of fluctuations would be inflation [37,49] and, in this context, ε(M) depends implicitly on the inflationary potential [11,13,21,23,33,57,64]. Recently Bullock and Primack [6] have questioned the Gaussian assumption in the inflationary context, so that Eq. (2) may not apply, but they still find that β depends very sensitively on ε. Some formation mechanisms for PBHs do not depend on having primordial fluctuations at all. For example, at any spontaneously broken symmetry epoch, PBHs might form through the collisions of bubbles of broken symmetry [17,29,38]. PBHs might also form spontaneously through the collapse of cosmic strings [7,22,28,44,54]. In these cases β(M) depends not on ε(M) but on other cosmological parameters, such as the bubble formation rate or the string mass per unit length.
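The exponential sensitivity in Eq. (2) is easy to exhibit numerically; here γ = 1/3 for radiation and the amplitudes are illustrative:

```python
import math

# Eq. (2): collapsing fraction for Gaussian fluctuations of rms amplitude
# eps with collapse threshold gamma (gamma = 1/3 for radiation):
#   beta(M) ~ eps * exp(-gamma^2 / (2 * eps^2))

def collapse_fraction(eps, gamma=1/3):
    return eps * math.exp(-gamma**2 / (2 * eps**2))

# Tripling eps changes beta by over fifty orders of magnitude:
print(collapse_fraction(0.02), collapse_fraction(0.06))
```

This is why any limit on the PBH abundance translates into a sharp upper limit on ε(M), as shown later in Fig. 2.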
In all these scenarios, the current density parameter Ω_PBH associated with PBHs which form at a redshift z or time t is related to β by

Ω_PBH = βΩ_R(1 + z) ≈ 10^{6}β (t/s)^{-1/2} ≈ 10^{18}β (M/10^{15} g)^{-1/2} ,  (3)
where Ω_R ≈ 10^{-4} is the density parameter of the microwave background and we have used Eq. (1). The (1 + z) factor arises because the radiation density scales as (1 + z)⁴, whereas the PBH density scales as (1 + z)³. Any limit on Ω_PBH(M) therefore places a tight constraint on β(M), and the constraints are summarized in Fig. 1. The constraint for non-evaporating mass ranges above 10^{15} g comes from
B.J. Carr, J.H. MacGibbon / Physics Reports 307 (1998) 141—154
Fig. 1. Constraints on β(M).
requiring Ω_PBH < 1. Much stronger constraints are associated with any PBHs which were small enough to have evaporated by now. For example, the constraints in the lowest mass range assume that evaporating PBHs leave stable Planck-mass relics, in which case these relics are required to have less than the critical density [4,13,40]. The constraints are discussed in detail by Carr et al. [13] but here we just wish to emphasize that the strongest one is the cosmic ray limit associated with PBHs evaporating currently. The constraints on β(M) can be converted into constraints on ε(M) using Eq. (2) and these are shown in Fig. 2.
3. Evaporation of primordial black holes

A black hole of mass M will emit particles in the energy range (Q, Q + dQ) at a rate [27]

dṄ = (Γ_s/2πħ) [exp(Q/kT) ± 1]⁻¹ dQ ,  (4)
where T is the black hole temperature, Γ_s is the absorption probability and the + and − signs refer to fermions and bosons respectively. This assumes that the hole has no charge or angular momentum. This is a reasonable assumption since charge and angular momentum will also be lost through quantum emission but on a shorter timescale than the mass [52]. Γ_s goes roughly like
Fig. 2. Constraints on ε(M).
Q²T⁻², though it also depends on the spin of the particle and decreases with increasing spin, so a black hole radiates almost like a black body. The temperature is given by

T ≈ 10²⁶ (M/1 g)⁻¹ K ≈ (M/10¹³ g)⁻¹ GeV .  (5)
This means that it loses mass at a rate

Ṁ = −5×10²⁵ (M/1 g)⁻² f(M) g s⁻¹ ,  (6)

where the factor f(M) depends on the number of particle species which are light enough to be emitted by a hole of mass M, so the lifetime is

τ(M) = 6×10⁻²⁷ f(M)⁻¹ (M/1 g)³ s .  (7)
The factor f is normalized to be 1 for holes larger than 10¹⁷ g; such holes are only able to emit "massless" particles like photons, neutrinos and gravitons. Holes in the mass range 5×10¹⁴ g < M < 10¹⁷ g are also able to emit electrons, while those in the range 10¹⁴ g < M < 5×10¹⁴ g emit muons which subsequently decay into electrons and neutrinos. The latter range includes, in particular, the critical mass for which τ equals the age of the Universe. This can be shown to be M* = 4.4×10¹⁴ h^(-0.3) g, where h is the Hubble parameter in units of 100 km s⁻¹ Mpc⁻¹ and we have assumed that the total density parameter is 1 [42].
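The evaporation formulae can be checked numerically; the sketch below takes f(M) = 1 throughout, which is only approximate near M*, where electron and muon emission raise f somewhat:

```python
# Order-of-magnitude check of Eqs (5)-(7): temperature, mass-loss rate and
# lifetime of a black hole of mass M (in grams), with f(M) = 1 assumed.

def temperature_GeV(M):            # Eq. (5)
    return (M / 1e13) ** -1

def mass_loss_g_per_s(M, f=1.0):   # Eq. (6)
    return -5e25 * M ** -2 * f

def lifetime_s(M, f=1.0):          # Eq. (7)
    return 6e-27 * M ** 3 / f

# A hole of roughly the critical mass 4.4e14 g should live about the age
# of the Universe (a few times 1e17 s):
print(lifetime_s(4.4e14))     # ~5e17 s
print(temperature_GeV(4.4e14))  # ~20 MeV, below the QCD scale
```

Note the self-consistency: the quoted critical mass indeed gives a lifetime of order the present age of the Universe.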
Once M falls below 10¹⁴ g, a black hole can also begin to emit hadrons. However, hadrons are composite particles made up of quarks held together by gluons. For temperatures exceeding the QCD confinement scale of Λ_QCD = 250-300 MeV, one would therefore expect these fundamental particles to be emitted rather than composite particles. Only pions would be light enough to be emitted below Λ_QCD. Since there are 12 quark degrees of freedom per flavour and 16 gluon degrees of freedom, one would also expect the emission rate (i.e. the value of f) to increase dramatically once the QCD temperature is reached. The physics of quark and gluon emission from black holes is simplified by a number of factors. Firstly, since the spectrum peaks at an energy of about 5T, Eq. (5) implies that most of the emitted particles have a wavelength λ ≈ 2.5 M (in units with G = c = ħ = k = 1), so they have a size comparable to the hole. Secondly, one can show that the time between emissions is Δτ ≈ 20λ, which means that short-range interactions between successively emitted particles can be neglected. Thirdly, the condition T > Λ_QCD implies that Δτ is much less than Λ_QCD⁻¹ ≈ 10⁻¹³ cm (the characteristic strong interaction range) and this means that the particles are also unaffected by gluon interactions. The implication of these three conditions is that one can regard the black hole as emitting quark and gluon jets of the kind produced in collider events. The jets will decay into hadrons over a distance which is always much larger than M, so gravitational effects can be neglected. The hadrons may then generate other particles through weak and electromagnetic decays. To find the final spectra of stable particles emitted from a black hole, one must convolve the Hawking emission spectrum given by Eq. (4) with the jet fragmentation function. The fragmentation function has an upper cut-off at Q, a lower cut-off and peak around the hadron mass, and an E⁻¹ bremsstrahlung tail in between.
The convolution then gives the instantaneous emission spectrum shown in Fig. 3 for a T = 1 GeV black hole [42]. The direct emission just corresponds to the small bumps on the right. All the particle spectra show a peak at 100 MeV due to pion decays; the electrons and neutrinos also have peaks at 1 MeV due to neutron decays.
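The convolution described above can be illustrated schematically. The fragmentation function below is a crude 1/E toy model with illustrative cut-offs, not the MacGibbon-Webber fragmentation functions themselves; only the qualitative low-energy pile-up is meaningful:

```python
import math

# Schematic convolution of the direct Hawking spectrum with a toy jet
# fragmentation function. Overall constants (2*pi, Gamma_s) are dropped;
# all shapes and cut-offs here are illustrative assumptions.

def direct_spectrum(Q, T=1.0):
    """Bosonic Hawking flux shape ~ [exp(Q/T) - 1]**-1, Q > 0."""
    return 1.0 / math.expm1(Q / T)

def fragmentation(E, Q, m_h=0.1):
    """Toy dN/dE ~ 1/E between a hadron-mass cut-off m_h and the jet energy Q."""
    return 1.0 / E if m_h < E < Q else 0.0

def final_spectrum(E, T=1.0, n=200):
    """Sum the direct flux over jet energies Q, weighted by fragmentation."""
    dQ = 10.0 * T / n
    total = 0.0
    for i in range(1, n + 1):
        Q = i * dQ                       # jet energies up to 10 T
        total += direct_spectrum(Q, T) * fragmentation(E, Q) * dQ
    return total

# Fragmentation products dominate over direct emission at low energies:
print(final_spectrum(0.2) > direct_spectrum(0.2))  # True
```

Even this toy model reproduces the qualitative feature of Fig. 3: the direct emission survives only as the high-energy bumps, while jet decay dominates below the spectral peak.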
4. Cosmic rays from PBHs

In order to determine the present-day background spectrum of particles generated by PBH evaporations, one must first integrate over the lifetime of each hole of mass M and then over the PBH mass spectrum [41]. In doing so, one must allow for the fact that smaller holes will evaporate at an earlier cosmological epoch, so the particles they generate will be redshifted in energy by the present epoch. If the holes are uniformly distributed throughout the Universe, the background spectra should have the form indicated in Fig. 4. All the spectra have rather similar shapes: an E⁻³ fall-off for E > 100 MeV due to the final phases of evaporation at the present epoch and an E⁻¹ tail for E < 100 MeV due to the fragmentation of jets produced at the present and earlier epochs. Note that the E⁻¹ tail generally masks any effect associated with the PBH mass spectrum (cf. [9]). The situation is more complicated if the PBHs evaporating at the present epoch are clustered inside our own Galactic halo (as is most likely). In this case, any charged particles emitted after the epoch of galaxy formation (i.e. from PBHs only somewhat smaller than M*) will have their flux enhanced relative to the photon spectra by a factor ξ which depends upon the halo concentration factor and the time for which particles are trapped inside the halo by the Galactic magnetic field.
Fig. 3. Instantaneous emission from a T = 1 GeV black hole.
MC assume that the particles are uniformly distributed throughout a halo of radius R_h and infer

ξ = (τ_leak/τ_gal)(ρ_h/ρ_mean) ≈ 10⁶ h⁻² (τ_leak/τ_gal)(R_h/10 kpc)⁻³ (Ω_h/0.1)⁻¹ ,  (8)

where ρ_h is the halo density, ρ_mean is the mean cosmological density and Ω_h is the density parameter associated with halos. The ratio of the leakage time τ_leak to the age of the Galaxy τ_gal is rather uncertain and also energy-dependent. At 100 MeV we take τ_leak to be about 10⁷ yr for electrons or positrons (ξ ~ 10³) and 10⁸ yr for protons or antiprotons (ξ ~ 10⁴). The postgalactic contribution of charged particles is shown in Fig. 5. For comparison with the observed cosmic ray spectra, one needs to determine the amplitude of the spectra at 100 MeV. This is because the observed fluxes all have slopes between E⁻² and E⁻³, so the strongest constraints come from measurements at 100 MeV. The amplitudes all scale with Ω_PBH and are found to be (MC)

dF/dE = 1.5×10⁴ h² Ω_PBH GeV⁻¹ cm⁻² s⁻¹ sr⁻¹  (γ) ,
dF/dE = 9.5×10⁴ h² Ω_PBH (ξ/10³) GeV⁻¹ cm⁻² s⁻¹ sr⁻¹  (e⁺, e⁻) ,   (9)
dF/dE = 4.5×10⁴ h² Ω_PBH (ξ/10⁴) GeV⁻¹ cm⁻² s⁻¹ sr⁻¹  (p, p̄) .

We now use the observed cosmic ray spectra to constrain Ω_PBH.
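To illustrate how such amplitudes translate into constraints, the sketch below combines the reconstructed 100 MeV photon amplitude of Eq. (9) (treated here as an assumption) with an EGRET-level background flux:

```python
# How a 100 MeV amplitude translates into a limit on Omega_PBH.
# A_gamma is the photon amplitude as reconstructed in Eq. (9); treat it
# as an assumption of this sketch rather than a definitive number.

A_gamma = 1.5e4    # dF/dE(100 MeV) = A_gamma * h**2 * Omega_PBH  [GeV^-1 cm^-2 s^-1 sr^-1]
observed = 7.3e-5  # EGRET-level background at 100 MeV, same units

omega_limit_h2 = observed / A_gamma   # limit on h**2 * Omega_PBH
print(omega_limit_h2)                 # ~5e-9
```

The result is of order 10⁻⁹, consistent with the γ-ray background limits derived in Section 4.1.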
4.1. Gamma-rays

Our earlier γ-ray background constraint was based on a comparison with the observations of Fichtel et al. [19]:

dF_γ/dE = 1.1(±0.2)×10⁻⁴ (E/100 MeV)^(-2.4) GeV⁻¹ cm⁻² s⁻¹ sr⁻¹  (10)
between 35 and 175 MeV; this led to an upper limit (MC)

Ω_PBH ≤ (7.6±2.6)×10⁻⁹ h⁻² .  (11)

Indeed the comparison suggested that PBH emission might even be the dominant contribution above 50 MeV. However, more recent EGRET observations [61] give a background of
dF_γ/dE = 7.3(±0.7)×10⁻⁵ (E/100 MeV)^(-2.10) GeV⁻¹ cm⁻² s⁻¹ sr⁻¹  (12)
between 30 MeV and 120 GeV. This leads to a slightly stronger upper limit

Ω_PBH ≤ (5.1±1.3)×10⁻⁹ h⁻²  (13)

and the form of the spectrum no longer suggests that PBHs provide the dominant contribution. If PBHs are clustered inside our own Galactic halo, then there should also be a Galactic γ-ray background and, since this would be anisotropic, it should be separable from the extragalactic background. Wright [63] has shown that the ratio of the anisotropic to isotropic intensity is
I_halo/I_iso = [3 j_h(R_0) H_0 R_c / (4 c j_bg)] g(l, b, R_c/R_0, q) ,  (14)

Fig. 4. Spectrum of particles from uniformly distributed PBHs.
Fig. 5. Spectrum of charged particles from PBHs clustered in our own halo (MC).
where j_h and j_bg are the halo and background emissivities. The function g depends on Galactic longitude (l) and latitude (b), the ratio of the core radius (R_c) to our Galactocentric radius (R_0), and the halo flattening (q). A detailed fit to the EGRET data, subtracting various other known components, gives

3 j_h(R_0) H_0 R_c / (4 c j_bg) = 0.4-2.5 .  (15)
Note that this assumes the isotropic intensity given by Fichtel et al. [19]; replacing this with the Sreekumar et al. [61] intensity increases the ratio by a factor 1.5. Eq. (15) requires the PBH clustering factor to be (2-12)×10⁵ h⁻¹. This is comparable with the expected local density enhancement, given by Eq. (8) without the (τ_leak/τ_gal) factor, providing Ω_h is in the range 0.04 to 0.2, which is plausible. Recently Dixon [18] has also claimed to have detected diffuse halo emission with EGRET.

4.2. Electrons and positrons

There is now extensive data on the spectra of cosmic ray electrons and positrons between 100 MeV and 100 GeV. The positron fraction in this range has been summarized by Barwick et al. [2] and is shown in Fig. 6. In the simplest picture all positrons, together with an equal number of electrons, are secondary particles which are generated through the decay of pions created in the collisions between protons and interstellar matter. The remaining electrons are supposed to be produced by primary cosmic ray sources like supernovae. However, it is not clear that measurements of the positron fraction support this picture. Between 5 and 10 GeV there seems to be an
Fig. 6. Data on positron fraction from Barwick et al. [2].
increase in the positron fraction, in contrast with the predicted decrease of the “leaky box” model. Below 500 MeV (more relevant to the PBH scenario), the measured positron fraction is 0.3, while the “leaky box” model prediction is 0.1. One may therefore need to invoke a primary source of electrons and positrons. PBH evaporations are one such source since they naturally emit electrons and positrons in equal numbers. MC calculated the PBH density required to explain the interstellar positron flux at 300 MeV inferred by Ramaty and Westergaard [56]. However, they used a rather simplistic model in which the positrons were assumed to be spread uniformly throughout the Galactic halo. It is probably more appropriate to assume that the positrons come from PBHs within a few kiloparsecs, in which case the limit becomes
Ω_PBH ≈ (1.7-2.1)×10⁻⁹ (τ_leak/10⁷ yr)⁻¹ (Ω_h/0.1)⁻¹ .  (16)
This is comparable with the γ-ray limit (13). An updated version of this limit, incorporating the new data shown in Fig. 6, is given by Carr and MacGibbon [12]. However, it must be stressed that the inconsistencies of the "leaky box" model may just reflect inadequacies in the propagation model or insufficient allowance for solar modulation effects (which depend on the sign of the charge). One should therefore not interpret Eq. (16) as positive evidence for PBHs.

4.3. Antiprotons

Since the ratio of antiprotons to protons in cosmic rays is less than 10⁻⁴ over the energy range 100 MeV-10 GeV, whereas PBHs should produce them in equal numbers, PBHs could only contribute appreciably to the antiprotons. It is usually assumed that the observed antiproton cosmic rays are secondary particles, produced by spallation of the interstellar medium by primary
Fig. 7. Comparison of PBH emission and antiproton data from Maki et al. [46].
cosmic rays. However, the spectrum of secondary antiprotons should show a steep cut-off at kinetic energies below 2 GeV, whereas the spectrum of PBH antiprotons should increase down to 0.2 GeV, so this provides a distinct signature [35]. MC calculated the PBH density required to explain the interstellar antiproton flux at 1 GeV estimated by Ip and Axford [32]. However, this calculation is prone to the same criticism (regarding the assumed cosmic ray distribution) as their positron one. Making the equivalent correction gives a limit
Ω_PBH ≈ (1.6-3)×10⁻⁹ (τ_leak/10⁸ yr)⁻¹ (Ω_h/0.1)⁻¹ ,  (17)
which is somewhat stronger than the γ-ray limit. More recent data on the antiproton flux below 0.5 GeV come from the BESS balloon experiment [65], and Maki et al. [46] have tried to fit these data in the PBH scenario. They model the Galaxy as a cylindrical diffusing halo of diameter 40 kpc and thickness 4-8 kpc and then use Monte Carlo simulations of cosmic ray propagation. In contrast to MC, they find that most of the antiprotons come from PBHs within a few kiloparsecs of the solar neighbourhood. A comparison with the data in Fig. 7 shows that there is no positive evidence for PBHs (i.e. there is no tendency for the antiproton fraction to tend to 0.5 at low energies), and they require the fraction of the local halo density in PBHs to be less than 3×10⁻⁸, which is stronger than the γ-ray background limit. However, Maki et al. do not allow for the fact that solar modulation will affect protons and antiprotons differently. Mitsui et al. [47] have pointed out that a key test of the PBH hypothesis will arise during the solar minimum period. This is because the flux of primary antiprotons should be enhanced then, while that of the secondary antiprotons should be little affected.
5. PBH explosions

One of the most striking observational consequences of PBH evaporations would be their final explosive phase. However, in the standard particle physics picture, where the number of elementary particle species never exceeds around 100, the likelihood of detecting such explosions is very low. Indeed, in this case, observations only place an upper limit on the explosion rate of R < 5×10⁸ pc⁻³ yr⁻¹ [1,60]. This compares to Wright's γ-ray halo limit of R < 0.3 pc⁻³ yr⁻¹ and the Maki et al. antiproton limit of R < 0.02 pc⁻³ yr⁻¹. However, the physics at the QCD phase transition is still uncertain and the prospects of detecting explosions would be improved in less conventional particle physics models. For example, in a Hagedorn-type picture, where the number of particle species exponentiates at the quark-hadron temperature, the limit is strengthened to R < 0.05 pc⁻³ yr⁻¹ [20]. Cline and colleagues have argued that one might expect the formation of a QCD fireball at this temperature [14] and this might even explain some of the short-period (100 ms) γ-ray bursts observed by BATSE [15]. Although this proposal is speculative, it has the attraction of making testable predictions (e.g. the hardness ratio should increase as the duration of the burst decreases and the spatial distribution should be Euclidean since the bursts are local). A rather different way of producing a γ-ray burst is to assume that the outgoing charged particles form a plasma due to turbulent magnetic field effects at sufficiently high temperatures [5]. Some people have emphasized the possibility of detecting very high energy cosmic rays from PBHs using air shower techniques [16,24]. However, recently these efforts have been set back by the claim of Heckler [30] that QED interactions could produce an optically thick photosphere once the black hole temperature exceeds T_crit = 45 GeV.
In this case, the mean photon energy is reduced to m_e (T_BH/T_crit)^(1/2), which is well below T_BH, so the number of high-energy photons is much reduced. He has proposed that a similar effect may operate at even lower temperatures due to QCD effects [31]. However, these arguments should not be regarded as definitive: MacGibbon et al. [45] claim that Heckler has not included Lorentz factors correctly in going from the black hole frame to the centre-of-mass frame of the interacting particles; in their calculation QED interactions are never important.
6. Conclusions

We have seen that PBH evaporations could contribute significantly to the Galactic flux of cosmic-ray photons, electrons, positrons and antiprotons at energies of around 100 MeV. Indeed it is striking that the PBH densities required to explain the fluxes of these particles are all comparable with the γ-ray background limits. On the other hand, the evidence that PBH evaporations produce cosmic rays is far from conclusive: there is some uncertainty in interpreting the charged particle data (e.g. due to solar modulation effects) and there are anyway other sources of primary cosmic rays at these energies (e.g. decaying WIMPs). Therefore the most conservative approach is to use the cosmic ray data to constrain the number of evaporating PBHs, and we have seen that this in turn places important constraints on models of the early Universe (including inflationary scenarios). PBHs therefore provide a unique probe of the earliest moments of the Big Bang and even their non-existence provides vital cosmological information.
References

[1] D.E. Alexandreas et al., Phys. Rev. Lett. 71 (1993) 2524.
[2] S.W. Barwick et al., Phys. Rev. Lett. 75 (1995) 390.
[3] J.D. Barrow, Mon. Not. R. Astron. Soc. 192 (1980) 427.
[4] J.D. Barrow, E.J. Copeland, A.R. Liddle, Phys. Rev. D 46 (1992) 645.
[5] A.A. Belyanin et al., Preprint, 1997.
[6] J.S. Bullock, J.R. Primack, Phys. Rev. D 55 (1997) 7423.
[7] R. Caldwell, P. Casper, Phys. Rev. D 53 (1996) 3002.
[8] B.J. Carr, Astrophys. J. 201 (1975) 1.
[9] B.J. Carr, Astrophys. J. 206 (1976) 8.
[10] B.J. Carr, in: J.L. Sanz, L.J. Goicoechea (Eds.), Observational and Theoretical Aspects of Relativistic Astrophysics and Cosmology, World Scientific, Singapore, 1985, p. 1.
[11] B.J. Carr, J.E. Lidsey, Phys. Rev. D 48 (1993) 543.
[12] B.J. Carr, J.H. MacGibbon, Preprint, 1998.
[13] B.J. Carr, J.H. Gilbert, J.E. Lidsey, Phys. Rev. D 50 (1994) 4853.
[14] D.B. Cline, W. Hong, Astrophys. J. Lett. 401 (1992) L57.
[15] D.B. Cline, D.A. Sanders, W. Hong, Astrophys. J. 486 (1997) 169.
[16] D.G. Coyne, C. Sinnis, R. Somerville, in: Proc. Houston Advanced Research Center Conf. on Black Holes, 1992.
[17] M. Crawford, D.N. Schramm, Nature 298 (1982) 538.
[18] D. Dixon, New Astronomy 3 (1998) 539.
[19] C.E. Fichtel et al., Astrophys. J. 198 (1975) 163.
[20] C.E. Fichtel et al., Astrophys. J. 434 (1994) 557.
[21] J. Garcia-Bellido, A. Linde, D. Wands, Phys. Rev. D 54 (1996) 6040.
[22] J. Garriga, M. Sakellariadou, Phys. Rev. D 48 (1993) 2502.
[23] A.M. Green, A.R. Liddle, Phys. Rev. D 56 (1997) 6166.
[24] F. Halzen, E. Zas, J.H. MacGibbon, T.C. Weekes, Nature 353 (1991) 807.
[25] S.W. Hawking, Mon. Not. R. Astron. Soc. 152 (1971) 75.
[26] S.W. Hawking, Nature 248 (1974) 30.
[27] S.W. Hawking, Comm. Math. Phys. 43 (1975) 199.
[28] S.W. Hawking, Phys. Lett. B 231 (1989) 237.
[29] S.W. Hawking, I. Moss, J. Stewart, Phys. Rev. D 26 (1982) 2681.
[30] A. Heckler, Phys. Rev. D 55 (1997) 480.
[31] A. Heckler, Phys. Rev. Lett. 78 (1997) 3430.
[32] W.H. Ip, W.I. Axford, Astron. Astrophys. 149 (1985) 7.
[33] P. Ivanov, P. Naselsky, I. Novikov, Phys. Rev. D 50 (1994) 7173.
[34] K. Jedamzik, Phys. Rev. D 55 (1997) R5871.
[35] P. Kiraly et al., Nature 293 (1981) 120.
[36] M.Yu. Khlopov, A.G. Polnarev, Phys. Lett. B 97 (1980) 383.
[37] M.Yu. Khlopov, B.E. Malomed, Ya.B. Zeldovich, Mon. Not. R. Astron. Soc. 215 (1985) 575.
[38] D. La, P.J. Steinhardt, Phys. Lett. B 220 (1989) 375.
[39] D. Lindley, Mon. Not. R. Astron. Soc. 196 (1980) 317.
[40] J.H. MacGibbon, Nature 329 (1987) 308.
[41] J.H. MacGibbon, Phys. Rev. D 44 (1991) 376.
[42] J.H. MacGibbon, B.R. Webber, Phys. Rev. D 41 (1990) 3052.
[43] J.H. MacGibbon, B.J. Carr, Astrophys. J. 371 (1991) 447.
[44] J.H. MacGibbon, R.H. Brandenberger, U.F. Wichoski, Preprint, 1998.
[45] J.H. MacGibbon, B.J. Carr, D.N. Page, Preprint, 1998.
[46] K. Maki, T. Mitsui, S. Orito, Phys. Rev. Lett. 76 (1996) 3474.
[47] T. Mitsui et al., Preprint, 1997.
[48] S. Miyama, K. Sato, Prog. Theor. Phys. 59 (1978) 1012.
[49] P.D. Naselsky, A.G. Polnarev, Sov. Astron. 29 (1985) 487.
[50] I.D. Novikov, A.G. Polnarev, A.A. Starobinsky, Ya.B. Zeldovich, Astron. Astrophys. 80 (1979) 104.
[51] P.N. Okeke, M.J. Rees, Astron. Astrophys. 81 (1980) 263.
[52] D.N. Page, Phys. Rev. D 16 (1977) 2402.
[53] D.N. Page, S.W. Hawking, Astrophys. J. 206 (1976) 1.
[54] A.G. Polnarev, R. Zembowicz, Phys. Rev. D 43 (1991) 1106.
[55] N.A. Porter, T.C. Weekes, Nature 277 (1979) 199.
[56] R. Ramaty, N.J. Westergaard, Astrophys. Space Sci. 45 (1976) 143.
[57] L. Randall, M. Soljacic, A.H. Guth, Nucl. Phys. B 472 (1996) 377.
[58] M.J. Rees, Nature 266 (1977) 333.
[59] T. Rothman, R. Matzner, Astrophys. Space Sci. 75 (1981) 229.
[60] D.V. Semikoz, Astrophys. J. 436 (1994) 254.
[61] P. Sreekumar et al., Astrophys. J. 494 (1998) 523.
[62] M.S. Turner, Nature 297 (1982) 379.
[63] E.L. Wright, Astrophys. J. 459 (1996) 487.
[64] J. Yokoyama, Astron. Astrophys. 318 (1997) 673.
[65] K. Yoshimura et al., Phys. Rev. Lett. 75 (1995) 3792.
[66] Ya.B. Zeldovich, I.D. Novikov, Sov. Astron. 10 (1967) 602.
[67] Ya.B. Zeldovich, A.A. Starobinsky, JETP Lett. 24 (1976) 571.
Physics Reports 307 (1998) 155—162
Could MACHOs be primordial black holes formed during the QCD epoch? Karsten Jedamzik* Max-Planck-Institut für Astrophysik, 85740 Garching bei München, Germany
Abstract

Observations by the MACHO collaboration indicate that a significant fraction of the galactic halo dark matter may be in the form of compact objects with masses M ~ 0.5 M_⊙. Identification of these objects as red or white dwarfs is problematic due to stringent observational upper limits on such dwarf populations. Primordial black hole (PBH) formation from pre-existing density fluctuations is facilitated during the cosmic QCD transition due to a significant decrease in pressure forces. For generic initial density perturbation spectra this implies that essentially all PBHs may form with masses close to the QCD-horizon scale, M_QCD ~ 1 M_⊙. It is possible that such QCD PBHs contribute significantly to the closure density today. I discuss the status of theoretical predictions for the properties of QCD PBH dark matter. Observational signatures of and constraints on a cosmic solar-mass PBH population are also discussed. 1998 Elsevier Science B.V. All rights reserved.

PACS: 95.35.+d
1. PBH formation during the QCD epoch

It has long been known that only moderate deviations from homogeneity in the early universe may lead to abundant production of PBHs from radiation [1]. For a radiation equation of state (i.e. p = ρ/3, where p is the pressure and ρ the energy density) there is approximate equality between the cosmic Jeans mass, M_J, and horizon mass, M_H. The ultimate fate of an initially super-horizon density fluctuation, upon horizon crossing, is therefore determined by a competition between dispersing pressure forces and the fluctuation's self-gravity. For fluctuation overdensities exceeding a critical threshold at horizon crossing, (δρ/ρ) ≥ δ_c^RD ≈ 0.7 [2], formation of a PBH with mass M_BH ~ M_H results. The universe must have passed through a color-confinement quantum chromodynamics (QCD) transition at cosmic temperature T ≈ 100 MeV. Recent lattice gauge simulations indicate that the

* E-mail: [email protected].
K. Jedamzik / Physics Reports 307 (1998) 155—162
transition between a high-temperature quark-gluon phase and a low-temperature hadron phase may be of first order [3], even though such simulations are still plagued by limited resolution and problems in accounting for finite strange quark mass. A first-order phase transition is characterized by the coexistence of high- and low-temperature phase at the coexistence temperature T_c. Both phases may exist in pressure equilibrium, p_qg = p_h, but with different and constant (at T_c) energy densities, ρ_qg − ρ_h = L, where L is the latent heat. During phase coexistence adiabatic expansion of the universe causes a continuous growth of the volume fraction occupied by hadron phase (1 − f_qg), at the expense of quark-gluon phase, such that through the release of latent heat the universe is kept at T_c. The transition is completed when all space is occupied by hadron phase. Consider a volume element of mixed quark-gluon and hadron phases during phase coexistence. Provided a typical length scale of the volume element is much larger than the mean separation between quark-gluon and hadron phases (i.e. the mean hadron- or quark-gluon-bubble separation, l_sep), one may regard the volume element as approximately homogeneous. The average energy density of the volume element is ⟨ρ⟩ = ρ_h + f_qg L and continuously varies with the change of f_qg, whereas pressure remains constant, p = p_h. Upon adiabatic compression of mixed quark-gluon/hadron phases there is therefore no pressure response, v_s = (∂p/∂⟨ρ⟩)_s^(1/2) = 0 [4]. Of course, the pressure response may only vanish if thermodynamic equilibrium is maintained. For rapid compression time scales or small compression amplitudes this may not be the case, whereas it is anticipated that approximate thermodynamic equilibrium applies over a Hubble time and order-unity compression factors. During phase coexistence the universe is effectively unstable to gravitational collapse for all scales exceeding l_sep. Note that a vanishing of v_s, which was independently discovered by Schmid et al.
[5], may also have interesting non-gravitational effects on density perturbations. These considerations have led me [4] to propose PBHs formed during the QCD epoch from pre-existing, initially superhorizon density fluctuations, such as those left over from an early inflationary period of the universe, as a candidate for non-baryonic dark matter. Fluctuations crossing into the horizon during the QCD epoch experience a significant reduction of pressure forces over that regime of the fluctuation which exists in mixed phase. Since the PBH formation process is a competition between self-gravity and pressure forces, and v_s = 1/√3 is constant during most other radiation-dominated epochs, the threshold for PBH formation should be smaller during the QCD epoch than during other early eras, δ_c^QCD < δ_c^RD. Only a slight favor for PBH formation during the QCD epoch may effectively lead to the production of PBHs on only approximately the QCD horizon mass scale, M_QCD ≈ 2 M_⊙ (T_c/100 MeV)⁻². This holds true for strongly declining probability distribution functions for the pre-existing fluctuation overdensities. For example, assuming Gaussian statistics, PBH formation is dominated by δρ/ρ in the range between δ_c and δ_c + σ²/δ_c, where σ² is the variance of the Gaussian distribution. This range is very small, σ/δ_c ≲ 10⁻¹, if the PBH mass density is not to exceed the present closure density, Ω_PBH ≲ 1. For Ω_PBH ≈ 1, PBH formation during the QCD epoch is also a very rare event with only a fraction ~10⁻⁸ of horizon volumes collapsing to black holes. The possible production of PBHs during the QCD epoch is not a completely new suggestion. In fact, in the mid seventies it was believed that a QCD era was characterized by an ever-increasing production of massive hadronic resonances. Such a "soft" (i.e. almost pressureless) Hagedorn era was argued to be suspect since overproduction of primordial black holes seemed likely [6].
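The rarity of collapse for Gaussian statistics can be illustrated with the one-sided tail probability; the values of δ_c and σ below are illustrative choices, not fitted numbers:

```python
import math

# Fraction beta of horizon volumes with overdensity above the threshold
# delta_c, for a Gaussian distribution of standard deviation sigma -- a
# sketch of why QCD PBH formation is such a rare event.

def collapsing_fraction(delta_c, sigma):
    """One-sided Gaussian tail probability P(delta > delta_c)."""
    return 0.5 * math.erfc(delta_c / (math.sqrt(2) * sigma))

delta_c = 0.7
sigma = 0.125   # sigma/delta_c ~ 0.18, in the <~ 1e-1 range quoted above
print(collapsing_fraction(delta_c, sigma))   # ~1e-8
```

A modest change in σ moves this fraction by many orders of magnitude, which is the fine-tuning issue discussed in Section 2.3.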
In the eighties it was argued that the long-range color force could lead to the generation of subhorizon density fluctuations which in turn could collapse to planetary-sized PBHs [7]. Nevertheless, the
simple properties of mixed phase during a cosmic first-order transition and their possible implications for PBH formation on the QCD horizon mass scale have so far been overlooked. Currently there are two groups attempting to simulate the PBH formation process during a QCD transition with the aid of a general-relativistic hydrodynamics code [8,9]. Preliminary results by [9] verify the reduction of the PBH formation threshold during the QCD epoch. Assuming a bag equation of state and phase transition parameter L/ρ = 2, we have found a PBH formation threshold reduction, δ_c^QCD/δ_c^RD ≈ 0.77, for fluctuations entering the horizon approximately during the middle of the phase transition. Note that a canonical bag model with total statistical weights of g_qg = 51.25 and g_h = 17.25 for quark-gluon and hadron phases [10], respectively, predicts an even larger L/ρ = 2.63, whereas lattice simulations may favor smaller L [11]. For fluctuations entering the horizon during the QCD epoch one typically finds the evolution of the fluctuation into two different spatial regimes: an inner part of the fluctuation exists in pure quark-gluon phase (ρ > ρ_qg) surrounded by an outer part existing in pure hadron phase (ρ < ρ_h). The enhanced density in the inner part of the fluctuation assists the collapse to a PBH. This is in contrast to PBH formation during simple radiation-dominated eras, where the fluctuation's density distribution is continuous. It has also been attempted to derive approximate analytic estimates of the threshold reduction and PBH masses for PBH formation during the QCD epoch [12]. The model predicts that δ_c^QCD is minimized for fluctuations entering the horizon well before the transition, resulting in PBH masses considerably smaller than the QCD horizon.
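The quoted bag-model numbers can be cross-checked in a simple sketch that ignores the bag constant, so that the energy density of each phase simply scales with its statistical weight g; this reproduces the transition parameter L/ρ ≈ 2 used in the simulations, while the full bag model including the bag constant gives the larger value 2.63 quoted above:

```python
# Simplified latent-heat estimate: with total statistical weights
# g_qg = 51.25 (quark-gluon phase) and g_h = 17.25 (hadron phase) at a
# common coexistence temperature, and rho proportional to g (bag constant
# neglected -- an assumption of this sketch), L / rho_h = (g_qg - g_h) / g_h.

g_qg, g_h = 51.25, 17.25
latent_over_rho_h = (g_qg - g_h) / g_h
print(latent_over_rho_h)   # ~1.97, i.e. L/rho ~ 2
```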
2. Theoretical predictions for QCD PBH scenarios

It is valuable to advance the initial suggestion of possible abundant production of PBHs during the QCD epoch to a complete and predictive scenario. I outline here to which degree this may be accomplished and briefly describe the theoretical issues in QCD PBH scenarios.

2.1. Threshold reduction

The bias for forming PBHs almost exclusively on the QCD scale is dependent on the PBH formation threshold reduction, δ_c^QCD < δ_c^RD. Within the context of a bag-model equation of state for a first-order transition, preliminary results of numerical simulations confirm the proposed threshold reduction. For higher-order QCD transitions threshold reduction could still occur but would have to be verified by using accurate ρ(T), p(T), and v_s(T) determined from lattice gauge simulations. Due to the duration of the QCD transition, L/ρ ~ 1, threshold reduction will be of order unity. A very accurate determination of δ_c^QCD is only necessary if PBH formation is efficient over a range in δρ/ρ which is also of order unity. This is not the case for Gaussian statistics of the pre-existing density fluctuations but may apply for non-Gaussian statistics.

2.2. Mass function

A crucial prediction of a QCD PBH dark matter scenario is the average QCD PBH mass. There is seemingly rough agreement between the QCD horizon mass scale M_QCD ≈ 2 M_⊙ (T_c/100 MeV)⁻² and the inferred masses of compact objects in the galactic halo by the MACHO collaboration,
M ~ 0.5 M_⊙. Nevertheless, currently there are large uncertainties in the prediction for the average QCD PBH mass, ⟨M_QCD⟩. Even incorrectly assuming M_QCD = ⟨M_QCD⟩, there is a factor eight uncertainty in M_QCD depending on whether the horizon length is taken as the radius or the diameter of a spherical horizon volume. The QCD equation of state, order of the transition, and transition temperature are as yet not precisely determined. The transition temperature may fall somewhere in the range 50 MeV ≲ T_c ≲ 200 MeV, implying a factor sixteen uncertainty in M_QCD, and probably equal uncertainty in ⟨M_QCD⟩. Assuming a first-order transition, ⟨M_QCD⟩ may also depend on L and the equations of state above, and below, the transition point. An accurate determination of ⟨M_QCD⟩ requires detailed and reliable lattice gauge simulation data. Approximate trends may be obtained by using a bag equation of state. A PBH mass function, as well as ⟨M_QCD⟩, is obtained by convolving the distribution function for the density contrast of the pre-existing density perturbations, δρ/ρ, with a scaling relation associating final PBH mass with density contrast [13]. The average PBH mass is thus also dependent on the statistics of the density perturbations. Further, it has been shown that resulting PBH masses are dependent on the fluctuation shape [14]. These uncertainties are particularly difficult to remove since they require knowledge about the underlying physics creating density perturbations, presumably occurring at a scale not accessible to particle accelerators.

2.3. Contribution to Ω

The contribution of QCD PBHs to the closure density at the present epoch is dependent on the fraction of space which is overdense by more than δ_c^QCD. COBE-normalized, exactly scale-invariant (n = 1) Gaussian power spectra imply negligible PBH production. Gaussian blue spectra with 1.37 ≤ n ≤ 1.42 predict Ω in QCD PBHs in the range 10⁻³-10 [12]. Such spectral indices are consistent with cosmic microwave background observations [15].
Nevertheless, blue spectra resulting from inflationary epochs have been shown to be generically non-Gaussian and skew-negative [16]. Density perturbations with an exactly scale-invariant, COBE-normalized power spectrum, but with a non-Gaussian, skew-positive distribution tail, may yield Ω_PBH ≈ 1. One argument against QCD PBH dark matter is the degree of fine-tuning involved in obtaining Ω_PBH ≈ 1.

2.4. Accretion around recombination

It has long been known that black holes may efficiently accrete after the epoch of recombination [17]. Whereas accretion does not appreciably change the black-hole masses, conversion of accreted baryon rest-mass energy into radiation may produce substantial radiation backgrounds. The presently observed X-ray and/or UV backgrounds may be incompatible with a population of PBHs with mass M ≳ 10 M_⊙ and Ω ≳ 0.1 [17]. A population of M ≈ 1 M_⊙ PBHs with large Ω is consistent with the observed X-ray and/or UV backgrounds. Accretion of baryons onto PBHs shortly before the epoch of recombination may produce distortions in the blackbody spectrum of the cosmic microwave background radiation. PBHs with M ≈ 1 M_⊙ would accrete at the Bondi rate, with Thomson drag inefficient. Tidal interactions between the accreting gas and neighboring PBHs would lead to the transfer of angular momentum and the formation of disks around the PBHs. Preliminary results of an investigation of PBH accretion before recombination indicate that the resulting blackbody distortions would be below the current FIRAS limit.
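The fine-tuning problem for Ω_PBH ≈ 1 noted in Section 2.3 comes from the extreme steepness of the Gaussian tail: the fraction of space overdense by more than a threshold is exponentially sensitive to the perturbation amplitude. A rough sketch; the threshold δ_c = 0.7 and the σ values are purely illustrative, not numbers from the text:

```python
import math

# Sketch: for Gaussian perturbations of rms amplitude sigma, the fraction
# of space overdense by more than a threshold delta_c is the tail probability
#   beta = (1/2) * erfc(delta_c / (sqrt(2) * sigma)).
# delta_c = 0.7 and the sigma values are illustrative assumptions.

def overdense_fraction(delta_c, sigma):
    return 0.5 * math.erfc(delta_c / (math.sqrt(2.0) * sigma))

delta_c = 0.7
for sigma in (0.05, 0.06, 0.07):
    print(sigma, overdense_fraction(delta_c, sigma))
# A ~20% change in sigma changes beta by roughly twenty orders of
# magnitude, which is the fine-tuning problem for Omega_PBH ~ 1.
```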
2.5. PBH formation during other epochs

Efficient PBH formation during the QCD era may, in principle, imply formation of PBHs during other epochs as well. For example, during e⁺e⁻ annihilation there is a decrease in the speed of sound, which may result in a bias to form PBHs on the approximate horizon scale of that era. Further, for power spectra of the underlying density distribution characterized by n > 1, QCD PBH formation may be accompanied by PBH formation at earlier times on mass scales M ≪ M_QCD. It is important to verify that such PBHs do not violate observational constraints [18].
3. Observational signatures of QCD PBH dark matter

Ultimately, the existence of a population of QCD PBHs can be established only by observation. It is therefore important to establish the observational signatures of QCD PBH dark matter. Particular emphasis is laid on observations which may be performed in the not-too-distant future.

3.1. Galactic halo microlensing searches

The recent results of microlensing searches for compact, galactic halo dark matter by the MACHO collaboration [19] provide some motivation for QCD PBH dark matter. Low event statistics, as well as uncertainties about the halo model to be adopted, result in fairly large ranges for the average MACHO mass, 0.1 M_⊙ ≲ M ≲ 1 M_⊙, and for the halo dark matter fraction provided by MACHOs, f_MACHO ≳ 0.2. The error bars may be reduced by increasing the number of observed microlensing events and by observing along several lines of sight (e.g. towards the Large and Small Magellanic Clouds). Nevertheless, it will not be possible to determine an accurate mass function from the observational MACHO project alone. Only in combination with follow-up observations, such as by a space interferometry satellite, may the degeneracy between MACHO lens mass, distance, and projected velocity be lifted and a mass function be determined.

3.2. Alternative interpretations of the MACHO results

The inferred masses of MACHOs are close to those of stars, stellar remnants, or brown dwarfs. The most straightforward interpretation of the observations by the MACHO collaboration is that baryonic objects have been detected. However, one has to resort to fairly extreme galactic models in order for a characteristic MACHO mass of M ≲ 0.1 M_⊙ to be consistent with the observations and for brown dwarfs to remain a viable interpretation of the lenses. A significant contribution to the halo dark matter by red dwarfs seems ruled out by observations of the Hubble deep field [20].
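The mass range inferred in Section 3.1, and the mass-distance-velocity degeneracy mentioned there, follow from the Einstein-radius crossing time of a microlensing event. A hedged numerical sketch; the lens and source distances and the transverse velocity are illustrative round numbers for halo lensing towards the LMC, not fitted values:

```python
import math

# Sketch: the duration of a microlensing event is the Einstein-radius
# crossing time t_E = R_E / v_t, with
#   R_E = sqrt(4 G M d_l (d_s - d_l) / (c^2 d_s)).
# The default distances (lens 10 kpc, LMC source 50 kpc) and transverse
# velocity (200 km/s) are illustrative halo values, not numbers from the text.

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
MSUN = 1.989e30      # kg
KPC = 3.086e19       # m

def einstein_time_days(m_msun, d_l_kpc=10.0, d_s_kpc=50.0, v_kms=200.0):
    d_l, d_s = d_l_kpc * KPC, d_s_kpc * KPC
    r_e = math.sqrt(4 * G * m_msun * MSUN * d_l * (d_s - d_l) / (C**2 * d_s))
    return r_e / (v_kms * 1e3) / 86400.0

for m in (0.1, 0.5, 1.0):
    print(m, round(einstein_time_days(m), 1))
```

Durations come out at tens of days and scale only as the square root of the mass, so a measured t_E alone leaves the mass, distance, and velocity degenerate, which is why follow-up astrometry is needed for a mass function.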
Halo white dwarfs with halo dark matter fractions exceeding f_MACHO ≳ 0.1 also seem to be in conflict with observations of the Hubble deep field, even though this constraint depends on somewhat uncertain white-dwarf ages and cooling curves [21]. In addition, it has been argued that the light which would have been emitted by the progenitors of abundant halo white-dwarf populations has
not been observed in deep galaxy surveys [22]. It has been suggested that the lenses responsible for the observed microlensing are not within the halo, but within a warped or thick galactic disk. Such scenarios may possibly be rejected by microlensing observations along more than one line of sight. There are other, more or less radical, interpretations of the results of the MACHO collaboration. It is important, and not only for the viability of QCD PBH dark matter, to establish or rule out these alternative interpretations.

3.3. Quasar microlensing

The optical depth for microlensing of distant quasars by a cosmic component of compact, solar-mass objects with Ω ≈ 1 is remarkably large. In fact, a constraint of Ω ≲ 0.2 for a population of compact objects with masses M ≈ 1 M_⊙ has been derived from observations of the broad-line to continuum radiation flux ratios of ≈100 quasars [23]. This limit relies on the assumption that most continuum radiation is emitted from within a compact ≲0.1 pc region in the center of the quasar, whereas the broad-line radiation emerges from a much more extended region around the quasar. The limit is independent of the clustering properties of the compact objects. There is as yet no conclusive model for quasar variability. It has thus been proposed that quasar variability is due to microlensing by an Ω ≈ 1 component of compact objects with M ≈ 10⁻³ M_⊙ [24]. QCD PBH dark matter may therefore be constrained by large, homogeneous samples of quasar observations, such as are expected to result from the Sloan Digital Sky Survey, hopefully accompanied by an improved understanding of the physics of quasars.

3.4. Gravitational wave detection from PBH binaries

It has been shown that a fraction ∼10⁻²—10⁻¹ of QCD PBHs may form in PBH binaries [25]. This value is in rough agreement with the fraction of binaries observed by the MACHO collaboration.
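The size argument behind the quasar flux-ratio limit of Section 3.3 can be illustrated by computing the Einstein radius of a solar-mass lens at cosmological distance. The distances below are illustrative round numbers in a flat-space stand-in for the proper cosmological distance combination:

```python
import math

# Sketch of the size argument of Section 3.3: a source is appreciably
# microlensed only if it is not much larger than the lens's Einstein radius.
# The Gpc distances are illustrative, and the flat-space combination
# d_l * d_ls / d_s stands in for the proper cosmological one.

G, C, MSUN, PC = 6.674e-11, 2.998e8, 1.989e30, 3.086e16

def einstein_radius_pc(m_msun, d_l_m, d_s_m):
    d_ls = d_s_m - d_l_m                    # flat-space stand-in
    r_e = math.sqrt(4 * G * m_msun * MSUN * d_l_m * d_ls / (C**2 * d_s_m))
    return r_e / PC

gpc = 1e9 * PC
r_e = einstein_radius_pc(1.0, 1.0 * gpc, 2.0 * gpc)
print(round(r_e, 3))
# R_E ~ 0.01 pc: comparable to the <~0.1 pc continuum region, but far
# smaller than the extended broad-line region, hence the differential
# magnification that the flux-ratio test exploits.
```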
Gravitational waves emitted during PBH—PBH mergers occurring within a distance of ≈15 Mpc are above the expected detection threshold of the LIGO/VIRGO interferometers. For galactic halos consisting exclusively of QCD PBH dark matter with M ≈ 0.5 M_⊙, this implies that up to a few mergers per year may be detected by the next generation of gravitational-wave interferometers [25]. It is particularly encouraging that the gravitational-wave signal is sensitive to the masses of the PBHs within the binary. One may hopefully also distinguish between neutron-star and black-hole binaries. Establishing the existence of black holes with masses well below the upper mass limit for neutron stars would argue strongly in favor of primordial black holes.

3.5. Galactic disk accretion

Limits may be placed on galactic halo PBH number densities by the accretion-induced radiation which may be observed when a halo PBH passes through the galactic disk in the solar vicinity [26]. Nevertheless, even the ∼10⁸ objects which will be observed within the Sloan Digital Sky Survey will not provide sufficient statistics to establish, or rule out, an all-QCD-PBH halo with masses as small as ≈1 M_⊙.
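The statement in Section 3.4 that the gravitational-wave signal is sensitive to the masses of the PBHs in the binary can be sketched with the two standard mass-dependent observables: the chirp mass, which sets the inspiral phasing, and the frequency at which the inspiral ends. The 4.4 kHz ISCO normalization is the usual Schwarzschild estimate, not a number from the text:

```python
# Sketch: a compact-binary inspiral is characterised by its chirp mass,
#   M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5),
# and ends near the innermost-stable-circular-orbit frequency,
#   f_isco ~ 4.4 kHz / (M_tot / Msun)   (standard Schwarzschild estimate).

def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def f_isco_hz(m_tot_msun):
    return 4400.0 / m_tot_msun

m1 = m2 = 0.5  # QCD PBH masses in Msun, as discussed in Section 3.4
print(round(chirp_mass(m1, m2), 3))   # ~0.435 Msun
print(f_isco_hz(m1 + m2))             # 4400.0 Hz
```

Such light binaries chirp to much higher frequencies than neutron-star binaries of ≈1.4 M_⊙ each, which is one handle for telling sub-neutron-star black holes apart from neutron stars.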
4. Conclusions

QCD PBHs may be an attractive dark matter candidate. I have outlined here to what degree accurate predictions for the properties of QCD PBH dark matter can be made. Most uncertain is the contribution of such objects to Ω, since it relies on knowledge of the underlying density perturbations on mass scales not accessible to cosmic microwave background radiation observations. Predictions of QCD PBH mass functions, beyond the approximate equality between MACHO masses and the QCD horizon mass, may improve with detailed numerical simulations of the PBH formation process and future results of lattice gauge simulations for the QCD equation of state. A combination of observational techniques, such as galactic microlensing searches, quasar microlensing searches, and gravitational-wave interferometry, may point towards the abundant existence of such objects. Ultimately, the unambiguous detection of a black hole well below the maximum mass for neutron stars may argue strongly for its primordial nature.
Acknowledgements

I am very grateful to my collaborator J.C. Niemeyer for his permission to present preliminary results of our work in these proceedings, and for many useful discussions. I am also indebted to T. Abel, C.R. Alcock, G.M. Fuller, and S.D.M. White for their encouragement.
References
[1] Ya.B. Zel’dovich, I.D. Novikov, Sov. Astron. 10 (1967) 602; S.W. Hawking, Mon. Not. R. Astron. Soc. 152 (1971) 75; B.J. Carr, Astrophys. J. 201 (1975) 1.
[2] J.C. Niemeyer, K. Jedamzik, in: D. Cline (Ed.), Proc. Dark Matter 98, Los Angeles, Elsevier, Amsterdam, 1998.
[3] Y. Iwasaki et al., Zeit. f. Phys. 71 (1996) 343; K. Kanaya, astro-ph/9604152, 1996.
[4] K. Jedamzik, Phys. Rev. D 55 (1997) R5871.
[5] C. Schmid, D.J. Schwarz, P. Widerin, Phys. Rev. Lett. 78 (1997) 791.
[6] G.F. Chapline, Phys. Rev. D 12 (1975) 2949.
[7] M. Crawford, D.N. Schramm, Nature 298 (1982) 538.
[8] J.R. Wilson, G.M. Fuller, C.Y. Cardall, in preparation.
[9] K. Jedamzik, J.C. Niemeyer, in preparation.
[10] G.M. Fuller, G.J. Mathews, C.R. Alcock, Phys. Rev. D 37 (1988) 1380.
[11] C. DeTar, Nucl. Phys. B (Proc. Suppl.) 42 (1995) 73; T. Blum et al., Phys. Rev. D 51 (1995) 5153; C. Bernard et al., Nucl. Phys. B 47 (1996) 503.
[12] C.Y. Cardall, G.M. Fuller, astro-ph/9801103.
[13] J.C. Niemeyer, K. Jedamzik, Phys. Rev. Lett. 80 (1998) 5481.
[14] G.V. Bicknell, R.N. Henriksen, Astrophys. J. 232 (1979) 670.
[15] C.L. Bennett et al., Astrophys. J. 436 (1994) 423; W. Hu, D. Scott, J. Silk, Astrophys. J. Lett. 430 (1994) L5.
[16] J.S. Bullock, J.R. Primack, Phys. Rev. D 55 (1997) 7423.
[17] B.J. Carr, Mon. Not. R. Astron. Soc. 189 (1979) 123.
[18] A.R. Liddle, A.M. Green, in: D. Cline (Ed.), Proc. Dark Matter 98, Los Angeles, Elsevier, Amsterdam, 1998.
[19] C.F. Alcock et al., Astrophys. J. 486 (1997) 697; K. Cook et al., Astron. Astrophys. Suppl. Ser. 191 (1997) 8301.
[20] C. Flynn, A. Gould, J.N. Bahcall, Astrophys. J. Lett. 466 (1996) L55.
[21] D.S. Graff, K. Freese, Astrophys. J. Lett. 456 (1996) L49.
[22] S. Charlot, J. Silk, Astrophys. J. 445 (1995) 124.
[23] J.J. Dalcanton, C.R. Canizares, A. Granados, C.C. Steidel, J.T. Stocke, Astrophys. J. 424 (1994) 550.
[24] M.R.S. Hawkins, Mon. Not. R. Astron. Soc. 278 (1996) 787.
[25] T. Nakamura, M. Sasaki, T. Tanaka, K.S. Thorne, Astrophys. J. Lett. 487 (1997) L139.
[26] A.F. Heckler, E.W. Kolb, Astrophys. J. Lett. 472 (1996) 85.
Physics Reports 307 (1998) 163—171
Acceleration radiation for orbiting electrons W.G. Unruh Program in Cosmology and Gravity of CIAR, Department of Physics and Astronomy, University of B.C., Vancouver, V6T 1Z1, Canada.
[email protected]
Abstract This paper presents an analysis of the radiation seen by an observer in circular acceleration, for a magnetic spin. This is applied to an electron in a storage ring, and the subtlety of the interaction of the spin with the spatial motion of the electron is explicated. This interaction is shown to be time dependent (in the radiating frame), which explains the strange results found for the electron’s residual polarisation in the literature. 1998 Elsevier Science B.V. All rights reserved. PACS: 95.30.Gv; 95.30.Qd
A uniformly accelerated object sees the vacuum fluctuations as a thermal bath [1]. For an acceleration which is constant in amplitude and direction, this temperature is given by
T = ℏa/(2πck) .  (1)
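Equation (1) is easy to check numerically. The following sketch computes the acceleration required for a 1 K Unruh temperature, using rounded SI values of the constants:

```python
import math

# Numerical check of Eq. (1), T = hbar * a / (2 pi c k_B): the acceleration
# needed for a 1 K Unruh temperature.

HBAR = 1.0546e-34   # J s
KB = 1.3807e-23     # J/K
C = 2.998e8         # m/s

def unruh_temperature_K(a_m_s2):
    return HBAR * a_m_s2 / (2 * math.pi * C * KB)

def acceleration_for_T(T_kelvin):
    return 2 * math.pi * C * KB * T_kelvin / HBAR

a1 = acceleration_for_T(1.0)
print(f"{a1:.2e} m/s^2")   # ~2.5e20 m/s^2, i.e. a few times 1e22 cm/s^2
```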
Since it requires an acceleration of 2.6×10²² cm/s² to produce a temperature of 1 K, the experimental verification of this prediction is difficult, although recent advances in the laser acceleration of electrons promise to produce accelerations of over 100 times this value [2]. These would give effective temperatures of the order of room temperature, but for very short times. Those brief, intense accelerations make it difficult to measure the effects of the thermal bath on the object. As was pointed out by Bell and Leinaas [3], the spin of an electron in a magnetic field is a possible candidate for a detector of acceleration radiation. Such an electron has two energy levels, and an examination of the population of those two energy levels after a long time can be used to measure the bath that the electron finds itself in at the Larmor precession frequency. Since even room temperature corresponds to very low frequencies (compared with the Compton frequency of the electron), and since dipole radiation is suppressed by the third power of the frequency, the use of the electron spin as a detector of the acceleration bath requires a very long period of acceleration to produce an effect. Bell and Leinaas [3] therefore suggested that circular acceleration (placing the electron into a circular orbit) be used instead. In an electron storage ring,
the acceleration experienced by the electrons in the bending magnets corresponds to a high temperature (about 10³ K for HERA) and relatively short decay times (about half an hour). The electron can be kept in the storage ring for that time and be expected to equilibrate with the thermal bath. They thus suggested that the residual polarisation of the electrons is a measure of the temperature of the bath seen by the electrons. Jackson [4], however, has raised serious doubts about the use of a thermal model to explain the residual polarisation of the electrons in such storage rings. In particular, he has argued that the details of the polarisation of the electron as a function of the g factor, found by Derbenev and Kondratenko [5] (hereafter DK) and by himself [6] and plotted in Fig. 1, make such a thermal explanation highly suspect. The primary purpose of this paper is to analyse the behaviour of an electron undergoing circular acceleration. I will restrict myself to an electron travelling at constant speed in a constant magnetic field (i.e., I assume that some tangential electric field is present to make up for the energy lost in synchrotron radiation). My conclusion is that the electron responds to a thermal bath, but one with a frequency-dependent temperature. However, the electron is actually a system with two field detectors, which are, furthermore, coupled to each other. In addition to the spin, there are the vertical fluctuations in the orbit. A simple coupling should present no difficulties for understanding the system, but in this case that coupling is also time dependent (in the radiating frame of the spin). This time-dependent coupling distorts the response of both detectors.
Fig. 1. Polarisation of an electron vs. the g factor.
Throughout I will use the convention that ℏ = c = 1. This paper will only present the results; for details I refer the reader to my paper in the proceedings of the Monterey Workshop on the Quantum Aspects of Beam Physics [7]. The expectation that the spin polarisation should simply have a distribution which corresponds to the thermal distribution of the field is too naive. One must understand both the interaction of the spin with the bath and its interaction with other degrees of freedom of the system itself. After all, the snow seen on the screen of a TV set has a temperature vastly higher than that of the room-temperature fluctuations which cause it, due to the interaction with other components (e.g. amplifiers, etc.). The fact that the observed temperature is so much higher than the temperature which causes the snow is not a failure of the TV set to act as a detector of thermal fluctuations, but rather a failure to understand how the TV set works. As we shall see, the electron also has similarly complex behaviour, including time-dependent couplings which act as “amplifiers”. The first question to ask is whether or not the electromagnetic field appears to be a thermal field to the circularly accelerating observer. Clearly, the fluctuations are Gaussian, since they are simply the vacuum fluctuations of the electromagnetic field, which are Gaussian. Thus, the key question is whether or not they are stationary, i.e. is the two-point correlation function of the fields a function only of the difference in times, or does it also depend on the absolute time? We shall find that, for the few components of the vacuum fluctuations which are important for spin transitions, the components in the Fermi—Walker frame are stationary and do correspond to a weakly frequency-dependent temperature. Let us examine the equation of motion of the electron, and its interaction with the field fluctuations along the path of the particle. I will attack the problem in three steps.
First, I will examine the equations of motion of the spin on its own, assuming that the electron follows the circular orbit exactly (i.e. all fluctuations from this path will be ignored). Then I will include the effects of fluctuations of the path in the vertical (z) direction. This will introduce another internal system, a harmonic oscillator in the z component of the motion. Oscillations in the other two directions turn out not to couple to the spin system to lowest order, and will be ignored. I will calculate the harmonic oscillations in the z direction driven by the electromagnetic field and the effects of such oscillations on the spin. Those effects arise because of the changes in the Thomas precession of the spin induced by this extra motion. (See [9—12].) The equations of motion of the spin of an electron in a constant circular orbit can be derived from a Hamiltonian. Taking sⁱ = σⁱ, the Pauli spin matrices, the Hamiltonian is

H = gμ₀ σ · B − γΛ σ_z ,
(2)
where B is the total magnetic field as seen in the rest frame of the electron. This is composed of the fixed field γB in the z direction and the fluctuating vacuum field Bⁱ. To calculate the equilibrium population, I use the standard technique of time-dependent perturbation theory to calculate the transition probabilities of the spin going from one state to the other. Using first-order time-dependent perturbation theory, the transition probability per unit time to go from the lower to the upper state, or from the upper to the lower state, is, to lowest order in 1/γ,
P_↓ = 4(W+1)²μ₀²(γΛ)³ [ (|Ω|/Ω) F₁(W) − F₂(W) e^(−√3 W) − Θ(−W) F₃(W) ] ,
P_↑ = 4(W+1)²μ₀²(γΛ)³ [ Θ(W) F₁(W) + (|W|/W) ( F₂(W) − F₃(W) ) e^(−√3 W) ] ,  (3)
where W = Ω/γΛ = Ω/a = g/2 − 1. The common factor 4(W+1)²μ₀² is just (gμ₀)². The functions F_i are given by

F₁(W) = (W+1)√[(W+1)² + 1] ,
F₂(W) = (√3/288)(84W² − 168W + 113) ,
F₃(W) = (1/288)(144W² − 294W + 192) .  (4)
In Fig. 2, I have plotted the polarisation

P = (P_↑ − P_↓)/(P_↑ + P_↓)  (5)

vs. the normalised frequency W, i.e. the anomalous g factor g/2 − 1.

Fig. 2. Polarisation of the spin vs. the g factor.
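The polarisation of a two-level detector in equilibrium with a bath follows from detailed balance, which is also what lets a temperature be read off from the up/down transition rates. A minimal sketch in natural units (k_B = 1; the numerical values are illustrative):

```python
import math

# Sketch of the detailed-balance logic: if the up/down transition rates of
# a two-level system with splitting dE satisfy
#   P_up / P_down = exp(-dE / T)    (k_B = 1),
# the bath temperature can be read off as T = -dE / ln(P_up / P_down),
# and the equilibrium polarisation is tanh(dE / 2T).

def bath_temperature(dE, p_up, p_down):
    return -dE / math.log(p_up / p_down)

def polarisation(dE, T):
    return math.tanh(dE / (2.0 * T))

dE, T_true = 1.0, 0.5                        # illustrative values
p_up, p_down = math.exp(-dE / T_true), 1.0
assert abs(bath_temperature(dE, p_up, p_down) - T_true) < 1e-12
print(round(polarisation(dE, T_true), 4))    # 0.7616
```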
In which frame should one ask about the thermality of the spectrum of fluctuations? For an electromagnetic dipole moment, the damping is proportional to D³_FW m/Dτ³ − a²(D_FW m/Dτ), where D_FW refers to the Fermi—Walker derivative of the magnetic dipole moment m [8], i.e. the damping is related to the rate of change of the magnetic moment in the Fermi—Walker frame. This makes the Fermi—Walker frame the natural frame in which to look at the fluctuations. Thus, in the Fermi—Walker frame, we can calculate the temperature of the radiation bath from

T = − gΩ / ln(P_↑/P_↓) .  (6)

This temperature is frequency dependent (or, since the frequency here is proportional to g, g dependent), and is regular in this frame. Fig. 3 is a plot of the fluctuation temperature seen by the spin vs. the g factor, with the temperature plotted in units of the acceleration γΛ. Note that the temperature in this case is slightly more complex than it was in the scalar case, but has a very similar form, namely it varies slightly (by about 20%) across the frequency range. The spin polarisation is exactly what one would expect naively from the electron’s coming into equilibrium at this temperature. Were the electron fixed in its orbit (i.e. were the particle not allowed to deviate from the given orbit), then the above analysis would be complete. However, for a real electron, the particle is not
Fig. 3. The effective temperature of the EM field in the Fermi—Walker frame.
fixed in its orbit, but can deviate therefrom. These excursions themselves can be regarded as electric dipole moments around the fiducial circular path, and will themselves couple to the electromagnetic fields. They will also couple to the spin. Let me begin by analysing these dipole fluctuations in the absence of the spin, and show that they themselves also act as detectors for other components of the electromagnetic field. Let me assume that there is a harmonic restoring force which drives the particle back toward its fiducial orbit. This force will be assumed to have a frequency K and a damping constant ι. The equation of motion for these excursions in the various directions is therefore

Ẍⁱ + K²Xⁱ = (e/m) Eⁱ ,  (7)
where Eⁱ has two components, the quantum fluctuations and the radiation reaction term. As I showed elsewhere [8] (and as BL derived in a different manner), the radiation reaction term is

Eⁱ_RR = (2/3) ( D³_FW/Dτ³ − a² D_FW/Dτ ) eXⁱ ,  (8)
since eXⁱ is the dipole moment corresponding to this deviation from the fiducial path. I will be interested in the fluctuations in the z direction, since those are the ones which couple to the spin. For the z direction, the FW derivative is the same as the ordinary one. The effective temperature for the z oscillations due to the electric vacuum field fluctuations is again “thermal”, with the temperature plotted in Fig. 4 as a function of K, the frequency of oscillation of the electron in the z direction. The vertical oscillations see a thermal spectrum with a frequency-dependent temperature which is similar, but not identical, to that seen by the spin alone. Finally, we must take into account the coupling between these two “detectors”, namely the spin and the z oscillations. Each separately acts like a detector and sees a thermal spectrum of fluctuations of very similar temperature. The z oscillations couple to the spin through two routes. The first is if the classical magnetic field strength varies in direction or amplitude with position in the rest frame of the particle. The particle will then see a different field depending on where it is located. The second is an interaction through the velocity of the particle. This will happen both because the particle will see the strong electric field (causing the acceleration) as having a magnetic component due to any velocity of the electron away from the fiducial circular orbit, and because the particle will have a different FW frame because of any such velocity, causing an extra “Thomas precession”. Variations in the magnetic field off the fiducial orbit are typically present, and I will assume that the magnetic field is arranged in the “weak focusing” configuration, such that an effective restoring force for vertical oscillations is provided by the presence of off-orbit radial magnetic fields.
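The driven, damped oscillations of Eq. (7) respond to a drive at frequency Ω with the familiar Lorentzian factor 1/((K² − Ω²)² + Ω²ι²), the resonance denominator that appears in the cross-coupling rates below. A small sketch in arbitrary units:

```python
# Sketch: steady-state response of the damped oscillator of Eq. (7) to a
# drive at frequency omega,
#   |X(omega)|^2 ~ 1 / ((K^2 - omega^2)^2 + omega^2 * iota^2).
# K = 1 and iota = 0.05 are illustrative values in arbitrary units.

def response_sq(omega, K=1.0, iota=0.05):
    return 1.0 / ((K**2 - omega**2) ** 2 + (omega * iota) ** 2)

# Sharply peaked at the resonance omega = K:
on_res = response_sq(1.0)
off_res = response_sq(0.5)
print(on_res / off_res)   # enhancement of order (K/iota)^2
```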
If we were to assume that the coupling of the spin to the B-field fluctuations were zero, then these cross couplings from the z motion of the electron, driven by the vacuum fluctuations, would give transitions of the form

P_I↓ = μ₀²(γΛ)³ (Ω − (g/2)K)² / [ (Ω² − K²)² + Ω²ι² ] ( −G₁ Θ(−W) + e^(−√3 W) G₂ ) ,

P_I↑ = P_I↓ + μ₀²(γΛ)³ (Ω − (g/2)K)² / [ (Ω² − K²)² + Ω²ι² ] G₁ ,  (9)
Fig. 4. The effective temperature of the EM field affecting the oscillations in the z direction.
where

G₁ = W(W+1) ,
G₂ = √3(12W + 13) + 30|W| ,  (10)
K is the restoring frequency and ι the damping constant of these vertical oscillations. However, there are also correlations between the E_z components which drive the vertical oscillations and the B components which drive the spin flips. These add additional complications to the expressions. Let me define these correlation spin-flip probabilities by
P_C↓ ≈ −gμ₀² ((g/2)K − Ω)/(K² − Ω²) (γΛ)³ [ −Θ(−W) L₁ + e^(−√3 W) ( L₂ + (|W|/W) L₃ ) ] ,

where I have assumed that ι is very small. Similarly,

P_C↑ ≈ −gμ₀² ((g/2)K − Ω)/(K² − Ω²) (γΛ)³ [ Θ(W) L₁ + e^(−√3 W) ( L₂ + (|W|/W) L₃ ) ] .
Here, we have

L₁ = −(2W + 1) ,
L₂ = −(√3/24)(4W² − 5W + 3) ,
L₃ = (1/24)(6W² − 10W + 4) .  (11)

The total interactive transition rate is then

P_T↓ = P_↓ + P_C↓ + P_I↓ ,  (12)

and similarly for P_T↑. These, for K = ι = 0, lead to the polarisation curve of Fig. 1. There are a number of points which must be emphasised. The first is that it is the transverse oscillations of the electron which drive the extra terms in the transitions. In the above I assumed that the only effects which drive those transverse oscillations are the vacuum fluctuations of the electric field in the rest frame of the particle. If there are other sources of noise which drive those transverse oscillations (field misalignments, interactions with other particles in the beam, etc.), then those will all also contribute to the spin polarisation. Secondly, for g = 2 the factor (gK/2 − Ω)/(K² ± iιΩ − Ω²) is unity except at the resonance. However, for g not equal to 2, the system behaves very differently below the resonance than above it. I have argued that both detectors are immersed in heat baths of essentially identical temperature. Why would the interaction then not bring the joint system to a common temperature? The important point here is that the Fermi—Walker frame is the natural frame for the spin, and is the frame in which the fluctuations look thermal. However, the interaction is static in the corotating frame. The z oscillations and E_z are unaffected by the transformation to the FW frame, and thus in this frame both systems independently have the same distribution. However, in that frame the interaction between the oscillator and the heat bath is no longer static, but is time dependent. (Although Z is unaffected by the transformation, the coupling is through σ_ρ and σ_φ, which do change under the transformation.) Because of this time-dependent coupling, the harmonic system tends to preferentially drive up-transitions in the spin system (and the spin system preferentially drives down-transitions in the oscillator). The coupling acts like an amplifier, driving the spin system to its higher-energy state (at least for g > 0).
The power for this amplifier clearly comes from the motion around the storage ring. The electron system thus acts like the snow on the TV screen: without understanding the details of the system, one cannot conclude that the lack of thermal output (the spin polarisation is not thermally distributed) in the detector demonstrates a lack of thermal input (the fluctuations in the relevant components of the EM field in the FW frame).
Acknowledgements I would like to thank P. Chen and David Cline for inviting me to their respective conferences and reviving my interest in this subject. I would like to thank David Jackson whose doubts about
treating the circular acceleration as a thermal bath incited me to look more closely at that problem. Finally, I thank the CIAR and NSERC for support while this work was being done. Much of this paper is taken from the more extensive treatment in Ref. [7].
References
[1] W.G. Unruh, Phys. Rev. D 14 (1976) 870.
[2] P. Chen, T. Tajima, Testing Unruh radiation with ultra-intense lasers, SLAC-PUB-7543, March 1998.
[3] J.S. Bell, J.M. Leinaas, Nucl. Phys. B 212 (1983) 131; J.S. Bell, J.M. Leinaas, Nucl. Phys. B 284 (1987) 488.
[4] J.D. Jackson, Comment made at the Quantum Aspects of Beam Physics, Monterey, January 1998. Also On effective temperatures and electron spin polarisation in storage rings, in: P. Chen (Ed.), Proc. Quantum Aspects of Beam Physics Conf., World Scientific, Singapore, 1999, to appear.
[5] Ya.S. Derbenev, A.M. Kondratenko, Zh. Exp. Theory Phys. 64 (1973) 1918.
[6] J.D. Jackson, Rev. Mod. Phys. 48 (1976) 417.
[7] W.G. Unruh, Acceleration radiation for orbiting electrons, in: P. Chen (Ed.), Proc. Quantum Aspects of Beam Physics Conf. — Monterey 1998, World Scientific, Singapore, 1999; W.G. Unruh, preprint hep-th/9804158.
[8] W.G. Unruh, preprint physics/9802047; Phys. Rev. A, in press.
[9] J.M. Leinaas, Accelerated electrons and the Unruh effect, in: P. Chen (Ed.), Proc. Quantum Aspects of Beam Physics Conf. — Monterey 1998, World Scientific, Singapore, 1999; J. Leinaas, preprint hep-th/9804179.
[10] D.P. Barber, S.R. Mane, Phys. Rev. A 37 (1988) 456.
[11] S.K. Kim, K.S. Soh, J.H. Yee, Phys. Rev. D 35 (1987) 557.
[12] J.D. Jackson, Classical Electrodynamics, 2nd ed., Wiley, New York, 1975, eq. (11.162).
Physics Reports 307 (1998) 173—180
Primordial black holes and short gamma-ray bursts David B. Cline* Department of Physics and Astronomy, Box 951547, University of California Los Angeles, Los Angeles, CA 90095-1547, USA
Abstract The possible existence of primordial black holes (PBHs) has long been a key observational question. At present, the only known way to detect low-mass ones (below 10¹⁵ g) is through the Hawking radiation process. This radiation in turn depends on the nature of hadronic interactions in the 100 MeV region and beyond. In the case of a first-order quantum-chromodynamics phase transition, we expect a sharp burst of gamma rays from the PBH. We have pointed out that there is a class of gamma-ray bursts that have this property and could provide evidence for PBHs in the galaxy. Future study of these events is crucial. Recently, we have shown that a diffuse gamma-ray glow should exist in the halo from these PBHs. There now seems to be evidence for this effect as well. 1998 Elsevier Science B.V. All rights reserved. PACS: 97.60.Lf
1. Introduction The search for evidence for primordial black holes (PBHs) has continued since the first discussion by Hawking [2]. In fact, this was about the time that gamma-ray bursts (GRBs) were first discovered, making a natural association with PBHs [3]. However, in the intervening years it has become clear that the time history of the typical GRB is not consistent with the expectations of PBH evaporation. While the theory of PBH evaporation has been refined, there are still no exact predictions of the GRB spectrum, time history, etc. [4]. However, reasonable phenomenological models have been made, and the results again indicate that most GRBs could not come from PBHs [5,6]. In addition, there are new constraints on the production of PBHs in the Early Universe that indicate that the density of PBHs in the Universe should be very small, but not necessarily zero [7].
* E-mail: [email protected]. Sections 1—4 have been adapted; see Ref. [1] for further details.
D.B. Cline / Physics Reports 307 (1998) 173—180
After the initial discovery of GRBs, it took many years to uncover their general properties. Around 1984, several GRBs were detected, indicating that there was a class of short bursts with time durations of ∼100 ms and very short rise times [8]. A separate class of GRBs was declared [9]. This classification seems to have been forgotten and then rediscovered by some of us [10]. It is still possible that there is a sizable density of PBHs in our galaxy and that some of the GRBs could be due to PBH evaporation. Recently, we showed that the BATSE 1B data [11] contain a few events that are consistent with some expectations of PBH evaporation (short bursts with time durations <200 ms that are consistent with ⟨V/V_max⟩ ≈ 1/2) [10].
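The ⟨V/V_max⟩ statistic mentioned above takes the value 1/2 for a homogeneous source population in Euclidean space, which is why values near 1/2 for the short-burst class are read as consistency with a local (galactic) origin. A Monte-Carlo sketch; the threshold convention below is illustrative:

```python
import random

# Sketch of the <V/Vmax> test: for a burst with peak count rate C detected
# above threshold C_lim, V/Vmax = (C / C_lim)**(-3/2).  A homogeneous
# Euclidean source population gives <V/Vmax> = 1/2.

def v_over_vmax(counts, counts_lim):
    return (counts / counts_lim) ** -1.5

# Monte-Carlo check: sources uniform in Euclidean volume, flux ~ 1/r^2,
# detection threshold set at the flux of a source at r_max = 1.
random.seed(1)
ratios = []
for _ in range(200_000):
    r = random.random() ** (1.0 / 3.0)       # uniform in volume
    flux = 1.0 / r**2
    ratios.append(v_over_vmax(flux, 1.0))
print(round(sum(ratios) / len(ratios), 2))   # -> 0.5
```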
2. Primordial black-hole evaporation

Ever since the theoretical discovery of quantum-gravitational particle emission from black holes by Hawking [2], there have been many experimental searches (see Ref. [4] for details) for high-energy gamma-ray radiation from PBHs. They would have been formed in the Early Universe [7,12] and would now be entering their final stages of extinction. The violent final-stage evaporation or explosion is the striking result of the expectation that the PBH temperature is inversely proportional to the PBH mass, e.g., T_PBH ≈ 100 MeV (10^{14} g/M_PBH), since the black hole becomes hotter as it radiates more particles and can eventually attain extremely high temperatures. In 1974, Hawking showed in a seminal paper [2] that an uncharged, non-rotating black hole emits particles with energy between E and E+dE at a rate, per spin helicity state, of
d^2N/(dt dE) = (Γ_s/2πℏ) [exp(8πGME/ℏc^3) - (-1)^{2s}]^{-1} ,   (1)
where M is the PBH mass, s is the particle spin, and Γ_s is the absorption probability. This particle emission can be regarded as the spontaneous creation of particle-antiparticle pairs near the horizon: one member of each pair escapes to infinity, while the other returns to the black hole. Thus, the PBH emits massless particles, photons, and light neutrinos, as if it were a hot black-body radiator with temperature T ≈ (10^{16} g/M) MeV, where M is the black-hole mass. A black hole with one solar mass, M ≈ 2×10^{33} g, has an approximate temperature of 10^{-7} K, while a black hole with mass of 6×10^{14} g has a temperature of ~20 MeV. The temperature of a black hole increases as it loses mass during its lifetime. The loss of mass from a black hole occurs at a rate, in the context of the standard model of particle physics, of

dM/dt = -α(M)/M^2 ,   (2)
where α(M), the running constant, counts the particle degrees of freedom in the PBH evaporation. The value of α(M) is model dependent. In the standard model (SM), with three families of quarks and leptons, it is given [2,6,13] as α(M) = (0.045 S_{1/2} + 0.162 S_1)×10^{26} g^3 s^{-1}, where S_{1/2} and S_1 are the spin and color degrees of freedom for the fermions and gauge particles, respectively. For the SM, α(M) = 4.1×10^{26} g^3 s^{-1}. A reasonable model of the running coupling constant is illustrated in Fig. 1, where the regions of uncertainty are indicated. These are the regions where there could be a rapid increase in the
Fig. 1. Running coupling or density-of-states factor α, showing regions of uncertainty due to the QGP transition (I) or the increase in the number of new elementary particles (II). It is possible that intense short GRBs could occur at either of these temperatures (the decrease of the black-hole mass is given by dm_PBH/dt = -α/m_PBH^2). A rapid mass decrease or burst can occur when m_PBH ≲ 10^{14} g or if α changes rapidly near the QGP transition.
effective number of degrees of freedom due to the quark-gluon phase (QGP) transition. The phase transition would lead to a rapid burst in the PBH evaporation or, at high energy, there could be many new particle types that would also lead to an increase in the rate of evaporation. Also shown in Fig. 1 are the regions in PBH temperature where short-duration GRBs may occur, when the PBH mass reaches either of the corresponding mass scales (~10^{14} g near the QGP transition, and a smaller mass for the new-particle thresholds of region II). Black holes reaching the evaporation stage at the present epoch have mass M* = [3α(M*)τ_U]^{1/3} ≈ 7×10^{14} g for α(M*) ≈ 1.4×10^{26} g^3 s^{-1}, where τ_U is the age of the Universe. The bound on the number of black holes at their critical mass, constrained by the observed diffuse gamma-ray background in the 10-100 MeV energy region, is [4]:
N ≡ dn/d(ln M) |_{M≈M*} ≲ 10^4 pc^{-3} .   (3)
Thus, the rate at which black holes of critical mass M* reach their final state of evaporation is

dn/dt = -[3α(M*)/M*^3] N ≈ -2.2×10^{-11} N yr^{-1} .   (4)
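The evaporation relations above can be collected in a short numerical sketch. The temperature normalization and α values are the (model-dependent) figures quoted in the text; the age of the universe used for M* is an assumption of this sketch:

```python
# Sketch of the PBH evaporation relations above. ALPHA_SM and the
# temperature normalization are the (model-dependent) values quoted in
# the text; TAU_U is an assumed age of the universe.

ALPHA_SM = 4.1e26    # g^3/s, alpha(M) for the standard model
ALPHA_STAR = 1.4e26  # g^3/s, alpha(M*) with only light species active
TAU_U = 4.0e17       # s, rough age of the universe (assumption)

def hawking_temperature_mev(m_grams):
    """T_PBH ~ (1e16 g / M) MeV: the PBH gets hotter as it shrinks."""
    return 1.0e16 / m_grams

def lifetime_s(m_grams, alpha=ALPHA_SM):
    """Integrating dM/dt = -alpha/M^2 gives t = M^3 / (3 alpha)."""
    return m_grams**3 / (3.0 * alpha)

def critical_mass_g(alpha=ALPHA_STAR, tau=TAU_U):
    """M* = [3 alpha(M*) tau_U]^(1/3): PBHs of this mass expire today."""
    return (3.0 * alpha * tau) ** (1.0 / 3.0)

print(f"T(6e14 g) ~ {hawking_temperature_mev(6e14):.0f} MeV")
print(f"lifetime  ~ {lifetime_s(6e14):.1e} s")
print(f"M*        ~ {critical_mass_g():.1e} g")
```

With these inputs M* comes out within a factor of ~1.5 of the ~7×10^{14} g quoted above; the exact figure depends on the adopted τ_U.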
3. Concept of a primordial black-hole fireball

Based on previous calculations and numerous direct observational searches for high-energy radiation from an evaporating PBH, we might conclude that it is not likely that such a monumental event can be singled out. However, we pointed out [6] a possible connection between very short GRBs observed by the BATSE team [11] and PBH evaporation emitting very short, energetic bursts of gamma rays. If we accept this possibility, we may have to modify the method of calculating the particle emission spectra from an evaporating PBH, in particular at or near the quark-gluon plasma (see Refs. [14], for a review, and [15]) phase transition temperature, which T_PBH eventually reaches. We briefly discussed [15] that inclusion of the QGP effect around the evaporating PBH at the critical temperature may drastically change the resulting gamma-ray spectrum. The QGP interactions around the evaporating PBH form an expanding fireball of hadronic matter (mostly pions). Shortly after the decay of the pions, the initial hadronic fireball converts to a fireball with a mixture of photons, leptons, and baryons. Using the simplest picture, i.e., only pions produced in the QGP transition, we can obtain the properties of the fireball. We find that, given L_PBH ≈ L_QGP ≈ 5×10^{34} erg s^{-1} and T_PBH ≈ T_QGP ≳ 160 MeV, a simple radiation-dominated model would give a fireball radius r_s ~ 10^9 cm, which implies that the duration τ is of order 100 ms. However, this is very uncertain and could even be of the order of seconds in some cases. Thus, in this model too, one can expect GRBs from a fireball to have both a very short rise time (≲1 μs) and a short duration (~50-200 ms).
4. Study of BATSE 3B data

The BATSE catalogs are cumulative; therefore, the 3B Catalog also includes the data from the 1B Catalog. The fit was made to a combination of a Gaussian and a fourth-order polynomial; the Gaussian fit the peak and the polynomial fit the background events. The duration was calculated as 3 times the full width (3σ) of the Gaussian fit. The cuts on the data were based on the burst duration, the completeness of the data, and the quality of the data. These cuts were: 1. short-duration bursts with T_90 < 250 ms; 2. insisting that complete sets of data were available for both the hardness-ratio and spatial-distribution analysis, including TTE data for the duration calculation, the fluence data for the hardness ratio, and the counts in the peak C_max/C_min for the V/V_max tests;
3. a cut on the quality of the data, requiring single-spike data and a peak count rate at least twice the background level; this removed weak bursts and bursts with multiple peaks; 4. a cut on data with a duration of <100 ms. The hardness ratios of the bursts selected from the BATSE 3B data [16], after making the listed cuts, are shown in Table 1; the table also lists the calculated duration and the T_90 duration. Fig. 2 shows a tendency for an increasing hardness ratio for the shorter GRBs. Within the given statistics, we believe these GRBs are candidates for PBH evaporation in our galaxy. We cannot
Table 1
Hardness ratio versus duration (BATSE 3B)

Trigger no.   Duration (s)    T_90 (s)   Hardness ratio
1453          0.006±0.0002    0.192      6.68±0.33
0512          0.014±0.0006    0.183      6.07±1.34
0207          0.030±0.0019    0.085      6.88±1.93
2615          0.034±0.0032    0.028      5.43±1.16
3173          0.041±0.0020    0.208      5.35±0.27
2463          0.049±0.0045    0.064      1.60±1.55
0432          0.050±0.0018    0.034      7.46±1.17
0480          0.062±0.0020    0.128      7.14±0.96
3037          0.066±0.0072    0.048      4.81±0.98
2132          0.090±0.0081    0.090      3.64±0.66
0799          0.097±0.0101    0.173      2.47±0.39
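The anti-correlation shown in Fig. 2 can also be checked directly on the Table 1 values. A minimal sketch, using a plain Pearson coefficient rather than the fit used in the figure:

```python
# Hardness-duration trend of Table 1: a negative Pearson coefficient
# corresponds to the anti-correlation shown in Fig. 2.

duration = [0.006, 0.014, 0.030, 0.034, 0.041, 0.049,
            0.050, 0.062, 0.066, 0.090, 0.097]      # s
hardness = [6.68, 6.07, 6.88, 5.43, 5.35, 1.60,
            7.46, 7.14, 4.81, 3.64, 2.47]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

r = pearson(duration, hardness)
print(f"Pearson r = {r:+.2f}")
```

The coefficient comes out clearly negative, in line with the harder spectra of the shorter bursts (though with only 11 events the statistical weight is limited).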
Fig. 2. Hardness ratio for some GRBs reported in the BATSE 3B Catalog. A simple fit of these data indicates an anti-correlation of hardness versus burst duration. The dashed line represents the average hardness for the bursts with time duration greater than 2 s. The hardness for events longer than 2 s is a fraction of the value for the events below 2 s. Note that these short-time bursts have a much harder spectrum, a trend that would be expected if some of the short bursts came from PBH evaporation. Events have been selected with a single-spike time history, as would be expected for PBH evaporation.
exclude the possibility that all of the short (~1 s) GRBs are due to PBH evaporation, since a change in the average distance to the PBH-GRBs changes the rate as the distance cubed.

5. Diffuse gamma-ray flux — a galactic halo?

Over the past two decades, the reality of a diffuse component of the gamma-ray flux has been established [17,18]. While there is no firm explanation of the source of these gamma rays, one
possible candidate is the evaporation of PBHs in the Universe [2,13]. In fact, the diffuse gamma-ray spectrum and flux have been used to put the only real limit on the density of PBHs in the Universe, leading to a limit of Ω_PBH ≲ 10^{-8} [5,19]. The recent work of Dixon et al. [20] has claimed the existence of an important component of gamma rays in the galactic halo. This method of analysis is very different from previous analyses, and gives some confidence that the results are consistent with those of Osborne et al. [18]. This suggests that there is a halo component of diffuse gamma rays. The flux level of this component is very similar to the extragalactic diffuse flux. The existence of a local diffuse flux from PBH evaporation will depend on the level of clustering of the PBHs in galaxies and galactic halos. This same clustering magnitude will determine whether signals from individual PBH evaporations can be detected in real time [1], as discussed in the previous sections. A powerful argument would result from the detection of the galactic component of the diffuse gamma rays from PBH evaporation. There are two possibilities: 1. There could be a glow or distinct component of the diffuse radiation from the halo due to PBH evaporation. This may have been detected by EGRET [20]. In this analysis, a statistically significant large-scale halo surrounding the Milky Way is identified. The analysis uses GeV photons but shows that the effect is likely larger for 100 MeV photons, which are used in the work presented here. The basic concept is to identify a component of the gamma-ray flux that has the scale of the halo using a method of Haar wavelets. Further information can be found in Ref. [20]. 2. There could be an anisotropy of the diffuse gamma rays if there is a sufficient density of PBHs in the halo, which may also have been detected [18,21]. Point no. 2 has been discussed by Wright [21] in order to put a limit on PBH evaporation.
In fact, his analysis of the uncorrected EGRET map indicates the possible existence of such an anisotropy due to PBH evaporation, which we believe provides additional evidence for the existence of PBHs clustered in the Milky Way. The analysis [18] of the diffuse extragalactic background identified a larger anisotropy in 1994. We note that the direction of this effect is the same as the dipole anisotropy in the cosmic microwave background radiation, according to the work of Osborne et al. [18]. We now turn to a simple model that explains the current observations by assuming the existence of PBHs at the level of the relaxed Page-Hawking bound discussed previously and galactic clustering enhancement factors of 5×10^6 or greater. In Fig. 3, we show the logic of the possible detection of individual PBHs by very short GRBs and the connection with the diffuse gamma-ray background [1]. We now make a very simple estimate of the photon flux. The emissivity from the local flux from PBHs compared with the flux from the Universe can be estimated by using the ratio from Eqs. (12) and (13) of Ref. [21]:

I_G/I_U = (3 H_0 R_G/4c) G ,   (5)

where H_0 is the Hubble constant, R_G the galactocentric radius, G the clustering factor, I_G is the diffuse or halo "glow" flux, and I_U is the diffuse flux from the Universe. We take G, from the observations of GRBs consistent with PBH evaporation [1], to be 5×10^6 for a normal
Fig. 3. Schematic of the connection between GRB events and the diffuse γ background, I_γ, where N_U is the density of PBHs in the Universe, Ω_U is the ratio to the critical density of the Universe, T_QGP is the QGP transition temperature, S is the GRB luminosity from a PBH, N_G is the number of PBHs in the Galaxy, G is the galaxy clumping factor, S_min is the fluence sensitivity of GRB detectors, r is the distance to the source, Ṅ_G is the rate of PBH decay in the Galaxy, τ_U is the age of the Universe, R is the number of GRBs detected per year from PBHs, and η is the GRB detector efficiency (including the fraction of energy detected).
Page-Hawking bound of 2×10^4 pc^{-3} from Fig. 3 (we use the relaxed bound here), which is a clustering factor consistent with the GRB observations and the model of PBH evaporation [1]. We then find

I_G = 0.12 photons m^{-2} s^{-1} sr^{-1} ,   (6)

which is consistent with the value from Osborne et al. [18] and Dixon et al. [22] of

I(E) = 9.6×10^{-3} E^{-2.1} m^{-2} s^{-1} sr^{-1} GeV^{-1} .   (7)
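Equation (5) is simple enough to evaluate directly. The parameter values below (H_0, the galactocentric radius, and the clustering factor G) are assumptions chosen for this sketch, not numbers fixed by the text:

```python
# Direct evaluation of Eq. (5), I_G/I_U = 3 H0 R_G G / (4c). H0, R_G
# and the clustering factor are assumed values for this sketch.

H0 = 2.3e-18     # s^-1, Hubble constant (~70 km/s/Mpc, assumed)
C = 3.0e10       # cm/s
R_G = 2.6e22     # cm, ~8.5 kpc galactocentric radius (assumed)
G_CLUMP = 5.0e6  # clustering enhancement factor (order suggested above)

ratio = 3.0 * H0 * R_G * G_CLUMP / (4.0 * C)
print(f"I_G / I_U ~ {ratio:.1f}")
```

A ratio of order unity to ten means the halo glow would be comparable to, or exceed, the extragalactic diffuse flux, which is the qualitative claim made above.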
We predict that most of the isotropic background is due to the halo radiation from PBHs and that the anisotropy continues to be observed.
There are several possibilities for the production of PBHs in the Early Universe [13,23,24]. While some of these schemes are for solar-mass PBHs, it is very possible that similar mechanisms could produce the lower-mass PBHs described here. At the very small level of Ω_PBH ~ 10^{-8}, there could be several ways to produce them. The real test is whether they can be detected in our Galaxy by direct observation of the gamma-ray halo or the anisotropy, as suggested by Wright [21], or by the individual evaporation of a PBH, as suggested in the work of Cline et al. [1]. We therefore have data that suggest the existence of PBHs in the local Galaxy.
References
[1] D.B. Cline, D.A. Sanders, W.P. Hong, Astrophys. J. 486 (1997) 169.
[2] S.W. Hawking, Nature 248 (1974) 30.
[3] R.W. Klebesadel, I.B. Strong, R.A. Olsen, Astrophys. J. 182 (1973) L85.
[4] F. Halzen, E. Zas, J.H. MacGibbon et al., Nature 353 (1991) 807.
[5] J.H. MacGibbon, B.J. Carr, Astrophys. J. 371 (1991) 447.
[6] D.B. Cline, W.P. Hong, Astrophys. J. 401 (1992) L57.
[7] B.J. Carr, J.E. Lidsey, Phys. Rev. D 41 (1993) 543.
[8] C. Barat, R.I. Hayles, K. Hurley et al., Astrophys. J. 285 (1984) 791.
[9] J.P. Norris, T.L. Cline, U.D. Desai et al., Nature 308 (1984) 434.
[10] D.B. Cline, Astropart. Phys. 5 (1996) 175.
[11] G.J. Fishman, C.A. Meegan, R.B. Wilson et al., Astrophys. J. Suppl. 92 (1994) 229.
[12] B.J. Carr, J.H. Gilbert, J.E. Lidsey, Phys. Rev. D 50 (1994) 4853.
[13] S.W. Hawking, Commun. Math. Phys. 43 (1975) 199.
[14] J. Rafelski, B. Muller, Phys. Rev. Lett. 48 (1982) 1066.
[15] D.B. Cline, Nucl. Phys. A 610 (1996) 500.
[16] C.A. Meegan, G.N. Pendleton, M.S. Briggs et al., Astrophys. J. Suppl. 106 (1996) 65.
[17] G.E. Fichtel, G.A. Simpson, D.J. Thompson, Astrophys. J. 222 (1978) 833.
[18] J.L. Osborne, A.W. Wolfendale, L. Zhang, J. Phys. G 20 (1994) 1089.
[19] D.N. Page, S.W. Hawking, Astrophys. J. 206 (1976) 1.
[20] D. Dixon, D.H. Hartman, E.D. Koloczyk et al., Evidence for a GeV halo surrounding the Milky Way, U. California Riverside, preprint, 1998; also D. Dixon, private communication, 1998.
[21] E.L. Wright, Astrophys. J. 459 (1996) 487.
[22] J.H. MacGibbon, B.R. Webber, Phys. Rev. D 41 (1990) 3052.
[23] M. Crawford, D.N. Schramm, Nature 298 (1982) 538.
[24] A. Dolgov, J. Silk, Phys. Rev. D 47 (1993) 4244.
Physics Reports 307 (1998) 181—190
Black hole MACHO and its identification Takashi Nakamura* Yukawa Institute for Theoretical Physics, Kyoto University, 08190 Kyoto 606, Japan
Abstract If MACHOs are primordial black holes of mass ~0.5 M_⊙ (BHMACHOs), it is extremely difficult, if not impossible, to identify BHMACHOs by their accretion-driven emission. However, using gravitational wave detectors, BHMACHOs may be identified. There may exist ~5×10^8 BHMACHO binaries in the halo out to ~50 kpc whose coalescing time due to the emission of gravitational waves is comparable to the age of the universe. This means that the event rate will be ~5×10^{-2} events/yr/galaxy, and several events/yr within 15 Mpc are expected. The gravitational waves from such coalescing BHMACHO binaries, if they exist, can be detected by LIGO, VIRGO, TAMA and GEO within the next 5 years. 1998 Elsevier Science B.V. All rights reserved. PACS: 97.60.Lf
1. Introduction

The analysis of the first 2.1 years of photometry of 8.5 million stars in the Large Magellanic Cloud (LMC) by the MACHO collaboration [1] suggests that a fraction 0.62 of the halo consists of MACHOs of mass 0.5 M_⊙ in the standard spherical flat-rotation halo model. The preliminary analysis of the four-year data suggests the existence of at least five additional microlensing events in the direction of the LMC [2]. The estimated mass of MACHOs is just the mass of red dwarfs. However, the contribution of halo red dwarfs to MACHO events should be at most a few percent, from observations of the number density of red dwarfs [3-6]. As for white dwarf MACHOs, the IMF should have a sharp peak around 2 M_⊙ [7-9]. Several times more gas mass than MACHOs is needed to make white dwarf MACHOs, since the progenitors of white dwarfs are massive stars with mass <8 M_⊙. Extreme parameters or models are needed for the case of white dwarf MACHOs, although future observations of high-velocity white dwarfs in our solar neighborhood might prove the existence of white dwarf MACHOs. * E-mail:
[email protected]. 0370-1573/98/$ — see front matter 1998 Elsevier Science B.V. All rights reserved. PII: S 0 3 7 0 - 1 5 7 3 ( 9 8 ) 0 0 0 6 1 - 1
T. Nakamura / Physics Reports 307 (1998) 181—190
The measurement of the optical depth to other lines of sight, including the SMC and M31, is needed to confirm that MACHOs exist everywhere in the halo. One microlensing event toward the SMC [10,11] is not enough to determine the optical depth toward the SMC reliably. At present only the optical depth toward the LMC is available, so in principle MACHOs may not exist along other lines of sight. Any objects somewhere between the LMC and the sun with column density larger than 25 M_⊙ pc^{-2} [12] might explain the data. These include LMC-LMC self-lensing, spheroids, the thick disk, dwarf galaxies, tidal debris, and the warping and flaring of the Galactic disk [13-16]. However, none of them has been confirmed to explain the microlensing events toward the LMC. This means that MACHOs may not be stellar objects but absolutely new objects, such as black holes of mass ~0.5 M_⊙ or boson stars with boson mass ~10^{-10} eV. In this review we consider the black hole MACHO (BHMACHO) case. Since it is impossible to make a black hole of mass ~0.5 M_⊙ as a product of stellar evolution, it is necessary to consider the formation of solar-mass black holes in the very early universe, due to large metric perturbations or some unknown mechanism in various possible phase transitions [17-19]. The standpoint here, however, is not to review the detailed models of formation but to review the consequences of the formation of BHMACHOs. In short, the formation of solar-mass black holes in the early universe is assumed, without asking its theoretical origin, and we ask how to identify BHMACHOs. Note here that BHMACHOs of mass ~M_⊙ do not contribute much to the background radiation [20,21]. In Section 2 the identification of BHMACHOs by IR, optical and X-ray detectors is discussed. In Section 3 the formation of BHMACHO binaries is discussed. In Section 4 gravitational waves from coalescing BHMACHO binaries and their detectability are discussed.
2. Identification of BHMACHOs by IR, optical and X-ray detectors

Since BHMACHOs are moving in the halo, some of them may exist in dense molecular clouds such as the Orion nebula. Such a BHMACHO of mass M moving with velocity V accretes the surrounding gas within the accretion radius

r_a = 2GM/V^2 .   (1)
The accretion rate is then

Ṁ ≈ π r_a^2 ρ V ≈ 7.4×10^{15} (M/M_⊙)^2 (n/10^5 cm^{-3}) (V/10 km s^{-1})^{-3} g s^{-1}
  ≈ 5.3×10^{-2} (M/M_⊙) (n/10^5 cm^{-3}) (V/10 km s^{-1})^{-3} Ṁ_E ,   (2)

where ρ and n are, respectively, the mass and number density of the medium, and Ṁ_E = 1.39×10^{17} (M/M_⊙) g s^{-1} is the Eddington accretion rate with unit efficiency. The advection-dominated accretion flow (ADAF) model [22-26] is relevant for systems with accretion rates low compared with the Eddington one. With minimal assumptions and few free parameters,
Fig. 1. The luminosity spectra of BHMACHOs in the ADAF model (solid lines) with M = 0.5 M_⊙ and three different values of Ṁ (marked on each curve in Eddington units), which correspond to black-hole velocities V = 5, 10, 20 km s^{-1} for n = 10^5 cm^{-3}. To be compared are the Ipser-Price model spectra (dashed lines) for two Ṁ values. Also shown are the typical point-source detection limits for various observational facilities, assuming a distance of 400 pc (e.g., Orion).
ADAFs self-consistently predict a stable, hot, two-temperature structure that generates broad-band spectra from radio to gamma rays, and they have been successfully applied to a variety of low-luminosity objects. For a high accretion rate, V should be smaller than ~20 km/s. Fig. 1 [27] shows the luminosity spectra of BHMACHOs in the ADAF model for V = 5, 10, 20 km/s. The fraction of BHMACHOs with V < 20 km/s is estimated as 2×10^{-4} for a BHMACHO velocity dispersion of 155 km s^{-1}. If the density of MACHOs in the solar neighborhood is 0.0079 M_⊙ pc^{-3} [28], the BHMACHO number density is 0.016 pc^{-3}. Then the number of near-IR observable objects in Orion is only ~0.4, while the chances of an X-ray detection are hopelessly small. For BHMACHOs in the general interstellar matter of n = 1 cm^{-3}, the detection requirements are V < 4 km s^{-1} (near-IR) and V < 0.9 km s^{-1} (X-ray) at distances <400 pc; the fraction with V < 4 km s^{-1} of 2×10^{-6} then results in ~2 near-IR BHMACHOs, with the X-ray numbers again being minuscule. BHMACHO spectra in the Ipser and Price spherical accretion model [29,30] are also shown in Fig. 1. Its greater luminosity at optical wavelengths may raise the number of detections to ~10. However, discriminating isolated, accreting black holes from other sources based solely on IR-optical observations is akin to searching for needles in a haystack. Therefore, the identification of BHMACHOs by their accretion-driven emission is extremely difficult, if not impossible [27].
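The accretion estimate of Eqs. (1)-(2) can be sketched numerically. The accretion-radius prefactor is schematic and the sound speed is neglected, so the result should only be trusted to a factor of a few:

```python
# Sketch of the accretion estimate, Eqs. (1)-(2): Mdot ~ pi r_a^2 rho V
# with r_a = 2GM/V^2. Sound speed neglected; prefactor schematic.

import math

G = 6.674e-8      # cgs
M_SUN = 1.989e33  # g
M_H = 1.67e-24    # g, hydrogen mass

def mdot_cgs(m_msun, n_cm3, v_kms):
    m = m_msun * M_SUN
    v = v_kms * 1.0e5              # km/s -> cm/s
    r_a = 2.0 * G * m / v**2       # accretion radius, Eq. (1)
    return math.pi * r_a**2 * (n_cm3 * M_H) * v

def mdot_eddington(m_msun):
    """Eddington accretion rate with unit efficiency, ~1.4e17 (M/Msun) g/s."""
    return 1.39e17 * m_msun

mdot = mdot_cgs(0.5, 1.0e5, 10.0)  # Orion-like cloud, V = 10 km/s
print(f"Mdot        ~ {mdot:.1e} g/s")
print(f"Mdot/Mdot_E ~ {mdot / mdot_eddington(0.5):.1e}")
```

The result agrees with the normalization of Eq. (2) to within the stated factor-of-a-few uncertainty, and confirms that the flow is strongly sub-Eddington, the regime where the ADAF model applies.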
3. Formation of solar mass size black hole MACHO binary

Since there are a huge number (~4×10^{11}) of black holes in the halo, it is natural to expect that some of them are binaries. The fraction and the distribution function with respect to the semimajor
axis and the eccentricity of binary BHMACHOs have been estimated [21,31]. The main purposes of the estimate are the following two points. Firstly, if the semimajor axis of a BHMACHO binary in a circular orbit is ~10^{11} cm, the binary coalesces on the time scale of the age of the universe, so that the gravitational waves in the last three minutes [32] can be detected by LIGO, VIRGO, TAMA and GEO [33-36]. Secondly, among the microlensing events one event toward the LMC is due to a binary with separation ~2×10^{14} cm [37]. Although this may be a binary in the Galactic disk or in the LMC, it is important to ask whether it could be a BHMACHO binary. The density parameter of BHMACHOs, Ω_BHMACHO, should be comparable to Ω_CDM (or Ω_M) to explain the number of the observed MACHO events. In order to simplify the discussion it is assumed that BHMACHOs are the only dark matter and dominate the matter energy density, i.e., Ω = Ω_BHMACHO, although it is possible to consider other dark matter components than BHMACHOs. To determine the mean separation of the BHMACHOs, it is convenient to consider it at the time of matter-radiation equality, t = t_eq, with the normalization of the scale factor R = 1 at t = t_eq. At t = t_eq, the energy densities of the radiation and the BHMACHOs are the same and are given by

ρ_eq = 1.4×10^{-15} (Ωh^2)^4 g/cm^3 ,   (3)

where h is the Hubble parameter in units of 100 km/s Mpc. The mean separation of BHMACHOs with mass M_BH at this time is given by

x̄ = (M_BH/ρ_eq)^{1/3} = 1.1×10^{16} (M_BH/M_⊙)^{1/3} (Ωh^2)^{-4/3} cm .   (4)

Since the scale factor R is unity at t = t_eq, x̄ is regarded as the comoving mean separation. The Hubble horizon scale at t = t_eq is

L_eq = (3c^2/8πGρ_eq)^{1/2} = 1.1×10^{21} (Ωh^2)^{-2} cm .   (5)

During the radiation dominated era, the total energy inside the horizon grows as R. Since the Jeans mass in this era is essentially the same as the horizon mass, black holes are formed only at the time when the horizon scale is equal to the Schwarzschild radius of a BHMACHO. Thus the scale factor at the formation epoch becomes
R_f = (GM_BH/c^2 L_eq)^{1/2} = 1.2×10^{-8} (M_BH/M_⊙)^{1/2} (Ωh^2) .   (6)

The age of the universe and the temperature at R = R_f are ~10^{-5} s and ~1 GeV, respectively. Note here that at R = R_f only a fraction ~R_f of the total energy density is in the form of black holes, since the ratio of the black-hole density to the radiation density is in proportion to the scale factor R. Consider a pair of black holes with the same mass M_BH and an initial comoving separation x ≪ x̄ at R = R_f in the radiation dominated universe. At the beginning, the comoving separation does not change and the physical length of the separation increases in proportion to R. The energy density due to the pair of black holes is given by
ρ_BH = ρ_eq (x̄/x)^3 R^{-3} ,   (7)
while that due to the radiation is given by

ρ_r = ρ_eq R^{-4} .   (8)
Hence, the energy density of BHMACHOs dominates that of the radiation in the neighborhood of this BHMACHO binary for [21]

R > R_m ≡ (x/x̄)^3 .   (9)
This means that a pair of black holes decouples from the cosmic expansion at R = R_m and becomes a bound system. To confirm this simple argument, Newtonian numerical simulations of the formation of such a bound system in the expanding radiation dominated universe have been performed [31]. Fig. 2 [31] shows simulations for x = 0.1x̄ with zero initial relative velocity at R = R_i. From Eq. (9), R_m is ~10^{-3}, while Fig. 2 shows that decoupling occurs at R ≈ 1.5×10^{-3} irrespective of the starting time R_i, which confirms the simple estimate of Eq. (9). It is also confirmed numerically that the formation time is almost independent of the initial velocity at R = R_i, since due to the cosmic expansion the peculiar velocity decays away even if it is comparable to the light velocity at R = R_i [31]. In Fig. 2, the pair of black holes has no angular momentum, so they coalesce into a single black hole on the free-fall time scale. However, in reality, there exists a tidal force from neighboring black holes, so the pair acquires angular momentum and becomes a binary black hole with an eccentric orbit.
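The equality-epoch quantities in Eqs. (3)-(6) can be evaluated numerically; a minimal sketch using the coefficients as reconstructed here (treat them as approximate):

```python
# Check of Eqs. (3)-(6): mean comoving separation xbar, horizon scale
# L_eq at matter-radiation equality, and the scale factor R_f at which
# BHMACHOs form. Coefficients as reconstructed in the text (approximate).

import math

G = 6.674e-8      # cgs
C = 3.0e10        # cm/s
M_SUN = 1.989e33  # g

def formation_scales(m_bh_msun, omega_h2):
    rho_eq = 1.4e-15 * omega_h2**4                  # g/cm^3, Eq. (3)
    m = m_bh_msun * M_SUN
    xbar = (m / rho_eq) ** (1.0 / 3.0)              # Eq. (4)
    l_eq = math.sqrt(3.0 * C**2 / (8.0 * math.pi * G * rho_eq))  # Eq. (5)
    r_f = math.sqrt(G * m / (C**2 * l_eq))          # Eq. (6)
    return xbar, l_eq, r_f

xbar, l_eq, r_f = formation_scales(0.5, 1.0)
print(f"xbar ~ {xbar:.1e} cm,  L_eq ~ {l_eq:.1e} cm,  R_f ~ {r_f:.1e}")
```

For M_BH = 0.5 M_⊙ and Ωh^2 = 1, this reproduces the normalizations of Eqs. (4)-(6) (x̄ ~ 10^{16} cm, L_eq ~ 10^{21} cm, R_f ~ 10^{-8}).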
Fig. 2. The time evolution of the separation of a pair of black holes with x = 0.1x̄ at R = R_i. Solid lines show the relative physical separation of the pair of black holes as a function of the scale factor R for three values of the initial R_i. At R = R_i, the relative velocity of the black holes is assumed to be zero. At first the relative separation increases due to the cosmic expansion, but eventually, due to the gravity between the black holes, the pair decouples from the cosmic expansion. Finally the two black holes collide.
Define the semimajor axis and the semiminor axis of the binary as a and b, respectively. Suppose that the comoving separation of the nearest neighboring black hole from the center of mass of the binary is y. Then a is estimated as [21]

a = x R_m = x^4/x̄^3 ,   (10)
while b is evaluated as (tidal force)×(free-fall time)^2 :

b = [G M_BH x R_m/(y R_m)^3] × [(x R_m)^3/G M_BH] = (x/y)^3 a .   (11)

Hence, the eccentricity e is given by [21]

e = [1 - (x/y)^6]^{1/2} .   (12)
Fig. 3 [31] shows a numerical simulation of such a three-body problem in the expanding radiation dominated universe. The dotted and solid lines show the relative orbits of the third and the second black hole with respect to the first one, respectively. The third black hole follows the expansion of the universe and goes away, giving angular momentum to the first and the second. From 300 simulations with various x and y, it is confirmed that relations (10) and (12) hold with a slight modification [31]. Ioka et al. [31] considered various effects on the above three-body model, such as the angle dependence of the third body, 3-body collisions, the effect of the mean fluctuation field, the initial condition dependence, radiation drag, and so on. They found that, within 50% ambiguity, the estimates of
Fig. 3. Formation of a binary black hole. The dotted and solid lines show the relative orbits of the third and the second black hole with respect to the first one in the physical two-dimensional plane, respectively. The third black hole follows the expansion of the universe and goes away, giving angular momentum to the first and the second. The initial parameters are shown in the figure.
the distribution function and the event rate of coalescence based on Eqs. (10) and (12) are correct [31]. Now assume that x and y have the uniform probability distribution (in three-dimensional space) in the range x < y < x̄. Then the probability distribution of the binary parameters, a and e, becomes [21]

f(a, e) da de = (18 x^2 y^2/x̄^6) dx dy   (13)

            = [3 e a^{1/2} / 2 x̄^{3/2} (1 - e^2)^{3/2}] da de .   (14)

Integrating f(a, e) with respect to e, the distribution of the semimajor axis f_a(a) is given as

f_a(a) da = (3/2) [(a/x̄)^{3/4} - (a/x̄)^{3/2}] da/a .   (15)
From Eq. (15), it is found that the fraction of binary BHMACHOs with a ≲ 2×10^{14} cm is ~8% and ~0.9% for Ωh^2 = 1 and 0.1, respectively. The estimated fraction of ~10 AU size BHMACHO binaries is slightly smaller than the observed rate of the binary event (i.e., one binary event in 8 observed MACHOs).
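The quoted fractions follow from integrating Eq. (15); the cumulative form is F(a) = 2(a/x̄)^{3/4} - (a/x̄)^{3/2} (exponents as reconstructed here). A sketch, with x̄ taken from Eq. (4) for M_BH = 0.5 M_⊙:

```python
# Cumulative form of Eq. (15): F(a) = 2 (a/xbar)^(3/4) - (a/xbar)^(3/2),
# the fraction of binaries with semimajor axis below a. The xbar values
# follow from Eq. (4) for M_BH = 0.5 Msun.

def cumulative_fraction(a_cm, xbar_cm):
    u = a_cm / xbar_cm
    return 2.0 * u ** 0.75 - u ** 1.5

A_BINARY = 2.0e14  # cm, ~10 AU, the observed binary-event separation

for omega_h2, xbar in [(1.0, 8.7e15), (0.1, 1.9e17)]:
    f = cumulative_fraction(A_BINARY, xbar)
    print(f"Omega h^2 = {omega_h2}:  F(2e14 cm) ~ {100.0 * f:.1f}%")
```

This reproduces the quoted ~8% and ~0.9% to within a factor of about 1.5, adequate for an order-of-magnitude argument.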
4. Gravitational waves from coalescing BHMACHO binary

Short-period BHMACHO binaries may coalesce due to the emission of gravitational waves within the age of the universe [21]. The coalescing time is approximately given by [38,39]

t = t_0 (a/a_0)^4 (1 - e^2)^{7/2} ,   (16)

where t_0 = 10^{10} yr and

a_0 = 2×10^{11} (M_BH/M_⊙)^{3/4} cm   (17)

is the semimajor axis of the circular-orbit BHMACHO binary which coalesces in t_0. Integrating Eq. (14) for a fixed t with the aid of Eq. (16), the probability distribution of the coalescing time is obtained as [21]

f_t(t) dt = (3/29) [(t/t_max)^{3/37} - (t/t_max)^{3/8}] dt/t ,   (18)
where

t_max = t_0 (x̄/a_0)^4 .   (19)

If the halo of our galaxy consists of BHMACHOs of mass ~0.5 M_⊙, ~10^{12} BHMACHOs exist out to the LMC. The number of coalescing binary BHMACHOs with t ~ t_0 becomes ~5×10^8 for Ωh^2 = 0.1 from Eq. (18), so that the event rate of coalescing BHMACHO binaries becomes ~5×10^{-2} events/yr/galaxy. This can be compared with the event rate of coalescing binary neutron stars, which is one of the most important sources of gravitational waves. Based on the number of the three known binary neutron stars, that event rate is estimated as 10^{-6}-2×10^{-5} events/yr/galaxy [40-42]. The event rate of coalescing BHMACHO binaries is three orders of magnitude larger than this and is comparable to or greater than the upper limit [40]. If, however, BHMACHOs extend up to half way to M31, the number of coalescing binary BHMACHOs with t ~ t_0 can be ~3×10^9 and the event rate becomes even higher, ~0.3 events/yr/galaxy. The detectability of these waves by interferometers is most easily discussed in terms of the waves' "characteristic amplitude" h_c, given by [43]

h_c = 5.3×10^{-21} (M_c/M_⊙)^{5/6} (ν/100 Hz)^{-1/6} (r/15 Mpc)^{-1} ,   (20)

where ν and r are the frequency of the gravitational waves in the last three minutes [32] and the distance to the BHMACHO binary, respectively, while M_c = (M_1 M_2)^{3/5}/(M_1 + M_2)^{1/5} is the "chirp mass" of the binary whose components have individual masses M_1 and M_2. For the first LIGO and VIRGO interferometers [33,34], which are expected to be operational in 2001, the sensitivity is expected to be h ≈ 3×10^{-21}. For an equal-mass BHMACHO binary, M_1 = M_2 = 0.5 M_⊙, M_c becomes 0.43 M_⊙ and h_c is ~3×10^{-21} for ν = 100 Hz and r = 15 Mpc. LIGO/VIRGO should be able to detect coalescing BHMACHO binaries, with high confidence, out to about 15 Mpc distance with an event rate of several per year, since the number density of galaxies is ~0.01 Mpc^{-3}. If a BHMACHO binary coalescence occurs in our halo, TAMA300 [35] and GEO600 [36] can see the event by the end of this century.
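The Section 4 numbers can be strung together in one short sketch. The x̄ and a_0 values, the BHMACHO count, and the Eq. (20) coefficient are the illustrative figures discussed above (treat them as approximate):

```python
# Sketch of the Section 4 estimates: fraction of binaries with
# coalescing time ~ t0 from Eq. (18), the implied event rate, and the
# chirp mass / characteristic amplitude of Eq. (20). Input values are
# the illustrative figures discussed in the text.

T0_YR = 1.0e10

def t_max_yr(xbar_cm, a0_cm):
    return T0_YR * (xbar_cm / a0_cm) ** 4            # Eq. (19)

def f_t(t_yr, tmax_yr):
    """Eq. (18): (3/29)[(t/tmax)^(3/37) - (t/tmax)^(3/8)] per ln t."""
    u = t_yr / tmax_yr
    return (3.0 / 29.0) * (u ** (3.0 / 37.0) - u ** (3.0 / 8.0))

def chirp_mass(m1, m2):
    """M_c = (m1 m2)^(3/5) / (m1 + m2)^(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def h_c(mc_msun, f_hz=100.0, r_mpc=15.0):
    """Eq. (20) with the coefficient and exponents as reconstructed here."""
    return (5.3e-21 * mc_msun ** (5.0 / 6.0)
            * (f_hz / 100.0) ** (-1.0 / 6.0) * (r_mpc / 15.0) ** (-1.0))

frac = f_t(T0_YR, t_max_yr(1.9e17, 1.2e11))  # Omega h^2 = 0.1, M = 0.5 Msun
n_binaries = 5.0e11 * frac                   # ~1e12 BHMACHOs -> ~5e11 pairs
rate = n_binaries / T0_YR

print(f"fraction with t ~ t0 : {frac:.1e}")
print(f"coalescing binaries  : {n_binaries:.1e}")
print(f"event rate           : {rate:.1e} /yr/galaxy")
print(f"M_c, h_c             : {chirp_mass(0.5, 0.5):.2f} Msun, "
      f"{h_c(chirp_mass(0.5, 0.5)):.1e}")
```

This recovers the ~5×10^8 coalescing binaries, the ~5×10^{-2} events/yr/galaxy rate, and M_c ≈ 0.43 M_⊙ quoted above.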
If the gravitational waves from a coalescing BHMACHO binary are detected, then by cross-correlating the observational data with the theoretical template of the waves in the last three minutes [32], each mass, the distance and the direction of the BHMACHO binary [44] can be determined, so that the BHMACHO can be identified. One of the differences between a coalescing BHMACHO binary and coalescing neutron stars is the mass determined by the gravitational waves. Even if the mass of a BHMACHO is ~1 M_⊙, the gravitational wave forms of coalescing neutron star binaries may differ from those of BHMACHO binaries in the final merging phase, since the radius of a neutron star is larger than its Schwarzschild radius. Moreover, if coalescing binary neutron stars are sources of the gamma-ray bursts [45], the gamma rays appear ~1 s after the amplitude of the gravitational waves becomes zero, while in the coalescence of binary BHMACHOs no gamma-ray emission is expected. Therefore we may be able to distinguish coalescing binary BHMACHOs from coalescing binary neutron stars.
T. Nakamura / Physics Reports 307 (1998) 181—190
In conclusion, if MACHOs are primordial black holes of mass ~0.5 M_⊙, they will be identified by gravitational wave detectors within the next five years.
Acknowledgements The author would like to thank Thorne, Sasaki, Tanaka, Ioka, Chiba, Fujita and Inoue for useful discussions. This work was supported in part by the Grant-in-Aid for Basic Research of the Ministry of Education, Science, Sports and Culture, No. 08NP0801 and No. 09640351.
References [1] C. Alcock et al., Astrophys. J. 486 (1997) 697. [2] K. Cook, Talk at 3rd Int. Workshop on Gravitational Microlensing Surveys, College de France and Institut d’Astrophysique de Paris, January 15—17, 1998. [3] J.N. Bahcall, C. Flynn, A. Gould, S. Kirhakos, Astrophys. J. Lett. 435 (1994) L51. [4] D.S. Graff, K. Freese, Astrophys. J. Lett. 456 (1996) L49. [5] D.S. Graff, K. Freese, Astrophys. J. Lett. 467 (1996) L65. [6] C. Flynn, A. Gould, J.N. Bahcall, Astrophys. J. Lett. 466 (1996) L55. [7] G. Chabrier, L. Segretain, D. Mera, Astrophys. J. Lett. 468 (1996) L21. [8] F.C. Adams, G. Laughlin, Astrophys. J. 468 (1996) 586. [9] B.D. Fields, G.J. Mathews, D.N. Schramm, Astrophys. J. 483 (1997) 625. [10] C. Alcock et al., Astrophys. J. 491 (1997) L11. [11] N. Palanque-Delabrouille et al., Astron. Astrophys. 332 (1998) 1. [12] T. Nakamura, Y. Kan-ya, R. Nishi, Astrophys. J. Lett. 473 (1996) L99. [13] K.C. Sahu, Nature 370 (1994) 275. [14] H.S. Zhao, astro-ph/9606166; astro-ph/9703097. [15] N.W. Evans, G. Gyuk, M.S. Turner, J. Binney, Astrophys. Lett., in press; astro-ph/9711224. [16] E.I. Gates, G. Gyuk, G.P. Holder, M.S. Turner, astro-ph/9711110. [17] J. Yokoyama, Astron. Astrophys. 318 (1997) 673. [18] M. Kawasaki, N. Sugiyama, T. Yanagida, hep-ph/9710259. [19] K. Jedamzik, Phys. Rev. D 55 (1997) 5871. [20] B.J. Carr, Mon. Not. R. Astron. Soc. 189 (1979) 123. [21] T. Nakamura, M. Sasaki, T. Tanaka, K.S. Thorne, Astrophys. J. Lett. 487 (1997) L139. [22] S. Ichimaru, Astrophys. J. 214 (1977) 840. [23] R. Narayan, I. Yi, Astrophys. J. 428 (1994) L13. [24] M.A. Abramowicz, X. Chen, S. Kato, J.-P. Lasota, O. Regev, Astrophys. J. Lett. 438 (1994) L37. [25] R. Narayan, in: S. Kato, S. Inagaki, S. Mineshige, J. Fukue (Eds.), Physics of Accretion Disks, Gordon & Breach, New York, 1996, p. 15. [26] R. Narayan, 1997, in: D.T. Wickramasinghe, L. Ferrario, G.V. Bicknell (Eds.), IAU Colloq. No. 
163, Accretion Phenomena & Related Outflows, ASP, San Francisco, in press, and references therein. [27] Y. Fujita, S. Inoue, T. Nakamura, T. Manmoto, K.E. Nakamura, Astrophys. J. Lett. 495 (1998) L85. [28] C. Alcock et al., Astrophys. J. 482 (1997) 98. [29] J.R. Ipser, R.H. Price, Astrophys. J. 255 (1982) 654. [30] J.R. Ipser, R.H. Price, Astrophys. J. 267 (1983) 371. [31] K. Ioka, T. Chiba, T. Tanaka, T. Nakamura, Phys. Rev. D 58 (1998) 063003. [32] C. Cutler et al., Phys. Rev. Lett. 70 (1993) 2984.
[33] B. Barish, in: K. Tsubono, M.-K. Fujimoto, K. Kuroda (Eds.), Gravitational Wave Detection, Universal Academic Press, Tokyo, 1996, pp. 155—162. [34] A. Brillet, in: K. Tsubono, M.-K. Fujimoto, K. Kuroda (Eds.), Gravitational Wave Detection, Universal Academic Press, Tokyo, 1996, pp. 163—174. [35] K. Tsubono, in: K. Tsubono, M.-K. Fujimoto, K. Kuroda (Eds.), Gravitational Wave Detection, Universal Academic Press, Tokyo, 1996, pp. 183—192. [36] J. Hough, in: K. Tsubono, M.-K. Fujimoto, K. Kuroda (Eds.), Gravitational Wave Detection, Universal Academic Press, Tokyo, 1996, pp. 175—182. [37] D.P. Bennett et al., Nucl. Phys. B 51 (Proc. Suppl.) (1996) 152. [38] P.C. Peters, J. Mathews, Phys. Rev. 131 (1963) 435. [39] P.C. Peters, J. Mathews, Phys. Rev. 136 (1964) B1224. [40] E.S. Phinney, Astrophys. J. Lett. 380 (1991) L17. [41] R. Narayan, T. Piran, A. Shemi, Astrophys. J. Lett. 379 (1991) L17. [42] E.P.J. van den Heuvel, D.R. Lorimer, Mon. Not. R. Astron. Soc. 283 (1996) L37. [43] K.S. Thorne, in: S.W. Hawking, W. Israel (Eds.), 300 Years of Gravitation, Cambridge University Press, Cambridge, 1996, p. 330. [44] C. Cutler, E.E. Flanagan, Phys. Rev. D 49 (1994) 2658—2697. [45] P. Mészáros, astro-ph/9711354 and references therein.
Physics Reports 307 (1998) 191—196
Astrophysical constraints on primordial black hole formation from collapsing cosmic strings Ubi F. Wichoski *, Jane H. MacGibbon, Robert H. Brandenberger Department of Physics, Brown University, Providence, RI 02912, USA Code SN3, NASA Johnson Space Center, Houston, TX 77058, USA
Abstract Primordial black holes (PBHs) may have formed from the collapse of cosmic string loops. The spectral shape of the PBH mass spectrum is determined by the scaling argument for string networks. Limits on the spectral amplitude derived from extragalactic γ-ray, Galactic γ-ray and cosmic ray flux observations, as well as constraints from the possible formation of stable black hole remnants, are reanalyzed. The new constraints are remarkably close to those derived from the normalization of the cosmic string model to the cosmic microwave background anisotropies. 1998 Published by Elsevier Science B.V. All rights reserved. PACS: 97.60.Lf; 98.80.Cq
1. Introduction

Cosmic strings are linear topological defects that are believed to originate during phase transitions in the very early Universe [1—3]. Here, we consider the "standard" cosmic string model [4,5], according to which the network of linear defects quickly reaches a "scaling" solution, characterized by the statistical properties of the string distribution being independent of time if all lengths are scaled to the Hubble radius (R_H = ct, where c is the speed of light). Cosmic string loops are continually formed by the intersection and self-intersection of long cosmic strings (infinite cosmic strings, or cosmic string loops with radius of curvature larger than R_H). After formation, a loop oscillates under its own tension and slowly decays by emitting gravitational radiation. The initial length of a cosmic string loop is l(t) = αR_H, where α is expected to be of order ΓGμ/c², with Γ ≈ 100 the gravitational radiation coefficient. The mass of a cosmic string loop is

m(t) = l(t)μ ,  (1)

where μ is the mass per unit length of the string. * Corresponding author. E-mail:
[email protected].
Since cosmic strings also lead to cosmic microwave background (CMB) anisotropies, the string model can be normalized to the recent COBE observations, giving the constraint [6,7]

Gμ/c² ≤ 1.7(±0.7)×10⁻⁶ .
(2)
Our assumption is that a distribution of PBHs was formed by the collapse of a fraction f of cosmic string loops [8,9]. Hence, from the observational consequences of a presently surviving distribution of PBHs we can derive updated constraints on the cosmic string scenario [10]. These constraints are important because: (i) they may indicate new ways to search for direct signatures of cosmic strings; (ii) they may provide constraints on cosmic string models with symmetry breaking scale smaller than 10¹⁶ GeV, which are not constrained by CMB and large-scale structure data; and (iii) they may provide tighter limits than the CMB on cosmic string models with Gμ/c² ~ 10⁻⁶. In order not to dominate the energy density of the Universe, the cosmic string network must lose energy. We derive the rate of cosmic string loop production, dn_l/dt, from the conservation of string energy in the "scaling" scenario,

ρ̇_∞ + 2Hρ_∞ = −(dn_l/dt) αμct ,

where ρ_∞ = νμ/(c²t²) is the mass density in long strings and ν is proportional to the average number of long strings crossing each Hubble volume. Hawking [8] and Polnarev and Zembowicz [9] first studied the possibility that a fraction f of the cosmic string loops could collapse within their Schwarzschild radii and thereby form black holes. In the case of a planar circular cosmic string loop, the collapse will occur after a quarter of an oscillation period. In the general case, however, a typical string loop, emerging either immediately after a phase transition in the early Universe or subsequently at a much later time from the intersection of long strings, will mostly be asymmetric, and hence have a tiny, but still significant, probability of collapsing to a radius small enough to form a black hole. The most recent estimate of f is due to Caldwell and Casper [11]. By numerically simulating loop fragmentation and evolution, they found

f = 10^{4.9±0.2} (Gμ/c²)^{4.1±0.1} .
(3)
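Because of the steep Gμ/c² dependence, f varies enormously over the range of interest; a quick Python evaluation of Eq. (3) at the Caldwell and Casper central values:

```python
def collapse_fraction(gmu):
    """Central values of Eq. (3): f = 10**4.9 * (G*mu/c**2)**4.1.
    Error bars on both exponents are dropped in this sketch."""
    return 10**4.9 * gmu**4.1

# One decade in the string tension changes f by about four decades.
for gmu in (1e-7, 1e-6, 1.7e-6):
    print(f"Gmu/c^2 = {gmu:.1e}  ->  f = {collapse_fraction(gmu):.1e}")
```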
Black holes so created are sufficiently small that they lose mass through the Hawking evaporation process. The fraction of the critical density of the Universe in PBHs today due to collapsing cosmic string loops is (see Ref. [10] and references quoted therein)
Ω_PBH(t₀) = (1/ρ_c(t₀)) ∫_{t*}^{t₀} (dn_BH/dt) m(t, t₀) dt ,  (4)

where dn_BH/dt is the rate of black hole formation at time t; t₀ is the present age of the Universe; t* is the time when a PBH with initial mass M* = 4.4×10¹⁴ h⁻⁰·³ g, which is just expiring today, formed; m(t, t₀) is the present mass of a PBH created at time t < t₀; and h is the Hubble parameter in units of 100 km s⁻¹ Mpc⁻¹. PBHs formed at times t < t* (with initial mass M < M*) do not contribute to this integral, because they will have evaporated by today. We can approximate m(t, t₀) by the initial mass of the black hole as given in Eq. (1):
m(t, t₀) ≈ αμct .
(5)
Since black holes with initial masses greater than M* will have evaporated little by today, this approximation adds an uncertainty of less than 6% to the value of Ω_PBH. This can be shown by taking the mass loss rate of an individual black hole [12,13],

dM/dt ≈ −5.34×10²⁵ φ(M) (M/g)⁻² g s⁻¹ ,
(6)
solving for m(t, t₀), and comparing a numerical evaluation of the resulting integral (4) with that obtained from the analytical approximation using Eq. (5). In Eq. (6), φ(M) is a slowly increasing function which depends on the number of particle species emitted by the black hole; φ(M) is normalized to unity when only massless particles are emitted, and φ(M*) ≈ 2.
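Eq. (6) integrates in closed form, which makes the statement that M* is "just expiring today" easy to check; a short Python sketch, treating φ as constant (an approximation, since φ(M) varies slowly):

```python
def pbh_lifetime_s(m_init_g, phi=1.0):
    """Lifetime from dM/dt = -A*phi/M**2 (Eq. (6)): t = M**3 / (3*A*phi).
    phi is held fixed here; in reality phi(M) varies slowly with M."""
    A = 5.34e25  # g**3 / s
    return m_init_g**3 / (3.0 * A * phi)

m_star = 4.4e14  # g: initial mass of a PBH expiring today (h factors dropped)
t = pbh_lifetime_s(m_star, phi=2.0)  # phi(M*) ~ 2 per the text
print(f"{t:.2e} s")  # a few times 1e17 s, of order the age of the Universe
```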
2. Galactic and extragalactic flux constraints

It is well known [14—16] that the extragalactic γ-ray flux observed at 100 MeV provides a strong constraint on the population of black holes evaporating today. Too many black holes would lead to an excess of such radiation above the observed value. In particular, it was shown that if the present day number density distribution of black holes of mass M has the form [14]
dn/dM = (β−2) M*^{β−2} M^{−β} Ω_PBH ρ_c ,  M ≳ M* ,
(7)
where b"2.5 for black holes formed in the radiation-dominated era, then comparing the Hawking emission from the black hole distribution with the c-ray background observed by the EGRET experiment implies the limit on the present black hole density of [17] X :(5.1$1.3);10\h\ ! . . &
(8)
The γ-ray flux per unit energy from the black hole distribution (7) turns over from an E⁻¹ slope below about 10 MeV to a steeper slope around E ≈ 100 MeV, the peak energy of the instantaneous emission from a black hole of mass M*. The emission from the black hole distribution falls off as E⁻³ above E ≈ 1 GeV. The origin of the observed keV—GeV extragalactic γ-ray background is unknown. Since the observed γ-ray background falls off as E^{−2.10±0.03} between 30 MeV and 120 GeV, this raises the possibility that black hole emission may explain, or contribute significantly to, the observed extragalactic background between about 50 and 200 MeV. From the rate of formation of cosmic string loops it follows that

dn_BH/dt = f dn_l/dt = f (ν/α) c⁻³ t⁻⁴ .
(9)
Note that this distribution is also proportional to M^{−5/2}. Thus, we can apply the limit (8) to Eq. (4). Taking into account the formation rate of cosmic string loops given by Eq. (9), we can determine an
upper bound on the fraction f of cosmic string loops which collapses to form PBH [10]
f ≲ 6.8(±1.7)×10⁻¹⁹ (ν/40)⁻¹ (γ/100)^{−1/2} (M*/4.4×10¹⁴ h⁻⁰·³ g)^{1/2} (Gμ/c² / 1.7×10⁻⁶)⁻² (t₀ / 3.2×10¹⁷ h⁻¹ s)^{−1/2} h^{−1.95±0.15} .

(10)
In this expression, c"a(Gk/c)\ is a dimensionless coefficient describing the strength of gravitational radiation generated by string loops. Now, if we assume the validity of the Caldwell and Casper simulations [11], we can deduce an upper bound on the value of Gk/c. Combining Eqs. (3) and (10) and taking into account Eq. (8), we have Gk/c42.1($0.7);10\ .
(11)
This limit is very close to, and overlaps, that given in Eq. (2) from the normalization of the cosmic string model to the CMB. We can also apply the limits on Ω_PBH(t₀) derived from observations of the Galactic γ-ray, antiproton, electron and positron fluxes. These limits, however, are less certain than the diffuse extragalactic γ-ray flux limit, because of the dependence on the unknown degree to which PBHs cluster in the Galaxy and on the propagation and modulation of the emitted particles in the Galaxy and Solar System. Assuming a halo model in which the spatial distribution of black holes is proportional to the isothermal density distribution of dark matter within the Galactic halo, and simulating the diffusive propagation of antiprotons in the Galaxy, Maki et al. [18] derive an upper limit on Ω_PBH of

Ω_PBH < 1.8×10⁻⁹ h^{−1.95}  (12)

based on antiproton data from the BESS '93 balloon flight. This value would imply a limit on f in Eq. (10) that is stronger by a factor of about 3, and a corresponding limit on Gμ/c² in Eq. (11) of

Gμ/c² < 1.8(±0.5)×10⁻⁶ .
(13)
Fig. 1 summarizes the constraints on f and Gμ/c² for the values ν = 40, γ = 100 and h = 0.5. Curves (a1) and (a2) show the upper limit on f given in Eq. (10) (the two curves indicate the error range); curves (b1) and (b2) represent the upper and lower limits of the Caldwell and Casper estimates for f; and curve (c) indicates the CMB constraint on Gμ/c². The allowed region of parameter space is below curve (a1) and to the left of curve (c). The Caldwell and Casper analysis restricts the parameter space to the shaded region of the graph.
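The crossing point of the two sets of curves can be reproduced to rough accuracy with a one-line inversion, using the Caldwell and Casper central values f = 10^4.9 (Gμ/c²)^4.1 and treating the γ-ray bound on f as a constant at its fiducial normalization (an illustrative simplification; the 6.8×10⁻¹⁹ coefficient is the normalization as read from Eq. (10) here, with its residual Gμ/c² dependence ignored):

```python
# Gamma-ray bound on f, frozen at its fiducial normalization (assumption):
F_MAX = 6.8e-19

def gmu_upper_limit(f_max=F_MAX):
    """Solve 10**4.9 * x**4.1 = f_max for x = G*mu/c**2 (central values)."""
    return (f_max / 10**4.9) ** (1.0 / 4.1)

x = gmu_upper_limit()
print(f"G*mu/c^2 < {x:.1e}")  # of order 2e-6, near the quoted ~2.1e-6
```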
3. Limits on black hole remnants

The final stage of an expiring black hole is unknown [19,20]. The evaporation may stabilize at or before the black hole mass diminishes to the Planck mass, m_pl [21—23]. In such cases, a mass
Fig. 1. Main constraints on the value of f as a function of Gk/c.
M_rem would remain, implying that the present fraction of the critical density in black hole relics is

Ω_rem = (M_rem/ρ_c(t₀)) ∫_{t_i}^{t*} (dn_BH/dt) dt .  (14)

For Ω_rem ≤ 1, we obtain the constraint

f ≲ 1.9×10⁻¹⁷ (ν/40)⁻¹ (γ/100)^{−1/2} (m_pl/M_rem) (Gμ/c² / 1.7×10⁻⁶)⁻¹ (t₀ / 3.2×10¹⁷ h⁻¹ s)^{−1/2} h² .  (15)

Similarly, if we apply the bounds on f and Gμ/c² from Eqs. (10) and (2) to Eq. (14), we can derive an upper limit on Ω_rem:

Ω_rem ≲ 3.6(±0.9)×10⁻² (M_rem/m_pl) (M*/4.4×10¹⁴ h⁻⁰·³ g)^{1/2} (Gμ/c² / 1.7×10⁻⁶)⁻¹ h^{−3.95±0.15} .  (16)

Thus, the bound on the black hole formation efficiency factor f given by Eq. (10) implies that black hole remnants from collapsing cosmic string loops would contribute significantly to the dark matter of the Universe in the cosmic string scenario of structure formation (Gμ/c² ≈ 1.7×10⁻⁶) if the black hole remnants have a relic mass larger than about 10 m_pl.
4. Conclusions

We have taken advantage of the recent numerical simulations to better understand PBHs formed by cosmic string loop collapse. The observational consequences of a PBH distribution were used to
constrain the cosmic string scenario. We have found that the limits on Gμ/c² are comparable to those stemming from other criteria. We have also investigated the constraints which can be derived if black holes do not evaporate completely, but instead evolve into stable massive remnants at the end of their lives. We have found that unless the mass of the black hole remnants is larger than 10 m_pl, these remnants will contribute negligibly to the dark matter of the Universe, even if the black hole formation rate has the maximal value allowed by the γ-ray flux constraints. A remnant mass of 10 m_pl, however, can arise naturally in some models [24] of black hole evaporation. In this case, cosmic strings could consistently provide an explanation for the origin of cosmological structure, for the dark matter, and for the origin of the extragalactic γ-ray and Galactic cosmic ray backgrounds around 100 MeV.
References [1] A. Vilenkin, E.P.S. Shellard, Cosmic Strings and Other Topological Defects, Cambridge Univ. Press, Cambridge, 1994. [2] M. Hindmarsh, T.W.B. Kibble, Rep. Prog. Phys. 58 (1995) 477. [3] R. Brandenberger, Int. J. Mod. Phys. A 9 (1994) 2117. [4] Ya.B. Zel’dovich, Mon. Not. R. Astron. Soc. 192 (1980) 663. [5] A. Vilenkin, Phys. Rev. Lett. 46 (1981) 1169. [6] L. Perivolaropoulos, Phys. Lett. B 298 (1993) 305. [7] B. Allen et al., Phys. Rev. Lett. 77 (1996) 3061. [8] S. Hawking, Phys. Lett. B 231 (1989) 237. [9] A. Polnarev, R. Zembowicz, Phys. Rev. D 43 (1991) 1106. [10] J.H. MacGibbon, R. Brandenberger, U.F. Wichoski, Phys. Rev. D 57 (1998) 2158. [11] R. Caldwell, P. Casper, Phys. Rev. D 53 (1996) 3002. [12] J.H. MacGibbon, Phys. Rev. D 44 (1991) 376. [13] D. Page, Phys. Rev. D 13 (1976) 198. [14] B.J. Carr, Astrophys. J. 201 (1975) 1. [15] D. Page, S. Hawking, Astrophys. J. 206 (1976) 1. [16] B.J. Carr, Astrophys. J. 206 (1976) 8. [17] J.H. MacGibbon B.J. Carr (1998), in preparation. [18] K. Maki et al., Phys. Rev. Lett. 76 (1996) 3474. [19] S. Hawking, Nature 248 (1974) 30. [20] S. Hawking, Comm. Math. Phys. 43 (1975) 199. [21] J. MacGibbon, Nature 329 (1987) 308. [22] M. Trodden, V. Mukhanov, R. Brandenberger, Phys. Lett. B 316 (1993) 483. [23] B. Carr, J. Gilbert, J. Lidsey, Phys. Rev. D 50 (1994) 4853. [24] S. Coleman, J. Preskill, F. Wilczek, Mod. Phys. Lett. A 6 (1991) 1631.
Physics Reports 307 (1998) 197—200
Dark matter from particle physics Gordon L. Kane* Randall Lab of Physics, University of Michigan, Ann Arbor, MI 48109-1120, USA
Abstract Eventually, the properties of particles relevant to the possible amounts of dark matter from various particle physics sources (lightest superpartner (LSP), massive neutrinos, axions, etc.) will be well determined. I discuss how well Ω_LSP h² can be calculated once there is collider data on the properties of the LSP. Study of the supersymmetry Lagrangian implies that current limits on WIMPs are not general. 1998 Elsevier Science B.V. All rights reserved. PACS: 95.35.+d
1. Introduction

By the early 1980s particle physics had proposed three main candidates for non-baryonic dark matter: massive neutrinos, axions, and the supersymmetric LSP. All of them automatically give interesting Ωh² ~ 0.1 for reasonable values of the relevant parameters. Once the neutrino and axion masses are measured, their contributions to Ωh² can be calculated. The main purpose of this talk is to discuss how well that can be done for the LSP [1]. It should be noted that from the point of view of particle theory all these candidates are natural, and occur together in a typical theory. Unified theories (with or without a GUT gauge group) typically have a stable LSP and massive neutrinos — it would require explanation if they did not. If there are any broken global symmetries they are likely to have an axion. In principle, the ratios of Ω_LSP, Ω_ν, Ω_axion and Ω_baryon are calculable from the particle physics, although we do not yet understand such theories well enough to do so at present. In this talk I will focus on calculating Ω_LSP once there is collider data on its properties. First, it should be emphasized that detection of an LSP escaping a detector will imply that it has a lifetime orders of magnitude longer than those of the other sparticles, consistent with being stable, but not demonstrating
* E-mail: [email protected].
that. It will be necessary to observe LSP scattering or annihilation to prove that it is indeed a significant part of the CDM. Once M_LSP and its couplings to other particles are known, Ω_LSP can be calculated. The calculation gives a density, so it is actually Ω_LSP h² that results, but by then h should be well enough known that it is not a large source of error. Then two main possible sources of error remain: the cosmology, and how well the needed parameters can be measured at colliders.
2. Cosmological uncertainties

The standard [2] calculations require solving the Boltzmann equation to obtain the relic density. Although this is usually treated with approximations, it can be done numerically to any desired accuracy, so no obstacle to an accurate determination of Ω_LSP arises there. One normally assumes a standard cosmology, with homogeneity, etc. [2]. If these assumptions are modified the relic density could change, so a relic density calculation under the assumption of a standard cosmology will be subject to an unknown uncertainty at this level. On the other hand, the calculation can be repeated for any particular non-standard cosmology. A possible problem arises because the Boltzmann calculation involves a thermal average, so the LSP annihilation cross section at all energies is needed. If there were large contributions from high energies, then many couplings and masses of other sparticles would be needed. Fortunately, the weighting factors normally suppress the integrand rapidly [1], so that over 95% of the integral comes from √s within a factor of about 1.15 of threshold; in practice, this will not be hard to deal with. Finally, the LSP relic density is determined relative to the total entropy, so extra entropy production could reduce the relic LSP density. For any particular evolution of the universe one can get a precise result, but there is an uncertainty here that has to be kept in mind.
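The full calculation requires the Boltzmann machinery described above; for orientation only, the textbook freeze-out scaling (Kolb and Turner [2]) can be sketched in a few lines of Python. This is a rough order-of-magnitude relation, not the numerical solution the text has in mind:

```python
def omega_h2(sigma_v_cm3_s):
    """Rough freeze-out relic abundance for a WIMP with thermally averaged
    annihilation cross section <sigma*v> (cm^3/s):
        Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma*v>
    (standard order-of-magnitude scaling, not the exact Boltzmann result)."""
    return 3e-27 / sigma_v_cm3_s

# A weak-scale cross section ~1e-26 cm^3/s gives Omega h^2 of order 0.1-0.3,
# the 'automatically interesting' value noted in the introduction.
print(omega_h2(1e-26))
```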
3. Particle physics uncertainties

Here there are two major possible sources of uncertainty. The annihilation calculation will depend on several parameters, such as M_LSP, tan β, the LSP wave function, and the masses of other sparticles that occur in annihilation diagrams. Measurement errors in these parameters will propagate through the calculation and increase the uncertainty in the answer. Second, the correct theory may not be just the MSSM (which we define as the SM particles plus their superpartners, the SM gauge group, two Higgs doublets, and R-parity conservation), but an extended theory. For example, it may have extra U(1) symmetries. These two sources of uncertainty are related, and both can be discussed in terms of the parameters of the neutralino mass matrix. Although the neutralino mass matrix is the main thing to study, other parameters can enter. For example, bino annihilation will proceed mainly through annihilation to τ's via stau exchange in some models, so the stau mass must be accurately measured. If coannihilation effects can be important, several masses and couplings must be measured. While the focus here is on calculating LSP annihilation, it should be noted that all the same particle physics uncertainties arise in dealing with LSP scattering as well. Indeed, the effects of
phases discussed below can lead to large changes in LSP scattering, and imply that WISP (weakly interacting supersymmetric particle) limits are not general and need to be re-evaluated. Consider the symmetric neutralino mass matrix [3]; the eigenstate with the lightest eigenvalue will be the LSP.
In the basis (B̃, W̃³, H̃_u, H̃_d) it can be written (with s_W = sin θ_W, c_W = cos θ_W, s_β = sin β, c_β = cos β):

( M₁e^{iφ₁}       0              −M_Z s_W c_β    M_Z s_W s_β )
( 0               M₂              M_Z c_W c_β   −M_Z c_W s_β )
( −M_Z s_W c_β    M_Z c_W c_β     0             −μe^{iφ_μ}   )
( M_Z s_W s_β    −M_Z c_W s_β    −μe^{iφ_μ}      0           )
Here M₁, M₂ and μ are real. The matrix depends on six unknown parameters: M₁, M₂, μ, φ₁, φ_μ and tan β. The LSP wave function is usually written

LSP = a γ̃ + b Z̃ + c H̃_u + d H̃_d .

Then the LSP mass and each of a, b, c, d are functions of the six parameters M₁, M₂, … We can only learn a, b, c, d by measuring M₁, M₂, φ₁, … and then calculating a, b, c, d. Writing out the eigenvalues and eigenvectors of this matrix, one can see that every observable is a complicated function of all the parameters. We cannot ignore the phases and expect the results to be meaningful, either for dark matter or for measuring any of the parameters at colliders. In particular, these results imply that tan β cannot be measured at LEP, since too few observables are available there. In fact, fully general measurements of tan β and the other parameters needed to determine Ωh² can only be done [3] at a lepton collider with polarized beams and sufficient energy to produce the neutralinos and charginos (NPLC, Next Polarized Lepton Collider). Falk, Olive and Srednicki [4] already pointed out earlier that the phase of the off-diagonal elements of the sfermion mass matrices could have an effect on WIMP mass limits. In practice, their effect is only significant if annihilation through stops is important. That is a special case of our argument that several phases can be very important. Our main effects for CDM come from the phases of μ and M₁, even if stop annihilation is not important. Once the data are available, and the soft parameters M₁, φ₁, M₂, μ, φ_μ and tan β are measured, the accuracy with which Ωh² can be determined depends on the accuracy with which those parameters can be measured. That accuracy seems to have no limits in principle, though in practice, of course, it will require large statistics and several extra observables (because of the non-linearity of the equations relating the soft parameters to the observables) to achieve a few percent accuracy.
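As an illustration of how the LSP mass emerges from all six parameters at once, the matrix can be diagonalized numerically. The sketch below uses NumPy with invented parameter values (all GeV numbers and phases are hypothetical), obtaining the physical masses as singular values, i.e. via Takagi diagonalization of a complex symmetric matrix:

```python
import numpy as np

# Illustrative (hypothetical) values of the six parameters of the text:
M1, M2, mu = 100.0, 200.0, 300.0     # GeV
phi1, phimu = 0.5, 1.0               # phases of M1 and mu (radians)
tanb = 2.0
MZ, sw = 91.2, np.sqrt(0.23)         # Z mass (GeV), sin(theta_W)
cw = np.sqrt(1.0 - sw**2)
beta = np.arctan(tanb)
sb, cb = np.sin(beta), np.cos(beta)

# Complex symmetric neutralino mass matrix (bino, wino, two higgsino states)
M = np.array([
    [M1*np.exp(1j*phi1),  0.0,                 -MZ*sw*cb,            MZ*sw*sb],
    [0.0,                 M2,                   MZ*cw*cb,           -MZ*cw*sb],
    [-MZ*sw*cb,           MZ*cw*cb,             0.0,      -mu*np.exp(1j*phimu)],
    [MZ*sw*sb,           -MZ*cw*sb, -mu*np.exp(1j*phimu),  0.0],
])

# For a complex symmetric matrix, the physical masses are the singular values;
# the smallest singular value is the LSP mass.
masses = np.linalg.svd(M, compute_uv=False)
m_lsp = masses.min()
print(f"LSP mass ~ {m_lsp:.1f} GeV")
```

Changing any one of the six inputs, including the phases alone, shifts the LSP mass and composition, which is the point made in the text.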
Before data are available from the NPLC, it is likely that superpartners will be studied at Fermilab, perhaps even at LEP. There is no completely general analysis that can be done there, but it may be possible to define MRM(s), minimally reasonable model(s), that have few enough parameters that all of them can be measured, yet enough that they can be "right", i.e., like nature. We do not yet know what the patterns among the soft parameters really are — perhaps they are all real, perhaps many are zero or equal to others. Effects beyond the MSSM, such as additional U(1) symmetries or Planck scale operators, both of which occur in most models, are allowed for in the usual MSSM effective Lagrangian, and typically can modify overly simple assumptions about relations among the soft parameters.
We conclude that it will be possible to calculate the contribution of the LSP to the cold dark matter with a few percent accuracy after the properties of the superpartners are measured at colliders. Determining the needed parameters sufficiently accurately will require measurement of the soft parameters, including phases, which will probably have to wait until a lepton collider with polarized beams is available. Before then, the number of independent observables is smaller than the number of relevant soft parameters.
Acknowledgements I am grateful to M. Brhlik for discussions and a fruitful collaboration, and to J. Wells for discussions and suggestions. This research was supported in part by the U.S. Department of Energy.
References
[1] M. Brhlik, G.L. Kane, in preparation.
[2] See, for example, E. Kolb, M. Turner, The Early Universe, Addison-Wesley, 1990.
[3] M. Brhlik, G.L. Kane, Measuring the supersymmetric Lagrangian, hep-ph/9803391.
[4] T. Falk, K.A. Olive, M. Srednicki, Phys. Lett. B 375 (1996) 196.
Physics Reports 307 (1998) 201—206
Neutralino relic density from minimal supergravity: Direct detection vs. collider searches Howard Baer Department of Physics, Florida State University, Tallahassee, FL 32306, USA.
[email protected]
Abstract Working within the framework of the minimal supergravity (mSUGRA) model, we show regions of model parameter space which lead to cosmologically interesting values of the neutralino relic density Ωh². In these regions, we also show expected event rates for direct detection of neutralinos by a germanium detector. We compare the direct dark matter detection event rates with expected decay rates for b → sγ. We also compare neutralino direct detection rates with the reach of various colliding beam experiments for mSUGRA. We present results for both small and large values of the parameter tan β. 1998 Elsevier Science B.V. All rights reserved. PACS: 14.80.Ly; 98.80.Cq; 98.80.Dr Keywords: Supersymmetry; Dark matter; Collider searches
1. The minimal supergravity model

The paradigm model for weak scale supersymmetry is known as the minimal supergravity model [1]. In this model, one posits the existence of a "hidden sector" which serves as the arena for supersymmetry breaking. Supersymmetry breaking is communicated from the hidden sector to the visible sector (the Minimal Supersymmetric Standard Model, or MSSM) via gravitational interactions. Since gravitation is a universal force, a common soft SUSY breaking mass m₀ is induced for all scalars, and a common gaugino mass m_{1/2} is induced for all gauginos. A common soft breaking trilinear term A₀ and bilinear term B₀ are also induced. Inspired by gauge coupling unification at M_GUT ≈ 2×10¹⁶ GeV, it is usually assumed that the various soft SUSY breaking terms unify at M_GUT as well. The gauge couplings, Yukawa couplings and soft SUSY breaking parameters are evolved via renormalization group equations from M_GUT to M_weak. Electroweak symmetry is broken radiatively, and at M_SUSY ≈ √(m_t̃_L m_t̃_R) the renormalization-group-improved one-loop effective potential is minimized. This allows one to determine the value of the superpotential Higgsino
mass term μ, and to effectively trade the parameter B₀ for tan β, the ratio of Higgs field vevs. Thus, the superparticle masses and mixings are determined by the parameter set

m₀, m_{1/2}, A₀, tan β, sign(μ) .  (1)

This entire procedure has been implemented numerically in the event generator program ISAJET, which we use for much of this analysis [2]. Since the mSUGRA model is "minimal", the phenomenologically unnecessary R-violating interactions in the superpotential are assumed to be entirely absent from the theory. Thus, a major consequence of the mSUGRA model is that the lightest SUSY particle, the lightest neutralino Z̃₁, is absolutely stable. If mSUGRA is an appropriate description of nature, then the Z̃₁ is a weakly interacting massive particle (a WIMP), and should form a significant component of the cold dark matter (CDM) in the universe. The dark matter density of the universe is usually quoted in terms of Ωh², where Ω = ρ/ρ_c (ρ_c is the critical closure density) and h is the scaled Hubble constant, defined by H₀ = 100h km/s/Mpc, with 0.5 ≲ h ≲ 0.8. Values of Ωh² < 0.025 cannot even account for the dark matter needed by galactic rotation data, while Ωh² > 1 yields a universe with age less than 10 billion years. Cosmologically favored models [3] of mixed dark matter (MDM) or CDM plus a cosmological constant Λ (ΛCDM) favor 0.15 ≲ Ωh² ≲ 0.4.
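The quoted windows combine by simple arithmetic; a trivial Python illustration, with the numbers taken from the text:

```python
def omega_range(omega_h2_lo=0.15, omega_h2_hi=0.4, h_lo=0.5, h_hi=0.8):
    """Translate a window in Omega*h^2 into a window in Omega itself,
    given the allowed range of the scaled Hubble constant h."""
    return omega_h2_lo / h_hi**2, omega_h2_hi / h_lo**2

lo, hi = omega_range()
print(f"{lo:.2f} < Omega < {hi:.2f}")  # 0.23 < Omega < 1.60
```

The width of this window is one reason the text emphasizes that h must be pinned down before Ω_LSP itself can be quoted precisely.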
2. Relic density of neutralinos

To estimate the relic density of neutralinos in the universe, it is necessary to solve the Boltzmann equation as formulated for a Friedmann—Robertson—Walker universe. The method of solution is outlined in Refs. [4—6]. We adopt the covariant procedure given in Ref. [6]. Central to the relic density calculation is the thermally averaged neutralino annihilation cross section times velocity, ⟨σv⟩. We calculate [7] all 156 neutralino annihilation amplitudes numerically using the HELAS program. Co-annihilation processes are neglected, but these are rarely important for the mSUGRA model. Some sample results are shown in the m₀ vs. m_{1/2} plane for A₀ = 0, for both signs of μ, and for tan β = 2 (Fig. 1) and for tan β = 35 (Fig. 2). For low tan β, the dominant annihilation channel occurs via t-channel slepton exchange. Hence, most of the large m₀ region of parameter space is ruled out, since it would give too young a universe (Ωh² > 1). The exception can come in narrow corridors of m_{1/2} where efficient neutralino annihilation can take place through s-channel Z or h poles. The theoretically favored regions of parameter space lie between the dot-dashed contours, which roughly correspond to slepton masses of m_l̃ ≈ 100—300 GeV. For the high tan β = 35 results shown in Fig. 2, much more of the parameter space is allowed. This is because the dominant annihilation channel occurs through very broad s-channel H and A resonances, which are enhanced by the large b and τ Yukawa couplings [8]. In this case, there does not even appear to be an upper bound on sparticle masses from the age-of-the-universe constraint. Furthermore, the very low m₀ and m_{1/2} regions below the dotted contour give a relic density Ωh² < 0.025, and cannot even account for galactic rotation rates. The favored MDM region extends up to very large values of m_{1/2} ≈ 700—800 GeV.
H. Baer / Physics Reports 307 (1998) 201—206
Fig. 1. A plot of mSUGRA model parameter space for tan β = 2 showing the neutralino relic density, neutralino direct detection rates, b → sγ excluded regions, and the reach of LEP2 and Tevatron Main Injector for SUSY particles.
3. Direct detection rates for a Ge detector

If a gas of relic neutralinos pervades the universe, then it might be possible to directly detect some of them in a laboratory experiment. The idea is to detect the typically ~keV of energy deposited in a neutralino—nucleus collision. Such detectors must operate at very low temperatures, and in very low background environments. Present detectors [9], such as the Ge crystal detectors operated by the CDMS and HDMS groups, are sensitive to neutralino detection rates of a few events per kilogram of detector material per day. The hope is to achieve a sensitivity of ~0.01—0.001 events/kg/day by about the year 2000. To calculate the direct detection rates for neutralinos in the mSUGRA model, one must first evaluate the neutralino—quark and neutralino—gluon effective interactions. These divide into two cases: scalar and spin interactions. Formulae using effective Lagrangian techniques to include
Fig. 2. A plot of mSUGRA model parameter space for tan β = 35 showing the neutralino relic density, neutralino direct detection rates, b → sγ excluded regions, and the reach of LEP2. No reach was found for the Tevatron Main Injector for SUSY particles.
QCD corrections are presented in Refs. [10,5]. The neutralino—parton interaction can be converted to an effective neutralino—nucleon interaction using moments of parton distribution functions. Finally, one can obtain neutralino—nucleus collision cross sections by convoluting with either scalar or spin nuclear form factors. The final event rate requires knowledge of the local relic density of neutralinos in our sector of the galaxy; estimates come from galaxy formation models. Results for event rates in a Ge detector are presented in Ref. [11] and in Figs. 1 and 2, by the solid lines labelled 10⁻¹—10⁻⁴. In Fig. 1a, we see that direct detection rates never reach the 10⁻² events/kg/day level, although the region below m_{1/2} ≈ 220 GeV is accessible if a sensitivity of 10⁻³ events/kg/day is achieved. In Fig. 1b, the region below m_{1/2} ≈ 200 GeV will be accessible to detectors achieving 10⁻² events/kg/day. Also, in Fig. 1b, the 10⁻³ events/kg/day level covers most of the MDM favored region, although higher sensitivities are needed to scan the entire allowed
parameter space. In Fig. 2, the 10⁻³ events/kg/day level is attained for all parameter space with m_{1/2} ≲ 300 GeV. This is due to enhanced Yukawa loop interactions at large tan β [10]. For large tan β, the rate actually reaches the ~1 event/kg/day level in the extreme lower left of the allowed parameter space.
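The rate estimate described in this section can be put on an order-of-magnitude footing in a few lines. In this sketch the local halo density, mean velocity, neutralino mass and WIMP—nucleon cross section are all illustrative assumptions (the text obtains the cross section from the mSUGRA effective Lagrangian; the halo inputs come from galaxy formation models), and nuclear form-factor suppression is ignored:

```python
# Order-of-magnitude direct detection rate in natural Ge.  All physics
# inputs below are illustrative assumptions, not mSUGRA outputs.
AVOGADRO = 6.022e23

rho_local = 0.3        # local relic density, GeV/cm^3 (assumed halo model)
m_chi = 100.0          # neutralino mass, GeV (hypothetical)
v_mean = 2.7e7         # mean halo velocity, cm/s (~270 km/s)
sigma_n = 1.0e-42      # WIMP-nucleon cross section, cm^2 (assumed)
A = 72.6               # mean atomic weight of natural Ge

flux = (rho_local / m_chi) * v_mean          # WIMPs per cm^2 per s
sigma_A = A ** 2 * sigma_n                   # coherent (scalar) A^2 enhancement
targets_per_kg = AVOGADRO * 1000.0 / A
rate = flux * sigma_A * targets_per_kg * 86400.0   # events/kg/day
print(rate)            # ~3e-4 events/kg/day
```

For these assumed inputs the rate lands in the 10⁻³—10⁻⁴ events/kg/day range, i.e. at the sensitivity frontier discussed in the text.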
4. Results for b → sγ

The FCNC rare decay b → sγ is particularly interesting for supersymmetry, since loop contributions from supersymmetric particles should occur with roughly the same strength as the Standard Model (SM) tW loop. This decay branching fraction has been measured by CLEO to occur at the (2.32±0.57±0.35)×10⁻⁴ level, while the SM predicts B(B → X_s γ) = (3.2±0.3)×10⁻⁴. Thus, the current 95% CL limit implies 1×10⁻⁴ < B(B → X_s γ) < 4.2×10⁻⁴. The b → sγ rate calculation requires a calculation of loop amplitudes at scale Q ~ M_W [12]. An effective Lagrangian approach is used to connect the weak scale loop calculation with the b decay amplitude calculation performed at scale choice Q ~ m_b. In the mSUGRA calculations of Ref. [13], NLO QCD corrections have been incorporated in the running of the Wilson coefficients and in the evaluation of the b decay matrix elements. Furthermore, the RGE running between the various high energy scales has been accounted for using the approach of Anlauf [14]. In the second of Refs. [13], gluino and neutralino loops have been included in the calculation as well. The region below the solid line marked "b → sγ excluded" in Fig. 1a is excluded due to too large a b → sγ branching fraction at 95% CL. Taken at face value, this excludes a significant fraction of the region accessible to direct searches for neutralinos. In Fig. 1b, the entire m₀ vs. m_{1/2} plane is allowed by the b → sγ constraint. As tan β increases, most of the interesting mSUGRA parameter space is ruled out by the b → sγ constraint for μ < 0; for μ > 0, most of parameter space is allowed, and there are significant regions which agree exactly with the CLEO measured central value for B(b → sγ). At very large tan β and μ < 0, as shown in Fig. 2a, most of parameter space is excluded. In Fig. 2b, most of parameter space is allowed except the lower left corner, which actually gives too low a rate for B(b → sγ)!
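As a quick consistency check of the numbers quoted above, one can combine the CLEO statistical and systematic errors in quadrature and form a naive Gaussian 95% interval. The published CLEO window (1—4.2)×10⁻⁴ is somewhat wider on the high side than this naive estimate, since CLEO's limit is not a simple Gaussian construction:

```python
# Naive Gaussian 95% window from CLEO's (2.32 +/- 0.57 +/- 0.35) x 10^-4.
import math

central, stat, syst = 2.32e-4, 0.57e-4, 0.35e-4
sigma = math.hypot(stat, syst)              # errors combined in quadrature
low, high = central - 1.96 * sigma, central + 1.96 * sigma
print(low, high)    # ~1.0e-4, ~3.6e-4
```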
5. Comparison with the reach of LEP2, the Tevatron and the LHC

Currently, the greatest reach by LEP2 for SUSY at low tan β occurs via the search for Higgs bosons, for which m_h > 88 GeV for a SM Higgs. This limit actually excludes m_{1/2} values below ~500 GeV in Fig. 1a and below m_{1/2} ~ 180 GeV in Fig. 1b. If somehow the Higgs sector is not well described by mSUGRA (perhaps due to violations of universality), then the region of Fig. 1 excluded by SUSY particle searches at LEP2 is denoted by the EX region. The ultimate reach of LEP2 operating at √s = 192 GeV is shown by the dotted lines in Fig. 1, which always lie below the cosmologically interesting regions. Thus, the best prospect for a SUSY discovery at LEP2 (from the mSUGRA point of view) is a discovery of the light SUSY Higgs h. At large tan β, m_h is much heavier, so LEP2 has hardly any reach for h. The reach for sparticles is shown by the dashed contour in Fig. 2 [15].
The Tevatron Main Injector (MI) is expected to acquire ~2 fb⁻¹ of integrated luminosity in Run 2. Its best reach at low tan β is via W̃₁Z̃₂ → 3ℓ searches. The cumulative reach is illustrated by the dashed contour in Fig. 1, and does overlap with some, but not all, of the cosmologically interesting parameter space. Roughly, the reach is comparable to that of direct detection search experiments achieving 10⁻² events/kg/day. At large tan β, Yukawa coupling effects work to destroy much of the clean trilepton channel for mSUGRA, and in fact there is no reach of the MI beyond current limits, at least for the simplistic analyses of Ref. [16]. However, this is precisely the region where direct detection experiments work best (large tan β). From these plots, it is apparent that the first superpartner that is detected could well be the lightest neutralino, via direct searches for cold dark matter! Finally, the CERN LHC, even with modest integrated luminosity, ought to probe well beyond the cosmologically allowed regions in Figs. 1 and 2 via multi-jet plus multi-lepton plus missing-E_T searches [17]. Thus, in 10 years, we will likely have definitive evidence regarding the reality of supersymmetric dark matter.

Acknowledgements

I thank M. Brhlik, D. Castano, C.H. Chen, F. Paige and X. Tata for collaboration on these projects. This research is supported by the US Department of Energy under grant number DE-FG-05-87ER40319.

References

[1] A. Chamseddine, R. Arnowitt, P. Nath, Phys. Rev. Lett. 49 (1982) 970; R. Barbieri, S. Ferrara, C. Savoy, Phys. Lett. B 119 (1982) 343; L.J. Hall, J. Lykken, S. Weinberg, Phys. Rev. D 27 (1983) 2359. [2] F. Paige, S. Protopopescu, in: D. Soper (Ed.), Supercollider Physics, World Scientific, Singapore, 1986, p. 41; H. Baer, F. Paige, S. Protopopescu, X. Tata, in: J. Hewett, A. White, D. Zeppenfeld (Eds.), Proc. Workshop on Physics at Current Accelerators and Supercolliders, Argonne National Laboratory, 1993, hep-ph/9305342; the ISAJET mSUGRA model is described in H.
Baer, C.H. Chen, R. Munroe, F. Paige, X. Tata, Phys. Rev. D 51 (1995) 1046. [3] See e.g. J. Primack, astro-ph/9707285, 1997. [4] For reviews, see e.g. E.W. Kolb, M.S. Turner, The Early Universe, Addison-Wesley, Redwood City, 1989. [5] G. Jungman, M. Kamionkowski, K. Griest, Phys. Rep. 267 (1996) 195. [6] P. Gondolo, G. Gelmini, Nucl. Phys. B 360 (1991) 145. [7] H. Baer, M. Brhlik, Phys. Rev. D 53 (1996) 597. [8] M. Drees, M. Nojiri, Phys. Rev. D 47 (1993) 376. [9] For a review of dark matter detectors, see e.g. J.R. Primack, B. Sadoulet, D. Seckel, Ann. Rev. Nucl. Part. Sci. B 38 (1988) 751; P.F. Smith, J.D. Lewin, Phys. Rep. 187 (1990) 203; D. Cline, UCLA-APH-0096-3-97, 1997. [10] M. Drees, M. Nojiri, Phys. Rev. D 47 (1993) 4226; Phys. Rev. D 48 (1993) 3483. [11] H. Baer, M. Brhlik, Phys. Rev. D 57 (1998) 567. [12] S. Bertolini, F. Borzumati, A. Masiero, G. Ridolfi, Nucl. Phys. B 353 (1991) 591. [13] H. Baer, M. Brhlik, Phys. Rev. D 55 (1997) 3201; H. Baer, M. Brhlik, D. Castano, X. Tata, Phys. Rev. D 58 (1998) 015007. [14] H. Anlauf, Nucl. Phys. B 430 (1994) 245. [15] H. Baer, M. Brhlik, R. Munroe, X. Tata, Phys. Rev. D 52 (1995) 5031. [16] H. Baer, C.H. Chen, F. Paige, X. Tata, Phys. Rev. D 54 (1996) 5866; H. Baer, C.H. Chen, M. Drees, F. Paige, X. Tata, Phys. Rev. D 58 (1998) 075008. [17] H. Baer, C.H. Chen, F. Paige, X. Tata, Phys. Rev. D 53 (1996) 6241.
Physics Reports 307 (1998) 207—214
Neutralino relic density in the minimal supergravity model Vernon Barger, Chung Kao* Department of Physics, University of Wisconsin, Madison, WI 53706, USA
Abstract

The relic density (Ω_χ h²) of the neutralino dark matter is evaluated in the minimal supergravity model for values of the Higgs sector parameter tan β ≡ v₂/v₁ between the two infrared fixed points of the top quark Yukawa coupling, 1.8 ≲ tan β ≲ 56. For a cosmologically interesting relic density, 0.1 ≲ Ω_χ h² ≲ 0.4, we find that the supergravity parameter space is tightly constrained. 1998 Elsevier Science B.V. All rights reserved. PACS: 98.80.Cq; 95.35.+d
1. Introduction

In the minimal supergravity (mSUGRA) model [1], it is assumed that supersymmetry (SUSY) is broken in a hidden sector, with the SUSY breaking communicated to the observable sector through gravitational interactions, leading naturally to a common scalar mass (m₀), a common gaugino mass (m_{1/2}), a common trilinear coupling (A₀) and a common bilinear coupling (B₀) at the grand unified scale (M_GUT ≈ 2×10¹⁶ GeV). Through minimization of the Higgs potential, the B parameter and the magnitude of the superpotential Higgs mixing parameter μ are related to tan β and M_Z. In SUSY theories with a conserved R-parity, the lightest supersymmetric particle (LSP) cannot decay into normal particles, and the LSP is an attractive candidate for cosmological dark matter [2,3]. In most of the supergravity parameter space, the lightest neutralino (χ) is the LSP [4,5]. In a supersymmetric grand unified theory (GUT) with a large top quark Yukawa coupling (Y_t) at M_GUT, radiative corrections driven by a large Y_t can drive the corresponding Higgs boson mass squared parameter negative, spontaneously breaking the electroweak symmetry and naturally explaining the origin of the electroweak scale. In addition, there exists an infrared fixed point (IRFP) at low tan β and another IRFP solution (dY_t/dt ≈ 0) at tan β ~ 60 [4,6]. For

* Corresponding author. E-mail:
[email protected].
V. Barger, C. Kao / Physics Reports 307 (1998) 207—214
m_t = 175 GeV, the two IRFPs of the top quark Yukawa coupling appear at tan β ≈ 1.8 and tan β ≈ 56 [4]. Recent measurements of the b → sγ decay rate by the CLEO [7] and LEP collaborations [8] exclude most of the mSUGRA parameter space when tan β is large and μ > 0 [9]. Although we choose μ < 0 in our analysis, our results and conclusions are almost independent of the sign of μ. In our convention, −μ (+μ) appears in the chargino (neutralino) mass matrix.
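The infrared fixed-point behavior invoked here can be illustrated with a one-loop toy integration of the MSSM RGEs. This sketch neglects Y_b and Y_τ (so it applies at low tan β) and starts the gauge couplings from the unified value α_G = 1/24 at M_G = 2×10¹⁶ GeV; the point is only that very different GUT-scale boundary values of Y_t are focused toward nearly the same weak-scale value:

```python
# One-loop sketch of the top-Yukawa infrared quasi-fixed point.
# Y_b and Y_tau neglected; gauge couplings unified at alpha_G = 1/24.
import math

M_G, M_Z = 2.0e16, 91.2
b = (33.0 / 5.0, 1.0, -3.0)          # MSSM one-loop gauge beta coefficients
g2_G = 4.0 * math.pi / 24.0          # g_i^2 at the GUT scale

def run_down(yt, steps=2000):
    """Euler-integrate Y_t and g_i^2 from M_G down to M_Z."""
    g2 = [g2_G, g2_G, g2_G]
    dt = math.log(M_Z / M_G) / steps          # negative: toward the IR
    for _ in range(steps):
        gauge = (13.0 / 15.0) * g2[0] + 3.0 * g2[1] + (16.0 / 3.0) * g2[2]
        yt += yt * (6.0 * yt ** 2 - gauge) / (16.0 * math.pi ** 2) * dt
        for i in range(3):
            g2[i] += 2.0 * b[i] * g2[i] ** 2 / (16.0 * math.pi ** 2) * dt
    return yt

y_low, y_high = run_down(0.9), run_down(3.0)
print(y_low, y_high)   # the GUT-scale difference is largely erased
```

Both trajectories are drawn toward the quasi-fixed point 6Y_t² ≈ (16/3)g₃² + …, which, via m_t = Y_t v sin β/√2, is what ties the fixed-point tan β values to the measured top mass.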
2. Mass spectrum of supersymmetric particles

The minimal supersymmetric standard model has two Higgs doublets, H₁ and H₂ [10,11], that couple to the t₃ = −1/2 and t₃ = +1/2 fermions, respectively. After spontaneous symmetry breaking, there remain five physical Higgs bosons: a pair of singly charged Higgs bosons H±, two neutral CP-even scalars H (heavier) and h (lighter), and a neutral CP-odd pseudoscalar A. At tree level, all Higgs boson masses and couplings are determined by two parameters: the mass of the pseudoscalar (m_A) and the ratio of vacuum expectation values (VEVs) of the Higgs fields, tan β ≡ v₂/v₁. We calculate masses and couplings in the Higgs sector with one-loop corrections from the top and bottom Yukawa interactions in the RGE-improved one-loop effective potential [12] at the scale Q = √(m_t̃_L m_t̃_R) [13,14]. In the mSUGRA model, the SUSY particle masses and couplings at the weak scale can be predicted by the evolution of renormalization group equations (RGEs) [15] from the unification scale with four parameters: m₀, m_{1/2}, A₀ and tan β, and the sign of the Higgs mixing parameter μ [4,5]. Since A₀ mainly affects the masses of third generation sfermions, it is taken to be zero in most of our analysis. In Fig. 1, we present masses, in the case of μ < 0, for the lightest neutralino (χ), the lighter top squark (t̃₁), the lighter tau slepton (τ̃₁), and two neutral Higgs bosons: the lighter CP-even (h) and the CP-odd (A). For m₀ ~ 100 GeV and m_{1/2} ≳ 400 GeV, the mass of τ̃₁ can become smaller than m_χ, so such regions are theoretically excluded. Also shown are the regions that do not satisfy the following theoretical requirements: electroweak symmetry breaking (EWSB), a tachyon-free spectrum, and the lightest neutralino as the LSP. The region excluded by the m_{χ±} > 85 GeV limit from the chargino search [16] at LEP 2 is indicated.
3. Relic density of neutralino dark matter

The matter density of the Universe, ρ, is described in terms of a relative density Ω = ρ/ρ_c, with ρ_c = 3H₀²/8πG_N ≈ 1.88×10⁻²⁹ h² g/cm³ the critical density to close the Universe. Here H₀ is the Hubble constant, h = H₀/(100 km s⁻¹ Mpc⁻¹), and G_N is Newton's gravitational constant. Studies of clusters of galaxies and large-scale structure suggest that the matter density in the Universe should be at least 20% of the critical density (Ω_M ≳ 0.2) [17], but big-bang nucleosynthesis and the measured primordial abundances of helium, deuterium and lithium constrain the baryonic density to 0.01 ≲ Ω_b h² ≲ 0.03 [18]. The anisotropy in the cosmic microwave
Fig. 1. Masses of χ, t̃₁, τ̃₁ at M_Z, and masses of h and A at Q = √(m_t̃_L m_t̃_R), versus m_{1/2}.
background radiation measured by the Cosmic Background Explorer (COBE) suggests that at least 60% of the dark matter should be cold (nonrelativistic) [19]. Inflationary models usually require Ω = 1 for the Universe [2]. Recent measurements of the Hubble constant are converging to h ≈ 0.6—0.7 [20]. Therefore, we conservatively consider the cosmologically interesting region for Ω_χ to be

  0.1 ≲ Ω_χ h² ≲ 0.4 .   (1)
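The critical density quoted in this section, ρ_c = 3H₀²/8πG_N ≈ 1.88×10⁻²⁹ h² g/cm³, is easy to verify directly in cgs units:

```python
# Check of rho_c = 3 H0^2 / (8 pi G_N) ~ 1.88e-29 h^2 g/cm^3.
import math

G_N = 6.674e-8            # Newton's constant, cm^3 g^-1 s^-2
Mpc_cm = 3.0857e24        # one megaparsec in cm
H0 = 100e5 / Mpc_cm       # 100 km/s/Mpc in s^-1 (i.e. taking h = 1)

rho_c = 3.0 * H0 ** 2 / (8.0 * math.pi * G_N)   # g/cm^3, times h^2
print(rho_c)              # ~1.88e-29
```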
Following the classic papers of Zel'dovich, Chiu et al. [21], many studies of the neutralino relic density in supergravity models have been made [22—24]. The time evolution of the number density n(t) of weakly interacting massive particles is described by the Boltzmann equation [21]

  dn/dt = −3Hn − ⟨σv⟩[n² − n_E²] ,   (2)
where H = 1.66 g_*^{1/2} T²/M_Pl is the Hubble expansion rate, with g_* ≈ 81 the effective number of relativistic degrees of freedom, M_Pl = 1.22×10¹⁹ GeV the Planck mass, ⟨σv⟩ the thermally averaged cross section times velocity (v is the relative velocity and σ the annihilation cross section), and n_E the number density at thermal equilibrium.
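For orientation, the radiation-era expansion rate H = 1.66 g_*^{1/2} T²/M_Pl can be evaluated at a typical freeze-out temperature; the neutralino mass used below is purely illustrative:

```python
# Radiation-era Hubble rate at a freeze-out temperature T = m_chi/20
# for an illustrative (hypothetical) neutralino mass of 100 GeV.
import math

g_star = 81.0             # effective relativistic degrees of freedom
M_Pl = 1.22e19            # Planck mass, GeV

def hubble(T_GeV):
    """H in GeV (natural units) at temperature T in GeV."""
    return 1.66 * math.sqrt(g_star) * T_GeV ** 2 / M_Pl

T_f = 100.0 / 20.0        # freeze-out temperature for m_chi = 100 GeV
print(hubble(T_f))        # ~3e-17 GeV
```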
In the early Universe, when the temperature T ≫ m_χ, the LSP existed abundantly in thermal equilibrium. Deviation from thermal equilibrium began when the temperature reached the freeze-out temperature (T_f ≈ m_χ/20). After the temperature dropped well below T_f, the annihilation rate became equal to the expansion rate and n_χ = H/⟨σv⟩. The resulting relic density can be expressed in terms of the critical density by [2]

  Ω_χ h² = m_χ n_χ / ρ_c ,   (3)
where n_χ is the number density of χ. The relic density at the present temperature (T_γ) is then obtained from

  Ω h² = ρ_χ(T_γ) / (8.0992×10⁻⁴⁷ GeV⁴) ,   (4)

where

  ρ_χ(T_γ) ≈ 1.66 (1/M_Pl) (T_χ/T_γ)³ T_γ³ √g_* [ ∫₀^{x_f} ⟨σv⟩ dx ]⁻¹ .   (5)
The ratio T_χ/T_γ is the reheating factor [25] of the neutralino temperature (T_χ) relative to the microwave background temperature (T_γ) [23,14]. The scaled freeze-out temperature (x_f ≡ T_f/m_χ) is determined by iteratively solving the freeze-out relation

  x_f⁻¹ = log[ (m_χ/2π³) √(45/(2 g_* G_N)) ⟨σv⟩ x_f^{1/2} ] ,   (6)
starting with x_f = 1/20. The relic density of the neutralino dark matter is presented in Fig. 2. When 2m_χ is close to the mass of the Z, h, H, or A, the cross section is significantly enhanced by the Breit—Wigner resonance pole, and a corresponding dip appears in the relic density distribution. An increase in m₀ raises the masses of the Higgs bosons but does not affect the mass of χ. Increasing tan β slightly raises m_χ, slightly lowers m_h, and greatly reduces m_H and m_A. For tan β ~ 45, extra dips appear in Fig. 2c. These resonance dips are associated with the A and H masses. For tan β ≲ 10 and m₀ > 200 GeV, the annihilation cross section is too small for a cosmologically interesting Ω_χ h², while for tan β ~ 45, m₀ must be close to 1000 GeV for m_{1/2} ≲ 750 GeV and larger than 400 GeV for m_{1/2} ≳ 750 GeV to obtain an interesting relic density. For tan β ≳ 50, m_A ≲ 2m_χ, and there are no A and H resonant contributions; an acceptable Ωh² is found for m₀ ≳ 400 GeV and 400 GeV ≲ m_{1/2} ≲ 800 GeV. To show the constraints of Eq. (1) on the SUGRA parameter space, we present contours of Ω_χ h² = 0.1 and 0.4 in the (m_{1/2}, tan β) plane in Fig. 3. Also shown are the regions that do not satisfy the theoretical requirements (EWSB, tachyon free, and lightest neutralino as the LSP) and the region excluded by the chargino search (m_{χ±} < 85 GeV) at LEP 2 [16]. We summarize below the central features of these results. If m₀ is close to 100 GeV, (a) most of the (m_{1/2}, tan β) plane with tan β ≳ 40 is excluded by the above-mentioned theoretical requirements; (b) the chargino search at LEP 2 excludes the region where m_{1/2} ≲ 100 GeV for μ > 0 and
Fig. 2. The relic density of the neutralino dark matter (Ω_χ h²) versus m_{1/2}.
Fig. 3. Contours of Ω_χ h² = 0.1 and 0.4 in the (m_{1/2}, tan β) plane.
m_{1/2} ≲ 120 GeV for μ < 0; (c) the cosmologically interesting region is 100 GeV ≲ m_{1/2} ≲ 400 GeV and tan β ≲ 25 for either sign of μ. If m₀ is close to 500 GeV, (a) most of the (m_{1/2}, tan β) plane is theoretically acceptable for tan β ≲ 50 and m_{1/2} ≳ 120 GeV; (b) the LEP 2 chargino search excludes (i) m_{1/2} ≲ 140 GeV for tan β ≳ 10, and (ii) m_{1/2} ≲ 80 GeV (μ > 0) or m_{1/2} ≲ 120 GeV (μ < 0) for tan β ~ 1.8; (c) the cosmologically interesting regions lie in two narrow bands.
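The machinery of Eqs. (4)—(6) can be sketched numerically. In this toy version ⟨σv⟩ is taken to be a velocity-independent constant (so the integral in Eq. (5) is trivial) and the reheating factor (T_χ/T_γ)³ is approximated by the standard entropy-dilution estimate 3.91/g_*; both are illustrative assumptions, not outputs of the full relic density calculation described in the text:

```python
# Fixed-point iteration of the freeze-out relation, Eq. (6), starting from
# x_f = 1/20, then the relic density of Eqs. (4)-(5) with constant <sigma v>.
import math

M_Pl = 1.22e19                 # Planck mass, GeV
G_N = 1.0 / M_Pl ** 2          # Newton's constant in natural units, GeV^-2
g_star = 81.0                  # relativistic degrees of freedom at freeze-out
T_gamma = 2.35e-13             # present photon temperature (2.73 K) in GeV
RHO_C_OVER_H2 = 8.0992e-47     # critical density / h^2, GeV^4 (Eq. (4))

def freeze_out_x(m_chi, sigma_v, x0=1.0 / 20.0, iterations=50):
    """Iterate x_f^-1 = log[(m/2pi^3) sqrt(45/(2 g* G_N)) <sv> sqrt(x_f)]."""
    x = x0
    for _ in range(iterations):
        arg = (m_chi / (2.0 * math.pi ** 3)) \
            * math.sqrt(45.0 / (2.0 * g_star * G_N)) * sigma_v * math.sqrt(x)
        x = 1.0 / math.log(arg)
    return x

def omega_h2(m_chi, sigma_v, reheat3=3.91 / g_star):
    """Eqs. (4)-(5); with constant <sigma v> the integral is <sv> * x_f."""
    x_f = freeze_out_x(m_chi, sigma_v)
    rho_chi = 1.66 * reheat3 * T_gamma ** 3 * math.sqrt(g_star) \
        / (M_Pl * sigma_v * x_f)
    return rho_chi / RHO_C_OVER_H2

# A hypothetical weak-scale <sigma v> ~ 1e-9 GeV^-2 for a 100 GeV neutralino
# lands in the cosmologically interesting window of Eq. (1).
print(omega_h2(100.0, 1.0e-9))   # ~0.2
```

The iteration converges in a few steps to x_f ≈ 1/21, reproducing the T_f ≈ m_χ/20 rule of thumb quoted above.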
Fig. 4. Contours of Ω_χ h² = 0.1 and 0.4 in the (m₀, m_{1/2}) plane.
Contours of Ω_χ h² = 0.1 and 0.4 in the (m₀, m_{1/2}) plane are presented in Fig. 4 for μ > 0 with tan β = 1.8 and 50. Also shown are the regions that do not satisfy the theoretical requirements (EWSB, tachyon free, and lightest neutralino as the LSP) and the region excluded by the chargino search (m_{χ±} < 85 GeV) at LEP 2 [16]. If tan β is close to 1.8, (i) most of the (m₀, m_{1/2}) parameter space is theoretically acceptable; (ii) the chargino search at LEP 2 excludes the region where m_{1/2} ≲ 80 GeV for μ > 0 and m_{1/2} ≲ 110 GeV for μ < 0; (iii) most of the cosmologically interesting region is 80 GeV ≲ m_{1/2} ≲ 450 GeV and m₀ ≲ 200 GeV. If tan β is close to 50, (a) the theoretically acceptable region in the (m₀, m_{1/2}) plane is constrained to have m₀ ≳ 160 GeV and m_{1/2} ≳ 150 GeV; (b) the LEP 2 chargino search excludes (i) the region with m_{1/2} ≲ 125 GeV for μ > 0 or (ii) the region with m_{1/2} ≲ 135 GeV for μ < 0, which is already inside the theoretically excluded region; (c) the cosmologically interesting region lies in a band with (i) 475 GeV ≲ m_{1/2} ≲ 800 GeV for μ > 0, or (ii) 500 GeV ≲ m_{1/2} ≲ 840 GeV for μ < 0, and m₀ ≳ 300 GeV.

4. Conclusions

The existence of dark matter in the Universe provides a potentially important bridge between particle physics and cosmology. Requiring that the neutralino relic density lie in the cosmologically interesting region, we were able to place tight constraints on the SUGRA parameter space, especially in the plane of m_{1/2} versus tan β, since the mass of the lightest neutralino depends mainly on these two parameters. The cosmologically interesting regions of the parameter space with tan β close to the top Yukawa infrared fixed points are found to be

  tan β = 1.8: 80 GeV ≲ m_{1/2} ≲ 450 GeV and m₀ ≲ 200 GeV ,
  tan β = 50: 500 GeV ≲ m_{1/2} ≲ 800 GeV and m₀ ≳ 300 GeV ,
where the high tan β result is based on A₀ = 0. Both regions are nearly independent of the sign of μ. The results presented in this article were based on the GUT scale trilinear coupling choice A₀ = 0; however, for tan β ≲ 10, the neutralino relic density Ω_χ h² is almost independent of the trilinear couplings A [14]. For tan β ~ 40, the relic density is reduced by a positive A₀ while enhanced by a negative A₀ [14]. The value of A₀ significantly affects Ω_χ h² only when tan β is large and both m₀ and m_{1/2} are small.

Acknowledgements

Research supported in part by the U.S. Department of Energy under Grant No. DE-FG02-95ER40896, and in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation.
References

[1] A.H. Chamseddine, R. Arnowitt, P. Nath, Phys. Rev. Lett. 49 (1982) 970; L. Ibáñez, G. Ross, Phys. Lett. B 110 (1982) 215; J. Ellis, D. Nanopoulos, K. Tamvakis, Phys. Lett. B 121 (1983) 123; L. Alvarez-Gaumé, J. Polchinski, M. Wise, Nucl. Phys. B 121 (1983) 495; L. Hall, J. Lykken, S. Weinberg, Phys. Rev. D 27 (1983) 2359. [2] E.W. Kolb, M.S. Turner, The Early Universe, Addison-Wesley, Redwood City, CA, 1989. [3] G. Jungman, M. Kamionkowski, K. Griest, Phys. Rep. 267 (1996) 195. [4] V. Barger, M.S. Berger, P. Ohmann, Phys. Rev. D 47 (1993) 1093; Phys. Rev. D 49 (1994) 4908; V. Barger, M.S. Berger, P. Ohmann, R.J.N. Phillips, Phys. Lett. B 314 (1993) 351. [5] J. Ellis, F. Zwirner, Nucl. Phys. B 338 (1990) 317; G. Ross, R.G. Roberts, Nucl. Phys. B 377 (1992) 571; R. Arnowitt, P. Nath, Phys. Rev. Lett. 69 (1992) 725; M. Drees, M.M. Nojiri, Nucl. Phys. B 369 (1993) 54; S. Kelley et al., Nucl. Phys. B 398 (1993) 3; M. Olechowski, S. Pokorski, Nucl. Phys. B 404 (1993) 590; G. Kane, C. Kolda, L. Roszkowski, J. Wells, Phys. Rev. D 49 (1994) 6173; D.J. Castaño, E. Piard, P. Ramond, Phys. Rev. D 49 (1994) 4882; W. de Boer, R. Ehret, D. Kazakov, Z. Phys. C 67 (1995) 647; H. Baer, M. Drees, C. Kao, M. Nojiri, X. Tata, Phys. Rev. D 50 (1994) 2148; H. Baer, C.-H. Chen, R. Munroe, F. Paige, X. Tata, Phys. Rev. D 51 (1995) 1046. [6] B. Pendleton, G.G. Ross, Phys. Lett. B 98 (1981) 291; C.T. Hill, Phys. Rev. D 24 (1981) 691; C.D. Froggatt, R.G. Moorhouse, I.G. Knowles, Phys. Lett. B 298 (1993) 356; W.A. Bardeen, M. Carena, S. Pokorski, C.E.M. Wagner, Phys. Lett. B 320 (1994) 110; M. Carena, M. Olechowski, S. Pokorski, C.E.M. Wagner, Nucl. Phys. B 419 (1994) 213; B. Schrempp, M. Wimmer, DESY-96-109, 1996. [7] M.S. Alam et al., CLEO Collaboration, Phys. Rev. Lett. 74 (1995) 2885. [8] R. Barate et al., ALEPH Collaboration, CERN Report CERN-EP-98-044, 1998. [9] P. Nath, R. Arnowitt, Phys. Lett. B 336 (1994) 395; Phys. Rev. Lett. 74 (1995) 4592; Phys. Rev.
D 54 (1996) 2374; F. Borzumati, M. Drees, M. Nojiri, Phys. Rev. D 51 (1995) 341; H. Baer, M. Brhlik, Phys. Rev. D 55 (1997) 3201; H. Baer, M. Brhlik, D. Castano, X. Tata, Florida State University Report FSU-HEP-971104, 1997. [10] H.P. Nilles, Phys. Rep. 110 (1984) 1; H. Haber, G. Kane, Phys. Rep. 117 (1985) 75. [11] J. Gunion, H. Haber, G. Kane, S. Dawson, The Higgs Hunter's Guide, Addison-Wesley, Redwood City, CA, 1990. [12] H. Haber, R. Hempfling, Phys. Rev. Lett. 66 (1991) 1815; J. Ellis, G. Ridolfi, F. Zwirner, Phys. Lett. B 257 (1991) 83; Y. Okada, M. Yamaguchi, T. Yanagida, Prog. Theor. Phys. 85 (1991) 1; we use the calculations of M. Bisset, Ph.D. Thesis, University of Hawaii, 1994. [13] H. Baer, C.-H. Chen, M. Drees, F. Paige, X. Tata, Phys. Rev. Lett. 79 (1997) 986. [14] V. Barger, C. Kao, Phys. Rev. D 57 (1998) 3131. [15] K. Inoue, A. Kakuto, H. Komatsu, S. Takeshita, Prog. Theor. Phys. 68 (1982) 927; 71 (1984) 413.
[16] ALEPH Collaboration, talk presented at CERN by G. Cowan, February 1997. [17] A. Dekel, M.J. Rees, Astrophys. J. 422 (1994) L1; A. Dekel, Ann. Rev. Astron. Astrophys. 32 (1994) 371; M. Strauss, J. Willick, Phys. Rep. 261 (1995) 271; N. Bahcall, L.M. Lubin, V. Dorman, Astrophys. J. 447 (1995) L81; R.G. Carlberg, H.K.C. Yee, E. Ellingson, astro-ph/9512087. [18] T.P. Walker, G. Steigman, D.N. Schramm, K.A. Olive, H.-S. Kang, Astrophys. J. 376 (1991) 51; M.S. Smith, L.H. Kawano, R.A. Malaney, Astrophys. J. Suppl. Ser. 85 (1993) 219; C.J. Copi, D.N. Schramm, M.S. Turner, Science 267 (1995) 192. [19] M. White, D. Scott, J. Silk, Ann. Rev. Astron. Astrophys. 32 (1994) 319. [20] A.G. Riess, R.P. Kirshner, W.H. Press, Astrophys. J. 438 (1995) L17; Astrophys. J. 473 (1996) 88; D. Branch, A. Fisher, E. Baron, P. Nugent, Astrophys. J. 470 (1996) L7. [21] Ya.B. Zel'dovich, Adv. Astron. Astrophys. 3 (1965) 241; H.-Y. Chiu, Phys. Rev. Lett. 17 (1966) 712; B. Lee, S. Weinberg, Phys. Rev. Lett. 39 (1977) 165. [22] R. Barbieri, M. Frigeni, G.F. Giudice, Nucl. Phys. B 313 (1989) 725; K. Griest, D. Seckel, Phys. Rev. D 43 (1991) 3191; J.L. Lopez, D.V. Nanopoulos, K.-J. Yuan, Phys. Lett. B 267 (1991) 219; Nucl. Phys. B 370 (1992) 445; Phys. Rev. D 48 (1993) 2766; J. Ellis, L. Roszkowski, Phys. Lett. B 283 (1992) 252; L. Roszkowski, R. Roberts, Phys. Lett. B 309 (1993) 329; M. Drees, M.M. Nojiri, Phys. Rev. D 47 (1993) 376; G.L. Kane, C. Kolda, L. Roszkowski, J.D. Wells, Phys. Rev. D 49 (1994) 6173; E. Diehl, G.L. Kane, C. Kolda, J.D. Wells, Phys. Rev. D 52 (1995) 4223; R. Arnowitt, P. Nath, Phys. Lett. B 299 (1993) 58; B 307 (1993) 403(E); Phys. Rev. Lett. 70 (1993) 3696; Phys. Rev. D 54 (1996) 2374; J. Ellis, T. Falk, K.A. Olive, M. Schmitt, Phys. Lett. B 388 (1996) 97; Phys. Lett. B 413 (1997) 355. [23] H. Baer, M. Brhlik, Phys. Rev. D 53 (1996) 597; Phys. Rev. D 57 (1998) 567. [24] D. Matalliotakis, H.P. Nilles, Nucl. Phys. B 435 (1995) 115; M. Olechowski, S. Pokorski, Phys.
Lett. B 344 (1995) 201; V. Berezinskii et al., Astropart. Phys. 5 (1996) 1; P. Nath, R. Arnowitt, Phys. Rev. D 56 (1997) 2820. [25] J. Ellis et al., Nucl. Phys. B 238 (1984) 453.
Physics Reports 307 (1998) 215—226
Cosmological constraints on supergravity unified models R. Arnowitt *, Pran Nath Center for Theoretical Physics, Department of Physics, Texas A&M University, College Station, TX 77843-4242, USA Department of Physics, Northeastern University, Boston, MA 02115-5005, USA
Abstract

Supergravity GUT models with R-parity invariance possess a cold dark matter candidate (the lightest neutralino) with relic amounts consistent with astronomical measurements, and predict proton decay at rates accessible to on-going and future experiments. Future satellite, balloon and ground based experiments will give precision determinations of the basic cosmological parameters, and thus affect accelerator and non-accelerator (p-decay, dark matter detection rate) predictions. Thus for the ΛCDM model we find an upper bound on the gluino (neutralino) mass of about 520 (70) GeV, and for the νCDM model a bound of 720 (100) GeV, with gaps (forbidden regions) in the parameter space at lower masses. Kamiokande proton decay data combined with relic density constraints already excludes m_g̃ > 400 GeV for minimal SU(5), and Super Kamiokande will be sensitive to m_g̃ > 500 GeV for non-minimal models even for m₀ as large as 5 TeV. Proton decay and CDM relic density constraints also imply a reduction (by as much as 10⁻²) in expected maximum CDM event rates for terrestrial detectors. 1998 Elsevier Science B.V. All rights reserved. PACS: 98.80.−k; 04.65.+e
1. Introduction

In spite of the remarkable success of the Standard Model (SM) in explaining all phenomena at the electroweak scale O(M_Z), it is generally believed that this model will break down at energies O(1 TeV). Thus the quadratic divergence in the Higgs mass self-energy implies an unreasonable amount of fine tuning of the parameters of the theory unless new physics arises at the TeV scale. At present, the only models of new physics that can eliminate this difficulty and still retain the successes of the SM are those based on supersymmetry (SUSY) [1]. SUSY models imply the existence of a whole array of new particles (the SUSY partners of the SM particles) whose masses may be as light as 40 GeV and as heavy as a TeV. Current and future accelerator
* Corresponding author. E-mail:
[email protected].
R. Arnowitt, P. Nath / Physics Reports 307 (1998) 215—226
experiments are expected to be able to verify or deny the existence of such particles (and hence the validity of SUSY models) over the next decade. Supersymmetry also resolves the gauge hierarchy problem, in that one can maintain gauge bosons at two very different mass scales without quantum loop corrections destroying this mass separation. This has led to the development of grand unified models based on supergravity (SUGRA) [2,3], and the fact that the LEP data is consistent with the unification of the three SM gauge coupling constants, α₁, α₂ and α₃, at a scale M_G ≈ 2×10¹⁶ GeV with value α_G ≈ 1/24 for models with three generations and two Higgs doublets [4] has stimulated considerable theoretical examination of the predictions of such theories. In SUGRA GUT models, it is possible to achieve spontaneous symmetry breaking of SUSY in a "hidden sector", leading to softly broken SUSY masses in the physical sector. We consider here models where this breaking takes place at a scale of O(M_G) or above, with gravity as the messenger communicating the breaking from the hidden sector to the physical sector, i.e. "gravity mediated" models. [These models are briefly reviewed in Section 2.] One can in this fashion build models that are consistent with all data at the electroweak (EW) scale with a natural extension up to M_G. As a consequence, they can also be applied to cosmological problems back to the very early universe, i.e. to times when the temperature was below M_G, thus generating a link between cosmology and particle physics. Most SUGRA GUT models require the existence of a conserved R-parity [R = (−1)^{3B+L+2S}, where (B, L) = (baryon, lepton) number, S = spin] to prevent too rapid proton decay. A consequence of R-parity invariance is that the lightest supersymmetric particle (LSP) is absolutely stable. Models of this type automatically predict the existence of dark matter in the universe, i.e. the relic LSPs that did not annihilate in the early universe [5].
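The R-parity assignment R = (−1)^{3B+L+2S} quoted above can be evaluated with exact rational arithmetic; SM particles carry R = +1 while their superpartners carry R = −1:

```python
# R-parity R = (-1)^(3B + L + 2S), computed with exact fractions so the
# exponent is guaranteed to come out as an integer.
from fractions import Fraction

def r_parity(B, L, S):
    expo = 3 * Fraction(B) + Fraction(L) + 2 * Fraction(S)
    assert expo.denominator == 1        # exponent is always an integer
    return (-1) ** (expo.numerator % 2)

print(r_parity(Fraction(1, 3), 0, Fraction(1, 2)))  # quark: +1
print(r_parity(Fraction(1, 3), 0, 0))               # squark: -1
print(r_parity(0, 1, Fraction(1, 2)))               # lepton: +1
print(r_parity(0, 0, Fraction(1, 2)))               # neutralino: -1
```

Since every SM particle has R = +1 and every superpartner R = −1, a multiplicatively conserved R-parity forces superpartners to be produced in pairs and makes the lightest one stable, as the text states.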
Over almost all of the SUSY parameter space, the LSP is the lightest neutralino, the χ, and remarkably, over a significant portion of the SUSY parameter space, the relic density of χ is consistent with the current astronomical estimates of the amount of cold dark matter (CDM) seen in the universe. Thus there is a correlation between the SUSY parameters, which govern supersymmetry predictions at accelerators, and cosmological phenomena. Much of the uncertainty in this relation is due to the current uncertainty in the basic cosmological parameters. However, over the next 5—10 years, the MAP and Planck Surveyor satellites [6] (and the many balloon and ground based experiments) will be able to determine the Hubble constant H₀, the amount of CDM, baryonic dark matter, etc. with errors at the level of only a few percent [7]. This will strikingly restrict the SUSY parameter space, and hence influence what may be expected at accelerators as well as what event rates may be seen at terrestrial CDM detectors. In Section 3 below we illustrate this for two cosmological models: the ΛCDM model (with non-zero cosmological constant) and the νCDM model (with hot dark matter (HDM) from massive neutrinos). In all GUT models, the proton is unstable, and in most SUSY models the dominant decay mode is p → ν̄ + K⁺. The decay rate is sensitive to the SUSY parameter space. In Section 4, we discuss the restrictions arising from current astronomical determinations of the amount of CDM, and the effects this has on expectations for current and future proton decay experiments. Section 5 gives conclusions.
R. Arnowitt, P. Nath / Physics Reports 307 (1998) 215—226
2. Supergravity GUT models

The minimal supergravity GUT model (mSUGRA) depends upon four new parameters and one sign. These may be taken to be m₀ (the universal scalar SUSY soft breaking mass), m_{1/2} (the universal gaugino soft breaking mass, which is related to the gluino (g̃) mass by m_g̃ ≅ (α₃/α_G) m_{1/2}), A₀ (the trilinear soft breaking mass parameter), and tan β ≡ ⟨H₂⟩/⟨H₁⟩, where the Higgs doublet VEVs ⟨H_{1,2}⟩ give rise to (down, up) quark masses. In addition there is the Higgs mixing parameter μ appearing in the superpotential term μH₁H₂. Using the renormalization group equations (RGE), the spontaneous breaking of SUSY at O(M_G) gives rise to the spontaneous breaking of SU(2)×U(1) at the EW scale, and this radiative breaking determines μ² [8], leaving, however, the sign of μ arbitrary. While the universality (or near universality) of the soft breaking parameters for the first two generations of squarks and sleptons is needed to suppress flavor changing neutral currents, such universality is not needed for the third generation. One may parametrize the Higgs and third generation scalar masses at M_G as m_{H₁}² = m₀²(1+δ₁), m_{H₂}² = m₀²(1+δ₂), m_{q_L}² = m₀²(1+δ₃), m_{u_R}² = m₀²(1+δ₄), m_{b_R}² = m₀²(1+δ₅) and m_{ℓ_L}² = m₀²(1+δ₆), where q_L ≡ (t̃_L, b̃_L) is the L squark doublet, u_R = t̃_R the R stop squark singlet, etc. There may also be distinct third generation cubic soft breaking parameters at M_G, which we label A_t, A_b, A_τ. In the following we will restrict our parameters to the range

m₀, m_g̃ ≤ 1 TeV, |A_t/m₀| ≤ 7, |δ_i| ≤ 1, tan β ≤ 25 ,
(1)
so that an unnatural amount of fine tuning is not required. (A_t is the t-quark parameter at the electroweak scale.) The condition on tan β implies that A_b and A_τ make only small contributions, and δ₅, δ₆ do not contribute significantly to the determination of μ². Thus neglecting these one finds [9]

μ² = (t²/(t²−1)) { [ (1−3D₀)/2 − 1/t² + ((1+D₀)/2t²) δ₁ + ((1−3D₀)/2) δ₂ + ((1−D₀)/2)(δ₃+δ₄) ] m₀² + ((1−D₀)/2)(A_R²/D₀) + C_g̃ m_g̃² } − M_Z²/2 + (1/22)((t²+1)/(t²−1))(1 − α₁(M_G)/α_G) S₀ + loop corrections ,

(2)

where t ≡ tan β, D₀ ≅ 1 − (m_t/200 sin β)², A_R ≅ A_t − 0.613 m_g̃, S₀ = Tr Y m² (Y = hypercharge, m² = masses at M_G), and C_g̃ is given in Ibáñez et al. [8]. Here D₀ vanishes at the t-quark Landau pole, and A_R is the residue at the pole.

As will be discussed below, μ² plays a central role in the predictions of SUGRA GUT models. Over much of the parameter space one finds μ² ≫ M_Z², and scaling relations hold among the neutralinos (χ_i⁰, i = 1,…,4) and charginos (χ_i^±, i = 1, 2) [10]: 2m_{χ₁⁰} ≅ m_{χ₁^±} ≅ m_{χ₂⁰} ≅ (1/3–1/4) m_g̃, and m_{χ₃⁰}, m_{χ₄⁰}, m_{χ₂^±} ≅ μ ≫ m_{χ₁⁰}. Further, D₀ is generally quite small (D₀ ≤ 0.23 for m_t = 175 GeV), and so the non-universal effects in μ² are governed approximately by the combination δ₂ − (δ₃+δ₄) − 2δ₁/t².
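The smallness of D₀ over the allowed tan β range, and the gaugino scaling relations quoted above, are easy to check numerically. A minimal sketch, assuming the definitions in the text (the function names and the 1/3.5 midpoint of the quoted 1/3–1/4 range are ours, for illustration only):

```python
import math

def D0(m_top_gev: float, tan_beta: float) -> float:
    """D0 = 1 - (m_t / 200 sin(beta))^2; vanishes at the t-quark Landau pole."""
    beta = math.atan(tan_beta)
    return 1.0 - (m_top_gev / (200.0 * math.sin(beta))) ** 2

def scaled_masses(m_gluino_gev: float):
    """Gaugino scaling relations quoted in the text (valid for mu^2 >> M_Z^2):
    2 m_chi1 ~ m_chargino1 ~ m_chi2 ~ (1/3 - 1/4) m_gluino.
    The 1/3.5 midpoint used here is an illustrative assumption."""
    m_chargino1 = m_gluino_gev / 3.5
    m_chi1 = m_chargino1 / 2.0
    return m_chi1, m_chargino1

# D0 stays small over the whole allowed range tan(beta) <= 25 for m_t = 175 GeV
for tb in (2, 5, 25):
    assert 0.0 < D0(175.0, tb) < 0.24
```

For a 700 GeV gluino this gives a light chargino near 200 GeV and an LSP near 100 GeV, consistent with the mass hierarchy used throughout the relic density discussion below.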
3. Relic density analysis

3.1. Cosmological constraints

As mentioned earlier, under the assumption that R-parity is conserved, the lightest supersymmetric particle (LSP) is absolutely stable. Further, detailed analyses in the framework of supergravity unification show that the lightest neutralino is the LSP over most of the parameter space of the model, and thus is a candidate for cold dark matter. The neutralinos are produced in abundance in the early universe and then annihilate to produce the desired relic density at current temperatures. The quantity of theoretical interest for the computation of dark matter is Ω_χ h², where Ω_χ = ρ_χ/ρ_c, ρ_χ is the neutralino matter density and ρ_c is the critical matter density needed to close the universe, ρ_c = 3H₀²/8πG_N ≅ 1.88 h² × 10⁻²⁹ g/cm³, with h the Hubble parameter H₀ in units of 100 km/(s Mpc). The amount of cold dark matter is governed by the number density of neutralinos, which satisfies the Boltzmann equation

dn/dt = −3Hn − ⟨σv⟩(n² − n_eq²) ,

(3)

where n is the neutralino number density, n_eq is the equilibrium number density, and ⟨σv⟩ is the thermal average of the neutralino annihilation cross-section σ times the relative velocity v. The thermal average is defined by
⟨σv⟩ = [∫₀^∞ dv v² (σv) e^(−v²/4x)] / [∫₀^∞ dv v² e^(−v²/4x)] , x = kT/m_χ .

(4)

The "freeze-out" temperature T_f is the temperature where the neutralino annihilation rate becomes smaller than the expansion rate of the universe, so that the neutralinos decouple from the background. Integration of the above equation from the freezeout temperature to the current temperature gives for Ω_χ h² the result

Ω_χ h² ≅ 2.48×10⁻¹¹ (T_χ/T_γ)³ N_f^(1/2) / J(x_f) ,

(5)

where

J(x_f) = ∫₀^(x_f) dx ⟨σv⟩(x) GeV⁻² ,

(6)

T_γ is the current background temperature, (T_χ/T_γ)³ is the reheating factor, N_f is the number of massless degrees of freedom at freezeout, and x_f = kT_f/m_χ. In the relic density analysis we use the accurate method for its computation, which correctly integrates over the poles in the thermal averaging. The need for the accurate method arises because the neutralinos can annihilate into direct channel Z and Higgs poles, and the usual approximation of expansion in powers of v breaks down in this region. Thus in the thermal averaging one must integrate correctly over the Z and Higgs poles.
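Away from the Z and Higgs poles (where the careful pole integration is essential), Eqs. (4)-(6) can be sketched with simple quadrature. The kernel normalization follows the reconstruction above, and the sample inputs (reheating factor 1/20, N_f = 100, a constant toy cross-section of 10⁻⁹ GeV⁻²) are illustrative assumptions, not values from the paper:

```python
import math

def thermal_average(sigma_v, x, vmax=10.0, n=2000):
    """Eq. (4): <sigma v>(x) = int dv v^2 (sigma v) e^{-v^2/4x} / int dv v^2 e^{-v^2/4x},
    with x = kT/m_chi. Simple rectangle-rule quadrature over v."""
    num = den = 0.0
    dv = vmax / n
    for i in range(1, n + 1):
        v = i * dv
        w = v * v * math.exp(-v * v / (4.0 * x))
        num += w * sigma_v(v) * dv
        den += w * dv
    return num / den

def omega_h2(sigma_v, x_f, T_ratio_cubed=0.05, N_f=100.0, n_steps=200):
    """Eqs. (5)-(6): Omega_chi h^2 ~ 2.48e-11 (T_chi/T_gamma)^3 sqrt(N_f) / J(x_f),
    with J(x_f) = int_0^{x_f} dx <sigma v>(x) in GeV^-2."""
    dx = x_f / n_steps
    J = sum(thermal_average(sigma_v, (i + 0.5) * dx) * dx for i in range(n_steps))
    return 2.48e-11 * T_ratio_cubed * math.sqrt(N_f) / J

# toy, velocity-independent annihilation cross-section of 1e-9 GeV^-2,
# freeze-out at x_f = 1/20: gives Omega h^2 of order 0.25
result = omega_h2(lambda v: 1.0e-9, x_f=0.05)
```

The toy numbers land in the cosmologically interesting range Ω_χ h² ~ 0.1–0.4 discussed below, which is the point of the exercise: a weak-scale annihilation cross-section naturally gives a relic density of order unity.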
3.2. Cosmological models

As mentioned earlier, future satellite, balloon and ground based experiments will give a much more precise determination of the cosmological parameters. Specifically, one will have precise determinations of the Hubble parameter and of the amounts of hot dark matter, cold dark matter and the cosmological constant. These determinations will affect the allowed SUSY parameter space and strongly affect the predictions of the SUSY mass spectrum at accelerators. We discuss here the implications of these accurate measurements in the context of two cosmological models: the νCDM model and the ΛCDM model.

(i) νCDM model. The νCDM model uses neutrinos as the hot dark matter (HDM) component and the neutralinos as the cold dark matter (CDM) component. As an illustrative example we work in the inflationary scenario with Ω = 1 and make the choice Ω_ν = 0.2, Ω_B = 0.05, Ω_χ = 0.75 and h = 0.62. Using the estimates of the errors with which one expects the Planck satellite can measure the cosmological parameters [7], one finds that Ω_χ h² lies in the 1 std range

0.275 ≤ Ω_χ h² ≤ 0.301 .
(7)
Imposing the constraint of Eq. (7) on the relic density one finds several interesting phenomena, as shown in Fig. 1 [11]. First, one finds that the range of event rates in dark matter detectors shrinks as compared to the results previously obtained for the larger range 0.1 ≤ Ω_χ h² ≤ 0.4. This is expected, as the parameter space has shrunk. One also finds the appearance of forbidden zones in the plot of event rates as a function of the gluino mass. Finally, one finds that there is an upper limit on the gluino mass of about 700 GeV within the naturalness constraint that the gluino mass lies in the mass range below 1 TeV.

(ii) ΛCDM model. As a second example we consider the ΛCDM model. Recently, there is considerable astrophysical data that points towards the existence of a cosmological constant. For specificity we consider here the possibility that Ω_χ = 0.4, Ω_B = 0.05, Ω_Λ = 0.55 and h = 0.62 as before. Using the estimate of the accuracies expected from the Planck satellite experiment [7], one then finds that Ω_χ h² lies in the range

0.140 ≤ Ω_χ h² ≤ 0.174 .
(8)
The effect of the constraint Eq. (8) on the event rates is described in Fig. 2 [11], where the maximum and minimum event rates for the scattering of a neutralino in a xenon detector are plotted as a function of the gluino mass. As for the νCDM case, comparison with the case when one imposes the less stringent constraint 0.1 ≤ Ω_χ h² ≤ 0.4 shows that there is a general narrowing of the maximum and minimum event rates. For this case the upper bound on the gluino mass is 520 GeV and the upper bound on the neutralino mass is 70 GeV. Additionally, an interesting spectrum of SUSY masses emerges. One finds that the light chargino mass lies in the range 80–135 GeV, the neutralino mass lies in the range 40–70 GeV, and the Higgs mass lies in two ranges, 70–80 GeV or 90–116 GeV. The first branch of the Higgs mass has already been excluded by the LEP data. The right-handed selectron obeys the limit that its mass lies below 100 GeV if the gluino mass lies above 460 GeV. Thus one is led into the interesting situation that for m_g̃ ≥ 460 GeV the slepton would be accessible at LEP200. However, if the gluino lies below 460 GeV, it may be
Fig. 1. Maximum and minimum event rates for a xenon detector in the νCDM model with the relic density constraint of Eq. (7) [11].
accessible to discovery at the Tevatron if the upgraded Tevatron can reach its maximum energy and luminosity.

4. Dark matter and proton decay

In the previous section we have seen the effects dark matter constraints have on accelerator physics predictions of SUGRA models (and what may be expected as more precise determinations of the cosmological parameters are obtained). We consider here the effects of these constraints on predictions for proton decay experiments, and for the possibility of terrestrial detection of dark matter. In most SUSY models with R-parity, p decay arises from dimension five operators due to the exchange of superheavy color triplet Higgsinos, H̃₃. For a wide class of models, i.e. SU(5), SO(10), E(6), etc., the dominant decay mode of the proton is p → ν̄ + K⁺, with decay width given by [12]

Γ(p → ν̄K⁺) = Σ_{i=e,μ,τ} Γ(p → ν̄_i K⁺) = (β_p²/M_{H₃}²) |A|² |B|² C ,
(9)
Fig. 2. Maximum and minimum event rates for a xenon detector in the ΛCDM model with the relic density constraint of Eq. (8). The solid curves are for the case δ₁ = 0 = δ₂, the dotted for the case δ₁ = 1 = −δ₂, and the dashed for the case δ₁ = −1 = −δ₂, with δ₃ = 0 = δ₄ in each case [11].
where M_{H₃} is the Higgs triplet mass and β_p is the three quark matrix element between the vacuum and the proton state. (Lattice gauge calculations give [13] β_p ≅ 5.6×10⁻³ GeV³.) The constant A is a factor depending on the quark masses and CKM matrix elements, B is the dressing loop integral which depends on the SUSY mass spectrum, and C contains the chiral current algebra factors that convert the quark states into mesons and baryons. In the following we will constrain M_{H₃} to satisfy M_{H₃} ≤ 10 M_G, as one expects strong gravitational effects to arise above this scale, phenomena which SUGRA models cannot treat.

4.1. mSUGRA models

We begin by analysing the p decay rate for the simplest case of the mSUGRA SU(5) model. Here the decay rate is dominated by the second generation loop diagram in B (the first generation is negligible, and the third generation gives a ≈20% correction), and roughly one finds
Γ ≅ const. × (m_g̃² / m₀⁴) tan²β .
(10)
One sees that small m_g̃, small tan β and large m₀ enhance the p lifetime. Fig. 3 [14] exhibits some of this behavior. Here the maximum value of τ(p → ν̄K) (as one scans the full SUSY parameter space) is plotted vs. m_g̃. We see that current data [15] does not exclude much of the allowed parameter space for m₀ ≤ 1 TeV. Super Kamiokande will be able to explore most of the parameter space for m₀ ≤ 1 TeV, but not the region m₀ ≥ 2 TeV.
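The rough scaling of Eq. (10) can be made concrete as a lifetime ratio; this is a sketch of the scaling only (the reference point and function name are our illustrative choices, not from the paper):

```python
def lifetime_ratio(m0, m_gluino, tan_beta,
                   m0_ref=1000.0, mg_ref=400.0, tb_ref=3.0):
    """Relative p -> nubar K+ lifetime from the rough scaling of Eq. (10):
    Gamma ~ const * (m_gluino^2 / m0^4) * tan^2(beta), so tau ~ 1/Gamma.
    Returns tau / tau(reference point); masses in GeV."""
    gamma = (m_gluino ** 2 / m0 ** 4) * tan_beta ** 2
    gamma_ref = (mg_ref ** 2 / m0_ref ** 4) * tb_ref ** 2
    return gamma_ref / gamma

# doubling m0 lengthens the lifetime 16x; doubling tan(beta) shortens it 4x
r_heavy_scalars = lifetime_ratio(2000.0, 400.0, 3.0)
r_large_tanb = lifetime_ratio(1000.0, 400.0, 6.0)
```

This makes the two relic density regions discussed next easy to anticipate: the large-m₀, small-m_g̃ region enhances the lifetime, while the small-m₀, large-m_g̃ region suppresses it.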
Fig. 3. The maximum τ(p → ν̄K) lifetime in the mSUGRA SU(5) model when m₀ ≤ 1 TeV (solid), m₀ ≤ 1.5 TeV (dash-dot) and m₀ ≤ 2 TeV (dashed) [14]. The solid horizontal line is the current experimental lower bound and the dashed horizontal line the expected sensitivity of Super Kamiokande.
We next examine what constraints the current astronomical bounds on the relic neutralino cold dark matter, 0.1 ≤ Ω_χ h² ≤ 0.4, place on the SUSY parameter space. As discussed in Section 3, the early universe χ annihilation cross section will decrease with increasing m_χ, which leads to two regions in the parameter space. The first occurs for m_χ ≲ 60 GeV (or, by the scaling relations, m_g̃ ≲ 450 GeV). Here annihilation can proceed rapidly through s-channel Higgs (h) or Z poles. This allows m₀ to become large and still have sufficient χ annihilation to satisfy the astronomical limits (and may even require m₀ to be large so that the lower bound on Ω_χ h² is not violated). Thus in this region m_g̃ is small and m₀ is large, enhancing the p lifetime by Eq. (10). In the second region, m_χ ≳ 60 GeV (m_g̃ ≳ 450 GeV), the s-channel pole contribution becomes small (2m_χ > m_{h,Z}) and the annihilation cross section is dominated by the t and u channel sfermion poles. To get sufficient annihilation so that the upper bound on Ω_χ h² is not violated, one requires relatively light sfermions, i.e. m₀ small, and in fact one generally has m₀ ≲ 100 GeV. Since in this region m₀ is small and m_g̃ is large, the p lifetime is reduced, and detailed calculations show a reduction factor of 10–30. Fig. 4 [14] exhibits this phenomenon, where one sees that for m₀ < 1 TeV (solid curve) current p-decay data already excludes all the parameter space for m_g̃ ≳ 400 GeV, and Super Kamiokande [16] will be able to explore the full parameter space. Even for m₀ < 5 TeV (dashed curve), current data eliminates all the parameter space for m_g̃ ≳ 500 GeV. The ICARUS experiment expects to be able to achieve a sensitivity of 10³⁴ yr [17], which would be able to examine the full
Fig. 4. The maximum τ(p → ν̄K) lifetime in the mSUGRA SU(5) model when the constraint 0.1 < Ω_χ h² < 0.4 is imposed, with m₀ ≤ 1 TeV (solid) and m₀ ≤ 5 TeV (dashed) [14]. Horizontal lines are as in Fig. 3.
parameter space for m₀ < 5 TeV, except for a few high points. We see, therefore, that the implementation of dark matter constraints greatly affects the SUGRA predictions for proton decay.

4.2. Non-minimal models

The above discussion was done for mSUGRA SU(5). We discuss now what changes might occur for non-minimal models.

(i) Textures. Treatment of textures in the Yukawa couplings generally requires the inclusion of higher dimensional operators in the effective potential [18]. A rough analysis shows that the p → ν̄K lifetime can be enhanced by a factor of ≈ 3–10. The additional suppression due to the relic density constraint remains unchanged. We see then from Fig. 4 that the sensitivity to the parameter space m_g̃ ≲ 500 GeV will be achieved when Super Kamiokande achieves a sensitivity to p decay of ≈ 5×10³³ yr, which should occur in 1999.

(ii) Non-universalities. As discussed in Section 2, non-universal soft breaking may occur in the Higgs sector and the third generation squarks. Much of the effect of such non-universalities comes from their modifications of μ² in Eq. (2), where only the combination δ̃ − δ₁/tan²β occurs (δ̃ ≡ δ₂ − δ₃ − δ₄). We have considered the case where δ₃ = 0 = δ₄ and have examined the two extreme limits δ₁ = −1 = −δ₂ and δ₁ = 1 = −δ₂. In both these cases the p lifetime is generally reduced from the values obtained for δ_i = 0, and thus Super Kamiokande and ICARUS will be able to probe the same regions of parameter space for these non-universalities as in the universal case.
(iii) GUT group. While the analysis has been done for SU(5), the above results hold for any GUT group that has an SU(5) subgroup with the matter embedded in the SU(5) subgroup in the conventional way, provided tan β is restricted to be low (tan β ≲ 25). There are two important exceptions, the flipped SU(5) model and SO(10). In the former case, the change of the matter embedding (u ↔ d, e ↔ ν) causes the dimension five p → ν̄K operator to vanish, leaving p → e⁺π as the dominant decay mode. For SO(10), the t−b Yukawa unification at M_G requires that tan β be large, i.e. tan β ≅ 56. As can be seen from Eq. (10), this significantly reduces the proton lifetime, requiring that models possess a number of GUT thresholds (with effective masses as large as 10¹⁹ GeV) to simultaneously satisfy both the grand unification and the p lifetime constraints [19]. To see which models survive the additional reduction expected from the relic density constraint requires a more detailed treatment.

4.3. Effects on SUSY mass spectrum

We saw above that the current p decay data and relic density constraints imply for the SU(5)-type models that m_g̃ ≲ 500 GeV, and hence by scaling m_{χ₁^±} ≲ 140 GeV. The upgraded Tevatron (with 20 fb⁻¹ of data) is expected to be sensitive to gluinos with m_g̃ ≲ 450 GeV and charginos with m_{χ₁^±} ≲ 225 GeV. For m₀ ≤ 1 TeV one finds 75 GeV ≲ m_h ≲ 115 GeV, and for m₀ ≤ 2 TeV one has 75 GeV ≲ m_h ≲ 130 GeV [14]. LEP200 may be expected to see a light Higgs with m_h ≲ 100 GeV and the upgraded Tevatron with m_h ≲ 120 GeV. Thus models of this type may be tested at accelerators even before the LHC.

4.4. Dark matter detection

Terrestrial detection of Milky Way dark matter particles depends on the scattering of the incident χ by quarks in a nuclear target, which proceeds through s-channel squark poles and t-channel h and Z poles. This cross section is increased by having m₀ small (light squarks) and also grows with tan β. As can be seen from Eq.
(10), then, large scattering event rates will occur in the part of the parameter space where the p lifetime is reduced. If we impose both the p decay and relic density constraints, we saw in Fig. 4 that for SU(5)-type models the parameter space is limited to m_g̃ < 500 GeV, i.e. m_χ < 70 GeV. One finds that current data then reduces the maximum detector event rates (which in this domain can be as large as O(1) events/kg day for large tan β) down to O(10⁻²–10⁻³) events/kg day [14]. Thus models of this type, which possess both p decay and neutralino dark matter candidates, will require highly sensitive detectors for the direct detection of local dark matter.
5. Conclusions

Supergravity allows one to build grand unified models that correctly reduce to the Standard Model at low energies and make accelerator predictions for new physics in the 100 GeV–TeV domain, as well as making cosmological predictions concerning dark matter and GUT predictions involving proton decay. Future precision determinations of the basic cosmological parameters will thus impact significantly on other predictions of the model. For example, if Ω_CDM ≅ 0.4 (as might be
the case for the ΛCDM model) one finds upper bounds on the gluino and neutralino masses of 520 GeV and 70 GeV respectively, with squarks and selectrons becoming light for m_g̃ (m_χ) > 420 (55) GeV. Current astronomical bounds on dark matter significantly influence proton decay predictions for SU(5)-type models. The Kamiokande data already excludes the parameter space for m_g̃ ≳ 400 GeV for the mSUGRA SU(5) with m₀ ≤ 1 TeV when relic density constraints are imposed, and Super Kamiokande should be sensitive to the domain m_g̃ ≲ 500 GeV within about a year for non-minimal extensions and with m₀ ≤ 5 TeV. Thus one should expect in the future significant interactions between cosmological phenomena and particle physics phenomena for models of this type.

Acknowledgements

This work was supported in part by National Science Foundation Grants PHY-9722090 and PHY-9602074.

References

[1] H. Haber, G. Kane, Phys. Rep. 117 (1985) 75.
[2] A.H. Chamseddine, R. Arnowitt, P. Nath, Phys. Rev. Lett. 49 (1982) 970; R. Barbieri, S. Ferrara, C.A. Savoy, Phys. Lett. B 119 (1982) 343; L. Hall, J. Lykken, S. Weinberg, Phys. Rev. D 27 (1983) 2359; P. Nath, R. Arnowitt, A.H. Chamseddine, Nucl. Phys. B 227 (1983) 121.
[3] P. Nath, R. Arnowitt, A.H. Chamseddine, Applied N=1 Supergravity, World Scientific, Singapore, 1984; H.P. Nilles, Phys. Rep. 110 (1984) 1; R. Arnowitt, P. Nath, in: E. Eboli, V.O. Rivelles (Eds.), Proc. VII J.A. Swieca Summer School, World Scientific, Singapore, 1994; X. Tata, in: TASI-95 Lectures, University of Hawaii Report No. UH-511-833-95, 1995; M. Drees, S.P. Martin, hep-ph/9504324.
[4] P. Langacker, in: P. Nath, S. Reucroft (Eds.), Proc. PASCOS 90 Symp., World Scientific, Singapore, 1990; J. Ellis, S. Kelley, D.V. Nanopoulos, Phys. Lett. B 249 (1990) 441; B 260 (1991) 131; U. Amaldi, W. de Boer, H. Furstenau, Phys. Lett. B 260 (1991) 447; F. Anselmo, L. Cifarelli, A. Peterman, A. Zichichi, Nuovo Cimento 104A (1991) 1817; 115A (1992) 581.
[5] G. Jungman, M. Kamionkowski, K.
Griest, Phys. Rep. 267 (1996) 195; E.W. Kolb, M.S. Turner, The Early Universe, Addison-Wesley, Reading, MA, 1989.
[6] http://map.gsfc.nasa.gov/; http://astro.estec.esa.nl:80/SA-general/Projects/Cobras/cobras.html.
[7] A. Kosowsky, M. Kamionkowski, G. Jungman, D. Spergel, Nucl. Phys. Proc. (Suppl.) 51B (1996) 49. We assume here an angular resolution of 0.17° and a noise level of w⁻¹ = 1.3×10⁻¹⁷; S. Dodelson, E. Gates, A. Stebbins, Astrophys. J. 467 (1996) 10. We assume here the cosmological parameters Q and n are a priori unknown and allowed to vary.
[8] K. Inoue et al., Prog. Theor. Phys. 68 (1982) 927; L. Ibáñez, G.G. Ross, Phys. Lett. B 110 (1982) 227; L. Alvarez-Gaumé, J. Polchinski, M.B. Wise, Nucl. Phys. B 221 (1983) 495; J. Ellis, J. Hagelin, D.V. Nanopoulos, K. Tamvakis, Phys. Lett. B 125 (1983) 275.
[9] P. Nath, R. Arnowitt, Phys. Rev. D 56 (1997) 2820.
[10] R. Arnowitt, P. Nath, Phys. Rev. Lett. 69 (1992) 725; P. Nath, R. Arnowitt, Phys. Lett. B 289 (1992) 368.
[11] P. Nath, R. Arnowitt, hep-ph/9801454.
[12] J. Ellis, D.V. Nanopoulos, S. Rudaz, Nucl. Phys. B 202 (1982) 43; B.A. Campbell, J. Ellis, D.V. Nanopoulos, Phys. Lett. B 141 (1984) 299; S. Chadha, G.D. Coughlan, M. Daniel, G.G. Ross, Phys. Lett. B 149 (1984) 47; R. Arnowitt, A.H. Chamseddine, P. Nath, Phys. Lett. B 156 (1985) 215; P. Nath, R. Arnowitt, A.H. Chamseddine, Phys. Rev. D 32 (1985) 2348; J. Hisano, H. Murayama, T. Yanagida, Nucl. Phys. B 402 (1993) 46; R. Arnowitt, P. Nath, Phys. Rev. D 49 (1994) 1479.
[13] M.B. Gavela et al., Nucl. Phys. B 312 (1989) 269.
[14] R. Arnowitt, P. Nath, hep-ph/9801246.
[15] Particle Data Group, Phys. Rev. D 54 Part 2 (1996).
[16] Y. Totsuka, in: R. Kotthaus, J.H. Kühn (Eds.), Proc. XXIV Conf. on High Energy Physics, Munich, 1988, Springer, Berlin, 1989.
[17] ICARUS II, a Second Generation Proton Decay Experiment and Neutrino Observatory at the Gran Sasso Laboratory, Proposal vols. I, II, LNGS-94/99-I&II, May 1994; Addendum, May 19, 1995.
[18] H. Georgi, C. Jarlskog, Phys. Lett. B 86 (1979) 297; J. Harvey, P. Ramond, D. Reiss, Phys. Lett. B 92 (1980) 309; P. Ramond, R.G. Roberts, G.G. Ross, Nucl. Phys. B 406 (1993) 19; G. Anderson, S. Raby, S. Dimopoulos, L. Hall, G.D. Starkman, Phys. Rev. D 49 (1994) 3660; P. Nath, Phys. Rev. Lett. 76 (1996) 2218.
[19] K.S. Babu, S.M. Barr, Phys. Rev. D 51 (1995) 2463; V. Lucas, S. Raby, Phys. Rev. D 54 (1996) 2261; 55 (1997) 6986; S. Urano, R. Arnowitt, hep-ph/9611389.
Physics Reports 307 (1998) 227—234
Origin of dark matter axions E.P.S. Shellard*, R.A. Battye Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Silver Street, Cambridge, CB3 9EW, UK
Abstract

We discuss the possible sources of dark matter axions in the early universe. In the standard thermal scenario, an axion string network forms at the Peccei-Quinn phase transition T ~ f_a and then radiatively decays into a cosmological background of axions; to be the dark matter, these axions must have a mass m_a ~ 100 μeV with specified large uncertainties. An inflationary phase with a reheat temperature below the PQ-scale, T_reh ≲ f_a, can also produce axion strings through quantum fluctuations, provided that the Hubble parameter during inflation is large, H_inf ≳ f_a; this case again implies a dark matter axion mass m_a ~ 100 μeV. For a smaller Hubble parameter during inflation, H_inf ≲ f_a, 'anthropic tuning' allows dark matter axions to have any mass in a huge range below m_a ≲ 1 meV. © 1998 Elsevier Science B.V. All rights reserved.

PACS: 14.80.Mz; 95.35.+d
1. Introduction The axion has remained a popular dark matter candidate because of its enduring motivation as an elegant solution to the strong CP-problem [1]. Despite early hopes of discovery, it turned out that in order to be consistent with accelerator searches and astrophysics, the axion must be nearly ‘invisible’ and extremely light. Its couplings and mass are inversely proportional to the (large) Peccei-Quinn scale f as in
m_a = 6.2×10⁻⁶ eV (10¹² GeV / f_a) .
(1)
Conservative estimates for the strongest astrophysical constraints on the axion mass yield m_a ≲ 10 meV [2]. Present large-scale axion search experiments [3] are sensitive to a mass range
* Corresponding author. E-mail: [email protected].
m_a ~ 1–10 μeV, which has been chosen for a variety of historical and technological reasons. Our primary focus here, however, is not on constraints on the viable axion mass range, but rather on efforts to predict the mass of a dark matter axion from cosmology.
2. Standard thermal axion cosmology

The cosmology of the axion is determined by the two energy scales f_a and Λ_QCD. The first important event is the Peccei-Quinn phase transition, which occurs at a high temperature T ~ f_a ≳ 10¹⁰ GeV. This creates the axion, at this stage an effectively massless pseudo-Goldstone boson, as well as a network of axion strings [4] which decays gradually into a background of cosmic axions [5]. (Note that one can engineer models in which an inflationary epoch interferes with the effects of the Peccei-Quinn phase transition, as we shall discuss in the next section.) At a much lower temperature T ~ Λ_QCD, long after axion and string formation, instanton effects 'switch on', the axions acquire a small mass, domain walls form [6] between the strings [4], and the complex hybrid network annihilates into axions in about one Hubble time [8].

There are three possible mechanisms by which axions are produced in the 'standard thermal scenario': (i) thermal production, (ii) axion string radiation and (iii) hybrid defect annihilation when T ~ Λ_QCD. Axions consistent with the astrophysical bounds must decouple from thermal equilibrium very early; their subsequent history and number density is analogous to the decoupled neutrino, except that, unlike a 100 eV massive neutrino, thermal axions cannot hope to dominate the universe with m_a ≲ 10 meV. We now turn to the two dominant axion production mechanisms, but first we address an important historical digression.

Misalignment misconceptions. The original papers on axions suggested that axion production primarily occurred, not through the above mechanisms, but instead by 'misalignment' effects at the QCD phase transition [9]. Before the axion mass 'switches on', the axion field θ takes random values throughout space in the range 0 to 2π; it is the phase of the PQ-field lying at the bottom of a U(1) 'Mexican hat' potential.
However, afterwards the potential becomes tilted and the true minimum becomes θ = 0, so the field in the 'misalignment' picture begins to coherently oscillate about this minimum; this homogeneous mode corresponds to the 'creation' of zero momentum axions. Given an initial rms value θ₁ for these oscillations, it is relatively straightforward to estimate the total energy density in zero momentum axions and compare this to the present mass density of the universe (assuming a flat Ω = 1 FRW cosmology) [9,10]:

Ω_a ≅ 2 Δ h⁻² θ₁² f(θ₁) (10⁻⁶ eV / m_a)^1.18 ,

(2)

where Δ ≅ 3^(±1) accounts for both model-dependent axion uncertainties and those due to the nature of the QCD phase transition, and h is the rescaled Hubble parameter. The function f(θ) is an anharmonic correction for fields near the top of the potential close to the unstable equilibrium θ ≈ π, that is, with f(0) = 1 at the base θ ≈ 0 and diverging logarithmically for θ → π [11]. If valid, the estimate (2) would imply a constraint m_a ≳ 5 μeV for the anticipated thermal initial conditions with θ₁ = O(1) [9,10].
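The misalignment bound quoted from Eq. (2) follows directly from setting Ω_a = 1; a minimal numerical sketch, assuming the normalization reconstructed here (μeV reference mass, m_a^(-1.18) scaling) and the default choices θ₁ = 1, f(θ₁) = 1, h = 0.5, Δ = 1:

```python
def omega_misalignment(m_a_ev, theta1=1.0, f_theta=1.0, h=0.5, delta=1.0):
    """Zero-momentum 'misalignment' axion density, Eq. (2) as reconstructed:
    Omega_a ~ 2 * Delta * h^-2 * theta1^2 * f(theta1) * (1e-6 eV / m_a)^1.18."""
    return 2.0 * delta * theta1 ** 2 * f_theta * (1e-6 / m_a_ev) ** 1.18 / h ** 2

def m_a_closure_microev(theta1=1.0, h=0.5, delta=1.0):
    """Axion mass (in microeV) at which Eq. (2) alone gives Omega_a = 1."""
    return (2.0 * delta * theta1 ** 2 / h ** 2) ** (1.0 / 1.18)

# with thermal initial conditions theta1 = O(1), closure is saturated
# near m_a ~ 6 microeV, consistent with the quoted bound m_a >~ 5 microeV
m_closure = m_a_closure_microev()
```

The exercise shows why the misalignment picture alone pointed to few-μeV axions; the string-based estimates below shift the preferred mass upward by more than an order of magnitude.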
As applied to the thermal scenario, the expression (2) is actually a very considerable underestimate, for at least two reasons. First, the axions are not 'created' by the mass 'switch on' at t = t_QCD; they are already there, with a specific momentum spectrum g(k) determined by dynamical mechanisms prior to this time. The actual axion number obtained from g(k) is much larger than the rms average assumed in (2), which ignores the true particle content. Secondly, this estimate was derived before much stronger topological effects were realized, notably the presence of axion strings and domain walls. In any case, these nonlinear effects complicate the oscillatory behaviour considerably, implying that the homogeneous estimate (2) is poorly motivated.

Axion string network decay. Axions and axion strings are inextricably intertwined. Like ordinary superconductors or superfluid helium, axion models contain a broken U(1)-symmetry and so there exist vortex-line solutions. Combine this fact with the Peccei-Quinn phase transition, which means the field is uncorrelated beyond the horizon, and a random network of axion strings must inevitably form. An axion string corresponds to a non-trivial winding from 0 to 2π of the axion field θ around the bottom of its 'Mexican hat' potential. It is a global string with long-range fields, so its energy per unit length μ has a logarithmic divergence which is cut off by the string curvature radius R ≲ t, that is, μ ≅ 2πf_a² ln(t/δ), where the string core width is δ ≅ f_a⁻¹. The axion string, despite this logarithmic divergence, is a strongly localized object; if we have a string stretching across the horizon at the QCD temperature, then ln(t/δ) ≈ 65 and over 95% of its energy lies within a tight cylinder enclosing only 0.1% of the horizon volume. To first order, then, the string behaves like a local cosmic string, a fact that can be established by a precise analytic derivation and careful comparison with numerical simulations [12].
After formation and a short period of damped evolution, the axion string network will evolve towards a scale-invariant regime with a fixed number of strings crossing each horizon volume (for a cosmic string review see Ref. [13]). This gradual demise of the network is achieved by the production of small loops which oscillate relativistically and radiate primarily into axions. The overall density of strings splits neatly into two distinct parts, long strings with length l > t and a population of small loops l < t, that is, ρ = ρ_∞ + ρ_L. High resolution numerical simulations confirm this picture of string evolution and suggest that the long string density during the radiation era is ρ_∞ ≅ 13 μ/t² [14]. To date, analytic descriptions of the loop distribution have used the well-known string 'one scale' model, which predicts the number density of loops, defined by ρ_L(l, t) dl = μ l n(l, t) dl in the interval l to l + dl, to be given by

n(l, t) = 4α (1 + κ/α)^(3/2) / [t^(3/2) (l + κt)^(5/2)] ,
(3)
where α is the typical loop creation size relative to the horizon and κ ≅ 65/[2π ln(t/δ)] is the loop radiation rate [7]. Once formed at t = t_c with length l_c, a typical loop shrinks linearly as it decays into axions, l = l_c − κ(t − t_c). The key uncertainty in this treatment is the loop creation size α, but compelling heuristic arguments place it near the radiative backreaction scale, α ~ κ. (If this is the case, we note that the loop contribution is over an order of magnitude larger than direct axion radiation from long strings.)

String loops oscillate with a period T = l/2 and radiate into harmonics of this frequency (labelled by n), just like other classical sources. Unless a loop has a particularly degenerate trajectory, it will have a radiation spectrum P_n ∝ n^(−q) with a spectral index q > 4/3, that is, the spectrum is
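The one-scale loop model in Eq. (3) and the linear loop decay law are simple enough to code directly; this is a sketch under the exponents reconstructed above, with α = κ = 0.16 as an illustrative choice near the backreaction scale (κ ≈ 1/2π for ln(t/δ) ≈ 65):

```python
def loop_density(l, t, alpha, kappa):
    """One-scale model loop number density, Eq. (3):
    n(l, t) = 4 alpha (1 + kappa/alpha)^{3/2} / [ t^{3/2} (l + kappa t)^{5/2} ]."""
    return (4.0 * alpha * (1.0 + kappa / alpha) ** 1.5
            / (t ** 1.5 * (l + kappa * t) ** 2.5))

def loop_length(l_c, t_c, t, kappa):
    """A loop born at t_c with length l_c shrinks linearly into axions:
    l = l_c - kappa (t - t_c), until it disappears."""
    return max(0.0, l_c - kappa * (t - t_c))

# the distribution falls steeply with loop length at fixed time,
# so the axion background is dominated by the smallest surviving loops
small = loop_density(0.5, 10.0, 0.16, 0.16)
large = loop_density(1.0, 10.0, 0.16, 0.16)
```

A loop of horizon-scale length l_c ~ αt thus survives for a time of order t/κ before radiating away entirely, which is what feeds the accumulated axion number entering Eq. (4).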
dominated by the lowest available modes. Given the loop density (3), we can then calculate the spectral number density of axions dn /du, which turns out to be essentially independent of the exact loop radiation spectrum for q'4/3. From this expression we can integrate over u to find the total axion number at the time t , that is, when the axion mass ‘switches on’ and the string /!" network annihilates. Subsequently, the axion number will be conserved, so we can find the number-to-entropy ratio and project forward to the present day. Multiplying the present number density by the axion mass m yields the overall axion string contribution to the density of the universe [7]: X
Ω_str ≈ 110 Δ h^{-2} [(1 + α/κ)^{3/2} − 1] (10^{-5} eV / m_a)^{1.18} .   (4)
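As a numerical aside, the one-scale loop ingredients above can be sketched in a few lines. This is only an illustrative sketch: the exponents in `loop_density` assume the standard one-scale form n(l, t) = 4α(1 + κ/α)^{3/2}/[(l + κt)^{5/2} t^{3/2}], and the parameter values (α ∼ κ ∼ 0.1) are purely illustrative, not values from the text.

```python
# One-scale model loop number density, assuming the standard exponents
# n(l, t) = 4*alpha*(1 + kappa/alpha)^(3/2) / ((l + kappa*t)^(5/2) * t^(3/2)).
def loop_density(l, t, alpha, kappa):
    return 4.0 * alpha * (1.0 + kappa / alpha) ** 1.5 / ((l + kappa * t) ** 2.5 * t ** 1.5)

# Linear shrinkage of a loop formed at t_c with length l_c: l = l_c - kappa*(t - t_c).
def loop_length(l_c, t_c, t, kappa):
    return max(l_c - kappa * (t - t_c), 0.0)

alpha = kappa = 0.1             # loop creation size near the backreaction scale, alpha ~ kappa
t = 1.0                         # arbitrary time units
l0 = alpha * t                  # typical creation length
t_decay = t + alpha * t / kappa # a loop of length alpha*t radiates away after alpha*t/kappa
```

With α ∼ κ a loop survives roughly one horizon time before radiating away, which is why loops produced over many epochs accumulate into the dominant axion source.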
The key additional uncertainty from the string model is the ratio α/κ ∼ O(1), which should be clearly distinguished from particle physics and cosmological uncertainties inherent in Δ and h (which appear in all estimates of Ω_a). With a Hubble parameter near h = 0.5, the string estimate (4) tends to favour a dark matter axion with a mass m_a ∼ 100 μeV, as we shall discuss in the conclusion. A comparison with (2) confirms that Ω_str is well over an order of magnitude larger than the ‘misalignment’ contribution.

Axion mass ‘switch on’ and hybrid defect annihilation. Near the QCD phase transition the axion acquires a mass and network evolution alters dramatically because domain walls form. Large field variations around the strings collapse into these domain walls, which subsequently begin to dominate over the string dynamics. This occurs when the wall surface tension σ becomes comparable to the effective string tension due to the typical curvature, σ ∼ μ/t. The demise of the hybrid string—wall network proceeds rapidly, as demonstrated numerically [8]. The strings frequently intersect and intercommute with the walls, effectively ‘slicing up’ the network into small oscillating walls bounded by string loops. Multiple self-intersections will reduce these pieces in size until the strings dominate the dynamics again and decay continues through axion emission. An order-of-magnitude estimate of the demise of the string—domain wall network indicates that there is an additional contribution [15]
Ω_dw ∼ O(10) Δ h^{-2} (10^{-5} eV / m_a)^{1.18} .   (5)
This ‘domain wall’ contribution is ultimately due to loops which are created at the time ∼ t_QCD. Although the resulting loop density will be similar to (3), there is not the same accumulation from early times, so it is likely to be subdominant [7] relative to (4). More recent work [16] questions this picture by suggesting that the walls stretching between long strings dominate and will produce a contribution anywhere in the wide range Ω_dw ∼ (1—44) Ω_str; however, this assertion requires stronger quantitative support. Overall, like most effects, the domain wall contribution will serve to further strengthen the string bound (4) on the axion.
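To make the mass prediction concrete, the sketch below uses only the standard density scaling Ω_a h^{2} ∝ m_a^{-1.18} together with the anchor quoted above (Ω_str ∼ 1 at m_a ∼ 100 μeV for h = 0.5); it deliberately avoids the prefactor of Eq. (4), which carries the Δ and α/κ uncertainties. The exponent 1.18 and the conversion m_a ≈ 6.2 μeV (10^{12} GeV/f_a) are standard values assumed here, not taken from this paper.

```python
# Invert Omega * h^2 = (anchor) * (m_anchor/m)^p for the axion mass in micro-eV,
# normalized so that Omega_str ~ 1 at m_a ~ 100 micro-eV for h = 0.5 (the
# anchor quoted in the text). Exponent p = 1.18 is the standard assumed scaling.
def axion_mass_for_density(omega, h, m_anchor_ueV=100.0, h_anchor=0.5, p=1.18):
    target = omega * h ** 2
    anchor = 1.0 * h_anchor ** 2
    return m_anchor_ueV * (anchor / target) ** (1.0 / p)   # micro-eV

# Peccei-Quinn scale from the mass, via m_a ~ 6.2 micro-eV * (1e12 GeV / f_a).
def fa_for_mass_GeV(m_ueV):
    return 6.2e12 / m_ueV
```

For instance, lowering the required density to Ω ≈ 0.3 with h = 0.7 pushes the favoured mass upward of 100 μeV, illustrating how sensitively the prediction depends on the cosmological inputs.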
We note briefly that it is also possible to weaken any axion mass bound through catastrophic entropy production between the QCD-scale and nucleosynthesis, that is, roughly in the timescale range 10^{-5} s ≲ t ≲ 10^{-2} s. Usually this involves the energy density of the universe becoming temporarily dominated by an exotic massive particle with a tuned decay timescale.
Up to this point we have only considered the simplest axion models with a unique vacuum, N = 1, so what happens when N > 1? In this case, any strings present become attached to N domain walls at the QCD-scale. Such a network ‘scales’ rather than annihilates, and so it is cosmologically disastrous, being incompatible (at the very least) with CMB isotropy.
3. Inflationary scenarios

The relationship between inflation and dark matter axions is rather mysterious. Its significance depends on the magnitude of the Peccei-Quinn scale f_a relative to two key inflationary parameters, (i) the reheat temperature of the universe T_reh at the end of inflation and (ii) the Hubble parameter H_1 as the observed universe first exits the horizon during inflation. Inflation is irrelevant to the axion if T_reh ≳ f_a because, in this case, the PQ-symmetry is restored and the universe returns to the ‘standard thermal scenario’ in which axion strings form and the estimate (4) pertains. Consider, then, the two inflationary axion scenarios with T_reh ≲ f_a.

H_1 < f_a: Anthropic misalignment and quantum fluctuations. In an inflationary model for which f_a > H_1 > T_reh, the θ-parameter or axion angle will be set homogeneously over large inflationary domains before inflation finishes [17]. In this case, the whole observable universe emerges from a single Hubble volume in which this parameter has some fixed initial value θ_1. Because the axion remains out of thermal equilibrium for large f_a, subsequent evolution and reheating do not disturb θ_1 until the axion mass ‘switches on’ at T ∼ Λ_QCD. Afterwards, the field begins to oscillate coherently, because it is misaligned by the angle θ_1 from the true minimum θ = 0. This homogeneous mode corresponds to a background of zero momentum axions and it is the one circumstance under which the misalignment formula (2) actually gives an accurate estimate of the relative axion density Ω_a. By considering the dependence Ω_a ∝ θ_1^{2} in Eq. (2), we see that inflation models have an intrinsic arbitrariness given by the different random magnitudes of θ_1 in different inflationary domains [17]. While a large value of f_a ≫ 10^{12} GeV might have been thought to be observationally excluded, it can actually be accommodated in domains where θ_1 ≪ 1.
This may seem highly unlikely but, if we consider an infinite inflationary manifold or a multiple universe scenario including ‘all possible worlds’, then life as we know it would be excluded from those domains with large θ_1 = O(1) because the baryon-to-axion ratio would be too low [18]. Thus, accepting this anthropic selection effect, we have to concede that axions could be the dark matter (Ω_a ≈ 1) if we live in a domain with a ‘tuned’ θ-parameter:
θ_1 ≈ 0.3 Δ^{-1/2} h (m_a / 10^{-5} eV)^{0.59} .   (6)
For θ_1 ≈ O(1), this suggests an axion with m_a ∼ 5 μeV (h = 0.5), though actually inflation makes no definite prediction from Eq. (6) beyond apparently specifying m_a ≲ 10 μeV. But even this restriction is not valid; if we observe Eq. (2) carefully, we see that we can also obtain a dark matter axion for
This is not quite in the spirit of the original motivation for the axion!
higher m_a by fine-tuning θ_1 near π. The anharmonic term f(θ_1), with an apparent logarithmic divergence, allows Ω_a ≈ 1 for a much heavier dark matter axion [11]. This simple ‘anthropic tuning’ picture is significantly altered by quantum effects. Like any minimally coupled massless field during inflation, the axion will have a spectrum of quantum excitations associated with the Gibbons—Hawking temperature T ∼ H_1/2π. This implies the field will acquire fluctuations about its mean value θ_1 of magnitude δθ = H_1/2πf_a, giving an effective rms value θ_rms = (θ_1^{2} + δθ^{2})^{1/2}. Even if our universe began in an inflationary domain with θ_1 = 0, there will be a minimum misalignment angle set by δθ; this implies that we cannot always fine-tune θ_1 in Eq. (6) such that Ω_a ≲ 1. Worse still, the fluctuations δθ imply isocurvature fluctuations in the axion density [19] given by Eq. (2), that is, δρ_a ∝ θ^{2} ≈ θ_1^{2} + 2θ_1 δθ + δθ^{2}. Such density fluctuations will also create cosmic microwave background (CMB) anisotropies which are strongly constrained, δT/T ∼ δρ/ρ ∼ 2δθ/θ_1 ≲ 10^{-5}. Combining the quantum fluctuation δθ with the θ_1-requirement for a dark matter axion (6), we obtain a strong constraint on the Hubble parameter during inflation [20]
H_1 ≲ 10^{7} GeV (10^{-5} eV / m_a)^{0.41} ,   (7)
which is valid for small masses m_a ≲ 10 μeV. Here, H_1 is the Hubble parameter at the time when the fluctuations associated with the microwave anisotropies first leave the horizon, some 50—60 e-foldings before the end of inflation. We conclude from Eq. (7) that inflation in the f_a > H_1 > T_reh regime must have a small Hubble parameter H_1, or else the dark matter axion must be extremely light, m_a ≪ 1 μeV. Inflation now has many guises and models exist with Hubble parameters spanning many orders of magnitude, so a detectable inflationary axion with m_a ∼ 1 μeV is possible in principle. Quantum effects also constrain the ‘anthropic tuning’ θ_1 → π which is required for a larger mass axion m_a ≳ 10 μeV. For simple inflation models, excessive isocurvature fluctuations imply that the dark matter axion mass is bounded above by m_a ≲ 1 meV for larger values of H_1 [11]. Note that this is a considerably heavier dark matter axion than the oft-quoted inflationary limit m_a ≲ 5 μeV.

H_1 ≳ f_a: Axion string creation during inflation. Even for a low reheat temperature T_reh < f_a, one can envisage inflation models with a large Hubble parameter during inflation, H_1 ≳ f_a (such as chaotic inflation with H_1 ∼ 10^{13} GeV). The quantum fluctuations in this case are sufficient to take the Peccei-Quinn field over the top of the potential, leaving large spatial variations and topologically non-trivial windings [22]. We can interpret this as the Gibbons—Hawking temperature ‘restoring’ the PQ-symmetry, T_GH ≳ f_a. As inflation draws to a close and H falls below f_a, these fluctuations will become negligible and a string network will form. Provided inflation does not continue beyond this point for more than about another 30 e-foldings, we will effectively return to the ‘standard thermal’ scenario in which axions are produced by a decaying string network. So such low reheat inflation models are again only compatible with a dark matter axion, m_a ≈ 100 μeV. Note that the constraint (7) can be circumvented in more complicated multifield inflation models [21].
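The isocurvature argument above reduces to simple arithmetic: δθ = H_1/(2πf_a), and requiring 2δθ/θ_1 to stay below the CMB amplitude bounds H_1. A minimal sketch, with f_a and θ_1 as illustrative inputs (the 10^{-5} amplitude is the rough limit quoted in the text):

```python
import math

# Gibbons-Hawking fluctuation of the axion angle and the resulting
# isocurvature limit on the inflationary Hubble parameter H_1.
def dtheta(H1, f_a):
    """Quantum misalignment fluctuation, dtheta = H_1 / (2*pi*f_a)."""
    return H1 / (2.0 * math.pi * f_a)

def H1_max(f_a, theta_1, dT_over_T=1.0e-5):
    """Largest H_1 (same units as f_a) keeping 2*dtheta/theta_1 below dT/T."""
    return math.pi * f_a * theta_1 * dT_over_T

# e.g. f_a ~ 6e11 GeV (m_a ~ 10 micro-eV) with theta_1 ~ 0.3 gives H1_max of a few x 1e6 GeV
```

This makes the trade-off explicit: a lighter axion means a larger f_a and hence a weaker bound on H_1, which is why only very light axions are compatible with high-scale inflation in this regime.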
Note that there is also a borderline scenario with H_1 ∼ f_a in which domain walls form but few strings. Since strings are required to remove them, these domain walls will be cosmologically unacceptable [23].
4. Conclusions

We have endeavoured to provide an overview of axion cosmology, focussing on the mass of a dark matter axion. First, the cosmological axion density was considered in the standard thermal scenario, where the dominant contribution comes from axion strings. In this case there is, in principle, a well-defined calculational method to precisely predict the mass m_a of a dark matter axion. For the currently favoured value of the Hubble parameter (H_0 ≈ 60 km s^{-1} Mpc^{-1}), the estimate (4) predicts a dark matter axion of mass m_a ≈ 200 μeV, where significant uncertainties from all sources approach an order of magnitude. The additional uncertainty in this string calculation is the parameter ratio α/κ ≈ 1, that is, the ratio of the loop creation size to the radiation backreaction scale. Secondly, we have reviewed inflationary axion cosmology, showing that (i) many inflation models return us to the standard thermal scenario with m_a ∼ 100 μeV, (ii) some inflation models are essentially incompatible with a detectable dark matter axion and (iii), because of the possibility of ‘anthropic fine-tuning’, other inflation models can be constructed which incorporate a dark matter axion mass anywhere below m_a ≲ 1 meV. We conclude that, while a dark matter axion might possibly lurk anywhere in an enormous mass range below m_a ≲ 1 meV, the best-motivated mass for future axion searches lies near m_a ∼ 100 μeV, a standard thermal scenario prediction which is also compatible with a broad class of inflationary models.
References

[1] R.D. Peccei, H.R. Quinn, Phys. Rev. Lett. 38 (1977) 1440; Phys. Rev. D 16 (1977) 1791; F. Wilczek, Phys. Rev. Lett. 40 (1978) 279; S. Weinberg, Phys. Rev. Lett. 40 (1978) 223.
[2] G.G. Raffelt, astro-ph/9707268, 1997.
[3] C. Hagmann et al., astro-ph/9802061, 1998; I. Ogawa, S. Matsuki, K. Yamamoto, Phys. Rev. D 53 (1996) R1740.
[4] A. Vilenkin, A.E. Everett, Phys. Rev. Lett. 48 (1982) 1867.
[5] R.L. Davis, Phys. Lett. B 180 (1986) 225; R.L. Davis, E.P.S. Shellard, Nucl. Phys. B 324 (1989) 167.
[6] P. Sikivie, Phys. Rev. Lett. 48 (1982) 1156.
[7] R.A. Battye, E.P.S. Shellard, Phys. Rev. Lett. 73 (1994) 2954; Erratum: Phys. Rev. Lett. 76 (1996) 2203.
[8] E.P.S. Shellard, in: J. Demaret (Ed.), Proc. 26th Liège Int. Astrophysical Colloquium, Université de Liège, 1986.
[9] J. Preskill, M.B. Wise, F. Wilczek, Phys. Lett. B 120 (1983) 127; L. Abbott, P. Sikivie, Phys. Lett. B 120 (1983) 133; M. Dine, W. Fischler, Phys. Lett. B 120 (1983) 137.
[10] M.S. Turner, Phys. Rev. D 33 (1986) 889.
[11] E.P.S. Shellard, R.A. Battye, Cosmic axions, astro-ph/9802216, 1998; E.P.S. Shellard, R.A. Battye, Inflationary axion cosmology revisited, DAMTP preprint, 1998.
[12] R.L. Davis, E.P.S. Shellard, Phys. Lett. B 214 (1988) 219; R.A. Battye, E.P.S. Shellard, Nucl. Phys. B 423 (1994) 260.
[13] A. Vilenkin, E.P.S. Shellard, Cosmic Strings and other Topological Defects, Cambridge University Press, Cambridge, 1994.
[14] D.P. Bennett, F.R. Bouchet, Phys. Rev. D 41 (1990) 2408; B. Allen, E.P.S. Shellard, Phys. Rev. Lett. 64 (1990) 119.
[15] D.H. Lyth, Phys. Lett. B 275 (1992) 279.
[16] M. Nagasawa, Prog. Theor. Phys. 98 (1997) 851; M. Nagasawa, M. Kawasaki, Phys. Rev. D 50 (1994) 4821.
[17] S.-Y. Pi, Phys. Rev. Lett. 52 (1984) 1725.
[18] A.D. Linde, Phys. Lett. B 201 (1988) 437.
[19] M.S. Turner, F. Wilczek, A. Zee, Phys. Lett. B 120 (1983) 127.
[20] D.H. Lyth, Phys. Lett. B 236 (1990) 408; M.S. Turner, F. Wilczek, Phys. Rev. Lett. 66 (1991) 5.
[21] A.D. Linde, Phys. Lett. B 259 (1991) 38.
[22] D.H. Lyth, E.D. Stewart, Phys. Lett. B 283 (1992) 189.
[23] A.D. Linde, D.H. Lyth, Phys. Lett. B 246 (1990) 353.
Physics Reports 307 (1998) 235—241
New signatures and sources for the detection of WIMP dark matter in the solar system

Lawrence M. Krauss*

Departments of Physics and Astronomy, Case Western Reserve University, 10900 Euclid Ave., Cleveland, OH 44106-7079, USA
Abstract

I first outline new results on the angular modulation of WIMP dark matter scattering on targets in terrestrial laboratories, based on our uncertainties of the WIMP halo distribution. I then outline an exciting new result which indicates that, for the high end of allowed SUSY WIMP scattering cross sections, there exists a new distribution of WIMP dark matter in our solar system which could produce a dramatically different signal from halo WIMP dark matter in terrestrial detectors. © 1998 Elsevier Science B.V. All rights reserved.

PACS: 95.35.+d
1. Introduction If our galactic halo is dominated by WIMPs, searching for signatures which allow a WIMP signal to be extracted from other backgrounds in terrestrial detectors will be of increasing importance, especially as the predicted signal levels in detectors decrease. Here I outline two different ongoing research programs I am involved in which bear on this issue. The first involves the first detailed examination of the directionality of WIMP scattering in terrestrial detectors for a variety of models of the WIMP distribution in the galactic halo. We have developed a formalism which allows the differential scattering rate over energy and angle to be determined from any incident distribution, including anisotropic ones. We then explore not only the predicted distributions, but also the ability, given a fixed number of events, to unambiguously demonstrate evidence for underlying directionality in the events. The second project uncovers the existence of a new
* E-mail: [email protected]. Research supported in part by DOE.
0370-1573/98/$ — see front matter © 1998 Elsevier Science B.V. All rights reserved. PII: S0370-1573(98)00066-0
WIMP distribution in the solar system. Due to the onset of chaos in solar system orbits induced by Jupiter, some WIMPs trapped in the solar system due to scattering in the Sun can move to orbits which no longer intersect the Sun. We estimate the space density of such WIMP orbits near the Earth, and find it can be considerable for realistic SUSY parameters at the upper end of the allowed range of scattering cross sections.

2. Angular anisotropy and WIMP signatures

It has been recognized for some time that while the annual modulation of the WIMP scattering rate in detectors, due to the Earth's motion about the Sun, is small, possible angular anisotropies can be large (e.g. [1]). However, the estimates that were performed were based on the simple assumption of a spherical isothermal sphere model for the galactic dark matter halo. Moreover, while forward—backward asymmetries can be very large, this measure can be misleading. Sometimes a large anisotropy simply reflects a very small number of events, so that Poisson uncertainties can dramatically reduce the significance of any such effect. We have developed a formalism [2] which allows the incorporation of anisotropies in the underlying WIMP distribution, due either to a non-spherical isothermal distribution, or perhaps to non-isothermal flows, for example such as those predicted by Sikivie [3]. The rate in a detector depends upon the density ρ_0 of WIMPs near the Earth, and the velocity distribution f(v) of WIMPs in the Galactic halo near the Earth. As a function of the energy deposited, Q, direct-detection experiments measure the number of events per day per kilogram of detector material. Qualitatively, this event rate is simply R ∼ n σ_0 ⟨v⟩ M/m_N, where n = ρ_0/m_χ is the WIMP number density, σ_0 is the elastic-scattering cross section, ⟨v⟩ is the average speed of the WIMP relative to the target, and the detector mass M divided by the target nucleus mass m_N gives the number of target nuclei.
More accurately, one should take into account the fact that the WIMPs move in the halo with velocities determined by f(v), that the differential cross section depends upon the momentum transfer through a form factor, dσ/d|q|² ∝ F²(Q), and that detectors have a threshold energy E_T below which they are insensitive to WIMP-nuclear recoils. In addition, the Earth moves through the Galactic halo and this motion should be taken into account via f(v). Consider a WIMP of mass m_χ moving with velocity v = v(cos α x̂ + sin α sin β ŷ + sin α cos β ẑ) in the laboratory frame. This WIMP scatters off of a nucleus of mass m_N which recoils with energy E_R = [m_N m_χ²/(m_N + m_χ)²] v² (1 − μ), where μ = cos θ* and θ* is the WIMP scattering angle in the center-of-mass frame. The velocity u of the nuclear recoil makes an angle γ relative to the x axis: cos γ = x̂·u/|u| = √[(1 − μ)/2] cos α − √[(1 + μ)/2] cos ξ sin α, where ξ is the azimuthal angle of the recoil about the incoming direction, defined through u·(x̂ × v). The WIMP distribution function f(v, α, β) determines the event rate per nucleon in the detector: dR = f(v, α, β) v dv d cos α dβ (dσ/dμ) dμ dξ/2π, where 0 < v < ∞, −1 < cos α < 1, 0 < β < 2π, −1 < μ < 1, and 0 < ξ < 2π. Assume that the nucleon—WIMP scattering has the following form: dσ/dμ = (σ_0/2) F²(Q), with form-factor suppression. The rate of events in which the nucleus recoils with energy Q is then given by dR/dQ = (σ_0 ρ_0 / 2 m_χ m_r²) F²(Q) ∫_{v > v_min} v dv ∫ dΩ_{αβ} f(v, α, β), where m_r = m_N m_χ/(m_N + m_χ) is the reduced mass and v_min = √[(m_N + m_χ)² Q / (2 m_N m_χ²)] is the minimum WIMP velocity that can produce a nuclear recoil of energy Q.
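The two-body kinematics just quoted can be checked numerically. A minimal sketch, with masses in GeV, velocities in units of c, and recoil energies in keV; the 60 GeV WIMP and germanium-mass target are illustrative choices, not values from the text:

```python
import math

# Standard elastic-scattering kinematics: recoil energy and minimum WIMP speed.
def recoil_energy_keV(m_chi, m_N, v, mu):
    """E_R = m_N * m_chi^2 * v^2 * (1 - mu) / (m_N + m_chi)^2, with mu = cos(theta*)."""
    return m_N * m_chi ** 2 * v ** 2 * (1.0 - mu) / (m_N + m_chi) ** 2 * 1.0e6  # GeV -> keV

def v_min(m_chi, m_N, Q_keV):
    """Minimum WIMP speed producing recoil Q: v_min = sqrt(m_N * Q / (2 * m_r^2))."""
    m_r = m_chi * m_N / (m_chi + m_N)
    return math.sqrt(m_N * (Q_keV * 1.0e-6) / (2.0 * m_r ** 2))

E_max = recoil_energy_keV(60.0, 67.0, 1.0e-3, -1.0)   # head-on backscatter at v ~ 300 km/s
```

The head-on recoil of a ~60 GeV WIMP on a germanium-mass nucleus at v ∼ 10^{-3} c comes out at a few tens of keV, and feeding that recoil back through `v_min` returns the input speed, a useful consistency check on the two formulae.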
We then derived the following equation for the angular differential cross section:
po dR " M Q v dv F(Q(v, J)) dX f (v, a, b) J(a, b; c, )H(J) . (1) ?@ pm m dX L Q T A( Here the function J(a, b; c, ) takes care of the arbitrary dependence on the detector position and has the following property:
J(α, β; γ, φ) ≡ cos γ cos α + sin γ sin α cos(φ − β) ,  ∫ dΩ_{αβ} J(α, β; γ, φ) Θ(J(α, β; γ, φ)) = π .

Note that the equation above holds for any (γ, φ) (and Θ(J) is the usual step function). Eq. (1) was originally obtained by a series of Jacobians evaluated for successive transformations of angles from the incident direction to the outgoing direction of the recoiling target nucleus. The end result is actually independent of the intermediate two-body collision channels and gives a simple geometric identity. When we follow the track of the recoiling particle, not the scattered one as in traditional fixed-target experiments, we have this simple relation between the incident angle and the outgoing events. We have also derived the event rate as a function of both deposited energy and the outgoing angle:
d²R/(dQ dΩ_γ) = (m_N σ_0 ρ_0 Q / 8π m_χ m_r⁴) F²(Q) ∫ dΩ_{αβ} f(v(Q, J), α, β) Θ(J(α, β; γ, φ)) / J³ ,   (2)

where v(Q, J) = v_min(Q)/J. With these event rate formulae, we have evaluated the differential cross sections and event rates for arbitrary forms of the WIMP halo distribution function. Our results [2] indicate that, depending upon the actual distribution function, only 15—30 events will be required over the course of a year in order to unambiguously differentiate a non-isotropic distribution from a flat background, with a signal to noise ratio of unity, independent of the halo distribution, unless it is corotating. Displayed in Fig. 1, for example, are the estimates, as a function of detector threshold, of the number of events required to differentiate isothermal halo events from a flat background, with and without noise. For each set of curves, the lower one gives the number of events needed above the threshold, while the upper curve gives the equivalent total number of events required for zero threshold to yield the required number above the stated threshold. Also shown are the number of events required if only the forward/backward ratio and not the full distribution is measured. With a far greater number of events, features of the underlying WIMP distribution could be inferred.
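The qualitative point, that the Earth's motion through an isothermal halo imprints a strong directional asymmetry on the recoils, can be illustrated with a toy Monte Carlo. Counting the sign of the line-of-motion velocity component is only a crude proxy for the full recoil distribution, and the velocity values are conventional halo numbers, not fits from this paper:

```python
import math, random

# Draw the line-of-motion component of the lab-frame WIMP velocity from an
# isothermal halo (1-D dispersion v0/sqrt(2)) boosted by the Earth's speed vE,
# and count which hemisphere each WIMP arrives from.
random.seed(1)
v0, vE = 220.0, 230.0                 # km/s, conventional halo values
sigma = v0 / math.sqrt(2.0)

forward = backward = 0
for _ in range(20000):
    vx = random.gauss(-vE, sigma)     # Earth's motion taken along -x
    if vx < 0.0:
        forward += 1                  # recoils biased along the direction of motion
    else:
        backward += 1

asymmetry = (forward - backward) / (forward + backward)
```

The asymmetry comes out large (around 0.85 for these inputs), which is why only tens of events can suffice to reject an isotropic background, consistent with the 15—30 events quoted above.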
3. A new solar system WIMP distribution?

WIMP dark matter can scatter elastically in the Sun, be gravitationally captured, and eventually settle in the solar core and annihilate. Recently, however, we [4,5] have found that perturbations due to the planets, combined with the non-Coulomb nature of the gravitational potential inside the Sun, imply that WIMPs which are gravitationally captured by scattering in surface layers of the Sun can evolve chaotically into orbits whose trajectories no longer intersect the Sun. For orbits
Fig. 1. Number of events needed to distinguish halo events arising from an isothermal distribution (with v_0 = 220 km s^{-1}) from a flat noise background.
having a semi-major axis smaller than that of Jupiter's orbit, WIMPs can persist in the solar system for billions of years. There can thus be a new, previously unanticipated, distribution of WIMPs which intersect the Earth's orbit. For WIMPs which might be detected in the next generation of underground detectors, this solar system distribution could be significant, providing a complementary signal to that of galactic halo dark matter. We focus on the sub-population of WIMPs which scatter on a nucleus located near the surface of the Sun, and thereby lose just enough energy to stay in Earth-crossing orbits. These will be susceptible to small gravitational perturbations by the planets. We are interested, then, in the differential capture rate, per energy and per angular momentum, of WIMPs in the Sun, and, in particular, only in the fraction of WIMPs which have angular momenta in a small range [J_⊙ − ε, J_⊙], where J_⊙ is the angular momentum for a WIMP exactly grazing the Sun. Using the scattering cross section of WIMPs on nuclei given above, one can, after considerable computation (see [5]), derive the rate with which WIMPs scatter on nuclei with atomic number A to end up in bound solar orbits with semi-major axis between [a, a + da] (corresponding to [ã, ã + dã] with ã ≡ G_N M_⊙/a), and with specific angular momentum J ≥ J_⊙:
dṄ/dã = (n_χ / v̄) Σ_A ∫_{r_min < r < R_⊙} dr n_A(r) σ_A [1 − J_⊙² / (r v_esc(r))²]^{1/2} K_A(r, ã) .   (3)
Here, the minimum radius r_min (impact parameter) is defined in terms of the grazing angular momentum J_⊙ by r_min v_esc(r_min) ≡ J_⊙, where v_esc(r_min) is the escape velocity at r_min; n_A and n_χ are the densities of atomic targets A and of WIMPs, respectively; v̄ is the r.m.s. circular velocity of WIMPs in the Galaxy; and the “capture function” K_A(r, ã) involves an integral over the WIMP local phase space distribution at the Sun, weighted over the scattering form factor. The Sun-scattering events create a population of solar-system bound WIMPs, moving (for a ∼ 1 AU) on very elliptic orbits which traverse the Sun over and over again. For the values of WIMP-nuclei cross sections we shall be mostly interested in here (corresponding to effective WIMP-proton cross sections at the upper end of the experimentally allowed range), the mean opacity of the Sun for orbits with small impact parameters, while small, is not negligible. This means that after many orbits these WIMPs will undergo a second scattering event in the Sun, and end up in its core where they will ultimately annihilate. The only way to save some of these WIMPs is to consider the fraction of WIMPs which have impact parameters r_min in a small range near the radius of the Sun, R_⊙. Focussing on such a subpopulation of WIMPs has two advantages: (i) they traverse a small fraction of the mass of the Sun and therefore their lifetime on such grazing orbits is greatly increased, and, more importantly, (ii) during this time, gravitational perturbations due to the planets can build up and push them onto orbits which no longer cross the Sun.

The crucial point to realize is the following. If we consider a WIMP orbit with a generic impact parameter r_min ≪ R_⊙, it will undergo a large perihelion precession Δω ∼ 2π per orbit, i.e. ω̇ ∼ n, because the potential U(r) within the Sun is modified compared to the exterior 1/r potential (for which there would be no perihelion motion). In other words, the trajectory of the WIMP will generically be a fast advancing rosette. When the rosette motion is fast, planetary perturbations do not induce any secular evolution in the semi-major axis and eccentricity of the WIMP orbit. Such WIMPs will end up in the core of the Sun. A new situation arises for WIMP orbits which graze the Sun, because these orbits feel essentially a 1/r potential due to the Sun, so that their rosette motion will be very slow. One can then show that planetary perturbations will in this case alter the eccentricity of the orbits so that they no longer intersect the Sun. We can finally estimate the density of WIMPs which diffuse out to solar-bound orbits and which can survive to the present time by simply integrating our differential capture rate over all trajectories J > J_⊙ which end up out of the Sun, suitably averaged over the initial WIMP
distribution. The rate (per ã = G_N M_⊙/a) of solar capture of WIMPs which subsequently survive out of the Sun to stay within the inner solar system then depends on the A-dependent combination g_A ≡ (f_A/m_A) σ_A K_A, where f_A is the fraction (by mass) of element A in the Sun, and K_A is the Sun-surface value of the capture function mentioned above. Note that the A-dependence is entirely contained in g_A, with dimensions of (cross section)/(mass). The total capture rate is then dependent on g = Σ_A g_A. From the point of view of experiment, however, the differential rate, dR/dQ, per keV per kg per day, of scattering events in a laboratory sample made of element A is what counts. Comparing this differential rate due to our new WIMP population with that due to the predicted galactic halo population, we find a ratio which reaches a maximum

(dR/dQ)_new / (dR/dQ)_halo ≈ 1.1 ḡ ,   (4)
Fig. 2. Solar capture factor ḡ as a function of WIMP mass and cross section. Hatched lines give existing upper limits assuming a local halo density of 0.3 GeV cm^{-3} (lower) or 0.1 GeV cm^{-3} (upper).
for energy transfers Q smaller than Q_c = 2 (m_χ/(m_χ + m_A))² m_A v_E² (with v_E the Earth's orbital velocity), where ḡ is the value of g when cross sections are expressed in suitable reference units [5]. For a target of germanium (the present material of choice in several ongoing cryogenic WIMP detection experiments), Q_c is in the keV range. If ḡ ∼ 1 this yields a 100% increase of the differential event rate below the value Q ≈ Q_c ∼ keV, which is typical for the energy deposit due to the new WIMP population, whose characteristic velocity ∼ v_E is smaller than that of galactic halo WIMPs.

This is our central result. We expect that this is, in fact, an underestimate of the actual contribution due to WIMPs trapped in the solar system, because we have incorporated in our calculations only the average long-range perturbing effects of the planets. In any case, the relevance of this result depends completely on the actual value of ḡ. Interestingly, if one explores the allowed parameter space of neutralinos in the Minimal Supersymmetric Standard Model, shown for example in Fig. 2 for SUSY parameter μ > 0, we find values of ḡ in excess of 1 for the range of parameters which involve a remnant WIMP density in excess of 3% of the closure density, and whose scattering rate remains below the sensitivity of current detectors.

Could such a new WIMP population be detectable? Existing detectors tend to lose sensitivity in the range of a few keV. However, these results motivate pushing hard in this direction. In the first place, the new population will have a strongly anisotropic velocity distribution. Not only will this greatly help distinguish it from backgrounds, but a comparison of the annual modulation of any signal from this distribution with the higher energy signal from a halo WIMP distribution would be striking. If neutralinos exist in the range detectable at the next generation of detectors, this new WIMP population must exist at a sizeable level.
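For orientation, the cutoff energy quoted above can be evaluated directly. In this sketch v_E is taken to be the Earth's orbital speed (~10^{-4} c), consistent with the text's identification of the new population's characteristic velocity, and the 60 GeV WIMP mass is an illustrative choice:

```python
# Cutoff energy Q_c = 2 * (m_chi/(m_chi + m_A))^2 * m_A * v_E^2 for the
# solar-system-bound population. Masses in GeV, v_E in units of c, result in keV.
def Q_c_keV(m_chi_GeV, m_A_GeV, v_E=1.0e-4):
    return 2.0 * (m_chi_GeV / (m_chi_GeV + m_A_GeV)) ** 2 * m_A_GeV * v_E ** 2 * 1.0e6

q_ge = Q_c_keV(60.0, 68.0)   # germanium-mass target: comes out on the sub-keV to keV scale
```

The result lands at the bottom of the sensitivity range of current cryogenic detectors, which is exactly why the text argues for pushing thresholds down to a few keV and below.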
Finally, we note that the indirect neutrino signature of such WIMPs, which might subsequently be captured by the Earth and annihilate, could be dramatic. The new population has a characteristic velocity which more closely matches the escape velocity from the Earth than does the background halo population. As a result, resonant capture off elements such as iron in the Earth could be greatly enhanced.
References

[1] D.N. Spergel, Phys. Rev. D 37 (1988) 3495.
[2] C. Copi, J. Heo, L.M. Krauss, in preparation.
[3] P. Sikivie, Phys. Lett. B 432 (1998) 139.
[4] T. Damour, L.M. Krauss, astro-ph/9806165, Phys. Rev. Lett., submitted.
[5] T. Damour, L.M. Krauss, astro-ph/9807099, Phys. Rev. D, submitted.
Physics Reports 307 (1998) 243—252
The AMANDA neutrino telescope and the indirect search for dark matter

AMANDA Collaboration

R.C. Bay, Y. He, D. Lowder, P. Miocinovic, P.B. Price, M. Solarz, K. Woschnagg, S.W. Barwick, J. Booth, P.C. Mock, R. Porrata, E. Schneider, G. Yodh, D. Cowen, M. Carlson, C.G.S. Costa, T. DeYoung, L. Gray, F. Halzen*, R. Hardtke, J. Jacobsen, V. Kankhadai, A. Karle, I. Liubarsky, R. Morse, S. Tilav, T.C. Miller, E.C. Andrés, P. Askebjer, L. Bergström, A. Bouchta, E. Dalberg, P. Ekström, A. Goobar, P.O. Hulth, C. Walck, A. Hallgren, C.P. de los Heros, P. Marciniewski, H. Rubinstein, S. Carius, P. Lindahl, A. Biron, S. Hundertmark, M. Leuthold, P. Niessen, C. Spiering, O. Streicher, T. Thon, C.H. Wiebusch, R. Wischnewski, D. Nygren, A. Jones, S. Hart, D. Potter, G. Hill, R. Schwarz

University of California, Berkeley, USA; University of California, Irvine, USA; University of Pennsylvania, USA; University of Wisconsin, Madison, USA; Bartol Research Institute, USA; Stockholm University, Sweden; University of Uppsala, Sweden; Kalmar University, Sweden; DESY — Institute for High Energy Physics, Germany
Lawrence Berkeley National Laboratory, USA; South Pole Winter-Overs, Antarctica

Abstract

With an effective telescope area of order 10^{4} m^{2}, a threshold of ∼50 GeV and a pointing accuracy of 2.5°, the AMANDA detector represents the first of a new generation of high energy neutrino telescopes, reaching a scale envisaged over 25 years ago. We describe its performance, focussing on the capability to detect halo dark matter particles via their annihilation into neutrinos. © 1998 Elsevier Science B.V. All rights reserved.

PACS: 95.35.+d
* Corresponding author: F. Halzen, Physics Department, University of Wisconsin, 1150 University Avenue, Madison, WI 53706, USA. E-mail: [email protected].
0370-1573/98/$ — see front matter © 1998 Elsevier Science B.V. All rights reserved. PII: S0370-1573(98)00041-6
1. The indirect detection of halo dark matter

High energy neutrino telescopes are multi-purpose instruments; their science mission covers particle physics, astronomy and astrophysics, cosmology and cosmic ray physics. Their deployment creates new opportunities for glaciology and oceanography, and possibly for the geology of the earth's core [1]. Astronomy with neutrinos does have definite advantages. They can reach us, essentially without attenuation in flux, from the largest red-shifts. The drawback is that neutrinos are difficult to detect: the small interaction cross sections that enable them to travel without attenuation over a Hubble radius are also the reason why kilometer-scale detectors are required in order to capture them in sufficient numbers to do astronomy [2]. Some science missions do not require a detector of kilometer size. The best opportunities to search for halo dark matter are, in fact, associated with the present instrument which, while smaller in telescope area than the planned extension of AMANDA to ICE3 (ICECUBE), has a lower threshold. At this meeting, the capability of neutrino telescopes to discover the particles that constitute the dominant, cold component of the dark matter is of special interest. The existence of weakly interacting massive particles (WIMPs) would be inferred from observation of their annihilation products. Cold dark matter particles annihilate into neutrinos; massive ones will annihilate into high-energy neutrinos which can be detected in high-energy neutrino telescopes. This so-called indirect detection is greatly facilitated by the fact that the earth and the sun represent dense, nearby sources of accumulated cold dark matter particles. Fig. 1 displays the neutrino flux from the center of the earth calculated in the context of supersymmetric dark matter theories [3]. The direct capture rate of the WIMPs in germanium detectors is shown for comparison. Contours indicate the parameter space favored by grand unified theories.
Most of this parameter space can be covered by improving the capabilities of existing detectors by two orders of magnitude. Existing neutrino detectors have already excluded fluxes of neutrinos from the earth's center of order 1 event per 1000 m² per year.
Fig. 1. Direct and indirect detection rates (shown here for neutrinos from the center of the earth) of cold dark matter particles predicted by supersymmetric theory. Grand unified theories favor the parameter space indicated. Part of it is already excluded by present experiments, as indicated by the horizontal and vertical lines.
The best limits have been obtained by the Baksan experiment [4]; they already exclude relevant parameter space of supersymmetric models. We will show that, with data already on tape, the AMANDA detector will have an unmatched discovery reach for WIMP masses in excess of 100 GeV. The potential of neutrino telescopes as dark matter detectors has been documented in detail [5]. With a sensitivity which increases with the WIMP mass, they are complementary to direct, cryogenic detectors; they can detect WIMPs beyond the kinematic limits of the LHC: about 500 GeV for neutralinos. A striking way to illustrate their potential is to use the possible detection [6] in the DAMA NaI detector in the Gran Sasso tunnel as an example. If the seasonal variation seen there is indeed evidence for WIMPs, observation of a signal in an exposure of 4500 kg days requires a WIMP-nucleon cross section of 10⁻⁴¹—10⁻⁴² cm² for a WIMP mass of 50—150 GeV. This information is sufficient to calculate the trapping and annihilation rate of the WIMPs in the sun and earth. Both will be a source of, on average, 100 neutrinos per year of WIMP origin in the existing AMANDA detector with an effective area of 10⁴ m². The exact rate varies with the mass of the WIMP.
2. Status of the AMANDA project

First generation neutrino detectors, launched by the bold decision of the DUMAND collaboration over 20 years ago to construct such an instrument, are designed to reach a relatively large telescope area and detection volume for a neutrino threshold of tens of GeV, not higher. This relatively low threshold permits calibration of the novel instrument on the known flux of atmospheric neutrinos. Its architecture is optimized for reconstructing the Cherenkov light front radiated by an up-going, neutrino-induced muon. Up-going muons must be identified in a background of down-going, cosmic ray muons which are more than 10⁵ times more frequent at a depth of 1—2 km. The method is sketched in Fig. 2. Construction of the first-generation AMANDA detector [7] was completed in the austral summer of 1996—1997. It consists of 300 optical modules deployed at a depth of 1500—2000 m; see Fig. 3. An optical module (OM) consists of an 8 inch photomultiplier tube and nothing else. OMs have failed only during the refreezing of the ice, at a rate of less than 3%. Calibration of this detector is in
Fig. 2. The arrival times of the Cherenkov photons in 6 optical modules determine the direction of the muon track.
Fig. 3. The Antarctic muon and neutrino detector array (AMANDA).
progress, although data have been taken with the 80 OMs which were deployed one year earlier in order to verify the optical properties of the ice (AMANDA-80). As anticipated from transparency measurements performed with the shallow strings [10] (see Fig. 3), we found that the ice is bubble-free at 1400—1500 m and below. The optical properties of the ice are quantified by studying the propagation in the ice of pulses of laser light of nanosecond duration. The arrival times of the photons after 20 and 40 m are shown in Fig. 4 for the shallow and the deep ice [8]. The distributions have been normalized to equal areas; in reality, the probability that a photon travels 70 m in the deep ice is ~10 times larger. There is no diffusion resulting in loss of information on the geometry of the Cherenkov cone in the deep bubble-free ice.
Fig. 4. Propagation of 510 nm photons indicates bubble-free ice below 1500 m, in contrast to ice with some remnant bubbles above 1.4 km.
The AMANDA detector was originally proposed on the premise that the inferior properties of ice as a particle detector, relative to water, could be compensated by additional optical modules. The technique was supposed to be a factor 5—10 more cost-effective and, therefore, competitive. The design was based on then-current information [9]:
• the absorption length at 370 nm, the wavelength where photomultipliers are maximally efficient, had been measured to be 8 m;
• the scattering length was unknown;
• the AMANDA strategy would have been to use a large number of closely spaced OMs to overcome the short absorption length. Muon tracks triggering 6 or more OMs are reconstructed with degree accuracy. Taking data with a simple majority trigger of 6 or more OMs at 100 Hz yields an average effective area of 10³ m², somewhat smaller for atmospheric neutrinos and significantly larger for the high energy signals previously discussed.
The reality is that:
• the absorption length is 100 m or more, depending on depth [10];
• the scattering length is 25—30 m (preliminary; this number represents an average value which may include the combined effects of deep ice and the refrozen ice disturbed by the hot water drilling);
• because of the large absorption length, OM spacings are similar to, actually larger than, those of proposed water detectors. Also, a typical event triggers 20 OMs, not 6. Of these, more than 5 photons are, on average, “not scattered”. A precise definition of “direct” photons will be given further on. In the end, reconstruction is therefore as before, although additional information can
Fig. 5. Reconstructed zenith angle distribution of muons triggering AMANDA-80: data and Monte Carlo. The relative normalization has not been adjusted at any level. The plot demonstrates a rejection of cosmic ray muons at a level of 10⁻⁵.
be extracted from scattered photons by minimizing a likelihood function which matches the measured and expected delays [11]. The measured arrival directions of background cosmic ray muon tracks, reconstructed with 5 or more unscattered photons, are confronted with their known angular distribution in Fig. 5. There is an additional cut in Fig. 5 which simply requires that the track, reconstructed from timing information, actually traces the spatial positions of the OMs in the trigger. The power of this cut, especially for events distributed over only 4 strings, is very revealing. It can be shown that, in a kilometer-scale detector, geometrical track reconstruction using only the positions of the triggered OMs is sufficient to achieve degree accuracy in zenith angle. We conclude from Fig. 5 that the agreement between data and the Monte Carlo simulation is adequate. Less than one in 10⁵ tracks is misreconstructed as originating below the detector [8]. Monte Carlo simulation, based on this exercise, confirms that AMANDA-300 is a 10⁴ m² detector with 2.5° mean angular resolution [11]. We have verified the angular resolution of AMANDA-80 by reconstructing muon tracks registered
Fig. 6. Zenith angle distributions of cosmic rays triggering AMANDA and the surface air shower array SPASE. Reconstruction by AMANDA of underground muons agrees with the reconstruction of the air shower direction using the scintillator array.
in coincidence with a surface air shower array SPASE [12]. Fig. 6 demonstrates that the zenith angle distribution of the coincident SPASE-AMANDA cosmic ray beam reconstructed by the surface array is quantitatively reproduced by reconstruction of the muons in AMANDA.
3. Neutrinos from the earth's center: AMANDA-80

We have performed a search [13] for upcoming neutrinos from the center of the earth. One should keep in mind that these preliminary results were obtained with only 80 OMs, incomplete calibration and only 6 months of data. We nevertheless obtain limits near the competitive level of less than 1 event per 250 m² per year for WIMP masses in excess of 100 GeV. With calibration of AMANDA-300 completed and 1 year of data on tape, we anticipate a sensitivity beyond the limit shown in Fig. 1. The increased sensitivity results from: a lower threshold, better calibration (factor of 3), improved angular resolution (factor of ~2), longer exposure and, finally, an effective area larger by over one order of magnitude. Recall that, because the search is limited by the atmospheric neutrino background, sensitivity grows only as the square root of the effective area. We reconstructed 6 months of filtered AMANDA-80 events subject to the condition that 8 OMs report a signal in a time window of 2 μs. While the detector accumulated data at a rate of
about 20 Hz, filtered events passed cuts [14] which indicate that time flows upwards through the detector. In collider experiments this would be referred to as a level 3 trigger. The narrow, long AMANDA-80 detector (which constitutes the 4 inner strings of AMANDA-300) thus achieves optimal efficiency for muons pointing back towards the center of the earth, which travel vertically upwards through the detector. Because of edge effects the efficiency, which is, of course, a very strong function of detector size, is only a few percent after final cuts, even in the vertical direction. Nevertheless, we can identify background atmospheric neutrinos and establish meaningful limits on WIMP fluxes from the center of the earth. The final cut selecting WIMP candidates requires 6 or more residuals in the interval [−15, +15] ns and a ≥ 0.1 m/ns. Here a is the slope parameter obtained from a plane-wave fit z_i = a t_i + b, where z_i are the vertical coordinates of the hit OMs and t_i the times at which they were hit. The two events surviving these cuts are shown in Fig. 7. Their properties are summarized in
Fig. 7. Atmospheric neutrino events passing the cuts imposed on the data to search for neutrinos from WIMP annihilation in the center of the earth.
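The plane-wave cut described above amounts to a small least-squares fit. The sketch below illustrates it; the hit times and depths are invented toy values, and only the a ≥ 0.1 m/ns threshold is taken from the text.

```python
import numpy as np

def plane_wave_slope(z, t):
    """Least-squares fit of z_i = a*t_i + b to the depths and hit times
    of the triggered OMs; returns the slope a in m/ns.  An upward-going
    relativistic muon gives a slope approaching +0.3 m/ns."""
    a, _b = np.polyfit(t, z, 1)
    return a

# Toy up-going event: light climbs through the array at 0.25 m/ns.
t_hits = np.array([0.0, 40.0, 80.0, 120.0, 160.0])   # ns
z_hits = np.array([0.0, 10.0, 20.0, 30.0, 40.0])     # m above lowest OM
a = plane_wave_slope(z_hits, t_hits)
passes_cut = a >= 0.1    # the a >= 0.1 m/ns WIMP-candidate cut
```

In the real analysis the fit is applied after the timing-residual cut, so that only well-reconstructed hits enter the slope estimate.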
Table 1. The expected number of atmospheric neutrino events passing the same cuts is 4.8 ± 0.8 ± 1.1. With only preliminary calibration, the systematic error in the time-calibration of the PMTs is ~15 ns. This reduces the number of expected events to 2.9 ± 0.6 ± 0.6. The fact that the parameters of both events are not close to the cuts imposed reinforces their significance as genuine neutrino candidates. Their large track lengths suggest neutrino energies in the vicinity of 100 GeV, which implies that the parent neutrino directions should align with the measured muon track to better than 2°. Conservatively, we conclude that we observe 2 events on a background of 4.8 atmospheric neutrinos. With standard statistical techniques this result can be converted into an upper limit on an excess flux of WIMP origin; see Fig. 8.

Table 1
Characteristics of the two events reconstructed as up-going muons

Event ID                 4706879    8427905
a (m/ns)                 0.19       0.37
Length (m)               295        182
Closest approach (m)     2.53       1.23
θ (°)                    14.1       4.6
φ (°)                    92.0       348.7
Likelihood/OM            5.9        4.2
OM multiplicity          14         8
String multiplicity      4          2
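The conversion of 2 observed events on an expected background of 4.8 into an upper limit can be sketched with a simple background-subtracted Poisson prescription (the Helene/Bayesian form). This particular prescription is our illustrative choice; the paper does not specify which of the standard techniques was used.

```python
import math

def pois_cdf(n, mu):
    """P(N <= n) for a Poisson distribution with mean mu."""
    return math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, bkg, cl=0.90):
    """Helene-style upper limit on the signal mean s: find s such that
    P(N <= n_obs | s + b) / P(N <= n_obs | b) = 1 - cl."""
    target = (1.0 - cl) * pois_cdf(n_obs, bkg)
    lo, hi = 0.0, 50.0
    for _ in range(100):                  # bisection on s
        mid = 0.5 * (lo + hi)
        if pois_cdf(n_obs, mid + bkg) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

s_up = upper_limit(2, 4.8)   # roughly 3 signal events at 90% CL
```

Dividing the resulting signal limit by the exposure (effective area × live time × efficiency) then gives the flux limit plotted in Fig. 8.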
Fig. 8. Upper limit at the 90% confidence level on the muon flux from the center of the earth as a function of neutralino mass. The light shaded band represents the W⁺W⁻ annihilation channel and the dark one the bb̄ annihilation channel. The width of the bands reflects the inadequate preliminary calculation.
In order to interpret this result, we have simulated the AMANDA-80 sensitivity to the 2 dominant WIMP annihilation channels [15]: into bb̄ and W⁺W⁻. The upper limits on the WIMP flux are shown in Fig. 8 as a function of the WIMP mass. Limits below 100 GeV WIMP mass are poor because the neutrino-induced muons (with typical energy ≈ m_χ/6) fall below the AMANDA-80 threshold. For the heavier masses, the limits approach those set by other experiments, in the vicinity of 10⁻¹⁴ cm⁻² s⁻¹. We have previously discussed how data already on tape from AMANDA-300 will make new incursions into the parameter space of supersymmetric models shown in Fig. 1.
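The sensitivity gains quoted at the start of Section 3 can be combined in a back-of-envelope way: calibration and angular resolution improvements enter linearly, while the order-of-magnitude larger effective area enters only as a square root because the atmospheric-neutrino background grows with area too. Treating the factors as multiplicative is our simplifying assumption.

```python
import math

def sensitivity_gain(calib=3.0, angular=2.0, area_ratio=10.0):
    """Combined sensitivity gain for a background-limited search,
    using the factor-of-3 calibration and factor-of-~2 angular
    resolution improvements quoted in the text, plus sqrt scaling
    for the larger effective area."""
    return calib * angular * math.sqrt(area_ratio)

gain = sensitivity_gain()   # roughly a factor of 20, before the longer exposure
```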
Acknowledgements

The AMANDA collaboration is indebted to the Polar Ice Coring Office and to Bruce Koci for the successful drilling operations, and to the National Science Foundation (USA), the Swedish National Research Council, the K.A. Wallenberg Foundation and the Swedish Polar Research Secretariat. F.H. was supported in part by the U.S. Department of Energy under Grant No. DE-FG02-95ER40896 and in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation.
References

[1] For a review, see T.K. Gaisser, F. Halzen, T. Stanev, Phys. Rep. 258 (3) (1995) 173; R. Gandhi, C. Quigg, M.H. Reno, I. Sarcevic, Astropart. Phys. 5 (1996) 81.
[2] F. Halzen, The case for a kilometer-scale neutrino detector, in: R. Kolb, R. Peccei (Eds.), Nuclear and Particle Astrophysics and Cosmology, Proc. Snowmass 94.
[3] S. Scopel, Proc. Int. Workshop on Aspects of Dark Matter in Astrophysics and Particle Physics, Heidelberg, Germany, 1996; see also L. Bergström, J. Edsjö, P. Gondolo, Phys. Rev. D, to be published.
[4] O. Suvorova et al., Baksan neutralino search, presented at the 1st Int. Workshop on Non-Accelerator New Physics (NANP97), Dubna, Russia, July 7—12, 1997.
[5] G. Jungman, M. Kamionkowski, K. Griest, Phys. Rep. 267 (1996) 195.
[6] P. Belli, Phys. Rep. 306 (1998), these Proceedings.
[7] S.W. Barwick et al., The status of the AMANDA high-energy neutrino detector, in: Proc. 25th Int. Cosmic Ray Conf., Durban, South Africa, 1997.
[8] S. Tilav et al., First look at AMANDA-B data, in: Proc. 25th Int. Cosmic Ray Conf., Durban, South Africa, 1997.
[9] S.W. Barwick et al., Proc. 22nd Int. Cosmic Ray Conf., Dublin, Dublin Institute for Advanced Studies, 1991, vol. 4, p. 658.
[10] The AMANDA collaboration, Science 267 (1995) 1147.
[11] C. Wiebusch et al., Muon reconstruction with AMANDA-B, in: Proc. 25th Int. Cosmic Ray Conf., Durban, South Africa, 1997.
[12] T. Miller et al., Analysis of SPASE-AMANDA coincidence events, in: Proc. 25th Int. Cosmic Ray Conf., Durban, South Africa, 1997.
[13] A. Bouchta, PhD Thesis, University of Stockholm, 1998.
[14] J.E. Jacobsen, PhD Thesis, University of Wisconsin, 1996.
[15] Muflux code written by J. Edsjö, implemented as an event generator by A. Goobar; see also L. Bergström, J. Edsjö, P. Gondolo, Indirect neutralino detection rates in neutrino telescopes, Phys. Rev. D 55 (1997) 1765.
Physics Reports 307 (1998) 253—261
WIMP dark matter detectors above 100 K
Neil Spooner
University of Sheffield, Physics Department, Hounsfield Rd., Sheffield S3 7RH, UK
Abstract

Of key motivation in the development of detector strategies in the search for Weakly Interacting Massive Particles (WIMPs) is the need to maximize the information obtainable from the expected elastic recoil events, so that these can be distinguished from background interactions and systematic errors can be controlled. Low temperature techniques may provide one route. However, most of the detector technology currently used, including that responsible for the best limits so far, does not involve cryogenic methods but relies on more conventional technology such as scintillators. There is increasing confidence that these techniques can reach the required sensitivity and may even provide the first detector with sensitivity to the direction of WIMP recoils. Reviewed here is the status of these techniques and the improvements planned to allow exploration of possible dark matter signal rates below 0.1 kg⁻¹ d⁻¹. 1998 Elsevier Science B.V. All rights reserved.

PACS: 95.35.+d
1. Introduction

Technology for the detection of Weakly Interacting Massive Particles (WIMPs) relies on devices sensitive to the rare, low energy nuclear recoils expected from elastic scattering of dark matter particles, in the presence of a higher electron recoil background. Much emphasis has been placed on the use of bolometric techniques since these can provide a low energy threshold (<10 keV) and, by the separate collection of ionization and thermal energy for instance, the possibility of efficient electron recoil rejection [1]. However, it is quite feasible for more conventional technology, including NaI and liquid xenon scintillator, also to achieve recoil discrimination and a low energy threshold sufficient for dark matter detection; indeed the best limits on WIMPs are currently set by these non-cryogenic techniques. This paper constitutes a review of the technology employed in these devices, recent progress on detector design and characterization, and prospects for the future.
E-mail: [email protected].
Table 1 gives a list of the non-cryogenic WIMP experiments presently known to the author [2—15]. This includes experiments that have either obtained limits, are expected to do so soon, or are under construction. It excludes Ge, Si and low temperature experiments. The sensitivity of a particular detector technology such as those listed in Table 1 hinges on a series of factors and the extent to which they can be optimized. This can be expressed in terms of a relation between the observed background energy spectrum dR/dE and the recoil spectrum expected from dark matter interactions, in the form [16]:

dR/dE = R₀ S(E) F(E) I(A) ,   (1)

where R₀ is the total event rate in kg⁻¹ d⁻¹ (predicted to be <0.1 kg⁻¹ d⁻¹) and arises from the expression dR/dE_R = (R₀/E₀r) exp(−E_R/E₀r). This gives the recoil spectrum in terms of the recoil energy E_R, assuming a point-like nucleus and zero Earth velocity relative to the dark matter distribution of mean incident energy E₀ [17]. Here r is a kinematic factor, r = 4M_D M_T/(M_D + M_T)², involving the target nucleus mass M_T and the WIMP mass M_D. In Eq. (1), S(E) contains factors including the detector resolution, threshold effects (such as photoelectron statistics in photomultipliers), allowance for the case of more than one target isotope, and the conversion efficiency f of recoil energy into observed energy in the detector. F(E) and I(A) contain the nuclear-WIMP physics, F(E) being a form factor correction due to the finite size of the nucleus (<1) and I(A) the nuclear coherence or spin factor correction, which for spin-independent interactions allows for coherent enhancement of the cross section. S(E) also contains cosmological factors to correct for the motion of the Earth through the dark matter halo. In the following sections a summary is given of the principal non-cryogenic techniques. However, prior to this it is worth summarizing the importance of the key factors contained in Eq. (1) and their relevance to obtaining the best dark matter detection sensitivity.
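The exponential recoil spectrum and kinematic factor underlying Eq. (1) can be sketched directly; the 50 GeV WIMP mass and iodine target mass below are illustrative numbers, not values from the text.

```python
import math

def kinematic_r(m_wimp, m_target):
    """r = 4*M_D*M_T / (M_D + M_T)^2; equals 1 when the WIMP and
    target masses are matched, and falls off for a mismatch."""
    return 4.0 * m_wimp * m_target / (m_wimp + m_target) ** 2

def recoil_spectrum(e_r, r0, e0, r):
    """dR/dE_R = (R0 / (E0*r)) * exp(-E_R / (E0*r)) -- the bare
    spectrum for a point-like nucleus and zero Earth velocity,
    before the S(E), F(E) and I(A) corrections of Eq. (1)."""
    return (r0 / (e0 * r)) * math.exp(-e_r / (e0 * r))

# Illustration: a 50 GeV WIMP on iodine (A ~ 127, M_T ~ 118 GeV)
# gives r ~ 0.84, i.e. good kinematic matching.
r_iodine = kinematic_r(50.0, 118.0)
```

Because the spectrum is a falling exponential in E_R, most of the predicted events sit at the lowest recoil energies, which is exactly the point made in the enumeration that follows.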
Table 1
Non-cryogenic dark matter experiments

Experiment                         Site         Targets          Reference
UKDMC (IC-Sheffield-RAL)           Boulby       Na, I            2
DAMA (Rome)                        Gran Sasso   Na, I            3
ELEGANTS (Japan)                   Kamioka      Na, I            4
Saclay                             Frejus       Na, I            5
USC-PNL-Zaragoza                   Canfranc     Na, I            6
DAMA (Rome)                        Gran Sasso   Ca, F            7
ELEGANTS (Japan)                   Oto-Cosmo    Ca, F            8
CASPAR (Sheffield)                 Boulby       Ca, F (C, H)     9
DAMA (Rome)                        Gran Sasso   Xe               10
DAMA (Rome)                        Gran Sasso   Xe (inelastic)   11
ZEPLIN (UCLA-UKDMC-Torino)         Boulby       Xe               12
SIMPLE (CERN, Lisbon, Paris)       Paris        F, Cl, C         13
Montreal/Chalk River               —            F, Cl, C         14
DRIFT (UKDMC-Temple-Oxy-Surrey)    Boulby       Xe, Ar           15

Firstly, the expected recoil spectrum is featureless, rising sharply at low energy with most recoil events expected in the keV
range. This determines the need for a low energy threshold. Secondly, this rising spectrum is similar to that expected from background electrons arising from U and Th in the surroundings, but with a rate typically several orders of magnitude lower. This determines the need for active discrimination against electron recoils and also for the use of highly radio-pure shielding materials, such as water, Pb or Cu. Simple energy cuts cannot be used, as there are no peaks in the expected recoil spectrum. There is also a need to go deep underground to eliminate the cosmic ray muon induced neutron background that could produce background nuclear recoils. Further neutron shielding is also desirable to eliminate the remaining neutrons, including those arising from (α, n) reactions in the rock. Thirdly, the signal count rate per kg is low and clearly increases linearly with mass. In general this points to a requirement for large detector mass (>kg) and hence low cost/kg. However, sensitivity to dark matter does not scale with mass alone but is determined also by the background discrimination power, so that high mass is not necessarily required if good event identification is obtainable. Fourthly, the choice of target nucleus (particularly spin matrix and A) is important for gaining optimum sensitivity to spin-dependent or spin-independent, and high mass or low mass, dark matter candidates. Finally, the ability with which systematic errors can be controlled must be considered.
2. NaI, CaF₂ and other alkali halide scintillation detectors

Low background NaI(Tl) detectors have become the most widely used non-cryogenic technique. They combine the possibility of high mass (>10s of kg), isotopes with nuclear spin, high A for sensitivity to higher mass WIMPs (from iodine), purifiable materials, simplicity of design and, most importantly, the possibility of background discrimination. The latter arises because the scintillation pulse shape for nuclear recoils has a faster decay time in NaI(Tl) than for electrons of the same energy. Fig. 1 shows example mean pulse shapes for Na recoils and electron recoils at 15 keV, taken from Ref. [18]. As can be seen, at low energy the difference between the two types of event is not large, so that in practice event-by-event discrimination is not possible. Instead, discrimination must be performed statistically by examining the form of the distribution of time constants accumulated for many pulses [19]. Pulse shape discrimination (PSD) analysis in NaI(Tl) is being performed by the UKDMC and the DAMA groups [2,3] using detectors constructed from low background, selected materials and shielded from the photomultipliers by radio-pure light guides. In the UKDMC experiment pulses are binned in a time constant vs. energy array, and for each energy interval the time constant distribution is compared to a gamma calibration distribution [2]. The error in the fit determines a 90% confidence limit for a small admixture of nuclear recoils. From this it is possible to reduce the observed spectrum to a nuclear recoil spectrum. Assuming consistency with zero, the error bars then allow limits to be placed on the dark matter rate. Although PSD in NaI(Tl) has proved quite powerful, there is a drive to find improved techniques, both for better sensitivity and to ensure good control of systematic errors [19]. This has been highlighted recently by UKDMC results from their new 5 kg NaI(Tl) detector now operating with improved light collection.
This has revealed a population of fast time constant events of as-yet unknown origin, seen at all measured energies (4—80 keV) [20]. This illustrates the need for alternative or additional methods to provide more information on events to help identification. One possibility is to search for the predicted annual modulation in the dark matter event rate.
Fig. 1. Mean pulse shapes for Na and electron recoils in NaI (left) and F and electron recoils in CaF₂ (right).
Fig. 2. Energy spectrum of the fast time constant events seen in the UKDMC 5 kg detector.
However, the effect is small, so that pulse shape discrimination is, in principle, always more sensitive [19,20]. Nevertheless, a new experiment by the Zaragoza-USC-PNL collaboration at Canfranc has analysed data from a 32.1 kg NaI array for annual modulation and set limits [6], and the DAMA group have also analysed data from part of their 115 kg NaI array. Analysis by the UKDMC has shown a significant (4σ) annual modulation in their fast time constant events but, intriguingly, not in the background gamma events [20]. Fig. 2 shows the energy spectrum of these events. DAMA have claimed an annual modulation signal at low energy in undiscriminated data [22], though this result has been criticized [23].
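The statistical PSD method described above (comparing the time-constant distribution in each energy bin to a gamma calibration distribution) can be sketched as a two-template fit. The histograms below are toy numbers, and a simple chi-square grid scan stands in for the full fit with error analysis used by the UKDMC.

```python
import numpy as np

def recoil_fraction(observed, gamma_tmpl, recoil_tmpl):
    """Estimate the nuclear-recoil admixture f in an observed
    time-constant histogram modelled as
        (1 - f) * gamma_tmpl + f * recoil_tmpl
    by scanning a chi-square over f.  Inputs are normalized histograms."""
    fs = np.linspace(0.0, 1.0, 1001)
    chi2 = [np.sum((observed - ((1 - f) * gamma_tmpl + f * recoil_tmpl)) ** 2)
            for f in fs]
    return fs[int(np.argmin(chi2))]

# Toy histograms over time-constant bins: recoils are systematically faster.
gamma_tmpl = np.array([0.05, 0.15, 0.30, 0.30, 0.20])
recoil_tmpl = np.array([0.30, 0.35, 0.20, 0.10, 0.05])
data = 0.9 * gamma_tmpl + 0.1 * recoil_tmpl   # 10% recoil admixture
f_hat = recoil_fraction(data, gamma_tmpl, recoil_tmpl)
```

In the real analysis the fitted admixture and its uncertainty, evaluated per energy bin, give the 90% confidence limit on a recoil component mentioned in the text.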
Alternative alkali halide scintillators have also been used. For instance, the DAMA group have published limits from a 0.37 kg CaF₂ crystal [7] and the Osaka group is constructing a large CaF₂ array at the new Oto-Cosmo underground site (1500 mwe) in Japan (ELEGANTS VI) [8]. However, CaF₂ cannot provide as much information on events as NaI, since conventional CaF₂ has no pulse shape discrimination (Fig. 1 shows typical mean pulse shapes for nuclear recoils and electrons in CaF₂ [18]). Thus annual modulation is likely to be the only means of identifying a signal in CaF₂ (but see Section 4). One advantage of CaF₂, though, is the fluorine content, which has a favourable spin matrix for neutralinos [24]. CsI has also been examined but this suffers from high radioactivity due to Cs contamination [5]. Of the alkali halides, then, NaI remains the most promising, particularly since there is considerable scope for improvement in NaI technology. Larger mass and increased run time will improve statistical sensitivity as √(mass × time), and improved light collection will lower the energy threshold and allow better discrimination. The intrinsic light yield in NaI of 40—60 photons/keV means a Na recoil energy threshold <1 keV is feasible. The UKDMC have demonstrated this using a 50 mm unencapsulated NaI test crystal with PMTs close-coupled: 14 photoelectrons/keV was achieved using a 1 cm thick PTFE reflective coating [25]. There are also prospects for improved discrimination, by optimizing temperature, crystal growth technique, doping and analysis methods, and for lower background, by using chemical purification of NaI with Diphonix [26] and the addition of a Compton veto.
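Why 14 photoelectrons/keV implies a sub-keV threshold can be seen from Poisson photoelectron statistics. The sketch below uses the yield quoted in the text, but the requirement of 4 photoelectrons is a hypothetical trigger condition chosen only for illustration.

```python
import math

def trigger_efficiency(energy_kev, pe_per_kev=14.0, n_required=4):
    """Poisson probability of collecting at least n_required
    photoelectrons at a given recoil energy.  pe_per_kev = 14 is the
    close-coupled test-crystal yield quoted in the text; n_required
    is an assumed, illustrative trigger condition."""
    mu = pe_per_kev * energy_kev
    p_below = sum(math.exp(-mu) * mu**k / math.factorial(k)
                  for k in range(n_required))
    return 1.0 - p_below

eff_1kev = trigger_efficiency(1.0)
```

At 1 keV the mean of 14 photoelectrons makes the detector essentially fully efficient, whereas at a tenth of that energy the efficiency collapses, which is the sense in which light yield sets the threshold.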
3. Liquid xenon detectors

Despite improvements, the intrinsic limitations of pulse shape discrimination in NaI will limit the amount of event information available. Consequently, alternative techniques are being sought. One possibility is to use liquid Xe scintillator, in which there is potentially much more information because interactions produce several competing processes that depend on the dE/dx [27]. These are: (i) excitation — resulting in Xe₂* molecules which decay emitting 175 nm photons with a mixture of 3 and 27 ns time constants depending on the dE/dx, and (ii) ionization — resulting in Xe⁺ ions which, after a delay of ~40 ns for gammas and <3 ns for nuclear recoils, can recombine to give Xe₂*. The latter can decay as in (i) or, if an electric field is applied, the recombination can be stopped and the charge drifted and accelerated to produce a second (proportional) scintillation pulse. Thus two means of discrimination are possible: either conventional pulse shape analysis, with recombination, or scintillation-ionization, in which the primary scintillation pulse S1 is followed by a secondary pulse S2. The mean ratio S2/S1 for nuclear recoils is predicted to be 0.1—0.3, compared to 1—10 for electrons [21]. The realization of these event recognition ideas in liquid Xe represents a challenge because very high purity Xe is needed. However, the reward is more information on events and so better discrimination and control of systematics. Xe is also an attractive target because it can be purified, isotopically enriched, and has a high A. To date, a liquid Xe dark matter experiment without discrimination has been run by the DAMA group and limits set [10], and the scintillation-ionization technique at low energy has been demonstrated by the ICARUS group [28,29], who found near event-by-event discrimination between electron and nuclear recoils (~3% overlap at 5 keV). A collaboration involving the UKDMC, UCLA, Torino and ITEP is now planning an
experiment based on the technique, called ZEPLIN [12]. The first step will be a 1 kg Xe chamber with pulse shape discrimination, followed by a 10—20 kg scintillation-ionization detector with an outer Compton veto. The plan is to use the new Gas Electron Multiplier (GEM) technology developed at CERN in a two-phase Xe design to increase efficiency and discrimination. It may also be possible to operate without the radioactive photomultipliers, by using direct amplification and imaging of the ionization clouds produced in liquid Xe with a TEA additive [30]. The Doke group (Japan) are also planning a Xe detector with discrimination [31]. Attempts have also been made to search for inelastic events in Xe [11] (and in NaI [21]).
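The scintillation-ionization discrimination described above reduces, in its simplest form, to a cut on the ratio S2/S1. The sketch below uses the predicted ranges from the text (0.1-0.3 for nuclear recoils, 1-10 for electrons); the cut value of 0.5 is an illustrative midpoint, not a published analysis threshold, and the pulse amplitudes are invented.

```python
def classify_xenon_event(s1, s2, cut=0.5):
    """Classify a two-pulse liquid-Xe event by the ratio S2/S1.
    Nuclear recoils are predicted to give S2/S1 ~ 0.1-0.3, electron
    recoils ~ 1-10; cut=0.5 is an assumed, illustrative threshold."""
    return "nuclear recoil" if s2 / s1 < cut else "electron recoil"

label_nr = classify_xenon_event(s1=100.0, s2=20.0)    # ratio 0.2
label_er = classify_xenon_event(s1=100.0, s2=300.0)   # ratio 3.0
```

In a real detector the cut would be placed, energy bin by energy bin, from calibration data, since both populations broaden at low energy.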
4. Detectors with recoil range discrimination

A further means of obtaining extra event information is to make use of the shorter track range of nuclear recoils compared to recoils from electrons of the same energy [17]. Use has been made of this in geophysical searches for dark matter, where ancient mica has been studied for damage caused by nuclear recoil tracks [32]. In principle a real-time range-discriminating detector can also be envisaged in a solid. One possibility is to use multi-layer semiconductor or scintillator detectors, but these have proved difficult to fabricate [33]. An alternative is to use sub-micron scintillator granules suspended in a liquid scintillator of matched refractive index [34]. The Sheffield group has studied this technique, termed CASPAR, using precipitate-grown CaF₂ granules suspended in a dioxane—methanol—cabosil mixture [9]. This makes use of fluorine as a target. Discrimination arises by pulse shape analysis because WIMP interactions will give rise mainly to short range (<500 nm) Ca and F recoils within the grains, yielding slow pulses (~900 ns), whereas electron recoils travel further and introduce a 4 ns component from the liquid. Tests with monoenergetic neutrons in a 50 mm dia. × 50 mm test CASPAR cell have demonstrated >90% discrimination of Ca and F recoils from electrons at 60 keV [9]. A 1 litre detector is under construction for operation underground. The event information available is much greater than in conventional alkali halide techniques, as separate identification of H and C recoils in the liquid is feasible in addition to Ca and F in the precipitate. This may allow separate measurement of background neutron events. Furthermore, the precipitation process used, via the water-soluble precursor CaCl₂, allows radio-purification of the grains. The low fabrication cost of the technique, which requires no expensive furnace growth of crystals, means very large detectors may be feasible with CASPAR.
5. Alternative techniques

Several alternative detection schemes have been proposed as a route to discrimination [17], but the Superheated Droplet Detector (SDD) has perhaps progressed the furthest and has potentially complete discrimination against electrons. The SDD consists of a dispersion of 10—100 μm droplets of superheated liquid (freon) in a viscous gel that is sensitive only to the highly localized ionization from nuclear recoils. Each droplet acts as a mini bubble chamber, so that energy deposited above a value E within a range R invokes a phase change which can be detected by piezo-electric sensors. Progress with prototypes has been made by the SIMPLE collaboration and by the Montreal/Chalk River collaboration [13,14]. In principle the SDD can provide high
mass and excellent recoil discrimination with a fluorine component in the target. However, a disadvantage of SDDs is the lack of energy resolution.
6. Direction sensitive techniques

An additional, potentially powerful, source of event information not exploited in any of the techniques discussed in Sections 2-5 is the direction of WIMP-induced recoiling nuclei. The motion of the Solar System through the Galactic halo (at ~230 km/s) ensures a forward-back asymmetry in these recoil directions that increases rapidly with recoil energy (>1:100 above 100 keV) [17]. This is a unique feature of WIMP events and offers the prospect of discrimination against all normal, isotropic backgrounds by correlating event direction with the motion through the halo. This would confirm the Galactic origin of a signal.

One possibility for a direction-sensitive detector is an adaptation of the CASPAR detector in which the CaF2 grains are made asymmetric. Ca and F recoils will then either emerge from the grains or remain inside, depending on grain orientation relative to the WIMP flux. Production of the necessary single-crystal, sub-micron, asymmetric CaF2 grains has been demonstrated [35], and the CASPAR group now plans to build test cells. Orientation of the grains can be achieved, in principle, with an electric field. Alternatively, use has been proposed of the anisotropy of the scintillation response in organic scintillators such as stilbene [36]. Measurements with monoenergetic neutrons have recently confirmed a directional response of the light output from low-energy (<50 keV) carbon recoils, depending on their orientation relative to the crystal axis [37]. However, the effect is small (~20%) and has 180° symmetry in crystal orientation, so that only the weaker perpendicular-parallel change in recoil directions can be observed rather than the full forward-back asymmetry. This means a large mass of material (hundreds of kg) is probably needed to gain sufficient sensitivity in a dark matter detector.
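The forward-back signature can be sketched with a toy Monte Carlo. The isotropic background is flat in cos(θ) measured from the direction of Solar-System motion; for the WIMP signal a simple exponential forward bias is assumed here purely for illustration (the true distribution follows from the halo velocity distribution and recoil kinematics):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Isotropic background: cos(theta) uniform on [-1, 1].
bkg = rng.uniform(-1.0, 1.0, n)

# Toy WIMP signal: p(c) proportional to exp(c + 1) on [-1, 1],
# sampled by inverse transform (illustrative bias, not a halo model).
u = rng.uniform(0.0, 1.0, n)
sig = np.log(u * (np.exp(2.0) - 1.0) + 1.0) - 1.0

def forward_back(c):
    """Ratio of forward (cos > 0) to backward (cos < 0) event counts."""
    return np.count_nonzero(c > 0) / np.count_nonzero(c < 0)

assert abs(forward_back(bkg) - 1.0) < 0.05   # isotropic: ratio ~ 1
assert forward_back(sig) > 2.0               # wind-correlated: clear excess
```

Correlating the measured ratio with sidereal time, as suggested in the text, would distinguish a Galactic signal from any terrestrial background.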
The most sensitive directional technique so far proposed is to use ionization tracks in a low-pressure gas (10-20 torr) TPC [38]. In principle, such a device could provide full three-dimensional information on all events, thereby combining recoil discrimination based on dE/dx and track length, direction sensitivity, event location and Compton vetoing, all in a low-background configuration. The DRIFT detector collaboration (Sheffield-RAL-Temple-UCSD-Oxy-Surrey) is now developing such a gas-based direction-sensitive detector for installation at the Boulby mine [15]. Although the target mass in DRIFT would necessarily be low, this is not a disadvantage because the discrimination power is so high. Event-by-event discrimination is achievable, so that sensitivity improves as mass × time. The observed track length corresponds to the true recoil energy, so that a lower threshold can be achieved than in solids. The principal technical challenge with a TPC detector has been the need to minimize electron diffusion during drift so that sufficient track resolution (<1 mm) is obtained. Preliminary work by the UCSD group, who constructed a prototype low-pressure Ar TPC of 1 m × 0.4 m diameter, solved this by using a superconducting magnet to provide a 0.5 T magnetic field in the chamber [38]. They demonstrated nuclear recoils from neutron scattering. However, to reach reasonable sensitivity in a dark matter experiment (0.01-0.1 ct kg^-1 d^-1) would require a scale-up of 5-10, and this rules out the use of magnets on cost grounds. Consequently the DRIFT collaboration has sought alternative methods. One possibility is to use short drift lengths in a stacked array. However, a promising new idea is to use Xe with an
Fig. 3. Predicted sensitivities for a 1 m × 0.4 m diameter directional Xe TPC.
admixture of a suitable electro-negative gas such as CS2, and to operate in a negative-ion drift mode. Fig. 3 shows the results of sensitivity predictions for such a prototype low-pressure Xe directional detector.
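The claim above that, with event-by-event discrimination, sensitivity improves linearly with mass × time can be made concrete with a background-free Poisson limit. The specific exposure figure below is an illustrative calculation, not a number from the text:

```python
def rate_limit_90cl(mass_kg, live_days):
    """90% C.L. Poisson upper limit on a signal rate (ct kg^-1 d^-1) for an
    ideal background-free search: with event-by-event discrimination no
    events survive the cuts, so the limit is 2.3 expected events divided by
    the exposure, i.e. it improves linearly with mass x time."""
    return 2.3 / (mass_kg * live_days)

# Example (illustrative): a 1 kg background-free target reaches the
# 0.01 ct/kg/day regime quoted in the text after 230 kg-days of exposure.
assert abs(rate_limit_90cl(1.0, 230.0) - 0.01) < 1e-9
```

Without discrimination the limit would instead be set by background subtraction and improve only as the square root of the exposure, which is why a low target mass is acceptable for DRIFT.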
References

[1] B. Cabrera, this conference.
[2] J.J. Quenby et al., Phys. Lett. B 351 (1995) 70; P.F. Smith et al., Phys. Lett. B 379 (1996) 299.
[3] R. Bernabei et al., Phys. Lett. B 389 (1996) 757.
[4] K. Fushimi et al., Proc. Dark Matter in the Universe and its Direct Detection, Universal Academy Press, Tokyo, 1997, p. 115; H. Ejiri, K. Fushimi, H. Oshsumi, Phys. Lett. B 317 (1993) 14; K. Fushimi et al., Phys. Rev. C 47 (2) (1993) R425.
[5] G. Gerbier, Proc. Int. Workshop on Identification of Dark Matter, World Scientific, Singapore, 1997, p. 378.
[6] M.L. Sarsa et al., Nucl. Phys. B (Proc. Suppl.) 48 (1996) 73; M.L. Sarsa et al., Phys. Rev. D 56 (4) (1997) 1856; M.L. Sarsa et al., Phys. Lett. B 386 (1996) 458.
[7] R. Bernabei et al., Astropart. Phys. 7 (1997) 73; C. Bacci et al., Astropart. Phys. 4 (1996) 195.
[8] T. Kishimoto et al., Proc. Dark Matter in the Universe and its Direct Detection, Universal Academy Press, Tokyo, 1997, p. 71.
[9] N.J.C. Spooner, D.R. Tovey, Astropart. Phys. 8 (1997) 20.
[10] P. Belli et al., Nucl. Phys. B (Proc. Suppl.) 48 (1996) 60; R. Bernabei et al., Nuovo Cimento C 19 (1996) 537.
[11] P. Belli et al., Phys. Lett. B 387 (1996) 222.
[12] T.J. Sumner et al., Proc. TAUP97, 7-11th September 1997, Gran Sasso, Italy.
[13] J.J. Collar et al., Proc. Int. Conf. on the Identification of Dark Matter, World Scientific, Singapore, 1997, p. 563.
[14] L.A. Hamel et al., Nucl. Instr. and Meth. A 388 (1997) 91.
[15] J.C. Martoff, 2nd ROPAC Symp., Manchester, 17th April 1998.
[16] J.D. Lewin, P.F. Smith, Astropart. Phys. 6 (1996) 87.
[17] P.F. Smith, J.D. Lewin, Phys. Rep. 187 (1990) 203.
[18] D.R. Tovey et al., Phys. Lett. B 433 (1998) 150.
[19] N.J.C. Spooner, Proc. 1st Int. Conf. on Particle Physics and the Early Universe, Ambleside, UK, 15-19th September 1997, in press.
[20] P.F. Smith, Phys. Rep. 307 (1998), this issue; N.J.C. Spooner, Proc. PASCOS98, Boston, 22-29 March 1998, in press.
[21] P.F. Smith, Nucl. Phys. B (Proc. Suppl.) 51 (1996) 284.
[22] P. Belli et al., Phys. Rep. 306 (1998), this issue; P. Belli et al., Proc. TAUP97, 7-11 September 1997, Gran Sasso, Italy.
[23] G. Gerbier, J. Mallet, L. Mosca, C. Tao, Astropart. Phys. (1997), submitted.
[24] J. Ellis, R.A. Flores, Nucl. Phys. B 307 (1988) 883.
[25] C.D. Peak et al., IEEE Trans. Nucl. Sci. (1998), submitted.
[26] I.M. Blair, J.A. Edgington, N.J.C. Spooner, Proc. the Dark Side of the Universe, World Scientific, Singapore, 1995, p. 128.
[27] G.J. Davies et al., Phys. Lett. B 320 (1994) 395.
[28] H. Wang, Proc. 3rd Int. Symp. on Sources of Dark Matter, UCLA, Los Angeles, 1998, in press.
[29] S. Susuki et al., Proc. Dark Matter in the Universe and its Direct Detection, Universal Academy Press, Tokyo, 1997, p. 113.
[30] H. Wang, CERN, private communication.
[31] T. Doke, K. Masuda, E. Shibamura, Nucl. Instr. and Meth. A 291 (1990) 617.
[32] D.P. Snowden-Ifft, A.J. Westphal, Phys. Rev. Lett. 78 (9) (1997) 1628.
[33] N.J.C. Spooner, 18th Texas Symp. on Relativistic Astrophysics, Chicago, 15-20 December 1996.
[34] N.J.C. Spooner, P.F. Smith, J.D. Lewin, ICRC 23 (1993) 760.
[35] Y. Ota et al., J. Am. Ceram. Soc. 79 (1996) 2986.
[36] P. Belli et al., Nuovo Cim. C 15 (4) (1992) 475; N.J.C. Spooner et al., Astropart. Phys. 8 (1997) 13.
[37] N.J.C. Spooner, J.W. Roberts, D.R. Tovey, Proc. Int. Conf. on the Identification of Dark Matter, World Scientific, Singapore, 1997, p. 481.
[38] K.N. Buckland et al., IEEE Trans. Nucl. Sci. 44 (1) (1997) 6.
Physics Reports 307 (1998) 263—267
Xenon as a detector for dark matter search Hanguo Wang* Physics and Astronomy, University of California, Los Angeles, 405 Hilgard Ave., CA 90095, USA For the ICARUS Collaboration
Abstract We will discuss the detailed properties of xenon and its utilization for detecting weakly interacting massive particles (WIMPs). Xenon scintillation and proportional scintillation are the key factors in the problem of background rejection. We will also consider the possibility of doping liquid Xe with triethylamine (TEA), to lower the energy threshold and to improve background rejection efficiency. 1998 Published by Elsevier Science B.V. All rights reserved. PACS: 95.35.#d Keywords: WIMP; Dark matter; Xenon; Scintillation
1. Introduction

It is generally believed that more than 90% of the mass in our universe is dark matter (DM), which neither radiates nor absorbs electromagnetic waves. The observed evidence [1] shows that there is about 0.3 GeV/cm^3 of DM around our solar system. If the DM is in fact weakly interacting massive particles (WIMPs), then it should be detectable by Earth-based detectors. Because of the motion of the solar system relative to the centre of our Galaxy, WIMPs passing through a detector will transfer energy to the detector nuclei via elastic scattering. When xenon (Xe) is used as the detector medium, the mean energy transfer is about 50 keV, and the event rate is in the range of 1 × 10^-4 to 0.1 events/(day kg) [2]. It is a challenging task to detect such low-energy, low-rate events because of the high radioactive background near and from the detector. Currently, many experiments, involving different techniques, are being conducted. Liquid xenon (LXe) was chosen as the detector medium primarily because of its high scintillation yield, high density and high atomic number. Since both the scintillation and the ionization can be measured simultaneously, it is easy to differentiate the WIMP signal from those emanating from the background. * E-mail:
[email protected]. 0370-1573/98/$ — see front matter 1998 Published by Elsevier Science B.V. All rights reserved. PII: S 0 3 7 0 - 1 5 7 3 ( 9 8 ) 0 0 0 5 8 - 1
H. Wang / Physics Reports 307 (1998) 263—267
In fact the signal from a Xe nucleus recoiling after WIMP elastic scattering behaves like a heavily ionizing signal inside LXe, while the signals from the radioactive background are minimum-ionizing signals. Therefore the scintillation/ionization ratios of the total recoil energy are different for WIMP signals and background [3]. The high density of Xe makes it easy to build a large-mass detector with a small structure and thereby reduces the cost of the detector shielding. The high atomic number is especially well suited to matching high-mass WIMPs kinematically. The longest-lived radioactive xenon isotope, 127Xe, has a lifetime of about 36 days, and using enriched xenon [4], which is Kr-free, makes it possible to construct a detector with extremely low background from the detector itself. Within the Zeplin II project [5], the UK RAL group is working on the initial phase of the detector. This phase will measure only Xe scintillation and will use techniques similar to those employed by the DAMA group [4]. Scintillation pulse shape discrimination will be considered in this test, because the decay profile of the scintillation from Xe recoils is different from that of the background signal [6]. Because the mean recoil energy is only around 50 keV, the ionization yield is too small for readout by charge amplifiers. Proportional scintillation, i.e. scintillation caused by electrons drifting under a very high electric field around thin wires in LXe, makes it possible to measure the ionization component from the low-energy gamma background [3]. Extensive studies have been conducted with a 2 kg detector by the ICARUS group [3,7]. During the past two years, we have been studying the double-phase Xe detector that is installed at CERN. The electroluminescence in gaseous Xe gives a much higher light output and better stability for the ionization measurement.
The number of luminescence photons produced by one electron under a uniform electric field is well approximated by N = 70 × [(E/P) − 1.0] × dP [8], where P, E, and d are, respectively, the gas pressure, electric field strength, and drift distance. By drifting the ionization electrons from the liquid to the gas phase, the ionization component can be measured by means of luminescence photons; as in the liquid-only case, it can also be measured by means of proportional scintillation photons. Fig. 1 shows the correlation plot of primary versus secondary scintillation. It shows clearly that the background rejection obtained by the double-phase (liquid and gas) detector (left) is much better than that obtained by the single-phase detector (right). A 99.8% rejection was obtained in the liquid-only phase; the double phase is still under investigation. Another possibility, which is also being studied at CERN, is to use double-phase Xe doped with triethylamine (TEA) [(C2H5)3N] as the DM detector. The TEA acts as an internal photocathode that converts the Xe primary scintillation photons into free electrons, which are then transported to the gas phase. Electroluminescence takes place when electrons move under a very high electric field in gaseous Xe. When uniform electric fields are provided in the drift region of the LXe, the shape of the electron cloud can be maintained so that it gives a unique signal-to-background discrimination.
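The luminescence-yield formula quoted above can be exercised numerically. This is a direct transcription of the approximation attributed to Ref. [8]; the input numbers below are arbitrary illustrative values, with E, P and d understood in the units of that reference:

```python
def n_photons(E, P, d):
    """Electroluminescence yield per drifting electron in gaseous Xe,
    N = 70 * [(E/P) - 1.0] * d * P, as quoted in the text
    (E: field strength, P: gas pressure, d: drift distance)."""
    return 70.0 * (E / P - 1.0) * d * P

# The yield is linear in drift distance, and vanishes at the E/P = 1 threshold:
assert n_photons(E=10.0, P=2.0, d=2.0) == 2.0 * n_photons(E=10.0, P=2.0, d=1.0)
assert n_photons(E=2.0, P=2.0, d=1.0) == 0.0
```

The linearity in d is what makes the gas-phase gain controllable: the amplification is set by the geometry and field rather than by the photon detector's quantum efficiency.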
2. The detector design

We will concentrate on the two-phase TEA-doped Xe detector. Because it collects primary light much better, it has a very low energy threshold; it may also have a much better background
Fig. 1. Primary vs. secondary scintillation in double phase (left) and liquid-only phase detector (right).
Fig. 2. The proposed Zeplin II detector (left); the projected free electrons on the X-Y plane perpendicular to the drift direction (right).
rejection. A cross-sectional view of the proposed Zeplin II detector is shown in Fig. 2 (left). The LXe is confined at the bottom, and the liquid surface is controlled so that it stays between two (focusing and defocusing) plates (to be discussed later). In the gas phase, there are two additional wire planes, which provide strong electric field for electroluminescence, with photon detection devices placed above them. The detector is vacuum insulated with a double layer chamber. This setup is common for both double-phase pure Xe and double-phase TEA-doped Xe.
Fig. 3. Structure of the focusing holes (left); proper electric field (right).
The ionisation potential of TEA in LXe is about 5.9 eV. The photon absorption wavelength of TEA peaks at about 170 nm, which matches the Xe scintillation excellently, and the efficiency of free-electron yield is almost 100% [9]. The absorption length in 40 ppm TEA-doped LXe is about 2 cm. A 50 keV recoiling Xe nucleus loses all its energy within a very short distance and can therefore be considered a point scintillation light source. The scintillation photons are then converted into free electrons by the TEA internal photocathode, and these electrons are distributed around the event centre. The energy-loss process of a radioactive background event differs significantly from that of a recoil event [3]. In the case of radioactive background, the total number of free electrons produced by ionization under the normal electric field in LXe is comparable to the total number of free electrons converted by the internal photocathode from primary scintillation. Conversely, the total number of free electrons produced by ionization from recoiling nuclei is negligible. The result is that the final electron distribution due to radioactivity has a dense central core, while the recoil electrons are distributed relatively smoothly, as can be seen in Fig. 2 (right). This enables background rejection on an event-by-event basis. The main feature of this method is that the primary photon numbers are amplified, resulting in a much better detection efficiency; hence, a low energy threshold can be achieved. Field-shaping rings are placed along the drift direction with one centimetre spacing; a focusing plate is placed just below the liquid surface, and a defocusing plate a few millimetres above it. The focusing plate is an 80 μm thick, three-layer, Kapton-insulated PCB, shown in Fig. 3 (left). Many small holes (400 μm in diameter) are chemically etched into it to form the electron-focusing structure.
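The core-versus-smooth topology argument above can be illustrated with a two-dimensional toy model of the projected electron cloud. The widths and the 50/50 core split are illustrative assumptions, not ICARUS simulation values; only the 2 cm photon absorption length is taken from the text:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

def core_fraction(x, y, r_core=0.5):
    """Fraction of projected electrons within r_core (cm) of the event centre."""
    return float(np.mean(np.hypot(x, y) < r_core))

# Nuclear recoil: all electrons come from TEA-converted scintillation photons,
# spread over a few cm (cf. the ~2 cm absorption length); width illustrative.
rx, ry = rng.normal(0.0, 2.0, n), rng.normal(0.0, 2.0, n)

# Gamma background: the same photon-converted halo plus a dense ionization
# core at the event centre (here half the electrons, 0.05 cm width).
gx = np.concatenate([rng.normal(0.0, 2.0, n // 2), rng.normal(0.0, 0.05, n // 2)])
gy = np.concatenate([rng.normal(0.0, 2.0, n // 2), rng.normal(0.0, 0.05, n // 2)])

# The gamma event shows a pronounced central core; the recoil does not.
assert core_fraction(gx, gy) > 5.0 * core_fraction(rx, ry)
```

A cut on such a core fraction is one simple way to realise the event-by-event rejection described in the text, provided the drift field preserves the cloud shape as stated.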
When the proper electric potential is applied to the electrodes, these two plates transport the electrons from the liquid to the gas phase. The electric field is shown in Fig. 3 (right). The electron transport was tested in a gas chamber, and the transparency is better than 95%. The electroluminescence in the gas phase amplifies the primary signal by a controllable factor, so that the quantum efficiency of the photon detection devices does not play an important role in the overall detection efficiency and energy threshold.
This investigation is ongoing, and it is clear that the xenon detector gives us hope for a large-scale detector suitable for mapping high-mass WIMPs. The Torino-UK-UCLA project is dedicated to this task.
Acknowledgements I wish to thank Dr. D.B. Cline, my PhD Thesis adviser, and Prof. P. Picchi and other ICARUS members for their continuing support during the past few years.
References

[1] R. Flores, Phys. Lett. B 126 (1983) 28.
[2] R. Arnowitt, P. Nath, Phys. Rep. 307 (1998), this issue.
[3] P. Benetti et al., Nucl. Instr. and Meth. A 327 (1993) 178.
[4] P. Belli et al., Nucl. Phys. B (Proc. Suppl.) 48 (1996) 62.
[5] D. Cline et al., UCLA proposal for NSF 98-2.
[6] A. Hitachi et al., Phys. Rev. B 27 (1983) 5279.
[7] H. Wang, CERN-UCLA, PhD thesis, in progress.
[8] A. Bolozdynya et al., Nucl. Instr. and Meth. A 385 (1997) 225.
[9] S. Suzuki et al., Nucl. Instr. and Meth. A 245 (1986) 78.
Physics Reports 307 (1998) 269—273
The DAMA experiments at Gran Sasso

P. Belli*, R. Bernabei, F. Montecchia, W. Di Nicolantonio, A. Incicchitti, D. Prosperi, C.J. Dai, L.K. Ding, H.H. Kuang, J.M. Ma, M. Angelone, P. Batistoni, M. Pillon

Dipartimento di Fisica, Università di Roma "Tor Vergata" and INFN, sez. Roma2, I-00133 Rome, Italy
Dipartimento di Fisica, Università di Roma "La Sapienza" and INFN, sez. Roma, I-00185 Rome, Italy
IHEP, Chinese Academy, P.O. Box 918/3, Beijing 100039, China
ENEA - C.R. Frascati, P.O. Box 65, I-00044 Frascati, Italy
Abstract The main activities of the DAMA experiments at the Gran Sasso National Laboratory of INFN are briefly summarized. 1998 Elsevier Science B.V. All rights reserved. PACS: 95.35.#d; 95.30.Cq; 14.80.Ly Keywords: Dark matter; Elementary particles and nuclear processes; Supersymmetric partners
1. Introduction

The DAMA activities have been mainly focused, for several years, on the direct search for particle dark matter using low-radioactivity scintillators. In the following, the main features of the set-ups and the more recent results are summarized, paying particular attention to the NaI(Tl) and LXe target-detectors [1-8]. Results obtained with CaF2(Eu) and other scintillators can be found in Ref. [9].

2. The ~100 kg NaI(Tl) set-up

This experiment is the largest one ever built with particle dark matter search as its first aim. It is sensitive both to spin-dependent and to spin-independent WIMP interactions, and both to "small" (Na) and * Corresponding author. E-mail:
[email protected]. For the measurements with neutrons. 0370-1573/98/$ — see front matter 1998 Elsevier Science B.V. All rights reserved. PII: S 0 3 7 0 - 1 5 7 3 ( 9 8 ) 0 0 0 6 3 - 5
P. Belli et al. / Physics Reports 307 (1998) 269—273
"large" (I) WIMP masses. Its main goal is the search for WIMPs via the annual modulation signature over several years; periodical upgradings of the set-up, to increase its performance and sensitivity, are foreseen. It consists of nine 9.70 kg and four 7.05 kg NaI(Tl) detectors built by the Crismatec company [1]. Each detector is viewed through a 10 cm long light guide by two low-background EMI9265B53/FL 3" diameter PMTs working in coincidence; the hardware threshold for each PMT is at the single photoelectron level. The detectors are kept in a sealed Cu box flushed with HP N2 from bottles stored for a long time deep underground. The Cu box is enclosed in a low-radioactivity Cu/Pb/Cd/paraffin and polyethylene shield. The shield, as well as the lower level of the barrack itself, is sealed with Supronyl to increase the protection from environmental air. On top of the shield a glove-box (maintained in the same HP N2 atmosphere) is directly connected to the Cu box through 4 Cu thimbles. Source holders can be inserted into the thimbles to calibrate all the detectors at the same time, avoiding any contact with environmental air. The glove-box is equipped with a compensation chamber. When the source holders are not inserted, Cu bars completely fill the thimbles. The whole installation is air-conditioned, both to assure a suitable working temperature for the electronics and to avoid any possible influence of external seasonal variations. In addition, a hardware/software stability monitoring system is in operation; in particular, several probes are read out by a CAMAC system and stored with the production data. They are: the external radon level, two reference temperatures (and the temperature of the cooling H2O used for conditioning), the HP N2 flux, the overpressure of the Cu box containing the detectors, and the total and single-crystal rates above the single photoelectron threshold (i.e., from noise to "infinity").
Periodical calibrations are performed; in addition, a small 210Pb contamination, mainly on the surface, allows an intrinsic monitoring of the stability of the set-up by verifying the peak position and resolution of the data summed over each ~7 days. Furthermore, a continuous monitoring and alarm system run by self-controlled processes is in operation. The software energy threshold is set at 2 keV [1,2]; the energy threshold has also been verified by calibrations with a 55Fe (5.9 keV) source (through a special low-Z window on one side of the detectors), with Compton electrons in the keV range, and by considering the energy spectrum of an 241Am source below 20 keV. Detection efficiencies have been measured in the keV range. As regards PMT noise rejection near threshold, we recall that >= 5.5 photoelectrons/keV (depending on the detector) are available and that the NaI(Tl) scintillation pulses have a time distribution with a decay time of hundreds of ns, while the PMT noise is present in the form of fast signals. Therefore, the timing structure of the pulse and the available number of photoelectrons/keV allow signal/noise discrimination near threshold, using the pulse information recorded by a Transient Digitizer over 3250 ns. Several useful variables can be calculated for each pulse and used for this purpose, such as the ratio of the pulse area within 100-600 ns to the pulse area within 0-600 ns, which is expected to be around ~0 for noise pulses and near ~0.7 for NaI(Tl) scintillation pulses. For any kind of detector, good events may also be cut when rejecting noise near threshold; therefore, analysis-cut efficiencies have to be evaluated and properly used. In our case the analysis-cut efficiencies for each considered energy bin have been evaluated using the data collected with an 241Am source under the same experimental conditions and in the same energy range (2-20 keV); for the data analysis of Refs.
[1,2], they ranged from about 30 to 40% (depending on the crystal) up to 100%
between 2 and 12 keV. The lower value at threshold accounts for the statistical spread of the variables when they are evaluated from pulses having a lower number of photoelectrons. Some interesting results have already been achieved with this set-up. In fact, significant limits on WIMP-nucleus elastic scattering, using pulse shape discrimination between electromagnetic background and recoils on a statistics of 4123.2 kg d, have been obtained in Ref. [1]. Furthermore, as a by-product of measurements at very high energy, a search for non-Paulian transitions in Na and I has been carried out [3]. A recent result is a preliminary study of the annual modulation signature for WIMPs using a statistics of 4549.0 kg d: 3363.8 kg d in winter and 1185.2 kg d in summer [2]. No pulse shape analysis (PSA) to reject the electromagnetic background has been applied in this case, since the intrinsically statistical nature of PSA would affect the annual modulation studies. In any case the annual modulation analysis itself acts as a very effective background rejection, and the results achieved in this annual modulation study agree well with those previously obtained using PSA in Ref. [1]. To determine the possible presence of a yearly variation of the measured rate and, if present, the cross section and mass of a WIMP candidate which could account for it, a time-correlation analysis of all the data, properly considering the time of occurrence and energy of each event, has been performed. A maximum likelihood function

L = Π_ijk e^(−μ_ijk) μ_ijk^(N_ijk) / N_ijk!

has been considered.
Here N_ijk is the number of events in the ith day, kth energy bin and jth detector; it follows Poissonian statistics with mean value given by

μ_ijk = [b_jk + S_0k + S_mk cos ω(t_i − t_0)] M_j Δt_i ΔE ε_jk ,

where a time-independent background contribution, b_jk, has been included in addition to the dark matter signal searched for, with S_0k as its unmodulated part and S_mk as its modulated part. In addition, M_j is the jth detector mass; Δt_i represents the detector running time during the ith day (Δt_i <= 1 day); ε_jk is the analysis-cut efficiency; and ΔE = 1 keV. As a first step, a direct comparison of the sensitivity reached here with those of previous experiments [4,10] has been performed by maximizing the L function with respect to S_mk and to all the (b_jk + S_0k) contributions. The obtained S_mk values support both the interest in pursuing a suitable data analysis of the modulation and the absence of systematics significantly exceeding the statistical errors (see Ref. [2] for details). The maximum likelihood method has then been applied to the data to test whether or not they could be accounted for (in total or in part) by the possible presence of a WIMP with cross section and mass allowed, for instance, for the neutralino [11], assuming a spin-independent (SI) interaction, and, if so, at which C.L. The description of the analysis procedures and the tests of the consistency of the result can be found in Ref. [2]. A minimum of the likelihood function has been found [2] for M_W ≈ 59 GeV and ξσ_p ≈ 1.0 × 10^-5 pb, and an unbiased statistical analysis of all the available data at the same time rejects the no-modulation hypothesis in favour of the hypothesis of modulation with the given M_W and ξσ_p at 90% C.L. Checks on the period and phase offer some support to the result [2], although the limitations of the available running periods have to be considered. It has been shown that the region allowed at 90% C.L.
by the obtained ξσ_p and M_W values is well embedded in the Minimal Supersymmetric Standard Model (MSSM) estimates for the neutralino [11].
With m"o /o , o "0.3 GeV cm\ and p the WIMP cross section on proton. 5'+. N
To control possible systematics, we summarize that: (a) data have been taken first in winter and then in summer; (b) the threshold, the PMT gain and the stability of the electronic line have been monitored both through the features of the 210Pb peak and through the total and single-crystal rates above the single photoelectron threshold; (c) the operating temperature has been controlled; (d) the external environmental radon has been recorded, despite the multiple levels of sealing; (e) the HP N2 flux in the Cu box and its overpressure have been monitored. Furthermore, "off-line" controls have also been performed, such as: (a) the stability of the rate in the 12-20 keV energy region; (b) the C.L. in the overall analysis, accounting also for the single-crystal response; (c) verification of the period and phase of a possible effect by the maximum likelihood method. This last control allows us to restrict our attention mainly to systematics able to induce an annual modulation with a 1 yr period and a phase around 2nd June. We note that the muon modulation measured by the MACRO collaboration has similar features, but this effect turns out to be negligible with respect to the modulated component which, at the given C.L., could be ascribed to a possible modulation according to M_W and ξσ_p [2]. However, we have to remark, considering both the difficulty and the relevance of this kind of search, that only large exposures would allow a stringent conclusion to be reached. Such exposures will be obtained in the near future by our experiment, which is continuously running. The analysis of ~20 000 kg d will soon be released as a new step of this search. Several upgradings of the set-up are already in preparation and/or in progress to improve its sensitivity; in particular, if a new R&D effort, now in progress, for further radiopurification of the NaI(Tl) is successful, the installation is planned to be enlarged up to 250 kg. In this case, a further step toward a one-ton set-up [5] would be considered.
3. The liquid xenon scintillator

Several prototype detectors have been realized in the past (see e.g. [6]); the final set-up has a ~2 l vessel filled with Kr-free liquid xenon enriched in 129Xe at 99.5%. This experiment presents several competitive aspects, such as: (i) a gain in sensitivity with respect to detectors filled with natural xenon, because the enrichment procedure assures Kr-free gas; (ii) a gain of a factor 3 in the effective target mass for spin-dependent coupled WIMPs with respect to natural-xenon-filled detectors of the same volume, because of the enrichment used and, in addition, a very favourable spin factor for 129Xe with respect to other possible targets (with the exception of 19F); (iii) an interesting expected value of the quenching factor and the possibility of statistical pulse shape discrimination between electromagnetic background and highly ionizing particles (see Ref. [6] for discussion). Preliminary results on WIMP-Xe elastic scattering and a preliminary experimental test of the possible search for the annual modulation signature have been obtained [4]. Moreover, a statistics of 823.1 kg d has been used to study WIMP-Xe inelastic scattering [7], improving the limits previously set by the Ejiri group. As a by-product of these measurements, a limit on the electron mean lifetime and on charge conservation has been set [12].
As mentioned above, this will become a stronger constraint when almost a whole year's data taking is considered.
A 40 cc set-up has been realized to measure the xenon quenching factor and to quantify the PSA capability at low energy, using both an Am-B neutron source and the neutron generator at ENEA-Frascati. The results of these measurements and the analysis, also using PSA, of a statistics of 1763.2 kg d were released after this workshop [8]. A data taking with Kr-free xenon enriched in 136Xe is planned on completion of the present program.
4. Conclusions

The DAMA experiments are continuously running deep underground in the Gran Sasso Laboratory. Particular efforts are in progress to study the annual modulation signature for WIMPs, to increase the sensitivity of the existing set-ups by periodical upgradings, and to develop more radiopure detectors.
References

[1] R. Bernabei et al., Phys. Lett. B 389 (1996) 757, and refs. therein.
[2] R. Bernabei et al., ROM2F/97/39, in press in Taup97 Proc.; R. Bernabei et al., Phys. Lett. B 424 (1998) 195.
[3] R. Bernabei et al., Phys. Lett. B 408 (1997) 439.
[4] P. Belli et al., Nuovo Cimento C 19 (1996) 537; Nucl. Phys. B 48 (1996) 62.
[5] R. Bernabei, in: Proc. IDM96, World Scientific, Singapore, 1997, p. 574.
[6] P. Belli et al., Nucl. Instr. and Meth. A 316 (1992) 55; P. Belli et al., Nucl. Instr. and Meth. A 336 (1993) 336, and refs. therein.
[7] P. Belli et al., Phys. Lett. B 387 (1996) 222.
[8] R. Bernabei et al., ROM2F/98/08, to be submitted for publication.
[9] R. Bernabei et al., Astrop. Phys. 7 (1997) 73; R. Bernabei et al., Il Nuovo Cimento 110A (1997) 189; P. Belli et al., ROM2F/98/12, to be submitted for publication.
[10] M.L. Sarsa et al., Phys. Lett. B 386 (1996) 458.
[11] A. Bottino et al., Phys. Lett. B 423 (1998) 109; A. Bottino et al., Phys. Lett. B 402 (1997) 113.
[12] P. Belli et al., Astrop. Phys. 5 (1996) 217.
Physics Reports 307 (1998) 275—282
Dark matter experiments at the UK Boulby Mine UK Dark Matter Collaboration P.F. Smith *, N.J.T. Smith , J.D. Lewin , G.J. Homer , G.J. Alner , G.T.J. Arnison , J.J. Quenby, T.J. Sumner, A. Bewick, T. Ali, B. Ahmed, A.S. Howard, D. Davidge, M. Joshi, W.G. Jones, G. Davies, I. Liubarsky, R.A.D. Smith, N.J.C. Spooner, J.W. Roberts, D.R. Tovey, M.J. Lehner, J.E. McMillan, C.D. Peak, V.A. Kudryatsev, J.C. Barton Rutherford Appleton Laboratory, Chilton, OX11 0QX, UK Imperial College, Astrophysics, Blackett Laboratory, London SW7 2BZ, UK Imperial College, High Energy Physics, Blackett Laboratory, London SW7 2BZ, UK University of Sheffield, Houndsfield Road, Sheffield S3 7RH, UK Birkbeck College, London WC1E 7HX, UK
Abstract

The current status is summarised of dark matter searches at the UK Boulby Mine based on pulse shape discrimination in NaI, together with further plans for international collaboration on detectors based on nuclear recoil discrimination in liquid and gaseous xenon. 1998 Elsevier Science B.V. All rights reserved. PACS: 95.35.+d
1. Boulby Mine and programme objectives

The Boulby Mine is a working salt and potash mine in the north-east of England. The mine operators, Cleveland Potash Ltd., have provided access to several disused tunnels and caverns in low-background salt rock as a permanent location for the UK underground physics programme. These have been provided with power, lighting, telephone, fibre-optic data links, flooring and control rooms as a basic infrastructure for all experiments, and shielding systems have been installed consisting of (a) a 6 m tank of purified water and (b) a number of shielding castles built from a 20 cm outer lead shield and a 10 cm copper inner shield. * Corresponding author. E-mail:
[email protected]. 0370-1573/98/$ — see front matter 1998 Elsevier Science B.V. All rights reserved. PII: S 0 3 7 0 - 1 5 7 3 ( 9 8 ) 0 0 0 4 9 - 0
P.F. Smith et al. / Physics Reports 307 (1998) 275—282
The object is to search for a continuous spectrum of nuclear recoil pulses with energies <50 keV from WIMP collisions. This requires methods of distinguishing these from the much higher gamma background. Our current and planned programme includes both NaI and Xe targets to cover the WIMP mass range 10-1000 GeV.
The Boulby facility has so far been used specifically to run WIMP searches based on pulse shape discrimination in sodium iodide targets, summarised in Section 2. The available space is now being increased to accommodate liquid xenon experiments to be carried out in collaboration with the UCLA/Torino groups [1], as described further in Section 3. More advanced future liquid xenon detectors may involve a larger collaboration including the ITEP group [2]. This is referred to as the ZEPLIN programme. The available space would also accommodate much larger detectors with directional sensitivity, based on observation of tracks in low-pressure gases. Studies of this are in progress in collaboration with Temple, UCSD, and other groups [3,4]. This scheme is named DRIFT (directional recoil identification by formation of tracks).
The mine contains many kilometres of disused salt caverns, some of which would be available for neutrino physics experiments; these are outlined in a separate paper. The present paper reports the current and planned dark matter programme.
2. Dark matter searches with NaI targets

Experiments have been based on NaI crystals, 2-10 kg in mass, observed with two photomultipliers and silica light guides, all materials being selected for lowest activity. Calibration with neutron and gamma sources shows that nuclear recoil pulses have a decay constant about 70% that of Compton gamma interactions. The pulse time constant distributions have a significant width, due to the small number of photoelectrons (typically 3-6 per keV), so that at low energies the nuclear recoil and gamma distributions overlap. An initial phase of work with a water-shielded 5 kg crystal showed that a factor 10-30 below gamma background could be set as a statistical limit on a population of the shorter pulses [5].
The stability and resolution of this detector have been improved by larger PMTs, shorter light guides, and a stabilised and reduced operating temperature (10°C ± 0.1°C). Gamma sources are lowered automatically into the shielding tank to provide energy calibration once a week and Compton calibration for 5 h each day. A running period of 4000 h (excluding calibration periods) between August 1996 and October 1997 has been analysed. The improved resolution reveals a small population of pulses of shorter time constant (mean ≈230 ns), distinct from the gamma time constant distribution (mean ≈360 ns) and close to the time constant observed for neutron-induced recoils. The shorter pulses are absent in the periods of Compton calibration (Fig. 1), suggesting that they are not an analysis artefact. For further confirmation, they are also seen in a second crystal (made from the same material) but with poorer resolution. The shorter pulses are otherwise normal in shape, with photoelectrons distributed equally between PMTs, so they could in principle be low-energy alpha events. Fig. 2 shows the energy spectrum of these events together with the spectrum of normal high-energy alphas (due to U and Th impurities) from a data run extended to 5 MeV.
Their number appears much larger than would be expected from photodisintegration of iodine by gammas above 2.6 MeV. Neutron events at this rate are excluded by the water shielding and low flux of muons at 1100 m.
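The statistical nature of the time-constant separation described above can be illustrated with a toy model. All numerical choices here (Gaussian shapes, the photoelectron scaling of the widths, and a 290 ns cut) are assumptions for illustration, not the collaboration's fitted response:

```python
import random
import math

# Toy model of NaI pulse-shape discrimination (illustrative only; the
# distribution shapes and widths are assumptions, not measured responses).
# Each event yields a fitted scintillation time constant: nuclear recoils
# cluster near ~230 ns and gamma (Compton) events near ~360 ns, with
# widths driven by photoelectron statistics at low energy.

random.seed(1)

def draw_tau(mean_ns, n_photoelectrons):
    # Relative width shrinks as 1/sqrt(N_pe), mimicking photon statistics.
    sigma = mean_ns / math.sqrt(n_photoelectrons)
    return random.gauss(mean_ns, sigma)

def classify(tau_ns, cutoff_ns=290.0):
    """Label an event 'nuclear' if its time constant falls below the cut."""
    return "nuclear" if tau_ns < cutoff_ns else "gamma"

n_pe = 25  # e.g. ~5 keV at ~5 photoelectrons/keV (assumed)
gammas = [draw_tau(360.0, n_pe) for _ in range(10000)]
recoils = [draw_tau(230.0, n_pe) for _ in range(10000)]

gamma_leakage = sum(classify(t) == "nuclear" for t in gammas) / len(gammas)
recoil_efficiency = sum(classify(t) == "nuclear" for t in recoils) / len(recoils)
print(f"gamma leakage past cut:    {gamma_leakage:.3f}")
print(f"nuclear recoil efficiency: {recoil_efficiency:.3f}")
```

The overlap of the two distributions at low photoelectron number is why the discrimination is statistical rather than event-by-event, as the text notes.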
Fig. 1. Time constant distributions (normalised to unity) for the 5 kg NaI crystal: (upper graph) background distribution showing an additional population of shorter pulses; (lower graph) Compton calibration with a Co source (5 h each day).
Fig. 2. Energy spectrum of unidentified events compared with high-energy alpha spectrum.
There is so far no explanation of this additional population of events. Further data runs are being made, with different sized crystals, to establish whether the spectrum and the event rate per unit mass are similar for all crystals. It is of interest that there appears to be a significant summer-winter difference. In the energy range 20-60 keV there are approximately 700 events in 70 days in the summer period, compared with 600 events in 70 days in the winter period. This demonstrates that care is needed in investigating annual modulation, since any spurious signal (for example alphas) could also show differences over several-month periods for a variety of reasons. A second winter run is currently in progress.
The energy spectrum of the anomalous events falls less rapidly than expected for a dark matter spectrum based on a Galactic velocity dispersion of 230 km/s. Summing predicted Na and I spectra with appropriate form factors [6], approximate agreement can be achieved only with a high velocity dispersion >300 km/s for the Galactic dark matter, together with a dark matter particle mass >200 GeV. This differs from the mass <100 GeV deduced from the marginally significant annual modulation reported by the Rome group [7]. Thus we continue to search for an explanation in terms of normal particles. A number of ideas are being investigated to further improve the performance of NaI detectors, including optical coupling of unencapsulated crystals with liquid paraffin [4].
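As a sanity check on the quoted seasonal counts, simple Poisson statistics (using only the numbers given in the text) put the summer-winter difference at roughly 2.8 standard deviations: suggestive but far from conclusive, consistent with the cautious reading above.

```python
import math

# Summer vs. winter event counts from the text: ~700 and ~600 events,
# each in a 70-day period, in the 20-60 keV range.
summer, winter = 700, 600

diff = summer - winter
# Poisson error on the difference of two independent counts.
sigma = math.sqrt(summer + winter)
significance = diff / sigma

print(f"difference: {diff} events")
print(f"statistical significance: {significance:.1f} sigma")  # about 2.8 sigma
```

This level of significance is exactly the regime where a spurious population (such as the unexplained alphas) could mimic or mask a modulation, hence the need for the second winter run.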
3. The sodium iodide diagnostic array (‘NaIaD’)

The above situation emphasises the need for dark matter experiments to have good diagnostic capability, to allow investigation of spurious events which may mimic a dark matter signal. In particular, one needs:
(a) a principal target with good energy resolution and minimum background;
(b) targets of different size, to investigate proportionality with mass;
(c) targets with data acquired over different energy ranges (e.g. 0-50 keV, 0-500 keV, 0-5 MeV), to investigate the energy cut-off and higher-energy background, such as MeV-range alphas;
(d) targets of the same size in different shielding systems, to investigate internal origin;
(e) a larger multi-crystal array for faster data acquisition and an annual modulation search;
(f) where possible, different target nuclei, or isotopic variations, to study spin-dependence.
Our present underground array contains (a), (b), (c), (d) but not yet (e), due to funding limits (Fig. 3). Item (f) is not currently possible with NaI targets (since the Na and I recoils cannot be distinguished), but can be achieved more easily in experiments using Ge [8] and Xe [9] targets, where stable odd and even isotopes can be separated.
4. Liquid xenon detectors

Liquid xenon allows a variety of ways of separating nuclear recoils from background, owing to the production of both scintillation light and ionisation:
(i) the ionisation may be allowed to recombine promptly, adding to the scintillation light and giving differences in scintillation pulse shape [10];
(ii) an electric field can be used to prevent recombination, the charge being drifted to create a second scintillation pulse S2 in addition to the primary pulse S1; the ratio S2/S1 differs for nuclear recoils and gammas [11];
(iii) the charge in (ii) can be extracted from the liquid surface to the gas phase, and accelerated to give a larger secondary scintillation pulse [1];
(iv) all signals may be enhanced by TEA amplification, the shape of the initial charge distribution producing a difference in geometric light distribution [1].
Fig. 3. Array of NaI detectors for diagnostic studies of signal-like events.
These permit a much greater degree of background discrimination than in the case of NaI pulse shape. As a first step we are constructing a 5 kg detector based on (i) for running experience at Boulby (Fig. 4). Following this, the objective is to construct and operate a 20—30 kg detector based on method (iii) in collaboration with the UCLA and Torino groups (Fig. 5). This is planned to be running by the year 2000. Each detector will be located inside a 30 cm thick liquid scintillator Compton veto, to reduce both photomultiplier and ambient background. Discussions are in progress for a further detector based on method (iii) or other design variations [2].
5. The Xe diagnostic array (ZEPLIN collaboration) As in the case of NaI, it is essential to be able to view any candidate signal or anomalous event population in Xe in several different detectors, in order to investigate its behaviour and origin. The above principles allow not only different types of Xe detector, but also diagnostic procedures in a given detector, in particular varying the electric field used to drift the charge. It would also be possible to run the detectors with different Xe isotopes. Fig. 6 shows the array of detectors which
Fig. 4. Single-phase liquid Xe detector.
Fig. 5. Two-phase ZEPLIN detector in scintillator veto.
Fig. 6. Proposed array of Xe detectors for signal diagnosis.
could result from our programme and which would provide varying target mass and discrimination technique, giving excellent overall diagnostic capability. Included in Fig. 6 is the possibility of ultimately adding a Xe gas target to verify directionality in the Galaxy, through the collaborative DRIFT programme mentioned in Section 1. In this connection it is of interest that the Boulby Mine happens to be located at the ideal latitude for directional experiments, the rotation of the earth automatically providing orientations parallel, anti-parallel and perpendicular to the Galactic motion for a detector placed with axis horizontal relative to the earth’s surface [12].
Acknowledgements We are grateful to Cleveland Potash Ltd., UK for their co-operation in permitting the use of the surface and underground facilities of the Boulby Mine for this programme.
References
[1] H. Wang, Phys. Rep. 307 (1998), this issue.
[2] D. Akimov et al., this conference.
[3] P.F. Smith et al., this conference.
[4] N.J.C. Spooner, Phys. Rep. 307 (1998), this issue.
[5] P.F. Smith et al., Phys. Lett. B 379 (1996) 299.
[6] J.D. Lewin, P.F. Smith, Astropart. Phys. 6 (1996) 87.
[7] P. Belli et al., this conference.
[8] R. Schnee et al., this conference.
[9] P. Belli et al., Nuovo Cimento 19C (1996) 537.
[10] G.J. Davies et al., Phys. Lett. B 320 (1994) 395.
[11] P. Benetti et al., Nucl. Instr. Meth. Phys. Res. A 327 (1993) 203.
[12] M. Lehner, K. Buckland, personal communication.
Physics Reports 307 (1998) 283—290
Results and status of the Cryogenic Dark Matter Search (CDMS) R.W. Schnee *, D.S. Akerib , P.D. Barnes Jr, D.A. Bauer, P.L. Brink, B. Cabrera, D.O. Caldwell, R.M. Clarke, P. Colling, M.B. Crisler, A. DaSilva, A.K. Davies, B.L. Dougherty, S. Eichblatt, K.D. Irwin, R.J. Gaitskell, S.R. Golwala, E.E. Haller, J. Jochum, W.B. Knowlton, V. Kuzminov , S.W. Nam, V. Novikov , M.J. Penn, T.A. Perera , R.R. Ross , B. Sadoulet , T. Shutt, A. Smith, A.H. Sonnenschein, A.L. Spadafora, W.K. Stockwell , S. Yellin, B.A. Young Department of Physics, Case Western Reserve University, Cleveland, OH 44106, USA Center for Particle Astrophysics, University of California, Berkeley, CA 94720, USA Lawrence Livermore National Laboratory, Livermore, CA 94550, USA Department of Physics, University of California, Santa Barbara, CA 93106, USA Department of Physics, Stanford University, Stanford, CA 94305, USA Fermi National Accelerator Laboratory, Batavia, IL 60510, USA Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Department of Material Science and Mineral Engineering, University of California, Berkeley, CA 94720, USA Baksan Neutrino Observatory, Institute for Nuclear Research, Russian Academy of Science, Russia
Department of Physics, University of California, Berkeley, CA 94720, USA Department of Physics, Santa Clara University, Santa Clara, CA 95053, USA
Abstract

The Cryogenic Dark Matter Search experiment uses cooled germanium and silicon detectors for a direct search for weakly interacting massive particles in our Galaxy. The novel detectors allow a high degree of background rejection by discriminating between electron and nuclear recoils through the simultaneous measurement of the energy deposited in phonons and ionization. Exposures on the order of one kilogram-day from initial runs of our experiment yield (preliminary) upper limits on the WIMP-nucleon cross section that are comparable to much longer runs of other experiments. Current and future runs promise significant improvement, primarily due to improved detectors and reduced surface-electron backgrounds. 1998 Published by Elsevier Science B.V. All rights reserved. PACS: 95.35.+d
* Corresponding author. E-mail:
[email protected]. 0370-1573/98/$ — see front matter 1998 Published by Elsevier Science B.V. All rights reserved. PII: S 0 3 7 0 - 1 5 7 3 ( 9 8 ) 0 0 0 7 1 - 4
R.W. Schnee et al. / Physics Reports 307 (1998) 283—290
1. Introduction

Observations of stars and galaxies over a large range of distance scales indicate that most of the matter in the universe is “dark,” seen only through its gravitational effects [1,2]. Further observations imply that much of the dark matter is non-baryonic and “cold” [3]. The Cryogenic Dark Matter Search (CDMS) experiment is an attempt to directly detect WIMPs, or weakly interacting massive particles, a generic candidate for non-baryonic cold dark matter.
The experimental challenge is defined in part by considerations of the early universe and the properties of our Galaxy. Constraints from the thermal production of WIMPs in the early universe that yield a dominant WIMP density today are satisfied by particles with masses in the 10-1000 GeV/c² range and cross sections on the scale of the weak interaction [4]. This range of particle properties suggests that supersymmetry or other extensions to the standard model may provide the dark matter [5]. If WIMPs exist, they would now make up a major component of the dark matter in our own galactic halo [6]. For a standard halo comprised of WIMPs with a Maxwellian velocity distribution characterized by v₀ = 270 km/s and a mass density of 0.4 GeV/cm³, the expected rate for WIMP-nucleus scattering is in the range 1-0.001 events per kilogram of detector per day, and the expected recoil energy is as low as 1 keV [5,7].
Despite considerable worldwide efforts, WIMPs have not yet been detected. Ultimately, experiments have been dominated by irreducible backgrounds, primarily photons and electrons from radioactive contamination or activation. Further progress can be made by discriminating these background events from WIMP events. The CDMS experiment allows rejection of 99% of the photon background by using detectors that simultaneously measure the recoil energy in both phonon- and charge-mediated signals.
The ratio Y of the two measurements distinguishes electron-recoil events due to background photons from nuclear-recoil events such as those due to WIMPs, since nuclear recoils are less ionizing. Following a decade-long development effort, detectors began running in a low-background environment in 1996. Our early data runs yield preliminary upper bounds on the WIMP-nucleon cross section that are comparable to much longer exposures of other experiments, illustrating the power of this technique.
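To see why keV-scale recoil energies arise, a back-of-envelope kinematic sketch helps: for elastic scattering, the maximum recoil energy is E_max = 2μ²v²/M_N, with μ the WIMP-nucleus reduced mass. The WIMP mass and velocity used below are illustrative assumptions, not CDMS inputs.

```python
# Illustrative kinematics for WIMP-nucleus elastic scattering (masses in
# GeV/c^2).  E_max = 2 mu^2 v^2 / M_N for a head-on collision, with mu
# the WIMP-nucleus reduced mass.

C_KM_S = 299792.458  # speed of light in km/s

def max_recoil_keV(m_wimp_GeV, m_nucleus_GeV, v_km_s):
    mu = m_wimp_GeV * m_nucleus_GeV / (m_wimp_GeV + m_nucleus_GeV)
    beta = v_km_s / C_KM_S
    e_gev = 2.0 * mu**2 * beta**2 / m_nucleus_GeV
    return e_gev * 1e6  # GeV -> keV

# Assumed: 100 GeV WIMP on a Ge nucleus (A ~ 73 -> ~68 GeV) at 270 km/s.
print(f"{max_recoil_keV(100.0, 68.0, 270.0):.0f} keV")  # prints 39 keV
```

Typical recoils lie well below this head-on maximum, consistent with the keV-scale expectation and the sub-keV-sensitive thresholds discussed above.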
2. Description of the experiment

Key to the experiment is the simultaneous measurement of the recoil energy ΔE in both phonon-mediated signals and ionization. The ionization measurement is made by applying a small (≈2 V/cm) bias voltage across the two sides of the semiconductor targets. Electron-hole pairs are collected efficiently throughout the bulk of the detectors, resulting in FWHM energy resolutions as good as 640 eV. Unfortunately, trapping sites near detector surfaces result in a 10-30 μm-thick “dead layer” where charge collection is incomplete [9].
The CDMS detectors employ two distinct technologies for performing the phonon-mediated measurement of the energy deposited in a scattering event [10]. One technology uses two neutron-transmutation-doped (NTD) germanium thermistors eutectically bonded to a 1.3 cm thick,
(Footnotes: Reviewed in Ref. [8]. CDMS papers can be found at cfpa.berkeley.edu/group/directdet/gen.html.)
6 cm diameter, 165 g cylindrical crystal of high-purity germanium. When the device is in contact with a 20 mK bath, monitoring the thermistor resistance gives the temperature rise ΔT = C⁻¹ΔE, where C is the heat capacity. The resulting energy measurement has a FWHM resolution of 650 eV at 10 keV. The use of two NTDs permits the rejection of events that originate in an NTD and would otherwise mimic the small ionization of a nuclear recoil.
The other technology uses quasiparticle-trap-assisted electrothermal-feedback transition-edge sensors (QETs). Tungsten meanders on a surface of a cooled 1 cm thick cylindrical detector are held in the middle of their superconducting transition by electrothermal feedback using a voltage bias. Deposited energy drives the tungsten towards normal conduction, producing a current signal. The time integral of this signal is proportional to the deposited energy, which is measured to 650 eV (FWHM) in our 100 g silicon targets; the technology is now being transferred to germanium targets. Since the phonon collection time is fast (a few microseconds), relative-timing information from the four sensors on a device allows a two-dimensional determination of the event position to a few millimeters. Timing information also promises the ability to reject events on the top and bottom surfaces of the detectors, where the charge dead layer may otherwise compromise the detector discrimination capability.
The capability of the detectors to distinguish photon backgrounds from WIMP-induced nuclear recoils is demonstrated using photon and neutron calibration sources; the neutrons serve as test particles to induce nuclear-recoil events. In separate calibration runs the detectors are alternately exposed to photons from a ⁶⁰Co source and neutrons (as well as photons) from a ²⁵²Cf source. Figs.
1 and 2 show histograms of the charge yield Y, i.e. the ratio of the charge-mediated energy measurement to the phonon-mediated energy measurement, for the NTD and QET detectors, respectively. These data show that 99% of photon-induced recoils are rejected while high acceptance is
Fig. 1. NTD-based germanium detector: charge yield in the recoil-energy range 15-45 keV for ⁶⁰Co data (black) and ²⁵²Cf data (grey). Photon rejection of 99% is obtained for a nuclear-recoil acceptance of 98%.
Fig. 2. QET-based silicon detector: charge yield in the recoil-energy range 10-30 keV for ²⁵²Cf data. Photon rejection of 99% is obtained for a nuclear-recoil acceptance of 95%.
maintained for nuclear recoils. The events between the main recoil peaks are due to electrons that deposit energy in the dead layer and thus have a low charge yield relative to electron recoils in the bulk.
The remainder of the experimental apparatus consists of specialized low-activity detector-housing modules mounted in a shielded cryostat made from a set of nested copper cans. The cans are cooled by conduction through a set of concentric horizontal tubes extending in a dog-leg from a dilution refrigerator. The cryostat is shielded externally with lead, to reduce the flux of photons, and polyethylene, to reduce the flux of neutrons [11]. Samples of all materials internal to the shield are carefully screened in a low-background HPGe counting facility for radiocontaminants. Further shielding close to the detectors is achieved with ancient ultra-low-activity lead, which has a low concentration of the beta emitter ²¹⁰Pb. Owing to the complexity of the detectors and cryostat, the first phase of the experiment is being performed at a shallow site at Stanford University, at a depth of 17 m water-equivalent (mwe). Since the cosmic-ray muon flux is reduced by only a factor of 5 at this depth, further rejection of backgrounds is achieved with a hermetic plastic-scintillator muon veto.
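The thermistor relation ΔT = C⁻¹ΔE quoted above can be made concrete with a rough estimate; the idealised Debye heat capacity used here is a textbook approximation, not a CDMS calibration value.

```python
import math

# Rough Debye-model estimate of the temperature rise for a 10 keV deposit
# in a 165 g Ge crystal at 20 mK, following dT = dE / C.  Idealised: the
# Debye law C = (12 pi^4 / 5) N k_B (T/theta_D)^3 ignores thermistor
# coupling and real-device heat capacities.

K_B = 1.381e-23          # Boltzmann constant, J/K
N_A = 6.022e23           # Avogadro constant, 1/mol
THETA_D_GE = 374.0       # Debye temperature of germanium, K

mass_g, molar_mass = 165.0, 72.6
n_atoms = mass_g / molar_mass * N_A

T = 0.020                # 20 mK bath
C = (12 * math.pi**4 / 5) * n_atoms * K_B * (T / THETA_D_GE)**3  # J/K

dE = 10e3 * 1.602e-19    # 10 keV in joules
dT = dE / C
print(f"C  = {C:.1e} J/K")
print(f"dT = {dT*1e6:.1f} microkelvin")
```

A microkelvin-scale signal on a millikelvin baseline is why the detectors must sit in a dilution refrigerator and be read out with highly sensitive thermistors.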
3. Results

3.1. Data sets

Several data runs have been taken in the low-background facility over the past two years, indicating that the experiment can successfully operate over months-long timescales with energy resolutions comparable to those for calibration data. The rates of photon and neutron backgrounds have been consistent with or less than the expected levels, confirming that our goals at the shallow site are attainable and that our screening procedures are effective in limiting these sources of background.
The rate versus energy from a 1.60 kg d exposure of a 165 g NTD-based germanium detector shows a number of features (Fig. 3). The uppermost curve is the full data set (following event-quality cuts) and is dominated by photon events coincident with the muon veto; the peak at 9 keV is due to fluorescence of copper by muon-related photons. The middle curve, events in anticoincidence with the muon veto, represents a factor of 20 reduction in rate. The line at 10 keV is consistent with internal ⁶⁸Ge, which undergoes electron capture and leads to a Ga X-ray. The broad distribution below 18 keV is due to electrons from tritium decay interacting on the surface of the detector. We have also observed a tritium distribution in events intrinsic to the NTD-Ge thermistors, and have since demonstrated that tritium diffuses out of the NTDs at 550°C, similar to the temperature used during the eutectic bonding. It should therefore be possible to control this contamination in future detectors by baking the NTDs prior to bonding.
A cut on charge yield to select nuclear recoils, which is based on a fit to neutron calibration data, results in the solid histogram in Fig. 3. At low energy, the spectrum is dominated by the tritium events that have low charge yield and survive the cut. Above the tritium endpoint, the remaining events are likely due primarily to beta emitters in surface contaminants such as ⁴⁰K from human perspiration or ²¹⁰Pb from radon plating.
Further steps are now being taken to control the contamination by a surface etch of the detectors late in the fabrication process and through more
Fig. 3. Event rate versus recoil energy for a 1.60 kg d exposure of a 165 g NTD-based germanium detector. The shaded histogram comprises events that pass the nuclear recoil cut.
Fig. 4. Event rate versus recoil energy for a 0.52 kg d exposure of a 100 g QET-based silicon detector (prior to 10-fold increase in sensitivity).
careful handling following the final etch (e.g., storage in dry nitrogen or vacuum). We also expect to reduce our susceptibility to beta sources external to the detectors by self-shielding them in a close-packed geometry. Finally, work is continuing on reducing the dead layer so that surface-electron events are distinct from nuclear-recoil events; current results appear promising.
The energy resolution of the QET-based detector quoted above reflects a recent 10-fold increase in phonon collection. Prior to this improvement, an exposure of 0.52 kg d was obtained with a previous 100 g silicon detector. Fig. 4 shows rate versus energy for these data. As with the germanium data, the muon veto reduces the rate by a factor of 20. The number of events in the nuclear-recoil region above the threshold of 30 keV is consistent with the expected number of misidentified photons.
Fig. 5. The WIMP-nucleon cross section for spin-independent couplings versus WIMP mass. Upper limits on the cross section are shown for earlier runs of CDMS (preliminary), published results using NaI scintillators and Ge diodes, and the goals for CDMS and the CRESST experiment. The shaded region in the lower part of the graph indicates the region where supersymmetric particles could be the dominant dark matter.
3.2. Preliminary dark matter limits

The rates of events consistent with nuclear recoils from the two data sets described above yield upper limits (calculated following Ref. [7]) on the WIMP-nucleon cross section for spin-independent couplings. Fig. 5 shows these limits versus WIMP mass along with other experimental bounds [12,13]. Also shown is the region expected for minimal supersymmetric models (MSSM) that give a relic density greater than 10% of the critical density for a Hubble parameter of 50 km/s/Mpc [5]. Although our exposure is far less than those of the previous experiments, the sensitivity is comparable, clearly demonstrating the advantage of background discrimination.
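The logic of turning an observed event count into a rate limit can be sketched with the simplest version of the counting statistics. The full method of Ref. [7] folds in the recoil spectrum, efficiencies and nuclear form factors; the event counts used below are placeholders, not CDMS results.

```python
import math

# Simplified Poisson upper limit: for n observed events with no background
# subtraction, the 90% CL upper limit mu_90 on the expected count solves
#   sum_{k=0..n} mu^k e^(-mu) / k! = 0.10.
# Solved here by bisection.

def poisson_ul(n_obs, cl=0.90, tol=1e-8):
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = sum(mu**k * math.exp(-mu) / math.factorial(k)
                for k in range(n_obs + 1))
        if p > 1 - cl:
            lo = mu   # tail probability still too large -> raise mu
        else:
            hi = mu
    return 0.5 * (lo + hi)

exposure_kg_d = 1.60  # e.g. the Ge exposure quoted above (placeholder use)
for n in (0, 2):
    mu90 = poisson_ul(n)
    print(f"n={n}: mu_90 = {mu90:.2f} events "
          f"-> rate < {mu90/exposure_kg_d:.2f} events/kg/day")
```

Dividing such a rate limit by the predicted rate per unit cross section for each WIMP mass is what traces out the exclusion curves of Fig. 5.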
4. Current status and plans Earlier runs have revealed that the primary background source presently limiting our sensitivity consists of low-energy electrons that suffer reduced charge collection. For the current data run, the
NTD detectors have been re-etched to remove surface contamination. The improved QET detector currently running should indicate the promise of fiducial-volume cuts using timing information. For future runs, improved cleanliness procedures, increased detector self-shielding, and improved charge collection technologies all should help minimize the surface-electron background and restore the full effectiveness of our event discrimination technique. Once this has been accomplished, we expect to be limited by the cosmogenic neutron backgrounds at the Stanford site with an exposure of about 100 kg d. To obtain this exposure we will instrument two silicon and four germanium devices with QET readout and six germanium devices with NTD readout, for a total of 200 g of silicon and 2 kg of germanium. Comparison of backgrounds in the Ge and Si will provide information on the backgrounds, especially neutrons. Multiple scattering of neutron backgrounds in the detector arrays will also provide a handle for background subtraction. In order to take full advantage of these advanced detectors, we plan to continue the experiment at the Soudan Mine. The 2000 mwe overburden at Soudan attenuates cosmic-ray muons by some 5 orders of magnitude, which will greatly reduce cosmogenic activity in the apparatus and greatly reduce the neutron background. Fig. 5 shows the expected sensitivity for a 100 kg d exposure at the Stanford site and a 5000 kg d exposure at the Soudan site. For reference, the projected sensitivity of the CRESST experiment is also included [14]. As seen in the figure, the CDMS experiments will explore a significant new region of WIMP parameter space, and, in particular, a region where supersymmetric models could provide the dark matter. Acknowledgements This work was supported by the Center for Particle Astrophysics, an NSF Science and Technology Center operated by the University of California, Berkeley, under Cooperative Agreement No. 
AST-91-20005, and by the Department of Energy under contracts DE-AC03-76SF00098, DE-FG03-90ER40569, and DE-FG03-91ER40618. We gratefully acknowledge the skillful and dedicated efforts of the technical staffs at LBNL, Stanford University, UC Berkeley, and UC Santa Barbara.

References
[1] V. Trimble, Ann. Rev. Astron. Astrophys. 25 (1987) 425.
[2] E.W. Kolb, M.S. Turner, The Early Universe, Addison-Wesley, Reading, MA, 1988.
[3] K.A. Olive, astro-ph/9707212.
[4] B.W. Lee, S. Weinberg, Phys. Rev. Lett. 39 (1977) 165.
[5] G. Jungman, M. Kamionkowski, K. Griest, Phys. Rep. 267 (1996) 195.
[6] E.I. Gates, M.S. Turner, Phys. Rev. Lett. 72 (1994) 2520.
[7] P.F. Smith, J.D. Lewin, Phys. Rep. 187 (1990) 203.
[8] N. Booth, B. Cabrera, E. Fiorini, Ann. Rev. Nucl. Part. Sci. 46 (1996) 471.
[9] T. Shutt et al., Proc. 7th Int. Workshop on Low Temperature Detectors, Munich, Max Planck Institute of Physics, 1997, p. 224.
[10] T. Shutt et al., Phys. Rev. Lett. 69 (1992) 3425; R.J. Gaitskell et al., Proc. 7th Int. Workshop on Low Temperature Detectors, Munich, Max Planck Institute of Physics, 1997, p. 221; R.M. Clarke et al., ibid., p. 229; A.K. Davies et al., ibid., p. 227.
[11] A. Da Silva et al., Nucl. Instr. and Meth. A 354 (1995) 553.
[12] D.O. Caldwell et al., Phys. Rev. Lett. 65 (1990) 1305; M. Beck et al., Phys. Lett. B 336 (1994) 141; E. Garcia et al., Phys. Rev. D 51 (1995) 1458; A. Morales, private communication.
[13] R. Bernabei et al., Phys. Lett. B 389 (1996) 757.
[14] M. Buhler et al., Nucl. Instr. and Meth. A 370 (1996) 237.
Physics Reports 307 (1998) 291—295
Direct detection of WIMPs with the HDMS experiment and new WIMP limits from the Heidelberg-Moscow experiment
L. Baudis*, J. Hellmig, G. Heusser, H.V. Klapdor-Kleingrothaus, B. Majorovits, Y. Ramachers, H. Strecker
Max-Planck-Institut für Kernphysik, Heidelberg, Germany
Abstract

The Heidelberg Dark Matter Search (HDMS) experiment consists of two HPGe detectors operated in a unique configuration. The anticoincidence between the detectors will act as an effective background reduction method, allowing a sensitivity comparable to those of planned cryogenic experiments to be reached. The experiment is situated in the Gran Sasso Underground Laboratory and is now starting to take data. New upper limits on WIMP-nucleon cross sections are obtained after 0.53 kg yr of measurement with one of the enriched ⁷⁶Ge detectors of the Heidelberg-Moscow experiment. An energy threshold of 8.8 keV and a background level of 0.05 events/(kg yr keV) in the energy region between 8.8 and 100 keV were reached. The derived limits are at present the most stringent ones for spin-independent interactions obtained using only raw data, without background subtraction. 1998 Elsevier Science B.V. All rights reserved. PACS: 95.35.+d
1. Introduction
WIMPs can be directly detected via elastic scattering off nuclei in a low-level underground detector [1—3]. The limiting factor is the low signal-to-noise ratio due to the very low expected event rates in the relevant energy region (below 50 keV). Low background and a low energy threshold are thus two requirements for any experimental effort to directly detect WIMPs.
L. Baudis et al. / Physics Reports 307 (1998) 291—295
2. The HDMS detector
The new detector of the HDMS experiment is designed to further reduce the already very low background achieved in the Heidelberg—Moscow experiment [4,14]. A small Ge crystal (natural Ge for the prototype, isotopically enriched Ge for the second stage) is surrounded by a well-type Ge crystal, the configuration being run in anticoincidence mode. Both crystals are mounted in a common cryostat system. To suppress leakage currents across the surfaces, a thin insulating layer (1 mm) is placed between them (see Fig. 1). Two effects will contribute to the background reduction. First, the outer detector will act as an active veto shield for the inner detector, reducing the Compton background from multiply scattered photons. The relative background suppression due to the anticoincidence was estimated in a Monte Carlo simulation (using the GEANT code and the schematic geometry of Fig. 1) to be of the order of 10—20 [5]. Second, the overall background level is expected to be reduced relative to usual Ge ionization detectors, since the immediate vicinity of the measurement crystal consists of one of the most radiopure materials known, high-purity germanium. Estimates of the absolute background suppression will be possible only in the low-background environment of the Gran Sasso Underground Laboratory, the final location of the experiment.
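The veto logic behind this estimate can be illustrated with a toy Monte Carlo; this is a sketch, not the GEANT simulation of the paper, and the coincidence probability is an assumed input:

```python
import random

def toy_veto_suppression(n_events, p_coincident, seed=1):
    """Toy model of the anticoincidence: a Compton background event in the
    inner crystal is vetoed whenever the scattered photon also deposits
    energy in the surrounding well-type crystal (probability p_coincident).
    Returns the suppression factor = all events / surviving events."""
    rng = random.Random(seed)
    survivors = sum(1 for _ in range(n_events) if rng.random() >= p_coincident)
    return n_events / survivors

# A coincidence probability of ~0.9-0.95 (an assumed number, not one from
# the paper) reproduces a suppression of the order of 10-20.
print(round(toy_veto_suppression(100_000, 0.93), 1))
```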
3. First tests in the low-level laboratory in Heidelberg
The detector was in Heidelberg from August to October 1997, and first tests of its general performance, electronics, data acquisition system and anticoincidence
Fig. 1. Schematic view of the HDMS detector. The well-type crystal is an n-type semiconductor, the small measurement crystal a p-type.
efficiency were done. The crystal performance has been checked with standard calibration sources (²²⁸Th, ¹³³Ba, ²⁴¹Am, ⁶⁰Co) as well as with the background in the Low Level Laboratory in Heidelberg. The sensitivity of the detectors as a function of their geometry was tested by a scan measurement with a collimated ¹³³Ba source. Both detectors show a good energy resolution (1.87 keV at 1332 keV for the inner detector and 4.45 keV at 1332 keV for the outer one) and a low energy threshold (2.5 keV inner detector, 7.5 keV outer detector). By looking at the energy deposited in one detector as a function of the energy deposited in the other (for example with a barium or cobalt source), we found a pick-up between the two detectors. There is a strong linear correlation between the pick-up in one detector and the energy released in the other one. The width of this correlation is determined by the energy resolution of the detectors. The linear correlation can be corrected off-line in order to obtain the true anticoincidence spectra. Another possibility would be an extra grounded shield between the two detectors. We decided against this hardware solution because of the risk of constructing a new detector holder and of introducing further material between the crystals. An off-line correction requires the stability of the slope of the linear correlation; this stability has been monitored and confirmed. To estimate the relative background reduction, we measured the anticoincidence efficiency in Heidelberg. Fig. 2 shows the low-energy region of a background spectrum taken in the Heidelberg Low Level Laboratory. After correction for the cross-talk, the original and the discriminated energy spectra were computed. This measurement alone gives a suppression factor between 5 and 6 in this region and an energy threshold of ca. 2.5 keV.
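The off-line correction amounts to subtracting a term linear in the partner detector's energy; a minimal sketch, in which the slope value and the energies are illustrative assumptions rather than fitted numbers from the paper:

```python
def correct_crosstalk(e_this, e_other, slope):
    """Off-line cross-talk correction: the energy measured in one detector
    contains a pick-up term proportional to the energy released in the
    other crystal. Subtracting slope * e_other restores the true energy;
    slope is the monitored slope of the linear pick-up correlation."""
    return e_this - slope * e_other

# Illustrative only: a 356 keV deposit in the outer crystal inducing a 1%
# pick-up in the inner one is removed before forming the anticoincidence
# spectrum (slope and energies are assumed numbers).
print(correct_crosstalk(3.56, 356.0, slope=0.01))  # ~ 0 keV
```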
The difference from the estimated efficiency results from a slight difference between the simulated crystal geometry and the technically achievable crystals. In addition to the energy spectra, we are also able to record the pulse shape of each event in an event-by-event data acquisition mode. This will open the possibility of discriminating between pulses from gamma or nuclear recoil interactions and microphonics. In a former experiment
Fig. 2. Background spectrum and anticoincidence spectrum of the inner detector in the Heidelberg low level laboratory.
[6] we could show that the pulse shapes of nuclear recoil and γ-interactions are indistinguishable within the timing resolution of Ge detectors. Thus a reduction of the γ-ray background based on pulse shape discrimination (PSD) is not possible. Consequently, γ-sources can be employed to calibrate a PSD method against microphonics, which were shown to exhibit a different pattern in the pulse shape. This should allow a further background reduction in the low-energy region, where Ge crystals are very sensitive to microphonics, especially in a low-background environment.
4. Conclusions and outlook
According to our estimate of the reachable background index, we can improve WIMP limits to a level comparable to that of the new cryogenic experiments. In contrast to the cryogenic WIMP-detection techniques, we use the well-known semiconductor ionization detector technique and use raw data to obtain cross section limits for WIMPs. A background reduction by a factor of 5 compared to the Heidelberg—Moscow experiment will already allow us to test the region now reached by the DAMA Collaboration [7—9,15], in which they claim to see a WIMP candidate. This would be an independent test with different target nuclei and using raw data only. The experiment was set up in the Gran Sasso Underground Laboratory in March this year and is starting to take data. As a next step we will construct a new crystal holder of selected copper, and we plan to replace the inner crystal of natural germanium by an enriched Ge crystal.

5. New dark matter WIMP limits from the Heidelberg—Moscow experiment
A 2.8 kg enriched ⁷⁶Ge detector of the Heidelberg—Moscow experiment [4,14] was run for a period of 70 days in a special configuration developed for low-energy measurements [10]. Fig. 4 shows the sum spectrum as a function of time. Accumulations of events, irregularly distributed in time and with energy depositions up to 30 keV, can be seen clearly. These events can be generated by microphonics or electronic noise and lead to an enhancement of the count rate in the low-energy region. A possibility to deal with microphonics is to apply a Poisson time cut [11]. The method relies on the assumption that physical events are Poisson distributed, whereas microphonic events arrive in bursts. The complete measuring time is divided into 30 min intervals and the probability for N events to occur in one time interval is computed for energy depositions between 9 and 100 keV. The mean value of the distribution is (3.71±0.02) events/30 min and σ = (1.73±0.01). The cut is set at the value N = 3.71 + 3σ ≈ 9.
The initial exposure of the measurement was 0.54 kg yr; after the time cut it is 0.53 kg yr. In this way, 98% of the initial data are used. The resulting low-energy spectrum is shown in Fig. 5. The energy threshold is 8.8 keV and the background counting rate in the energy region between 8.8 and 100 keV is about 0.05 cts/kg yr keV. This is a factor of two better than the background level reached in [11] with the same Ge detector four years ago. The new upper-limit exclusion plot in the WIMP-nucleon cross section versus WIMP mass plane is shown in Fig. 3. Since we do not use any background subtraction in our analysis, we consider our limit to be very conservative. We are now sensitive to WIMP masses greater than 13 GeV and to cross sections as low as
9.4×10⁻⁶ pb. At the same time we start to test the region (evidence contour) where the DAMA NaI experiment [12,13] claims to see positive evidence for WIMP detection.
References
[1] B. Cabrera, this conference.
[2] N. Spooner, Phys. Rep. 307 (1998), this issue.
[3] P.F. Smith, J.D. Lewin, Phys. Rep. 187 (1990) 203.
[4] L. Baudis et al., Heidelberg—Moscow Collaboration, Phys. Lett. B 407 (1997) 219.
[5] L. Baudis et al., Nucl. Instr. and Meth. A 385 (1997) 265.
[6] L. Baudis, J. Hellmig, H.V. Klapdor-Kleingrothaus, Y. Ramachers, J.W. Hammer, A. Meyer, Nucl. Instr. and Meth., submitted.
[7] R. Bernabei et al., Phys. Lett. B 389 (1996) 757.
[8] P.D. Barnes et al., Nucl. Instr. and Meth. A 370 (1996) 233.
[9] V. Bednyakov, H.V. Klapdor-Kleingrothaus, S.G. Kovalenko, Y. Ramachers, Z. Phys. A 357 (1997) 339.
[10] L. Baudis et al., Heidelberg—Moscow Collaboration, to be published.
[11] M. Beck et al., Phys. Lett. B 336 (1994) 141.
[12] P. Belli et al., in: A. Bottino, A. di Credico, P. Monacelli (Eds.), Proc. TAUP97, Gran Sasso, Italy, 7—11 September 1997, Nucl. Phys. B (Proc. Suppl.).
[13] D.S. Akerib et al., CDMS Collaboration, in: A. Bottino, A. di Credico, P. Monacelli (Eds.), Proc. TAUP97, Gran Sasso, Italy, 7—11 September 1997, Nucl. Phys. B (Proc. Suppl.).
[14] M. Günther et al., Heidelberg—Moscow Collaboration, Phys. Rev. D 55 (1997) 54.
[15] R. Bernabei et al., astro-ph/9710290.
Physics Reports 307 (1998) 297—300
Status of the EDELWEISS experiment
D. Drain*, A. Benoit, L. Bergé, I. Berkes, B. Chambon, M. Chapellier, G. Chardin, P. Charvin, M. De Jésus, P. Di Stefano, L. Dumoulin, C. Goldbach, J.-P. Hadjout, A. Juilliard, D. L'Hôte, J. Mallet, S. Marnieros, L. Miramonti, L. Mosca, X.-F. Navick, G. Nollez, P. Pari, C. Pastor, S. Pécourt, E. Simon, R. Turbot, L. Vagneron, D. Yvon
Institut de Physique Nucléaire de Lyon, Université Claude Bernard Lyon 1, IN2P3-CNRS, 43 Bd. du 11 novembre 1918, F-69622 Villeurbanne Cedex, France
Centre de Recherche sur les Très Basses Températures, BP 166, 38042 Grenoble Cedex, France
Centre de Spectroscopie Nucléaire de Masse, IN2P3-CNRS, Univ. Paris XI, Bât. 108, F-91405 Orsay Cedex, France
CEA, Centre d'Études de Saclay, DSM/DRECAM, Service de Physique de l'État Condensé, F-91191 Gif-sur-Yvette Cedex, France
CEA, Centre d'Études de Saclay, DSM/DAPNIA, Service de Physique des Particules, F-91191 Gif-sur-Yvette Cedex, France
Laboratoire Souterrain de Modane, CEA-CNRS, 90 rue Polset, F-73500 Modane, France
Institut d'Astrophysique de Paris, INSU-CNRS, 98 Bd. Arago, F-75014 Paris, France
Abstract
The status of the EDELWEISS experiment, installed in the Fréjus tunnel, is presented. In its first stage, the experiment uses a 70 g high-purity Ge bolometer with heat-ionization discrimination. Based on physics data and on gamma and neutron calibrations, the influence of the reverse bias voltage (−2 and −6 V) on incomplete surface charge collection, which at present limits the performance of these detectors, is presented. Analysing runs with a total exposure of 1.17 kg day after cuts, an upper limit of 0.6 event/day kg keV at 90% confidence level in the 12—70 keV recoil energy interval is obtained. Planned upgrades are summarized. © 1998 Elsevier Science B.V. All rights reserved.
PACS: 95.35.+d
EDELWEISS is a WIMP direct-detection experiment set up in the Modane underground laboratory (Laboratoire Souterrain de Modane). This laboratory is located close to the middle of a highway tunnel in the Alps connecting France and Italy. The experimental setup, the cryostat and the low-radioactivity shielding have been described elsewhere [1].
D. Drain et al. / Physics Reports 307 (1998) 297—300
The results presented are preliminary and were obtained without Roman lead shielding of the detector and without radon removal. The detector is a 70 g (48 mm diameter, 8 mm thick) high-purity Ge ionization-heat detector [2—4]. The phonon signal is provided by a 2×1×0.8 mm³ NTD sensor glued on the Ge disc. The ionization signal is obtained by charge collection using two implanted electrodes covering the entire area of each side of the disc. Due to the p-i-n structure of the Ge crystal, relatively high bias voltages can be used for charge collection. Although bias voltages up to −12 V have been used, most of the data have been accumulated under bias voltages of −2 or −6 V. Under these conditions, typical detector performances are summarized in Table 1. Fig. 1b shows the phonon energy versus charge energy scatter diagram obtained in a physics measurement (no gamma or neutron sources). In Fig. 1a nuclear recoil events provided by a ²⁵²Cf source have been superimposed. The gamma and recoil lines are clearly visible, but a third population of events can be seen between these two lines, most likely due to incomplete charge collection. In order to estimate the overlap of these events with the neutron zone, a piecewise linear region represented in Fig. 2 has been used to define the full neutron zone in the ionization/heat
Table 1
Detector performance at 20 mK under either −2 or −6 V bias voltage. Energies, expressed in keV, have been normalized using a ⁵⁷Co gamma source
                         Heat                 Ionization
FWHM at 122 keV          1.2 keV              1.2 keV
Baseline noise FWHM      0.85 keV             1.1 keV
Rise time                ≈10 ms               ≈10 µs
Fall time                ≈30 ms and ≈1 s      ≈100 µs
Fig. 1. Heat vs. ionization with (a) and without (b) a ²⁵²Cf neutron source. The contour contains 99% of the neutrons.
ratio versus heat scatter diagram. For this neutron region, the middle line is determined from the average of the ionization/heat ratio for nuclear recoil events in each bin. The full-zone contour is then drawn by taking two standard deviations around this average ratio. This contour then contains ≈95% of the neutrons, while the lower-half zone contains 47% of the neutrons. The incomplete charge collection events are most likely due to gamma, X-ray or beta interactions occurring near the detector surface [6,7]. For these events, part of the carriers are trapped by the nearby electrode before they can be drifted away by the electric field. It can be seen in Fig. 2 that these surface events partially overlap the neutron zone (mainly its upper half). In order to improve the charge collection for the surface events, we have accumulated data with an increased bias voltage of −6 V. Fig. 3 shows the comparison of the ionization-to-heat ratio histograms for the −2 V and −6 V data. The incomplete charge collection events appear as an intermediate population between the neutron-induced recoils and the gamma-ray peak. Increasing the bias voltage from −2 to −6 V of course reduces the separation between the peaks because of the Luke effect, and does not significantly reduce the proportion of incomplete collection events. Nevertheless, it appears rather clearly that the distribution of these events is shifted towards the gamma-ray peak, leading to a smaller overlap with the neutron zone. Analysing runs with a total exposure of 1.17 kg day using −6 V bias voltage, and after applying the selection procedure of keeping only the events inside the lower half of the neutron zone (see Fig. 2), we obtain an upper limit for the number of nuclear recoil events of 0.6 event/day kg keV at a 90% confidence level in the 12—70 keV recoil energy interval. A comparison of this sensitivity with that obtained by other groups can be found in the review by B. Cabrera [5].
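The quoted containment fractions follow directly from the Gaussian band construction; a quick numerical check, assuming (as the construction does) a Gaussian ionization/heat ratio in each bin:

```python
import math

def band_fraction(nsig_lo, nsig_hi):
    """Fraction of a Gaussian-distributed ionization/heat ratio falling
    between nsig_lo and nsig_hi standard deviations of the mean nuclear
    recoil line (normal CDF computed via the error function)."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return phi(nsig_hi) - phi(nsig_lo)

print(round(band_fraction(-2, 2), 3))  # full +-2 sigma zone   -> 0.954
print(round(band_fraction(-2, 0), 3))  # lower half of the zone -> 0.477
```

These reproduce the ≈95% full-zone and 47% lower-half containments quoted in the text.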
In terms of sensitivity, our result is at the same level as CDMS, and still a factor of 3 from the level
Fig. 2. Display of the full neutron zone and half neutron zone (−6 V bias voltage) for the physics data runs. Fig. 3. Influence of the bias voltage on the incomplete collection events.
obtained by the Heidelberg—Moscow collaboration, except at low WIMP masses, where our sensitivity is already better. Obviously, in order to make efficient use of the rejection power of our detector, a strong reduction of the radioactive background is needed. This can be done through a better selection and/or purification of the materials used for the bolometer mounting and of the electronic devices (LEDs, cables, connectors, …). A preliminary run with the Roman lead shielding surrounding the detector indicates a factor 2.5 to 3 reduction in the counting rate (ionization channel only). The main objective is now the reduction of the incomplete charge collection events. A first step in this direction has been taken by adopting a new Ge detector design. In addition, the feasibility of event position determination through ionization pulse shape discrimination, or through use of the fast component of the phonon signal, is being investigated as a possible means of identifying surface events. A second stage of the EDELWEISS experiment is presently at the stage of approval. A large reverse cryostat, with an experimental chamber of 100 l, is under construction and is expected to be installed in the Fréjus underground laboratory at the end of 1999. A series of 20 detectors in the mass range 200—300 g should be realized in 1999 in order to be installed in the new cryostat at the beginning of 2000. Based on the experience with this first series of detectors, a second series of 80 detectors should then be started.
References
[1] A. de Bellefon et al., Astropart. Phys. 6 (1996) 35.
[2] D. L'Hôte et al., Proc. LTD-7, 1997, p. 240, ISBN 3-00-002266-X.
[3] D. L'Hôte, X.-F. Navick, J. Appl. Phys., submitted.
[4] X.-F. Navick, Thesis, Université Paris VII, 1997.
[5] B. Cabrera, this conference.
[6] T. Shutt et al., Proc. LTD-7, 1997, p. 224, ISBN 3-00-002266-X.
[7] M.J. Penn et al., J. Appl. Phys. 79 (1996) 8179.
Physics Reports 307 (1998) 301—308
GENIUS: a new dark matter project
L. Baudis*, J. Hellmig, M. Hirsch, G. Heusser, H.V. Klapdor-Kleingrothaus, B. Majorovits, Y. Ramachers, H. Strecker
Max-Planck-Institut für Kernphysik, Heidelberg, Germany
Abstract
GErmanium in liquid NItrogen Underground Setup (GENIUS) is a proposal for a high-mass germanium experiment with a much larger sensitivity for direct WIMP detection than existing experiments. It would operate 1 ton of "naked" Ge detectors in a liquid nitrogen shielding of very low-level radioactivity. Already in a first version with 100 kg of natural Ge, GENIUS would be able to reach a sensitivity of the order of 0.01 counts/(kg keV y) and additionally to look for the annual-modulation WIMP signature. Operating 1 t of enriched ⁷⁶Ge detectors, GENIUS could search for neutrinoless double beta decay, probing neutrino masses down to 0.02 eV. © 1998 Elsevier Science B.V. All rights reserved.
PACS: 95.35.+d
1. Introduction
Direct dark matter search experiments starting at present or in the near future, like [2] or [3], will improve existing WIMP-nucleon cross section limits by a considerable amount. Nevertheless, they will only start to probe the region of the MSSM parameter space predicting neutralino dark matter (compare Fig. 1). For a substantial improvement of WIMP-nucleon cross section limits, future dark matter experiments will have to be either massive direction-sensitive detectors or massive ton-scale detectors with almost zero background. In this paper we present a project for a 1 t germanium dark matter experiment with an extremely low radioactive background [1,4,14]. We discuss the layout of the experiment and the shielding and purity requirements based on a GEANT Monte Carlo simulation. To demonstrate its feasibility we show first results from operating a "naked" germanium detector in liquid nitrogen.
L. Baudis et al. / Physics Reports 307 (1998) 301—308
Fig. 1. WIMP-nucleon cross section limits from the Heidelberg—Moscow experiment [9,10] and the DAMA NaI experiment [11,16] for scalar interactions as a function of the WIMP mass, together with possible results from upcoming experiments like HDMS [3], CDMS [2] and GENIUS. Also shown are expectations for WIMP-neutralino cross sections calculated in the MSSM framework with non-universal scalar mass unification [12,13].
We give preliminary estimates of the reachable background level and thus of the sensitivity for WIMP-nucleon scattering and for 0νββ decay.

2. The GENIUS experiment
The idea of GENIUS is to operate a large amount of "naked" enriched or, in a first step, natural Ge detectors in a liquid shielding of very low-level radioactive material. From the experience with the Heidelberg—Moscow experiment [5,15] we know that the main contributions to the background arise from the cryostat system and the lead or copper shielding. A promising way to reduce the background from materials next to the measuring crystal is to operate the Ge detectors directly in a liquid which both cools them to their working temperature and provides the shielding from external radioactivity (see Fig. 2). The idea of operating Ge detectors in liquid nitrogen was considered earlier in [6].

2.1. The shielding
Table 1 summarizes some properties of possible shielding materials. All of them can be processed to very high purity. Nitrogen has the advantages of a low temperature and a low price. Its drawback is the low density, which requires a large setup to achieve sufficient shielding from external activities. The shielding
Fig. 2. Simplified model of the GENIUS experiment: 288 enriched Ge detectors with a total mass of one ton in the center of a 9 m high liquid nitrogen tank of 9 m diameter; a GEANT Monte Carlo simulation of 1000 2.6 MeV photons randomly distributed in the nitrogen is also shown.
Table 1
Properties of the liquids nitrogen, argon and xenon

Liquid      Melting point (K)   Boiling point (K)   Density (g/cm³)
Nitrogen    63                  77                  0.80
Argon       83                  87                  1.63
Xenon       161                 165                 3.52

Densities at normal pressure and boiling temperature, except the argon density, which is given at 81.7 K.
effect of liquid argon is superior by a factor of two, but liquid argon contains the β-emitter ³⁹Ar with T₁/₂ = 269 y, produced through the reaction ³⁸Ar(n, γ)³⁹Ar, with a Q-value of 565 keV. Liquid xenon has a higher density, but its temperature is at the upper edge for the operation of HPGe detectors. Altogether, the best choice for a liquid shielding would be nitrogen.

2.2. Ge detectors inside liquid nitrogen
To demonstrate the possibility of operating Ge detectors inside liquid nitrogen, we used a p-type HPGe detector inside a 50 l dewar surrounded by 10 cm of lead. The FET was mounted on a small board inside the nitrogen, 6 cm above the crystal, and connected to the preamplifier using 1 m long HV and signal cables. The energy resolution was measured with ¹³³Ba and ⁶⁰Co sources to be 1.21±0.01 keV at 81.0 keV and 1.95±0.01 keV at 1332 keV. Fig. 3 shows a spectrum from the ¹³³Ba source. Lead X-rays and an energy threshold below 10 keV can also be seen.
Fig. 3. Spectrum of a ¹³³Ba source measured with an HPGe detector operated in liquid nitrogen in the Heidelberg underground low-level laboratory. X-rays from lead can be seen at energies of 72.8 and 75.0 keV.
Fig. 4. Monte Carlo simulation of the ²³⁸U/²²⁶Ra, ²³²Th and ⁴⁰K (shaded) and ²²²Rn (black histogram) activities in the liquid nitrogen; the sum of the activities is shown with anticoincidence between the 288 detectors (thick line) and without (dashed line); the 2νββ decay dominates the spectrum with 4 million events per year; the impurity levels are assumed as given in Table 2.
2.3. Expected performance
The radioactive background sources in the GENIUS experiment can be classified according to their origin. The external background arises from neutrons and γ-fluxes from the surrounding rock. The muon-induced background is not negligible in spite of the six orders of magnitude reduction by the Gran Sasso mountain. Internal background is expected from impurities in the vessel, the liquid nitrogen, the crystal supports and the crystals themselves. To determine the size of the experiment and the required purity levels we used the Monte Carlo code GEANT, extended for the simulation of radioactive decays. For the background estimation we used the simplified model of the experiment shown in Fig. 2. It consists of 288 enriched Ge detectors of 3.6 kg each, situated in the center of a tank, 9 m in height and 9 m in diameter, filled with liquid nitrogen. The main contributions of radioactivity inside the nitrogen are expected from the ²³⁸U/²²⁶Ra and ²³²Th decay chains, from primordial ⁴⁰K and from ²²²Rn. We simulated 28;10 decays of each isotope, randomly distributed inside the nitrogen. The spectra of the isotopes from the decay chains are summed under the assumption that the chains are in equilibrium. The energy spectra of all non-coincident events are shown in Fig. 4, assuming purity levels as listed in Table 2. The reduction through the anticoincidence is roughly a factor of 10. The counting rate in the energy region below 100 keV is about 10 counts/keV y t. The spectrum is dominated by the 2νββ decay of ⁷⁶Ge, with T₁/₂ = 1.77×10²¹ y. With 4×10⁶ events/y, this signal has to be subtracted for the evaluation of the low-energy region. The shielding of radioactivity from outside the vessel is shown in Fig. 5. It shows the distribution of the nearest interaction to the center of the detector for 6;10 simulated ²⁰⁸Tl decays randomly
Fig. 5. Simulation of ²⁰⁸Tl activity randomly distributed in the wall of the vessel; for each event the histogram contains the distance of the nearest interaction to the detector center; for comparison, fillings of liquid nitrogen and of argon are simulated.
Table 2
Required purity levels for the liquid nitrogen

Isotope                 Activity             Decays/year
²²²Rn                   5;10\ mBq/m³         8;10
²³⁸U (4n+2 series)      1;10\ g/g            2;10
²³²Th (4n series)       5;10\ g/g            3;10
⁴⁰K                     1;10\ g/g            4;10
distributed inside the 2 cm thick vessel surrounding the nitrogen. The radioactivity drops by 0.8 orders of magnitude per meter. For a contribution to the count rate in the Ge detectors equal to that of the impurities in the liquid nitrogen, an impurity concentration of 10\ g/g U/Th in the vessel is required. The tolerable γ-flux from outside the vessel has been estimated to be 3.2;10\ cm⁻² s⁻¹, which is a factor of 50 lower than the flux of 2.6 MeV ²⁰⁸Tl photons measured in hall C of the Gran Sasso laboratory. Therefore, an additional shielding of 10 cm of lead or 0.8 m of liquid nitrogen has to be applied. To study the influence of muons penetrating the Gran Sasso rock we simulated a flux of 2.3;10\ muons m⁻² s⁻¹ with an energy of 200 GeV [7] crossing the tank from the top. The induced events in the Ge detectors are shown in Fig. 6. A count rate reduction by two orders of magnitude through the anticoincidence can be seen. This reduces the muon-induced background far below the background from natural radioactivity. To study possible electrical interference between several Ge detectors, and also to record spectra with longer HV and signal cables (6 m) than in the previous tests, a measurement with three naked Ge detectors in a 50 l dewar has been performed. Neither electrical interference nor a substantial deterioration of the energy resolution from the long signal and HV cables was observed.

2.4. Some open questions
An open question is the neutron-induced background. The light nitrogen nuclei would thermalize the neutrons. One prominent reaction is ¹⁴N(n, p)¹⁴C, which would contribute to the background through the ¹⁴C β⁻ decay (however, with a decay probability of 10\) and subsequent γ-radiation from the excited ¹⁴C nucleus. A Monte Carlo simulation of this background will be performed. Also, possible activations of the liquid shielding through the products of muon showers and through muon capture in the nitrogen have to be considered.
According to the simulation with a single detector, the detector support should have the same purity level as the liquid nitrogen. The possibility of obtaining organic substances with 10\ g/g purity levels was demonstrated by the liquid scintillator operated in the CTF of the Borexino experiment. Nevertheless, more detailed studies on this point are also needed.

2.5. Expected results
Assuming a radioactive background as stated above, the GENIUS experiment would, already in a starting version with 100 kg of natural Ge, reach a background level of roughly 10 counts/keV y t
Fig. 6. Background from outside the nitrogen: events induced by 200 GeV muons (dashed line) and single-hit events (filled histogram); decay of ²⁰⁸Tl in the steel vessel (light shaded histogram) and, for comparison, the background originating from the nitrogen impurities (thick line).
in the low-energy region. With this background it would have the potential to test the MSSM predictions for neutralinos as cold dark matter over the whole SUSY parameter space (see Fig. 1). Besides that, GENIUS could search for the seasonal modulation of the WIMP signal. In the region of interest for neutrinoless double beta decay (2038 keV), under the same radioactive background assumptions, the count rate would be 0.04 counts/keV y t. In case of non-observation of a positive signal, after one year of measurement a lower half-life limit on the 0νββ decay of T₁/₂ ≥ 5.8×10²⁷ y (68% C.L.) would be obtained. This can be converted into an upper limit on the neutrino mass of ⟨mν⟩ ≤ 0.02 eV (68% C.L.). The final sensitivity of a one-ton experiment would be obtained after 10 years of measurement, with an upper limit on the neutrino mass of ⟨mν⟩ ≤ 0.006 eV (68% C.L.). In addition to searching for 0νββ decay, GENIUS would also be sensitive to neutrino oscillations [1,8]. It could test the atmospheric neutrino problem and the large-angle solution of the solar neutrino problem, and would be able to confirm or rule out all degenerate or inverted neutrino mass scenarios [1,8].

3. Conclusions and outlook
The GENIUS experiment would be a very sensitive tool in the search for cold dark matter and for the neutrino mass. Both issues are of central interest to cosmology and particle physics. The sensitivity for WIMP-nucleon scattering would be four orders of magnitude better than that of existing experiments and two orders of magnitude better than that of experiments under construction. Regarding the search for the neutrino mass, the experiment would surpass existing neutrino mass experiments by a factor of 50—500.
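The ten-year mass limit quoted above is consistent with the simple scaling ⟨m⟩ ∝ 1/√T₁/₂, combined with a half-life limit that grows roughly linearly with measuring time; the linear growth is an assumption of this sketch (valid for a nearly background-free search), not a statement from the text:

```python
import math

# One-year figure quoted in the text; linear growth of the half-life
# limit with measuring time is an assumed scaling.
m_1yr = 0.02                    # eV, upper limit on <m> after 1 year
m_10yr = m_1yr / math.sqrt(10)  # <m> scales as 1 / sqrt(T_1/2)
print(round(m_10yr, 3))         # -> 0.006, the 10-year figure in the text
```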
The shielding and purity requirements were studied using the GEANT Monte Carlo code. Although no requirements beyond those of the latest solar neutrino experiments were found, further studies are evidently needed. GENIUS should be able to cover the full MSSM parameter space of interest for neutralino dark matter. It would be one of the major experiments of future non-accelerator (astro-)particle physics. Even if the LHC were to discover supersymmetry first, it would still be exciting to see whether neutralinos form the dark matter in our galactic halo.
References
[1] H.V. Klapdor-Kleingrothaus, talk at Beyond the Desert, 8—14 June 1997, Castle Ringberg, Germany, Proc. at IOP, Bristol, 1998.
[2] D.S. Akerib et al., CDMS Collaboration, in: A. Bottino, A. di Credico, P. Monacelli (Eds.), Proc. TAUP97, Gran Sasso, Italy, 7—11 September 1997, Nucl. Phys. B (Proc. Suppl.).
[3] L. Baudis et al., Nucl. Instr. and Meth. A 385 (1997) 265.
[4] H.V. Klapdor-Kleingrothaus, J. Hellmig, Z. Phys. A 359 (1997) 351.
[5] L. Baudis et al., Heidelberg—Moscow Collaboration, Phys. Lett. B 407 (1997) 219.
[6] G. Heusser, Ann. Rev. Nucl. Part. Sci. 45 (1995) 543.
[7] C. Arpesella, Nucl. Phys. B (Proc. Suppl.) 28A (1992) 420.
[8] H.V. Klapdor-Kleingrothaus, M. Hirsch, Z. Phys. A 359 (1997) 361.
[9] M. Beck et al., Phys. Lett. B 336 (1994) 141.
[10] L. Baudis, Phys. Rep. 307 (1998), this issue.
[11] R. Bernabei et al., Phys. Lett. B 389 (1996) 757.
[12] V. Bednyakov, H.V. Klapdor-Kleingrothaus, S.G. Kovalenko, Y. Ramachers, Z. Phys. A 357 (1997) 339.
[13] P. Belli et al., in: A. Bottino, A. di Credico, P. Monacelli (Eds.), Proc. TAUP97, Gran Sasso, Italy, 7—11 September 1997, Nucl. Phys. B (Proc. Suppl.).
[14] H.V. Klapdor-Kleingrothaus, J. Hellmig, M. Hirsch, J. Phys. G 24 (1998) 483.
[15] M. Günther et al., Heidelberg—Moscow Collaboration, Phys. Rev. D 55 (1997) 54.
[16] R. Bernabei et al., astro-ph/9710290.
Physics Reports 307 (1998) 309—317
CUORE: a cryogenic underground observatory for rare events
E. Fiorini
Dipartimento di Fisica dell'Università di Milano e Sezione di Milano dell'INFN, I-20133 Milan, Italy
Abstract
The proposal for an array of 1000 cryogenic detectors, each of a mass between 0.5 and 1 kg, is presented. It would be operated underground to search for neutrinoless double beta decay, interactions of WIMPs and of solar axions, and for rare events in nuclear physics. The first results of an array of 20 TeO₂ detectors totalling a mass of almost 7 kg will also be reported. 1998 Elsevier Science B.V. All rights reserved. PACS: 95.35.+d
Great interest is presently devoted to experiments on rare events, and in particular on direct interactions of WIMPs. Other subjects are neutrinoless double beta decay and, more generally, other rare decays in nuclear physics, and interactions of solar neutrinos and axions. The use of large cryogenic thermal detectors [1,16] in searches for rare events like double beta decay has been suggested since 1984 [2], and a series of experiments with large TeO₂ bolometers has been carried out by the Milano group in the Gran Sasso underground laboratory. The same group has recently constructed and operated an array of 20 of these crystals with a total mass of almost 7 kg [3]. Three other calorimetric experiments specifically devoted to the search for WIMPs are presently running [4-6]; others are planned [7]. The principle of operation of these bolometers is as follows. A dielectric and diamagnetic crystal is kept at low temperature (tens of millikelvin) in a dilution refrigerator. Under such conditions the heat capacity is very low, being proportional to the cube of the ratio between the operating and Debye temperatures. As a consequence, even the tiny energy released in the crystal by a nuclear event can be revealed and measured through the increase in temperature recorded by a sensor in thermal contact with the absorber. We would like to note that the energy resolution of these bolometers is theoretically much superior to that of conventional detectors. In practice, microbolometers with an absorber mass of a milligram or less reach energy resolutions superior by an order of magnitude
E. Fiorini / Physics Reports 307 (1998) 309—317
with respect to Si(Li) semiconductors in the keV region, while in the MeV region macrobolometers with masses of a few hundred grams are competitive with germanium diodes. As a decisive step with respect to the presently operating twenty-crystal array, we propose an international collaboration to construct CUORE (Cryogenic Underground Observatory for Rare Events), a cryogenic set-up made of a thousand crystals, each of a mass similar to the present ones, with a total mass near one ton. In our opinion this set-up will make it possible to achieve important results in fields like double beta decay, direct interactions of WIMPs, solar axions and neutrino interactions from an artificial source. We would like to stress, however, that the possibilities offered by the device proposed here should by no means be limited to these subjects. A peculiar property of cryogenic detectors, unlike conventional ones, is that they offer a very broad choice of nuclei in the detecting materials and the possibility to exchange them or, at least in principle, to run the set-up simultaneously with different nuclear targets. The scheme considered here is a preliminary one and is based on the experience collected so far in the construction of an array of 20 crystals of TeO₂ of 340 g each, whose main aim is the search for double beta decay. The size of the crystals proposed here is different and the mass larger by a factor of two; their performance is going to be checked and tested next summer. We would like to stress, however, that we should by no means consider only this material. The advantage in CUORE of the unique flexibility of cryogenic detectors in the choice of the detecting materials should not be forgotten. While essential for double beta decay of ¹³⁰Te and also promising for searches for WIMPs, TeO₂ is not one of the best candidates for a thermal detector, also because its Debye temperature is only 265 K.
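The cube-law suppression of the heat capacity mentioned above can be made quantitative with the Debye formula. A sketch under illustrative assumptions (ideal Debye behaviour, no sensor or thermal-link contributions; the 265 K Debye temperature of TeO₂ is from the text):

```python
import math

# Sketch: why millikelvin operation makes massive bolometry possible.
# Debye law: molar heat capacity C = (12*pi**4/5) * R * (T/theta_D)**3,
# so cooling a TeO2 crystal (theta_D = 265 K) to ~10 mK makes its heat
# capacity tiny and the temperature rise per event measurable.
# All numbers below are illustrative, ideal-crystal estimates.

R = 8.314          # J/(mol K), gas constant
KEV = 1.602e-16    # J per keV

def heat_capacity(mass_g, molar_mass_g, T, theta_D):
    """Debye heat capacity (J/K) of a dielectric crystal at T << theta_D."""
    n = mass_g / molar_mass_g                       # moles
    return (12 * math.pi**4 / 5) * n * R * (T / theta_D)**3

C = heat_capacity(750.0, 159.6, 0.010, 265.0)       # 750 g TeO2 at 10 mK
dT = 2528 * KEV / C                                 # 2528 keV event (the 130Te transition energy)
print(f"C = {C:.2e} J/K, ideal dT = {dT*1e3:.2f} mK")
```

Even for a 750 g crystal the ideal temperature excursion of a Q-value event is of order a fraction of a millikelvin, well within the reach of NTD thermistor readout.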
Other materials of high atomic number are possible alternatives to TeO₂, especially for searches for WIMPs. Our tentative plan is based on an array of 1000 cubic crystals of TeO₂ of 5 cm side, with a mass of 750 g each, about twice the mass of the presently operating ones. The array will be a cube with ten crystals on each side, with a determined orientation (e.g. ⟨1 0 0⟩) for a possible search for axions (see later). Taking into account also the copper frame necessary to hold the crystals and allowing for thermal and electronic connections, we foresee a cube of 70 cm side and a total mass around one ton. Stabilization of the crystal response is essential to cure thermal instabilities and the consequent fluctuations in the gain. We plan to adopt the same procedure that we have already tested and that will be adopted in the twenty-crystal array: a periodic injection of Joule power into the absorber through a heavily doped meander on a Si chip glued onto it. In the present array we adopt neutron transmutation doped (NTD) germanium thermistors, provided to us by Prof. E. Haller, as thermal sensors glued onto the crystals. They are very reproducible and easy to handle, which is obviously essential in an array with a large number of bolometers. Our colleagues of the CDMS dark matter experiment, who also adopt germanium NTD thermistors as sensors, but in eutectic contact with germanium absorbers [4], report a background at low energy due to contamination by tritium, which is normally produced in the thermistors as a consequence of the neutron irradiation. In our experimental approach, where the sensors are glued to the absorber, this background does not seem to be a serious problem, since the thermal signals generated in the thermistor can easily be distinguished by pulse shape discrimination. In the CDMS experiment [4] some pulses seem indeed to come from the absorber, where some tritium has possibly diffused through the eutectic germanium-gold-germanium contact.
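The crystal and array masses quoted above follow from simple arithmetic; a quick cross-check (the TeO₂ density of ~6 g/cm³ is a handbook value, not stated in the text):

```python
# Sketch: the array arithmetic quoted in the text. 5 cm cubes of TeO2
# (density ~6.0 g/cm^3, an approximate handbook value) give ~750 g per
# crystal; 1000 crystals give 750 kg of TeO2, i.e. "near one ton" once
# the ~200 kg copper frame is included.

side_cm = 5.0
density = 6.0                        # g/cm^3, approximate TeO2 density
crystal_g = density * side_cm**3     # grams per crystal
n_crystals = 10**3                   # 10 x 10 x 10 cube
detector_kg = n_crystals * crystal_g / 1000.0
total_kg = detector_kg + 200.0       # adding the copper frame from the text
print(crystal_g, detector_kg, total_kg)   # 750.0 750.0 950.0
```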
The problem obviously needs to be studied further. An alternative option for thermal sensors is adopted by the CRESST dark matter collaboration [5]: tungsten superconducting phase-transition thermometers, read out by commercial
DC SQUIDs. These and other thermal sensors could also be considered, but we believe at present that the NTD option is preferable in a large array like ours, being simpler, less sensitive to changes in the temperature of each absorber and definitely less expensive. Crystals operated simultaneously with NTD thermistors and with superconducting-edge thermometers, to investigate the high- and low-energy regions of the spectrum, respectively, could also be considered if this solution should prove technically and financially viable. The best location for CUORE is the future Hall D, to be constructed in the Gran Sasso Underground Laboratory (LNGS) and to be devoted to cryogenic experiments. The overburden of rock in LNGS reduces the muon and neutron backgrounds by six and three orders of magnitude, respectively [8]. The construction of a dilution refrigerator capable of cooling the array down to 10 mK should be done in collaboration with a highly specialized company. Much care will be devoted, as in the past, to the choice of materials with very low radioactive contamination to be employed in the construction. As a first hint we are considering a structure made of vertical copper bars to which the bolometers are fastened, in order to allow reasonably easy access to all of them. This would enable each crystal to "see" almost completely all the surrounding ones and allow the use of the coincidence-anticoincidence method, which we consider essential for background reduction. We expect the total volume of the cryogenic set-up to be 70×70×70 cm³. Taking into account the thermal shields, a reasonable final structure for CUORE would be a cube of 90 cm side. The total mass of the copper frame could be about 200 kg. Severe technological problems could come from the internal shield of Roman lead directly connected to the mixing chamber and from the large number of wires entering the array.
We foresee 2000 wires connected to the sensors and 200 wires for the heaters which allow response stabilization (groups of 10 heaters would be linked in parallel). In addition, 50 wires will be needed to connect 25 thermometers in various positions inside the array. Alternative electric connections can be envisaged and are presently being studied. The shielding against local radioactivity could be an improved version of the one presently adopted for the twenty-crystal array. An internal shield could be made of a layer of Roman lead of 4 cm minimum thickness, which has been found [9] to be totally free from radioactivity due to ²¹⁰Pb (see later). The entire set-up should be inside a Faraday cage to avoid electromagnetic interference. At present our cryogenic experiments have not been optimized for background: the external shielding (10 cm of normal lead) is insufficient and there is no internal or external shielding with Roman lead, which would have strongly suppressed the bremsstrahlung due to ²¹⁰Pb. Moreover, the material of the crystal holder, in immediate contact with the crystal, is not our final choice for very low radioactive contamination. We think that, using carefully chosen materials and with an appropriate shield, we should reach a background similar to that of the Heidelberg-Moscow [9] and IGEX experiments. Further improvements could come from the obvious shielding effect of the external layers of crystals and from the application of the anticoincidence method, already tested in our detector array, thus achieving a very efficient Compton suppression. We are going to evaluate it with a detailed Monte Carlo calculation. As the detector crystals will have a large mass, large thermal rise and decay time constants are expected, and the signal energy spectrum will be confined to a frequency band extending from DC to a few tens of Hz.
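The wire budget quoted at the start of this passage is easy to verify; a sketch assuming two wires per readout channel:

```python
# Sketch: the CUORE wire count quoted in the text. Each NTD sensor is
# read out with a wire pair; heaters are grouped ten in parallel, so
# 100 heater channels remain; 25 monitor thermometers each need a pair.

n_bolometers = 1000
sensor_wires = 2 * n_bolometers              # 2000 sensor wires
heater_wires = 2 * (n_bolometers // 10)      # 200 (groups of 10 in parallel)
thermometer_wires = 2 * 25                   # 50 for the monitor thermometers
total = sensor_wires + heater_wires + thermometer_wires
print(total)   # 2250
```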
Under such circumstances we do not expect excessive integration of the signal in the parasitic capacitance of the detector-preamplifier link. Low-noise JFET
preamplifiers located outside the cryostat could be used, avoiding power dissipation inside it. Considering the large number of channels, this will translate into a non-negligible saving in liquid helium consumption. One aspect that deserves due attention is the stabilization of the overall system gain, which is determined by measuring the height of a pulse of known energy and the detector bias voltage. On the detector side, as said above, this will be done by injecting a small thermal power into the absorber. On the preamplifier side, which is DC coupled, the drift must be kept at the lowest possible level. This stabilization has been successfully accomplished in the presently running array of 20 detectors [3]. Since the pulse shape of large bolometers contains very useful information on the event, the data acquisition has to be specifically designed to collect the entire signal pulse. In the Milano experiment we use a low-sampling-rate ADC (400 ksamples/s maximum) multiplexed over our 20 channels. One sample per millisecond is normally acquired. In addition, for a measurement with a large dynamic range (from a few keV to around 10 MeV) and the expected good resolution, an ADC with at least 16 bits is needed. We presently use a threshold trigger, which allows the acquisition of signals with a voltage amplitude above a pre-defined level. An interrupt is generated and the triggered channel acquired. Since for an experiment like CUORE it is very important to reach an energy threshold as low as possible, we plan to adopt a trigger approach based on neural network analysis. It would then be possible, at least in principle, to recognize the presence of a signal by its shape and not by the crossing of a certain voltage level. The actual realization of this type of trigger requires further analysis. It would be possible with a hardware approach, but in this case it would not be simple to train the neural network.
On the contrary, a software approach would require a powerful computer, capable of recognizing the signals in real time. We will only briefly mention here double beta decay [10,17], even if it is one of the main aims of CUORE. The most popular decay channels for this process are

(A, Z) → (A, Z+2) + 2e⁻ + 2ν̄ ,   (1)

(A, Z) → (A, Z+2) + 2e⁻ + χ ,   (2)

(A, Z) → (A, Z+2) + 2e⁻ .   (3)
Our interest is addressed to the third of these channels, the so-called neutrinoless double beta decay, which would prove violation of total lepton number and indicate that the neutrino is a Majorana particle with an effective non-zero mass. Its rate would be strongly enhanced with respect to process (1), thus providing a very powerful test of lepton number conservation. From its rate one can derive a value for an average neutrino mass ⟨m_ν⟩, which is however subject to the large uncertainty of the nuclear matrix element calculations. From the experimental point of view, neutrinoless double beta decay would be revealed by the presence of a peak, corresponding to the transition energy, in the spectrum of the sum of the two electron energies. We would like to stress that this requires a very good energy resolution of the detector. With a poor resolution even a strong reduction of the background would not help: the peak could be hidden in the end tail of the sum-energy spectrum of the two-neutrino double beta decay.
Due to the uncertainty in nuclear matrix element calculations, searches for neutrinoless double beta decay should be extended to as many favorable nuclear candidates as possible. Particularly effective for this channel is the source = detector approach, where the detecting material itself contains the candidate nucleus for double beta decay. This is a peculiar advantage of thermal detectors, due to their ample choice of detecting materials. The limitations in experiments on neutrinoless double beta decay lie in the background, the energy resolution and the effective mass of the candidate isotope. In many of the present experiments this last requirement was met by using the large stocks of enriched materials available in the former Soviet Union. A substantial increase in the mass of isotopically enriched materials seems at present financially impossible, unless new enrichment methods are discovered. Our favored candidate for CUORE is ¹³⁰Te (33.8% isotopic abundance, 2528 keV transition energy), using crystals of TeO₂. We would like to stress that tellurium is the candidate with one of the largest natural isotopic abundances, thus allowing a massive double beta decay experiment even with natural tellurium. In addition, TeO₂ is a cheap material, relatively free from radioactive impurities. This material would also be very promising for experiments on WIMPs and solar axions. Let us consider an array made of 10×10×10 cubic TeO₂ crystals of paratellurite with 5 cm side. The mass of each crystal would be 750 g, about twice the mass of the crystals in the present 20-bolometer array. The total mass of tellurium would be almost 600 kg, corresponding to about 10²⁷ nuclei of ¹³⁰Te. The sensitivity to neutrinoless double beta decay depends obviously on the background. In a year of effective running time, with a 5 keV resolution and the present background of the Heidelberg-Moscow or IGEX experiments, the sensitivity to neutrinoless double beta decay should be a few 10²⁵ years.
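The ¹³⁰Te inventory and the half-life sensitivity can be cross-checked with a standard counting estimate: T₁/₂ > ln2·N·t/S, where S is the count limit allowed by the background fluctuation in the peak window. The background index below (0.1 counts/(keV kg y), roughly the Heidelberg-Moscow level quoted in the text) and the Gaussian ~90% CL criterion are our assumptions; the 5 keV window, 600 kg of tellurium and 33.8% abundance are from the text:

```python
import math

# Order-of-magnitude sketch of the 130Te numbers quoted in the text.
# 600 kg of natural tellurium at 33.8% isotopic abundance gives ~1e27
# nuclei of 130Te. The half-life sensitivity then follows from
# T_1/2 > ln2 * N * t / S, with S the allowed signal counts given the
# background fluctuation in the 5 keV peak window.

N_A = 6.022e23     # Avogadro constant
m_Te_g = 600e3     # grams of natural tellurium
N_130 = m_Te_g / 127.6 * N_A * 0.338        # nuclei of 130Te

b = 0.1            # counts/(keV kg y), assumed background index
window_keV = 5.0   # energy resolution, from the text
mass_kg = 750.0    # TeO2 detector mass
t_yr = 1.0

B = b * window_keV * mass_kg * t_yr         # expected background counts
S = 1.64 * math.sqrt(B)                     # ~90% CL Gaussian count limit
T_half = math.log(2) * N_130 * t_yr / S
print(f"N(130Te) = {N_130:.2e}, T_1/2 sensitivity ~ {T_half:.1e} y")
```

With these inputs the estimate lands at ~10²⁵ y for one year of running, i.e. the "few 10²⁵ years" regime.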
We do not wish to extract from this number a limit on the average neutrino mass, since it is strongly dependent on the nuclear matrix elements, but only indicate a value around a tenth of an electronvolt or less. ⁷⁶Ge (7.4% isotopic abundance, 2038 keV transition energy) would also be an excellent candidate for CUORE, both for double beta decay and for direct interactions of WIMPs. Germanium crystals have already been operated as thermal detectors with an energy resolution superior to that of germanium diodes [4] (the Debye temperature is around 370 K). In addition, the crystals to be used in a bolometer need only be free from radioactive contamination: they should therefore be much cheaper than crystals to be used as semiconductor detectors. However, a version of CUORE made of one thousand bolometers with 5×5×5 cm³ crystals of natural germanium as absorbers would not be very competitive for double beta decay compared with present experiments on ⁷⁶Ge double beta decay. This detector would, on the contrary, be competitive for searches for WIMPs, axions or possibly neutrino interactions. A detector like CUORE filled with germanium enriched in ⁷⁶Ge could be competitive with the recently proposed GENIUS experiment [9], but we consider its cost to be prohibitive, at least for us. Other double beta decay candidates like ⁴⁸Ca, ¹⁰⁰Mo, ¹¹⁶Cd, ¹²⁴Sn and ¹⁵⁰Nd have been suggested for searches with thermal detectors, and bolometers with CdWO₄ and CaF₂ as absorbers have been operated [7]. The isotopic abundance of the candidate nucleus in all these elements is rather low and prevents, at least at present, a competitive experiment. A great variety of experiments on direct interactions of WIMPs as a component of dark matter is being carried out underground, and most of them are reported at this conference. In addition to those performed with conventional detectors, four recent ones [3-6] are based on thermal detection.
A relevant parameter in these searches is the so-called quenching factor,
namely the ratio between the pulses produced by a nuclear recoil due to a WIMP interaction and by an electron of the same energy. While in conventional detectors this factor is normally below 30%, in bolometers it has been proved to be around one for slowly recoiling nuclei, independently of energy [11]. While conventional detectors allow a model-dependent suppression of the background by pulse shape discrimination, a similar suppression can be achieved in bolometers by recording simultaneously ionization and heat [4,6] or scintillation and heat [12]. This approach can be considered as a second step for CUORE. Experiments performed so far on direct detection of WIMPs consist, with few exceptions, of measurements of the background counting rate in the low-energy region of the recorded pulse spectrum. The exclusion plots extracted from them therefore depend on the background counting rate per unit mass. With the configuration proposed for CUORE we expect a very low background in the central part of the array, since these detectors can be placed in anticoincidence with all the surrounding ones. This will provide a very strong Compton suppression, and will therefore minimize the continuum background at low energy, where the counts due to direct interactions of WIMPs could be hidden. Unambiguous evidence for WIMPs can come from the detection of an effect typical of the interactions of these particles, like the seasonal variation of the counting rate due to the revolution of the Earth around the Sun. This requires a large detector mass, and this is the case with CUORE. We consider here different options for the materials to be used in CUORE: (a) The TeO₂ option. The background of CUORE in the low-energy region should be reduced to the level of the present Heidelberg-Moscow or IGEX experiments.
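The seasonal-variation signature just mentioned can be given a rough statistical form: a 5% annual modulation of a WIMP rate is detectable only if its amplitude exceeds the Poisson fluctuation of the flat background. An order-of-magnitude sketch (the 2σ criterion, the 600 kg fiducial tellurium mass and the 1 event kg⁻¹ d⁻¹ background index are illustrative assumptions of ours):

```python
import math

# Order-of-magnitude sketch of a seasonal-modulation exclusion: with a
# flat background b (events/kg/day) over a mass M and one year, a 5%
# annual modulation of a WIMP rate r is visible only if its amplitude
# exceeds ~2 sigma of the Poisson fluctuation of the total count.

def excludable_rate(b, mass_kg, days=365.0, modulation=0.05, n_sigma=2.0):
    """Smallest WIMP rate (events/kg/day) whose seasonal wobble is visible."""
    total = b * mass_kg * days               # total background counts
    amp_counts = n_sigma * math.sqrt(total)  # detectable modulation, in counts
    return amp_counts / (modulation * mass_kg * days)

r = excludable_rate(b=1.0, mass_kg=600.0)
print(f"excludable rate ~ {r:.2f} events/kg/day")
```

This crude estimate lands at a few hundredths of an event kg⁻¹ d⁻¹, the same order as the 0.07 kg⁻¹ d⁻¹ exclusion figure quoted in the text.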
In fact, the former of these collaborations claims that half of this background is due to the presence of a tiny contamination of ²¹⁰Pb in their Johnson Matthey lead, which will be totally absent in our shielding of Roman lead. In our detectors we plan to reach a thermal, and therefore effective, threshold of 5 keV, which would be equivalent to 1 keV in a germanium diode due to the corresponding quenching factor. Our background in the energy region of interest for coherent interactions of WIMPs with masses comparable to the mass of tellurium nuclei can be estimated to be one event kg⁻¹ d⁻¹. From the absence of a 5% seasonal effect we would exclude, in a model-independent way, a WIMP interaction rate of more than 0.07 interactions kg⁻¹ d⁻¹. (b) The germanium option in CUORE seems at present roughly equivalent to the TeO₂ one. Due to the larger Debye temperature, the thermal performance, and therefore the energy resolution and the threshold, should be better. On the other hand, the larger atomic number of tellurium represents a considerable advantage when searching for coherent interactions of WIMPs of large mass. The already mentioned possibility to run all the germanium crystals in the ionization + heat mode is obviously very attractive. (c) The PbWO₄ option. Our group has successfully operated bolometers with large crystals of CdWO₄, made with non-enriched materials, as absorbers. Their use for searches for WIMPs was obviously prevented by the presence of the naturally occurring single beta decay of ¹¹³Cd. We are presently investigating the possibility of using crystals of PbWO₄, on the basis of a series of measurements carried out on the intrinsic radioactive contamination of lead. No presence of ⁴⁰K, nor of the nuclei of the Th and U chains (when in secular equilibrium), was found in clean samples of this metal. This is not the case for ²¹⁰Pb, which obviously breaks secular equilibrium and whose activity in materials used in normal shields was found to be up to 200 Bq kg⁻¹. We have
determined with a cryogenic experiment that even the best (and very expensive) lead with low ²¹⁰Pb content available commercially has an activity of 250 mBq kg⁻¹ [13]. This result is confirmed, for a similar sample, by the Heidelberg-Moscow experiment [9]. We have, however, extracted from the wreck of a Roman ship sunk near Sardinia a considerable amount of Roman lead which is totally free from ²¹⁰Pb (22.3 year half-life) contamination. In fact, an upper limit of 4 mBq kg⁻¹ has been measured cryogenically by us [13]. As a consequence we have constructed and operated in the Gran Sasso Laboratory a crystal of PbWO₄ of 3×3×6 cm³ with a mass of 447 g. A resolution of 3 keV in the low-energy region has been obtained in this preliminary measurement. Many other materials are of obvious interest for dark matter searches with CUORE. Examples of good "thermal" candidates are Al₂O₃, CaF₂, LiF, etc., which present, especially in the first case, a large Debye temperature. We have also operated crystals of these materials, but we prefer to consider for this preliminary version of CUORE only materials with high-Z nuclei. They are complementary to those already adopted in the presently running and planned experiments. In addition, the N² enhancement factor for vector interactions seems to us very promising in searches for WIMPs in the high-mass region. The possibility of searching for direct interactions of axions by coherent Primakoff conversion in germanium detectors has been recently proposed by Creswick et al. [14]. Detection rates in a deeply located germanium diode would be enhanced if axions from the Sun coherently convert into photons when their incident angle with a given crystalline germanium plane fulfills the Bragg condition. This modulation of the rates would be correlated with the relative position of the lattice planes of the detector with respect to the Sun, and would lead to a sub-diurnal rate variation.
An experiment has already been carried out by analyzing the temporal behavior of the rates recorded by a germanium detector operated underground [15]. Even if only one of the axes of the lattice cell was known, the experiment yielded an upper limit of 2.7×10⁻⁹ GeV⁻¹ on the axion-photon coupling. A similar experiment will be performed by our group in the Gran Sasso laboratory with the presently installed array of 20 TeO₂ crystals, whose lattice planes are oriented. CUORE seems to us an ideal detector to search for coherent interactions of axions coming from the Sun and from other cosmic sources. The TeO₂ crystals which we are presently using have a tetragonal structure and (1 0 0) (0 1 0) (0 0 1) orientation. In the crystal cell the (1 0 0) side is equal to the (0 1 0) one and different from (0 0 1). The crystal is normally grown along the (1 0 0) side. As a consequence, different coherent interactions would take place in correlation with the position of the (1 0 0) (0 1 0) and (1 0 0) (0 0 1) planes relative to the Sun. Different orientations of the direction of growth with respect to the axes of the crystal cell can be considered. Growing along the (0 0 1) axis would lead to identical (0 0 1) (1 0 0) and (0 0 1) (0 1 0) orthogonal orientation planes, and therefore to a signal modulation repeated twice. The sensitivity of the CUORE observatory to coherent interactions of axions would be many orders of magnitude superior to that of the presently running germanium experiments for the following reasons: (a) large mass; (b) an axion cross section proportional to the square of the atomic number; (c) perfect orientation of the lattice planes. Our confidence in the feasibility of CUORE is based on some preliminary results obtained recently with our array of 20 TeO₂ crystals of 340 g each, which operate successfully.
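The sub-diurnal modulation exploited in this search follows from the Bragg condition, 2d sinθ = nλ with λ = hc/E: as the Sun moves, the condition is met and lost for different lattice planes. A sketch of the kinematics (the plane spacing of 3.2 Å is purely illustrative, not a measured TeO₂ lattice parameter):

```python
import math

# Sketch: the Bragg condition behind the coherent Primakoff axion search.
# An axion converting to a photon of energy E interacts coherently when
# 2 d sin(theta) = n * lambda, with lambda = hc/E. The plane spacing
# used below is illustrative only.

HC_KEV_ANGSTROM = 12.398   # h*c in keV * Angstrom

def bragg_angle_deg(energy_keV, d_angstrom, order=1):
    """Bragg angle (degrees) for a photon of given energy, or None."""
    lam = HC_KEV_ANGSTROM / energy_keV
    s = order * lam / (2.0 * d_angstrom)
    if s > 1.0:
        return None   # no Bragg reflection possible at this energy
    return math.degrees(math.asin(s))

# The solar axion spectrum peaks at a few keV:
for E in (2.0, 4.0, 8.0):
    print(E, "keV ->", bragg_angle_deg(E, d_angstrom=3.2))
```

Because the Bragg angle sweeps through the solar axion spectrum as the Sun's direction changes, the expected count rate carries a calculable time stamp that a fixed, oriented crystal array can exploit.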
In particular:
1. the fluctuation of the temperature among them is less than 1 mK; it is very important to note that the bolometers which are far from the mixing chamber (the array is made of five planes of four detectors each) are not warmer than those near to it;
2. the characteristic curves of the resistance of all bolometers as a function of temperature are similar;
3. the load curves show the reproducibility of the static performance of the 20 detectors.
In order to show the excellent reproducibility of the twenty detectors we present in Fig. 1 the sum of the 20 spectra in the region of neutrinoless double beta decay, obtained in a preliminary run corresponding to slightly more than a week of effective running time. Five groups have already expressed their intention to participate in CUORE: Berkeley (E. Haller), Leiden (G. Frossati), Milano (E. Fiorini), South Carolina (F. Avignone) and Zaragoza (A. Morales). Contacts have been made with the other members of the European Network on Cryogenic Detectors (Genova, Napoli, Garching, Munich, Oxford and Saclay) and with the University of Neuchâtel. The participation of two Italian National Laboratories (Frascati and Gran Sasso) is being discussed. The Collaboration is obviously open to other interested groups in Europe and in the USA. The evaluation of the cost of the experiment (to be spread over five years) is difficult, since it depends on the competition among the various cryogenic companies and especially on the cost of the material adopted for the crystals, which in the case of TeO₂ represents the major contribution. As a pure indication one could evaluate a cost of 2 M$, divided into approximately three equal parts for the electronics, the dilution refrigerator and the rest of the apparatus (liquefier, consumables, etc.). The cost of the crystals could range from 0.5 to 6 M$ according to the choice of material. These costs do not include travel expenses.
As a first step towards CUORE we would like to propose a simpler and much less expensive experiment, CUORICINO, made of 100 crystals of TeO₂ of 5 cm side, with a total mass of 75 kg. We would not like CUORICINO to be considered as a test, but as a genuinely important experiment. Its mass would be by far the largest for a cryogenic detector. It would contain about 20 kg of
Fig. 1. The sum of the spectra in the region of ββ decay. Note the peaks at 2204 and 2447 keV due to ²¹⁴Bi and at 2615 keV due to ²⁰⁸Tl.
¹³⁰Te, a mass larger than in any running double beta decay experiment. It would allow a very significant experiment on direct coherent interactions of WIMPs based on the seasonal effect. It would permit the operation of a great variety of target nuclei, which could be indicated by future theoretical suggestions on searches for rare decays.
References

[1] D. Twerenbold, Rep. Prog. Phys. 59 (1996) 349.
[2] E. Fiorini, T. Niinikoski, Nucl. Instr. and Meth. 224 (1984) 83.
[3] A. Alessandrello et al., Preliminary results on double beta decay of ¹³⁰Te with an array of twenty cryogenic detectors, Phys. Rev. B, in press.
[4] B. Cabrera, Direct detection WIMP searches below 100 K, Phys. Rep., this conf.
[5] M. Sisti et al., Performance of the CRESST detectors and status of the experiment, in: S. Cooper (Ed.), Proc. VII Int. Workshop on Low Temperature Detectors, Munich, 27 July-2 August 1997, p. 232.
[6] D. Drain, Status of the EDELWEISS experiment, Phys. Rep. 307 (1998), this issue.
[7] E. Fiorini, Nucl. Phys. B 48 (Proc. Suppl.) (1996) 41.
[8] L. Zanotti, J. Phys. G 17 (1991) S373.
[9] L. Baudis, The GENIUS project using one ton of enriched ⁷⁶Ge, Phys. Rep. 307 (1998), this issue.
[10] M.K. Moe, P. Vogel, Ann. Rev. Nucl. Part. Sci. 44 (1994) 247.
[11] A. Alessandrello et al., Phys. Lett. B 408 (1997) 465.
[12] A. Alessandrello et al., Contemporary measurement of scintillation and heat in view of an experiment on double beta decay of ⁴⁸Ca, Phys. Lett. B, in press.
[13] A. Alessandrello et al., Measurements of internal radioactive contamination in samples of Roman lead to be used in experiments on rare events, Nucl. Instr. and Meth. B, in press.
[14] R.J. Creswick et al., Theory for the direct detection of solar axions by coherent Primakoff conversion in germanium detectors, Phys. Lett. B, submitted.
[15] F.T. Avignone et al., Experimental search for solar axions via coherent Primakoff conversion in a germanium detector, Phys. Lett. B, submitted.
[16] N. Booth, B. Cabrera, E. Fiorini, Ann. Rev. Nucl. Sci. 46 (1996) 471.
[17] V.I. Tretyak, Yu. Zdesenko, Atom. Data Nucl. Data Tables 61 (1995) 43.
Physics Reports 307 (1998) 319—324
Results from the CHORUS experiment at CERN
Alfredo G. Cocco *
Università degli Studi di Napoli "Federico II" & INFN Napoli, Complesso Universitario di Monte S. Angelo, I-80126 Napoli, Italy
For the CHORUS Collaboration
Abstract
The CHORUS experiment searches for ν_μ → ν_τ oscillation in the CERN wide-band neutrino beam. The detector comprises two sets of 800 kg nuclear emulsion targets followed by electronic detectors. The experiment has been taking data from 1994 through 1997. In a sample of 31 423 events with a leading muon and 4934 muonless neutrino interactions, no charged-current interactions induced by ν_τ were found. For large Δm² values this corresponds to a limit on the mixing angle of sin²2θ_μτ < 2.3×10⁻³ at 90% CL, thus improving the previous best result. 1998 Published by Elsevier Science B.V. All rights reserved. PACS: 14.60.Pq
1. Introduction
CHORUS is an experiment designed to search for ν_μ → ν_τ oscillation through the observation of charged-current interactions, ν_τ N → τ⁻ X, followed by the decay of the τ lepton. The experiment is sensitive to Δm² above a few eV². Massive neutrinos in this range have been proposed as candidates for the hot component of dark matter in the Universe [2], and the ν_μ → ν_τ oscillation search is further motivated if one assumes a hierarchical pattern of neutrino masses.
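The search is governed by the standard two-flavour appearance probability, P(ν_μ → ν_τ) = sin²2θ sin²(1.27 Δm² L/E) with Δm² in eV², L in km and E in GeV; for Δm² large enough that the oscillation averages over the beam spectrum, P → ½ sin²2θ, which is the regime in which the quoted mixing-angle limit holds. A sketch (the ~0.6 km baseline and the flat 10-60 GeV stand-in spectrum are illustrative assumptions of ours):

```python
import math

# Sketch: the standard two-flavour appearance probability,
# P = sin^2(2 theta) * sin^2(1.27 * dm2 * L / E), with dm2 in eV^2,
# L in km and E in GeV. For large dm2 the oscillatory term averages
# over the beam energy spread and P -> 0.5 * sin^2(2 theta).

def p_osc(sin2_2theta, dm2, L_km, E_GeV):
    """Two-flavour nu_mu -> nu_tau appearance probability."""
    return sin2_2theta * math.sin(1.27 * dm2 * L_km / E_GeV) ** 2

# Average over a crude stand-in for the beam energy spectrum:
energies = [10.0 + 0.1 * i for i in range(500)]   # 10-60 GeV, illustrative
dm2 = 1e3                                         # eV^2, deep in the averaged regime
p_avg = sum(p_osc(1.0, dm2, 0.6, E) for E in energies) / len(energies)
print(f"averaged P / sin^2(2theta) = {p_avg:.2f}")   # close to 0.5
```

The averaged factor of ½ is why a null result at large Δm² translates directly into a limit on sin²2θ alone.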
A.G. Cocco / Physics Reports 307 (1998) 319—324
2. The experimental setup
2.1. The neutrino beam
The CERN wide-band neutrino beam (WBB) contains predominantly muon neutrinos from π⁺ and K⁺ decay, with contamination levels for ν̄_μ and for ν_e, ν̄_e of, respectively, 5% and 1%. The estimated ν_τ background [3] is of the order of 3.3×10⁻⁶ ν_τ charged-current interactions per ν_μ charged-current interaction, and is therefore negligible. The ν_μ component of the beam has an average energy of 27 GeV.
2.2. The apparatus
The apparatus is described in detail in Ref. [4]. As shown in Figs. 1 and 2, it comprises an emulsion target, a scintillating-fiber tracker system, trigger hodoscopes, a magnetic spectrometer, a lead/scintillating-fiber calorimeter, and a muon spectrometer. The emulsion target has a mass of 770 kg and a surface area of 1.42×1.44 m². It consists of four stacks of 36 plates each. Each plate has two 350 μm-thick layers of nuclear emulsion, one on each side of a 90 μm-thick plastic base.
Fig. 1. General layout of the detector.
Fig. 2. Layout of an emulsion stack and associated fiber trackers.
Neutrino interactions occur in nuclear emulsion, whose exceptional spatial resolution (below 1 μm) and hit density (300 grains/mm along the track) allow a three-dimensional reconstruction of the trajectories of the τ lepton and its decay products. Scintillating-fiber trackers locate the trajectories of charged particles produced in the neutrino interaction. The trajectories of these particles are extrapolated to the downstream face of the emulsion stack. Downstream of the target region, a magnetic spectrometer allows the reconstruction of the charge and momentum of charged particles. A lead/scintillating-fiber calorimeter following the spectrometer measures the energy and direction of the hadronic showers and allows detection of neutral particles. The calorimeter is followed by a muon spectrometer, which identifies muons and measures their charge and momentum. It is composed of magnetized iron disks and tracking devices.
3. The data collection

The detector has been exposed to the WBB in two periods: 1994–1995 (Run I) and 1996–1997 (Run II). After the first period, the target emulsion was replaced and the exposed emulsion developed. During Run I, CHORUS collected approximately 969 000 triggers, corresponding to 2.01×10¹⁹ protons on target. On the basis of neutrino flux estimates, trigger efficiency, cross section, dead-time correction and target mass, about 320 000 ν_μ charged-current interactions are expected to have occurred in the emulsion target. Of these, 250 932 events have been reconstructed in the electronic detectors with an identified negative muon and an interaction vertex in the emulsion.

4. The τ⁻ search analysis

4.1. Event selection and vertex location

Two decay topologies have been searched for in this analysis: τ⁻ decay to a muon (μ⁻ channel) and to a single negative hadron (h⁻ channel). An event with a τ⁻ lepton is identified by the presence of a change of direction (kink) of the negative track, due to the decay of the τ⁻ into a single negatively charged particle and neutrals. A cut on the transverse momentum of the decay (p_T > 250 MeV/c) is applied to eliminate K⁻ decays, as shown in Fig. 3A. The search for τ⁻ decays starts from the reconstructed events recorded in the electronic detectors. The event selection provides a set of events with a reconstructed negative track (called in the following the scan-back track) and a prediction for the trajectory of this track through the interface emulsion sheets. The track is then located in the interface emulsion sheets and followed into the target emulsion up to the interaction point using automatic techniques.

4.2. Data analysis

The decay search is performed following two different procedures, which are applied concurrently to each located event.
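The kinematics behind the p_T cut can be sketched as follows; this is an illustrative toy calculation of ours, not CHORUS analysis code, and the momenta and kink angles are made-up numbers:

```python
import math

def kink_pt(p_daughter_mev, theta_kink_rad):
    """Transverse momentum of the decay daughter with respect to the
    parent direction at the kink: p_T = p * sin(theta)."""
    return p_daughter_mev * math.sin(theta_kink_rad)

def passes_cut(p_daughter_mev, theta_kink_rad, pt_cut_mev=250.0):
    """Apply the p_T > 250 MeV/c selection of Section 4.1."""
    return kink_pt(p_daughter_mev, theta_kink_rad) > pt_cut_mev

# a soft 20 mrad kink on a 2 GeV/c track gives p_T of only ~40 MeV/c
# and is rejected, while a hard 150 mrad kink survives the cut
print(passes_cut(2000.0, 0.020))   # False
print(passes_cut(2000.0, 0.150))   # True
```

Since the muon from K⁻ decay in flight carries at most about 236 MeV/c of transverse momentum, such decays cannot pass this cut.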
Fig. 3. Transverse momentum (left) and impact parameter (right) distributions for the μ⁻ channel data sample, compared with the MC simulation for oscillation events.
(A) Short decay path search: This search is designed to detect decays in the vertex plate. If there is at least one track matched with the tracker predictions, the impact parameter of the scan-back track is evaluated as the minimum distance with respect to any other track of the event (Fig. 3B). Only events that show a large impact parameter, i.e. larger than 2–8 μm depending on the longitudinal position inside the plate, are visually inspected.

(B) Long decay path search: The scan-back procedure follows the scan-back track implicitly assuming its straightness. A "long" decay path can be detected in two ways, according to the kink angle:
1. If the decay angle is larger than the scan-back angular tolerance, the scan-back procedure stops and that plate is assumed to be the kink plate. The "parent track" search technique is then applied: the upstream part of the vertex plate is scanned in order to check the possibility of associating a track in the emulsion with the scan-back track. If the minimal distance between a track and the scan-back track is less than 15 μm, this track is considered its "parent" and the event is inspected by eye.
2. If the kink angle is smaller than the scan-back tolerances, the vertex plate is indeed the interaction plate. To check for the presence of a kink, the complete event is scanned by eye in the five plates downstream.
4.3. Background estimate

A possible source of background common to the two channels is the ν_τ contamination of the beam. As shown in [3], taking into account cross sections, branching ratios, and kink detection efficiency, we expect the current sample to contain less than 0.01 of an event produced by "prompt" ν_τ.
The main source of potential background is charm production, since the lifetime and decay modes of some charmed mesons can produce, together with detector inefficiencies, a fake τ⁻ decay topology. These events are expected to occur at a rate of about 10⁻⁵ per ν_μ charged-current interaction. In the h⁻ channel another source of background is the elastic scattering of pions on nuclei without recoil of the nucleus (white kink). A conservative limit on this process leads to an estimate of about 3×10⁻⁵ events per ν_μ charged-current interaction.
5. Analysis and result

A total of 31 423 events in the μ⁻ channel and 4934 events in the h⁻ channel have been analyzed as described above. No τ⁻ decays have been found, while the expected numbers of background events are 0.1 and 0.5 in the μ⁻ and h⁻ channels, respectively. By evaluating the τ-decay detection efficiency, this negative result can be used to exclude a significant region of the oscillation parameter space. In the two-flavour mixing scheme, the result can be expressed as an exclusion plot in the parameter space (sin²2θ_μτ, Δm²_μτ). The oscillation probability depends on the number of observed ν_τ and ν_μ events. Since we have observed zero candidate events, we can express the 90% CL upper limit on sin²2θ_μτ for large Δm²_μτ values according to

sin²2θ_μτ ≤ 2 × 2.37 / { N_CC [ε_μ BR(μ) + ε_h BR(h) r_h] p̄_τμ A_τμ } = 2.3×10⁻³ .  (1)
In the above formula the numerical factor 2.37 takes into account the total systematic error (16%) following the prescriptions given in [5], N_CC is the total number of analyzed events in each channel, ε_{μ,h} are the τ kink detection efficiencies for the μ⁻ and h⁻ channels, r_h is a normalization factor between the two channels, p̄_τμ is the weighted cross-section ratio and A_τμ is the acceptance ratio. The relevant terms are shown in Table 1. The exclusion plot in the (sin²2θ_μτ, Δm²_μτ) plane is shown in Fig. 4.

Table 1
Scanning results summary for the μ⁻ and h⁻ channels
Channel   Year   N_CC     ε       r_h    BR      p̄_τμ   A_τμ
μ⁻        1994   16 837   0.54    —      0.173   0.53    1.05
          1995   14 586   0.34    —
h⁻        1994   15 162   0.245   0.4    0.49    0.53    1.05
          1995   13 375   0.25    0.3
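Plugging the Table 1 numbers into Eq. (1) reproduces the quoted limit. A quick sketch (the per-year weighting scheme is our own reading of the table, not the collaboration's code):

```python
# Table 1 inputs: (N_CC, kink efficiency) per year for the mu channel,
# and (N_CC, efficiency, normalization r_h) per year for the h channel
mu_years = [(16837, 0.54), (14586, 0.34)]
h_years = [(15162, 0.245, 0.4), (13375, 0.25, 0.3)]
BR_mu, BR_h = 0.173, 0.49
p_ratio, A_ratio = 0.53, 1.05      # weighted cross-section and acceptance ratios

N_mu = sum(n for n, _ in mu_years)  # 31 423 analyzed mu- events
# effective (events x efficiency x BR), h channel normalized by r_h
weighted = (sum(n * e for n, e in mu_years) * BR_mu
            + sum(n * e * r for n, e, r in h_years) * BR_h)
limit = 2 * 2.37 / (weighted * p_ratio * A_ratio)
print(f"sin^2(2 theta) < {limit:.1e}")
```

The result is about 2.3×10⁻³, in agreement with the limit quoted in Eq. (1).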
Fig. 4. Result of this analysis compared with previous limits from CCFR [6], CDHS [7], CHARM II [8], and E531 [9].
6. Conclusions

We have presented limits on ν_μ → ν_τ oscillation parameters based on the μ⁻ and h⁻ decay channels of the τ lepton, using a fraction of the data collected by the CHORUS experiment. The limit on sin²2θ_μτ for large Δm²_μτ values is sin²2θ_μτ < 2.3×10⁻³ at 90% CL, thus improving the previous best result [1]. By improving the efficiencies, by increasing the number of τ decay channels searched for, and by enlarging the ν_τ search to the complete set of data, we expect to reach the design sensitivity of the experiment. If no candidates are found, this corresponds to a limit of sin²2θ_μτ < 2×10⁻⁴ for large Δm²_μτ values.

References
[1] E. Eskut et al., CHORUS Collaboration, Phys. Lett. B 424 (1998) 202, and references therein.
[2] Ya.B. Zel'dovich, I.D. Novikov, Relativistic Astrophysics, Nauka, Moscow, 1967.
[3] B. Van der Vyver, Nucl. Instr. and Meth. A 385 (1997) 91.
[4] E. Eskut et al., CHORUS Collaboration, Nucl. Instr. and Meth. A 401 (1997) 7.
[5] R.D. Cousins, V.L. Highland, Nucl. Instr. and Meth. A 320 (1992) 331.
[6] K.S. McFarland et al., Phys. Rev. Lett. 75 (1995) 3993.
[7] F. Dydak et al., CDHS Collaboration, Phys. Lett. B 134 (1984) 103.
[8] M. Gruwé et al., CHARM II Collaboration, Phys. Lett. B 309 (1993) 463.
[9] N. Ushida et al., Phys. Rev. Lett. 57 (1986) 2897.
Physics Reports 307 (1998) 325—331
A study of 42 type Ia supernovae and a resulting measurement of Ω_M and Ω_Λ
Gerson Goldhaber*, Saul Perlmutter
Lawrence Berkeley Laboratory & Center for Particle Astrophysics, University of California at Berkeley, Berkeley, CA 94720, USA
For the Supernova Cosmology Project
Abstract A search for cosmological supernovae has discovered over 75, most of which are type Ia supernovae. There is strong evidence from measurements of nearby type Ia supernovae that they can be considered as distance indicators or "standard candles" after correction for the width (time-scale stretch parameter) of the individual light curves. Measurements and analysis are completed on 42 of these distant, z = 0.18 to 0.83, supernovae. These supernovae, together with 18 "nearby", z < 0.1, supernovae from the Calán/Tololo Supernova Survey allow us to measure the ratio of the matter density of the universe to the critical density, Ω_M, together with the normalized cosmological constant Ω_Λ, the energy density of the vacuum. For a flat universe, i.e. Ω_M + Ω_Λ = 1, we obtain Ω_M = 0.28 +0.09/−0.08 (statistical) +0.05/−0.04 (systematic). The data are strongly inconsistent with a Λ = 0 flat cosmology, the simplest inflationary universe model. An open, Λ = 0 cosmology also does not fit the data well: the data strongly suggest that the cosmological constant is non-zero and positive, with a confidence of P(Λ > 0) > 99%. 1998 Elsevier Science B.V. All rights reserved. PACS: 97.60.Bw
1. The matter density and vacuum energy density of the universe When Einstein introduced his General Theory of Relativity (1916) the universe was assumed to be static. To accomplish such a static universe Einstein had to introduce a repulsive force
* Corresponding author. E-mail: [email protected].
This work was supported in part by the United States Department of Energy, contract numbers DE-AC03-76SF00098, CfPA, and NSF contract number AST-9120005.
G. Aldering, B.J. Boyle, P.G. Castro, W.J. Couch, S. Deustua, S. Fabbro, R.S. Ellis, A.V. Filippenko, A. Fruchter, G. Goldhaber, A. Goobar, D.E. Groom, I.M. Hook, M. Irwin, A.G. Kim, M.Y. Kim, R.A. Knop, J.C. Lee, T. Matheson, R.G. McMahon, H.J.M. Newberg, C. Lidman, P. Nugent, N.J. Nunes, R. Pain, N. Panagia, C.R. Pennypacker, S. Perlmutter, R. Quimby, P. Ruiz-Lapuente, B. Schaefer and N. Walton.
0370-1573/98/$ — see front matter 1998 Elsevier Science B.V. All rights reserved. PII: S0370-1573(98)00091-X
G. Goldhaber, S. Perlmutter / Physics Reports 307 (1998) 325—331
corresponding to a "cosmological constant", Λ, to compensate for the gravitational attraction of the matter in the universe. Later, when Hubble discovered that the universe was actually expanding (1924), Einstein called the cosmological constant "his greatest mistake". All the same, since Λ is consistent with Einstein's theory, one need not assume that Λ = 0. Thus the value of Λ becomes an experimental question. In our current study we primarily obtain a linear combination of these two densities. Here Ω_Λ, the vacuum energy density, is given by Ω_Λ = Λ/(3H₀²), and Ω_M is the matter density of the universe. When we started out with this study, 10 years ago, we were planning to measure the deceleration of the universe, q₀. On the then prevalent assumption (prejudice) that Ω_Λ = 0, we expected a positive deceleration of the universe due to the matter density. With our present observations we find, to a high probability, P(Λ > 0) > 99%, that q₀ is actually negative, corresponding to an acceleration of the expansion of the universe! Our best fit corresponds to the relation 0.8Ω_M − 0.6Ω_Λ ≈ −0.2 ± 0.1. For a flat, Ω_M + Ω_Λ = 1, cosmology this corresponds to Ω_M = 0.28 +0.09/−0.08 (statistical) +0.05/−0.04 (systematic).

Before our present work started, estimates of Ω_M, based on a variety of experiments, ranged from 0.05 to 1.5. Since the light-emitting matter contributes roughly 0.01 to Ω_M, a measurement of Ω_M also determines the contribution of dark matter to the universe.

2. Type Ia SNe as calibrated standard candles?

There is good evidence that type Ia supernovae (SNe Ia), the brightest of all the different types of SNe, can be calibrated to have a standard brightness. A plausible explanation for this behavior is that SNe Ia are the consequence of the explosion of a white dwarf star as it approaches a critical mass of 1.4 solar masses, the Chandrasekhar limit.
A white dwarf is a star that has burnt all of its hydrogen and helium by fusion, primarily to carbon and oxygen, and as a result has collapsed under the gravitational force to a degenerate electron gas in which the C and O nuclei are embedded. The white dwarf thus has a mass of the order of the mass of the sun but a radius comparable to that of the earth. If this white dwarf is in a binary system with another star, a common occurrence, the companion star can transfer matter to the white dwarf until it approaches the Chandrasekhar limit. Near this mass, the electron degeneracy pressure can no longer support the star and a runaway thermonuclear explosion occurs. In this supernova explosion, the temperature and density reach the point at which the C and O nuclei fuse and produce higher mass nuclei. The fusion process stops at ⁵⁶Ni, a radioactive isotope with equal numbers of protons and neutrons. In fact, an enormous amount of ⁵⁶Ni is produced, typically 0.6 of a solar mass. In the explosion the newly produced material is ejected with velocities of about 10 000–30 000 km/s. At first, the exploding star is too small to be seen. Over a period of a few days, however, it expands rapidly and its brightness reaches a maximum value in about 18 days in the rest system. It is during this period that we first discover the SN, as we will describe below. The light observed is produced by ionization from the decay products of ⁵⁶Ni and its daughter isotope ⁵⁶Co. The ⁵⁶Ni isotope has a half-life of about 6 days and ⁵⁶Co has a half-life of 77 days, so it is primarily the ⁵⁶Co decay that is powering the supernova at maximum light. Finally, ⁵⁶Co decays to stable ⁵⁶Fe.
All the same, not all SNe Ia have exactly the same brightness at the peak of the lightcurve. There is a dispersion of about 0.24–0.5 magnitudes (depending on the sample selection criteria) in the maximum brightness distribution [1]. Phillips [2] noted a correlation between the brightness and the decline rate of the SNe Ia. We express this correlation [3] in terms of a stretch parameter s applied to the time axis of the lightcurve of the SN. Thus, s affects both the rise rate and the decline rate of the SN. The greater s, meaning the wider the SN light curve with reference to the standard Leibundgut [4] curve, the brighter the SN. We measure typical values for s from 0.8 to 1.2, with a few outliers above and below these values. We find a correction to the B-band absolute magnitude M_B of −α(s − 1). In our analysis we apply this correction to the apparent magnitude m_B, giving an effective magnitude m_B^eff = m_B + α(s − 1), which then corresponds to a unique value of the absolute magnitude M_B, with an uncertainty of approximately 0.17 magnitudes, or ~17% in flux. (This dispersion can be improved still further with multiple filter-band constraints on extinction; see discussions in [5] and references therein.) In theoretical models, it has been suggested that this lightcurve width-luminosity relation can be understood in terms of varying amounts of ⁵⁶Ni produced and hence variation in the temperature of the ejecta.

3. The observation of 42 SNe

In an ongoing systematic search we have, over the last 6 years, discovered and studied over 75 SNe. The majority of these were the most distant SNe ever observed. We have now completed the analysis of 42 of these SNe. The analysis of the first 7 of our SNe has already been published [3]. An additional SN at z = 0.83 has been measured with the Hubble Space Telescope and added to this sample [6]. The first analysis of the 42-supernova sample was presented at the January 1998 meeting of the American Astronomical Society [7].
The current more extensive analysis [5] agrees with those results, as well as with the more recent, mostly independent findings discussed by Filippenko and Riess in this volume. (Note that these findings are not completely independent, since they both share much of the most significant low-redshift supernova data, and two of our Supernova Cosmology Project supernovae have been included in their confidence-region analyses.) When we began our program, only one high-redshift supernova had been discovered (18 days past maximum) [8]. We developed a technique that made it possible to discover batches of high-redshift supernovae, while still on the rise to maximum light, in a systematic, predictable way. About twice a year, during two sets of two nights at the telescope, our procedure is to take about 50–70 different CCD images, depending on the depth in redshift we are trying to achieve. Each image covers about 15×15 minutes of arc on the sky. The total area covered is thus about 3–4 square degrees. The first set of images are considered "reference" images. The second set, observed about 3 weeks later, are the "search" images in which we hunt for the new light due to a supernova. The timing relative to the moon is critical, since we deal with such dim objects that they are difficult to observe at or near full moon. We take our reference images just after the new moon and our search images just before the next new moon. After discovery we take follow-up images of the SNe as well as spectra. The new supernovae in this sample of 42 were all discovered while still brightening, using the Cerro Tololo 4 m telescope with the 2048² prime-focus CCD camera or the 4×2048² Big Throughput Camera. The supernovae were followed with photometry over the peak of their
lightcurves, and approximately two to three months further (~40–60 days restframe) using the CTIO 4 m, WIYN 3.5 m, ESO 3.6 m, INT 2.5 m, and WHT 4.2 m telescopes. (SN 1997ap and other more recent supernovae have also been followed with HST photometry.) The supernova redshifts and spectral identifications were obtained using the Keck I and II 10 m telescopes with LRIS and the ESO 3.6 m telescope. We have observed spectra of the host galaxies for all of these. SN spectra were observed for almost all SNe after the first 6 SNe. The photometry coverage was most complete in the Kron-Cousins R-band, with Kron-Cousins I-band photometry coverage ranging from two or three points near peak to relatively complete coverage paralleling the R-band observations.

4. The light curve and K-corrections

We compare our measurements to nearby SN light curves measured with a blue filter. Because of the large redshift, the spectral features captured in the blue filter appear in the "red". We thus carry out our measurements with a red filter and then translate our measurements into the blue. As it turned out, this correction, called a K-correction, was particularly straightforward for our first distant SN because 1 + z = 1.458 gives the ratio of the redshifted wavelength to the wavelength at emission. This value turned out to be the ratio of the central wavelength transmitted by our red filter to that of the blue filter used for the nearby standard light curve. The acceptance widths of the two filters were also roughly in this same ratio. For other z values the K-correction is a function of the epoch on the light curve and the stretch. The procedure modifies the light curve to allow for changes in the portion of the spectrum captured by each filter [9,10].

5. Our method for obtaining Ω_M and Ω_Λ

Our method for obtaining both Ω_M and Ω_Λ is discussed in [3,5,11].
To obtain Ω_M and Ω_Λ we make use of the following measurements:
• The redshift z of the supernova or parent galaxy, related to the wavelength change λ − λ₀ of the receding object. Here z is defined as z = (λ − λ₀)/λ₀, where λ₀ is the wavelength in the rest system and λ is the measured wavelength for identified lines. This requires taking a spectrum of the SN, to identify the type Ia SN, whenever possible, as well as of the galaxy for an accurate redshift determination.
• The apparent magnitude m_B^eff of the high-redshift SN at the peak of its light curve, calibrated to a standard lightcurve width (s = 1). This is based on measurement of the SN light intensity as a function of time (generally in two filters), the photometry calibration, K-corrections, and a fit to the lightcurve to obtain the peak brightness and the lightcurve width s. (The two-filter measurements make it possible to characterize the extinction due to dust that reddens the light.)
• The same measurements obtained for a group of 18 well measured nearby type Ia SNe from the Calán/Tololo Supernova Survey [12–14].

The relation between the measured apparent magnitude m_R and the absolute magnitude M_B for a type Ia SN is given by

m_R = 5 log D_L + M_B + 25 + K_BR .  (1)
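The two ingredients above can be written down directly. A small illustration (the 400 nm rest wavelength and the fiducial M_B = −19.5 are hypothetical numbers of ours, not fit outputs):

```python
import math

def redshift(lam_obs, lam_rest):
    """z = (lambda - lambda_0) / lambda_0, for an identified spectral line."""
    return (lam_obs - lam_rest) / lam_rest

def apparent_mag(D_L_mpc, M_abs, K_corr):
    """Eq. (1): m_R = 5 log10(D_L / Mpc) + M_B + 25 + K_BR."""
    return 5 * math.log10(D_L_mpc) + M_abs + 25 + K_corr

# for the first distant SN, 1 + z = 1.458: a feature emitted at 400 nm
# would be observed at 1.458 * 400 nm
z = redshift(1.458 * 400.0, 400.0)
print(round(z, 3))  # 0.458

# an illustrative absolute magnitude at D_L = 100 Mpc, no K-correction
print(apparent_mag(100.0, -19.5, 0.0))  # 15.5
```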
Fig. 1. (a) Hubble diagram from Perlmutter et al. [5] for 42 high-redshift type Ia supernovae from the Supernova Cosmology Project, and 18 low-redshift type Ia supernovae from the Calán/Tololo Supernova Survey, after correcting both sets for the SN Ia lightcurve width-luminosity relation. The inner error bars show the uncertainty due to measurement errors, while the outer error bars show the total uncertainty when the intrinsic luminosity dispersion, 0.17 mag, of lightcurve-width-corrected type Ia supernovae is added in quadrature. The unfilled circles indicate supernovae not included in Fit C. The solid curves are the theoretical m_B^eff(z) for a range of cosmological models with zero cosmological constant: (Ω_M, Ω_Λ) = (0, 0) on top, (1, 0) in the middle and (2, 0) on the bottom. The dashed curves are for a range of flat cosmological models: (Ω_M, Ω_Λ) = (0, 1) on top, (0.5, 0.5) second from top, (1, 0) third from top, and (1.5, −0.5) on the bottom. (b) The magnitude residuals from the best-fit flat cosmology for the Fit C supernova subset, (Ω_M, Ω_Λ) = (0.28, 0.72). The dashed curves are for a range of flat cosmological models: (Ω_M, Ω_Λ) = (0, 1) on top, (0.5, 0.5) third from bottom, (0.75, 0.25) second from bottom, and (1, 0), the solid curve on the bottom. The middle solid curve is for (Ω_M, Ω_Λ) = (0, 0). (c) The uncertainty-normalized residuals from the best-fit flat cosmology for the Fit C supernova subset, (Ω_M, Ω_Λ) = (0.28, 0.72).
Here K_BR is the K-correction between red and blue SN magnitudes, and D_L = D_L(z; Ω_M, Ω_Λ, H₀) is the luminosity distance, given here in Mpc (≈3 million light years). Here H₀ is the Hubble constant, c the velocity of light, and z the measured redshift, usually obtained from the spectrum of the host galaxy. At first sight it appears that D_L depends on the Hubble constant H₀ as well.
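For reference, D_L(z; Ω_M, Ω_Λ, H₀) can be evaluated by a one-dimensional quadrature. The sketch below is our own minimal implementation (not the fitting code of [5]); the value H₀ = 65 km s⁻¹ Mpc⁻¹ is an arbitrary placeholder, and it cancels in any magnitude difference between models:

```python
import math

C_KM_S = 299792.458   # speed of light in km/s

def luminosity_distance(z, om, ol, h0=65.0, steps=2000):
    """D_L in Mpc for an FRW cosmology, by trapezoidal integration of
    dz'/E(z') with E(z) = sqrt(om (1+z)^3 + ok (1+z)^2 + ol)."""
    ok = 1.0 - om - ol
    def E(x):
        return math.sqrt(om * (1 + x) ** 3 + ok * (1 + x) ** 2 + ol)
    dz = z / steps
    comoving = sum((1 / E(i * dz) + 1 / E((i + 1) * dz)) * dz / 2
                   for i in range(steps))
    if ok > 1e-12:                     # open universe: hyperbolic geometry
        comoving = math.sinh(math.sqrt(ok) * comoving) / math.sqrt(ok)
    elif ok < -1e-12:                  # closed universe: spherical geometry
        comoving = math.sin(math.sqrt(-ok) * comoving) / math.sqrt(-ok)
    return (1 + z) * (C_KM_S / h0) * comoving

# at z = 0.5, the flat Lambda model is fainter than Einstein-de Sitter:
dm = 5 * math.log10(luminosity_distance(0.5, 0.28, 0.72)
                    / luminosity_distance(0.5, 1.0, 0.0))
print(f"magnitude separation at z = 0.5: {dm:.2f} mag")
```

A separation of a few tenths of a magnitude at z ~ 0.5 is what allows the curves in Fig. 1 to be distinguished.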
Fig. 2. The 68%, 90%, 95%, and 99% confidence regions in the Ω_M–Ω_Λ plane from Perlmutter et al. [5] (see this reference for details of the fit procedure). (The table of this two-dimensional probability distribution is available at http://www-supernova.lbl.gov/.) Note that the spatial curvature of the universe, open, flat, or closed, is not determinative of the future of the universe's expansion, which is indicated by the near-horizontal solid line. In cosmologies above this near-horizontal line the universe will expand forever, while below this line the expansion of the universe will eventually come to a halt and recollapse. This line is not quite horizontal because at very high mass density there is a region where the mass density can bring the expansion to a halt before the scale of the universe is big enough that the mass density is dilute with respect to the cosmological-constant energy density. The upper-left shaded region, labeled "no big bang", represents "bouncing universe" cosmologies with no big bang in the past. The lower-right shaded region corresponds to a universe that is younger than the oldest heavy elements, for any value H₀ ≥ 50 km s⁻¹ Mpc⁻¹.
However, it turns out that the Hubble constant cancels, because H₀ also appears in the determination of M_B obtained from type Ia SN measurements in nearby galaxies [12–14]. Fig. 1 shows m_B^eff for our 42 SNe, as well as for the 18 Calán/Tololo Supernova Survey SNe, plotted against redshift. The curves represent the Hubble diagram for a variety of cosmological models, as indicated on the figure. The fit to the expression for D_L gives an approximately linear relation between Ω_M and Ω_Λ in the region of interest (Ω_M < 1.5). Our best fit is illustrated in Fig. 2. We have carried out a series of fits under somewhat different data selections, e.g. (A) all the data, (B) with two outlying SNe removed, (C) with two reddened and two extreme-stretch SNe removed, etc. (see [5] for details). Our results are quite stable with respect to these various sample selections. In particular, one fit was made with no stretch corrections, and this yielded a fit completely consistent with our results with stretch corrections. We have used the two-filter color measurements to study the possible effect of extinction due to dust that reddens the supernova light, and find that our results are robust with respect to this effect (again, see [5] for details). Our measurements have to be carried out at cosmological distances, i.e. large values of z, since for small z values, z < 0.1, the dependence on Ω_M is negligible and the luminosity distance becomes D_L = cz/H₀. Furthermore, the shape of the light curve is redshifted (time dilated) by a factor 1 + z on the time axis. We presented this evidence that the redshift corresponds to the expansion of the universe at a 1995 conference in Aigua Blava [15]. A similar result was presented later by Leibundgut et al. [16].
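The approximately linear constraint from Section 1, 0.8Ω_M − 0.6Ω_Λ ≈ −0.2, intersected with the flatness condition Ω_M + Ω_Λ = 1, reproduces the flat-universe result to within its quoted uncertainty:

```python
# substitute OL = 1 - Om into 0.8*Om - 0.6*OL = -0.2:
# 0.8*Om - 0.6*(1 - Om) = -0.2  =>  (0.8 + 0.6)*Om = -0.2 + 0.6
om_flat = (-0.2 + 0.6) / (0.8 + 0.6)
print(round(om_flat, 2))  # 0.29
```

This is consistent with the quoted Ω_M = 0.28 within the ±0.1 uncertainty of the linear relation.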
References
[1] D. Branch, G.A. Tammann, Ann. Rev. Astron. Astrophys. 30 (1992) 359.
[2] M.M. Phillips, Astrophys. J. 413 (1993) L105.
[3] S. Perlmutter et al., Astrophys. J. 483 (1997) 565.
[4] B. Leibundgut, in: S.E. Woosley (Ed.), Supernovae, Springer, New York, 1991, p. 751.
[5] S. Perlmutter et al., Astrophys. J. (1998), submitted. Also available at www-supernova.LBL.gov and astro-ph.
[6] S. Perlmutter et al., Nature 391 (1998) 51, and erratum (on author list), Nature 392 (1998) 311.
[7] S. Perlmutter et al., Astrophys. J. (1998) LBL-42230: presentation at the January 1998 Meeting of the American Astronomical Society. Available at www-supernova.LBL.gov and astro-ph; referenced in B.A.A.S. 29 (1997) 1351.
[8] Norgaard-Nielsen et al., Nature 339 (1989) 523.
[9] A. Kim, A. Goobar, S. Perlmutter, PASP 108 (1996) 190.
[10] P. Nugent et al., PASP (1998), in preparation.
[11] A. Goobar, S. Perlmutter, Astrophys. J. 450 (1995) 14.
[12] M. Hamuy, M.M. Phillips, L.A. Wells, J. Maza, PASP 105 (1993) 787.
[13] M. Hamuy et al., Astron. J. 106 (1993) 2392.
[14] M. Hamuy, M.M. Phillips, J. Maza, N.B. Suntzeff, R. Schommer, R. Aviles, Astron. J. 109 (1995) 1.
[15] G. Goldhaber et al., in: R. Canal, P. Ruiz-Lapuente, J. Isern (Eds.), Thermonuclear Supernovae, Kluwer, Dordrecht, 1997.
[16] B. Leibundgut et al., Astrophys. J. Lett. 466 (1996) L21.
Physics Reports 307 (1999) 333—432
Wave transmission in nonlinear lattices
D. Hennig *, G.P. Tsironis
Freie Universität Berlin, Fachbereich Physik, Institut für Theoretische Physik, Arnimallee 14, 14195 Berlin, Germany
Department of Physics, University of Crete and Foundation for Research and Technology Hellas, P.O. Box 2208, Heraklion 71003, Crete, Greece
Received February 1998; editor: D.K. Campbell
Contents
1. Nonlinear lattice systems 336
  1.1. Introduction 336
  1.2. The discrete nonlinear Schrödinger equation 336
  1.3. The Holstein model and the DNLS 337
  1.4. Coupled nonlinear wave guides and the DNLS 338
  1.5. A generalized DNLS and nonlinear electrical lattices 341
  1.6. Connection with the Holstein model 343
  1.7. General properties of nonlinear maps 344
  1.8. Integrable mappings and soliton equations 346
2. Spatial properties of integrable and nonintegrable discrete nonlinear Schrödinger equations 348
  2.1. Integrable and nonintegrable discrete nonlinear Schrödinger equations 348
  2.2. The generalized nonlinear discrete Schrödinger equation 350
  2.3. Stability and regular solutions 351
  2.4. Reduction of the dynamics to a two-dimensional map 356
  2.5. Period-doubling bifurcation sequence 358
  2.6. Transmission properties 362
  2.7. Amplitude stability 364
3. Soliton-like solutions of the generalized discrete nonlinear Schrödinger equation 367
  3.1. Introduction 367
  3.2. The real-valued stationary problem of the GDNLS 367
  3.3. The anti-integrable limit and localized solutions 370
  3.4. The Melnikov function and homoclinic orbits 372
  3.5. Normal form computation of the homoclinic tangle 375
  3.6. Homoclinic, heteroclinic orbits and excitations of localized solutions 378
  3.7. The soliton pinning energy 382
  3.8. Summary 385
4. Effects of nonlinearity in Kronig-Penney models 387
  4.1. Motivation 387
  4.2. The nonlinear Kronig-Penney model 387
  4.3. Propagation in periodic and quasiperiodic nonlinear superlattices 392
  4.4. Wave propagation in periodic nonlinear superlattices 394
  4.5. Transmission in a quasiperiodic lattice-I 401
  4.6. Transmission in a quasiperiodic lattice-II 403
  4.7. Field dependence and multistability 412
5. Conclusions 426
Acknowledgements 427
References 427

* Corresponding author.
0370-1573/99/$ — see front matter 1999 Elsevier Science B.V. All rights reserved. PII: S0370-1573(98)00025-8
D. Hennig, G.P. Tsironis / Physics Reports 307 (1999) 333—432
Abstract The interplay of nonlinearity with lattice discreteness leads to phenomena and propagation properties quite distinct from those appearing in continuous nonlinear systems. For a large variety of condensed matter and optics applications the continuous wave approximation is not appropriate. In the present review we discuss wave transmission properties in one-dimensional nonlinear lattices. Our paradigmatic equations are discrete nonlinear Schrödinger equations, and their study is done through a dynamical systems approach. We focus on stationary wave properties and utilize well known results from the theory of dynamical systems to investigate various aspects of wave transmission and wave localization. We analyze in detail the more general dynamical system corresponding to the equation that interpolates between the non-integrable discrete nonlinear Schrödinger equation and the integrable Ablowitz-Ladik equation. We utilize this analysis in a nonlinear Kronig-Penney model and investigate transmission and band modification properties. We discuss the modifications that are effected through an electric field and the nonlinear Wannier-Stark localization effects that are induced. Several applications are described, such as polarons in one-dimensional lattices, semiconductor superlattices and one-dimensional nonlinear photonic band gap systems. 1999 Elsevier Science B.V. All rights reserved. PACS: 63.10.+a
1. Nonlinear lattice systems

1.1. Introduction

Nonlinear lattices are formed when physical properties of a system are described through an infinite set of coupled nonlinear evolution equations. The lattice typically has a spatial connotation since, in most cases of interest, the physical system corresponds to coupled sets of linear or nonlinear oscillators distributed in space. If we are interested in phenomena with a length scale much larger than the typical distance between the oscillators, we can perform a continuous approximation and obtain nonlinear partial differential equations. Many of the equations that result from this continuous approximation have interesting properties and lead to solitons, solitary waves, breathers, etc. The other limit, where the waves of interest are on the same scale as the typical interoscillator distance, is also quite interesting, and the corresponding wave properties are quite distinct from those in the continuous limit. In this realm nonlinearity and discreteness conspire to produce localized modes as well as global lattice properties different from those of the continuous model. The emphasis of this review will be towards describing global lattice properties related to wave transmission through one-dimensional discrete nonlinear systems. We will use as our paradigmatic equation for this review a generalized version of the discrete nonlinear Schrödinger (DNLS) equation that contains both integrable and nonintegrable terms. A major part of the review will deal with a nonlinear version of the Kronig-Penney model with delta functions, or a nonlinear Dirac comb, which will be shown to be equivalent to a DNLS-like nonlinear lattice.

1.2. The discrete nonlinear Schrödinger equation

The DNLS or discrete self-trapping (DST) equation describes properties of chemical, condensed matter as well as optical systems where self-trapping mechanisms are present.
These mechanisms arise either from strong interaction with environmental variables or from genuine nonlinear properties of the medium. The DNLS equation was introduced in order to describe the dynamics of a set of nonlinear anharmonic oscillators and to understand nonlinear localization phenomena [1]. It can also be viewed as an equation describing the motion of a quantum mechanical particle interacting strongly with vibrations [2]. If ψ_n(t) denotes the probability amplitude for the particle to be at site n of a one dimensional lattice at time t, DNLS reads:

i dψ_n/dt = ε_n ψ_n + V(ψ_{n−1} + ψ_{n+1}) − γ|ψ_n|² ψ_n ,   (1)
where ε_n designates the local energy at site n of a one dimensional crystal, V is the nearest-neighbor wavefunction overlap and γ is the nonlinearity parameter, related to the local interaction of the particle with other degrees of freedom of the medium. Typically an infinite, discrete set of equations such as DNLS is viewed in two different ways: either as a discretization of a corresponding continuous field equation, or as an equation describing dynamics in discrete geometries. In the case of DNLS, the corresponding continuous field equation is the celebrated nonlinear Schrödinger equation. The present exposition will take the point of view that DNLS represents dynamics in a discrete one dimensional lattice. We will therefore not relate properties of DNLS to those of the corresponding continuous equation.
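As a concrete illustration of the temporal view of Eq. (1), the following sketch integrates the DNLS chain with a fourth-order Runge–Kutta step and monitors the conserved probability norm Σ_n |ψ_n|². The lattice size, parameter values and hard-wall boundaries are illustrative choices of ours, not taken from the text:

```python
import numpy as np

def dnls_rhs(psi, eps, V, gamma):
    """Right-hand side of Eq. (1): i dpsi_n/dt = eps_n psi_n
    + V (psi_{n-1} + psi_{n+1}) - gamma |psi_n|^2 psi_n,
    with hard-wall (open) boundaries."""
    nb = np.zeros_like(psi)
    nb[1:] += psi[:-1]            # psi_{n-1}
    nb[:-1] += psi[1:]            # psi_{n+1}
    return -1j * (eps * psi + V * nb - gamma * np.abs(psi)**2 * psi)

def rk4_step(psi, dt, *args):
    k1 = dnls_rhs(psi, *args)
    k2 = dnls_rhs(psi + 0.5 * dt * k1, *args)
    k3 = dnls_rhs(psi + 0.5 * dt * k2, *args)
    k4 = dnls_rhs(psi + dt * k3, *args)
    return psi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Excitation initially localized on the central site of a 51-site lattice.
N, V, gamma = 51, 1.0, 4.0
eps = np.zeros(N)
psi = np.zeros(N, complex)
psi[N // 2] = 1.0

for _ in range(2000):             # integrate to t = 10 with dt = 0.005
    psi = rk4_step(psi, 0.005, eps, V, gamma)

norm = np.sum(np.abs(psi)**2)     # exactly conserved by the flow of Eq. (1)
```

For γ of the order of several V, most of the probability remains pinned near the initially excited site (nonlinear self-trapping), while the norm stays at unity up to integration error.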
The DNLS equation has a long history; in its time-independent form it was first obtained by Holstein in his study of the polaron problem [3]. It was subsequently derived in a fully time-dependent form by Davydov in his studies of energy transfer in proteins and other biological materials [4–7]. Eilbeck, Lomdahl and Scott [1,8–10] studied DNLS as a Hamiltonian system of classical oscillators, focused on analytical and perturbative results and showed that bifurcations occur in the space of stationary states for different values of the nonlinearity parameter. These bifurcations in the discrete set of equations are associated with the nonlinearity-induced self-trapping described by DNLS. In order to understand the dynamical properties of DNLS solutions, Kenkre, Campbell and Tsironis studied extensively the nonlinear dimer, the smallest nontrivial DNLS unit [2,11,12]. The latter proved to be completely integrable, and from its complete solution a number of interesting properties of self-trapping were obtained. Additionally, the effects of nonlinearity on a variety of physical observables were studied, leading to predictions for possible experiments [13–15]. By adding to the DNLS equation a nonlinear term that is identical to the one in the Ablowitz–Ladik (AL) [16] equation one obtains a combined AL-DNLS equation [17,18]:

i dψ_n/dt = (1 + μ|ψ_n|²)[ψ_{n+1} + ψ_{n−1}] − γ|ψ_n|² ψ_n ,   (2)

where we set for simplicity ε_n = 0. We note that for γ = 0 Eq. (2) reduces to the integrable Ablowitz–Ladik equation, whereas in the other extreme, when μ = 0, it becomes the nonintegrable DNLS. The combined AL-DNLS equation interpolates between these two extreme cases. In this article we will deal almost exclusively with stationary properties of DNLS and DNLS-like equations, such as the AL-DNLS equation, in extended lattice systems. Since our interest in these problems is motivated through physical applications, we will first discuss the context in which DNLS arises in applications.

1.3. The Holstein model and the DNLS

In order to see the connection of DNLS with the Holstein model [3] for molecular crystals we start with the Hamiltonian:

H = (K/2) Σ_n u_n² + (M/2) Σ_n (du_n/dt)² + Σ_n ε_n |n⟩⟨n| − J Σ_n [|n+1⟩⟨n| + |n⟩⟨n+1|] − A Σ_n u_n |n⟩⟨n| .   (3)

This Hamiltonian represents an excitation moving in a one-dimensional crystal while interacting with local Einstein-type oscillators. In Eq. (3) ε_n represents the local site energy at site n, J gives the magnitude of the wavefunction overlap of neighboring sites, |n⟩ and ⟨n| are related to the probability amplitudes at site n, whereas u_n is the displacement of the n-th local oscillator. The exciton-phonon coupling term is diagonal in the |n⟩ basis and depends only on local oscillator displacements. We neglect the kinetic energy terms and expand the time-dependent wave function as |Ψ⟩ = Σ_p Ψ_p |p⟩, where the |p⟩ represent Wannier states. Inserting this into the time-dependent Schrödinger equation i(d|Ψ⟩/dt) = H|Ψ⟩, and using the orthonormality property of the |p⟩'s,
we obtain:

i dΨ_n/dt = (K/2) Σ_m u_m² Ψ_n + ε_n Ψ_n − J[Ψ_{n−1} + Ψ_{n+1}] − A u_n Ψ_n .   (4)

Next, we eliminate the vibrational degrees of freedom by imposing the condition of minimization of the energy of the stationary states [3]. Inserting Ψ_n ∝ exp[−iEt] and using the normalization condition for the amplitudes Ψ_p, Σ_p |Ψ_p|² = 1, we get

E = (K/2) Σ_n u_n² + Σ_n [ε_n − A u_n] |Ψ_n|² − J Σ_n (Ψ_{n−1} + Ψ_{n+1}) Ψ_n* .   (5)

Imposing the extremum energy condition, i.e. dE/du_n = 0, we obtain u_n = A|Ψ_n|²/K. Inserting this back into Eq. (4), we get

i dΨ_n/dt = (A²/2K) Σ_p |Ψ_p|⁴ Ψ_n + ε_n Ψ_n − J[Ψ_{n−1} + Ψ_{n+1}] − (A²/K)|Ψ_n|² Ψ_n .   (6)

This last step represents a departure from the Holstein adiabatic approach, being valid in a limit where the assumed classical vibrational degrees of freedom adjust rapidly to the excitonic motion. In this anti-adiabatic limit it is still possible to retain approximately the dynamics of the original Eq. (4). The quantity (A²/2K) Σ_p |Ψ_p|⁴ represents the total vibrational energy. If we measure energies with respect to this background value, we arrive at an effective nonlinear equation for the amplitude Ψ_n(t):

i dΨ_n/dt = ε_n Ψ_n − J[Ψ_{n−1} + Ψ_{n+1}] − (A²/K)|Ψ_n|² Ψ_n .   (7)

This closed nonlinear equation describes the effective motion of the "polaron" in the aforementioned anti-adiabatic limit. The "time step" dt in the time derivative should be understood as short compared to the time scale of the "bare exciton motion" (proportional to 1/J) but long compared to the fast vibrational motion (proportional to 1/K).

1.4. Coupled nonlinear waveguides and the DNLS

In the previous section we showed how DNLS can be motivated in a solid-state context. In an optics context, DNLS describes wave motion in coupled nonlinear waveguides. When an electromagnetic wave is sent through a nonlinear waveguide coupled to other waveguides in its vicinity, Ψ_n represents the amplitude coefficient in an expansion of the electromagnetic field in terms of the wave normal modes in the waveguide.
Coupling causes power to be exchanged among the waveguides. The nonlinear nature of the materials in each waveguide (coupler) can cause a "trapping" of power in one of the waveguides. Self-trapping now happens in space rather than in time. These features could be exploited in the design of ultrafast optical switches with applications in optical computers [19,20]. Nonlinear couplers arranged in various geometries are known to have properties that make them attractive candidates for all-optical switching devices. In Fig. 1 we show a typical configuration for an array of such couplers. The basic nonlinear coupler model, introduced by Jensen in 1982, involves two waveguides made of similar optical material embedded in a different host material [19]. The waveguides have strong nonlinear susceptibilities whereas the host is made out
Fig. 1. A system of coupled nonlinear waveguides extending in the z-direction.
of material with a purely linear susceptibility. The host enables interaction between the modes propagating in the two waveguides, whereas the nonlinear susceptibility gives rise to the phenomenon of mode self-trapping in each waveguide. For a device of a given length, the launching of power in one side of the device can give a wide range of amplitudes in each guide. For sufficiently large values of power, the nonlinear susceptibility terms dominate and we have almost complete self-trapping of the energy in the initially excited guide. Switching is possible for a variety of different initial electric field amplitudes in both waveguides [19,21]. We assume an extended system involving many couplers distributed as in Fig. 1 and perform normal mode analysis. From [19], the amplitude a_k^n of the kth mode of the nth guide obeys the equation

−i da_k^n/dz = (ω/4P_k) ∫ dx dy E_k^n · P ,   (8)

where the axes of the guides are along z, E_k^n is the electric field of the kth mode in the nth guide, P_k is the power in the kth mode and P is the perturbing polarization due to linear and nonlinear effects. For the nth guide,

P/ε₀ = d E^n + (e+d)[E^{n+1} + E^{n−1}] + χ[|E^n|² + |E^{n−1}|² + |E^{n+1}|²] E^n ,   (9)

in which e is the dielectric coefficient of the host material, e+d that of the guide material, and χ is the third-order susceptibility [22]. E^i is the total field due to the ith guide. Eq. (8) then gives

−i da_k^n/dz = (ωε₀/4P_k) ∫ dx dy [ d E_k^{n*} · E^n + (e+d) E_k^{n*} · (E^{n−1} + E^{n+1}) + χ(|E^n|² + |E^{n−1}|² + |E^{n+1}|²){E_k^{n*} · E^n} ] .   (10)
Similar equations hold for guides (n−1) and (n+1). Expanding the total fields in normal modes, Eq. (10) gives a set of mode-coupled equations for the mode amplitudes. If we assume only the lowest single-mode operation for each guide, so that E^n = a_1^n E_1^n (and similarly for E^{n−1} and E^{n+1}), then Eq. (10) with k = 1 gives the following set of equations:

−i da^n/dz = Q^n a^n + Q_{n,n−1} a^{n−1} + Q_{n,n+1} a^{n+1} + Q̃^n |a^n|² a^n .   (11)

The coefficients are given by

Q^n = (ωε₀/4P) ∫ dx dy d |E^n|² ,   (12)

Q̃^n = (ωε₀/4P) χ ∫ dx dy |E^n|⁴ ,   (13)

Q_{nl} = (ωε₀/4P) ∫ dx dy (e+d) E^{n*} · E^l ,   (n ≠ l) .   (14)
The coupling coefficient Q_{nl} for n ≠ l is generally complex due to the phase mismatch associated with the assumed mode factor exp(iβ_k z). The latter factor enters in a normal mode expansion

E^n = Σ_k a_k^n exp(iβ_k z) E_k^n

for each mode, with β_k being the wave vector of the kth mode propagating in the z direction. The inner product in the integrand of Eq. (14) may be positive or negative depending on the polarization direction in each waveguide. In the general case of dissimilar waveguides (say guides n and n−1) phase mismatch will result in spatially modulated Q_{n,n−1} terms proportional to exp(iΔβz), with Δβ the difference of the wavevectors of the waves propagating in the two waveguides. When the waveguides are taken to be identical, Δβ = 0 and space modulation is not present. Additionally, with the present boundary conditions at z = 0, the inner product in Eq. (14) is positive and as a result Q_{n,n−1} is real and positive. If, on the other hand, phase mismatch leads to a spatial Q_{n,n−1} modulation much faster than the mode amplitude change over the length of the waveguide, an average effective Q_{n,n−1} can be used. An average of a rapidly oscillating factor exp(iΔβz) over the device length L leads, in general, to a complex Q_{n,n−1} term whose actual value depends on the product (Δβ)L. In this context, it is possible to choose values of Δβ that give rise to a real but negative effective Q_{n,n−1} term, resulting in more abrupt switching properties. Taking Q_{n,n−1} real, then Q_{n,n−1} = Q_{n−1,n} and by symmetry Q_{n,n+1} = Q_{n+1,n}. Defining Q_{n,n−1} = Q_{n,n+1} = −V, Eq. (11) can now be written as

−i da^n/dz = Q a^n − V(a^{n−1} + a^{n+1}) + Q̃|a^n|² a^n ,   (15)

where the subscript 1 in the variables a^i was dropped. Now letting

a^n = c_n √P exp(iQz) ,   (16)
where P is the total input power and γ = Q̃P, gives the simplified equations

i dc_n/dz = V(c_{n+1} + c_{n−1}) − γ|c_n|² c_n .   (17)
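For two guides, Eq. (17) reduces to Jensen's coupler, i.e. the nonlinear dimer mentioned earlier, with z playing the role of time. Its self-trapping transition for single-guide excitation occurs at γ = 4V [2,11,12]; the sketch below (parameters are illustrative choices of ours) reproduces it by recording the minimum occupation of the initially excited guide on either side of the threshold:

```python
import numpy as np

def dimer_rhs(c, V, gamma):
    """Two-site version of Eq. (17): i dc1/dz = V c2 - gamma |c1|^2 c1 (and 1 <-> 2)."""
    return -1j * (V * c[::-1] - gamma * np.abs(c)**2 * c)

def min_occupation(V, gamma, z_end=20.0, dz=0.001):
    """Integrate from c = (1, 0) and return the minimum occupation
    probability |c1|^2 of the initially excited guide."""
    c = np.array([1.0, 0.0], complex)
    min_p1 = 1.0
    for _ in range(int(z_end / dz)):
        k1 = dimer_rhs(c, V, gamma)
        k2 = dimer_rhs(c + 0.5 * dz * k1, V, gamma)
        k3 = dimer_rhs(c + 0.5 * dz * k2, V, gamma)
        k4 = dimer_rhs(c + dz * k3, V, gamma)
        c = c + dz / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        min_p1 = min(min_p1, abs(c[0])**2)
    return min_p1

free = min_occupation(V=1.0, gamma=2.0)     # below threshold: power swings fully across
trapped = min_occupation(V=1.0, gamma=6.0)  # above threshold: power stays in guide 1
```

Below threshold the power oscillates completely between the guides (the minimum occupation approaches zero); above threshold more than half of the power always remains in the initially excited guide, which is the basis of the switching behavior discussed above.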
We recognize the DNLS with the standard unity normalization condition

Σ_p |c_p|² = 1 .   (18)

We note that by normalizing the variables c_i(z) to one, we can associate each of them with a probability amplitude. We can thus consider DNLS as an effective equation describing the motion (when z is interpreted as "time") of a quantum mechanical particle in a lattice while interacting strongly with other degrees of freedom. Furthermore, since the nonlinearity parameter γ is formally proportional to the total input power P, we can express the dependence of coupler properties on P equivalently as the influence of the value of γ on the "probability" |c_i(z)|². For the nonlinear optical couplers the nonlinearity parameter is proportional to the third-order electric field susceptibility χ, which in turn is proportional to the Kerr coefficient n₂. It is well known that for a Kerr-type medium the index of refraction is given by n = n₀ + n₂|E|², where n₀ is the linear index of refraction and n₂ is called the Kerr coefficient. When the latter has a positive sign the medium has self-focusing properties, whereas when it is negative the medium is self-defocusing.

1.5. A generalized DNLS and nonlinear electrical lattices

Following the work of Marquié, Bilbault and Remoissenet on the nonlinear discrete electrical lattice, we will show that the dynamics of modulated waves can be modeled approximately through a generalized discrete nonlinear Schrödinger equation interpolating between the Ablowitz–Ladik equation and the DNLS [23]. We consider a lossless nonlinear electrical lattice of N identical cells as shown in Fig. 2. In each of the cells there is a linear inductance L₂ in parallel with a nonlinear capacitor C(V_n), and neighboring cells are bridged via series linear inductances L₁. Using Kirchhoff's laws one derives
Fig. 2. A schematic representation of the electric network (after [23]).
a system of nonlinear discrete equations containing the nonlinear electrical charge Q_n(t) of the nth cell and the corresponding voltage V_n(t):

d²Q_n/dt² = (1/L₁)(V_{n+1} + V_{n−1} − 2V_n) − (1/L₂)V_n ,   n = 1, 2, … .   (19)

We assume further for the charge a voltage dependence similar to that of an electrical Toda lattice [24,25],

Q_n(t) = A C₀ ln[1 + V_n/A] ,   (20)

which is justified if the inverse of the nonlinear capacitance follows a linear relation [26] according to

1/C(V_n) = (A + V_n)/(A C₀) .   (21)

With the help of Eq. (19) one obtains the linear dispersion relation typical for a bandpass filter,

ω² = ω₀² + 4ω₁² sin²(k/2) ,   (22)

where ω₀² = 1/L₂C₀ and ω₁² = 1/L₁C₀. Due to the lattice discreteness the spectrum is bounded from above by a cutoff frequency f_max = ω_max/2π = (ω₀² + 4ω₁²)^{1/2}/2π.
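The band structure (22) is straightforward to evaluate numerically. In the sketch below the component values L₁, L₂, C₀ are hypothetical placeholders of ours, not the values used in the experiments of [23]:

```python
import numpy as np

# Hypothetical circuit values: series/parallel inductances (H), capacitance (F).
L1, L2, C0 = 0.22e-3, 0.47e-3, 320e-12
w0 = 1.0 / np.sqrt(L2 * C0)   # gap angular frequency omega_0, Eq. (22)
w1 = 1.0 / np.sqrt(L1 * C0)   # coupling angular frequency omega_1

k = np.linspace(0.0, np.pi, 400)
omega = np.sqrt(w0**2 + 4.0 * w1**2 * np.sin(k / 2.0)**2)   # dispersion (22)

f_gap = w0 / (2.0 * np.pi)                             # lower band edge at k = 0
f_max = np.sqrt(w0**2 + 4.0 * w1**2) / (2.0 * np.pi)   # cutoff frequency at k = pi
```

The band extends from f_gap to f_max and ω(k) increases monotonically across the Brillouin zone, so any harmonic generated above f_max cannot propagate, which is the fact exploited in the envelope reduction below.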
Inserting the expression for Q_n of Eq. (20) into Eq. (19) one obtains [23]

d²V_n/dt² = (ω₁²/A)(A + V_n)[V_{n+1} + V_{n−1} − (2 + ω₀²/ω₁²)V_n] + (1/(A + V_n))(dV_n/dt)² .   (23)

Discreteness of the lattice is maintained for a gap angular frequency ω₀ much larger than any other frequency of the system, i.e. ω₀² ≫ 4ω₁². Using this fact, we can neglect all the harmonics of a wave with any frequency f since they lie above the cutoff frequency. Therefore the study is restricted to slow temporal variations of the wave envelope, and solutions of the form

V_n(t) = ε Ψ_n(T) exp(−iωt) + ε Ψ_n*(T) exp(+iωt)   (24)

are sought, where ε is a small parameter rescaling the time unit as T = εt. Upon substituting this expression in Eq. (23) and retaining only terms of order ε we arrive at

i (2ω/ω₁²) dΨ_n/dT = Ψ_{n+1} + Ψ_{n−1} − (2 + ω₀²/ω₁²)Ψ_n + (|Ψ_n|²/A)[Ψ_{n+1} + Ψ_{n−1} − (2 + ω₀²/ω₁²)Ψ_n] + (Ψ_n²/A)[Ψ*_{n+1} + Ψ*_{n−1} − (2 + ω₀²/ω₁²)Ψ*_n] .   (25)

Collecting now terms in exp(−iωt), setting τ = ω₁²T/2ω and Ψ_n = Φ_n exp[iτ(ω² − ω₀² − 2ω₁²)/ω₁²], gives finally

i dΦ_n/dτ + [Φ_{n+1} + Φ_{n−1}] + [μ(Φ_{n+1} + Φ_{n−1}) − 2νΦ_n]|Φ_n|² = 0 ,   (26)

with parameters

μ = 1/A ,   ν = (2ω₀² + ω² + 2ω₁²)/(2ω₁²A) .   (27)
With the help of the generalized discrete nonlinear Schrödinger equation Remoissenet et al. demonstrated theoretically the possibility for the system to exhibit modulational instability, leading to a self-induced modulation of an input plane wave with the subsequent generation of localized pulses. In this way energy localization in a homogeneous nonlinear system is possible and is manifested in the formation of envelope solitons [26]. Experimentally these results are confirmed by the observation of a staggered localized mode in the real electrical network.

1.6. Connection with the Holstein model

We saw previously how the nonlinear nonlocal term of the AL-DNLS equation arises in the context of the electrical lattice. Given the connection of DNLS with the Holstein model, it makes sense to ask whether this nonlocal term could also be associated with that model in some form. It is customary to view the nonlinear term in the pure DNLS equation as being associated with a local energy distortion that arises variationally from the adiabatic elimination of vibrational degrees of freedom. The AL equation, on the other hand, does not carry a similar physical interpretation in this context. We can, however, identify the physics behind the interpolating AL-DNLS equation in a more precise way, starting from an extension of the one-electron Holstein Hamiltonian. We assume that an electron is moving in a one dimensional tight-binding lattice with nearest neighbor matrix elements J while at the same time it interacts with local Einstein oscillators of mass M and frequency ω_E. The oscillators modulate the local electron energies (as in the conventional Holstein model) but also affect the transfer rates; we assume that the modification of the rate between adjacent sites is determined through the average local oscillator distortion.
We have the following Hamiltonian:

H_H = (M/2) Σ_n (ẏ_n² + ω_E² y_n²) − J Σ_n (a_n† a_{n+1} + a_{n+1}† a_n) − α Σ_n y_n a_n† a_n − β Σ_n (y_n + y_{n+1})(a_{n+1}† a_n + a_n† a_{n+1}) ,   (28)

where a_n†, a_n are the electron creation and annihilation operators at site n, y_n is the displacement of the local Einstein oscillator at the same site and α, β are coupling parameters. We note the physical significance of the additional interaction contributions that are proportional to β: the two terms proportional to y_{n+1} a_{n+1}† a_n + y_n a_n† a_{n+1} correspond to the modulation of the transfer rate as a result of the distortion in the "destination site", whereas the other two terms depend on the distortion in the originating site. We proceed now and perform the same adiabatic elimination of the vibrational degrees of freedom as in Section 1.3. After some straightforward manipulations, keeping only terms proportional to α² and αβ but not of order β², we obtain the following stationary equation for the energy E:

E a_m = (a_{m+1} + a_{m−1}) − γ|a_m|² a_m + μ(a_{m+1} + a_{m−1})|a_m|² + O(αβ) + O(β²) + … ,   (29)

where we set J = −1, γ = α²/(Mω_E²), μ = 2αβ/(Mω_E²), and we designated with O(αβ) the terms O(αβ) = μ[(a*_{m+1} + a*_{m−1})a_m² + (|a_{m+1}|² + |a_{m−1}|²)a_m]. These last terms are the modification to the transfer rate due to the local deformation in the originating site.
Clearly, if these terms are absent, the resulting equation becomes the stationary equivalent of the AL-DNLS equation. Consequently, the physical significance of the AL-type terms becomes transparent: they correspond to the modification in the transfer rates when the local distortion is taken only partially into account and depends only on the modification at the landing site. Furthermore, since dropping the terms O(αβ) is equivalent to dropping two of the four interaction terms, the resulting nonhermiticity of the problem is manifested in the existence of a norm which is different from the usual probability norm of DNLS. It is easy to check that if all four terms are kept, the resulting nonlinear equation has a probability norm.

1.7. General properties of nonlinear maps

The study of the stationary properties of DNLS-like equations will be done through the use of two dimensional nonlinear maps. By now there exists a vast literature on maps (see e.g. [27–31]). We restrict ourselves to a brief overview and summarize the basic features of such maps necessary for an analysis of the physical properties of the underlying nonlinear lattices once they have been cast into discrete maps. Nonlinear maps are used in a variety of problems ranging from stability studies of colliding particle beams in storage rings [32–35] to Anderson localization [36–39] and commensurate–incommensurate structure studies in solid state physics [40–42]. A map T of the plane generates a sequence of points x_n = (x_n, y_n) ∈ ℝ², n = 0, 1, 2, …, by assigning to each point x_n of the plane a new point via x_{n+1} = T(x_n). The entire sequence x_n with n = 0, 1, 2, … is called an orbit. The map T is said to be area-preserving if for any given measurable subset V of the plane, T^{−1}(V) has the same area as V. This condition is equivalent to the determinant of the Jacobian being one, i.e.

Det(DT) = 1   (30)
for all x, where D is the differential operator (∂/∂x, ∂/∂y). We begin the discussion of the motion on the map plane with an important class of orbits, namely periodic orbits. An orbit is periodic with period q if

x_q = x_0 + p ,   y_q = y_0   (31)
for some integer p. We denote such an orbit by (p, q). For area-preserving maps we distinguish three types of periodic orbits, depending on the stability properties of points in their neighborhood, viz. elliptic, hyperbolic (regular saddle), and reflection hyperbolic (flip saddle). The stability is determined by the eigenvalues of the tangent map. Mapping a linearized periodic orbit through its whole period in tangent space is achieved by the product of the local Jacobians taken at each periodic point:

(DT)^q(x_0) = Π_{n=0}^{q−1} DT(x_n) .   (32)

Since the determinant of each local Jacobian DT(x_n) is one, the determinant of any product of them is also one. This implies that the eigenvalues λ of (DT)^q either are conjugate points on the unit circle or appear as real reciprocal numbers λ, 1/λ. The eigenvalues are connected with the trace of the
tangent map (DT)^q via

λ = (1/2){ Tr(DT)^q ± √([Tr(DT)^q]² − 4) } .   (33)
The stability condition thus becomes |Tr(DT)^q| < 2. But a stability classification is most conveniently given in terms of Greene's residue [43]:

R = (1/4)[2 − Tr(DT)^q] .   (34)
We summarize the results as follows [31]:

Stability               λ              Tr(DT)^q    R
Elliptic                exp(2πiσ)      (−2, 2)     (0, 1)
Hyperbolic              λ > 0          > 2         < 0
Reflection hyperbolic   λ < 0          < −2        > 1
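This classification is mechanical to apply numerically. The sketch below uses a McMillan-type map as an illustrative example (the map and its parameter values are our choice, not taken from the text), builds the tangent map at its fixed point at the origin, verifies area preservation, Eq. (30), and classifies the point through the residue of Eq. (34):

```python
import numpy as np

def classify(trace):
    """Greene-residue classification, Eq. (34); boundary cases R = 0, 1
    (Arnold resonances) are not treated specially in this sketch."""
    R = 0.25 * (2.0 - trace)
    if 0.0 < R < 1.0:
        return "elliptic"
    if R < 0.0:
        return "hyperbolic"
    return "reflection hyperbolic"   # R > 1

def tangent_trace_at_origin(m):
    """Tangent map at the fixed point (0, 0) of the illustrative map
    (x, y) -> (y, -x + 2 m y / (1 + y^2)), for which DT = [[0, 1], [-1, 2 m]]."""
    DT = np.array([[0.0, 1.0], [-1.0, 2.0 * m]])
    assert abs(np.linalg.det(DT) - 1.0) < 1e-12   # area preservation, Eq. (30)
    return np.trace(DT)

# Trace 2m = 1, 4, -4 gives residue R = 1/4, -1/2, 3/2 respectively.
kinds = {m: classify(tangent_trace_at_origin(m)) for m in (0.5, 2.0, -2.0)}
```

Varying the parameter m sweeps the fixed point through all three stability types, which is exactly the scenario behind the tangent and period-doubling bifurcations described below.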
Nearby points of each elliptic periodic point rotate about the point in ellipses by an angular increment arccos[1 − 2R] on average per iteration of the map (DT)^q. Equivalently, the rotation frequency σ is linked with the value of the residue via R = sin²(πσ). For irrational σ the orbit never returns to its initial point and such an orbit is called quasiperiodic. The orbit points come to lie on invariant closed curves. According to the Kolmogorov–Arnold–Moser (KAM) theorem the orbit is stable provided R ≠ 0, 3/4, 1/2. For the cases R = 0, 3/4, 1/2, corresponding to the so-called Arnold resonances, the linearization does not suffice for a stability analysis [44]. For R < 0, where the points of the periodic orbit are hyperbolic, nearby points diverge from them with an exponential separation rate |1 − 2R + 2√(R² − R)|. Upon varying a parameter of the map the location of the periodic orbit as well as its residue will change. If the residue changes from a positive value to the negative range, then a tangent bifurcation takes place where an elliptic point hands its stability over to a hyperbolic point. Whenever the residue passes the value of one from below (i.e. σ = 1/2), a stable elliptic orbit converts into an unstable hyperbolic point with reflection, accompanied by the creation of two new stable elliptic points via a period-doubling bifurcation. The latter remain stable until the corresponding value of σ reaches one half and another period-doubling bifurcation occurs, destroying also the nearby quasiperiodic orbits. After a cascade of such period-doubling bifurcations local chaos appears. The character of motion on the map depends on the initial conditions. In general we distinguish regular (integrable) and irregular (chaotic) regimes for a map. A map is said to be integrable in the Liouville sense if there exists a sufficient number of integrals. An integral is a function I(x) on the map plane which is invariant under the map, that is I(T(x)) = I(x).
More precisely, if a 2N-dimensional map possesses N independent integrals I_j, j = 1, 2, …, N, which are in involution, i.e. all the mutual Poisson brackets vanish, {I_j, I_k} = 0, then the motion is integrable and lies on a family of N-dimensional nested tori, appearing in the case N = 1 (a two-dimensional map) as closed invariant curves.
We discuss now the transition from regular behavior to the occurrence of global chaos in a two-dimensional area-preserving map. Applying a small nonintegrable perturbation to an integrable map, the KAM theorem assures the survival of most of the invariant tori for sufficiently weak perturbations. However, some of the invariant tori, namely those with rotational frequency σ close enough to a rational value, will break up into resonance chains (Poincaré–Birkhoff chains). These resonance chains consist of periodic points of alternating stability type. The elliptic points are again surrounded by stable quasiperiodic cycles. The stable and unstable manifolds belonging to the unstable hyperbolic points embrace the stability zones around the elliptic points, and island-like structures, called resonances, are formed. In the vicinity of the hyperbolic points local stochasticity (chaos) occurs due to the tangling of the invariant stable and unstable manifolds. With further increase in the perturbation strength the width of the resonance islands grows and the resonant cycles break up, giving birth to new resonance chains of higher order, while the other quasiperiodic cycles remain stable due to the KAM theorem. We find local regions of regular and irregular motion coexisting on the map plane, exhibiting a complicated hierarchical structure of islands within islands. At the same time, with increased island width, KAM cycles lying in between two neighboring resonances can be destroyed as a result of resonance overlap. Upon further increase of the nonintegrability parameter a very complicated network of orbits develops where more and more regions are covered by chaotic orbits. Finally, above a critical perturbation strength even the most resistant final KAM cycle breaks up and the phase plane is densely covered with global chaos, except for a few tiny islands of stability. There exists a useful property for a special class of area-preserving maps.
If the map T factors according to T = T₁T₂, with T₁ and T₂ being orientation-reversing involutions, i.e. they satisfy

T₁² = T₂² = 1 ,   det(DT₁) = det(DT₂) = −1 ,   (35)

then T^{−1} = T₂T₁. That the map T can be written as the product of two orientation-reversing involutions establishes its reversibility. The invariant sets of the two involutions form the symmetry lines of the map. For reversible area-preserving maps there exists a particular symmetry line on which at least one point of every positive-residue Poincaré–Birkhoff orbit (elliptic or reflection hyperbolic) lies. This line is called the dominant symmetry line. For reversible area-preserving maps a one-dimensional search for the periodic orbits therefore suffices. Furthermore, one can prove that the homoclinic orbits belonging to the (transversal) intersections of the stable and unstable manifolds of a hyperbolic point fall on symmetry lines in reversible area-preserving maps [45]. The next section is devoted to integrable maps since they are of importance both for the study of soliton equations and as the starting point for perturbation theory (see e.g. [46]).

1.8. Integrable mappings and soliton equations

We review the properties of nonlinear integrable maps describing the solution behavior of certain kinds of (stationary) solutions of nonlinear integrable lattices. Integrable partial difference equations or nonlinear integrable lattices may arise from space discretization of integrable nonlinear partial differential equations, giving differential-difference (DD) equations. The latter give rise to hierarchies of integrable PDEs [47] and are of importance especially from the point of view of integrability in classical nonlinear Hamiltonian systems in general. Many physical systems are close to integrable systems, so that their study is also of practical interest to follow, e.g., bifurcations
and the transition to chaos with a perturbational approach [48]. Furthermore, nonlinear integrable maps are also of interest for the construction of numerical integration schemes for nonlinear PDEs [49,50]. The oldest example of a nonlinear integrable map is certainly Jacobi's celebrated elliptic billiard [51]. Quispel et al. [52,53] reported an eighteen-parameter family of nonlinear integrable maps which is actually a generalization of the four-parameter family found by McMillan [54]. The authors established a linkage of these maps to soliton theory and statistical mechanics. They treated in detail various examples of physical interest, namely the stationary reductions of a DD isotropic Heisenberg spin chain, the discrete modified Korteweg–de Vries equation, and the integrable DD nonlinear Schrödinger equation, viz. the Ablowitz–Ladik equation. All of these stationary soliton equations are described by symmetric integrable maps. We summarize here their results for the Ablowitz–Ladik equation (V = 1):

i dψ_n/dt = ψ_{n+1} + ψ_{n−1} + μ|ψ_n|²[ψ_{n+1} + ψ_{n−1}] .   (36)
The stationary solutions follow from the ansatz ψ_n = φ_n exp(−iωt) with real φ_n, yielding the two-dimensional map

ωφ_n − (1 + μφ_n²)[φ_{n+1} + φ_{n−1}] = 0 .   (37)

This map was studied by Ross and Thompson [55]. The map possesses the following integral of motion:

ωφ_n φ_{n+1} − (φ_n² + φ_{n+1}²) − μφ_n² φ_{n+1}² = K ,   (38)

where K is the integration constant. In particular, for K = 0 we obtain the separatrix solution corresponding to the stationary AL-soliton
" sinh b sinh[b(n!x )] , (39) L where b is a parameter and x specifies the soliton center. Since this map can be written as the product of two involutions it is a reversible map. It turns out to be a special case of the eighteen-parameter family of integrable reversible planar maps given by f ( )!
f ( ) L\ L ,
" L L> f ( )!
f ( ) L L\ L where f (U)"(M U);(M U) , with U"( , , 1)2 and
a L M" d L L i L
b L e L j L
c L f , n"0, 1 . L k L
(40)
(41)
(42)
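The stationary map (37), its invariant (38) and the soliton (39) can be cross-checked numerically. The sketch below assumes the standard AL relation ω = 2 cosh β between the map frequency and the soliton parameter (this relation is not stated explicitly above) and iterates the map from soliton initial data; since the separatrix is a hyperbolic orbit, roundoff grows exponentially, so the iteration is kept to a short window around the soliton center:

```python
import numpy as np

mu, beta = 1.0, 0.8
omega = 2.0 * np.cosh(beta)   # assumed frequency for which (39) solves (37)

def soliton(n, x0=0.0):
    """Stationary AL soliton, Eq. (39)."""
    return np.sinh(beta) / (np.sqrt(mu) * np.cosh(beta * (n - x0)))

def K_invariant(a, b):
    """Integral of motion (38) evaluated on a pair (phi_n, phi_{n+1})."""
    return omega * a * b - a**2 - b**2 - mu * a**2 * b**2

# Iterate phi_{n+1} = omega*phi_n/(1 + mu*phi_n^2) - phi_{n-1}  (from Eq. (37)),
# starting from two exact soliton values, tracking K and the orbit error.
phi_prev, phi = soliton(-8.0), soliton(-7.0)
K_vals, orbit_err = [], 0.0
for n in range(-7, 8):
    K_vals.append(K_invariant(phi_prev, phi))
    phi_prev, phi = phi, omega * phi / (1.0 + mu * phi**2) - phi_prev
    orbit_err = max(orbit_err, abs(phi - soliton(n + 1.0)))

K_spread = max(K_vals) - min(K_vals)   # K stays at its separatrix value K = 0
```

The invariant remains at K = 0 along the whole orbit and the iterates track the analytic soliton profile, consistent with (39) lying on the separatrix of the map.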
Each member of this family possesses a one-parameter family of invariant curves fulfilling the relation

(a₀ + Ka₁)φ_n²φ_{n+1}² + (b₀ + Kb₁)(φ_n²φ_{n+1} + φ_nφ_{n+1}²) + (c₀ + Kc₁)(φ_n² + φ_{n+1}²) + (e₀ + Ke₁)φ_nφ_{n+1} + (f₀ + Kf₁)(φ_n + φ_{n+1}) + (k₀ + Kk₁) = 0 .   (43)

A parameterization of this equation in terms of Jacobian elliptic functions is possible and, depending on the integration constant, periodic, quasiperiodic as well as solitonic solution behavior can be distinguished. There exist many more examples of differential-difference equations (DDE) whose stationary solutions are determined by a map of the type of (40)–(43). This led Quispel et al. to the following conjecture: "Consider a differential-difference equation. Then every autonomous difference equation obtained by an exact reduction of the DDE is an integrable mapping". However, one should be aware that the reverse of this conjecture does not necessarily hold [52]. On the other hand, for many physically relevant problems the solutions cannot be given at all in closed analytical form, and rather irregular (chaotic) dynamics is encountered, pointing to nonintegrability, as is the case also for DNLS.

2. Spatial properties of integrable and nonintegrable discrete nonlinear Schrödinger equations

2.1. Integrable and nonintegrable discrete nonlinear Schrödinger equations

The nonlinear Schrödinger equation (NLS) is one of the prototypical nonlinear partial differential equations, the study of which has led to fundamental advances in nonlinear dynamics. The study of NLS was motivated by a large number of physical and mathematical problems ranging from optical pulse propagation in nonlinear fibers to hydrodynamics, condensed matter physics and biophysics. We now know that NLS provides one of the few examples of completely integrable nonlinear partial differential equations [56]. Since most work in nonlinear wave propagation involves at some stage a numerical study of the problem, the issue of the discretization of NLS was addressed early in Ref.
[56]. Ablowitz and Ladik noticed that among the large number of possible discretizations of NLS there is one that is also integrable [16]. The study of this integrable version of the discrete nonlinear Schrödinger equation, hereafter called the Ablowitz—Ladik, or AL, equation, showed that it has solutions which are essentially discrete versions of the NLS solitons [16]. Another discrete version of the NLS equation was studied in detail later [1]; the latter, usually referred to as the discrete nonlinear Schrödinger (DNLS) equation or discrete self-trapping (DST) equation, has quite a number of interesting properties, but it is not integrable [50]. We note that the motivations for studying the two discrete versions of NLS, viz. the AL equation and the DNLS equation, are quite different: the AL equation, on the one hand, has very interesting mathematical properties but no very clear physical significance; the introduction of the DNLS equation, on the other hand, is primarily motivated physically. In particular, the latter seems to arise naturally in the context of energy localization in discrete condensed matter and biological systems as well as in optical devices [1—7,57—67]. Even though in these problems one typically assumes that the length scale of the nonlinear wave is much larger than the lattice spacing, so that NLS provides a good description, the study of DNLS (and AL)
equation is important when the size of the physical system is small or the nonlinear wave is strongly localized. The motivation for the present chapter is an equation introduced by Salerno [17] and studied recently by Cai et al. [18] that interpolates between the DNLS and AL equations while containing these two as its limits [17]. By varying the two nonlinearity parameters of this new equation one is able to monitor how “close” it is to the integrable or the nonintegrable version of the NLS. The new equation finds a physical application in the context of the nonlinear coupler problem [68] and in nonlinear electrical transmission lines [23]. However, its basic merit is that it allows us to study the interplay of the integrable and nonintegrable NLS-type terms in discrete lattices. In addition, one can address the issue of “nonlinear eigenstates” of the new equation and their connection to the integrability/nonintegrability issue. Before presenting the basic properties of the stationary (generalized) DNLS we give a review of other papers on it. Salerno [17] studied the quantum deformation of AL-DNLS and derived for the two-particle chain (a dimer) explicit formulas for the first excited levels of the quantized version, showing that they can be continuously deformed into the corresponding levels of the two extreme limits c = 0 and k = 0, respectively. Kivshar and Peyrard obtained the DNLS in their study of modulational instability in the discrete nonlinear Klein—Gordon lattice [69]. The DNLS arises there as the envelope equation, in a rotating wave approximation, for slowly modulated carrier waves of the Klein—Gordon field amplitudes. Furthermore, for comparison they also studied the AL equation. Claude et al. investigated the creation and stability of localized modes in Fermi—Pasta—Ulam chains and nonlinear discrete Klein—Gordon lattices [70]. Their ansatz function for localized modes resulted in the DNLS.
For a perturbational approach they wrote DNLS as a perturbed AL system, yielding the AL-DNLS equation. Cai et al. studied the AL-DNLS equation, focusing on the interplay of integrability and nonintegrability [18]. They pointed out that the localized states of the AL system are the AL solitons. Furthermore, they showed that for AL-DNLS there exist two types of localized modes. One type is a state with in-phase oscillations of neighboring particles, having oscillation frequencies lying below the linear phonon band; it is called an unstaggered state. The other exhibits out-of-phase oscillations of neighboring particles, has oscillation frequencies above the linear (phonon) band, and is called a staggered state. For further discussion of localized (stationary) states in DNLS we refer to Section 3. Kivshar and Salerno studied, analytically and numerically, modulational instability for the AL-DNLS, with emphasis on how different discretizations of the nonlinear interaction term change the modulational instability in the lattice [71]. Hays et al. studied a generalized lattice with the equation i Ȧ_j + F₁(|A_j|²)(A_{j+1} + A_{j−1}) + F₂(|A_j|²) A_j = 0, which contains DNLS, AL and several other models as special cases. They developed a fully nonlinear modulation theory for harmonic plane wave solutions of the form A_n(t) = a exp[i(kn − ωt)] [72]. Konotop et al. [73] and Cai et al. [74] proved integrability of the dynamics of the AL system in a time-varying, spatially uniform electric field along the chain direction, of the form V_n = E(t) n. In the limit of a static electric field, the system exhibits a periodic evolution which is a nonlinear counterpart of Bloch oscillations. Further, it was shown that for certain strengths of a harmonic field dynamical localization can be induced, which can be interpreted as a parametric resonance effect. (We refer also to Section 4.7 for a treatment of similar effects in Kronig—Penney models.) Cai et al.
extended the (integrable) AL system study by incorporating the integrability-breaking DNLS term as well and showed that Bloch oscillations and dynamical localization are maintained in the AL-DNLS system. Hence they
are effects of the lattice and do not depend on integrability. The special case of an AL system with a potential depending linearly on the spatial coordinate, i.e. V_n = An, was treated by Scharf and Bishop by means of the inverse scattering transform. These authors obtained a U—V pair proving integrability of the model [75]. Hennig et al. investigated the formation of breather-like impurity modes in a “disordered” version of the AL-DNLS containing a single impurity [76]. An interesting study of soliton interactions and beam steering in nonlinear waveguide arrays modeled by DNLS was performed by Aceves and co-workers in Refs. [77—79]. Special attention is paid to the existence and control of the propagation of stable localized wave packets in waveguide arrays. It is shown that localized modes with energy stored in only a few lattice sites are the preferred stable steady patterns of the system. Among several analytical methods (discrete variational approaches), soliton perturbation theory based on a perturbed AL system was also used. The issue of perturbation theory for discrete nonlinear Schrödinger equations was also addressed in [80].

2.2. The generalized nonlinear discrete Schrödinger equation

The main purpose of this section is to study the stationary properties of the following generalized discrete nonlinear Schrödinger equation (GDNLS):

i dψ_n(t)/dt = (V + k|ψ_n(t)|²)[ψ_{n+1}(t) + ψ_{n−1}(t)] − c|ψ_n(t)|² ψ_n(t) ,  (44)

where ψ_n is a complex amplitude, k and c are nonlinearity parameters, and V is the transfer matrix element coupling the oscillator at site n to its neighbors at sites n ± 1. We note that Eq. (44) interpolates between two possible discretizations of NLS, viz. the DNLS and the AL equation, obtained by setting k = 0 (with c ≠ 0) and c = 0 (with k ≠ 0), respectively [17]. To reduce the number of parameters we use the time scaling Vt → t and introduce the ratios c̃ = c/V and k̃ = k/V. For ease of notation we drop the tildes in what follows.
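The scaled form of Eq. (44) is straightforward to integrate numerically. The following sketch is our addition, not part of the original text; the lattice size, time step and initial pulse are arbitrary choices. It propagates a small ring with a classical RK4 stepper and monitors the conserved quantities of the two integrable limits: for k = 0 (DNLS) the norm Σ_n |ψ_n|² is conserved, while for c = 0 (AL) the deformed norm Σ_n ln(1 + k|ψ_n|²)/k is the conserved quantity.

```python
import cmath, math

def gdnls_rhs(psi, c, k):
    """RHS of i dpsi_n/dt = (1 + k|psi_n|^2)(psi_{n+1} + psi_{n-1}) - c|psi_n|^2 psi_n
    on a periodic ring (scaled Eq. (44) with V = 1)."""
    N = len(psi)
    out = []
    for n in range(N):
        s = abs(psi[n]) ** 2
        hop = psi[(n + 1) % N] + psi[(n - 1) % N]
        out.append(-1j * ((1.0 + k * s) * hop - c * s * psi[n]))
    return out

def rk4_step(psi, dt, c, k):
    """One classical Runge-Kutta step."""
    k1 = gdnls_rhs(psi, c, k)
    k2 = gdnls_rhs([p + 0.5 * dt * q for p, q in zip(psi, k1)], c, k)
    k3 = gdnls_rhs([p + 0.5 * dt * q for p, q in zip(psi, k2)], c, k)
    k4 = gdnls_rhs([p + dt * q for p, q in zip(psi, k3)], c, k)
    return [p + dt / 6.0 * (a + 2 * b + 2 * c_ + d)
            for p, a, b, c_, d in zip(psi, k1, k2, k3, k4)]

def evolve(psi0, dt, steps, c, k):
    psi = list(psi0)
    for _ in range(steps):
        psi = rk4_step(psi, dt, c, k)
    return psi

# Hypothetical initial condition: a localized pulse on an 8-site ring.
psi0 = [cmath.exp(-((n - 4) ** 2) / 2.0) for n in range(8)]

# DNLS limit (k = 0): the norm sum |psi_n|^2 is conserved.
psiT = evolve(psi0, 0.005, 400, c=1.0, k=0.0)
n0 = sum(abs(p) ** 2 for p in psi0)
nT = sum(abs(p) ** 2 for p in psiT)

# AL limit (c = 0): the deformed norm sum ln(1 + k|psi_n|^2)/k is conserved.
kk = 0.5
phiT = evolve(psi0, 0.005, 400, c=0.0, k=kk)
m0 = sum(math.log(1 + kk * abs(p) ** 2) / kk for p in psi0)
mT = sum(math.log(1 + kk * abs(p) ** 2) / kk for p in phiT)
```

Both drifts should be at the level of the RK4 truncation error, far below the conserved quantities themselves.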
We remark that a linear contribution εψ_n(t) appearing on the right-hand side (r.h.s.) of Eq. (44) can be removed by the global phase transformation ψ_n(t) → exp(−iεt) ψ_n(t).

The stationary equation is obtained by substituting ψ_n(t) = φ_n exp(−iEt) in Eq. (44), giving

E φ_n − (1 + k|φ_n|²)[φ_{n+1} + φ_{n−1}] + c|φ_n|² φ_n = 0 ,  (45)

with complex amplitudes φ_n; E is the frequency of the stationary ansatz. The stationary equation (45) was analyzed in the aforementioned extreme limits through map approaches in Refs. [52,81,82]. The stationary real-valued AL system satisfies an integrable mapping which is contained in the 18-parameter family of integrable mappings of the plane reported by Quispel et al. in [53]. In this section we will present an analysis of Eq. (45) and discuss the interplay of the integrable and nonintegrable nonlinear terms in the context of the complete equation [83]. Eq. (45) may be rewritten as

φ_{n+1} + φ_{n−1} = [(E + c|φ_n|²)/(1 + k|φ_n|²)] φ_n ,  (46)

which obviously reduces to a degenerate linear map if c = Ek. Although a reduction of the complex-valued amplitude dynamics to a two-dimensional real-valued map is possible (see Section 2.4), we concentrate in this section on the study of the recurrence relation φ_{n+1} = φ_{n+1}(φ_n, φ_{n−1}), appropriate for the investigation of the stability of the nonlinear lattice chain. Eq. (46) can also be derived as the
relation which makes the action functional

F = Σ_n { (1/k)(E − c/k) ln(1 + k|φ_n|²) + (c/k)|φ_n|² − (φ*_{n+1} φ_n + φ_{n+1} φ*_n) }  (47)

an extremum. In the limit k → 0, the latter is replaced by

F = Σ_n { E|φ_n|² + (c/2)|φ_n|⁴ − (φ*_{n+1} φ_n + φ_{n+1} φ*_n) } .  (48)

The extremal sets {φ_n} define the orbits and, together with appropriate boundary conditions, determine the solutions of a particular physical problem. However, concerning the stability properties one has to distinguish between the (dynamical) stability of the physical solutions and the (linear mapping) stability of the corresponding map orbit generated by the recurrence relation
φ_{n+1} = φ_{n+1}(φ_n, φ_{n−1}) [41,42]. In general, a dynamically stable solution minimizing the action corresponds to a linearly unstable map orbit, whereas physically unstable solutions corresponding to maximum-energy configurations are reflected in the map dynamics as linearly stable orbits. In the present study we focus on the transmission properties of the “nonlinear lattice” of Eq. (45); finding the linearly stable map solutions (propagating wave solutions) is therefore essential.

2.3. Stability and regular solutions

The second-order difference equation (46) can be regarded as a symplectic nonlinear transformation relating the amplitudes of adjacent lattice sites. This transformation can be considered as a dynamical system in which the lattice index n plays the role of the discrete time. The resulting dynamics of the two-component (amplitude) vector (φ_{n+1}, φ_n)ᵀ is determined by the following Poincaré map:
(φ_{n+1}, φ_n)ᵀ = [[E_n, −1], [1, 0]] (φ_n, φ_{n−1})ᵀ ,  (49)

where the nonlinear transfer matrix depends on the amplitude through

E_n = (E + c|φ_n|²)/(1 + k|φ_n|²) .  (50)

The stability of the orbits {φ_n} (n = 0, …, N), or equivalently the transmission properties of the nonlinear lattice chain of length N, is governed by the solution behavior of the corresponding linearized equations in the neighborhood of an orbit ranging from (φ_0, φ_{−1}) to (φ_N, φ_{N−1}). For a semi-infinite (or infinite) one-dimensional lattice chain the (finite) sequence {φ_n} (n = 0, …, N) defines an orbit segment. To investigate the linear stability of a given orbit we introduce a small (complex-valued) perturbation u_n and consider the perturbed orbit φ_n → φ_n + u_n. Linearizing the map equations results in the second-order difference equation for the perturbations u_n:

u_{n+1} + u_{n−1} = [1/(1 + k|φ_n|²)²] {(E + 2c|φ_n|² + kc|φ_n|⁴) u_n + (c − Ek) φ_n² u*_n} .  (51)
Writing further φ_n = A_n + iB_n and u_n = x_n + iy_n, with real A_n, B_n, x_n, y_n, we obtain the four-dimensional map in tangent space

(x_{n+1}, x_n, y_{n+1}, y_n)ᵀ = [[E^x_n, −1, b_n, 0], [1, 0, 0, 0], [b_n, 0, E^y_n, −1], [0, 0, 1, 0]] (x_n, x_{n−1}, y_n, y_{n−1})ᵀ ≡ J̄_n (x_n, x_{n−1}, y_n, y_{n−1})ᵀ ,  (52)

with

E^x_n = [E + Ek(B²_n − A²_n) + c(3A²_n + B²_n) + kc(A²_n + B²_n)²]/[1 + k(A²_n + B²_n)]² ,  (53)

E^y_n = [E + Ek(A²_n − B²_n) + c(A²_n + 3B²_n) + kc(A²_n + B²_n)²]/[1 + k(A²_n + B²_n)]² ,  (54)

b_n = 2(c − Ek) A_n B_n/[1 + k(A²_n + B²_n)]² .  (55)

First we deal with a local stability criterion. The eigenvalues ᾱ of the Jacobian matrix J̄_n follow from the characteristic polynomial

ᾱ⁴ − (2/[1 + k|φ_n|²]²)[E + 2c|φ_n|² + kc|φ_n|⁴](1 + ᾱ²)ᾱ + {2 + (1/[1 + k|φ_n|²]⁴)[E² + 4cE|φ_n|² + (3c² + 4ckE − k²E²)|φ_n|⁴ + 4c²k|φ_n|⁶ + c²k²|φ_n|⁸]} ᾱ² + 1 = 0 .  (56)
Being interested in the derivation of a sufficient condition for linear stability, we note that if the inequality

Ek ≥ c  (57)

holds, and if additionally |E| < 2, then all four eigenvalues ᾱ lie on the unit circle. On the other hand, for the (eigenvalue) equation

α⁴ − (2/[1 + k|φ_n|²]²)[E + 2c|φ_n|² + kc|φ_n|⁴](1 + α²)α + {2 + (1/[1 + k|φ_n|²]⁴)[E + 2c|φ_n|² + kc|φ_n|⁴]²} α² + 1
= {α² − (1/[1 + k|φ_n|²]²)[E + 2c|φ_n|² + kc|φ_n|⁴] α + 1}² = 0 ,  (58)

it can readily be shown that whenever the inequality (57) holds, Eq. (58) has only complex solutions of modulus one. Moreover, for fixed parameters E, c and k we find that |Im(ᾱ)| ≥ |Im(α)|.
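This unit-circle property is easy to corroborate numerically. The sketch below is our addition; the parameter values are arbitrary choices satisfying Ek ≥ c and |E| < 2. It scans a range of amplitudes, checks that the local trace of Eq. (62) stays inside the band, and verifies that the roots of the quadratic factor in Eq. (58) have modulus one.

```python
import cmath

def E_tilde(s, E, c, k):
    """Local trace (E + 2c s + k c s^2)/(1 + k s)^2 with s = |phi_n|^2, cf. Eq. (62)."""
    return (E + 2 * c * s + k * c * s ** 2) / (1 + k * s) ** 2

def quad_roots(t):
    """Roots of alpha^2 - t*alpha + 1 = 0."""
    d = cmath.sqrt(t * t - 4)
    return (t + d) / 2, (t - d) / 2

# Parameters chosen so that E*k >= c and |E| < 2 (sufficient condition (57)).
E, c, k = 1.0, 0.05, 0.2   # E*k = 0.2 >= c = 0.05
worst = 0.0
for i in range(2000):
    s = 0.01 * i            # amplitudes |phi_n|^2 from 0 to ~20
    t = E_tilde(s, E, c, k)
    assert abs(t) < 2.0     # the local trace stays inside the passing band
    for r in quad_roots(t):
        worst = max(worst, abs(abs(r) - 1.0))
# worst ~ machine precision: all multipliers lie on the unit circle
```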
We conclude that as long as the system (58) possesses exclusively complex roots, so does the original Eq. (56). The characteristic polynomial (58) can be related to the (original) eigenvalue problem of Eqs. (51)—(56) if we introduce in (51) polar coordinates φ_n = r_n exp(iθ_n) and neglect rapidly oscillating terms of order ∼ r_n exp(2iθ_n). The corresponding map in tangent space then reads

(x_{n+1}, x_n, y_{n+1}, y_n)ᵀ = [[J(M_n), 0], [0, J(M_n)]] (x_n, x_{n−1}, y_n, y_{n−1})ᵀ .  (59)

Hence, the original four-dimensional problem splits into two identical two-dimensional systems. Therefore it suffices to investigate the two-dimensional local variational equation assigned to Eq. (58), which is given by

(δφ_{n+1}, δφ_n)ᵀ = J(M_n)(δφ_n, δφ_{n−1})ᵀ ,  (60)

where J(M_n) = ∂(φ_{n+1}, φ_n)/∂(φ_n, φ_{n−1}) is the real matrix

J(M_n) = [[Ẽ_n(|φ_n|²), −1], [1, 0]] ,  (61)

with

Ẽ_n(|φ_n|²) = [E + 2c|φ_n|² + kc|φ_n|⁴]/[1 + k|φ_n|²]² .  (62)

Mapping the variations from (δφ_0, δφ_{−1}) to (δφ_N, δφ_{N−1}) is accomplished by the product of the real 2×2 symplectic Jacobian transfer matrices,

J(M) = ∏_{n=0}^{N−1} J(M_n) .  (63)

Before proceeding with the stability analysis of the nonlinear discrete Schrödinger equation (45), we note that in the corresponding linear tight-binding model, given by the equation φ_{n+1} + φ_{n−1} = Eφ_n, the (stable) solutions in the passing band |E| < 2 are parameterized by a wave vector κ ∈ [−π, π] corresponding to the linear dispersion relation E = 2 cos(κ). Upon increasing the nonlinearity parameters c and k from zero, the nonlinear dispersion relation for |φ_n| = const reads E = 2 cos(κ) + [2k cos(κ) − c]|φ|², and the stability of the orbits can change; rational values of the winding number κ/(2π) = p/q, with integers p, q, yield periodic orbits, whereas irrational values result in quasiperiodic orbits. First of all, we study the linear stability of periodic orbits φ_{n+q} = φ_n with cycle length q. The linear stability of a periodic orbit is governed by its multipliers, i.e. the eigenvalues of the corresponding linearized map. In examining the linear stability of the periodic orbits we make use of the fact that solving the linearized equations becomes equivalent to a band problem for a linear discrete Schrödinger (tight-binding) equation with periodic potential (see e.g. [84—86]), where we can invoke the (linear) transfer matrix method [87].
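For the constant-coefficient linear chain, the transfer-matrix bookkeeping used below can be checked in a few lines. This sketch is our addition (the wavenumber and cycle length are arbitrary): multiplying q copies of the constant transfer matrix reproduces the Bloch result Tr[Tᵠ] = 2 cos(qκ), which stays within ±2 inside the passing band.

```python
import math

def matmul(a, b):
    """2x2 matrix product."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

kappa = 0.7                       # wave number inside the passing band
E = 2.0 * math.cos(kappa)         # linear dispersion E = 2 cos(kappa)
T = [[E, -1.0], [1.0, 0.0]]       # transfer matrix of the linear tight-binding chain

q = 11
Tq = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(q):
    Tq = matmul(Tq, T)

trace = Tq[0][0] + Tq[1][1]
# Bloch result: Tr[T^q] = 2 cos(q*kappa), hence |Tr| <= 2 in the passing band
```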
In the following we derive a sufficient criterion for linear stability. We substitute δφ_n ≡ u_n, and the linearized equation corresponding to (46) can be written in matrix notation as

(u_{n+1}, u_n)ᵀ = M_n(Ẽ_n)(u_n, u_{n−1})ᵀ ,  (64)

where M_n(Ẽ_n) ≡ J(M_n) and Ẽ_n are given in Eqs. (61) and (62), respectively. The matrix product

M_q = ∏_{n=0}^{q−1} M_n(Ẽ_n)  (65)

transfers (u_0, u_{−1}) to (u_q, u_{q−1}) through a complete periodic cycle of length q. Since the periodic orbit members enter the individual transfer matrices M_n(Ẽ_n), Eq. (64) represents a linear equation with periodic potential Ẽ_n = Ẽ_n(|φ_n|²), and u_{n+q} = exp(iκq) u_n [88]. Thus M_q has eigenvalues exp(±iκq), and its trace is given by

Tr[M_q] = 2 cos(κq) ,  (66)

which leads to the condition |Tr[M_q]| ≤ 2 for the stable Bloch solutions, and to two equivalence classes for the total symplectic transfer matrix M_q corresponding to different stability properties. For the real matrix M_q these equivalence classes are determined by the solution of the eigenvalue problem

α² − (Tr[M_q])α + 1 = 0 ,  (67)

where the roots α determine the multipliers of the periodic orbit [28]. When |Tr[M_q]| < 2, M_q has a pair of complex conjugate eigenvalues α on the unit circle, leading to a stable elliptic periodic cycle or an oscillating Bloch-type solution (passing band state). When |Tr[M_q]| > 2, one obtains real reciprocal eigenvalues corresponding to an unstable hyperbolic periodic cycle, which has to be excluded as physically unacceptable, since it grows exponentially with increasing chain length (stop band or gap state). Computation of Tr[∏_{n=0}^{q−1} M_n(Ẽ_n)] for a general periodic orbit of arbitrary cycle length q requires tedious algebra. However, if c, k and E satisfy the inequality (57), then |Ẽ_n(|φ_n|²)| ≤ |E| and each individual transfer (Jacobian) matrix M_n has the important property

‖M_n(Ẽ_n)‖ ≤ ‖T_n(E_n)‖ ,  (68)

where

T_n(E_n) = [[E_n, −1], [1, 0]]  (69)

is the individual transfer matrix of a linear lattice chain at constant E_n = E. The norm of a matrix A is defined by ‖A‖ = max_{‖z‖=1} ‖Az‖, i.e. the natural norm induced by the vector norm ‖z‖ [89]. We note that the inequality (68) imposes no restriction on the amplitudes φ_n, since it is a global feature of the mapping in the parameter range satisfying (57). For the linear lattice chain the total transfer matrix satisfies |Tr[T_q(E)]| = |Tr[∏_{n=0}^{q−1} T_n(E_n)]| < 2 as long as |E_n| = |E| < 2, i.e. E lies in the range of the passing band. Furthermore, because all the local Jacobians are identical, it is easy to
show that the global trace is Tr[T_q(E)] = 2 cos(hq), where h = cos⁻¹(½ Tr[T_n]) = cos⁻¹(E/2). With the help of ‖Aⁿ‖ ≤ ‖A‖ⁿ and the modified inequality (68),

∏_{n=0}^{q−1} {‖T_n(E_n)‖ − ‖M_n(Ẽ_n)‖} ≥ 0 ,  (70)

we infer that

∏_{n=0}^{q−1} ‖T_n(E_n)‖ ≥ ∏_{n=0}^{q−1} ‖M_n(Ẽ_n)‖ .  (71)

Further, a natural matrix norm satisfies the inequality

max_i |α_i| ≤ ‖A‖ .  (72)

Using Eqs. (71) and (72), one sees that the spectral radius of the matrix ∏_{n=0}^{q−1} M_n(Ẽ_n) is bounded from above by ‖∏_{n=0}^{q−1} T_n(E_n)‖. Since the eigenvalues are related to the trace via Tr[A] = α + 1/α, it can readily be shown that, whenever Eq. (57) holds, |Tr[∏_{n=0}^{q−1} M_n(Ẽ_n)]| ≤ |Tr[∏_{n=0}^{q−1} T_n(E_n)]| = |2 cos(hq)| < 2. Hence, all periodic orbits of the nonlinear lattice chain are linearly stable. Moreover, since for symplectic mappings linear stability is both necessary and sufficient for nonlinear stability [28,44,90], the existence of KAM tori close to the periodic orbits is guaranteed for the combined AL-DNLS chain when Ek ≥ c and |E| < 2. With the help of this sufficient stability condition we show in Section 2.4 below that the reduced two-dimensional (real-valued) map then possesses a stable period-1 orbit which is surrounded by integrable quasiperiodic solutions. On the other hand, from the stability condition |Tr[M_q]| < 2 one can also deduce a necessary condition for the stability of a periodic orbit. The transfer matrix depends parametrically on Ẽ_n. Therefore, in order to be compatible with |Tr[M_q(Ẽ_n)]| < 2 we have to distinguish between allowed and forbidden Ẽ_n if c > Ek. Because Ẽ_n = Ẽ_n(|φ_n|²), the allowed Ẽ_n become amplitude dependent, imposing constraints on the latter. Since each member of a periodic orbit family exhibits the same stability type [28], it is sufficient to consider only one of the periodic points of each family, e.g. |Tr M_n(Ẽ_n)| < 2. A periodic orbit point is compatible with the allowed E-range of the passing band when the amplitude fulfills the necessary condition

|φ_n|² < (1/k)[ √(1 + (E − 2)/(2 − c/k)) − 1 ] ,  (73)

which reduces to |φ_n|² < (1 − E/2)/c in the limit k → 0. When the condition (73) is violated, a stop band (gap) state is encountered. Generally, whenever the inequality (57) holds, we are able to prove that all solutions of the combined AL-DNLS equation (46) are regular. The linear stability of general orbits is governed by a Lyapunov exponent (LE) representing the rate of growth of the amplitudes, defined as [28,41,91]

λ = lim_{N→∞} λ_N = lim_{N→∞} (1/2N) ln ‖ J̄_N J̄_{N−1} ⋯ J̄_1 ‖ .  (74)
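A finite-N estimate of the LE of Eq. (74) can be obtained by iterating the stationary map (46) together with the reduced tangent matrices of Eqs. (61)—(62) and renormalizing the tangent vector. The sketch below is our addition; the parameters and initial data are arbitrary choices in the regular regime Ek ≥ c, |E| < 2, where the estimate should drift toward zero.

```python
import math

def lyapunov(E, c, k, N=20000):
    """Estimate the Lyapunov exponent (74) along an orbit of the stationary map (46),
    using the reduced 2x2 tangent matrices J(M_n) = [[Etilde_n, -1], [1, 0]]."""
    phi, phi_prev = 0.1 + 0.05j, 0.08 + 0.0j   # hypothetical initial data
    v = (1.0, 0.0)                              # tangent vector
    acc = 0.0
    for _ in range(N):
        s = abs(phi) ** 2
        et = (E + 2*c*s + k*c*s**2) / (1 + k*s) ** 2       # Eq. (62)
        v = (et * v[0] - v[1], v[0])                        # tangent step
        g = math.hypot(v[0], v[1])
        acc += math.log(g)
        v = (v[0] / g, v[1] / g)                            # renormalize
        phi, phi_prev = ((E + c*s) / (1 + k*s)) * phi - phi_prev, phi   # map (46)
    return acc / (2.0 * N)

lam = lyapunov(E=0.5, c=0.02, k=0.1)   # E*k = 0.05 >= c = 0.02: regular regime
```

For these values the estimate is consistent with a vanishing exponent, in line with the regularity argument of this section.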
An orbit is linearly stable (unstable) with respect to the initial conditions if λ = 0 (λ ≠ 0). It has been proven that almost all initial conditions (except for a set of measure zero) lead to the largest LE [92,93], which in our case is the non-negative LE λ ≥ 0. In the parameter range Ek ≥ c and |E| < 2 we get, with the help of the norm properties ‖Aⁿ‖ ≤ ‖A‖ⁿ and ‖AB‖ ≤ ‖A‖‖B‖:
λ_N = (1/2N) ln‖J̄_N ⋯ J̄_1‖ ≤ (1/2N) ln(‖J̄_N‖ ⋯ ‖J̄_1‖) = (1/2N) Σ_{n=1}^{N} ln‖J̄_n‖ ≤ ½ ln‖Ĵ‖ ,  (75)

where J̄_n is the Jacobi matrix determined by Eq. (52) and ‖Ĵ‖ = max_n ‖J̄_n‖. Then we get

λ ≤ ½ ln‖Ĵ‖ = ½ ln‖A diag(ᾱ_i) A⁻¹‖ ≤ ½ [ln‖A‖ + ln‖diag(ᾱ_i)‖ + ln‖A⁻¹‖] = 0 ,  (76)

with A being the matrix whose columns are formed by the (normalized) eigenvectors of Ĵ and diag(ᾱ_i) the diagonal matrix whose elements are the eigenvalues of Ĵ, all of modulus one. The LE vanishes, and hence all solutions are linearly stable. In particular, sensitive dependence on the initial conditions is excluded, so that the combined AL-DNLS system possesses only stable regular orbits whenever the sufficient condition Ek ≥ c holds. In the parameter range given by the inequality (57), the AL-DNLS chain is transparent (see Section 2.6 below). We note that in the range outside that of the sufficient condition for regularity (Ek ≥ c), and for the special case of constant amplitude |φ_n| = |φ|, i.e. for a period-1 cycle, the condition |Tr[M_q]| < 2 can be satisfied if |Tr[J(M_n)]| < 2. Especially, for large amplitudes the trace of all local Jacobians Ẽ_n(|φ_n|²) = Ẽ(|φ|²) becomes a constant,

lim_{|φ|→∞} |Ẽ| = c/k ,  (77)

which implies that, if c > 0 and k > 0 satisfy the inequality

2k > c ,  (78)
the global trace is |Tr[J(M)]| = |2 cos(Nh)| < 2, where h = cos⁻¹[c/(2k)], and the large-amplitude motion is stable regardless of E.

2.4. Reduction of the dynamics to a two-dimensional map

We now study the dynamics of Eq. (45) utilizing a planar nonlinear dynamical map approach. The discrete nonlinear Schrödinger equation (45) gives a recurrence relation φ_{n+1} = φ_{n+1}(φ_n, φ_{n−1}) acting as a four-dimensional mapping C² → C². By exploiting the conservation of the probability current, the dynamics can be reduced to a two-dimensional mapping on the plane, R² → R² [81,82]. Following Wan and Soukoulis [82], we use polar coordinates for φ_n, i.e. φ_n = r_n exp(iθ_n), and rewrite Eq. (46) equivalently as

r_{n+1} cos(Δθ_{n+1}) + r_{n−1} cos(Δθ_n) = [(E + c r²_n)/(1 + k r²_n)] r_n ,  (79)

r_{n+1} sin(Δθ_{n+1}) − r_{n−1} sin(Δθ_n) = 0 ,  (80)
where Δθ_n = θ_n − θ_{n−1}. Eq. (80) is equivalent to the conservation of the probability current

J = r_n r_{n−1} sin(Δθ_n) .  (81)

We further introduce real-valued SU(2) variables defined by bilinear combinations of the wave amplitudes on each “dimeric” segment of the lattice chain:

x_n = φ_n φ*_{n−1} + φ*_n φ_{n−1} = 2 r_n r_{n−1} cos(Δθ_n) ,  (82)

y_n = i[φ*_n φ_{n−1} − φ_n φ*_{n−1}] = 2J ,  (83)

z_n = |φ_n|² − |φ_{n−1}|² = r²_n − r²_{n−1} .  (84)

The relations with the polar coordinates (r_n, θ_n) are also given. Note that the variable y_n is a conserved quantity, since it is proportional to the probability current, i.e. y_n ≡ 2J. The variable z_n is determined by the difference of the amplitudes at adjacent lattice sites, whereas information about the phase difference is contained in the variable x_n. We remark that our map variables differ from those used by Wan and Soukoulis [82] in their study of the stationary DNLS system. The system of equations (79) and (80) can be rewritten as a two-dimensional real map M that describes the complete dynamics:
M:  x_{n+1} = {[E + c(w_n + z_n)/2]/[1 + k(w_n + z_n)/2]}(w_n + z_n) − x_n ,
    z_{n+1} = (x²_{n+1} − x²_n)/[2(w_n + z_n)] − z_n ,  (85)

with w_n = √(x²_n + z²_n + 4J²). The map M is reversible, as proven by the identity M₀MM₀M = Id, where the involution M₀ is

M₀:  x̃_n = x_n ,  z̃_n = −z_n .  (86)

We can cast the map M into the product of two involutions, M = AB, with

A = MM₀ ,  B = M₀ ,  A² = Id ,  B² = Id .  (87)

The inverse map is then given by M⁻¹ = BA. This reversibility property of the map M can be exploited in studying the transmission properties of the discrete nonlinear chain (see Section 2.6). To analyze the dynamical properties of the nonlinear map M it is convenient to introduce the scaling transformations x_n/(2J) → x̄_n, z_n/(2J) → z̄_n, Jc → c̄ and Jk → k̄. Finally, for the sake of notational simplicity, we drop the overbars and obtain the scaled map

x_{n+1} = [(E + c(w_n + z_n))/(1 + k(w_n + z_n))](w_n + z_n) − x_n ,  (88)

z_{n+1} = (1/2)(x²_{n+1} − x²_n)/(w_n + z_n) − z_n ,  (89)

with w_n = √(x²_n + z²_n + 1).
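The reversibility stated in Eqs. (86)—(87) is easy to verify for the scaled map (88)—(89). The sketch below is our addition (parameters and the starting point are arbitrary): applying M, flipping the sign of z, applying M again and flipping once more must return the initial point, i.e. M₀MM₀M = Id.

```python
import math

def step(x, z, E, c, k):
    """One application of the scaled map M of Eqs. (88)-(89)."""
    w = math.sqrt(x*x + z*z + 1.0)
    s = w + z
    x1 = (E + c*s) / (1 + k*s) * s - x
    z1 = 0.5 * (x1*x1 - x*x) / s - z
    return x1, z1

def M0(x, z):
    """Involution (86): (x, z) -> (x, -z)."""
    return x, -z

E, c, k = 0.5, 0.2, 0.1
p = (0.3, -0.15)                      # arbitrary starting point
q = M0(*step(*M0(*step(*p, E, c, k)), E, c, k))
err = max(abs(q[0] - p[0]), abs(q[1] - p[1]))
# err should vanish up to round-off: M0*M*M0*M = Id
```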
The map M depends on three parameters, namely (E, c, k). Whereas for Ek ≥ c, M represents a map for which all solutions are bounded, it can contain both bounded and diverging orbits in the pure DNLS case (c ≠ 0 and k = 0) as well as in the combined AL-DNLS case if c > 2k, according to the findings in Section 2.3. Only the bounded orbits correspond to transmitting waves, whereas the unbounded orbits correspond to waves whose amplitude escapes to infinity and hence do not contribute to wave transmission. On inspection we find the first integral of the AL system to be

E(x_{n+1} − x_n) = [(x_{n+1} + x_n) − K][k(x_{n+1} − x_n) + 2(z_{n+1} + z_n)] ,  (90)
where K is a constant determined by the initial conditions. The structure of the phase plane is organized by a hierarchy of periodic orbits surrounded by quasiperiodic orbits. The sets of the corresponding fixed points form the invariant sets of the two involutions (fundamental symmetry lines) and are given by

S₁: z = 0 ,  (91)

S₂: x = ½ [(E + c(w + z))/(1 + k(w + z))](w + z) ,  (92)
respectively. The symmetric periodic orbits are arranged along higher-order symmetry lines, and the intersections of any two symmetry lines Sⁿ₁ = MⁿS₁, Sⁿ₂ = MⁿS₂, with n = 0, 1, 2, …, fall on a periodic orbit of M. The symmetry line z = 0 is the dominant symmetry line and contains at least one point of every positive-residue Poincaré—Birkhoff orbit. The organization of the periodic orbits by the symmetry lines can be exploited for a one-dimensional search to locate any desired periodic orbit in the x—z plane [28,30]. For a classification of the periodic orbits, Greene's method can be used, according to which the stability of an orbit of period q is determined by its residue ρ = ¼(2 − Tr[∏_{n=1}^{q} DM_n]), where DM_n is the linearization of M at the nth orbit point. The periodic orbit is stable when 0 < ρ < 1 (elliptic) and unstable when ρ > 1 (hyperbolic with reflection) or ρ < 0 (hyperbolic) [28,43]. As can be seen from the determinant of the Jacobian,

det(DM_n) = 1 + ½ (x²_{n+1} − x²_n)/[w_n(w_n + z_n)] ,  (93)
the map M is area-preserving for periodic orbits after mapping through the complete period, i.e. ∏_{n=1}^{q} det(DM_n) = 1. M is thus topologically equivalent to an area-preserving map, ensuring the existence of KAM tori near the symmetric elliptic fixed points [44].

2.5. Period-doubling bifurcation sequence

We focus on the period-1 orbits (fixed points of M), which, in the case of elliptic-type stability, have the largest basins of stability among all elliptic orbits. Thus the stable elliptic solutions encircling the fixed point form the main island in the map plane, which therefore plays a major role in determining the stability properties of the wave amplitude dynamics.
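The one-dimensional search on the dominant symmetry line and Greene's residue criterion can be sketched as follows. This code is our addition; the parameters are arbitrary choices in the regular regime Ek ≥ c. The period-1 point is located by bisection on z = 0, and its residue is obtained from a finite-difference Jacobian; the determinant should equal 1 and the residue should lie in (0, 1) (elliptic).

```python
import math

def step(x, z, E, c, k):
    """Scaled map M of Eqs. (88)-(89)."""
    w = math.sqrt(x*x + z*z + 1.0)
    s = w + z
    x1 = (E + c*s) / (1 + k*s) * s - x
    z1 = 0.5 * (x1*x1 - x*x) / s - z
    return x1, z1

E, c, k = 0.5, 0.02, 0.1      # E*k = 0.05 >= c: stable regime

# 1D search on the symmetry line z = 0: the fixed point satisfies step(x, 0) = (x, 0).
def f(x):
    return step(x, 0.0, E, c, k)[0] - x

a, b = 0.0, 1.0               # f(0) > 0, f(1) < 0 for these parameter values
for _ in range(80):
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
xf = 0.5 * (a + b)

# Finite-difference Jacobian DM at the fixed point; residue rho = (2 - Tr)/4.
h = 1e-6
j11 = (step(xf + h, 0, E, c, k)[0] - step(xf - h, 0, E, c, k)[0]) / (2*h)
j12 = (step(xf, h, E, c, k)[0] - step(xf, -h, E, c, k)[0]) / (2*h)
j21 = (step(xf + h, 0, E, c, k)[1] - step(xf - h, 0, E, c, k)[1]) / (2*h)
j22 = (step(xf, h, E, c, k)[1] - step(xf, -h, E, c, k)[1]) / (2*h)
det = j11*j22 - j12*j21
rho = (2.0 - (j11 + j22)) / 4.0
```

For these parameters the fixed point sits near x ≈ 0.24 and turns out to be elliptic.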
The period-1 orbit is determined by

x̄ = ½ [(E + c w̄)/(1 + k w̄)] w̄ ,  (94)

z̄ = 0 ,  (95)

where w̄ = √(1 + x̄²). Eq. (94) possesses one real root for c = 0, resulting in a stable elliptic fixed point, and has either no root or two real roots for c > 0. The two real roots correspond to one hyperbolic and one elliptic fixed point, respectively. The residue is given by

ρ = 1 − ¼ (E + c w̄)(E + 2c w̄ + ck w̄²)/(1 + k w̄)³ .  (96)

When c = Ek, we recover the degenerate linear case, for which x̄ = sign(E)√(E²/(4 − E²)) and ρ = 1 − E²/4, as in the genuine linear case c = k = 0. Concerning the stability of the period-1 orbit, Eq. (96) tells us that the residue remains positive and never passes through zero if the parameters obey the inequalities c < Ek and |E| < 2. As a result, the period-1 orbit cannot lose stability through a tangent bifurcation. According to the results obtained in Section 2.3, all orbits of the map are regular in this parameter range. Eq. (96) allows a further conclusion to be drawn: for E > 0 the residue of the period-1 orbit is always less than one, because the second term on the right-hand side then remains positive upon parameter changes; the position of the fixed point is merely shifted and it never experiences loss of stability through a period-doubling bifurcation. In this parameter range the route to global chaos is via resonance overlap. Only for E < 0 can the residue pass the value of one, connected with the onset of a period-doubling bifurcation, where the stable fixed point is converted into an unstable hyperbolic point with reflection, accompanied by the creation of two additional elliptic points. This period-doubling bifurcation of the period-1 orbit sets in when |E|/c > 1 (E < 0), and the newborn period-2 orbits are located at

x = ±√((E/c)² − 1) ,  (97)

z = 0 .  (98)

Note that the location of the period-2 orbits depends only on the (E, c) values and is independent of k, the AL nonlinearity strength, whereas their stability, determined by the corresponding residue

ρ = c²(E² − c²)/(c + k|E|)² ,  (99)

depends on the values of all three parameters. Due to the presence of the denominator in Eq. (99) we recognize that, for fixed parameters (E, c), enhancing the k value reduces the residue. Hence, the period-2 orbits become more resistant with respect to period-doubling bifurcations. Moreover, for k > k_c the residue of the two stable elliptic points is bounded from above by one, so that a further destabilizing bifurcation can be excluded. This critical AL nonlinearity strength k_c (ρ < 1) can be obtained as

k_c = (c/|E|) [√(E² − c²) − 1] .  (100)
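The period-2 predictions of Eqs. (97)—(99) can be checked directly. The sketch below is our addition (E, c, k are arbitrary choices with E < 0 and |E|/c > 1; the residue formula is compared in our reading of Eq. (99)): starting from (x, z) = (√((E/c)² − 1), 0), two applications of the scaled map must return the starting point, and the finite-difference residue of the two-cycle can be compared with the closed-form expression.

```python
import math

def step(x, z, E, c, k):
    """Scaled map M of Eqs. (88)-(89)."""
    w = math.sqrt(x*x + z*z + 1.0)
    s = w + z
    x1 = (E + c*s) / (1 + k*s) * s - x
    z1 = 0.5 * (x1*x1 - x*x) / s - z
    return x1, z1

def step2(x, z, E, c, k):
    return step(*step(x, z, E, c, k), E, c, k)

E, c, k = -1.5, 1.0, 0.3            # E < 0, |E|/c = 1.5 > 1: period-doubled regime
xp = math.sqrt((E / c) ** 2 - 1.0)  # Eq. (97); z = 0 by Eq. (98)

# Closure of the two-cycle: M^2 maps (xp, 0) to itself.
x2, z2 = step2(xp, 0.0, E, c, k)
close_err = max(abs(x2 - xp), abs(z2))

# Residue of the two-cycle from a finite-difference Jacobian of M^2.
h = 1e-6
t11 = (step2(xp + h, 0, E, c, k)[0] - step2(xp - h, 0, E, c, k)[0]) / (2*h)
t22 = (step2(xp, h, E, c, k)[1] - step2(xp, -h, E, c, k)[1]) / (2*h)
rho = (2.0 - (t11 + t22)) / 4.0
rho_99 = c**2 * (E**2 - c**2) / (c + k * abs(E)) ** 2   # Eq. (99), as reconstructed
```

For these values the two-cycle closes exactly and its residue lies in (0, 1), i.e. the orbit is elliptic, consistent with k exceeding the critical strength of Eq. (100).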
To study the period-doubling sequence as the mechanism by which the transition from regular to chaotic motion occurs, we take advantage of the renormalization technique developed for two-dimensional invertible maps [28—30,33,94]. We expand the map M up to terms quadratic in the deviation from the bifurcation point:

(δu_{n+1}, δv_{n+1})ᵀ = A (δu_n, δv_n)ᵀ + B (δu²_n, δu_n δv_n, δv²_n)ᵀ .  (101)

The 2×2 matrix A has the following entries:

A₁₁ = −1 ,  A₁₂ = −(E + c)/(1 + k) ,
A₂₁ = (E + 2c + kc)/(1 + k)² ,  A₂₂ = ½ (E + c)(E + 3c + ck − Ek)/(1 + k)² − 1 ,  (102)

and the elements of the 2×3 matrix B are given by

B₁₁ = A₁₁ ,  B₁₂ = 0 ,  B₁₃ = A₁₂ + 1 ,  B₂₁ = A₂₁ ,
B₂₂ = (E + 4c + 3ck − Ek + ck²)/(1 + k)² ,
B₂₃ = ½ (E² + 8c² − 4Eck + 9c²k + c²k² − 4Ek² + k²E²)/(1 + k)³ .  (103)

Finally, we bring the De Vogelaere-type map (101) into the standard form of a closed second-order difference equation (see Ref. [33] for the details of how to achieve this form):

Q_{n+1} + Q_{n−1} = 2C Q_n + 2Q²_n ,  (104)

where the parameter C is determined via the sum of the eigenvalues of the matrix A:

C = ¼ (E + c)(E + 3c + ck − Ek)/(1 + k)² − 1 .  (105)

The fixed point of Eq. (104) at Q̄ = 0 is stable for |E|/c < 1 and becomes unstable for 3 > |E|/c > 1, leading to a period-doubling bifurcation. Both points of the newborn period-2 orbit are located on the S₁ symmetry line, along which they are shifted upon increasing |E|. Eventually, at a sufficiently high |E| value, the period-2 orbit also loses stability through the next period-doubling bifurcation, which in turn gives rise to the birth of the corresponding period-4 orbit, having one point on the S₁ symmetry line and two points on the S₂ symmetry line. For further increased |E| the period-4 orbit also goes unstable in the next step of the period-doubling cascade. This cascade of period-doubling bifurcations terminates at a universal critical parameter C_∞, called the accumulation point, where local chaos appears. Employing a quadratic renormalization scheme for Eq. (104), this accumulation point has been determined to be C_∞ ≈ −1.2656
[28—30,33,94]. Solving Eq. (105) for E_∞ = E(c, k, C_∞) we obtain
$$ E_{\infty}=-\frac{1}{1-k}\Bigl[\,2c+(1+k)\sqrt{c^{2}-4\,|1+C_{\infty}|\,(1-k)}\,\Bigr]. \tag{106} $$
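The linear stability threshold underlying this cascade can be checked directly. Writing the quadratic map of Eq. (104) in Helleman's normalization Q_{n+1}+Q_{n-1}=2CQ_n+2Q_n² (the factor 2 on C is our convention), the origin is linearly stable iff the roots of s²−2Cs+1=0 lie on the unit circle, i.e. |C|≤1; the accumulation point C_∞≈−1.2656 lies beyond this range. A minimal numerical sketch — the function name and all parameter values are ours, chosen for illustration only:

```python
import numpy as np

def orbit_escapes(C, q0=1e-3, steps=2000, bound=1e6):
    """Iterate Q_{n+1} = 2*C*Q_n + 2*Q_n**2 - Q_{n-1} (Eq. (104),
    our normalization) and report whether the orbit leaves a large window."""
    q_prev, q = q0, q0
    for _ in range(steps):
        q_prev, q = q, 2.0 * C * q + 2.0 * q * q - q_prev
        if abs(q) > bound:
            return True
    return False

# Linearization Q_{n+1} = 2*C*Q_n - Q_{n-1}: eigenvalues solve
# s^2 - 2*C*s + 1 = 0, so Q = 0 is linearly stable iff |C| <= 1.
assert max(abs(np.roots([1.0, 4.0, 1.0]))) > 1.0   # C = -2: hyperbolic
assert orbit_escapes(-2.0)      # beyond the stability range: the orbit runs away
assert not orbit_escapes(-0.9)  # elliptic origin: a tiny orbit stays trapped
```

The escape test is crude but mirrors the bounded/unbounded classification used throughout this section.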
Apparently, for a given DNLS-nonlinearity strength c, enhancing the AL-nonlinearity strength k results in an increase of the accumulation value |E_∞| (provided k<1), i.e. the k term has the stabilizing tendency to prevent period-doubling sequences. In Fig. 3a we show a number of orbits of the map M for E=0.5, c=0.2 and k=0.1, together with the symmetry lines S₁ and S₂. This map exhibits a rich structure involving regular
Fig. 3. Orbits of the map M given in Eqs. (88) and (89) for the parameters: (a) AL-DNLS case: E=0.5, c=0.2 and k=0.1. (b) AL case: E=−1.0, c=0 and k=1.0. Superimposed in (a) are also the fundamental symmetry lines S₁ and S₂.
quasiperiodic (KAM) curves which densely fill the large basin of the stable period-1 orbit. The elliptic fixed points of the Poincaré—Birkhoff chains of various higher-order periodic orbits are also surrounded by regular KAM curves, while thin chaotic layers develop in the vicinity of the separatrices of the corresponding hyperbolic fixed points. Moreover, outside the structured core containing trajectories trapped inside the resonances, a broad chaotic sea has developed where the corresponding unstable orbits may escape to infinity. For comparison we illustrate in Fig. 3b the integrable behavior of the AL map, where the corresponding orbits can be generated from Eq. (90). In order to study the global stability properties of the map M, we plot in Fig. 4 the stability diagram in the x₀—E plane. For a set of initial conditions located on the dominant symmetry line, i.e. z=z₀=0 and various x=x₀, we iterated the map Eqs. (88) and (89). The dark region in Fig. 4 corresponds to stable solutions, for which the resulting orbit remains in a bounded region of the x—z plane of the map, whereas the white region of the x₀—E plane represents unbounded orbits. The curve separating the two regimes exhibits a rich structure. Practically all lines of constant E pass several branches of either transmitting or nontransmitting solutions, indicating multistable behavior. Multistability in the wave transmission along the nonlinear lattice chain will be considered in more detail in the next section. We further note that with increasing AL-nonlinearity parameter k the area of transmitting solutions in the E—x₀ plane grows. Beyond a certain nonlinearity strength k≥c/E, the nonlinear lattice chain eventually becomes transparent for all amplitudes. In Fig. 4a we also superimpose the line marking the location of the fixed point (period-1 orbit) of the map M.
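A classification of this kind is easy to reproduce. The scan of Fig. 4 iterates the complex map of Eqs. (88) and (89), which is not reproduced in this excerpt; as a stand-in sketch we use the real-valued stationary map of Eq. (112) from Section 3.2, launching initial conditions on the symmetry line x=y and declaring an orbit unbounded once it leaves a large window. The function name, thresholds and parameter values are ours:

```python
def bounded(x0, omega, c, kappa, steps=2000, bound=1e6):
    """Iterate the real AL-DNLS map (Eq. (112)) from a point on the
    symmetry line x = y and classify the orbit as bounded/unbounded."""
    x, y = x0, x0
    for _ in range(steps):
        x, y = -(omega * x + c * x**3) / (1.0 + kappa * x * x) - y, x
        if abs(x) > bound:
            return False
    return True

# omega inside the linear band (|omega| < 2): the origin is elliptic and a
# small initial condition stays trapped on KAM curves ...
assert bounded(0.01, omega=-0.8, c=1.0, kappa=0.0)
# ... while a large-amplitude DNLS orbit (kappa = 0) escapes cubically.
assert not bounded(5.0, omega=-0.8, c=1.0, kappa=0.0)
```

Sweeping x0 and the frequency over a grid of such calls reproduces the dark/white partition of a Fig. 4-type diagram.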
Following that line, the transition from a bounded to an unbounded regime for initial conditions around the period-1 orbit is a consequence of stability loss when passing from elliptic-type stability to hyperbolic-type instability upon changing E. The initial conditions may experience a bifurcation from an unstable hyperbolic fixed point to a stable elliptic fixed point on increasing E, where the corresponding x₀ values then come to lie in the basin of the elliptic point. At a critical E value the initial conditions x₀ leave the basin of the elliptic point and fall into the range of the unstable reflection hyperbolic point. Between the lower and the upper stability boundaries the elliptic point loses its stability temporarily, caused by a quadrupling bifurcation where the corresponding residue is ρ=0.75, leading to a local shrinkage of the area of bounded solutions which appears in Fig. 4a for E≈−1.075 and x₀≈0.24.

2.6. Transmission properties

In this section we study, as a physical application, the wave transmission properties of the nonlinear lattice chain. Our aim is to gain more insight into the effects of the combined AL-nonlinearity term and the DNLS-nonlinearity term with regard to wave transparency of a finite nonlinear segment embedded in a linear chain. Since the work of Winful et al. it has been known that periodic modulation of a nonlinear medium leads to bistable behavior, so that the transmitted intensity of a plane wave incident on a finite one-dimensional nonlinear medium is no longer a unique function of the input intensity [109]. Delyon et al. [81] have shown that the transmission of a wave through a (finite) sample of periodic nonlinear medium such as the DNLS exhibits nonanalytical properties. The transmission properties are further characterized by multistability induced by the spatial periodic modulation of the
Fig. 4. Illustration of the stability behavior of the map M with c=1.5 in the E—x₀ parameter plane. Bounded solutions correspond to the dark areas whereas diverging solutions are indicated by the white area (see text). The AL-nonlinearity strength k is (a) 0 and (b) 0.76.
medium due to the lattice discreteness in addition to the nonlinearity. They found that the transmitted energy exhibits plateaus as a function of the incident intensity and that the frequency-versus-intensity propagation diagram is fractal. Kahn and co-workers studied the nonlinear optical response of a superlattice constructed of alternating layers of a dielectric and an (insulating) antiferromagnetic film and also observed multistability and transmittivity gaps [96] (see also below).
2.7. Amplitude stability

We study the following transmission problem: plane waves with momentum k are sent from the left towards the nonlinear chain, where they are scattered into a reflected and a transmitted part:
$$ \psi_{n}=\begin{cases} R_{0}\,e^{ikn}+R_{1}\,e^{-ikn}, & n\le 1,\\ T\,e^{ikn}, & n\ge N. \end{cases} $$
We denote by R₀, R₁ the amplitudes of the incoming and reflected waves and by T the transmitted amplitude at the right end of the nonlinear chain. The wave number k lies in the interval [−π, π], yielding |E|≤2. Since the superposition principle is no longer valid in the nonlinear case, the transmitted amplitude T is not uniquely determined by the incident amplitude R₀. To circumvent this difficulty we solve the inverse transmission problem, i.e. compute the input amplitude R₀ for fixed output amplitude T (see also [81]). The procedure relies on the inverse map given by M⁻¹=M₂MM₂, which we interpret as a "backward map" in the following manner: for a given output plane wave with transmitted intensity |T|² at the right end of the nonlinear chain we have (ψ_{N+1}, ψ_N)=[T exp(ik(N+1)), T exp(ikN)]. From the pair (ψ_{N+1}, ψ_N) we obtain (r_{N+1}, r_N) and (θ_{N+1}, θ_N) as well as (x_N=2|T|²cos(k), z_N=0). The latter are used as initial conditions for the map M⁻¹ in the study of the fixed-output transmission problem. For a given wavenumber k the current J is fixed through the expression J=|T|² sin(k). We see, therefore, that the pair (k, |T|²) initializes the map M⁻¹ completely. Iterating the map M⁻¹ from n=N to n=0 successively determines the amplitudes (r_{N−1}, …, r₀) and phases (θ_{N−1}, …, θ₀) and eventually results in the value of ψ₀ at the left end of the nonlinear chain. Fig. 5 displays the transmission behavior in the k—|T|² parameter plane (momentum versus intensity of an outgoing wave), showing regions of transmitting (white) and nontransmitting (hatched) behavior. This representation is similar to that used by Delyon et al. and Wan and Soukoulis in the study of the corresponding stationary DNLS model [81,82]. For a given output wave with intensity |T|² and momentum k, the inverse map M⁻¹ has been iterated on a grid of 500 values of k and 250 values of |T|². Correspondingly, to initialize the map M⁻¹ we populate the z-axis with initial conditions x₀=2|T|²cos(k), z₀=0 and iterate each individual point. When the resulting incoming wave intensities |R₀|² are of the same order of magnitude as the transmitted intensity, the nonlinear chain is said to be transmitting (white area in Fig. 5). In Fig. 5 we show the AL-DNLS case k=0.25 with c=1. For wave numbers |k|∈[π/2, π] the regions of bounded and unbounded solutions are separated by a sharp smooth curve, which can be obtained approximately from the analysis of the initial wave amplitude stability performed in Section 2.3. Using Eq. (73) the boundary follows from
$$ |\phi_{T}|^{2}=\frac{1}{2k}\left[\sqrt{1+\frac{2k\,(1-\cos(k))}{c-2k}}-1\right], \tag{107} $$
where φ_T is the critical value of the wave amplitude above which stable transmission is necessarily impossible.
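The fixed-output ("backward") procedure described above can be sketched in a few lines. The sign conventions below — an Eq. (109)-type recursion with V=1, so that the linear leads carry plane waves when E=−2cos k — are our assumption and may differ from the paper's exact convention; the helper name is also ours:

```python
import numpy as np

def incident_amplitude(T, kwave, c, kappa, N=200):
    """Fixed-output iteration: prescribe the transmitted plane wave
    T*exp(i*k*n) for n >= N, march the stationary AL-DNLS recursion
    psi_{n-1} = -psi_{n+1} - (E + c|psi_n|^2) psi_n / (1 + kappa|psi_n|^2)
    leftwards through the nonlinear segment 1..N, and recover the
    incident amplitude R0 in the linear left lead."""
    E = -2.0 * np.cos(kwave)                     # linear-lead dispersion (assumption)
    psi_np1 = T * np.exp(1j * kwave * (N + 1))   # psi_{N+1}
    psi_n = T * np.exp(1j * kwave * N)           # psi_N
    for n in range(N, 0, -1):
        inten = abs(psi_n) ** 2
        psi_nm1 = -psi_np1 - (E + c * inten) * psi_n / (1.0 + kappa * inten)
        psi_np1, psi_n = psi_n, psi_nm1
    # psi_n is now psi_0; one purely linear step gives psi_{-1} in the left lead
    psi_m1 = -psi_np1 - E * psi_n
    # decompose (psi_0, psi_{-1}) into incoming and reflected plane waves
    R0 = (psi_m1 - psi_n * np.exp(1j * kwave)) / \
         (np.exp(-1j * kwave) - np.exp(1j * kwave))
    return R0

# Sanity check: a linear chain (c = kappa = 0) is perfectly transparent.
R0 = incident_amplitude(0.5, kwave=1.0, c=0.0, kappa=0.0)
assert abs(abs(R0) - 0.5) < 1e-9
```

Sweeping (k, |T|²) over a grid and comparing |R₀|² with |T|² reproduces the transmitting/nontransmitting partition of Fig. 5.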
Fig. 5. Wave transmission properties in the intensity-versus-momentum (k) plane. Hatched regions correspond to the nontransmitting regime whereas the white regions indicate transmission. The chain length is N=200. We show the AL-DNLS case c=1.0 and k=0.25. The dashed curve represents the boundary between nonescaping and escaping solutions obtained analytically from Eq. (107).
As the AL nonlinearity increases, its stabilizing effect manifests itself in an enlargement of the region of transmitting solutions. This effect becomes more pronounced for higher AL-nonlinearity strength (not shown here), eventually exhibiting perfect transmittance when Ek≥c. For |k|<π/2 the region of nontransmitting solutions reaches down into the region of linearly transmitting solutions, thus decreasing transparency. The boundary discerning between bounded and unbounded solutions shows a complex structure created by numerous narrow hatched tongues. The white regions between each of these tongues can be assigned to a corresponding stability basin of an elliptic periodic orbit, and the fractal structure of the boundary curves originates from the hierarchical, self-similar structure of islands around islands formed by higher-class periodic orbits [81,82]. In the transmission diagrams represented in Fig. 5, several branches are created for k∈[−π/2, π/2] at critical intensities |T|², indicating bistable or multistable behavior. Such multistability is illustrated in Fig. 6, where the transmitted wave intensity is plotted versus the intensity of the incoming wave for k=0.927. The curve in Fig. 6a, illustrating the pure DNLS case (c=1.0, k=0), shows oscillations resulting in numerous different output energies for a given input energy. Above an output intensity of 0.68 a transmission gap occurs, ranging up to 1.08. Fig. 6b demonstrates that the presence of a stabilizing AL nonlinearity of k=0.5 closes the gap, i.e. the transmittivity of the nonlinear lattice is restored. We have presented an investigation of the nonlinear stationary problem of a discrete nonlinear equation that interpolates between the Ablowitz—Ladik and discrete nonlinear Schrödinger equations, utilizing map approaches. The different regimes of the dynamical system were seen to depend on the two nonlinearity parameters k and c as well as on the wave energy E. Using the properties of the
Fig. 6. Transmitted intensity as a function of the incident intensity, exhibiting multistability of the nonlinear transmission dynamics for a nonlinear chain consisting of 200 sites. The parameters are E=−1.2, c=1.0 and (a) k=0: a transmission gap occurs; (b) the gap has been closed for k=0.5.
tangent space map we found that, when Ek≥c is satisfied, all orbits are characterized by a (largest) Lyapunov exponent equal to zero, thus leading to stable solutions. Consequently, this inequality marks the regime where transmission through the nonlinear lattice is ensured. Furthermore, for waves with energies outside this regime we found that stable map solutions are also guaranteed when 2k>c. The existence of these regular regimes shows that the presence of the AL term in Eq. (45) has the significant function of creating an "integrability regime" for the nonintegrable DNLS. We also studied the transmission properties of Eq. (45) in two ways and showed that, in addition to the regular transparent regimes, there are also cases where multistability is possible. The effect of the AL term is to close the transmission gaps and thus enhance the transparency of the nonlinear lattice [83].
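The zero-Lyapunov-exponent criterion quoted above refers to the tangent space of the full map of Eqs. (88) and (89), which lies outside this excerpt. As a sketch of the technique, the following estimates the largest Lyapunov exponent for the real-valued stationary map introduced in Section 3.2 (Eqs. (112), (116) and (117)) by transporting and renormalizing a tangent vector; a small orbit around the elliptic origin should give an exponent near zero. Function name and parameter values are ours:

```python
import math

def lyapunov_exponent(x0, omega, c, kappa, steps=4000):
    """Largest Lyapunov exponent of the real map (112), estimated by
    transporting a tangent vector with the tangent map (116)/(117)."""
    x, y = x0, x0
    vx, vy = 1.0, 0.0
    acc = 0.0
    for _ in range(steps):
        q = 1.0 + kappa * x * x
        u = -(omega + (3.0 * c - kappa * omega) * x * x
              + c * kappa * x ** 4) / (q * q)        # Eq. (117), at the current x
        vx, vy = u * vx - vy, vx                      # tangent map, Eq. (116)
        x, y = -(omega * x + c * x ** 3) / q - y, x   # map M, Eq. (112)
        norm = math.hypot(vx, vy)
        acc += math.log(norm)
        vx, vy = vx / norm, vy / norm
    return acc / steps

# Small orbit around the elliptic origin (|omega| < 2): exponent ~ 0.
lam = lyapunov_exponent(0.01, omega=-0.8, c=1.0, kappa=0.2)
assert abs(lam) < 0.05
```

For a regular orbit the accumulated log-norm stays bounded, so the estimate tends to zero as the number of steps grows, which is the numerical signature of the "integrability regime".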
3. Soliton-like solutions of the generalized discrete nonlinear Schrödinger equation

3.1. Introduction

In this chapter we study the localized stationary solutions of the GDNLS equation in the real domain in greater detail. We show that the results of the stationary analysis can be used to excite localized stationary states of designed patterns on the lattice. Stationary localized solutions of a pure DNLS system were studied in [82,107,108] in the context of wave propagation in periodically modulated media. In nonlinear optics, Kerr-type nonlinearities give rise to DNLS equations, and the localized solutions are supported by states in the first gap, therefore called gap solitons [34,95,96,110,112—114]. The corresponding stationary system can be treated by a nonlinear map approach. In searching for localized solutions one has to be aware that the stationary nonintegrable DNLS system exhibits irregular chaotic behavior, which led the authors of [82,107,108] to the conclusion that perfect localization in a nonintegrable lattice system is impossible. Nevertheless, we demonstrate that stable localized lattice states coexist with the nonintegrability of the map orbits through homoclinic and heteroclinic connections. The chapter is organized as follows: in Section 3.2 we introduce the real-valued stationary AL-DNLS problem and link it with a two-dimensional area-preserving map. We discuss the stability properties of the fixed points of the map. In Section 3.3 we discuss the anti-integrable limit of the GDNLS and prove the existence of localized solutions. In Section 3.4 the Melnikov method is used to prove the existence of homoclinic orbits, thus showing nonintegrability of the map. With the help of the Birkhoff normal forms we determine homoclinic orbits analytically in Section 3.5 and compute the soliton pinning energy.
In order to obtain the heteroclinic orbits we exploit a variational approach relating the heteroclinic points to the critical points of a certain action function. In Section 3.6 we excite bright (dark) solitons in the dynamical DNLS using the homoclinic (heteroclinic) map orbits as initial data. Finally, in Section 3.8 we give a short summary. (See also Refs. [99—103,105,106,116].)

3.2. The real-valued stationary problem of the GDNLS

We investigate the solution properties of the GDNLS, focusing on time-periodic but spatially localized solutions [115]. As shown below, such a study of localized GDNLS states implies real-valued stationary amplitudes. Substituting the ansatz
$$ \psi_{n}(t)=\phi_{n}\,\exp(-i\omega t), \tag{108} $$
with amplitudes φ_n and phase (oscillation frequency) ω, into Eq. (44), we obtain the following coupled system for the amplitudes φ_n:
$$ \omega\phi_{n}+c\,|\phi_{n}|^{2}\phi_{n}+(V+k|\phi_{n}|^{2})(\phi_{n+1}+\phi_{n-1})=0. \tag{109} $$
To distinguish the present study (in the real domain) from that of the preceding chapter (in the complex domain) we use the notation ω for the stationary phase. We further remark that, for computational convenience, the current sign convention for the dispersion term is chosen opposite to that of the stationary Eq. (45).
We are particularly interested in solutions exponentially localized at a single site and distinguish two situations: (1) |φ_n|>|φ_{n+1}| for n>0 and |φ_n|<|φ_{n+1}| for n<0 with lim_{|n|→∞}|φ_n|=0, corresponding to the bright soliton-like solution; and (2) |φ_n|<|φ_{n+1}| for n>0 and |φ_{n+1}|<|φ_n| for n<0 with lim_{|n|→∞}|φ_n|=a>0, resulting in the dark soliton-like solution. Without loss of generality we assume that both types of soliton-like solution have their main deviation from the background around the central element of the lattice. Furthermore, we require for the bright (dark) soliton solution an exponential decrease (increase) of the amplitudes away from the central site as |n|→∞. It can readily be seen that the current
$$ J=i\,\bigl[\phi_{n}^{*}\phi_{n-1}-\phi_{n}\phi_{n-1}^{*}\bigr] \tag{110} $$
is conserved for the system of stationary equations. Since we consider an open lattice chain (without periodic boundary conditions), we can show that localized solutions imply real amplitudes φ_n∈ℝ. To this end we consider the value of the current at one of the ends of the chain, assumed for the moment to be of finite length N. Representing φ_{N-1} by the r.h.s. of the corresponding stationary equation,
$$ \phi_{N-1}=-\frac{\omega+c\,|\phi_{N}|^{2}}{V+k\,|\phi_{N}|^{2}}\;\phi_{N}, \tag{111} $$
we immediately obtain J≡0. Due to the conservation of J this result must hold for all lattice indices n∈[−N, N], which can only be fulfilled either in the special case of constant amplitudes φ_n=const or, in general, for real-valued φ_n∈ℝ. The last result is independent of the value of N. Hence, for the remainder of this chapter we consider real-valued amplitudes φ_n. It is convenient to cast the real-valued second-order difference equation (109) into a two-dimensional map ℝ²→ℝ² by defining x_n=φ_n and y_n=φ_{n-1}, where the lattice index plays the role of discrete "time". We arrive at the map
$$ M:\quad x_{n+1}=-\frac{\tilde\omega\,x_{n}+\tilde c\,x_{n}^{3}}{1+\tilde k\,x_{n}^{2}}-y_{n},\qquad y_{n+1}=x_{n}. \tag{112} $$
We used the notation ω̃=ω/V and c̃=c/V (and correspondingly k̃=k/V); the tildes are conveniently dropped afterwards. Reversibility of the map M is established by the factorization M=M₁M₂ with
$$ M_{1}:\quad \bar x=y,\quad \bar y=x, \tag{113} $$
$$ M_{2}:\quad \bar x=x,\quad \bar y=-\frac{\omega x+c\,x^{3}}{1+k x^{2}}-y, \tag{114} $$
where M₁,₂ are involutions and their corresponding symmetry lines are given by S₁: x=y and S₂: y=−(1/2)(ωx+cx³)/(1+kx²). Furthermore, the map M is an analytic area-preserving map. In order to investigate stationary localized solutions in the form of the bright (dark) soliton, respectively, it suffices to study the fixed points (period-1 orbits) of the map M. The fixed points, for
which x̄=ȳ, of this map are located at
$$ \bar x_{0}=0,\qquad \bar x_{\pm}=\pm\sqrt{-\frac{\omega+2}{c+2k}}, \tag{115} $$
where x̄_± exists only if sign(ω+2)=−sign(c+2k). The stability of the fixed points is governed by the value of the corresponding residue [28,43], ρ=(1/4)[2−Tr(DM(x̄))]. The tangent map DM is determined by
$$ DM(x)=\begin{pmatrix} u_{n} & -1\\ 1 & 0 \end{pmatrix}, \tag{116} $$
with
$$ u_{n}=-\frac{\omega+(3c-k\omega)\,x_{n}^{2}+ck\,x_{n}^{4}}{(1+k\,x_{n}^{2})^{2}}. \tag{117} $$
The residue corresponding to the fixed point at the origin is
$$ \rho_{0}=\frac{\omega+2}{4}. \tag{118} $$
For ω values within the range of the linear band, i.e. |ω|<2, one has 0<ρ₀<1 and the origin is a stable elliptic fixed point encircled by stable elliptic-type map orbits. For |ω|>2 (outside the range of the linear band) we distinguish the following two cases: (i) ω<−2, c+2k>0. In this case the residue passes through zero, i.e. ρ₀<0, and hence the origin loses stability and is turned into an unstable hyperbolic point through a tangent bifurcation. This hyperbolic point is connected to itself by a homoclinic orbit created by the (invariant) unstable and stable manifolds. As will be shown below, the homoclinic orbit is manifested on the lattice chain as a soliton-like solution equivalent to the so-called gap soliton of nonlinear optics, lying in the stop band below the linear passing band [34,112,71]. The pair of points x̄_± on the symmetry line S₁ form stable elliptic fixed points. (ii) ω>2, c+2k<0. The value of the residue at the origin passes through one, i.e. ρ₀>1, connected with the onset of a period-doubling bifurcation in which the fixed point is converted into an unstable hyperbolic point with reflection. The newly created period-2 orbits are located on the line x=−y. The homoclinic map orbit supports on the lattice chain a soliton-like solution which exists in the gap above the linear passing band and has alternating signs of adjacent amplitudes, i.e. sign(φ_{n+1})=−sign(φ_n), as a characteristic feature (see below). This stationary localized structure has been called a staggered soliton by Cai et al. in their study of the combined AL-DNLS equation [18]. Correspondingly, the soliton solution of case (i) is called an unstaggered soliton. Note that upon the sign changes c→−c and ω→−ω the map has the symmetry property sign(φ_{n+1})=−sign(φ_n), so that the unstaggered and staggered solitons replace each other.
A third case of unstable fixed points can also be attributed to the occurrence of a stationary localized structure on the nonlinear lattice, namely: (iii) |ω|<2 and c+2k<0. Since the frequency lies in the linear band, the origin is still a stable elliptic fixed point, whereas the pair of fixed points x̄_± on the symmetry line S₁ represents two unstable hyperbolic fixed points
which are connected to each other via a (pair of) heteroclinic orbits. This heteroclinic map orbit represents a kink-like solution, also called a dark soliton. Staggered and unstaggered versions of this soliton exist, too.

3.3. The anti-integrable limit and localized solutions

In this section we apply the concept of the anti-integrable limit introduced by Aubry and Abramovici [117,118]. We are particularly interested in the impact of the integrable AL term of Eq. (44) on the formation of breathers, i.e. the occurrence of time-periodic, spatially localized solutions. Recently, MacKay and Aubry proved the existence of localized solutions in the form of breathers for weakly coupled arrays of oscillators [119]. These authors also suggested applying the anti-integrable (better called no-coupling) limit to prove the existence of breather solutions for the DNLS system (cf. Section 9 of Ref. [119]). Using the anti-integrable limit, Bressloff proved the existence of localized ground states for the standard diffusive Haken model describing a neural network [120]. Due to the formal equivalence of the Haken model to a DNLS Hamiltonian, one can also infer from Bressloff's result the existence of (stationary) breathers for the DNLS. However, the present study involves, besides the nonintegrable DNLS term, also the integrable AL contribution. It is worth mentioning that the AL system by itself has no anti-integrable limit. The spatial behavior of the breathers ψ_n(t)=φ_n exp(−iωt) of the GDNLS is described by a stationary equation with real amplitudes φ_n∈ℝ [119] (see also the discussion above). Since |ψ_n(t)|² does not change with time, these solutions can be viewed as static breathers. The action for the real-valued AL-DNLS system reads
$$ F=-\sum_{n}\left[\frac{1}{2k}\Bigl(\omega-\frac{c}{k}\Bigr)\ln(1+k\phi_{n}^{2})+\frac{c}{2k}\,\phi_{n}^{2}\right]+\frac{V}{2}\sum_{n}(\phi_{n+1}-\phi_{n})^{2}. \tag{119} $$
(We shifted the linear band by 2V.) The map orbits are determined by
$$ V(\phi_{n+1}+\phi_{n-1}-2\phi_{n})=-\frac{(\omega+c\,\phi_{n}^{2})\,\phi_{n}}{1+k\,\phi_{n}^{2}}. \tag{120} $$
The anti-integrable limit of the AL-DNLS system is obtained for vanishing hopping parameter V=0, where the action is represented as a sum of local on-site potentials U(φ_n):
$$ F_{AI}=-\sum_{n}\left[\frac{1}{2k}\Bigl(\omega-\frac{c}{k}\Bigr)\ln(1+k\phi_{n}^{2})+\frac{c}{2k}\,\phi_{n}^{2}\right]\equiv\sum_{n}U(\phi_{n}). \tag{121} $$
The orbits of the stationary problem are determined by
$$ \frac{\partial U(\phi_{n})}{\partial\phi_{n}}\equiv U'(\phi_{n})=0, \tag{122} $$
yielding the rest positions
$$ \bar\phi_{0}=0, \tag{123} $$
$$ \bar\phi_{\pm}=\pm\sqrt{-\omega/c},\qquad \omega<0,\ c>0. \tag{124} $$
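The persistence of such anti-integrable configurations under small coupling, which the theorem below establishes, can be illustrated by a Newton continuation of the one-site configuration φ₀=φ̄₊ from V=0, exploiting the tridiagonal Jacobian structure used in the proof. A sketch under our own parameter choices (helper name, lattice size and tolerances are ours):

```python
import numpy as np

def continue_breather(omega=-1.0, c=1.0, kappa=0.2, V=0.05, nsites=41):
    """Newton continuation of the V = 0 one-site solution of Eq. (120):
    V(phi_{n+1}+phi_{n-1}-2 phi_n) + (omega + c phi_n^2) phi_n/(1+kappa phi_n^2) = 0
    with zero (Dirichlet) boundaries."""
    phi = np.zeros(nsites)
    phi[nsites // 2] = np.sqrt(-omega / c)            # anti-integrable seed
    for _ in range(50):
        pad = np.concatenate(([0.0], phi, [0.0]))
        lap = pad[2:] + pad[:-2] - 2.0 * phi          # discrete Laplacian
        q = 1.0 + kappa * phi ** 2
        F = V * lap + (omega + c * phi ** 2) * phi / q
        if np.max(np.abs(F)) < 1e-12:
            break
        # tridiagonal Jacobian: on-site derivative - 2V on the diagonal, V off it
        dU = ((omega + 3.0 * c * phi ** 2) * q
              - (omega + c * phi ** 2) * phi * 2.0 * kappa * phi) / q ** 2
        J = np.diag(dU - 2.0 * V) + V * (np.eye(nsites, k=1) + np.eye(nsites, k=-1))
        phi -= np.linalg.solve(J, F)
    return phi, F

phi, F = continue_breather()
assert np.max(np.abs(F)) < 1e-10      # converged solution of Eq. (120)
assert abs(phi[20]) > 0.5             # central site stays excited
assert abs(phi[5]) < 1e-4             # exponentially small tails
```

The continued solution remains exponentially localized, in line with the one-site breather discussed at the end of this section.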
Since orbit points at sites n+1 and n are mutually independent of each other, an orbit can be associated with an arbitrary sequence of the three symbols assigned to (φ̄₀, φ̄_±). Hence, the orbits are trivially equivalent to a Bernoulli shift, establishing the existence of chaotic orbits [117,118]. We can prove that some chaotic solutions of the anti-integrable limit persist, at least up to a critical value V_c.

Theorem. For
$$ V\le V_{c}=\frac{|U'(|u_{+}|)|}{4\,\phi_{\max}}, \tag{125} $$
there exists a unique solution of Eq. (120) such that for all n the deviations from the rest position φ̄₀ are bounded by
$$ |\phi_{n}|\le u_{+}, \tag{126} $$
and the deviations from φ̄_± lie in the range given by
$$ \phi_{-}\le|\phi_{n}|\le\phi_{\max}, \tag{127} $$
with
$$ u_{\pm}=\pm\sqrt{\frac{1}{2c}\left[\,\omega-3\,\frac{c}{k}+\sqrt{\omega^{2}-10\,\omega\,\frac{c}{k}+9\,\frac{c^{2}}{k^{2}}}\,\right]}, \tag{128} $$
$$ \phi_{-}=\sqrt{\frac{1}{2c}\left[\,\omega-3\,\frac{c}{k}+8V+\sqrt{\Bigl(3\,\frac{c}{k}-\omega-8V\Bigr)^{2}+16cV-4c\omega}\,\right]}, \tag{129} $$
and φ_max is the maximum positive root of the equation
$$ U(\phi_{n})=U(u_{+}). \tag{130} $$
Proof. Eq. (120) can be rewritten as
$$ V(\phi_{n+1}+\phi_{n-1}-2\phi_{n})=U'(\phi_{n}). \tag{131} $$
From Eq. (127) we obtain
$$ \sup_{n}|\phi_{n+1}+\phi_{n-1}-2\phi_{n}|\le 4\,\phi_{\max}, \tag{132} $$
which yields
$$ 4V\phi_{\max}\ge|U'(\phi_{n})|. \tag{133} $$
When
$$ V<V_{c}=\frac{\max_{n}|U'(\phi_{n})|}{4\,\phi_{\max}}=\frac{|U'(|u_{+}|)|}{4\,\phi_{\max}}, \tag{134} $$
then Eq. (120) has infinitely many solutions. But in each interval given by the inequalities (126) and (127) there is one and only one solution, and hence the problem is uniquely defined.
From the graph of U(φ) it can readily be seen that U''(φ_n)>0 in the interval given by Eq. (126). Further, when (127) holds, then U''(φ_n)<−4V. The Jacobian operator DF is tridiagonal with diagonal elements U''+2V and off-diagonal elements −V. Since |U''+2V|>2V, the operator DF is invertible and its inverse is bounded. Hence it follows from the implicit function theorem that an orbit has a locally unique continuation for V<V_c [117—123]. □

It can easily be shown that ∂V_c/∂k<0 and ∂φ_max/∂k<0. From this we note that the presence of the integrable AL term has a destabilizing influence on the continuation process of the solutions of the anti-integrable limit, in the sense that the higher the k values, the lower the V values to which the continuation can be carried out. Moreover, the possible maximal amplitudes of the continued solutions decrease with increasing k. An interesting case arises when, in the no-coupling limit V=0, only one site is excited while all the others remain unexcited (e.g. φ₀=φ̄₊ and φ_{n≠0}=0). This results in a so-called one-site breather, which is maintained under the action of the coupling 0<V<V_c; its spatially exponential localization can be proven with the help of Theorem 3 of [119].

3.4. The Melnikov function and homoclinic orbits

As is well known, in generic nonintegrable maps the stable and unstable manifolds of hyperbolic equilibria cross each other in homoclinic points; or there might appear crossings of the stable and the unstable manifolds of different hyperbolic points, called heteroclinic connections. Such two-dimensional maps are often associated with the Poincaré map of periodically perturbed two-dimensional flows [28]. For these time-continuous flows the Melnikov function has proved a powerful method to show the existence of orbits homoclinic to a hyperbolic equilibrium. Glasser et al. [124] extended the Melnikov analysis to two-dimensional discrete maps of the plane of the form u_{n+1}=F(u_n)+εG(u_n) with u=(x, y)∈ℝ². Thus the r.h.s. is assumed to consist of a completely integrable part F and a small nonintegrable perturbation εG with ε≪1. Furthermore, the unperturbed system ε=0 possesses an unstable equilibrium characterized by coinciding stable and unstable manifolds, forming a perfect unperturbed separatrix on which the solution is known explicitly. Based on geometric arguments, a Melnikov function was developed in [124] measuring the distance between the stable and unstable manifolds under the action of the perturbation. For a perturbational treatment we consider the nonlinear term related to c as a small (nonintegrable) perturbation of the integrable AL map (c=0, k≠0). We therefore introduce in Eq. (112) the small parameter ε:
$$ x_{n+1}=-\frac{\omega x_{n}+\varepsilon c\,x_{n}^{3}}{1+k x_{n}^{2}}-y_{n},\qquad y_{n+1}=x_{n}, \tag{135} $$
whose integrable part (ε=0) possesses a separatrix given by
$$ k\,x^{2}y^{2}+x^{2}+y^{2}+\omega\,xy=0. \tag{136} $$
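The parameterization of this separatrix given in Eqs. (137) and (138) below can be verified numerically against Eq. (136); a minimal check (the values of ω, k and t are arbitrary):

```python
import math

omega, kappa = -2.5, 0.3               # omega < -2, so cosh(beta) = -omega/2 > 1
beta = math.acosh(-omega / 2.0)
amp = math.sinh(beta) / math.sqrt(kappa)
for t in (-1.3, 0.0, 0.7, 2.1):
    x = amp / math.cosh(t)             # x_n(t) of Eq. (137) with n = 0
    y = amp / math.cosh(t - beta)      # y_n(t) of Eq. (138) with n = 0
    # the point must lie on the separatrix (136):
    assert abs(kappa * x * x * y * y + x * x + y * y + omega * x * y) < 1e-9
```

The residual vanishes to machine precision for every t, confirming that the sech profile traces the invariant curve exactly.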
In the first quadrant the separatrix loop can be parameterized by
$$ x_{n}(t)=\frac{1}{\sqrt{k}}\,\sinh\beta\,\operatorname{sech}(t-n\beta), \tag{137} $$
$$ y_{n}(t)=\frac{1}{\sqrt{k}}\,\sinh\beta\,\operatorname{sech}(t-(n+1)\beta), \tag{138} $$
where cosh β=−ω/2 and t is a real parameter regulating the position on the separatrix loop. Note that the AL soliton center x₀(0)=√((ω²/4−1)/k), y₀(0)=−2x₀(0)/ω is determined on the map plane by the intersection point of the AL separatrix loop with the symmetry line S₂. According to [124] the Melnikov function is given by
$$ M(t;\omega,k,c)=\|u_{k}(t)\|\,D(0), \tag{139} $$
with
$$ D(0)=\sum_{k=-\infty}^{\infty}G(x_{k-1},y_{k-1})\wedge v_{k-1}, \tag{140} $$
where the wedge product is (u₁, u₂)∧(v₁, v₂)=u₁v₂−u₂v₁. The unit tangent vector to the separatrix is given by
$$ v_{k}(t)=u_{k}(t)/\|u_{k}(t)\|, \tag{141} $$
where
$$ u_{k}(t)=\Bigl(-y_{k}-\frac{\omega}{2}\,x_{k}-k\,x_{k}^{2}y_{k},\;\; x_{k}+\frac{\omega}{2}\,y_{k}+k\,x_{k}y_{k}^{2}\Bigr). \tag{142} $$
In our case the perturbation is G(x_n)=(−εc\,x_n³/(1+k x_n²), 0). Thus we have
$$ D(0)=-\varepsilon c\sum_{k}\frac{x_{k-1}^{3}}{1+k x_{k-1}^{2}}\Bigl(x_{k-1}+\frac{\omega}{2}\,x_{k}+k\,x_{k-1}^{2}x_{k}\Bigr)=\frac{\varepsilon c}{\omega}\sum_{k}\bigl[x_{k}^{3}x_{k+1}+\omega\,x_{k}^{2}x_{k+1}^{2}+k\,x_{k}^{3}x_{k+1}^{3}+x_{k}x_{k+1}^{3}\bigr]. \tag{143} $$
We therefore need to calculate sums of the form S_{pq}(a,b,β)=Σ_n sechᵖ(βn−a) sech^q(βn−b). Using the Poisson summation formula we have
$$ S(a,b,\beta)=\sum_{m=-\infty}^{\infty}\int_{-\infty}^{\infty}\exp(2\pi imx)\operatorname{sech}(\beta x-a)\operatorname{sech}(\beta x-b)\,dx. \tag{144} $$
The integral is evaluated using residue calculus:
$$ \int_{-\infty}^{\infty}e^{\alpha x}\operatorname{sech}(\beta x-a)\operatorname{sech}(\beta x-b)\,dx=-\frac{\pi}{\beta^{2}}\operatorname{cosech}\delta\left\{\frac{\alpha\pi}{\beta}\operatorname{cosech}\delta\Bigl[e^{-\alpha a/\beta}+e^{-\alpha b/\beta}\Bigr]+2\coth\delta\Bigl[e^{-\alpha a/\beta}-e^{-\alpha b/\beta}\Bigr]\right\}, \tag{145} $$
where δ=a−b. Setting α=−2πin and performing the remaining sum we get
$$ S(a,b,\beta)=\frac{2\pi}{\beta}\bigl[h(\delta)\,\tilde I(a,b,\beta)+f(\delta)\,I(a,b,\beta)\bigr], \tag{146} $$
with
$$ I=\frac{\delta}{\pi}-\Phi\Bigl(\frac{\pi^{2}}{\beta},\frac{\pi b}{\beta}\Bigr)+\Phi\Bigl(\frac{\pi^{2}}{\beta},\frac{\pi a}{\beta}\Bigr), \tag{147} $$
$$ \tilde I=-\frac{\partial I}{\partial b}-\frac{\partial I}{\partial a}-\frac{2}{\pi}, \tag{148} $$
where
$$ \Phi(x,u)=\sum_{n=1}^{\infty}\sin(2nu)\operatorname{cosech}(nx)=\frac{K}{\pi}\bigl[E(\operatorname{am}(2Ku/\pi))-2uE/\pi\bigr], \tag{149} $$
and finally h(δ)=cosech²δ and f(δ)=2 cosech δ coth δ. Similarly, we have
$$ S_{22}(a,b,\beta)=\sum_{n}\operatorname{sech}^{2}(\beta n-a)\operatorname{sech}^{2}(\beta n-b)=g(\delta)\,I(a,b,\beta), \tag{150} $$
where g(δ)=cosech²δ. The sum S₁₂(a,b,β)=Σ_n sech(βn−a) sech²(βn−b) follows from S₁₂=(S−S′)/2, where the primes denote derivatives with respect to a. The sum S₃₁(a,b,β)=Σ_n sech³(βn−a) sech(βn−b) can be constructed as S₃₁=S₁₁−S₁₂′, while the last sum S₄(a,β)=Σ_n sech⁴(βn−a) is S₂₂(a,a,β). Thus we obtain
$$ D(0)=\frac{\varepsilon c}{\omega k}\sinh^{2}\beta\,\bigl(S_{31}(t,t-\beta,\beta)-2\cosh\beta\,S_{22}(t,t-\beta,\beta)+\sinh^{2}\beta\,S_{33}(t,t-\beta,\beta)+S_{4}(t,t,\beta)\bigr), \tag{151} $$
which reduces to
$$ D(0)=-\frac{\varepsilon c}{\omega k}\sinh^{2}\beta\left\{4\coth\beta\,\frac{K\kappa}{\beta}\operatorname{dn}\frac{2Kt}{\beta}\operatorname{sn}\frac{2Kt}{\beta}\operatorname{cn}\frac{2Kt}{\beta}+\frac{8}{3}\,\frac{K^{2}\kappa^{2}}{\beta^{2}}\Bigl[\operatorname{dn}^{2}\frac{2Kt}{\beta}\operatorname{cn}^{2}\frac{2Kt}{\beta}-\operatorname{dn}^{2}\frac{2Kt}{\beta}\operatorname{sn}^{2}\frac{2Kt}{\beta}-\kappa^{2}\operatorname{sn}^{2}\frac{2Kt}{\beta}\operatorname{cn}^{2}\frac{2Kt}{\beta}\Bigr]\right\}, \tag{152} $$
where E(κ)/K(κ)=π/β, with κ the elliptic modulus (not to be confused with the AL-nonlinearity strength k), K and E being the complete elliptic integrals of the first and second kind, respectively. Eq. (152) shows that the Melnikov function is a periodic function of t with an infinite number of simple zeros, proving the presence of homoclinic chaos in the perturbed map. As a result of the nonintegrability, the stable and unstable manifolds of the hyperbolic fixed points of the perturbed map are no longer identical, but intersect and create a homoclinic tangle. Eq. (152) also shows that the separatrix splitting is proportional to the ratio c/(ωk). The same ratio was found in [83] to limit the parameter region where the map (112) shows regular motion. Finally, from Eq. (152) one obtains that the distance between successive transversal intersections of the stable and unstable manifolds depends purely on the ratio K/β, which in turn depends only on ω and not directly on the nonlinearity parameters c and k. The nonlinearity parameters appear only as a prefactor of the Melnikov function (153), regulating the degree of the separatrix splitting. Apparently, with higher nonintegrability parameter c the splitting grows, whereas the integrability parameter k acts in the opposite direction, namely it suppresses the splitting. According to the defining relation between κ and ω, the modulus κ has a rather slow dependence on ω, which means that κ can be considered small even for rather large ω, so that it is reasonable to consider κ as small in Eq. (152). This approximation reduces the complexity of D(0) considerably. Using small-κ expansions of the Jacobi elliptic functions [125]:
$$ D(0)=-\frac{\varepsilon c}{k}\,A(\omega)\cos\Bigl(\frac{4K}{\beta}\,t+\theta\Bigr)+O(\kappa^{3}), \tag{153} $$
where
$$ A(\omega)=\frac{2K\kappa}{\omega\beta}\coth\beta+\frac{26\,K^{2}\kappa^{2}}{9\,\beta^{2}}\sinh\beta,\qquad \tan\theta=\frac{3\beta\coth\beta}{4K}. \tag{154} $$
The distance between successive zeros of the Melnikov function is given by
$$ \Delta t=\frac{\pi\beta}{4K}\approx\frac{1}{2}\ln\left(-\frac{\omega}{2}+\sqrt{\frac{\omega^{2}}{4}-1}\,\right)=\frac{1}{2}\ln(\lambda), \tag{155} $$
telling us that the distance between two adjacent intersections of the stable and unstable manifolds depends solely on the oscillation frequency ω. The distance vanishes at the band edge ω=−2 and grows logarithmically as ω ranges further down into the gap. Interestingly, the quantity λ in Eq. (155) is identical to the maximal eigenvalue of the linearized map around the hyperbolic point at the origin.

3.5. Normal form computation of the homoclinic tangle

In this section we use the Birkhoff normal forms to compute the homoclinic orbit corresponding to the unstable hyperbolic point at the map origin for ω<−2 and c+2k>0. The Birkhoff normal form of an area-preserving map yields a simplified version of the map, achieved by a canonical transformation in the form of a formal series expansion [126]. Normal forms are powerful tools for the analytical determination of homoclinic orbits of two-dimensional maps. Recently Tabacman [127] developed another method for computing homoclinic and heteroclinic orbits, relying on
376
D. Hennig, G.P. Tsironis / Physics Reports 307 (1999) 333—432
an action principle. In a subsequent section we exploit this method to obtain the orbit heteroclinic to the fixed points (x̂₊, ŷ₊) and (x̂₋, ŷ₋). Later we need the “exact” location of the intersection points of the stable and unstable manifolds, to use them as initial data in order to excite stationary localized states of the GDNLS. We begin by rewriting the map of Eq. (112) as follows:
M:

x̄ = −ωx − ω(γ/ω − μ) x³/(1 + μx²) − y ,
ȳ = x .

(156)
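As a consistency check, the map (156) can be iterated numerically. The following minimal Python sketch assumes the reconstructed form of Eq. (156) given above (the function and parameter names are ours): it verifies that the origin is a fixed point and that the linearization there has reciprocal eigenvalues, as required for an area-preserving map.

```python
import math

def al_dnls_map(x, y, omega, gamma, mu):
    """One step of the AL-DNLS map M (reconstructed from Eq. (156))."""
    x_new = -omega * x - omega * (gamma / omega - mu) * x**3 / (1.0 + mu * x**2) - y
    return x_new, x

omega, gamma, mu = -2.5, 1.0, 0.1

# The origin is a fixed point of M.
assert al_dnls_map(0.0, 0.0, omega, gamma, mu) == (0.0, 0.0)

# Eigenvalues of the linearization (Eq. (159)); |omega| > 2 makes the origin hyperbolic.
disc = math.sqrt(omega**2 - 4.0)
lam_plus, lam_minus = (-omega + disc) / 2.0, (-omega - disc) / 2.0
print(lam_plus * lam_minus)   # product equals 1: area preservation
print(lam_plus)               # larger than 1: the unstable direction
```

For ω = −2.5 the eigenvalues come out as 2 and 1/2, so the origin is hyperbolic and the forward/backward stretching rates are reciprocal.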
The linear part

x̄ = −ωx − y ,  ȳ = x

(157)
is diagonalized through the canonical transformation

P = λ₊x − y ,  Q = λ₋x − y ,

(158)
where

λ± = ½[−ω ± (ω² − 4)^{1/2}]

(159)

are the eigenvalues of the linear transformation (157). The inverse transformation is given by

x = (P − Q)/(λ₊ − λ₋) ,  y = (λ₋P − λ₊Q)/(λ₊ − λ₋) .

(160)
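The diagonalization can be verified numerically: under the linear map (157), P and Q of Eq. (158) are simply multiplied by λ₊ and λ₋, respectively. A short sketch (variable names are ours; the last line also evaluates the half-log of the maximal eigenvalue, which Eq. (155) identifies with Δt):

```python
import math

omega = -3.0                     # |omega| > 2: hyperbolic regime
disc = math.sqrt(omega**2 - 4.0)
lp, lm = (-omega + disc) / 2.0, (-omega - disc) / 2.0   # eigenvalues, Eq. (159)

x, y = 0.7, -0.4                 # arbitrary point
xb, yb = -omega * x - y, x       # linear map, Eq. (157)

P, Q = lp * x - y, lm * x - y    # canonical variables, Eq. (158)
Pb, Qb = lp * xb - yb, lm * xb - yb

assert abs(Pb - lp * P) < 1e-12  # P is stretched by lambda_+
assert abs(Qb - lm * Q) < 1e-12  # Q is contracted by lambda_-

# Half the log of the maximal eigenvalue (cf. Eq. (155)).
dt = 0.5 * math.log(lp)
```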
After the scaling P → √μ P and Q → √μ Q, and with λ ≡ λ₊ = 1/λ₋, we obtain for the transformed map

P̄ = λP − (γ/μ − ω) [λ/(λ − 1/λ)³] (P − Q)³/[1 + (P − Q)²/(λ − 1/λ)²] ,

(161)

Q̄ = λ⁻¹Q − (γ/μ − ω) [λ⁻¹/(λ − 1/λ)³] (P − Q)³/[1 + (P − Q)²/(λ − 1/λ)²] ,

(162)

or its equivalent Taylor expansion about the origin
P̄ = λP − (γ/μ − ω) λ Σ_{n≥0} (−1)ⁿ (P − Q)^{2n+3}/(λ − 1/λ)^{2n+3} ,

(163)

Q̄ = λ⁻¹Q − (γ/μ − ω) λ⁻¹ Σ_{n≥0} (−1)ⁿ (P − Q)^{2n+3}/(λ − 1/λ)^{2n+3} .

(164)

Birkhoff [126] introduced a canonical transformation based on the series expansion

P = ξ + Σ_k Σ_{l≤k} p_{kl} ξ^{k−l} η^l ,

(165)

Q = η + Σ_k Σ_{l≤k} q_{kl} ξ^{k−l} η^l ,

(166)
such that the (ξ, η) map is given by

ξ̄ = U(ξη) ξ ,

(167)

η̄ = [U(ξη)]⁻¹ η .

(168)

The function U depends only on the product ξη and has the formal series expansion

U(ξη) = λ[1 + Σ_{k≥1} U_k (ξη)^k] .

(169)

Moser [128] proved the convergence of the series (167)—(169) in a disc surrounding the origin, provided the series in Eqs. (163) and (164) represent analytic functions, which is true in our case. Moreover, it was shown that, whenever the inverse map is also analytic, the region of convergence of the series can be extended in narrow strips along the stable and unstable manifolds, respectively [129]. Furuya and Ozorio de Almeida [130] used the Birkhoff normal form for a precise computation of homoclinic points of the standard map, and our approach proceeds along the same lines for the AL-DNLS map. It is useful to define the auxiliary series

(P − Q)^{2n+3} = Σ_k Σ_{l≤k} (d^{2n+3})_{kl} ξ^{k−l} η^l .

(170)

The recursion relations for the expansion coefficients are then determined by
c j L> I\ jº d !jp " !u j (!1)L(dL>) IJ j!1 I\ IJ> IJ k L !jI\J p (ºI\J) , I\LJ\L L L 1 c 1 I\ j L> j(º\) d ! q " !u (!1)L(dL>) I\ IJ\ j IJ IJ k j j!1 L
(171)
!jI\J q (ºI\J) . (172) I\LJ\L L L
The stable and unstable manifolds of the map M are the images of the η = 0 and ξ = 0 axes under the transformation U. Since the Melnikov function possesses infinitely many simple zeros, the stable and unstable manifolds cross each other in homoclinic points, which we can compute from the images of the two axes under U. This method provides the homoclinic orbit with uniform precision. The unstable manifold, as the projection on the ξ axis, is determined by the odd-power series

P = ξ + Σ_k p_k ξ^k ,  Q = Σ_k q_k ξ^k ,

(173)
for which the coefficients p
and q can be given in closed form I I 1 I\ c j L> (!1)L (dL>) p " !u j I I jI!j k j!1 L c j I 1 , (!1)I\ K !u j k j!1 jI!j
c 1 1 I\ j L> q " !u (!1)L (dL>) I I k j jI!1/j j!1 L c 1 1 j I K !u . (!1)I\ k j jI!1/j j!1
(174)
(175)
We have omitted terms of higher order in λ⁻¹. Inserting Eqs. (174) and (175) into Eq. (173) we obtain
c j I 1 mI . (176) !u j (!1)I\ k j!1 jI!j I If again terms of the order higher than j\ are dropped the series can be summed up yielding P"m#
P"m!
c m m !u . k 1#(m/j) j
(177)
Correspondingly, we obtain
c 1 1 j I !u (!1)I\ mI , k j jI!(1/j) j!1 I c m m #O(j\) . "! !u k j 1#(m/j)
Q"
(178) (179)
Using the inverse transformation of Eq. (160), the unstable manifold is determined by

x = λy [1 + (γ − ωμ)(1 − 1/λ) y²/(λ + (λ² − 1)μy²)] .

(180)
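Under the assumption that Eq. (180) has the reconstructed form x = λy[1 + (γ − ωμ)(1 − 1/λ)y²/(λ + (λ² − 1)μy²)], the local unstable manifold can be tabulated directly; the sketch below checks the degenerate linear limit γ = ωμ noted in the text (treat the cubic correction term as indicative only, since its form is our reconstruction):

```python
import math

def unstable_manifold_x(y, omega, gamma, mu):
    """x as a function of y on the local unstable manifold (reconstructed Eq. (180))."""
    lam = (-omega + math.sqrt(omega**2 - 4.0)) / 2.0
    corr = (gamma - omega * mu) * (1.0 - 1.0 / lam) * y**2 \
        / (lam + (lam**2 - 1.0) * mu * y**2)
    return lam * y * (1.0 + corr)

omega, mu = -2.5, 0.1
lam = 2.0   # maximal eigenvalue for omega = -2.5

# For gamma = omega*mu the map degenerates to a linear one: the manifold is x = lam*y.
assert abs(unstable_manifold_x(0.3, omega, omega * mu, mu) - lam * 0.3) < 1e-12

# Close to the origin the cubic correction is negligible for any gamma.
assert abs(unstable_manifold_x(1e-6, omega, 1.0, mu) - lam * 1e-6) < 1e-12
```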
Apparently there is no intersection for γ = ωμ, for which the map degenerates to a linear one. Since the map orbits obey the symmetry x ↔ y, the stable manifold is obtained from Eq. (180) by exchanging x and y.

3.6. Homoclinic, heteroclinic orbits and excitations of localized solutions

We have seen in Section 3.2 that in the map plane the origin (x_n, y_n) ≡ (φ_n, φ_{n+1}) = (0, 0) forms a hyperbolic fixed point p as long as |ω| > 2, which possesses invariant stable and unstable manifolds. Points belonging to the stable manifold W^s(p) approach the fixed point p under map iteration Mⁿ for n → ∞; likewise, points on the unstable manifold W^u(p) reach the fixed point p for n → −∞. Thus, going along the invariant manifolds of the hyperbolic fixed point, localized
stationary solutions could be created. However, searching for soliton-like solutions, one has to be aware that the DNLS system is nonintegrable, a fact which normally prevents it from having soliton-like solutions, since these are associated with an integrable system. As already mentioned, the integrable Ablowitz—Ladik (AL) equation possesses soliton solutions which are the discrete versions of the solitons of the (integrable) continuum nonlinear Schrödinger equation [16]. These discrete AL solitons manifest themselves in the integrable map as a perfect separatrix with coinciding stable and unstable manifolds. Since the DNLS system is nonintegrable (see Section 3.4), we know that the separatrix is destroyed in the sense that the stable and unstable manifolds no longer coincide but rather intersect each other transversally in homoclinic points, creating complicated chaotic dynamics and eventually developing Smale horseshoes. The relation between homoclinic and heteroclinic orbits of nonintegrable maps and localized solutions of the underlying lattice system generating the map has been known since the pioneering work of Aubry and coworkers [41,42]. Aubry and Le Daeron [42] studied the Frenkel—Kontorova model consisting of an infinite sequence of equal springs and masses under the action of a periodic potential. They interpreted the Frenkel—Kontorova model as a generating variational principle for the orbits of the standard map and showed that homoclinic (heteroclinic) intersections, also called discommensurations, are attributed to localized states pinned by the lattice. (We refer to the next section for details.) Coste and Peyrard [107] as well as Wan and Soukoulis [82] dealt with the linkage between the homoclinic orbit of the DNLS map and localized states of the lattice. Coste and Peyrard drew the conclusion that “perfect localization in a DNLS system is impossible” because of the residual stochasticity near the hyperbolic points.
Instead of exhibiting a “one-peak solution” as in an integrable system, where a solution can approach a hyperbolic point arbitrarily closely, they claim that in a nonintegrable system multipeak solutions are expected to appear. Wan and Soukoulis came to the same conclusion regarding the DNLS system in the context of Holstein’s polaron model. They interpreted the homoclinic chaos, with its stochastic behavior of the map orbits in the vicinity of the hyperbolic point, as a splitting of the large-polaron solution (represented by a soliton-like orbit) into an array of randomly distributed small polarons pinned by the discrete lattice [82]. In contrast to the propositions in [82,107,108], there exist stable stationary localized solutions of the DNLS related to homoclinic and heteroclinic orbits of the related map. This is the case even though there exist neighboring map orbits which are strongly chaotic. The reason is that the localized states rely on the structural stability of orbits homoclinic or heteroclinic to unstable hyperbolic fixed points, such that their amplitudes are represented by a homoclinic (heteroclinic) orbit in the corresponding map plane of M. A homoclinic point (φ_{n+1}, φ_n) ≡ q is defined by q ∈ W^s ∩ W^u and q ≠ p. Since q belongs both to the stable and the unstable manifold of p, it follows that Mⁿ(q) → p as n → ±∞. In order to depict the homoclinic tangle of the global invariant manifolds, we approximate the stable and unstable manifolds in the vicinity of the hyperbolic fixed point by the linear subspaces of the linearized map (straight lines in the direction of the eigenvectors of the two eigenvalues with modulus away from the unit circle). Iterating a few thousand initial points on them several times, we finally obtain the homoclinic tangle of the hyperbolic fixed point. In Fig. 7a we show the homoclinic tangle for the parameter choice γ = 1, ω = 0.883 and V = 0.2. One clearly recognizes the homoclinic points.
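The numerical procedure just described (seeding initial points on the linear subspaces of the hyperbolic fixed point and iterating the map) can be sketched as follows; the map form and the parameter values here are the reconstructions used earlier in this section, so this is indicative rather than a reproduction of Fig. 7a:

```python
import math

def fwd(x, y, omega, gamma, mu):
    """Forward iteration of the reconstructed AL-DNLS map."""
    g = omega * (gamma / omega - mu) * x**3 / (1.0 + mu * x**2)
    return -omega * x - g - y, x

omega, gamma, mu = -2.5, 1.0, 0.0
lam = (-omega + math.sqrt(omega**2 - 4.0)) / 2.0

# Seed many points on the unstable eigendirection (slope 1/lam in the (x, y) plane)
# and iterate forward; the images trace out the unstable manifold W^u.
eps = 1e-4
unstable = [(eps * t, eps * t / lam) for t in [i / 1000.0 for i in range(1, 1001)]]
for _ in range(8):
    unstable = [fwd(x, y, omega, gamma, mu) for (x, y) in unstable]

# Consistency check: a point on the stable eigendirection (slope lam),
# iterated forward, contracts toward the origin.
xs, ys = eps / lam, eps
for _ in range(10):
    xs, ys = fwd(xs, ys, omega, gamma, mu)
assert math.hypot(xs, ys) < eps
```

Plotting the `unstable` point cloud together with its mirror image under x ↔ y (the stable manifold) exposes the transversal intersections, i.e. the homoclinic points.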
Points below the symmetry line S are characterized by φ_{n+1} < φ_n for n > 0, i.e. they belong to W^s. Each homoclinic point is mapped into another one and after only a few map iterations rapidly approaches the map origin, where φ_n → 0.
Fig. 7. Soliton-like solutions. (a) The map plane depicting the homoclinic tangle of the hyperbolic fixed point at the origin. The amplitudes resulting from the dynamical study are shown as diamonds. They appear at the transversal intersections of the invariant manifolds. (b) Profile |ψ_n(t)|² of the stationary bright soliton-like solution of the DNLS. Parameters are: γ = 1, μ = 0 and V = 0.2.
Correspondingly, the homoclinic points above the line S, for which φ_n < φ_{n+1} for n > 0, will be mapped into the map origin in the course of the inverse map, i.e. they belong to W^u, reflecting the translational invariance of the discrete lattice under the operation n → −n.
Let us now use our knowledge about the homoclinic (heteroclinic) orbits to initiate (stationary) localized solutions of the time-dependent DNLS dynamics. In order to invoke the homoclinic map orbit as an initial condition for the dynamics, a sufficiently accurate location of the orbit members (the homoclinic intersection points) is demanded. Obviously, the corresponding amplitudes could be read off from the map plane as the coordinates of the homoclinic intersections. However, this may not be accurate enough to ensure that the spatial behavior of the amplitudes of the corresponding dynamical trajectory ψ_n(t) = φ_n exp(−iωt) follows the homoclinic orbit closely enough, thus representing a nonlinear eigenstate. Therefore we use the normal form of Eqs. (176) and (178) to compute the homoclinic orbit “exactly”. For a study of the dynamics of soliton-like solutions of the DNLS given in Eq. (44) we use a lattice of chain length N = 201. We implement the analytically computed homoclinic orbit φ_n, with n ∈ [−N/2, N/2], as initial conditions Re ψ_n(t = 0) = φ_n and Im ψ_n(t = 0) = 0. The result for the soliton-like solution is illustrated in Fig. 7b. Using a fast-Fourier-transform routine we determined the oscillation frequency to be ω = 0.879 ± 0.004, which is in fairly good agreement with the value of the frequency put into the map, i.e. ω = 0.883. We inserted the (dynamical) amplitudes |ψ_n(t)|² as diamonds on the map plane in Fig. 7a to show that they coincide with the homoclinic orbit. The stationary localized soliton-like solution has an amplitude pattern (⋯ − − − ⋯) with three dominant sites, where the dots stand for vanishingly small amplitudes. This localized mode is centered at a single site. Aceves et al. also showed that these excitation patterns of the DNLS result in stable steady-state solutions [77—79]. In the same manner we proceed with the kink-like (dark-soliton) solution for values of ω inside the linear band. To derive the heteroclinic orbit with high precision we apply a variational approach developed recently by Tabacman [127] to compute homoclinic and heteroclinic orbits of twist maps. The advantage of this method is that it requires only knowledge of the generating function of the map and a local approximation of the stable and unstable manifolds near the fixed points. The approximate manifolds can be obtained with the help of the linear subspaces of the tangent map taken at the fixed point at the origin, as described above. Equipped with these approximate invariant manifolds, it remains to find the critical point of a certain function W_N which is related to the generating function of the map (see Proposition 7 in [127]). The map can be rewritten in terms of the variables q_n = φ_n and p_n = q_n − q_{n−1}. The corresponding map orbits can be derived from the generating function

S(q_n, q_{n+1}) = (q_n − q_{n+1})² + (1/2μ)(γ/μ − ω) ln(1 + μq_n²) − (γ/2μ + 1)q_n² ,

with the relations p_n = −∂S(q_n, q_{n+1})/∂q_n and p_{n+1} = ∂S(q_n, q_{n+1})/∂q_{n+1}.
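The dynamical excitation test described above can be sketched in a few lines. Since the precise form of the GDNLS in Eq. (44) is not reproduced here, the sketch assumes the standard DNLS i dψ_n/dt = −V(ψ_{n+1} + ψ_{n−1}) − γ|ψ_n|²ψ_n with the parameters of Fig. 7 and a sech-shaped stand-in for the normal-form orbit (both are our assumptions); it integrates with a fourth-order Runge—Kutta step and checks that the norm is conserved and that the excitation stays localized.

```python
import math

N = 201
V, gamma = 0.2, 1.0            # coupling and nonlinearity (parameters of Fig. 7)

def rhs(psi):
    """i dpsi/dt = -V(psi_{n+1}+psi_{n-1}) - gamma|psi_n|^2 psi_n (assumed DNLS form)."""
    out = [0j] * N
    for n in range(N):
        left = psi[n - 1] if n > 0 else 0j
        right = psi[n + 1] if n < N - 1 else 0j
        out[n] = 1j * (V * (left + right) + gamma * abs(psi[n]) ** 2 * psi[n])
    return out

# Stand-in for the normal-form homoclinic orbit: a sech-shaped real profile.
c = N // 2
psi = [complex(0.8 / math.cosh(0.9 * (n - c)), 0.0) for n in range(N)]

norm0 = sum(abs(p) ** 2 for p in psi)
dt = 0.01
for _ in range(500):           # RK4 time stepping up to t = 5
    k1 = rhs(psi)
    k2 = rhs([p + 0.5 * dt * k for p, k in zip(psi, k1)])
    k3 = rhs([p + 0.5 * dt * k for p, k in zip(psi, k2)])
    k4 = rhs([p + dt * k for p, k in zip(psi, k3)])
    psi = [p + dt / 6.0 * (a + 2 * b + 2 * c_ + d)
           for p, a, b, c_, d in zip(psi, k1, k2, k3, k4)]

norm1 = sum(abs(p) ** 2 for p in psi)
assert abs(norm1 - norm0) / norm0 < 1e-6   # the DNLS flow conserves the norm
assert abs(psi[c]) > 0.5                    # the excitation stays localized at the center
```

The oscillation frequency of the central site can then be extracted from the time series of ψ_c(t) with an FFT, as done in the text.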
One can define an action function W_N whose critical points deliver the orbit heteroclinic to the fixed points at (q̂₋ = x̂₋, p̂₋ = 0) and (q̂₊ = x̂₊, p̂₊ = 0). The action function W_N is then given by

W_N(q₁, …, q_N) = Φᵘ(q₁) + Σ_{n=1}^{N−1} S(q_n, q_{n+1}) − Φˢ(q_N) ,

where the functions Φᵘ(q₁) and Φˢ(q_N) describe locally the unstable and stable manifolds Wᵘ(q̂₋, p̂₋) and Wˢ(q̂₊, p̂₊), respectively. These functions Φ can be computed using the linear subspaces at the fixed points. To compute the critical values of the function W_N we used a Newton method. It is sufficient to obtain one single member of the heteroclinic orbit and then to use the map to generate the others as iterates. When iterating along the stable manifold we soon approach
(typically after 5—8 iterations) the close vicinity of the hyperbolic fixed points, where the orbit stays. Alternatively, one can also use normal-form computations in order to generate the heteroclinic orbit. However, for heteroclinic orbits more than one normal form has to be evaluated. Fig. 8a shows the map plane for the kink-like solution. Again we have inserted the kink amplitudes |ψ_n(t)|² along the lattice as diamonds in the map plane shown in Fig. 8a. In this way excitation of the staggered solitons is also possible. Note that staggered localized DNLS modes have been observed experimentally in a real electrical network [23]. The map for the stationary solutions also enables one to predict the existence of another kind of stationary localized solution, with an amplitude pattern of the form (⋯ − − − − ⋯), i.e. it is supported by a homoclinic orbit having the turnstile as one homoclinic point located on the symmetry line y = x, i.e. φ_{n+1} = φ_n. This localized mode is centered between two lattice sites. Its energy is higher than that of the above-mentioned localized mode centered at one single lattice site (see also the next section). We close this section with the remark that the complete dynamical DNLS system is studied in Ref. [132]. It was found that the odd-parity mode is in fact a stable localized standing excitation of the DNLS, sustaining symmetry-breaking perturbations of its mode pattern. Recently Aceves et al. also showed that the preferred stable localized DNLS states are those having exponentially decaying amplitudes around the maximal amplitude at a single site, i.e. the odd-parity mode. On the other hand, the even-parity mode exhibits a dynamical instability and collapses to the odd-parity mode under the impact of symmetry-breaking perturbations. These results are in agreement with the findings in [104].

3.7. The soliton pinning energy

As a consequence of the nonintegrability of the map M, and of the resulting transversal intersection of the stable and unstable manifolds, the soliton-like solutions are pinned, i.e. they cannot be translated over the lattice from one point to an adjacent one without overcoming an energetic barrier [118]. The pinning energy can be computed with the help of the normal forms, as done in [130] for the solitons of the standard map. We use here another approach, based on the findings of the Melnikov function. Kivshar and Campbell [133] studied the pinning energy (Peierls—Nabarro potential barrier) for the localized modes of the DNLS system, i.e. for γ ≠ 0 and μ = 0. There exist two homoclinic orbits whose points alternate along the invariant manifolds. Each of the homoclinic orbits has one of its points on the symmetry lines S₁ and S₂, respectively. These points rapidly approach the map-plane origin under the mapping, where they stay most of the time. The homoclinic orbit crossing S₁, which we denote by {Φ_even}, represents an excitation pattern (⋯ − − − − ⋯) on the lattice chain. Such a stationary excitation pattern was called the even-parity mode in [98] and sometimes also the inter-site centered local mode [104]. The other homoclinic orbit, {Φ_odd}, possesses three large amplitudes (φ₋₁, φ₀, φ₁) and has the mode pattern (⋯ − − − ⋯), which was called the odd-parity mode [97] or the on-site centered local mode [104]. The point (φ₀, φ₁) is located on S₂. For positive (negative) γ + 2μ the unstaggered odd-parity (staggered even-parity) mode has lower action (energy) than the unstaggered even-parity (staggered odd-parity) mode. To see this for γ + 2μ > 0, one starts iterating the map M at the turnstile x₀ = y₀ (a member of the unstaggered even-parity mode), goes along the stable manifold in the range y > x till the next intersection point is met, and then follows the unstable manifold back to the turnstile. In this
Fig. 8. Soliton-like solutions. (a) The map plane illustrating the heteroclinic connection of the hyperbolic fixed points at |φ_{n+1}| = |φ_n| = [−(ω + 2V)/γ]^{1/2}. The diamonds represent the squared modulus of the kink amplitudes taken from the dynamics of Fig. 8b. (b) Profile |ψ_n(t)|² of the stationary kink-like solution (dark soliton) of the DNLS. Parameters as in figure (a).
way a closed curve has been described and the area enclosed by it gives the action. We then apply the same procedure for the next pair of homoclinic points. Going down the stable manifold from the largest point of the unstaggered odd-parity mode (x , y ) one hits the next homoclinic point
and then switches back to the unstable manifold. The closed curve obtained in this way, and thus the action (energy), lies completely below the first one. Thus only the odd-parity map orbit corresponds to a physically relevant discommensuration of lowest energy. In the same manner one can show that for γ < 0 the staggered even-parity mode has lower action (energy) than the staggered odd-parity one. Following Aubry [118] we define the pinning energy as

E_pin = E_even − E_odd .
(181)
The energy functional is given by

E({Φ}) = Σ_n [ ½(φ_n − φ_{n+1})² + (1/2μ)(γ/μ − ω) ln(1 + μφ_n²) − (γ/2μ + 1)φ_n² ] .

(182)
We can compute E_pin “exactly” by using the homoclinic orbits obtained from the normal forms. Moreover, we can exploit the symmetry properties of the map M. The Melnikov function provides us with the location of the intersections of the stable and unstable manifolds. Regarding the DNLS term proportional to γ as a small perturbation of the AL map, we can take one orbit point of {Φ_even} as the intersection of the AL separatrix with S₁, namely

φ₁ = φ₋₁ = [−(ω + 2)/μ]^{1/2} .
(183)
To express the symmetry properties of the even-parity mode we take the lattice indices n ∈ Z∖{0}. Similarly, we obtain for the point of {Φ_odd} on S₂

φ₀ = (1/√μ)(ω²/4 − 1)^{1/2} .
(184)
The complete homoclinic orbits can be generated with the help of the relations

φ_n = (1/√μ)(ω²/4 − 1)^{1/2} sech(2nΔt) ,  n = 0, ±1, … ,

(185)

φ_n = (1/√μ)(ω²/4 − 1)^{1/2} sech((2|n| − 1)Δt) ,  n = ±1, ±2, … .

(186)

Using Δt and φ₁ determined by Eqs. (155) and (183), respectively, we obtain

φ_n = (1/√μ)(ω² − 4)^{1/2} (λⁿ + λ⁻ⁿ)⁻¹ ,  n = 0, ±1, … ,

(187)

φ_n = (1/√μ)(ω² − 4)^{1/2} (λ^{|n|−1/2} + λ^{−|n|+1/2})⁻¹ ,  |n| ≥ 1 .

(188)
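With the closed forms as reconstructed above, both orbit families can be tabulated and compared; a short sketch (parameter values are ours):

```python
import math

omega, mu = -2.5, 0.1
lam = (-omega + math.sqrt(omega**2 - 4.0)) / 2.0   # maximal eigenvalue (lam = 2 here)
amp = math.sqrt(omega**2 - 4.0) / math.sqrt(mu)

# On-site (odd-parity) orbit, Eq. (187).
phi_odd = {n: amp / (lam**n + lam**(-n)) for n in range(-5, 6)}

# Inter-site (even-parity) orbit, Eq. (188), indexed over Z without 0.
phi_even = {n: amp / (lam**(abs(n) - 0.5) + lam**(-(abs(n) - 0.5)))
            for n in range(-5, 6) if n != 0}

assert phi_odd[0] == max(phi_odd.values())      # single central peak
assert abs(phi_even[1] - phi_even[-1]) < 1e-12  # two equal central peaks
assert phi_odd[0] > phi_even[1]                 # the on-site peak is the larger one
# Exponential decay with rate ln(lam) = 2*Delta_t away from the center.
assert phi_odd[5] / phi_odd[4] < 1.05 / lam
```

Feeding these profiles into the energy functional (182) gives the two soliton energies and hence the pinning barrier E_pin numerically.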
Taking the respective excitation patterns into account, we derive for the soliton energies
1 1 c 1 , 1 E " [u!4] ! #2 ! k jL>#j\L> jL#j\L k jL#j\L L , 1 c 1 # !u ln 1#[u!4] # O(j\,>) , k k jL#j\L L
(189)
and
1 1 E "! (u#2 (2!u !1 k j#j\ #
c 1 c !u ln(1#k( ))! #2 ( ) k k k
1 ,> 1 1 c 1 # [u!4] ! ! #2 k jL>#j\L> jL#j\L k jL#j\L L ,> 1 c 1 ln 1#[u!4] # !u # O(j\,>) . (190) k k jL#j\L L
A plot of the pinning energy as a function of ω reveals a remarkable decrease of the pinning energy with increasing γ, which becomes clear on recalling that the computation of the pinning energy relied on the homoclinic orbit identified with the location of the zeros of the Melnikov function on the unperturbed AL separatrix loop. This computation is the result of a perturbational calculation to first order in εγ. Moreover, the first correction to Eq. (184) is given by 1
" c
4k #(u!4)!2k , c
(191)
demonstrating how the maximal amplitude of the odd-parity mode is shifted upwards on the AL separatrix loop with γ, diminishing the difference between the peaks of the odd-parity and even-parity modes. Finally, the pinning energy decreases with increasing integrability parameter μ. We note that we can design an (unstaggered) odd-parity mode of desired width by varying ω. If d denotes a given decrease of the center amplitude, then the lattice point Ñ ≠ 0 where φ_Ñ ≤ φ₀/d holds obeys the relation

Ñ ≥ [ cosh⁻¹(d) / ln( ((ω² − 4)^{1/2} − ω)/2 ) ] ,
(192)
where [A] denotes the integer part of A. Similar expressions can be derived for the staggered odd-parity mode as well as for the even-parity modes.

3.8. Summary

We have studied in detail the stationary localized solutions of the GDNLS equation. First, we have described the general properties and features of the GDNLS and shown how this equation
can be turned into a map by using a stationary solution ansatz. The bifurcational behavior of the fixed points of this map has been set out, followed by a discussion of how the homoclinic and heteroclinic connections between the unstable fixed points can be related to the bright and dark solitons on the lattice. In Section 3.4 the DNLS term was assumed to be a small nonintegrable perturbation of the integrable AL equation, which allows one to calculate the Melnikov function explicitly. The latter describes the splitting of the separatrix related to the hyperbolic point at the map origin and leads to the result that the magnitude of the separatrix splitting depends exclusively on the ratio γ/(ωμ). In the investigations of [83] this ratio was shown to determine the parameter region where the behavior of the map is regular. Furthermore, the Melnikov function shows that the position of the homoclinic intersections along the unperturbed homoclinic orbit depends solely on ω and not directly on the nonlinearity parameters γ and μ. In Section 3.5 the Birkhoff normal forms were applied to calculate the homoclinic orbits related to the hyperbolic point at the map origin. The derived expression was shown to approximate the manifolds with high accuracy. In Section 3.6 we discussed how the homoclinic orbit of the related map supports localized solutions of the GDNLS. This means that the irregular behavior of the map, through the existence of homoclinic intersections, actually ensures the existence of the localized solutions of the GDNLS. We also pointed out, in this way, that the map allows us to design localized excitation patterns of the GDNLS. Designing standing localized solutions of the GDNLS is only possible with the help of the stationary analysis, which becomes clear from the fact that the broadness of a localized solution and its spatial exponential decay rate depend solely on the oscillation frequency ω.
The latter is accessible only in the stationary equation, whereas the two nonlinearity parameters γ and μ appearing in the time-dependent GDNLS do not play a role for the purpose of soliton design. Finally, in Section 3.7 we applied the results of the Melnikov computations to calculate the pinning energy of the bright solitons on the lattice and showed that it can be tuned by varying the integrability and nonintegrability parameters, respectively. It is interesting to compare the current findings with the result of Ref. [131] that continuous wave equations of the type 䊐u = F(u) possess time-periodic and spatially localized solutions only for a small restricted class of functions F(u). An example exhibiting time-periodic localized solutions is the completely integrable case F(u) = ± sin u. In order to obtain soliton-like solutions of the field equations, the authors of [131] used an asymptotic expansion method where the formal solution is represented as an asymptotic expansion in powers of the leading nonlinear term. A base equation containing the essential nonlinearity is derived, and the remaining hierarchy of equations is solved by perturbation theory. The self-localized solution of the base equation is supported by a separatrix loop belonging to a hyperbolic point (the equilibrium state u = 0) in the phase plane. It was shown that the dimension of the stable and unstable manifolds W of the hyperbolic point is, in general, finite. However, for localized solutions of the field system the existence of a separatrix loop with an infinite number of transversal intersections of W is demanded. Hence the infinite system of intersection conditions is overdetermined, which prevents the existence of time-periodic and spatially localized field solutions.
Our approach of utilizing the separatrix intersections of a planar map to obtain soliton-like solutions of an infinite lattice system is successful, since the one-dimensional stable and unstable manifolds in the two-dimensional map plane inevitably intersect transversally as a result of the nonintegrability of the map. In this sense the spatial nonintegrability of the stationary GDNLS, with the resulting homoclinic and
heteroclinic tangles has a constructive impact on the excitation of standing solitons on the GDNLS lattice.

4. Effects of nonlinearity in Kronig—Penney models

4.1. Motivation

The Kronig—Penney model [134], introduced more than half a century ago, has remained one of the most popular “theoretical laboratories” in the study of wave propagation in various systems. It has been successfully applied in band-structure and electron-dynamics studies in ordered solids, localization phenomena in disordered solids and liquids [135], microelectronic devices [136], physical properties of layered superconductors [137] and quark tunneling in one-dimensional nuclear models [138]. We review the work on an extension of the well-known (linear) periodic Kronig—Penney model by considering the case where the periodic potential consists of a series of periodically (or, more generally, quasiperiodically) spaced delta functions modulated by the square of the amplitude of the wave function, as introduced by Grabowski and Hawrylak in [139,140] and studied also by Coste and Peyrard [107] and Hennig et al. [141,142]. The motivation for studying this model is twofold. On the one hand, it represents a more realistic, continuous, many-band extension of the “one-band” discrete tight-binding model used to study propagation in a periodic nonlinear medium [81,82]. The medium can be thought of as a superlattice, where the width of a layer is much greater than a typical atomic spacing and at the same time much smaller than the layer interspacing. Inside a layer, strong electron-phonon interactions induce polaronic effects which give rise to the nonlinear potential. On the other hand, the model can be used to describe the realistic propagation of electromagnetic (EM) waves through a dielectric superlattice constituted of nonlinear Kerr-type material, where the thickness of the nonlinear layer is much smaller than the layer interspacing.
Under the assumption of a scalar-wave approximation, an increase in the amplitude of the incident waves has the effect of switching the wave from a “passing” to a “nonpassing” regime; in the latter case the wave is completely reflected. Transmission of information in such a periodically modulated nonlinear transmission line is then possible by using amplitude modulation [141]. In what follows we will use the transfer-matrix approach to obtain a Poincaré map that can be cast into a nonlinear difference-equation form. We turn the (complex) difference equation into a two-dimensional real map by making use of the conservation of probability, and show numerical results for the transmitted intensity for the “fixed output” problem. We investigate the transmittance for the “fixed input” problem and show how the combined effects of linear instability and homoclinic instability lead to a closing of the forbidden gaps.

4.2. The nonlinear Kronig—Penney model

We consider the stationary problem relating to the transport of a quasiparticle in a one-dimensional nonlinear chain, modeled by the stationary nonlinear Schrödinger equation:

Eψ(x) = −d²ψ(x)/dx² + λ Σ_{n=1}^{N} δ(x − x_n) |ψ(x)|² ψ(x) .
(193)
Equation (193) defines the nonlinear extension of the Kronig—Penney model (or nonlinear Dirac comb), where E is the energy of possible stationary states, x denotes space and λ is a parameter. The periodic potential is taken to be a series of equidistant delta functions that are additionally modulated by the square of the wave function ψ(x). The locations x_n of the δ-functions are taken on a periodic lattice with unit spacing, i.e. x_n ≡ n. We will focus on two distinct problems: (a) electromagnetic wave propagation in dielectric nonlinear superlattices and (b) transport of quantum mechanical particles, such as ballistic electrons, in nonlinear superlattices. Even though these two problems are quite distinct, they can be treated simultaneously through very similar equations that lead to qualitatively similar results. In particular, it can be shown that for the Kronig—Penney model the two equations are related by a scaling transformation. For details we refer to Section 4.3. Hence, Eq. (193), with some small changes in notation, also describes the propagation of an EM wave in a layered superlattice where the nonlinear slabs (described by the δ-function terms) are much smaller than their spacing [141]. When a wave with momentum k is sent from the left towards the nonlinear chain, it will be scattered into a reflected and a transmitted part. To the left of the first δ-function potential encountered by the plane wave we have the incident and reflected waves

ψ(x) = R₀ exp(ikx) + R₁ exp(−ikx) ,

(194)

and to the right end of the chain the transmitted wave

ψ(x) = T exp(ikx) .

(195)
We denote by R₀, R₁ the amplitudes of the injected and reflected waves at the beginning of the chain, and T is the transmitted amplitude at the end of the chain. To study the transmission properties of the nonlinear Dirac-comb segment we derive a Poincaré map based on the corresponding (nonlinear) transfer matrix, which relates the amplitudes of the wave function on adjacent sides of a single δ-potential. Our approach is similar to the one used for the corresponding linear Kronig—Penney problem [143,144]. Since the nonlinear terms enter only at the locations of the δ-functions, we can write the general solution of Eq. (193) in the interval [x_n, x_{n+1}] as

ψ_n(x) = A_n e^{ik(x − x_n)} + B_n e^{−ik(x − x_n)} ,  (196)

with x ∈ [x_n, x_{n+1}] and A_n, B_n the amplitudes of the forward and backward propagating waves in the segment [x_n, x_{n+1}], respectively. Employing the boundary conditions at x = x_{n+1}, i.e. continuity of the wave function and discontinuity of its derivative, we obtain a nonlinear transfer matrix connecting the amplitudes of the transmitted and reflected parts through the single δ-function potential at x = x_{n+1}:

A_{n+1} = (1 − i λ̃_n/2k) e^{ik} A_n − i (λ̃_n/2k) e^{−ik} B_n ,
B_{n+1} = i (λ̃_n/2k) e^{ik} A_n + (1 + i λ̃_n/2k) e^{−ik} B_n .  (197)

The nonlinear nature of our model enters through the modified potential strength λ̃_n:

λ̃_n = λ |A_n e^{ik} + B_n e^{−ik}|² = λ |ψ(x_{n+1})|² .  (198)

Unlike in the linear scattering problem, since the amplitude of the wave function enters each single-site transfer matrix, the total transfer matrix for a chain with N > 1 sites cannot be represented as a closed product of unimodular single-site transfer matrices [36,143—145]. We therefore
need to iterate the Poincare´ map (A , B )P(A , B ) of Eq. (197) repeatedly for n"1,2, N. L L L> L> Computations are facilitated if we first transform the matrix relation of Eq. (197) to a difference equation [143]. We use the transformation
(φ_{n+1}, φ_n)^T = [ e^{ik}   e^{−ik} ;  1   1 ] (A_n, B_n)^T ,   (199)
to go from the “bond” variables (A_n, B_n) to the “node” variables (φ_n, φ_{n+1}). The combined use of Eqs. (197)–(199) leads to a second-order nonlinear difference equation for the node variables φ_n:
φ_{n+1} + φ_{n−1} = [2 cos(k) + λ (sin(k)/k) |φ_n|²] φ_n .   (200)
Eq. (200) is formally identical to the stationary discrete nonlinear Schrödinger equation (DNLS) with on-site energy E(k) = 2 cos(k), unit transfer matrix element and nonlinearity parameter λ sin(k)/k [1]. In fact, there exists a connection between difference equations (tight-binding models) and differential Schrödinger equations [146,147]. For example, Bellissard et al. [143] showed that the Schrödinger equation for a periodic array of δ-function potentials whose strengths can be incommensurate with the lattice of δ-functions is equivalent to the Aubry model, that is, a difference equation with a quasiperiodic potential [40]. We can utilize the equivalence of Eq. (200) with the DNLS equation of Section 2.1 to reduce the dynamics again to a two-dimensional (real-valued) map, proceeding along the same lines as in Section 2.4, to obtain a map (x_n, z_n) → (x_{n−1}, z_{n−1}), where we study from the outset the inverse map:
x_{n−1} = [2 cos(k) + λ (sin(k)/2k)(W_n − z_n)] (W_n − z_n) − x_n ,   (201)

z_{n−1} = (1/2)(x_n² − x_{n−1}²)/(W_n − z_n) − z_n ,   (202)

with W_n = √(x_n² + z_n² + 4J²). In Fig. 9 we show a number of orbits of the map. The elliptic fixed points of the period-4 island chain are surrounded by (regular) quasiperiodic orbits, whereas in the vicinity of the separatrix of the hyperbolic fixed points a thin chaotic layer develops. The evolution of the wave amplitude r_n corresponding to a chaotic orbit of Fig. 9 is illustrated in Fig. 10. In Fig. 11 we illustrate the transmission behavior of a nonlinear chain with 500 δ-potentials in the k–|T|² plane. This representation is similar to that used by Delyon et al. and by Wan and Soukoulis in the study of the corresponding “one-band” tight-binding model [81,82]. For a given output plane wave with intensity |T|² and momentum k, the map of Eqs. (201) and (202) has been iterated. For incoming wave intensities |R₀|² that are of the same order as the transmitted intensity, the nonlinear chain is said to be transmitting (black area in Fig. 11). A divergent solution, on the other hand, results in an absence of transmission (white area in Fig. 11). The curve separating the two regimes in the k–|T|² plane exhibits a rich structure. For momenta k = nπ
Fig. 9. Orbits of the map M for wavevector k = 1.5 and λ = 1.
Fig. 10. Evolution of the wave amplitude along the nonlinear chain corresponding to a chaotic but bounded orbit in Fig. 9.
plateaus appear corresponding to ideal transparency, i.e. |R₀|² = |T|². Furthermore, above critical intensities |T|² several branches are created, indicating bistable or multistable behavior. Such multistability is illustrated in Fig. 12, where for k = 3.1 and λ = 0.1 the intensity of the transmitted wave is plotted as a function of the incident wave.
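The black/white classification used in Fig. 11 can be sketched numerically. The following minimal Python illustration (ours, not the authors' code; the function name and the divergence bound are arbitrary choices) starts from the pure transmitted-plane-wave boundary condition x = 2|T|² cos k, z = 0, J = |T|² sin k and iterates the inverse map of Eqs. (201) and (202), labelling an orbit non-transmitting once it leaves a large bound:

```python
import numpy as np

def transmits(k, T2, lam=1.0, n_sites=500, bound=1e8):
    """Classify an orbit of the inverse map, Eqs. (201)-(202), as bounded
    (transmitting) or diverging (non-transmitting) for output intensity T2."""
    J = T2 * np.sin(k)                 # conserved current of the outgoing wave
    x, z = 2.0 * T2 * np.cos(k), 0.0   # plane-wave values: x = 2 T2 cos k, z = 0
    for _ in range(n_sites):
        W = np.sqrt(x * x + z * z + 4.0 * J * J)
        x_new = (2.0 * np.cos(k)
                 + lam * np.sin(k) / (2.0 * k) * (W - z)) * (W - z) - x
        z_new = 0.5 * (x * x - x_new * x_new) / (W - z) - z
        x, z = x_new, z_new
        if abs(x) > bound or abs(z) > bound:
            return False               # blow-up: gap state, no transmission
    return True
```

Scanning a grid of (k, |T|²) values and colouring each point by the return value reproduces the qualitative structure of Fig. 11; at k = nπ the sin(k)/k factor switches the nonlinearity off, so the chain transmits at any intensity, in line with the plateaus noted above.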
Fig. 11. Illustration of the transmission behavior of a nonlinear chain with N = 200 and λ = 1 in the k–|T|² parameter plane. Transmitting solutions correspond to the dark areas, whereas diverging (nontransmitting) solutions are indicated by the white area.
Fig. 12. Transmitted intensity as a function of the incident intensity, exhibiting the multistability of the nonlinear transmission dynamics. Parameters k = 3.1 and λ = 0.1 are used.
The curve shows oscillations resulting in numerous different output energies for a given input energy. Multistability in wave transmission is a genuine nonlinear effect that is not present in the corresponding linear Kronig–Penney model; it has been reported in numerous other nonlinear wave-transmission studies [81,110,148] (see also Section 2.6).
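The multistable input-output relation of Fig. 12 can be probed with a fixed-output sweep. In this sketch (our illustration under the assumptions of Eq. (200); names are hypothetical) a prescribed transmitted plane wave is iterated backwards through the chain and the required incident intensity |R₀|² is read off by plane-wave decomposition; a non-monotonic |R₀|² versus |T|² curve implies that several outputs coexist for one input:

```python
import numpy as np

def incident_intensity(k, T, lam=0.1, n_sites=50):
    """Fixed-output method for Eq. (200): start from the transmitted plane
    wave phi_n = T e^{ikn} on the right and iterate backwards to the input."""
    hi, lo = T * np.exp(1j * k), T + 0j   # (phi_1, phi_0) of the outgoing wave
    for _ in range(n_sites):
        F = 2.0 * np.cos(k) + lam * (np.sin(k) / k) * abs(lo) ** 2
        hi, lo = lo, F * lo - hi          # phi_{n-1} = F(phi_n) phi_n - phi_{n+1}
    # decompose psi_n = R0 e^{ikn} + R e^{-ikn} on the incident side
    R0 = (hi - lo * np.exp(-1j * k)) / (2j * np.sin(k))
    return abs(R0) ** 2

# sweep the output intensity; non-monotonic |R0|^2 vs |T|^2 signals multistability
outs = [t * t for t in np.linspace(0.05, 2.0, 40)]
ins = [incident_intensity(3.1, np.sqrt(o)) for o in outs]
```

Plotting `outs` against `ins` (transmitted versus incident intensity) for k = 3.1 gives a curve of the type shown in Fig. 12.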
4.3. Propagation in periodic and quasiperiodic nonlinear superlattices

Novel phenomena such as photonic band gaps and possible light localization occur when electromagnetic (EM) waves propagate in dielectric superlattices [110,149–154]. In an approximation where only the scalar nature of the EM wave is taken into account, wave propagation in a periodic or disordered medium resembles the dynamics of an electron in a crystal lattice. As a result, photonic bands and gaps arise in the periodic-lattice case, whereas EM wave localization is theoretically possible in the disordered case. In the latter case, and when the dielectric medium is one-dimensional, Anderson-type optical localization has been predicted [149,150]. In an ordered dielectric superlattice, on the other hand, photonic band gaps have been demonstrated for various realistic configurations [153]. One issue that has not been widely addressed yet is the possibility of using superlattices with nonlinear dielectric properties and, in particular, the nonlinear Kerr effect, to construct optical devices with desired transmission characteristics [113,150]. Aspects of this problem will be considered here. Before addressing the actual properties of EM wave propagation in a periodic dielectric with nonlinear properties, let us first derive the stationary equation of EM waves and compare it with that for the electron. In order to derive the wave equation in a nonconducting medium we start from Maxwell's equations for the medium and arrive at a wave equation for the electric field E in the absence of free charges,

∇²E = εμ ∂²E/∂t² ,   (203)
where ε and μ are the permittivity and permeability of the medium, respectively. We write εμ = n²/c², where n is the refractive index and c is the speed of light. For more general forms that also allow for a nonlinear response of the medium, we have n(ω) = n₀(ω) + λK|E|², where λ is the wavelength and K the Kerr coefficient, whose magnitude ranges over many orders, from helium gas at the low end to bentonite in water at the high end; typical glass-fiber materials lie in between. In units where μ = c = 1, we have

n² = (n₀ + λK|E|²)² ≈ n₀²(1 + 2λK|E|²/n₀) = n₀²(1 + 4πK|E|²/(n₀k)) ,

where the amplitude of the electric field E can be appreciable. For plane-wave propagation in the z-direction we only have to consider the following scalar Helmholtz equation:

d²E(z)/dz² + [n²(ω, E(z)) ω²/c²] E(z) = 0 ,   (204)
and in particular we consider inhomogeneous multilayered media where the nonlinear layer is much thinner than the linear one. For the linear medium, Eq. (204) becomes E''(z) + k₀²E(z) = 0, where k₀² = n₀²ω²/c²; for the nonlinear medium it becomes

d²E(z)/dz² + k₁²[1 + 4πK|E|²/(n₁k₁)] E(z) = 0 ,   (205)
where k "n u/c. In the context of the Kronig—Penney model with wavenumber k"k , we get dE(z) , kE(z)"! # kd(z!z )g(E(z))E(z) , (206) L dz L where 4paK g(E(z))"1!a! "E(z)", and a"n /n . k
(207)
Upon comparison with Eq. (205) we note the following similarities and differences. The role of the stationary energy E of the nonlinear Schrödinger equation is played by the wavenumber k (or frequency ω) in the wave-propagation problem. Furthermore, whereas in the wave case the term k² appears as a factor in the delta-function term, this is not true for the particle case. In fact, it can be shown that Eq. (205) can be transformed into the same form as Eq. (206). In the time-independent case Maxwell's equations for EM waves reduce to the following equation:

−d²φ(x)/dx² + k² Σ_n δ(x − na) g(n) φ(x) = k² φ(x) ,   (208)

where φ(x) is the complex electric field strength, k is the wavenumber of the wave and g(n) ≡ g(n, |φ|²) is the term that modifies the δ-function strengths nonlinearly. The motion of particles, on the other hand, is described by the one-dimensional Schrödinger equation

−d²ψ(x)/dx² + Σ_n δ(x − na) h(n) ψ(x) = E ψ(x) ,   (209)

where ψ(x) is the wave amplitude, E is the energy and h(n) ≡ h(n, |ψ|²) modifies the δ-function strengths. Let us define z = kx and transform Eq. (208) into an equation with respect to the new variable z. Eq. (208) becomes

−d²φ(z)/dz² + k Σ_n δ(z − nka) g(n) φ(z) = φ(z) .   (210)
For particles with positive eigenenergy, E > 0, if we define κ = √E and z = κx, Eq. (209) becomes

−d²ψ(z)/dz² + (1/κ) Σ_n δ(z − nκa) h(n) ψ(z) = ψ(z) .   (211)

It is easy to see that Eqs. (210) and (211) take the same form if we define the new δ-function strengths g̃(n, k) = k g(n) and h̃(n, κ) = h(n)/κ and substitute them into Eqs. (210) and (211), respectively. We note that the Schrödinger representation is more general, since the classical-wave representation corresponds only to the E > 0 solutions of the former. Due to this equivalence between the one-dimensional particle and classical wave problems, in what follows we deal primarily with the Schrödinger representation of our problem. But first we compare the two cases explicitly.
4.4. Wave propagation in periodic nonlinear superlattices

4.4.1. Combined linear and nonlinear lattices

We now consider the case where, in addition to the nonlinear δ-function term, a linear term is also present, so that the potential is of the form Σ_n a δ(x − n)(1 + λ|ψ(x)|²)ψ(x). Straightforward manipulations, similar to the ones used in the standard linear Kronig–Penney problem, lead to the following nonlinear difference equation [141,143,144]:

φ_{j+1} + φ_{j−1} = [2 cos(k) + a(1 + λ|φ_j|²) sin(k)/k] φ_j ,   (212)
where k is the wavenumber associated with the frequency ω(k) = 2 cos k. A local transformation to polar coordinates and a subsequent grouping of pairs of adjacent variables φ_{n−1}, φ_n turns Eq. (212) into the following two-dimensional map M [141]:

x_{n+1} = {2 cos(k) + a[1 + (λ/2)(W_n + z_n)] sin(k)/k}(W_n + z_n) − x_n ,   (213)

z_{n+1} = z_n − (1/2)(x_n² − x_{n+1}²)/(W_n + z_n) ,   (214)

where W_n = √(x_n² + z_n² + 4J²), x_n = 2 r_n r_{n−1} cos(θ_n − θ_{n−1}) and z_n = r_n² − r_{n−1}², with φ_n = r_n exp(iθ_n), and J is the conserved current, i.e. J = r_n r_{n−1} sin(θ_n − θ_{n−1}).
The map M can contain bounded and diverging orbits. The former correspond to transmitting waves, whereas the latter correspond to waves whose amplitude escapes to infinity and hence do not contribute to wave transmission. In order to investigate directly the transmission properties of injected plane waves in the nonlinear dielectric superlattice, we iterate numerically the discrete nonlinear equation (212). For the initial condition [φ₀, φ₁] = [1, exp(ik)] we compute the transmitted wave amplitude T for a superlattice with 10 nonlinear dielectric planes for different nonlinearity parameters λ and wavenumbers k. In Fig. 13 we plot the transmission coefficient t ≡ |T|² as a function of the input wavenumber k for various nonlinearity values λ. There are clear transmission gaps whose width (in k-space) depends on λ. We note that with increasing λ the width of each gap increases while, in addition, more gaps develop in the range between two gaps. This process of gap creation starts in the low-energy range and extends, with further increase of λ, also to the high-energy region. Finally, above critical λ-values neighboring gaps merge, leading to a cancellation of transparency. In Fig. 14 we plot the injected amplitudes for the linear (λ = 0) and nonlinear (λ ≠ 0) cases as a function of the wavenumber k. We note that the typical linear band gaps (dark areas in Fig. 14a) become exceedingly complicated when the nonlinearity becomes nonzero (hatched region in Fig. 14b). A region was considered transmitting whenever the transmission coefficient was different from zero. In particular, we observe the occurrence of new gaps in previously perfectly transmitting regions. Furthermore, the width of the passing regions (white regions) shrinks with increasing injected wave energy.
The diagram was obtained by taking a grid of 500 values of k and 250 values of |R₀|² and iterating Eq. (212) on each individual point of the grid over the N = 10 sites.
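The computation behind Fig. 13 can be sketched as a fixed-output iteration of Eq. (212) (a minimal illustration of ours, not the published code; parameter values are placeholders). A unit transmitted wave (φ₀, φ₁) = (1, e^{ik}) is propagated backwards through the chain, and the incident amplitude R₀ follows from decomposing the field on the input side into R₀e^{ikn} + Re^{−ikn}:

```python
import numpy as np

def transmission(k, a=1.0, lam=0.2, n_sites=200):
    """Transmission coefficient t = |T|^2/|R0|^2 for Eq. (212)."""
    hi, lo = np.exp(1j * k), 1.0 + 0j        # (phi_1, phi_0): unit output wave
    for _ in range(n_sites):
        F = 2.0 * np.cos(k) + a * (1.0 + lam * abs(lo) ** 2) * np.sin(k) / k
        hi, lo = lo, F * lo - hi             # iterate Eq. (212) backwards
    # incident-side decomposition psi_n = R0 e^{ikn} + R e^{-ikn}
    R0 = (hi - lo * np.exp(-1j * k)) / (2j * np.sin(k))
    return 1.0 / abs(R0) ** 2
```

Sampling `transmission(k)` on a fine k-grid for λ = 0, 0.2, 1.0 and 4.0 reproduces the qualitative gap structure of Fig. 13; since the recurrence conserves the discrete current, 0 < t ≤ 1 always holds, and inside a linear gap t is exponentially small in the chain length.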
Fig. 13. Transmission coefficient as a function of the wave number k for λ equal to (a) zero (linear case), (b) 0.2, (c) 1.0 and (d) 4.0. The value of the linear coefficient is a = 1 and the amplitude of the injected wave is taken to be unity.
The “band structure” shown in Fig. 14 can be obtained directly from the tight-binding-like Eq. (212). In the linear case, i.e. for λ = 0, the allowed propagating band states are obtained from the well-known Kronig–Penney condition. The second-order difference equation for the linear Kronig–Penney problem is

φ_{n+1} + φ_{n−1} = [2 cos(k) + a sin(k)/k] φ_n ,   (215)
and wave transmission is possible only when

|cos(k) + (a/2) sin(k)/k| ≤ 1 .   (216)
The critical curves in the a–k parameter plane separating regions of allowed and forbidden energies, i.e. where equality holds in Eq. (216), are readily determined as

k = (2n+1)π ,   (217)

a = −2k cot(k/2) ,   (218)
Fig. 14. Amplitude of the injected wave R₀ as a function of the wave number k for λ equal to (a) zero (linear case) and (b) 1.0. The value of the linear coefficient is a = 1. The superlattice has 4000 nonlinear slabs. Transmitting solutions correspond to the hatched area, whereas diverging solutions are indicated by the blank area.
and k"2np ,
(219)
a"2k tan (k/2) ,
(220)
where n"0, 1, 2,2 is an integer number. These equations define curves that form the boundaries of ranges of forbidden energies (gap states). The tongues get broader with increasing a and therefore the regions of allowed energies (band states) in between the tongues shrink. But even for large a values the tongues of neighboring gap states do not merge ensuring transparency of the linear chain. This situation changes drastically in the corresponding nonlinear Kronig—Penney problem. For an appropriate analysis of the wave propagation a nonlinear potential formulation can be invoked.
To this end we exploit the fact that Eq. (212) can be derived from a variational approach, ∂L/∂φ_n* = 0, where the Lagrangian is given by

L = Σ_n (φ_n* φ_{n+1} + φ_n φ_{n+1}*) − Σ_n [2 cos(k) + a(1 + (λ/2)|φ_n|²) sin(k)/k] |φ_n|² .   (221)

The first, kinetic term on the r.h.s. of Eq. (221) is responsible for the transfer along the chain, whereas the second term represents the local on-site potentials U(|φ_n|²), which are of Landau–Ginzburg type. These quartic on-site potentials obey radial symmetry and are either convex (for E = 2 cos(k) > 0) or concave (for E = 2 cos(k) < 0). For a discussion of the stability of the wave dynamics evolving in such a potential we first locate the extrema of the potential, which follow from ∂U/∂|φ_n|² = 0. In the linear Kronig–Penney problem the quadratic potential is
U_L(|φ_n|²) = [2 cos(k) + a sin(k)/k] |φ_n|² .   (222)

There is always a global minimum at

|φ̃_n| = 0 .   (223)
In the nonlinear case, the quartic potential reads

U_NL(|φ_n|²) = [2 cos(k) + a(1 + (λ/2)|φ_n|²) sin(k)/k] |φ_n|² ,   (224)

and the minimum remains at |φ̃_n| = 0. For k ∈ [nπ, (2n+1)π/2) this minimum is the global extremum of the potential, so that the potential exhibits the same features as in the linear case. In accordance with the condition for linear stability we require, analogously to the treatment in Section 2.3, the following constraint for wave propagation:

|cos(k) + (a/2)(1 + (λ/2)|φ_n|²) sin(k)/k| ≤ 1 .   (225)
The instability tongues in the present nonlinear case are broadened, indicating stronger parameter instability. This is due to the explicit occurrence of the nonlinearity term on the l.h.s. of Eq. (225), so that the boundaries of the instability tongues in the k–a parameter plane also become wave-amplitude dependent:

k = (2n+1)π ,   (226)

a = −2k cot(k/2)/(1 + (λ/2)|φ_n|²) ,   (227)
and k"2np ,
(228)
a"2k tan
k (1#j" ") . 2
(229)
Due to the fact that in these k-intervals the wave instability is caused by exceeding an energetic stability border, as in the linear case, we refer to the corresponding tongues as linear instability tongues. For k ∈ [(2n−1)π/2, nπ) and cos(k) < −a/2 a bifurcation appears and the potential has global maxima at

|φ̃_n|² = −(1/λ)[2k cos(k)/(a sin(k)) + 1] ,   (230)

yielding a set of unstable equilibria. Therefore we have a ring of homoclinic fixed points φ̃ = R̃ exp(iΘ), characterized by the radius |φ̃_n| and different values of the phase Θ ∈ [0, 2π). The dynamics of the wave amplitude in the proximity of the homoclinic structure exhibits extreme sensitivity to the choice of the initial momentum k, thus inducing “spatial” chaos in the wave transmission. The instability of the near-separatrix motion manifests itself in a “transient” phase of irregular homoclinic oscillations of the amplitude dynamics. Once the amplitude exceeds the separatrix threshold it blows up to infinity, giving rise to a gap state. Due to the nature of this escape process we call the corresponding instability tongues homoclinic instability tongues. Above critical |φ|-values the linear instability tongues and the corresponding neighboring homoclinic instability tongues merge at k = (2n+1)π/2, creating broad forbidden gaps and thus leading to impenetrability of the nonlinear chain. This is a purely nonlinear effect that is not present in the linear Kronig–Penney model; in the latter there are always allowed bands, however small they might be. In conclusion, one sees that nonlinearity in Kronig–Penney models, arising from strong many-body effects in electron propagation [155], not only leads to multistability in the transmission of plane waves but also has profound effects on the overall transparency of the chain. While the nonlinear model is not very different from the corresponding linear one in the very small |φ| regime (respectively, small-λ regime), the former exhibits non-transparency for larger |φ|-values: a critical threshold in |φ| exists above which the lattice becomes opaque to all wave transmission. If we compare the “band structures” of the linear and nonlinear Kronig–Penney models for each wavenumber k and corresponding parameter λ-values, we observe an overall reduction of transmittance in the nonlinear case.
Furthermore, we note the appearance of new propagation gaps in the nonlinear case in wavevector regions where previously (in the linear case) there was a transmitting band. This behavior is markedly different from that reported by Chen and Mills [111] and Kahn et al. [96] in a similar system. The difference lies in the fact that while in the present case we investigate plane-wave propagation, in reference [111] soliton-like motion is studied. The presence of nonlinearity in the latter case assists the solitary-wave propagation even at locations inside the gap of the corresponding linear problem. The conclusion reached by comparing these two problems is similar to that obtained in other cases, viz. that in a discrete nonlinear medium wavepackets and modes with bounded support travel more efficiently than plane waves [156]. The presence of nonlinearity in the dielectric superlattice planes alters substantially the transmission properties of the waves. In particular, when the input power |φ|² (nonlinear coefficient λ) is increased, new nontransmitting regions appear adjacent to the regular instability regions. Consequently, for a given wavenumber k, an appropriate change of the input power of the wave (corresponding to a change in λ) can switch the wave from a transmitting to a nontransmitting
region. It is then possible, by a simple amplitude modulation of the incoming wave, to transmit binary information to the other side of the transmission line in the form of zeros (nontransmitting region) and ones (transmitting region).

4.4.2. Propagation of EM waves

We consider propagation of plane waves, in the scalar approximation, in a one-dimensional continuous linear dielectric medium. In the medium we embed periodically small dielectric regions that have a non-negligible third-order nonlinear susceptibility χ⁽³⁾ (Fig. 15). We assume for simplicity that the width of the nonlinear dielectric regions is much smaller than the distance between two adjacent ones. We are thus led to a model for a periodic nonlinear superlattice, and the propagation of a plane wave injected on one side of the structure can be described through a nonlinear Kronig–Penney lattice by the Schrödinger equation

E u(z) = −d²u(z)/dz² + a Σ_{n=1}^{N} δ(z − n)(1 + λ|u(z)|²) u(z) .   (231)

In Eq. (231), u is the complex amplitude of an incoming plane wave with energy E along direction z, a is proportional to the dielectric constant of the dielectric in each superlattice slab, and λ is a nonlinear coefficient that incorporates χ⁽³⁾ and the input wave power [157]. The series of equidistant delta-functions represents the effect of the periodic nonlinear dielectric medium on the wave propagation. We will now show results for the genuine wave case; even though wave propagation is described by a slightly different equation, it gives very similar quantitative results. We now consider Eq. (206) and perform the same analysis as previously in the particle case. In Fig. 16 we summarize the results for transmission in a superlattice for a series of Kerr coefficients K that increase by successive factors of ten up to K = 0.119 (in the μm–V units used here), corresponding to panels (b)–(h) of the figure, respectively. The lattice size is N = 377 (other values of N give similar results), the lattice constant is a = 1 μm, and α = 0.95 in Eq. (207); k is in units of μm⁻¹
Fig. 15. A periodic dielectric superlattice with nonlinear susceptibility due to the Kerr effect. The narrow striped regions denote the dielectric slabs with nonlinear properties. The periodic variation of the nonlinear coefficient λ is approximated by periodic delta functions.
Fig. 16. Transmission spectrum as a function of the field amplitude E(z) for EM waves in alternately linear and nonlinear dielectric media. The white regions are transmission regions and the black areas are opaque regions. (a) is the linear case, and (b)–(h) are nonlinear cases with the various nonlinear strengths given in the text.
and the field amplitude E(z) is in units of V/μm. We note the qualitative similarity of the transmission properties to the particle case. In addition, we also see a new feature in the “weak” nonlinearity cases: there is a region where nonlinearity reduces the widths of the gaps, and in some cases effectively eliminates them, for small k. This means that the transparency of the medium is enhanced in this particular region. We note that for large nonlinearity values the medium becomes opaque for large values of k.

4.5. Transmission in a quasiperiodic lattice-I

We review work on quasiperiodic nonlinear Kronig–Penney models with a sequence of barrier heights, as well as sequential intersite distances, constructed according to the Fibonacci inflation rule. We find that nonlinearity enhances transparency and reduces the localization properties of the corresponding linear quasiperiodic Kronig–Penney model. Since the discovery of the quasi-crystalline phase in AlMn exhibiting quasiperiodic properties, there has been growing interest in models describing the electron and phonon spectra of such quasicrystals [158]. Recent advances in nanodevice manufacturing have increased the interest in quasiperiodic one-dimensional models [159]. Merlin et al. synthesized the first one-dimensional semiconductor superlattice of this kind [160]. Quasiperiodic models provide a bridge between the regular ordered lattices of perfect materials and the random lattices of amorphous systems. Interest in electronic propagation in quasicrystals has led to a thorough investigation of the band structure of some quasiperiodic systems, such as Penrose tilings and Fibonacci lattices [161–163]. Concerning the localization problem for wave propagation within a linear theory, there exist three types of wave functions: localized (normalizable), extended (unnormalizable) and critical.
For (electronic) tight-binding models possessing either two types of hopping matrix elements or on-site energies arranged in a Fibonacci sequence, one obtains a Cantor set of allowed energies, and the wave function is self-similar and therefore intermediate between a localized and an extended state [36,164]. For electron transmission in a one-dimensional quasicrystal described by a (linear) Kronig–Penney model with δ-potential heights arranged according to the Fibonacci sequence, two kinds of electron energies exist: one corresponds to localized states and the other to extended states [165]. Studies of the combined effects of nonlinearity and quasiperiodicity on the optical transmission properties of superlattices consisting of nonlinear materials arranged in a Fibonacci sequence were performed by Kahn et al. [166] and by Gupta and Ray [167]. Special attention was paid to whether the gap solitons which had been found in the corresponding periodic nonlinear lattice exist in the aperiodic case as well. Kahn et al. [166], using a tight-binding-like model, found no evidence for gap solitons; the field envelopes turn out to be rather irregular and fast oscillating. Gupta and Ray utilized a nonlinear transfer-matrix technique under the assumption of slowly varying envelope functions and established the existence of gap solitons for a choice of the linear optical path in each slab of nπ with integer n. But in these parameter ranges the linear transmission is also perfect, since it remains unaffected by the quasiperiodicity. Later, Johansson and Riklund studied the electronic transmission properties of a one-dimensional deterministic aperiodic nonlinear tight-binding lattice based on the DNLS. Quasiperiodicity enters their model through an arrangement of the on-site energies according to the Thue–Morse sequence [168]. For small nonlinearity the authors found soliton-like solutions in a similar way as in the corresponding periodic model.
Contrary to the Fibonacci-type aperiodicity, the Thue–Morse type aperiodicity
supports transmission in the gaps of the linear regime in the form of gap solitons. With respect to the existence of single-site self-trapped states in aperiodic nonlinear discrete systems, Johansson and Riklund concluded from their numerical analysis that there seems always to be self-trapping whenever the spectrum of the corresponding linear lattice is a point spectrum or a singular continuous spectrum. Finally, Hiramoto and Kohmoto extended a linear quasiperiodic tight-binding model by adding a Hubbard-type interaction term, thus giving rise to nonlinearity in the model. They showed that such a term does not affect the Cantor-set character of the spectrum for a Fibonacci model, and hence the wave functions stay critical; however, for the incommensurate Aubry model the singular continuous character of the spectrum is destroyed [163,169]. In the present section we address the issue of wave propagation in a quasiperiodic superlattice model where the medium has, in addition, nonlinear properties arising from the optical Kerr effect [113,141,150]. We study the interplay between nonlinearity and quasiperiodicity and demonstrate that it can be used advantageously for wave propagation in nanodevices. The results obtained are also applicable to problems of electronic propagation in superlattices where the effective nonlinearity stems from many-body interactions [155]. We consider the following general Kronig–Penney equation:

−d²ψ(z)/dz² + Σ_{n=1}^{N} λ_n δ(z − z_n) g(z) ψ(z) = E ψ(z) ,   (232)

where for the linear (L), nonlinear (NL) and general nonlinear (GNL) models we define, respectively,
g(z) = 1 (L) ,   g(z) = |ψ(z)|² (NL) ,   g(z) = a₁ + a₂|ψ(z)|² (GNL) ,

where (a₁, a₂) are real numbers. In Eq. (232), ψ(z) represents an electronic wavefunction or the complex amplitude of an incoming plane wave with energy (or frequency) E along direction z, and λ_n is a coefficient that, in the case of the nonlinear optical medium, incorporates the nonlinear susceptibility χ⁽³⁾ and the input wave power [141,157]. The δ-functions represent small nonlinear dielectric regions that are embedded in a linear dielectric medium; these nonlinear regions are assumed to be much smaller than the distance between adjacent ones. Quasiperiodicity enters this model in a twofold way. Firstly, through the coefficients λ_n, which are assumed to follow the Fibonacci sequence. This sequence follows from the inflation rule S_{l+1} = S_l S_{l−1}, with S₁ = A and S₂ = AB. There are only two values of λ_n, λ₁ and λ₂, and the actual value of λ_n at location n is determined by the Fibonacci rule. We use the following procedure to obtain the sequence [144]:

k_{n+1} = [κ_n + 1/γ] ,   κ_{n+1} = (κ_n + 1/γ) mod 1 ,   (233)
where the square brackets denote the integer part, γ = (√5 + 1)/2 is the golden mean, and we start with κ₁ = 0 at n = 1. The value of λ_n is λ₁ or λ₂ when k_n is 1 or 0, respectively. Secondly, the intersite distances d_n = z_n − z_{n−1} are assumed to be quasiperiodic and to follow the Fibonacci sequence. There are two values of d_n, d_n = 1 or d_n = a > 1, resulting from the actual
location of z_n, which is determined by the rule

z_n = n + (a − 1)[(n + 1)/γ] ,

with γ = (1 + √5)/2 the golden mean and the bracket [·] denoting the integer part. When the intersite distance d_n is constant, it can be shown that finding the solution of Eq. (232) is equivalent to solving the problem of a nonlinear tight-binding model [143,144]; but this is no longer true for the quasiperiodic nonlinear lattices studied here. We investigate the scattering problem of a quasiparticle with momentum k. The first study deals with the NL case, where quasiperiodicity enters through an arrangement of the heights of the delta-barriers according to the Fibonacci sequence while the distances between them are kept constant. Plane waves are sent from the left towards the nonlinear chain and are scattered into a reflected and a transmitted part. Straightforward manipulations similar to the ones used in the standard linear Kronig–Penney problem lead to the following nonlinear difference equation for ψ_n ≡ ψ(n) [141,143,144]:

ψ_{n+1} + ψ_{n−1} = [2 cos(k) + λ_n |ψ_n|² sin(k)/k] ψ_n ,   (234)
where k is the wavenumber associated with the energy (or frequency) E(k) = 2 cos(k). Eq. (234) can be treated as a nonlinear map for various initial conditions corresponding to waves injected initially from the left and propagating towards the right side of the chain. In Fig. 17 we illustrate the transmission behavior in the |R₀|²–k parameter plane for the linear Fibonacci Kronig–Penney model (Fig. 17a) and the nonlinear one (Fig. 17b). Dark regions represent gaps in the wave propagation, whereas the passing regions are white. In the linear case, i.e. when the term |ψ_n|² is absent from Eq. (234), we have the typical band structure resulting from quasiperiodicity [36]. We note that the effect of nonlinearity alters this structure substantially, resulting in enhanced propagation for small initial intensities. The lattice is transparent for essentially all wavenumbers k at low intensities. In particular, the dominant linear gap for k-values approximately less than k ≈ 2 is reduced drastically. In the higher-intensity region, on the other hand, the forbidden lines seem to coalesce to form well-defined nonlinear gaps that are interrupted periodically by propagating resonance-like zones. The latter occur for k-values that are multiples of π; for these wavenumbers the nonlinear term is effectively canceled, leading to perfect propagation. The nonlinear lattice model presented here has applications in the propagation of electrons in superlattices and of electromagnetic waves in dielectric materials. In the latter case a quasiperiodic linear term should also be included in Eq. (232), representing the linear dielectric constant of the material. The modification of the results due to this term will be studied below in the context of the GNL model.

4.6. Transmission in a quasiperiodic lattice-II

In the previous section we studied a lattice with spatially periodic potentials whose bivalued strengths are arranged in a quasiperiodic sequence [141].
It turned out that while
D. Hennig, G.P. Tsironis / Physics Reports 307 (1999) 333—432
Fig. 17. Injected wave intensity as a function of the wave number k for a Fibonacci chain of length N = 10. (a) Linear quasiperiodic Kronig-Penney model and (b) nonlinear model. In both cases the coefficients λ_n = ±1 are distributed according to the Fibonacci sequence. The dark areas denote gaps or nonpropagating regions.
quasiperiodicity destroys the transparency of a linear superlattice for small wave numbers (long waves), nonlinearity enhances it through a self-trapping mechanism. Here we study the Kronig-Penney model with a quasiperiodic sequence of intersite distances [170]. We will consider mainly the electronic wave for positive potentials, but will also discuss the results for negative (attractive) potentials and compare them with the transmission of electromagnetic waves. Finally, we will explain the enhancement of transparency of the quasiperiodic chain for low-intensity long waves. We discuss the propagation of plane waves and the algorithm for numerical calculations in Section 4.6.1; we analyze and compare the transmission properties of the linear and nonlinear models and investigate phase correlations in Section 4.6.2. Section 4.6.3 is devoted to the analysis of long-wavelength waves and their transmission at low intensity.
4.6.1. Transmission of plane waves
In the interval between z_n and z_{n+1} the solution of the Schrödinger equation can be written as ψ_n(z) = A_n e^{ik(z−z_n)} + B_n e^{−ik(z−z_n)}, where k is the wave vector. We want to establish a nonlinear transformation connecting the amplitudes (A_n, B_n) and (A_{n−1}, B_{n−1}) on adjacent sides of the δ-function potential, such that

(A_n, B_n)^T = P_n (A_{n−1}, B_{n−1})^T ,  (235)

where P_n is the symbolic nonlinear operator, or nonlinear map, which projects one set of complex numbers onto another. The transformation in Eq. (235) is generally non-symplectic, although the transformations on (ψ_n, ψ_{n−1}) are; this is because, unlike (ψ_n, ψ_{n−1}), the (A_n, B_n) are not canonical variables. As in the linear Kronig-Penney problem, straightforward manipulations invoking the continuity of the wave functions and the discontinuity of their derivatives at the potential boundaries lead to a (nonlinear) Poincaré map for A_n and B_n. In order to simplify notation we first define for the amplitudes
(Ā_{n−1}, B̄_{n−1}) = (A_{n−1} e^{ikd_n}, B_{n−1} e^{−ikd_n}) ,

where d_n = z_n − z_{n−1} is the distance between two consecutive potentials. We obtain the following Poincaré map for the nonlinear model:
A_n = Ā_{n−1} − i(λ/2k) |Ā_{n−1} + B̄_{n−1}|² (Ā_{n−1} + B̄_{n−1}) ,
B_n = B̄_{n−1} + i(λ/2k) |Ā_{n−1} + B̄_{n−1}|² (Ā_{n−1} + B̄_{n−1}) ,

and for the inverse transformation we have
A_{n−1} = [A_n + i(λ/2k) |A_n + B_n|² (A_n + B_n)] e^{−ikd_n} ,
B_{n−1} = [B_n − i(λ/2k) |A_n + B_n|² (A_n + B_n)] e^{ikd_n} .

For a finite chain of N sites, the final coefficients of the wave function are

(A_N, B_N)^T = ∏_{n=1}^{N} P_n (A_0, B_0)^T .  (236)

Eq. (236) is a nonlinear map; for various initial conditions it describes waves injected initially from the left and propagating towards the right side of the chain. Therefore Eq. (236), or its inverse form, can be used to study the transmission properties of the lattice by assuming a given pair of either (A_0, B_0) or (A_N, B_N). The results are analyzed in order to obtain the transmission properties of the lattice. In Fig. 18 we show the transmission behavior in the (|R₀|, k) parameter plane, where |R₀| is the amplitude of the injected wave, for the linear Fibonacci Kronig-Penney model (Fig. 18a), the nonlinear model (Fig. 18b), and the GNL model (Fig. 18c). Dark regions represent gaps in the wave propagation whereas the passing regions are white. Compared with the periodic case, the spectrum of the quasiperiodic lattice in Fig. 18 has more structure than that of the periodic lattice shown in Fig. 14. We note that the effect of nonlinearity alters this structure substantially, resulting
in enhanced propagation for small initial intensities. The lattice is transparent for essentially all wavenumbers k at low intensities. In particular, the dominant linear gap for k-values approximately less than k ≈ 2 is reduced drastically. The boundary between the gap and the transmission regions for small k is almost linear, owing to the self-trapping effect of the nonlinear medium for long waves (Section 4.6.3). In the higher-intensity region, on the other hand, the forbidden lines seem to coalesce to form well-defined nonlinear gaps that are interrupted periodically by propagating resonance-like zones. The latter occur for k-values that are multiples of π; for these wavenumbers the nonlinear term is effectively canceled, leading to perfect propagation. For the
Fig. 18. Comparison of transmission through linear and nonlinear quasiperiodic lattices. (a) Non-transmitting (black) and transmitting (white) regions for a Fibonacci lattice of length N = 987, corresponding to the Fibonacci number F₁₆: the linear model. (b) The nonlinear model; gap regions have disappeared for low-intensity waves with small wave number, and the boundary between gap and extended states for small k is a straight line. (c) The GNL model with α₁ = α₂ = 0.5, which shows mixed results for transmission.
Fig. 18. Continued.
GNL model, the results depend on the choice of the α's. We find that, when α < 0, the transparency is enhanced compared with the corresponding linear case; otherwise, it is reduced. To analyze further the effects of nonlinearity in the model, we calculate the transmission function t = |T|²/|R₀|², where T is the transmitted amplitude at the end of the chain and |R₀| = |ψ₀| is the incident wave amplitude at the beginning of the chain. In Fig. 19 we plot the transmission function vs. wavenumber k for incident wave amplitude |R₀| = 0.2. As expected, the linear case (Fig. 19a) shows that the transmission, and the gaps, do not depend on the amplitude, and a large gap area exists for small k; the nonlinear model (Fig. 19b) demonstrates almost total transparency over the entire spectrum of k, because the amplitude dependence of the interaction between the wave and the medium makes it easier for waves with relatively small amplitudes to be transmitted. The picture becomes more complicated when both linear and nonlinear interactions are included; an example is given in Fig. 19c. In Fig. 20 we plot the transmission as a function of the chain length N for the linear Fibonacci Kronig-Penney case (a) and the nonlinear ones ((b) and (c)) for k = 0.2350. In the linear case the transmission coefficient t drops exponentially with N, whereas in the nonlinear and general nonlinear cases the transmission remains constant, and in (b) we have windows with perfect transmittivity. Fractal structure is present for both linear and nonlinear lattices due to the quasiperiodicity of the lattices. In Fig. 22, the boundary separating transmitting (white) from non-transmitting (grey) regions also shows a fractal structure, especially at large k-values. Similar fractal behavior has been reported by Delyon et al. for a related model [81]. 4.6.2. Correlation functions The enhancement in coherence resulting from the nonlinear term can also be seen in the correlation function.
We go to polar coordinates ψ_m = r_m exp(iθ_m) and define the phase correlation
Fig. 19. Transmission vs. wavenumber for linear and nonlinear Fibonacci chains. The initial wave amplitude is 0.2 and the length of the Fibonacci chain is N = 987. (a) The linear Kronig-Penney model. (b) The nonlinear model, showing that transmission is enhanced over almost the whole spectrum of k at low intensity. (c) The GNL model with α₁ = α₂ = 0.5, showing mixed results for transmission; for small wavelengths, transmission is in general reduced compared with the corresponding linear model.
Fig. 20. Transmission vs. chain length. The wave amplitude is 0.2 and k = 0.23590. (a) The linear model, which shows the exponential decay of the transmission function t. (b) The nonlinear model, which appears as a straight line at t = 1 in the main figure (dotted line) but actually has its own fractal structure, as shown in the inset. (c) The general nonlinear model.
function C(m) as

C(m) = ⟨θ_n θ_{n+m}⟩ = (1/N) Σ_n θ_n θ_{n+m} ,  (237)

where m is a lattice-site separation. In Fig. 21 it can easily be seen that for the linear Fibonacci lattice C(m) drops almost linearly as a function of m (Fig. 21a), in contrast with the well-known exponential decay of correlation functions in a random lattice. For the nonlinear Fibonacci lattice shown in Fig. 21b, however, the amplitude of the correlation function is well maintained for m up to 10⁴. For the general nonlinear case with α₁ = α₂ = 0.5 (shown in Fig. 21c), the correlation of the wavefunctions is also substantially enhanced compared to the linear case, but not as strongly as in the purely nonlinear case.
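A numerical estimate of C(m) can be sketched by iterating the Poincaré map (235) and collecting the phases of ψ_n = A_n + B_n. In the sketch below, the Fibonacci rule for the distances d_n ∈ {1, σ}, the truncation of the sum in Eq. (237) at N − m terms, and the growth cutoff are assumptions, not taken from the text:

```python
import cmath, math

SIGMA = (1 + math.sqrt(5)) / 2  # golden mean

def fibonacci_distances(n_sites):
    """d_n in {1, sigma} arranged by a golden-mean cut rule (assumed convention)."""
    return [SIGMA if math.floor((n + 1) / SIGMA) - math.floor(n / SIGMA) == 1 else 1.0
            for n in range(n_sites)]

def propagate(k, lam, a0, b0, n_sites=987):
    """Forward Poincare map, Eq. (235): free flight over d_n, then the
    nonlinear delta-barrier kick; returns the phases of psi_n = A_n + B_n."""
    a, b = complex(a0), complex(b0)
    phases = []
    for d in fibonacci_distances(n_sites):
        abar = a * cmath.exp(1j * k * d)
        bbar = b * cmath.exp(-1j * k * d)
        s = abar + bbar
        if abs(s) > 1e6:  # runaway (gap-like) growth: stop collecting
            break
        kick = 1j * (lam / (2 * k)) * abs(s) ** 2 * s
        a, b = abar - kick, bbar + kick
        phases.append(cmath.phase(a + b))
    return phases

def correlation(phases, m):
    """Phase correlation C(m) of Eq. (237), truncated at N - m terms."""
    n = len(phases) - m
    return sum(phases[i] * phases[i + m] for i in range(n)) / n
```

Plotting `correlation(phases, m)` against m for the linear (lam with |ψ|² dropped) and nonlinear maps gives curves of the kind shown in Fig. 21.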
4.6.3. Low-intensity transparency for long waves
In this section we use the long-wavelength approximation and show in detail why transparency is enhanced for low-intensity waves. For long waves, kd_n ≪ 1, so exp(±ikd_n) ≈ 1 ± ikd_n, and the nonlinear equations of Section 4.6.1 can be solved in the long-wave approximation. Let us define

ψ_n ≡ ψ_n(z_n) = A_n + B_n ,  (238)
ξ_n = B_n − A_n .  (239)

According to Eq. (236), ψ_n is the wavefunction at site n, whereas ξ_n is only a temporary variable introduced for algebraic convenience. Eqs. (238) and (239) can then be expanded and regrouped to form ψ_n and
Fig. 21. Phase correlation functions C(m) of propagating waves for three different quasiperiodic K-P models: (a) the linear lattice, (b) the nonlinear lattice, and (c) the GNL lattice. In the linear case the amplitude of C(m) decays almost linearly with m; in the nonlinear case C(m) does not decay up to m = 10 000; in the GNL case C(m) drops more slowly than in the linear case. Notice that different scales are used for (a)-(c).
ξ_n, and by keeping only the first-order terms it is easy to see that

ψ_n = ψ_{n−1} − ikd_n ξ_{n−1} ,
ξ_n = ξ_{n−1} − ikd_n ψ_{n−1} + i(λ/k) |ψ_{n−1}|² ψ_{n−1} .  (240)

Taking the continuum limit as k → 0, or more appropriately k⁻¹ ≫ d_n, Eqs. (240) become

dψ/dz = −ik ξ ,  dξ/dz = −ik ψ + i(λ/kd) |ψ|² ψ ,  (241)
where we have replaced d_n by its average d. Substituting the second equation of Eq. (241) into the first one after differentiation, we obtain the following second-order differential wave equation:

ψ̈(n) = −k² ψ(n) + (λ/d) |ψ(n)|² ψ(n) .  (242)
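Eq. (242) can be checked directly by integrating it with plane-wave initial data; when the initial slope matches the dispersion relation p² = k² − (λ/d)|ψ₀|² of Eq. (243) below, the modulus |ψ| stays constant. The step size and the explicit Euler scheme are choices of this sketch:

```python
import math

def integrate_eq242(k, lam, d, psi0, p, n_steps=2000, h=1e-3):
    """Integrate psi'' = -k^2 psi + (lam/d)|psi|^2 psi (Eq. (242)) with
    psi(0) = psi0 and psi'(0) = i*p*psi0, returning the largest deviation
    of |psi| from |psi0| along the trajectory."""
    psi = complex(psi0, 0.0)
    dpsi = 1j * p * psi
    worst = 0.0
    for _ in range(n_steps):
        acc = -k * k * psi + (lam / d) * abs(psi) ** 2 * psi
        psi, dpsi = psi + h * dpsi, dpsi + h * acc  # explicit Euler step
        worst = max(worst, abs(abs(psi) - abs(psi0)))
    return worst

k, lam, d, psi0 = 0.5, 1.0, 1.618, 0.2
p = math.sqrt(k * k - (lam / d) * psi0 ** 2)  # dispersion relation, Eq. (243)
drift = integrate_eq242(k, lam, d, psi0, p)   # stays small: a plane wave
```

For intensities above the boundary |ψ₀| = √(d/λ) k of Eq. (244), p becomes imaginary and the same integration exhibits exponential growth instead.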
We can distinguish three cases. The linear case. Replacing |ψ|² with 1 in Eq. (242) yields the linear equation:
t® "! k!
j t,!pt . d
When p² < 0, i.e. when the wavenumber k is small, there are only exponential solutions, which results in localized states. Roughly speaking, extended states appear when k is of the order of k_c ≡ √(λ/d). For example, in the previous section we used λ = 1 and d = 1.618, so that k_c = 0.786, and this result is consistent with Fig. 19a. The nonlinear case. Let ψ(n) = ψ₀ exp[ipn] be a solution of Eq. (242); then we find

p² = k² − (λ/d) |ψ₀|² .  (243)

Unlike the linear case, whether p is real or imaginary now depends on the intensity |ψ₀|² of the wave. As long as |ψ₀| < √(d/λ) k, there will be propagating waves and hence extended states. Setting p = 0, we have

|ψ₀| = √(d/λ) k .  (244)

Eq. (244) represents the boundary that separates the gap states from the propagating states for very small k. Fig. 22 shows the result of the numerical calculation, which agrees with Eq. (244) for small k. We used λ = 1 and the average intersite distance d = 1.618. The general nonlinear case. Similarly, for the GNL model the low-intensity wave equation becomes:
j t® "! k! (a #a "t") t . d For solution of the type of t(n)"t exp[ipn], we find j j p"k!a !a "t " . d d Generally, transmission would be more difficult for small k because of the presence of the linear term, and it will get worse if a '0. Compared with the corresponding linear model, better transmission would be achieved for small wavenumber k, if the nonlinear term dominates over the linear term, a
Fig. 22. The boundary between transmission and gap regions for waves with low intensities and long wavelengths. The boundary separating the gap region (grey) from the transmission region (white) is illustrated. The black line is the long-wave prediction of Eq. (244). Parameters are the same as in Fig. 22c.
The nonlinearity in Eq. (232) is thus shown to assist the waves in defying the quasi-random properties of the medium. This effect seems also to be present in genuinely nonlinear and disordered segments, but with some differences [156]. The nonlinear lattice model presented here has applications, as pointed out before, in the propagation of electrons in superlattices and of electromagnetic waves in dielectric materials. Although we have shown explicitly only the results for positive potentials (λ > 0 in Eq. (232)), it is worth mentioning that similar results on the enhancement of transmission by nonlinearity exist for negative potentials (λ < 0) as well. 4.7. Field dependence and multistability Over the last several decades there has been continuous interest in the study of electric-field-induced effects in crystals and other periodic structures. Fundamental mathematical difficulties, owing to the destruction of translational symmetry and the pathological divergence for an infinite lattice in the presence of an external field, still prevent a complete solution of this problem [171]. In this section we study electric-field-induced transport phenomena of ballistic electrons in a finite semiconductor superlattice. We first study linear effects induced by an external field and subsequently see how these effects are modified by nonlinearity. We use the Kronig-Penney model in both the linear and nonlinear cases and solve the Schrödinger equation without the approximations of weak fields or weak interwell couplings. In Section 4.7.1 we study the field-induced changes in the transmission spectrum and the interplay between Wannier-Stark localization and field-induced intersubband tunneling for the linear effects [172].
In Section 4.7.6 we introduce self-consistent potentials, study the nonlinear space-charge effects in a doped semiconductor superlattice, and explain the experimentally observed multistability and discontinuity of the electroconductance [173].
4.7.1. Electric field-induced changes in the transmission spectrum of electrons in a superlattice
In this section we study the transmission problem of ballistic electrons in a finite superlattice of rectangular barriers in the presence of a homogeneous electric field. A total transfer matrix is constructed, based on the eigenfunctions of the field-dependent Hamiltonian. An explicit solution is given for the transmission probability T for a lattice of δ-function potentials. Numerical results show that, with increasing field strength, transitions occur in T at fixed energy, from transmission to gap and back to transmission. In addition to shifts of the band edges to lower energies, an almost complete disappearance of the lower transmission bands occurs for large fields. We also discuss the connections of our findings with experimental results, especially the electro-optical effects.
4.7.2. The Wannier-Stark localization
The energy spectrum and transmission properties of an infinite crystal in the presence of a constant electric field were first studied by Wannier [174], and the Wannier-Stark (WS) ladder effect and Bloch oscillations have been discussed and debated over the years [175-178]. The prediction was made for the one-band tight-binding model, or a system resembling such a model, in which the energy levels of the crystal should contain ladder-like structures. For a lattice with a periodic potential of period a, it can easily be shown that an electron restricted to a single energy band of width Δ, in the absence of any scattering, will oscillate in both real and reciprocal space [179]. It is easy to see that the Bloch oscillation frequency is just ω_B = eEa/ℏ, and the spatial extension of the electron is L = Δ/eE, where E is the strength of the electric field. However, for almost three decades the WS ladder and Bloch oscillations eluded experimental observation, until the mid-1980s.
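For orientation, the two quantities just introduced are easy to evaluate; the superlattice parameters below (period 100 Å, miniband width 50 meV, field 10 kV/cm) are merely representative values, not taken from the text:

```python
# Bloch frequency omega_B = e*E*a/hbar and Wannier-Stark localization
# length L = Delta/(e*E) for illustrative superlattice parameters.
E_CHARGE = 1.602176634e-19   # C
HBAR = 1.054571817e-34       # J s

a = 100e-10              # lattice period, m (100 Angstrom)
delta = 0.05 * E_CHARGE  # miniband width, J (50 meV)
field = 1e6              # electric field, V/m (= 10 kV/cm)

omega_B = E_CHARGE * field * a / HBAR  # Bloch angular frequency, rad/s
loc_len = delta / (E_CHARGE * field)   # localization length, m

print(f"omega_B = {omega_B:.2e} rad/s, L = {loc_len / 1e-10:.0f} Angstrom")
```

This prints omega_B = 1.52e+13 rad/s and L = 500 Angstrom, i.e. a sub-picosecond Bloch period and localization over a few superlattice periods.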
It was argued as early as the 1970s that WS localization would be easier to find in a semiconductor superlattice than in crystals [180]. This is because in a superlattice the lattice constant (10-100 Å) is comparable with the electronic wavelengths; therefore the quantum-mechanical interference effects, which are partly responsible for the formation of WS ladders, are more important in a superlattice than in crystals. Wannier-Stark localization has been observed experimentally through optical emission and absorption experiments in semiconductor superlattices of GaAs/Al_xGa_{1−x}As [181-183], formed by alternating thin layers of GaAs and GaAlAs. Considerable theoretical work has also been done [184,185]. In the GaAs/GaAlAs system, for a certain range of fields, the heavy-hole states are fully localized whereas the electron states are still partially extended. Interband optical transitions between electron and hole states could give rise to well-defined absorption or emission spectra with the following characteristics [186]: (1) The spectral lines would be equidistant in energy, the separation being the Stark energy ΔE = ℏω_B. (2) The energy of the central line would be independent of the field to first order, but its strength would increase with increasing field. (3) The energies of the lines of the upper branch of the spectrum (relative to the central line) would increase linearly with field at the rate nea (n = 1, 2, 3, ...), where n represents the neighbor index with respect to the central line; the lines of the lower branch would decrease similarly with increasing field. In dealing with bulk solid-state problems, semiclassical treatments of electron transport are often utilized, but such methods are not valid for mesoscopic systems such as quantum wires and superlattices, in which the electron's de Broglie wavelength is comparable with the system's spatial parameters.
In dealing with quantum well problems in the presence of an electric field, we notice that the following considerations can significantly affect the results: (a) whether one
considers an infinite lattice or a finite one, (b) whether or not one uses plane-wave approximations and staircase potentials to discretize the continuous potential of the electric field, and (c) whether or not perturbation theory and a weak (or strong) field approximation is used. Point (a) is important because Bloch functions cannot in general be built out of a finite set of atomic wavefunctions in the presence of an electric field, and the effect of boundary conditions becomes increasingly evident for smaller lattices; point (b) is important because difference equations usually have more complicated spectra than the corresponding differential equations, so the discretization of a continuous process may easily introduce artificial spectra (even artificial chaos in a nonlinear case) into the system; and finally, point (c) is important because perturbation series usually diverge in the critical parameter regions where the system undergoes fundamental changes or transitions.
4.7.3. The time-independent Kronig-Penney model
There has been much work concerning the discrete energy spectrum of electrons in a quantum well, or in a superlattice consisting of many quantum wells, in an electric field, usually in the limit of an infinite lattice. Less attention has been paid to the transmission of electrons with energies considerably higher than those of the first gap. We find that at these higher energies the electric field can play quite a negative role for electronic transport, by suppressing the coherence of the electronic waves in consecutive wells. One of the reasons for studying the transmission spectrum, besides its inherent interest, is that the transport properties are directly related to the transmission coefficient of quantum-mechanical waves, as shown by Landauer and others [187,188]: the electric conductance is related to the transmission coefficient by the elementary factor 2e²/h.
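In numbers, the Landauer factor is the conductance quantum; a minimal sketch (CODATA values for the constants):

```python
# Landauer conductance of a single transmitting channel, G = (2 e^2 / h) T,
# spin factor 2 included; T = 1 gives the conductance quantum.
E_CHARGE = 1.602176634e-19   # C
PLANCK = 6.62607015e-34      # J s

def landauer_conductance(transmission):
    """Two-terminal conductance in siemens for transmission coefficient T."""
    return 2 * E_CHARGE ** 2 / PLANCK * transmission

g0 = landauer_conductance(1.0)  # conductance quantum, about 7.75e-5 S
```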
We consider a finite quasi-one-dimensional superlattice in the presence of a constant electric field and determine how the transmission spectrum is affected by the field. We do not consider exciton formation and its effect on transmission, but concentrate our attention on the transmission problem of conduction electrons in the combined presence of periodic and tilted potentials. Fig. 23 shows a schematic picture of the superlattice and the tilted potentials inside it. Quantum
Fig. 23. Schematic diagram of a quasi-one-dimensional semiconductor superlattice consisting of two different kinds of materials (upper part), and the electric field-induced structural changes in the square-well/square-barrier potentials V (lower part).
mechanical wave coherence and interference will be maintained when the mean free path of the electrons is much larger than the superlattice constant (the interval between two neighboring unit cells). Electrons experience a potential barrier, and are consequently scattered, at each interface between two layers. In other words, we consider a model of periodic rectangular wells. When an external electric field is applied to the superlattice, the potential wells become tilted, and each site has a potential energy different from its neighbors. Here we solve the problem exactly, without treating the electric field as weak or discretizing the potentials modified by the field. The formal solution shows that the gap regions are shifted toward lower energy and become wider with increasing field. In particular, we find that a region of the transmission spectrum undergoes a transition from transmission to gap and back to transmission as the field becomes stronger. We will illustrate how a gap state can be gradually brought into an extended state under the influence of the field, and vice versa. The one-dimensional Schrödinger equation of a single electron with energy E > 0 moving in a quantum-dot superlattice of N periodic blocking potentials V and in an external field ℰ is
−(ℏ²/2m) d²ψ(x)/dx² + [Σ_{n=1}^{N} V(x − x_n) − eℰx] ψ(x) = E ψ(x) ,  (245)
where V(x − x_n) is the potential at site n: V(x − x_n) = 0 except in the intervals x_n < x < x_n + b, where it equals V₀; x_n = na, where a is the lattice constant and b is the barrier width. Fig. 24 shows the influence of the periodic potential barriers on the transmission of the electrons. The transmission spectra as a function of ka for 20 different lattice sizes N are plotted in a quasi-three-dimensional diagram for comparison, where k = √(2mE)/ℏ and a is the lattice constant. The first gap (ka < 0.6) exists regardless of the size of the lattice, and corresponds to a minimum energy requirement for transmission. However, when the lattice size N is small, the Bragg backscattering effect is not strong enough to produce the second and third gaps in the transmission spectra; but even for small N we can still see that transmission is reduced at the locations where the Bragg condition ka = nπ (n an integer) is nearly satisfied. In Fig. 24 we use the following parameters: lattice constant a = 20 Å, barrier strength V₀ = 0.25 eV Å, and electric field ℰ = 0. For each of the 20 spectrum curves in Fig. 24, the transmission coefficient ranges from 0 to 1. Before looking for the solutions, we notice that in the presence of the electric field plane waves are not eigenfunctions of Eq. (245), nor is the wavenumber k a good quantum number in this problem. Nevertheless, if the electric field is relatively weak, one can still use plane waves and treat k as a semiclassical quantity which changes by a discrete amount from site to site; band shifts are expected owing to the lifted energy levels of the left-hand side of the lattice relative to the right-hand side. However, when the field is larger, a full quantum-mechanical approach is needed to reveal the electronic transport behavior, and we now proceed to give that solution.
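In the field-free case (ℰ = 0) the behavior of Fig. 24 can be sketched with the standard plane-wave transfer matrix for a chain of identical δ-barriers; the dimensionless barrier parameter u and the units ℏ²/2m = 1 are choices of this sketch, not the parameters of the figure:

```python
import cmath, math

def mat_mul(m1, m2):
    """2x2 complex matrix product (matrices as nested tuples)."""
    (a, b), (c, d) = m1
    (e, f), (g, h) = m2
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def transmission(ka, u, n_cells):
    """T = |det(M)/M_22|^2 for N identical field-free cells, each a delta
    barrier (dimensionless strength u, units hbar^2/2m = 1) followed by a
    free flight over one lattice constant."""
    barrier = ((1 - 1j * u, -1j * u), (1j * u, 1 + 1j * u))
    flight = ((cmath.exp(1j * ka), 0), (0, cmath.exp(-1j * ka)))
    cell = mat_mul(barrier, flight)
    total = ((1, 0), (0, 1))
    for _ in range(n_cells):
        total = mat_mul(cell, total)
    det = total[0][0] * total[1][1] - total[0][1] * total[1][0]
    return abs(det / total[1][1]) ** 2
```

Near the Bragg condition ka ≈ nπ the transmission collapses for large N, while inside a band it stays of order unity, which is the qualitative behavior of Fig. 24.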
If we define a characteristic length l(ℰ) = (ℏ²/2meℰ)^{1/3} and a dimensionless parameter λ(ℰ, U) = (2m/ℏ²e²ℰ²)^{1/3} U, where U = E or U = E − V₀ for the first and second media, respectively, then it can be shown that inside each layer Eq. (245) can be transformed into a Bessel equation of order 1/3 (see Ref. [189] for the transformation), and the solutions for propagating waves can be expressed as combinations of Hankel functions of the first and second kinds (or, equivalently, as Airy functions). The forward and backward propagating waves between x_{n−1} and x_n are consequently combined to
Fig. 24. Transmission coefficient vs. ka for 20 different lattice sizes. Gaps are produced by coherent backscattering of the electron wavefunctions from the barriers. Parameters are given in the text.
give the following solution:

ψ_n^{(i)}(z) = A_n^{(i)} z^{1/3} H_{1/3}^{(1)}(z) + B_n^{(i)} z^{1/3} H_{1/3}^{(2)}(z) ,  (246)
where i"1, 2 indicating the first and second medium respectively, and z(x, E, º)"(j(E, º)#x/l(E)) is the new dimensionless coordinate; (AG, BG) are amplitude constants, which will be determined L L solely by boundary conditions; and H(z) are the Hankel functions of the first and second kind, respectively. By differentiating Eq. (246) and using the recurrence relation of Bessel functions, d (zJJ (z))"zJJ (z) , J J\ dz
where J_ν(z) is any kind of Bessel function, we obtain dψ/dx as follows:

dψ_n^{(i)}/dx = (1/l(ℰ)) z^{1/3} [A_n^{(i)} H_{−2/3}^{(1)}(z) + B_n^{(i)} H_{−2/3}^{(2)}(z)] .  (247)
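The recurrence relation used here can be verified numerically from the series definition of J_ν; the truncation at 40 terms and the finite-difference step are choices of this sketch:

```python
import math

def bessel_j(nu, z, terms=40):
    """Bessel function J_nu(z) from its power series (stdlib only)."""
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + nu + 1))
               * (z / 2) ** (2 * m + nu) for m in range(terms))

def lhs(nu, z, h=1e-6):
    """Central-difference derivative of z**nu * J_nu(z)."""
    f = lambda x: x ** nu * bessel_j(nu, x)
    return (f(z + h) - f(z - h)) / (2 * h)

def rhs(nu, z):
    """Right-hand side of the recurrence: z**nu * J_{nu-1}(z)."""
    return z ** nu * bessel_j(nu - 1, z)
```

At ν = 1/3 this is exactly the relation that converts the z^{1/3} H_{1/3} solutions of Eq. (246) into the H_{−2/3} combinations appearing in Eq. (247).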
Considering the continuity of ψ(x) and dψ/dx at the interfaces between different layers, a transfer matrix between consecutive unit cells can be found. For simplicity of presentation we give specific results only for the δ-function case, V = g₀ a Σ_n δ(x − x_n), where g₀ is the average potential height in a unit cell of the original lattice without field. The rectangular potentials are replaced by δ-functions so that in each unit cell the particle is scattered only once. The unit-cell transfer matrix is as follows:
A_{n+1} = (1 + w_n h_n^{(1)}/h_n^{(0)}) A_n + (w_n h_n^{(3)}/h_n^{(0)}) B_n ,
B_{n+1} = −(w_n h_n^{(2)}/h_n^{(0)}) A_n + (1 − w_n h_n^{(1)}/h_n^{(0)}) B_n ,  (248)

where w_n = (2ml/ℏ²) z_n^{−1/3} g₀ a, and the h_n's are products of Hankel functions of argument z_n = z(na, ℰ, E):

h_n^{(0)} = H_{1/3}^{(1)}(z_n) H_{−2/3}^{(2)}(z_n) − H_{1/3}^{(2)}(z_n) H_{−2/3}^{(1)}(z_n) ,
h_n^{(1)} = H_{1/3}^{(1)}(z_n) H_{1/3}^{(2)}(z_n) ,  h_n^{(2)} = [H_{1/3}^{(1)}(z_n)]² ,  h_n^{(3)} = [H_{1/3}^{(2)}(z_n)]² .

The analytical solutions of Eq. (245) are therefore completely determined:
(A_N, B_N)^T = T_N (A_0, B_0)^T = ∏_n M_n (A_0, B_0)^T ,  (249)

where A_0, B_0 are the initial amplitudes of the wavefunction, T_N is the total transfer matrix between x_0 and x_N, and M_n is the nth transfer matrix of Eq. (248). For a complete analysis one should also include the transfer matrices connecting the field-free regions, x < x_0 and x > x_N, with the region in the field, x_0 < x < x_N. The final transfer matrix is denoted T^P. The transmission coefficient T is related to the final transfer matrix by

T = |det(T^P)/(T^P)_{22}|² .  (250)
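Extracting T from a computed total transfer matrix is a one-liner; the example matrices below are illustrative only:

```python
import cmath

def transmission_from_transfer(tp):
    """Eq. (250): T = |det(TP)/TP_22|^2.  The det factor matters because the
    total matrix need not be unimodular once the field-free connectors are
    included; for a unimodular matrix this reduces to 1/|TP_22|^2."""
    (a, b), (c, d) = tp
    det = a * d - b * c
    return abs(det / d) ** 2

# sanity check: a pure free-flight matrix transmits perfectly
free = ((cmath.exp(1j * 0.7), 0), (0, cmath.exp(-1j * 0.7)))
```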
4.7.4. Field-induced transmission spectral changes
We are interested in the case where the electronic wavelength is comparable with the superlattice constant. Let k₀ = 2π/a and k_e = √(2mE)/ℏ. We want to study the transmission problem for electrons with energies E such that k_e/k₀ ~ 0.1 to 10. For instance, if we take a = 10 to 50 Å and E = 0.01 to 1 eV, we find k_e/k₀ ~ 0.1 to 4. To simplify the numerical results, we first define two dimensionless quantities for the eigenenergy and the electric field strength. We define the dimensionless energy as Ẽ = E/g₀, where again g₀ is the average barrier height in a unit cell; Ẽ = 1 thus means that the eigenenergy is at the average height of the barrier. We also define a dimensionless electric field as ℰ̃ = eℰa/g₀, where e is the electron's charge and a the lattice constant, so that ℰ̃ can be seen as the relative potential drop within a cell caused by the electric field. We can then study how the transmission spectrum, shown in Fig. 25, is changed by the electric field. As we increase the field strength, we find that the second gap (gap II) becomes wider and shifts to the left (lower energies), as can be seen from Fig. 25b and c. The shift of the gap is not a surprise, because a similar result is expected from the semiclassical theory of conductivity. However, when we increase the field strength further, the original band structure collapses and gap II disappears;
Fig. 25. Transmission spectrum changed by the electric field. We plot the transmission coefficient vs. the dimensionless eigenenergy of the electron in a 200-unit superlattice for the following field strengths: (a) ℰ̃ = 0.0, (b) ℰ̃ = 1.2×10⁻², (c) ℰ̃ = 0.04, (d) ℰ̃ = 0.06, (e) ℰ̃ = 0.08, and (f) ℰ̃ = 0.12.
the first transmission band merges with gap II to form a new low-transmission band (Fig. 25e and f). In the final situation almost the whole spectrum becomes a transmission band, but with significantly different probabilities for electrons with different energies. This new band structure cannot be obtained by any weak-field approximation, and it is not predicted by any semiclassical calculation. From the changed spectrum one can draw the following conclusions: (1) the first gap, which has been studied extensively in the literature, undergoes moderate shifts in the presence of an
Fig. 26. Electric field-induced transmission and gap. Plot of transmission vs. the dimensionless electric field in a 200-unit superlattice for eigenenergies Ẽ at the left and right gap edges. (a) Ẽ = 15: an original gap point is brought to transmission by the electric field; the threshold field is at about 0.06. (b) Ẽ = 12: an original transmission point is brought into the gap region by the electric field in the range 0.02 < ℰ̃ < 0.05, after which it returns to a transmission state.
electric field, whereas the second gap is greatly expanded for moderate field strengths; (2) an electron with an eigenenergy initially at a transmission (conduction) state may become a gap electron in the field, and vice versa; (3) as the field strength is increased, the whole spectrum undergoes a drastic reconstruction, i.e. the first transmission band collapses, then rises again and merges with gap II to become a lower transmission band. These changes in the transmission spectrum will affect the electronic conductivity and will result, in some cases, in oscillations between negative and positive differential resistance. We now study the transition processes at the transmission band edges. Fig. 26 shows how the electric field affects the transmission coefficient of an electron at the left and right gap edges: (a) for Ẽ = 15, a gap point in the absence of a field is transformed into a transmission point by the electric field when ℰ̃ ≥ 0.06; (b) for Ẽ = 12, a transmission point in the absence of a field is transformed into a gap state for 0.02 < ℰ̃ < 0.05, and then returns to a transmission point for larger field strengths. In both cases, after escaping from the gap the transmission coefficient increases smoothly over an interval of about 0.1 and then oscillates as the field is increased.
4.7.5. Summary for the linear effects
We have shown that the transmission spectrum of a superlattice may undergo large changes in an electric field. Such changes will show up in the electric conductivity of the superlattice. Similar changes have been observed in other properties of semiconductor superlattices, such as the electro-optical effects [190,191]. As in our present work, these effects are due to the modification of states by the field and the resulting Wannier-Stark localization and intersubband resonance-induced delocalization [192].
We have given explicit results for a superlattice with 200 unit cells (N = 200), but similar results concerning the band shifting and the transitions between transmission and gap states hold for smaller lattices. However, when N is too small (in our case when N < 25), Bragg backscattering from the blocking potentials will not be strong enough to
D. Hennig, G.P. Tsironis / Physics Reports 307 (1999) 333—432
produce a gap; instead, the transmission spectrum contains a lower transmission band sandwiched between two higher ones. In the presence of an electric field, the edges of this lower transmission band are also shifted and its width is widened. For a stronger field, the distinction between the first two bands disappears, similar to what is shown in Fig. 25f. Finally, we note that the localization behavior of a disordered or quasiperiodic infinite system in the presence of an electric field has been studied extensively within the tight-binding model [193—195], and it has been found that the localization either decays in real space by a power law or is sometimes totally eliminated by the field. It would be interesting to use the present model to study how disorder in a finite system affects the transmission properties, and hence the conductivity of the electrons, in the electric field.
4.7.6. Multiple conductance in a nonlinear superlattice
In this section we continue to use the Kronig—Penney model in the presence of an external electric field, now studying the nonlinear effects introduced by doped layers in a semiconductor superlattice. In particular, we will try to understand the experimentally observed multistability and discontinuity in the current—voltage characteristics of a doped semiconductor superlattice under a homogeneous electric field. Nonlinearity enters our model through the self-consistent potential that describes the interaction of the electrons with the charges in the doped layers. We show that the process of Wannier—Stark localization is slowed down by the nonlinear effect in the doped layers, and that the shrinking and destruction of minibands in the superlattice by the nonlinearity is responsible for the occurrence of discontinuity and multistability in the transport of electrons.
4.7.7. Multistability in electroconductance
There has been considerable interest in the study of transmission in nonlinear media, for both electrons and electromagnetic waves. It has been shown that under certain circumstances nonlinearity can enhance transmission through a self-trapping mechanism and by producing and propagating soliton-like waves in the nonlinear medium [141,170]. The dynamics of the transmission is determined by an interplay of nonlinearity and the periodicity (or the lack of it) of the material. In the ballistic regime of semiconductor nanostructures, the electric conductance has been found to be related to the transmission probabilities of quantum tunneling by an elementary factor of 2e²/h for each quantum channel that connects the outgoing waves with the incident waves [187,188]. Optical switching and multistability in nonlinear periodic structures have been analyzed extensively and observed in experiments since the work of Winful et al. [95,113,196,197]. The electric-field-induced Stark ladder effect, Wannier—Stark localization, and the electro-optical properties and device applications of semiconductor superlattices (SL), such as GaAs/AlGaAs, have been studied extensively in recent years [181—186,198]. Quantum wave effects become significant in such a system because the wavelengths of the electrons in it are of the same order of magnitude as the superlattice constant. Recently, there has been renewed interest in the study of the multistability and discontinuity in the current—voltage (I—V) diagram of doped semiconductor superlattices, both theoretically [199,200] and experimentally [201]. It has been shown that the formation of charge domains in a superlattice is the reason for multistability in electroconductance. In this section, we present a theoretical analysis and numerical calculations using a simple model with self-consistent-potential techniques to study electronic transport in a SL. We demonstrate from
a fundamental quantum-mechanical point of view how multistability and switching behavior occur in a nonlinear discrete system.
4.7.8. A model with self-consistent potentials
The wave functions of electrons in a superlattice must obey the Pauli principle. For simplicity, we assume a decoupling between the longitudinal and transverse degrees of freedom to make the problem one-dimensional. The nonlinearity arises from the self-consistent potentials [202] produced by the accumulation of charges in the doped layers. The superlattice we consider here consists of square-well/square-barrier semiconductor heterostructures, a common model representing the mismatch in the minibands between the two component materials of the superlattice. We assume that the doped layers are located in the quantum barriers. Following Ref. [202], in the absence of an external field, we write the self-consistent Schrödinger equation for ψ(x, t):
i\hbar \frac{\partial \psi(x,t)}{\partial t} = \left[ -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x) + \int\!\!\int W(t,t';x,x')\,|\psi(x',t')|^2\,dt'\,dx' \right] \psi(x,t) ,   (251)
where V(x) is the periodic lattice potential. We are interested in the time-independent solutions, ψ(x, t) = ψ(x) exp{iωt}. In this case the kernel is also assumed to be time-independent, and the integral part of Eq. (251) is proportional to the density of charges in the doped layers. If the size of these regions is much smaller than the scale of the spatial variations of ψ(x), the integral part of Eq. (251) can be replaced by the summation of the average contributions of the localized charges inside the wells, W̄|ψ(x_n)|²Δx_n; the average kernel in the well, W̄, is proportional to en/C, where e is the electron charge, n is the charge density in the doped layer and C is the capacitance of that layer. For simplicity, we assume very thin doped layers and use δ-function-type nonlinear barriers to represent the self-consistent potentials; this approximation makes it much easier to obtain an exact solution for this model. The integral in Eq. (251) is hence replaced by the summation \sum_n w |\psi(x_n)|^2 \delta(x - x_n), where w is proportional to W̄. When an external electric field is applied along the growth axis of the superlattice, the most fundamental change the field makes is the breaking of translational symmetry. The energy levels of neighboring wells are misaligned, which results in Wannier—Stark localization owing to the turning-off of the resonant tunneling between consecutive wells, shifted absorption edges [198] and widened gap regions in the transmission spectrum [172]. The time-independent Schrödinger equation for an electron with energy ℰ in an external electric field E, approaching a sample of N periodic potential barriers, is

\left[ -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \sum_n g(|\psi|^2)\,\delta(x - x_n) - eEx \right] \psi(x) = \mathcal{E}\,\psi(x) ,   (252)

where g(|ψ|²) = p(g₀ + g₁|ψ(x)|²); p is the potential strength; g₀ and g₁ are weight factors (p g₁ = w) representing the linear and the self-consistent nonlinear potentials, respectively; and x_n = na, where a is the lattice constant. Eq. (252) is very similar to Eq. (245), except that the barrier potential now contains a nonlinear term whose strength is proportional to the local probability amplitude. As in Section 4.7.3, we define a characteristic length l(ℰ) = (ħ²/2meE)^{1/3} and a dimensionless parameter λ(ℰ) = (2m/ħ²)^{1/3}(eE)^{−2/3} ℰ = ℰ/(eE l(ℰ)); it can then be shown that between two consecutive δ-function scatterers Eq. (252) can be transformed into a Bessel equation of order 1/3
[172], and the solutions for propagating waves can be expressed as a combination of Hankel functions of the first and second kinds. The wave functions between x_{n−1} and x_n are consequently given as

\psi_n(z) = A_n \sqrt{z}\, H^{(1)}_{1/3}(z) + B_n \sqrt{z}\, H^{(2)}_{1/3}(z) ,   (253)

where z(x, ℰ) = (2/3)[λ(ℰ)(1 + x/λ(ℰ)l(ℰ))]^{3/2} is the new dimensionless coordinate; (A_n, B_n) are amplitude constants, to be determined later by the boundary conditions; and H^{(1)}_{1/3}(z), H^{(2)}_{1/3}(z) are the Hankel functions of the first and second kinds, respectively.
4.7.9. Multistability and multi-valued conductance
There is a significant difference between the wave functions of a linear lattice and a nonlinear one. For a linear lattice, g in Eq. (252) is a constant (g₁ = 0); the superposition principle holds for the waves in the entire lattice. The same cannot be said about a nonlinear lattice: electrons with the same incident wave magnitude |A₀| may not necessarily give the same output T, the transmission coefficient of the sample. We calculate the transmission of electrons in a SL in the presence of an external electric field E, and use Landauer's formula to obtain the corresponding conductance. In Fig. 27, we show the transmission coefficient T (Fig. 27a) and the electric conductance G (Fig. 27b) as functions of the field strength E for various values of g₁. In the linear case (g₁ = 0) and for a moderate electric field, the wave function of the electrons inside each quantum well is localized (W—S localization) and the transmission is reduced, owing to the field-induced suppression of the resonant tunneling between adjacent wells. As the electric field increases, the electrostatic potential energy of the electrons in each QW is enhanced by the amount eaE; if this value becomes comparable to Δℰ, the energy gap between two minibands, an enhancement in transmission is expected because of intersubband resonant tunneling [192].
In the case of many minibands this process of enhanced transmission repeats itself at higher field values, resulting in the oscillatory pattern of the continuous curves in Fig. 27. This oscillatory behavior is a manifestation of the competition between W—S localization and intersubband resonance-induced delocalization. We note that the delocalization effect is completely absent from a single-band model. The effects of W—S localization and intersubband resonance-induced delocalization can be observed through photon absorption and luminescence [181,182,192]. In the case of weak nonlinearity (g₁ = 0.05 in Fig. 27), the oscillatory behavior of the transmission coefficient and the conductance in the field remains similar to the linear case. However, the left and right sides of each peak become asymmetric, which means that (a) the W—S localization process is slowed down in the presence of nonlinearity in the doped layers, as shown by the smaller slopes of the rising curves in Fig. 27; and (b) the widths of the minibands shrink in the presence of moderate nonlinearity, so that the intersubband resonances occur in a narrower range of field values, resulting in the rapid drop after T or G reaches a peak value. Finally, drastic changes are observed in the case of strong nonlinearity (g₁ = 0.25 in Fig. 27). The W—S localization process is slowed down still further in an increasing field, whereas the miniband structure is totally destroyed by the nonlinearity, resulting in abrupt changes in transmission and conductance, including the occurrence of discontinuity and multistability. In Fig. 27, we use a = 20 Å, N = 40, and ℰ = 0.32 eV (this energy is roughly at the center of the second miniband of the linear model). For the barrier strength, we use p = 2.0 eV Å; g₁ = 0.05 and 0.25, respectively, with g₀ = 1.0 − g₁.
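The resonance condition just stated, eaE ≈ Δℰ, fixes the field scale at which the transmission revives. A quick arithmetic sketch (only a = 20 Å is from the text; the gap value Δℰ below is hypothetical, chosen for scale only):

```python
# Field at which the potential drop per superlattice period matches a
# miniband gap, e*a*E = dE.  Working in eV and Angstrom units, e cancels.
a = 20.0          # superlattice period in Angstrom (value used in Fig. 27)
gap = 0.1         # assumed miniband gap in eV -- hypothetical, for scale only
E_res = gap / a   # resonant field in V/Angstrom
print(E_res * 1e8 / 1e3, "kV/cm")   # 1 V/Angstrom = 1e8 V/cm -> 500.0 kV/cm
```

Gaps of a few tens to a hundred meV thus put the first intersubband resonance at fields of order 10²–10³ kV/cm for this period, which sets the scale of the oscillations in Fig. 27.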
Fig. 27. The effects of nonlinearity, shown in (a) the transmission (T—E) and (b) the conductance (G—E) diagrams. For a small nonlinear parameter g₁, the transmission and conductance curves are tilted and shifted; for large g₁, multiple transmission and conductance values become possible. The energy is chosen at a value near a band edge, but multistability is also observed at other energies (not shown). Arrows indicate, as examples, the locations of discontinuities. Other parameters are given in the text. Absolute values are used for the field strengths.
Fig. 27 shows the multistability in transmission only for an electron with energy ℰ = 0.32 eV. In order to understand the whole picture of the multistability, we draw a contour plot of multistability on the field—energy parameter plane, shown in Fig. 28. It can be seen that up to energy ℰ = 0.4 eV there are three transmission bands separated by gaps of different widths, and that the first and second transmission bands consist almost entirely of multistable states. It is very interesting that in the presence of nonlinearity the multistable and mono-stable states in the third transmission band form an oscillatory pattern, in agreement with Fig. 27. Detailed studies show that the second and third transmission bands actually come from a single transmission band in the
Fig. 28. The contour plot of multistability on the field—energy (E—ℰ) parameter plane. The black regions represent gaps or unstable regions; the dark gray regions are stable transmission states without multistability, and the light gray areas are the multistable states. Multistable states in the third (largest in this figure) transmission band exhibit a periodic pattern.
corresponding linear case; the splitting is due to the nonlinear forces. Another important feature is that the first transmission band vanishes when the field E > 2.1 kV/cm. The parameters in Fig. 28 are the same as in Fig. 27. A second model has been used to study the nonlinear effects in the presence of the field in more detail. In this model the doped layers are placed in the middle of the QWs instead of in the barriers. After we obtain the conductance G as in the case of Fig. 27, we use the field strength E and the sample length Na to obtain the voltage V = NaE; we then use Ohm's law to obtain the current, I = GV. The current—field characteristic diagram and possible sweep-up and sweep-down paths for this second model are presented in Fig. 29. We use the following parameters for the numerical calculations in Fig. 29: for the barrier potential, g₀ = 1.0, g₁ = 0.0; and for the doped layers, g₀ = 0.5, g₁ = 0.5. The rest of the parameters are the same as in Fig. 27. We point out that the behavior depicted in Fig. 27 is in qualitative agreement with the experimental results of Ref. [201]. We now return to the analytical solution. Considering the continuity of ψ(x_n) and the discontinuity of dψ/dx at x = x_n, the solutions of Eq. (252) can be expressed by the following recurrence relations:

A_{n+1} = [1 + w_n(|\psi_n|^2)\, h^{(1)}_n/h^{(0)}_n]\, A_n + w_n(|\psi_n|^2)\, h^{(2)}_n/h^{(0)}_n\, B_n ,   (254)

B_{n+1} = [1 - w_n(|\psi_n|^2)\, h^{(1)}_n/h^{(0)}_n]\, B_n - w_n(|\psi_n|^2)\, h^{(3)}_n/h^{(0)}_n\, A_n ,   (255)
Fig. 29. The current—field characteristic for the second model. Possible sweep-up and sweep-down paths are shown as the field is either increased or decreased. The current values (small circles) are obtained by calculating the conductance under different fields. A hysteresis loop is easily traced in this diagram. Parameters are given in the text. Absolute values are used for the field strengths.
where w_n \propto z_n^{-1}\, p\,(g_0 + g_1 |\psi_n(z_n)|^2), and the h_n's are products of Hankel functions of z_n:

h^{(0)}_n = H^{(1)}_{1/3}(z_n) H^{(2)}_{-2/3}(z_n) - H^{(1)}_{-2/3}(z_n) H^{(2)}_{1/3}(z_n) , \quad h^{(1)}_n = H^{(1)}_{1/3}(z_n) H^{(2)}_{1/3}(z_n) , \quad h^{(2)}_n = [H^{(2)}_{1/3}(z_n)]^2 , \quad h^{(3)}_n = [H^{(1)}_{1/3}(z_n)]^2 .

The analytical solutions of Eq. (252) are therefore completely given by Eqs. (253)—(255) once the initial amplitudes A_0, B_0 are known. Usually, however, one can only assume knowledge of the amplitude of the incident wave, A_0, but not of the reflected one, B_0; in addition, one can assume that B_N = 0, since there is no barrier to reflect the waves beyond x = Na. It can easily be shown that the recurrence equations (254) and (255) are reversible at every step. In this case a self-consistent technique can be used: one numerically finds the appropriate transmitted wave amplitude A_N such that, through the reverse transformations, the incident amplitude acquires the desired value; the solution can then be verified by directly applying the equations of the forward transformation. In order to understand the meaning of the solutions, let us estimate the orders of magnitude of the physical quantities that we have used, and apply the appropriate asymptotic form to the solutions. Take, for instance, a lattice consisting of N = 50 cells, with lattice constant a = 20 Å, E = 10³ V/cm, and ℰ = 0.35 eV; then l = 72.3 Å and λ = 483. This means that z ≫ 1 holds for all x; the use of the asymptotic form of the Bessel functions is well justified.
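These magnitudes are easy to reproduce numerically (free-electron mass assumed, ħ²/2m ≈ 3.81 eV Å²):

```python
# Orders of magnitude for l = (hbar^2/2meE)^(1/3) and lambda = energy/(e E l),
# with E = 1e3 V/cm and an electron energy of 0.35 eV (free-electron mass).
HBAR2_2M = 3.81      # hbar^2/(2m) in eV * Angstrom^2
eE = 1e-5            # e * 1e3 V/cm, expressed in eV per Angstrom
energy = 0.35        # eV

l = (HBAR2_2M / eE) ** (1.0 / 3.0)    # characteristic length, in Angstrom
lam = energy / (eE * l)               # dimensionless parameter
print(l, lam)                         # close to the quoted 72.3 and 483
```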
H^{(1,2)}_{\nu}(z) \simeq \sqrt{\frac{2}{\pi z}}\, \exp\!\left[ \pm i \left( z - \frac{\nu \pi}{2} - \frac{\pi}{4} \right) \right] ,   (256)
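The quality of this leading-order form at z values of a few hundred, the regime estimated above, can be checked directly against scipy's Hankel functions:

```python
import numpy as np
from scipy.special import hankel1, hankel2

nu = 1.0 / 3.0
z = 500.0        # z >> 1, the regime estimated in the text
phase = z - nu * np.pi / 2.0 - np.pi / 4.0
amp = np.sqrt(2.0 / (np.pi * z))
# Relative deviation of the exact Hankel functions from the asymptotic form.
err1 = abs(hankel1(nu, z) - amp * np.exp(1j * phase)) / abs(hankel1(nu, z))
err2 = abs(hankel2(nu, z) - amp * np.exp(-1j * phase)) / abs(hankel2(nu, z))
print(err1, err2)    # both well below 1e-3 at z = 500
```

The next term in the expansion is of order (4ν² − 1)/8z, which for ν = 1/3 and z ≈ 500 is about 10⁻⁴, consistent with the claim that the asymptotic relations are quite reliable here.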
where ν is the order of the Hankel function. Eqs. (253)—(255) can be simplified by using Eq. (256), and the recursion relations can be written as

\tilde{A}_{n+1} = \eta\,[1 + w_n(|\psi_n|^2)]\,\tilde{A}_n + \eta\, w_n(|\psi_n|^2)\,\tilde{B}_n \exp\{-2ik(x_n)x_n\} ,   (257)

\tilde{B}_{n+1} = \eta\,[1 - w_n(|\psi_n|^2)]\,\tilde{B}_n - \eta\, w_n(|\psi_n|^2)\,\tilde{A}_n \exp\{+2ik(x_n)x_n\} ,   (258)
where η = (z_n/z_{n+1})^{1/2}, \tilde{A}_n = \sqrt{2/\pi}\, A_n e^{i\theta}, and \tilde{B}_n = \sqrt{2/\pi}\, B_n e^{-i\theta}, with θ = (2/3)λ^{3/2} − (5/12)π. The electron gains kinetic energy in the field, and its “wavenumber” k(x) is an increasing function of x, given by

k(x) = k_0 \left[ 1 + \tfrac{1}{4}\lambda^{-3/2} k_0 x - \tfrac{1}{24}\lambda^{-3} k_0^2 x^2 + \cdots \right] ,

where k_0 = \sqrt{2m\mathcal{E}}/\hbar. The asymptotic solution can then be seen as a modified plane wave,

\psi_n(x) = (z_n/z)^{1/2}\,\tilde{A}_n\, e^{ik(x)x} + (z_n/z)^{1/2}\,\tilde{B}_n\, e^{-ik(x)x} .   (259)
The asymptotic relations are useful in that they help to illustrate the physical processes, and they have been found to be quite reliable for the parameters that we have used. Nevertheless, all the physical quantities of interest can be calculated directly from Eqs. (253)—(255). We have demonstrated that the occurrence of multistability and discontinuity in the transport of electrons can be explained by introducing self-consistent potentials to represent the nonlinear space-charge effects in the doped semiconductor layers. We use a simple model in which the doped layers are assumed to be ultrathin and to act nonlinearly on the wave packets of the electrons. One of the advantages of this model is that a fully quantum-mechanical treatment can be applied without using an effective Hamiltonian. Compared with the tight-binding model, which is good only for weakly coupled QW heterostructures, our model inherently creates a series of minibands (a multiple-conduction-subband structure), and the interwell coupling of the wave functions of all the QWs is fully taken into account in our algorithm. These couplings are responsible for the tunneling and transmission of electrons, and become increasingly important in superlattices fabricated with thinner barriers. The model can easily be modified to study other heterostructures in an electric field, such as a SL consisting of alternating n- and p-doped layers, and the modulations introduced by impurities and disorder.
5. Conclusions
We have demonstrated that simple nonlinear lattice systems, such as the ones formed through the discrete nonlinear Schrödinger equation and its generalizations, can be used to model physical circumstances ranging from polaron- and soliton-like problems to nonlinear electrical and optical problems. The main virtue of DNLS-like models is their simplicity, viz. with a relatively simple formalism one can describe complicated phenomena. In many cases, such as that of the nonlinear photonic band gap systems, one can bypass completely the complexity of the real problem while sacrificing only some quantitative accuracy in the results. In this review we addressed primarily the transfer properties of nonlinear lattices. The stationary lattice problem then becomes equivalent to an effective dynamical system described through a nonlinear map whose dynamics corresponds to discrete propagation on the lattice. The map properties are then investigated through the use of standard analytical and numerical techniques. The map associated with the DNLS-AL equation shows rich behavior and different regimes
depending on the relative strengths of the “integrable” and the “non-integrable” nonlinear terms. We saw that the integrable map regime can be extended to substantial values of the nonlinearity coefficient of the local nonlinear term. The nonintegrable regime is characterized by sensitive dependence on the initial conditions, leading, in the lattice problem, to nonpropagating stationary states. The latter do not contribute to the lattice transfer properties. One interesting application of nonlinear lattices is in one-dimensional nonlinear photonic band gap systems. In these systems there is an interplay between the nonlinearity and the presence of gaps in the linear band, leading to selective amplitude-dependent transmission, or switching. We showed some of these properties through a straightforward many-band generalization of DNLS, viz. a nonlinear Kronig—Penney model with delta functions. Since in the proper variables this system corresponds to a DNLS equation with a wavevector-dependent nonlinearity coefficient, we used the map approach to study its transmission properties. We also addressed transfer properties in the presence of external fields, targeting semiconductor superlattice applications. We saw that several of the conclusions arrived at through the map method can qualitatively represent results obtained experimentally.
Acknowledgements
We thank our collaborators Ning Sun, Mario Molina, Bill Deering, Helmut Gabriel, and Kim Rasmussen for exciting and stimulating discussions over the last several years. This work was partially supported by the European Union under the Human Capital and Mobility program ERB-CHRX-CT-930331. One of the authors (D.H.) gratefully acknowledges the warm hospitality of the Research Center of Crete, Greece, and support from the Deutsche Forschungsgemeinschaft via SFB 337.
References
[1] J.C. Eilbeck, P.S. Lomdahl, A.C. Scott, Physica D 16 (1985) 318.
[2] V.M. Kenkre, D.K. Campbell, Phys. Rev. B 34 (1986) 4959.
[3] T. Holstein, Ann. Phys. (N.Y.) 8 (1959) 325.
[4] A.S. Davydov, N.I. Kislukha, Phys. Status Solidi B 59 (1973) 465.
[5] A.S. Davydov, J. Theor. Biol. 38 (1973) 559.
[6] A.S. Davydov, N.I. Kislukha, Sov. Phys. JETP 44 (1976) 571.
[7] A.S. Davydov, Sov. Phys. Rev. B 25 (1982) 898.
[8] J.C. Eilbeck, P.S. Lomdahl, A.C. Scott, Phys. Rev. B 30 (1984) 4703.
[9] A.C. Scott, P.S. Lomdahl, J.C. Eilbeck, Chem. Phys. Lett. 113 (1985) 29.
[10] A.C. Scott, J.C. Eilbeck, Chem. Phys. Lett. 132 (1986) 23.
[11] V.M. Kenkre, G.P. Tsironis, D.K. Campbell, in: A.R. Bishop et al. (Eds.), Nonlinearity in Condensed Matter, Springer, Berlin, 1987.
[12] G.P. Tsironis, V.M. Kenkre, Phys. Lett. A 127 (1989) 209.
[13] V.M. Kenkre, G.P. Tsironis, Phys. Rev. B 35 (1987) 1473.
[14] V.M. Kenkre, G.P. Tsironis, Chem. Phys. 128 (1989) 219.
[15] D. Hennig, B. Esser, Z. Phys. B 88 (1992) 231.
[16] M.J. Ablowitz, J.F. Ladik, J. Math. Phys. 17 (1976) 1011.
[17] M. Salerno, Phys. Rev. A 46 (1992) 6856.
[18] D. Cai, A.R. Bishop, N. Grønbech-Jensen, Phys. Rev. Lett. 72 (1994) 591.
[19] S.M. Jensen, IEEE J. Quant. Electron. QE-18 (10) (1982) 1580.
[20] M.I. Molina, G.P. Tsironis, Physica D 65 (1993) 267.
[21] A.A. Maier, Sov. J. Quantum Electron. 14 (1984) 101.
[22] P.N. Butcher, D. Cotter, The Elements of Nonlinear Optics, Cambridge Univ. Press, Cambridge, 1990.
[23] P. Marquié, J.M. Bilbault, M. Remoissenet, Phys. Rev. E 51 (1995) 6127.
[24] M. Toda, Theory of Nonlinear Lattices, Springer, Berlin, 1978.
[25] R. Hirota, K. Suzuki, Proc. IEEE 61 (1973) 1483.
[26] M. Remoissenet, Waves Called Solitons, Springer, Berlin, 1994.
[27] V.I. Arnold, A. Avez, Ergodic Problems of Classical Mechanics, Benjamin, New York, 1968.
[28] A.J. Lichtenberg, M.A. Liebermann, Regular and Stochastic Motion, Springer, New York, 1992.
[29] J. Greene, R.S. MacKay, F. Vivaldi, M.J. Feigenbaum, Physica D 3 (1981) 468.
[30] R.S. MacKay, Physica D 7 (1983) 283.
[31] J.D. Meiss, Rev. Mod. Phys. 64 (1992) 795.
[32] T.C. Bountis, Part. Accel. 19 (1986) 181.
[33] R.H.G. Helleman, in: C. Horton, L. Reichl, V. Szebehely (Eds.), Long-time Predictions in Dynamics, Wiley-Interscience, New York, 1983.
[34] T. Chen et al., Phys. Rev. Lett. 68 (1991) 33.
[35] T. Satogata et al., Phys. Rev. Lett. 68 (1991) 1838.
[36] M. Kohmoto, L.P. Kadanoff, C. Tang, Phys. Rev. Lett. 50 (1983) 1870; M. Kohmoto, B. Sutherland, K. Iguchi, Phys. Rev. Lett. 58 (1987) 2436.
[37] G. Gumbs, M.K. Ali, Phys. Rev. Lett. 60 (1988) 1081; J. Phys. A 21 (1988) L517.
[38] F. Wijnands, J. Phys. A 22 (1989) 3267.
[39] J. Bellisard, B. Iochum, E. Scoppola, D. Testard, Commun. Math. Phys. 125 (1989) 527.
[40] S. Aubry, G. Andre, Ann. Israel Phys. Soc. 3 (1980) 133.
[41] S. Aubry, Physica D 7 (1983) 240.
[42] S. Aubry, P.Y. LeDaeron, Physica D 8 (1983) 381.
[43] J. Greene, J. Math. Phys. 20 (1979) 257.
[44] V.I. Arnold, Mathematical Methods of Classical Mechanics, Springer, New York, 1978.
[45] R. Devaney, Trans. Am. Math. Soc. 218 (1976) 89.
[46] A.A. Vakhnenko, Yu.B. Gadidei, Theor. Math. Phys. 68 (1986) 873.
[47] G.L. Wiersma, H.W. Capel, Physica A 142 (1987) 199; 149 (1988) 49, 75; Phys. Lett. A 124 (1987) 124.
[48] M.L. Glaser, V.G. Papageorgiou, T.C. Bountis, SIAM J. Appl. Math. 49 (1989) 692.
[49] T.R. Taha, M.J. Ablowitz, J. Comput. Phys. 55 (1984) 192, 203.
[50] B.M. Herbst, M.J. Ablowitz, Phys. Rev. Lett. 62 (1989) 2065.
[51] C.G.J. Jacobi, see G. Birkhoff, Dynamical Systems, Am. Math. Soc. Coll. Publ., vol. IX, American Mathematical Society, Providence, RI, 1927.
[52] G.R.W. Quispel, J.A.G. Roberts, C.J. Thompson, Physica D 34 (1989) 183.
[53] G.R.W. Quispel, J.A.G. Roberts, C.J. Thompson, Phys. Lett. A 126 (1988) 419.
[54] E.M. McMillan, in: W.E. Brittin, Odabasi (Eds.), Topics in Physics, Colorado Associated Univ. Press, Boulder, 1971, p. 219.
[55] K.A. Ross, C.J. Thompson, Physica A 135 (1986) 551.
[56] M.J. Ablowitz, P.A. Clarkson, Solitons, Nonlinear Evolution Equations and Inverse Scattering, Cambridge Univ. Press, New York, 1991.
[57] N. Finlayson, G.I. Stegeman, Appl. Phys. Lett. 56 (1990) 2276.
[58] Y. Chen, A.W. Snyder, D.J. Mitchell, Electron. Lett. 26 (1990) 77.
[59] A.C. Scott, Phys. Scr. 42 (1990) 14.
[60] L. Bernstein, J.C. Eilbeck, A.C. Scott, Nonlinearity 3 (1990) 293.
[61] J.C. Eilbeck, in: P.L. Christiansen, A.C. Scott (Eds.), Davydov's Soliton Revisited, Plenum Press, New York, 1991.
[62] C. Schmidt-Hattenberger, U. Trutschel, F. Lederer, Opt. Lett. 16 (1991) 294.
[63] M.I. Molina, G.P. Tsironis, Phys. Rev. A 46 (1992) 1124.
[64] G.P. Tsironis, Phys. Lett. A 173 (1993) 381.
[65] D. Hennig, Physica D 64 (1993) 121.
[66] J.C. Eilbeck, G.P. Tsironis, S. Turitsyn, Phys. Scr. 52 (1995) 386.
[67] M.I. Molina, W.D. Deering, G.P. Tsironis, Physica D 66 (1993) 135.
[68] G.P. Tsironis, D. Hennig, N.G. Sun, unpublished.
[69] Yu.S. Kivshar, M. Peyrard, Phys. Rev. A 46 (1992) 3198.
[70] Ch. Claude, Yu.S. Kivshar, O. Kluth, K.H. Spatschek, Phys. Rev. B 47 (1993) 14228.
[71] Yu.S. Kivshar, M. Salerno, Phys. Rev. E 49 (1994) 3543.
[72] M.H. Hays, C.D. Levermore, P.D. Miller, Physica D 79 (1994) 1.
[73] V.V. Konotop, O.A. Chubykalo, L. Vázquez, Phys. Rev. E 48 (1993) 563.
[74] D. Cai, A.R. Bishop, N. Grønbech-Jensen, M. Salerno, Phys. Rev. Lett. 74 (1995) 1186.
[75] R. Scharf, A.R. Bishop, Phys. Rev. A 43 (1991) 6535.
[76] D. Hennig, K.O. Rasmussen, G.P. Tsironis, H. Gabriel, Phys. Rev. E 52 (1995) 4628.
[77] A.B. Aceves, C. De Angelis, A.R. Rubenchik, S.K. Turitsyn, Opt. Lett. 19 (1994) 329.
[78] A.B. Aceves, C. De Angelis, S. Trillo, S. Wabnitz, Opt. Lett. 19 (1994) 332.
[79] A.B. Aceves, C. De Angelis, T. Peschel, R. Muschall, F. Lederer, S. Trillo, S. Wabnitz, Phys. Rev. E 53 (1996) 1172.
[80] D. Cai, A.R. Bishop, N. Grønbech-Jensen, Phys. Rev. E 53 (1996) 4131.
[81] F. Delyon, Y.-E. Levy, B. Souillard, Phys. Rev. Lett. 57 (1986) 2010.
[82] Yi Wan, C.M. Soukoulis, Phys. Rev. A 41 (1990) 800; Yi Wan, C.M. Soukoulis, in: A.R. Bishop, D.K. Campbell, S. Pnevmatikos (Eds.), Disorder and Nonlinearity, Springer, Berlin, 1989, pp. 27—37.
[83] D. Hennig, N.G. Sun, H. Gabriel, G.P. Tsironis, Phys. Rev. E 52 (1995).
[84] H.A. Kramers, Physica 2 (1935) 483.
[85] H.M. James, Phys. Rev. 76 (1949) 1602.
[86] W. Kohn, Phys. Rev. 115 (1959) 809.
[87] R.E. Borland, in: E. Lieb, D.C. Mattis (Eds.), Mathematical Physics in One Dimension, Academic Press, New York, 1966.
[88] D.R. Hofstadter, Phys. Rev. B 14 (1976) 2239.
[89] R. Bellman, Introduction to Matrix Analysis, McGraw-Hill, New York, 1970.
[90] V.I. Arnold, in: R.Z. Sagdeev (Ed.), Nonlinear and Turbulent Processes in Physics, Harwood, Chur, 1984, p. 116.
[91] I. Shimada, T. Nagashima, Prog. Theor. Phys. 61 (1979) 1605.
[92] V.I. Oseledec, Trans. Mosc. Math. Soc. 19 (1968) 197.
[93] G. Benettin, L. Galgani, J.M. Strelcyn, Phys. Rev. A 14 (1976) 2338.
[94] T. Bountis, Physica D 3 (1981) 577.
[95] H.G. Winful, J.H. Marburger, E. Garmire, Appl. Phys. Lett. 35 (1979) 379.
[96] L. Kahn, N.S. Almeida, D.L. Mills, Phys. Rev. B 37 (1988) 8072.
[97] A.J. Sievers, S. Takeno, Phys. Rev. Lett. 61 (1988) 970.
[98] J.B. Page, Phys. Rev. B 41 (1990) 7835.
[99] S. Takeno, S. Homma, J. Phys. Soc. Japan 60 (1991) 731; 62 (1993) 835; S. Takeno, K. Hori, J. Phys. Soc. Japan 60 (1991) 947; S. Takeno, J. Phys. Soc. Japan 61 (1992) 2821; S.R. Bickham, S.A. Kiselev, A.J. Sievers, Phys. Rev. B 47 (1993) 14206; K.W. Sandudsky, J.B. Page, K.E. Schmidt, Phys. Rev. B 46 (1992) 6161.
[100] T. Dauxois, M. Peyrard, C.R. Willis, Physica D 57 (1992) 267; Phys. Rev. E 48 (1993) 4768.
[101] T. Dauxois, M. Peyrard, Phys. Rev. Lett. 70 (1993) 3935.
[102] S. Flach, C.R. Willis, E. Olbrich, Phys. Rev. E 49 (1994) 836.
[103] S. Flach, C.R. Willis, Phys. Rev. Lett. 72 (1994) 1777.
[104] E.W. Laedke, K.H. Spatschek, S.K. Turitsyn, Phys. Rev. Lett. 73 (1994) 1055.
[105] S. Aubry, Physica D 71 (1994) 196.
[106] S. Aubry, Physica D 86 (1995) 284.
[107] J. Coste, J. Peyrard, Phys. Rev. B 39 (1989) 13086; 39 (1989) 13096.
[108] J. Coste, J. Peyrard, Z. Phys. B 96 (1994) 111.
[109] H.G. Winful, Appl. Phys. Lett. 46 (1985) 527.
[110] W. Chen, D.L. Mills, Phys. Rev. B 35 (1986) 524.
[111] W. Chen, D.L. Mills, Phys. Rev. Lett. 58 (1987) 160.
[112] D.L. Mills, S.E. Trullinger, Phys. Rev. B 36 (1987) 947.
[113] W. Chen, D.L. Mills, Phys. Rev. B 36 (1987) 6269.
[114] Yu.S. Kivshar, Phys. Rev. Lett. 70 (1993) 3055.
[115] D. Hennig, K.Ø. Rasmussen, H. Gabriel, A. Bülow, Phys. Rev. E 54 (1996) 5788.
[116] J. Guckenheimer, P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer, New York, 1983.
[117] S. Aubry, G. Abramovici, Physica D 43 (1990) 199.
[118] S. Aubry, Physica D 71 (1994) 196.
[119] R.S. MacKay, S. Aubry, Nonlinearity 7 (1994) 1623.
[120] P.C. Bressloff, Phys. Rev. Lett. 75 (1995) 962.
[121] J.J.P. Veerman, F.M. Tangerman, Commun. Math. Phys. 139 (1991) 245.
[122] R.S. MacKay, J.D. Meiss, Nonlinearity 5 (1992) 149.
[123] C. Baesens, R.S. MacKay, Physica D 71 (1994) 372.
[124] M.L. Glasser, V.G. Papageorgiou, T.C. Bountis, SIAM J. Appl. Math. 49 (1989) 692.
[125] P.F. Byrd, M.D. Friedman, Handbook of Elliptic Integrals for Engineers and Scientists, Springer, New York, 1971.
[126] G.D. Birkhoff, Acta Math. 43 (1920) 1.
[127] E. Tabacman, Physica D 85 (1995) 548.
[128] J. Moser, Commun. Pure Appl. Math. 9 (1956) 673.
[129] G.L. Da Silva Ritter, A.M. Ozorio de Almeida, R. Douady, Physica D 29 (1987) 181.
[130] K. Furuya, A.M. Ozorio de Almeida, J. Phys. A: Math. Gen. 20 (1987) 6211.
[131] V.M. Eleonskii, N.E. Kulagin, N.S. Novozhilova, V.P. Silin, Theor. Math. Phys. 60 (1984) 395.
[132] D. Hennig, submitted (1997).
[133] Yu.S. Kivshar, D.K. Campbell, Phys. Rev. E 48 (1993) 3077.
[134] R. de L. Kronig, W.G. Penney, Proc. R. Soc. London, Ser. A 130 (1931) 499.
[135] See, e.g., E. Lieb, D.C. Mattis, Mathematical Physics in One Dimension, Academic Press, New York, 1966; J. Kollar, A. Süto, Phys. Lett. A 117 (1986) 203; F. Dominguez-Adame, A. Sanchez, Phys. Lett. A 159 (1991) 135; A. Sanchez, E. Macia, F. Dominguez-Adame, Phys. Rev. B 49 (1994) 147.
[136] M. Jaros, Physics and Applications of Semiconductor Microstructures, Clarendon Press, Oxford, 1989; E. Tuncel, L. Pavesi, Phil. Mag. B 65 (1992) 213.
[137] Y. Tanaka, M. Tsukada, Phys. Rev. B 40 (1989) 4482.
[138] G.J. Clerk, B.H.J. Kellar, Phys. Rev. C 41 (1990) 1198.
[139] P. Hawrylak, M. Grabowski, P. Wilson, Phys. Rev. B 40 (1989) 6398.
[140] M. Grabowski, P. Hawrylak, Phys. Rev. B 41 (1990) 5783.
[141] D. Hennig, H. Gabriel, G.P. Tsironis, M.I. Molina, Appl. Phys. Lett. 64 (1994) 2934.
[142] D. Hennig, G.P. Tsironis, M.I. Molina, H. Gabriel, Phys. Lett. A 190 (1994) 259.
[143] J. Bellisard, A. Formoso, R. Lima, D. Testard, Phys. Rev. B 26 (1982) 3024.
[144] D. Würtz, M.P. Soerensen, T. Schneider, Helv. Phys. Acta 61 (1988) 345.
[145] P. Erdos, R.C. Herndon, Adv. Phys. 31 (1982) 65.
[146] M.Ya. Azbel, Phys. Rev. Lett. 43 (1979) 1954.
[147] J.B. Sokoloff, J.V. José, Phys. Rev. Lett. 49 (1982) 334.
[148] R. Knapp, G. Papanicolaou, B. White, in: A.R. Bishop, D.K. Campbell, S. Pnevmatikos (Eds.), Disorder and Nonlinearity, Springer, Berlin, 1989, pp. 2—26.
[149] S. John, Phys. Rev. Lett. 53 (1984) 2169.
[150] S. John, Phys. Rev. Lett. 58 (1987) 2486.
[151] E. Yablonovitch, Phys. Rev. Lett. 58 (1987) 2059.
D. Hennig, G.P. Tsironis / Physics Reports 307 (1999) 333—432 [152] [153] [154] [155] [156] [157] [158] [159] [160] [161] [162] [163] [164] [165] [166] [167] [168] [169] [170] [171] [172] [173] [174] [175] [176] [177] [178] [179] [180] [181] [182] [183] [184] [185] [186] [187] [188] [189] [190] [191] [192] [193] [194] [195] [196] [197]
431
S. John, R. Rangarajan, Phys. Rev. B 38 (1988) 10 101. K.M. Ho, C.T. Chan, C.M. Soukoulis, Phys. Rev. Lett. 65 (1990) 3152. M. Sigalas, C.M. Soukoulis, E.N. Economou, C.T. Chan, K.M. Ho, Phys. Rev. B 48 (1993) 121. G. Jona-Lasinio, C. Presilla, F. Capasso, Phys. Rev. Lett. 68 (1992) 2269. M.I. Molina, G.P. Tsironis, Phys. Rev. Lett. 73 (1994) 464. M.I. Molina, W.D. Deering, G.P. Tsironis, Physica D 66 (1993) 135. D. Schectman, I. Blech, D. Gratias, J.W. Cahn, Phys. Rev. Lett. 53 (1984) 1951. W. Gellermann, M. Kohmoto, B. Sutherland, P.C. Taylor, Phys. Rev. Lett. 72 (1994) 633. R. Merlin, K. Bajema, R. Clarke, F.Y. Juang, P.K. Bhattacharya, Phys. Rev. Lett. 55 (1985) 1768. B. Sutherland, Phys. Rev. B 34 (1986) 3904. M. Kohmoto, B. Sutherland, C. Tang, Phys. Rev. B 35 (1987) 1020. H. Hiramoto, M. Kohmoto, Int. J. Mod. Phys. B 6 (1992) 281. S. Ostlund, R. Pandit, D. Rand, J. Schellnhuber, E. Siggia, Phys. Rev. Lett. 50 (1983) 1873. P. Hu, C.S. Ting, Phys. Rev. B 34 (1986) 8331. L.M. Kahn, K. Huang, D.L. Mills, Phys. Rev. B 39 (1989) 12 449. S.D. Gupta, D.S. Ray, Phys. Rev. B 38 (1988) 3628; 40 (1989) 10 604; 41 (1990) 8047. M. Johansson, R. Riklund, Phys. Rev. B 49 (1994) 6587; 52 (1995) 231. H. Hiramoto, J. Phys. Japan 59 (1990) 811. N. Sun, D. Hennig, M. Molina, G. Tsironis, J. Phys.: Condens. Matter 6 (1994) 7741. G. Nenciu, Rev. Mod. Phys. 63 (1993) 91. Ning G. Sun, Daiqing Yuan, W.D. Deering, Phys. Rev. B. 51 (1995) 4641. N.G. Sun, G.P. Tsironis, Phys. Rev. B 51 (1995) 11 221. G.H. Wannier, Phys. Rev. 117 (1960) 432. J. Zak, Phys. Rev. Lett. 20 (1968) 1477. G.H. Wannier, Phys. Rev. 181 (1969) 1364; J. Zak, Phys. Rev. 181 (1969) 1366. J.R. Banavar, D.D. Coon, Phys. Rev. B 17 (1978) 3744. D. Emin, C.F. Hart, Phys. Rev. B 36 (1987) 7353. G. Bastard, J.A. Brum, R. Ferreira, Electronic states in semiconductor Heterostructures, in: H. Ehrenreich, D. Turnbull (Eds.), Solid State Physics, vol. 44, Academic Press, New York, 1991. L. Esaki, R. Tsu, IBM J. Res. Develop. 
14 (1970) 61. D.A.B. Miller, D.S. Chemla, T.C. Damen, A.C. Gossard, W. Wiegman, T.H. Wood, C.A. Burrus, Phys. Rev. B 32 (1985) 1043. E.E. Mendez, F. Agullo-Rueda, J.M. Hong, Phys. Rev. Lett. 60 (1988) 2426. F. Agullo-Rueda, E.E. Mendez, H. Ohno, J.M. Hong, Phys. Rev. B 42 (1990) 1470. M.M. Dignam, J.E. Sipe, Phys. Rev. Lett. 64 (1990) 1797. R.H. Yu, Phys. Rev. B 49 (1994) 4673. E.E. Mendez, in: F. Henneberger, S. Schmitt-Rink, E.O. Go¨bel (Eds.), Optics of Semiconductor Nanostructures, Akademie Verlag, Berlin, 1993. R. Landauer, Phil. Mag. 21 (1970) 863; J. Phys.: Condens. Matter 1 (1989) 8099. P.W. Anderson, D.J. Thouless, E. Abrahams, D.S. Fisher, Phys. Rev. B 22 (1980) 3519. S. Flu¨gge, Practical Quantum Mechanics, vol. 1, Springer, Berlin, 1971. J. Bleuse, G. Bastard, P. Voisin, Phys. Rev. Lett. 60 (1988) 220. I. Bar-Joseph, K.W. Goossen, J.M. Kou, R.F. Kopt, D.A.B. Miller, D.S. Chemla, App. Phys. Lett. 55 (1989) 340. H. Schneider, H.T. Grahn, K.V. Klitzing, K. Ploog, Phys. Rev. Lett. 65 (1990) 2720. C.M. Soukoulis, J.V. Jose, E.N. Economou, P. Sheng, Phys. Rev. Lett. 50 (1983) 764. F. Delyon, B. Simon, B. Souillard, Phys. Rev. Lett. 52 (1984) 2187. K. Niizeki, A. Matsumura, Phys. Rev. B 48 (1993) 4126. C.J. Herbert, M.S. Malcuit, Opt. Lett. 18 (1993) 1783. J. He, M. Cada, M.A. Dupertuis, D. Martin, F. Morier-Genoud, C. Rolland, A. SpringThorpe, Appl. Phys. Lett. 63 (1993) 866; B. Acklin, M. Cada, J. He, M.A. Dupertuis, Appl. Phys. Lett. 63 (1993) 2177.
432 [198] [199] [200] [201] [202]
D. Hennig, G.P. Tsironis / Physics Reports 307 (1999) 333—432 Paul Voisin, Surf. Sci. 288 (1990) 74. B. Laikhtman, D. Miller, Phys. Rev. B 48 (1993) 5395. F. Prengel, A. Wacker, E. Scho¨ll, Phys. Rev. B 50 (1994) 1705. J. Kastrup, H.T. Grahn, K. Ploog, F. Prengel, A. Wacker, E. Scho¨ll, Appl. Phys. Lett. 65 (1994) 1808. C. Presilla, G. Jona-Lasinio, F. Capasso, Phys. Rev. B 43 (1991) 5200.