SCIENCE IN DISPUTE
SCIENCE IN DISPUTE Volume
2
NEIL SCHLAGER, EDITOR
Produced by
Schlager Information Group
Science in Dispute, Volume 2 Neil Schlager, Editor
Project Editor: Brigham Narins
Editorial: Mark Springer
Permissions: Margaret Chamberlain, Jackie Jones, Shalice Shah-Caldwell
© 2002 by Gale. Gale is an imprint of The Gale Group, Inc., a division of Thomson Learning, Inc. Gale and Design™ and Thomson Learning™ are trademarks used herein under license. For more information, contact The Gale Group, Inc. 27500 Drake Road Farmington Hills, MI 48331-3535 Or you can visit our Internet site at http://www.gale.com
Imaging and Multimedia Leitha Etheridge-Sims, Mary K. Grimes, Lezlie Light, Dan Newell, David G. Oblender, Christine O’Bryan, Robyn V. Young
Manufacturing Rhonda Williams
Product Design Michael Logusz
For permission to use material from this product, submit your request via Web at http://www.gale-edit.com/permissions, or you may download our Permissions Request form and submit your request by fax or mail to: Permissions Department The Gale Group, Inc. 27500 Drake Road Farmington Hills, MI, 48331-3535 Permissions hotline: 248-699-8074 or 800-877-4253, ext. 8006 Fax: 248-699-8074 or 800-762-4058.
ALL RIGHTS RESERVED No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, Web distribution, or information storage retrieval systems—without the written permission of the publisher.
ISBN: 0-7876-5766-2 ISSN 1538-6635
Printed in the United States of America 10 9 8 7 6 5 4 3 2 1
While every effort has been made to ensure the reliability of the information presented in this publication, The Gale Group, Inc. does not guarantee the accuracy of the data contained herein. The Gale Group, Inc. accepts no payment for listing; and inclusion in the publication of any organization, agency, institution, publication, service, or individual does not imply endorsement of the editors or publisher. Errors brought to the attention of the publisher and verified to the satisfaction of the publisher will be corrected in future editions.
CONTENTS

About the Series . . . vii
Advisory Board . . . viii
List of Contributors . . . ix

Astronomy and Space Exploration
Was the Moon formed when an object impacted Earth early in its history, in a scenario known as the giant impact theory? . . . 1
Should a manned mission to Mars be attempted? . . . 10
Is the Hubble constant in the neighborhood of 100 km/s/Mpc? . . . 20
Was the use of plutonium as an energy source for the Cassini spacecraft both safe and justifiable? . . . 29
Should civilians participate in manned space missions? . . . 39

Earth Science
Are we currently experiencing the largest mass extinction in Earth's history? . . . 47
Are current U.S. drinking water standards sufficient? . . . 59
Is the Great Sphinx twice as old as Egyptologists and archaeologists think, based on recent geological evidence? . . . 70
Are ice age cycles of the Northern Hemisphere driven by processes in the Southern Hemisphere? . . . 77

Engineering
Are arthroplasties (orthopedic implants) best anchored to the contiguous bone using acrylic bone cement? . . . 89
Is fly ash an inferior building and structural material? . . . 99

Life Science
Historic Dispute: Are infusoria (microscopic forms of life) produced by spontaneous generation? . . . 107
Have sociobiologists proved that the mechanisms of the inheritance and development of human physical, mental, and behavioral traits are essentially the same as for other animals? . . . 117
Was Margaret Mead naive in her collection of anthropological materials and biased in her interpretation of her data? . . . 127
Do the fossils found at the sites explored by Louis and Mary Leakey and the sites explored by Donald Johanson represent several hominid species, or only one? . . . 136
Does greater species diversity lead to greater stability in ecosystems? . . . 146
Is the introduction of natural enemies of invading foreign species such as purple loosestrife (Lythrum salicaria) a safe and effective way to bring the invading species under control? . . . 155

Mathematics and Computer Science
Should Gottfried Wilhelm von Leibniz be considered the founder of calculus? . . . 165
Does whole-class teaching improve mathematical instruction? . . . 172
Will the loss of privacy due to the digitization of medical records overshadow research advances that take advantage of such records? . . . 181
Are digital libraries, as opposed to physical ones holding books and magazines, detrimental to our culture? . . . 191

Medicine
Historic Dispute: Were yellow fever epidemics the product of locally generated miasmas? . . . 201
Do current claims for an Alzheimer's vaccine properly take into account the many defects that the disease causes in the brain? . . . 210
Should human organs made available for donation be distributed on a nationwide basis to patients who are most critically in need of organs rather than favoring people in a particular region? . . . 219
At this stage of our knowledge, are claims that therapeutic cloning could be the cure for diseases such as diabetes and Parkinson's premature and misleading? . . . 227

Physical Science
Historic Dispute: Are atoms real? . . . 237
Does the present grant system encourage mediocre science? . . . 248
Is a grand unified theory of the fundamental forces within the reach of physicists today? . . . 256
Can radiation waste from fission reactors be safely stored? . . . 266
Is DNA an electrical conductor? . . . 275

General Subject Index . . . 285
ABOUT THE SERIES
Overview
Welcome to Science in Dispute. Our aim is to facilitate scientific exploration and understanding by presenting pro-con essays on major theories, ethical questions, and commercial applications in all scientific disciplines. By using this adversarial approach to present scientific controversies, we hope to provide students and researchers with a useful perspective on the nature of scientific inquiry and resolution.

The majority of entries in each volume of Science in Dispute cover topics that are currently being debated within the scientific community. However, each volume in the series also contains a handful of "historic" disputes. These include older disputes that remain controversial as well as disputes that have long been decided but that offer valuable case studies of how scientific controversies are resolved. Each historic debate is clearly marked in the text as well as in the Contents section at the beginning of the book.

Each volume of Science in Dispute includes approximately thirty entries, which are divided into seven thematic chapters:
• Astronomy and Space Exploration
• Earth Science
• Engineering
• Life Science
• Mathematics and Computer Science
• Medicine
• Physical Science

The advisory board, whose members are listed elsewhere in this volume, was responsible for defining the list of disputes covered in the volume. In addition, the board reviewed all entries for scientific accuracy.

Entry Format
Each entry is focused on a different scientific dispute, typically organized around a "Yes" or "No" question. All entries follow the same format:
• Introduction: Provides a neutral overview of the dispute.
• Yes essay: Argues for the pro side of the dispute.
• No essay: Argues for the con side of the dispute.
• Further Reading: Includes books, articles, and Internet sites that contain further information about the topic.
• Key Terms: Defines important concepts discussed in the text.

Throughout each volume users will find sidebars whose purpose is to feature interesting events or issues related to a particular dispute. In addition, illustrations and photographs are scattered throughout the volume. Finally, each volume includes a general subject index.

About the Editor
Neil Schlager is the president of Schlager Information Group Inc., an editorial services company. Among his publications are When Technology Fails (Gale, 1994); How Products Are Made (Gale, 1994); the St. James Press Gay and Lesbian Almanac (St. James Press, 1998); Best Literature By and About Blacks (Gale, 2000); Contemporary Novelists, 7th ed. (St. James Press, 2000); Science and Its Times (7 vols., Gale, 2000-2001); and The Science of Everyday Things (4 vols., Gale, 2002). His publications have won numerous awards, including three RUSA awards from the American Library Association, two Reference Books Bulletin/Booklist Editors' Choice awards, two New York Public Library Outstanding Reference awards, and a CHOICE award for best academic book.

Comments and Suggestions
Your comments on this series and suggestions for future volumes are welcome. Please write: The Editor, Science in Dispute, Gale Group, 27500 Drake Road, Farmington Hills, MI 48331.
ADVISORY BOARD
Donald Franceschetti, Distinguished Service Professor of Physics and Chemistry, University of Memphis
William Grosky, Chair, Computer and Information Science Department, University of Michigan–Dearborn
Jeffrey C. Hall, Assistant Research Scientist, Associate Director, Education and Special Programs, Lowell Observatory
Stephen A. Leslie, Associate Professor of Earth Sciences, University of Arkansas at Little Rock
Lois N. Magner, Professor Emerita, Purdue University
Duncan J. Melville, Associate Professor of Mathematics, St. Lawrence University
Gladius Lewis, Professor of Mechanical Engineering, University of Memphis
LIST OF CONTRIBUTORS
Linda Wasmer Andrews, Freelance Writer
Peter Andrews, Freelance Writer
John C. Armstrong, Astronomer and Astrobiologist, University of Washington
William Arthur Atkins, Freelance Writer
Sherri Chasin Calvo, Freelance Writer
Adi R. Ferrara, Freelance Science Writer
Randolph Fillmore, Science Writer
Maura C. Flannery, Professor of Biology, St. John's University
Katrina Ford, Freelance Writer
Donald Franceschetti, Distinguished Service Professor of Physics and Chemistry, University of Memphis
Natalie Goldstein, Freelance Science Writer
Jeffrey C. Hall, Assistant Research Scientist, Associate Director, Education and Special Programs, Lowell Observatory
Robert Hendrick, Professor of History, St. John's University
Philip Koth, Freelance Writer
Brenda Wilmoth Lerner, Science Writer
K. Lee Lerner, Professor of Physics, Fellow, Science Research & Policy Institute
Eric v.d. Luft, Curator of Historical Collections, SUNY Upstate Medical University
Charles R. MacKay, National Institutes of Health
Lois N. Magner, Professor Emerita, Purdue University
Leslie Mertz, Biologist and Freelance Writer
M. C. Nagel, Freelance Science Writer
Lee Ann Paradise, Science Writer
Cheryl Pellerin, Independent Science Writer
David Petechuk, Freelance Writer
Laura Ruth, Freelance Writer
Marie L. Thompson, Freelance Writer/Copyeditor
Todd Timmons, Mathematics Instructor, University of Arkansas-Fort Smith
David Tulloch, Freelance Writer
Rashmi Venkateswaran, Senior Instructor—Undergraduate Laboratory Coordinator, Department of Chemistry, University of Ottawa
Elaine H. Wacholtz, Medical and Science Writer
Stephanie Watson, Freelance Writer
ASTRONOMY AND SPACE EXPLORATION

Was the Moon formed when an object impacted Earth early in its history, in a scenario known as the giant impact theory?

Viewpoint: Yes, the giant impact theory provides a compelling and plausible explanation for the formation of the Moon.

Viewpoint: No, while the giant impact theory is widely regarded as a better explanation for the Moon's formation than previous ones, it still has many problems that must be resolved.

Humanity is perennially involved in a quest for origins, whether our own, that of our universe, or of our own planet. In the case of Earth and its only natural satellite, the Moon, this involves peering back some four and one-half billion years into the past. While the age of the Sun, the solar system, and of Earth is fairly well determined, the origin of the Moon is still uncertain. This debate is, however, an example of a scientific problem where several competing concepts have been largely defeated by observation and where one theory—the giant impact theory—has gradually gained increasing acceptance.

Both of the following essays discuss three competing explanations for the origin of the Moon, which fell from grace in the 1970s, largely as a result of direct exploration of the Moon by the various Apollo missions. Unlike most celestial objects that can only be observed telescopically, humans have actually visited the Moon. And as is often the case, the observations did a marvelous job of foiling the explanations of its origins that had previously been advanced. In response to this, the giant impact theory (GIT) was advanced in the mid-1970s. This theory proposes that an object collided with Earth early in its history, throwing substantial debris into orbit about the planet that eventually coalesced to become the Moon. As the articles point out, this theory is not without its problems, but it is currently the best available explanation for the origin of the Moon.

Because this debate is an example of one where a relatively new idea has gained widespread but by no means universal acceptance, it is particularly illustrative of just what is meant by the word theory. Scientists attempt to explain an observed phenomenon by first formulating a hypothesis, which is a suggestion—educated or otherwise—for the mechanism producing the phenomenon. In the case of the GIT, the phenomenon is "Earth has a large, natural satellite, called the Moon," the question is "Where did the Moon come from?" and the hypothesis might be phrased "The Moon formed after a large object impacted Earth early in its history." At this stage, there is nothing to support or reject the hypothesis; it is simply an idea. Having formulated a hypothesis, the scientist must then prove (or disprove) it as convincingly as possible. If observations, experiments, and (in recent times) computer modeling support the hypothesis, it may eventually be advanced as a theory. The broader the evidence in support of a hypothesis, the more convincing and widely accepted the theory is likely to become.

Hypotheses and theories may be developed in response to observations, or in anticipation of them. A classic example of a theory developed in response to observation is Isaac Newton's theory of universal gravitation,
which Newton developed to explain observed motions of the planets. For macroscopic objects moving in a moderate gravitational field, Newton’s theory of gravitation works beautifully. It not only explains motion, but it is predictive—it can be used to project the motion of an object moving under the influence of gravity into the future. Despite this, Newton’s theory—or any theory, for that matter—is not fact. It is an intellectual construct that explains a specified set of observations extremely well; so well, in fact, that it is usually described as a law, a generalization that has stood the test of time, is continuously confirmed by new evidence, and which is more certain than either a hypothesis or a theory. Modern scientists universally accept it as the best available explanation of “classical” motion in a gravitational field. There are circumstances, however, where Newtonian gravity does not quite explain the observations. The motion of Mercury is a famous example; it was only explained fully by Albert Einstein’s theory of general relativity. In areas of strong gravitational fields, such as near black holes or even sufficiently close to the Sun, Newtonian theory fails to deal correctly with observed motion. Theories, therefore—even the best ones—have regimes under which they are applicable and regimes under which they fail. Any theory is subject to challenge and reevaluation by scientists at any time. It is the nature of scientific inquiry to test and refine theories and to discard those found lacking. By their nature, newer theories are always subject to greater skepticism than well-established ones. It would be very difficult to challenge Newton’s law of gravitation in the regimes for which it is valid, though the American writer Immanuel Velikovsky did just that (among other things) with his controversial book Worlds in Collision (1950). However, Velikovsky’s hypotheses were roundly decried and never considered valid. Less thoroughly tested hypotheses and theories are always open to critical review. The giant impact theory is a recent example, and it has met with the usual amount of healthy skepticism from the scientific community. As the essays below discuss, the theory explains certain observations about the Moon, but fails to explain others. Since not all the issues have been resolved, the GIT has not been generally accepted, despite its many compelling features. As with any developing explanations, the GIT can be presented with varying degrees of skepticism, and the viewpoints below reflect this. It is also not the case that once a scientist suggests a theory, he or she will have failed in some way if it turns out to be incorrect. Quite the contrary, establishing that something does not work is often as valuable as establishing that it does. Suppose, for example, that extensive computer models proved beyond a doubt that the GIT was incorrect—that there was no way a huge impact early in Earth’s history could have created the Moon. This would be an extremely valuable result, for scientists would then know they should work on other hypotheses, and not waste time investigating something known to be wrong.
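For reference, the two results alluded to in this example can be written out explicitly; the formulas and the 43-arcsecond figure are standard textbook values rather than numbers given in this essay, so treat them only as an illustrative aside. Newton's law of universal gravitation gives the attraction between two masses as

\[ F = \frac{G\,m_1 m_2}{r^{2}}, \]

while general relativity adds a small advance of a planet's perihelion on each orbit,

\[ \Delta\varphi = \frac{6\pi G M_{\odot}}{c^{2}\,a\,(1 - e^{2})}, \]

which for Mercury amounts to roughly 43 arcseconds per century—the residual motion that Newtonian gravity alone cannot account for.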
Scientists are also very willing to make the statement “this is the best we have,” and this is generally said about the GIT. However, saying that the GIT is the best explanation for the formation of the Moon is not the same thing as saying it is an adequate explanation. An essential part of science is the regular acknowledgement of areas where knowledge is simply incomplete. Equally important is the willingness to be open to new ideas, and to discard ideas considered incorrect—a difficult thing to do, given the significant intellectual investment scientists make in their work. Some scientists are more satisfied with the GIT than others, and the two essays below present the various viewpoints in this light, with one striking a more skeptical tone than the other. In this debate, it is not so much a question of whether the GIT is correct or incorrect, but how completely it accounts for the available data. —JEFFREY HALL
Viewpoint: Yes, the giant impact theory provides a compelling and plausible explanation for the formation of the Moon.

People have accorded religious and mystical significance to the Moon since before recorded history. Eventually, some ancient civilizations attempted to quantify various aspects of the Moon. For example, the ancient Babylonians, Chinese, and Egyptians all strived, with varying
success, to predict the occurrence of lunar eclipses. Later on, several ancient Greek astronomers attempted to gauge the distances between various celestial bodies, including the distance between Earth and the Moon. But real progress in our understanding of the Moon began during the European Renaissance, which ushered in the birth of modern science.

By the twentieth century, there were three principal scientific hypotheses regarding the Moon's origin. They were: (1) the coaccretion hypothesis (referred to by other titles, including the "double planet hypothesis" and the "planetesimal hypothesis"); (2) the fission hypothesis; and (3) the capture hypothesis. These three hypotheses will be briefly outlined below, followed by selected reasons stating why each of them eventually fell into disfavor within the scientific community.

The coaccretion hypothesis asserts that both Earth and the Moon (and indeed, the other planets and moons in the solar system) coalesced from a rotating disk of "planetesimals," which were a host of celestial bodies thought to have ranged in size from inches to miles across. The planetesimals themselves had formed from a cloud, or nebula, of particles circling the "proto-star" that eventually became our Sun. The hypothesis holds that the Moon and Earth formed at about the same time, revolving about their common center of mass, and from the same small planetesimals.

The fission hypothesis, originally proposed in the late 1800s by G. H. Darwin, son of Charles Darwin, hypothesized that the Moon was thrown out (i.e., "fissioned") from Earth's mantle. According to the fission hypothesis, this ejection of material that became the Moon was due to an extremely fast spin of the primordial Earth.

The capture hypothesis held that the Moon formed farther out in the solar system, away from Earth, where there was a low proportion of the heavier elements, most notably iron. The Moon was subsequently captured by the gravitational attraction of Earth when the two bodies passed close by each other.

Lunar Landings and Their Ramifications Starting in the 1960s, the use of spacecraft allowed the first direct measurements to be made at the lunar surface itself. By the early 1970s the Apollo astronauts had returned to Earth several hundred pounds of lunar rock and soil. The analysis of those lunar samples helped resolve many questions regarding the Moon. In addition to the soil and rock samples, the Apollo missions left behind various measuring devices on the Moon's surface to relay data back to Earth.
The three main pre-Apollo hypotheses (outlined above) about the Moon’s origin were largely discredited after the discoveries of Apollo became known. Several of the most important findings used to refute those hypotheses are summarized below.
KEY TERMS
ANGULAR MOMENTUM: The measure of motion of objects in curved paths, including both rotation and orbital motion. For Earth and the Moon, angular momentum is the spin of each planet plus the orbital motion of the Moon around Earth.
ANORTHOSITE: The ancient lunar surface rock, made up of igneous or magmatic rock (usually called "plutonic" rock).
DENSITY: Mass of a given particle or substance per unit volume.
ISOTOPE: Two or more forms of an element with the same atomic number (same number of protons) but with different numbers of neutrons.
LINEAR MOMENTUM: Sometimes called simply "momentum"; for a single (nonrelativistic) particle, the product of the particle's mass and its velocity.
MAGMA: Molten matter beneath Earth's crust that forms igneous rock when cooled.
MAGNETOMETER: Instrument that measures the magnitude and sometimes the direction of a magnetic field, such as the Moon's magnetic field.
PLANETESIMAL: One of an enormous number of small bodies supposed to have orbited the Sun during the formation of the planets.
PRIMORDIAL: Existing at the beginning of time.
PRIMORDIAL SOLAR NEBULA: Cloud of interstellar gas and/or dust that was disturbed, collapsed under its own gravity, and eventually formed the Sun, the planets, and all other objects within the solar system.
ROCHE LIMIT: The restrictive distance below which a body orbiting a celestial body would be disrupted by the tidal forces generated by the gravitational attraction of the celestial body. For Earth, the Roche limit is about three Earth radii.
VOLATILE: Capable of being readily and easily vaporized, or evaporated.

• Measurements showed that the core of the Moon is proportionally much poorer in iron than is Earth's core. Because the coaccretion hypothesis asserts that the Moon and Earth both formed from the same materials, the chemical and elemental composition of the two should be very close. However, this is not the case, as the lack of iron in the Moon demonstrates, and there are other major differences in the chemical composition of the two bodies. Altogether, these findings refute the coaccretion hypothesis.
• The fission hypothesis ran into trouble because analysis of the total angular momentum of the present Earth-Moon system is inconsistent with the angular momentum that the original primordial Earth would have had to possess to have "thrown out" the Moon.
• Isotope ratios that exist for elements on both Earth and the Moon turn out to be the same for the two bodies. This finding dispels the capture hypothesis, because if the Moon had formed far from Earth, the isotope ratios for elements found on each should have been different. A good illustration of
isotopic dissimilarities is that of Mars and Earth. It is now known that oxygen-isotope ratios of Martian soil are different from the oxygen-isotope ratios of Earth soil.

The Moon as it appeared to the crew of Apollo 17. The Apollo missions helped refute several theories about the origin of the Moon. (Photograph by Roger Ressmeyer. © NASA/Roger Ressmeyer/CORBIS. Reproduced by permission.)

A New Explanation Emerges In a 1975 paper in the journal Icarus, William K. Hartmann and Donald R. Davis first hypothesized that a sizeable object impacted Earth early in its history. The name of Hartmann and Davis's hypothesis has come to be generally known as the giant impact theory, or GIT. In their paper, the two scientists stated that the primitive Earth experienced an immense collision with another planet-sized body that threw out vast quantities of mantle material from Earth, in turn forming a cloud of debris that encircled Earth. This debris cloud later coalesced to form the Moon.
The most recent version of the giant impact theory proposes that the Moon was formed SCIENCE
IN
DISPUTE,
VOLUME
2
when an object obliquely (i.e., at an angle) impacted Earth very early in its history—around four and one-half billion years ago. This time frame corresponds to about 50 million years after the birth of the solar system, at a time when Earth is believed to have coalesced to about 90% of its current mass. Thus, Earth was in the latter stages of its development. The collision between Earth and the impacting body (in some theoretical models, this impacting body is proposed to be about the size of the planet Mars) caused an enormously energetic collision. Most of Earth is assumed to have melted, and a small portion of its mass was thrown out into space. This thrown-out debris from the collision formed a ring around Earth, some of which fell back onto Earth; the remainder eventually clumped together to form the Moon. In the GIT model, the Moon was initially close to Earth. Over eons of time the Moon and Earth have slowly moved away from one another, and the rotation rate of Earth has slowed. These
processes are still occurring within the EarthMoon system. The GIT is able to overcome the shortcomings of the three pre-Apollo hypotheses. Some of its more important features are: • It explains the close distance (approximately 240,000 mi, or 386,000 km) between Earth and the Moon. • It explains why the Moon has such a low density when compared to that of Earth (3.3 g per cu cm for the Moon, 5.5 g per cu cm for Earth) Computer models indicate that debris thrown into orbit from the collision came mainly from the rocky mantles of Earth and the impacting body. For most large objects in our solar system, lighter materials occur away from the core (i.e., in the mantle and crust), while the core itself contains denser materials, such as iron. Earth’s iron had already stratified into the core before the impact, leaving only the iron-depleted, rocky mantle to be thrown away from Earth. This ejected iron-poor mantle material, which then formed the Moon. • It explains why the Moon has a low volatile content. When materials were thrown into orbit about Earth, the gaseous volatile materials (such as water, carbon dioxide, and sodium) would have evaporated into space, while less volatile materials would have remained to form the Moon. • It explains the formation of the early magma ocean. The material tossed into orbit around Earth would have been very hot due to the impact with Earth. As this material accumulated to form the Moon it would have become even hotter. This very hot material would produce a thick layer of liquid—the magma ocean that today consists of the lunar anorthosite (the ancient lunar surface rock ) crust.
Earth-Moon Isotope Ratios Similarities between oxygen isotope ratios on Earth and Moon deserve to be explored in greater depth because of their importance to the giant impact theory. Indeed, a major reason why the this explanation has gained so much support in
Evidence gleaned from NASA’s Lunar Prospector spacecraft presents additional empirical evidence that the Moon was formed by a massive Earth-and-protoplanet collision. At a presentation of the Lunar and Planetary Sciences Conference in Houston in March 1999, scientists with the Lunar Prospector project said gravity and magnetometer measurements indicated the existence of a lunar iron core of between 140 to 280 mi (225 to 450 km) in radius. From this information, the mass of the lunar core is estimated to be between just 2 and 4% of the total mass of the Moon. By comparison, around 30% of Earth’s mass is contained in its iron-nickel core. The relatively small size of the lunar core is seen as evidence that the Moon was formed when a planet-sized body (currently thought to be about the size of Mars) struck Earth late in the formation of the solar system. The massive collision stripped off the upper, lighter layers of Earth, which later formed the Moon. “This impact occurred after Earth’s iron core had formed, ejecting rocky, iron-poor material from the outer shell into orbit,” said Prospector principal investigator Alan Binder. This material—which formed the Moon—contained little iron that could sink down and form the lunar core. Advances in Modeling the Giant Impact Any hypothesis about the Moon’s formation can only be tested indirectly. One cannot go back in time to witness the development of the solar system, or perform an experiment by crashing one planet into another (at least not yet!). So the planetary scientist must turn to theoretical models to “test” various hypotheses SCIENCE
IN
DISPUTE,
VOLUME
2
ASTRONOMY AND SPACE EXPLORATION
• Finally, the giant impact theory explains the high angular momentum of Earth-Moon system. As previously stated, recent versions of the GIT model depict the collision between Earth and the impacting body as being at an oblique angle (i.e., away from vertical). This type of impact would have converted part of the linear momentum of the impacting body to the orbital momentum of the debris ring around Earth, and converted the rest of its momentum to Earth, making it rotate faster.
recent years is the evidence gathered concerning oxygen isotopes of materials from both Earth and the Moon. In the October 12, 2001 issue of the science magazine Science ETH Zurich, researchers showed that the Moon and Earth possess identical ratios of oxygen isotopes. Using laser fluorination, a technique developed only in the 1990s, some 31 samples of various types of lunar rocks returned from the Apollo missions were analyzed. The group performing the research was able to measure the isotope ratios of O16, O17, and O18 (denoting different isotopes of oxygen). This research was important because it was done with a precision 10 times greater than that attainable with previous techniques. According to Uwe Wiechert, the primary author of the article, scientists already knew that Earth and the Moon were similar to each other with respect to oxygen isotope ratios. However, because the instruments and techniques previously used were not very accurate, no one could determine whether or not the two bodies shared basically identical materials. This question now seems to be settled—Earth and Moon did indeed form from the same material.
5
and theories. The advent of the modern digital computer has proven to be indispensable for this task. As computing power has increased, so has the sophistication of the mathematical models being tested. And recent computer “simulations” (i.e., computer-modeling runs) have added more credence to the giant impact theory. In recent computer-modeling research into lunar formation, Robin Canup, senior research scientist of Space Studies at Southwest Research Institute in Boulder, Colorado, added support to the giant impact theory in her 2001 article in the journal Nature. Canup’s research involved a collision simulation that showed how the Moon could have formed from a large impact with a “proto-Earth” (i.e., a body that would eventually form into the planet Earth). Although the GIT had earlier encountered difficulties when reconciling some characteristics of Earth, and with the development of an integrated model of the EarthMoon system, Canup and other researchers have computer-modeled the planetary dynamics involved in the interaction of the impacting body and the “proto-Earth” to show that the giant impact theory can convincingly account for the origin of the Moon, as well as the Earth-Moon system.
ASTRONOMY AND SPACE EXPLORATION
In other research, a relatively new model used by researchers at the Southwest Research Institute in Boulder, and the University of California at Santa Cruz, has resulted in high-resolution computer simulations showing that an oblique impact by an object with just 10% of the mass of Earth could have ejected enough iron-free material into space to eventually coalesce into the Moon. This would also have left Earth with its present mass and rotation rate, the researchers report. Conclusion The giant impact theory is the leading explanation of the Moon’s formation. Planetary scientists have largely discredited others, especially coaccretion, capture, and fission, which seemed more or less plausible until direct evidence from the Moon showed otherwise. Since its proposal in the mid-1970s, the ascension of the GIT to preeminence can be attributed to two main factors: its correlation with the known characteristics of the Moon, and improvements in mathematical modeling techniques, due in large part to the enormous increase in computing power over the decades since its introduction. The exact sequence of events that led to the formation of the Moon may never be fully known. However, mounting physical evidence, coupled with sophisticated computer simulations, has eliminated many previously popular hypotheses, while reinforcing the general validity of the giant impact theory. —PHILIP KOTH
Viewpoint: No, while the giant impact theory is widely regarded as a better explanation for the Moon’s formation than previous ones, it still has many problems that must be resolved. Before the Moon was visited by U.S. astronauts of the Apollo missions, three general explanations about the Moon’s origin were considered. The fission hypothesis claimed that the Moon was spun off from Earth’s mantle and crust at a time when Earth was still forming and rotating rapidly on its axis. Since both objects were presumed to originate as one body, this idea gained support because the Moon’s density is similar to the density of rocks just below Earth’s crust. One difficulty with this explanation is that the angular momentum of Earth—in order for a large portion to break off—must have been much greater than the angular momentum of the present Earth-Moon system. Calculations show that there is no reasonable way for Earth to have spun at this required rate. The fission hypothesis does explain why the Moon’s core is so small (or perhaps even absent), and why Earth and the Moon are so close together. It does not explain why the Moon has few volatile materials, such as water, carbon dioxide, and sodium, and why the magma ocean formed. The binary accretion hypothesis maintained that the Moon coalesced (“accreted”) as an independent protoplanet in orbit around Earth. This hypothesis proposed that Earth, the Moon, and all other bodies of the solar system, condensed independently out of a primordial solar nebula. The binary accretion hypothesis was able to explain why Earth and the Moon formed so close together, but had difficulty explaining the discrepancy in chemical content between both bodies, the origin of the magma lunar ocean, the Moon’s lack of iron (and a substantial core); and the lack of appreciable amounts of volatile materials. The capture explanation contended that the Moon formed independently from Earth, elsewhere in the solar system, and was eventually “captured” by Earth. This hypothesis has difficulty explaining how the capture took place from an orbital dynamics point of view, and also fails to explain why lunar rocks share the same isotope composition as Earth. Like the accretion hypothesis, the capture hypothesis does not explain why the Moon has so little iron in its core, why it has so few volatile materials, and why the lunar magma ocean formed. The Giant Impact Theory Given that each of the three explanations had its own particular
MYTHS ABOUT THE MOON There have been many beliefs and myths about the Moon, such as its association with mental illness, werewolves, and increases in crime and accidents. People once believed that a full Moon (likened with the goddess Luna) caused people to go insane. The Moon was also thought to have been Diana, in her incarnations as the goddess of the woodland, an evil magical witch, and the goddess of the sky. In ancient times the heavens were considered to be perfect, but it can be easily seen that the Moon has dark patches. To explain these imperfections, the belief of the “Man in the Moon” became popular. In ancient times, the figure was visualized as a man leaning on a fork upon which he carried a bundle of sticks. The biblical origin of this fable is from Numbers 15:32–36 (in which God commands Moses to gather the community to stone to death a man caught gathering wood on the Sabbath). Others thought the Man in the Moon looked like a rabbit. In
strengths and weaknesses, it was hoped that the research and exploration of the Moon by the Apollo astronauts, and the instruments used in lunar orbit and on the Moon, would indicate which was correct. This, however, did not happen. After studying Moon rocks and close-up pictures of the Moon, scientists, for the most part, disclaimed the three hypotheses (which were now considered inadequate) and proposed what is now regarded as a more probable explanation of the Moon’s formation, the giant impact theory, or GIT.
Working independently from the PSI scientists, Alfred Cameron and William Ward, both of Harvard University’s Smithsonian Center for Astrophysics, concluded—by studying the angular momentum in the Earth-Moon system—that an impact from a body at least as large as Mars could have supplied the rough material for the Moon and also given system its observed angu-
—William Arthur Atkins
lar momentum. Much of Earth’s crust and mantle, along with most of the impact planetesimal, disintegrated and was blasted into orbit thousands of miles high. Loose material in orbit can coalesce if it is outside the “Roche limit,” the distance interior to which tidal forces from the central body prevent a body from forming. The material outside this limit formed the Moon; the material inside the limit fell back to Earth. Early estimates for the size of the impact planetesimal were comparable to the size of Mars, but computer simulation models by U.S. scientists in 1997 showed that the body would have had to be at least two-and-a-half to three times the size of Mars. Early Reactions to the Giant Impact Theory The GIT was viewed skeptically for about a decade after its introduction, because most planetary researchers generally dislike catastrophic solutions to geophysical problems. Hartmann, in fact, said that such solutions were “too tidy.” On the other hand, proponents suggested that such a catastrophic event is actually very random, since by the time the planets were near the end of their formation, there were not many large objects left in the solar system. Earth, it is assumed, just happened to be the planet struck by this large planetesimal. Experts predicted that if the formation of the solar system could be “rerun,” Venus or SCIENCE
IN
DISPUTE,
VOLUME
2
ASTRONOMY AND SPACE EXPLORATION
In 1975 Planetary Science Institute (PSI) senior scientists William K. Hartmann and Donald R. Davis proposed that early in Earth’s history, over 4 billion years ago, a large object struck Earth. Their work was based on research performed in the Soviet Union during the 1960s concerning the formation of planets from countless asteroid-like bodies called planetesimals. The Russian astrophysicist V. S. Safronov pioneered much of this work.
one Hindu story, the god Indra posed as a beggar. A hare could not find food so threw itself into a fire. Indra took the body and placed it on the Moon for all to see. Native Americans thought that a brave had become so angry with his mother-in-law that he killed her and threw the body into the sky. It landed on the Moon so that all would see the crime. Danish folklore is thought to have originated the belief that the Moon was a wheel of curing cheese. This fable is thought to have been a precursor to the belief that the Moon is made of green cheese. In any case, professors Ivan Kelly at the University of Saskatchewan, James Rotton at Florida International College, and Roger Culter at Colorado State University have examined over 100 studies on lunar effects and concluded that there was no significant correlation between the Moon and any of the beliefs cited in popular folklore.
Mars might have ended up with a large moon instead. The chemical composition of Earth and the Moon are clearly predicted to be similar in this model, since a portion of Earth went into forming the Moon and a portion of the impact planetesimal remained in Earth. The Moon would be deficient in iron and similar metals if the impact occurred after those elements had largely sunk to the center of Earth. The Moon should also be quite dry because the material from which the Moon formed was heated to a high temperature in the impact, evaporating all the water. The Turning Point In October 1984 a conference on the origin of the Moon was held in Kailua-Kona, Hawaii. Discussions held during the conference added consensus to the validity of the new giant impact theory, and further dismissed the three traditional explanations of fission, binary accretion, and capture. Although these had added much information to the explanation of the Moon’s formation, they were set aside in favor of a better explanation that coincided with better technology now available to planetary scientists. Computer methods had improved significantly over the years, and more advanced computer simulations of the proposed giant impact could now be performed. The understanding of impact processes has also improved due to experiments and studies of large terrestrial craters. The study of planet formation had also provided more knowledge, specifically on how planets formed from objects that were themselves still forming. Such information led to the idea that several large bodies could easily form near each other.
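The computer simulations referred to here were sophisticated hydrodynamic and N-body accretion codes; none of that software is reproduced in this essay. The short Python sketch below is only a toy illustration of the general idea—debris particles orbiting Earth that are removed if they wander inside the Roche distance and merged when they pass close to one another. Every parameter in it (particle count, merge radius, time step, initial orbits) is invented for the example and should not be read as coming from the published models.

import numpy as np

# Toy model: a debris ring around Earth collapsing into "moonlets".
# Particles feel only Earth's gravity (their mutual gravity is ignored);
# any two particles that pass within an artificially large merge radius
# are combined, conserving mass and momentum.  SI units throughout.

G = 6.674e-11            # gravitational constant
M_EARTH = 5.97e24        # kg
R_EARTH = 6.371e6        # m
ROCHE = 2.9 * R_EARTH    # rough Roche distance ('about three Earth radii')

rng = np.random.default_rng(42)
N = 200
r = rng.uniform(3.0, 8.0, N) * R_EARTH            # start between 3 and 8 Earth radii
theta = rng.uniform(0.0, 2.0 * np.pi, N)
pos = np.column_stack((r * np.cos(theta), r * np.sin(theta)))
speed = np.sqrt(G * M_EARTH / r) * rng.normal(1.0, 0.03, N)   # near-circular, small kicks
vel = np.column_stack((-speed * np.sin(theta), speed * np.cos(theta)))
mass = np.full(N, 7.35e22 / N)                    # one lunar mass, split evenly

def accel(p):
    """Acceleration from Earth's point-mass gravity at positions p."""
    d = np.linalg.norm(p, axis=1, keepdims=True)
    return -G * M_EARTH * p / d**3

def merge(pos, vel, mass, merge_radius=2.0e6):
    """Repeatedly combine any pair of particles lying within merge_radius."""
    merged = True
    while merged:
        merged = False
        for i in range(len(mass)):
            d = np.linalg.norm(pos - pos[i], axis=1)
            d[i] = np.inf
            j = int(np.argmin(d))
            if d[j] < merge_radius:
                m_new = mass[i] + mass[j]
                pos[i] = (mass[i] * pos[i] + mass[j] * pos[j]) / m_new
                vel[i] = (mass[i] * vel[i] + mass[j] * vel[j]) / m_new
                mass[i] = m_new
                keep = np.ones(len(mass), dtype=bool)
                keep[j] = False
                pos, vel, mass = pos[keep], vel[keep], mass[keep]
                merged = True
                break
    return pos, vel, mass

dt = 400.0                                        # seconds per leapfrog step
for step in range(5000):
    vel += 0.5 * dt * accel(pos)                  # kick
    pos += dt * vel                               # drift
    vel += 0.5 * dt * accel(pos)                  # kick
    inside = np.linalg.norm(pos, axis=1) <= ROCHE
    pos, vel, mass = pos[~inside], vel[~inside], mass[~inside]   # lost back to Earth
    pos, vel, mass = merge(pos, vel, mass)
    if step % 1000 == 0 and len(mass):
        print(f"step {step}: {len(mass)} bodies, largest {mass.max():.2e} kg")

Even a toy like this reproduces the qualitative behavior described in the text: material inside the Roche limit is lost back to Earth, while debris outside it gradually collects into a handful of larger "moonlets."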
During the 1990s, Robin Canup wrote her Ph.D. dissertation concerning computer modeling of debris collected into “moonlets” (as she called them), and eventually collected into the Moon. Later, as an astrophysicist at the Southwest Research Institute in Boulder, Colorado, Canup admitted that, “At first [the giant impact theory] was seen as ad hoc, probably unlikely, possibly ridiculous.” But the evolving giant impact theory was eventually able to resolve many of the problems associated with the earlier three hypotheses. For instance, it explained why Earth and the Moon were so close, why the Moon has little or no iron core; why the moon has a low volatile content; how the early magma ocean formed; and why the Earth-Moon system had a very high angular momentum. The Skeptics Although the GIT explains many of the mysteries that the former three theories could not, it is still incomplete, and therefore is not yet a generally accepted theory. Even though an advocate for the GIT, Canup has a list of research questions she would like to see addressed. These include: (1) make the giant impact model work with just a single impact,
rather than with multiple impacts; (2) explain how a planetesimal formed elsewhere in the solar system; (3) explain the formation of Charon, Pluto’s moon, which scientists think might also have been a result of a giant impact; and (4) chemically match the Moon’s characteristics with what should have happened in the proto-lunar debris cloud. One major problem with the GIT is that it seems to require that Earth be completely melted after the impact, as this would be the only way the huge crater caused by the impact could have been erased. Earth’s geochemistry, however, does not indicate such a radical melting. There is an intense effort underway to understand the processes that might have operated within Earth at its formation and during its development. Until this happens, the chemical evidence for or against the GIT has not been proven. Planetary scientists at a 1998 planetary conference in Monterey, California, raised further doubts about the validity of the GIT in three main areas: the evidence for a giant impact, the extent of melting in early Earth, and how Earth’s core was formed. Research carried by Alfred Cameron at Harvard, in collaboration with Canup, stated that there were numerous parameters still to be tested by computer simulations. Cameron admitted that he has not yet researched all the possibilities of the ratio of the mass of the growing Earth to that of the impact planetesimal, or the total range in angular momentum of the Earth-Moon system. Further, Jay Melosh of the University of Arizona pointed out that the physical properties (called the equation of state) that he, Cameron, and others use in impact simulations is far from perfect and might lead to unrealistic results. Cameron agreed, noting that he “considers this game [of impact simulations] very primitive so far.” As computers become more complex, better computer simulations will be created. However, the solar system’s chaotic past makes it impossible to repeat history. Planetary scientist David Stevenson of the California Institute of Technology says, “None of the scenarios for the Moon’s formation is highly likely.” Currently, no simulations of collisions can form the Moon out of debris thrown into orbit. Planet formation, and the later formation of crusts, mantles, and cores, is so complicated that much of the evidence for or against the GIT has already been destroyed. Furthermore, insufficient data is available to test all the possibilities for this explanations. One of its most important assumptions is that Earth would have been mostly molten when it formed. This scenario would logically lead to a nearly complete separation of the elements present when the core formed. The densest materials would eventually settle at Earth’s core, less dense materials would
settle further away from the core, and the least dense materials would settle at or near the surface of Earth. However, the composition of Earth’s upper mantle today suggests incomplete core formation. In addition, the degree of separation of elements varies with pressure, temperature, and the amount of available oxygen and sulfur. Because experiments to determine such factors are extremely difficult to carry out, the molten nature of early Earth and the early element concentrations are still a mystery, and one important prediction of the GIT cannot be verified scientifically. “It’s good news that the best model [the giant impact theory] gives the most plausible result,” says planetary scientist David Stevenson of the California Institute of Technology, “[b]ut this will not be the last word on the subject. The models still have their limitations,” and “[t]hey may not be capturing all the dynamics of the impact correctly.” Conclusion What processes formed the Moon and Earth? As humans were not there at the time, the most that can be done is to outline a possible course of events which does not contradict physical laws and observed facts. This was done with the earlier three explanations, and as new information was discovered, those were succeeded by the giant impact theory. For the present, rigorous mathematical methods cannot deduce the exact history of lunar formation, and thus cannot verify the validity of the GIT or fully determine the various steps of the origin of the Moon. However, it may be possible to show which steps are likely and which steps are unlikely. Although not proven to everyone’s satisfaction, the GIT explains much about Earth and the Moon. Combined with our current understanding of accretion, it leads to a dynamic and somewhat terrifying picture of the first several hundred million years for both bodies. —WILLIAM ARTHUR ATKINS
Further Reading
Lunar Sample-Return Stations: Refinement of FeO and TiO2 Mapping Techniques.” Journal of Geophysical Research 102 (1997). Canup, Robin M., and E. Asphaug. “Origin of the Moon in a Giant Impact Near the End of Earth’s Formation.” Nature 412 (2001): 708–12. ———, and Kevin Righter, eds. Origin of the Earth and Moon. Tuscon: University of Arizona Press, 2000. Hartmann, William K. “A Brief History of the Moon.” The Planetary Report 17 (1997): 4–11. ———. Astronomy: The Cosmic Journey. Belmont, CA: Wadsworth Publishing Company, 1989. ———, R. J. Phillips, and G. J. Taylor. Origin of the Moon. Houston: Lunar Planetary Institute, 1986. ———, and Ron Miller. The History of Earth. New York: Workman Publishing Co., 1991. Langseth, Marcus G. Apollo Moon Rocks. New York: Coward, McCann & Geoghegan, 1972. Lucey, P. G., G. J. Taylor, and E. Malaret. “Abundance and Distribution of Iron on the Moon” Science 268 (1995). Melosh, H. J., and C. P. Sonatt. When Worlds Collide: Jetted Vapor Plumes and the Moon’s Origin. Tuscon: Department of Planetary Sciences and Lunar and Planetary Laboratory. University of Arizona, 1986. Nozette, Stuart, et al. “The Clementine Mission to the Moon: Scientific Overview.” Science 166 (1994): 1835–39. Spudis, Paul D. The Once and Future Moon. Washington, DC: Smithsonian Institution Press, 1996. Stevenson, D. J. “Origin of the Moon—The Collision Hypothesis.” Annual Review of Earth and Planetary Sciences 15 (1987): 614. Taylor, S. R. “The Origin of the Moon.” American Scientist 75 (1987): 468–77.
Blewett, D. T., P. G. Lucey, B. R. Hawke, and B. L. Jolliff. “Clementine Images of the
———. “The Scientific Legacy of Apollo.” Scientific American 271, no. 1 (1994): 40–7.
Ahrens, T. J. “The Origin of Earth.” Physics Today (August 1994): 38–45.
Should a manned mission to Mars be attempted?
Viewpoint: Yes, a manned mission to Mars is the next logical step for space exploration. Viewpoint: No, a manned mission to Mars would be an enormously expensive enterprise with insufficient return to justify it.
Mars has long been an object of peculiar fascination. As one of the nearest worlds beyond Earth, it is naturally one of the most accessible for visitation by robot or manned spacecraft. Numerous unmanned missions have targeted the so-called Red Planet, from the Mariner and Viking spacecraft of the 1960s and 1970s to the more recent Mars Pathfinder, whose robot explorer Sojourner generated widespread public interest in 1997. For more than a century, Mars also has been regarded as a likely—or at least possible—location of extraterrestrial life. In 1877 the Italian astronomer Giovanni Schiaparelli identified features on the Martian surface that he called canali (channels). While the intent was only to indicate the presence of channel-like markings on the surface, the English mistranslation of “canali” into “canal” implied intelligent origin. The U.S. mathematician Percival Lowell believed firmly in the existence of intelligent life on Mars, and spent years sketching the Martian surface from the observatory he founded in Flagstaff, Arizona, in the 1890s. While Schiaparelli and Lowell carried out their systematic searches for Martian life, other imaginations were at work as well. H. G. Wells cemented the concept of extraterrestrials on Mars with his classic book War of the Worlds (1898), in which the Martians were portrayed as antisocial creatures intent on the subjugation of Earth. Hostile Martians soon emerged in movies and comic books. The idea of Martians, especially unpleasant ones, worked its way deeply into the public psyche. In 1938 Orson Welles dramatized War of the Worlds, transferring it to New Jersey and reporting the Martian invasion over the radio as a real-time event. Citizens already jittery of the events unfolding in Europe panicked, convinced that Earth was actually under siege by aliens. As recently as 1996, the Hollywood movie Mars Attacks! satirically portrayed a U.S. president pleading, “Can’t we all just get along?” and then being skewered by a bulbous-headed, glass-helmeted Martian.
Despite this popular imagery, no evidence of life on Mars has so far been discovered. The planet nevertheless remains an inviting next step for human exploration and, in the minds of visionaries, colonization. Mars is nearby, with a flight time of many months rather than many years. There is evidence that water stood on Mars’s surface in the past, there is water ice in the polar caps, the temperatures are cold but not much colder than those in Earth’s arctic regions, and there is at least a thin atmosphere. By contrast, the other nearby planets, Mercury and Venus, are unsuitable for a manned venture. Both are intolerably hot, and Venus is swathed in a thick atmosphere that creates crushing pressures at its surface.
For these practical reasons, as well as philosophical ones about human curiosity and our desire for challenge, many argue that Mars is a sensible place for humans to make their next great leap of exploration. The astronomer Carl Sagan argued eloquently for such an undertaking, commenting in his book Cosmos (1980) that perhaps Lowell was right after all: there are indeed Martians, and we are they.
If we are to explore other worlds, opponents argue, we should use robotic spacecraft. Since they do not need to be designed to keep people alive inside them, they are much smaller and cheaper to construct than manned vessels. Robotic missions costing tens of millions of dollars—well under 1% of the cheapest proposed scenarios for manned Mars missions—can be flown quickly to their targets, capturing a wealth of data with the advanced imaging and data storage systems now available. All of these arguments point out the central difficulty facing proponents of a manned Mars mission: An initial manned exploration of Mars would necessarily be a cursory first step into human interSCIENCE
(Photograph by Roger Ressmeyer. © NASA/Roger Ressmeyer/CORBIS. Reproduced by permission.)
Opponents to a manned mission to Mars believe that there are more effective ways to invest vast amounts of financial and technical resources. Manned space travel is hideously expensive; a manned mission to Mars would cost tens of billions of dollars by the most optimistic estimates. Given the penchant for ambitious space missions to substantially overrun their budgets, estimates approaching a half trillion dollars have been advanced. This staggering cost, say opponents, defeats all arguments about the spirit of exploration, spin-off technology, and the necessity of expanding to other worlds for the survival of humans as a species.
Mars, as photographed by the Viking Orbiter.
11
planetary exploration. The objectives and returns of the mission would be relatively limited, yet the cost would remain huge. Arguments about the necessity of colonizing Mars for species survival, the potential for cultural development and improved international cooperation, acquisition of spin-off technology, and the development of resource-producing colonies are all visionary, long-term ideas. In contrast, today’s society demands a rapid return on investments. E-mail and the Internet provide means of widespread instant communication. Entertainment and commercials thrive on fastmoving images only seconds in duration. Budgets seem perennially insufficient, yet pressure to produce a result before the competitor is constant. Undeniably, the $20 billion to $500 billion spent on a manned Mars mission could be otherwise directed to projects with immediate benefits: technology, medical research, well-defined robotic missions of space exploration, and a host of other applications. Cynics lament the putative greediness of the “me” generation, but paying attention to immediate needs has been a necessity for virtually all of human history. The technology that allows us to devote large resources to distant, egalitarian goals is still in its infancy; prior to the nineteenth century, life for most of humanity was merely survival. Indeed, for much of present-day humanity, it still is. Given these conditions, our tendency to address immediate goals is hardly unreasonable. However, it is possible at the beginning of the twenty-first century to at least contemplate longer-term objectives such as the colonization of another world. Decisions about where and how we allocate such long-term resources underlie the arguments that follow. —JEFFREY HALL
Viewpoint: Yes, a manned mission to Mars is the next logical step for space exploration. Mars Pioneers History records that humans, by nature, are explorers. The reasons for exploring our solar system, presently and in the future, parallel the reasons that prompted pioneers to explore new lands on Earth. In sending a manned mission to Mars, our culture would be continuing a tradition that defines the human race. As a result of past explorations, people have gained enormous benefits. The exploration of our solar system began in earnest with the Apollo manned missions to the Moon from 1969 to 1972, and continued with robotic and satellite missions to all of the planets except for Pluto. Exploration of the solar system is a reality in the twenty-first century.
The Mars Society, whose purpose is to promote the exploration and settlement of Mars, advocates beginning the exploration of Mars for human colonization as soon as possible. Its members believe discovery missions can be completed inexpensively with existing technology, and produce enormous benefits. The successful exploration and colonization of Mars would show that humans have achieved greater maturity, proving that people are capable of developing colonies away from Earth in order to improve society and to safeguard the future. The exploration of Mars gained even more attention in the late 1990s with the discovery of possible ancient life on Mars and the examination of the planet by the Mars Pathfinder. As scientific knowledge and technology continue to
improve, especially with respect to exploration of the solar system, a stable foundation is being established to eventually set foot on Mars. The first manned mission to Mars would provide the initial contact that humans need to create a living community on the planet. According to the Goddard Space Flight Center of the National Aeronautics and Space Administration (NASA), a manned mission to Mars lies “on the very edge of our technological ability.” The accomplishment of such a goal would rapidly and dramatically increase our knowledge in numerous scientific and technological areas. For example, it would provide a more detailed understanding of Mars and, in turn, would provide a more complete understanding of the geologic processes and evolution of Earth. A manned Mars mission would answer many questions and increase the number of new questions that people seek to understand. For too long people have believed that reaching Mars requires very complicated technologies, interplanetary spaceships only imagined in the minds of science-fiction writers, and budgets reaching a half trillion dollars. Contrary to these beliefs, it is possible to conduct a Mars mission using available technologies (or at least with modest advancements in such technologies) that have been developed since the early space missions of the 1960s and from resources available on Mars. According to James Hollingshead and Bo Maxwell of the Mars Society, “In the 1960s, America’s economy was substantially boosted by spin-offs from the Apollo programme to reach the Moon.” Hollingshead and Maxwell contend that similar benefits would result from spin-offs of Martian exploratory missions. Why would humans go to Mars? To benefit humanity. The following discussion examines
the most common technological, scientific, economic, and philosophical justifications for the exploration and colonization of Mars. Technology and Science Technological and scientific research and advancements are important reasons why a manned Mars mission should be attempted. The development of new and improved technologies for the mission would enhance the lives of those on Earth, and could include such innovations as more efficient propulsion systems for the transportation industry and better life-support systems in the medical community.
Dr. Michael Duke of NASA's Johnson Space Center has indicated that a manned Mars mission would help to answer three important scientific questions: (1) What caused the change in atmospheric conditions of Mars? (2) What do these changes mean with respect to environmental changes that have occurred, and are presently occurring, on Earth? (3) Did life begin on Mars like it did on Earth; if so, can supporting evidence be found on Mars? A manned mission to Mars would produce a wealth of scientific information about issues people have pondered for years, and would in all likelihood increase the number of questions that have yet to be pondered. Planetary Insurance Dr. Richard Poss, a University of Arizona humanities professor, proposed a scenario of planetary insurance: if Earth were destroyed, western civilization would continue if Martian colonies already had been established.
These photos of the surface of Mars indicate that water may once have flowed on the planet’s surface. (© AFP/CORBIS. Reproduced by permission.)
The expansion of our scientific knowledge also would increase from the exploration of Mars. This intriguing planet can tell us about the origin and history of our planet and other planets, and perhaps even about the creation of life on Earth. The atmosphere of Mars consists mostly of carbon dioxide, and possesses an average surface pressure of about 0.01 Earth atmospheres. While the surface temperature of Mars may reach a high of 77°F (25°C) on the equator, most of the time temperatures are much colder. The average temperature on Mars is about -67°F (-55°C). Recent evidence from the 1990s suggests that Mars contains water, the one key ingredient upon which any practical colonization effort would depend. In fact, based on a geophysical study by Dr. Laurie Leshin of Arizona State University, Martian water could be
two to three times as abundant as previously believed. Because the pressure and temperatures are so low, water cannot exist in liquid form on the surface of Mars. Regardless of this, the Mariner 9 and Viking missions of the 1970s observed old surface features that indicate both running and standing water. Current understanding presumes that the Martian atmosphere was once thicker and warmer, even possibly similar to Earth’s early atmosphere. Such characteristics found on Mars are key to understanding Earth’s history and future.
KEY TERMS
DEUTERIUM: A heavy hydrogen isotope with one proton and one neutron.
PLANETOLOGY: A branch of astronomy that studies the origin and composition of the planets and other condensed bodies in the solar system such as comets and meteors.
SPACE STATION: Any facility that allows humans to live in space for extended periods of time.
SUPERNOVA: A rare celestial phenomenon involving the expulsion of most of the material in a star, resulting in a luminous power output that can reach billions of times that of the Sun.
TERRAFORMING: The process of transforming a given environment to make it more like Earth.
According to Dr. J. Richard Gott III, a Princeton University astrophysics professor, "We live on a small planet covered with the bones of extinct species, proving that such catastrophes do occur routinely." Gott believes, "Asteroids, comets, epidemics, climatological, or ecological catastrophes or even man-made disasters could do our species in. We need a life insurance policy to guarantee the survival of the human race." With all people on one planet, a catastrophe could theoretically wipe out all human life—or at least a majority of life, thereby eliminating the chances of recovery. With a new Mars colony away from Earth, humankind would have a better chance at long-term survival.
Cultural and Economic Evolution Another important reason why a manned Mars mission should be attempted has been termed “cultural and economic evolution.” Mars is the next logical step in the cultural evolution from Earth. A first mission, and eventually other missions, would bring colonists and supplies from Earth to Mars. Migrations of people in the past have eased overcrowding and the depletion of natural resources from the original homeland, thus improving economic conditions. These migrations have almost always concluded with a permanent settlement. More importantly, the newly settled establishments eventually have become economically self-sufficient, and have provided economic benefits back to the homeland. The settlement of Mars could begin the process of relieving population pressures on Earth.
A Martian colony also could provide a select few colonists with a territory that is not heavily laden with bureaucracy and frustrating
regulations, and where people with innovative ideas could maximize their benefits. This viewpoint is a far-future ideal, but it is possible that Mars eventually could serve the same role in the twenty-first century that the United States did in the eighteenth century. Without anywhere to expand, there is a danger that society will stagnate. It is important for society to grow, and almost all progress is driven by that need. The exploration and settlement of the planet Mars could enable continued human development. International Cooperation A manned mission to Mars also should be attempted because it would help to bring about international cooperation. Such a mission would be a huge undertaking; it would be impossible for one country to provide all of the necessary financial support and technical know-how. An international Mars exploration effort has the potential to bring about a sense of global unity, with many nations cooperating to accomplish the mission. Extraterrestrial Life Is there life anywhere outside of Earth? This question has been asked for generations. There are many reasons to believe that Mars may once have held single-cell life. In the August 16, 1996, issue of Science, David McKay from the Johnson Space Center and other scientists announced the first identification of organic compounds in a Martian meteorite. The authors hypothesized that these compounds might show evidence of ancient Martian microorganisms. A less probable chance, but one that still exists, states that this life went beyond a single cell, and might continue to exist today. From a biological, philosophical, and theological point of view, discovering that life existed on Mars (or that life never existed) would make a profound statement to all of humankind. Second-Best Planet upon Which to Live The creation of a livable, artificial environment is technically feasible on Mars, and this planet is probably the most hospitable environment outside of Earth to adequately support a human presence. According to a 2001 Goddard Space Flight Center Web page on Mars, the planet is the “only real candidate for future human exploration and colonization.” Mars offers the opportunity to use its own resources to provide air for explorers to breathe and fuel for their vehicles. Mars also has materials that are rare and expensive on Earth. For example, Mars has five times more deuterium than Earth. Deuterium may well be needed in future fusion power plants.
Comparing the other planets of the solar system also shows the advantages of exploring Mars. The planet Mercury is too close to the Sun (with extremes in temperatures and radiation) and contains almost no atmosphere. Venus
is too hot (averaging 932°F [500°C]), with extreme surface pressures. The gas giants (Jupiter, Saturn, Uranus, and Neptune) do not provide a feasible landing surface. The moons of the gas planets are considered too far away to be a practical route for initial colonization of the solar system and are much more inhospitable than Mars. The outer planet Pluto is too far away, and too cold for easy colonization. The Moon is the only other likely body for human colonization. It has two problems that would limit its effectiveness: (1) the lunar day is about 29 Earth days in length, making it difficult for plants to survive, and (2) the Moon lacks an atmosphere, eliminating the necessary human radiation shield. Mars is the only place in the solar system where plants can grow with no artificial illumination and no massive radiation shielding. Mars also likely possesses large quantities of water that would help for exploration, base building, settlement, and terraforming.
Future Investment Investment in a manned Mars mission is reasonable when compared with the costs of current Earth projects. Potential monetary returns are high when compared to the investment costs. In addition, people thrive on a challenge. Between 1961 and 1972, with the Moon landing as the goal, NASA scientists and technicians produced technological innovations at a rate several orders of magnitude greater than the agency has shown since. Even so, NASA’s average budget in the late 1960s (in real dollars) was only 20% more than it was in the late 1990s ($16 billion 1998 dollars compared with $13 billion). Because NASA had a goal it was forced to create new technologies and to “think outside the box.” The challenge of a manned Mars mission is the same: to give the United States, and the world, a real return for its space dollars. Inspiration Dr. Robert Zubrin, a Colorado astronautical engineer and the president of the International Mars Society, stated in a 2001 interview, “I believe civilizations are like people. We grow when we’re challenged. Youth deserve challenge. They require it. They thrive on it. Out of that challenge we would get millions of new scientists, inventors, doctors, medical researchers.”
Another reason for a manned Mars mission is to inspire. The first manned Mars landing would serve as an inspiration for the world's children. About 100 million U.S. children will be in school during the next 10 years.
Comparative Planetology Another reason for a manned Mars mission is comparative planetology: by better understanding Mars and its evolution, a better understanding of Earth would be achieved. Unmanned Mars rovers can only conduct a limited amount of research. With requirements to travel long distances across rough terrain, climb steep slopes, and perform heavy lifting, all well beyond the abilities of robots, a crewed presence is necessary.
FOCUS ON ALH 84001 METEORITE
ALH 84001 is a meteorite (any metallic or stony material that survives flight through the atmosphere and lands on Earth). It was discovered in Allan Hills (for which it was named), Far Western Icefield, Antarctica, on December 27, 1984, by the National Science Foundation's ANSMET (ANtarctic Search for METeorites) expedition. It is estimated that ALH 84001 landed on Earth about 13,000 years ago. At the time of discovery ALH 84001 was shaped like a large potato and weighed about 4.25 lb (1.93 kg). ALH 84001 is important to current and future explorations of Mars because it is believed that the meteorite originated from molten lava about 4.5 billion years ago on an ancient Martian volcano. Scientists believe that ALH 84001 could only have originated from Mars because of the trace gases it contains. The composition of the atmosphere of Mars was verified from analyses performed by the Viking lander spacecraft in 1976. The Martian atmosphere is unique in the solar system; its composition mostly consists of carbon dioxide, but also includes the elements nitrogen, argon, krypton, and xenon. The Viking landers also found that the atmosphere of Mars has an unusual relative abundance of the particular isotope nitrogen 15, and lesser relative abundances of the two isotopes argon 40 and xenon 129. Those same isotopic relative abundances are found in ALH 84001, which strongly supports its Martian origin. In addition, telescopic observation from Earth shows that the Martian atmosphere is very rich in deuterium (heavy hydrogen), as is ALH 84001.
—William Arthur Atkins
If only 1% were inspired to pursue scientific and engineering educations, an additional one million scientists, engineers, technicians, and doctors would be produced. With one million more professionals we would produce a higher-educated populace. This would help to assure a more prosperous future for the human race.
For some time now there have been calls from a variety of people and organizations for a manned mission to the planet Mars. According to a report from the National Science Foundation released in June 2000, there is solid public support for such an endeavor. Myriad benefits have been predicted to follow from the human exploration, and possible habitation, of the Red Planet. A manned mission to Mars, however, cannot be considered in a vacuum. Such a mission would require a considerable commitment of financial resources over an extended period of time—resources that would not be available for other projects on Earth or in space.
Projected Mission Costs Many studies and analyses have been performed to determine the cost of a manned trip to Mars. These studies have been heavily influenced by the mission parameters that go into planning such a trip: the number of crew members traveling to Mars, types of launch and landing vehicles used, and duration of the crew’s stay before returning to Earth. Cost estimates range from as low as $20 billion for the Mars Direct plan touted by Robert Zubrin, the president of the International Mars Society, to as much as $450 billion according to a 1990 National Aeronautics and Space Administration (NASA) report. As calculating the cost of a manned mission to Mars is heavily dependent upon the organization or people creating the analysis, where can one look for guidance regarding the accuracy of various estimates? A reasonable analysis of these disparate estimates should include an historical review of recent manned space programs. The most recent, similar project is the International Space Station (ISS) program, which involves many countries but is led principally by the United States, which pays about three-quarters of the program’s costs. The ISS is the best indicator of what an internationally led Mars program would entail, because the two endeavors share several key characteristics; most notably, they are both long-duration space missions involving humans and are dependent upon an international consortium for funding and hardware.
Conclusion In 2001 then–NASA administrator Daniel Goldin said that the first Mars landing was expected between 10 and 20 years in the future. At the beginning of the twenty-first century, many pro-Mars groups are actively lobbying the U.S. Congress and the president to launch such a manned program. It would create thousands of new jobs, spur technological innovation and new inventions, excite children around the world to study science and math, and unite our society as a spacefaring civilization. —WILLIAM ARTHUR ATKINS
Some of the most vociferous advocates for a manned mission to Mars claim that a cascade of benefits would sooner or later follow such a pioneering effort, especially if the exploratory mission(s) resulted in the human colonization of Mars. As discussed in greater detail later in this essay, several of the most compelling benefits claimed by advocates are at best exaggerated, and at worst nothing more than wishful thinking and hype. This essay first examines the cost elements of the cost-versus-benefit analysis of sending people to Mars.
Cost-versus-Benefit Analysis for a Manned Mars Mission Because of the huge costs associated with a manned trip to Mars, at least with the technology and infrastructure currently available, any such mission would be a national or probably international endeavor. Whenever taxpayers’ money is used to fund a project as costly and risky as a manned mission to Mars, people have a right to be presented with a convincing argument for the project: a cost-versusbenefit analysis for a manned Mars mission must be presented to the public. Do the projected benefits from such a mission overwhelmingly justify the large financial costs, as well as the inherent risks, that would be encountered?
ISS as an Historical Guide The ISS had its genesis in a proposal made by President Ronald Reagan in his 1984 State of the Union message to construct a permanently crewed space station in low-Earth orbit (a few hundred miles above Earth’s surface). Authorized by Congress that same year, the space station was to be completed by 1994 and was to cost approximately $8 billion according to NASA. Since 1984 the space station program has repeatedly been modified and delayed. As a result, according to the U.S. General Accounting Office (GAO), by 1993 $11.4 billion had been spent on the space station program, with not a single piece of space hardware in orbit to show for the money spent.
Viewpoint: No, a manned mission to Mars would be an enormously expensive enterprise with insufficient return to justify it.
Some might argue that the ISS is not a good model for predicting how a manned Mars project might fare. For example, due to the project's changing political goals, the ISS went through many modifications that often did more harm than good. Specifically, the decision to allow Russian participation in the ISS project resulted in disastrous fiscal and scheduling problems, which in turn were a symptom of Russia's economic and political woes. But if the international effort to construct a space station in low-Earth orbit has encountered such enormous difficulties, why should the public believe that an international project to reach another world millions of miles away can avoid becoming mired in the same sort of endless delays and cost overruns? The prudent answer would seem to be that any notions of a manned Mars mission should be laid aside altogether. At the very least, an expedition to Mars should be postponed until NASA and its international partners can bring the ISS to fruition and then operate the ISS program safely and on budget over a period of many years. NASA would thereby demonstrate that it could, at least in some instances, deliver on what it promises in the way of costs and schedules.
The International Space Station is a good model for predicting how a manned Mars mission might fare. (National Aeronautics and Space Administration.)
The program eventually became international, even incorporating former U.S. competitors in space such as Russia. Overall, the space station program (renamed the ISS) has a nearly 20-year track record of being both grossly over budget and behind schedule. The latest estimates from NASA delay the completion of the ISS until 2004 at the earliest. In recent years, according to the GAO, NASA has dramatically underestimated ISS operational costs on the order of $2.5 billion per year by omitting expenditures such as space shuttle flights flown in support of the ISS. The GAO has predicted that the ISS eventually will incur a total cost of around $100 billion. Even taking inflation into account, the original estimate of $8 billion for a space station now appears woefully optimistic. An international manned mission to Mars might well become— despite the protestations of its advocates— another fiscal disaster like the ISS.
Many space researchers and scientists believe the vast financial resources gobbled up by the ISS could have been much better spent by funding a host of robotic exploratory missions. In the March 2001 issue of Policy Options, the project management consultant Denis Legacey noted, "For the tens of billions of dollars it [the United States] spends on the ISS, it spends only hundreds of millions on space exploration." Undoubtedly, a manned Mars mission would have a similar limiting effect on robotic missions of the future.
Alleged Benefits of a Manned Mars Mission Many significant benefits have been touted as the natural result of a mission to land people on Mars, including: major technological spin-offs; a unity of purpose and shared excitement for people throughout the world; and, if some cataclysm were to befall Earth, as a way to assure the preservation of humankind. However, these and other alleged benefits are only predictions that may or may not accrue from such a mission. As with the ISS program, all sorts of optimistic claims can be made that ultimately will fall far short of actual results. For example, advocates of a manned mission to Mars claim the effort would produce plentiful high-tech spin-offs that would benefit industry and improve life here on Earth. It is only reasonable to ask if the billions earmarked for a potential manned Mars mission would be better spent by directly funding government and industry research labs for the creation of new products and processes. Numerous research and political organizations outside of the space exploration community have suggested that several times more spin-offs and innovations could be achieved if the money were spent directly on research instead of on the space program.
Organizations that support a manned Mars mission, such as the International Mars Society, also have claimed that it should be the precursor to human settlement of Mars, which should be accomplished sooner rather than later because of several dire threats to the survival of the human race. The extinction of our species could come about either through man-made events (an allout nuclear war) or through a natural cataclysm. Mass extinctions of life on Earth have occurred repeatedly over the course of geologic time; indeed, some paleontologists claim mass extinctions have occurred on Earth once every 30 million years. In a 1984 paper, the paleontologists Dr. David Raup and Dr. J. John Sepkoski proposed a frequency of every 26 million years.
Many events could make life on Earth difficult: the impact of a large comet or asteroid, for instance, or a series of massive volcanic eruptions. Zubrin and other advocates for the habitation of Mars say that in the case of such a cataclysmic event, a self-sustaining Martian colony would ensure the survival of the species. However, the probability of such a naturally occurring apocalyptic event happening in the next several hundred years is exceedingly tiny. Large comets or asteroids capable of destroying life on Earth strike our planet very infrequently. “Killer” asteroids on the order of 10 miles in diameter—about the size of the asteroid said to have wiped out the dinosaurs 65 million years ago—are believed to hit Earth only once every 100 million years. Even assuming that preparations should be made to save the human race from possible extinction, a Mars colony would not necessarily be the best option. One alternative is using the myriad shelters (many of which are still in existence) that were constructed during the cold war in the latter half of the twentieth century. These shelters are located in many different countries and were designed to protect a limited number of people from the disastrous effects of a nuclear war. If the goal is the survival of humanity, such shelters could be stockpiled with food and fuel to keep a select group of people alive for years after a natural catastrophe, until the worst effects have subsided. Another alternative to a Mars colony is a self-sustaining Moon base, which appears more feasible because of evidence from NASA’s 1998 Lunar Prospector data indicating water ice trapped near the lunar poles. Earth shelters and a lunar colony both present cheaper alternatives to the massive investment and complexity integral to a permanent Mars colony. Moreover, either option could likely be achieved much more quickly than the colonization of Mars. Alternatives to a Manned Mars Mission Even by the most optimistic projections, a manned mission to Mars would cost tens of billions of dollars. Based on the excessive cost overruns and schedule delays of previous manned space programs, the total cost might well balloon into the hundreds of billions of dollars. This money would not be available to fund other important space initiatives. In place of a peopleto-Mars project, many scientists favor an extensive, but much less expensive, series of robotic missions to explore the solar system. Recent advances in robotics and intelligent processing systems make that choice even more viable.
The existing over-budget and behind-schedule space station project must be brought to fruition before another large manned mission is attempted. If and when that day comes, a
return trip to the Moon should be the first priority. The Moon is much closer to Earth than Mars, which means fewer onboard supplies and less fuel can be used to get to the Moon. A trip to the Moon would take days (as in the case of the Apollo missions), whereas most Mars proposals envision a trip lasting about six months. The long duration of a Mars voyage would mean much more radiation exposure for the crew (unless extensive shielding were used, which would add weight and cost to the spacecraft) than would be encountered on a lunar trip. A major argument for a Mars base over that of a lunar one is the presence of water on Mars. While early measurements of the lunar surface showed a complete absence of water, tantalizing evidence from the late 1990s indicated the possibility of water ice near the lunar poles, thus possibly eliminating a strong feature favoring Mars for colonization. In short, sending people to the Moon would be safer and cheaper than a similar manned mission to Mars, while still embodying many of its benefits, including providing a haven to guarantee human survival, creating technological spin-offs, and generating worldwide public attention and excitement.
Conclusion A manned mission to Mars is an idea whose time definitely has not come, and should be deferred indefinitely. A variety of factors argues against putting people on Mars in the foreseeable future. For any publicly funded endeavor, the benefits obtained from the project must be weighed against the projected costs. As detailed earlier in this article, many of the benefits touted by advocates of a manned expedition to Mars are questionable, and could be achieved more safely and economically by a similar lunar expedition. The costs of a manned Mars mission also deserve scrutiny. Large, international space projects can consume many times the originally projected funds, as the financial quagmire of the ISS amply demonstrates. Moreover, the money tied up in a Mars expedition would not be available for other space projects—projects that would most likely produce far more return on their investment than a Mars endeavor. Myriad robotic missions to the solar system and beyond could be funded for a fraction of the cost of a Mars expedition. Developing additional manned space stations or a lunar colony would be far less expensive, as well as safer, than initiating a human mission to Mars. —PHILIP KOTH
Further Reading
Bergreen, Laurence. Voyage to Mars: NASA's Search for Life Beyond Earth. New York: Riverhead Books, 2000.
Britt, Robert Roy. "The Top Three Reasons to Colonize Space."
"The Case for Mars: International Conference for the Exploration and Colonization of Mars." The Case for Mars.
Cole, Michael D. Living on Mars: Mission to the Red Planet. Springfield, N.J.: Enslow Publishers, 1999.
Engelhardt, Wolfgang. The International Space Station: A Journey into Space. Nuremberg, Germany: Tessloff Publishing, 1998.
Gaines, Ann Graham, and Adele D. Richardson. Journey to Mars. Mankato, Minn.: Smart Apple Media, 1999.
Gehrels, Tom. "History of Asteroid Research and Spacewatch." Lunar and Planetary Laboratory, University of Arizona.
Goldsmith, Donald. Voyage to the Milky Way: The Future of Space Exploration. New York: TV Books, 1999.
Hamilton, John. The Pathfinder Mission to Mars. Minneapolis, Minn.: Abdo and Daughters, 1998.
Legacey, Denis. "Is the International Space Station Really Worth It?" Policy Options (March 2001): 73–7.
"Manned or Unmanned: Justification for a Manned Mission."
"Martian Invasion? Not Yet." Economist 5 (April 2001).
McKay, D. S., et al. "Search for Past Life on Mars: Possible Relic Biogenic Activity in Martian Meteorite ALH84001." Science 273 (16 August 1996): 924–30.
Oberg, James E. Mission to Mars: Plans and Concepts for the First Manned Landing. Harrisburg, Pa.: Stackpole, 1982.
Sheehan, William, and Stephen James O'Meara. Mars: The Lure of the Red Planet. Amherst, N.Y.: Prometheus Books, 2001.
Walter, Malcolm. The Search for Life on Mars. Saint Leonards, New South Wales, Australia: Allen and Unwin, 1999.
Zubrin, Robert, with Richard Wagner. The Case for Mars: The Plan to Settle the Red Planet and Why We Must. New York: Free Press, 1996.
Is the Hubble constant in the neighborhood of 100 km/s/Mpc?
Viewpoint: Yes, observations have raised the estimate of the Hubble constant from 50 to near 100 km/s/Mpc. Viewpoint: No, the best observations regarding the age of objects in the universe require a Hubble constant significantly below 100 km/s/Mpc. One of the most familiar of scientific phenomena is the Doppler shift of sound—the change in pitch of sound being emitted by an object moving toward or away from a listener. The pitch of a train whistle invariably appears to lower as the train passes the listener, and this change is caused by the compression and elongation of sound waves by the motion of the object. Longer-wavelength or, analogously, lower-frequency sound waves are perceived by the listener as having a lower pitch. The same phenomenon applies to light. Light waves emitted by an object moving toward or away from an observer are subject to the same compression or elongation of wavelength. If the object is moving toward the observer, the detected wavelength is shorter, or shifted toward the blue end of the spectrum. Conversely, if the object is moving away from the observer, the light is seen as shifted toward the red end of the spectrum. The mathematics governing the amount of wavelength shift and the velocity of the moving object is quite simple; it is a direct proportional relationship. If object A is moving away from an observer at twice the velocity of object B, the redshift of object A will be exactly twice that of object B. The Doppler shift of light underlies what is arguably the most important astronomical discovery of the twentieth century. In 1912, the American astronomer Vesto Melvin Slipher (1875–1969) of the Lowell Observatory in Arizona noticed that the spectra of distant objects we now know to be galaxies all had their spectral features shifted toward the red end of the spectrum. This discovery ultimately led Edwin Hubble (1889–1953), working at the Carnegie Institution’s Mount Wilson Observatory in California, to notice in 1929 that the amount by which the spectral features of galaxies were redshifted was directly proportional to their distance from Earth. The farther away a galaxy was, the faster it appeared to be receding. Put in the simplest mathematical terms, the relationship governing a galaxy’s recession velocity is V = H0 x D, where V is the recession velocity in kilometers per second (km/s), D is the galaxy’s distance in megaparsecs, and H0 is a quantity known as the Hubble constant. One megaparsec (Mpc) is 1 million parsecs, or about 3.26 million lightyears—rather a long way by terrestrial standards. A bit of work with a pencil will show you that to make the units in the equation above work out, the Hubble constant must have units of km/s/Mpc (kilometers per second per megaparsec). If the Hubble constant were, say, 50, then galaxies at a distance of 1 megaparsec would be receding, on average, at 50 km/s (30 mi/s); galaxies 2 Mpc away would be receding at 100 km/s (60 mi/s), and so on. 20
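To make the arithmetic of the relation V = H0 x D concrete, the short calculation below, written in the Python programming language purely as an illustration, evaluates the law for a few sample distances and then inverts it. The distances, the 5,000 km/s velocity, and the two trial values of H0 are examples chosen for this sketch, not measurements quoted in these essays.

def recession_velocity(distance_mpc, hubble_constant):
    """Hubble law: apparent recession velocity (km/s) for a distance in Mpc."""
    return hubble_constant * distance_mpc

def distance_from_velocity(velocity_km_s, hubble_constant):
    """Invert the relation to estimate a distance (Mpc) from a velocity (km/s)."""
    return velocity_km_s / hubble_constant

if __name__ == "__main__":
    for d in (1.0, 2.0, 10.0):
        print(f"H0 = 50: D = {d:4.1f} Mpc -> V = {recession_velocity(d, 50.0):6.1f} km/s")
    # The same galaxy, receding at 5,000 km/s, sits at 100 Mpc if H0 = 50
    # but at only 50 Mpc if H0 = 100.
    for h0 in (50.0, 100.0):
        print(f"V = 5000 km/s, H0 = {h0:5.1f} -> D = {distance_from_velocity(5000.0, h0):6.1f} Mpc")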
Determining the value of the Hubble constant is the subject of a decades-long debate in the astronomical literature. The essays that follow discuss
points of view taken by the two main camps in the controversy. On one side are the eminent American astronomer Allan Sandage (1926– ) of the Carnegie Observatories in Pasadena, California, and his collaborators, who have steadily maintained that the value of the Hubble constant is in the neighborhood of 50 km/s/Mpc. Sandage was challenged in the 1970s by the French-born American astronomer Gerard de Vaucouleurs and his collaborators, who found a Hubble constant closer to 100 km/s/Mpc. The implications of these differing points of view are enormous. The Hubble constant is a measure of the rate of expansion of the universe, and therefore affects estimates of its age, current size, and ultimate fate. For example, the larger the Hubble constant, the faster the universe is expanding and the less time it will have taken to reach its present size. Thus, de Vaucouleurs was arguing, in effect, for a younger universe than Sandage. The Hubble constant has proven enormously difficult to determine, and as with many problems in astronomy, this stems from the sheer distance to the objects in question. Astronomers cannot bring objects into the laboratory; they must observe them at a distance and infer their properties from the light they emit. Happily, some things are easy to measure, and Doppler shift is one of them. Features in spectra lie at well-known “rest wavelengths,” which are the characteristic wavelengths of the feature when observed in a nonmoving object. Comparing the observed wavelength to the rest wavelength and determining the velocity needed to produce the observed shift is straightforward. However, distance is another matter entirely. Direct measures of distance are available only for the nearest objects, and for other galaxies only indirect measures of distance are available. So, although V is easy to measure from the spectra, D is extremely difficult to measure, and the remaining unknown H0 remains in doubt. This is why, in the following essays, you will see general descriptions of the Doppler shift, but protracted discussions about standard candles—methods of measuring distance. The most difficult problems always warrant the most extensive discussion, and the problem of measuring the distance to galaxies is extremely difficult. What methods are used, how good the observations are, and how they are interpreted are all subject to scrutiny and criticism.
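A rough way to see why the two camps imply such different ages is to convert each proposed value of the Hubble constant into a Hubble time, the reciprocal 1/H0. The conversion below is only a back-of-envelope guide, since it ignores any slowing or speeding of the expansion over cosmic history:

\[
t_H = \frac{1}{H_0}, \qquad 1\ \mathrm{Mpc} \approx 3.09 \times 10^{19}\ \mathrm{km}
\quad\Longrightarrow\quad
t_H \approx \frac{978\ \mathrm{Gyr}}{H_0\,/\,(\mathrm{km\ s^{-1}\ Mpc^{-1}})}.
\]

A Hubble constant of 50 km/s/Mpc thus corresponds to roughly 20 billion years, while 100 km/s/Mpc corresponds to roughly 10 billion years, which is why the dispute over the Hubble constant is also a dispute over the age of the universe.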
Edwin Hubble (© Bettmann/CORBIS. Reproduced by permission.)
One final concept to keep clearly in mind when reading the following essays is the nature of the expansion of the universe itself. The recession of the galaxies is an apparent velocity caused by the expansion of the universe. It is not that all galaxies are flying away from the Milky Way as if in response to some cosmic offense, but rather that space itself is expanding, pushing galaxies apart as it does so. Discovering whether this expansion continues forever, gradually halts, or eventually reverses and brings all matter in the universe back together in what has been called the “big crunch,” is another reason for arriving at a well-determined value for the Hubble constant. —JEFFREY HALL
Viewpoint: Yes, observations have raised the estimate of the Hubble constant from 50 to near 100 km/s/Mpc.
The modern picture of the universe began to emerge in the early 1920s, when Edwin Hubble (1889–1953) studied the fuzzy patches in the sky, or nebulae, using the 100-in (2.5-m) telescope at the Mount Wilson Observatory near Pasadena, California. Some of these nebulae turned out to be huge "island universes," enormous galaxies of stars like our own Milky Way.
Hubble was interested in measuring the distance to these galaxies using the convenient properties of a type of star called Cepheid variables. These stars brighten and dim in a regular pattern with a period that increases with their brightness. The brighter they are, the more slowly they pulsate.
KEY TERMS
APPARENT BRIGHTNESS: Brightness of an object seen at a distance. The farther away the object is, the smaller its apparent brightness.
COSMIC MICROWAVE BACKGROUND (CMB): The background radiation throughout the universe. Because this electromagnetic radiation is at a very low temperature, only a few degrees above absolute zero, most of the radiation energy is in the microwave region of the electromagnetic spectrum.
INTRINSIC LUMINOSITY: Brightness of an object, or the amount of energy it emits per unit time. This is independent of distance, and is a fundamental property of the object.
KPC: Kiloparsec, or 1,000 parsecs.
MPC: Megaparsec, or 1 million parsecs.
PC: Abbreviation for parsec, a unit of distance equal to 3.26 light-years.
REDSHIFT: Shift in the wavelength of light caused by the motion of an object. Redshifted light indicates the object is moving away from the observer. Since galaxies recede from us, and at an ever-increasing rate with distance, the distance to galaxies is sometimes indicated by their redshift.
Astronomers already knew the apparent magnitudes of these stars, their brightness as seen from Earth. From the pulsation period, they could now figure out their absolute magnitude, or actual brightness. The difference in the two brightness numbers allowed calculation of the distance between Earth and the Cepheid variables, and thus of the galaxies in which they were found.
Hubble Develops His "Constant" In 1912, the American astronomer Vesto Melvin Slipher (1875–1969) of the Lowell Observatory in Arizona had made the surprising discovery that the spectra of some galaxies, that is, the characteristic wavelengths at which their gas molecules absorbed light, were shifted toward the longer wavelength, or "red" end of the spectrum. This "redshift" is analogous to the lowering of the pitch of a siren as an ambulance moves into the distance. The faster an object is receding, the larger the shift will be. Thus, measuring the difference between the observed spectrum and that which would be expected for a particular class of object is a way to determine its velocity. Hubble compared the distances to various galaxies with their red-shifted spectra and discovered an unexpected pattern. The farther the galaxy was from Earth, the faster it seemed to be receding. The galaxies were speeding away from Earth like fragments from an exploding bomb.
At first, the idea that Earth seemed to be at the center of this rapid expansion gave scientists pause. But Albert Einstein’s 1915 general theory of relativity, in which gravity was related to the warping of space and time, helped to explain the situation. The galaxies are not receding from Earth in particular. Instead, everything, including our own galaxy, is receding from everything else, as the fabric of spacetime expands like a balloon. This expansion is the result of the “big bang,” the massive cataclysm by which most astronomers believe the universe originated. The key number in understanding the expansion of the universe is the slope of the straight line that Hubble drew through his plot of galactic velocity and distance. The Hubble law states that a galaxy’s distance is proportional to its redshift velocity. The slope of the line, which came to be called the Hubble constant, or H0 (pronounced “H nought”), was by Hubble’s 1929 estimate 530 km per second per megaparsec. This would mean that for every 1 million parsecs (about 3.26 million light-years) away a galaxy is, it would be moving 530 km (about 330 mi) per second faster. The Work of Sandage After Hubble’s death, his protégé Allan Sandage (1926– ), of the Carnegie Observatories in Pasadena, California, began making more measurements of distances and redshifts. The more measurements he made, the more Sandage realized that the Hubble constant needed some adjustment. Hubble himself had been able to use the reliable Cepheid measurements only for the nearby galaxies, where these stars could be seen. Farther out, he relied on redshifts in the spectra of the brightest stars in the galaxy and, at even greater distances, on the spectra of the galaxies themselves. Sandage also discovered some errors in measurements Hubble had used.
By the 1950s, Sandage had estimated a Hubble constant of about 180, or approximately one-third of Hubble's original value. This smaller Hubble constant was a result of correcting the distance of the galaxies; they were now believed to be three times farther away. Thus a smaller Hubble constant implies a larger universe. It also implies an older universe, since it would have taken the galaxies three times longer to get to their recalculated positions. Sandage continued to work for decades on measuring the expansion of the universe using the Mount Wilson telescope, as well as the 200-in (5-m) telescope at the Palomar Observatory operated by the California Institute of Technology (Caltech) in northern San Diego County, California. By 1975, his estimate for the Hubble constant had settled on about 50, implying an age for the universe of about 20 billion years.
The Hooker Telescope at Mt. Wilson, where Edwin Hubble made groundbreaking discoveries about the expanding universe. (Photograph by Roger Ressmeyer. CORBIS. Reproduced by permission.)
Allan Sandage, shown at right. (Photograph by Douglas Kirkland. CORBIS. Reproduced by permission.)
Sandage's calculations have hovered near that number ever since.
For many years, Sandage’s numbers were essentially uncontested. To many, it seemed unthinkable to argue with the esteemed astronomer, clearly the most experienced man in the field and the anointed successor of Hubble himself. That situation changed dramatically when, at a 1976 meeting of the International Astronomical Union, Gerard de Vaucouleurs (1918–1995), a French-born American astronomer of the University of Texas at Austin, stood up and said that Sandage and his colleague, Gustav Tammann of the University of Basel in Switzerland, were wrong.
The Work of de Vaucouleurs De Vaucouleurs based his claim on his analysis of six papers by Sandage and Tammann, in which he claimed to find a dozen “blunders,” and on measurements he had made. The Hubble constant was actually about 100, de Vaucouleurs said, which would make the universe half as large and only about 10 billion years old.
Given Sandage's eminence in the field, de Vaucouleurs's claims were at first met with alarm and disbelief. However, he persisted, and constructed an elaborate scale incorporating not only the "standard candles" (objects for which the absolute magnitudes are known) employed
by Hubble, Sandage, and Tammann, but new ones as well, such as bright star clusters and ringlike structures within certain galaxies. To decrease the risk that any one error would skew the results, De Vaucouleurs used some measurements to verify and cross-check others, and then averaged all the methods. De Vaucouleurs presented his ideas around the astronomy conference circuit, winning many adherents. In addition to making the argument that his distance scale was more robust and less error-prone, he accused Sandage’s camp of making unwarranted assumptions about the smoothness of the universe. If the distribution of astronomical objects was not as uniform as Sandage and his supporters had assumed, then the cosmic expansion would likewise be uneven, and their distance scale would be wrong. In fact, de Vaucouleurs asserted, Earth is part of a local supercluster of galaxies that is slowing down cosmic processes in our own astronomical neighborhood. De Vaucouleurs also objected to the way Sandage and Tammann had used spiral galaxies as standard candles. In extrapolating the characteristics of large, well-studied galaxies to smaller, dimmer ones, he believed they had made another assumption about uniformity in the universe. Also, according to de Vaucouleurs, Sandage and Tammann had neglected to account for the effects of interstellar dust on the brightness of the objects they observed.
There was one obvious problem with de Vaucouleurs’s 10 billion-year-old universe. Studies of some ancient globular star clusters had pegged their age at 17 billion years. Clearly the universe cannot be younger than its oldest stars. De Vaucouleurs was not alarmed by this development; he pointed out that the models that arrived at the stellar ages could just as easily be wrong. Since Sandage’s value for the Hubble constant had fallen over the years, de Vaucouleurs intimated that the final drop from 100 to 50 had been prompted by a desire to provide a universe old enough to accommodate the globular cluster results. Sandage indignantly denied this charge. Later Astronomers But de Vaucouleurs was not to be Sandage’s only critic. In the 1970s two young astronomers, Brent Tully and Richard Fisher, were looking for a new standard candle. They reasoned that the rotation rate of a spiral galaxy should be related to its luminosity, because a faster rotation is required to maintain the stability of the orbits of stars against the gravitational force of a larger total mass inside those orbits. More mass means more stars, yielding a brighter galaxy. Rotation could be measured by radio astronomy, based on the Doppler shifting of the energy emitted during transitions in hydrogen atoms. Using their new way of calculating brightness and thus distance, Tully and Fisher came up with a Hubble constant of about 100, just like de Vaucouleurs.
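The logic of the Tully-Fisher approach can be sketched numerically as follows. The power-law slope of 4, the calibration constant, and the sample galaxy's rotation speed, flux, and recession velocity are all invented for this illustration; they are not Tully and Fisher's actual calibration or data, although the inputs are chosen so the answer lands near the value of about 100 quoted above.

import math

L_SUN = 3.8e26        # luminosity of the Sun in watts
MPC_IN_M = 3.086e22   # meters in one megaparsec

def luminosity_from_rotation(v_rot_km_s, calib=3.0, slope=4.0):
    """Assumed empirical relation L ~ v_rot**slope (calibration hypothetical)."""
    return calib * L_SUN * v_rot_km_s ** slope

def distance_mpc(flux_w_m2, luminosity_w):
    """Inverse-square law, flux = L / (4 pi d**2), solved for the distance d."""
    return math.sqrt(luminosity_w / (4.0 * math.pi * flux_w_m2)) / MPC_IN_M

# Hypothetical spiral galaxy: the 21-cm hydrogen line width gives the rotation
# speed, photometry gives the apparent flux, and a spectrum gives the velocity.
L = luminosity_from_rotation(220.0)   # watts
d = distance_mpc(9.9e-13, L)          # about 15 Mpc with these invented numbers
h0 = 1500.0 / d                       # recession velocity of 1,500 km/s assumed
print(f"D = {d:.1f} Mpc, implied H0 = {h0:.0f} km/s/Mpc")

In practice the difficult step is calibrating the zero point of the luminosity-rotation relation against galaxies of independently known distance, which is exactly where the competing groups disagreed.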
Using a variation of the Tully-Fisher method in which the brightness of the galaxies was measured in the infrared wavelengths to limit the effect of interstellar dust, another group of astronomers, Marc Aaronson, John Huchra, and Jeremy Mould, estimated a Hubble constant of between 65 and 70. In recent years, many measurements have yielded numbers between 70 and 90.
Although de Vaucouleurs had once dismissed such a solution as a compromise for "sissies," younger astronomers have inherited little of the historical and personality clashes that led their elders to hold their ground so tenaciously. As the evidence mounts, the Hubble constant seems increasingly likely to converge on a number significantly higher than Sandage's, but not quite so high as de Vaucouleurs's. —SHERRI CHASIN CALVO
Viewpoint: No, the best observations regarding the age of objects in the universe require a Hubble constant significantly below 100 km/s/Mpc.
Pulsating stars known as Cepheid variables are a valuable tool for measuring the distance to other galaxies. (© CORBIS. Reproduced by permission.)
Since 1929, when the American astronomer Edwin Hubble (1889–1953) first observed the relationship between a galaxy's distance from Earth and its apparent recession velocity, the Hubble constant (H0) has remained the most fundamental, yet most poorly determined, parameter of modern cosmology. The difficulty in constraining the Hubble constant is understandable. Astronomers must determine both the velocity and distance to the galaxies they are measuring. The velocity poses little problem. Modern spectrometers, which analyze light from distant galaxies, can accurately determine the recession velocity by observing the galaxy's redshift. But even in this age of high-resolution space telescopes, determining the distance to an object is no easy task. Measurement errors of 50 to 100% are common, and reducing these down to 10 to 20% requires special methods and careful observations. But the challenge is worth the reward. The Hubble constant contains information about the origin and fate of the universe, the nature of our cosmology, and—perhaps most interesting of all—the ability to estimate the age of the universe.
Current estimates of the Hubble constant give results ranging from 50 to 100 km/s/Mpc (kilometers/second/megaparsec), with most measurements clustering around 50 to 55 and 80 to 90 km/s/Mpc. Such discrepant results have huge implications for the age of the universe. The upper value of 100 km/s/Mpc indicates the universe is no more than 6.5 to 8.5 billion years old (just barely older than the age of the solar system); the smaller value suggests an age of 13 to 16.5 billion years. But determining the "true value" is a murky proposition—one cannot just average the results. Instead, scientists must make sure the results fit the following criteria: 1) the method for determining the Hubble constant must be based on sound physical—rather than empirical—principles; 2) the value of the Hubble constant must be consistent with the physical parameters of the universe; and 3) derived results from the Hubble constant and other cosmological parameters must be consistent with other observations of objects in the universe.
The Problem of Distance Many problems in astronomy are really problems of distance. Determining the size of a planet, computing the true velocity of an object moving through space, measuring the brightness of a star—all these problems require an accurate knowledge of the distance to the object to find the correct answer. Over the years, astronomers have put together an ingenious toolkit for finding the distance to celestial objects. For nearby objects, direct measurements such as parallax can be used. The parallax method capitalizes on the apparent change in position, from the perspective of a moving observer, of an object relative to the
more distant background. Just as the passenger in a car sees nearby objects, such as trees and bushes on the roadside, moving past faster with respect to the horizon than objects that are further away, like a farmhouse, the apparent motion of objects in the sky as Earth moves around its orbit depends on distance. This direct and relatively precise method based on simple geometry can measure the distances to the planets and nearest stars. The satellite observatory Hipparcos has measured direct distances out to 1 kiloparsec (kpc). After that, the solutions become murkier. Astronomers search for objects to use as distance indicators, sometimes referred to as “standard candles.” Just as a driver at night can gage the distance of an oncoming car by the brightness of its headlights, if astronomers can deduce the intrinsic luminosity of the standard candle, identify it at great distances, and measure its
apparent brightness, they can determine the distance to the object. Determining the Hubble constant requires finding the distances to some of the most distant galaxies in the universe. A number of methods exist. Some, like parallax, are firmly based on the principles of physics—that is, astronomers understand the physical process that makes the standard candle operate. Others are based on empirical relationships, the physics of which may be partially or completely unknown. Well-understood standard candles make better distance indicators. Astronomers have identified two standard candles with a reasonable basis in physics to explain their behavior, which can also be used to measure the distances to galaxies. First, very luminous, pulsating stars known as Cepheid variables have the surprising characteristic that the frequency of their pulsations and their intrinsic luminosity are correlated, so an accurate measurement of the period of the pulsations gives the luminosity of the star. Since they are bright, Cepheids can be seen at great distances. In addition, the pattern of the pulsation—the specific oscillations in the star’s brightness as it pulsates—is characteristic, so that Cepheids can be distinguished from other types of variable stars. Once they are identified, their apparent brightness can be measured, and a distance to the star, along with its host galaxy, can be deduced. Until recently, these Cepheids were of little use in measuring the Hubble constant, since they could only be distinguished in the galaxies nearest to our own. However, large ground-based telescopes and, above all, the Hubble Space Telescope now allow detailed observations of Cepheids in galaxies out to 5 to 10 Mpc. Another standard candle is a particular kind of stellar explosion known as a Type Ia supernova. These explosions occur only in binary star systems, when one star—a compact object known as a white dwarf—accretes mass from its companion star. The white dwarf detonates under the same physical conditions for all Type Ia supernovae, and the resulting explosions have a characteristic light profile and similar peak luminosity. Since these explosions can outshine the entire host galaxy, astronomers target distant galaxies and patiently wait to observe these rare events. Type Ia supernovae have been used to determine distances out to 100 Mpc or more. Astronomers have some understanding of the physical processes of both these indicators, the pulsating stars and the supernova explosions, and are therefore more confident in the distances determined by these methods. Although other problems, such as extinction of the light due to intergalactic dust, can still make these measurements imprecise, they are thought to be accurate within the measurement uncertainty. Recent measurements made by Hubble’s protégé Allan Sandage (1926– ), of the Carnegie Observatories in Pasadena, California, and his colleagues, of supernova distances coupled to Cepheid distances, place the Hubble constant at 55 +/- 8 km/s/Mpc.
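To make the standard-candle logic concrete, here is a minimal sketch in Python; the luminosity, flux, and velocity are illustrative assumptions, not values from any survey cited in this essay. It applies the inverse-square law to turn an assumed intrinsic luminosity and a measured apparent brightness into a distance, then divides a measured recession velocity by that distance to estimate H0.

    import math

    MPC_IN_M = 3.0857e22   # meters in one megaparsec

    def distance_mpc(luminosity_w, flux_w_m2):
        # Inverse-square law: flux = L / (4 * pi * d^2), solved for the distance d.
        d_m = math.sqrt(luminosity_w / (4.0 * math.pi * flux_w_m2))
        return d_m / MPC_IN_M

    L = 3.8e35      # assumed intrinsic luminosity of the standard candle, in watts
    f = 3.2e-15     # assumed apparent brightness measured at Earth, in watts per square meter
    v = 5500.0      # assumed recession velocity from the galaxy's redshift, in km/s

    d = distance_mpc(L, f)
    print("distance ~ %.0f Mpc, H0 ~ %.0f km/s/Mpc" % (d, v / d))

With these made-up numbers the galaxy comes out near 100 Mpc and H0 near 55 km/s/Mpc; the real difficulty, as the essay stresses, lies in knowing the intrinsic luminosity in the first place and in correcting the measured brightness for effects such as dust.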
How H0 Fits with Other Cosmological Parameters However, direct measurement of the Hubble constant is only part of the story. The standard cosmology can be used to measure the Hubble constant more accurately. In other words, whatever the value of the Hubble constant, it should not directly conflict with measurements of other cosmological parameters.

Astronomers describe the bulk properties of the universe with several parameters. The Hubble constant measures the expansion rate of the universe. If the expansion is constant, and there is no other force acting to accelerate or decelerate it, then the Hubble constant directly indicates the age of the universe. This is the same as measuring the expansion rate of a balloon, and using that to determine how long it took to blow the balloon up. But, unfortunately, the universe is not so simple, and several other features make the calculation more challenging.

First of all, the expansion rate may not be constant. The deceleration parameter, q0, indicates whether the universe is accelerating (q0 < 0), decelerating (q0 > 0), or expanding at a constant rate (q0 = 0). This deceleration parameter depends not only on the Hubble constant, but also on Ω0, the parameter describing the density of the universe. If the density parameter is greater than one, the universe contains enough mass to halt the expansion due to gravity and collapse back on itself. If it is less than one, the universe will expand forever. And even that is not the entire story. Recently, astronomers have found evidence for a third parameter, the cosmological constant, denoted by Λ. The cosmological constant, if it is not equal to zero, also affects the expansion rate of the universe. All these fundamental parameters are interdependent, and together they measure the curvature of the universe. Therefore, determining the curvature of space, figuring the acceleration or deceleration of the expansion, and finding the value of Λ can all constrain the Hubble constant. Recent observations of the cosmic microwave background (CMB) by the balloon-flown microwave telescope Boomerang have important implications for the Hubble constant. The detailed measurements of the bumps and wiggles in the CMB indicate the universe is flat—it will continue to expand, with the mass in the universe slowly bringing the expansion to a halt. If the cosmological constant is zero, this means the Hubble constant can directly determine the age of the universe. However, if this model is correct, some objects in the universe are older than the age predicted by large values of the Hubble constant.

Globular Clusters: An Independent Constraint on the Age of the Universe Whatever the cosmology of the universe, it must be consistent with the other objects within the universe. Since one of the primary features of the Hubble constant is its ability to help determine the age of the universe, astronomers can use the known ages of objects in the universe to constrain the values of the Hubble constant.

Evidence from meteorites indicates that the formation of our solar system, and presumably the Sun, dates back 4.55 billion years. Nearby stars and our own galaxy are also reasonably young. However, associations of stars known as globular clusters, whose ages can be readily determined and which represent some of the oldest objects in the universe, perhaps predate galaxies themselves. Globular clusters are associations of anywhere from 10,000 to over 1 million stars, all of which formed in the same place and at the same time. These stellar associations exist in the distant halo of the Milky Way and other galaxies. Using detailed observations, astronomers can study the characteristics of the stars in a cluster and determine when the cluster formed. These measurements are straightforward, and the method for deriving the cluster ages is well founded in stellar astrophysics. The results from the dating of globular clusters are compelling. The clusters, on average, date back nearly 15 billion years. Not only do they represent the oldest objects in the galaxy, they also put hard constraints on the value of the Hubble constant. For the given cosmology, H0 must indicate a universe as old as or older than 15 billion years. Therefore, either the Hubble constant is smaller than 100 km/s/Mpc, or the cosmological model is incorrect.
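The link between H0 and the age of the universe can be checked with simple arithmetic. The short Python sketch below converts H0 into the Hubble time 1/H0 and also prints two-thirds of that value, the age of a simple matter-dominated, decelerating model—an assumption made here purely for illustration, corresponding to one of the model choices discussed above.

    SECONDS_PER_YEAR = 3.156e7
    KM_PER_MPC = 3.0857e19

    def hubble_time_gyr(h0):
        # 1/H0, converted from (km/s/Mpc)^-1 into billions of years.
        return KM_PER_MPC / h0 / SECONDS_PER_YEAR / 1e9

    for h0 in (50, 100):
        t = hubble_time_gyr(h0)
        print("H0 = %3d km/s/Mpc: 1/H0 = %4.1f Gyr, matter-dominated age = %4.1f Gyr" % (h0, t, 2.0 * t / 3.0))

The output—roughly 20 and 13 billion years for H0 = 50, versus roughly 10 and 6.5 billion years for H0 = 100—reproduces the age ranges quoted earlier and shows why 15-billion-year-old globular clusters sit so uncomfortably with the higher value.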
Conclusion Hubble’s first measurements of H0 and the expansion of the universe seemed, in principle, straightforward to refine. Astronomers needed only two measurements—velocity and distance—to find the answer. As it turns out, H0 is more difficult to determine than any other fundamental parameter in cosmology,
forcing researchers to use a variety of methods to achieve a result. And with so many disparate answers, the evaluation of the Hubble constant must rely on a number of subsequent analyses. The distance indicators must be firmly based on physical principles, H0 must be consistent with all other determinations of the cosmological parameters, and it must not conflict with independent measurements of the age of objects in the universe. Given these criteria, the smaller value of the Hubble constant, somewhere in the range of 50 to 60 km/s/Mpc, is favored. However, modern cosmology is still an active field of research. Models change and new ideas are put forth all the time. Therefore, astronomers must continue to search for new methods of determining the Hubble constant until a consensus—or more evidence—is found. —JOHN ARMSTRONG
Further Reading Overbye, Dennis. Lonely Hearts of the Cosmos: The Scientific Quest for the Secret of the Universe. New York: HarperCollins, 1991. Peebles, P. J. E. Principles of Physical Cosmology. Princeton, N.J.: Princeton University Press, 1993. Trimble, Virginia. “H0: The Incredible Shrinking Constant 1925–1975.” Publications of the Astronomical Society of the Pacific 108 (December 1996): 1073–82.
Was the use of plutonium as an energy source for the Cassini spacecraft both safe and justifiable? Viewpoint: Yes, using plutonium as an energy source for Cassini carried only minimal safety risks, and it was the only currently feasible way to reach the outer planets in the solar system. Viewpoint: No, using plutonium as an energy source for Cassini was not safe, given the known dangers of plutonium and the legacy of rocket launch failures.
Spacecraft must do a variety of things during a mission. They must be able to send information to mission scientists on Earth, and they must be able to receive and respond to instructions sent from Earth. The instruments they carry must function, and the computers that record images and data must be kept operational. Mechanical parts must be kept operational, and backup systems kept ready for use if needed. All of these tasks require a significant amount of power, and a perennial challenge for space mission planners has been providing power to the spacecraft in question. Familiar methods of providing power are not feasible. One cannot just plug a satellite into an electric wall socket, and batteries have a limited lifetime, have a limited capacity to provide power, and are extremely heavy. For these reasons, many Earth-orbiting missions have used solar panels. From the Skylab space stations of the 1970s to the Hubble Space Telescope, satellites have sported large panels that collect sunlight and convert it to power, just as solar-powered homes and vehicles use similar panels on Earth. Not long after the Apollo landings on the Moon, our vision for space exploration turned to the outer solar system. The Voyager spacecraft generated immense public excitement with their grand tours of the giant planets in the 1970s and 1980s. More recently a variety of advanced, robotic missions have returned to these distant worlds—Galileo, designed to return a wealth of images and data from its orbits of Jupiter and its moons, and, beginning in 1997, a large, complex spacecraft called Cassini. Launched in October 1997, Cassini will arrive at Saturn in 2004. Cassini faces a problem that the Hubble Space Telescope does not: Saturn is a long way from the Sun. Saturn is nearly 10 times farther from the Sun than Earth, and receives only 1% as much sunlight as Earth. At this distance, solar panels are either ineffective or must be prohibitively large. Another source of power is required, and the solution used for Cassini—and other missions—has been radioisotope thermoelectric generators (RTGs). These components use the decay of radioactive elements such as plutonium to provide an ongoing power source for a spacecraft. They do not care how far they are from the Sun and, even better, they are small and light—a critical advantage for missions where every gram of the payload is carefully considered to maximize the spacecraft’s capabilities and benefits. The use of radioactive materials in space missions, however, has sparked a heated debate that has been partly scientific but largely political. The development, stockpiling, and use of nuclear weapons has been one of the defining debates of the last half-century. By extension, any project in
which the word “radioactive” appears has become the object of close scrutiny, and RTGs have been no exception. Opponents of RTGs point out that substances such as plutonium are lethal to humans in sufficient quantity. Since present-day launch vehicles have an undeniable track record of occasionally exploding during or shortly after launch, the possible dispersion of plutonium into the atmosphere has been held forth as a principal argument against the use of RTGs. Others counter this argument by pointing out that RTGs are specifically designed to withstand a launch failure, that inhalation of plutonium is impossible since the particles will be much larger than the dust-sized grains necessary for inhalation, and that the total amount of radioactivity produced in a given area by dispersed RTG plutonium would be less than that from naturally occurring sources. Cassini faced additional opposition because after it was launched, it flew around the Sun and returned for a close pass by Earth before heading out to Saturn. This is standard practice for missions to the outer solar system—spacecraft use a gravitational slingshot maneuver to pick up speed and get them to their destination. This flight path provides the velocity necessary to climb away from the Sun’s gravitational field and journey to the outer solar system. Spacecraft coast to their destination, with the initial impetus provided by the launch vehicle. Heavy spacecraft—and Cassini is very heavy—need an additional boost. The Earth flyby caused concern, of course: what if Cassini’s trajectory were a bit wrong, and it crashed into Earth? The same counterarguments about the small quantity of plutonium in the RTGs applied in this case as well, but few critics’ fears were assuaged. A final criticism of Cassini touches a broad policy debate at the National Aeronautics and Space Administration (NASA). Cassini is a huge spacecraft, bristling with instruments. When it enters the Saturn system, the returns will be monumental and will revolutionize our understanding of this distant planet and its moons. Cassini’s hardware requires lots of power, however, and precludes the use of solar panels. Smaller spacecraft, some argue, could use alternatives to RTGs and would be less expensive to build. A larger number of smaller, more focused missions could return as much information and understanding as one monolithic one, without excessive power requirements. The former NASA administrator Daniel Goldin’s slogan of “faster, better, cheaper” signaled NASA’s interest in this type of mission, although after some well-publicized failures in the 1990s, cynics in the industry suggested “pick any two” as an appropriate postscript. In the end, the Cassini launch and Earth flyby went without a hitch, but future missions employing RTGs will no doubt raise similar debates. —JEFFREY HALL
Viewpoint: Yes, using plutonium as an energy source for Cassini carried only minimal safety risks, and it was the only currently feasible way to reach the outer planets in the solar system. Past the Orbit of Mars Most spacecraft use solar energy as a power source to provide electricity for operations and to heat the spacecraft’s instruments, systems, and structures. However, in some cases solar and other traditional powersource technologies are not practical, and an alternate power source is required for the spacecraft. One of these instances occurs when spacecraft travel to the far reaches of the solar system, beyond the effective use of the Sun as an energy source. According to the Jet Propulsion Laboratory (JPL) of the National Aeronautics and Space Administration (NASA), the Cassini mission requires the use of nuclear material because other power sources are inadequate for (1) its extensive science objectives; (2) the current
launch systems available to lift Cassini’s mass into orbit; (3) the travel time required to reach Cassini’s destination, the planet Saturn; and (4) the far distance of Saturn from the Sun. Specifically, NASA has determined that the normally used energy of the Sun is inadequate as a practical power source for any spacecraft that operates beyond the orbit of Mars. This is primarily because solar panels are impractical due to the weak sunlight at those distances. NASA has further determined that the only practical source of power at distances beyond Mars is nuclear material. In the case of the Cassini mission, the nuclear material chosen was plutonium 238. Radioisotope Thermoelectric Generators (RTGs) The electrical power supply for the Cassini spacecraft and its instruments is provided by three plutonium batteries called general purpose heat source radioisotope thermoelectric generators (GPHS RTGs). Each device (usually abbreviated RTG) is designed to use the slow decay of plutonium 238 (denoted Pu-238) in order to generate heat. The heat generated by this process is then changed into electricity by a solid-state thermoelectric converter, converting heat into about 850 watts of electrical power for
all three RTGs. Leftover RTG heat is passed through the spacecraft to warm operational components and systems. RTGs are compact and lightweight spacecraft power systems that are very reliable, possess an outstanding safety record, and contain no moving parts. This proven technology has been used since the early 1960s within about two dozen U.S. space projects, including the Lincoln Experimental Satellites, the Apollo lunar landings, the Pioneer missions to Jupiter and Saturn, the Viking landers sent to Mars, the Voyager missions to Jupiter, Saturn, Uranus, and Neptune, the Galileo mission to Jupiter, and the Ulysses mission to the Sun’s polar regions. RTGs have never caused a spacecraft failure; however, three accidents with spacecraft that contained RTGs have occurred. In each case the RTGs performed as designed, while the malfunctions involved other, unrelated systems. In 1964 a SNAP-9A RTG burned up in the atmosphere when a U.S. Navy Transit 5-BN-3 satellite reentered Earth’s atmosphere. The plutonium was scattered in the atmosphere, but after six years only slightly higher levels of radioactivity were detected in soil samples. In 1968 a Nimbus B-1 weather satellite was targeted into the Pacific Ocean when its launch rocket failed. After being retrieved from the ocean floor, the RTG was still intact and was later reused on another satellite. In 1970 the Apollo 13 lunar module reentered Earth’s atmosphere after successfully returning three astronauts from their life-or-death voyage around the Moon. No evidence of increased radiation was found in the affected impact area. These three accidents—in which the RTGs performed as expected—affirm the safety of RTGs when used in spacecraft such as Cassini.

Plutonium 238 The nuclear fuel source within Cassini is 72 lb (33 kg) of plutonium 238, with a half-life of 88 years. Plutonium is a radioactive, silver-metallic element; while other plutonium isotopes serve as reactor fuel and as material for nuclear weapons, Pu-238 is used chiefly as a heat source in spacecraft. Pu-238 emits alpha particles as its nuclei spontaneously decay at a very slow rate.
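A rough sense of how the 88-year half-life plays out over the mission can be had from the Python sketch below. The 850-watt figure is the approximate beginning-of-mission output quoted above, and the calculation deliberately ignores the additional, gradual degradation of the thermoelectric converters, so the real electrical output falls somewhat faster than this.

    def fraction_remaining(years, half_life=88.0):
        # Radioactive decay: the fraction of the original Pu-238 heat left after a given time.
        return 0.5 ** (years / half_life)

    initial_power_w = 850.0    # approximate combined electrical output of the three RTGs at launch
    for years in (7, 11, 16):  # roughly: cruise to Saturn, nominal mission, extended mission
        frac = fraction_remaining(years)
        print("after %2d years: %.1f%% of the original heat, on the order of %.0f W" % (years, 100 * frac, initial_power_w * frac))

Even after 16 years, then, the fuel itself still supplies nearly 90% of its original heat—one reason RTGs suit decade-long missions.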
KEY TERMS
ALPHA PARTICLES: Positively charged particles consisting of two protons and two neutrons (the nuclei of helium atoms) that are emitted by several radioactive substances.
DEEP SPACE: There is no single, generally accepted definition; for the purposes of this article it is defined as the region of space beyond the orbit of Mars.
HALF-LIFE: The time it takes for a radionuclide to lose half of its own radioactivity.
NUCLEAR REACTORS: Any of several devices in which a fission chain reaction is initiated and controlled with the consequent production of heat.
PLUTONIUM: A radioactive metallic element with the chemical symbol Pu, atomic number 94, and atomic weight (for its most stable isotope) of 244.
RADIOACTIVE: Exhibiting the property possessed by some elements (such as uranium) or isotopes (such as carbon 14) of spontaneously emitting energetic particles (such as electrons or alpha particles) by the disintegration of their atomic nuclei.
REACTOR FUEL: The particular power source used within a device in which self-sustained, controlled nuclear fission takes place.
REM: A unit of ionizing radiation, equal to the amount that causes damage to humans in the form of one roentgen of high-voltage x rays. A millirem is one-thousandth of a rem.
ROBOTIC: Having characteristics of a machine that looks or works like a human being and performs various complex acts, such as walking or talking, similar to a human being.
SOLAR ARRAY: A mechanism composed of solar cells that convert sunlight into power.
SOLAR PANELS: Devices that convert light into electricity. They are called “solar” after the Sun, because the Sun is the most powerful source of light commonly used.

A major problem with alpha particles occurs when radioactive plutonium is ingested or inhaled. Alpha particles can then be emitted within the body. When this happens, alpha particles can inflict cell damage in any living organism, which can lead to cancer. To be inhaled or ingested by humans, plutonium must be ground into dust particles. However, the plutonium onboard Cassini is in the form of plutonium dioxide, a ceramic material that only breaks down into large chunks that are impossible to inhale and very difficult to crush into dust. This fuel is also highly insoluble in water, which makes it difficult for it to enter the food chain; is heat resistant, which reduces its chance of vaporizing in fire or reentry environments; and has a low chemical reactivity, which makes it much less likely to cause damage. All of these safety features aboard Cassini help to reduce the potential health effects from accidents involving a Pu-238 release. If, by chance, dust-sized particles were released, very little exposure to humans would result. Extensive studies by NASA have shown that over a 50-year period, a person exposed to plutonium dust released in a launch accident would be subjected to about 1 millirem. The upper limit is 500 millirem, for a person who inhaled all of the plutonium dust in an 11 sq ft (1 sq m) patch of the contaminated area. In contrast, according to the Environmental Protection Agency and the Agency for Toxic Substances and Disease Registry, natural sources—such as house radon, rocks, cosmic rays, and the decay of naturally occurring radioactive elements in the human body—would release radiation in the amount of 15,000 to 18,000 millirem in that same 50-year time period.
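Put on a per-year basis, the comparison quoted above works out as follows; the figures in this short Python restatement are simply those given in the text.

    accident_dose_mrem = 1.0                 # NASA's 50-year estimate for an exposed person
    background_50yr_mrem = (15000, 18000)    # natural sources over the same 50 years

    for total in background_50yr_mrem:
        print("natural background: %d mrem over 50 years, about %.0f mrem per year" % (total, total / 50.0))
    print("estimated accident exposure over the same 50 years: %.0f mrem in total" % accident_dose_mrem)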
Plutonium inside RTGs The RTGs used aboard Cassini are the most advanced version of the thermoelectric generators that were safely used on past successful missions. Each of the 216 Pu-238 pellets onboard Cassini is encased in a capsule of heat-resistant iridium—a dense, corrosion-resistant, highly heat-resistant metal. Two pellets are then wrapped in a thermal graphite shell. Two of these shells are encased in a graphite block—a lightweight, highly corrosion-resistant, high-strength carbon-based material. The graphite block is about the size of two cell phones, and 18 of these blocks go into each of the RTGs. Previous tests conducted by NASA have shown that these modules will remain structurally sound even in an explosion like that of the Challenger in 1986 or another such devastating accident. These characteristics help to diminish the potential health effects from accidents involving the release of plutonium.
RTGs Are Not Nuclear Reactors RTGs are often compared to nuclear reactors because both use nuclear substances. However, RTGs are not nuclear reactors. While nuclear reactors use a “man-made” fission chain reaction to produce their power, RTGs use the “natural” radioactive decay of an isotope called Pu-238 to produce heat and ultimately to produce electricity. Since RTGs do not use an artificial fission chain reaction process, they could not cause an accident like those that have occurred within nuclear reactors (such as at Chernobyl in 1986 and Three Mile Island in 1979), and could not explode like nuclear bombs. Therefore, it is invalid to associate an RTG accident, or a potential accident, with any past radiation accidents involving nuclear reactors or with nuclear bombs. RTGs do not use any nuclear reactor type of process and could never explode like a nuclear bomb. Neither could an accident involving an RTG create the acute radiation sickness similar to that associated with nuclear explosions. Safety Design of RTGs Extensive engineering and safety analysis and testing have shown that RTGs can withstand severe accidents of the sort that can happen during space missions. Safety features incorporated into the design of RTGs have demonstrated that they can withstand physical conditions more severe than those expected from most accidents. Even SCIENCE
before Cassini was launched, NASA performed a detailed safety analysis under the guidance of internal safety requirements and reviews in order to reduce the risks of using RTGs. The U.S. Department of Energy has conducted extensive impact tests to ensure that the plutonium canisters remain intact in the event of a catastrophic accident. In addition, an Interagency Nuclear Safety Review Panel, composed of experts from academia, government, and industry, performed external safety evaluations as part of the nuclear launch safety approval process.

Justifiable Use of RTGs According to comprehensive research performed by JPL, the requirements placed upon Cassini by its primary scientific objectives, the available launch systems, the extensive distance necessary to reach Saturn from Earth, and the great distance of Saturn from the Sun necessitated the use of RTGs. These mission requirements showed the RTG power source to be far superior to other power sources with respect to power output, reliability, and durability when used in the outer solar system.
For Cassini to complete its scientific objectives it must carry about 6,000 lb (2,720 kg) of fuel to Saturn and through its four-year Saturnian orbit. Because so much fuel must be carried onboard, the spacecraft must be as light as possible in order to accommodate about 795 lb (360 kg) of scientific instruments to conduct experiments including photographic and radar imaging, atmospheric sampling, and various studies of Saturn’s planetary satellites and rings. As a result its power system must supply electricity to multiple scientific instruments at specific times, plus continuously power the spacecraft itself. To do all of these things, a lightweight and highly efficient power supply is required. The RTGs meet all of these requirements.

Alternatives to RTGs JPL has concluded that neither fuel cells nor chemical batteries offer the necessary operational life for Cassini, whose mission is expected to last 10.75 years but could be extended to 16 years. In addition, the mass of batteries that would be needed to power a Cassini mission greatly exceeds current launch vehicle lift capabilities.
Most NASA missions that fly through the inner solar system use solar panels to generate power because they are able to draw on a plentiful supply of sunlight. Even some spacecraft that have operated as far out as Mars—such as the Viking 1 and Viking 2 orbiters—were solar-powered. Solar panels are cheaper and lighter than RTGs and do not carry the same safety concerns. But missions through the outer solar system (basically beyond Mars) exceed the functional use of solar
technology, primarily because strong sunlight does not exist in those regions of space. JPL scientists have researched recent advances in solar-power technologies, such as the high-efficiency solar arrays developed at the European Space Agency, and believe that solar technology is not capable of providing the necessary electrical power for space voyages beyond Mars. For the most part, the area of solar arrays required for such an extended mission would make the spacecraft too heavy for any existing rocket to launch. Even if a rocket could lift such a heavy solar-powered spacecraft, the lack of experience with such vehicles in space would introduce significant risk to the success of the mission. In addition, according to JPL, Cassini would require solar arrays with an area of just over 5,380 sq ft (500 sq m), or about the size of
two tennis courts. With the need for two solar arrays plus supporting structures, including deployment equipment, increased complexity of the spacecraft’s design would result. The operation of Cassini would also be more complex, limiting its ability to maneuver and communicate. This complexity would severely hurt Cassini’s ability to achieve its scientific purpose. The enormous size of the solar arrays also would interfere with the fields of view of the science experiments. Equally bad, the solar arrays would limit the scope of the navigation sensors, further hindering Cassini in achieving its objectives. Lastly, the solar arrays could generate serious electromagnetic and electrostatic interference, adversely affecting the communications equipment and computers. Because of these reasons, NASA has concluded that RTGs are the only power source capable of reliably accomplishing the mission objectives of Cassini. SCIENCE
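A back-of-the-envelope check suggests why the array would have to be so large. In the Python sketch below, the solar flux at Saturn follows from the inverse-square law, while the 15% cell efficiency is an assumed, illustrative figure rather than a JPL number.

    SOLAR_CONSTANT_EARTH = 1360.0   # watts per square meter of sunlight at Earth's distance
    SATURN_DISTANCE_AU = 9.5        # Saturn's distance from the Sun in astronomical units

    flux_at_saturn = SOLAR_CONSTANT_EARTH / SATURN_DISTANCE_AU ** 2   # roughly 15 W per square meter
    cell_efficiency = 0.15          # assumed conversion efficiency, for illustration only
    array_area_m2 = 500.0           # the array area quoted above

    power_w = flux_at_saturn * cell_efficiency * array_area_m2
    print("a %.0f square-meter array at Saturn yields roughly %.0f W" % (array_area_m2, power_w))

Under these assumptions the result is on the order of a kilowatt—comparable to the roughly 850 watts the RTGs supply—but only at the cost of two tennis courts of structure and all the complications listed above.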
An artist’s rendering of the Cassini spacecraft. Shown below the craft is the space probe Huygens, descending toward Saturn’s moon Titan. (© CORBIS. Reproduced by permission.)
Conclusion A two-foot-thick document details the government’s six-year comprehensive safety analysis of Cassini, and independent experts have substantiated every assertion. Conclusions showed that the fuel modules were unlikely to be damaged in an accident. Even if all of the coatings and containers were to fail, there was little chance that any person would consume enough material to experience adverse health problems. NASA estimated that there was a 1 in 1,400 chance of a plutonium release accident early in the launch; a 1 in 476 chance of such a mishap later in the launch; and a less than 1 in a million chance of Cassini reentering the atmosphere and releasing plutonium during its Earth flyby.
Despite the opposition to its launch, Cassini promises to provide a wealth of scientific information. In-depth NASA studies show that RTGs were the only feasible power system for the Cassini mission, and are a safe and justifiable power system to use when other systems cannot be used. —WILLIAM ARTHUR ATKINS
Viewpoint: No, using plutonium as an energy source for Cassini was not safe, given the known dangers of plutonium and the legacy of rocket launch failures.
The Cassini spacecraft was successfully launched in 1997. Leading up to Cassini’s launch, serious concerns were raised about its onboard plutonium, which is used to supply the spacecraft’s electrical power. Years after Cassini’s launch, many people continue to have concerns regarding the use of radioactive substances as power sources on such spacecraft. Divergent Views about Nuclear-Powered Satellites The particular power units installed on Cassini are called radioisotope thermoelectric generators (RTGs). Cassini’s RTGs use radioactive plutonium to supply the electrical power it needs to operate its various devices. Additionally, leftover heat from the RTGs is used to warm Cassini’s electronic circuitry. RTGs have been utilized by the United States and the Soviet Union (and now Russia) on a variety of spacecraft starting in the 1960s, and continuing up to the present. At the beginning of the twenty-first century, there is a range of opinion regarding the use of radioactive power units aboard satellites. At one extreme is the assertion that RTGs possess such a negligible safety risk that the probability of an RTG accident causing ill effects in people or the environment is essentially zero. At the other extreme are claims that the chances of an accident
(such as a launch failure) involving an RTG are considerable, and that under the right circumstances hundreds of thousands of people could be injured or killed. Obviously, strong disagreements exist regarding the safety of RTGs in certain situations. Even if one takes the dire warnings regarding RTGs with a grain of salt, one must still address the question of why they are used at all, given that most satellites do not possess them and that many people oppose their use. In addressing why RTGs are used as spacecraft power supplies, it is instructive to first examine the structure of the Cassini spacecraft as well as its mission of exploration. This examination of Cassini and its mission is important because the use of plutonium, as a power source, is motivated by particular types of exploratory missions and spacecraft, as typified by Cassini. Cassini and Its Mission At a mass of 6.3 tons (5.7 metric tons), Cassini is one of the largest nonmilitary, unmanned spacecraft constructed to date. Built by an international team, it is a complex spacecraft with a total project cost of about $3.4 billion. Cassini, along with its attached probe, Huygens, was designed to explore the planet Saturn and its many moons. The great distance of Saturn from the Sun is a principal reason why RTGs were chosen as Cassini’s power source instead of a solar array. A quantitative comparison can be made between the light received from the Sun at Earth’s distance and at Saturn’s distance. Light intensity from a source decreases as the distance from the light source increases; specifically, light intensity varies as the inverse of the distance squared. For example, a doubling of the distance from a light source results in one-fourth the light intensity per unit area, while a tripling of the distance results in one-ninth the light intensity. Earth orbits the Sun at a distance of approximately 93 million mi (150 million km), whereas Saturn orbits the Sun at around 886 million mi (1.4 billion km). A given solar array near Saturn only derives about one-hundredth the power from sunlight that it would near Earth.
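The same point can be written out numerically; using the orbital distances just quoted (rounded), a short calculation recovers a factor of nearly ninety, in line with the "about one-hundredth" figure above.

    EARTH_ORBIT_KM = 150e6       # about 93 million miles
    SATURN_ORBIT_KM = 1.4e9      # about 886 million miles

    ratio = SATURN_ORBIT_KM / EARTH_ORBIT_KM
    print("Saturn is about %.1f times farther from the Sun than Earth," % ratio)
    print("so sunlight there carries about 1/%.0f the power per unit area." % (ratio ** 2))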
So the Cassini project combines an exploratory mission far from the Sun with a large, complex spacecraft requiring a relatively large power supply. This combination of factors meant that the only reasonable option available to power Cassini consisted of RTGs, the advantages of which are further described below. Advantages of RTGs From an engineering standpoint, RTGs as satellite power sources (at least for certain types of missions) are a robust and attractive solution for several reasons, including: (1) There are no moving parts in the basic mechanism, which means that there are no
bearings, valves, etc., to fail; in contrast, many other types of existing or potential satellite power sources require some sort of moving mechanism to generate electricity. Even solar cells—although in principle they require no moving parts to generate electricity—must still be aimed at the Sun by some mechanism in order to generate power. (2) The source of energy in RTGs, i.e., the radioactive plutonium, is long-lived. According to the National Aeronautics and Space Administration (NASA), the Pioneer 10 spacecraft launched in 1972 to study the outer planets still possesses an operating RTG power unit (although with a reduced power output). Pioneer 10 was still intermittently transmitting data to Earth in 2001. (3) The power output of an RTG is constant regardless of the satellite’s location. In contrast, solar cells do not operate if solar illumination ceases, as when the satellite passes into a planet’s shadow. And, as noted previously for solar cells, solar illumination decreases rapidly with distance, which in turn translates into a drastic reduction in power output. (4) Finally, RTGs are attractive because of their compact size, as compared to other power sources like solar arrays. Judging strictly by the technical attributes of various types of power supplies, RTGs should be the premier choice for use in a variety of space missions, including deep-space missions of the kind Cassini was designed to complete. However, there are reasons that strongly argue against the continued use of RTGs. The stance taken in this essay is that satellites should not utilize radioactive materials to provide their power because of two overriding considerations: First, there are other, nonnuclear devices that could reasonably provide power to satellites such as Cassini. Second, an accident involving RTGs could have dire consequences for the cause of space exploration, and in extreme situations could cause injury to people and/or the environment on Earth. Both of these assertions will be examined more fully below, beginning with the latter point.

Possible Ramifications of an RTG Accident NASA funded several studies to gauge the probability of the Cassini spacecraft crashing to Earth in various accident scenarios, as well as the deleterious effects that could be caused by the radioactive material in RTGs if such an event should occur. One of these reports, the Final Environmental Impact Statement (FEIS), asserted that there was a low probability of a serious accident involving Cassini, and furthermore, if an accident did occur, its health effects upon people would be relatively minor.
However, Dr. Michio Kaku, professor of theoretical physics at the City University of New York, performed a detailed review of NASA’s FEIS and concluded that, by making convenient assumptions, the report downplayed the very real health risks that could result from an accident involving RTGs such as those used on Cassini. Kaku stated, “True casualty figures for a maximum accident might number over 200,000.”
Many scientists and engineers disagree with assessments like Kaku’s. For the sake of argument, assume that such analyses are terribly unrealistic, and that dire predictions of RTG-induced health disasters are completely overblown. Even in such a scenario, there could still be serious and negative side effects from an accident involving an RTG. Even if only a small quantity of radiation were released, an accident involving an RTG over a populated area could lead to public panic, extremely negative publicity for NASA, and a slew of lawsuits (if not against the government directly, then against the companies that made the rocket and spacecraft).

A Reasonable Alternative to Radioactive Power Supplies If the use of RTGs aboard Earth-launched spacecraft is a bad idea, what can feasibly be developed in the near future to replace them? Although several alternatives to RTGs are in the realm of possibility, the most promising technology seems to be some form of solar power. Solar arrays power the vast majority of satellites in orbit around Earth. Communications satellites are typical examples of satellites whose instruments are powered solely (via arrays of solar cells) by the Sun’s energy. Some solar-powered satellites store energy in batteries to power the satellite during those times when the solar arrays are temporarily not producing electricity—for instance, when a satellite is in a planet’s shadow. Nevertheless, the ultimate power source is the solar array.
As noted previously, the solar energy per unit of area is dramatically less for a deep-space vehicle such as Cassini than it is for satellites considerably closer to the Sun. While a much lower solar-power density is available in locales far from the Sun (such as Jupiter, Saturn, and beyond), sunlight can nonetheless be a viable energy source for spacecraft. Dr. Ross McCluney, a research scientist at the Florida Solar Energy Center of the University of Central Florida, favors the use of solar devices to replace RTGs for deep-space missions. McCluney has investigated the use of structures to concentrate light (somewhat analogous to the way in which a lens focuses light). These solar concentrators would concentrate the weak sunlight found in regions far from the Sun onto solar cells. The solar cells would then produce electricity. By effectively boosting the power density of sunlight using low-mass solar concentrators, far fewer solar cells would be needed to produce a SCIENCE
given amount of power. A solar concentrator/solar cell array would be far less massive than an all-solar cell array, thereby reducing the total spacecraft mass. Further efficiencies and hence mass reductions in such a power unit could be achieved by using new, higher-efficiency solar cells. Besides developing a solar-based power supply, spacecraft engineers should reduce the power demand of satellites. This can be done in two ways. First, smaller satellites should be used. According to NASA’s Jet Propulsion Laboratory (JPL), the Pioneer 10 and Pioneer 11 spacecraft that explored the outer planets during the 1970s each possessed a mass of approximately 575 lb (260 kg). In contrast, Cassini has a mass of over 12,600 lb (5,700 kg). Cassini carries some 18 scientific instruments, along with computers and data storage devices, as well as one large and two small antennas. All of this onboard instrumentaSCIENCE
tion required a considerable power supply. For future missions, using several smaller spacecraft carrying fewer instruments and equipment would require much less power per spacecraft. A second technique for reducing power consumption depends upon developing electronic circuitry that requires much less power to operate than similar circuits in use today. To recap, the four technologies described above were (1) the development of solar concentrators; (2) higher-efficiency solar cells, to be used in tandem with solar concentrators; (3) smaller and lighter spacecraft, but more of them; and (4) low-power electronics. McCluney envisions combining all four of these advances in spacecraft technology in order to eliminate the need for RTGs on future space missions. Some might argue that developing these technologies, such as solar concentrators for
spacecraft, may not be desirable or even practical. However, NASA claims to be in the business of developing innovative technologies of this type. Indeed, in the early 1990s, under the leadership of the administrator Daniel Goldin, a new philosophy regarding space exploration began at NASA. This new attitude was most forcefully directed toward unmanned (robotic) space missions and was embodied by the catchphrase “faster, better, cheaper,” meaning a move toward less complex and considerably less expensive space missions. To compensate for less sophisticated and cheaper satellites, many more missions would be flown.
A “faster, better, cheaper” philosophy contains many attractive features. The amount of time from mission conception to launch is greatly reduced, meaning that the latest technology and techniques (such as new solar-concentrator arrays) can be incorporated into a particular spacecraft. Also, highly expensive and complex vehicles, such as Cassini, are replaced by multiple, smaller spacecraft, the loss of any one of which can be tolerated. In contrast, the loss of a Cassini-type spacecraft means the loss of every instrument and mission goal. Thus, it would seem that development of new technologies and approaches to space exploration, such as the development of solar-concentrator arrays to replace RTGs, would be especially suited to
NASA’s well-publicized “faster, better, cheaper” philosophy.

The Future of RTGs in Space Exploration Although RTGs have been used successfully in a variety of space missions, they also have been aboard spacecraft that have crashed back to Earth. Several U.S. spacecraft with RTGs aboard have been destroyed in launch accidents. In at least one such launch failure, a small amount of radioactive material was released into the environment. Furthermore, spacecraft already traveling in outer space can sometimes reenter Earth’s atmosphere, in which case extreme temperatures and pressures can be encountered due to the spacecraft’s high speed. According to the environmental organization Greenpeace, a Russian spacecraft carrying an RTG power unit reentered Earth’s atmosphere in 1996 and is thought to have crash-landed in South America. The fate of the plutonium that was onboard the spacecraft is unknown, at least to the general public.

If radioactive power supplies continue to be used on spacecraft, it is reasonable to assume that sooner or later another launch failure or reentry involving an RTG will occur. The specter of accidents involving radioactive materials aboard spacecraft poses unacceptable financial, environmental, health, and negative publicity risks. Some scientists and engineers contend that nonnuclear power supplies can, and should, be developed and deployed on future space missions that currently have RTGs as their only viable power-supply option. Using alternative power units for missions such as Cassini’s will involve developing new technologies. These new devices, such as solar arrays employing concentrators and high-efficiency solar cells, may well find applications in areas not yet contemplated; high-tech spin-offs may ensue from the research and development efforts to replace RTGs. At the same time, the potentially serious problems associated with RTGs will be avoided. —PHILIP KOTH

An image of Jupiter taken by the Cassini spacecraft. Cassini passed Jupiter in December 2000, on its way to its ultimate destination, Saturn. (© CORBIS. Reproduced by permission.)

AN UPDATE ON CASSINI’S JOURNEY At the close of 2001, the Cassini spacecraft—headed for a rendezvous in mid-2004 with the planet Saturn—was operating normally. This clean bill of health for Cassini was derived from the telemetry it sends to Earth. (Telemetry is the transmission of data, usually from remote sources, to a receiving station for recording and analysis.) Cassini flew by Jupiter on December 30, 2000. During the six months that Cassini was closest to Jupiter, its Ion and Neutral Camera (INCA) monitored the fluctuations in the solar wind and how they affected the planet’s magnetosphere. (Jupiter’s magnetosphere is a gigantic region of charged particles trapped inside the planet’s magnetic field.) Cassini also examined Jupiter’s moons, rings, and storm clouds. On November 26, 2001, Cassini began a 40-day search for gravitational waves. By analyzing tiny fluctuations in the speed of the Cassini spacecraft (with the use of NASA’s Deep Space Network), scientists are attempting for the first time to directly detect gravitational waves. If scientists are able to detect these “gravity ripples,” valuable information may be learned about how the universe behaves. The Cassini spacecraft is expected to arrive at Saturn on July 4, 2004. At that time, the Huygens probe will separate from Cassini, and then enter and aerodynamically brake into the atmosphere of Saturn’s moon Titan. (The probe was named after Christiaan Huygens, who discovered Titan in 1655.) The probe will parachute a robotic laboratory down to Titan’s surface while the probe support equipment (PSE), which remains with the orbiting spacecraft, will recover the data gathered during descent and landing. The PSE will then transmit the data to Earth. By exploring Saturn and its moons, scientists will gain valuable clues for understanding the early history and evolution of the solar system. —William Arthur Atkins
Further Reading Cassini Program Outreach, Jet Propulsion Laboratory. “Cassini-Huygens Mission to Saturn.” Jet Propulsion Laboratory, California Institute of Technology. . “CPSR’S Joins Call to Postpone Cassini Launch: Risks of Nuclear Accident Too Great.” Computer Professionals for Social Responsibility. September 20, 1997. . “Greenpeace International Opposes Cassini Launch.” Greenpeace International. September 1997. .
Grossman, Karl. The Wrong Stuff: The Space Program’s Nuclear Threat to Our Planet. Monroe, Maine: Common Courage Press, 1997.
Herbert, George William, and Ross McCluney. “Solar Cassini Debate.” Online posting. August 28, 1997. The Animated Software Company. . Horn, Linda, ed. Cassini/Huygens: A Mission to the Saturnian Systems. Bellingham, Wash.: International Society for Optical Engineering, 1996. Kaku, Michio. “A Scientific Critique of the Accident Risks from the Cassini Space Mission.” August 4, 1997. The Animated Software Company. . Spehalski, Richard J. “A Message from the Program Manager.” Cassini Mission to Saturn and Titan. Jet Propulsion Laboratory, California Institute of Technology. . Spilker, Linda J., ed. Passage to a Ringed World: The Cassini-Huygens Mission to Saturn and Titan. Washington, D.C.: National Aeronautics and Space Administration, 1997. Sutcliffe, W. G., et al. “A Perspective on the Dangers of Plutonium.” October 18, 1995. Lawrence Livermore National Laboratory. . Uranium Information Centre. “Plutonium.” Nuclear Issues Briefing Paper 18. Uranium Information Centre. March 2001. .
Should civilians participate in manned space missions?
Viewpoint: Yes, the spirit of space exploration as an effort for all humanity demands broad participation by astronauts, scientists, and civilians alike. Viewpoint: No, space exploration—currently in its infancy—is an inappropriate pursuit for civilians unused to, or unfit for, the rigors and risks that it poses.
Exploration is as old as mankind. Our earliest ancestors explored out of necessity, whether as hunter-gatherers looking for food, for ample space to live, or simply out of curiosity. Settlement of Earth went quite slowly. If early humans evolved in Africa 2 million years ago, it took their descendants 1.99 million years to find North America, since evidence of early human activities in what is now the United States only dates back some 12,000 years. Rapid mass transportation in the form of trains, cars, and aircraft had to wait until the nineteenth and twentieth centuries—barely a heartbeat in the long human chronicle. A defining aspect of almost all of this exploration was that everybody did it. Early hunter-gatherer clans moved as units. Entire families boarded sailing ships in England to head to the New World. In the developing United States, expansion was in large measure an individual process, as settlers headed westward in small groups. The exploration of space began in October 1957 with the launch of the Soviet satellite Sputnik, a name usually translated as “satellite” or “fellow traveler.” Human presence in space has so far been a very technical one, limited to astronauts and engineers highly trained for the missions they undertake. In the years since the first U.S. astronauts and Soviet cosmonauts orbited Earth, space flight has gradually become a more familiar concept. The first flight of the space shuttle in 1981 was met with enormous fanfare and ongoing news coverage; today many of us are not even aware a shuttle flight has been launched until we see it mentioned in a sidebar in the paper. With the advent of reusable spacecraft, and with the development of the International Space Station (ISS), a regular human presence in space, the idea of opening the exploration of space to the rest of humanity presents itself. Perhaps because exploration is such an ingrained, universal, human experience, sending civilians to space seems an inevitable and appropriate development. The first, and much publicized, foray into civilian exploration of space ended in unspeakable disaster on January 28, 1986, when the space shuttle Challenger exploded 73 seconds into its flight. “Teacher in Space” Christa McAuliffe died, along with the six other crewmembers, and a vigorous debate about the wisdom of sending civilians into orbit ensued. Proponents of civilian exploration of space support their position with two major arguments. The first is practical: Space exploration is expensive, and a regular revenue stream from paying civilian astronauts could initiate a “space tourism” industry, and thus provide a boost for sponsoring agencies such as the National Aeronautics and Space Administration (NASA). The second is philosophical: the innate curiosity humans have about their environment and their universe is reason enough to participate. Pushing the unknown bolsters
the spirit, the proponents argue, and is an essential pursuit for all mankind. Opponents of civilian space travel point out that space exploration is still a very new pursuit, and is fraught with all the dangers of any infant technology. Indeed, the space shuttle is by all rights an extraordinarily dangerous machine. Various estimates of the failure rate of the shuttle have been made, and they hover around the 1 in 100 mark. If commercial aircraft operated with such a failure rate, there would be over 300 crashes every day—something that would certainly raise eyebrows. Opponents also are skeptical of “space tourism” as a truly lucrative business. They argue that only the wealthy could afford such an expensive “ticket,” and in any event the absolute number of civilians that could be taken on orbital joyrides with the world’s present fleet of spacecraft is quite small.
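The airline comparison in the paragraph above is easy to reproduce; in the Python sketch below the daily flight count is an assumed round number for worldwide commercial departures, not a figure from the essay.

    shuttle_failure_rate = 1.0 / 100.0   # the roughly 1-in-100 estimate quoted above
    daily_airline_flights = 35000        # assumed round figure for worldwide departures per day

    crashes_per_day = shuttle_failure_rate * daily_airline_flights
    print("at shuttle-like reliability: about %.0f airline crashes every day" % crashes_per_day)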
American millionaire Dennis Tito, shown shortly before he boarded a Russian rocket for a trip to the International Space Station. (AP/Wide World Photos. Reproduced by permission.)
Both sides have telling points. If the allure of exploration is as strong as history implies, perhaps there are many who would hand over several months of salary for the thrill of a space shuttle ride, and the awesome look at the Sun climbing over the curving blue limb of Earth far below. Increased accessibility to space travel and exploration through a welldeveloped “civilians in space” program could provide a groundswell of interest and support—which of course is vital in the United States, where most of the available dollars for ventures in space come ultimately from the taxpayers. However, the political and public relations fallout from civilian fatalities could have just the opposite effect, and it is certainly true that space travel is not only dangerous but notoriously technical. This last point is at the center of the present issue, and pervades the two essays that follow. The debate ultimately hinges not on the issue that space exploration is dangerous, but that it is still a pursuit that requires genuine expertise. Setting forth across the Atlantic on the Mayflower may indeed have been dangerous for a family of nonsailors, but it was not particularly technical. Anyone can walk up the gangplank onto a ship. Travel in a spacecraft, however, is still limited to very small crews and requires a significant breadth of expertise and acclimatization. If the entire crew of the Mayflower had died en route to America, the “civilians” would have at least had a chance for survival; in the space shuttle, a stranded civilian would have simply no chance at all. To what extent, therefore, is it sensible to open technical exploration to anyone? Whether that exploration is flying into orbit, climbing Mount Everest, traveling in a bathyscaphe to the bottom of the Mariana Trench, is it prudent to open it to anyone with a deep enough wallet? It seems there must be a dividing line between exploration and foolishness, and the essays below examine where that line lies. —JEFFREY HALL
Viewpoint: Yes, the spirit of space exploration as an effort for all humanity demands broad participation by astronauts, scientists, and civilians alike. Children are often asked, “What do you want to be when you grow up?” Since the beginning of the manned space program and especially after Apollo 11 Commander Neil Arm-
strong stepped onto the Moon on July 20, 1969, many children have answered, “An astronaut.” However, unlike other replies, such as scientist, fireman, nurse, doctor, lawyer, pilot, and so on, not one in a million children has been able to pursue the dream of going into space. While the National Aeronautics and Space Administration (NASA) offered the promise of civilians traveling into space beginning in the mid-1980s with its civilian space program, this offer was summarily rescinded when the space shuttle Challenger exploded in 1986 shortly after its launch from Cape Canaveral. The tragedy
resulted in the death of its entire crew, including civilian schoolteacher Christa McAuliffe. Since then, the ban on allowing civilians on manned space flights has remained intact for over 15 years, despite significant improvements in safety and numerous successful space shuttle flights.
KEY TERMS
G-FORCE: Unit of measurement for determining the inertial stress on a body undergoing rapid acceleration, expressed in multiples of acceleration of gravity (g).
INTERNATIONAL SPACE STATION: Inhabited satellite orbiting Earth at an altitude of 250 mi (400 km). The completed space station, built by the United States, Russia, and 15 other nations, will have a mass of approximately 520 tons (470 t), and will measure 356 ft (108 m) across and 290 ft (88 m) long.
NATIONAL AERONAUTICS AND SPACE ADMINISTRATION (NASA): U.S. government agency charged with conducting research on flights within and beyond Earth's atmosphere.
SPACE SHUTTLE: System for manned space flights that includes a reusable orbiter spacecraft capable of returning to and landing safely on Earth.
Although risks are inherent to space travel, people take risks almost every day of their lives, from the Monday-through-Friday throng of workers traveling to and from work in their cars, to those rugged individualists who decide they want to climb Mt. Everest. Taking risks is the right of any individual in a free society and should not be limited to a few chosen government employees. Unlike standard aviation, which grew out of both government and private enterprise and included numerous civilians taking what were considered significant risks at the time, space flight has remained the purview of government-controlled organizations and employees. As a result, despite landing on the Moon and the creation of the International Space Station (ISS), progress in space flight and exploration has remained relatively minimal when compared with the rapidity with which other technological innovations have advanced in the hands of civilians and private enterprise. For example, routine airplane transportation was in place only three decades after the Wright brothers made their historic flight in 1903. Now more than 1.5 million Americans fly around the country and the world every day, and the airline industry has played a vital role in the country’s economy and in the lives of almost all Americans. Admittedly, allowing civilians to participate in space missions must begin on a small scale, but it could be introduced immediately by allowing civilians to fly on the space shuttle to the ISS. Although the shuttle can carry up to eight people, the average crew is five, including the commander, the pilot, and three mission specialists. In addition to McAuliffe, NASA has already set a precedent by allowing civilians to fly on missions, including payload specialists from technology companies that want to perform tests in space, Senator Jake Garn in 1985, Congressman Bill Nelson in 1986, and ex-astronaut Senator John Glenn in 1998.

Efforts by the United States to send humans into space were partly a response to the former Soviet Union’s taking the lead in the “space race” when it achieved the first manned space flight with the 1961 launch into Earth’s orbit of cosmonaut Yuri Gagarin in the Vostok 1 space capsule. Russia, the primary component of the former Soviet Union, has once again “outdone” the United States, allowing entrepreneur and former aerospace engineer Dennis Tito to pay a reported $20 million to fly on the Russian Soyuz spacecraft to the orbiting Interna-
Efforts by the United States to send humans into space were partly a response to the former Soviet Union’s taking the lead in the “space race” when it achieved the first manned space flight with the 1961 launch into Earth’s orbit of cosmonaut Yuri Gagarin in the Vostok 1 space capsule. Russia, the primary component of the former Soviet Union, has once again “outdone” the United States, allowing entrepreneur and former aerospace engineer Dennis Tito to pay a reported $20 million to fly on the Russian Soyuz spacecraft to the orbiting International Space Station, where he spent six days in April 2001. NASA vehemently objected to the trip, arguing that Tito’s lack of training represented a safety risk, that he would be a distraction and need “babysitting,” and that the visit was especially inappropriate while the assembly of the space station was still going on. Despite these objections, Russia asserted its right to bring aboard any “cargo” it desired. Tito made the flight, and so far it appears that his presence had little or no impact on mission operations.
Former astronaut and Apollo 11 moonwalker Buzz Aldrin is in favor of such paying customers. Testifying before the U.S. House of Representatives Subcommittee on Space and Aeronautics, Aldrin said, “Ticket-buying passengers can be the solution to the problem of high space costs that plague government and private space efforts alike.” And there are wealthy people ready to put down a substantial amount of money to take a space trip. Before Tito, Toyohiro Akiyama, who worked for the Tokyo Broadcasting System, paid $11 million to spend a week on the Russian Mir space station, and film director James Cameron has also expressed interest in paying up to $20 million for the chance to go into space.
Senator Jake Garn undergoing stress training for his 1985 trip into space. (© Bettmann/CORBIS. Reproduced by permission.)
Safety and Qualifications for Civilian Space Passengers
Since the Challenger disaster in 1986, NASA has funneled more than $5 billion into upgrading the space shuttle and improving its safety. Furthermore, the numerous manned flights since the early 1960s have shown that short-term space travel presents no significant physiological problems, and common minor effects such as motion sickness can be easily
treated with standard medications. Nevertheless, most civilians should be in good enough physical shape to withstand the G-force pressures during liftoff and landing. G-force is the inertial stress a body experiences during rapid acceleration, and experiments indicate that G-forces in the 10 to 20 g range could cause internal organs to move, resulting in injury and, beyond 20 g, potentially death. Despite Hollywood movie depictions of the rigors of G-forces during launch, astronauts experience a maximum of about 3 g, causing the body to feel heavier and making legs and arms more difficult to move. Most normally healthy people can easily withstand the G-forces experienced during launch. At age 77, John Glenn experienced no adverse effects during his 1998 space shuttle flight. At the Astronaut Hall of Fame in Florida, civilian tourists can take a spin in the G-Force Trainer and experience 4 g.
Nevertheless, NASA has long expressed concerns over safety and other issues associated with civilians participating in manned missions. Some estimates indicate that the risk of disaster during a space shuttle takeoff and flight is 1 in 100, and NASA believes this is an unacceptable risk for civilians. Still, if someone is willing to take that risk and sign a waiver relieving NASA and the U.S. government from responsibility should a disaster occur, why should that person be denied the opportunity based on risk alone? It is important to remember that this is merely an estimated risk; no loss of life has occurred in the United States space program since the Challenger disaster. Furthermore, in a speech before a 1999 Space Frontier Foundation Conference, a NASA administrator said that NASA’s goal was to have launch vehicles with a reliability of greater than 0.999, with that reliability continuing to increase to 0.999999 over two to three decades.
Of course, civilians on NASA space flights will require more education and training than the typical airline passenger receives during a two-to-five-minute educational lecture with video about safety procedures. Training for functioning in weightlessness would be required, and preflight training for emergencies would be necessary. Simulator training would also be included, not only to provide experience of the in-flight dynamics but also to provide an opportunity for potential passengers who found the experience extremely uncomfortable and negative to change their minds. Dr. Harvey Wichman, professor of psychology and director of the Aerospace Psychology Laboratory at Claremont McKenna College in California, points out that an unruly or troublesome passenger cannot be taken off a space flight. However, he notes that his laboratory has shown that two hours of pretraining before a 48-hour civilian space-flight simulation dramatically reduces negative interpersonal reactions. A screening and training program for civilian space flight has already been made available through the Orbital Flight Pre-Qualification Program, which was developed by a private company interested in space tourism in cooperation with the Russian agencies responsible for cosmonaut training.
Actually, NASA has already answered the question as to whether or not civilians should be allowed on space flights by reinstating its Spaceflight Participant Program in 2002 to provide opportunities for the general public to ride on the space shuttle and stay in the ISS. Furthermore, a two-year study conducted by NASA and the Space Transportation Association (STA) concluded that more should be done to expand the space-tourism business and to create an in-space travel and tourism business. As the only system in the United States for transporting people to and from space, the space shuttle can serve as a gateway to new industries and businesses, including commercial space travel. In turn, commercial space travel can have a significant impact on the U.S. and global economies, creating new technologies, new jobs, and new opportunities. Potential “space businesses” include utilizing space solar power for energy, mining helium-3 on the Moon, and processing materials in microgravity. But these are efforts that can only be done with private capital. Allowing civilians, such as the leaders of businesses interested in starting
space projects, to participate in manned space flights can only help these private sector initiatives get off the ground. Tourism may end up being the biggest space industry of all and could become a part of the overall commercial and civil space program. It would also add significantly to the gross revenues of more than $400 billion per year that are part of the travel and tourism business in the United States alone. NASA and STA reported that space tourism and travel by the general public could be a $10 billion to $20 billion business in just a few decades. In the near term, the money paid by wealthy civilians to journey into space could be used to help offset the approximately $14 billion a year that taxpayers currently pay for the government-run space program and to increase the overall space-exploration budget, which had declined to approximately $14.2 billion in 2001 from $16.8 billion in 1991. Several surveys conducted in the United States, Japan, Great Britain, and elsewhere have shown that the public is keenly interested in the possibility of space travel. In 1993, the National Aerospace Laboratory (NAL) conducted a survey of 3,030 Japanese people and found that 70% under age 60 and over 80% of those under age 40 would like to visit space. Approximately 70% responded that they would pay up to three months of their salaries to do so. In the United States, a 1995 NAL survey of 1,020 households found that 60% of the people wanted to take a space vacation, with 45.6% saying they would allocate three months of their salaries for a chance to go, 18.2% were willing to pay 6 months worth of salary, and 10.6% would be willing to pay a year’s salary. Despite gaps between consumer intentions and actions, the study concluded that space tourism could be a multibillion dollar business. What about the risks and possible law suits if something goes wrong? Just as with other “adventure travel” programs, insurance and liability waiver forms that specify customers travel at their own risk could be employed.
Good for the Psyche and the Soul
Allowing civilian participation in space missions would provide a boost not just for the overall economy but also for one of the driving forces in most cultures: the need to explore new horizons. However, as Rick N. Tumlinson, president of the Space Frontier Foundation, put it during his 1995 testimony to the Space and Aeronautics Subcommittee of the U.S. House of Representatives, “We are a nation of pioneers with no new frontier.”
What’s so special about going into space? When asked what was the best thing about space travel, Byron Lichtenberg, a space shuttle payload specialist who is not a career astronaut, replied, “For me the best parts about being an astronaut are the incredible opportunity to fly in space, to be an explorer, to see Earth from space and to be involved in an effort to be able to get the general public into space and off our planet. So many things today are commonplace that many people do not get passionate about anything. We have become more and more a country of spectators not participants.”
Allowing people from the general public to participate in manned space flights is the first step to opening up a new frontier, a frontier not only for business and economic development but also a frontier of hope, opportunities, and dreams. It will be the first step to a universe without limits. In the final analysis, it is not a question of whether civilians should be allowed on manned space flights; it is a question of when. Given a chance, civilians will show that they, too, have the “right stuff.” —DAVID A. PETECHUK
Viewpoint: No, space exploration—currently in its infancy—is an inappropriate pursuit for civilians unused to, or unfit for, the rigors and risks that it poses.
“One small step for man; one giant leap for rich people.” It doesn’t quite have the same ring as astronaut Neil Armstrong’s original statement, “One small step for man, one giant leap for mankind,” made when he took the first steps onto the Moon in 1969, does it? However, with wealthy thrill-seekers such as entrepreneur and former aerospace engineer Dennis Tito being allowed to join a Soyuz mission to the International Space Station (ISS), such a statement could become surprisingly commonplace. The Russian Space Agency has already accepted money for allowing civilians on Mir. Other millionaires, including Titanic director James Cameron and South African entrepreneur Mark Shuttleworth, have tried to buy tickets up to the ISS. Although the National Aeronautics and Space Administration (NASA) and the Canadian, European, and Japanese space agencies stated that Tito’s visit was a one-time exception, the lure of multimillion-dollar investments from thrill-seekers will undoubtedly inspire future “exceptions” to be made. Proponents of civilians entering space say this money could help promote and expand the various space programs. However, does the simple fact that someone possesses a great deal of money provide him or her with the right to buy his or her way into every situation? Does it seem reasonable to replace specialized scientists conducting valuable research with unskilled tourists whose only qualification is that they have a lot of money to throw around? The answer to all these questions is simple: absolutely not.
There are three sound reasons why we should neither encourage space tourism nor allow the trend of millionaires hitching rides into space to continue: 1) space flight requires specialized skills and dedicated training; 2) untrained civilians can become a distraction on space missions; and 3) there are grave risks involved.
German astronauts training for a mission on a U.S. space shuttle. Rigorous selection criteria and training programs mean that only a select few will ever get to journey into space. (Photograph by Roger Ressmeyer. CORBIS. Reproduced by permission.)
Specialization and Training
Flying into space is not like taking a trip on an airplane; entering space is a working trip in every sense of the word. Each mission has specific goals and requirements for successful completion. A person who lacks the necessary skills can greatly interfere with the mission and put its success at risk. Manned space flights, therefore, require crewmembers who are skilled professionals, trained not only in their scientific or specialized fields but also to handle the rigors of space. Without this training, they could endanger not only the mission but also themselves and the people around them. For this reason, NASA and other space agencies have created meticulous training programs for all potential candidates. Allowing an unskilled tourist to skip this process is exceedingly dangerous, as well as unfair to those who undergo the training.
From the time of the first U.S. manned space flight, astronauts have undergone arduous training programs and been required to adhere to specific requirements. In recent years, those requirements have become even more specialized. Astronauts are no longer simply pilots, but scientists as well. In addition, there are health considerations related to microgravity and other environmental issues. In a profession where one mistake can result in a catastrophic loss of life and the destruction of billions of dollars of research and equipment, there can be no “second best.” An astronaut must truly have the “right stuff.”
NASA’s initial selection process ensures that only the most highly qualified individuals are considered for manned space missions. For both pilots and mission specialists, candidates must possess at minimum a bachelor’s degree in biological science, engineering, mathematics, or physical science, in addition to three years of professional experience in a related field. An advanced degree is, of course, considered more favorably. Pilots must have a minimum of 1,000 hours of pilot-in-command time in a jet aircraft, with flight-test experience considered more favorable. Both pilots and mission specialists must pass a rigorous physical, be of a proper height (64–76 in; 162–193 cm), and have perfect (or correctable to perfect) eyesight. The possession of all these qualities means only that a candidate
will be considered for a manned space mission, not that they will be accepted for further training. Out of the thousands of applicants, less than a few dozen are chosen. Should a candidate be one of these lucky few, he or she will begin formal astronaut training at the Johnson Space Center, located just outside Houston, Texas. This training consists of laborious physical and educational training over a period of one to two years, depending on a mission’s requirements. Classroom training includes courses such as astronomy, computers, guidance and navigation, mathematics, physics, and various other sciences. Trainees also receive technical training in handling complex equipment, parachuting, survival techniques (air, land, sea, and space), and operating a space-suit. To prepare them for the space environment, candidates are also exposed to severe atmospheric pressure changes and microgravity. All the while, they are under a strict observation and evaluation. Again, completion of this training does not mean a candidate will become an astronaut. Considering the training candidates must undergo to become astronauts, it is difficult to understand how an untrained civilian could be considered worthy of participating in a manned space flight. Nor is it justifiable that an unqualified civilian be allowed to replace a more experienced and qualified candidate. Rich civilians cannot pilot the shuttle, operate space suits or ISS equipment, or conduct the numerous and valuable experiments vital to space research. In truth, they are nothing more than ballast, taking up precious space and resources better made available to someone with qualifications. With the diminishing number of space flights and the work schedule for the ISS, it becomes increasingly important that every seat be given to trained astronauts rather than wealthy individuals.
Distraction to Astronauts and Operations
Because an untrained civilian is not prepared to participate in a manned space mission, he or she stands a very high chance of getting in the way. The interiors of the shuttle and the ISS are cramped environments, where space is utilized to its utmost degree. Adding a person who cannot make an equal contribution to the team to such an environment is a waste of the available space. In essence, a disservice is done to the remainder of the crew, whether intended or not.
One key characteristic of an astronaut is the psychological ability to act as a team player. Each member of a manned space mission is trained to work with the others in an effortless and efficient manner. Having an outsider thrown into the mix can cause significant distractions to a closely knit crew. Another consideration that must be taken into account is the extra burden placed on mission controllers and support staff while they try to maintain safety for both themselves and the tourist. This was one of the chief concerns of former NASA chief administrator Daniel Goldin, who greatly criticized the proposed presence of Dennis Tito on the ISS.
Given that each manned mission costs taxpayers several billions of dollars, NASA and other space agencies are concerned about successfully completing their missions in the best and most expedient manner. A civilian could detract from research and operations by their mere presence. This in turn wastes money and resources that could be better used by a trained staff member. An excellent example of this is Tito’s flight, as NASA is now seeking compensation for losses incurred because of the millionaire’s presence on the ISS. Finally, civilians could quite likely serve as a dangerous distraction should a problem occur. Emergencies in space require cool heads and cohesive teamwork. Specialized experience and expertise with regard to the various systems in the space vehicle can make all the difference in a critical situation. A panicked civilian, for example, would become a hazardous obstacle in an already perilous situation, costing astronauts vital seconds that could be the difference between life and death. This issue of safety has been one of the major points in the criticism of civilians in space.
Christa McAuliffe, the first civilian to participate in a manned space mission, was killed along with the rest of the crew when the space shuttle Challenger exploded after takeoff on January 28, 1986. (National Aeronautics and Space Administration (NASA).)
Safety Considerations
Space travel has many dangers. A second’s hesitation can lead to catastrophic consequences. It is for this reason that astronauts undergo such strenuous training before being allowed to participate in manned space missions. A civilian, no matter how much he or she has paid for the trip, cannot be considered a “safe” passenger; civilians simply do not have the training to react properly to an emergency. As mentioned previously, civilians would be more of a hindrance than a help in such a critical situation, thus only increasing the danger.
Potential damage to public relations must also be taken into account when considering the addition of civilians to manned space flights. The first civilian traveling into space met with an untimely end. Christa McAuliffe, a teacher, was killed along with her six crewmates when the space shuttle Challenger exploded shortly after takeoff in 1986. McAuliffe was participating in a NASA program intended to allow civilians to take part in manned space missions without the usual astronaut training. The program was immediately cancelled, and the public relations disaster has hung over NASA in the years since.
Conclusion
The addition of civilians to manned space missions is an unsuitable and dangerous course of action. Not only is such an action a waste of resources better spent on a trained astronaut; it also places other crew members at unreasonable risk. Civilians do not have a place on the space shuttle, the ISS, or on any other space mission. They cannot contribute to the mission and, however one looks at it, only serve to get in the way.
The risks to the crews and machinery are far too high to allow civilians to participate in manned space missions. No matter how wealthy, civilians should leave space travel to those with the experience and training. It is a safer and more efficient use of our space flight capabilities and resources. Millionaires hitching rides into space are on nothing more than glorified ego trips, and it is unreasonable to put so many and
so much at risk simply to appease a wealthy person in search of the ultimate thrill. Space exploration has only been in our world for approximately four decades. It is still in its early stages, and has a long way to go before it becomes safe and efficient enough to allow civilians to participate. Undoubtedly, this will change in the future, but for now, it is an endeavor best left to the professionals. In a May 2001 survey by Cosmiverse, 64% of those surveyed responded that they believed that civilians should be allowed to visit the ISS at some point in the future, but that they should not at this time. —LEE ANN PARADISE
Further Reading
Ashford, David, and Patrick Collins. Your Spaceflight Manual—How You Could Be a Tourist in Space within Twenty Years. Australia: Simon and Schuster, 1990.
Ask an Astronaut Archives: Byron Lichtenberg.
National Aeronautics and Space Administration. Astronaut Selection and Training. Washington, DC: National Aeronautics and Space Administration, 1990.
National Commission on Space. Pioneering the Space Frontier. New York: Bantam Books, 1986.
Santy, P. A. Choosing the Right Stuff: The Psychological Selection of Astronauts and Cosmonauts. Westport, CT: Praeger Publishers, 1994.
Tumlinson, Rick N. “Manifesto for the Frontier: A Call for a New American Space Agenda.” Space Frontier Foundation Online. March 16, 1995.
EARTH SCIENCE
Are we currently experiencing the largest mass extinction in Earth’s history?
Viewpoint: Yes, human impact is currently causing the greatest mass extinction in Earth’s history.
Viewpoint: No, several measures indicate that the current mass extinction, while severe and alarming, is not the largest in Earth’s history.
KEY TERMS
BACKGROUND EXTINCTION RATE: Normal rate of species extinction during times of relative stability.
BIODIVERSITY: The overall diversity of species in the biosphere. Many researchers believe biodiversity, and not the total number of species, is the true measure of the impact of an extinction event.
BIOSPHERE: The sum total of all life on Earth.
CASCADE EFFECT: Widening effect the removal of a key species has on other species that depend on it, or on its effects on the ecosystem, for survival.
EDGE EFFECT: Vulnerability to harmful outside effects of the perimeters of fragments of preserved ecosystems surrounded by land cleared by humans.
FRAGMENTATION: Clearing of natural ecosystems leaving small remnants, or fragments, intact.
ISLAND BIOGEOGRAPHY: Study of the survival of species in small islands, or remnants, of ecosystems.
MASS EXTINCTION: Catastrophic, widespread disruption in which major groups of species, generally 50% or more of the total number of species, become extinct in a relatively short period of time compared with the background extinction rate.
MORPHOLOGY: The shape, size, and other characteristic features of an organism.
OBSERVATIONAL BIAS: Any effect that influences the types of samples or measurements in a given observation. For example, performing an opinion poll on a college campus would preferentially sample young, college-age people, and thereby create a bias in the results.
Of all the species that have lived on Earth over the last 3 billion years, only about one in 1,000 is alive today. The rest became extinct, typically within 10 million years of their first planetary appearance, an extinction rate that has contributed to the planet’s current biodiversity level. Presumably, all the species alive today will experience the same fate within the next 10 million years or so, making way for our own successors. Mass extinctions—catastrophic widespread perturbations in which 50% or more of species become extinct in a relatively short period compared with the background extinction rate—happen planet-wide and affect a broad range of species on land and in the sea. Paleontologists have identified five large-scale extinctions in the fossil record. Such extinctions seem to be caused by the catastrophic impacts of agents such as asteroids or meteorites, or by terrestrial agents such as volcanic activity, sea level variations, global climate changes, and changing levels of ocean oxygen or salinity.
Earth today is experiencing a mass extinction—more than 11% of the 9,040 known bird species are endangered, 20% of known freshwater fish are extinct or endangered, and more than 680 of the 20,000 plant species in the United States are endangered, to cite some examples. What scientists do not know is whether extinction is a natural part of evolution, or a by-product of periodic catastrophes. Another fundamental question is whether the mass extinction now under way is the largest in Earth’s history. Those who believe it is blame a 6-billion-strong world population that consumes between 30 and 40% of the planet’s net primary production, the energy passed on by plants for the use of other life-forms. People also consume, divert for their own uses, or pollute 50% or more of Earth’s freshwater resources. Calculations of the rate of extinction now underway are based on the 1.4 million species that scientists estimate exist on Earth, and on two interrelated principles—loss of species through rain-forest destruction; and forest fragmentation, the survival of species in relatively small, restricted patches of ecosystem. In 1979, the British biologist Norman Myers estimated that 2% of the world’s rain forests were being destroyed annually. Initial estimates were that destruction of this magnitude was causing the extinction of between 17,000 and 100,000 species a year; or 833,000 to 4.9 million by 2050. This translates into species loss through the formula biologists use to determine rates of extinction: S = CA^z. As the American biologist Edward O. Wilson explains in The Diversity of Life, S is the number of species, C is a constant, A is the area of the fragment, and z is an exponent whose value varies with the organism and its habitat requirements. Wilson used this formula to calculate that the current extinction rate was 27,000 species a year. At that rate, within about two centuries Earth will have surpassed the percentage needed for a genuine mass extinction. Today, 6.5 times more species are becoming extinct in a time frame that is 27 times faster than the fastest estimates for the Permian mass extinction.
Those who do not think Earth is in the throes of history’s largest mass extinction explain that the fossil record is used to measure the severity of mass extinctions in several ways. The length of time it takes the extinction event to represent itself in the fossil record indicates the causative event’s severity and immediacy. The extinction’s duration, how quickly the biosphere adapts to the new conditions, is another measure of severity, as is the event magnitude, or total number of species affected. By several of these measures, the current mass extinction is clearly not the largest in Earth’s history. It is not occurring as suddenly as one brought on by an impact, such as the Cretaceous-Tertiary (K-T) extinction, and it has not affected as much of the biosphere as did the Permian-Triassic (P-T) extinction. History’s largest mass extinction in the fossil record was the event that defines the boundary between the Permian and Triassic periods, the P-T extinction, which occurred 250 million years ago and whose cause is still unknown. Researchers measuring the abundance of different marine species in the fossil record note a 90 to 95% reduction in the total number of marine species at that time.
Observational biases exist in current measurements and events inferred from the fossil record. The fossil record is the accumulation of hard-bodied creatures preserved and mineralized in sediment, so measuring extinction events is largely based on counts of marine species and does not necessarily reflect effects on other forms of life, like soft-bodied organisms and land-dwelling creatures. The fossil record says almost nothing about microorganisms, the most abundant type of life on Earth. In addition, researchers disagree about how to determine the number of species in the fossil record. A species is defined as a group of organisms that can interbreed freely under natural conditions, a difficult thing to test when examining mineralized shells. So researchers use morphological features such as shape and size to assign species.
The true result of a mass extinction—depletion of the planet’s total biodiversity—carries implications for the severity of the current event. As the fossil record shows, ancient extinctions affected large numbers of species and dramatically reduced the total number of living organisms. The current extinction of species may be reducing total biodiversity, thus reducing the ability of the biosphere to adapt and recover. —CHERYL PELLERIN
Viewpoint: Yes, human impact is currently causing the greatest mass extinction in Earth’s history.
As the American biologist Edward O. Wilson put it: “It is possible that intelligence in the wrong kind of species was foreordained to be a fatal combination for the biosphere. . . . Perhaps a law of evolution is that intelligence usually extinguishes itself.” In other words, what death and taxes are for individual humans, extinction is for species.
Under normal circumstances in nature, species become extinct as conditions change, and they are usually replaced by new species better adapted to the new conditions. However, since about 1800, the beginning of the exponential increase in the human population and its concomitant intrusion into and disruption of natural habitats around the world, the extinction of species has accelerated and spread. Today it is a worldwide phenomenon. The severity of the current extinction is a contentious issue, but this essay will show that our world is on the brink of a mass extinction of unprecedented proportions. The best way to document the severity of the current extinction crisis is to describe the normal process of extinction—what occurred during the greatest mass extinctions in the past—and compare that to what is happening today.
The Normal Extinction Processes
The normal rate of extinction, which occurs during times of relative stability, has been dubbed the background extinction rate by University of Chicago paleontologist David Raup. Raup has shown that this background rate of random extinction is generally very low. During the past 500 million years, the background extinction rate has been approximately one species every four years. However, this figure includes those species that disappeared during mass extinction events. If the rate is recalculated omitting mass extinctions, that is, including only those extinctions that occur during stable periods, the background extinction rate is even lower.
Research has shown that, over time, species diversity has remained quite stable. In fact, the history of life on Earth indicates that, in general and over time, the rate at which new species have evolved is slightly greater than the rate at which species disappear.
Malaysian rainforest is burned for agricultural and construction purposes. Such clearings impact enormous numbers of species. (Photograph by Sally A. Morgan. © Ecoscene/CORBIS. Reproduced by permission.)
Prior Mass Extinctions
A mass extinction is defined as a catastrophic, widespread perturbation in which a large number of species become extinct in a relatively short period of time compared with the background extinction rate. Generally, mass extinctions are defined as those in which 50% or more of species disappear. Earth has experienced five great mass extinctions (see table 1).
Permian Mass Extinction
The greatest extinction event the world has ever known was the event that defines the boundary between the Permian and Triassic periods, the P-T extinction, 245 million years ago. The exact causes of this catastrophe are unknown, though hotly debated. Some experts argue for shifting tectonic plates (which moved together to form the supercontinent Pangaea), a devastating collision from an asteroid, or changes in ocean salinity. But many scientists now suggest that the Permian extinction was caused by rapid and catastrophic global warming. This hypothesis states that numerous volcanic eruptions spewed enormous quantities of carbon dioxide (CO2) into the atmosphere. This powerful greenhouse gas caused the climate to warm, which warmed ocean waters, which in turn led to the release of CO2 held in ocean sediments into ocean waters. This release led to the collapse of marine life.
TABLE 1: PRIOR MASS EXTINCTIONS
Period       MYA*   Suspected cause                           What became extinct
Ordovician   440    climate change (?)                        ~50% species: marine invertebrates
Devonian     370    climate change (?)                        ~70% species: marine invertebrates
Permian      245    climate change (?); tectonic shifts (?)   ~70% species: terrestrial; ~90% species: marine
Triassic     210    unknown                                   ~44% species: marine
Cretaceous   65     comet collision (?)                       ~60% species: dinosaurs, marine
*mya: millions of years ago
TABLE 2: PERMIAN EXTINCTION, ASSUMING A LINEAR RATE OVER ONE MILLION YEARS
Time period       Land           Marine
1 million years   70% species    90% species
100,000 years     7% species     9% species

TABLE 3: PERMIAN EXTINCTION, ASSUMING THE MAJOR PULSE LASTED 50,000 YEARS
Time period       Land           Marine
50,000 years      70% species    90% species
5,000 years       7% species     9% species
Rates and Percentages. The P-T extinction is believed to have been preceded by several small extinction events, culminating in a major pulse of extinction. Many scientists believe that the major event occurred extremely rapidly—over a period of only one million years—a geological instant. During this one-million-year period, 70% of land species and 90% of marine species became extinct. To compare different extinction events more easily, it helps to analyze these figures for smaller time periods. Thus, assuming a linear rate of extinction, see table 2. Some scientists contend that within the one-million-year-long major pulse of extinction, the majority of species may have met their end during only a few tens of thousands of years,
perhaps in as little as 50,000 years. If this is true, the extinction rate during the most catastrophic period would be as shown in table 3. The Sixth Mass Extinction The sixth mass extinction—the one currently underway and the most catastrophic in history—exceeds the Permian extinction in both its rapidity and in the percentage of species lost.
People and Resources. The current extinction event started when humans began to move into and dominate all parts of the planet. Wherever human populations settled, biological diversity decreased. Since 1800, the human population has been increasing exponentially. It took the human population until about 1800 to reach the one-billion mark. After that, it took only 130 years for it to reach two billion, 30 years more to hit four billion, a mere 15 years more to reach five billion, and a measly 12 years to top six billion (the six-billionth human baby was born in 1999). Of course, people take up space, and they need food and water. At the current rate of population growth, human demands on Earth’s
basic resources are unsustainable. We also threaten the continued survival of other organisms. At a population of six billion, humans on Earth today consume between 30 and 40% of the total net primary production on the planet. Net primary production (NPP) is the amount of energy passed on by plants for other organisms to use (plants are at the base of nearly all food chains, they are the organisms on which all others depend for life). The 40% figure includes direct consumption (food, wood, fuel), as well as indirect consumption (land clearing and development, feed grown for livestock) of NPP by people. Humans have also consumed, or diverted for their own uses, or have polluted, or made unusable and thus unavailable to other organisms, a significant percentage of the world’s nonglacial, freshwater resources.
All in all, these figures indicate that human actions have dramatic consequences for all other life on Earth. Furthermore, the percentage of these crucial resources appropriated for human use is expected, inevitably, to skyrocket as the human population races toward 10 or 11 billion, at which time it may (or may not) stabilize.
How Many Species?
Before it can be determined whether we are in the throes of another mass extinction, it is necessary to know the number of species that currently exist. Herein lies a problem, because all biologists agree that we have named only a small fraction of the organisms on Earth. To date, scientists have discovered, named, and described approximately 1.4 million species.
A cloud forest in Costa Rica. (Photograph by Steve Kaufman. CORBIS. Reproduced by permission.)
The vast majority of the world’s species reside in its rain forests—the cradle of biodiversity on Earth. In fact, biologists agree that rain forests hold so many more as-yet-unnamed species that the total number of species on Earth is many times the number so far counted. This is especially true for the arthropods. For example, Terry Erwin, an entomologist at the Smithsonian Institution, has intensively studied insects living in the canopy of the Amazon rain forest in Peru and Brazil, and in rain forest in Panama. Erwin’s surveys turned up so many previously unknown insect species, often numerous new species in a single tree, that he estimated there are at least 30 million species of insects in the tropics alone. Further research has led scientists to adjust this estimate downward, to about 10 million insect species. Despite our lack of knowledge of species resident on coral reefs—the “rain forests of the sea”—most scientists accept that the number of species is likely between 5 and 30 million. A working estimate of 10 million species is generally agreed upon, and will be used here to illustrate the current problem. It is important, however, to point out that mass extinctions are based not on the actual number of organisms that die, but on the percentage of species that go extinct.
Current Extinction Patterns and Rates
Instantaneous Extinction. When a rare cloud
forest ecosystem in Ecuador was destroyed to make way for agriculture, 90 species of unique plants immediately became extinct. Instantaneous extinction is the rule when rare ecosystems, and their unique plants and animals, are destroyed. If, as described above and as many scientists believe, many rain-forest organisms have an extremely limited habitat (one tree, for example), instantaneous extinctions will contribute significantly to the overall extinction. Rain-Forest Destruction and Fragmentation. Rain forests contain the greatest number and diversity of species on Earth. Calculations of the rate of extinction currently underway are based primarily on two interrelated principles: first, the loss of species through rain-forest destruction; second, rain-forest fragmentation. These principles arose from the study of island biogeography, the survival of species in relatively small, restricted patches of ecosystem.
In 1979, the British biologist Norman Myers estimated that 2% of the world’s rain forests were being destroyed annually. His estimate was termed “alarmist” by extinction skeptics. Today, satellite imagery shows that his estimate might well be called quite accurate. In the 1970s, satellite pictures showed an annual rainforest loss of about 28,960 sq mi (75,000 sq km); by 1989, that figure had jumped to 54,830 sq mi (142,000 sq km), a loss of 1.8% per year. Initial estimates were that destruction of this magnitude was causing the extinction of between 17,000 and 100,000 species every year; or 833,000 to 4,900,000 species by 2050.
In some regions, patches of rain forest are spared and left standing to preserve the biodiversity they hold. These “preserve patches” vary in size, usually from 0.004 to 40 sq mi (0.010 to 1,000 sq km). Scientists have carefully studied the fate of species within different-sized preserves. They have found that the edge effect, the vulnerability of the boundaries to outside conditions, nullifies species protection near the preserve perimeters. Habitat deep within a large preserve may retain its essential characteristics, but perimeters bordered by cleared land are exposed to the deleterious effects of wind, low humidity, pesticides, and the intrusion of humans and other animals not native to the preserve. Scientific and satellite studies indicate that animals living within 0.5 mile (0.8 km) of a preserve perimeter are highly vulnerable to extinction. Fragments of ecosystems are essentially islands of habitat surrounded, not by the sea, but by land appropriated by people. Like islands in the ocean, the larger they are, the more species they can support. Likewise, the smaller they are, the greater the rate of extinction of those species that had once lived in the undisturbed habitat. For example, two preserve patches in Brazil were studied for 100 years. In SCIENCE
that time, 14% of the bird species in one patch of 5.4 sq mi (14 sq km) became extinct; in the other patch of 0.07 sq mi (0.2 sq km), 62% of bird species became extinct. Another factor in forest fragmentation is the loss of one or more key species. For example, peccaries, small pigs also known as javelinas, disappeared from one 0.40-sq-mi (1-sq-km) forest fragment in the Amazon. When the peccaries fled, presumably because the fragment was too small to support them, three species of frog became extinct—the frogs required the mud pools created by wallowing peccaries. So the loss of one key species often creates a cascade of extinction within a forest fragment. How does this translate into species loss?
Loss of Species. In The Diversity of Life, Edward O. Wilson explains the formula biologists use to determine rates of extinction based on “island” (habitat remnant) size. The formula is S = CA^z, where S is the number of species, A is the area of the fragment, and z is an exponent whose value varies depending on the organism and its habitat requirements (C is a constant). In nearly all cases, the value of z varies between 0.15 and 0.35. Because it is an exponent, the higher the value of z, the greater the reduction in the number of species. Wilson used this formula to calculate the current extinction rate, including only the most conservative numbers. He did not factor in the depredations of overharvesting or the lethally disruptive effects of invasive species. In his sample calculation, Wilson plugged into the formula the lowest z value of 0.15 and a low estimate of 10 million rain-forest species. He assumed that, for the purposes of this trial calculation, all 10 million species had large geographical ranges (eliminating instantaneous extinction). Finally, he added the 1.8% per year loss of rain forest to the formula. The result—optimistic because of the several low estimates used—indicated that each year, 27,000 species become extinct. That means 74 species a day; 3 species an hour. Although this estimate rests on several assumptions, it is nevertheless a clear and alarming signal.
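The arithmetic behind Wilson's figure can be reproduced directly from the species-area relationship. The short Python sketch below is an added illustration, not part of Wilson's text; the function and variable names are ours, the inputs are the conservative values quoted above (10 million rain-forest species, 1.8% annual forest loss, z = 0.15), and the constant C cancels out because only the fraction of species lost is needed.

```python
def fraction_of_species_lost(area_fraction_remaining, z):
    """Species-area relationship S = C * A**z.
    If habitat shrinks to a fraction A'/A of its original area, the surviving
    fraction of species is (A'/A)**z, so the lost fraction is 1 - (A'/A)**z.
    The constant C cancels out of the ratio."""
    return 1.0 - area_fraction_remaining ** z

# Conservative inputs quoted in the text (Wilson's sample calculation):
TOTAL_SPECIES = 10_000_000     # working estimate of rain-forest species
ANNUAL_FOREST_LOSS = 0.018     # 1.8% of rain forest cleared per year
Z = 0.15                       # lowest typical value of the exponent z

lost_per_year = fraction_of_species_lost(1.0 - ANNUAL_FOREST_LOSS, Z) * TOTAL_SPECIES
print(f"Estimated extinctions per year: {lost_per_year:,.0f}")     # roughly 27,000

# With z near the upper end (about 0.30), a 10-fold loss of habitat
# eventually eliminates roughly half of the resident species:
print(f"Species lost after 90% habitat loss (z = 0.30): "
      f"{fraction_of_species_lost(0.10, 0.30):.0%}")               # about 50%
```

Run as written, the sketch yields an estimate on the order of 27,000 extinctions per year, matching the figure quoted above; raising z toward 0.35, or dropping the assumption of large geographical ranges, only increases the estimate.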
Now and Then
How does this rate compare with the mass extinction rate during the Permian? A die-off event is considered a mass extinction when 50% or more of species become extinct in a relatively short geological time span. If 27,000 species are lost per year in a world containing 10 million species, approximately 3 out of every 1,000 species (0.27%) are lost each year. The background extinction rate for previous eras, assuming 10 million species, has been calculated at between 1 out of every 1,000,000 and 1 out of every 10,000,000 species each year. Thus, the current extinction rate may be as much as 30,000 times higher than background. Based on Raup’s background-extinction estimate of one species becoming extinct every four years, the current rate of extinction is 108,000 times higher than the background extinction rate.
A rate of 27,000 species extinctions per year means that 0.27% of species are lost annually. It is not hard to figure out that, at that rate, and assuming a linear relationship, in less than 200 years we will have met and surpassed the percentage needed for a true mass extinction. The extremely rapid extinction rate during the Permian pales in comparison with the current rate of extinction (see table 4). As this astonishing (and alarming) comparison shows, 6.5 times more species today are becoming extinct in a time frame that is 27 times faster than the fastest estimates for the Permian mass extinction. If the Permian extinction occurred over a one-million-year period, the current extinction rate becomes even more catastrophic.
Of course, the 0.27% per year loss will not (hopefully) continue indefinitely until every last living thing on Earth is gone. But island biogeography has confirmed that for every 10-fold decrease in rain-forest habitat, 50% of its resident species will go extinct: some instantly, some over a period of time, perhaps decades or centuries. The process has already begun, and people show no inclination to stop it, as seen in the minuscule areas of rain forest preserved: 4% in Africa, 2% in Latin America, and 6% in Asia. Worse, the island biogeography estimates are conservative. Because of the extremely limited range of some rain-forest species, when 90% of a rain forest is destroyed, the result is not simply a percentage reduction in the populations of resident species. The result is the immediate extinction of some localized species, and the gradual decline and eventual extinction of others.
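The comparisons in the preceding paragraphs can be checked with a few lines of arithmetic. The Python below is another added illustration that uses only figures already given in this essay; the variable names are ours, and the two background rates are treated as per-year values, which is the reading that makes the quoted multiples work out.

```python
# Cross-check of the rate comparisons quoted in the text (illustrative only).
CURRENT_LOSS_PER_YEAR = 27_000      # species per year (Wilson's estimate)
TOTAL_SPECIES = 10_000_000          # working estimate of existing species

annual_fraction = CURRENT_LOSS_PER_YEAR / TOTAL_SPECIES       # 0.0027, i.e. 0.27% per year
years_to_half = 0.5 / annual_fraction                         # ~185 years to a 50% loss (linear)

raup_background = 1 / 4                                       # Raup: ~1 species every 4 years
ratio_vs_raup = CURRENT_LOSS_PER_YEAR / raup_background       # ~108,000 times background

fractional_background = 1e-7                                  # 1 in 10,000,000 species per year
ratio_vs_fraction = annual_fraction / fractional_background   # ~27,000 ("as much as 30,000") times

print(f"Annual loss: {annual_fraction:.2%}; about {years_to_half:.0f} years to a 50% loss")
print(f"{ratio_vs_raup:,.0f}x Raup's rate; {ratio_vs_fraction:,.0f}x the fractional background rate")
```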
Even among known species, documented extinctions today are occurring far more rapidly than in the past. In his study of bird extinction in the rain forests of Hawaii, Stuart Pimm, professor of ecology at the Center for Environmental Research and Conservation at Columbia University, reports an extinction rate of one bird species per year—four times the background rate. Of 135 native Hawaiian birds, only 11 species are thriving in numbers that ensure their survival to 2100. Globally, in addition to those already extinct, at least 11% of all bird species are now critically endangered.
TABLE 4: MASS EXTINCTION RATES
           Time period    Extinction
Permian    5,000 years    8% of species (avg. land + marine)
Current    185 years      51% of species (overall)
Some experts predict that, at current rates of destruction, 90% of the world’s rain forests will be gone in a century; the remainder will likely be patches incapable of supporting diverse
species. The combination of total rain-forest destruction and extinction due to edge effect and fragmentation will, many scientists believe, result in the extinction of about 50% of Earth’s species in the next 100 years. Today, one in eight plant species is at risk of disappearing, and some are so far gone on the road to extinction that they are not expected to recover. These plant extinctions are particularly worrying. The Cretaceous mass extinction may have wiped out the dinosaurs, but most plant species were spared. The sixth extinction is taking both plants and animals. Although habitat in temperate zones is also being destroyed—gobbled up by development of one sort or another—the devastation of tropical rain forests puts us squarely in the midst of the greatest and most rapid mass extinction ever seen on Earth. As Stuart Pimm has said, “The sixth extinction is not happening because of some external force. It is happening because of us. . . . We must ask ourselves if this is really what we want to do. . . .” —NATALIE GOLDSTEIN
Viewpoint: No, several measures indicate that the current mass extinction, while severe and alarming, is not the largest in Earth’s history.
Compelling evidence exists for mass extinctions throughout Earth’s history. At several points in the fossil record, researchers have observed severe reductions in the total numbers and diversity of species. However, the cause of these historical extinctions often remains unclear. Some appear to have been caused by meteorite impacts, others by extensive volcanism or dramatic changes in climate. Others are mysterious, with the cause completely unknown. For one mass extinction, however, the cause is all too clear.
Father-and-son research team Luis and Walter Alvarez were the first to postulate that a giant asteroid’s collision with Earth was responsible for the Cretaceous-Tertiary extinction, which wiped out the dinosaurs. (Photograph by Roger Ressmeyer. CORBIS. Reproduced by permission.)
From the time humans began to spread across the globe over 100,000 years ago, they began to fiercely compete with other species. In historical times, humans have accelerated their impact on the natural world so much so that many believe humans are causing the extinction of large numbers of species at an unprecedented rate. To examine that claim, scientists study the fossil record.
The fossil record indicates that the severity of mass extinctions can be measured in a number of ways. The time it takes for the extinction event to represent itself in the fossil record indicates the severity, and immediacy, of the event that caused it. For example, a mass extinction caused by the impact of a giant meteorite occurs much faster than one caused by climate change, and the biosphere’s ability to adapt can be stressed if the extinction event is too rapid. The duration of the extinction event is also an indicator of severity. This duration is essentially a measure of how quickly the biosphere can adapt or evolve to the new conditions. Finally, the magnitude of the event, or the total number of species affected, also serves as a measure of severity.
By several of these measures, the current mass extinction, despite its intensity, is nowhere near the largest in Earth’s history. It has not been brought on as suddenly as the Cretaceous-Tertiary, or K-T, extinction (Kreide is the German word for Cretaceous), nor has it affected as much of the biosphere as the Permian-Triassic, or P-T, extinction. In addition, observational
biases exist in both the current measurements and the events inferred from the fossil record. These biases affect the interpretation of the fossil record, and the policies instituted in response to the current mass extinction. Finally, the true result of a mass extinction—the depletion of the total biodiversity of Earth—carries implications for the severity of the current event. As the fossil record shows, the ancient extinctions affected large numbers of species and dramatically reduced the number of living organisms. The current onslaught on Earth’s species may be resulting in a total reduction of biodiversity, reducing the ability of the biosphere to adapt and recover.
Earth’s Current Mass Extinction
Is Earth currently experiencing a mass extinction? Of this there is little doubt. According to the International Council on Bird Preservation, over 11% of the 9,040 known bird species are endangered, some 20% of known freshwater fish in the world are extinct or seriously endangered, and, according to the Center for Plant Conservation, over 680 of the 20,000 plant species in the United States are endangered. Considered along with an exploding human population, unprecedented resource exploitation, and the destruction of wild habitat, the picture becomes grim indeed.
However, these are just a few direct observations. Most of the species of flora and fauna on Earth are largely unknown. Estimates of global
THE CRETACEOUS EXTINCTION There is no question that the most “popular” mass extinction—the one that fires the human imagination—occurred during the Cretaceous period, when the dinosaurs breathed their last. The Cretaceous extinction was the line that separated the “Age of the Dinosaurs” from the “Age of the Mammals.” At the end of the Cretaceous period, and by the beginning of the Tertiary, 60% of the world’s species disappeared. Although the dinosaurs were, by far, the most notable casualties, many more marine species became extinct during this event. Experts have furiously debated the cause or causes of the Cretaceous extinction for decades. There are two major hypotheses.
INTRINSIC GRADUALISM This hypothesis states that changes on Earth (intrinsic changes) caused the extinction. These changes would have taken place over millions of years. Volcanic eruptions, which increased greatly during the Cretaceous, may have put enough dust into the air to cause a global cooling of the climate. Also at this time, Earth’s tectonic plates were in flux, causing oceans to recede from the land. As the oceans retreated, their mitigating effect on the climate would have been reduced and the climate would have become less mild. This too would have happened gradually, over millions of years.
EXTRINSIC CATASTROPHISM
This hypothesis, put forth by Luis and Walter Alvarez and others at the University of California, Berkeley, stipulates that an enormous extraterrestrial object—perhaps a meteor or a comet—collided with Earth at the end of the Cretaceous. The impact would have been great enough to send up into the atmosphere a huge cloud of dust, sufficient to cool the global climate for years. A huge crater discovered at Chicxulub, on the Yucatan Peninsula in Mexico, was found to have features that identified it as the most likely collision site. Other evidence supporting the extrinsic catastrophism hypothesis has emerged. Shocked quartz, which is formed during extremely violent earth tremors, has been found in rock of the Cretaceous period, as has a layer of soot in some rocks. The soot layer is indicative of widespread firestorms that would have raged over parts of Earth after an impact. Proponents of each hypothesis are still discussing the subject; to date, and for the foreseeable future, no definitive answer has emerged. Too many questions remain unanswered by both hypotheses: Why did some species die out and others survive? Why did more marine species disappear than terrestrial species? Can climate change really account for the selectivity of the extinctions? Some scientists are now studying a complex of causes for the extinction, integrating one or more intrinsic events and the extrinsic impact event. —Cheryl Pellerin
However, scientists and conservation groups are still struggling to directly measure the effect of humans on the biosphere. Most species on Earth are unnamed and their habits poorly understood, making a direct assessment difficult. Also, actual extinction events are local, isolated events and receive little attention, except in the case of high-profile species such as the passenger pigeon. These biases lead many policy makers to ignore the current extinction,
or claim the event is “natural.” One look at the fossil record, however, reveals what a natural event really looks like. The Largest Mass Extinctions As serious as the current extinction seems, it is not the largest. Examining the fossil record, two major extinctions overshadow all others. Approximately 65 million years ago, at the boundary of the Cretaceous and Tertiary periods (the K-T boundary), the fossil record indicates that over 85% of all species suddenly went extinct. This event destroyed the dinosaurs and cleared the evolutionary field for small mammals to eventually evolve into humans. Some of the most abundant species in the fossil record—clamlike brachiopods and mollusks, as well as echinoids SCIENCE
such as sea stars and sea urchins—were severely affected. The K-T extinction event altered the entire character of life on Earth. The cause of the K-T extinction remained a mystery until 1980, when researchers Luis and Walter Alvarez, Frank Asaro, and Helen Michel at the University of California, Berkeley, located a small layer of iridium in layers of Earth’s crust that had been deposited during the boundary (the time of the change from one period to another) between the Cretaceous period and the Tertiary period. Iridium, although uncommon in Earth’s crust, is common in meteorites. The group advanced the theory that the boundary, along with the extinction event, resulted from an impact with a giant meteorite. This impact raised enough dust to cover Earth and blot out the Sun for several years, destroying many plants and the animals that depended on them. As Earth’s climate recovered, severe weather patterns further stressed the biosphere. Additional studies supported this theory. The K-T extinction event was severe in two ways. First, the nature of the impact caused an immediate change in global conditions. The dust drifting into the atmosphere reduced the amount of sunlight, and therefore the energy, available to plants on land and in the ocean. Organisms relying on the plants died out, and the predators of those organisms were subsequently affected. This precipitated a collapse of the planet’s food web, and a large, rapid extinction
event. Second, in total numbers, the K-T event stands out as one of the largest in Earth’s history. But the K-T event is not the largest in the fossil record. That title goes to the extinction event that defines the boundary between the Permian and Triassic periods. The P-T extinction occurred approximately 250 million years ago. Researchers measuring the abundance of different marine species in the fossil record note a 90 to 95% reduction in the total number of marine species at this time. In sheer numbers, the P-T event is by far the largest extinction in the history of Earth. The cause of the P-T extinction is for the most part unknown. It was not as sudden as the K-T event, indicating that something other than a giant impact caused the extinction. Several possibilities exist. Global glaciations caused by dramatic climate change could have forced many species to extinction. Glaciations may have been more localized to the poles, causing a reduction in the overall sea levels and reducing marine habitat. It is also possible that the area of the continental shelf, a favorite habitat of sea creatures, was greatly reduced during the formation of the supercontinent Pangaea. It has also been suggested that an increase in global volcanic events could have triggered a climate change and caused the extinction. Whatever the reason, the P-T extinction, while not as sudden as the K-T, was certainly more severe in terms of the number of species affected.
By several measures, the current mass extinction is clearly not the largest in Earth’s history. The Permian-Triassic extinction, which may have been caused by extensive volcanic eruptions, was the largest in Earth’s history. (Photograph by Roger Ressmeyer. CORBIS. Reproduced by permission.)
Measuring the Severity: Observational Biases It is not a simple matter to measure the effects of humans on the current biosphere. Living creatures are difficult to track, and despite the best efforts of researchers, estimates of the current mass-extinction rate must be made by inferences from the destruction of habitats such as rain forests. But what are some of the unique biases in the fossil record?
The fossil record consists of the accumulation of organisms that have been preserved and mineralized in sediment. This means that creatures with hard shells and skeletons in marine or shallow-water environments are preferentially preserved. The measurement of extinction events is therefore based largely on counts of fossils of marine species, and does not necessarily reflect the effects of such events on plants, soft-bodied organisms, and land-dwelling creatures. Furthermore, the fossil record says almost nothing about microorganisms, by far the most abundant type of life on Earth. In addition, researchers disagree on how to determine the number of species in the fossil record. A species is defined as a group of organisms able to interbreed freely under natural conditions, a difficult feature to test when examining a collection of mineralized shells. Therefore, researchers use morphological features such as the shape and size of an organism to assign species. Still, the fossil record has one distinct advantage over present-day methods for determining extinction rates. By studying the layered sediment of an ancient sea floor, field researchers can distinguish when an extinction occurred. At one level there exists abundant life; in the next, it is severely depleted. Only by taking careful records for hundreds of years could today’s researchers record such an extinction event happening now, and by then it could be too late.
The Real Problem: The Question of Biodiversity Whatever the biases in the fossil record, or the difficulty in determining the rate of current extinctions, the true measure of the severity of extinction is the change in biodiversity. Biodiversity is the overall diversity of species in the biosphere. What were the effects of other mass extinctions on biodiversity? Again, ignoring the observational bias in the fossil record, it seems that biodiversity was seriously reduced by both the K-T and the P-T extinction events. In both cases, the elimination of species opened niches for subsequent explosions of new species. The brachiopods, marine animals that looked like clams, fared poorly in the P-T extinction, but other organisms subsequently prospered. Similarly, the K-T extinction made the evolution of mammals, including humans, possible. Mammals were present long before the K-T extinction, but the K-T event cleared the way for them to evolve into previously unoccupied niches. However, biodiversity took 100 million years to recover from the P-T extinction, and over 20 million years to recover from the K-T, certainly a very long time by human standards.
Naturalists and ecologists agree that fostering the biodiversity of Earth is important for a healthy ecosystem. However, human activity is taking biodiversity in the opposite direction, removing varieties of species and replacing them with more of the same types of species. For example, the diversity of agricultural products has declined as humans rely on more limited, higher-yield crops. This is in addition to the endangerment of large numbers of other wild plants and animals. Biodiversity is needed to provide the genetic diversity for evolution to prosper. The American biologist Edward O. Wilson, author of The Diversity of Life, suggests that the current level of extinction is removing species at such a rate that biodiversity on Earth is seriously affected. As Wilson indicates, if the recovery from the current mass extinctions takes as long as for other mass extinctions in the fossil record, humans as a species may not live to see it. Clearly, regardless of the impact human activity has on biodiversity, the biosphere, given enough time, will recover. Life on Earth has survived volcanoes, advancing ice sheets, catastrophic impacts, and a host of other life-threatening events. Whatever the results of the human effects on the biosphere, life, eventually, will go on. However, as researchers like Wilson are quick to point out, the reduction in biodiversity may result in the extinction of at least one more very important species. Combating the effects of the current extinction thus becomes not only a fight for the biosphere, but for the survival of Homo sapiens. —JOHN ARMSTRONG
Further Reading
Ehrlich, Paul. Extinction. New York: Random House, 1982.
Eldredge, Niles. Life in the Balance: Humanity and the Biodiversity Crisis. Princeton, N.J.: Princeton University Press, 1998.
Erwin, Douglas. The Great Paleozoic Crisis: Life and Death in the Permian. New York: Columbia University Press, 1993.
———. “The Mother of Mass Extinctions.” Scientific American (July 1996): 72–8.
Lawton, John. Extinction Rates. New York: Oxford University Press, 1995.
Leakey, Richard, and Roger Lewin. The Sixth Extinction: Patterns of Life and the Future of Humankind. New York: Doubleday, 1996.
Morell, Virginia. “The Sixth Extinction.” National Geographic (February 1999): 42–59.
Ward, Peter D. On Methuselah’s Trail: Living Fossils and the Great Extinctions. New York: W. H. Freeman and Company, 1993.
———. Rivers in Time: The Search for Clues to Earth’s Mass Extinctions. New York: Columbia University Press, 2001.
———. The End of Evolution: On Mass Extinctions and the Preservation of Biodiversity. New York: Bantam Books, 1994.
Wilson, Edward O. The Diversity of Life. New York: W. W. Norton, 1999.
Are current U.S. drinking water standards sufficient?
Viewpoint: Yes, while not perfect, current U.S. drinking water standards are sufficient, and new government regulations continue to improve these standards. Viewpoint: No, current U.S. drinking water standards are not sufficient and must be improved to ensure public health.
Every day the average U.S. citizen uses 100 gal (380 l) of water; every year, the average household uses 100,000 gal (380,000 l) of water; and every day Americans drink more than one billion glasses of water. All of this water comes from many sources and, because different treatment methods are used, the quality of drinking water varies throughout the country. At the beginning of the twenty-first century, water experts consider U.S. drinking water to be generally safe. But during the nineteenth century, diseases like cholera and typhoid could be traced to the U.S. water supply, so in 1914 the U.S. Public Health Service set standards for drinking water and has continued improving on antibacteriological water technologies ever since. Most people agree that U.S. public drinking water supplies are probably among the world’s most reliable, but not everyone thinks current U.S. drinking water standards are sufficient. Those who think drinking water standards are sufficient cite statistics offered by the National Environmental Education and Training Foundation: 91% of U.S. public water systems reported no violations of health-based drinking water standards in 1998–1999, and almost all of the reported violations in that period involved reporting and monitoring requirements rather than water-standards health violations. They also cite the National Centers for Disease Control and Prevention, which said the proportion of reported diseases linked to problems at public water treatment systems declined from 73% in 1989–1990 to 30% in 1995–1996. This decline was a direct result of improved water treatment and water treatment technology. The Environmental Protection Agency (EPA) Office of Ground Water and Drinking Water administers the 1974 Safe Drinking Water Act (SDWA), which made drastic advances in enforcing voluntary Public Health Service standards by requiring the EPA to set national uniform water standards for drinking water contaminants. The EPA drinking water standards are called maximum contaminant levels (MCLs) and apply to private and public systems that serve 25 people or 15 homes or businesses for at least 60 days per year. MCLs are established to protect the public health based on known or anticipated health problems, the ability of various technologies to remove the contaminant, their effectiveness, and cost of treatment. In 1986 Congress strengthened the SDWA by enacting amendments that increased regulations on 83 contaminants in three years and for 25 more contaminants every three years thereafter. Since the mid-1970s a network of governmental agencies has been developed to act on the SDWA. In 2002, the nation’s 55,000 U.S. community water systems tested for more than 80 contaminants.
Those who don’t believe current U.S. drinking water standards are sufficient cite a case involving the source of carcinogens in the 1970s leukemia cluster in Woburn, Massachusetts, which was the basis for Jonathan Harr’s book A Civil Action. In the Woburn case, illegally dumped trichloroethylene and other possible carcinogens seeped into two city wells. Although water standards were not the issue, the story has lessons for authorities who make cost/benefit decisions when setting standards.
Some argue that America’s tap water system is not as safe as it should be, due to insufficient safety standards. (© Japack Company/CORBIS. Reproduced by permission.)
The EPA has been considering more stringent regulation of arsenic in water, but says that a significant reduction in the MCL allowed by law could increase compliance costs for water utilities. The MCL for arsenic in drinking water is 50 parts per billion (ppb) or 50 micrograms per liter (µg/l); in 1998 the EPA estimated that arsenic occurrence in groundwater exceeded that limit in 93 U.S. public drinking water systems. The EPA issued a 10 ppb (µg/l) regulation to begin in January 2001, but Christine Todd Whitman, the EPA administrator, suspended implementation of the new standard in March 2001 to allow for further review. After considerable public outcry and the publication of a National Research Council (NRC) report that spelled out a significant cancer risk, the government pledged to issue a new standard by February 2002. In the meantime, how many people were adversely affected? The NRC issued another report in September 2001 noting that even very low concentrations of arsenic in drinking water were associated with a higher incidence of cancer. The report found that people who consume water with 3 ppb (µg/l) of arsenic daily have a 1 in 1,000 risk of developing bladder or lung cancer during their lifetime. At 10 ppb (µg/l) the risk is more than 3 in 1,000; at 20 ppb (µg/l) it is 7 in 1,000, based on consumption of 1 qt (1 l) of water per day. Arsenic is not the only contested standard. Methyl tert-butyl ether (MTBE) is a common gasoline additive used throughout the United States to reduce carbon monoxide and ozone levels caused by auto emissions. Now MTBE is contaminating ground and surface water supplies from leaking underground storage tanks and pipelines, spills, emissions from marine engines into lakes and reservoirs, and to some extent from air contaminated by engine exhaust. MTBE falls under the EPA’s Unregulated Contaminant Monitoring Rule, and its long-term health risk is still being studied.
Lead is another contaminant that is hazardous to infants, children, and adults. The MCL for lead is 15 ppb (µg/l), and the goal is zero exposure. Much of the exposure to lead in drinking water comes from household plumbing, not from an EPA-regulated external water source. With a long demonstrated risk of delayed physical and mental development in children exposed to lead in drinking water, there should be compelling regulations to end this preventable exposure.
Safeguards built into the U.S. legislative process to protect citizens from hasty and oppressive laws make it difficult to provide new drinking water standards and to enforce them in a reasonable timeframe, even when scientific evidence calls for change. According to a timeline published by the EPA, it takes eight years from the date a contaminant candidate is put on the first list to the date when regulatory determinations are announced. —CHERYL PELLERIN
Viewpoint: Yes, while not perfect, current U.S. drinking water standards are sufficient, and new government regulations continue to improve these standards.
One Billion Glasses of Water Water flows naturally throughout the world. It is an essential component of all life on Earth. Over many years mankind has learned to divert the flow of water to serve desired purposes such as drinking. In the United States safe drinking (or tap) water is critical for maintaining the public health. Americans use millions of gallons of water each day in homes, industries, farms, and countless community activities. According to the Environmental Protection Agency (EPA), the average U.S. citizen uses about 100 gal (380 l) of water each day, and the average household uses around 100,000 gal (380,000 l) of water each year. Americans drink more than one billion glasses of water every day. Because water is accessed from many different sources and various methods are used to treat water, the quality of drinking water varies throughout the country. But, even with all of these inconsistencies, according to the EPA’s Office of Water, more than 90% of U.S. water systems meet federal standards for tap water quality. Water experts consider the drinking water of the United States to be generally safe. In fact, according to Mary Tiemann of the Environment and Natural Resources Policy Division within the Congressional Research Service, “When compared to other nations, the United States is believed to have some of the safest drinking water in the world.” The following discussion spotlights some of the more important reasons why the U.S. water supply is safe and current drinking water standards are sufficient.
KEY TERMS
ADSORPTION: The process in which organic contaminants and color-, taste-, and odor-causing compounds are adhered to the surface of granular or powdered activated carbon or other high-surface-area material in order to be removed from drinking water.
AQUIFER: Bodies of rock that are capable of containing and transmitting groundwater.
CARCINOGEN: A cancer-causing agent such as trichloroethylene.
CHLORINATION: The process to treat or combine with chlorine or a chlorine compound.
CHOLERA: An often fatal, infectious disease caused by the microorganism Vibrio comma.
DISINFECTION: The process in which water is decontaminated before entering the distribution system in order to ensure that dangerous microbes are killed. Chlorine, chloramines, or chlorine dioxide are often used because they are very effective disinfectants.
FILTRATION: The process used by many water treatment facilities to remove remaining particles from the water supply. Those remaining particles include clays and silts, natural organic matter, precipitants (from other treatment processes in the facility), iron and manganese, and microorganisms. Filtration clarifies water and enhances the effectiveness of disinfection.
FLOCCULATION: The water treatment process that combines small particles into larger particles, which then settle out of the water as sediment. Alum and iron salts or synthetic organic polymers (alone, or in combination with metal salts) are generally used to promote particle combination.
INORGANIC CHEMICAL: An element, ion, or compound that does not contain bonded carbon.
ION EXCHANGE: The process used to remove inorganic constituents when such contaminants cannot be removed adequately by filtration or sedimentation. Ion exchange can be used to treat water that is rich in calcium and magnesium salts (commonly called hard water). It also can be used to remove arsenic, chromium, excess fluoride, nitrates, radium, and uranium.
ORGANIC CHEMICAL: Any compound that contains bonded carbon. The compound can be man-made or naturally occurring.
OZONATION: The process that is used to treat or impregnate water (or other substances) with ozone, a colorless, gaseous variation of oxygen (that is, ozone possesses three atoms rather than the usual two atoms in oxygen).
PUBLIC WATER SYSTEMS: As defined by the Environmental Protection Agency, a system that delivers water for human consumption if such a system has at least 15 service connections or regularly serves at least 25 individuals 60 or more days out of the year. Such systems include municipal water companies, homeowner associations, schools, businesses, campgrounds, and shopping malls.
RADIONUCLIDE: A radioactive element such as radium or an elementary particle such as an alpha or beta particle.
REVERSE OSMOSIS FILTRATION: The process by which pressure is used to force fresh water through a thin membrane in order to remove undesired minerals from the water, based on their inability to pass through the membrane.
SEDIMENTATION: A gravity-based process that removes heavier, solid particles from water.
TYPHOID: A highly infectious disease, commonly called typhoid fever, which is caused by the typhoid bacillus (Salmonella typhosa). It is normally transmitted by contaminated food or water.
Early Water Movements During the nineteenth century, diseases such as cholera and typhoid could be traced to the U.S. water supply. In response to such problems, major public health movements were instituted from that time to the early twentieth century. Because of those early efforts, the debilitating diseases cited above have been virtually erased through effective water disinfection and improved sanitary engineering practices. Federal regulations of drinking water quality began in earnest in 1914 when the U.S. Public Health Service set standards for drinking water. Since that time, the United States has continued to improve on antibacteriological water technologies that provide safe water throughout the country. Erik D. Olson, senior attorney at the Natural Resources Defense Council, a national nonprofit public-interest organization, is confident that these public health movements yielded, and continue to yield, enormous public health benefits. Current Water Quality According to the EPA’s Department of Water (DOW), the United States enjoys one of the best supplies of drinking water in the world. The National Environmental Education and Training Foundation (NEETF) concurs with the DOW and has stated, “Most drinking water in the United States is quite safe to drink.” NEETF also has asserted that community water suppliers deliver high-quality drinking water to millions of Americans every day. In fact, of the more than 55,000 community water systems in the United States, the DOW indicated in 1996 that only 4,769 systems (about 8.7%) reported a violation of one or more drinking water health standards. According to the NEETF, 91% of America’s public water systems reported no violations in 1998–1999 of any health-based drinking water standard, such as failures of water treatment and actual contaminants in the drinking water. More importantly, almost all of the reported violations during this time period involved reporting and monitoring requirements rather than health-based violations. In addition, owners of drinking water systems have spent hundreds of
billions of dollars to build state-of-the-art drinking water treatment and distribution systems and, as a group, they continue to spend an additional $22 billion per year to operate and maintain these systems. The quality of drinking water has been improving over the past 10 decades in large part due to governmental regulations. According to the National Centers for Disease Control and Prevention (CDC), the proportion of reported diseases that have been linked to problems at public water treatment systems has consistently declined from 1989 to 1996. This improvement, according to the CDC, directly relates to improvements in water treatment and water treatment technology, all of which are overseen by federal legislation. The Laws Contamination of water supplies gained the attention (and the concern) of the U.S. public in the early 1970s and, soon after, the political action of the U.S. Congress. This increased awareness promptly led to the passage of several federal health laws, one of the most important being the Safe Drinking Water Act (SDWA) of 1974. The EPA’s Office of Ground Water and Drinking Water administers this law, along with its subsequent amendments. The act made drastic advancements to effectively enforce numerous voluntary U.S. Public Health Service standards by requiring the EPA to set national uniform water standards for drinking water contaminants. The SDWA provided for (1) regulations that specified maximum contaminant levels or treatment techniques, (2) regulations for injection-control methods to protect underground water sources, and (3) grants for state programs involving groundwater and aquifer protection projects.
Maximum contaminant levels (MCLs), the official name for EPA drinking water standards (or regulations), are applied to both private and public systems that serve at least 25 people or 15 service connections (such as homes and businesses) for at least 60 days during the year. By the year 2001, the EPA estimated that across the country about 170,000 public water systems serving 250 million people were regulated under the act. These MCLs are part of the multiplebarrier method for protecting drinking water. The major aspects of MCLs include (1) assessing and protecting drinking water sources, (2) protecting wells and other collection systems, (3) making sure water is treated by qualified and regulated operators, (4) ensuring the integrity of distribution systems, and (5) making information available to the public concerning the quality of their drinking water. For example, an MCL was established in 1975 for arsenic exposure in water at a level no greater than 50 parts per billion (ppb) or 50 micrograms per liter (µg/l).
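The size and duration thresholds quoted above lend themselves to a simple check. The sketch below is illustrative only: the function name and interface are hypothetical, not part of any EPA tool, but the numeric cutoffs (25 people or 15 service connections, for at least 60 days a year) come straight from the definition in the text.

```python
# Illustrative sketch: encodes the coverage thresholds described above.
# The function name and interface are hypothetical, not an EPA API.

def is_regulated_water_system(people_served: int,
                              service_connections: int,
                              days_operated_per_year: int) -> bool:
    """Return True if a system meets the size and duration thresholds for MCL coverage."""
    serves_enough = people_served >= 25 or service_connections >= 15
    operates_long_enough = days_operated_per_year >= 60
    return serves_enough and operates_long_enough

# Example: a campground serving 40 people for 90 days a year would be covered.
print(is_regulated_water_system(people_served=40,
                                service_connections=5,
                                days_operated_per_year=90))  # True
```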
Exposure to naturally occurring and industrially produced arsenic in water is a contamination problem for the majority of U.S. citizens. In January 2001, the EPA announced a new standard for arsenic in drinking water that requires public water supplies to reduce arsenic to 10 ppb (µg/l) by 2006. With the involvement of the EPA, state governments, water utilities, communities, and citizen groups, these methods ensure that tap water in the United States is safe to drink. In 1986 Congress strengthened the SDWA by enacting several major amendments that strengthened standard procedures for 83 contaminants. Congress also strengthened and further expanded the act’s requirements on compliance, monitoring, and enforcement, especially emphasizing the need for more research on contaminants that are most dangerous to children and other susceptible people. The new law also focused on the public’s rightto-know (and need-to-know) about tap water, the necessity of federal financial assistance to water treatment facilities, and the desire to give individual states greater flexibility in dealing with water-quality issues. Since the mid-1970s a network of governmental agencies has been developed to act on the SDWA. The EPA and state governments set and enforce standards, while local governments and private water suppliers possess direct responsibility for the quality of drinking water. Engineers of community water systems test and treat water, maintain the distribution systems that deliver water to consumers, and report on their water quality to the state. States and the EPA provide technical assistance to water suppliers and take legal action against systems that fail to provide water that meets state and federal standards. Some of the agencies, such as state departments of environmental protection, choose to enforce standards that are stricter than EPA standards; all state laws must be at least as stringent as the federal standards.
MCLs are established to protect the public health based on known or anticipated health problems, the ability of various technologies to remove the contaminant, their effectiveness, and the cost of treatment. The limit for many substances is based on lifetime exposure and, for the most part, short-term exposures above the limit are not considered a health risk (unless they pose an immediate threat). In 2002, the nation’s approximately 55,000 community water systems needed to test for more than 80 contaminants. In 1996, 4,151 systems (about 7.5%) reported one or more MCL violations, and 681 systems (less than 1.3%) reported violations of treatment technique standards. All in all, the system for setting water standards has prompted a consistent level of quality water.
A water treatment plant in Florida. (Photograph by Alan Towse. © Ecoscene/CORBIS. Reproduced by permission.)
Setting Drinking Water Standards A method called risk assessment is used to set drinking water quality standards. The major division of cancer versus noncancer risks is evaluated with respect to the degree of exposure to an undesirable chemical in drinking water. The first step is to measure how much of the chemical could be in the water. Next, scientists estimate how much of the chemical the average person is likely to drink, and call this specific amount the exposure. In developing drinking water standards, the EPA assumes that the average adult drinks .5 gal (2 l) of water each day throughout a 70-year average life span. Risks are estimated differently for cancer and noncancer effects. For cancer influences, a risk assessment estimates a measure of the chances that an indi-
vidual may acquire cancer due to being exposed to a drinking water contaminant. The EPA normally sets MCLs at levels that limit an individual’s health risk of cancer from that contaminant to between 1 in 10,000 and 1 in 1 million over a lifetime. For noncancer influences, the risk assessment process estimates an exposure level below which no negative effects are expected to occur. Risk assessment is a uniform and effective way that the EPA can monitor water quality throughout the nation.
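As a rough illustration of the arithmetic behind this kind of estimate, the sketch below combines the stated intake assumption (about 2 l of water per day over a lifetime) with a hypothetical contaminant concentration and a hypothetical cancer potency value, then checks the result against the 1-in-10,000 to 1-in-1,000,000 target range described above. The body weight and potency (slope-factor) figures are illustrative assumptions, not EPA numbers.

```python
# Minimal sketch of a lifetime cancer-risk screen. All numeric inputs are
# illustrative assumptions except the 2 l/day intake and the 1e-6 to 1e-4
# target range, which come from the discussion above.

DAILY_INTAKE_L = 2.0             # assumed drinking-water intake, liters/day (from text)
BODY_WEIGHT_KG = 70.0            # assumed adult body weight (illustrative)
ACCEPTABLE_RISK = (1e-6, 1e-4)   # 1 in 1,000,000 to 1 in 10,000 lifetime risk

def lifetime_cancer_risk(conc_mg_per_l: float, slope_factor: float) -> float:
    """Estimate lifetime excess cancer risk for a contaminant.

    conc_mg_per_l: concentration in drinking water (mg/l).
    slope_factor: hypothetical cancer potency, risk per (mg/kg body weight/day).
    """
    daily_dose = conc_mg_per_l * DAILY_INTAKE_L / BODY_WEIGHT_KG  # mg/kg-day
    return daily_dose * slope_factor

risk = lifetime_cancer_risk(conc_mg_per_l=0.005, slope_factor=0.05)  # hypothetical inputs
low, high = ACCEPTABLE_RISK
print(f"estimated risk: {risk:.1e}",
      "within target range" if low <= risk <= high else "outside target range")
```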
Water Treatment Processes Water suppliers use a variety of treatment processes to remove contaminants from drinking water. In order to most effectively remove undesirable contaminants from the water, these individual processes are grouped into what is commonly called a treatment train. Filtration and chlorination are longtime effective treatment techniques for protecting U.S. water supplies from harmful contamination. Today, other commonly used processes include filtration, flocculation, sedimentation, and disinfection. Additional decontamination processes have been implemented over the years. In the 1970s and 1980s, improvements were made in membrane development for reverse osmosis filtration and other treatment techniques such as ozonation, ion exchange, and adsorption. A typical water treatment plant will possess only the combination of processes that it needs in order to treat the particular contaminants in its source water. Recently, a new approach called multiple barriers has been implemented to counter tap water contamination from source waters. The multiple barriers technique is one that allows for different treatments of water along the decontamination process in order to decrease the possibility of degradations. It also provides, in some cases, for physical connections between water supplies, so that if one supply is degraded water can be diverted around the problem. In addition, modern water treatment technologies (such as membranes, granular activated carbon, and more advanced disinfectants) and improvements in distribution systems also have been installed.
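One way to picture a treatment train is as an ordered sequence of unit processes selected for a plant's particular source water. The sketch below is illustrative only: the stage names mirror the processes named above, but the pipeline structure and the example plant configuration are hypothetical.

```python
# Illustrative model of a treatment train as an ordered list of unit processes.
# Stage names follow the processes discussed above; the configuration is hypothetical.

from typing import List

STANDARD_STAGES = ["flocculation", "sedimentation", "filtration", "disinfection"]

def build_treatment_train(source_water_issues: List[str]) -> List[str]:
    """Assemble a train: conventional stages plus extras keyed to source-water problems."""
    train = list(STANDARD_STAGES)
    if "hard water" in source_water_issues:
        train.insert(-1, "ion exchange")    # soften before disinfection
    if "taste and odor" in source_water_issues:
        train.insert(-1, "adsorption")      # activated carbon for organics
    if "resistant microbes" in source_water_issues:
        train.insert(-1, "ozonation")       # extra disinfection barrier
    return train

print(build_treatment_train(["hard water", "taste and odor"]))
# ['flocculation', 'sedimentation', 'filtration', 'ion exchange', 'adsorption', 'disinfection']
```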
Recently the CDC and the National Academy of Engineering named water treatment as one of the most significant public health advancements of the twentieth century. Moreover, the number of treatment techniques that have been developed (along with the various combinations of those techniques) is expected to increase in the future as more complex contaminants are discovered and regulated.
After Twenty-five Years On December 16, 1999, the Safe Drinking Water Act was honored for twenty-five years of service to the citizens of the United States. Under the current law, every public water system must test for more than 80 individual contaminants. Water utility professionals under state drinking water program offices then study and investigate the testing results. Utility and state personnel finally compare the results with the established federal drinking water standards.
Since the enactment of the 1974 SDWA, the government, the public health community, and water utilities throughout the country have worked together to protect the nation’s drinking water supplies and to ensure the law safeguards public health. In addition, water utilities have helped to strengthen the law by keeping
customers informed about their drinking water. According to the American Water Works Association, annual consumer confidence reports (CCRs) on water quality help consumers to understand a number of different things about their drinking water. Since 1999, nearly all water utilities have been required to distribute a water quality report to their consumers. CCRs, sometimes called water quality reports, must be prepared each year by water systems to explain what substances were found in drinking water and whether the water is safe to drink. The report provides information on local drinking water quality, including the water’s source, the contaminants found in the water, and how local consumers can get involved in protecting drinking water. Conclusion: Consistently High Quality Few things are as important to our personal wellbeing as drinking water. At the beginning of the twenty-first century, U.S. consumers possess more information than ever before about the quality of their drinking water. The NEETF states the “United States is one of the few nations in the world that consistently enjoys high-quality drinking water from the tap. But no system is perfect and local differences in tap water quality can be significant.”
It is acknowledged that water safety in the United States is not perfect, and that the standards currently enacted do not encompass all of the problem areas within water safety. However, water safety and the standards that protect it are continually improving, and they provide sufficient protection for the water supply and for the citizens who drink that water. Unfortunately, there is a growing number of threats (especially those made by individuals or groups) that could possibly contaminate drinking water. Nonetheless, serious drinking water contamination occurs infrequently, and typically not at levels that endanger near-term health. But with the likelihood of such menacing events increasing, drinking water safety cannot be taken lightly. The EPA and its partners, water suppliers, and the public must constantly be vigilant in order to ensure that such events do not occur frequently in the water supply. Even though more types of water contamination are possible, and more threats of degradation to water supplies are likely, the good news is that the water treatment technology of the United States is very capable of removing the vast majority of contamination from the public’s drinking water. —WILLIAM ARTHUR ATKINS
Viewpoint: No, current U.S. drinking water standards are not sufficient and must be improved to ensure public health.
“It must be in the water.” Although this common expression is usually said in jest, unfortunately, and sometimes tragically, “it” is in the water, as with the source of carcinogens in the 1970s leukemia cluster in Woburn, Massachusetts. “It” was the basis for the best-selling book A Civil Action (1995) by Jonathan Harr. In the Woburn case, illegally dumped trichloroethylene and other possible carcinogens seeped into two of the city wells. Criminal acts and failure of the city to properly test the water, not standards, were the issue. Although this incident occurred
several decades ago, the story of the Woburn case should be required reading for all public water-supply providers to make personal the consequence of tainted water at any time from any source, including consequences from inadequate testing. Standards, testing, and enforcement have to be one issue.
Donna Robins of Woburn, Massachusetts, and son Kevin. The Robins family and others sued the W. R. Grace Company. (© Bettmann/CORBIS. Reproduced by permission.)
The story also should be required reading for the authorities who make many of the cost/benefit decisions on setting standards. The U.S. Environmental Protection Agency (EPA), the federal regulatory body responsible for drinking water standards, has been considering a more stringent regulation of arsenic in water, but notes a significant reduction in the maximum contaminant level (MCL) allowed by law could increase compliance costs for water utilities. The question arises: How much is a child’s life worth?
Arsenic Standard Controversy A longstanding standard for arsenic in drinking water (since 1975) was 50 parts per billion (ppb) or 50 micrograms per liter (µg/l). In 1998 the EPA estimated that arsenic occurrence in groundwater exceeded that limit in 93 U.S. public drinking water systems. The EPA issued a regulation in January 2001 mandating that the standard fall to 10 ppb by 2006, but Christine Todd Whitman, the EPA administrator, suspended implementation of the new standard in March 2001 to allow for further review. After considerable public outcry and the publication of a National Research Council (NRC) report spelling out a significant cancer risk, the government issued a new standard by 2002.
An extensive study of research on the health effects of various levels of arsenic in drinking water was conducted by the NRC, an independent, nongovernmental organization affiliated with the National Academy of Sciences whose work includes convening experts to study scientific and public health issues of interest to the federal government and other parties. The NRC began its study of arsenic in drinking water in 1997 and issued its first comprehensive report in 1999. The 2001 proposed change in the drinking water standards for arsenic was in response to that report.
The safeguards built into the legislative process in the United States to protect citizens from hasty and oppressive laws make it difficult to provide new drinking water standards, and effective enforcement of them, in any reasonable time, even when scientific evidence cries out for change. According to a timeline published by the EPA, it takes eight years from the date a contaminant candidate is placed on the first list to the date when regulatory determinations are announced.
The NRC issued another report in September 2001 which reinforced its assessment that even very low concentrations of arsenic in drinking water appear to be associated with a higher incidence of cancer. The committee found that people who consume water with 3 ppb (µg/l) arsenic daily have about a 1 in 1,000 risk of developing bladder or lung cancer during their lifetime. At 10 ppb (µg/l) the risk is more than 3 in 1,000; at 20 ppb (µg/l) it is approximately 7 in 1,000, based on the consumption of about 1 qt (1 l) of water per day—a conservative amount of water to drink in one day.
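The figures quoted above (roughly 1, 3, and 7 per 1,000 at 3, 10, and 20 ppb) describe a risk that scales roughly with concentration. The sketch below simply interpolates between those quoted points to estimate risk at intermediate concentrations; it is an arithmetic illustration, not the NRC's actual dose-response model.

```python
# Linear interpolation between the NRC risk figures quoted above.
# This is an arithmetic illustration only, not the NRC's statistical model.

# (arsenic concentration in ppb, approximate lifetime bladder/lung cancer risk)
NRC_POINTS = [(3, 1 / 1000), (10, 3 / 1000), (20, 7 / 1000)]

def interpolated_risk(ppb: float) -> float:
    """Estimate lifetime risk at a concentration between the quoted data points."""
    points = sorted(NRC_POINTS)
    if ppb <= points[0][0]:
        return points[0][1]
    if ppb >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= ppb <= x1:
            return y0 + (y1 - y0) * (ppb - x0) / (x1 - x0)
    raise ValueError("concentration out of range")

print(f"risk at 5 ppb: about {interpolated_risk(5):.4f}")  # ~0.0016, i.e., ~1.6 in 1,000
```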
MTBE in Water The U.S. drinking water standards clearly are inadequate for arsenic. Although it may be one of the best-publicized debates, arsenic is not the only contested standard. Another potential hazard that has been in the news is MTBE (methyl tert-butyl ether). MTBE has been a common additive used in gasoline throughout the United States to reduce carbon monoxide and ozone levels caused by auto emissions. It replaced lead as an octane enhancer, because lead was (justifiably) regulated as a very serious health risk.
Now MTBE is contaminating ground and surface water supplies where it imparts a foul odor and taste to the water. MTBE gets into water supplies from leaking underground storage tanks and pipelines, spills, emissions from marine engines into lakes and reservoirs, and to some extent from air contaminated by engine exhaust. The long-term health risk is still being studied, although a Consumer Acceptability Advisory Table published by the EPA (summer 2000) includes under margin of exposure a reference for cancer effects for two concentration levels. MTBE falls under the EPA’s Unregulated Contaminant Monitoring Rule.
MTBE is being phased out of gasoline, but the same contamination sources will continue to exist. What will the risk be from the additive(s) that replace it? With the amount of gasoline stored and used in the United States on a daily basis, it is naive to believe the drinking water supplies will be safe from exposure if the replacement turns out to be a contaminant.
EPA Story The EPA is an independent agency in the executive branch of the federal government that was formed in 1970 to consolidate the government’s environmental regulatory activities under the jurisdiction of a single agency. Among the 10 comprehensive environmental protection laws administered by the EPA are the Clean Water Act and the Safe Drinking Water Act (SDWA). Within the agency, the Office of Water is in charge of water quality, drinking water, groundwater, wetlands protection, marine estuary protection, and other related programs. The SDWA was first enacted in 1974 and was amended in 1977, 1986, and 1996.
One of the significant edicts of the EPA is the National Primary Drinking Water Regulation, a legally enforceable standard that applies to public water systems. Another EPA document is the National Secondary Drinking Water Regulation, a nonenforceable guideline regarding contaminants that may cause cosmetic or aesthetic effects in drinking water. The arduous process for setting standards includes input from a 15-member committee created by the SDWA, the National Drinking Water Advisory Council. There is no fast-track process for enacting changes to standards. Where scientific research and information technology have reached the twenty-first century, the EPA has not. The National Primary Drinking Water Standards are listed as a maximum contaminant level goal (MCLG), the level of contaminant in drinking water below which there is no known or expected risk to health, and as a maximum contaminant level (MCL), the highest level of contaminant that is allowed in drinking water. The EPA says MCLs are set as close to MCLGs as feasible, using the best available treatment technology and taking cost into consideration. Only MCLs are enforceable standards.
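The MCLG/MCL distinction can be captured in a small data structure. The sketch below is illustrative only; the class itself is hypothetical, and the example values echo figures quoted elsewhere in this essay (the lead and trichloroethylene MCLs with goals of zero, and the nitrate standard), not an official EPA data set.

```python
# Illustrative data structure for the MCLG/MCL distinction described above.
# Example values echo figures quoted in this essay; the structure is hypothetical.

from dataclasses import dataclass

@dataclass
class DrinkingWaterStandard:
    contaminant: str
    mclg: float          # health goal (not enforceable)
    mcl: float           # enforceable limit
    unit: str            # "ppb" (ug/l) or "ppm" (mg/l)

    def violates(self, measured: float) -> bool:
        """Only the MCL is enforceable, so compliance is judged against it."""
        return measured > self.mcl

STANDARDS = [
    DrinkingWaterStandard("lead", mclg=0.0, mcl=15.0, unit="ppb"),
    DrinkingWaterStandard("trichloroethylene", mclg=0.0, mcl=5.0, unit="ppb"),
    DrinkingWaterStandard("nitrate", mclg=10.0, mcl=10.0, unit="ppm"),
]

sample = {"lead": 18.0, "trichloroethylene": 2.0, "nitrate": 6.0}  # hypothetical results
for std in STANDARDS:
    if std.violates(sample[std.contaminant]):
        print(f"{std.contaminant}: {sample[std.contaminant]} {std.unit} "
              f"exceeds MCL of {std.mcl} {std.unit}")
```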
FLUORIDE IN DRINKING WATER: GOOD OR BAD?
Fluoride is the only compound whose concentration is limited by law on the EPA’s National Primary Drinking Water Standards list of contaminants that is also an additive in the public drinking water—for more than 144 million people in the United States. “Concentration” is the key word for fluorides. The Environmental Protection Agency (EPA) has set an enforceable drinking water standard for fluoride of 4 parts per million (ppm) or 4 milligrams per liter (mg/l), and a secondary cosmetic standard of 2 ppm (mg/l). The lower standard is suggested because young children exposed to too much fluoride get what was dubbed “Colorado brown stain” in the early 1900s by Frederick McKay, a young dentist in Colorado Springs who noticed that children with the stains also had fewer cavities. He got the cooperation of H. V. Churchill, an ALCOA chemist in Pittsburgh, Pennsylvania, who identified fluoride as the substance in the water that was causing both the staining and the stronger teeth. Many studies were conducted on fluoride following this discovery. A 1946 study by Joseph Mueller, a young dentist in Indiana, found that when a fluoride compound of tin (stannous fluoride) was applied directly to tooth enamel it gave the same protection. Soon after, the well-known toothpaste print advertisement “look, Mom—no cavities” appeared, often with an illustration by Norman Rockwell of a smiling boy or girl. Fluoride prevents tooth decay through both direct contact with the teeth and when people drink it in the water supply. The most inexpensive way to deliver its benefits is by adding fluoride to the drinking water. The National Center for Chronic Disease Prevention and Health Promotion advocates providing optimal levels of fluoride. A one-size-fits-all approach does not apply to determining optimal levels of fluoride, so some organizations oppose putting fluoride in drinking water. The Water, Environment, and Sanitation division of the United Nations Children’s Fund recognizes fluoride as an effective agent for preventing tooth decay. However, because a diet poor in calcium increases the body’s retention of fluoride, people’s nutritional status must be considered in determining the optimal level for daily fluoride intake. This potential for the retention of too much fluoride put fluoride on the EPA’s enforceable standard list. Too much fluoride can cause skeletal fluorosis, a bone disease. For this reason, fluoride is removed from water supplies where the natural fluoride level is too high.
—M. C. Nagel
Contaminants are listed under the following headings: microorganisms, disinfectants and disinfection by-products, inorganic chemicals, organic chemicals, and radionuclides. Under microorganisms is a heading for turbidity, which serves as a general catchall to indicate water quality and filtration effectiveness. This is justified because one reason for turbidity is the presence of microorganisms. The listing of disinfection by-products was necessary because contaminants such as total trihalomethanes, many of which are carcinogenic, are produced by the disinfection process, making a bad situation worse.
Age Factor While the standards for the vast majority of contaminants are considered adequate, research continues. It has become clear that some groups are more vulnerable to contaminants, and that for some contaminants the health effects can vary significantly with the age of the exposed individual.
Nitrates are an example of an age-dependent health risk. Infants below the age of six months can die from blue-baby syndrome if exposed to nitrates above the MCL. The MCL and MCLG for nitrates are both 10 parts per million (ppm) or 10 mg/l. Nitrates occur naturally in soil and water in low concentrations. When fertilizers are applied in excess, nitrate levels in groundwater used as drinking water sources can be raised to dangerous levels quickly from rain or irrigation. In some areas, high-risk individuals, particularly pregnant women, may have to use bottled water. Lead is another contaminant that is more hazardous to infants and children, although it is also hazardous to adults. The MCL for lead is 15 ppb (µg/l), where the goal is zero exposure. Much of the exposure to lead in drinking water comes from household plumbing, not from an EPA-regulated external water source. With a long-demonstrated risk of delayed physical and mental development in children exposed to lead in drinking water, there should be compelling regulations to end this easily preventable exposure. The challenge may fall under an agency other than the EPA; perhaps a housing authority should be brought into the mix. Trichloroethylene, suspected to be the primary offender in the Woburn leukemia cases, has an MCL of 5 ppb (µg/l) with a goal of zero. There is no mention of it being a more serious carcinogen in children, but most of the victims in Woburn were young. According to fact sheets published by the EPA, the risk becomes more serious with exposure over a year or longer.
Bottled Water The water tasted strange to the Woburn victims and they did complain to authorities. Bottled water was not readily available in 1970, but it is today. Bottled water is considered a food product by the government and thus is covered by the U.S. Food and Drug Administration (FDA) regulations. As with public drinking water, it is regulated at the state and local level as well as by the federal government. The FDA ensures the water, like food, is processed, packaged, shipped, and stored in a safe and sanitary manner. Since 1993 the FDA has implemented standard definitions for terms used on the labels of bottled water, such as “mineral,” “spring,” “artesian,” “well,” “distilled,” and “purified.” There is no guarantee the source is in total compliance with all EPA regulations and recommendations because the source of the water may be unregulated, or it may be from a public drinking water source that is not completely in compliance. The EPA regulates drinking water supplies down to as small as 25 users, but not below that. According to the EPA’s published information, in 1996 more than 6% of community water systems violated MCL or treatment levels in 36 states. In 13 of these states, more than 11% of the systems were out of compliance.
The use of bottled drinking water has skyrocketed in the United States during the past two decades. However, there is no guarantee that the source of any particular bottled water is in compliance with EPA standards. (Photograph by David Hanover. CORBIS. Reproduced by permission.)
Current Research More up-to-date studies are being done to ensure safe drinking water. Issues related to safe drinking water are frequently reported in news of current research. A study was conducted at the University of Illinois at Urbana-Champaign (UI) in 2001 on a sequential disinfection process that would better handle Cryptosporidium parvum, a parasitic protozoan that can infiltrate a public water supply. In Milwaukee, Wisconsin, in March 1993, more than 400,000 people were infected with symptoms similar to food poisoning during an outbreak of cryptosporidiosis. Most treatment plants disinfect drinking water using chlorine, which has little effect on C. parvum outside of its host. The Illinois study used a sequence of ozone followed by chlorine to more effectively kill the parasite than either treatment alone. Benito Marinas, a UI professor of civil and environmental engineering, directed the research.
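The "sequential disinfection" idea can be pictured as chaining disinfection steps and crediting each with some level of inactivation. The sketch below is a toy model with made-up log-inactivation numbers; it is intended only to illustrate why a sequence such as ozone followed by chlorine can outperform either step alone (for example, if a first oxidant weakens the oocyst and makes a second disinfectant more effective). It does not reproduce the Illinois study's data or methods.

```python
# Toy model of sequential disinfection. All log-inactivation values are made up
# for illustration; they are not measurements from the study described above.

HYPOTHETICAL_LOG_KILL = {          # log10 reduction of C. parvum per step, alone
    "chlorine": 0.1,               # chlorine alone has little effect (per text)
    "ozone": 1.0,
}
SYNERGY_BONUS = {("ozone", "chlorine"): 1.5}   # extra credit when ozone precedes chlorine

def total_log_inactivation(sequence: tuple) -> float:
    """Sum per-step credits plus any bonus for a synergistic ordering."""
    total = sum(HYPOTHETICAL_LOG_KILL[step] for step in sequence)
    return total + SYNERGY_BONUS.get(sequence, 0.0)

for seq in [("chlorine",), ("ozone",), ("ozone", "chlorine")]:
    print(seq, "->", total_log_inactivation(seq), "log reduction")
# e.g. ('ozone', 'chlorine') -> 2.6 log reduction, versus 0.1 or 1.0 for either alone
```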
At Brigham Young University (BYU) researchers have created molecules that glow in the presence of metal pollutants such as zinc, mercury, and cadmium, which could provide an early warning system to alert regulators to the contamination of drinking water and waste streams. A BYU press release in July 2001 stated that, according to the EPA’s toxic chemical release inventory, from 1987 to 1993 cadmium releases were primarily from zinc, lead, and copper smelting and refining industries, with the largest releases occurring in Arizona and Utah. The research was led by Jerald S. Bradshaw, professor emeritus of chemistry, and Paul B. Savage, associate professor of chemistry. A better water-quality sampling scheme has been developed at the University of Arkansas. The researchers Thomas Soerens, assistant professor of civil engineering, and Marc Nelson, director of the Arkansas Water Resources Center Quality Lab, pointed out in a May 2001 news release that timing is everything in taking test samples, especially during storms.
The public drinking water supplies in the United States are among the world’s most reliable, but that does not mean they are all they could or should be. There is no room for complacency when it comes to drinking water standards, especially as new information becomes available. The procedures for developing new standards must keep pace with the information on contaminants and their effects on health. Testing procedures and enforcement of safe drinking water standards are equally important; without them, the standards have no meaning. A Woburn, Massachusetts, scenario must never happen again. —M. C. NAGEL
Further Reading Barzilay, Joshua I., Winkler G. Weinberg, and J. William Eley. The Water We Drink: Water Quality and Its Effects on Health. New Brunswick, N.J.: Rutgers University Press, 1999.
The National Environmental Education and Training Foundation. “What’s in the Water: NEETF’s Guide to Consumer Confidence Reports on Drinking Water Quality (Drinking Water Safety and Regulation).” .
The National Research Council. 2002. National Academy of Sciences. .
NSF Consumer Information. NSF International, the Public Health and Safety Company. .
Olson, Eric D. “Clean Water and Oceans: Drinking Water.” Implementation of the Safe Drinking Water Act Amendments of 1996 (October 8, 1998). National Resources Defense Council. .
Stanford, Errol. The Water Conspiracy: Is the Drinking Water Affecting Us? Long Island City, N.J.: Seaburn Publishing, 1997.
Blair, Cornelia, Barbara Klier, and Nancy R. Jacobs, eds. Water: No Longer Taken for Granted. Wylie, Tex.: Information Plus, 1999.
Symons, James M. Drinking Water: Refreshing Answers to All Your Questions. College Station, Tex.: Texas A&M University Press, 1995.
The Clean Water Network. .
Tiemann, Mary E. “CRS Report to Congress” (91041: Safe Drinking Water Act: Implementation and Reauthorization). Environment and Natural Resource Policy Division, Congressional Research Service. .
Drinking Water Standards Program. 8 September 2001. Environmental Protection Agency, Office of Water. . Environmental Protection Agency, Office of Water. Water on Tap: A Consumer’s Guide to the Nation’s Drinking Water. EPA 815K-97-002. July 1997. Harr, Jonathan. A Civil Action. New York: Random House, 1995. Ingram, Colin. The Drinking Water Book: A Complete Guide to Safe Drinking Water. Berkeley, Calif.: Ten Speed Press, 1991. Latest Drinking Water News. Water Quality and Health Council. .
United States Geological Survey. 2002. United States Department of the Interior. . United States National Research Council Subcommittee on Arsenic in Drinking Water. Arsenic in Drinking Water. Washington, D.C.: National Academy Press, 1999. Water Organizations. Capitolink. .
United States Environmental Protection Agency. Homepage of the EPA. .
Is the Great Sphinx twice as old as Egyptologists and archaeologists think, based on recent geological evidence? Viewpoint: Yes, recent evidence suggests that the Great Sphinx is much older than most scientists believe. Viewpoint: No, the Great Sphinx was built about 4,500 years ago during the reign of the pharaoh Khafre, as has long been believed by most archaeologists and Egyptologists.
Rising from the Sahara in Egypt looms one of history’s most perplexing mysteries. Its stone eyes stare out of an almost human face, surveying a land of ancient tombs and endless sand. For millennia, it has weathered the ravages of time and witnessed the rise and fall of civilizations. Yet, after all these hundreds of years, the Great Sphinx of Giza remains an enigma. Just when we believe we are about to solve its eternal riddles, the Sphinx reveals another layer of secrecy. By its very existence, the Great Sphinx can be considered a riddle. It watches over the necropolis of Giza like some silent sentinel from a forgotten age. Formed from blocks of carved limestone, it is a 240-foot-(73-m) marvel of architectural and engineering skill. Archaeologists have long debated how a civilization some 4,500 years ago could manage to transport such building materials of such weight and size from quarries so far away. Over the years, scientists and laymen alike have developed numerous theories to explain how ancient Egyptians succeeded at this seemingly impossible achievement. These explanations ranged from the completely plausible to the totally fantastic, including the intervention of aliens from outer space. With so many farcical beliefs being presented, it is understandable that a certain level of skepticism would develop in the archaeological community. So it was not surprising that scientists scoffed when they were presented with the hypothesis that dated the Great Sphinx at almost double the age it had traditionally been believed to be. If this date were accurate, it could suggest the existence of an ancient race with the technological skill to erect such a monument. Perhaps the pharaoh Khafre was not responsible for building the Great Sphinx after all, but instead built Giza around it. However, unlike the more farfetched theories regarding the origins of the Sphinx, this claim could possibly be backed up with evidence. One of the easiest ways to determine the age of ancient buildings comes from the effects of erosion upon their structures. Wind and water wage in an endless war on stone, slowly wearing it away with an incessant assault. The Great Sphinx was not immune to these forces and now displays wounds from its hopeless struggle against time and nature. And at one period in its existence, it spent more than 700 years beneath the surface of the desert.
Water and windblown sand leave different types of marks on the surfaces they wear down. Upon closer examination of the surface of the Sphinx, scientists have begun to wonder whether the scoring is more water-based or wind-based. If the former, could the Great Sphinx come from a time when the weather patterns were significantly different from today's? Also, the Sphinx was constructed from materials similar to those of the nearby pyramids and other structures. If it had been built at the same time as these other monuments, would it not share similar erosion marks? Some evidence calls this assumption into question. However strong some of the evidence for an older civilization being responsible for the Great Sphinx may be, it does not explain several contradictory beliefs and findings. Indeed, opponents of the older Sphinx hypothesis dismiss much of the evidence as coincidental or simply misinterpretation. As with much of science, the way scientists look at a particular finding shapes the answer they find.
KEY TERMS
OLD KINGDOM: Period in Egypt's history from roughly 2575 to 2130 B.C.
NEW KINGDOM: Period in Egypt's history from roughly 1550 to 1070 B.C.
Both sides of the Great Sphinx debate have "evidence" that "proves" their point. Does this mean only one side is correct, or could the truth lie somewhere in between? Like the shifting sands of the Giza necropolis, the "facts" can change and take new shapes. Each day science and technology continue to advance, allowing us to reexamine what was once held to be true and to disprove conjecture once and for all. Perhaps only one constant remains in regard to the Great Sphinx and its origins. Much like the creature of legend upon which it is based, it will continue to pose riddles to humankind and hold its secrets close to its stone heart. We have been considering its mysteries for thousands of years. Perhaps we will never solve them all, but we will continue trying. —LEE ANN PARADISE
Viewpoint: Yes, recent evidence suggests that the Great Sphinx is much older than most scientists believe.
In the early 1990s, the American writer and independent Egyptologist John Anthony West posed the question of erosion that launched the Sphinx controversy. While reading the works of R. A. Schwaller de Lubicz (1887–1962), an earlier Egyptologist and mathematician, West found de Lubicz's references to water erosion on the Sphinx and was intrigued. As West relates in his 1993 television program, Mystery of the Sphinx, he went to see an Oxford geologist and asked if he might play a trick on him. West showed the man a photograph that was partly covered, making the area look like any common, eroded cliff. Was this sand or water erosion, he asked the geologist? Water, definitely, answered the scholar, backtracking only when shown the complete photograph and realizing its subject was the Great Sphinx.
According to tradition, the Great Sphinx of Giza was built around 2500 B.C. by the pharaoh Khafre, during the period known as the Old Kingdom. To state that the Sphinx is older than the Old Kingdom implies that some sort of organized civilization existed in this area long before the third millennium B.C. If this is so, much of what archeologists and historians think they know about the rise of civilization must be revised. That idea is as threatening to many scientists today as Galileo's idea that Earth revolves around the Sun was to the church hundreds of years ago. However, the idea that the Sphinx is older than commonly assumed is not new; it was an accepted truth among Egyptologists in the nineteenth century. The British archaeologist Sir Flinders Petrie, one of the founding fathers of Egyptology, considered the Sphinx older than the Old Kingdom. In 1900, the director of the Department of Antiquities in the Cairo Museum, Sir Gaston Maspero, raised the possibility that Khafre did not build the Sphinx, but simply unearthed it. If that is the case, the monument is obviously older than the Old Kingdom, the time of Khafre's reign.
The Question of Erosion At the heart of the controversy seems to be the question of erosion. Was the erosion on the surface of the Great Sphinx caused by rainfall or wind? If the erosion were caused by rainfall, the Sphinx would indeed be thousands of years older than 2500 B.C. By the time of Khafre, rainfall in Egypt was very similar to its current level, and could not possibly account for the deep erosion on the surface of the Sphinx.
The Great Sphinx, shown in front of a pyramid in Giza, Egypt. (Photograph by Larry Lee Photography. CORBIS. Reproduced by permission.)
There are obvious differences between the effects of water erosion and sand erosion. Rocks eroded by wind-blown sand have a ragged, sharp appearance. Rocks eroded by water have smoother, undulating erosion patterns, resulting in wide fissures. According to geologist Robert Schoch, who has been researching the age of the Sphinx with West since 1990, the erosion on the Sphinx fits the latter pattern. Egyptologists argue that the water erosion on the Sphinx could have been caused by the Nile floods that occur in the area, but Schoch contends that if that were the case, the floods would have undercut the monument from its base. Instead, the heaviest erosion appears at the top of both the Sphinx and the walls enclosing it. This pattern is more consistent with rainfall from above than with flood water from below. Schoch also noted refacing work that had been form-fitted to the eroded blocks behind it. It is thought that the blocks used for this refacing are from the Old Kingdom, but why would so much work be necessary in less than 500 years? Some scholars have suggested that the original limestone used to build the Sphinx deteriorated quite rapidly. But if that were the case, and assuming an even rate of deterioration throughout the ages, the Sphinx should have disappeared approximately 500 years ago. Other scholars believe that New Kingdom workers used blocks from the causeway to Khafre's pyramid, which would be Old Kingdom blocks, to reface the Sphinx. However, there is no way to verify this belief. It is generally accepted that the Sphinx was buried in sand from approximately 2150 to 1400 B.C. It was then uncovered and repaired. From the various repairs done at different periods of history, it appears that weathering caused little erosion between 1400 B.C. and the present, yet the restoration work dating from 1400 B.C. is quite substantial. If the Sphinx was built in 2500 B.C. and spent most of the following millennium under sand, how did it erode so much? Furthermore, if the Sphinx and the tombs around it in the valley are made of the same rock (this was verified by an independent expert), and all date to the same period, shouldn't the erosion on the tombs be similar to the erosion on the Sphinx? Yet the tombs around the Sphinx show only the mild wind-blown sand weathering one would expect in Old Kingdom monuments.
How would rainfall explain the fact that the head of the Sphinx, which undoubtedly should be affected by rainfall, shows less weathering than other parts? Precise measurements taken of the head and the body reveal that the head is not proportional to the body; it is much too small. The tool marks on the head are "relatively recent," according to Schoch, and he believes that the head was recarved from the original, which had been heavily damaged.

Seismic measurements done on the grounds of the Sphinx enclosure point to a difference in the weathering of the rock under the Sphinx. The west side of the enclosure (the rump) shows less weathering than the other three sides. The north, east, and south sides show 50 to 100% more weathering. If we assume that the west side dates to Khafre's time, and the weathering rate of the rock is linear, then the Sphinx would date to 5000 B.C. at the earliest. If the weathering pattern is nonlinear, the Sphinx could be much older.
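The back-of-the-envelope logic behind that estimate can be made explicit. The sketch below is illustrative only: it assumes, as the paragraph does, that the west wall's subsurface weathering has accumulated since Khafre's reign (roughly 4,500 years of exposure) and that weathering depth grows linearly with time; the variable names and printed output are not from the source.

# Illustrative arithmetic for the linear weathering extrapolation described above.
# Assumptions (not measurements): the west wall's weathering accumulated over
# roughly 4,500 years, and weathering depth grows linearly with exposure time.

west_wall_age_years = 4500          # assumed exposure since Khafre's reign (~2500 B.C.)
extra_weathering = [0.5, 1.0]       # north/east/south sides show 50-100% more weathering

for factor in extra_weathering:
    implied_age = round(west_wall_age_years * (1 + factor))
    implied_date_bc = implied_age - 2000   # convert years before present to a rough B.C. date
    print(f"{int(factor * 100)}% more weathering -> about {implied_age} years old (about {implied_date_bc} B.C.)")

Under these assumptions the earliest implied date comes out near 5000 B.C., in line with the figure quoted above; a nonlinear, decelerating weathering rate would push the date back further, as the paragraph notes.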
The Lost Civilization Many researchers wonder: if the Sphinx predates the Old Kingdom, who built it? There are two possible and contradictory answers to this question. The first is that a primitive society predated the Old Kingdom, and its members built the Sphinx. Would it take a technologically advanced culture to erect the Sphinx? Not necessarily, but it would require technical skills and supreme organization. After all, a relatively primitive culture built Stonehenge in Britain. In 1998, in another area of the Sahara called Nabata, a Neolithic settlement was discovered with astronomical structures built with huge stones, like the Sphinx. The Nabata structures are fascinating in their astronomical accuracy, and date to approximately 4500 B.C. If Neolithic cultures could build structures such as these, why not a Sphinx?
On the other hand, the enclosure surrounding the Sphinx is made of huge blocks, and the builders had to move these blocks quite a distance to build the enclosure. Could a primitive culture complete such a task? In Mystery of the Sphinx, West challenged construction engineers to achieve the task. Even using a crane with one of the largest booms in the world, the task still could not be accomplished. Many supporters of the hypothesis that the Great Sphinx was built before the reign of Khafre concur that the builders were probably advanced, and possibly used acoustic technology to move the stones. Current technology can "levitate" small objects using sound, and it is not impossible that the lost civilization that built the Sphinx could move much bigger objects in the same way. In the biblical story of the destruction of Jericho, sound destroyed walls 6.5 ft (2 m) thick and some 20 ft (6 m) tall. Those who believe an ancient civilization constructed the Sphinx suggest that sound could also be used to put structures up. If there was a "lost civilization," challenge some opponents of the older Sphinx theory, where are their artifacts? Where is the proof of their existence? Schoch and West maintain that archeologists are looking in the wrong place. There is more than a fair chance that these artifacts are buried under silt in the Nile River, or under parts of the Mediterranean. In 1999, archeologists uncovered what they consider to be the remnants of Cleopatra's palace underwater in the silt of the harbor at Alexandria, Egypt. Cleopatra lived from 69 to 30 B.C.
Khafre Why was the Great Sphinx attributed to Khafre in the first place? In front of the Sphinx stands a stela, or vertical stone slab, with an inscription containing Khafre's name, but the text around it has eroded and flaked off. The inscription is known to be from the reign of Thutmose IV (1425–1417 B.C.), and the part that is legible tells of the repairs made to the Sphinx in Thutmose's time. The Giza plateau, where the Sphinx is located, also contains the Khafre pyramid and the Khafre Temple, and a causeway connecting the pyramid and the valley runs along the outer wall of the Sphinx. Several statues of Khafre were found buried in the temple in front of the Sphinx. This evidence is circumstantial at best. No one knows what the stela actually said regarding Khafre's involvement with the Sphinx. The inscription could have simply described repairs made by Khafre and Thutmose.
Some other Egyptologists believe that the face of the Sphinx is that of Khafre. To examine this possibility, West enlisted the help of Frank Domingo, a specialist in facial analysis for the New York City police. Using computer technology, Domingo compared the face of the Sphinx to a face on a statue of Khafre in a Cairo museum. The results strongly suggested that the face on the Sphinx was not Khafre's, and Domingo went on to comment that the facial features on the Sphinx are very consistent with those of the people of Africa. Interestingly enough, the Zulu tradition holds that their people once inhabited the Sahara "when it was green."
Far more damaging to the case for Khafre as the builder of the Sphinx is the Inventory Stela, found near the Great Pyramid in the nineteenth century. This stela describes repairs to the temple of Isis made by the pharaoh Khufu, who erected the Great Pyramid at Giza. Khufu predates Khafre, and the Inventory Stela states that he found the temple of Isis, “mistress of the pyramid, beside the house of the Sphinx.” This seems to indicate the Great Sphinx was there before Khafre’s time, assuming the stela does not refer to the house of another sphinx.
The hieroglyphics on the Inventory Stela are not from the time of Khufu, but date to around 1000 B.C. Egyptologists use this fact to dismiss the Inventory Stela as "fiction," even though old records were commonly copied at a later date. The authenticity of these copies is not usually challenged, except, of course, when they conflict with the conventional wisdom of Egyptology. There is no hard evidence that the Inventory Stela is inaccurate or fictional. Robert Schoch notes that for centuries, starting in the period of the New Kingdom and throughout Roman times, the Great Sphinx of Giza was considered to have been built before the Pyramids. Oral traditions of villagers who live in the Giza area date the Sphinx to 5000 B.C., before Khafre's time.
So much of our knowledge of the ancient world is based on oral traditions and ancient texts. When this evidence is supported by physical proof—such as the geological weathering pattern on the Sphinx—can we afford to ignore the facts simply because they contradict current beliefs? After all, Galileo was right: Earth does revolve around the Sun. —ADI R. FERRARA
Viewpoint: No, the Great Sphinx was built about 4,500 years ago during the reign of the pharaoh Khafre, as has long been believed by most archaeologists and Egyptologists.
The Great Sphinx of Egypt is a monument consisting of a pharaoh's head on the recumbent body of a lion. There were many other sphinxes in ancient Egypt, Assyria, Greece, and elsewhere. The Great Sphinx, with its human head, is termed an androsphinx. Other types of sphinx include the criosphinx, with a ram's head on the lion's body, and the hieracosphinx, with a hawk's head. The Great Sphinx, which was carved in soft limestone, is 240 ft (73 m) long. It shares the Giza necropolis site 6 mi (10 km) west of Cairo with the three Great Pyramids of Khufu, Khafre, and Menkaure. A number of smaller tombs, pyramids, and temples also remain at Giza. Most archaeologists believe that the Sphinx was constructed at the behest of Khafre, a pharaoh of the Old Kingdom's Fourth Dynasty, who reigned from 2520 to 2494 B.C. However, in the early 1990s, the American geologist Robert Schoch, along with American writer and ancient Egypt enthusiast John Anthony West, claimed that the Sphinx was built at a time before the rise of Egyptian civilization, perhaps between 7,000 and 9,000 years ago. Others hypothesize even earlier dates. These ideas are regarded with disbelief and derision by most mainstream scholars. Links to Khafre Several pieces of evidence support dating the Sphinx to Khafre's time. In front of the Sphinx there is a stela, or vertical stone slab, dating from the reign of New Kingdom pharaoh Thutmose IV (1425–1417 B.C.). The inscription was in the process of flaking off when it was recorded, but did include at least the first syllable of Khafre's name. A temple adjacent to the Sphinx, the Valley Temple, is associated with Khafre, and statues of the pharaoh were found there. In his time, two sphinxes 26 ft (8 m) long were constructed at each of the two entrances to the temple.
In addition, Khafre's mortuary temple, which lies adjacent to his pyramid, includes a center court that is identical to the one in the Sphinx Temple. A causeway runs between the Valley Temple and Khafre's pyramid. The drainage channel from this causeway empties into the enclosure where the Sphinx now stands. It is unlikely that the channel would have been thus positioned if the enclosure had already been excavated, since this would be regarded as a desecration, so the implication is that the Sphinx was built after the causeway. Weathering Patterns Much of Schoch's case for a prehistoric Sphinx is based on the quantity and patterns of erosion seen on the structure. The Sphinx was carved of soft limestone, a material vulnerable to water damage. Schoch contends that the amount of weathering on the surface of the Sphinx indicates that it withstood a prolonged period of moist, rainy weather; specifically, that which resulted from the glacial melts at the end of the last ice age. This transitional period lasted from about 10,000 to 5000 B.C.
However, one need not go back to the last Ice Age to account for water damage at Giza. Several instances of violent rains and severe flooding have been recorded in the Nile region in historical times. Damage and erosion caused by these storms were described in 1925 by W. F. Hume, then director of the Geological Survey of Egypt, in his book Geology of Egypt. "It must not be forgotten that the rains in the desert produce . . . sheet floods," Hume wrote. "The vast amount of water falling cannot be dealt with in many cases by the channels already existing, and as a result it makes new passages for itself along lines of least resistance. The deep grooves are cut through the more friable strata. . . ." In addition, Zahi Hawass, the director of antiquities at Giza, notes that the same erosion patterns cited by Schoch still continue on a daily basis. On some parts of the Sphinx's surface, large flakes are shed constantly, to the dismay of archaeologists and conservators who have yet to agree on the cause or the cure.

One thing they do agree on, however, is that the erosion is obviously not dependent on rain induced by the melting of ice age glaciers. Other than more recent rains, possible mechanisms include wind, weathering by water-saturated sand, and the crystallization of salts naturally present in the limestone after they are dissolved by morning dew.

Credibility Problems for Proponents of an Older Sphinx The major problem with the hypothesis that the Sphinx was built during prehistoric times is the lack of a credible candidate for builder. Many advocates of the older-Sphinx hypothesis solve this problem in ways that immediately eliminate the possibility of their being taken seriously in the scientific world, speculating that the Sphinx was constructed by aliens from outer space, or by ancient giants from Arabia.
Scientists are also unmoved by the Sphinx-related prophecies of the self-proclaimed psychic Edgar Cayce (1877–1945), which have influenced many Sphinx enthusiasts, including John Anthony West. Cayce claimed that he learned during a 1935 trance that people from the lost civilization of Atlantis were responsible for building the Sphinx. Furthermore, he said that the Atlanteans hid documents explaining the meaning of life in a chamber between the Sphinx's paws. Cayce prophesied that the documents would be discovered in 1998. When the chamber in which they were hidden was opened, he went on, it would trigger a geological catastrophe on a global scale. Fortunately this prediction failed to materialize. The ancient and mysterious monuments at Giza, including the three Great Pyramids and the Sphinx, have always interested mystics and eccentrics, as well as scientists. Fed up with New Age tour groups trampling his site looking for secret chambers, and maverick theorists advancing wild claims that divert attention from scholarly research, Hawass has dismissed the prehistoric Sphinx advocates as "pyramidiots." American archaeologist Mark Lehner, who first came to Egypt at the behest of the Cayce organization, became convinced of the Fourth Dynasty provenance of the Sphinx during his work at the Giza complex, and now collaborates with Hawass on excavations in the Pyramids area. However, not all advocates of an older-Sphinx hypothesis can be dismissed as believers in psychic prophecies and unsupportable theories. Geologist Schoch, despite defending the existence of mysterious lost civilizations in works such as his book Voices of the Rocks (1999), has argued that a prehistoric Sphinx could have been built by indigenous people. Schoch cites examples such as Jericho, which has a well-built stone tower and walls dating from around 8000 B.C., as demonstrating that Neolithic societies in the Near East were capable of significant construction projects. No archaeological evidence of such antiquity has been found in Giza, but scholars of the ancient world must often acknowledge that "absence of evidence is not evidence of absence." Still, in this case, the newer date for the Sphinx is supported by the fact that while prehistoric context is missing from the site, Fourth Dynasty artifacts abound. —CHERYL PELLERIN
Further Reading
Hawass, Zahi A. The Secrets of the Sphinx: Restoration Past and Present. Cairo: American University in Cairo Press, 1998.
Schoch, Robert M. "Redating the Great Sphinx of Giza." KMT 3, no. 2 (1992): 53–9, 66–70.
———. "A Modern Riddle of the Sphinx." Omni 14, no. 11 (1992): 46–8, 68–9.
———. Voices of the Rocks. New York: Harmony Books, 1999.
West, John A. The Traveler's Key to Ancient Egypt. New York: Alfred A. Knopf, 1989.
Wilford, John N. "With Fresh Discoveries, Egyptology Flowers." New York Times (December 28, 1999).
Wilson, Colin. From Atlantis to the Sphinx. New York: Fromm International Publishing Corporation, 1996.
Are ice age cycles of the Northern Hemisphere driven by processes in the Southern Hemisphere? Viewpoint: Yes, ice age cycles of the Northern Hemisphere are driven by complex forces in the Southern Hemisphere, and possibly even the tropics. Viewpoint: No, the ice age cycles of the Northern Hemisphere are not driven by processes in the Southern Hemisphere; the Milankovitch cycle and disruption of the ocean’s thermohaline circulation are the primary initiators.
Ice-age conditions—with a global temperature 9°F (5°C) lower than today’s climate and glaciers that cover most of Europe, Asia, and North and South America in a deep blanket of ice—have predominated for 80% of the past 2.5 million years. Ice ages come in 100,000-year cycles controlled by the shape of Earth’s orbit around the Sun. The shape varies, becoming more circular or elliptical every 100,000 years. In the 1870s, the Scottish physicist James Croll suggested that ice ages were caused by insolation, or changes in the amount of solar radiation at the poles as a result of this 100,000-year change in orbit shape, and changes in orbital tilt (every 40,000 years) and wobble (every 20,000 years). By themselves, changes in the amount of solar energy reaching Earth are too small to affect the global climate. Somehow—no one knows exactly how—the changes interact with the atmosphere and oceans and grow into large global differences in average temperature. And no one knows yet why ice ages occur in both hemispheres simultaneously when changes in solar radiation from orbital variations have opposite effects in the north and south. Despite gaps in knowledge, many hypotheses exist about what causes an ice age to begin or end. Some focus on the Northern Hemisphere as the connection between orbital variations and climate. In the 1930s, for example, the Serbian geophysicist Milutin Milankovitch suggested that orbital variations in solar radiation at 60°N drove the waxing and waning of ice sheets in North America and Europe. In 1912, Milankovitch had described the small but regular changes in the shape of Earth’s orbit and the direction of its axis, a process now called the Milankovitch cycle. A confluence of these factors— maximum eccentricity (when Earth’s orbit is most elliptical), extreme axial tilt (with the North Pole pointed most acutely away from the Sun), and precession, which delays and reduces solar radiation at high northern latitudes— could lead to a major ice age in the Northern Hemisphere. A recent study of Antarctic seafloor sediment cores by an international team of scientists shows that changes in polar regions—particularly the advance and retreat of glaciers—follow variations in Earth’s orbit, tilt, and precession as described in the Milankovitch cycle. The samples showed that Antarctic glaciers advanced and retreated at regular intervals during a 400,000year period, and the glaciation and retreat cycle matched those predicted by Milankovitch, with increased glaciation at 100,000- and 40,000-year intervals. Other researchers propose that the North Atlantic Deep Water circulation belt (NADW) may be an important amplifier of climatic variation that magnifies subtle temperature and precipitation changes near Greenland. The
NADW is an enormous ocean current that moves huge quantities of equator-warmed water along the equator and up the coast of North America toward northwestern Europe. The engine that drives this current is near Iceland in the North Atlantic. Here, subsurface ocean water is very cold, and because it holds lots of salt, very dense. This cold, salty, dense North Atlantic current sinks to the depths and flows southward toward the equator, pushing immense volumes of water ahead of it. As it heads south the current warms, loses salt, becomes less dense, and rises toward the surface. At the equator the warm, low-saline current swings through the Caribbean and up again toward Europe. Near Greenland it is cooled by Arctic air masses and gets colder, saltier, and denser, then sinks and flows southward, completing the cycle. Evidence shows that global warming affects the NADW and has led to periods of glaciation, and that this may be happening now. As air and sea surface temperatures increase, evaporation over the oceans increases. Greater amounts of water vapor rise, accumulate in clouds, and eventually fall as rain (fresh water). Much of the increased rainfall occurs over the North Atlantic, where it dilutes seawater salinity and density, disrupting the NADW. As air temperatures increase, melting Arctic sea ice and Greenland glaciers further reduce North Atlantic salinity and density, pushing the North Atlantic current toward collapse. Collapse of the current would lower temperatures in northwestern Europe by 20°F (11°C) or more, giving Ireland a climate identical to that of Norway. Once snow and ice accumulate on the ground year-round, the onset of an ice age is rapid. Other hypotheses about the origin of ice ages focus on processes in the Southern Hemisphere. In 2000, for example, Gideon Henderson of Lamont-Doherty Earth Observatory at Columbia University and Niall Slowey of Texas A&M University challenged the long-standing belief that processes in the Northern Hemisphere control ice age cycles. To understand what causes changes in Earth’s climate, scientists must create a history of those changes. One way to do this involves removing long cores of soil or ice from Earth’s crust. These cylindrical cores show layers of climate history, just as the rings of a tree trunk show its stages of growth. Long cores have been extracted from Greenland and the Antarctic, from marine sediment on ocean floors, and from the floor of a cave called Devil’s Hole in Nevada, where terrestrial climate is preserved. Henderson and Slowey said their study of marine sediment cores produced evidence that atmospheric CO2 (carbon dioxide) levels influenced the ice ages. The change in global atmospheric CO2 concentration was centered at 138,000 to 139,000 years ago, at the same time there was a peak in Southern Hemisphere insolation. This relationship suggested the change in CO2 was driven by a process in the Southern Hemisphere, and may initiate processes that eventually lead to the collapse of the Northern Hemisphere ice sheets.
Henderson and Slowey reported that ice-flow age models from samples taken from the Vostok ice core in Antarctica typically put the penultimate deglaciation earlier than the 127,000-year mark. Calculations by astronomers also indicated that peak insolation in the Southern Hemisphere occurred 138,000 years ago, in the same timeframe as Henderson and Slowey's findings about the end of the second-to-last ice age. Henderson and Slowey used an improved method of uranium-thorium (U-Th) dating to show that the midpoint of the end of this ice age was much older, at 135,000 years ago. They say this new, accurate date is consistent with deglaciation driven by orbital variations in solar radiation, either in the Southern Hemisphere or in the tropics, but not in the Northern Hemisphere. However, scientists still don't know how climatic change in the Southern Hemisphere could cause ice sheets in North America to melt. —CHERYL PELLERIN
Viewpoint: Yes, ice age cycles of the Northern Hemisphere are driven by complex forces in the Southern Hemisphere, and possibly even the tropics.
For more than 80% of the last 2.5 million years, much of Earth's surface has been buried under a heavy blanket of ice. Periodically, Earth's atmosphere cools and great blankets of ice gouge their way south from the northern polar ice cap, covering Europe, Northern Asia, and much of North America.
These ice ages are punctuated with interglacial periods, times when Earth's climate warms, and glaciers and ice caps recede. Understanding what drives these ice age cycles has proved to be an elusive task. Even as pieces of the puzzle are added, the picture remains incomplete. However, when two researchers—Gideon M. Henderson from the Lamont-Doherty Earth Observatory at Columbia University, and Niall C. Slowey from Texas A&M University—published the results of their research into the ice age phenomenon in Nature in March 2000, they added some important pieces to the puzzle. Henderson and Slowey's research on a marine sediment core led them to speculate that the alternating cycles of ice age and interglaciation (warm periods between ice ages) are driven by forces far removed from the huge ice sheets that cover the Northern Hemisphere every 100,000 years or so. By attributing those processes to events in the Southern Hemisphere, and possibly even the tropics, they challenged the longstanding hypothesis that processes in the Northern Hemisphere control ice age cycles.
Ice Ages and Insolation The concept of the ice age is relatively new. Until the early 1830s, geological formations that we now know were caused by glaciation—the action of glaciers as polar ice caps grow—were thought to be caused by the Great Flood described in the Bible. The Swiss geologist Jean de Charpentier made the first scientific case for glaciation in the early 1830s. Several great geologists of that time
became converts to this concept and, in 1842, the French mathematician Joseph Adhémar published Revolutions de la Mer, Deluges Periodics, a detailed hypothesis of ice ages. While his hypothesis was incorrect, his book excited other scientists who eagerly embarked on the process of discovering why ice ages occur. In particular, James Croll, a self-taught physicist from Scotland, made a significant contribution to unraveling the mystery. He developed several hypotheses in the 1870s, including the concept of astronomically based insolation, the intensity with which sunlight (solar radiation) hits Earth and variations in that intensity caused by latitudinal and cyclic changes as Earth journeys around the Sun. Based on his hypothesis of insolation, Croll predicted that ice ages would alternate between hemispheres because insolation would be greater in the Northern Hemisphere at one stage in the cycle, and in the next stage greater in the Southern Hemisphere.
A section of an ice core sample taken from the Greenland ice sheet. (Photograph by Roger Ressmeyer. CORBIS. Reproduced by permission.)
KEY TERMS
ALBEDO: Degree to which a surface reflects light; its reflectivity; a lighter-colored surface has a higher albedo (reflects more light and heat) than a dark surface (low albedo: absorbs more light and heat).
APHELION: Point in Earth's orbit where it is farthest from the Sun.
DEGLACIATION: Melting of the polar ice caps following an ice age, or glaciation.
ECCENTRICITY: Variation in the shape of Earth's orbit around the Sun.
ELLIPTICAL ORBIT: An orbit that is elongated: oval-shaped rather than circular.
EQUINOX: Day of the year when the Sun is directly over the equator and day and night are of equal length.
GLACIATION: Expansion of the polar ice caps; formation of massive glaciers and sheets of ice over much of the Northern Hemisphere landmass.
GULF STREAM: Part of the NADW that flows from the Caribbean to northwestern Europe and carries huge amounts of heat to this region.
INSOLATION: Rate at which solar radiation is delivered per unit of horizontal surface, in this instance, Earth's surface.
INTERGLACIATION: Warmer periods between ice ages when Earth's climate is basically as we know it.
NADW: North Atlantic Deep Water circulation; an enormous current of very cold, salty, dense ocean water that sinks and drives the conveyor belt of ocean currents.
PALEOCLIMATE: Prehistoric climates.
PENULTIMATE DEGLACIATION: Melting of the polar ice caps following the second-to-last, or next-to-last, ice age.
PERIHELION: Point in Earth's orbit where it is closest to the Sun.
PRECESSION: Alterations in the timing of the equinoxes on Earth. The major axis of Earth slowly rotates relative to the "fixed" stars; precession occurs when Earth's axis wobbles, much like the wobble of a top as it slows down.
SOLAR RADIATION: Energy radiated from the Sun.
SYNCHRONOUS: Occurring at the same time.
THERMOHALINE: Temperature and salt content of ocean water; the thermohaline circulation is the ocean conveyor belt that is driven by differences in seawater's temperature and salinity.
However, that prediction proved completely wrong; ice ages were shown to be synchronous, or occurring at the same time, and Croll's insolation hypothesis was tossed aside. Nevertheless, scientists still agree with several of Croll's ideas: that huge glaciers reflect rather than absorb the Sun's energy, further lowering Earth's temperature; that ocean currents influence global warming and cooling; and that precession and Earth's orbit influence climate.
As it travels around the Sun, Earth's axis is tilted, but not always in the same direction. The tilt moves slowly, from 22.1° to 24.5° and back over a period of 41,000 years. Earth's axis also wobbles, much like the wobble of a top as it slows down, which means the North Pole is not always tilted in the direction that it is today. It wobbles back and forth in a process called precession; this happens over a period of 25,800 years. Also, Earth's orbit changes gradually over a period of 100,000 years, moving from slightly elliptical, or oval, to almost circular. These patterns combined mean that every 22,000 years, the hemisphere that is pointed toward the Sun at Earth's closest approach to the Sun cycles between North and South, creating variations in insolation. The Milankovitch Hypothesis of Insolation Croll's insolation hypothesis was revived in 1938, when a Serbian engineer and professor teaching physics, mathematics, and astronomy at the University of Belgrade took up the ice age challenge. Milutin Milankovitch speculated that, because the Northern Hemisphere contained more than two-thirds of Earth's landmass, the effect of insolation on those landmasses at the mid-latitudes (in line with Greenland's southern tip) controlled the ice age phenomenon in both hemispheres at the same time.
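For readers who want a feel for how these overlapping periods interact, the short sketch below is a purely illustrative toy model, not Milankovitch's actual calculation: it simply sums three sine waves with the approximate periods quoted above (roughly 100,000 years for orbital shape, 41,000 years for tilt, and about 23,000 years for the precession-driven insolation cycle) to show how the combined curve drifts in and out of phase. The amplitudes and phases are arbitrary.

# Toy illustration only: sum of three sine waves with approximate Milankovitch
# periods, to show how the cycles drift in and out of phase over time.
# Amplitudes and phases are arbitrary; this is not a real insolation calculation.
import math

PERIODS_YEARS = {"eccentricity": 100_000, "obliquity": 41_000, "precession": 23_000}

def toy_forcing(years_before_present: float) -> float:
    """Sum of unit-amplitude sine waves, one per cycle."""
    return sum(math.sin(2 * math.pi * years_before_present / p)
               for p in PERIODS_YEARS.values())

# Print the combined "forcing" every 10,000 years over the last 200,000 years.
for kyr in range(0, 201, 10):
    print(f"{kyr:3d} kyr before present: combined signal = {toy_forcing(kyr * 1000):+.2f}")

Peaks in such a combined curve line up only occasionally, which is one intuitive way to see why a confluence of eccentricity, tilt, and precession is singled out elsewhere in this debate as a trigger for major glaciations.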
Using important new calculations of Earth’s orbit made by the German scientist Ludwig Pilgrim in 1904, Milankovitch accurately described a dominating insolation cycle of 23,000 years, predicted that ice ages would be most severe when insolation fell below a certain threshold, and estimated dates of the ice ages. However, the invention of radiocarbon dating allowed precise age estimates that conflicted with the timing of the ice ages in Milankovitch’s detailed calculations. Although the idea of Northern Hemisphere control remained, the astronomical hypothesis of insolation as the force that drove those controls was again abandoned. In 2000, Richard A. Muller, professor in the physics department at the University of California, Berkeley, pointed out the daunting nature of Milankovitch’s calculations. “Today these calculations are an interesting task for an undergraduate to do over the course of a summer using a desktop computer. But Milankovitch had to do all the calculations by hand, and it took him many years.” He remarks that it was unfair to toss out Milankovitch’s concepts: “Do we throw out the astronomical theory of the seasons simply because the first day of spring is not always spring-like? The warm
weather of spring can be delayed by a month, or it can come early by a month; the important fact is that it always comes. We demand too much of a hypothesis or theory if we require it to predict all the details in addition to the major behavior." The insolation hypothesis was again revived when scientists began to confirm a regularity in ice-age cycles. In a groundbreaking paper published in 1970, researchers Wally Broecker and Jan van Donk of Columbia University's Lamont Geological Observatory noted that an analysis of core samples taken from sea-floor sediment showed for the first time that a repeating 100,000-year cycle dominated ice age cycles. This frequency also appeared in the insolation hypothesis. In 1976, studies of seafloor sediment samples made by James D. Hays of Columbia University, John Imbrie of Brown University, and Nicholas Shackleton of the University of Cambridge also showed evidence of 41,000- and 23,000-year cycles, and these cycles were likewise evident in spectral analyses of insolation. New Dating Technique Builds a Case for the Southern Hemisphere The insolation hypothesis still contained serious problems in relation to Northern Hemispheric processes driving ice age cycles. Of major concern was its implication that Northern Hemispheric deglaciation (the receding of the polar caps) should correspond with Northern Hemisphere June insolation. It did not. Astronomical studies showed that the northern June insolation during the penultimate, or next-to-last, deglaciation peaked 127,000 years ago, while scientific studies were suggesting deglaciation peaked much earlier—perhaps as much as 15,000 years earlier.
One way to confirm the growing evidence for early deglaciation was by examining oxygen isotopes contained in marine sediment. Using a new dating method based on the radioactive decay of uranium to thorium, called the U-Th isochron technique, Henderson and Slowey studied marine sediments deposited during the penultimate deglaciation on the sea floor in the Bahamas. For the first time ever, the age of marine carbonate sediments more than 30,000 years old could be accurately determined. "It's like leaves falling on a forest floor," said Slowey. "If you were to figure out which season is which, you can look at the leaves to get a clue of what the seasons were like."

Kurt Sternlof, writing for Earth Institute News at the Columbia Earth Institute in New York City in 1999, explained that these marine sediments contain a record of "the total volume of global ice through time in the changing ratio of oxygen isotopes captured as they accumulated. Peaks in global ice volume correspond to ice ages; valleys correspond to interglacial periods." Thus, the U-Th technique allowed Henderson and Slowey to establish that the midpoint of the penultimate deglaciation and the end of the preceding ice age occurred 135,200 years ago, give or take 2,500 years. This was 8,000 years too early to have been affected by Northern Hemisphere peak insolation that occurred 127,000 years ago. This timing assessment was supported by data from other researchers studying ice cores and sediment from cave floors.
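The principle behind uranium-thorium dating can be sketched with a simplified age equation. In its most stripped-down form (assuming the carbonate started with no thorium-230 and that its uranium-234 stays in equilibrium with uranium-238, simplifications that real isochron work corrects for), the measured 230Th/234U activity ratio grows toward 1 as thorium-230 is produced, and the age follows from the roughly 75,000-year half-life of thorium-230. The function below is an illustrative sketch of that idealized relationship, not the isochron method Henderson and Slowey actually applied.

# Simplified uranium-thorium age equation (illustrative only).
# Assumes no initial thorium-230 and uranium-234 in secular equilibrium with
# uranium-238; real U-Th isochron dating corrects for both.
import math

TH230_HALF_LIFE_YEARS = 75_000            # approximate half-life of thorium-230
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE_YEARS

def uth_age(th230_u234_activity_ratio: float) -> float:
    """Age in years implied by a measured 230Th/234U activity ratio (0 < ratio < 1)."""
    return -math.log(1.0 - th230_u234_activity_ratio) / LAMBDA_230

# A ratio of about 0.71 corresponds to roughly 135,000 years, the midpoint
# Henderson and Slowey report for the penultimate deglaciation.
print(f"{uth_age(0.713):,.0f} years")

Because the ratio changes measurably over this timescale, carbonate sediments well beyond the roughly 40,000-year reach of radiocarbon dating can still be dated, which is what made the penultimate deglaciation accessible to this technique.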
Calculations by astronomers indicated that peak insolation in the Southern Hemisphere occurred 138,000 years ago, within the same timeframe that Henderson and Slowey's ocean sediment studies indicated for the end of the second-to-last ice age. Sternlof quotes Henderson as saying: "In our paper we demonstrate that, based on a simple argument of timing, the traditional model of ice ages as forced by climate amplifying mechanisms in the Northern Hemisphere cannot be correct." Although Henderson and Slowey assert that the general association between orbital insolation and glacial timeframes remains obvious, they also assert that orbital insolation does not govern the ice-age phenomenon. In his article "Ice Cycle," in Nature News Service (2001), John Whitfield explains that variations in insolation are not great enough by themselves to drastically alter Earth's climate. By somehow interacting with our oceans and atmosphere, however, insolation causes huge changes in average temperature and therefore Earth's climate. Digging Back into Climate History To understand what causes changes in Earth's climate, scientists must discover the history of those changes. As Whitfield puts it, "The best way to go back is to dig a hole." Simplistically, digging holes involves removing long cores of material from Earth's crust. These cylindrical cores reveal layers of climate "history," just as the rings in a tree trunk reveal its stages of growth. Lengthy cores have been extracted from Greenland and the Antarctic, where ice caps are several miles thick, from marine sediment on ocean floors, and from the floor of a cave called Devil's Hole in Nevada, where the terrestrial climate record is wonderfully preserved.
In their Nature article, Henderson and Slowey explain how their study of marine sediment cores produced evidence that atmospheric CO2 (carbon dioxide) levels influence the ice ages. The scientists write: "The change in global atmospheric CO2 concentration closely follows a hydrogen isotope, δD, and is centered at 138,000 to 139,000 years ago, coincident with the peak in Southern Hemisphere insolation. This relationship suggests that the change in CO2 is driven by a process in the Southern Hemisphere. This change in CO2 may initiate the processes that eventually lead to the collapse of the Northern Hemisphere ice sheets. Southern Hemisphere mechanisms for the ice-age cycle are also suggested by the pattern of phasing between southern ocean sea-surface temperature changes and oxygen isotope δ18O, and are consistent with possible effects of sea-ice variability."
A scientist takes a core sample of ice in Antarctica. (Photograph by Morton Beebe, S.F. CORBIS. Reproduced by permission.)
Significantly, they point out that ice-flow age models from samples taken from the Vostok ice core also typically place the penultimate deglaciation earlier than the 127,000-year mark. The Vostok ice core is the longest ice core ever obtained. Extracted from the Antarctic by a U.S.-Russian-French science team at Russia's Vostok research station, the coldest place on Earth, the core measures 2 mi (3.2 km) long and is composed of cylinder-shaped sections of ice deposits containing a record of snowfall, atmospheric chemicals, dust, and air bubbles. Previous cores taken from Antarctica and Greenland dated back only 150,000 years, revealing two ice ages. The Vostok core, which took from 1992 to 1998 to extract and contains ice samples dating back 420,000 years, shows Earth has undergone four ice ages. Significant to the search for driving forces behind the ice ages, however, was that, like the marine sediments studied by Henderson and Slowey, the Vostok core allowed researchers to determine fluctuations in atmospheric CO2 levels throughout the ages.

Significance of Atmospheric CO2 Changes CO2 is one of the most important greenhouse gases contributing to global warming. Microscopic oceanic plants and algae constantly remove CO2 that is absorbed readily from the air by cold ocean waters. When these life-forms die, they sink to the deep ocean floors, taking their store of CO2 with them. The amount of CO2 returned to the atmosphere depends upon deep waters circulating to the ocean's surface and releasing their stores of CO2. Only at the end of the twentieth century did scientists discover that most deep-ocean waters do not return to the surface at low latitudes as was previously believed, but to the surface around Antarctica. Therefore, as the waters of Antarctica freeze and expand the southern polar ice cap, CO2 within those waters becomes imprisoned in the massive depths of ice. The CO2 lies trapped in the ice for several thousand years while Earth continues the journey through its orbital cycles. Henderson speculates that the "ultimate cause" of the penultimate deglaciation was the intensified amount of solar energy in the Southern Hemisphere that began to melt the Antarctic ice sheets. As carbon dioxide was released and entered the atmosphere, global temperatures began to rise, initiating deglaciation and ultimately the collapse of the Northern Hemisphere ice sheets.
The connection between CO2 and the Southern Hemisphere was further supported by another Nature article (March 9, 2000) by Britton Stephens, a University of Colorado researcher at the National Oceanic and Atmospheric Administration's Climate Monitoring and Diagnostics Laboratory in Boulder, Colorado, and Ralph Keeling of the Scripps Institution of Oceanography, University of California, San Diego. Stephens and Keeling found that atmospheric CO2 levels during an ice age fall by about 30%, from approximately 0.03 to 0.02% (roughly 300 to 200 parts per million), keeping Earth cool by reducing the greenhouse effect. Using a computer model, Stephens and Keeling show how large ice sheets in the Antarctic, such as those that exist during an ice age, could prevent the usual release of carbon dioxide from the sea, thereby lowering atmospheric CO2 concentrations and causing global cooling. This process, they think, is "suggestive" of Southern Hemisphere forces lying behind climate changes. To Be Continued Although much is known about what drives the ice age cycle, much remains to be discovered. Even as Henderson and Slowey explore the CO2 hypothesis, they explore another involving the tropical ocean-atmosphere system, a system that is also consistent with the timescale of the penultimate deglaciation. "Recent modeling," they comment, "suggests that increasing insolation leads to a larger than average number of El Niño/Southern Oscillation (ENSO) warm events, starting at 137,000 years ago."
In his article for Nature News Service, John Whitfield writes that we expect too much if we expect one big idea to encompass the forces behind ice age cycles and changing climates. He closes his discussion with a quote from Henderson: “These cycles aren’t controlled by one neat switch.”—MARIE L. THOMPSON
Viewpoint: No, the ice age cycles of the Northern Hemisphere are not driven by processes in the Southern Hemisphere; the Milankovitch cycle and disruption of the ocean's thermohaline circulation are the primary initiators.
In the past, scientists suggested that changes in the Southern Hemisphere—particularly those relating to ENSO (El Niño/Southern Oscillation) and glaciation trends in the Antarctic—were ultimately responsible for the advent of ice ages in the Northern Hemisphere.
The latest research, arising largely from studies of current conditions, has pretty much exonerated the Southern Hemisphere of accusations of instigating glaciation in the north. The culprits responsible for northern ice ages are now believed to be cosmic or localized, but definitely not southern. Recent research points to two processes as most likely to generate ice ages in the Northern Hemisphere: 1) changes in Earth's tilt and/or orbit, and 2) climate change affecting the North Atlantic. This essay explores these hypotheses—which have widespread support in the scientific community—to show that they, not conditions south of the equator, are the primary triggers of northern glaciation.
Changes in Earth’s Tilt and/or Orbit The tilt of its rotational axis and the path of its orbit around the Sun have profound effects on Earth. Changes in either the axial tilt or the shape of the orbit cause enormous alterations in climate, which are sufficient to trigger ice ages. Several interrelated factors are involved.
Eccentricity. The planet Earth revolves around the Sun in an orbit that is not perfectly circular, but elliptical. The exact shape of this elliptical orbit varies by 1 to 5% over time. This variation in Earth's orbit is known as eccentricity. The eccentricity of Earth's orbit affects the amount of sunlight hitting different parts of the planet's surface, especially during the orbit's aphelion and perihelion, the points in Earth's orbit at which it is farthest from the Sun and closest to the Sun, respectively. At times of maximum eccentricity—when Earth's orbit is most elliptical—summer and winter temperatures in both the Northern and Southern Hemispheres are extreme. Scientists studying Earth's eccentricity have found that, though the degree of change in solar radiation striking Earth during different orbital eccentricities seems rather small, only about 0.2%, this variation is sufficient to cause significant expansion or melting of polar ice. Earth's orbit varies periodically, with its eccentricity swinging from maximum to minimum about every 100,000 years. Axial Tilt. The rotational axis running through a globe from the North to the South Pole is tilted off center. Of course, a modern globe shows today's planetary tilt, which is 23.5°, but this degree of tilt has varied substantially over geologic time. Every 41,000 years or so, Earth's axis shifts, and its axial tilt varies, usually between 21.6 and 24.5°. Changes in the tilt of Earth's rotational axis affect the planet's climate in the same way as changes in its eccentricity. As Earth's tilt changes, parts of the planet receive different amounts of solar radiation.
The changes in the amount of sunlight striking the planet's surface are most extreme at high latitudes. When axial shifts turn the poles more acutely away from the Sun, polar regions may get up to 15% less solar radiation than they do today. The lightless polar winter is also much longer. The extremely long, dark, and severely cold winters at the poles result in increased glaciation. Precession of the Equinoxes. Changes in Earth's tilt are closely related to altered timing of the equinoxes, called precession. Earth's equinoxes occur twice a year, when the Sun is directly over the equator, and night and day have equal length. Today, the equinoxes occur on or about March 21 (the vernal equinox, or the first day of spring) and on or about September 21 (the autumnal equinox, or the first day of fall). The timing of the equinox depends on Earth's rotational axis. If the axial tilt changes, so does the time of the equinoxes. As Earth spins on its axis, the gravitational pull of the Sun and Moon may cause it to "wobble." Even slight wobbling produces changes in precession. Earth's axial wobbles follow a pattern that varies over a period of about 26,000 years. Changes in precession caused by wobbling affect the timing and distribution of solar radiation on Earth's surface, again with polar regions most affected. For example, when aphelion occurs in January, winter in the Northern Hemisphere and summer in the Southern Hemisphere are colder. When a winter aphelion is coupled with changes in precession that delay, and thus reduce, solar radiation striking northern regions, glaciation increases. Longer, colder winters in the Northern Hemisphere lead almost inevitably to accumulation of ice and the onset of an ice age. Wobbling on its axis also causes greater or lesser changes in Earth's elliptical orbit. Orbital changes alter the timing of the aphelion and perihelion. The combination of altered precession and changes in the timing of aphelion and perihelion results in significant climate changes on Earth. The tilt of Earth's rotational axis is closely related to precession, and precession is closely related to eccentricity. They are all closely related to climate changes on Earth—particularly northern ice ages. These factors were described by Serbian scientist Milutin Milankovitch in 1912, and together they are known as the Milankovitch cycle. Each factor may be capable of initiating northern glaciation on its own and in its own time frame (100,000 years; 41,000 years). A confluence of these factors—maximum eccentricity (when Earth's orbit is most elliptical), extreme axial tilt (with the North Pole pointed most acutely away from the Sun), and precession, which delays and reduces solar radiation at high northern latitudes—is thought to lead to a major ice age in the Northern Hemisphere.
The "Cosmic" Causation of Northern Ice Ages On October 17, 2001, an international team of scientists led by New Zealand researcher Tim Naish, part of the Cape Roberts Project, published the results of their study of Antarctic seafloor sediment cores in the British journal Nature. The data show that changes in polar regions, particularly the advance and retreat of glaciers, follow variations in Earth's orbit, tilt, and precession as described in the Milankovitch cycle. The sea floor sediment samples indicate that Antarctic glaciers advanced and retreated at regular intervals during a 400,000-year period, and the cycles of glaciation and retreat matched those predicted by Milankovitch, with increased glaciation at intervals of 100,000 and 40,000 years. Ongoing research is fully expected to confirm these findings for Northern Hemisphere glaciation as well. The Cape Roberts Project study revealed surprising, and alarming, facts about the rapidity with which ice ages can occur. The seafloor cores showed that global climate changes—global warming or global cooling—can initiate a transition from a warm climate regime to intense glaciation in as few as 100 years. The North Atlantic Deep Water Circulation A Northern Hemisphere system affected by and thus leading to this transition is the North Atlantic Deep Water (NADW) circulation belt. The NADW is an enormous oceanic current that moves gargantuan quantities of water (estimated at 16 times more than the water in all the world's rivers) through the North Atlantic. The NADW flows along the equator and curls up the coast of North America carrying equator-warmed water toward northwestern Europe.
The engine that drives the NADW is located near Iceland in the North Atlantic. In this region, subsurface ocean water is extremely cold. This very cold water holds a lot of salt, so the water here is highly saline. Very salty water is also very dense—heavier than less salty, warmer water. This cold, salty, dense North Atlantic water sinks and flows southward toward the equator. As it heads south, the current warms. As it warms, it loses salt. As it loses salt, it becomes less dense. The warmer, less-dense current begins to rise toward the surface. At the equator, it is very warm and at its lowest density. This warm, low-saline current then swings through the Caribbean and up again toward Europe as the heat-transporting Gulf Stream. As it nears Greenland, the current is cooled by Arctic air masses, becomes colder, saltier, and denser, and then sinks and flows southward, completing the cycle of this perpetual oceanic conveyor belt.

The entire NADW system depends on the creation and sinking of the dense, cold, saline current in the North Atlantic—thermohaline circulation. Thermohaline refers to the variations in temperature ("thermo") and salinity ("haline") that keep the oceanic conveyor belt going.

The NADW and Global Warming The warmer surface water is and the warmer the air is above it, the greater the amount of water that will evaporate. There is no question that the global climate is currently warming. In 2000, scientists with the IPCC (Intergovernmental Panel on Climate Change) revised the agency's estimate of the rate and degree of global warming we are currently experiencing. They now believe that the global temperature is rising as much as 6.3 to 8.1°F (3.5 to 4.5°C) per century. There is evidence to support the hypothesis that in the past, global warming affected the NADW and led to periods of glaciation. Could this be happening now?
As air temperature, and to a lesser extent sea surface temperature, increases, evaporation over the oceans also increases. Greater amounts of water vapor rise into the air and accumulate in clouds. Eventually the water vapor in clouds condenses and falls as rain. Much of this increased rainfall occurs over the North Atlantic. Rain is fresh water, and increased rainfall over the pivotal site of the sinking deep-water current in the North Atlantic dilutes the ocean water, reducing its salinity and its density. The less-dense, less-saline water is less likely to sink to the ocean depths, where it generates and drives the NADW. As air temperatures increase, Arctic sea ice and the immense Greenland glaciers melt more rapidly. Ice melt is also fresh water. Research has documented considerably increased input of sea ice and glacial meltwater into the North Atlantic. This additional injection of fresh water into the NADW further reduces its salinity and density, hinders its sinking, and weakens the flow of the entire current. Although the surface waters of the North Atlantic that sink to drive the engine of the NADW have only 7% more salt than waters at similar latitudes in the Pacific, this is just sufficient to reach the threshold that causes these waters to sink and drive the Atlantic conveyor belt. However, even small reductions in salinity, or a few degrees' increase in sea surface temperature, would prevent the North Atlantic current from reaching the sinking threshold. Scientists believe that the degree of freshwater input from increased precipitation and glacier melt, coupled with small but documented increases in sea surface temperature, is beginning to push the NADW toward collapse.
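How little it takes to stall the sinking can be illustrated with a simplified linear equation of state for seawater, in which density decreases with temperature and increases with salinity. The coefficients below are rough textbook values and the scenario numbers are invented for illustration; this is a sketch of the general principle, not a model of the actual North Atlantic.

# Illustrative only: a linear equation of state for seawater density,
# rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0)).
# Coefficients are rough typical values; scenario numbers are made up.

RHO0 = 1027.0      # kg/m^3, reference density of cold seawater
ALPHA = 2.0e-4     # 1/degC, thermal expansion coefficient (approximate)
BETA = 7.6e-4      # 1/psu, haline contraction coefficient (approximate)
T0, S0 = 5.0, 35.0 # reference temperature (degC) and salinity (psu)

def density(temp_c: float, salinity_psu: float) -> float:
    """Approximate seawater density under a linear equation of state."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

today = density(5.0, 35.0)
fresher_and_warmer = density(6.0, 34.7)   # hypothetical: 1 degC warmer, 0.3 psu fresher

print(f"reference surface water: {today:.2f} kg/m^3")
print(f"slightly warmer, fresher water: {fresher_and_warmer:.2f} kg/m^3")
print(f"density lost: {today - fresher_and_warmer:.2f} kg/m^3")

A fraction of a kilogram per cubic meter sounds trivial, but whether surface water sinks depends on whether it is denser than the water beneath it, so changes of roughly this size are the kind described above as pushing the current toward its threshold.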
The engine that drives the NADW is located near Iceland in the North Atlantic. In this region, subsurface ocean water is extremely cold. This very cold water holds a lot of salt, so the water here is highly saline. Very salty water is also very dense—heavier than less salty, warmer water. This cold, salty, dense North Atlantic water sinks and flows southward toward the equator. As it heads south, the current warms. As it warms, it loses salt. As it loses salt, it becomes less dense. The warmer, less-dense current begins to rise toward the surface. At the equator, it is very warm and at its lowest density. This warm, low-saline current then swings through the Caribbean and up again toward Europe as the heat-transporting Gulf Stream. As it nears Greenland, the current is cooled by Arctic air masses, becomes colder, saltier, and denser, and then sinks and flows southward, completing the cycle of this perpetual oceanic conveyor belt.
The entire NADW system depends on the creation and sinking of the dense, cold, saline, current in the North Atlantic—thermohaline circulation. Thermohaline refers to the variations in temperature (“thermo”) and salinity (“haline”) that keep the oceanic conveyor belt going.
Although there is no astronomical information indicating that we are in for a Milankovitch cycle–induced ice age, abundant data exists to show that warming of Earth’s climate may plunge us into a deep freeze. And this is likely to happen far more quickly than anyone wants to believe. No one questions the fact that the world climate is warming. A vast majority of scientists around the world believe this is due to the huge input of greenhouse gases from human combustion of fossil fuels. The greenhouse effect—the accumulation of heat-trapping gases in the atmosphere—has enabled life to exist on Earth. Without naturally occurring greenhouse gases in the air, Earth would be a frozen, likely lifeless, planet. However, “too much of a good thing” can rapidly devastate life. Important ice-core research shows that, throughout the past 420,000 years, the highest concentrations of significant greenhouse gases to occur in the atmosphere were carbon dioxide at 300 parts per million (ppm) and methane at 770 parts per billion (ppb). As of September 2001, the concentrations of these gases in the atmosphere were carbon dioxide at 369.4 ppm and methane at 1,839 ppb.
The greenhouse effect is not controversial. The above figures indicate that humans are releasing large amounts of greenhouse gases into the air. The global climate is predicted to warm as much as 8°F (4.5°C) in the next century. This warming will affect the thermohaline circulation in the North Atlantic; in fact, it is already doing so. There is clear evidence indicating that, as global warming worsens, weakening or collapse of the NADW will lead to the onset of another ice age. Dr. Wallace Broecker, a leading climate expert, warns that: “Were [another ice age] to happen a century from now, at a time when we struggle to produce enough food to nourish the projected population of 12 to 18 billion, the consequences could be devastating.”
—Natalie Goldstein
Collapse of the NADW Dr. Wallace S. Broecker from the Lamont-Doherty Earth Observation at Columbia University, one of the world’s most eminent climate change experts, calls the NADW conveyor belt “the Achilles’ heel of the climate system.” In a paper published in the journal Science on November 28, 2001, Broecker described paleoclimate data that show that the NADW has collapsed several times in the past, initiating ice ages in the Northern Hemisphere. The evidence points to abrupt changes in the climate regime, perhaps a few decades or even a few years, after weakening or collapse of the NADW.
The part of the NADW that carries warm water northward from the Caribbean is the Gulf Stream. The Gulf Stream transports heat to northwestern Europe, which keeps the region considerably warmer than its latitude would otherwise permit (Britain has the same latitude as central Quebec, Canada). As the NADW weakens, less heat is carried to northwestern Europe. If the NADW collapses, northwestern Europe will begin to freeze over. If the NADW weakens and collapses and the climate of northwestern Europe cools considerably, snow and ice that accumulate there during the winter will, in a relatively short time,
remain on the ground during the region’s colder summers. The collapse of the NADW would, in fact, lower overall temperatures in northwestern Europe by 20°F (11°C) or more, giving relatively balmy Dublin, Ireland, a frigid climate identical to that of Spitsbergen, Norway, an island about 600 mi (965 km) north of the Arctic Circle. Once winter snows fail to melt, and snow and ice accumulate on the ground year-round, the onset of an ice age is quite rapid. Because the albedo, or reflectivity, of ice is far greater than that of bare ground, whose dark coloration absorbs more solar radiation, the accumulating snow and ice create a brilliant white cover that reflects more heat away from Earth’s surface. With less heat absorbed, the surface temperature falls further, so more snow and ice accumulate, and so on. Paleoclimate data, particularly those obtained from Greenland ice cores, indicate that this process has occurred before and that it occurs quickly. There is abundant evidence documenting this process as a primary creator of Northern Hemisphere ice ages. There is no question that these two processes—the Milankovitch cycle and disruption of the ocean’s thermohaline circulation—are the primary initiators of Northern Hemisphere ice ages. —NATALIE GOLDSTEIN
Further Reading Barber, D. C., et al. “Forcing of the Cold Event of 8,200 Years Ago by Catastrophic Drainage of the Laurentide Lakes.” Nature 400 (1999): 344–48. Berger, A. L. Milankovitch and Climate: Understanding the Response to Astronomical Forcing. Boston: D. Reidel Pub. Co., 1984. Broecker, W. S., and G. M. Henderson. “The Sequence of Events Surrounding Termination II and Their Implications for the Cause of Glacial-Interglacial CO2 Change.” Paleoceanography 13 (1999): 352–64. Cronin, Thomas M. Principles of Paleoclimatology. New York: Columbia University Press, 1999. Erickson, Jon. Ice Ages: Past and Future. New York: McGraw-Hill, 1990. Hays, J. D., J. Imbrie, and N. J. Shackleton. “Variations in the Earth’s Orbit: Pacemaker of the Ice Ages.” Science 194 (1976): 1,121–32. Henderson, Gideon M., and Niall C. Slowey. “Evidence against Northern-Hemisphere Forcing of the Penultimate Deglaciation from U-Th Dating.” Nature 404 (2000): 61–6.
“Ice Ages in Sync.” Scientific American (February 15, 2001). Imbrie, John, et al. Ice Ages: Solving the Mystery. Cambridge: Harvard University Press, 1986. Muller, Richard A., and Gordon MacDonald. Ice Ages and Astronomical Causes: Data, Spectral Analysis, and Mechanisms. New York: Springer-Verlag, 2000. Petit, J. R., et al. “Climate and Atmospheric History of the Past 420,000 Years from the Vostok Ice Core, Antarctica.” Nature 399 (1999): 429–36. Rahmstorf, Stefan. “Risk of Sea-Change in the Atlantic.” Nature 388 (1997): 825–26. Schiller, Andreas. “The Stability of the Thermohaline Circulation in a Coupled Ocean-Atmosphere General Circulation Model.” CSIRO Journal. Australia (1997). Slowey, N. C., G. M. Henderson, and W. B. Curry. “Direct U-Th Dating of Marine Sediments from the Two Most Recent Interglacial Periods.” Nature 383 (1996): 242–44. Wilson, R. C. L. The Great Ice Age: Climate Change and Life. New York: Routledge, 1999.
ENGINEERING
Are arthroplasties (orthopedic implants) best anchored to the contiguous bone using acrylic bone cement?
Viewpoint: Yes, acrylic bone cement has been used successfully in arthroplasties for several decades.
Viewpoint: No, newer materials such as bioceramics will eventually offer a safer and more durable anchor for arthroplasties.
The total hip replacement (THR) operation, which can restore mobility to people with damaged or arthritic joints, is one of the glowing success stories in the history of modern medicine. Rather than repairing or reconstructing the ball-and-cup joint of the hip, either of which is a very difficult operation, the entire unit is replaced with two artificial prostheses. This procedure has also been successfully applied to other joints such as fingers and knees, and the operation is without complications in more than 95% of cases. However, there have been doubts raised as to the long-term durability of the most commonly used components of THR operations, in particular the cement that glues the artificial prosthesis to the bone. Wear and tear, and a general degradation of the cement over time, often result in hip replacement failure. More worrying are the additional complications that can arise from cement debris generated as the bond between the bone and the prosthesis breaks down. Particles of cement debris can damage surrounding tissue or enter the bloodstream, leading to serious problems. The hip consists of two main parts: a ball (femoral head) at the top of the thighbone (femur), which fits into a rounded cup (acetabulum) in the pelvis. Ligaments connect the ball to the cup, and a covering of cartilage cushions the bones, allowing for easy motion. There is also a membrane which creates a small amount of fluid to reduce friction in the joint, and allow painless motion. The hip is one of the body’s largest weight-bearing joints, and because of the heavy load it bears, the hip can become badly worn. The cartilage is particularly vulnerable to wear, and without this layer of cushioning, the bones of the joint rub together and become irregular, causing stiffness and pain. In a total hip replacement operation, surgeons replace the head of the thigh bone with an artificial ball, and the worn socket with an artificial cup. Such operations began to be regularly performed in the 1950s, when medical technology and engineering allowed the use of safe and durable components. Although THR is considered a miracle of modern medicine, the wealth of data collected on the procedure has shown that there is a lack of durability in the components after about 10 years. At first this was not seen as a great problem, as the vast majority of patients were elderly arthritis sufferers, and 10 years of service from a new hip was certainly better than nothing. However, with the increasing longevity of Western populations and a corresponding rise in the physical expectations for life and leisure, the 10-year life span of hip prostheses is now seen as too short for vast numbers of patients. The main culprit in the decay of the artificial hip appears to be the cement used to bond the replacement parts to the bone. Over time, tiny cracks in the cement weaken the bond between bone and prosthesis, and the parts
become loose. As a result, the patient can suffer discomfort, pain, and a range of dangerous complications arising from the movement of particles of cement within the body, often causing heart problems and other life-threatening conditions. There is obviously a need for new solutions, but opinion is divided as to the best way to improve the life span of artificial hips. Some doctors advocate using artificial components that can be fixed in place without cement. By encouraging the surrounding bone to grow into the replacement parts, a more durable bond can be created. However, the drawback of this solution is that the bone must be given time to grow into its bond with the prosthesis, and thus recovery times for such operations are measured in months, rather than days. Although it does seem that the bonds created in this process are more durable, there is a lack of long-term data, since the earliest operations using this method were carried out in the 1980s. Another proposed solution is to cement only one part of the replacement hip, the femoral head (ball), while using the cementless bone-growth attachment for the acetabular component (cup). As the breakdown usually occurs first in the acetabular prosthesis, this method strengthens the weakest part of the artificial joint. However, some critics argue that such hybrid total hip replacements would combine drawbacks of both methods: a long recovery time and a projected increase in the life span of the prosthesis that is small and as yet unverified. The third solution is to attempt to improve the materials in cemented artificial hips, and thereby increase their durability. New cements, adjustments to the way the cement is mixed, and additives are all being tested. As the process by which the cement breaks down is not fully understood, it is hoped that research will be able to shed light on the process, and allow for minor changes in the current cement to solve the problems. Since cemented prostheses allow for quick recovery times and a stronger initial bond, if the lifetime of the cement could be improved, such a method would have many advantages. However, clinical trials of new ingredients or methods will have to wait many years before the full impact of this solution can be known. The total hip replacement is the best way to restore mobility to damaged joints, but the question remains as to which method will serve the patient best. The long- and short-term needs of the individual must be considered, and the pros and cons of each option weighed. The increasing life spans of patients, as well as a rising culture of continuing physical leisure activities into later life, make such decisions difficult. —DAVID TULLOCH
Viewpoint: Yes, acrylic bone cement has been used successfully in arthroplasties for several decades.
Arthroplasties are orthopedic implants used to relieve pain and restore range of motion by realigning or reconstructing a joint. Arthroplasty surgery can restore function in a hip or a knee when disabling joint stiffness and pain are produced by severe arthritis. Most total joint replacement surgery is performed using acrylic bone cement to mechanically hold the parts in place. Patients receiving new hips can stand and walk without support almost immediately after arthroplasty surgery when this type of bone cement is used. The year 2001 was the 40th anniversary of the clinical use of acrylic bone cement in total joint replacement. The particular acrylic used, polymethylmethacrylate (PMMA), was first introduced for use in low-friction hip replacement in the 1960s by the British orthopedic surgeon, Sir John Charnley (1911–1982). By 2001, total hip replacement was described by the American Academy of Orthopaedic Surgeons
(AAOS) as an orthopedic success story, and the use of acrylic bone cement has done much to make it so. More than 500,000 hip replacements are performed worldwide every year, most of them using PMMA bone cement. History of Joint Replacement Arthroplasty surgery was developed to relieve the pain and immobility from arthritic hips, although it has since been adapted to other joints, particularly knees. The hip is described as a ball-and-socket joint because the head of the femur (thighbone) is spherical and it moves inside the cuplike acetabulum (socket) of the pelvis. Both the ball and the socket become impaired with arthritis, but in most early arthroplasty operations only the ball was treated or replaced.
Hip replacement was first attempted in 1891. There is not much information on the outcome, but through the years the disabling nature of arthritis has led to a great deal of innovative surgery to relieve the intense pain and immobility it causes. In 1925, Marius Nygaard Smith-Petersen (1886–1953), a surgeon in Boston, tried covering the ball at the end of the femur with a molded piece of glass. The glass did provide the necessary smooth surface on the ball to give some relief, but was not durable.
KEY TERMS
ACETABULUM: Hemispheric cuplike socket on each side of the pelvis, into which the head of the femur fits.
ARTHROPLASTY: From the Greek words arthron (joint) and plassein (to form, to shape, or to create); any surgical procedure to rebuild a joint, especially the hip or knee, usually by replacing the natural joint with a prosthesis.
BIOACTIVITY: Property of a chemical substance to induce a usually beneficial effect on living tissue. An artificial joint component is said to be bioactive if it is osteogenic, i.e., if it stimulates the growth of bone to form a biochemical bond with the implant.
CAPUT FEMORIS: “Head of the femur”; the hemispheric protuberance that fits into the acetabulum to create the ball-and-socket joint of the hip.
COLLUM FEMORIS: “Neck of the femur”; the short, stout rod of bone between the head and shaft of the femur, projecting from the shaft at an angle of about 125°.
MONOMER: Single chemical unit that combines over and over in large numbers to form a polymer, or plastic.
NANO: Unit of size referring to one billionth, as in 1 nm (nanometer) is one-billionth of a meter.
POLYMER: Compound made of a great number of repeating smaller units called monomers; also commonly called plastic.
The process, called “molded arthroplasty,” was also tried with stainless steel. In 1938, two brothers, Drs. Jean and Robert Judet of Paris, tried a new commercial acrylic plastic. By the 1940s, molded arthroplasty was the best hope for sufferers.
In the 1950s, Frederick R. Thompson of New York, and Austin T. Moore of South Carolina, independently developed replacements for the ball at the end of the femur. The end of the
femur was removed and a metal stem with an attached metal ball was placed into the marrow cavity of the femur. The acetabulum was not replaced. The procedure was called hemiarthroplasty. The results were not entirely successful, since the socket problems remained and the implant was not secured to the bone. In 1951, Dr. Edward J. Haboush of New York City tried anchoring a prosthesis with an acrylic dental cement. In England, Dr. John Charnley was working on improving the treatment, and in 1958 he attempted to replace both the socket and the ball, using Teflon for the acetabulum. When that was not satisfactory, he tried another plastic, polyethylene, which worked very well. Charnley used the acrylic bone cement polymethylmethacrylate (PMMA) to secure both the socket and the ball prostheses. Dr. Charnley
In 1936, a cobalt-chromium alloy was used in orthopedics and provided dramatic improvement. It is still used today in various prostheses, for it is both very strong and resistant to corrosion. Although mold arthroplasty and the new alloy were improvements, the resurfacing of the ball was not enough to give predictable relief. In addition, many patients had limited movement, so the search for more improvements continued.
POLYMERIZATION: Process by which monomers join to make polymers. Polymerization occurs in two ways: addition polymerization, where monomers combine directly (usually the reaction needs a catalyst to occur); and condensation, where a small byproduct molecule is created as two monomers bond.
REVISION ARTHROPLASTY: Second joint replacement operation required because of the failure of the first.
SHAFT: Elongated cylindrical part of a bone, such as the femur, tibia, fibula, humerus, radius, or ulna.
SOL-GEL PROCESS: Low-temperature chemical method that synthesizes a liquid colloidal suspension (“sol”) into a solid (“gel”), yielding a high-purity glass or ceramic substance composed of fine, uniform, and often spherical particles. The sol-gel process facilitates the manufacture of bioactive materials such as hydroxyapatite.
THA OR THR: “Total hip arthroplasty” or “total hip replacement,” one of the most common and most successful of all modern surgical procedures.
TROCHANTER: Either of two rough protuberances, called the greater, major, or outer trochanter and the lesser, minor, or inner trochanter, near the upper end of the femur between its shaft and neck. Their main purpose is to serve as attachments for hip muscles.
The implant parts must also have the same mechanical properties as healthy versions of the structures they replace. The Charnley total hip replacements have consistently produced pain relief and rapid recovery because the implants are fixed in place. The patient has nearly immediate mobility. The success of the Charnley method has led to total joint replacement surgery being performed on knees, ankles, fingers, wrists, shoulders, and even elbows. Acrylic Polymers PMMA, polymethylmethacrylate, is one of a large family of acrylic polymers that have in common some association with a compound called acrylic acid, a rather small molecule with the empirical formula C3H4O2. The name “acrylic” relates to the acrid odor of the acid, which is not present in the very large molecules that are called acrylic polymers. Poly means “many,” mer means “parts.” Therefore in PMMA, P (poly-) means “many,” and the rest of the name, MMA (methylmethacrylate), relates to the particular parts that are chemically bonded over and over again to make this polymer. These individual parts that combine to make a polymer are called monomers.
An x-ray showing a hip joint replacement. (Photograph by Lester V. Bergman. CORBIS. Reproduced by permission.)
had succeeded in performing a total hip replacement (THR), a development which benefited so many sufferers that he was knighted by Queen Elizabeth II, becoming Sir John Charnley.
Implant Construction The total hip replacement used by Charnley in 1962 involved a 0.86in (22-mm) stainless steel ball on a stem, inserted into the femur to replace the ball side of the joint, and a high density polyethylene socket to replace the acetabular. He secured both components in place with PMMA. Since Charnley’s first replacements, balls of different sizes and materials and different stem lengths have been fashioned to accommodate individual patients.
Total hip replacements today use ball portions made of highly polished cobalt/chromium alloys or ceramic materials made of aluminum oxide or zirconium oxide. The stem portions are made of titanium- or cobalt/chromium-based alloys. The acetabular socket is made of metal or ultrahigh molecular weight polyethylene, or polyethylene backed by metal. The prosthetic parts weigh between 14 and 18 oz (400 and 500 g), depending on the size. It is important to use biocompatible materials that can function without creating a rejection response. The materials must also be resistant to corrosion, degradation, and wear. One concern with the use of acrylic bone cement is particulate debris, which in some cases is generated over time as the implant parts move against each other.
Proteins are complex natural polymers. Acrylic polymers are synthetic molecules and not as complex as proteins. The term polymer is more generally used with synthetic materials. They are also commonly called plastics. There are two basic ways to make a polymer, depending on whether the monomers just join one after another with no byproduct, or whether the monomers join and produce a small molecular byproduct. The process depends on the properties of the particular monomers. In addition polymerization, a reaction proceeds, usually under the influence of a catalyst, to link identical monomers to each other in long chains. Very reactive groups such as peroxides are common catalysts, and polyethylene is one example of an addition polymer. Condensation polymerization produces very large molecules, as monomers join by producing a small molecule byproduct, part of the byproduct molecule coming from each of the monomers as they join. Nylon 6-10 is an example of a condensation polymer. The small molecule byproduct is HCl, hydrogen chloride (hydrochloric acid in solution). Condensation polymers are thermosetting, that is they use heat to cure and set and become infusible when heated. Sometimes catalysts are used for condensation polymers. The first commercially prepared plastic was Bakelite, in the early 1900s. PMMA Commercial names for PMMA include Lucite and Plexiglas. PMMA is an amorphous, transparent, colorless plastic that is hard and stiff, but brittle. PMMA was first synthesized by Dr.
Otto Rohm and Otto Haas early in the twentieth century and first commercialized in the 1930s. It was used for a variety of applications to replace hard rubber, including dentures. During World War II, PMMA was used for aircraft windshields and canopies. The original synthesis of PMMA required heat, but in the 1940s, a “cold-curing” process was developed that made it possible to use it in orthopedic applications. Bone cements are made of a powder containing methacrylate and a liquid monomer of methylmethacrylate. The powder contains a peroxide initiator, and the liquid contains an activator. The polymerization process begins at room temperature with mixing of the powder and liquid. Opaque agents are added to the powder for radiographic contrast so the surgeon can check the cement in place. An antibiotic is also added to the powder. Chlorophyll is added to the components for optical marking of the bone cement during the operation. The long-term success of the acrylic bone cement depends at least in part on the exactness of the mixing and application procedures. The polymer is mixed in the operating room and inserted into the femur during the polymerization process. The implant is then inserted into the hardening cement. A similar procedure is used if cement anchors the acetabular in place. As the polymerization process proceeds, the PMMA fills all the spaces to mechanically anchor the prosthesis. The bone cement does not chemically bond to the bone or the prosthesis. Long-Term Prospects A total hip replacement is irreversible, but it may not last forever. The cement in cemented total hip replacements, or in other joints, may break down over time. The time varies for a number of reasons, not all well understood. One of the factors being researched is the influence of mixing techniques on the physical properties of PMMA. There is considerable evidence that mixing procedures play a significant role in the quality of the bone cement.
Cementless implants were introduced in the 1980s. These implants are larger and have a surface structure that is supposed to induce new bone growth. Recovery is slow and there has
The American Academy of Orthopaedic Surgeons (AAOS) addressed the problem of in vivo (within the body) degradation of acrylic bone cement in total hip arthroplasty at its annual meeting in 1999. Factors identified as contributing to failure include enzymatic activity and local pH, mechanical loading on the joint, porosity, and the initial molecular weight or other characteristics of the cement that may influence the susceptibility of PMMA to in vivo degradation. One study, on the micrometer-sized filler particles put into cement to make it opaque to x rays (so the orthopedic surgeon can monitor the cement over time), was done through Harvard Medical School in Boston. The filler particles commonly used are either barium sulfate or ceramic particles such as zirconium oxide. The micro-sized particles tend to clump together and cause voids, which eventually lead to breakdown in the bone cement. When smaller particles of nano-sized aluminum oxide were studied, they stayed dispersed, suggesting this may be one solution to the problem. Other studies indicate the type of sterilization used on the cement components has a direct effect on the longevity of the prosthesis. Gamma irradiation was found to reduce the cement’s toughness and resistance to fracture, but ethylene oxide sterilization did not reduce the quality of the cement. In other studies, the benefits of adding reinforcing fibers were investigated. Such fibers are added routinely to plastics to increase their toughness for many other applications. There is another possible option to consider when deciding whether or not to use acrylic cement: hybrid total hip replacements that use a cemented femoral component but a cementless acetabular one. This procedure eliminates what many consider the weak point. Hybrid hips are now widely used and producing good results according to reports from Great Britain, where this whole success story started.
Breakdown of the cement occurs when microcracks in the cement appear, and the prosthetic becomes loose or unstable. The problem is most often with the acetabular component. The rubbing of the ball against the cup produces microscopic debris particles, which are absorbed by the cells around the joint, and an inflammatory response results. This response can lead to bone loss, and the implant becomes loose as a result. Only about 10% of the total hip replacements fail within 10 years, but the large number of implants being performed makes that a significant number of patients in distress.
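Combining two figures quoted in this essay (roughly 500,000 replacements per year worldwide and a failure rate of about 10% within 10 years) gives a rough sense of the scale of the problem. The back-of-the-envelope sketch below is an illustration only, not a statistic reported by the source.

```python
# Rough arithmetic using the two figures quoted in this essay.
replacements_per_year = 500_000   # "More than 500,000 hip replacements ... every year"
ten_year_failure_rate = 0.10      # "about 10% ... fail within 10 years"

expected_failures = replacements_per_year * ten_year_failure_rate
print(f"Each year's cohort of implants -> roughly {expected_failures:,.0f} "
      "revisions expected within a decade")
```

The result, on the order of 50,000 patients per annual cohort, is why a failure rate that sounds small still matters.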
not been enough time to evaluate the success of these procedures. The age of patients having total hip replacement surgery has dropped considerably since Charnley introduced the procedure, and the longer and more active life of recipients has contributed to the number of replacements that have begun to fail.
—M. C. NAGEL
Viewpoint: No, newer materials such as bioceramics will eventually offer a safer and more durable anchor for arthroplasties.
A surgeon can either repair, reconstruct, or replace a patient’s damaged joint. All three operations are difficult, and the history of medicine is replete with failures in this area. Early in the twentieth century, surgeons learned that the best remedy for a diseased or injured joint is to replace it with a prosthesis, but that was not possible until the late 1950s. History of Joint Replacement Surgery The first successful operation to rebuild a joint was performed in 1826 by John Rhea Barton (1794–1871) in Philadelphia. He repaired a disabled ball-and-socket joint in a sailor’s hip by reshaping it into a hinge joint without using a prosthesis. Barton’s achievement was dependent upon the features of that particular case and was essentially unrepeatable.
Royal Whitman (1857–1946) reported another successful but isolated case of successful hip arthroplasty without prosthesis in 1924, but by then surgeons were beginning to understand that their goal of strong, dependable, durable artificial joints would only be met by developing low-friction prostheses rigidly attached somehow to natural bone.
Sustained research in replacing hip joints with prostheses began in the 1920s. Surgeons left the head of the femur alone, but inserted an artificial cup into the acetabulum, the socket of the pelvis. Selecting proper materials for prostheses is always a problem. Materials must resist wear, abrasion, breakage, and rejection; they must adhere well to bone; and they must not damage, irritate, or infect living body tissue. Between 1923 and 1939, experimental acetabular prostheses were made of glass, celluloid, Viscaloid, a celluloid derivative (1925), Pyrex (1933), and Bakelite (1939).
In 1937, a Boston doctor, Marius Nygaard Smith-Petersen (1886–1953) began trying metals for acetabular cups. He reported in 1939 his design of a successful cup made of Vitallium, a relatively inert, generally biocompatible, alloy of cobalt, chromium, and molybdenum. Several researchers tried stainless steel or steel-reinforced acrylics in the 1940s and 1950s. Artificial femoral heads became possible with these new substances, but friction and deterioration remained significant problems. In the 1950s, the British surgeon John Charnley (1911–1982) experimented with acetabular cups made of polytetrafluorethylene (PTFE), a low-friction, nearly inert substance usually called Teflon. Combining these cups with femoral heads made of stainless steel and other metals, Charnley succeeded in producing slippery, well-lubricated, and less troublesome artificial hip joints. For the first time, both the acetabulum and the head of the femur could be replaced with artificial materials. Charnley’s
landmark 1961 article, “Arthroplasty of the Hip: A New Operation,” in the British medical journal The Lancet, announced the first genuine total hip replacement (THR), that is, the manufacture and safe implanting of strong, durable, biochemically inert, artificial joints. The total replacement of knees, fingers, and other joints soon became possible after Charnley’s breakthrough in low-friction prosthetic arthroplasty. But early in the 1960s, Charnley realized that Teflon was inadequate for the socket because it would erode and discharge irritating debris into the joint and surrounding tissues. His subsequent experiments with other synthetic low-friction materials led him in 1963 to high density polyethylene, which is still preferred by most THR surgeons. Clinical Problems Most arthroplastic complications occur in cases of hip or knee arthroplasty, simply because the hip and knee are larger and must bear more weight than other joints, and thus come under more stress. Complications occur more often with artificial hips than with artificial knees because the natural movements of the ball-and-socket joint in the hip are more versatile and complex than those of the hinge joint in the knee, again producing more stress, and more kinds of stress. Nevertheless, THR gives most patients distinct relief from their degenerative or painful hip conditions, and the long-term prognosis for THR remains among the most optimistic of all surgical procedures, about 96% free of complications in all cases.
The natural ball-and-socket hip joint consists of the head of the femur, rotating in the acetabulum, lubricated by synovial fluid, supported by ligaments and muscles, and cushioned by cartilage. The aim of hip arthroplastic science is to mimic and preserve this combination to the greatest extent possible, for the sake of the comfort, safety, and mobility of the patient. Typically the acetabular prosthesis is just a hollow hemispheric cup made of ultrahighmolecular-weight polyethylene. The femoral prosthesis is much more elaborate, mimicking the head and neck of the femur and one or both trochanters, and terminates in a thin cylindrical stem 4 or 5 in (10 or 12 cm) long, made of stainless steel, titanium, titanium alloy, cobaltchromium alloy, cobalt-chromium-molybdenum alloy, or some other strong, noncorrosive metal. During THR, the acetabulum is reamed, the cup prosthesis is inserted, the part of the femur to be replaced is cut away either above both trochanters, through the greater trochanter, between the trochanters, through the lesser trochanter, or just below the lesser trochanter. Then the shaft is reamed, and the stem inserted into this new hollow. The stem is
JOHN CHARNLEY DEVELOPS TOTAL HIP REPLACEMENT SURGERY In the late 1950s and early 1960s, the British orthopedic surgeon John Charnley (1911–1982) developed successful techniques and materials for total hip replacement (THR) surgery, to substitute artificial ball-and-socket joints for diseased or injured femur-pelvis joints. Many surgeons consider THR the greatest surgical advance of the second half of the twentieth century, because it provides long-term pain relief, restores mobility and functionality, dramatically improves the quality of life for millions of patients, and is relatively free of complications. In 1950, a 40-year-old patient with degenerative hip disease could expect to be in a wheelchair by age 60. In 2002, a similar patient, after THR, can walk, play golf, and ride bicycles in his or her 70s and 80s.
sometimes fluted to prevent rotation, further elongated to restrict jiggling, or made porous or roughened to improve adhesion. The convex side of the acetabular prosthesis and the stem of the femoral prosthesis must be reliably attached to the pelvis and femur. There are two common ways to accomplish these attachments—by cementing the prosthesis to the reamed inner surface of the bone, or allowing the bone to grow into the prosthesis without cement. There is also a procedure called hybrid fixation, in which one component, usually the femoral, is cemented, while the other, usually the acetabular, is not.
Among the most common and most significant complications of THR are: the loosening of a prosthesis; the erosion of the prosthesis or its cement so that debris is generated; sepsis (infection); nerve damage; thrombosis (blood clot) or other cardiovascular difficulties; and cancer. All these complications, except for cancer, are most often associated with cemented prostheses. One
—Eric v.d. Luft
of the main problems with THR, the gradual disintegration of the materials of the artificial ball-and-socket joint, is more likely to occur with cemented prostheses because of their tendency to loosen and allow unwanted movement of components within joints. The standard bone cement is polymethylmethacrylate (PMMA), which Charnley introduced in the early 1960s. It cures quickly and adheres well, but there are two common problems associated with its use. First, as it cures it gets hot enough to damage surrounding soft tissues. Second, it has an effective longevity of only about 10 years, after which it is likely to loosen and generate debris. Incidents of cardiac arrest, thrombosis, and other serious cardiovascular complications have been associated with using PMMA. Loosening of the prosthesis can be caused by failure of the prosthesis itself or its cement, by the natural aging and withering of the remaining bone, or by erosion of bone as the prosthesis aggravates it. The failure of the cement not only causes pain as the prosthesis loosens, but can also be very dangerous to the health or life of the patient, as tiny particles of bone, dried cement, metal, plastic, fat, or other substances are released into surrounding tissues and/or the cardiovascular system. This debris creates either physical damage, obstruction, or infection. Cobalt or chromium debris has specifically been associated with the onset of cancer. Even though rates of complication such as infection or induced thrombosis dropped significantly
Each method has its advantages and disadvantages. The question is whether cementing these components to the residual bone or adhering them in some cementless way is better for the patient. Bone will usually grow into the porous or roughened surface of a prosthesis and create a very strong, safe, and durable bond without cement, but this takes some time, at least several months. Bone cement provides a quicker bond that is stronger in the short term, but may be more likely to weaken over time or to cause infection or injury to surrounding tissues.
With bachelor of medicine (M.B.) and bachelor of surgery (Ch.B.) degrees in 1935 from the Victoria University of Manchester School of Medicine, Charnley became a Fellow of the Royal College of Surgeons in 1936. He volunteered for the Royal Army Medical Corps in 1940, and from 1941 to 1944 served as an orthopedic surgeon to the British forces in North Africa. At Wrightington Hospital of the Manchester Royal Infirmary in 1958, Charnley used polytetrafluorethylene (PTFE), better known as Teflon, and stainless steel to achieve “low friction arthroplasty,” that is, the manufacture and safe implanting of strong, durable, biochemically inert artificial joints. By the mid-1960s THR was a routine surgical procedure.
after the 1980s because of more effective prophylactic and anticoagulant drugs, earlier and more thorough physical therapy, and shorter hospital stays, physical damage to peripheral nerves still occurs sometimes from inadequately controlled extrusions of cement. The most serious side effect of cemented prostheses is bone cement implantation syndrome (BCIS), a condition first recognized and named in the early 1970s. BCIS consists of debris extruding from the hardening or hardened cement in the bone marrow and getting into the blood vessels. Symptoms may include low intraoperative blood pressure, pulmonary embolism (obstruction of a blood vessel in the lung), cardiovascular hypoxia (low oxygen level), pulmonary artery distress, or cardiac arrest. BCIS is the leading cause of death as a complication of arthroplasty. Additional arthroplasties are sometimes necessary to replace defective, deteriorated, or improperly installed implants. Revision arthroplasties are also often indicated for cemented bonds, because of the short effective duration of bone cement.
A Wide Range of Solutions Research into debris control, friction control, the bone/prosthesis bond, and other joint replacement problems generally centers around the materials used. Cement adheres a prosthesis to bone more quickly, but natural bone growth around and into the implant ultimately provides a stronger and more durable bond. Ideally, then, the best implant would be bioactive in such a way as to stimulate rapid bone growth into its porous surface without recourse to cement. Current research is proceeding in that direction, and the most promising research concerns bioceramics.
Bioceramics are materials designed to promote or facilitate the growth of living tissue into themselves to form strong bonds. They are used mostly in arthroplasty, but also in plastic surgery, heart surgery, and dentistry. There are four kinds of bioceramics: bioinert, such as alumina or zirconia; resorbable, such as tricalcium phosphate; bioactive, such as hydroxyapatite; and porous, such as some coated metals. The methods employed for coating implants with bioceramics include sol-gel and plasma spraying processes. Alumina and zirconia are commonly used for the ball of the femoral component and the inside of the cup of the acetabular component, where bioactivity is not wanted. Several bioactive kinds of bioceramics are used as ingredients in new bioactive bone cements, which are less toxic and less thrombogenic (likely to cause blood clots) than PMMA. They are also used in cementless arthroplastic applications.
With many minor improvements, ultrahigh molecular weight polyethylene for the socket has been used since Charnley’s time, but methods of attaching it to the acetabulum vary. Recently, acetabular components backed with porous ceramic-coated metal have shown the best results. In 2001, a team of surgeons in Osaka, Japan, led by Takashi Nishii, reported that firm adhesion of the acetabular cup without cement could be achieved even in cases of bone degeneration. The femoral stem is sometimes coated with hydroxyapatite, because it is chemically similar to bone, increases the porosity of the stem, and therefore adheres well without cement. The stems of implants can also be coated with a fine layer of biodegradable polymer which encourages bone growth and gradually disappears as the bone grows into its space. A prosthesis coated with hydroxyapatite has a surface that is both porous and bioactive, thus better promoting vigorous growth of bone into the prosthesis to create a natural biochemical bond. This bioactivity and porosity is the main advantage of hydroxyapatite over cement in these applications. In 2001, another surgical team in Osaka, led by Hironobu Oonishi, reported significantly reduced rates of prosthesis loosening when bone cement was used in conjunction with tiny granules of hydroxyapatite. F. Y. Chiu’s Taiwanese team reported in 2001 that mixing bone cement with the antibiotic cefuroxime greatly reduced the incidence of infection caused by the cement in elderly patients. Also in 2001, Alejandro Gonzales Della Valle’s team in Buenos Aires reported that bactericidal agents such as vancomycin and tobramycin mixed with PMMA yielded excellent results against gram-positive infections. The bottom line for surgeons is that choosing the right method of prosthesis implantation and fixation for each particular patient dramatically reduces the risk of complications for that patient. Some patients are right for cementing; some not, and this medical determination must be made on a case-by-case basis. The quality and viability of the residual bone is a major factor in deciding whether or not to cement. The lifestyle of the patient also matters. Since a cemented prosthesis in an active patient would typically require revision arthroplasty after 10 years, it would not be best for, say, a 50-year-old cyclist. But older patients who are not likely to survive beyond the time when revision arthroplasty would be indicated, or whose bones are thinner, weaker, and less robust, often do better with cemented prostheses, especially if their lifestyles are less active. In 2001, Ewald Ornstein’s team in Hassleholm, Sweden, reported disappointing results in 18 cemented revision arthroplasty cases. Even so, bone cements are
probably best for most older patients, and uncemented prostheses for younger, but even for patients in their 80s and 90s, each clinical case must be evaluated according to its own set of indications. —ERIC V.D. LUFT
Lotke, Paul A., and Jonathan P. Garino, eds. Revision Total Knee Arthroplasty. Philadelphia: Lippincott-Raven, 1999. Matsui, Nobuo, ed. Arthroplasty 2000: Recent Advances in Total Joint Replacement. New York: Springer-Verlag, 2001.
Further Reading
“AAOS Bulletin” and “AAOS Report.”
Learmouth, Ian D., ed. Interfaces in Total Hip Arthroplasty. London: Springer, 2000.
Bono, James V., et al., eds. Revision Total Hip Arthroplasty. New York: Springer, 1999. Charnley, John. “Arthroplasty of the Hip: A New Operation.” The Lancet 1 (1961): 1129–32. Chiu, F. Y., et al. “Cefuroxime-Impregnated Cement at Primary Total Knee Arthroplasty in Diabetes Mellitus.” Journal of Bone and Joint Surgery (British Volume) 83, no. 5 (July 2001): 691–95. Clark, D. I., et al. “Cardiac Output During Hemiarthroplasty of the Hip: A Prospective, Controlled Trial of Cemented and Uncemented Prostheses.” Journal of Bone and Joint Surgery (British Volume) 83, no. 3 (April 2001): 414–18.
National Institutes of Health. Consensus Development Conference Statement 12, no. 5 (September 12–14, 1994). Nishii, Takashi, et al. “Osteoblastic Response to Osteoarthrosis of the Hip Does Not Predict Outcome of Cementless Cup Fixation: 79 Patients Followed for 5–11 Years.” Acta Orthopaedica Scandinavica 72, no. 4 (August 2001): 343–47. Oonishi, Hironobu, et al. “Total Hip Arthroplasty with a Modified Cementing Technique Using Hydroxyapatite Granules.” Journal of Arthroplasty 16, no. 6 (September 2001): 784–89. Ornstein, Ewald, et al. “Results of Hip Revision Using the Exeter Stem, Impacted Allograft Bone, and Cement.” Clinical Orthopaedics and Related Research 389 (August 2001): 126–33.
Effenberger, Harald, et al. “A Model for Assessing the Rotational Stability of Uncemented Femoral Implants.” Archives of Orthopaedic and Trauma Surgery 121, no. 1–2 (2001): 60–4.
Peina, Marko, et al. “Surgical Treatment of Obturator Nerve Palsy Resulting from Extrapelvic Extrusion of Cement During Total Hip Arthroplasty.” Journal of Arthroplasty 16, no. 4 (June 2001): 515–17.
Finerman, Gerald A. M., et al., eds. Total Hip Arthroplasty Outcomes. New York: Churchill Livingstone, 1998.
Ritter, Merrill A., and John B. Meding, eds. Long-Term Followup of Total Knee Arthroplasty. Hagerstown, MD: Lippincott Williams and Wilkins, 2001.
Furnes, O., et al. “Hip Disease and the Prognosis of Total Hip Replacements: A Review of 53,698 Primary Total Hip Replacements Reported to the Norwegian Arthroplasty Register 1987–99.” Journal of Bone and Joint Surgery (British Volume) 83, no. 4 (May 2001): 579–86.
Hayakawa, M., et al. “Pathological Evaluation of Venous Emboli During Total Hip Arthroplasty.” Anaesthesia 56, no. 6 (June 2001): 571–75. Hench, L. L. “Bioceramics.” Journal of the American Ceramic Society 81, no. 7 (July 1998): 1705–28. Hiemenz, Paul C. Polymer Chemistry: The Basic Concepts. New York: Marcel Dekkar, 1994.
Schreurs, B. Willem, et al. “Favorable Results of Acetabular Reconstruction with Impacted Morsellized Bone Grafts in Patients Younger Than 50 Years: a 10- to 18-Year Follow-Up Study of 34 Cemented Total Hip Arthroplasties.” Acta Orthopaedica Scandinavica 72, no. 2 (April 2001): 120–26. Scott, S., et al. “Current Cementing Techniques in Hip Hemi-Arthroplasty.” Injury 32, no. 6 (July 2001): 461–64. Sinha, Raj K., ed. Hip Replacement, Current Trends and Controversies. New York: Marcel Dekkar, 2002. Skyrme, A. D., et al. “Intravenous Polymethyl Methacrylate after Cemented Hemiarthroplasty of the Hip.” Journal of Arthroplasty 16, no. 4 (June 2001): 521–23.
Gonzales Della Valle, Alejandro, et al. “Effective Bactericidal Activity of Tobramycin and Vancomycin Eluted from Acrylic Bone Cement.” Acta Orthopaedica Scandinavica 72, no. 3 (June 2001): 237–40.
Roberson, James R., and Sam Nasser, eds. Complications of Total Hip Arthroplasty. Philadelphia: Saunders, 1992.
Steinberg, Marvin E., and Jonathan P. Garino, eds. Revision Total Hip Arthroplasty. Philadelphia: Lippincott Williams and Wilkins, 1999. Sylvain, G. Mark, et al. “Early Failure of a Roughened Surface, Precoated Femoral Component in Total Hip Arthroplasty.” Journal of Arthroplasty 16, no. 2 (February 2001): 141–48.
Trahair, Richard. All About Hip Replacement: A Patient’s Guide. Melbourne: Oxford University Press, 1998.
Waugh, William. John Charnley: The Man and the Hip. London: Springer-Verlag, 1990. Walenkamp, G. H., ed. Bone Cementing and Cementing Technique. New York: SpringerVerlag, 2001. Xenakis, Theodore A., et al. “Cementless Hip Arthroplasty in the Treatment of Patients with Femoral Head Necrosis.” Clinical Orthopaedics and Related Research 386 (May 2001): 93–9.
Is fly ash an inferior building and structural material?
Viewpoint: Yes, fly ash cement is an inferior building and structural material in terms of durability, safety, and environmental effects.
Viewpoint: No, fly ash has proven to be an excellent building and structural material that actually can enhance the properties of concrete and other construction resources.
Most of us don’t think that much about cement; we take it for granted in many ways. In general, we know it to be a strong and durable building material. Roads made of cement wind through our cities. Our homes and businesses are often built with cement foundations. But what if someone could come up with a way to make cement even stronger and better by changing its physical and chemical characteristics? How would we decide if the change was an improvement? Fly ash is a fine, glass powder recovered from the gases of burning coal during the production of electricity. These micron-sized particles consist primarily of silica, alumina, and iron. Fly ash can be used to replace a portion of cement in concrete. One way to measure the quality of fly ash cement is to determine whether it is truly durable and strong. Comparisons of cement made with and without fly ash raise many questions. What type of cement will fill a space best? Will the material remain safe over time? Will it be moisture resistant? Will it remain strong? Benefits such as cost reduction and energy savings are often part of the equation. Yet, the benefits are not clear cut, because each side has a different point of view. Whether product quality or cost savings are being discussed, opponents and proponents of fly ash cement are convincing in their assessment of the issue. There are always risks associated with any building material. In the case of fly ash cement, the issue is critical, especially when one considers that it is often used for road construction and building foundations. No one wants to be driving along and have the road collapse, or find out that the foundation of a cherished home is faulty. Proponents of fly ash cement contend that, when mixed properly, it is less susceptible to environmental stresses than cement without fly ash. Critics say that the quality of fly ash cement is too varied to make that claim. Which side you choose to agree with will probably have a lot to do with the way you determine and weigh risk. If you find environmental issues important, then you will probably find a discussion of the hazardous elements of fly ash cement interesting. This side of the debate is often addressed with passion. Are these hazardous elements a threat to the environment? Some think so. Others say that using fly ash is good for the environment, because it is a recycled waste product. With worldwide concerns about air pollution, water pollution, and overloaded landfills, recycling waste products is obviously important. However, opponents of fly ash cement consider that the health risks caused by hazardous elements in fly ash outweigh the benefits of recycling in this instance.
The debate over the use of fly ash cement will continue for some time. Further data on the durability, safety, health, and environmental concerns regarding this material must be determined and evaluated. Even then, the debate is likely to continue, as cost and risk management is an important component of this equation. For some, the benefits of fly ash cement will always outweigh the risks; others may never be comfortable with its use. —LEE ANN PARADISE
Viewpoint: Yes, fly ash cement is an inferior building and structural material in terms of durability, safety, and environmental effects. In the traditional story of the three little pigs, the house the pig built out of brick was the only one to survive the onslaught of the superwinded wolf, who easily blew down the house of straw and the house of sticks and ate the other two pigs. Safely ensconced in his brick house and protected from the voracious wolf, the third little pig thought he had it made. Unfortunately, he did not realize that his house was built with bricks and a floor cemented with concrete that contained a high fly ash content. As a result, the house showed signs of deterioration much earlier and often needed repairs. The little pig also became quite mysteriously ill as time went on. And after an earthquake racked the countryside, the house fell down.
ENGINEERING
This tale presents a note of caution. Despite its widespread use, the inclusion of fly ash in concrete mixtures and as a filler in other types of building materials is problematic at best. Touted as a safe and economical way to recycle coal incinerator ash, fly ash is most often used in cement and mortar and also in place of clay, soil, limestone, and gravel for roads and other construction. The proponents of fly ash say it conserves energy by reducing the need for standard materials such as cement, crushed stone, and lime, all of which require energy to be produced. They also propose that fly ash saves costs associated with obtaining construction materials such as the naturally occurring pozzolans (volcanic ash, opaline shale, and pumicite) traditionally used for making cement. Best of all, they claim, using fly ash allows recycling of a byproduct that could otherwise cause enormous disposal problems. Although all these points are true, they are not the whole truth. Fly Ash in Cement Construction Naturally occurring pozzolans have been used for at least 2,000 years to make cement-like products. The Romans used pozzolana cement from Pozzuoli, Italy, near Mt. Vesuvius, to build the Colosseum and the Pantheon in Rome. Fly ash is an artificial pozzolan, with glassy spherical particulates that contain the active pozzolanic ingredient. However, fly ash is inferior to natural pozzolan.
When coal powder burns, excess amounts of carbon dioxide and sulfur trioxide are trapped inside the spherical envelopes of fly ash, giving fly ash an inconsistent chemical composition. For example, the hydration of fly ash causes the envelope (the membrane that covers fly ash particles) to prevent or slow down its reaction with calcium hydroxide during cement curing. This slower process may lead to the envelope breaking at a later stage and causing the delayed formation of crystals of the mineral ettringite (DEF) in the concrete. DEF, sometimes referred to as an internal sulfate attack, results in gaps filled with ettringite crystals that can cause cracking and peeling in the concrete. In addition to the problems of cracking and peeling, fly ash does not control alkali-aggregate reactions in cement as well as natural pozzolan. The fly-ash envelope slows down the reaction with calcium hydroxide, a product resulting from the hydration of Portland cement (the most common cement used in construction), and the silicate inside the fly ash particles reacts with alkali in the cement. As a result, silica gels are formed and expand, causing cracking and differential movements in structures, as well as other problems such as reduced durability in areas subject to freezes and thaws and reductions in compressive and tensile strength. In contrast, natural pozzolan quickly reacts with calcium hydroxide, trapping the alkali inside the cement paste to form a denser paste with almost no alkali-aggregate reaction. One of the most touted advantages of fly ash concrete is that high-quality fly ash can reduce the permeability of concrete at a low cost. However, the quality of fly ash varies widely, often depending on how hot a coal plant is burning, which influences the ash’s carbon content. Low nitrogen oxide (NOx) combustion technology used to burn coal in a manner that better controls pollution often increases the carbon content of the ash, resulting in low-quality fly ash with carbon content above 10%. (The American Society for Testing and Materials [ASTM] 618 standard for building codes sets a limit of 6% carbon content, and industry preferences are set at 3% or lower.) This low-quality product can actually increase permeability and interfere with the air-entrainment process, leading to unreliable pours. Many other variables also affect the quality of fly ash and its suitabil-
For example, a low tricalcium aluminate content of 1.3% and sodalite traces can result in a substantial lowering of sulfate resistance in mortar blends. Overall, fly ash is also typically linked with slower-setting concrete and low early strength. In addition, the use of concrete containing fly ash cement in road construction is associated with several cautionary measures. The Virginia Highway and Transportation Research Council (VHTRC) outlined several constraints concerning fly ash concrete used to construct highways and highway structures. The council noted that special precautions are often necessary to ensure that the proper amount of entrained air is present in fly ash cement mixtures. It also noted that not all fly ashes have sufficient pozzolanic activity to provide good results in concrete. Finally, transporting fly ash to the construction site may nullify any other cost advantage of using fly ash, and the use of a superplasticizer admixture to make fly ash less reactive to water can also cancel out cost savings. The Recycling of Fly Ash, Health, and the Environment The growing concern over environmental pollution that began in the 1950s and the 1960s led to stronger regulations and new technologies to reduce air pollutants. The United States Environmental Protection Agency (EPA) now estimates that 95 to 99% of particulate and organic pollutants can be removed from air emissions resulting from coal combustion. Although they are removed from emissions, these pollutants are captured as part of the fly ash from the smokestack. Approximately 50 to 60 million tons of this fly ash are produced each year in the United States as a byproduct of coal combustion, and disposing of this fly ash has caused concern. Why? Because fly ash can contain any number of more than 5,000 hazardous and/or toxic substances, including arsenic, cadmium, chromium, carbon monoxide, formaldehyde, hydrochloric acid, lead, and mercury. Fly ash also includes harmful organic compounds such as polychlorinated biphenyls (PCBs), dioxins, dimethyl and monomethyl sulfate, and benzene.
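Returning to the carbon-content limits quoted above, those thresholds lend themselves to a simple screening calculation. The short Python sketch below is only an illustration of that kind of check, not part of any standard: the plant names and measured values are hypothetical, and the 6% and 3% figures are simply the ASTM 618 ceiling and the industry preference cited in this essay.

    # Illustrative screening of fly ash samples against the carbon-content
    # limits cited in the text (ASTM 618 ceiling of 6%, industry preference
    # of 3% or lower). All sample names and measurements are hypothetical.
    ASTM_618_LIMIT = 6.0        # percent carbon, the building-code ceiling cited above
    INDUSTRY_PREFERENCE = 3.0   # percent carbon preferred by concrete producers

    samples = {
        "Plant A, Unit 1": 2.4,                 # hypothetical measured carbon content, %
        "Plant A, Unit 2": 4.8,
        "Plant B (low-NOx combustion)": 11.2,   # illustrates the >10% case described above
    }

    for name, carbon in samples.items():
        if carbon > ASTM_618_LIMIT:
            verdict = "reject: exceeds the ASTM 618 limit"
        elif carbon > INDUSTRY_PREFERENCE:
            verdict = "marginal: within ASTM 618 but above industry preference"
        else:
            verdict = "acceptable"
        print(f"{name}: {carbon:.1f}% carbon -> {verdict}")

A check of this sort screens only one variable; as the essay stresses, sulfate resistance, fineness, and pozzolanic activity also vary from source to source, which is why testing each specific fly ash-cement combination is recommended.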
KEY TERMS
ADSORB: To adhere in a thin layer of molecules (of gases, solutes, or liquids) to the surface of a solid body or liquid with which the substance is in contact.
AGGREGATE: Mixture of sand, gravel, slag, and/or other mineral materials.
ALKALI-AGGREGATE REACTIONS: Reactions that occur between certain types of aggregates and the alkali in the pore solutions of cement paste in concrete.
CONCRETE: Mixture normally composed of cement, water, aggregate, and, frequently, additives such as pozzolans.
DELAYED ETTRINGITE FORMATION (DEF): Formation of the mineral ettringite that is associated with expansion and cracking in mortars and concrete.
LIME: Calcium hydroxide; forms in concrete when cement mixes with water.
PORTLAND CEMENT: Most common cement used in construction, mainly composed of lime, silica, and alumina.
POZZOLANS: Finely divided siliceous (or siliceous and aluminous) material that reacts with calcium hydroxide, alkalis, and moisture to form cement. Natural pozzolans are mainly of volcanic origin. Artificial pozzolans include fly ash, burned clays, and shales.
Although recycling fly ash into building materials may seem to be a viable alternative to disposing of it in waste dumps, where it can leach into the soil, using a hazardous material in building products is actually waste disposal masquerading as recycling. A fundamental rule of recycling is similar to that of medicine: "First, do no harm." However, the use of fly ash in construction materials is far from safe. For example, some buildings in the United States, Europe, and Hong Kong have been found to have increased toxic indoor air contamination directly related to fly ash used as an additive to make the concrete more flowable. In a high-rise building in Hong Kong, researchers suspect that the combination of fly ash and granite aggregate in the concrete causes the building to be "hot" with the radioactive gas radon when the air-conditioning systems are shut down at night and on weekends. As a result, night and weekend workers may be exposed to higher and potentially dangerous radon levels.

Many of the substances in fly ash are known to have carcinogenic and mutagenic effects, and some, such as dioxins, are so toxic that experts cannot agree on a safe level of exposure. In one study, a team of ecologists at the U.S. Department of Energy's Savannah River Ecology Laboratory in South Carolina linked fly ash with developmental abnormalities (both behavioral and physical) resulting from high levels of heavy metals leaching into the water. For example, affected bullfrog tadpoles and soft-shell turtles had elevated levels of arsenic, cadmium, selenium, strontium, and mercury.

One especially troubling component of fly ash is dioxin, one of the best-known contaminants of Agent Orange, the notorious defoliant used in the Vietnam War. On July 3, 2001, the British Broadcasting Corporation (BBC) featured a report on its Newsnight program about highly contaminated mixtures of fly ash and bottom ash (the ash left at the bottom of a flue during coal burning) that included heavy metals and dioxin. The mixtures had been used throughout several London areas to construct buildings and roads. Tests showed that the dioxin content of the fly ash was greater than 11,000 ng/kg (nanograms per kilogram), far higher than the 200 ng/kg left as a result of the use of Agent Orange. (In fact, 30 years after the end of the Vietnam War, scientists still find elevated dioxin levels in human tissues and birth defects in Vietnam.) In addition to the many hazardous compounds already contained in fly ash, the use of ammonia to condition fly ash adds another environmental and health problem. Ammonia can be adsorbed by the fly ash within the flue gas train in the form of both free ammonia and ammonium sulfate compounds. During later transport and use of this fly ash, the ammonia can desorb, which presents several concerns. The primary problem associated with ammonia in fly ash is connected with waste disposal, since moisture can cause the ammonia to leach into nearby rivers and streams. However, ammonia desorbing into the air from contaminated fly ash is also a concern when the ash is used in concrete mixtures. During the mixing and pouring of concrete, ashes with high amounts of ammonia may create harmful odors that can affect workers' health.

Workers pouring concrete during the construction of a stadium in Seattle, Washington. (Photograph by Natalie Forbes. CORBIS. Reproduced by permission.)
Fly ash also poses a potential health and environmental hazard during storage before mixing, since a strong wind can scatter the fly ash and rain can cause it to leach into the ground. Even if the fly ash were not causing immediate harm to people or the environment as part of a construction material, "disposing" of fly ash in the concrete of a building is a temporary solution at best. Little is known about the leachability of materials made with fly ash, and if the concrete in a building turns out to be a source of environmental health problems, replacement is often not an option. Once a building is constructed, little can be done, since there is no proven method of encapsulation to control emissions. It is bewildering that government regulations require industries to spend millions of dollars on antipollution devices to capture deadly toxins but then allow these toxins to be used, via fly ash, in the construction of office buildings, houses, roads, and even playgrounds. Too Many Variables Although numerous standards, regulations, and tests are in place or available concerning the use of fly ash cement mixtures in construction, the wide variability in the quality of fly ash and its potential negative effects on human health and the environment mark it as an inferior component for use in cement. Because of its inconsistent properties and particle size (fly ash particles can range
from one to 100 microns in size), it is recommended that fly ash be obtained from a consistent source, ideally not just from one utility plant but from a single unit at that plant. Furthermore, prices can range anywhere from $13 to $28 a ton depending on the consistency. As a result, it is important to test and approve not only each source of fly ash but also the properties of a specific fly ash-cement combination on a project-by-project basis. Construction companies trying to meet a deadline can easily overlook such detailed and consistent testing. In addition, to reduce costs, cement manufacturers have been known to use too much fly ash (typical construction specifications permit substituting fly ash for just 15% of the cement) in the production process. In one such case, many buildings collapsed after an earthquake in Taiwan in 1999. Problems with fly ash used as a fill material in cement construction have also been documented in the United States. In Chesterfield County, Virginia, at least 13 buildings built around 1997 developed problems, including floors heaving upward and cracking, because fly ash fill that had been exposed to moisture was used in their construction. Although the Virginia example is not the norm, it points out that regulations concerning the amount and quality of fly ash used in cement can be and have been ignored. Such occurrences are compounded by the many variables connected with fly ash quality. Considering the long-term health and environmental hazards that may result from spreading fly ash (which absorbs 99% of any heavy-metal contamination in whatever is burned) in cement throughout the country in buildings and roads, fly ash further loses much of its luster as a safe and effective construction material. In the final analysis, difficulties in quality control that often lead to lower-grade cement products, combined with potential health and environmental problems, make fly ash cement an inferior building product. —DAVID PETECHUK
Viewpoint: No, fly ash has proven to be an excellent building and structural material that actually can enhance the properties of concrete and other construction resources.

No matter how concrete is used—buildings, bridges, roads, sewers—it normally contains Portland cement, water, aggregate, and sometimes extra ingredients. One of the most common extra ingredients is fly ash. Fly ash falls under the classification of pozzolans, compounds that react with the lime in concrete to form the hard paste that holds together the aggregate. Both natural and synthetic pozzolans are available. The natural types include processed clays or shales, volcanic ash, and other powdery compounds. A synthetic pozzolan, fly ash is a byproduct of coal combustion, which is used to generate power.

Over the years, fly ash has become a welcome addition to concrete for many reasons. Fly ash in the concrete improves concrete flow, and furnishes a better-looking, stronger, longer-lasting, and more durable finished product. Fly ash is readily available and relatively inexpensive. From an environmental standpoint, using fly ash in concrete reduces the amount of this byproduct of the burning of coal, which would otherwise be destined for burial in landfills.

Increased Strength The advantages of concrete made with fly ash (FA-concrete) grow from its physical and chemical characteristics. Fly ash particles are small, smooth, and round in shape, attributes that allow them to move readily around the aggregate to create an FA-concrete mixture with fewer voids. On the chemical side, fly ash makes a critical contribution by reacting with the lime (calcium hydroxide) that results when cement mixes with water. The fly ash reacts with the lime to make the same binder, called calcium silicate hydrate, that is created when cement and water mix. In other words, hydrated cement yields the concrete binder along with lime, and fly ash uses that lime to make more binder. Fly ash, therefore, can be used to replace at least some of the cement in a concrete mixture.

The fly ash–lime reaction is particularly important. In traditional concrete, lime continues to be produced during and well after the original placement. The only requirement is that moisture comes in contact with the cement. This occurs from the initial water in the mix, but also from water vapor that moves through voids in the concrete, traveling from moist to dry areas, as from the damp bottom of a concrete driveway slab to the dry surface. As the moisture moves, it picks up the excess, nondurable lime and transports it, frequently resulting in a white, chalky residue called efflorescence. FA-concrete, on the other hand, combats the excess lime by reacting with it and making more cementitious paste, which fills the voids and curtails water flow through the concrete. Voids are a marked problem in traditional concrete because they allow moisture to move more easily through the concrete, which causes additional lime leaching. Voids can also diminish the structural strength of the concrete, and fly ash provides a remedy for this. First, it diminishes the number and size of voids during the initial pour due to its physical shape—essentially serving as a microaggregate. Second, it continues to react with lime and make additional paste to fill the remaining voids. This can generate a stronger concrete than is possible with the traditional mix. Increased concrete strength is especially important in projects that require a high strength-to-weight ratio. High-rise buildings, for example, require strong but relatively light structural components that hold up well but put minimal stress on supporting structures. Traditional concrete has a strength ceiling at a mix of about seven bags of cement per cubic yard (each bag holds 94 lb/42 kg of cement). Because FA-concrete becomes stronger over time, FA-concrete can exceed this ceiling. In addition, a thinner, and therefore a lighter, FA-concrete component can confer the same strength properties as a larger, heavier traditional concrete component.
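The "seven bags per cubic yard" ceiling is easy to translate into binder quantities, and the same arithmetic shows what a partial cement replacement looks like. The short Python sketch below is a back-of-the-envelope illustration only, not a mix design; the bag weight and seven-bag figure come from the preceding paragraph, while the 15 percent replacement rate is an assumed example (a substitution level cited elsewhere in this article).

    # Back-of-the-envelope binder arithmetic for one cubic yard of concrete.
    # The seven-bag mix and the 94 lb/bag figure come from the text; the 15%
    # fly ash replacement rate is an assumed example, not a recommendation.
    BAG_WEIGHT_LB = 94            # weight of one bag of Portland cement
    bags_per_cubic_yard = 7       # the traditional "strength ceiling" mix

    total_binder_lb = bags_per_cubic_yard * BAG_WEIGHT_LB    # 658 lb of cement
    replacement_rate = 0.15                                  # assumed fly ash share of the binder

    fly_ash_lb = total_binder_lb * replacement_rate          # about 99 lb of fly ash
    cement_lb = total_binder_lb - fly_ash_lb                 # about 559 lb of cement

    print(f"Seven-bag mix: {total_binder_lb} lb of cement per cubic yard")
    print(f"With {replacement_rate:.0%} fly ash: {cement_lb:.0f} lb cement + {fly_ash_lb:.0f} lb fly ash")

The point of the essay's argument is that the fly ash portion is not inert filler: by consuming the excess lime it continues to form binder, which is why FA-concrete can keep gaining strength and ultimately exceed the ceiling of the all-cement mix.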
Numerous studies have confirmed the strength enhancement in concrete made with fly ash. One of the most compelling is a 2000 study of the long-term performance of FA-concrete. The study examined the compressive strength of test elements made from different concrete mixes after they had stood for 10 years in an outdoor environment. FA-concrete exhibited the highest compressive strength, followed by traditional concrete, and then an assortment of test specimens containing different additives. The study also verified the continuing action of fly ash on excess lime, and the rise in the strength of FA-concrete over time. An early test performed at 28 days indicated that the FA-concrete was slightly less strong than traditional concrete and the other test specimens. However, the study reports, “it attained the highest strength gain, of more than 120 percent between 28 days and 10 years.” Protective Qualities FA-concrete is also less susceptible to many of the common environmental stressors that take a toll on traditional concrete. Decreased permeability is the primary reason. FA-concrete can fend off surface assaults and sustain considerably less damage because it has fewer voids and consequently blocks deleterious substances from freely penetrating the concrete. A major concrete corrosive is salt, and particularly the chloride it contains. Salt can also have negative effects on the steel bars and latticework often imbedded in concrete as reinforcement. Both seawater vapor in coastal areas and deicing salt in colder regions can quickly ravage concrete. In various tests, FA-concrete demonstrated a greater resistance to the effects
of chloride than did traditional concrete. In one study, researchers cast cylinders made from traditional and FA-concrete in high-, medium-, and low-strength mixtures. They fully submerged the cylinders in a highly concentrated, 19,350 ppm solution of sodium chloride for 91 days, and then tested the cylinders to determine the amount of free chloride ions (charged chloride atoms) they contained. FA-concrete showed significantly fewer ions. The paper summarized, "Overall results suggest that a judicious use of fly ash in concrete-making can decrease the incidence of chloride-induced corrosion of the reinforcement in concrete structures." Similar studies show that FA-concrete is better able than traditional concrete to withstand attack by sulfates, which react with lime and can cause concrete expansion and cracking. Beyond chloride and sulfate, concrete faces other threats, such as freezing and thawing. Unlike most compounds, which contract when they transform from a liquid to a solid state, water expands. Large voids in concrete are problematic because they can fill with liquid water. As the water freezes, the ice exerts pressure on the matrix of the concrete, which can cause cracking and spalling (chipping). At the same time, some voids are desirable to allow excess moisture room to expand, so it does not cause microfractures in the concrete. FA-concrete provides an answer. For one thing, FA-concrete requires up to 10% less water in the initial mix than traditional concrete does, yet it provides the same workability. The smooth, spherical fly ash particles can maneuver around aggregate and fill voids without as much help from water. In addition, when FA-concrete is used in the proper mix proportions, microscopic voids remain. These tiny air pockets serve as safe storage vessels in which freezing water can expand safely without compromising the concrete's integrity. Quality Finished Product Strength and durability are vital to a quality finished product, and so is appearance. In this aspect, FA-concrete again improves upon traditional concrete.
The augmented flowability of FA-concrete helps ensure that the concrete will more completely fill the formwork used to contain it in its plastic state until it sets or hardens. It will spread from top to bottom and to all edges of the formwork with less work, with an even consistency, and with fewer voids. This is particularly evident along the edges of the formwork. For example, wood forms typically serve as guides for the sides of a driveway. Once the concrete is poured and the forms are removed, traditional concrete often reveals noticeable spaces where the concrete did not completely fill the form. Segregation may also be visible. In segregation,
larger aggregates migrate to the bottom of the concrete and finer aggregates move to the top. On the other hand, concrete with fly ash flows much more easily, filling forms more completely and more consistently, and decreasing voids and segregation. Although a driveway is usually at ground level and its sides are not usually noticeable, a decorative concrete column, arch, or other design element would require as smooth a finish as possible to be pleasing to the eye and for structural integrity. FA-concrete meets those demands. Flowable concrete is also critical in road construction. Large road projects use slipforms that slide along with the paver (concrete-laying machinery) to provide a temporary form for the concrete. As the paver moves along the road, it pours concrete that fills a section of roadway between the slipforms. Excellent concrete flow is paramount because the concrete must flow quickly and completely between the slipforms before the paver travels on. Again, FA-concrete is an excellent choice. Research also shows that, as a road-construction material, it maintains a better surface finish longer than traditional concrete. In addition, the appearance of concrete can be affected by excess lime. As mentioned earlier, Portland cement reacts with water to generate lime. As the lime leaches onto the surface of the concrete and evaporates, it leaves behind a milk-colored, powdery residue. On vertical and sloped concrete structures, streaks may occur. This problem is greatly reduced with FA-concrete. Because fly ash reacts with the lime to make the cementitious paste, less lime is available as leachate, leachate residue declines, and the finished product is more aesthetically pleasing. Beyond Concrete Beyond concrete, fly ash is beneficial to other materials. Recent studies review its advantages in mortar, grouts, and bricks, and in less well-known materials. A 1999 study in China, for example, reviewed the benefits of fly ash in that nation's major cement ingredient, blast furnace slag. The results of the study showed that cement containing the correct proportions of fly ash and slag provided greater strength and better pore structure than cement made with slag alone.
Construction contractors have also found fly ash to be an excellent alternative to the standard backfill blend of sand and gravel used to fill in narrow trenches excavated by utility workers, and to fill various other cavities occurring under buildings, roads, or other structures. Standard backfill is relatively inexpensive in itself, but can rapidly become costly. For instance, when filling a deep, narrow trench, workers must spread a bit of backfill, compact it, spread a bit more, compact it again, and so on, until the hole is filled. Then they level it off. Many companies are now switching to controlled low-strength material (CLSM), which has been described as "a fluid material that flows as easily as thick pancake batter and is self-leveling." This material is basically a slurry made of Portland cement, water, and a fine aggregate, which is often fly ash. While CLSM is a bit more expensive than backfill, many contractors find it saves time and money because it eliminates compacting and leveling work. In addition, CLSM normally can carry higher loads than backfill, it will not settle as backfill can, and it can be removed with standard construction equipment if required.

Pocket and Planet Friendly Because fly ash is initially a waste byproduct of coal-burning operations, FA-concrete is a cost-effective alternative to concrete made with only Portland cement as the binder. That alone is enough to pique the interest of concrete producers and contractors. Its versatility as a construction material makes it even more attractive. Because it has greater flowability, contractors find it reduces time at the job site. In addition, FA-concrete has shown improved strength and durability, which means that buildings, bridges, roads, and sewers have the potential for longer life.

Fly ash's "green" qualities provide yet another enticement. Cement production requires heat, which uses energy and releases copious amounts of carbon dioxide, a greenhouse gas, into the atmosphere. In contrast, fly ash is a plentiful waste product. The amount of fly ash generated annually is enormous, with the United States alone producing more than 60 million metric tons every year. By using fly ash as a construction material, contractors are recycling a waste product that would otherwise find its way into a landfill, and reducing the demand for cement. In summary, fly ash has a wide range of attributes. It saves time and money, provides strength and durability, yields improved concrete flow and workability, and helps produce high-quality finished products. It is an excellent building and structural material. —LESLIE MERTZ

Further Reading
"Ammonia on Fly Ash and Related Issues." W. S. Hinton & Associates.
Derucher, K. N., and G. P. Korfiatis. Materials for Civil & Highway Engineers. 2nd ed. Englewood Cliffs, N.J.: Prentice Hall, 1988.
Dunstan, M., and R. Joyce. "High Fly Ash Content Concrete: A Review and a Case History." Concrete Durability: Katherine and Bryant Mather International Conference, vol. 2. Detroit: American Concrete Institute (1987): 1411–41.
———. "Long-Term Durability of Fly Ash Concretes in Civil Engineering Structures." Concrete Durability: Katherine and Bryant Mather International Conference, vol. 2. Detroit: American Concrete Institute (1987): 519–40.
Halstead, W. J. "Use of Fly Ash in Concrete." National Cooperative Highway Research Program (NCHRP). Synthesis of Highway Practice 127 (October 1986).
Haque, M. N., O. A. Kayyali, and M. K. Gopalan. "Fly Ash Reduces Harmful Chloride Ions in Concrete." ACI Materials Journal 89, no. 3 (May 1, 1992): 238–41.
ISG Resources Inc.
Malhotra, V. M., Min-Hong Zhang, P. H. Read, and J. Ryell. "Long-Term Mechanical Properties and Durability Characteristics of High-Strength/High-Performance Concrete Incorporating Supplementary Cementing Materials under Outdoor Exposure Conditions." ACI Materials Journal 97, no. 5 (September 1, 2000).
Peles, J. D., and G. W. Barrett. "Assessment of Metal Uptake and Genetic Damage in Small Mammals Inhabiting a Fly Ash Basin." Bulletin of Environmental Contamination and Toxicology 59 (1997): 279–84.
Ryder, Ralph. "No Smoke without a Liar." The Ecologist (October 26, 2001).
Smith, A. "Controlled Low Strength Material: A Cementitious Backfill That Flows Like a Liquid, Supports Like a Solid, and Self-Levels without Tamping or Compacting." Aberdeen's Concrete Construction 36, no. 5 (May 1991): 389–98.
LIFE SCIENCE
Historic Dispute: Are infusoria (microscopic forms of life) produced by spontaneous generation?
Viewpoint: Yes, prior to the nineteenth century, many scientists believed that infusoria are produced by spontaneous generation.
Viewpoint: No, experiments by Louis Pasteur in the nineteenth century confirmed that infusoria are not produced by spontaneous generation.
As suggested by the term “Mother Earth,” many ancient cultures shared the assumption that the earth is a nurturing organism, capable of giving birth to living creatures. From the earliest systems of religion and philosophy up to the seventeenth century, belief in the spontaneous generation of living beings from nonliving matter was almost universal. The doctrine of spontaneous generation generally was applied to the lowest creatures, parasites, and vermin of all sorts, which often appeared suddenly from no known parents. Since the time of the Greek philosopher Aristotle in the fourth century B.C., philosophers and scientists have seen the question of spontaneous generation as an essential element in the study of the natural world. Generally, the ancients assumed the existence of spontaneous generation, that is the appearance of living beings from nonliving materials, without doubt or question. For Aristotle, examining the existence of spontaneous generation was part of his attempt to establish a natural scheme for the classification of living beings. An ideal system of classification would recognize the many characteristics displayed by all living beings and analyze them in order to reveal the natural affinities beneath the bewildering variety of structures and functions they presented to the natural philosopher. In evaluating the characteristics of animals, such as structure, behavior, habitat, means of locomotion, and means of reproduction, Aristotle concluded that means of reproduction was the most significant factor. Animals could be arranged hierarchically by examining their level of development at birth. Aristotle described three major means of generation: sexual reproduction, asexual reproduction, and spontaneous generation. Heat, according to Aristotle, was necessary for sexual, asexual, and spontaneous generation. The animals now known as mammals were at the top of Aristotle’s hierarchy because they reproduced sexually, and the young were born alive and complete. At the bottom of his scheme were creatures like fleas, mosquitoes, and various kinds of vermin that were produced by spontaneous generation from slime and mud in conjunction with rain, air, and the heat of the sun. According to Aristotle’s observations, certain kinds of mud or slime gave rise to specific kinds of insects and vermin. For example, a combination of morning dew with slime or manure produced fireflies, worms, bees, or wasp larvae, while moist soil gave rise to mice. Although many seventeenth-century naturalists continued to accept the doctrine of spontaneous generation, the Italian physician Francesco Redi, a member of the Academy of Experiments of Florence, initiated a well-known experimental attack on the question that also helped to clarify the life cycle of insects. Noting the way that different flies behaved when attracted to various forms of rotting flesh, Redi suggested that maggots might develop from the objects deposited on the meat by adult flies. Published in 1668 as Experiments on the Generation of Insects, Redi’s experiments changed the nature of the
debate about spontaneous generation. The introduction of the microscope in the seventeenth century, moreover, proved that the so-called lower creatures were composed of complex parts and were, therefore, unlikely to arise spontaneously from mud and slime. Arguments about the generation of macroscopic creatures were largely abandoned. Seventeenth-century microscopists, however, discovered a new world teeming with previously invisible entities, including protozoa, molds, yeasts, and bacteria, which were referred to as infusoria or animalcules. Antoni van Leeuwenhoek, a Dutch naturalist and the most ingenious of the pioneering microscopists, was quite sure that the “little animals” he discovered with his microscopes were produced by parents like themselves, but other naturalists took exception to this conclusion. Indeed, questions concerning the nature, origin, and activities of the infusoria were still in dispute well into the late nineteenth century. Studies of the infusoria suggested new experimental tests of the doctrine of spontaneous generation. In 1718 Louis Joblot published an illustrated treatise on the construction of microscopes that described the tiny animals found in various infusions. Following the precedent established by Redi, Joblot attempted to answer questions about the spontaneous generation of infusoria by a series of experiments. Joblot compared flasks of nutrient broth that had been covered or uncovered after boiling. When he found infusoria in the open flask, but not in the sealed vessel, he proved that the broth in the sealed flask could still support the growth of infusoria by exposing it to the air. Supporters of the doctrine of spontaneous generation, such as the French naturalist Georges-Louis Leclerc, comte de Buffon, and the English microscopist John Turberville Needham, attacked Joblot’s methods and conclusions. In their hands, flasks of nutrient broth produced infusoria under essentially all experimental conditions. Many seventeenth- and eighteenth-century naturalists regarded spontaneous generation as a dangerous, even blasphemous, materialistic theory and challenged the claims made by Needham and Buffon. A series of experiments conducted by the Italian physiologist Lazzaro Spallanzani raised questions about Needham’s experimental methods. According to Spallanzani, the infusoria that appeared under various experimental conditions entered the vessels through the air. Advocates of spontaneous generation argued that Spallanzani’s attempts to sterilize his flasks had destroyed the “vital force” ordinarily present in broths containing organic matter. Reflecting on the status of biology in the 1860s, the German embryologist Karl Ernst von Baer suggested that his studies of the stages in the development of the mammalian embryo from the egg were an important factor in diminishing support for the doctrine of spontaneous generation. Nevertheless, although he regarded spontaneous generation as “highly problematic,” he did not think the question had been unequivocally settled. During the nineteenth century, the design of experiments for and against spontaneous generation became increasingly sophisticated, as proponents of the doctrine challenged the universality of negative experiments. The debate about spontaneous generation became part of the battle over evolutionary theory and the question of biogenesis, the origin of life. Microorganisms were also at the center of great debates about medicine, surgery, and the origin and dissemination of disease and infection. 
The great French chemist Louis Pasteur entered the battle over spontaneous generation through his studies of fermentation and the stereochemistry of organic crystals. Pasteur often argued that microbiology and medicine could only progress when the idea of spontaneous generation was totally vanquished. Many of Pasteur’s experiments were designed to refute the work of Félix-Archimède Pouchet, a respected French botanist and zoologist, the champion of a doctrine of spontaneous generation called heterogenesis. Pasteur also challenged the work of Henry Charlton Bastian, an English pathological anatomist, who claimed to have evidence for archebiosis, the production of life from inanimate matter.
The spontaneous generation debate was important to Pasteur for professional and political reasons, although he knew that it was logically impossible to demonstrate a universal negative; that is, one cannot prove that spontaneous generation never occurred, never occurs, or will never occur. Indeed, advocates of spontaneous generation have argued that some form of the doctrine is necessarily true in the sense that if life did not always exist on Earth, it must have been spontaneously generated when the planet was very young. In practice, Pasteur’s experiments were designed to demonstrate that microbes do not spontaneously arise in properly sterilized media under conditions prevailing today. Nevertheless, Pasteur and his disciples regarded the spontaneous generation doctrine as one of the greatest scientific debates of the nineteenth century. —LOIS N. MAGNER
Viewpoint: Yes, prior to the nineteenth century, many scientists believed that infusoria are produced by spontaneous generation.
The belief that life can spontaneously generate from nonliving matter has had a great deal of intellectual appeal for many centuries. Certain philosophical approaches and scientific theories have encouraged support for spontaneous generation. This is particularly true of theories that have emphasized the existence of vital forces within nature, perceiving the whole of nature,
living and nonliving, as having a basic unity. Depending on the strength or weakness of such approaches to the understanding of nature, support for spontaneous generation has fluctuated over the last few centuries. In the nineteenth century, against the background of scientific, religious, and social transformation, debate over spontaneous generation reached intense levels, as the leading scientists of the age used the issue to establish their own scientific authority. Definition The term “spontaneous generation” refers to the theory that certain forms of life are generated from other, nonliving materials, rather than being reproduced from living members of their own species. As with any scientific debate, the definition of different terms was important to the people involved in the controversy. Spontaneous generation covered a variety of different beliefs about the way in which life forms come into being. For example, “heterogenesis” referred to the belief that life could spontaneously generate from organic matter, and “abiogenesis” referred to the more radical belief that life could be formed from inorganic matter. In addition, the definition of life itself and the nature of the living entities that were supposedly being generated was an issue of intense discussion. As the world of microscopic life was opened to human observation for the first time, words such as “bacteria,” “molecules,” “infusoria,” “animalcules,” and “germs” were all used to describe the new forms of life being discovered. The definition of these terms was not clear, and meanings differed from theory to theory and from scientist to scientist. As some historians have pointed out, it was the very authority to define such terms that was often at stake in debates over the nature of life and its generation.
KEY TERMS
ABIOGENESIS: The theory that life could be formed from inorganic matter, without parents.
ENDOSPORE: A spore, or reproductive body, developed within bacteria cells.
EVOLUTION: The theory that all species on Earth were not created as they are now, but evolved from earlier forms of life over millions of years.
GERM: Until the late 1870s, the word referred to the precursor of a microorganism and not to the microorganism itself, as it does today.
HETEROGENESIS: The theory that life could be formed, without parents, from organic matter.
INFUSION: A liquid extract obtained by steeping a substance in water.
INFUSORIA: Microscopic forms of life, also known as animalcules, molecules, bacteria, microorganisms, etc.
MEDIUM: A sterilized nutrient substance used to cultivate bacteria, viruses, and other organisms.
PLANT GALLS: The swelling of plant tissue caused by insect larvae or fungi. For centuries, these galls were thought to be spontaneously generated.
PUTREFACTION: The decomposition of organic matter caused by bacteria or fungi.
SPONTANEOUS GENERATION: The theory that living organisms can be spontaneously created from dead or inorganic matter under certain specific circumstances; for example, placing dirty clothes in a container with wheat or cheese could lead to the spontaneous creation of mice. Its adherents were divided into three groups: those who held that inorganic substances could generate living organisms (abiogenesis); those who believed that degenerating organic material was necessary for the process (heterogenesis); and those who hypothesized that both mechanisms could produce life.
VITALISM: The philosophical approach that emphasized the existence of vital forces and powers in nature.
From Aristotle to the Eighteenth Century Beliefs about the spontaneous generation of life have existed for centuries. In Greek philosophy, Aristotle claimed that certain lower forms of life such as worms, bees, wasps, and even mice were produced as a result of heat acting on mud, slime, and manure. Maggots on meat were believed to have spontaneously generated from the rotting flesh. Such beliefs persisted from antiquity into the seventeenth century, when the Flemish physician and chemist Jan Baptista van Helmont claimed to have produced the spontaneous generation of mice from a mixture of dirty rags and wheat. However, spontaneous generation fell out of favor toward the end of the seventeenth century, as religious and philosophical ideas changed. The Italian natural philosopher Francesco Redi showed through a series of experiments that maggots were not spontaneously generated, but were hatched from eggs laid on meat by flies. New ideas about the generation of life developed, and those who advocated these ideas attacked spontaneous generation in order to defend their own theories. The area of contention moved from larger forms of life, such as mice and insects, to the new dimensions of microscopic life that had been recently discovered under the microscope of the Dutch naturalist Antoni van Leeuwenhoek. The teeming world of infusoria and animalcules, so new and mysterious, proved amenable to ideas about the creation of life through spontaneous generation. Through the microscopes of eighteenth- and nineteenth-century scientists, these forms of life appeared so simple that the distinction between them and nonliving matter was blurred.
It appeared reasonable to believe that such forms of life could be spontaneously generated, as it seemed unlikely that they themselves would have the complexity required to reproduce. Other philosophical developments helped to encourage support for spontaneous generation. The growing influence in the late eighteenth century of Newtonian physics, with its emphasis upon dynamic forces in nature, also encouraged belief in spontaneous generation. Such vitalist approaches supported the contention that it was possible for particles of nonliving matter to be rearranged so that life could be generated. The naturalists John Needham and Georges-Louis Leclerc, comte de Buffon, were the most notable advocates of spontaneous generation in the eighteenth century. Needham conducted several experiments in which he heated meat gravy in sealed containers to kill off any microorganisms, and then observed the regrowth of microscopic life in the gravy. If, as opponents of spontaneous generation claimed, all life came from life, then how did one explain the appearance of life where none existed? For Needham and others, the answer was obviously spontaneous generation. However, those who supported rival theories about the formation of life, such as Lazzaro Spallanzani, conducted similar experiments and obtained different results. Spallanzani claimed to have been able to prevent life from growing in sealed sterilized vessels, indicating that spontaneous generation did not occur and that life could only be produced from other living entities. Scientists on both sides of the debate made accusations about their opponents' experimental methods, but at this stage none of the experiments was generally accepted as providing conclusive evidence and the matter remained unresolved.

The Nineteenth-Century Debates During the nineteenth century, debate over the issue of spontaneous generation took place against a variety of local philosophical and religious circumstances. In Germany in the early 1800s, spontaneous generation found favor among those who were part of the philosophical movement known as Naturphilosophie. Within this worldview, the spontaneous generation of life was an expression of the essential unity of all things living and nonliving. In mid-nineteenth-century France, theories involving spontaneous generation came to be associated with materialism, atheism, and radical social and political thought. If life could be spontaneously generated, then there appeared to be no need for God as the Creator of all living things. As a result, support for spontaneous generation was regarded by many people as a rejection of orthodox religion, and therefore an attack on all forms of social and political authority in France.

Antoni van Leeuwenhoek (Archive Photos, Inc. Reproduced by permission.)
This was the background to the mid-century debates between Félix-Archimède Pouchet and Louis Pasteur. In 1858 Pouchet began to publish the results of experiments that he claimed proved spontaneous generation could occur. Pouchet was not a social or political radical, and tried to show that belief in spontaneous generation was in accord with orthodox religious belief. However, the conservative opponents of spontaneous generation in France denounced his beliefs as heretical and atheist, and it was important to members of the conservative establishment that Pouchet’s conclusions be proved wrong. Again, the experiments focused on whether microscopic forms of life would grow in solutions in sealed containers, heated to temperatures that would kill all forms of life, and then left to cool. Pouchet was able to consistently produce life in his sealed and sterilized infusions, and his conclusion was that they must have been spontaneously generated from the nonliving organic matter in the vessels. It was generally believed that no forms of life could survive the temperatures to which Pouchet heated his infusions. Pouchet was also sure his experimental technique ensured that the solutions were not contaminated. Therefore, there seemed no other way to interpret the results other than to allow the existence of spontaneous generation. At this stage, Louis Pasteur, the greatest experimental scientist of his era, entered the
debate on spontaneous generation from his work on fermentation. Pasteur had come to the conclusion that fermentation and putrefaction in substances were the result of microbes that were present in the air. Therefore, Pasteur was sure the life forms that had grown in Pouchet’s infusions were not spontaneously generated there, but were caused by germ-carrying dust that had contaminated the experiments. Life could only be produced from life after all. For Pasteur, his debate with Pouchet over spontaneous generation was an ideal opportunity to convince the public of his theory about the presence of germs in the air, and their role in the process of fermentation and putrefaction. The issue of spontaneous generation inevitably became caught up with support or opposition to Pasteur’s germ theory. To many observers, the idea that life could be spontaneously generated appeared to be less fantastic than Pasteur’s theory that the air was filled with microscopic living entities. As Pouchet and others pointed out, if that were the case, surely the air would be so foggy with these life forms as to be impenetrable!
Darwinism and Spontaneous Generation At the same time as the Pouchet-Pasteur debate, the issue of spontaneous generation came to the fore in England, against the background of controversy over Darwinian evolution. The publication of Charles Darwin's On the Origin of Species by Means of Natural Selection (1859) generated an enormous amount of public debate and conflict. His theory that current species were not created as they now exist, but evolved from earlier life forms over thousands of years, was enormously controversial. It was particularly significant because many people, both supporters and opponents, interpreted Darwinism as providing support for spontaneous generation. Indeed, spontaneous generation appeared necessary to Darwinism, if one was to maintain a purely naturalistic explanation of the beginnings of life. If all species had evolved from a few early life forms, then from what had the very earliest forms of life themselves evolved? Unless one believed that life on Earth had always existed, or invoked some kind of Divine Creation, then it was necessary to believe that life had been spontaneously generated from nonliving matter at least once in the far distant past. To allow this belief was to then admit the possibility that spontaneous generation could still occur, given the right conditions. In the swirl of debate and ideas that surrounded Darwinism, one of the clearest outcomes was that there was no longer an unbridgeable gap between living and nonliving matter in the minds of many people. Spontaneous generation seemed to be necessary in light of evolution's unified concept of nature, a necessary part of the chain that bound together the lifeless and the living.

Henry Charlton Bastian was the most prominent and determined advocate of spontaneous generation in England. Bastian was a supporter of Darwinian evolutionary theory, and one of those who interpreted the theory as lending support to spontaneous generation. During the 1860s and 1870s, Bastian carried out many experiments that he felt showed spontaneous generation did occur. Once again, his proof hinged on the growth of microorganisms in infusions that had been sealed and heated to high temperatures. When they cooled, if signs of life did appear in the infusion, then Bastian interpreted this as the result of the nonliving matter rearranging its most basic components to create new life. Bastian also argued from analogy, comparing the appearance of specks of life in suitable fluids where no life had previously existed to the formation of specks of crystals in other fluids. Opponents attacked Bastian's experimental method, claiming that his sloppy techniques must have led to the contamination of his containers with microorganisms from the outside.

In addition to the connection between Darwinism and spontaneous generation, opposition to Pasteur's germ theory was also behind Bastian's determination to prove that spontaneous generation could occur. Many of those who supported Bastian in England were doctors, and Bastian himself was a neurologist and a professor of pathological anatomy. They were opposed to Pasteur's germ theory because of the enormous consequences it would have for medical theory and practice. They were more willing to believe that the life forms in Bastian's infusions were spontaneously generated because to admit that they were the product of airborne germs would lead to an understanding of disease and its treatment that they believed was misguided and unacceptable. It was also a matter of professional territorial protection. Many physicians resented the intrusion of the new upstart bacteriologists and experimental scientists into the domain of medicine and the understanding of disease. Therefore, support for spontaneous generation had its source in a variety of different approaches and motivations. The belief that spontaneous generation could occur was an expression of an approach to the world that emphasized the essential unity between all things in nature, living and nonliving. It also required the belief that nonliving matter contained some kind of vital force or energy that could lead to the creation of living entities. In the nineteenth century, the issue of spontaneous generation also became caught up in debate over two of the most significant theories in the history of science—Darwinism and Pasteur's germ theory. For very different reasons, both encouraged the support of the belief that the
spontaneous generation of microscopic life forms could occur. Thus, the theory of spontaneous generation was entangled in the most crucial scientific debates of the nineteenth century, which helps to explain why it was such a burning issue. —KATRINA FORD
Viewpoint: No, experiments by Louis Pasteur in the nineteenth century confirmed that infusoria are not produced by spontaneous generation. Pre-Nineteenth-Century Background Although the concept of spontaneous generation was an old one dating back to the ancient Greeks, by the late seventeenth and early eighteenth centuries few European naturalists still believed that plants and animals were created from dead or inorganic matter. Observations and experiments had thoroughly discredited the idea. For example, Francesco Redi demonstrated in 1668 that maggots were not spontaneously generated by rotting meat as most people assumed, but rather developed from eggs laid by flies on the meat. Experiments such as Redi’s and those of Marcello Malpighi on plant galls, which were also believed to be spontaneously generated, led to an acceptance that larger life forms could only arise from other living things.
However, with Antoni van Leeuwenhoek’s development of the microscope, defenders of spontaneous generation received a boost to their position. Using the microscope, Leeuwenhoek and others immediately discovered the existence of countless hitherto unknown and excessively small creatures that seemed to appear out of nowhere. Originally called animalcules, they were especially likely to be found in infusions of hay and other organic substances. Was it not possible that these infusoria were spontaneously generated in these infusions, particularly in fermenting or putrefying fluids? Did spontaneous generation occur with such primitive life forms? During the eighteenth century, the controversy became intense. The opponents of spontaneous generation demonstrated that filtering, boiling, or chemically altering a medium could often prevent the appearance of infusoria. On the other hand, defenders of the concept could produce evidence based on experiments where precautions were taken to prevent outside contamination, and yet the organisms still appeared. The culmination of these eighteenth-century debates was the one between John Needham and Lazzaro Spallanzani, a debate that, in its
essentials, foreshadowed those between Louis Pasteur and his opponents a century later. Needham, who was supported in his arguments by the famous zoologist Georges-Louis Leclerc, comte de Buffon, claimed that microscopic organisms developed in infusions that had previously been sterilized by heat. Spallanzani repeated Needham's experiments, but sealed his flasks before heating the infusions. When no organisms developed, he correctly concluded that those observed by Needham had come from the air. Spallanzani argued that this demonstrated that every organism had to have a parent, even the tiny animalcules. There was no spontaneous generation. Needham challenged this conclusion, however, by asserting that Spallanzani's prolonged heating had altered the "vegetative force" in the infusion and had destroyed the air in the flask so that no life could be spontaneously generated in such conditions. In essence, the debate remained mired at this point for the next century. The Pasteur-Pouchet Debate In the middle third of the nineteenth century, a series of experiments by Franz Schulze (1836), Theodor Schwann (1837), Heinrich Schröder (1854), and Theodor von Dusch (1859) further demonstrated that what were now called the germs of the microbial life that caused fermentation and putrefaction in infusions were introduced from the air. Collectively, these experiments strongly suggested that the germs of microorganisms already existed, were airborne, and were not spontaneously generated by the infusions themselves. But occasionally, in substances such as milk and egg yolks, these experimenters could not prevent the formation of infusoria, thus encouraging the adherents of spontaneous generation.
The culminating debates over spontaneous generation began in 1858 when Félix-Archimède Pouchet presented a paper to the Académie des Sciences, France's highest scientific body, in which he claimed to have produced spontaneous generation under carefully controlled conditions that allowed no chance of outside contamination. As the director of the Muséum d'Histoire Naturelle at Rouen, Pouchet was well known to the public through his books of science popularization. He was also a respected naturalist at the height of his career, a man who had made several valuable contributions to science. Thus, his defense of spontaneous generation drew considerable interest from both the scientific community and the public, especially after the publication in 1859 of his lengthy book Hétérogénie ou traité de la génération spontanée basé sur de nouvelles expériences (Heterogenesis: A treatise on spontaneous generation based on new experiments), in which he
repeated his claim that life could originate spontaneously from lifeless infusions. Pouchet’s activities caused the Académie des Sciences to offer a prize for the best experiments that could help resolve this controversy over spontaneous generation. Interest in the question was extraordinarily high not only for purely scientific reasons, but also because there were political and religious overtones to the debate. France in this period was controlled by the extremely conservative dictatorship of Napoleon III, who came to power after bitter social warfare in 1848. He was strongly supported by the Catholic Church and by all individuals who feared another social revolution. Those defending spontaneous generation seemed to imply that acts of creation could and did occur without God’s intervention. Since the Catholic Church and its teachings were regarded as a bulwark against materialism, socialism, atheism, and revolution, anyone attacking the concept of spontaneous generation could count on favorable backing from the church, social leaders, and the government. Pouchet’s claim was challenged by Louis Pasteur, a chemist who had just completed several years of work on fermentation, demonstrating that the process was caused by microorganisms. Since some scientists argued to the contrary that the microorganisms were the product of fermentation rather than its cause, Pasteur’s interest in the debate over spontaneous generation was a natural one. He assumed that just as microorganisms caused fermentation, germs of microorganisms caused what appeared to be spontaneous generation. The Pouchet-Pasteur debate was especially bitter. Pasteur was a relatively young and extremely ambitious man who craved the recognition and support of the public, as well as of the French scientific establishment. Attacking spontaneous generation would garner him that support. Extremely pugnacious by nature, Pasteur never tolerated any questions about the validity of his hypotheses. In addition, he was one of the most brilliant and careful experimenters in the history of science. Pouchet was clearly overmatched in this controversy.
Louis Pasteur (© Hulton-Deutsch Collection/CORBIS. Reproduced by permission.)
In the period from February 1860 to January 1861, Pasteur presented five short papers to the Académie des Sciences detailing his experiments on the spontaneous generation question. These papers were expanded into his 1861 essay “Mémoire sur les corpuscles organisés qui existent dans l’atmosphère” (Report on the organic corpuscles which exist in the air), which won him the Académie’s prize. His purpose in these experiments was to demonstrate that ordinary air contained living organisms (“germs” in his terminology) and that they alone had the ability to produce life in infusions. In other words, there was no spontaneous generation; living things developed from other living things and, in this case, the air itself conveyed the germs. Deprive the infusions of airborne germs, and no microorganisms will form.
In these papers, Pasteur described a number of experiments in which he boiled sugared yeast water to create a sterile but nutrient liquid. He then introduced into the sealed flasks containing this medium air that had also been sterilized, either by various filtration methods or by heating at a high temperature. Under these conditions, no microorganisms formed in the medium. But when he subsequently introduced ordinary air into these flasks, the liquid soon swarmed with microbial life. In a series of control flasks in which the liquid was sterilized but then exposed to the air, microbes almost always developed. To confirm his contention that it was not spontaneous generation but rather airborne germ-laden dust that caused the appearance of the microorganisms, Pasteur devised what became his most famous experiments, those involving the “swan-necked” flasks. These flasks contained sugared yeast water, with long, narrow necks drawn out and bent in several directions, some curving downward and then up. They were not sealed; air could enter the flasks slowly but freely. After boiling the liquid in them, Pasteur placed the flasks in areas without heavy air currents. The air entered the necks slowly enough to allow gravity to trap the germ-laden dust in the curves. No microorganisms developed in the medium. In those swan-necked flasks where no boiling had occurred, microorganisms grew since the germ-laden air was already present before the necks were drawn out. If the curved necks of the sterile flasks were broken off allowing the air to enter quickly, microbial life soon appeared, but if the swan necks were not removed, the liquid remained sterile indefinitely, even though dust-free air was entering the flasks. What all this demonstrated, Pasteur insisted, was that what appeared to be spontaneous generation was really the result of germs carried by airborne dust.
In other papers in this series, Pasteur demonstrated that the density of germs in the air varies with environmental conditions, air movement, and altitude. The latter factor became especially critical in pushing the debate forward. Pasteur claimed that he had opened sealed, sterilized flasks of yeast extract and sugar at various elevations on mountains. The higher he went, the less contamination he encountered. Of the twenty flasks he opened at an elevation of over a mile, only one developed microbial life. In his “Mémoire” of 1861, Pasteur not only elaborated on his own experiments, but also on a flaw he had found in Pouchet’s experiment. Pouchet argued in favor of spontaneous generation because he could produce microorganisms by adding sterilized air, under a blanket of mercury, to boiled hay infusions. Pasteur showed, however, that the surface of mercury in laboratory troughs was covered with germ-bearing dust. Thus, Pasteur contended, in conducting his experiments Pouchet had introduced these germs into his infusions.
John Tyndall (The Library of Congress.)
Pasteur clearly had the upper hand in the debate. He was, however, open to attack because his opponents argued, as had Needham a century earlier, that heating somehow modified or destroyed some basic condition necessary for spontaneous generation to occur. In April 1863 Pasteur announced that he had taken blood and urine (both substances rich in nutrients) from living animals and preserved them free from microbial growths without having heated them, but by simply protecting them from germ-laden air. Although his victory seemed complete, late that same year Pouchet announced that he had duplicated Pasteur’s experiments by exposing sterilized hay infusions at high altitudes. Contrary to Pasteur’s results, all of Pouchet’s flasks quickly developed microorganisms. Admitting oxygen, Pouchet claimed, caused spontaneous generation. Pasteur countered by asserting that Pouchet must have somehow introduced airborne germs into the infusions. In January 1864 the Académie des Sciences appointed a commission, whose members were friends and defenders of Pasteur, to decide the issue. Ironically, both Pasteur and Pouchet were operating under what proved to be two related and false assumptions. Both assumed that boiling water killed all living organisms or, in Pasteur’s terminology, their germs. Neither had any idea of the thermal resistance of some microorganisms, such as the hay bacillus endospore. Thus, they both assumed that Pasteur’s sugared yeast water and Pouchet’s hay infusions were equivalent substances in these experiments. Pasteur strengthened his case in a brilliant, if hardly objective, public lecture he gave in April 1864 at the Sorbonne to an audience of the scientific, political, social, and cultural elite of France. After dramatically tracing the history of the dispute, he concluded, “The spontaneous generation of microscopic beings is a mere chimera. There is not a single known circumstance in which microscopic beings may be asserted to have entered the world without germs, without parents resembling them.” He was also careful to emphasize that the doctrine of spontaneous generation threatened the very foundation of society by attacking the idea of a “Divine Creator.” In June 1864 Pouchet, undoubtedly realizing that the commission members were biased in Pasteur’s favor, withdrew without duplicating his experiments. Not surprisingly, the commission then announced in Pasteur’s favor; that is, against spontaneous generation.
Aftermath of the Debate This decision essentially ended the debate in France. Pouchet published no new material on the topic and Pasteur moved on to study other problems, such as silkworm diseases. The dispute, however, was not quite over. It shifted to England, where it became entwined with Darwinian theories. Some evolutionists there argued that spontaneous generation and evolution were in fact linked theories that could strengthen each other. In 1872 a respected physician and naturalist named Henry Charlton Bastian published an immense two-volume work, The Beginnings of Life: Being Some Account of the Nature, Modes of Origin, and Transformation of Lower Organisms. In this book and in other writings, Bastian insisted on the existence of spontaneous generation. Pasteur himself was only briefly involved in a debate with Bastian. Between July 1876 and July 1877, the two fought over Bastian’s claim that, under certain circumstances, microbial life originated spontaneously in sterilized urine. Pasteur argued that Bastian had somehow contaminated his experiments and invited Bastian to present the dispute to a commission of the Académie des Sciences. Bastian at first agreed, but like Pouchet before him, eventually withdrew without appearing before the group.
However, Bastian’s main opponent turned out to be John Tyndall, an Irish physicist whose Floating Matter in the Air in Relation to Putrefaction and Infection (1881) and earlier works defended the theory that microorganisms were carried on airborne dust. Tyndall also devised glycerine-coated chambers in which scattered light beams revealed the presence of microscopic organic matter in the air. Along with Ferdinand Cohn, Tyndall proved that the hay bacillus could survive many hours in boiling water. Thus, Pouchet’s spontaneous generation was really caused by endospores he himself had introduced into his hay infusions. After Tyndall, it was very difficult to espouse the theory of spontaneous generation. Meanwhile, both Joseph Lister and John Burdon Sanderson established that Pasteur’s “corpuscles” were not the germs of microorganisms but were rather the fully developed microorganisms themselves. After the work of these men, fewer and fewer supporters of spontaneous generation appeared; with Bastian’s death in 1915 they became extraordinarily scarce.
It is interesting to note that neither Pasteur nor Tyndall nor anyone else has ever devised an experiment proving that spontaneous generation is not possible. What they did accomplish was to demonstrate that spontaneous generation had never been shown to occur. In terms of formal logic, it is impossible to prove that spontaneous generation can never take place. Nevertheless, scientists today, who operate on the basis of experimental research and not formal logic, all adhere to the germ theory of life. There are at least three reasons for this. First, no claim for a case of spontaneous generation has ever been proven. Secondly, the use of pure cultures in scientific research has demonstrated countless times that the only life that develops in these cultures is that which is placed there. Lastly, the germ theory, not spontaneous generation, has proven to be the basis of all modern microbiology. —ROBERT HENDRICK
Further Reading
Bastian, Henry Charlton. The Modes of Origin of the Lowest Organisms. New York: Macmillan, 1871.
Conant, James Bryant. “Pasteur’s and Tyndall’s Study of Spontaneous Generation.” In Harvard Case Histories in Experimental Science, ed. James Bryant Conant. Vol. 2, 487–539. Cambridge, Mass.: Harvard University Press, 1957.
Crellin, J. K. “Airborne Particles and the Germ Theory: 1860–1880.” Annals of Science 22, no. 1 (1966): 49–60.
———. “Félix-Archimède Pouchet.” In Dictionary of Scientific Biography, ed. Charles Coulston Gillispie. Vol. 11, 109–10. New York: Scribner, 1975.
De Kruif, Paul. Microbe Hunters. New York: Blue Ribbon Books, 1930.
Dubos, René. Louis Pasteur: Free Lance of Science. New York: Da Capo Press, 1960.
Farley, John. “The Social, Political, and Religious Background to the Work of Louis Pasteur.” Annual Review of Microbiology 32 (1978): 143–54.
———. The Spontaneous Generation Controversy from Descartes to Oparin. Baltimore: Johns Hopkins University Press, 1977.
Farley, John, and Gerald L. Geison. “Science, Politics, and Spontaneous Generation in Nineteenth-Century France: The Pasteur-Pouchet Debate.” Bulletin of the History of Medicine 48 (1974): 161–98.
Fry, Iris. The Emergence of Life on Earth: A Historical and Scientific Overview. New Brunswick, N.J.: Rutgers University Press, 2000.
Geison, Gerald L. “Louis Pasteur.” In Dictionary of Scientific Biography, ed. Charles Coulston Gillispie. Vol. 10, 350–416. New York: Scribner, 1974.
———. The Private Science of Louis Pasteur. Princeton, N.J.: Princeton University Press, 1995.
Magner, Lois N. A History of the Life Sciences. 2nd ed. New York: Marcel Dekker, 1994.
Pasteur, Louis. “On Spontaneous Generation.” An address delivered by Louis Pasteur at the Sorbonne Scientific Soirée of 7 April 1864.
Singer, Charles. A History of Biology to about the Year 1900: A General Introduction to the Study of Living Things. 3rd and rev. ed. New York: Abelard-Schuman, 1959.
Strick, James E. Sparks of Life: Darwinism and the Victorian Debates over Spontaneous Generation. Cambridge, Mass.: Harvard University Press, 2000.
Vallery-Radot, René. The Life of Pasteur. Trans. R. L. Devonshire. New York: Dover, 1960.
Have sociobiologists proved that the mechanisms of the inheritance and development of human physical, mental, and behavioral traits are essentially the same as for other animals? Viewpoint: Yes, sociobiologists led by E. O. Wilson have offered convincing evidence that the mechanisms of the inheritance and development of human physical, mental, and behavioral traits are essentially the same as for other animals. Viewpoint: No, sociobiologists fail to account for many observable phenomena and invariably support a version of biological determinism.
In popular usage, the term sociobiology is used for the assumption that all mechanisms that account for the inheritance and development of physical, mental, and behavioral traits are essentially the same in humans and other animals. According to Edward O. Wilson, author of Sociobiology: The New Synthesis (1975), sociobiology is “the systematic study of the biological basis of all social behavior.” Wilson, the Pellegrino University Research Professor at Harvard, was widely recognized as an authority on the social insects. In the first chapter of his controversial book, entitled “The Morality of the Gene,” Wilson noted that, so far, such studies had necessarily focused on animal society, but he predicted that the discipline would eventually encompass the study of human societies at all levels of complexity. Indeed, he thought that sociology and the other social sciences, including the humanities, would be incorporated into the “Modern Synthesis,” that is, neo-Darwinist evolutionary theory. Up to the 1970s, the central theoretical problem in sociobiology had been determining how altruism, self-sacrificing behavior which could lead to injury or death, could have evolved by natural selection. The answer, Wilson asserted, was kinship. Genes that led to altruistic behaviors were selected over time because altruistic acts by a member of a kinship group increased the survival of such genes in future generations. The word sociobiology was used as early as 1949 by the American zoologist Warder C. Allee and his associates in Principles of Animal Ecology. The kinds of questions addressed by sociobiology, however, were alluded to in Charles Darwin’s On the Origin of Species by Means of Natural Selection (1859), and The Descent of Man, and Selection in Relation to Sex (1871). In the first book, Darwin merely hinted that his ideas might throw some light on the origins of human beings; in the second, he analyzed the implications of human evolution as a purely biological process. Despite the paucity of evidence available at the time, he concluded that “man is descended from a hairy, tailed, quadruped, probably arboreal in its habits.” Darwin reasoned that the investigation of behaviors shown by animals, such as curiosity, memory, imagination, reflection, loyalty, and the tendency to imitate, could be thought of as the precursors of human characteristics. In 1872 Darwin elaborated on this concept in The Expression of the Emotions in Man and Animals, a work that established the foundations of modern research in ethology (animal behavior)
and ethnology (comparative anthropology). Essentially, Darwin argued that the evolution of behavior, like the evolution of the physical components of the body, is subject to the laws of inheritance and selection. The idea that cooperation and altruism were significant factors in evolutionary change was proposed by Russian geographer (and revolutionary) Peter Kropotkin in Mutual Aid (1902). He suggested that evolution must have produced the instincts that ruled the behavior of social insects and the wolf pack. Until the 1960s scientists generally ignored Darwin’s arguments about sexual selection and female “choice” in mating as a significant aspect of the evolution of apparently nonadaptive traits, but some aspects of the concept were revived in the 1960s. In particular, sexual selection was invoked as a means of providing Darwinian explanations for the evolution of traits, such as altruism, cooperation, and sexual ornamentation, that might be seen as counterproductive in the struggle for existence. The establishment of ethology as a new science is primarily associated with the Austrian zoologist Konrad Lorenz, who shared the 1973 Nobel Prize in Physiology or Medicine with the Dutch-born British zoologist Nikolaas Tinbergen, and another Austrian zoologist, Karl von Frisch, “for their discoveries concerning . . . individual and social behavior patterns.” Like sociobiology, ethology has often been associated with controversial political and social assumptions about the nature of learning and inheritance. Sociobiology developed from studies in population biology and genetics, in conjunction with research on the social insects, especially ants and honeybees. The theoretical basis of sociobiology is generally attributed to papers published by the British evolutionary biologist William Donald Hamilton in the Journal of Theoretical Biology (1964). In “The Genetical Evolution of Social Behaviour,” Hamilton established the concept of inclusive fitness. Essentially, this concept emphasizes the survival of genes, as opposed to the survival of individuals, by means of the reproductive success of relatives. The concept works particularly well for the social insects, where all the workers born of the same queen are full sisters, and only the queen reproduces. As developed by Wilson and his followers, however, sociobiology purports to involve the study of all social species, including humans. By following his theory from the social insects to human beings in the last chapter of Sociobiology, “Man: From Sociobiology to Sociology,” Wilson created a well-publicized, and even acrimonious, controversy. In the twenty-fifth anniversary edition of Sociobiology, Wilson said that the chapter on human behavior had “ignited the most tumultuous academic controversy of the 1970s.” Primarily, Wilson was accused of advocating a modern version of the concept of “biological determinism,” although he denies saying that human behavior is wholly determined by the genes. Many critics of sociobiology have called it a pseudoscience that promotes racism and sexism. The British zoologist Richard Dawkins, however, expanded on Hamilton’s concept in his well-known book The Selfish Gene.
One of the first public critiques of Wilson’s Sociobiology was a letter published in The New York Review of Books. The cosigners of the letter were members of a group called the Sociobiology Study Group, which included two of Wilson’s colleagues in the same department at Harvard, Richard C. Lewontin and Stephen J. Gould. Nevertheless, sociobiology eventually evolved into an interdisciplinary field generally known as evolutionary psychology, which has attracted many anthropologists, psychologists, cognitive scientists, geneticists, economists, and so forth. The Adapted Mind: Evolutionary Psychology and the Generation of Culture (1992), edited by Jerome H. Barkow, Leda Cosmides, and John Tooby, provides a valuable exposition of this academic field. Critics argue that the basic concepts of evolutionary psychology are fundamentally the same as those of sociobiology. —LOIS N. MAGNER
Viewpoint: Yes, sociobiologists led by E. O. Wilson have offered convincing evidence that the mechanisms of the inheritance and development of human physical, mental, and behavioral traits are essentially the same as for other animals.
Human beings have long valued the concept of freedom, which Merriam-Webster’s Collegiate Dictionary defines as “the absence of necessity, coercion, or constraint in choice or action.” If we are not free, we are bound by some type of restraint(s), in effect, slaves to forces quite possibly beyond our control. The loss or perceived loss of freedom in any context usually results in a maelstrom of debate. When sociobiology declared that human behaviors such as altruism, aggression, and even the choice of a mate have biological and genetic roots, a large outcry declared that humanity was being reduced to sophisticated robots, preprogrammed to love, hate, be kind, or act selfishly. Even segments of the scientific community, which largely agrees that such is the case in other “lower” animals, joined in the attack.
KEY TERMS
ALLELE: Alternative forms of a gene that may occur at a given locus.
EUGENICS: Approach to improving the hereditary qualities of a race or breed by selection of parents based on their inherited characteristics.
EVOLUTION: In biological terms, evolution is the theory that all life evolved from simple organisms and changed throughout vast periods of time into a multitude of species. The theory is almost universally accepted in the scientific community, which considers it a fundamental concept in biology.
GENE: Cellular component that determines inherited characteristics. Genes can be found in specific places on certain chromosomes.
GENETIC DETERMINISM: Unalterable traits inherited through genes.
GENETIC MUTATION: Permanent transmissible change in the genetic material, usually in a single gene.
GENOME: The total set of genes carried by an individual or a cell.
HUMAN GENOME PROJECT: The worldwide effort to sequence all the genes in the human body.
NATURAL SELECTION: Process in which certain individuals (organisms, animals, humans) who are best suited for their environment and reproduction survive. Natural selection acts as an evolutionary force when those selected for survival are genetically different from those not selected.
PLEISTOCENE: Geologic time period, or epoch, in Earth’s history spanning 1.8 million to 10,000 years ago. Anthropologists have found evidence of the early ancestors of humans living during the Pleistocene. According to the theories of evolution, these early humans eventually developed into modern humans.
POPULATION BIOLOGY: Study of populations focusing on understanding the mechanisms regulating their structure and dynamics, including demography and population genetics.
POSTULATE: Hypothesis that is an essential presupposition or premise of a train of reasoning.
PREDETERMINE: To establish or make concrete beforehand.
PREDISPOSE: To incline beforehand; to give tendency to.
SOCIAL DARWINISM: Concept that applies Charles Darwin’s theories of evolution to society in that people ultimately compete for survival, which results in certain “superior” individuals, social groups, and races becoming powerful and wealthy. The theory has been criticized by sociologists for its failure to take into account social influences such as people born into wealth and powerful families, thus their social status and good fortune relies on social position and not natural superiority. Social Darwinism was no longer widely accepted by the turn of the twentieth century.
SOCIOBIOLOGY: Systematic study of the biological basis of all social behavior.
If Edward O. Wilson, a distinguished Harvard zoologist, had left out the last chapter of his groundbreaking 1975 book Sociobiology: The New Synthesis, there would have been little uproar over this new approach to studying the behavior of social animals. Evolution and natural selection (the various processes that lead to the “survival of the fittest”) had previously been applied primarily to physical characteristics in animals and humans. Wilson said they also affected animal behaviors or traits, such as instinctual parental behaviors to ensure offspring survival, as well as the survival of the species or group as a whole. This concept was not entirely new in terms of animal behavior. It was only because Wilson speculatively applied this perspective to humans in his final chapter that scientific, political, and religious factions cried out that sociobiology propounded “biological determinism,” which has been used to validate existing social arrangements as being biologically inevitable. In fact, sociobiology says nothing of the sort. No serious sociobiologist believes that biology and heredity totally determine human behavior. Wilson and other sociobiologists have made it quite clear that they believe human behavior results from both biological/genetic causes and environmental factors.
Sociobiology Helps Explain Human Nature Not all sociobiologists work from the same fundamental theoretical and methodological approaches. However, nearly all sociobiologists construct theoretical models to explain social behavior based on information collected by population biologists, who gather both demographic and genetic data to understand the mechanisms regulating the structure and dynamics of animal populations.
Regardless of the approach or theories, numerous examples offer insights into how sociobiology helps to explain human behavior. For example, considering the basic “survival of the fittest” aspect of evolution, it would seem that animals should be totally selfish in “looking out for number one” so they could survive and produce offspring who would also survive to reproduce. However, there are many examples of altruism in humans and other animal species. Crows often post a lookout to keep an eye out for predators and warn the rest of the flock that is foraging for food. However, the lookout is more exposed than the rest of the flock and its sound of alarm will also call attention to it. As a result, its behavior actually reduces its own individual fitness, or likelihood of survival, but it increases the overall group, species, or population fitness and ability to survive. The explanation lies in the concept of inclusive fitness, which says that animals also have an innate biological drive to pass on their genes both to direct offspring and to close relatives that have many of the same genes. This form of natural selection is called kin selection, and acts on both individuals and more extended families (as is often the case in flocks of crows). How does kin selection work on a genetic basis? An “altruism gene” or allele would increase a species’ likelihood of survival as a group if this altruistic trait of being willing to risk oneself for the group was passed on to successive generations.
Many of the behaviors that sociobiology looks to explain are intensely controversial, such as the “naturalness” of the long-time (or old-fashioned, if you will) roles of males and females in human society. Virtually all biologists believe that natural selection is intrinsic in evolution. As a result, sexual behavior is far too important to be left solely to chance. For example, on a biological or genetic basis, sexual attraction to healthy, vital, vigorous, attractive people as opposed to those who are sickly is partly due to biological programming to maximize our genetic success in producing healthy offspring. In addition, differences between the sexes manifest themselves in terms of the roles males and females play in certain aspects of mating and child rearing. Our ancient female ancestors, for example, could only produce a limited number of offspring just as today, but they also had to face greater health and physical dangers associated with pregnancy and birth. In addition, during pregnancy, they required more food (which was not readily available at the local supermarket) and were more susceptible to predators. So, choosing a mate that could provide for and protect them was far more important to females than to males, who could easily impregnate a female and then leave. Over the centuries, a highly specific form of sex-role specialization developed in which females “traditionally” (and that is a key word here) shoulder the larger burden, not only in bearing but also in rearing children.
In terms of self-interest, males and females have very different agendas concerning sex and producing offspring. Studies of “mating strategies” in many countries and cultures have found that men place a high value on physical attractiveness, which may indicate health and the success of producing offspring. On the other hand, women may value attractiveness, but they are more interested in status and income, which may help to ensure that their offspring will survive. This preference holds true in both literate (the United States, Nigeria, and Malaysia) and nonliterate (the Ache of Paraguay and the Kipsigis of Kenya) countries and societies.
Another behavior that can be explained in sociobiological terms is aggression. Throughout recorded human history, violence has been perpetrated more by men than women. For example, current statistics have shown that men killing other men is the cause of most violent deaths, and that sexual rivalry is often a major cause of violence. Male violence and murder directed toward women is also more common as opposed to women acting violently toward men. This predominance of control through aggression in men can partly be explained because, in terms of absolutes, a male can never be certain without genetic testing that a child is his, while a female who bears a child knows with absolute certainty that the child belongs to her. As a result, men have often used force or violence to help prevent their women from engaging in sexual relations with other men, thus ensuring that their genes are passed on for survival. In addition, most societies have traditionally placed a greater social stigma on women engaging in sex with more than one man than on men having sex with more than one woman.
Statistics on family violence also show that stepfathers are seven times more likely to abuse their children than biological fathers, and that fatal abuse by stepfathers is 100 times higher. This behavior is common in many countries and societies, from the United States to the Yanomamo Indians of Venezuela. Furthermore, research has shown that other factors, such as ethnicity, religion, education, and socioeconomic status, do not account for these statistics. The sociobiological explanation is that a stepfather has no “evolutionary” goal to attain by maintaining his stepchildren and that using resources in “providing” for them is actually against his own biological interests.
As a final example, let us look at the case of divorce. In her book The Anatomy of Love (1995), anthropologist Helen E. Fisher says that the prevalence of divorce is due to more than just the influence of a more modern liberal culture. Based on data from more than 60 societies around the world, Fisher found that divorce tends to occur between two and four years after marriage. Although certain factors, such as having more than one child and a female who is more dependent on a male for support, may prolong marriage, this tendency to separate after two to four years of marriage seems to be a “natural” part of humans. The statistics stand true regardless of other social structures, including whether or not a society is polygamous or monogamous or whether or not it condones divorce. Sociobiology explains this phenomenon in several ways. In most societies, human infancy is considered to last about four years, after which the child is better able to care for itself or be cared for in school and other places. In addition, the euphoric feeling of love has been traced to the brain’s limbic system as it produces “feel good” natural amphetamines, such as phenylethylamine. Research has shown that this natural stimulant due to new love begins to wear off after three years, which, in some cases, may lead both males and females to seek this “high” again by finding a new mate. Furthermore, as Fisher points out, in many traditional societies, breast-feeding coupled with other factors such as exercise and a low-fat diet can suppress ovulation, inhibiting the female’s ability to become pregnant for about three years. Thus, the divorce peak can be seen to relate to the average of four years between births in families. This behavior in humans is similar to that in other animals, including foxes and robins, that mate only through a breeding season that lasts long enough for offspring to become independent.
E. O. Wilson (AP/Wide World Photos. Reproduced by permission.)
Sociobiology Is Not a Question of Morality Some scientists have criticized Wilson and other sociobiologists as being oversimplistic and conducting “bad” science. Making the assumption or conjecture that a gene or group of genes may exist for a certain behavior does not, they assert, meet the requirement of good science. But almost all good science, including psychiatry and psychoanalysis, begins with and continues to use postulates (statements assumed to be true and taken as the basis for a line of reasoning). As Ullica Segerstrale, a professor of sociology at the Illinois Institute of Technology, points out in her book, Defenders of the Truth: The Battle for Science in the Sociobiology Debate and Beyond (2001), “Most of science depends on plausibility arguments. Science is an ongoing project.” Nevertheless, science has made progress in relating genetics and behavior. For example, laboratory experiments have found genes related to gregarious feeding habits and other social behaviors in worms, learning behaviors in honeybees, and kin recognition in ants. However, unlike physical, or morphological, changes that have occurred due to human evolution, behavior in humans can be difficult to observe accurately and even more difficult to measure or quantify. Still, if sociobiology is bad science, why do biologists and others almost universally accept it when it focuses on animals other than human beings? For example, the Animal Behavior Society, the primary international organization in this discipline, said in 1989 that Wilson’s Sociobiology: The New Synthesis was the most important book published concerning animal behavior.
In The Triumph of Sociobiology (2001), John Alcock, a noted animal behaviorist at Arizona State University, reminds readers that “natural” behavior is not “moral” behavior. This linking of natural to moral is really the crux of the vehement opposition to sociobiology as stated by Richard C. Lewontin and others. They believe that any hypothesis attempting to provide a biological basis for social behavior is wrong because it propagates the status quo, as well as justifying the allocation of privilege on the basis of sex, race, and social class. Such hypotheses, they note, played a large role in sterilization and immigration laws in the United States between 1910 and 1930 and in theories of eugenics used by Hitler in Germany during World War II as an excuse for the slaughter of the Jews. In other words, their primary argument is that sociobiology is a scientific endeavor that is morally wrong.
There is little doubt that, taken simplistically, sociobiology could be construed to offer justification for eugenics and Social Darwinism. Both concepts have led to beliefs which hold that the poor are “unfit” and, as a result, should be allowed to die off or be eliminated to enhance opportunities for those who are fit (successful, rich, good looking, etc.) to prosper. But sociobiology does not claim to fully explain human actions or to condone seemingly selfish or other negative actions as “biologically correct.” Neither does sociobiology deny the influence of culture on human actions or reject the concept of free will by saying that all human actions are the result of preprogrammed biological impulses.
What sociobiology does point out is that cultures and their growth and change occur much more rapidly than evolution in biological terms, and that many of our basic behaviors are still based on our Pleistocene hunter-gatherer nature because, in terms of evolution, several million years is a relatively short amount of time. Sociobiology also enhances our model of understanding human behavior and our psyches. Only by truly grasping all the influences on our human nature, including evolutionary, psychological, and cultural influences, can we begin to make really lasting changes for the better, including reducing such things as violence, selfishness, and greed. Take racism, for example: sociobiology has not provided one example of racial superiority, but has helped to provide evidence for the “universality” of human traits, thus undercutting beliefs in cultural and racial differences.
Conclusion Humans have bred animals for centuries to obtain certain characteristics, including those that are physical (e.g., specific types of cows for beef) and psychological (e.g., breeds of dogs that are more aggressive). If scientists accept that evolutionary forces have influenced the physical traits of human beings just as they have those of animals, it is reasonable to believe that, just like other animals, our psychological traits or tendencies have evolutionary factors involved. In the end, sociobiology is about influences, not determinism, and about understanding our free will and moral behavior in a new light. What makes humans different from other animals is our enormous intellect and ability to make choices. It is not “nature versus nurture” but “nature and nurture” that makes us what we are. And, like evolution, sociobiology does not eliminate the possibility of a God, for, as the saying goes, “God works in mysterious ways.” Or, as Shakespeare put it in Hamlet, “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.” —DAVID PETECHUK
Viewpoint: No, sociobiologists fail to account for many observable phenomena and invariably support a version of biological determinism. Edward O. Wilson, Pellegrino University Research Professor at Harvard University, defines sociobiology in his book Sociobiology: The New Synthesis (1975) as “the systematic study of the biological basis of all social behavior.” In other words, genetics are the sole determinant of behavior. The premise that humans inherit and develop their physical, mental, and behavioral traits in essentially the same way as other animals can be made to sound feasible, logical, and almost obvious. However, there is no hard evidence to back up this premise. In “Genomics and Behavior: Toward Behavioral Genomics,” Peter McGuffin, Brien Riley, and Robert Plomin of the Social, Genetic, and Development Psychiatry Research Centre, Institute of Psychiatry, Kings College, London, state that news reports often claim that researchers have found specific genes for traits such as aggression, intelligence, homosexuality, and bad luck. These reports “tend to suggest, usually incorrectly, that there is a direct correspondence between carrying a mutation in the gene and manifesting the trait or disorder.” McGuffin, Riley and Plomin postulate that the idea of a single gene determining a particular trait holds little validity, and that behavior is influenced by the interplay between multiple genes and our environment. Moreover, they point out that human behavior is “unique in that it is the product of our most complicated organ, the brain.” In an article entitled “Genes, Culture, and Human Freedom,” Kenan Malik, a neurobiologist and former research psychologist at the University of Sussex Centre for Research into Perception and Cognition, writes: “In the six million years since the human and chimpanzee lines first diverged on either side of Africa’s Great Rift Valley, the behavior and lifestyles of chimpanzees have barely changed. Human behavior and lifestyles clearly have. Humans have learned to learn from previous generations, to improve upon their work, and to establish a momentum to human life and culture that has taken us from cave art to quantum physics—and to the unraveling of the genome. It is this capacity for constant innovation that distinguishes humans from all other animals.”
If, over the eons, humans have developed from hairy, apelike creatures learning to get off all fours and onto their hind legs to the way we appear today, while chimpanzees have remained virtually unchanged, and if one believes in the Darwinian philosophy of evolution, then why, if humans and animals inherit their genes in essentially the same way, are humans and chimps so different? Could it be that additional variables and possibly other mechanisms came into play?
Sociobiological Theory Opens Pandora’s Box Although the roots of sociobiology go back at least as far as 1949, to the work of American zoologists Warder C. Allee and Alfred E. Emerson in their book Principles of Animal Ecology, Wilson opened the Pandora’s Box of modern sociobiology in 1975 when he extended the Darwinian philosophy of evolution, not just from the physical to the behavioral, but from animal behavior to human behavior. Thus began a controversy that rages even as the twenty-first century begins and the Human Genome Project (HGP) has mapped our genes.
In another of his books, On Human Nature (1978), Wilson writes: “The heart of the genetic hypothesis is the proposition, derived in a straight line from neo-Darwinian evolutionary theory, that the traits of human nature were adaptive during the time that the human species evolved and that genes consequently spread through the population that predisposed their carriers to develop those traits.” In this context, the term adaptive means that an individual inheriting certain desirable traits (particularly those necessary for survival) is more likely to be chosen as a mate, or to choose a mate with a similar dominant trait, in order to produce offspring so that genetic trait is passed on. The offspring that displays that trait is similarly more likely to pass on those genes, and so forth. Sociobiologists call this trend “genetic fitness,” which, they state, increases the odds of personal survival, personal production of offspring, and survival of others in the extended family which inherits the trait from a common gene pool of ancestors. The stronger the trait becomes, the greater the genetic fitness of that particular group. When continued over many generations, the trend produces an entire population displaying the preferred trait. Darwin called this process “natural selection,” and Wilson says “In this way human nature is postulated by many sociobiologists, anthropologists, and others to have been shaped by natural selection.”
Richard C. Lewontin is an evolutionary geneticist and Alexander Agassiz Research Professor at Harvard University. In his 1979 article “Sociobiology as an Adaptationist Program,” Lewontin argues that the sociobiological approach to human behavior reduces evolutionary theory to a pseudoscience. Just three of the several arguments he puts forth against sociobiology are: 1) Reification—in which he points out that “sociobiologists conveniently forget that evolution occurs only in real objects and cannot occur in the metaphysical world of thoughts. Yet they constantly try to apply evolution to ‘mental constructs’ such as property and territoriality.” 2) Confusion of levels—sociobiology, he says, by its very name, deals with societal behavior but often focuses on individual behavior. Then it assumes, incorrectly, that societal behavior is simply a collection of individual behavior. 3) Imaginative Reconstruction—a process in which a single trait of some species is isolated, and a problem is deduced to explain why that trait was important enough to become subject to natural selection. The deduced problem may be correct, but it also may be completely incorrect, although scientifically it cannot be proven incorrect. “This makes this method unfalsifiable and therefore, unscientific,” writes Lewontin.
Maybe It’s Not in Our Genes! The idea that natural selection determines human behavior has been interpreted by many to mean our behavior is predetermined, or set in the cement of our genetic inheritance. That may be fine for other animals that have the well-defined and very limited goals of survival and reproduction. When applied to humans, the hypothesis leaves many questions unanswered. Many human cultures hold dear the idea that individuals have free will—the power to choose, and the ability to change behavior patterns. Therefore, the idea of genetic determinism arouses heated debate because humans are not limited to naturally defined goals such as finding food, shelter, or a mate. We establish individual goals, as well as goals for our families, our group, and our societies.
When, in the late twentieth century, scientists began to map out human genes, the general consensus was that humans, being much more sophisticated than other creatures, possessed as many as 100,000 different genes. It would then be just a matter of identifying, some thought, the individual genes that determined our behavior patterns, for surely, they must. Much to the surprise of many, the final analysis of the Human Genome Project revealed that humans have approximately 30,000 genes, just 300 more than a mouse. It should be noted that these figures are still open to debate. Nonetheless, as noted by Robin McKie, science editor for the Observer, in her article “Revealed: The Secret of Human Behavior, Environment, Not Genes, Key to Our Acts,” this small number raised serious problems for those in the scientific community who hung their hats on genetic determinism. McKie quotes Dr. Craig Venter, an American scientist whose company worked independently of a United States-United Kingdom team on the project. Venter said, “We simply do not have enough genes for this idea of biological determinism to be right. The wonderful diversity of the human species is not hardwired in our genetic code. Our environments are critical.”
Nature versus Nurture: If Not Genes, Then Culture Kenan Malik at the University of Sussex addresses both the nature (genetic) and nurture (cultural) viewpoints. If, he asks, having fewer genes implies less hard-wiring and more freedom of behavior, should we not be celebrating the fact that “a creature with barely more genes than a cress plant can nevertheless unravel the complexities of its own genome?” If it turned out humans had 200,000 genes, he wonders, would we be “slaves to our nature? And given that fruit flies possesses half our number of genes, should we consider them twice as free as we are?”
Does the fact that we have fewer genes than expected mean that we are, of necessity, governed by our cultural environment? “The problem with the nature-nurture debate is that it is an inadequate way of understanding human freedom,” Malik says. While agreeing that, just like every other organism, humans are shaped by both hereditary and cultural forces, he notes that humans, unlike all other organisms, have the ability to “transcend both, by our capacity to overcome the constraints imposed by both our genetic and our cultural heritage. . . . We have developed the capacity to intervene actively in both nature and culture, to shape both to our will.” Malik believes that while our evolutionary heritage no doubt shapes the way we approach our world, we are not limited by it. The same applies to our cultural heritage, which influences how we perceive our world but, again, does not imprison our perceptions. “If membership of a particular culture absolutely shaped our worldview,” Malik writes, “historical change would never be possible.”
C. George Boeree, a professor in the psychology department at Shippensburg University in Pennsylvania, writes in his article “Sociobiology” that, regardless of what behavior is apparent in humans, for every sociobiological explanation of that behavior a cultural explanation can also be applied. However, Boeree ends his article with virtually the same premise as Malik—humans have the ability to change their behavior, whether that particular behavior is influenced by our genes or our environment.
Joseph McInerney, director of the Foundation for Genetic Education and Counseling, an organization set up to promote understanding of human genetics and genetic medicine, writes in his article “Genes and Behavior: A Complex Relationship” that the nature/nurture debate is virtually meaningless. Instead, McInerney says, “the prevailing view is how nature and nurture contribute to the individuality of behavior.”
Much Left to Incorporate What is it, then, that gives humans such individuality and sets us apart from animals? Many people point to the spiritual side of human nature, one often denied by sociobiologists and other scientists. The Reverend Joel Miller, the senior minister at the Unitarian Universalist Church of Buffalo, New York, in a sermon entitled Sociobiology, Spirituality, and Free Will (December 3, 2000), states that both theologians and scientists often confuse religion and science. “The ‘creationism’ of some religionists is the famous example of religion confused with science. . . . Some scientists have the firm and certain belief that only those things that can be detected with instruments are worth any attention, and because free will is an impossible thing to measure or describe scientifically, it doesn’t exist at all and should not even be discussed.”
Margaret Wertheim, the writer and host of the 1998 PBS television series Faith and Reason, points out in her article “Crisis of Faith”: “We have already encountered such proposals from Harvard entomologist Edward O. Wilson in . . . Consilience, from Richard Dawkins, who has famously explained religion as a virus of the mind, or what he calls a viral ‘meme,’ and from English psychologist Susan J. Blackmore, who has elaborated on Dawkins’ ideas in her recent book The Meme Machine. For all these scientists, religion is simply a byproduct of cultural and/or genetic evolutionary processes that arises and flourishes in human societies because it lends a survival advantage.” Wertheim does not deny the sociobiological perspective of religions as “lending a survival advantage . . . by encouraging altruism and reciprocal altruism among group members and by providing a moral framework for the community.” However, she points out that a purely scientific answer for the basis of religious beliefs discounts the foundation of those beliefs in reality. She notes that, “[F]or believers,” God and the soul are “fundamental aspects of the real. . . . that, for Christians, Jesus really was the son of God, . . . that he really did rise from the dead and ascend to Heaven, and that they, too, will be resurrected. . . . Likewise, for Aboriginal Australians, the Dreamtime spirits really did create the world and they really do interact in it today.” Caroline Berry, a retired consultant geneticist who once worked at Guy’s Hospital in London, England, and Attila Sipos, a lecturer in psychiatry at the University of Bristol, England, in their article “Genes and Behaviour,” also bring the spiritual and aesthetic nature of humans into
the debate. Can our genetic makeup cause us to appreciate art and music, ask Berry and Sipos, or to “sacrifice ourselves for intangible ideals such as universal suffrage and the abolition of slavery? . . . God is spirit, so clearly our being made in his image gives us more than is in our DNA. There is more to our humanity than our biological makeup, even though it is difficult to elucidate the exact nature of this inherited quality.” Berry and Sipos also quote Francis Collins, director of the Human Genome Project and a committed Christian, from his announcement to representatives from the world’s media in February 2001 following completion of the mapping of human genes: “The human genome will not help us to understand the spiritual side of humankind . . .” Obviously, the spiritual side of human nature plays a huge role in determining our behavior—individually and societally.
Tom Bethell, the senior editor of the conservative journal the American Spectator, writes in an article entitled “Against Sociobiology” that: “A peculiar omission from the school of sociobiologists’ subdivision of human nature is the faculty of reason itself. . . . Once reason is admitted as a characteristic of human nature—and in truth it is the characteristic, along with freedom of the will—it can be shown to do the work imputed to phantom genes in almost any example that sociobiologists want to bring up.”
In a 1987 article in the journal Behavioral and Brain Sciences, Philip Kitcher, professor of philosophy at Columbia University, writes “genetical-cultural evolution is incorrect due to its strong argument on how the genetic aspect of the evolution is directly responsible for the cultural aspect. This is because while genetics may truly affect certain individual’s behavior, it does not mean an entire culture can be shaped due to presence of certain genes in its population.”
Malik, describing humans as both subjects and objects, writes that while we are influenced by biological and physical laws, our consciousness gives us purpose and allows us to “design ways of breaking the constraints” of those laws. He points out that, while humans and other animals have an evolutionary past, only humans make history. “The historical, transformative quality of being human is why the so-called nature-nurture debate, while creating considerable friction, has thrown little light on what it means to be human. To understand human freedom we need to understand not so much whether we are creatures of nature or nurture, but how, despite being shaped by both nature and nurture, we are also able to transcend both.”
Conclusion Although human behavior may, to some extent, be predisposed to the influence of genetic makeup, animal behavior is absolutely predetermined by it. While there may be similarities between humans and other animals in the ways in which genes are inherited and developed, there are also differences. True knowledge is dependent upon examining both similarities and differences, and perhaps it is time to pay closer attention to the differences. —MARIE L. THOMPSON
Further Reading
Alcock, John. The Triumph of Sociobiology. New York: Oxford University Press, 2001.
Berry, Caroline, and Attila Sipos. “Genes and Behaviour.” Christian Medical Fellowship.
Bethell, Tom. “Against Sociobiology.” First Things: A Journal of Religion and Public Life 109 (January 2001): 18–24.
Boeree, C. George. “Sociobiology.”
Dawkins, Richard. The Selfish Gene. New York: Oxford University Press, 1989.
Holmes, W. G., and P. W. Sherman. “Kin Recognition in Animals.” American Scientist 71, no. 1 (1983): 46–55.
Kitcher, P. “Precis of Vaulting Ambition: Sociobiology and the Quest for Human Nature.” Behavioral and Brain Sciences 10 (1987): 61–71.
Lewontin, R. C. “Sociobiology as an Adaptationist Program.” Behavioral Science 24 (1979): 5–14.
Malik, Kenan. “Genes, Culture, and Human Freedom.”
McGuffin, Peter, Brien Riley, and Robert Plomin. “Genomics and Behavior: Toward Behavioral Genomics.” Science 291 (February 16, 2001): 1232–49.
McInerney, Joseph D. “Genes and Behavior: A Complex Relationship.” Judicature 83, no. 3 (November–December 1999).
McKie, Robin. “Revealed: The Secret of Human Behavior, Environment, Not Genes, Key to Our Acts.” Observer (February 11, 2001).
Miller, Joel. “Sociobiology, Spirituality, and Free Will.”
Segerstrale, Ullica. Defenders of the Truth: The Battle for Science in the Sociobiology Debate and Beyond. New York: Oxford University Press, 2000.
Shermer, Michael. “Biology, Destiny, and Dissent.” Washington Post (June 25, 2000).
Wertheim, Margaret. “Crisis of Faith.” Salon (December 24, 1999).
Wilson, Edward O. Sociobiology: The New Synthesis. 25th anniversary ed. Cambridge, MA: Belknap Press of Harvard University Press, 2000.
———. “The Relation of Science to Theology.” Zygon 15 (1980): 425–34.
Was Margaret Mead naive in her collection of anthropological materials and biased in her interpretation of her data? Viewpoint: Yes, Margaret Mead’s methodology was flawed, and her bias and naiveté call into question her conclusions. Viewpoint: No, while Mead’s methodology and conclusions have been legitimately criticized, her overall analysis has been supported by a majority of anthropologists.
When the American anthropologist Margaret Mead died in 1978, she was the only anthropologist so well known to the general public that she could be called "grandmother to the world." Indeed, it was through Mead's work that many people learned about anthropology and its vision of human nature. Through her popular writings she became a treasured American icon, but by the end of the twentieth century her reputation was under attack by critics who challenged the accuracy of her early research and her concept of human nature and culture. Perhaps it was Mead's obvious eagerness to offer advice and guidance on a plethora of issues that caused critics to complain that she "endowed herself with omniscience ... that only novelists can have."

Mead was born in Philadelphia in 1901. She majored in psychology at Barnard, earning her B.A. in 1923, then entered Columbia University in New York, where she earned an M.A. in 1924 and a Ph.D. in 1929 and studied with the pioneering anthropologists Franz Boas (1858–1942) and Ruth Benedict (1887–1948). In 1925 Mead went to American Samoa to carry out her first fieldwork, focusing on the sexual development of adolescent girls. When the Samoan work was published as Coming of Age in Samoa: A Psychological Study of Primitive Youth for Western Civilization (1928), it became the best-selling anthropology book of the twentieth century. In 1929 Mead and her second husband, Reo Fortune, went to Manus Island in New Guinea, where her fieldwork focused on children. This work was published as Growing Up in New Guinea: A Comparative Study of Primitive Education (1930). In subsequent fieldwork Mead explored the ways in which gender roles differed from one society to another. Various aspects of her pioneering comparative cross-cultural studies were published as Sex and Temperament in Three Primitive Societies (1935). In fieldwork carried out with her third husband, Gregory Bateson, Mead explored new ways of documenting the relationship between childrearing and adult culture. Bateson was the father of Mead's only child, Mary Catherine Bateson, who also became an eminent cultural anthropologist.

During World War II, Mead and Ruth Benedict investigated methods of adapting anthropological techniques to the study of contemporary cultures, especially the allies and enemies of the United States, including Britain, France, Russia, Germany, and Japan. Through an understanding of cultural traditions, Mead hoped to find ways to encourage all nations to work toward a world without war. "Those who still cling to the old, simple definition of patriotism have not yet recognized that since Hiroshima there cannot be winners and losers in a war," she warned, "but only losers." When remembering Margaret Mead, most people will recall her famous admonition: "Never doubt that a small group of thoughtful, committed citizens can change the world."

Although Mead taught at Columbia University, New York University, Emory University, Yale University, The New School for Social Research, the University of Cincinnati, and The Menninger Clinic, the American Museum of Natural History in New York was always her research base. She also served as president of the American Anthropological Association, the Anthropological Film Institute, the Scientists Institute for Public Information, the Society for Applied Anthropology, and the American Association for the Advancement of Science. She was awarded 28 honorary doctorates and, in 1979, the Presidential Medal of Freedom. A popular lecturer and prolific author, Mead produced over 40 books and thousands of articles, and made dozens of films that introduced new ways of thinking about adolescence, sexuality, gender roles, childrearing, aggression, education, race relations, and environmental issues. For the centennial celebration of her birth, many of her books were reissued with new introductions.

Despite Mead's immense popularity, her reputation was somewhat tarnished when Derek Freeman, a professor of anthropology at Australian National University, created the "Mead-Freeman controversy" with the publication of his book Margaret Mead and Samoa: The Making and Unmaking of an Anthropological Myth in 1983. Initially, Freeman argued that a young, gullible Mead mistook Samoan jokes about sexual conduct for the truth. Freeman's assertion that he had discovered a "towering scientific error" received widespread media attention. According to Freeman, Mead used inaccurate material to please her mentor, Franz Boas, and to support the doctrine of absolute cultural determinism. Freeman argued that the "Mead paradigm" dominated twentieth-century anthropology because of the cultlike loyalty of her followers. According to Freeman, anthropology cannot be a respectable scientific discipline until anthropologists acknowledge Mead's initial errors in Samoa and their disastrous intellectual consequences. Some critics of Freeman's book suggested that his attack on Mead had been inspired by his friendship with the Australian anthropologist Reo Fortune, Mead's second husband, whom she divorced.

In a later book, The Fateful Hoaxing of Margaret Mead: A Historical Analysis of Her Samoan Research (1999), Freeman claimed that Mead had been "hoaxed" by her informants, citing as evidence some of the letters exchanged between Mead and Franz Boas in the 1920s. Critics of Freeman counter that he used the letters selectively and even deceptively. For example, Martin Orans, an anthropologist at the University of California at Riverside, refuted Freeman's claim that Mead was duped by her Samoan informants. Nevertheless, Orans called his book Not Even Wrong: Margaret Mead, Derek Freeman, and the Samoans (1996) because he rejected the assumption that Mead's work proved that Samoan adolescents "came of age" without stress and that cultural practices are universally independent of biological determinants. Orans believed that such sweeping global assertions were too vague to be empirically tested, and thus the Samoan conclusions did not reach the threshold required of scientific claims. Both Mead's global claims and Freeman's refutation were "not even wrong," which, Orans asserts, is "the harshest scientific criticism of all."
Although Freeman's critique of Mead brought him a remarkable amount of media attention, he presented himself as a heretic and a lonely dissenter searching for truth. Accusing anthropologists of mindlessly following the "prescientific ideology" of a "totemic mother," Freeman asserts that professional journals have suppressed his work in "the interests of a ruling ideology." Indeed, for the 1996 edition the title of Margaret Mead and Samoa (1983) was changed to Margaret Mead and the Heretic. In his later writings, Freeman claimed that Mead was antievolutionary, but other anthropologists argue that Freeman simply omits or misrepresents Mead's views on evolution in order to discredit her work. Anthropologist Paul Shankman, at the University of Colorado at Boulder, writing in the Skeptical Inquirer (1998), asserts that "on the fundamental issues of biology, culture, and evolution, Mead and Freeman are in substantial agreement. Mead was not antievolutionary; she held what are now conventional views on evolution, just like Freeman." Since Freeman published Margaret Mead and Samoa in 1983, even anthropologists who had criticized various aspects of Mead's work or conclusions on technical grounds have found themselves defending a popularized work from the 1920s in order to defend the overall scientific standing of anthropology. Some have accused Freeman of cowardice for waiting until after Mead's death to publish his attacks on her early work. On the whole, Freeman's critics have dismissed his thesis as ridiculous and absurd, because no rational scientist would expect work carried out in the 1920s to meet contemporary standards of scholarship. Coming of Age in Samoa was a popular classic. It was never a "sacred text" for anthropologists, and certainly not a model for modern fieldwork and scholarly writing. Indeed, Shankman admits that Mead's popularity actually "led academic anthropologists to treat her work with caution, recognizing its limitations as well as its strengths." —LOIS N. MAGNER
Viewpoint: Yes, Margaret Mead's methodology was flawed, and her bias and naiveté call into question her conclusions.

Margaret Mead was direct, strong, unique, articulate, intellectual, hardworking, passionate, and honest. Thought by many to be the mother of anthropology, Mead offers much to admire. However, to say that she was always unbiased in the way she interpreted her data would be stretching the truth. Complete objectivity was certainly the ideal, but it was not always the reality. In addition, it could easily be argued that Mead was, at times, naive regarding the process of data collection, especially early in her career.

Early Influences Margaret Mead had an interesting childhood. Her father, a university professor, placed a high value on intellectual thought and believed that the finest thing a person could do was to add something of value to the public discourse. Margaret's mother, an educated woman in her own right, was dutiful and earnest, took her domestic role seriously, and had little regard for ostentatious behavior. Margaret learned a certain degree of self-sufficiency from watching her mother, who was keenly aware of her husband's affairs and was frequently left alone.
Indeed, Mead was greatly influenced by the strong opinions of her mother and paternal grandmother. From them she learned that a mind is not sex-typed and that it is perfectly natural for a woman to be intelligent. In fact, if what she claims is true about one's environment shaping personal character, then she certainly learned the value of social advocacy from the women in her life, and maybe a bias or two as well. After all, her mother was quite vocal with regard to those who did not support the suffragettes campaigning for women's right to vote, referring to them derisively as "women who probably kept poodles."

In many ways, it was her father's behavior that taught Margaret never to be too dependent on a man. Although Margaret respected her father's mind, she did not always respect his ethics. In her book Blackberry Winter, she wrote, "I simply was very careful not to put myself in a position in which he [her father], who called the tune, had too much power over me." After all, Margaret's father was not especially warm toward her, and it could easily be argued that his opinions and behavior influenced her passionate viewpoints regarding a woman's place in society. For example, despite the strength of Margaret's mind, her father advised her that a college education was unnecessary in the event that she was to marry, an opinion Margaret blatantly dismissed as incorrect, demonstrating early on her unwillingness to be a pedestrian in her own life.

KEY TERMS

CULTURAL DETERMINISM: Idea that the culture in which we are raised determines who we are, both emotionally and behaviorally. Some cultural determinists argue that even physical traits can be affected by culture, such as the effect of proper nutrition on growth and final height. While genetic research often makes the headlines, cultural determinists consider it more important to remember the role culture plays in determining abilities and capabilities. While genetics has obvious effects, and may define limits to growth and development, cultural influences have a broader role in shaping individuals.

ETHNOGRAPHERS: Scientists who systematically study and record data about other cultures.

EUGENICS: Study of human genetics and methods to improve inherited characteristics, both physical and mental. The early emphasis was on the role of factors under social control that could either improve or impair the qualities of future generations. Modern eugenics is directed chiefly toward the discouragement of propagation among the unfit (negative eugenics) and encouragement of propagation among those who are healthy, intelligent, and of high moral character (positive eugenics). Such programs encounter many difficulties, from defining which traits are most desirable, to the obvious moral and ethical dilemmas that result regarding the freedoms of individuals.

PARADIGM: An example, pattern, or principle that forms the basis of a methodology or theory. In science this word is often used to refer to a clearly defined archetype (something that served as the model or pattern for other things of the same type).

Mead's Early Work And yet, despite all her lessons in reality, Margaret Mead was a bit naive when she set off for Samoa to study adolescent girls in 1925. She was fresh out of college and virtually without any field experience. To make matters worse, the methods she had learned were not practical but based on theory, and much of the previously published work on Samoa was inaccurate and did not pertain to the question she hoped to study anyway. Mead was not a linguist and had never even learned a foreign language, so she had to rely totally on an interpreter's ability to convey the facts in an unbiased way. Although Mead attempted to learn some of the language of the culture she was about to study, she was hardly fluent, and it was naive of her to assume that this would not affect the outcome of her research in a significant way. Indeed, Mead was not getting the whole picture; the Samoans sometimes censored themselves, and political opposition from local officials resulted in the monitoring of her correspondence. Mead, aware of her readership at home, censored herself and tailored her stories to her audience. The argument can easily be made that no matter how objective the researcher, a personal agenda and outside factors can sometimes affect the interpretation of events and the way those events are communicated.
Mead’s training in psychology may have given her ideas regarding samples and tests, but it did not prepare her for some of the obstacles she faced. After all, she was breaking new ground and had to invent procedures of her own which she later claimed allowed her to create a broad cultural picture without the need for a lengthy stay. However, it was naive of Mead to think that she could understand with complete clarity the nuances of a culture in a matter of months, regardless of how effective she believed her methods to be. And although it is true that different scientists may approach the same problem in different ways, Mead’s conclusions on Samoan promiscuity remain controversial today.
Data Collection and Interpretation Despite the fact that Mead was armed with the ability to competently categorize subjects and facts, much of her controversial work on gender issues came from “post interpretation,” which is disturbing to anyone hoping to verify her conclusions. This is true, for example, regarding the much-discussed research she conducted with Reo Fortune, a respected Australian anthropologist and her second husband. Together they studied three New Guinea tribes: the Mundugumor, Arapesh, and Tchambuli. In contrasting the typically gentle Arapesh women and men with the typically assertive Mundugumor women and men, Mead concluded that one’s culture formed adult personality, not one’s biological sex. Whether or not one believes this analysis, critics point to a variety of problems in the way Mead collected her data.
First, Mead's naïveté did factor into her collection of data. Nancy McDowell, a professor of anthropology at Beloit College, did fieldwork in 1972 and 1973 in the first village upriver from the village in which Fortune and Mead carried out their research, and she continues to visit periodically. In her book Mundugumor, McDowell says that Mead thought that no significant problem of sampling or perspective existed if the informant knew the culture well. Most scientists now know that the questions asked could very well shape the answers given. Even the development of the questions could be influenced, at least in part, by the researcher's bias or personal interest. A broader sampling is valuable for that reason; it helps eliminate bias. But Mead disagreed. For this reason, many of Mead's critics have questioned the validity of her conclusions. They find it naive, at best, to think that anyone could reach valid conclusions about a culture based on information derived from only one or two informants. One example of this limitation is reflected in an observation made by McDowell: "Mead may have failed to see that her perception of the collapse of Mundugumor society may have been as much her informants' construction of past events as it was a true rendering of 'reality.'"

To compound the problem, Mead was operating within the confines of a far less sophisticated paradigm than anthropologists do today. Mead had a tendency to think she could study a culture for a bit of time, master it, and move on to the next culture. This "mosaic view" of culture, a term coined by Roger Keesing, author of Cultural Anthropology: A Contemporary Perspective, is considered simplistic by today's standards. As McDowell states, culture is now understood to be both "a complex set of diversities" and "not perfectly integrated."

Data collection can also be affected by egalitarian factors. Sometimes a female/male team can be helpful in communicating with both sexes; however, the division of labor can also affect how the data is gathered. Communication problems among fieldworkers who are supposed to be teammates can certainly damage the accuracy of the data collection process. For example, when Fortune missed a cue and did not communicate all aspects of his research to Mead, he caused Mead to unknowingly work in the dark, so to speak, and this lack of knowledge could have affected her conclusions. In part this problem was not Mead's fault; she trusted Fortune to be completely forthcoming and thorough, but in retrospect that trust was probably naive. How impressed she was with his scholarship and how their personal chemistry affected the process is also difficult to know for certain, but it is fair to speculate that a more seasoned Mead would have done things differently had she been able to relive the experience.

Some would argue that Mead, who was earnest enough to type out her notes, was capable of working so fast that she did not have to spend as much time in the field as other workers. However, working fast is not always better, as there were considerable gaps and inconsistencies in her notes. Some questions simply remained unanswered. As McDowell states, "Mead never claimed that she and Fortune did a complete ethnography of the Mundugumor, and she knew very well that her materials were especially limited since they left in the middle of their planned field trip."
On some level, Mead may have wished she could have studied the Mundugumor culture from a distance. She did, after all, hold it in a fair amount of disdain, which causes one to question how unbiased she was in the interpretation of the data she collected. Photography, some say, allows quite a bit of objectivity. Mead often used photography in her work, especially to study a culture from a distance, yet this seemed to contradict her belief that one must immerse oneself in a culture. It would seem that to study a culture accurately, one must examine it close up, not at a distance. Documentary-style photographs, as compelling as they are when well done, capture only a moment in time. Moreover, the snippets of information they provide can be deceiving; for a total picture, the subject must be studied over time.

Some of the most vocal criticism regarding Mead's work has come from the Australian anthropologist Derek Freeman, who lived in Samoa for a time. According to Freeman, his conversations with Samoan women generated significantly different results than Mead's did. However, when one considers the passion and determination with which Freeman attacks Mead's work, it is not surprising that some people have called his motives into question. Indeed, his delivery does seem unduly forceful at times, and perhaps he simply could not reconcile his conservative views with Mead's radical ones. In any event, Freeman has garnered a great deal of attention. On the surface, it might appear that Freeman is primarily responsible for the controversy that surrounds Mead's work, but to give him that kind of power is to oversimplify the situation. Mead's methods and conclusions were controversial on their own; Freeman was hardly alone in his criticism. He merely capitalized on some of the weaknesses in her methods, claiming, for example, that she was duped by the people she interviewed in Samoa. What is ironic is that the same argument against credibility could also be used with regard to Freeman's work. The language barrier was a challenge for any anthropologist studying Samoan culture. And, indeed, the same gender politics that might have affected what was said to Mead might also have affected what was said to Freeman. So, in truth, each proposed theory must be weighed against a multitude of variables.

Margaret Mead (© Bettmann/CORBIS. Reproduced by permission.)

The Strength of Her Convictions To her credit, Mead was not a weak person; she was unafraid to take a stand and voiced her opinions openly. One needs only to read her book Some Personal Views to understand what an unconventional thinker she was. Mead's personal beliefs were, by many accounts, radical in nature; in fact, despite her rejection of the term, she could easily be defined as a radical feminist. Rosemarie Tong, author of Feminist Thought, pointed this out when she used Margaret Mead as an example of someone who "espoused a nurture theory of gender difference according to which masculine and feminine traits are almost exclusively the product of socialization or the environment." In essence, Mead held the controversial belief that culture determines the formation of an individual's character, an idea that is widely debated even today.
One could easily say that Mead enjoyed shaking things up a bit and that, despite all her rhetoric about suspending one's own belief system while studying other cultures, her biases were simply too strong not to occasionally affect her data interpretation. Some might even argue that it is our bias, our unique philosophical perspective, that serves us well when we engage in scientific analysis. Those same people might also argue that it is not possible, or even advisable, to completely abandon all we believe to be intrinsically true when we interpret data that relates to human behavior. When studying human nature and other cultures, as anthropologists do, it seems perfectly natural to utilize not only clinical, objective methods of analysis but also personal impressions. In some other scientific disciplines, chemistry, for example, the analysis is more mathematical than intuitive. In anthropology, however, this does not have to be the case; the collection of data can successfully be united with intuition. As long as the bias is recognized within the framework of the analysis, it does not have to be detrimental. The problem is that the bias is not always adequately recognized and identified, and it is this concern that some scholars have with the way Mead interpreted her data.

And yet, it is nearly impossible to "sweep one's mind clear of every presumption," as Mead suggests in Blackberry Winter. Mead herself could not do it in every case; it is merely an ideal. Some might even argue that it is a senseless ideal. There are times when it is appropriate for an anthropologist to recall personal experiences, especially when trying to understand differences that exist within cultures. In Mead's book Letters From the Field, 1925–1975, she cautions that "one must be careful not to drown," and adds that balance can be achieved when "one relates oneself to people who are part of one's other world." When Mead was a young anthropologist, scientists were just beginning to explore the nature of the relationship between the observer and the observed. Biases in her interpretation were more than likely unintentional, but nonetheless they did exist. However, to criticize Mead in some moral fit of anger, filtered through the veil of chauvinism as some scholars have done, seems extreme and certainly not in the spirit of scientific discovery. It seems equally ridiculous to accept Mead's conclusions on culture and character, for example, simply because she was instrumental in putting the field of anthropology on the map. Her work provides food for thought, but she is not the final authority on culture and character, nor did she wish to be. Mead acknowledged that in the light of changing theories, improvements in data collection would be made and that people would reevaluate her methods as part of the learning process, a process that she thoroughly embraced.

Conclusion The issue of how Mead collected and interpreted her data is complicated. Cut-and-dried analysis does Mead and her work a disservice. Her numerous contributions to the field of anthropology are rightly acknowledged; she was a leader and, by many accounts, an important thinker. Indeed, Mead's pioneering spirit and strength of character led her to engage in studies that were, in her youth, reserved only for men. In that regard, her work is admirable, but it can also be said that her personal biases sometimes prevented her from being completely objective. It is also important to recognize that no scholar or scientist is, or should be regarded as, flawless. With scientific advancement and new research comes an inevitable series of new questions. The awareness that others may come after us and develop new hypotheses and different methodologies should free us to act. Mead certainly understood this, and no matter how controversial some of her theories have been, she certainly added to the public discourse, despite her biases. Mead inspired people to look at society in a new way, to think critically, and to appreciate different cultures. —LEE ANN PARADISE
Viewpoint: No, while Mead's methodology and conclusions have been legitimately criticized, her overall analysis has been supported by a majority of anthropologists.

Margaret Mead followed anthropological guidelines and attempted to maintain a "scientific" detachment during her fieldwork, from her first fieldwork in Samoa to her last study. While many anthropologists have questioned portions of her work, criticized her populist writing style, and disagreed with her both personally and professionally, only one person has seriously questioned her in terms of bias and naïveté. After her death, the Australian anthropologist Derek Freeman managed single-handedly to call Mead's scientific reputation into question. However, although the public perception of her achievements was tarnished, even her opponents in anthropology rallied to her defense against what they considered an unwarranted attack.

An Influential and Controversial Book Margaret Mead came to fame chiefly due to her first book, Coming of Age in Samoa, published in 1928 when she was in her early twenties. The book was based on several months' fieldwork on the islands of Manu'a, in American Samoa. Mead had set out to study adolescent life, as it was both her hope and that of her supervisor at Columbia University, Franz Boas, that she would find proof that adolescence is not the same in all cultures. This was important to Boas, as he was a supporter of cultural determinism, a theory that stood in opposition to genetic determinism and eugenics. The eugenicists argued that human behavior was genetically determined; the cultural determinists argued that upbringing and environment were more important factors in development. Simply put, the debate was over which was more important in human life, nature (genetics) or nurture (upbringing). Boas was opposed to eugenics on philosophical and moral grounds, as well as scientific ones, as many eugenicists believed in notions of racial purity, forced sterilization, and other concepts he found abhorrent. However, Boas and his students still recognized the importance of heredity and genetics. The cultural determinists were seeking evidence against an extreme form of genetic determinism, not against genetics itself.
Mead had a broader interest in adolescence, and her observations led her to think that there was a great range of cultural difference in young people. She needed a good test case, decided the Pacific held promise, and went to Samoa. After a number of months in the field she concluded that, for the 50 young Samoan women she had studied, there were few of the upheavals and rebellions typically associated with Western adolescence. Samoan culture, she claimed, provided an openness that made coming of age a relatively smooth transition. This implied, therefore, that nurture was more important than nature, at least in this one case.

Rather than just another shot fired in the battle between cultural determinists and eugenicists, Mead's Coming of Age in Samoa was destined to make a much wider splash. Her editor encouraged her to make it more marketable by including chapters written in a popular style and to generalize her conclusions to United States culture. She included flowery passages such as: "As the dawn begins to fall among the soft brown roofs, and the slender palm trees stand out against a colorless, gleaming sea, lovers slip home from trysts beneath the palm trees or in the shadow of beached canoes, that the light may find each sleeper in his appointed place," which gave the work a romantic feel. The popular style, and the suggested application to United States culture and child-rearing, gave the book a wide appeal. While the book offended many with its call for sexual freedoms, it also struck a powerful chord at a time when American adolescence and sexual behavior were hot topics. Coming of Age in Samoa was probably the most widely read anthropological book of the twentieth century, and many commentators placed it among the 100 most important works of the century. Mead became a celebrity, and her career soared. She went on to study other cultures and wrote many other important works on a variety of subjects, including motherhood and the women's movement, publishing over 1,400 articles and books.

Freeman Challenges Mead in the Media In 1983, several years after Mead died, Harvard University Press published a book by Derek Freeman questioning Mead's work, Margaret Mead and Samoa. The publishers appear to have realized the promotional possibilities of such a work, attacking the most well known of anthropologists, as they "leaked" details to the popular media. For example, two months before the book was published, an article, "New Samoa Book Challenges Margaret Mead's Conclusions," appeared on the front page of the New York Times. In Margaret Mead and Samoa, Freeman charged that Mead's cultural determinist ideology had been more important to her than the evidence she had found in Samoa, that many others had observed Samoa differently, that her methods of getting information were flawed, and that she had taken things said in jest too seriously. He went on to claim that the validity of Boas's cultural determinism, and of the importance of nurture, therefore rested on flawed work. This last claim seemed to challenge the very foundation of anthropology, and its scientific status.

Much of the debate over Mead's work and the implications for anthropology was played out in the media. Freeman appeared on talk shows and gave public lectures, and many magazines (including Time) and newspapers carried articles on the subject. While the debate was over fairly quickly in academic circles, with Freeman's claims being soundly dismissed for a variety of reasons, Freeman continued to press his case in the media. Dismissals from academic critics just seemed to strengthen Freeman's resolve, and he began to stress his "outsider" status. A new edition of his book was renamed Margaret Mead and the Heretic, with Freeman now taking the role of the "heretic" attacked for pointing out the faults with Mead's false doctrine. A film presenting Freeman's view was made, and a play, Heretic, was staged. In 1999 a second book, The Fateful Hoaxing of Margaret Mead: A Historical Analysis of Her Samoan Research, was published. Its major claim was based on a portion of the film, in which one of Margaret Mead's teenage informants, Fa'apuna'a (now in her eighties), was interviewed. In the interview Fa'apuna'a states that she and her friend Fofoa had hoaxed Mead by making up stories about their sexual adventures. In reality, Fa'apuna'a claimed, there had been no trysts among the slender palm trees as portrayed in Coming of Age in Samoa; she had been a virgin.
The Anthropologists Respond The anthropological community generally dismissed Freeman's work. His use of evidence was seen as highly selective, ignoring that of other observers of Samoan culture who agreed with Mead. His method was criticized, and the whole basis for his attacks was questioned. Freeman's claims regarding Boas's philosophy were considered wildly inaccurate and misleading, and some went so far as to dismiss the whole affair as another right-wing attack on liberal academia. Many anthropologists considered the book not an academic work but an attempt at character assassination, timed as it was after Mead's death.
Freeman portrayed these criticisms as the followers of Mead's ideology "circling the wagons," desperately defending against the onslaught of his "irrefutable" evidence. Yet many of those who spoke up to defend Mead's work also disagreed with aspects of her methods and conclusions. Mead's reputation had never been totally clean, and as one academic noted, "she has never been accused of having been the most meticulous and persistent of linguists, historians or ethnographers." Many of Mead's colleagues had criticized her work, her methods, her conclusions, and her style. Mead might not have been totally correct, many said, but she was not the biased and naive researcher that Freeman portrayed. Anthropologist Lowell D. Holmes of Wichita State University, whose doctoral research consisted of a methodological restudy of Mead's Samoan work, had what he described as a stormy relationship with Mead, and his own research led him to several opposing conclusions. However, Holmes saw these as details to be politely debated academically, and stated: "Although I differ from Mead on several issues, I would like to make it clear that, despite the greater possibilities of error in a pioneering scientific study, her tender age (twenty three), and her inexperience, I find that the validity of her Samoan research was 'remarkably high.'" Many others had revisited the islands and questioned portions of Mead's work, yet none saw Freeman's arguments as valid. Indeed, in the same year as Freeman's first book was published, another book, Richard A. Goodman's Mead's Coming of Age in Samoa: A Dissenting View, offered an analysis of Samoa that differed from Mead's. Goodman's book did not gain any media attention, as his tone was much tamer, more reasoned, and more academic.

Freeman's Argument Criticized It is easy to see why there was such a strong reaction to Freeman's books when his methods and evidence are analyzed. Freeman set out to prove Mead wrong, as early correspondence with other scholars shows, implying that it was he who was biased from the outset. Freeman's research in Samoa began some time after Mead's, and he did not publish his work until 1983, yet the changing nature of Samoan culture is not considered in his work. Furthermore, Freeman did not study the same area as Mead, and much of his statistical evidence, such as reports of rape and other violent crimes, does not take into account the divisions between urban and rural Samoan culture. Freeman was a middle-aged man living with his wife in Samoa, yet he does not seem to consider that this might color the responses of young girls when talking to him about sexuality. Mead had the advantage of establishing a rapport with girls about her own age and size (Mead was slim and short), whereas the middle-aged Freeman must have seemed an imposing figure in a fiercely patriarchal and Christian society. As Holmes notes, "one cannot criticize Margaret for believing the sexual accounts of young Samoan girls, many of them about her age, and then expect the scientific community to believe that the investigations of an elderly white male among girls of adolescent age could be reliable and valid on such a delicate subject as virginity." It is not surprising that Freeman heard only denials of sexual activity from young women. Would anyone expect any other response in such a situation in any society? Freeman also fails to consider what motivations there may have been for the changed testimony of Fa'apuna'a, some 60 years after the fact. Although Freeman is quite certain that Fa'apuna'a lied to Mead, he does not consider that he himself may have been hoaxed. An elderly woman in Christian Samoa is not likely to admit to adolescent sexual behavior, especially not when Fa'apuna'a had been a high-status maiden who should have guarded her virginity. Freeman asks us to believe this one interview instead of the fieldwork of Mead, in which Fa'apuna'a was just one informant, yet he has not considered which is the more plausible story.
Although many anthropologists believe that Mead went too far in her conclusions, and that she was mistaken in some instances, the majority of the anthropological community considers that the vast bulk of her research and analysis was correct. Mead's Samoa work was a portrait of Samoan adolescent female culture in the 1920s, and in that narrow context it appears to have been very accurate. Some of the broader conclusions that Mead included, partly to please her publisher, may be justly questioned, but that does not imply that she was naive or biased in her study of Samoa, just over-reaching in her promotion of cultural determinism. In addition, the broader implications for the nature/nurture debate are not as dramatic as Freeman has portrayed them to be. Mead's book was not the only study to show the importance of nurture, and neither Mead nor Boas was blind to the importance of heredity. Freeman himself, after criticizing cultural determinism on the one hand, then attempts to use it to prove his case: Samoa, he argues, is an unusually violent culture, something he does not link with genetics. Freeman's celebrated calls for a new style of anthropology fell not on deaf ears, but on those of intelligent listeners who quickly decided that his ideas were confused and worthless. Although Freeman found fame and support outside anthropology, he did not find wide support for his criticisms of Mead within the discipline. Yet Mead's legacy in anthropology is not one of blind faith by devoted followers. Many have questioned her methodology and her broad conclusions, but the majority of anthropologists have supported Mead's overall analysis. Coming of Age in Samoa was a pioneering work, and as such must be expected to have its faults. Many other studies have shown that Mead's conclusions regarding the cultural component of adolescence hold true in many societies. Mead was no more biased than any other anthropologist, and there were few who would have called her naive, even at the age of 23. Even though Freeman's attacks may have succeeded in damaging her popular reputation, Mead's anthropological reputation remains intact. —DAVID TULLOCH
Further Reading

Caton, Hiram, ed. The Samoa Reader: Anthropologists Take Stock. Lanham, MD: University Press of America, Inc., 1990.

Freeman, Derek. Margaret Mead and Samoa: The Making and Unmaking of an Anthropological Myth. Cambridge, MA: Harvard University Press, 1983.

———. The Fateful Hoaxing of Margaret Mead: A Historical Analysis of Her Samoan Research. Boulder, CO: Westview Press, 1999.

Hellman, Hal. "Derek Freeman versus Margaret Mead." Great Feuds in Science: Ten of the Liveliest Disputes Ever. New York: John Wiley & Sons, Inc., 1998.

Holmes, Lowell D. Quest for the Real Samoa: The Mead/Freeman Controversy & Beyond. Boston, MA: Bergin & Garvey Publishers, Inc., 1987.

Keesing, Roger. Cultural Anthropology: A Contemporary Perspective. New York: Holt, Rinehart and Winston, 1981.

McDowell, Nancy. The Mundugumor: From the Field Notes of Margaret Mead and Reo Fortune. Washington, DC: Smithsonian Institution Press, 1991.

Mead, Margaret. Coming of Age in Samoa: A Psychological Study of Primitive Youth for Western Civilization. New York: Perennial Classics, 2001.

———. And Keep Your Powder Dry: An Anthropologist Looks at America. New York: Berghahn Books, 1999.

———. Blackberry Winter: My Earlier Years. New York: William Morrow & Company, Inc., 1972.

———. Letters From the Field, 1925–1975. New York: Perennial, 2001.

———. Margaret Mead: Some Personal Views. New York: Walker Publishing Company, Inc., 1979.

Murray, Stephen O., and Regina Darnell. "Margaret Mead and Paradigm Shifts Within Anthropology During the 1920s." Journal of Youth and Adolescence 29, no. 5 (2000): 557–73.

Orans, Martin. Not Even Wrong: Margaret Mead, Derek Freeman, and the Samoans. Novato, CA: Chandler and Sharp, 1996.

Tong, Rosemarie. Feminist Thought: A Comprehensive Introduction. Boulder, CO: Westview Press, 1989.
Do the fossils found at the sites explored by Louis and Mary Leakey and the sites explored by Donald Johanson represent several hominid species or only one?

Viewpoint: Yes, the fossils found by Louis and Mary Leakey and by Donald Johanson represent several hominid species.

Viewpoint: No, the hominid fossils found and named by Donald Johanson and Louis and Mary Leakey represent a single species of Australopithecine or very early Homo.
Modern studies of the relationship between the ancestors of the great apes and modern humans involve many disciplines, such as paleoanthropology, historical geography, archaeology, comparative anatomy, taxonomy, population genetics, and molecular biology. Although the genetic analysis of human and nonhuman lineages has provided new insights into human evolution, the fossilized remains of human ancestors still provide the most valuable clues to the past. Unfortunately, hominid fossils are rare and generally quite fragmentary. Complete skulls and skeletons are uncommon, and identifying and classifying bits of bones and teeth to determine their relationship to other ancient specimens involves formidable challenges. Many subtle characters must be used in the analysis of fragmentary remains. The major sites of discovery of the most ancient hominid fossils have been in Africa: Kenya, South Africa, Tanzania, and Ethiopia.
When Charles Darwin published On the Origin of Species in 1859, he hinted that his theory of evolution by means of natural selection might throw some light on the origins of human beings. It was only with great reluctance that he finally explored this most controversial aspect of his theory in Descent of Man, and Selection in Relation to Sex (1871). Other evolutionists had written about man’s place in nature and the survival of the fittest in human society, but Darwin realized that his contemporaries were not ready for a rigorous analysis of human evolution as a purely biological process. In Descent of Man, Darwin argued that human beings, like every other species, had evolved from previous forms of life by means of natural selection. According to Darwin, all the available evidence indicated that “man is descended from a hairy, tailed, quadruped, probably arboreal in its habits.” The evidence available to Darwin did not, however, allow him to reach any specific conclusions about the time, place, or identity of the first humans. Although studies of cultural anthropology and paleontology were very limited in his time, Darwin’s views on human evolution were remarkably perceptive. He suggested that the ancient ancestor of modern human beings was related to that of the gorilla and the chimpanzee. Moreover, he predicted that the first humans probably evolved in Africa between the Eocene and Miocene eras. Wrestling with the crucial theme of the development of human intelligence, Darwin pointed out that differences in body size must be taken into account when evaluating the significance of absolute differences in brain size.
Since Darwin established the basic framework for the study of human evolution, scientists have searched for physical evidence of the most ancient ancestors of modern humans and their closest relatives, the chimpanzee and the gorilla. Going beyond morphology and taxonomy, paleoanthropologists now employ the techniques of molecular biology, the analysis of genetic similarities and differences, new methodologies in archeological excavation, and insights from sociobiological studies of primates and hunter-gatherer societies. Moreover, scientists now generally accept the concept that the evolutionary history of the primates was more like a “bush” with many branches (some of them evolutionary dead-ends) than a ladder leading directly to modern Homo sapiens. The chimpanzee and the gorilla are clearly the living animals most closely related to modern humans. Indeed, studies of nuclear and mitochondrial DNA suggest that humans and chimps might have shared a common ancestry, after divergence from the gorilla lineage. Based largely on differences between the DNA of modern apes and humans, the last common ancestor of humans and chimpanzees presumably lived in Africa about 6 million years ago. Fossil evidence for the species that existed at the time the human lineage separated from that of the great apes is, unfortunately, very fragmentary. The South African physical anthropologist and paleontologist Raymond Dart made the first substantive discovery of human ancestors in Africa as early as 1924, when he identified the famous Taung fossils as Australopithecus africanus (South African Ape-man). The most exciting subsequent twentieth-century discoveries of ancient human ancestors are associated with the work of Kenyan anthropologist Louis Leakey and his anthropologist wife Mary, and that of the American anthropologist Donald Johanson. Working primarily at sites in Olduvai Gorge and Laetoli in Tanzania, Mary and Louis Leakey identified many hominid fossils, including Proconsul africanus (an extinct Miocene primate) in 1948, Australopithecus boisei (originally called Zinjanthropus boisei, or Nutcracker Man) in 1959, Homo habilis (Handy man) in 1960–1963, and a remarkable trail of fossilized hominid footprints preserved in volcanic ash. Johanson’s most important discovery was the unusually complete skeleton of a primitive australopithecine (usually referred to as Lucy) in the Afar region of Ethiopia in 1974. In addition to proclaiming that he had found a new species, which he called Australopithecus afarensis, Johanson claimed that Mary Leakey’s Laetoli fossils were actually members of this species. Many scientists objected to Johanson’s designation of a new species and the relationship between his Afar fossils and Mary Leakey’s Laetoli specimens remains controversial. Several new hominid finds were announced at the beginning of the twenty-first century. As usual, the identification and classification of these fragments of bone provoked intense debate among paleoanthropologists. In 2000, French paleoanthropologists Brigette Senut and Martin Pickford discovered a set of fossil fragments in the Tugen Hills of Kenya. They claimed that the fossils represented a new species, which they called Orrorin tugenensis (Original man, Tugen region). A few months later, a report by Yohannes Haile-Selassie raised questions about the hominid status of O. tugenensis, and announced the 1997 discovery of an early form of A. ramidus. 
Further discoveries will, no doubt, add new insights into the history of human evolution, and create new disputes among paleoanthropologists. —LOIS N. MAGNER
Viewpoint: Yes, the fossils found by Louis and Mary Leakey and by Donald Johanson represent several hominid species.
Theories of human evolution are increasingly framed in terms of insights gained by the genetic analysis of human and nonhuman primate lineages. Using differences in DNA to estimate how long humans and chimps have been separate lines, scientists suggest that humans separated from the apes about 5 to 8 million years ago. Nevertheless, debates about the traditional source of information, i.e., the fossil evidence, are still complicated by the paucity of the evidence and disagreements about paleontological systems of classification. Although thousands of hominid fossils have been collected, many specimens consist of only bits of bone or a few teeth. Indeed, it is sometimes said that all known hominid fossils could fit into one coffin. The discovery of any hominid fossil specimen, no matter how fragmentary, inevitably serves as the basis for endless speculation. Nevertheless, as the eminent French anatomist Baron Georges Cuvier (1769–1832) argued, through knowledge of the comparative anatomy of living animals, many insights into the form and function of extinct creatures can be reconstructed from the analysis of a few bones.

KEY TERMS

ADAPTATION: Genetic changes that can improve the ability of organisms to survive, reproduce, and, in animals, raise offspring.

BIPEDALISM: One of the earliest defining human traits, the ability to walk on two legs.

FORAMEN MAGNUM: Opening at the bottom of the skull through which the spinal cord passes in order to join the brain.

FOSSIL: Term originally used by the German scholar Georg Bauer (Georgius Agricola) in the sixteenth century to refer to anything dug out of the earth. Eventually the term was restricted to the preserved remains of previously living animals or plants.

GENE POOL: The genetic material of all the members of a given population.

HOMINID: Term traditionally applied to species in the fossil record that seem to be more closely related to modern humans than to apes. The term originally referred only to species of humans, when only humans were included in the family Hominidae. Recent genetic evidence suggests that humans, chimpanzees, and gorillas are so closely related that hominid should refer to the family Hominidae, which includes Homo and Pan.

HOMININ: Modern humans and their ancestors; used by some classification systems as a replacement for the older term hominid.

HOMININAE (HOMININES): Subfamily containing the ancestral and living representatives of the African ape and human branches.

PALEOANTHROPOLOGY: Scientific study of human evolution; a subfield of anthropology, the study of human culture, society, and biology.

PONGID: Term traditionally applied to animals in the fossil record that seem to be closer to the apes than to modern humans.

POPULATION: Group of organisms belonging to the same species and sharing a particular local habitat.

SEXUAL DIMORPHISM: Differences in size or other anatomical characteristics between males and females.

SPECIES: In reference to animals that reproduce sexually, refers to a group whose adult members interbreed and produce fertile offspring. Each species is given a unique, two-part scientific name.

Most scientists agree that the fossil record provides evidence of 10 to 15 different species of early humans, but the relationships among these ancient species, and their relationship, if any, to modern humans, remain uncertain and controversial. The classification of various species of early humans, and the factors that influenced evolution and extinction, are also subjects of debate. The conventional criteria for allocating fossil species to a particular genus have often been challenged for being ambiguous, inappropriate, and inconsistently applied. Arguments about the identity of fossil remains are complicated by evidence that suggests hominid species may have been quite variable, with some of the apparent variability due to sexual dimorphism, a characteristic often found among living nonhuman primates. Given the interesting variations found in hominid fossils, many paleoanthropologists agree that human remains exhibiting a unique set of traits should have a new species name.

The story of the search for human ancestors, and the debates about the relationships among the various species included in the catalog of early hominids, is in large part the story of the Kenyan anthropologist Louis Leakey and his wife, paleoanthropologist Mary Leakey. The Leakeys stimulated and inspired many paleoanthropologists, including the American Donald Johanson, to search for human ancestors and to explore the relationship between humans and other primates. Few people have had more impact on the modern era of paleoanthropology than Louis Leakey, the patriarch of a remarkable multigenerational family of anthropologists. The Leakeys were largely responsible for convincing scientists that the search for human ancestors must begin in Africa. When Louis Leakey began hunting for fossil hominids in the early 1930s, most anthropologists believed that early humans had originated somewhere in Asia because of previous discoveries of human fossils in Java (now Indonesia) and China. Leakey's son Richard Leakey, his wife Meave, and their daughter Louise are carrying on with the work begun by Louis and Mary Leakey.

The Leakeys were not, of course, the first scientists to discover ancient human ancestors in Africa. The South African paleontologist Raymond Dart was one of the first to recognize the existence of the fossilized remains of primitive but bipedal human ancestors. In 1925 Dart described the skull of an extinct primate from Taung, South Africa. According to Dart, the creature was not an ape, but it walked upright. Because its brain was only about 28 cu in (450 cc), too small for admission to the genus Homo, Dart established a new genus, Australopithecus (Southern ape-man). He named the primitive creature Australopithecus africanus. Dart's contemporaries generally rejected his claims until Robert Broom, another South African paleontologist, discovered many more A. africanus skulls and other bones.

Although brain size was originally considered the key to human evolution, many paleontologists now consider the evolution of bipedalism, the ability to walk on two legs, to be one of the most critical early differences between the human and ape lineages. Habitual bipedalism, as opposed to the ability to stand upright like chimps, requires many anatomical adaptations in both the upper and lower body. Such changes involve the pelvic bone, hip joints, leg bones, toes, the S-shaped curve of the spine, and the position of the foramen magnum. Australopithecines did, however, have curved, elongated fingers and elongated arms, which suggests that, in addition to walking upright, they climbed trees like apes.
widely accepted, because the brain size was above the range for the australopithicines. Moreover, there were significant characteristics of the feet, the ratio of the length of the arms to the legs, and the shape and size of the molar teeth, premolar teeth, and jaws of H. habilis that distinguish it from those of contemporary australopithicines. Differences in the body size of various H. habilis specimens suggest a striking degree of sexual dimorphism.
During the 1960s the Leakeys and their son Jonathan discovered fossils remains that seemed to represent the oldest known primate with human characteristics. Leakey challenged contemporary ideas about the course of human evolution and established a new species name for these fossils—Homo habilis (Handy man). Louis Leakey thought that H. habilis was a tool-making contemporary of the australopithecines, with a brain size of about 43 cu in (700 cc). His designation of these remains as a new species belonging to the genus Homo was very controversial. Some critics argued that Leakey’s H. habilis was based on insufficient material and that the remains in questions were actually a mixture of A. africanus and H. erectus. Others questioned the age of the fossil and concluded that Leakey’s fossil should have been classified as a rather large-brained Australopithecus, rather than a small-brained Homo. Although H. habilis was originally very controversial, Leakey’s designation of this new species was subsequently
In 1975 Mary Leakey discovered the jaws and teeth of at least 11 individuals at Laetoli, 30 mi (48 km) south of Olduvai Gorge. The fragments were found in sediments located between deposits of fossil volcanic ash dated at 3.35 and 3.75 million years. At the time these were the oldest known hominid fossils. Mary classified these remains as H. habilis. Mary made another remarkable discovery in 1978, a trail of fossilized hominid footprints that had been preserved in volcanic ash at the Laetoli site. The footprints seemed to be those of two adults and a child, probably made about 3.5 million years ago. The footprints definitely prove that australopithicines regularly walked bipedally, but the discovery led to a major controversy about the identity of the hominid species that made them. According to some anthropologists, the species Mary Leakey was studying at Laetoli was not H. habilis, but a new hominid species, A. afarensis, that Donald C. Johanson had recently discovered at Hadar, Ethiopia. SCIENCE
IN
DISPUTE,
VOLUME
2
Donald Johanson with a plaster cast of Lucy. (UPI/Corbis-Bettmann. Reproduced by permission.)
LIFE SCIENCE
The discovery that brought worldwide attention to the Leakeys occurred in 1959 at Olduvai gorge in Tanzania, when Mary Leakey found the skull of a creature originally called Zinjanthropus boisei (East African man). Informally the fossil became known as Nutcracker Man, because of its robust skull and huge teeth. The specimen was an almost complete cranium, with a brain size of about 32 cu in (530 cc). The specimen was estimated to be dated about 1.8 million years ago. Today, this species is known as Australopithecus boisei.
Donald Johanson is one of the best-known American paleoanthropologists and the founder of the Institute of Human Origins, a nonprofit research institution devoted to the study of prehistory. While working at Hadar in the Afar region of Ethiopia from 1972 to 1977, Johanson discovered hominid remains that were dated as 2.9 to 3.3 million years old. One of his finds was a small but humanlike knee, the first example of a hominid knee. He made his most famous discovery in 1974, the partial skeleton of a female australopithecine, popularly known as Lucy. The skeleton has been dated between 4 and 3 million years ago and was almost 40% complete, making Lucy the oldest, most complete human ancestor ever assembled. In 1975 Johanson’s team found a collection of fossils at a single site that seemed to be the remains of some 13 individuals. The collection was nicknamed the First Family. Eventually still more hominid fossils were discovered, along with stone tools.
After analyzing the fossils with Timothy White, Johanson came to the conclusion that all the Afar fossils belonged to a new species. In 1978 Johanson and White named the new species Australopithecus afarensis. These discoveries and Johanson’s interpretation created a major controversy among paleoanthropologists. Critics claimed that slight differences did not justify a new species name and said Johanson’s A. afarensis should be considered a geographical subspecies of A. africanus. Johanson and his supporters argued that the anatomical differences between A. afarensis and other hominids are qualitatively and quantitatively beyond the normal variation found within a species. Johanson also pointed out that, in addition to being older, A. afarensis had a smaller brain case than H. habilis and A. africanus. There were also significant differences in the teeth, jaws, fingers, foot, and leg bones. Johanson argued that the differences between specimens justified assigning the Afar fossils to a new species. Much of the controversy about creating a new species designation arose when Johanson argued that his Afar specimens belonged to the same species as Mary Leakey’s Laetoli fossils. Based on apelike characteristics of the teeth and skull shared by no other fossil hominid, Johanson and White assigned both the Laetoli and Afar remains to A. afarensis. They claimed that this new species was more ancient and more primitive than any other hominid fossil. Mary and Richard Leakey criticized Johanson for proclaiming a new species too quickly, and suggested that the fossils could be a mixture of several different species. Other anthropologists, however, agreed that the features pointed out by Johanson were significant enough to distinguish the Afar and Laetoli fossils as different species.
In 1994 Meave Leakey found teeth and bone fragments similar to a fossil arm bone that had been discovered in 1965. In 1995 Meave classified all these remains as belonging to a new species, Australopithecus anamensis. This very primitive australopithecine had an apelike skull, but leg bones apparently adapted to bipedalism. Ironically, A. anamensis appears to be quite similar to A. afarensis, the species that was the subject of a dispute between Mary Leakey and Johanson. Despite the ambiguities involved in identifying and naming ancient ancestors, there is general agreement that the earliest human ancestors were the australopithecines. The most significant features distinguishing australopithecines from the apes were their small canine teeth and bipedalism. Members of this group appear to be the first mammals anatomically adapted for habitually walking on two legs. However, they had a brain size of about 24–34 cu in (400–550 cc), a low cranium, and a projecting face. The most primitive australopithecines are now placed in the genus Ardipithecus. In addition to the genus Australopithecus, some anthropologists have adopted the category Paranthropus. At the beginning of the twenty-first century, several new fossil hominid discoveries were announced and, as usual, greeted by debate about their identity and their relationship to other ancient ancestors. The announcement of the discovery of new hominid remains in 2001 sparked renewed controversy about the earliest hominid ancestors, as well as those of the chimpanzee. In 2000, the French paleoanthropologist Brigitte Senut and her colleague Martin Pickford discovered a set of 6-million-year-old fossil fragments in Kenya, which they dubbed Millennium Man. Senut and Pickford classified the fossils as belonging to a new species, which they called Orrorin tugenensis (Original man, Tugen Hills region). One aspect of the controversy has been attributed to prior conflicts between Richard Leakey and Pickford. Nevertheless, the bones and teeth do show an interesting combination of features, which separate Orrorin from the australopithecines. Within months of the report on O. tugenensis, Yohannes Haile-Selassie, of the University of California, Berkeley, announced a new find that cast doubt on the hominid status of Orrorin and supported hominid status for A. ramidus, a species discovered in 1994 by an international team led by paleoanthropologist Timothy White. The fossils, found in the Middle Awash area of Ethiopia, are estimated to be between 5.2 and 5.8 million years old. Haile-Selassie argues that they represent an early form of A. ramidus, and appear to be from a hominid species closely related to the common ancestor of chimpanzees and humans. The debate about
the status of O. tugenensis and A. ramidus could, therefore, provide insight into the lineage of chimpanzees and hominids. Debates about hominid fossils have been the one constant in the rapidly changing field of paleoanthropology. The conflict between the Leakeys and Johanson is well known, but disputes continue into the twenty-first century with the discovery of each new fossil. —LOIS N. MAGNER
Viewpoint: No, the hominid fossils found and named by Donald Johanson and Louis and Mary Leakey represent a single species of australopithecine or very early Homo. Ever since Charles Darwin published his book On the Origin of Species (1859), in which he speculated on a common ancestor for humans and other members of the primate family, scientific imaginations have been fired by the possibility of finding evidence of such a common ancestor. Not long after Darwin’s book was published, the fossilized remains of early humans began turning up when intrepid adventurers, and later scientists, retrieved them from the earth where they had lain for hundreds of thousands, or even millions, of years. Paleoanthropologists now agree that it was about 12 million years ago when what became the modern apes diverged from the primate line that became modern humans. The fossils found by the Leakey family of paleoanthropologists and by Donald Johanson come from a period of 3.5 to 1.5 million years ago, not old enough to be considered the true common ancestor of prehumans and apes. However, the finds of both the Leakeys and Johanson represent crucial evidence of the sequence of human evolution.
Crises of Category In the twentieth century, the problem in tracing human origins through an evolutionary process consistent with Darwin’s model became a question of how to classify fossils with regard to their relationship to each other and with modern humans. Once science was able to reliably date fossils, this task became easier. However, science is a process that requires creating categories, making comparisons, testing, and taking leaps of logic, as well as leaps of faith. The need for categorization has often created scientific disagreement. Each of these disagreements might be called “a crisis of category.” Trying to understand the path of human evolution by accounting for human and prehuman fossils has been plagued by many such crises.

In 1925, South African anatomist Raymond Dart discovered and named an extinct hominid (a primate that walks upright). The primate was not an ape, nor was it a modern or premodern human that could fit into the category “Homo,” or human, because its brain was too small. Following the scientific convention of using genus and species to name a biological organism, Dart established a new genus, calling his discovery Australopithecus (Southern Ape-man), with the species name africanus. In some respects, Australopithecus africanus was a functional “missing link” because it demonstrated bipedal, or humanlike, upright locomotion, but was topped with a very small, ape-sized brain, about 27 cu in (450 cc), the size of a small grapefruit. Because scientists then thought that in the evolution of humans a large brain came before bipedal locomotion, Dart’s discovery created a crisis of category that lasted almost 25 years.
While the genus Australopithecus was rejected by many scientists, then and for another generation, hominid fossils clearly bigger than Dart’s Australopithecus began showing up in South Africa, and Australopithecus robustus soon emerged. A generation later, Louis and Mary Leakey—who had found their share of A. robustus specimens in East Africa at Olduvai Gorge—convinced the scientific world that they had found the earliest member of the genus Homo. Louis Leakey maintained that his bigger-brained Homo habilis had a brain big enough, at 43 cu in (700 cc), to be considered human and, what is more, that his find used stone tools. So he was credited with the first find in the Homo genus and allowed to name a new species, habilis. Many argued then that Leakey’s Homo habilis (Handy Man) was nothing more than a large-brained australopithecine. Whether it used stone tools was—and still is—debated. Whether one believes that H. habilis was notably different from the australopithecines depends on whether one is a “lumper” or a “splitter.” Lumpers prefer to ignore small differences in fossil samples, lumping them into the same species. Splitters focus on slight variations and use them to justify naming new species. The evolutionary biologist Ernst Mayr (1904– ) cautioned against the careless naming of new species. He had seen what has been called a “species naming frenzy” during the 1950s. Mayr, an extreme lumper who recommended lumping Australopithecus and early Homo, said that brain size should be minimized in the species debates. Paleoanthropologists C. Loring Brace (1930– ) and Milford Wolpoff (1942– ) felt that the Homo genus should have started with a brain size of around 73 cu in (1,200 cc), rather than the 43 cu in (700 cc) allowed Louis Leakey when he named H. habilis.
Louis and Mary Leakey. (AP/Wide World Photos. Reproduced by permission.)
The human evolution landscape remained somewhat quiet after the classification of H. habilis until 1974, when Donald Johanson discovered the fossil remains of the oldest, smallest, and most primitive-looking australopithecine. It had a brain of about 18 cu in (350 cc), or about the size of a softball. Johanson’s discovery (later called Lucy) was made in the Hadar region of Ethiopia, in northeast Africa, some 1,500 mi (2,414 km) to the north of where the Leakeys were finding hominid fossils in Kenya and Tanzania.
Lucy caused a “crisis of category” that has not abated. At issue is whether Johanson’s discovery and naming of A. afarensis represents a species different from the previously discovered A. africanus, A. robustus, or even H. habilis.
In response to the question under discussion—no, from the perspective of a lumper, the fossils found by the Leakeys and Johanson are not distinct, but represent geographical subspecies of A. africanus/robustus. This lumper’s argument suggests that the morphological (structural) differences between the specimens represent normal variation within a species, variations wide enough to include all species of australopithecines and H. habilis. C. Loring Brace and a South African paleoanthropologist, Phillip Tobias, took a lumper position in the controversy over whether A. afarensis was a new species or represented normal variation within a species. Brace said that
the anatomical differences between various hominid fossils found in East Africa and South Africa were the result of variation within a single species. Tobias suggested that Lucy was a subspecies of A. africanus. In addition to being older, what made A. afarensis different from H. habilis and A. africanus/robustus were a smaller brain case, differences in dentition, and some differences in finger and leg bones. Remembering that the Olduvai/Laetoli specimens were found more than 1,000 miles (1,610 km) south of Hadar and were significantly older, the lumper’s perspective suggests that, over time, geographical barriers to the mixing of gene pools (the genetic material of all the members of a given population) can produce great variation within a species. These differences may not only be in size but in function, because over great amounts of time, evolution is adaptive to local environments. Even if the local ecological environments at Hadar and Laetoli/Olduvai were similar more than two million years ago, there could still be local variation in terms of hominid morphology. Mayr thought that there would have been a less serious category crisis if Johanson had named Lucy Australopithecus johansonensis, rather than name it for a region. Mayr, who felt that regional morphological variation was important to keep in mind, recommended “suppressing” the name Johanson had chosen. Mary Leakey also took
great issue over the suggestion that Johanson’s A. afarensis from Ethiopia, rather than her H. habilis or A. africanus, was once walking around on her site in Tanzania 1,500 mi (2,414 km) away. Tobias, who worked closely with Louis Leakey, agreed with Mayr. From the beginning, Tobias said that Johanson’s fossils from Afar that he named A. afarensis and the hominids from Laetoli in Tanzania were of the same genus and species—A. africanus. Tobias would have preferred that Johanson name his find as a subspecies, such as A. africanus aethiopicus (since it was discovered in Ethiopia), while the Laetoli finds from Tanzania he said should be A. africanus tanzaniensis.
It is important to note that even today there are phenotypic differences, i.e., observable physical differences, such as skin color, hair texture, shape of the nose and eyes, in the modern human populations in Ethiopia and Tanzania. In general, Ethiopian males are much shorter and smaller boned than males in Tanzania, many of whom are quite tall by world standards. These differences are the result of millennia of environmental adaptation, as well as the product of the genetic traits available in gene pools. Gene pools can be restricted by geographical boundaries, as well as by cultural preferences, in the choice of reproductive mate.
A NEW ERA OF GENUS AND SPECIES NAMING FRENZY?

Almost 50 years ago, the evolutionary biologist Ernst Mayr warned against being too zealous in naming new species in the quest for answers on human evolution. Mayr urged scientists to become conservative lumpers, not splitters, in creating categories. However, new fossil finds must be categorized and their place in human history accounted for. New answers about human origins continue to emerge in Ethiopia, an area that yielded Australopithecus afarensis (Lucy) in 1974, at almost 4 million years old, the oldest recognized human ancestor. New fossil finds in East Africa have occasioned new genus and species designations—Ardipithecus ramidus (1997), Orrorin tugenensis (2000), and, most recently, a subspecies of Ardipithecus ramidus crowned Ardipithecus ramidus kadabba. Dated by the volcanic material in which it was embedded, kadabba is about 5.8 million years old, older than the previous Ardipithecus by more than a million years. The significance of kadabba was presented in a paper published in the journal Nature in July 2001. Ethiopian graduate student Yohannes Haile-Selassie, who discovered kadabba, was working out of the University of California, Berkeley, under Timothy White, a former collaborator of Donald Johanson, who discovered Lucy. Haile-Selassie found fossilized teeth, toes, part of a jaw bone, and pieces of collar bone. He says that the fossil teeth and toes of kadabba point to the primate as having been an upright, bipedal walker. If kadabba was bipedal, this find pushes the known beginnings of bipedalism back another million years or so. However, kadabba fossils are few, and the conclusions that can be drawn from them are therefore sketchy. Even if kadabba is found to have been bipedal, its place in human evolution may not be conclusive. There may have been more than one bipedal primate in Africa six million years ago, and discovering which bipedal primates may have marched in the parade toward becoming human is still a difficult question to answer.

—Randolph Fillmore
What Determines a Species? The argument over whether Johanson and the Leakeys have been finding the same species or different species of hominid comes down to the question: “By what does science judge speciation?” Biologists have always had difficulty defining species. Generally, species are defined as similar individuals that breed mostly among themselves. While the term species denotes morphological and biological stability, a species also has the ability to change over many generations. Most importantly, biologists contend that species change occurs within a species, so drawing the species line is difficult. Biologists consider species the units of evolution.
To determine whether similar or different species of several-million-year-old hominids have been turning up in East and South Africa and in Ethiopia, paleoanthropologists can employ three concepts—cladistics, phylogeny, and scenario. A clade is a group of species recently derived from a common ancestor. A cladistic study compares traits and groups of traits. Cladistic studies of the australopithecines and H. habilis have been inconclusive, as well as
confusing. In some cladistic studies, A. robustus and H. habilis share more primitive characteristics with each other than they do with the other australopithecines, either A. afarensis or A. africanus. Other groupings of traits make A. afarensis and H. habilis look more alike. Once a cladistic comparison is made, and similarities and dissimilarities are agreed upon, scientists may create a phylogeny, or a natural history of change that can show how one species may have evolved into the next. Finally, a scenario is developed. A scenario is a kind of story about evolution that seeks to explain which species was ancestral to which other, and why and how change occurred. Lumpers tend to be liberal in cladistic studies, allowing for wide variation within species. Splitters prefer to add up differences and then name a new species if they judge that the total differences warrant it. Using cladistics, phylogeny, and scenario, it is not perfectly clear that A. afarensis, A. africanus, A. robustus, and H. habilis represent distinct species. In addition, paleoanthropologists still argue over the design of their phyletic “branching” on the early human and prehuman family tree.
Phyletic Gradualism and Punctuated Equilibrium Encephalization, or increasing brain size, plays a large part in how the fossils “hang” from the family tree. How and why brain size increased, and at what speed, are questions that not only need answers, but also have helped create many a crisis of category. Paleoanthropologists recognize two evolutionary speeds—phyletic gradualism, where change is slow and gradual, and punctuated equilibrium, where change is rapid and dramatic when a new species arises by splitting off from a lineage. How these two evolutionary speeds hinder or help the species-naming game needs consideration.
The phylogeny and scenario approach to hominid speciation favors phyletic gradualism. However, when looked at cladistically, the reading of our fossil record might favor punctuated equilibrium. For splitters, the fossil record shows greatly different forms showing up relatively quickly, with no or few transitional forms or species in between. This approach raises the question: Are the “breaks” in the fossil record real, or have we not yet found enough transitional forms to fill in all the gaps between, for example, A. afarensis and H. habilis? Lumpers visualize evolution as occurring slowly and gradually. Other Pressures Affecting the Naming of Species Finally, the question of whether the Leakeys and Donald Johanson were finding different species or the same one needs some nonscientific consideration. The scientific literature on human evolution—and specifically species discovery and naming—does not shed light on the fact
that successful species naming brings with it fame and, perhaps more importantly, research money. The internal politics of paleoanthropology have a real role in the species naming process. Louis Leakey was successful in naming Homo habilis because his specimens had larger brains than australopithecines, big enough to be called human, he said. In addition, Leakey convinced the scientific world that H. habilis used primitive stone tools. However, what especially helped Leakey’s case that H. habilis was “human” was his influence within the scientific community and that community’s willing acknowledgment of his many decades of hard work. In some respects, science rewarded Leakey by agreeing that his discovery was the earliest known Homo. Had H. habilis been just a bit smaller brained, had Leakey not convinced science that H. habilis used tools, or had Leakey not been so highly regarded, his H. habilis might have been entered into the paleontology books as only a big-brained and, perhaps, tool-making australopithecine. Johanson, who showed that A. afarensis was more than a million years older than the hominid fossils found in Kenya and Tanzania, and that Lucy was more “primitive” than other australopithecines, won most, but not all, of the hearts in the scientific community. Splitters awarded him the new species; lumpers would rather have not. Conclusion In conclusion, what can be said with surety is that from about 3 million to about 1.5 million years ago, there were bipedal primates in Africa, primates we call australopithecines, whose brain size was only one-third to one-half that of modern humans. Whether these primates represent the same or distinct species depends on whether one is a lumper or a splitter, and how the crisis of category is resolved. Science is a conservative discipline and thus should err on the side of caution, which means lumping the fossil finds of the Leakeys and Johanson until further data and research suggest otherwise. —RANDOLPH FILLMORE
Further Reading
Cole, Sonia. Leakey’s Luck: The Life of Louis Seymour Bazett Leakey, 1903–1972. London: Collins, 1975.
Haile-Selassie, Yohannes. “Late Miocene Hominids from the Middle Awash, Ethiopia.” Nature 412 (2001): 178–81.
Johanson, Donald, and J. Shreeve. Lucy’s Child: The Discovery of a Human Ancestor. New York: Early Man Publishing, Inc., 1989.
———, and Maitland Edey. Lucy: The Beginnings of Humankind. New York: Simon and Schuster, 1981.
Leakey, Mary. Disclosing the Past. New York: McGraw-Hill, 1986.
Leakey, Richard, and Roger Lewin. Origins Reconsidered: In Search of What Makes Us Human. New York: Anchor Books, 1992.
Lewin, Roger. Bones of Contention. New York: Simon and Schuster, 1987.
Meikle, W. Eric, and Sue Taylor Parker. Naming Our Ancestors: An Anthology of Hominid Taxonomy. Prospect Heights, IL: Waveland Press, 1994.
Morell, Virginia. Ancestral Passions: The Leakey Family and the Quest for Humankind’s Beginnings. New York: Simon & Schuster, 1995.
Poirier, Frank E. Understanding Human Evolution. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1987.
Relethford, J. H. Genetics and the Search for Modern Human Origins. New York: Wiley-Liss, 2001.
Skelton, R., H. M. McHenry, and G. M. Drawhorn. “Phylogenetic Analysis of Early Hominids.” Current Anthropology 27:1 (1986): 21–38.
Tattersall, Ian. The Fossil Trail: How We Know What We Think We Know About Human Evolution. New York: Oxford University Press, 1995.
Does greater species diversity lead to greater stability in ecosystems?
Viewpoint: Yes, greater species diversity does lead to greater stability in ecosystems.

Viewpoint: No, ecosystem stability may provide a foundation upon which diversity can thrive, but increased species diversity does not confer ecosystem stability.
In 1970, Philip Handler, president of the United States National Academy of Sciences, said “the general problem of ecosystem analyses is, with the exception of sociological problems, . . . the most difficult problem ever posed by man.” Despite increasingly complex and sophisticated approaches to the analysis of ecosystems over the past 30 years, many ambiguities remain. Although ecology is often thought of as a twentieth-century science, ecological thought goes back to the ancients and was prominent in the writings of many eighteenth- and nineteenth-century naturalists. Indeed, it was the nineteenth-century German zoologist Ernst Haeckel who proposed the term “ecology” for the science dealing with “the household of nature.” Until the twentieth century, however, ecology was largely a descriptive field dedicated to counting the number of individuals and species within a given area. Eventually, ecologists focused their attention on competitive relationships among species, predator-prey relationships, species diversity, the relative frequency of different species, niche selection and recognition, and energy flow through ecosystems. The German-born American evolutionary biologist Ernst Mayr called species the “real units of evolution,” as well as the “basic unit of ecology.” Understanding ecosystems, therefore, should include knowledge of their component species and their mutual interactions. Natural ecosystems, whether aquatic or terrestrial, are made up of interdependent units produced by evolutionary processes under the influence of climate, geography, and their particular inorganic and organic constituents. Ecologists tend to focus their attention on species diversity, also known as biodiversity, within particular ecosystems, although ecosystems are always part of a larger continuum. The 1992 Convention on Biological Diversity defined biodiversity as “the variability among living organisms from all sources including . . . terrestrial, marine and other aquatic ecosystems and the ecological complexes of which they are part; this includes diversity within species, between species and of ecosystems.”
The ancient belief in the “balance of nature” might be rephrased in more modern terminology as the belief that the more diverse a system is, the more stable it ought to be. The formal expression of this principle is generally attributed to the Princeton biologist Robert MacArthur in a classic paper published in the journal Ecology in 1955. MacArthur suggested that the stability of an ecosystem could be measured by analyzing the number of alternative pathways within the system through which energy could flow. He argued that if there were many species in a complex food web, predators could adjust to fluctuations in population by switching from less abundant to more abundant
prey species. This would eventually allow the density of the less common species to increase. Based on his studies of the impact of invading plant and animal species on established ecosystems, the English biologist Charles Elton in The Ecology of Invasions by Animals and Plants (1958) argued in favor of what has been called the “diversity-stability hypothesis.” According to Elton, evidence from mathematical models, laboratory experiments, and historical experience indicates that systems with few species were inherently unstable, and more susceptible to invading species. During the last few decades of the twentieth century, conservationists often appealed to the diversity-stability hypothesis to underscore arguments for the importance of maintaining biological diversity. In his widely read book The Closing Circle (1971), the American ecologist Barry Commoner asserted that “The more complex the ecosystem, the more successfully it can resist a stress. . . . Environmental pollution is often a sign that ecological links have been cut and that the ecosystem has been artificially simplified.” Convinced that ecosystems with very limited numbers of species were unstable, some ecologists even considered such systems pathological. For example, some ecologists warned that monocultures should be seen as “outbreaks of apple trees and brussel sprouts,” because by creating large areas composed of such outbreaks, agriculturalists inevitably prepared the way for “corresponding outbreaks of pests.” In the 1960s, some ecologists began to apply computer science to previously intractable biological problems such as ecosystem dynamics, predator-prey interactions, and other aspects of the emerging subfield of systems ecology. Mathematical modeling was made possible by the development of powerful computers, which were capable of analyzing the massive amounts of data characteristic of complex ecosystems. By the 1970s, critics of the diversity-stability hypothesis were arguing that mathematical models, computer programs, laboratory experiments, and observations in the field suggested that the hypothesis owed more to intuition and traditional assumptions than to rigorous evidence. As computer models evolved, system ecologists became increasingly focused on ecosystem processes such as energy transport and the carbon cycle, rather than the species that were members of the system. Critics of mathematical models argued that models are simply experimental tools, rather than products of nature, and that oversimplified models could unwittingly omit crucial variables. Nevertheless, some ecologists believe that evidence from mathematical models and laboratory experiments disproved the hypothesis that linked species diversity and stability. Advocates of the diversity-stability hypothesis argued that the mathematical models designed to analyze simplified systems and experiments carried out on artificial communities did not reflect the complexity of natural ecosystems. Conservationists and environmental advocates generally emphasize the importance and values of species diversity and the need to study complex ecosystems composed of numerous species, including those that are inconspicuous and often overlooked by those constructing mathematical models. Conservationists generally thought of healthy natural communities in terms of species diversity, that is, differences in the number of species, their relative abundance, and their functional differentiation or ecological distinctiveness. 
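MacArthur’s 1955 proposal, that an ecosystem’s stability can be gauged by the number of alternative pathways through which energy can flow, can be made concrete with a toy calculation. The short Python sketch below enumerates the energy pathways from a basal resource to a top predator in two invented food webs and scores each web with a Shannon-type index, assuming the energy entering the web is split evenly among the pathways. The webs, the even-split assumption, and the function names are invented for illustration and are not taken from MacArthur’s paper.

import math

def energy_paths(web, source, top):
    # Enumerate all directed paths (energy pathways) from the basal
    # resource `source` to the top predator `top`. The web is given as
    # a dictionary mapping each prey to the consumers that eat it.
    paths = []
    def walk(node, path):
        if node == top:
            paths.append(path)
            return
        for consumer in web.get(node, []):
            walk(consumer, path + [consumer])
    walk(source, [source])
    return paths

def pathway_index(n_paths):
    # Shannon-type score -sum(p * log p), assuming the energy entering
    # the web is divided evenly over the n_paths alternative pathways.
    if n_paths == 0:
        return 0.0
    p = 1.0 / n_paths
    return -n_paths * p * math.log(p)

# A species-poor web: one route from plants to the top predator.
simple_web = {"plants": ["insects"], "insects": ["birds"]}

# A richer web: two herbivores and an intermediate predator give the
# top predator several alternative energy routes.
rich_web = {
    "plants": ["insects", "voles"],
    "insects": ["shrews", "birds"],
    "voles": ["birds"],
    "shrews": ["birds"],
}

for name, web in [("simple", simple_web), ("rich", rich_web)]:
    n = len(energy_paths(web, "plants", "birds"))
    print(name, n, "pathway(s), index =", round(pathway_index(n), 2))

By MacArthur’s reasoning, the web with more pathways (and the higher index) is better buffered, because a predator can switch routes when any one prey species becomes scarce.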
If formal definitions of ecological diversity fail to incorporate all aspects of ecological and evolutionary distinctiveness, and experimental investigations fail to detect critical but cryptic relationships among diversity, stability, and ecosystem function, research results may be quite unrealistic. In examining the stability-diversity debate, it often seems that advocates of particular positions are using incompatible definitions of stability and diversity. Many reports focus only on the number of species in the system, as if these species were completely interchangeable components, rather than on the abundance, distribution, or functional capacity of the various species. In a natural ecosystem, the number of species might not be as significant to the ecosystem’s ability to respond to change or challenge as the structural and functional relationships among species. A natural community would probably be very different from others with the same numbers of species if one or a few members were changed. A model might, therefore, be exquisitely precise, but totally inaccurate.
Similarly, in many model systems and experiments, evidence about systems that are “more resistant” to disturbance may not be relevant to studies designed to determine whether another system is “more stable.” Although one possible meaning of the term stability is constancy, ecologists generally would not consider this an appropriate definition because few natural systems are constant and unchanging. The term resiliency is often used to reflect the ability of a system to return to the state that existed before the changes induced by some disturbance. Mathematical models are often based on this concept, because it allows the analysis of deviations from some equilibrium state. Such models, however, often have little relationship to natural ecosystems. Some ecologists argue that natural communities never exist in the form of the equilibrium state used in mathematical models.
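The notion of resiliency just described, a deviation from an equilibrium followed by a return to it, can be illustrated with a deliberately simple model. The Python sketch below knocks a logistic population away from its carrying capacity and counts how long it takes to return to within 1% of equilibrium; the growth rates, the 1% threshold, and the discrete-time form of the equation are arbitrary choices made only for illustration, not a model taken from the studies discussed here.

def return_time(r, K=100.0, shock=0.5, tol=0.01, dt=0.1, max_steps=100000):
    # Reduce a logistic population to shock * K, then count the time it
    # takes to return to within tol * K of the equilibrium K.
    n = K * shock
    t = 0.0
    for _ in range(max_steps):
        if abs(n - K) <= tol * K:
            return t
        n += r * n * (1.0 - n / K) * dt  # logistic growth, Euler step
        t += dt
    return float("inf")

# A population with a higher intrinsic growth rate returns to
# equilibrium sooner, i.e., it is more resilient in this sense.
for r in (0.2, 0.5, 1.0):
    print("r =", r, "return time =", round(return_time(r), 1))

Return time of this kind is the quantity that equilibrium-based models typically track; the objection recorded above is that real communities may never sit at such an equilibrium in the first place.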
Attempting to avoid the diversity-stability debate, some ecologists prefer to discuss ecosystems in terms of their “wholeness” or “biological integrity,” that is, whether or not the ecosystems include the appropriate components and processes. Although some critics of this viewpoint argue that any
and all components could be considered appropriate, others contend that species change is a valid criterion of biological integrity. Thus, the replacement of native species by invaders in various ecosystems could be considered a warning sign of danger to the biosphere. Critics of this latter viewpoint argue that although such concepts appear to be intuitively true, they are too vague and circular to serve as valid scientific theories of biological diversity. Going beyond traditional ecological themes and debates about biodiversity, scientists predict that biocomplexity will emerge as an important research field in the twenty-first century. Biocomplexity has been described as the study of global ecosystems, living and inorganic, and how these ecosystems interact to affect the survival of ecosystems and species. Using the Human Genome Project (a worldwide effort to sequence the DNA in the human body) as a model, scientists have suggested the establishment of the All-Species Inventory Project. The goal of this project would be to catalog all life on Earth, perhaps as many as 10 to 130 million species. Since the time of the Greek philosopher Aristotle, whose writings referred to some 500 animals, scientists have identified about 1.8 million species. According to the American biologist Edward O. Wilson, the All-Species Inventory would be a “global diversity map” that would provide an “encyclopedia of life.” —LOIS N. MAGNER
Viewpoint: Yes, greater species diversity does lead to greater stability in ecosystems. The concept of the balance of nature is an old and attractive one for which there is much evidence. Living things are always changing, so the communities of species in ecosystems are always subject to change. However, those who see stability as an important characteristic of healthy ecosystems focus on the fact that some level of stability is usually associated with a well-functioning ecosystem, that fluctuations occur within limits, and that they are usually around some average, some balanced state. This is important to keep in mind in any discussion of stability in ecosystems: stability is never absolute.
The idea that the balance of nature is the norm and that wild fluctuations in populations are a sign of disruption in ecosystems comes from the work of many biologists, including that of the English biologist Charles Elton (1900–1991), one of the great ecologists of the twentieth century. Elton wrote about how foreign species, those that are not native to a particular area, can invade an ecosystem and throw it into imbalance. An example of this is the zebra mussel that has invaded lakes and rivers in the Midwestern United States, leading to the loss of many native species and the clogging of waterways. As a result, species diversity has been seriously affected and ecosystems reduced to a dangerously depleted state, where they are much more likely to be unstable. Question of Definition One problem in the debate over the relationship between species diversity (often called biodiversity) and stability is a question of definition. The general definition of stability is the resistance to change, deterioration, or depletion. The idea of resistance to change is
related to the older concept of the balance of nature. Resistance to change also brings with it the concept of resilience, that is, being able to bounce back from some disturbance, and this meaning of stability is the one many ecologists focus on today. They ask: Is there a relationship between resilience and biodiversity? They accept the idea that stability is not the same as changelessness, and that an ecosystem is not unchanging, though it may appear to be so to casual human observation. A young person may be familiar with a forested area, and then revisit that area years later when it appears to be the same ecosystem, which has remained seemingly unchanged over a period of 30 or 40 years. But in reality many trees have died during that time, and others—perhaps belonging to different species—have grown up to replace them; there may even have been forest fires and tornado damage. What remains, however, is a stable ecosystem in the sense that the later forest has about the same number of species as the earlier one, and about the same productivity in terms of biomass (living material such as new plant growth) produced each year. Ecologists would regard this ecosystem as stable. Many would argue that if the forest’s biodiversity were compromised, if, for example, all the trees were cut down and replaced by a plantation of trees of one species to be used for lumber, the forest as a whole would be much more unstable, that is, more susceptible to a disturbance such as the outbreak of an insect pest and much less able to rebound. In the 1970s mathematical models of ecosystem processes seemed to show that biodiversity did not stabilize ecosystems, but that it had just the opposite effect—diverse ecosystems were more likely to behave chaotically, to display wild shifts in population size, for example. These mathematical models had a dramatic effect on the thinking of ecologists and brought the whole idea of the balance of nature into question. But it must be remembered that a model is a construction of the human mind. It may be
intended to represent some part of the natural world, but it is a simplified, abstract view of that world. Models are useful; they eliminate much of the “messiness” of real life and make it easier for the human mind to grasp complex systems. However, that simplification can be dangerous, because by simplifying a situation, some important factor may be eliminated, thus making the model of questionable value. Although there is some evidence that species diversity can at times increase the instability of an ecosystem, there is also a great deal of evidence against this. The Benefits of Species Richness Increasing species diversity leads to an increase in interactions between species, and many of these interactions have a positive effect on the ecosystem because they are mutually supportive. For example, a new plant species in a community may provide food for insect species, harbor fungi in its roots, and afford shade under which still another plant species may grow. Such interactions, though perhaps insignificant in themselves, can make the ecosystem as a whole more stable by preventing other plant or insect or fungal species from overgrowing.
If an ecosystem is species-rich, this means that most of its niches are filled. A niche is an ecological term meaning not only the place where an organism lives, but how it utilizes that place. For example, an insect that feeds on a single plant species has a very specific niche, while one that can survive by eating a variety of foliage has a broader niche. In general, only one species can occupy a particular niche, so two bird species may both live in the same area but eat different kinds of prey, one specializing in worms, for example, and another in beetles. If an ecosystem is species-poor, this means that a number of niches are open and available to be filled by generalist species such as weeds or foreign invaders that may fill several niches at one time and overwhelm native species. In a species-rich ecosystem it is more difficult for such a takeover to occur, because invaders would have to compete with the present niche occupants. In other words, more balanced ecosystems are more likely to remain in balance. They are also more likely to recover successfully from environmental disruptions such as fires, storms, and floods.
KEY TERMS
BIODIVERSITY: Range of organisms in an ecosystem.
BIOMASS: Amount of living material (particularly plant material), by weight, produced in a given period of time.
COMMUNITY: Group of living things residing together.
ECOSYSTEM: All the physical and biological components of a particular ecological system. A garden ecosystem, for example, includes biological components such as plants and insects, as well as physical components such as soil.
ECOSYSTEM STABILITY: Ability of an ecosystem to survive or bounce back from a disturbance. Disturbances can include anything from the introduction of a species to hurricanes.
FOOD CHAIN: Arrangement of organisms such that each organism obtains its food from the preceding link in the chain. For example, a large fish eats a smaller fish, which feeds on aquatic vegetation.
FOOD WEB: Made up of numerous food chains; the often-complex, nutrition-based relationships between organisms in an ecosystem.
GENERALIST SPECIES: Species that can live on a variety of different nutrients and in a variety of different environments.
NICHE: Place where an organism lives and how it uses the resources in that habitat.
PARAMECIUM: Genus of freshwater protozoa.
PROTOZOA: Typically microscopic, unicellular organisms.
STABILITY: Ability to resist disturbances caused by change, deterioration, or depletion; ability to recover after a disturbance.
TROPHIC LEVEL: One level of a food chain. Organisms in the first trophic level feed organisms in the second, which feed the organisms in the third, and so on.

Rain forests contain tremendous species diversity. Shown here is the Amazon River rain forest. (Photograph by Wolfgang Kaehler. CORBIS. Reproduced by permission.)
Experimental Evidence In the 1990s, several groups of researchers produced solid evidence that there is indeed a link between diversity and stability. Some of the most convincing information came from field experiments carried out by the ecologist David Tilman and his colleagues at the University of Minnesota. They created test plots in open fields and added varying numbers of plant species to some of these plots. They found that the plots with the most species, that is, those that had greater diversity, were most resistant to the effects of drought, and also were most likely to have a growth rebound after the drought ended. In other words, the more diverse plots produced more biomass. A careful analysis of Tilman’s results did reveal that rebound was also related to the particular species that were added, not just to the number of species; plants that were more productive, that grew faster, contributed more to the rebound. This analysis does not completely negate the basic finding about diversity, because the more species in an ecosystem, the greater the likelihood that some of those species will be highly productive. Another group of researchers who also explored the link between biodiversity and stability was led by Shahid Naeem of the University of Washington at Seattle. These researchers also took an experimental approach, but their work was
carried out in the laboratory. In the 1980s, they built indoor chambers and showed that the chambers with more species tended to be more productive and more stable. Recently, the same researchers have produced similar results with microbial communities of algae, fungi, and bacteria. They found that an increase in the number of species leads to an increase in the predictability of growth. In another set of experiments, an increase in the number of species was related to a decrease in fluctuations in the production of CO2 (carbon dioxide), which was used as a measure of microbial function. Both these studies on microbial communities indicate that an increase in the number of species at each trophic level (the function an organism performs in the ecosystem) was important to stability. So not only is the number of species in the ecosystem important, but it is also important that each trophic level—producer, consumer, and decomposer—has a variety of species represented. These studies are particularly important because in many ways they mimic the kinds of communities found in soil, an area of biodiversity research that has lagged behind the study of communities above ground. There is also increasing evidence that biodiversity in the soil may enrich biodiversity above ground. For example, soil fungi can enhance the uptake of nutrients by plants. These studies also show that while the population size of individual species may vary widely, the fluctuations can actually contribute to the overall stability of the ecosystem. It may be that these population
changes compensate for other changes within the ecosystem and thus enhance stability. Studies such as these, carried out on well-defined ecosystems, explore the link between diversity and stability. The advantage of microbial systems is that they can be assembled from many species and run for many generations within a reasonable period of time and at reasonable expense. Critics of Tilman’s and Naeem’s work argue that their results often depend on the species chosen; in other words, the relationship of biodiversity with productivity and stability is not true for just any grouping of species. However, this criticism points up the importance of diversity, of having a variety of organisms with many different growth and resource-use characteristics. It would be helpful to be able to perform field studies on the link between biodiversity and stability, rather than having to rely on the artificiality of experimental plots and chambers. Again the problem of complexity arises—natural ecosystems, especially those in tropical areas where biodiversity is likely to be greatest, are so filled with species and so rich in their interactions that it is difficult to decide what to measure. Nevertheless, many observations in such ecosystems suggest that a depletion in species can lead to instability, with large increases in the populations of some species being more common. For example, invasion by foreign species is easier in disturbed ecosystems, where species have already been lost. This explains why
agricultural areas are so susceptible to invasion. Other research has shown that invasion by nonnative species is more likely to occur in less diverse ecosystems, at least on small plots. It may be that diversity is particularly important in ecosystems that are structurally diverse, such as layered rain forests, where essentially different ecosystems exist at distinct levels above ground. But diversity can be important even in simple ecosystems. One large-scale experiment in China showed that growing several varieties of rice together, rather than the usual practice of growing just one variety, prevented crop damage due to rice blast, a fungus that can seriously disrupt production. It appears that rice blast cannot spread as easily from plant to plant when several varieties are interspersed, so production is more stable. A rice field is obviously far from a natural ecosystem, but this is still one more piece of evidence for the diversity-stability link. The multivariety approach also has the benefit of reducing the need for pesticides, thus slowing further deterioration of the ecosystem. Benefits have also been found for another example of diversity in agriculture, the substitution of mixed perennial grasses for the traditional planting of one annual grass, such as wheat. One benefit of mixed perennials is that there is less opportunity for large numbers of one insect pest to decimate an entire crop. Again, stability comes with diversity, because greater numbers of species provide a buffer against disruption.
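One way to see what comparisons like Tilman’s and Naeem’s involve is to score hypothetical plots by the variability of their total biomass over time, reading lower relative variability (a lower coefficient of variation) as greater stability. The Python sketch below does this for two invented plots; the biomass figures are made up for illustration and are not data from the experiments described above.

from statistics import mean, pstdev

def coefficient_of_variation(series):
    # Relative variability of a biomass time series; in diversity
    # experiments a lower value is usually read as greater stability.
    return pstdev(series) / mean(series)

# Hypothetical total biomass (grams per square meter) over eight
# growing seasons, including a drought in year four. Values invented.
species_poor_plot = [410, 395, 420, 180, 260, 330, 400, 415]
species_rich_plot = [520, 505, 530, 380, 470, 500, 525, 515]

for label, series in [("species-poor", species_poor_plot),
                      ("species-rich", species_rich_plot)]:
    cv = coefficient_of_variation(series)
    print(label, "mean biomass =", round(mean(series)), "CV =", round(cv, 2))

In this invented example the species-rich plot dips less during the drought year and recovers sooner, which is the pattern the field experiments reported.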
As with much scientific research, not all data support the diversity-stability link, but scientific results are rarely unanimous. Although the idea of the balance of nature may have been too simplistic, there is still validity to the idea that stability is beneficial, and the values of biodiversity are many. More species not only contribute to more stable ecosystems, but provide a source of chemicals that could be useful as drugs, help to detoxify noxious substances in the environment, and provide a rich source of positive aesthetic experiences. There is enough evidence for the diversity-stability link to make it a viable idea in ecology, and as David Tilman has said, data indicate that it would be foolish to lose diversity from ecosystems. Once that diversity vanishes, it is almost impossible to bring it back, especially because many of the species involved may have become extinct. —MAURA C. FLANNERY
Viewpoint: No, ecosystem stability may provide a foundation upon which diversity can thrive, but increased species diversity does not confer ecosystem stability.
The hypothesis that greater species diversity begets heightened ecosystem stability may seem correct at first glance. Most people intuitively assume that the pond ecosystem has a better chance of thriving from year to year—even in adverse conditions—if it has a wider variety of species living there. That assumption, however, is supported by little scientific proof. On the other hand, many studies provide compelling evidence that diversity does not promote stability and may even be to its detriment. Several studies also suggest that if species diversity does exist, it is based on ecosystem stability rather than vice versa. The Paramecium Studies of N. G. Hairston One of the early experiments to critically damage the greater-diversity-equals-greater-stability argument came from the N. G. Hairston research group at the University of Michigan in 1968. In this study, the group created artificial communities of bacteria, Paramecia, and/or predatory protozoa grown on nutrient agar cultures. Each community contained more than one trophic level. In other words, the communities contained both predators and prey, as do the macroscopic food webs readily visible in a pond: A fish eats a frog that ingests an insect that attacks a tadpole that scrapes a dinner of bacterial scum from a plant stem. In Hairston’s case, the researchers watched the combinations of organisms in a laboratory instead of a natural setting. Several patterns emerged.

In one series of experiments, the researchers combined prey bacteria, which represented the lowest link in the food chain—the first trophic level—with Paramecium. The bacteria included Aerobacter aerogenes, and “two unidentified bacilliform species isolated from a natural habitat.” The Paramecium—two varieties of P. aurelia and one variety of P. caudatum—fed on the bacteria and so represented the second trophic level. As researchers increased the diversity of the bacteria, the Paramecia thrived and their numbers increased, at first suggesting that diversity caused stability. However, when the researchers looked more closely at the effects of increasing diversity on a specific trophic level, the story changed. They added a third Paramecium species to communities that already contained two species, and then watched what happened. The data showed that stability was based on which Paramecium species was introduced to which two pre-existing Paramecium species, and indicated that diversity in and of itself was not a requirement for stability. This set of experiments demonstrated that a higher number of species of one trophic level is unrelated to increased stability at that level.
Finally, Hairston reported the repercussions that followed the introduction of predatory protozoa—the third trophic level—to the experimental communities. The predatory species were Woodruffia metabolica and Didinium nasutum. Regardless of whether the community held two or three Paramecium species, or whether the predators numbered one species or two, all Paramecia quickly fell to the protozoa, whole systems failed, and stability plummeted. In this case, at least, diversity did not generate stability. Although the Hairston research is based on an artificial system rather than a natural one, it represents credible, empirical evidence against the assertion that greater diversity yields stability. Over the years, numerous research groups have conducted similar laboratory experiments with the same results.
May and Pimm’s Conclusions about Stability Not long after the Hairston paper was published, noted population biologist Robert M. May, formerly of Princeton and now at Oxford, devoted an entire book to the subject. First published in 1973, Stability and Complexity in Model Ecosystems provided detailed mathematical models illustrating the connection between diversity and instability in small systems, and argued that these models predict similar outcomes in larger systems. May wrote, “The central point remains that if we contrast simple fewspecies mathematical models with the analogously simple multi-species models, the latter are in general less stable than the former.” He also noted that complexity in food webs does not confer stability within communities. A complex food web has many interacting individuals and species. The higher the number of connections in a food web, the greater the chance for individual links to become unstable and eventually affect the entire web.
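May’s argument can be sketched numerically in the spirit of his random community-matrix models. The Python code below builds many random interaction matrices for communities of different sizes and asks how often every eigenvalue has a negative real part, the usual criterion for local stability. The connectance and interaction-strength values are illustrative choices, not May’s published parameters.

import numpy as np

def fraction_stable(n_species, connectance=0.3, strength=0.5, trials=200,
                    rng=np.random.default_rng(0)):
    # Fraction of random community matrices that are locally stable,
    # i.e., all eigenvalues have negative real parts. Diagonal entries
    # are -1 (self-regulation); off-diagonal interactions are present
    # with probability `connectance` and drawn from a normal
    # distribution with standard deviation `strength`.
    stable = 0
    for _ in range(trials):
        m = np.zeros((n_species, n_species))
        mask = rng.random((n_species, n_species)) < connectance
        m[mask] = rng.normal(0.0, strength, mask.sum())
        np.fill_diagonal(m, -1.0)
        if np.linalg.eigvals(m).real.max() < 0:
            stable += 1
    return stable / trials

for s in (4, 8, 16, 32):
    print(s, "species:", fraction_stable(s), "of random communities stable")

With these particular parameter choices the fraction of stable communities falls sharply as species (and hence interconnections) are added, which is the qualitative pattern May emphasized.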
May readily admitted that stable natural systems often are very complex and contain many species. However, he contended that the increased diversity is reliant on the system’s stability, not the opposite. Complexity is not a prerequisite for stability; instead, stability is essential for complexity. In a separate paper, May used the example of a rain forest, a complex ecosystem with vast species diversity but also a high susceptibility to human disturbance. The ecologist and evolutionary biologist Stuart Pimm, of the University of Tennessee, continued the debate in his book The Balance of Nature (1991). Pimm provided a historical view of the stability argument, along with discussions of many of the experiments conducted over the years, and arrived at several conclusions, one of which has direct bearing on the diversity-stability debate. If stability is defined as resilience, or the ability of a species to recover following some SCIENCE
type of disturbance such as drought, flood, or species introduction, Pimm stated that shorter food chains are more stable than longer food chains. Simplicity, not complexity, imparts stability. He argued that resilience depends on how quickly all members of the food chain recover from the disturbance. Longer food chains involve more species, which present more opportunities to delay the restoration of the complete food chain. Pimm supported his argument with results from studies of aphids. Pimm also noted that scientists have faced and will continue to face problems when taking the stability-diversity question to the field. One problem is the absence of long-term data, which would help scientists to draw conclusions about grand-scale ecological questions such as the diversity-stability connection. Pimm explained that long-term scientific research projects typically require numerous consecutive grants to fund them, and such continuous chains of grants are few and far between. Other Approaches Another difficulty with field studies is finding existing systems that can be adequately compared. If ecosystem stability is defined as the capacity of its populations to persist through, or to show resilience following, some type of disturbance, scientists must identify ecosystems that have similar physical characteristics, and which are experiencing or have experienced a disturbance. To compare the effects of diversity, one ecosystem must have high species-richness and one must have low species-richness. In the early 1980s, Thomas Zaret of the Institute for Environmental Studies and University of Washington had that opportunity.
Zaret investigated the relationship between diversity and stability in freshwater fish communities in Africa and South America. First, he compared lakes and rivers. Lakes, Zaret reasoned, provide a more constant habitat than rivers. Rivers experience substantially more acute annual variation in water level, turbidity, current, and chemical content as a result of seasonal rains. Zaret then surveyed the two systems and found that the lakes contained more species than the rivers. Next, he followed the effects of a disturbance on both systems. The disturbance was a newly introduced predatory fish that had invaded a river and a lake in South America. The lake and river were similar in geographic location, and thus topography and climate, which provided an ideal opportunity for a comparison of each system’s ability to rebound from a disturbance. Five years after the introduction of the predator, an examination of 17 common species that occurred in both water systems showed that 13 had disappeared from the lake, while all were still present in the river. Challenging the diversity-breeds-stability argument, Zaret’s results indicated that the less-diverse river was more
stable. He concluded, “The data presented from freshwater fish communities support the hypothesis that diverse communities have lower stability (resilience).”
The Intermediate-Disturbance Hypothesis Several scientists took a different perspective in the discussion of diversity and stability, and developed what is known as the intermediate-disturbance hypothesis. This hypothesis states that the greatest species diversity appears not in the most stable systems, but in systems under periodic, nonextreme stress. In the most stable systems—defined here as those where disturbances are mild or absent—dominant species eventually outcompete their rivals, and the communities become less diverse. Diversity also declines in highly disturbed systems, because only those species that can reproduce and populate an area quickly thrive. The only areas that have high species diversity are those that experience infrequent, moderate disturbances. Joseph Connell of the Department of Ecology, Evolution, and Marine Biology at the University of California at Santa Barbara reinforced the hypothesis with his review of coral reefs and tropical forests. Connell plotted the level of disturbance against species richness and confirmed that ecosystems under infrequent, moderate stress have the greatest diversity. Specifically, he found the highest levels of diversity among reefs in the path of occasional hurricanes and tropical forests that take the brunt of infrequent storms. Seth R. Reice of the biology department at the University of North Carolina, Chapel Hill, similarly noted that habitats that experience natural disturbances, including storms and fire, are almost always more diverse than more stable areas. In both cases, Connell and Reice indicate that diversity depends on stability, rather than vice versa.
Although these and other experiments indicate that diversity is not necessary for ecosystem stability, the discussion does not end there. A team of researchers from the University of Wisconsin-Madison determined that although diversity itself did not promote stability, the species-specific resilience of the community’s residents might. Led by zoologist Anthony Ives, the team mathematically analyzed the consequences of environmental stress on various communities. After compiling the data, the team found that the characteristics of each species were more important than the number of species in conferring stability. The results showed that the most stable ecosystems—those that were both persistent and resilient—contained individual organisms that responded well to environmental stress. They did not show a correlation between stability and the sheer number of species in the ecosystem. The research team came to the conclusion that species richness alone does not generate ecosystem stability, and suggested that scientists should begin investigating the stress response of individual species rather than simply counting species.
Another researcher, Wayne P. Sousa of the integrative biology department at the University of California, Berkeley provided validation to this principle with a study of the marine intertidal zone (at the ocean’s edge). Sousa counted the number of sessile (attached) plant and animal species on rocks of various sizes. His reasoning was that waves can easily move small rocks, but not the largest rocks. The small rocks, then, are an unstable system for the sessile residents, the largest rocks are a stable system, and the medium-sized rocks fit the requirements of a system with intermediate disturbance. His results showed an average of 1.7 species on the smallest rocks, 2.5 on the largest, and 3.7 on the medium-sized rocks. To ensure that the species distribution was based on rock movement rather than rock size, he also artificially adhered some small rocks to the substrate (the ocean floor) and determined that species distribution was indeed based on wave-induced movement. This work upheld the intermediate-disturbance hypothesis, and illustrated that the greatest diversity was not associated with the most stable system. Diversity Is No Prerequisite As Daniel Goodman, of Montana State University, wrote in a 1975 examination of the stability-diversity controversy, there have been no experiments, field studies, or model systems that have proved a connection between greater diversity and stability. He added, “We conclude that there is no simple relationship between diversity and stability in ecological systems.” Those words still hold today. In 1998 another group of scientists (Chapin, Sala, and Burke) reviewed much of the literature surrounding the connection between diversity and stability in their paper “Ecosystem Consequences of Changing Biodiversity,” which appeared in the journal BioScience. They concluded that research that had inferred relationships between diversity and stability had relied on simple systems and may not translate well to the more complex systems common in nature. Although they noted that several studies imply a relationship between diversity and ecosystem stability, they added, “At present, too few experiments have been conducted to draw convincing generalizations.”
In summary, none of the studies presented here proves beyond doubt that less species diversity produces a more stable natural ecosystem. However, the combination of studies does provide considerable evidence that greater diversity is not a requirement for ecosystem stability. Several of the studies also suggest that the stability of the system may be the driving factor
in whether a community has high or low species diversity. Despite decades of research, the question of what makes a system stable remains largely unanswered. —LESLIE MERTZ
Further Reading Burslem, David, Nancy Garwood, and Sean Thomas. “Tropical Forest Diversity—The Plot Thickens.” Science 291 (2001): 606–07. Connell, J. H. “Diversity in Tropical Rain Forests and Coral Reefs.” Science 199 (1978): 1302–10. Elton, Charles. The Ecology of Invasions by Animals and Plants. London: Methuen, 1958. Goodman, Daniel. “The Theory of Diversity-Stability Relationships in Ecology.” Quarterly Review of Biology 50, no. 3 (1975): 237–366. Hairston, N. G., et al. “The Relationship Between Species Diversity and Stability: An Experimental Approach with Protozoa and Bacteria.” Ecology 49, no. 6 (Autumn 1968): 1091–101. Ives, A. R., K. Gross, and J. L. Klug. “Stability and Variability in Competitive Communities.” Science 286 (October 15, 1999): 542–44. Kaiser, Jocelyn. “Does Biodiversity Help Fend Off Invaders?” Science 288 (2000): 785–86.
———. “Rift Over Biodiversity Divides Ecologists.” Science 289 (2000): 1282–83.
May, Robert M. Stability and Complexity in Model Ecosystems. Princeton, NJ: Princeton University Press, 2001. Milne, Lorus, and Margery Milne. The Balance of Nature. New York: Knopf, 1961. Naeem, Shahid. “Species Redundancy and Ecosystem Reliability.” Conservation Biology 12 (1998): 39–45. Pimm, Stuart. The Balance of Nature?: Ecological Issues in the Conservation of Species and Communities. Chicago: University of Chicago Press, 1991. Reice, S. R. “Nonequilibrium Determinants of Biological Community Structure.” American Scientist 82 (1994): 424–35. Sousa, W. P. “Disturbance in Marine Intertidal Boulder Fields: The Nonequilibrium Maintenance of Species Diversity.” Ecology 60 (1979): 1225–39. Tilman, David. “The Ecological Consequences of Changes in Biodiversity: A Search for General Principles.” Ecology 80 (1999): 1455–74. Walker, Brian. “Conserving Biological Diversity through Ecosystem Resilience.” Conservation Biology 9 (1995): 747–52. Wolfe, Martin. “Crop Strength through Diversity.” Nature 406 (2000): 681–82. Zaret, T. M. “The Stability/Diversity Controversy: A Test of Hypotheses.” Ecology 63, no. 3 (1982): 721–31.
Is the introduction of natural enemies of invading foreign species such as purple loosestrife (Lythrum salicaria) a safe and effective way to bring the invading species under control? Viewpoint: Yes, introducing the natural enemies of invading foreign species is a safe and effective way to bring the invading species under control, as long as rigorous screening and proper release strategies are utilized. Viewpoint: No, introducing the natural enemies of invading foreign species such as purple loosestrife is neither safe nor effective; as history shows, numerous such attempts have backfired.
Just as Sir Isaac Newton demonstrated that every action is accompanied by an equal and opposite reaction, many naturalists believe that for every pest problem there is an equal and opposite natural counterpart, which could be the basis of a biological control method. Unfortunately, we are unlikely to notice the workings of natural controls until an ecosystem has been disturbed. Biological control involves the use of natural enemies (agents) to manage invading species, i.e., nonnative organisms that have become pests (targets). Exotic animals as well as plants can become serious problems for agriculture and the natural environment when they become established in areas where they have no natural enemies. Typically, the target species are weeds, insects, snails, marine organisms, rats, snakes, rabbits, or other animals. The kinds of control programs that might be adopted depend on the nature of the area that has been threatened or damaged and the nature of the invading pest species. Very different methods might, therefore, be appropriate for gardens, greenhouses, fields, farms, wetlands, or forests. The concept and practice of biological control and integrated pest management are of great theoretical interest to researchers in ecology and agricultural science, and of practical interest to farmers, ranchers, horticulturists, land and wildlife managers, extension agents, and regulatory officials. Biological control agents typically are plant-eating insects, parasites, diseases, or insects that attack other insects. Thus, control agents generally can be divided into three groups: microbials (fungi, bacilli, viruses, bacteria, protozoans), parasitoids (agents that parasitize their target), and predators (agents that prey on their target). In the search for safe and effective biological control agents, scientists have initiated detailed studies of the life cycles and habits of hundreds of natural enemies of various pest species. Extensive research is needed to determine the potential host specificity and environmental impact of biological control agents once they are released into a new environment. Although insects are often thought of as major pest problems, many are quite benign and play an important role in keeping other potential pests under control. Many insect species have been suggested for use as control agents, but rigorous screening is essential and release strategies in the field must be
continuously evaluated. Without detailed information about the genetics, taxonomy, and ecology of the insects used as biological control agents, their release into new areas might result in unanticipated problems. Moreover, in areas where complete eradication of an invading species is the goal, attempts to use biological controls might be disappointing. The philosophy of integrated control is based on a natural containment strategy, rather than an attempt to eradicate pest species. Because biological controls act by restoring the natural balance between species, they are unlikely to completely eradicate their target or to eliminate all of the damage caused by the pest. The goal of most true biological control programs is to bring invasive pests under control and to maintain an acceptable equilibrium between the pest and control agent. Because the control agent is a living entity that depends on the pest for its survival, the population of the agent will decrease as the pest species is brought under control. At that point, the pest population might increase until some level of balance is achieved. When insects are used as biological controls, it is usually necessary to discontinue the use of pesticides. Advocates of biocontrol insist that it is one of the most valuable approaches for the long-term management of serious invading pests. Biological control methods have sometimes been applied without proper precautions, but with proper research and appropriate applications, biocontrols may be less damaging to the environment than pesticides, herbicides, and other toxic chemicals. The initial costs of research, screening, and testing may be very expensive, but once the appropriate agent has been established the control system should be self-sustaining. Biocontrol is not a simple matter of adding a living agent to a system threatened or already degraded by an imported pest; any given biocontrol agent must be part of a well-designed, integrated pest management strategy. Effective agents are most likely to come from the original habitat of the target species, where it was presumably kept in check by its own natural enemies. To develop safe and effective biocontrol programs, researchers must test the natural agents that appear to control the pest in its native habitat for host specificity before bringing the agents into a new environment where they might become pests themselves. Appropriate biocontrol agents should attack the target without significantly affecting other species. The ideal agent would only attack the target, but finding agents with absolute host specificity is unlikely. The release of agents that have the appropriate host range must be closely monitored. Critics argue that it is impossible to test an agent against all nontarget species under the conditions that might be found in different sites once the agent is released. Biocontrol programs usually follow certain basic steps and testing procedures. Before a program begins, the pests that are the objects of the control efforts (the targets) must be identified and carefully studied to determine their impact on the environment and their potential vulnerabilities. Scientists need to analyze factors such as the target’s current distribution, potential to spread, natural history, ecological impact, and economic implications. It is important to predict what the future impact of the pest would be in the absence of biocontrol, as well as the possibility of implementing control by other approaches. 
Once the appropriate agents are screened and selected, they must be released in sufficient numbers to increase the likelihood that they will attack the target, reproduce successfully, and colonize the area occupied by the target. Long-term monitoring of the released agent and the target is essential to ensure the program is safe and effective.
In the 1940s, because of the apparent success of DDT (dichlorodiphenyltrichloroethane) and other synthetic broad-spectrum insecticides, some entomologists believed that insect pests could be eradicated, but pesticide resistance developed very rapidly. Rachel Carson’s book Silent Spring (1962) convinced many readers that DDT and other chemical pesticides were poisoning the water, air, fish, birds, and ultimately threatening human health. Biological control systems would presumably reduce the use of insecticides, herbicides, and other toxic chemicals. However, just as the widespread use of synthetic insecticides ignored the complexity of the natural environment, the incautious use of biological controls also could undermine the delicate ecological web that represents nature’s own system of pest control. The attempt to control purple loosestrife provides a good example of the debate concerning the use of biological agents. Purple loosestrife, which occurs naturally in Europe, has invaded wetlands throughout North America. In Europe, the growth of purple loosestrife is apparently kept under control by several insects. Advocates of biological control programs believe that the introduction of several European insect species that specifically target purple loosestrife might bring this pest species under control. The case of purple loosestrife, however, brings up one of the major obstacles to the implementation of biological control programs—the possibility that the imported control agent might itself become a pest. Despite concerns about the safety and effectiveness of biological control, various biological agents currently are being used in attempts to control exotic pests. Many scientists believe that, eventually, more specific and effective agents will be created through the use of molecular biology and bioengineering. Undoubtedly, attempts to use these genetically modified organisms in biological control programs will generate a new set of controversies. —LOIS N. MAGNER
Viewpoint: Yes, introducing the natural enemies of invading foreign species is a safe and effective way to bring the invading species under control, as long as rigorous screening and proper release strategies are utilized. The proper use of biological control, such as the introduction of natural enemies of invading foreign species, will restore the balance of plant and animal habitats in areas that have been overrun by an imported species. In addition, biological control (BC)—the science and technology of controlling pests with natural enemies—may be the only safe and effective solution to salvaging endangered native species and habitats from nonindigenous invaders when other control techniques are impractical or not working. For example, in one location in Illinois purple loosestrife plants are growing on a floating mat where access is nearly impossible to people, but getting at the plants is no problem for bugs. That is not to say the use of natural enemies is the first line of defense, nor that it is guaranteed to always be without risk. On the risk/benefit issue, if a scientific approach is conscientiously used to determine exactly which natural enemies will meet a very strict list of safety criteria to protect native environments, then biological control can be the answer where nonindigenous plants have run amok. The present selected BC efforts are showing some success with purple loosestrife. BC also is being developed in the Everglades to control the devastating Old World climbing fern. Garlic mustard is another pest being studied for biological control.
Background on Purple Loosestrife Since its introduction into New England from Europe in the early 1800s, purple loosestrife (Lythrum salicaria) has become a spectacular and aggressive perennial that is found in every state in the United States except Florida. Purple loosestrife favors wet meadows, floodplains, and roadside ditches, where it tends to crowd out native wetland plants. It endangers wildlife by displacing their food supply and cover. Ducks and muskrats avoid the very dense loosestrife-covered areas. Few native insects bother it, so native birds like wrens and blackbirds cannot find enough food to stay in an environment where purple loosestrife has taken over.
Purple loosestrife was not entirely a chance introduction. Although it is believed seeds came ashore accidentally in ship ballast and tangled in the wool of imported sheep, purple loosestrife plants also were imported for use in herbal medicines. Various parts of the plant were used to treat diarrhea, dysentery, and both external and internal bleeding. Once introduced in the Northeast, purple loosestrife spread easily with the expansion of inland waterways and canals. It also was commercially distributed for its beauty, for its medicinal applications, and as a nectar source for bees. How did purple loosestrife become such an overwhelming invader? First, it had essentially no natural insect enemies or diseases in its new environment, and unfortunately none came with it from Europe. Also, the plant is hardy in most moist environments, and can tolerate a variety of soils and nutrient regimes. It is a beautiful plant, and as a hardy perennial it was actually sold and planted in gardens for a long time.
A clump of purple loosestrife next to a lake. (Photograph by John Watkins. © Frank Lane Picture Agency/ CORBIS. Reproduced by permission.)
However, bringing in natural enemies to return order to a runaway imported pest without enough research can be risky. The history of hasty, poorly thought-out schemes for a fast fix to a problem of invading species has given the use of one species to control another a bad reputation. Possibly the worst example of poor judgment was the introduction in 1883 of the mongoose to Hawaii to control rats, which also were imports. The mongoose sleeps in the night and hunts in the day. The rat sleeps in the day and feeds at night. They never see each other. Both are now severe pests in Hawaii and are doing damage to native species.
Purple loosestrife is about 6.6 ft (2 m) tall with 30 to 50 stems in its wide crown. The purple spikes of six-petaled flowers appear from July to September, forming seeds by mid-July that are shed all winter. Each of 30 stems per plant produces about 1,000 seed capsules, and each seed capsule averages 90 seeds. That equals more than two million seeds per plant, each seed about the size of a speck of ground pepper, or more than four million seeds for the 50-stem plants. The seeds are spread easily by flowing water. Purple loosestrife's sturdy root system and mature woody stems survive the winter. In addition to seed dispersal, the stems and roots of mature plants can start new plants.
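The arithmetic behind those seed totals, using only the figures quoted above, can be checked in a couple of lines:

    # Seed output per plant, from the numbers given in the text.
    capsules_per_stem = 1000
    seeds_per_capsule = 90
    print(30 * capsules_per_stem * seeds_per_capsule)   # 2,700,000 -> "more than two million" seeds
    print(50 * capsules_per_stem * seeds_per_capsule)   # 4,500,000 -> "more than four million" seeds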
KEY TERMS
ALIEN SPECIES: Any biological material capable of propagating that is not native to that ecosystem; also described as a nonindigenous species.
BIOCONTROL: Intentionally introducing a natural enemy of a pest to control that pest.
ENTOMOLOGY: A division of zoology that deals with insects.
HOST-SPECIFIC: An organism that is attracted to only one particular species.
INDIGENOUS SPECIES: A plant or animal species that is native to a particular ecosystem.
MONOSPECIFIC: Specific to just one thing, in this case, one species.
NONTARGET HOST: Unintentional target that a pest predator may attack.
PEST PREDATOR: An organism that preys on a pest organism.
TARGET HOST: Intentional target for attack by the pest predator.
Controlling Purple Loosestrife The problem of purple loosestrife has been widely addressed. One of the leading authorities on the control of nonindigenous plants is Bernd Blossey, a research associate in the department of natural resources at Cornell University. Blossey says no conventional effective control of purple loosestrife is available except where the plant occurs in small, isolated environments so it can be pulled up. It is absolutely essential to get all vegetative parts or the plants will regrow and spread. Cutting and burning do not work. Herbicides are costly, may kill desirable plants as well as the loosestrife, and also may pose some risks to animal species. They must be applied repeatedly and have not been a successful solution to date. Several areas infested with purple loosestrife in Illinois have been treated with herbicides for more than a decade with no progress made in reducing the plant population.
The obvious solution is biological control— to import the natural enemies of the species that kept it in control in its native habitat. But this is not an easy solution. The biological control of purple loosestrife has been a multiagency effort, including the Agriculture Research Service (ARS) of the U.S. Department of Agriculture (USDA), the U.S. Fish and Wildlife Service, and the U. S. Geological Survey (USGS). A scientific advisory group was formed with representatives from universities, U. S. federal and state agencies, and Canada. The problems associated with purple loosestrife are not limited to the United States; the plant has become a major pest in parts of Canada as well. The advisory group has been working since 1986. Through the efforts of this group, the first insects selected to control the spread of purple loosestrife were made available in 1992 to seven states and Canadian cooperators. Since then, trained biologists have released selected insects in about 27 states and all of the Canadian provinces. Biological Controls for Purple Loosestrife These insect controls were introduced after extensive research had been conducted to assure a reasonable chance for success. About 15 species were identified and tested for host specificity against 41 species of native plants and 7 species of agricultural plants. They were first tested at the International Institute for Biological Control in Europe, where both the insects and purple loosestrife were coexisting in a balanced environment. Then the selected insects were brought to the United States under strict quarantine conditions and were tested for their feeding habits and ability to survive here. The BC process is not speedy. It takes at least five years to select test insects and then they must be established where they are needed, which is not always successful. Should the insects find the new environment too desirable, native controls also must be available.
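As an illustration of the host-specificity screening described above, the sketch below keeps a candidate agent only if it feeds heavily on the target and negligibly on every native or agricultural test plant. The function, the scores, and the cutoff values are hypothetical; they are not the advisory group's actual testing criteria.

    # Hypothetical screening filter: feeding_scores maps plant name -> feeding score in [0, 1].
    def is_host_specific(feeding_scores, target="purple loosestrife",
                         min_target=0.8, max_nontarget=0.05):
        if feeding_scores.get(target, 0.0) < min_target:
            return False                   # the agent does not attack the target strongly enough
        return all(score <= max_nontarget
                   for plant, score in feeding_scores.items() if plant != target)

    candidate = {"purple loosestrife": 0.9, "winged loosestrife": 0.02, "alfalfa": 0.0}
    print(is_host_specific(candidate))     # True under these invented numbers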
Four host-specific insect species have been approved for the control of purple loosestrife by the USDA and released in the United States. These include a root-mining weevil (Hylobius transversovittatus), two leaf-eating beetles (Galerucella calmariensis and G. pusilla), and a flower-feeding weevil (Nanophyes marmoratus). The goal of biological control is the long-term management of an invading species, not eradication altogether. Part of the process includes continuous and long-term monitoring. BC is a relatively new science. Researchers hope to learn from their experiences with purple loosestrife, and to improve technologies for environmentally friendly, safe, and effective weed control.
There are success stories to report on the purple loosestrife biological control efforts that indicate the new science is working. An article in a USGS newsletter (2000) describes the results of a 1995 introduction of Galerucella beetles by the USGS Bureau of Reclamation in a Columbia Basin Project in central Washington State. Purple loosestrife was introduced to a new irrigation project in the 1960s as part of a university experiment, but the alien plant quickly spread over 20,000 acres (8,100 ha), ruining wildlife and waterfowl habitats. It was a nuisance for boaters, anglers, and hunters and was clogging irrigation routes. Herbicidal control was only briefly considered for the vast area. Besides environmental concerns, the cost was prohibitive.
The purple loosestrife has to be defoliated for two years by the beetles before it is killed. At the Washington site, this process requires a lot of beetles. Fortunately they are thriving in the test site and even are moving into nearby areas in Idaho that have been overrun with purple loosestrife. Biologists estimate it will take five to seven years to control purple loosestrife at a site. There is no record of insects intentionally released for the biological control of one species suddenly shifting their diet to include other plants.

Other Candidates for Biological Control The Old World climbing fern (Lygodium microphyllum) is smothering cypress trees on the edge of the Everglades in Florida. Steve Mirsky (1999) explains that the fern-covered trees "look as if they're dripping with green sequins," although this beautiful sight represents a "botanical carnage." ARS may now have identified a biological control for the climbing fern. It took a nearly worldwide search, but scientists at the ARS Australian Biological Control Laboratory have found a fern-fighting moth, the tiny Cataclysta camptozonale at 0.5 in (1.3 cm), that seems promising. According to the ARS (2000), rigorous tests were conducted on 14 other fern species, then more than 250 moths were sent to Florida for controlled tests on the Old World climbing fern and to determine whether the moth will harm native crops or plants.
Donald Strong and Robert Pemberton (2000) list recent successes using biological control for both invading plant and insect species in various parts of the world. However, they are quick to point out that "BC is not a panacea" and must be used carefully. The key to safety is finding BC organisms that feed only on the offending alien. This factor was ignored in the 1960s when the weevil Rhinocyllus conicus was introduced to control invading musk thistle, even though it was known to attack four different thistles in Europe. The weevil attacked native thistles in the United States, too. Some say the thinking at the time was that no thistle was a good thistle. Whether the weevil introduction was the result of inadequate regulations or differing values is not clear. Such a release would no longer be permitted by the USDA's Animal and Plant Health Inspection Service, according to David Ragsdale, professor of entomology at the University of Minnesota (McIlroy 2000).
A Galerucella species. Two species of these leaf-eating beetles have been approved by the USDA for the control of purple loosestrife in the United States. (Photograph by Gregory K. Scott. Photo Researchers, Inc. Reproduced by permission.)
Safe and Effective Biological Control According to Cornell University researchers (Weeden et al. 2001), biological control has been successfully applied to invasions of nodding thistle in Kansas and Canada; ragwort has been controlled in California, Oregon, and Canada; and alligator weed and water lettuce have been brought into balance in Florida. The researchers have identified desirable characteristics of weed-feeding natural enemies. First on their list is the criterion that the selected insects must be specific to one plant species. The insects also must have enough of a negative impact on the alien plants to control the population dynamics of the target weed; must be both prolific and good colonizers; and must thrive and become widespread in all of the climates occupied by the pest weed.

A Biological Control of Nonindigenous Plant Species program was established at Cornell to advance the science of biological weed controls. The main focus of the program is to document the ecosystem effects of invasive species and to develop and implement biological control programs. In addition, the program includes the long-term monitoring of ecosystem changes after the release of controlling agents.

Research also is being conducted to determine those factors that increase the competitive ability of invasive plants. Where safeguards are applied as outlined in Cornell's scientific approach to biological control and USDA protocols are respected, the introduction of natural enemies of invading foreign species such as purple loosestrife is a safe and effective way to bring the invading species under control. —M. C. NAGEL
Viewpoint: No, introducing the natural enemies of invading foreign species such as purple loosestrife is neither safe nor effective; as history shows, numerous such attempts have backfired.
The introduction of the mongoose into Hawaii in 1872—in an effort to control rats—backfired. (Photograph by Joe McDonald. CORBIS. Reproduced by permission.)
Freedom to Destroy One reason a foreign species becomes invasive in its new environment is that the new environment contains none of the natural enemies that keep the species in check in its native environment. It would therefore seem feasible to introduce a natural enemy of the invading foreign species into the new environment to bring that species into check. However, biocontrol efforts can go wrong. Some have gone terribly wrong, wreaking havoc on native bird, animal, and plant populations, causing total extinction of numerous species, and bringing others to the verge of extinction.
Biological control, the act of intentionally introducing a natural enemy of a pest in order to control that pest, was first used in 1889 in a California citrus grove (now Los Angeles) where the cottony cushion scale was devastating the citrus industry. By releasing only 129 vedalia beetles imported from Australia, the scale was brought under control as the beetles proliferated and the industry was saved. There are other biocontrol success stories, but there are also horror stories. Dangers of Introducing a Foreign Species Introducing a foreign species into a new environment is a dangerous proposition, even if it is for biocontrol purposes. While some foreign species will live in harmony with native creatures, others are opportunistic and will take control. Of the more than 4,000 plants introduced (intentionally or accidentally) into the United States, only 10% are generally considered invasive. However, this 10% has done untold damage, both economically and environmentally. Invasive species have often overrun native vege-
tation, leading to a loss in diversity of many species that rely on native vegetation. Worldwide, introduced species have caused about 46% of all species loss. “At first the invasion doesn’t look like much,” explain the authors, “but, if left unchecked, can alter a region’s natural, cultural, and aesthetic values irrevocably.” By the time a foreign species is known to be invasive, it is often too late to halt its destructive progress. Preintroduction Testing—How Reliable? Before a foreign species is introduced, it is essential that extensive testing and analysis be conducted to predict the impact the species may have on its new environment. S. J. Manchester and J. M. Bullock (2000) believe the early identification of “problem” foreign species might make their control easier, but that attempts at predicting which species will become invasive have been highly unsuccessful.
One way to help reduce the threat from foreign species is by host-range testing—attempting to determine how many different hosts may be susceptible to attack by the proposed pest predator. Any species other than the target host (the pest) is called a nontarget host. Every effort must be made to ensure that nontarget species will not be adversely affected by the pest predator. According to Keith R. Hopper (1999), a research entomologist with the U.S. Department of Agriculture and a scientist in the Beneficial Insect Introduction Research Unit, University of Delaware, it is virtually impossible to test all nontarget species in the area of intended introduction.
Retrospective studies are often the only true way to evaluate the effectiveness—or destructiveness—of the introduced natural enemy. “All natural systems are dynamic,” explains Hopper, noting that dynamic systems are a moving target. “Introductions in particular may take a long time to reach some sort of equilibrium or at least relatively steady state,” he
says. The rule of thumb for an introduced agent to become established in its new environment is approximately 10 years, a timeframe that may be much too short to evaluate its impact on the target, let alone on the nontargets. Yet for researchers, 10 years is a long time. "Such long time horizons and large spatial scales often put evaluations of target, as well as non-target, impacts beyond the resources available to most researchers," says Hopper.
The common myna, introduced into the United States in 1865 to help control sugarcane worms, instead led to the expansion of a harmful weed species. (Photograph by Chris Van Lennep. © Gallo Images/ CORBIS. Reproduced by permission.)
Also, the host range may evolve over time. A nontarget host that the pest predator did not attack in the laboratory—or even in the environment while the target host was prolific—may become a target as the availability of the target host decreases and the pest predator proliferates. Regardless of the narrowness of the host range, few species are monospecific (feed on one specific host). Seldom, explains Hopper, can a list of species suitable for feeding be “neatly” separated from a list of species that appears unsuitable. Even if nontarget species are at a lesser risk, the risk assessment for individuals in that group must be translated into a risk factor for the entire population. Predicting the overall impact on the target host is difficult, let alone the impact on nontarget species.
By the time the introduced enemy is well established over a wide area, control sites (sites without the introduced enemy) may be difficult or impossible to find, as the enemy will have infiltrated all of the areas containing its target host. Because control sites are important in assessing the introduction's results, a lack of those sites will make accurate assessment nearly impossible. "A major problem with extensive surveys for non-target impacts is that negative evidence is hard to quantify and publish," says Hopper. "Showing in a convincing way that small but significant impacts have not occurred is much more difficult and time consuming than showing that large impacts have occurred."

Donald R. Strong and Robert W. Pemberton (2000) caution that, while biological control is a "powerful technique," it is not a cure-all. They note that there are very few safeguards in place anywhere in the world protecting native habitats from the intentional introduction of foreign species. They also explain that in the
United States biocontrol is governed by a “hodgepodge” of ancient laws that were meant for entirely different purposes; native invertebrates and insects are relatively unprotected; and the past importation of herbivores (plant-eating animals) for the control of weeds has proven “problematic.” Grave Biocontrol Disasters In 1872 the Indian mongoose (Herpestes auropunctatus) was introduced into Hawaii and Jamaica, Puerto Rico, and other parts of the West Indies to control rats in sugarcane. While the mongoose controlled the Asiatic rat, it did not control the European rat. It also found native ground-nesting birds and beneficial native amphibians and reptiles to its liking. R. W. Henderson (1992) found this exotic mammal caused the extinction of at least seven species of reptiles and amphibians in Puerto Rico and other islands in the West Indies alone, and is a major carrier of rabies. Some researchers estimate this biocontrol effort gone wrong costs Puerto Rico and Hawaii $50 million annually by destroying native species, causing huge losses in the poultry industry, and creating serious public health risks.
The Eurasian weevil (Rhinocyllus conicus) was introduced into the United States in the 1960s to control a Eurasian thistle that had previously been introduced into the United States and had become a serious pest plant in farming areas. Even before the weevil was released, there were predictions that it might also threaten native thistles. The weevil was released anyway, and does indeed attack native thistles.
Bird species introduced into the United States for biocontrol purposes include the house or English sparrow (Passer domesticus) and the common myna (Acridotheres tristis). The English sparrow, introduced in 1853 to control the canker worm, proliferated to the point where it diminished fruit crops by eating fruit-tree buds; displaced native birds such as wrens, cliff swallows, purple martins, and bluebirds from their nesting sites; and spread more than 30 different diseases among humans and livestock. The myna, introduced in 1865 to help control cutworms and army worms in sugarcane, became a major disperser of Lantana camara seeds, an introduced weed species harmful to the environment.

Three small predators from the Mustelid family—the ferret, stoat, and weasel—were introduced into New Zealand in the late 1870s to control rabbits, which had been introduced into the country decades earlier and had quickly become a serious agricultural pest. Mustelids, the stoats in particular, have a huge negative impact on native New Zealand species, especially the indigenous kiwi, killing an estimated 15,000 brown kiwi chicks annually—a whop-
ping 95% of all kiwi chicks. Ferrets find groundnesting birds easy prey and pose a serious threat to the endangered black stilt, of which only 100 birds remain. The weasel, a small animal and relatively low in number, tackles prey much larger than itself and has negatively affected lizards, invertebrates, and nesting birds. In Australia, the most disastrous example of biocontrol gone wrong is the introduction of the cane toad (Bufo marinus) by the Australian Bureau of Sugar Experimental Stations. Sugarcane was introduced into Australia in the early 1800s and is now a major industry. However, along with the sugarcane came pests to that crop: the grey-backed cane beetle and the Frenchie beetle. In 1935, 102 Venezuelan cane toads, imported from Hawaii where they were reported to have successfully controlled the sugarcane beetle, were set loose in a small area in North Queensland. Hillary Young (2000) says this “misguided attempt” not only failed to control the cane beetles (ultimately controlled with pesticides), but the toads “successfully devoured other native insects and micro-fauna to the point of extinction. Adding insult to ecosystem injury, the poisonous toad instantly kills any predator that attempts to eat it, particularly the quoll, Australia’s marsupial cat, and giant native lizards. Its population continues to proliferate, outcompeting native amphibians and spreading disease.” In a two-year program that ended in 1998, scientists at the Commonwealth Scientific Industrial Research Organization (CSIRO), Australia’s biggest scientific research organization, failed to find a way to control the cane toad. One pair can lay between 20,000 and 60,000 eggs per breeding season, and the tadpoles develop faster than most native frog tadpoles (Australia had no toads until the cane toad’s introduction), thus outcompeting native tadpoles for food. Cane toads are poisonous at all stages in their life cycle; eat virtually anything from dog food, mice, and indigenous plant and animal species to their own young; and grow to almost 10 in (25 cm) long and more than 4 lb (2 kg). They have spread thousands of miles, moving at a rate of almost 19 mi (30 km) a year. According to the Environment News Service article “Australia Declares Biological War on the Cane Toad” (2001), these toads found Australia to be a virtual paradise compared to their native Venezuela, and are 10 times more dense than in their native habitat. By the year 2000 the species had migrated to an area near Sydney, New South Wales, where the last of the endangered Green and Golden Bell frogs exist. In early 2001 the toads were observed for the first time in Kakadu National Park in the Northern Territory, a 7,700 sq mi (20,000 sq km) environment providing habitats for a huge variety of rare and
indigenous species. This delicately balanced environment, one of the few sites the United Nations Educational, Scientific, and Cultural Organization's World Heritage List recognizes as having outstanding cultural and natural universal values, is now facing irrevocable damage from an introduced foreign pest predator. The CSIRO is continuing its research into ways to halt the toad's progress.

By 1996, purple loosestrife was found in all contiguous U.S. states except Florida, and in all Canadian provinces.
Testing for Enemies of the Purple Loosestrife In the search for a way to control the invasive foreign plant purple loosestrife (Lythrum salicaria) in the United States and Canada, researchers identified approximately 15 species of insects and 41 species of native North American plants for testing host-range specificity. They determined that, although some insects could feed on some native plant species, no natives were “preferred” when in the same area as the purple loosestrife, and in no instances did insects complete their life cycle on any native plant. In 1992 researchers began rearing and releasing two European beetles (Galerucella pusilla and G. calmariensis) in several U.S. states and all Canadian provinces. Also, a European root-mining weevil (Hylobius transversovittatus) and a flower-feeding weevil (Nanophyes marmoratus) have been approved for release.
No Such Thing as Risk-Free Bernd Blossey, director of Biological Control of the Nonindigenous Plant Species Program, Cornell University, cannot unconditionally exclude the possibility of an unexpected host switch in insects, although he believes this event is extremely unlikely (McIlroy 2000).
Although the biocontrol plan for purple loosestrife is considered a model program, range testing all native species and predicting what the introduced species will do in the environment over the long term is virtually impossible. David Ragsdale, professor of entomology at the University of Minnesota, says in an on-line article (McIlroy 2000) that insects (usually short-lived) are "wired differently than a long-lived animal like a mammal or bird." Insects gain an adaptive advantage by narrowly selecting a host rather than using their short life spans moving from host to host as they feed. The article also states, "There is no case where an introduced insect has either exterminated the target weed, or unexpectedly switched hosts to become a serious pest of other plants," citing the work of P. Harris (1988).
Pest control programs that chase one foreign species with another create the potential for the pest predator to become a pest itself, and maybe an even worse pest. In a funny little children’s song an old lady accidentally swallows a fly. She then swallows a spider to catch the fly, then a bird to catch the spider, then a cat to catch the bird, then a dog to catch the cat, then a cow to catch the dog, then a horse to catch the cow. The end of the story? She dies, of course. Introducing a foreign species involves risks to the native environment; the ultimate risk is death to the unique native species in that environment. —MARIE L. THOMPSON
Further Reading Agricultural Research Service. "IPM/Biological Control." Quarterly Report of Selected Research Projects. April–June 2000.
"Australia Declares Biological War on the Cane Toad." Environment News Service. 6 March 2001. The Lycos Network.
Biological Control of Nonindigenous Plant Species. 21 September 2001. Department of Natural Resources, Cornell University.
Carson, Rachel. Silent Spring. Boston: Houghton Mifflin, 1993.
Cox, George W. Alien Species in North America and Hawaii: Impacts on Natural Ecosystems. Washington, D.C.: Island Press, 1999. Devine, Robert. Alien Invasion: America’s Battle with Nonnative Animals and Plants. Washington, D.C.: National Geographic Society, 1998. Harris, P. “The Selection of Effective Agents for the Biological Control of Weeds.” Canadian Entomology 105 (1988): 1495–503. Henderson, R. W. “Consequences of Predator Introductions and Habitat Destruction on Amphibians and Reptiles in the PostColumbus West Indies.” Caribbean Journal of Science 28 (1992): 1–10. Hopper, Keith R. “Summary of Internet Workshop on Research Needs Concerning Nontarget Impacts of Biological Control Introductions.” Online posting. 5 October 1999. University of Delaware. .
Malecki, R. A., et al. “Biological Control of Purple Loosestrife.” BioScience 43, no. 10 (1993): 680–86.
Manchester, S. J., and J. M. Bullock. “The Impact of Nonnative Species on UK Biodiversity and the Effectiveness of Control.” Journal of Applied Ecology 37, no. 5 (2000): 845–46. McIlroy, Barbara, comp. “Q & A: Biological Controls for Purple Loosestrife.” Invasive Plants in the Upper Valley. January 2000. Upper Valley of New Hampshire and Vermont. . Mirsky, Steve. “Floral Fiend.” Scientific American November 1999. . Pimentel, David, et al. “Environmental and Economic Costs of Nonindigenous Species in the United States.” BioScience 50, no. 1 (January 2000): 53–65. . Strong, Donald R., and Robert W. Pemberton. “Biological Control of Invading Species— Risk and Reform.” Science 288 (16 June 2000): 1969–70. . United States Geological Survey. “Bugging Purple Loosestrife.” People, Land, and Water July/August 2000. . Weeden, C. R., A. M. Shelton, Y. Li, and M. P. Hoffmann, eds. Biological Control: A Guide to Natural Enemies in North America. 10 December 2001. Cornell University. . Young, Hillary. “Exotic Down Under: Australia Fights Back against Invader Species.” E/The Environmental Magazine. November–December 2000. .
MATHEMATICS AND COMPUTER SCIENCE

Should Gottfried Wilhelm von Leibniz be considered the founder of calculus?
Viewpoint: Yes, Leibniz should be considered the founder of calculus because he was the first to publish his work in the field, his notation is essentially that of modern calculus, and his version is that which was most widely disseminated. Viewpoint: No, Leibniz should not be considered the founder of calculus because the branch of mathematics we now know by that name was first expressed by Isaac Newton, and only later was it developed by Leibniz.
Deciding who deserves to be called the founder of calculus is not altogether easy. Some fiercely contend that the English scientist Isaac Newton (1643–1727) should be considered the founder, while others are convinced the recognition should belong to Gottfried Wilhelm von Leibniz (1646–1716), the German philosopher, diplomat, and mathematician. As is true with many scientific discoveries, the publication process often cements the scientist’s place in history. But is “Who published first?” always the fairest way to credit someone with an innovative idea? Isn’t it possible that the first person to be published is not necessarily the innovator of the idea? After all, being published not only has to do with submitting work that has merit, but it also with academic connections and public reputation. What happens when it is unclear who is the mentor and who is the student? Often colleagues inspire each other, and should credit for a discovery be given, at least in part, to both? Besides, when and how an idea is discovered is not always clear. Especially in mathematics, concepts often develop over time, like snapshots taken of the interior of a house. As time goes by, snapshots will become outdated; changes over time naturally occur. Scientific discovery is often the same way. It evolves; it changes. So, how then, without accurate and completely honest disclosure, can one ever hope to credit an idea to a single person? When making up your mind on who is the true founder of calculus, there are a variety of issues to consider. First, bear in mind the nature of the student/professor relationship. How does that relationship affect the scientific discovery? The timing and complexity of the discovery are also important. Is the discovery entirely new or is it the expansion of a previously held belief? Part of the recognition process also involves whether the discovery can be proved or whether it is merely an hypothesis. In mathematics, a problem solved has tremendous merit, but hypotheses also have value. Indeed, much controversy surrounds the application of both Leibniz’s and Newton’s work. Finally, the importance of political alliances and how they affect public opinion cannot be underestimated in this debate. The glory of discovery tends to expand over time beyond the actual discoverer. Nations, for a variety of political reasons, take great pleasure in recognizing the achievements of their native sons and daughters. Unfortunately, sometimes this can complicate the issue, making the truth even harder to find. In any event, don’t be surprised if you find it hard to choose a side. People have been arguing over this issue for a long time, and we may never be certain who “found” calculus. —LEE ANN PARADISE
KEY TERMS
DIFFERENTIAL CALCULUS: Branch of calculus that deals with finding tangent lines through the use of a technique called differentiation.
FLUXIONS: Name that Isaac Newton gave his version of calculus.
FUNDAMENTAL THEOREM OF CALCULUS: Theorem that states the inverse relationship between the process of differentiation and the process of integration.
INFINITESIMAL: Infinitely small. In calculus, the idea that areas are composed of an infinite number of infinitely small lines and volumes are an infinite number of infinitely small areas, etc.
INTEGRAL CALCULUS: Branch of calculus that deals with finding areas through the use of a technique called integration.
METHOD OF EXHAUSTION: Method developed by ancient Greek mathematicians in which areas or volumes are calculated by successive geometric approximations.
QUADRATURE: Process of finding the area of a geometric figure.
ROYAL SOCIETY: Scientific society in Great Britain founded in 1660; one of the oldest in Europe.
TANGENT: Line that touches a given line at one point but does not cross the given line.
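To make the sidebar's verbal definition concrete, the fundamental theorem of calculus is usually stated in its modern form (added here for reference; neither Newton nor Leibniz wrote it this way): if f is continuous on [a, b] and F is any antiderivative of f, then

    \int_a^b f(x)\,dx = F(b) - F(a), \qquad\text{and}\qquad \frac{d}{dx}\int_a^x f(t)\,dt = f(x).

The first equation recovers an area from an antiderivative; the second shows that differentiating an accumulated area returns the original function, which is precisely the inverse relationship between differentiation and integration that the sidebar describes.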
Viewpoint:
Yes, Leibniz should be considered the founder of calculus because he was the first to publish his work in the field, his notation is essentially that of modern calculus, and his version is that which was most widely disseminated. Gottfried Wilhelm von Leibniz (1646–1716), the German philosopher, diplomat, and mathematician, is considered by many to be the founder of calculus. Although other men may lay claim to the same title, most notably the English scientist Isaac Newton (1643–1727), several arguments favor Leibniz. Not only was Leibniz the first to publish an account of this new branch of mathematics, his notation and conceptual development of calculus became the acknowledged and accepted form throughout Europe and eventually the world. Without Leibniz, modern calculus would have a much different look.
Leibniz’s Early Life Leibniz was born in Leipzig (now part of Germany) and died in Hanover (also now part of Germany). Although he was born and died in the German states, Leibniz was a true world traveler who learned from and consulted with the greatest scholars of Europe. Leibniz was awarded a bachelor’s and a master’s degree in philosophy from the University of Leipzig, as well as a bachelor’s degree in law. Later he was awarded a doctorate in law from the University of Altdorf in Nuremberg.
Leibniz was an incredibly versatile scholar. His lifelong goal was to unite all of humankind’s knowledge into one all-encompassing philosophy. To this end he spent much of his energy organizing and encouraging scientific and other scholarly societies. He also believed that he could develop a sort of mathematical system to quantify human reasoning. For these reasons, along with more concrete accomplishments such as his development of calculus, Leibniz is known as one of history’s most important philosophers and scientists. After completing his education, Leibniz began a life of travel, scholarship, and service to various royal courts. Two trips made by Leibniz, one to France and the other to England, have special significance in relation to his scientific and mathematical work. An extended stay in Paris, beginning in 1672, enabled Leibniz to study with many of the leading scholars of Europe. In particular, his relationship with the Dutch scientist Christiaan Huygens (1629– 1695) proved extremely fruitful. Huygens, who produced the first practical pendulum clock, was an outstanding astronomer and mathematician. Huygens helped his young German protégé with his pursuit of a mathematical education. Under Huygens, Leibniz’s latent mathematical abilities began to bloom. Influence of Leibniz’s Visit to England During a visit to England the following year (1673), Leibniz conversed with many of Britain’s leading scientists, and was made a fellow of the Royal Society. A calculating machine of his invention received a lukewarm reception from members of the society and a prolonged animosity arose between Leibniz and Robert Hooke, an important member. Leibniz’s correspondence with Henry Oldenburg, the secretary of the society, would prove to be a fateful step in the dispute that later developed with Isaac Newton concerning who deserved priority in the discovery of calculus. It was through Oldenburg that Leibniz received many letters with veiled references to the new mathematical techniques being developed by Newton, Isaac Barrow, James Gregory, and other British mathematicians.
It has never been completely clear to historians exactly how much Leibniz’s trip to Eng-
land influenced his development of calculus. Although he did not meet Newton, Leibniz did have the opportunity to discuss his mathematics with John Pell (1611–1685), and he was exposed to the mathematics of René-François (R. F.) de Sluse (1622–1685) and Isaac Barrow (1630–1677). Since both de Sluse and Barrow had worked on the problem of determining tangents, Leibniz’s introduction to their work probably influenced his future conceptions of calculus. The trip to England also initiated Leibniz’s long-running correspondence with John Collins, the librarian of the Royal Society. This new line of correspondence, along with the continuing correspondence with Oldenburg, provided Leibniz with kernels of information concerning the British work in new mathematical processes. In 1676, Leibniz even received two letters from Newton that included sketches of his work with infinite series. Although this correspondence would later be used against Leibniz in the priority dispute, it is very clear that these letters did not contain the detail required to formulate the rules of calculus. Leibniz’s discoveries of the next few years were, without doubt, his own. Leibniz’s Work on Calculus Back in Paris, Leibniz continued his study of mathematics. His interest in the study of infinitesimals led him to discover a set of new mathematical techniques he called differential calculus. By the end of 1675, Leibniz had developed the basic techniques of his new discovery. Although Leibniz wrote several manuscripts containing these ideas and communicated his discoveries to many of his contemporaries, it was not until 1684 that he published the details of differential calculus in the journal Acta eruditorum. Two years later, he published the details of integral calculus in the same journal.
Besides being the first to publish, Leibniz saw his version of calculus disseminated widely in Europe. Two Swiss brothers, Jacob Bernoulli (1654–1705) and Johann Bernoulli (1667–1748), were especially important in the circulation of Leibniz’s ideas. The Bernoullis were not only innovators in the new field of calculus, but they were also enthusiastic and prolific teachers. One of Johann’s students, the Marquis de l’Hôpital (1661–1704), published the first differential calculus textbook in 1696. Much of the credit for the spread of Leibnizian calculus in Europe belongs to these extraordinary brothers.
Another claim for the priority of Leibniz in the calculus question is that his notation, for the most part, is the notation that was adopted and continues to be in use today. Much of Newton’s notation was awkward, and very little remains a part of modern calculus. Leibniz, on the other hand, was the first to use such familiar notation as the integral sign ∫. This symbol represents an elongated s, the first letter in the Latin word summa, because an integral was understood by Leibniz to be a summation of infinitesimals. Leibniz also invented the differential notation (dx, dy, etc.) favored by most calculus texts. Leibniz’s development of the notation of calculus goes hand-in-hand with his lifelong dream of creating a sort of “algebra” to describe the whole of human knowledge.
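To see the notational contrast concretely, the same derivative and integral can be written in both systems; the curve used below is chosen purely for illustration and does not come from either man's writings.

```latex
% Leibniz's symbols are the ones still in use. For the illustrative curve
% y = x^2, his differential and integral notation reads:
\[
\frac{dy}{dx} = 2x, \qquad \int_0^1 x^2 \, dx = \frac{1}{3}.
\]
% Newton expressed the same rate of change as a ratio of fluxions
% (dotted letters denoting velocities), written \dot{y}/\dot{x}, and had
% no comparably convenient symbol for the integral.
```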
Gottfried Wilhelm von Leibniz (The Library of Congress.)
If Isaac Newton developed the method of fluxions (his version of the calculus) in 1665 and 1666, why then argue that Leibniz, who admittedly did not have his inspiration until nearly a decade later, should be considered the true founder of calculus? One reason is simply that Newton delayed publishing his work, and in fact did not clearly explain his discovery to anyone, until long after Leibniz had independently found his own version of calculus and published his results. Whereas Leibniz published his work in the mid-1680s, the first published account from Newton did not appear until 1704, as an appendix to his work on the science of optics. In addition, Leibniz was confident that his new methods constituted a revolutionary change in mathematics, whereas Newton seemed content to apply his techniques to physical problems without paying heed to their significance. It seems that Leibniz
had a greater appreciation for the importance of the discovery than had Newton.
The Priority Dispute The priority dispute that developed between Newton and Leibniz, and was continued by their supporters even after the death of the two great mathematicians, is one of the most interesting and controversial episodes in the history of mathematics. One can find historical accounts favorable toward Newton and accounts favorable toward Leibniz. Luckily, there are several historical works (see Further Reading) that present a balanced and unbiased analysis of the dispute.
At first, it seemed that Newton played the part of a good-natured, yet slightly condescending teacher, and Leibniz that of the eager student. Although Leibniz always maintained a high level of confidence in his own abilities, he realized after his visit to England that he had a lot to learn about mathematics. Yet, later Leibniz was to give little credence to Newton’s abilities, preferring to believe that Newton’s work in infinite series was not the equivalent to his own development of calculus.
The controversy arose when the disputants and their supporters gathered along two lines of thought. Newton and his defenders rightfully claimed that Newton was the first discoverer of calculus. However, Newton’s supporters went on to make the erroneous claim that Leibniz had plagiarized Newton’s work and therefore had no claim of his own upon the discovery of calculus. Leibniz, on the other hand, was essentially content to be credited as an equal in the priority question, although he continued to believe that his discovery was much broader and more substantial than that of Newton.
The low point of the dispute occurred after Leibniz appealed to the Royal Society to correct what were, in Leibniz’s own judgment, unfair attacks by John Keill (1671–1721), a Scottish scientist and member of the Royal Society who supported Newton, on his role in the discovery of calculus. The report of a committee formed within the Royal Society to investigate the question was known as the Commercium epistolicum. To Leibniz’s chagrin, the report reaffirmed Newton as the “first inventor” of the calculus, and worse, condemned Leibniz for concealing his knowledge of Newton’s work. The report stopped just short of calling Leibniz a plagiarist. It later came to light that the author of the Commercium epistolicum was none other than Newton himself. For centuries, the question of who deserved credit for the discovery of calculus depended largely on the side of the English Channel on which one lived. British mathematicians and historians defended Newton, while the majority of continental mathematicians sided with Leibniz. Today, the two great men are generally accorded equal credit for the discovery, each having made it independently of the other. However, Leibniz was
the first to give the world calculus through publication, his notation is essentially that of modern calculus, and his version is that which was most widely disseminated. Therefore, if one man was chosen as the founder of calculus, Leibniz could rightfully stake his claim to the title. —TODD TIMMONS
Viewpoint: No, Leibniz should not be considered the founder of calculus because the branch of mathematics we now know by that name was first expressed by Isaac Newton, and only later was it developed by Leibniz. Gottfried Wilhelm von Leibniz (1646–1716) is often considered the founder of calculus. However, many claim this title is not deserved, and for several reasons. First, the history of mathematics is much too complicated for such a simple statement to be true. Leibniz did not create calculus in a scientific vacuum, but rather gathered together the work of many mathematicians over many centuries to derive a “new” way of approaching physical problems through mathematics. Second, if one were to venture an assertion as to the true founder of calculus, the name of the English mathematician Isaac Newton (1643–1727) would certainly be the first to mind. Either of these reasons taken alone forms a solid argument that Leibniz is not the founder of calculus. Leibniz’s Predecessors The development of calculus involved the work and ideas of many people over many centuries, and any number of these people might be called the founder of calculus. However, they might be more appropriately labeled the founders of calculus, because each one played an important role in the story. The history of calculus begins over 20 centuries ago with the mathematicians of ancient Greece. A method of calculating the areas of various geometric shapes, called the method of exhaustion, was developed by several Greek mathematicians and perfected by Eudoxus of Cnidus (c. 400–350 B.C.). Eudoxus used the method of exhaustion to find the volumes of various solids such as pyramids, prisms, cones, and cylinders, and his work is considered a forerunner of integral calculus.
Archimedes (c. 287–212 B.C.), perhaps the greatest mathematician of antiquity, used the method of exhaustion to find the area of a circle. He did so by first inscribing a series of polygons within a circle and then circumscribing the polygons around the outside of the circle. As the
number of sides of the polygons increased, the average of the areas of the inscribed and circumscribed polygons became progressively closer to the area of the circle. Using this method with a 96-sided polygon, Archimedes was able to approximate the value of pi more accurately than anyone had before. As well as circles, Archimedes applied his method to a number of other geometric figures, including spheres, ellipses, cones, and even segments of parabolas. Like Eudoxus, Archimedes also found ways to calculate volumes of various solids. His method for finding the areas and volumes is essentially equivalent to the method of integration in calculus. Archimedes also developed methods for finding the tangents to various curves, an anticipation of differential calculus. Because his technique for finding areas was an early form of integration, and his tangent problems were essentially derivatives, it is easy to consider Archimedes a founder of calculus.
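The squeeze described above can be restated numerically. The short sketch below, in Python, uses the semi-perimeters of the inscribed and circumscribed polygons together with the standard modern doubling recurrence; it is a paraphrase of the idea rather than a transcription of Archimedes’ own geometric argument.

```python
import math

# Semi-perimeters of the circumscribed (a) and inscribed (b) regular hexagons
# for a circle of radius 1, so that b < pi < a from the start.
a = 2 * math.sqrt(3)
b = 3.0
sides = 6

# Archimedes-style doubling: each pass moves from an n-gon to a 2n-gon,
# tightening the squeeze on pi.
while sides < 96:
    a = 2 * a * b / (a + b)   # circumscribed 2n-gon (harmonic mean)
    b = math.sqrt(a * b)      # inscribed 2n-gon (geometric mean)
    sides *= 2

print(f"{sides}-gon bounds: {b:.5f} < pi < {a:.5f}")
# Output is roughly 3.14103 < pi < 3.14271, consistent with Archimedes'
# published bounds of 3 10/71 and 3 1/7.
```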
Medieval Europe saw little in the way of scientific or mathematical advances, and calculus was no exception. However, Islamic scholars of the Middle Ages not only kept alive much of the mathematics of Greek antiquity, but also made important advances in certain areas. At least one Islamic mathematician, Sharaf ad-Din at-Tusi (c. 1135–1213), actually used the concepts of calculus in his work. At-Tusi essentially employed the derivative (although not explicitly and not in modern notation) to find the maximum values of certain cubic equations. Although historians disagree over the approach at-Tusi took in developing his mathematics, there is no doubt that some of the underlying concepts of calculus were known and used by certain Islamic mathematicians.
In seventeenth-century Europe, there was a rebirth of ideas that involved what we now call calculus. One of these critical ideas was the assertion that areas could be considered as an infinite sum of lines, or indivisibles. The concept of an indivisible, popularized by the Italian mathematician (Francesco) Bonaventura Cavalieri (1598–1647), was an important step in the process of “quadrature,” or finding the area of a figure. By conceiving of areas as the sum of infinitely small lines, Cavalieri found methods to calculate the area of a variety of geometric figures. This extension of the work of Archimedes provided a fundamental concept behind the method of integration.
Quite possibly the most important contributor to the development of calculus between the time of Archimedes and that of Leibniz and Newton was the French jurist Pierre de Fermat (1601–1665). Fermat was an amateur mathematician who made many wondrous discoveries while working on mathematics in his spare time. Although far removed both geographically and professionally from the centers of French mathematics, Fermat maintained a long correspondence with many of the most important mathematicians of France. This correspondence inspired an incredibly fruitful episode in the history of mathematics. Fermat is known today as a cofounder of probability theory, in collaboration with Blaise Pascal (1623–1662); a cofounder of analytical geometry along with, but independently of, René Descartes (1596–1650); and one of the first modern mathematicians to show a serious interest in the theory of numbers. He is particularly important to the story of calculus because he worked in both branches, differential calculus and integral calculus. Fermat extended the work of his predecessors by discovering general formulas for finding the areas of various figures. In addition, Fermat used the method we today call differentiation to find the maximum and minimum values of algebraic curves.
Not long after Fermat’s work, another important contribution to the development of calculus came from the Scottish mathematician James Gregory (1638–1675). Gregory discovered many of the basic concepts of calculus at about the same time as Newton and well before Leibniz. In his book Geometriae pars universalis (The universal part of geometry), published in 1668, Gregory presented the first proof in print of what we now call the Fundamental Theorem of Calculus. His work anticipated both Newton and Leibniz, yet gained very little recognition in Gregory’s lifetime.
Many historians consider Gregory’s contemporary, Isaac Newton, to be the founder of calculus. There are certainly several fundamental arguments that favor Newton as the man to whom the credit is due. These arguments will be presented shortly. However, even Newton relied on his predecessors. Not only was Newton obviously influenced by the work of those mathematicians already mentioned, but he was also greatly indebted to his teacher and the man whom he replaced as professor of mathematics at Cambridge University, Isaac Barrow (1630–1677).
Barrow conceived of several ideas fundamental to the development of calculus. One of these was the concept of the differential triangle. In the differential triangle, the hypotenuse of a systematically shrinking triangle represents the tangent line to a curve. This idea is conceptually equivalent to the notion of a derivative. Barrow also realized the inverse relationship between the method of finding tangents (differentiation) and the method of finding areas (integration). Although Barrow never explicitly stated the Fundamental Theorem of Calculus, his recognition of this inverse relationship laid the groundwork for Newton’s work.
Isaac Newton: The True Founder of Calculus So far, we have addressed the other characters in the history of mathematics who would claim at least part of the credit for the development of calculus. If anyone deserves to be called the founder of calculus, however, it is Isaac Newton. Newton was born and raised on a small farm in Lincolnshire, England. Although he was initially expected to manage the family farm, an uncle recognized the latent scientific talent in the young boy and arranged for Newton to attend first grammar school and later Cambridge University. At Cambridge, Newton discovered his interests in mathematics, but because that subject was given very little attention at the university, Newton acquired his own books containing the most advanced mathematics of the time and taught himself. On leave from Cambridge, closed due to an outbreak of the plague, Newton spent 1665 and 1666 at his mother’s farm. There Newton conceived of many of the ideas that would make his the most famous name in science: the laws of motion, the universal law of gravitation, and the formulation of calculus, which he called the method of fluxions. This period of a little over a year is often referred to as the annus mirabilis, or the “year of miracles,” because of the almost miraculous inspiration that allowed one man to make so many important scientific discoveries in a short period of time. In his conception of calculus, Newton imagined a point moving along a curve. The velocity of the point was a combination of two components, one horizontal and one vertical.
Newton called these velocities fluxions and the inverse of the fluxions Newton called fluents. Although the names are different than those we use in calculus today, the mathematical concept is equivalent. Newton’s fluxions are our derivatives. More importantly, Newton realized the relationship between the process of differentiation and that of integration, giving us the first modern statement of the Fundamental Theorem of Calculus. Isaac Newton developed his version of calculus many years before Leibniz, and he used his newfound mathematical techniques in analyzing scientific questions. However, for various reasons, Newton did not publish his work on calculus until much later, even after Leibniz’s own version of calculus was published in the 1680s. Newton wrote a short tract on fluxions in 1666 and several other works on calculus in the next decade. However, none of Newton’s works addressing calculus were published until the next century.
Isaac Newton (Painting by Kneller. The Bettmann Archive. Reproduced by permission.)
Although Newton’s writings about calculus were not published for many decades, his formulation of the methods of calculus was known in England and continental Europe. Thanks to many friends and supporters, Newton’s work was circulated and discussed long before it was formally published. Just how much Leibniz knew of Newton’s discovery is debatable. Leibniz did receive correspondence from several people regarding Newton’s work, including two letters from Newton himself. None of this correspondence stated explicitly Newton’s work on fluxions, but rather offered vague accounts of the new method. For this reason, most historians believe Leibniz developed his version of calculus independently of Newton. However, of one thing there is no doubt—Isaac Newton formulated the new branch of mathematics we now call calculus long before his rival Gottfried Leibniz. Calculus after Leibniz Whether we consider Leibniz or Newton (or even Archimedes or Fermat) to be the original founder of calculus, the calculus we know today is certainly very different than that which came from the pen of any of these men. Learning of the new methods of calculus shortly after Leibniz’s discovery, two Swiss brothers, Jacob Bernoulli (1654–1705) and Johann Bernoulli (1667–1748), quite possibly outshone their teacher. Jacob Bernoulli was professor of mathematics at Basel, Switzerland, where he did pioneering work in integration techniques and in the solution of differential equations. It appears that the term integral was first used in print by Jacob. Johann was professor of mathematics at Groningen in the Netherlands and later succeeded Jacob at Basel after Jacob’s death. Johann also made significant contributions to the theory of integration, as well as serv-
ing as the tutor for the French nobleman, the Marquis de l’Hôpital (1661–1704). De l’Hôpital used (some say stole) Johann’s work to write the first differential calculus textbook in 1696. Interestingly, the two gifted brothers bickered constantly throughout their lives and what could have been brilliant collaboration degraded into petty jealousy. The Bernoulli brothers, however, did as much to shape modern calculus as anyone else associated with its conception. Although the basic techniques of calculus were discovered by the early eighteenth century, serious questions arose as to its logical foundation. Few of the individuals discussed up to this point, from Archimedes to the Bernoulli brothers, were overly concerned with setting calculus on a sound logical foundation. This problem was pointed out quite effectively by the Irish scientist (and bishop), George Berkeley (1685–1753). In his 1734 work, The Analyst: or, a Discourse Addressed to an Infidel Mathematician, Berkeley attacked many of the illogical assumptions upon which calculus was based. The most glaring of these gaps in logic was the inconsistency with which infinitely small numbers were treated. Berkeley called these numbers “the ghosts of departed quantities.” Until it was placed on a solid foundation, we cannot maintain that the process of discovering calculus was complete. The man who finally succeeded in placing calculus on a firm logical foundation was the French mathematician Baron Augustin-Louis Cauchy (1789–1857). In his book, Cours d’analyse de l’École Royale Polytechnique (Courses on analysis from the École Royale Polytechnique), published in 1821, Cauchy set a rigorous course for calculus for the first time. Although not the final word on rigor in calculus—the German mathematician Georg Friedrich Bernhard Riemann (1826–1866) later provided the modern and rigorous definition of a definite integral–—Cauchy’s work provided the framework within which modern calculus is understood. Without the work of Cauchy and others, calculus would not be what it is today.
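For reference, the rigorous definition that Riemann supplied, building on Cauchy's limit concept, can be stated in one line; what follows is the standard modern textbook formulation rather than a quotation from either mathematician.

```latex
% The definite integral as a limit of Riemann sums over partitions
% a = x_0 < x_1 < ... < x_n = b, with sample points x_i^* in [x_{i-1}, x_i]:
\[
\int_a^b f(x)\,dx \;=\; \lim_{\max_i \Delta x_i \to 0} \sum_{i=1}^{n} f(x_i^{*})\,\Delta x_i,
\qquad \Delta x_i = x_i - x_{i-1}.
\]
% The limit replaces the "infinitely small" increments that Berkeley attacked,
% which is what finally put the foundations of calculus on firm ground.
```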
Who should receive credit for a new discovery? If that discovery is made independently without the influences of other sources, obviously the first person to make the discovery deserves credit. Unfortunately, that seldom happens, especially in mathematics. The discovery of calculus was a centuries-long process that culminated with the work of modern mathematicians. This makes it very problematic to ask simply the question, “What single person deserves credit?” Nevertheless, as we have also seen, if one is pressed to answer such a question that answer would surely be Isaac Newton, because Newton was the first person to assimilate all the work that came before into a single, rational entity. —TODD TIMMONS
Further Reading
Aiton, E. J. Leibniz: A Biography. Boston: Hilger, 1985.
Baron, Margaret E. The Origins of the Infinitesimal Calculus. New York: Dover Publications, 1969.
Boyer, Carl B. The History of the Calculus and Its Conceptual Development. New York: Dover Publications, 1949.
Edwards, C. H. The Historical Development of the Calculus. New York: Springer-Verlag, 1979.
Guicciardini, Niccolò. The Development of Newtonian Calculus in Britain 1700–1800. Cambridge: Cambridge University Press, 1989.
Hall, Rupert. Philosophers at War: The Quarrel Between Newton and Leibniz. New York: Cambridge University Press, 1980.
Hofmann, Joseph E. Leibniz in Paris, 1672–1676: His Growth to Mathematical Maturity. London: Cambridge University Press, 1974.
Westfall, Richard. Never at Rest. Cambridge: Cambridge University Press, 1980.
Does whole-class teaching improve mathematical instruction?
Viewpoint: Yes, whole-class teaching improves mathematical instruction by cutting down on preparation time for teachers and providing teachers with a consistent method to assess student development. Viewpoint: No, whole-class teaching does not improve mathematical instruction; it is inefficient and ineffective except in situations where class size is very small and student ability falls in a narrow range.
At the heart of this debate is the age-old question, “How much is too much?” Should class time center around “whole-class teaching” methods, or should it be tailored toward individual needs? Or should it, ideally, be a mixture of methods? Most of us have suffered from a twinge of math anxiety at least once, especially when faced with the challenge of solving a seemingly impossible-to-solve equation. And yet, by all accounts, basic mathematical skills are considered by many to be essential for success in the business world. Success, of course, is measured in a variety of ways. Test scores commonly measure one form of success, which is academic achievement. Korean students, for example, frequently outrank other students with regard to test scores. Proponents of whole-class learning are quick to point out that the average Korean class includes 40 or more students. This, they say, allows teachers to utilize a wider variety of nontraditional teaching methods more effectively. But are higher test scores a result of larger classes, or a societal attitude toward learning in general? Certainly study habits play a part in academic achievement. Success can also be measured in other, less tangible, ways such as a student’s confidence level and ability to apply knowledge to real-life situations. After all, not all students are gregarious and outgoing. Some are less comfortable with class participation than others. For the shy, perhaps mathphobic student, individual attention can be quite beneficial. Working in small groups might allow the less confident student to ask questions he or she might otherwise feel uncomfortable asking in front of the entire group. Many students dread asking what could be considered a “stupid question” and would rather remain in the dark than look foolish in front of their peers. So, when considering whether whole-class learning is the best approach to teaching mathematics, one must also think about the feasibility of class participation as a key element in the learning process. The issue is clearly complicated, and is not likely to be decided any time soon. It would seem that we must first agree on the goals of mathematical instruction and on what constitutes success, and that alone is a monumental task. Some people value personal achievement, while others say that measurable success (a high test score) is paramount. Making math seem less intimidating is certainly one goal; making it more interesting is another.
Ideally, a teacher’s job is to create an environment in which learning can take place, but how does one do that when students enter a classroom at dif-
ferent academic levels? Does the teacher team them together, utilizing technological aids and modern resources, in the hope that demonstrating mathematical concepts on a broader basis will drive the finer points home? Or should the teacher focus on the detail work, breaking mathematical analysis up into smaller bits (and in smaller study groups) in the hope that individual needs will be met? Whether it is best to focus on the collective or on the individual depends on the educator you talk to. Some say the focus should be situational. In other words, some aspects of math can be taught on a whole-class basis, while others can be explored on an individual basis. The nontraditional approach embraces active rather than passive learning, which is said by some to bring an element of excitement to the classroom. Others would argue that there is something comforting about the old-fashioned approach—the teacher puts the problem on the board and explains in a step-by-step manner how to solve it. This method, some would argue, serves as an equalizer; regardless of their skill level, students can grasp the general concepts of mathematics if they simply go to class and pay attention. Still others say that is too optimistic an approach; keeping students’ attention in this highly visual, technological world is a constant challenge. Perhaps the solution can be found in a combination of methods; as in mathematics itself, there is often more than one way to solve a problem. —LEE ANN PARADISE
Viewpoint: Yes, whole-class teaching improves mathematical instruction by cutting down on preparation time for teachers and providing teachers with a consistent method to assess student development. In the real world, people must use mathematical skills in their daily lives. In the ideal world of mathematical instruction, students are provided with a good overall sense and understanding of mathematics, as well as of its importance to solving real-world problems. Unfortunately, things are often far from the ideal. Some people actually have a phobia of mathematical concepts, and a lower-grade anxiety about math is present in even larger numbers of people. In any case, a majority of the population is apathetic toward math, and ignorant concerning its true nature.
At present, educators put great effort into trying to develop mathematical skills in students using traditional methods. For a majority of students, however, these traditional methods fail. Instead of focusing upon the utility of mathematics for its application to a wide variety of life experiences, traditional methods rely far too much upon rote learning and a mechanical attitude towards math. This traditional, mechanical approach all too often leads to student disinterest in, and even distaste for, mathematics.
A Big Difference Whole-class teaching (WCT) and its associated technology make it easier to effectively teach mathematics. WCT makes a big difference by cutting down on preparation time for teachers, and helps to keep the class focused on the math concept at hand, thus freeing the teacher from classroom management issues (so more time can be spent on effectively teaching math), and providing a consistent way to assess student development and learning. The whole-class methodology requires a fundamental change in teaching, with a change in the emphasis on how to teach, changes in the assessment of student performance, use of an integrated curriculum, and the use of manipulative and activity-based instruction.
In spite of the many favorable attributes, including those referred to above, that its supporters contend are genuine, WCT does have its detractors. One of the main objections relates to class size. Opponents of WCT make the contention that whole-class teaching is only possible in small classes; that it breaks down in large classroom settings. However, the Third International Mathematics and Science Study (TIMSS), a project of the International Study Center at Boston College, stated in a report that “69% of the students in Korea were in mathematics classes with more than 40 students and 93% were in classes with more than 30 students. Similarly, 98% of the students in Singapore, 87% in Hong Kong, and 68% in Japan were in classes with more than 30 students.” The report continues, “Dramatic reductions in class size can be related to gains in achievement, but the chief effects of smaller classes often are in relation to teacher attitudes and instructional strategies. The TIMSS data support the complexity of this issue. Across countries, the four highest-performing countries at the fourth grade—Singapore, Korea, Japan, and Hong Kong—are among those with the largest math classes . . . the students with higher achievement appear to be in larger classes.” It is reported that these teachers relied more on whole-class instruction and independent work than their counterparts in the United States, two practices which are currently opposed by many school reformers. The TIMSS study shows that WCT is effective not only in small classes, but also in large classroom environments.
KEY TERMS
CD-ROM: Acronym for Compact Disc-Read-Only Memory. A CD-ROM is a rigid plastic disk that stores a large amount of data through the use of laser optics technology.
HOMOGENEOUS: Having a similar nature or kind.
INTEGERS: Any positive or negative counting numbers, or zero.
LCD: Acronym for Liquid Crystal Display. An LCD panel is a translucent glass panel that shows a computer or video image using a matrix of tiny liquid crystal displays, each creating one pixel (“picture element” or dot) that makes up the image.
PROBABILITY: Ratio of the number of times that an event occurs to the larger number of trials that take place.
STATISTICS: Discipline dealing with methods of obtaining a set of data, analyzing and summarizing it, and drawing inferences from data samples by the use of probability theory.
In additional support of the TIMSS studies, psychology professors Jim Stiegler of the University of California at Los Angeles, and Harold Stevenson of the University of Michigan recommended in The Learning Gap (1992) larger class sizes (in the context of using WCT in those classrooms) in order to free teachers to have more time to collaborate and prepare. Oral, Interactive, and Lively High-quality WCT has been described as “oral, interactive, and lively.” WCT is not the traditional method of teaching math that uses the all-too-simplistic formula of lecture, read the book, and “drill and practice.” It is a two-way process in which pupils are expected (and encouraged) to play an active role by answering questions, contributing ideas to discussions, and explaining and demonstrating their methods to the whole class. For example, the June 1998 report entitled Implementation of the National Numeracy Strategy written by David Reynolds, a professor of education at the University of Newcastle upon Tyne, Great Britain, talks about why WCT is more active than passive in its practices throughout the classroom. Reynolds says, “Direct teaching of the whole class together does not mean a return to the formal chalk and talk approach, with the teacher talking and pupils mainly just listening. Good direct teaching is lively and stimulating. It means that teachers provide clear instruction, use effective questioning techniques and make good use of pupils’ responses.”
Achievements The National Numeracy Strategy is an important educational project involved in raising math standards in Great Britain. According to the project’s Framework for Teaching Mathematics, WCT succeeds in a variety of important math-related teaching areas:
• Teachers are effectively able to share teaching objectives. This ability guides students, allowing them to know what to do, when to do it, and why it is being done.
• Teachers are able to provide students with the necessary information and structuring that is essential to proper instruction.
• Teachers are able to effectively demonstrate, describe, and model the various concepts with the use of appropriate resources and visual displays, especially by using up-to-date electronic devices for conveying information.
• Teachers are able to provide accurate and well-paced explanations and illustrations, and consistently refer to previous work to reinforce learning.
• Teachers are able to question students in ways that match the direction and pace of the lesson to ensure that all pupils take part. This process, when skillfully done, assures that pupils (of all abilities) are involved in the discussions and are given the proper amount of time to learn and understand. Only WCT allows this to happen because all students are part of the group, allowing for more effective responses from the teacher.
• Teachers are able to maximize the opportunities to reinforce what has been taught through various activities in the class, as well as homework tasks. In parallel, pupils are encouraged to think about a math concept, and when ready to talk through a process are further encouraged either individually or as part of the group. This process expands their comprehension and reasoning, and also helps to refine the methods used in class to solve problems. It also allows students to think of different ways to approach a mathematical (or for that matter, any other) problem.
• Teachers are able to evaluate pupils’ responses more easily, identify mistakes, and turn those mistakes into positive teaching points by talking about them and clarifying any misconceptions that led to them. Along these lines, teachers discuss the justifications for the methods and resources that students have chosen, constructively evaluating pupils’ presentations with oral and written feedback. This is done with all of the students, so all learn from their fellow students’ mistakes.
• Teachers are able to review with the students, near the end of each lesson, what has been taught and what pupils have learned. Teachers identify and correct any misunderstandings, invite pupils to present their work, and identify key points and ideas. Insight into the next stage of their learning is introduced at this time. Uniform review is important, and is most effectively performed with WCT as the predominant style of teaching.
Key Technological Aids and Techniques WCT allows for the successful deployment of technology in the classroom. A number of key techniques utilizing electronic and other technologies supplement WCT with respect to mathematics. These techniques are instituted primarily to increase the effectiveness of math instruction through the overall WCT process. Some of the more vital technologies include television screens, large-screen monitors, data projectors, graphing calculators used with an overhead projector, LCD panels/tablets that sit atop an overhead projector unit, plasma screens, and interactive whiteboard technologies. These devices all aim to allow access to and use of digital resources for the benefit of the whole class while preserving the role of the teacher in guiding and monitoring learning.
As implied above, teachers are an essential component in the success of WCT. Debs Ayerst, the education officer at the British Educational Communications and Technology Agency, states that in relation to WCT, “The role of the teacher or assistant is seen as paramount in order to demonstrate, explain and question, stimulate discussion, invite predictions and interpretations of what is displayed, and to ask individual children to give an instruction or a response.”
Students raise their hands during math class. Advocates of whole-class teaching say that students benefit from an active learning approach, as opposed to the passive learning encouraged by traditional math instruction methods. (Photograph by Bob Rowan. Progressive Image/CORBIS. Reproduced by permission.)
Computers are often either underused or incorrectly used—or both—in the classroom. But with today’s technology (and that technology will only get better over time) each student can be equipped with a computer and follow the class lesson while the teacher demonstrates the lessons with results projected on a large overhead projection screen. Again, WCT will advance as teachers are better trained with computers and gain a more informed understanding of technology as applied to the classroom. The Benefits of Technology The speed and automatic functions of the technologies discussed above enable teachers to more easily demonstrate, explore, and explain aspects of their instruction in several ways. First, the capacity and range of technology can enable teachers and pupils to gain access to immediate, recent, or historical information by, for example, accessing information on CD-ROMs or the Internet. Second, the nature of information stored, processed, and presented using such technology allows work to be changed easily, such as using a word processor to edit writing. Third, pupils learn more effectively when they are using, for example, a spreadsheet to perform calculations rather than performing them by hand. In this way they can concentrate on patterns that help to enhance the learning process. In summary, such technologies are geared toward WCT because all students can easily follow along and actively participate.
Examples of Successful WCT Programs One good example of a WCT mathematics resource is Easiteach®, an annual subscription service developed in Great Britain that is designed to aid teachers in delivering lessons and provides an online collection of ready-made downloadable teaching activities. Easiteach Maths® combines a teaching tool full of familiar math resources—such as number lines, number grids, and place-value cards—together with a collection of ready-made online teaching activities to help teachers deliver math lessons. The advantages of Easiteach Maths® are that it: (1) saves the teacher’s planning time by providing familiar resources plus ready-made WCT activities to use with them; (2) gives access to “best-practice” methods through expertly written, flexible activities that integrate with existing classroom resources; (3) reduces classroom management issues by providing a wide range of familiar resources that are managed centrally; and (4) enhances learning through its WCT-based interactive and dynamic nature.
WCT suits the study of mathematics because there are so many real-life examples that can be used to teach and understand math. As an example, a former middle school teacher near Detroit, Michigan, designed a practical supplement for the middle school curriculum that follows the thirteen weeks of the National Association of Stock Car Auto Racing (NASCAR) Winston Cup season. Students use newspaper results of the automobile races to focus on topics such as rounding, estimating, integers, proportions, graphing, percents, probability, and statistics. In this setting the whole class is introduced to the NASCAR curriculum; assignments are given that can be done individually, in small groups, and with the whole class; and discussion is used to analyze facets previously learned. This whole-class setting is much more conducive to learning than a small-group setting, where results are not commonly compared and discussed, and different viewpoints not often shared. Another example of a successful WCT program is the one created by Creative Teaching Associates, a company started in 1971 by three educators, Larry Ecklund, Harold Silvani, and Arthur Wiebe. Ecklund, Silvani, and Wiebe wanted to assist teachers to help students become more efficient learners and to help them enjoy learning, and created a variety of activities and games from
real-life experiences to reinforce math skills. The company developed a “Money and Life” game that represents a real-life bank account to teach students from grades 5 to 12 about the variety of processes involved in having a checking account. The game board allows students to write checks, make deposits, and keep records of such transactions as buying groceries, paying taxes, making car payments, and paying medical bills. Conclusion What does constitute success in mathematics? In order to count the learning experience as successful, students need to enjoy math, and this in turn means mathematics should become less intimidating, more relevant to students, and involve more active learning. The position taken in 1989 by the NCTM Standards and the National Research Council’s Everybody Counts report lent support to the idea that math needs to be thought about in new ways; more specifically, in the way it is taught and the way it is evaluated.
In the traditional classroom setting, mathematics is taught separately from other courses. It is force-fed to students as a set of necessary skills, and this-force feeding, for the most part, turns off the majority of students. Later in life, when those same people need math skills, they are typically too intimidated and unmotivated to acquire that knowledge. The whole-class technique incorporates mathematics within all coursework in order to teach basic skills that are usable in everyday life. The main goal of WCT as it relates to the teaching of math is to make students aware of the nature of mathematics and the role math plays in contemporary society. To do this, mathematics must be taught in much the same way as history, geography, or English literature, as part of learning about society. This is best accomplished in the whole-class environment, in which mathematics is taught as a part of the culture in order to provide a motivating situation for more students. With this aim in mind, a mathematics education will be more likely to produce an educated citizen who can use mathematics in everyday life, and not someone who shudders at the term “math.” Finally, in a WCT atmosphere, teachers are better able to identify those students that are not keeping up with the rest of the class, as well as those who are gifted students. —PHILIP KOTH
Viewpoint: No, whole-class teaching does not improve mathematical instruction; it is inefficient and ineffective
except in situations where class size is very small and student ability falls in a narrow range. The reform of mathematics education seems to be a high priority in the United States and the rest of the world. The National Council of Teachers of Mathematics (NCTM) established a Commission on Standards for School Mathematics in 1986 for just this reason. The commission’s conclusions led to the 1989 publication of Curriculum and Evaluation Standards for School Mathematics (known simply as Standards), which includes numerous standards for grades kindergarten through 4, 5 through 8, and 9 through 12, as well as procedures for evaluating the mathematical proficiency of students within these three groups based upon the standards. The NCTM standards established five goals for improving mathematical literacy. These goals are geared so that students: (1) become mathematical problem solvers; (2) become confident in their ability to perform mathematics; (3) learn to value mathematics; (4) learn to communicate mathematically; and (5) learn to reason mathematically. When properly used in the classroom, whole-class teaching (WCT) can be a useful “component” of instruction in the achievement of these five important goals. However, whole-class teaching does not, in and of itself, improve mathematical instruction.
The Difficulties By and large, whole-class instruction does not significantly improve the performance of mathematics teaching when used alone. It is often very difficult to use whole-class approaches, for instance, when there are multiple languages spoken in the classroom, when students with little or no mathematical background (with respect to fellow students) enter into a new classroom, or when a wide diversity of students (with respect to math abilities) is taught in the classroom. It is also increasingly difficult to teach mathematics effectively in a whole-class format when the class size is very large. Sonia Hernandez, deputy superintendent for the Curriculum and Instructional Leadership Branch of the California State Department of Education, noted that in some countries where math students are among the highest achievers in the traditional classroom environment, the diversity of the student population is very small. Hernandez asserts that in some major urban school districts “the instructional strategies have to be much different from what can be put into place when one has a pretty uniform student population.” Hernandez continues, “By and large, we do have whole-class instruction to the extent that it is reasonable, but it is very difficult to use only whole-class approaches when there are multiple languages in the classroom, or when students with no educational background whatsoever are coming into California.” Contentions such as these show why WCT is inadequate for the general teaching of students, and specifically for the teaching of mathematics.
Implementing WCT often results in unequal opportunities for learning. Most classrooms do not contain students with a narrow range of math abilities, but usually those with skills ranging from low to high. In addition, unmotivated students can easily disrupt the time the entire class needs to learn math. Highly motivated students are easily bored with a slow process, or with the constant repetition of concepts that they have already learned. Disciplinary problems, late-comers, and the need to repeat instructions are several ways in which time can be wasted with WCT. In contrast to whole-class methods, multiple sizes for groups, along with learning in pairs and as individuals, all help to reduce unequal opportunities that exist to a greater or lesser extent in most schools.
Perhaps the most recurring criticism of mathematical whole-class instruction is that it is extremely difficult to meet the needs of both high and low achievers. Students have diverse learning styles that make WCT very inefficient to use, or at least to use solely, in the classroom setting. It is essential to have a mix of classroom techniques, from group time to individual time, so that students can demonstrate different aptitudes in different settings. It is generally accepted that all students of all abilities welcome nontraditional math activities, but each student has their own likes and dislikes among these activities. However, when only one avenue is available (as with WCT), students who dislike this way of learning will suffer academically.
A first grade student working on a math puzzle. (Photograph by Andrew Levine. Photo Researchers, Inc. Reproduced by permission.)
What Does Work? Students usually enjoy a change from the traditional class work involving lecturing, memorization, and tests, and especially enjoy the variety inherent in different instructional techniques used in the classroom throughout the day, week, and semester. In order to most effectively teach mathematics to students, the teacher must successfully guide them through diverse avenues involving individual, small group, and whole-class activities. The introduction of a new math concept with a new set of vocabulary terms is easy within the wholeclass setting. Such a discussion, for example, can build listening and response skills, and the teacher can talk with the entire class about this new topic. However, the effectiveness of WCT normally ends there. It is much more beneficial to then break the class down into smaller groups, pairs, or (occasionally) individuals in order to achieve better comprehension of mathematical concepts. The same challenging math curriculum is followed throughout each group, but the depth of work can vary, and the topics explored in different ways, depending on the needs and skills of each group. The teacher is critical to this process, as students remain in certain groups or are moved depending on changing needs and/or requirements. The fact remains that one teaching style (WCT, for instance) is typically not effective if used all the time.
A whole-class environment is often useful while working with math that applies to everyday life. However, assignments are more effectively carried out when they are assigned to smaller groups that perform the actual work, then report their results back to the main group with a summary and related math questions. Typical topics might be: “If five gallons of water flows from the faucet into the bathtub in one minute, how much water would be in the bathtub after 12 minutes?” and “The Mississippi River delivers x gallons of water each hour to the Gulf of Mexico. How much water is this in one day? . . . one week? . . . etc.?” This type of math encourages interesting discussions, but, for the most part, is more effective when students first interact in small groups, then relate their experiences back to the main group. Small-group work allows students to talk about the math tasks at hand while they solve nonroutine problems. Smaller groups, such as pairs and individuals, are practical for computer lab work. Moreover, individual work settings ensure that all students process lessons at their own rate of learning.
Traditional Teaching Dr. Keith Devlin, dean of science at Saint Mary’s College of California, and a senior researcher at Stanford University’s Center for the Study of Language and Information, stated in an article of the Mathematical Association of America that “any university mathematics instructor will tell you that the present high school mathematics curriculum does not prepare students well for university level mathematics.” In response to such criticisms, the NCTM Standards call for the implementation of a curriculum in which, through the varied use of materials and instruction, students are able to see the interrelatedness of math. The emphasis here is on a mix of traditional (with lecturing, memorization, and tests and homework designed to reinforce the lecture), and whole-class instructional approaches. A pronounced shift away from traditional teaching methods to the newer whole-class approach can run into several inherent difficulties of WCT. For instance, in WCT, students who are not interested in math quite often hamper those students that are interested. When there is a mix of hard-working students and low-achievers in a whole-class environment, the quality learning time of the entire class suffers. Students too often view the whole-class environment as a time for social interaction rather than for academic effort. Effective Math Teaching Methods Handson and interactive materials, computer lab assignments for individuals and pairs, and participative whole-class discussion and problemsolving are all ways to contribute to an effective math class. A flexible classroom setting is necessary for these activities in order to maximize the learning for each and every student. Small groups (as opposed to whole-class) that are composed of a wide mix of abilities are especially appropriate when creative brainstorming is sought. Regardless of ability, students generally enjoy this level of grouping when working on a math problem. At other times, it is more suitable to use small groups that are more or less homogeneous. Even in smaller, homogeneous groups, students who show the most aptitude do not necessarily show the highest aptitude in all assignments and areas of math. In fact, it is often obvious that students who may be perceived to be of low ability are very proficient in a particular skill, and such students are able to effectively interact in groups when the need to use that special skill arises.
Even though nontraditional teaching methodologies, including whole-class learning, are important new tools in the teaching of mathematics, older, traditional techniques still have an important place in the classroom. Homework, quizzes, and examinations, although often very unpopular, often act as
motivators in a highly diverse classroom. Homework can be given in several levels of difficulty because of the diversity of group size. Gifted students normally work the more difficult assignments, while others choose a level they feel more comfortable with, given their expertise in a particular topic. All students, regardless of their level of expertise, benefit when the class discusses problems from each assignment in detail. Average students gain confidence more easily by working in various groups, and can strive toward their capabilities when they perceive advanced work as more attainable. They became motivated by the challenge of more difficult assignments, while always learning from the opportunity of resubmitting corrected assignments. Because all students take the same quizzes and tests—but from a diverse (i.e., as regarding group size) classroom environment— the entire process becomes less intimidating. So-called “at-risk” (those perceived to have low potential) students certainly benefit from having good role models. Discipline problems are greatly reduced when these students are placed in a class where the majority of students want to learn. Slower students gain confidence because they can easily receive individual and smaller group help from the teacher or fellow students. Students who had previously been unable to take advanced, enhanced, or highability math classes, now have the opportunity to develop their true potential. No longer relegated to a mediocre curriculum with no diversity, these students have a much better chance to become both assisted and challenged by the higher achievers. Diverse grouping also offers benefits for advanced students. They now have the opportunity to examine certain math topics in depth and to explore some unusual areas of the subject, and they often benefit from helping others. Some may be under a great deal of parental pressure to continue as high achievers, but this pressure is often reduced when average students are intermingled within the classroom setting. Within such intermingled classes, advanced students become less threatened than they would be by learning in a class with an overwhelming majority of advanced students.
The Future Many mathematics teachers appreciate the advantages of flexible teaching, which gives longer time for classes, additional laboratory time in math, more time for presenting projects and extracurricular activities, and more opportunity to involve students in in-depth and active-learning scenarios. These teachers believe that it is essential to have plenty of quality time to learn math, and that this quality time will encourage the students’ commitment to learning. As opposed to the strictly whole-class approach, where the entire class “brainstorms” and discusses mathematical concepts, flexible grouping arrangements normally produce fewer discipline problems, and provide more time for learning. By the end of the semester, teachers may note advantages associated with their students remaining with the same group for all academic subjects. Students often request that groups remain intact, and their self-confidence when giving oral reports and answering questions has often increased because of the camaraderie within the groups.
Organizing into diverse groups gives all students in math classes the opportunity to be exposed to a challenging curriculum. A combination of traditional and nontraditional arrangements of students—individual, pair, small group, and whole-class teaching environments—provides the best possible avenue to teach students to become well versed in mathematics, and to feel comfortable with their ability to perform necessary math skills. —WILLIAM ARTHUR ATKINS
Further Reading
Carnegie Council on Adolescent Development. Turning Points: Preparing American Youth for the 21st Century. New York: Carnegie Corporation, 1989.
Davidson, Neil, ed. Cooperative Learning in Mathematics: A Handbook for Teachers. Menlo Park, CA: Addison-Wesley Publishing, 1990.
Department for Education and Skills, London, England. “The Final Report of the Numeracy Task Force.”
Glasser, W. The Quality School: Managing Students Without Coercion. New York: Harper and Row, 1990.
Kennedy, Mary M., ed. Teaching Academic Subjects to Diverse Learners. New York: Teachers College Press, 1991.
Kitchens, Anita Narvarte. Defeating Math Anxiety. Chicago: Irwin Career Education Division, 1995.
Langstaff, Nancy. Teaching in an Open Classroom: Informal Checks, Diagnoses, and Learning Strategies for Beginning Reading and Math. Boston: National Association of Independent Schools, 1975.
Lerner, Marcia. Math Smart. New York: Random House, 2001.
Mertzlufft, Bonnie. Learning Links for Math. Palo Alto, CA: Monday Morning Books, 1997.
National Assessment of Educational Progress. The Mathematics Report Card: Are We Measuring Up? Princeton, NJ: Educational Testing Service, 1988.
National Council of Teachers of Mathematics. Curriculum and Evaluation Standards for School Mathematics. Reston, VA: National Council of Teachers of Mathematics, 1989.
National Research Council. Everybody Counts: A Report to the Nation on the Future of Mathematics Education. Washington, DC: National Academy Press, 1989.
Oakes, J. Keeping Track: How Schools Structure Inequality. New Haven, CT: Yale University Press, 1985.
———. Multiplying Inequalities: The Effects of Race, Social Class, and Tracking on Opportunities to Learn Mathematics and Science. Santa Monica, CA: The Rand Corporation, 1990.
Schlechty, P. Schools for the Twenty-First Century: Leadership Imperatives for Educational Reform. San Francisco: Jossey-Bass, 1990.
Valentino, Catherine. “Flexible Grouping.”
Will the loss of privacy due to the digitization of medical records overshadow research advances that take advantage of such records?
Viewpoint: Yes, the loss of privacy due to the digitization of medical records has already had a negative impact on patient care, and necessary safeguards have not been enacted by the health-care industry.
Viewpoint: No, the loss of privacy due to the digitization of medical records is being minimized by new legislation, and the potential advances in patient care made possible by the digitization of medical records are enormous.
For many, the thought of large numbers of people—authorized or otherwise—having access to their personal medical records is an uncomfortable and worrying notion. It is, therefore, not surprising that the digitization of medical records, and their accumulation into enormous databases, is viewed with increasing alarm. There is strong public opposition not just to the improper use of such databases, but to their existence in the first place. However, supporters of digitization argue that online medical records are secure, allow for improved health care and better research, and reduce paperwork and costs. Medical researchers fear that public resistance toward digital medical records could reduce the effectiveness of research, and potentially cost lives.
Any time a patient receives treatment from a health worker, whether it be a physician, dentist, chiropractor, or psychiatrist, medical records are created. These records contain the patient's medical history, such as past diseases, the results of laboratory tests, and prescribed medications, and may also contain information about lifestyle, such as participation in high-risk sporting activities, smoking, drug use, and other personal details. Until recently such records were kept only as paper documents, or as computer records within single institutions. However, with the advent of networking and the Internet, medical records can be collected from many sources into massive digital databases that are instantly available to a growing number of people.
The concept of patient privacy dates back to the fourth century B.C. and the Oath of Hippocrates, which made those who intended to be doctors swear that: "Whatsoever things I see or hear concerning the life of men, in my attendance on the sick or even apart therefrom, which ought not be noised abroad, I will keep silence thereon, counting such things to be as sacred secrets." Today the concept of doctor-patient privilege remains a strong legal restraint and offers psychological reassurance. The purpose of such confidentiality is to allow patients to feel safe and completely open when discussing what may be very personal, embarrassing, or potentially dangerous details, in order to receive the most appropriate treatment. However, there are real concerns that the doctor-patient relationship is under threat from public fears over electronic medical records and the growing number of people who have access to them. Many patients no longer feel comfortable revealing potentially damaging information that they fear will later be used inappropriately, and in holding back they may be hindering their own medical treatment.
In many ways, however, the confidentiality of medical records has been a myth for a number of years. Even before the widespread introduction of electronic databases, the number of organizations that could legally view your medical records was increasing. All the electronic revolution has done is increase the speed at which such information can be sent and viewed. With the introduction in many Western countries of "managed health care," such as that provided by a health maintenance organization (HMO) in the United States, many doctors are required to share a single patient's medical history. HMO patients agree to use a specific network of medical providers in order to receive a broader range of medical coverage, and so may see many doctors in a short space of time. As a result, patient records are handled by many more people, not just doctors, but also administrative and accounts personnel. In the interests of saving money, many doctors are now sending voice dictation of patients' medical notes to other countries to be transcribed, India being the most popular destination. Compressed voice files are sent electronically, and rooms of typists sit with medical dictionaries and reference works, transcribing the personal medical details of British and American patients. Although such work has to be done by someone, the sending of medical details over telecommunication lines, as well as the potential for embarrassing or dangerous mistakes to enter medical records, makes many uneasy.
However, areas other than health care cause the most worry to the public. Health records are often used by potential employers, insurance companies, and drug manufacturing companies. Many people feel their careers have been destroyed by the release of information in their private medical files, and insurance premiums can soar, or be discounted, depending on a person's medical history. This legal access to medical records has civil rights organizations up in arms, as they believe that many of these actions are another form of discrimination. However, the illegal or accidental access or release of private medical information receives the most media attention, and generates the widest public fears. Computer hacking is not well understood by most people, and so is often demonized beyond its actual impact. Even though computer firewalls and safeguards are never invulnerable, there have been few serious breaches into medical databases by hackers. A more legitimate threat is the accidental release of files to the public, and a number of such releases have already occurred. The digital age has also sped up the rate at which mistakes can be made, and increased the volume of files that can be transferred inadvertently with the "press of a button."
The potential benefits of digitized medical records must be weighed against the disadvantages. Proponents argue that electronic databases allow for improved care, are safer than paper documents (as access can be electronically fingerprinted and tracked), save time and money, and offer incalculable future benefits in the form of improved medical research. By enabling quick and accurate cross-referencing, electronic medical databases have already revolutionized research, allowing for large-scale studies of the effectiveness of, and problems associated with, particular treatments and medication. Also, the trend toward digitization seems unstoppable, and any backward steps would be costly and potentially dangerous to patients.
However, opponents of digitized medical records argue that potential problems related to the weakening of doctor-patient confidentiality outweigh the possible benefits. There are also the questions of who owns the medical records, and whether such information should be sold or transferred between companies. The lack of clear legislation, growing public fears, and the media attention on breaches of security mean that the issue is likely to be hotly debated for many years. —DAVID TULLOCH
Viewpoint: Yes, the loss of privacy due to the digitization of medical records has already had a negative impact on patient care, and necessary safeguards have not been enacted by the health-care industry.
When a patient deals with his or her doctor, it is naturally expected that anything discussed will remain between the two of them. That expectation has been around for a long time.
Indeed, the Oath of Hippocrates, which dates back to the fourth century B.C., clearly states that patient information should be treated as "sacred secrets," making the unwarranted disclosure of such information a serious professional and moral infraction. After all, how many people would talk freely with their doctors if they knew that what they said would be shared with other parties? Clearly, such knowledge would inhibit the free flow of information, especially with regard to sensitive medical cases. However, before the electronic age, privacy issues seemed controllable by legislation and the physical environment, and the availability of medical information given to third parties for research purposes seemed more limited and somehow less threatening. Medical records were safely stored in file cabinets or on microfiche and destroyed after a suitable period of time. Medical research was conducted, but it was carried out within the confines of the physical environment.
It's a New World With the advent of computer technology and the digitization of medical records, things radically changed. Privacy can no longer be relied upon as it once was. Thanks to computers and databases, even the most intimate details of one's medical history are only a few keystrokes away, and the number of people with access to them has also increased.
Researchers argue that the availability of medical records is of paramount importance, especially with regard to long-term research projects. This research, they claim, could greatly improve medical treatment and could allow researchers to track certain diseases over time. Drug companies could, for example, use the information for drug research and to gauge the performance of the drugs they manufacture. Proponents of digitization also say that large databases reduce paperwork and organize information. This may be true, but at what cost to patient privacy?
In fact, some research is carried out simply for the sake of knowledge and has no real application. Researchers might argue that the laws governing the protection of human subjects are so complex and bureaucratic that the implementation of medical research for practical purposes (or otherwise) has become needlessly difficult. However, laws governing the protection of human subjects are strict for a reason—to protect patient privacy—and many people are concerned that they are not uniform or strict enough. When it comes to what information can be shared without the patient's consent, not every state has the same set of laws.
The Issue of Consent Asking a subject to take part in a study seems more agreeable than digitizing medical records and using them without the patient's consent. Informed consent seems essential, given that digitization could open up the opportunity for abuse. In a 1993 Lou Harris Poll, 85% of the people polled believed that protecting the confidentiality of medical records is absolutely essential in health-care reform. In the same poll, 64% of the respondents said they do not want medical researchers to use their records for studies, even if they are never identified personally, unless researchers first get their consent. Furthermore, 96% believed that federal legislation should designate all personal medical information as sensitive and impose penalties for unauthorized disclosure. However, this is easier said than done. Alderman and Kennedy, authors of The Right to Privacy, are quick to point out that technology is fast and the law is slow. Plus, the laws vary from state to state and do not apply to all situations. Alderman and Kennedy report that there simply is not a comprehensive body of law established to deal with all the privacy concerns arising in the digital age.
In an ideal world, only researchers would have access to a patient's confidential records, and patients would not only be able to control how the information was used but decide whether it should be used at all. However, in our not-so-ideal world, as soon as information is added to a database, it can be accessed by virtually anyone. Even encrypted databases have been shown to be vulnerable in the event of high employee turnover and employee carelessness. And even when mechanisms are put in place to protect patient confidentiality, the threat of hackers, who despite the possible penalties seem to be challenged by the idea of breaking into an encrypted system, is always with us. Confidentiality has become only a pleasant memory, much like doctors who make house calls. In a world where information is a commodity, many people are eager to acquire as much information as they can get, and not always for altruistic reasons. Personal information is regularly sold to the highest bidder, sometimes without the permission of the people involved. Many consumers, including pharmaceutical companies, health-care providers, government agencies, and insurance companies, are eager to acquire medical information. What they do with that information is not always to the benefit of the subject.
Patients' medical records are increasingly in digitized form. (Photograph by Bill Bachmann. Photo Researchers, Inc. Reproduced by permission.)
KEY TERMS
DATA ENCRYPTION: To convert information into a unique code that can be read only once it is deciphered.
DE-IDENTIFIED PATIENT INFORMATION: Removal of all personal information that could trace medical information back to a specific individual.
INFOBROKERS: People who illegally obtain and sell information on individuals that is stored in government data banks.
INFORMED CONSENT: Agreement of a person (or legally authorized representative) to serve as a research subject, in full knowledge of all anticipated risks and benefits of the experiment.
NORM: Authoritative standard, a model; a pattern or trait taken to be typical in the behavior of a social group.
PROTECTED HEALTH INFORMATION: All medical records and other individually identifiable health information.
PUBLIC HEALTH: Any medical issue that affects the health of many people.
Informed consent has been a hot topic for years, and the insistence on it from the general public is growing. The results of a recent Gallup survey on medical privacy sponsored by the Institute for Health Freedom showed that individuals overwhelmingly believe that their medical information should not be given out without their permission and that consent is a right they are entitled to. Statistically, 92% oppose nonconsensual access by the government and 67% by researchers. The highest opposition was noted with regard to the hotly debated topic of genetic testing, with 9 out of 10 adults insisting that medical and governmental researchers should first obtain permission before studying their genetic information. It seems abundantly clear that a loss of privacy is not a price the public is willing to pay in exchange for the benefits of medical research, regardless of how important the research may be.
Long-Term Ramifications To complicate matters, not all patients who give their consent truly understand the long-term implications of allowing their medical records to be utilized for research purposes. After all, your medical information can include everything from physical health to sexual behavior to feelings expressed
during psychotherapy. Many worry that information from medical records could affect their admission to college, credit, employment, and insurance. For example, what if the company doing the research is a hospital, but the department interested in the data (and the one using it) is really a managed-care department interested in limiting long-term insurance liability? What effect could that information have on the subject's ability to obtain insurance, or even employment, in certain circumstances? Some worry that the negative effects could be far too great and that the risks simply do not outweigh the possible benefits that could be gained, benefits in some cases that have more to do with corporate financial gain than health-related research. Take, for example, the well-publicized case in 1995 involving Merck & Co., a pharmaceutical company, and Medco, a division of Merck, which concerned 17 states. Medical information regarding patient prescriptions was obtained, and Medco-employed pharmacists, without identifying the corporate connection between the two firms, called physicians and encouraged them to change prescriptions, with Merck often benefiting from the switch. The settlement involved a variety of measures including restitution, and Medco's agreement to advise consumers of their rights and to what extent confidential information would be used. To assume that information is safely stored, regardless of who originally requested it, is to make a very unwise assumption. In fact, privacy advocates worry that some patients will avoid seeking medical attention or filling necessary prescriptions rather than risk their personal privacy. Because long-term research studies track patients over time, patients with illnesses that carry a stigma may never feel free of their disease. In the case of a curable, sexually transmitted disease, for example, the patient may want to consider the disease a thing of the past. That will be nearly impossible to do if his or her medical records are permanently stored in some database. Also, doesn't this open the door for discrimination if the information were to fall into the wrong hands? What if information brokers were to sell personal information to employers or rental agents, for example? It could certainly affect employment or housing opportunities.
Control of the Information Not all information that is computerized is disclosed on purpose. Take, for example, the unfortunate situation that was reported by Robert O'Harrow Jr. of the Washington Post involving the pharmaceutical company Eli Lilly & Co. Eli Lilly, the makers of Prozac, an antidepressant prescribed to treat depression, bulimia, and obsessive-compulsive disorder, had begun a daily e-mail program that reminded patients to take their medicine. These
people had signed up for the service on the company website, but they had not consented to what ultimately happened. When Eli Lilly discontinued the program, it sent out messages to all the people on the list. However, rather than get a private e-mail from the company, each recipient received the names and e-mail addresses of every other recipient. Everyone who received the message was then in possession of the addresses of hundreds of Prozac users. Although Eli Lilly was highly apologetic, the damage was already done. Who knows what future repercussions the dissemination of that information will pose to those patients? Once the information was disclosed, it could hardly be taken back. After information has been digitized, mistakes are sometimes made, and the cost to personal privacy may overshadow the benefit of the research or program.
Proponents for using computerized medical records in research argue that it is easy to remove or code personal identifiers, such as a patient's name or social security number, so that researchers cannot see them. However, this "de-identification" process is not a true protection against human error or abuse. Mistakes happen every day. What if someone fails to delete sensitive information unintentionally? What if someone thinks the information is blinded from the recipient, but later finds out the data appeared in more than one place and was not deleted from every computer field? What if the de-identification methods used were faulty, or if the employee hired to maintain the database did not care about doing a good job? Data entry clerks are paid per entry. Are they taking their time and worrying about accuracy, or are they hurrying in order to meet quotas? Not all workers take pride in doing a good job, and once the information is disclosed, the damage is done and an apology is of little use. It is also overly optimistic to assume that everyone in charge of storing data is sensitive to privacy issues. Greed is a powerful thing, and if cutting corners saves or earns money, there will always be someone willing to sacrifice privacy for monetary gain. Again, the rules and penalties regarding informed consent vary depending on the state and situation. Therefore, the warm and fuzzy blanket of protection some argue is provided by informed consent is moth-eaten at best.
Paper medical records awaiting transfer into digital form. (Photograph by Owen Franken/CORBIS. Reproduced by permission.)
Indeed, this main theme is often echoed throughout the privacy versus health research debate. People fear that employers might have access to private medical information that could be used to deny them job advancement or future employment. They also fear that the failure to safeguard their medical information could result in their being identified as a "high insurance risk," which could affect their ability to obtain insurance. Assurances from researchers who point to internal review boards (IRBs) as a mechanism to maintain privacy and integrity are not comforting. This was clearly reiterated by Bernice Steinhardt and her team in a 1999 United States General Accounting Office report, which stated that while reviews made by IRBs may help protect the privacy of subjects and maintain confidentiality of data used for research, privacy advocates and others argue that any use or disclosure of an individual's medical information should require the individual's informed consent. Furthermore, it is unclear how effective IRBs actually are in protecting privacy, with or without consent. Steinhardt's report disclosed that several examples of breaches of confidentiality had been reported to the Office for the Protection from Research Risks at the National Institutes of Health.
Conclusion Such harmful invasions of personal privacy could, in fact, overshadow any benefits gained by using digitized medical information for research. Until appropriate and secure safeguards are put in place to assure patient confidentiality, there will always be an unacceptable risk to personal privacy. With the new kinds of research now being conducted, including genetic research, and the sophisticated computerized studies that utilize various medical databases, it seems unlikely that a uniform solution to the problem of privacy assurance is anywhere in sight. Given the mishaps that have already taken place, medical privacy should not be taken for granted; it should be protected in spite of the possible benefits to research that the digitization of medical records might provide. —LEE ANN PARADISE
Viewpoint: No, the loss of privacy due to the digitization of medical records is being minimized by new legislation, and the potential advances in patient care made possible by the digitization of medical records are enormous.
Barbara Kirchheimer, news editor at Modern Healthcare, a weekly newsmagazine for health-care executives, says in "Public Fears Enter Into Equation" that the benefits gleaned by the health-care community through access to electronic patient records far surpass any additional steps that community must take to protect patient privacy. Kirchheimer also quotes Stephen Savas, health-care analyst at the New York investment firm Goldman, Sachs & Company, as saying: "The reality is that [the patient record] is probably more safe in digital form than it is in paper form . . . because it's not actually hard to track anyone who accesses the information." The trade-off for digitized records, says Savas, is improved health care.
“In the future, electronic records will be the norm,” writes Norman M. Bradburn, a professor emeritus at the Harris Graduate School of Public Policy Studies at the University of Chicago, in the introduction to his paper “Medical Privacy and Research.” Technological changes in data storage mean that researchers and the health-care community as a whole must address the issue of patient privacy and legitimate access to medical records. In the first section of Privacy and Health Research, William W. Lowrance, an international consultant in health policy, says: “The policy and technical challenges are to devise improved ways for preserving individuals’ informational privacy, while at the same time preserving justified research access to personal data in order to gain health benefits for society.” The Changing Concept of Confidentiality Bradburn notes that the high value individuals and society place on being healthy are “very strong motives and have led to an almost insatiable demand for health care services.” He believes an important component in the privacy issue surrounding those services is the concept of the “norm of confidentiality.” In this norm, he says, a person seeking medical treatment gives up the right to privacy—gives up the right to withhold information from their physician. If the patient withholds information, the physician is unable to treat effectively.
The next "norm of confidentiality" is the situation outside the closed doors of the physician's office. Although this norm remains in full force, it has been strained by changes in the health-care industry precipitated by managed care. Managed-care companies require medical histories before authorizing payment of medical fees. Therefore, physicians must disclose patient information to insurers if patients expect insurance companies to pay for their medical treatment. According to Bradburn, the confidentiality issue must now extend to those records and the people accessing them, as they are now part of the doctor/patient relationship. Therefore, training such personnel in confidentiality requirements and imposing sanctions when confidentiality is violated are imperative. However, Bradburn stresses the importance of setting confidentiality limits that restrict the use of records in a manner harmful to the individual, while allowing access that will protect and improve public health.
Privacy Issues The trend from paper to digital records has heightened anxiety about privacy. Patients wonder: Will confidential information escape from the computer in my doctor's office? If so, where and to whom? And how will it be used or, more drastically, will it be misused or abused? The public is generally aware of horror
stories about the misuse and abuse of confidential information. The author of "Health Records and Computer Security" quotes Dr. Ted Cooper, national director of confidentiality and privacy at Kaiser Permanente (one of the largest HMOs in California), as saying: "There are a lot of problems in the way medical information is handled. But currently the breaches that happen have very little to do with computers and more to do with the people that handle them." The article notes that, although the issue of digitized records and privacy is being discussed, debated, and addressed by hordes of individuals, institutions, and government agencies, paper records are a greater concern than digitized information. "Any person in a white coat in a hospital environment can walk up to a medical chart and have a look," says Jeff Blair, vice president of the Medical Records Institute.
Is the Issue One of Privacy, or Paranoia? Barbara Kirchheimer points out that a report by the Robert E. Nolan Company questions whether there is truly a "crisis" where privacy issues are concerned, and that, while concerns are legitimate, breaches of privacy are rare. After researching the issue, the Blue Cross and Blue Shield Association found fewer than two complaints per 100,000 Americans that pertained to violations of privacy.
Paper medical records require a vast amount of storage space. Shown here are medical records in the basement of Methodist Hospital in St. Louis Park, Minnesota. (Photograph by Owen Franken. CORBIS. Reproduced by permission.)
Robert Gellman, a member of the advisory committee that reviews regulations in the Health Insurance Portability and Accountability Act (HIPAA), first proposed in 1996, disputes the very premise of personal information privacy, pointing out that the word "confidential" has virtually no meaning for any form of personal records. This includes an individual's banking records or student records, which are routinely disclosed to any number of entities with or without the consent—or even the knowledge—of the individual involved.
Medical records are viewed by dozens of people, including personnel handling third-party insurance claims, hospital personnel, public health authorities, medical researchers, government agencies, cost-containment managers, licensing and accreditation organizations, coroners, and many other entities. "Medical records aren't confidential; they haven't been confidential for decades," says Gellman. He states that, of all the different types of records maintained on individuals, medical records are probably the most widely accessed. "Computer records, network records are not inherently evil. It depends on what you do with them," he says.
Protecting Personal Information Gellman, also a privacy and information policy consultant based in Washington, D.C., and former chief counsel to the House of Representatives subcommittee on Information, Justice, Transportation, and Agriculture, believes that no progress will be made if the objective is solely to protect the "sanctity" of records. "We have too many decisions that require the sharing of records to act like we can preserve the illusion of confidentiality. We need to be more honest with the public. We need to lower expectations." Gellman also says that technology is a two-edged sword, making it easier to exploit records for profitable purposes such as creating marketing lists but, at the same time, making it more difficult to access confidential information without authorization. De-identification of records, for example, is easily facilitated by computers. Rules have long been in place that govern the need for physicians and institutions to obtain "informed consent" from patients participating in research that does require personally identifiable information. And electronic records can be "sliced and diced," as Gellman puts it, to remove identifying information such as name, address, birth date, and Social Security number, so that end-users such as researchers receive information that contains only the medical data needed for that research.
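To make the "slicing and dicing" Gellman describes concrete, the following is a minimal, hypothetical sketch of record de-identification. The field names and the list of identifiers are illustrative assumptions rather than any real registry's schema, and real de-identification (for example, under the HIPAA Privacy Rule, which enumerates many more identifiers) is considerably more involved.

```python
# Illustrative sketch only: strips a handful of direct identifiers from a
# patient record so that downstream researchers see clinical data only.
# Field names are hypothetical; real de-identification covers far more fields.

DIRECT_IDENTIFIERS = {"name", "address", "birth_date", "social_security_number"}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {field: value for field, value in record.items()
            if field not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",                      # hypothetical example data
    "address": "123 Main St.",
    "birth_date": "1950-04-01",
    "social_security_number": "000-00-0000",
    "diagnosis": "type 2 diabetes",
    "prescription": "metformin",
}

research_view = de_identify(patient)
print(research_view)   # {'diagnosis': 'type 2 diabetes', 'prescription': 'metformin'}
```

In this sketch, only the de-identified view would ever be handed to researchers; the full record stays with the covered entity.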
Privacy concerns have created a burgeoning industry focused on technological ways to protect information. While these are essential, a 1997 report by the Computer Science and Telecommunications Board concludes that protection is best accomplished by a combination of technical and organizational measures. Technical measures can protect information from outside "hackers" and even trusted "inside" employees. Firewalls, audit trails that record and analyze all access to digitized records, access controls, data encryption, passwords, and the like, are all readily available. They limit unauthorized access, yet allow access by authorized personnel to large data sources for essential purposes such as treatment procedures and medical research. Organizational security methods include adequately training personnel in the importance of confidentiality, implementing policies and procedures dictating who has access to information and how to restrict that access, and imposing heavy penalties for breaching confidentiality. The HIPAA has determined that inappropriate breaches of privacy are punishable by 10 years in prison and up to $250,000 in fines—a significant deterrent. In a letter to the U.S. Department of Health and Human Services (HHS) written on behalf of 93,100 members of the American Academy of Family Physicians, board chair Bruce Bagley, M.D., writes: "Only in a setting of trust can a patient share the private feelings and personal history that enable a physician to comprehend fully, diagnose logically, and treat properly. . . . The Academy supports the development and use of electronic medical records because, under proper procedural constraints, these offer an opportunity for greater protection of the patient confidentiality than paper records."
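The access controls and audit trails mentioned above can be pictured with a small sketch. This is a hypothetical illustration rather than any vendor's actual system: every attempt to read a record is checked against an authorization list and appended to a log that administrators can later review, so both granted and refused requests leave a trace.

```python
# Hypothetical sketch of an access-controlled, audited record store.
# Role names, log format, and in-memory storage are illustrative assumptions.
import datetime

AUTHORIZED_ROLES = {"physician", "nurse", "researcher"}
access_log = []   # in practice this would live in tamper-resistant storage

def read_record(records: dict, record_id: str, user: str, role: str):
    allowed = role in AUTHORIZED_ROLES
    access_log.append({
        "time": datetime.datetime.utcnow().isoformat(),
        "user": user,
        "role": role,
        "record": record_id,
        "granted": allowed,
    })
    if not allowed:
        raise PermissionError(f"{user} ({role}) may not view record {record_id}")
    return records[record_id]

records = {"MRN-001": {"diagnosis": "hypertension"}}        # hypothetical data
read_record(records, "MRN-001", "dr_smith", "physician")    # logged and allowed
try:
    read_record(records, "MRN-001", "marketer", "sales")    # logged and refused
except PermissionError:
    pass
print(len(access_log))   # 2: both the granted and the refused attempt were recorded
```

The point of the sketch is simply that, unlike a paper chart, a digital record can refuse an unauthorized reader and remember who asked.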
Research and Records Medical research, and therefore individual and public health, benefits immensely from digitized patient information. In just one example, one physician will see only a small number of patients with a particular disease or disorder; however, cumulatively, hundreds of physicians will see thousands of patients with that particular disease or disorder. Accessing those thousands of records, and abstracting the medical information from them, is highly labor intensive and expensive when they are in paper form, requires extensive training and supervision of the abstractors, and is prone to error. When that information is accessible from a computerized central data bank such as a clinical data registry, fewer errors are likely; coordinated studies by researchers make large clinical trials, retrospective studies, and evaluations of different types of treatment possible; and costs are reduced.
In his article "Rx for Medical Privacy," Noah Robischon gives an example of how researchers' access to a database of individual health records might speed the discovery of drug-related disorders such as that caused by DES, a hormone originally prescribed to pregnant women to help prevent miscarriage. "It took researchers 20 years," writes Robischon, "to track down afflicted patients and learn that DES caused a rare form of vaginal cancer in the daughters of women who took the drug. If everyone's record were in a database, the discovery would have come much more quickly." Even though all sectors of the health-care industry agree on the concept of patient information privacy, most sectors, as well as the HHS, agree protection must not interfere with a patient's access to, or quality of, health-care delivery. "How Research Using Medical Records Helps Improve Your Medical Care," an article published on-line by the Mayo Clinic, states, "Many people do not realize that one of the most important aids to medical progress has been the medical record. . . . Without research that used medical records, medical care could not have advanced as far as it has."
The U.S. Government Enacts Regulations Both state and federal governments have, over the decades, enacted laws to address the privacy issue. The HIPAA introduced significant reforms to the health insurance industry, including "Standards of Privacy of Individually Identifiable Health Information." This rule, generally known as the Privacy Rule, was enacted in 2001. It is the first comprehensive effort to govern the privacy of health information, and dictates under what circumstances protected health information may be disclosed by covered entities (entities governed by the rule) for the purpose of research. Covered entities include health insurance companies, health-care clearinghouses, and health-care providers who conduct electronic transactions for financial and administrative purposes.
A major concern for researchers and public health entities has been the potential for national privacy policies to hinder research. The HIPAA, however, provides procedures through which researchers and other affected parties can apply for adjustments to the rule. In an overview explaining the rule and its implications, the HHS Office for Civil Rights (OCR) says: "We can and will issue proposed modifications to correct any unintended negative effects of the Privacy Rule on health care quality or access to such care." The overview also expresses the belief that, instead of hindering medical research, the rule will enhance it by helping patients and health plan members feel more comfortable participating in research knowing their information has much greater protection than before.
David Kirby, head of information security for Duke University Medical Center, agrees. In an article in The Scientist by Katherine Uraneck, Kirby is quoted as saying: “People won’t participate [in research] unless they are convinced that you will do well with this issue of privacy . . . it’s something that we need to do well.” In “Privacy in Medical Research: How the Informed Consent Requirement Will Change Under HIPAA,” Jackie Huchenski, a partner with the law firm Moses & Singer LLP, and Linda Abdel-Malek, an associate in their Health Care Group, conclude by saying: “The amount of research being conducted should not decrease as a result of the Privacy Rule, however, since the Privacy Rule adds administrative obligations that are somewhat minimal for federally funded research and broader for nonfederally funded research.”
As Norman M. Bradburn says, individuals and society place a high value on being healthy. Health-care services require records, and records are quickly becoming digitized. In "Hot Topics—Health and Privacy," Linda Tilman quotes Peter Squire, chief privacy counselor of the U.S. Office of Management and Budget, as saying: "Records save lives. We want [consolidated medical information] but with better privacy." And Squire is optimistic this can occur.
—MARIE L. THOMPSON
Further Reading
Alderman, Ellen, and Caroline Kennedy. The Right to Privacy. New York: Alfred A. Knopf, 1995.
Beauchamp, Tom L., and James F. Childress. Principles of Biomedical Ethics. 3rd ed. New York: Oxford University Press, 1989.
Gellman, Robert. "The Myth of Patient Confidentiality."
Health and Human Services. Office for Civil Rights. "Standards for Privacy of Individually Identifiable Health Information."
"Health Records and Computer Security."
"How Research Using Medical Records Helps Improve Your Medical Care."
Huchenski, Jackie, and Linda Abdel-Malek. "Privacy in Medical Research: How the Informed Consent Requirement Will Change Under HIPAA."
Institute for Health Freedom. "Gallup Survey Finds Americans' Concern About Medical Privacy Runs Deep."
Kirchheimer, Barbara. "Public Fears Enter Into Equation." Modern Healthcare.
Levine, Robert J. Ethics and Regulation of Clinical Research. 2nd ed. Baltimore, MD: Urban and Schwarzenberg, 1986.
Lowrance, William W. Privacy and Health Research: A Report to the United States Secretary of Health and Human Services. Washington, DC: United States Department of Health and Human Services, 1997.
National Research Council. Committee on Maintaining Privacy and Security in Health Care Applications of the National Information Infrastructure. Computer Science and Telecommunications Board. For the Record: Protecting Electronic Health Information. Washington, DC: National Academy Press, 1997.
O'Harrow, Robert Jr. "Prozac Maker Reveals Patient E-Mail Addresses." Washington Post (July 4, 2001).
Robischon, Noah. "Rx for Medical Privacy."
Steinhardt, Bernice. Medical Records Privacy: Access Needed for Health Research, But Oversight of Privacy Protections Is Limited. Washington, DC: United States General Accounting Office, 1999.
Tilman, Linda. "Hot Topics—Health Privacy." Computers, Freedom, and Privacy Conference 2000.
Uraneck, Katherine. "New Federal Privacy Rules Stump Researchers." The Scientist 15, no. 18 (September 17, 2001): 33.
U.S. Congress, House of Representatives, Committee on Government Operations. Health Security Act Report. H. R. Rept. 103.
U.S. National Institutes of Health. Office for Protection from Research Risks. Protecting Human Research Subjects: Institutional Review Board Guidebook. Bethesda, MD: U.S. National Institutes of Health, 1993.
U.S. National Institutes of Health. Office of Human Subjects Research. Guidelines. Preface. Washington, DC: U.S. National Institutes of Health, 1995.
Westin, Alan F. Computers, Health Records, and Citizens Rights. National Bureau of Standards Monograph. Washington, DC: U.S. Government Printing Office, 1976.
Are digital libraries, as opposed to physical ones holding books and magazines, detrimental to our culture?
Viewpoint: Yes, digital libraries are inherently vulnerable to a wide array of disasters, and they are not adequate repositories for historic manuscripts.
Viewpoint: No, digital libraries are not detrimental to our culture. They increase access and reduce the costs of communication.
Perhaps at the heart of this debate is one's personal definition of culture. Some might define it in tangible terms. They might examine artwork, books, architecture, and styles of dress to better understand aspects of a community. Intangible things might also be considered. Freedom of expression and the various styles of communication people use in their daily lives also help to define culture. Culture is oddly delicate and strong at the same time. Some aspects of culture, such as religion, have been known to endure almost any attack; yet other aspects of culture, such as personal freedom, have been known to be fragile. In some societies, the ability to read a controversial book is made nearly impossible. As keepers of information, libraries store books that might otherwise be difficult to read or obtain. They are sometimes the only resource for out-of-print books, offering invaluable opportunities with regard to education and research. In terms of history, libraries preserve important documents, periodicals, and books for future use and study. Almost any historian will say that in order to understand where we're going, we must understand the past and, in that way, the written word should be seen as a critical element in the definition of culture.
As the world evolves, so does our way of producing and maintaining the written word. The computer age brought about a whole new type of library—the digital library. But what effect will digital libraries have on our culture, if any? What will be the cost? Will digital libraries only serve to dehumanize us? Some say digital libraries are, in fact, more humanizing than physical ones. They point to one's ability to personalize the information shared. However, others find this disturbing. They worry about censorship and the ability to wipe away entire texts with a single keystroke. When a book is published in physical form, it becomes a part of recorded history. It is a tangible example of the culture in which it is produced. It is also a record of originality, which, in a historical sense, can be verified. Critics of digital libraries often point to the problem of determining what edition of a text is being read.
At their core, libraries exist to preserve information. Both sides of the debate dispute the other's ability to achieve this goal. Proponents of digitization find several faults with a "paper-based" collection. The cost of paper (both in production and environmentally) can be quite high, especially considering that many thousands of books are simply destroyed when their sales do not reach a publisher's expectations. There is also a cost in maintaining the amount of space physical books take up, with the additional cost of cataloguing and sorting. These spaces are also vulnerable to fire, water, and age. However, digital libraries share some of these problems as well. Servers and databases are vulnerable to natural catastrophe as well as hackers, electrical outages, and Internet problems. Economically and technologically, digital libraries also have a number of obstacles to overcome. Computers, as well as the hardware and software required to maintain them, are progressing at an incredible rate. A database preserved in one format may be obsolete in only a few years and the information contained within it rendered irretrievable. Also, the cost of updating an entire system every couple of years and transferring the data is exceptionally high.
One aspect of libraries is the exchange of cultural ideas as well as social interaction. Both styles of libraries have positives and negatives in these regards. Although the digital library does not possess the same "face-to-face" interaction that a physical library would have, it does help remove social barriers. Online users feel comfortable interacting in a cyber-environment and are easily able to find others of similar interest. Cultural ideas can be exchanged without the fear of immediate reprisal or prejudice. However, with digital libraries, there are few restraints preventing those maintaining the database from editing or manipulating the data stored within. Access to certain cultural content may be restricted or even refused. The sharing of all ideas is a vital part of a library's existence and function. Culture cannot progress without the exchange of ideas, no matter what some people feel about those ideas.
Perhaps one problem facing the proponents of either style of library is the unspoken belief that "one way is the only way." Many people fear that accepting the views of the other side will mean completely surrendering their way of life. However, there may exist a comfortable medium somewhere between these two philosophies. The existence of one doesn't have to mean the demise of the other. In fact, libraries could profit from a meeting of the minds that utilizes the best elements of both the physical and digital worlds. —LEE A. PARADISE
Viewpoint: Yes, digital libraries are inherently vulnerable to a wide array of disasters, and they are not adequate repositories for historic manuscripts.
MATHEMATICS AND COMPUTER SCIENCE
Librarians as Guardians of Culture Libraries are repositories of information, from the scientific and factual to the literary and fanciful, in all sorts of media. The mission of libraries is to collect, organize, disclose, protect, and preserve this information. The duty of professional librarians and the goal of their discipline—information science—is to determine how to accomplish this mission in the best way possible, for the sake not only of their present and anticipated future clientele, but also of posterity in general.
In 1644 the poet John Milton wrote in Areopagitica that whoever "destroys a good book, kills reason itself." The censorship, destruction, and restriction of books have been a standard tool of repressive or authoritarian powers for as long as people have been literate. The Roman Catholic Church maintained its Index librorum prohibitorum (List of forbidden books) from 1559 until Pope Paul VI declared it void in 1966. The Karlsbad Decrees of 1819 ruined many promising academic and journalistic careers in Germany over the next several decades. Even though Mark Twain's Adventures of Huckleberry Finn (1884) makes strong
antiracist statements, U.S. school boards have sometimes tried to ban the book because it includes the word “nigger.” Because of the power of the printed word to promote free thought and threaten the status quo, the most despotic regimes even discourage literacy. They understand that books contain clear records of science, art, religion, politics, fantasy, exploration, and all human aspirations for better lives and higher culture—in short, all that they wish to control, subvert, or obliterate. But books and their content are destroyed not only deliberately by philistine, barbarous, or dictatorial forces; they are also destroyed unwittingly by well-meaning, civilized people who do not adequately understand the nature of the book as the primary locus of culture. Librarians are among the busiest guardians of culture because they must constantly and vigilantly resist not only the occasional fanatics who actively seek to destroy books, but also the ordinary gentle souls who believe they are preserving the cultural content of books and other information storage media, while in fact their actions contribute to destroying it. A case in point of well-meaning people accidentally destroying the culture of ages is the use of a kind of format migration called preservation digitization. Format migration is the transfer of intellectual or artistic content in its entirety and without significant distortion from one medium to another. Examples include microfilming a book, transferring a Hollywood movie to videotape from 35 mm film, making a compact disc (CD)
of Bing Crosby songs from 78 rpm records, or, in the case of Johannes Gutenberg in the 1450s, typesetting the Bible from a medieval manuscript. Taking a photograph of Leonardo da Vinci's Mona Lisa (1503–1506) is not format migration because the content of the photo is less than and inferior to the content of the original painting. When the migration of an entire content has been completed in all of its depth, the original format is sometimes considered expendable. The movie might lose some nuances on video, but Crosby is likely to sound better on a CD than on a record. The typesetter probably would want to preserve the medieval manuscript even after the printed copy appears, but the microfilmer might be less likely to try to preserve the book. Leonardo's original masterpiece will never become expendable, no matter how many other formats portray that image.
Relative Permanence of Media The standard portable storage unit for personal computers, the floppy disk, was invented in 1971 for IBM by Alan Shugart and David Noble. By 1998 it had undergone a rapid series of improvements, from an 8-in, 33-kilobyte disk to various 8-in, 5.25-in, 3.5-in, and Iomega Zip disks with increasing amounts of storage space. With some exceptions, the disk drives required to read these different kinds of floppies are incompatible with each other, which means that, if the data stored on floppies is to be retained, extensive format migration of software is necessary whenever computer hardware is upgraded.
Providing hardware to read obsolete storage media is not a priority for the high-tech electronics industry. For example, reel-to-reel audiotape players and their replacement parts, which were common and cheap in the 1960s, are scarce and expensive in 2002. Yet, if the information recorded on the original medium is to be preserved for future generations to hear, this hardware must somehow be kept widely available and in good repair. In the 1990s much information was transferred to compact discs with read-only memory (CD-ROMs), but no one knows how long CD-ROMs can last and still be readable. Estimates range from 10 to 200 years.
The main reading room of the New York Public Library. (Photograph by Rafael Macia. Photo Researchers, Inc. Reproduced by permission.)
In general, the expected lifespan of any particular information storage medium decreases proportionately to the recency of its invention. That is, the usual stuff that ancient people wrote on lasts longer than the usual stuff that medieval people wrote on, which lasts longer than the usual stuff that we write on today. Cave paintings on stone exist that are tens of thousands of years old. Clay tablets from ancient Babylonia and papyrus scrolls from ancient Egypt survive, as do vellum and parchment manuscripts from the medieval era. The average paper made in 1650 is more enduring than the average paper made in 1850.
For libraries, the costs of preserving electronic media, providing updated hardware about every 5 or 10 years to read them, and migrating formats to accommodate this new hardware are staggeringly high compared to the cost of indefinitely maintaining books on shelves and microfilms in drawers. Cost estimates for maintaining digitized data abound in professional library journal articles. The consensus seems to be that, for every digitized file, the cost of keeping it usable will increase every 10 years by at least 100% over the original cost of digitizing it. These costs include software migration, new hardware, and labor. The maintenance of digital collections is very labor intensive.
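One way to read the consensus figure just cited is that each decade of upkeep adds at least the full original digitization cost again. The sketch below works that reading through for a single item; the starting cost and the time horizons are illustrative assumptions, and the linear model is only the floor implied by the "at least 100%" estimate.

```python
# Illustrative arithmetic only: projects the cumulative cost of keeping one
# digitized item usable, assuming each decade adds at least 100% of the
# original digitization cost (software migration, new hardware, labor).
def cumulative_cost(original_cost: float, decades: int) -> float:
    """Original digitization cost plus one full original cost per decade."""
    return original_cost * (1 + decades)

original = 100.00   # hypothetical cost to digitize one item, in dollars
for decades in range(0, 4):
    print(f"after {decades * 10:2d} years: at least ${cumulative_cost(original, decades):,.2f}")
# after  0 years: at least $100.00
# after 10 years: at least $200.00
# after 20 years: at least $300.00
# after 30 years: at least $400.00
```

By contrast, the one-time mass-deacidification cost of roughly $17 per volume cited later in this essay does not compound in this way, which is the economic heart of the author's argument.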
193
KEY TERMS The unique internal records of an institution, typically consisting of unpublished material; best preserved in their original order and format. CODEX: The standard form of the book, comprised of same-sized sheets of paper or other thin, flexible material that are gathered at one edge and bound between covers. ARCHIVES:
CONSERVATION/PRESERVATION (CONS./PRES.):
MATHEMATICS AND COMPUTER SCIENCE
The theory and practice of protecting books, manuscripts, documents, artworks, sound recordings, etc., so that they still may be used hundreds of years from now and may serve as physical evidence for future scholars. Preservation is preventing damage to materials; conservation is treating or restoring them after they have been damaged. DIGITAL LIBRARY: A type of electronic library in which the information is stored only on digital media, such as computer disks and CD-ROMs. ELECTRONIC LIBRARY: A repository of information stored on any electronic media, either analog or digital, including audiotapes, videotapes, computer disks, and CD-ROMs. FORMAT MIGRATION: Transferring an entire intellectual or artistic content from one
194
vinegarization. Polyester-based film is considered safe, but it can still shrink away from its emulsion. The movie Star Wars (1977) had to be digitally restored in the 1990s. The result was magnificent, but precarious. Film negatives, when properly cared for, are more permanent than digital photography. Film librarians and preservationists know how to keep film negatives intact for hundreds of years, but no one yet knows how to preserve digital media for that long, or even whether it is possible to do so. The format migration of movies is mind-boggling. From nitrates and acetates to safety films, movies since the late 1970s have appeared in Betamax, video home system (VHS), laser disc, and digital versatile disc (DVD) formats. Through all of these media developments, the codex form of the book has survived nearly unchanged since the fifteenth century. It is durable, versatile, functional, and simple. It requires nothing except the naked eye to read it. When printed on nonacidic paper and kept from SCIENCE
IN
DISPUTE,
VOLUME
2
medium to another, ideally without truncation or distortion. FULL-TEXT DOCUMENT DELIVERY: A service commonly provided by libraries to put the content of periodical articles promptly into the hands of patrons of other libraries via interlibrary loan. Formerly this was accomplished using photocopies and the regular mail, but as academic, professional, and scientific journals move toward publishing only on-line versions, it increasingly is being accomplished digitally. OCLC WORLD CAT: On-line Computer Library Center’s international digital database of card-catalog records. It is the largest union catalog in the world, with approximately 49 million records in February 2002. A union catalog is a group of catalog records covering two or more libraries. VIRTUAL LIBRARY: A type of digital library that has no physical location and provides client access to materials on-line via a remote server. WEEDING: The systematic process by which libraries occasionally select portions of their holdings for deaccessioning and disposal. Digitization is increasingly being considered as an alternative to weeding.
fire, moisture, and other dangers, it easily lasts for centuries.
Digitization for Preservation versus Digitization for Access
Given the nature of the various media, preservation digitization does not work, and thus it is folly for libraries to pursue such a policy. Sometimes when library administrators choose digitization to preserve the content of fragile materials, they fail to realize that the digital medium will soon prove even more fragile than the medium it supersedes. The content, already in danger from its old medium, will be in even more danger from its new medium.
There is more to content than just the words and images on a page or the sounds on a tape. The book itself is physical evidence of the time and culture in which it was current. For example, the same seventeenth-century text may have appeared in simultaneous editions. Printed in a large format (folio or quarto) and bound in tooled calf, it would have been a limited edition
with specialized appeal to a wealthy clientele; in a small format (octavo or duodecimo) with a plain sheepskin binding, it would have been a common, inexpensive trade publication intended for a wide audience. Such primary physical evidence of the history of the transmission of knowledge is lost by digitization, thereby diminishing the worth of a library’s collection, which is determined by its research value. Books printed on acidic paper present a major preservation problem. OCLC World Cat, the world’s largest database of book titles, contains thousands of records for books that no longer exist, but existed in 1969 when this database was founded as the on-line catalog of the Ohio College Library Center. Many of these losses are due to acidic paper. Deacidification of paper preserves the original book, but either electronic format migration or microfilming can destroy that physical evidence. Moreover, deacidification makes economic sense. In the 1990s, counting both materials and labor, an average-length book of 300 pages cost between $85 and $120 to microfilm, photocopy, or digitally scan, while mass deacidification cost only $17 per volume. Techniques, procedures, and policies for indefinitely preserving printed materials are well known and widely practiced, but no one knows how to preserve content that exists only in digital form. Print it out and save the hard copy? Very cumbersome and labor intensive. Back up the digital files to multiple sites? Then what to do when the hardware to read them becomes scarce? Download source code from the Internet to on-site files? Then what to do when the author of the on-line file changes the code? How many versions and which versions of each file should be preserved? Producing a new edition of a printed book or article takes months, but updating a computer file to produce a new edition of a Web page takes seconds. With books, typically the first or the latest edition is definitive; with Web sites or computer files, how can we know?
As digital full-text document delivery services between far-distant libraries become more common; as the publishers of academic, professional, and scientific journals and even popular periodicals move toward publishing only on-line versions; and as digitally scanned hard copy is discarded, another nightmare emerges for preservation librarians.
Weeding, the process by which libraries occasionally dispose of parts of their collections, is a necessary evil from the point of view of library administrators who must save shelf space. However, it is a despicable and unforgivable practice from the point of view of historians, antiquarians, and connoisseurs of the book arts. In the long run, libraries never prosper by weeding, but they often prosper, especially in terms of their scholarly reputations, by refusing to weed. For example, one of the most important scholarly resources in the United States is the collection of the Library Company of Philadelphia. Founded in 1731 by Benjamin Franklin as a circulating library for subscribers, the Library Company, no longer a circulating library, created and maintains its stellar reputation as a research facility for early Americana in part by never having weeded either its original eighteenth-century circulating collection or subsequently acquired subcollections, even after those books had ceased to be frequently consulted. Digitization is an inadequate alternative to weeding because, even though it preserves the textual and graphic content, it destroys the physical evidence in and of the book.
The library of Trinity College, Dublin, built in 1732. Are treasured libraries such as this an endangered species? (Photograph by Adam Woolfitt. CORBIS. Reproduced by permission.)
Digitization is a proper tool for access, not for preservation. The original content in its original medium can be kept in a climate-controlled vault away from the everyday use that will wear it down, while the digitized version of this same content can be made freely available to library users, even thousands of miles away, via the World Wide Web. If used carefully, digitization can be a boon for providing access to otherwise inaccessible collections. It can indirectly aid the cause of preservation by allowing the original medium to remain undisturbed after the digitizing process has been completed. However, microfilming can damage or destroy books and digital scanning can tear paper. Digitization cannot work for archives or manuscripts, because in those cases the format is an inextricable part of the content. Reformatting books or other physically readable media digitally can provide greater access to content, but the preservation of the medium ensures the preservation of the content.
Concern for library security and worry about library disasters increased in the wake of the September 11, 2001, terrorist attacks on the United States. Many irreplaceable archives and artifacts were lost at New York's World Trade Center. Those passengers of United Airlines flight 93 who saved the U.S. Capitol from destruction probably also saved the Library of Congress, which is across the street from the Capitol. In view of all library disasters, whether natural, accidental, or deliberate, one fact must be remembered: machine-readable data, whether analog or digital, is less stable and shorter lived than data that can be read with the naked eye. When a library burns, there is some chance to save the books and even the microfilms, but there is no chance to save the electronic media. Virtual libraries inherently lack security, with each being only as safe as its server. A better plan to enhance the security of culturally important library collections is decentralization. The Digital Libraries Initiative, which began in 1994 under the aegis of the National Science Foundation, recognizes that the security of materials is gained not by digitization alone, but by decentralizing holdings and using digitization only to make their content more widely and democratically available. —ERIC V.D. LUFT
Viewpoint: No, digital libraries are not detrimental to our culture. They increase access and reduce the costs of communication.
The promise of digital libraries is a world where ideas are more available, where individuals have new ways to express themselves and be heard, and where the costs of communications can be minimized. The threats include the hamfisted use of technology, the deaths of small, authentic cultures, and the loss of the aesthetic joys of books and magazines.
A boy peruses library shelves. (Photograph by Kevin Cozad. © O'Brien Productions/CORBIS. Reproduced by permission.)
These risks are real. History is full of instances, from manned flight to the use of pesticides, where innovations have brought disastrous, unintended consequences. Culture can be lost irretrievably, as was proven with the destruction of the Library of Alexandria, beginning in 48 B.C. The visceral delights of a medium, such as radio theater, can be crushed under the weight of a new form of communication.
Certainly, the advent of digital libraries changes our time-honored relationship with books and magazines. This relationship includes access to knowledge and respect for learning, deliberation, and the official record. The written word has been key to shifts in power across social strata, the emergence of new ideas, and the establishment of safeguards for freedom, including freedom of the press. Much of the world's law, art, and religion has been built and sustained by the written word. How will this culture be altered by the juggernaut of digital libraries? Will people have more or less freedom? What institutions will be challenged? How will power shift? What will be lost and what will be gained?
Fundamentally, digital libraries create new paths to knowledge. They change who has access to information, what form this information takes, how it is filtered, and who is authorized to add to the body of knowledge. Tools such as search engines and metadata coding assume a major role. Ownership and authority are challenged. Digital libraries alter the way we view, understand, and use books and magazines. While this cultural change is inevitable, loss is not. In fact, there is already evidence that digital libraries are enriching our culture.
Functions
From a technical standpoint, digital libraries have much to recommend them. First,
their contents are more widely accessible, thanks to electronic connectivity. Millions of publications are now available for free through the Internet. Others are available for fees or with special authorization, such as from business partners.
Digital libraries create new paths to knowledge. Here, a librarian at the University of Southwestern Louisiana helps a student with computerized searching. (Photograph by Philip Gould. CORBIS. Reproduced by permission.)
Access, of course, is more than just having a connection. One also must be able to find the material. Locating materials on-line is possible thanks to ever more sophisticated search engines and indexes. Under most circumstances, finding information within specific documents in electronic form is significantly easier and faster than finding the equivalent information in books or periodicals. Can this efficiency work against understanding and against the serendipitous acquisition of knowledge? While there is definite value to a “library effect,” wherein books, periodicals, or paragraphs that are nearby the sought material provide new insights and enrich contexts, there are technical analogues. Tools, such as collaborative filtering, can provide similar outcomes to the library effect through suggesting or presenting similar materials. Even if these tools do not provide an exact replacement, digital libraries create a variety of new ways to grab a reader’s attention and create awareness by clustering documents based on key words, expert opinion, algorithms tied to use, and many other models.
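As a concrete illustration of the kind of keyword clustering the paragraph above describes, the following Python sketch ranks catalog entries by the overlap of their keyword sets, a crude stand-in for the "library effect." Everything here (the titles, the keyword sets, and the function names) is invented for the example; it is not a description of any particular digital-library system.

```python
# Minimal sketch of the "nearby materials" idea: suggest catalog entries whose
# keyword sets overlap most with the document a reader is viewing. All titles,
# keywords, and names below are invented for illustration.

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two keyword sets: 1.0 means identical, 0.0 means disjoint."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def suggest(current: str, catalog: dict[str, set[str]], limit: int = 3) -> list[str]:
    """Rank the other catalog entries by keyword similarity to `current`."""
    scored = [(jaccard(catalog[current], keywords), title)
              for title, keywords in catalog.items() if title != current]
    scored.sort(reverse=True)
    return [title for score, title in scored[:limit] if score > 0]

if __name__ == "__main__":
    catalog = {
        "Digital Preservation Basics": {"preservation", "migration", "metadata"},
        "Microfilm and Its Discontents": {"preservation", "microfilm", "acidic paper"},
        "Union Catalogs Explained": {"cataloging", "metadata", "interlibrary loan"},
    }
    print(suggest("Digital Preservation Basics", catalog))
```

Real systems layer far more onto this, such as usage data, expert-assigned subject headings, and full-text similarity, but the basic move of surfacing whatever sits "nearby" in keyword space is the same.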
There is no denying that today the form factor of the printed word is superior to accessing written material through personal computers, personal digital assistants, and electronic books. Markup on paper is simple, and there is no concern about batteries or glare or the intrusive sounds of fans and disk drives. However, these liabilities should be balanced against the new electronic capabilities to share markup, commentary, and references globally. The hyperlinking of documents to share references is enormously valuable.
Hyperlinking allows a level of personalization—enriching material to user specifications, providing explanations on demand, and, through expert finder technologies, providing access to real-time human help. An individual can collect and organize information that is directly relevant to his or her point of view. Effectively, this collection can represent a subjective view of the world, which can be easily shared with people of similar interests. Socially, digital libraries support the creation of smaller-scale communities, made up of people with shared goals or interests. Perhaps these digital communities do not have the same commitment and mutual concern as those communities that literally rub shoulders at physical libraries, but digital libraries provide their own starting points for face-to-face interactions. For those with rare needs and interests, digital libraries may provide the only alternative to being isolated. A key element of a strong and vibrant culture is the support and protection of creative works. Digital libraries do not eliminate copyright protections, but they do make it easier to copy and
distribute works without the permission of their creators. Napster, Gnutella, and other file-sharing tools have enabled large-scale abuses that have taken valuable intellectual property from artists, writers, and communicators. While some creators are happy to have their work distributed widely and certain models indicate a financial advantage, others assert that this inhibits them from producing original works. However, with cryptography, digital watermarking, and other protective technologies, it is getting easier to identify and punish those who pirate digital works. Beyond commercial concerns, many writers have discovered that the Internet and text have allowed them to circumvent the publishing industry and find an audience.
In the near term, there are specific problems with the inclusion of copyrighted materials in digital collections. New and old publications are easily incorporated. Books and periodicals created after the Internet revolution have specifications in their contracts for electronic distribution. Older materials can be included because they are out of copyright and in the public domain. But publications from the decades in between cannot easily be added to most digital libraries; this omission is serious and important. It can only be hoped that some legal or social solution will be found so that an historical quirk does not significantly twist research during the coming decades, while this material is less easily accessed by researchers.
Use
A persistent concern among critics of digital libraries is that they will steamroll smaller, more fragile cultures. This result would occur both because of the overwhelming presence of the amplified cultures and because of the tendency for the majority culture to absorb elements of minority cultures. Several historical instances and current trends illustrate this concern. For example, the number of languages used on a day-to-day basis by people in the world has been shrinking each year. On a less visible scale, food services have become homogenized due to the overwhelming advertising power of chain restaurants, which often gain a competitive advantage over local, quirkier restaurants. Pervasive media, such as radio and television, have reduced the number of regional dialects and accents. Even when authentic cultures are not annihilated, they can be lost. There can be a mélange effect of transforming distinct cultural artifacts into something that is more digestible by the larger population, for example the conversion of traditional folk music into popular music.
It is indisputable that digital libraries provide advantages to those cultures that can most easily access and leverage their possibilities. The structure of searches and cataloging is inevitably biased toward those who structure and catalog the
material. Given the capabilities of digital libraries to reconfigure modular versions of text, individuals and groups can easily sample, combine, and reinterpret traditional texts. Besides expertise, there is concern that the expense of digital tools distorts the culture by giving a louder voice and more power to the wealthy. However, the trend toward ever cheaper communications and computing power makes it likely that cultural dominance based on an ability to pay for technology will be less of a challenge than paying for printing presses and the physical distribution of books and periodicals. The widespread distribution and availability of digital texts concerns some critics. Neil Postman, a professor of culture and communications at New York University, points out that cultures retain their power and define themselves as much by what they are able to keep out as by what they are able to create and use. As an example, he cites the challenges faced by cultures that define themselves by a literal interpretation of sacred scripts when they cannot keep out the culture of science. Historically, there are many examples of cultures that have been too fragile to withstand the introduction of new ideas. A great deal of change and dislocation was created by the invention of the printing press in the fifteenth century. Today, the reach of digital technology is worldwide and immediate. Cultures that have been able to withstand previous waves of modernization and have had time to adapt to other challenges may collapse or be distorted by digital library technologies. It is a philosophical question whether people deserve to be exposed to a variety of options rather than to have their information filtered by others. Whether the advance of new technologies simply selects for the most robust and adaptable cultures or whether it selects for the best is a value judgment. For those who are concerned about the effects of digital libraries in disrupting cultures, there are strategies available. Proponents believe digital libraries can be used to catalog and revivify cultures. With materials in digital form, this process can be abetted by the widespread use of translation tools, which can shift power from dominant languages like English. Given enough societal interest, languages and accents and text can be preserved in a rich and complete manner without having to overcome the traditional burdens of finding an audience and providing capital in the physical world of books and periodicals. There are already potent examples of “dead” languages, such as Hebrew and Irish, being revived. These may be used as models by those who are concerned that authentic cultures will be destroyed or turned into stuffed animals on display in digital libraries. Even reinterpretation and combinations can enrich cultures. In the hands of the composer Aaron Copland, the Shaker hymn
“Simple Gifts” became Appalachian Spring (1944). Historically, Latin was transformed by locals with their own distinct languages into the modern languages of French, Spanish, and Italian.
The values of the culture that created digital libraries come along with the medium itself. Just as the availability of printing presses established the practical reality that became the principle of freedom of the press, digital libraries bring their own forces for social change. For those who believe that Western culture, with its traditions of analysis and logic, advocacy, and free enterprise, is inherently destructive to the human spirit, there is little that can defend digital libraries. The best that may be said is that the more dehumanizing effects can be mitigated. For example, materialistic aspects can be reduced by making major elements free, as has been advocated by the open source movement. In fact, one of the first projects to spring forth from the electronic community was Project Gutenberg, which since 1971 has enlisted volunteers to convert printed books into free, digital form so they can be made more widely available.
Digital libraries offer new approaches to learning. Shown in this February 2000 photo are King Juan Carlos of Spain and U.S. Librarian of Congress James Billington, announcing a collaborative project between the Library of Congress and the National Library of Spain. (© Reuters NewMedia Inc./Corbis. Reproduced by permission.)
Aesthetics
But what about the feel of a book in one's hands? The smell of the paper and ink? The crisp music of pages turning? What about the special sense of having a personal collection of books on the shelf, or the experience of giving or getting an inscribed copy of a book? Books are more than just content. In the cases of the Koran, Talmud, and Bible, they are venerated religious objects. Periodicals bring their own thrill because they arrive fresh, new, and colorful in the daily mail. Libraries have their own presence, with a hush and a history and experiences of using your library card for the first time. Can digital libraries replace any of these feelings? Can they ever create the same joy or ambience or even sense of awe?
The answers to these questions are inevitably mixed. The complete elimination of physical books would represent a significant aesthetic loss. But the book has faced and adapted to challenges over time. Television and radio have not eliminated the written word. Audio books sell far fewer copies than their original printed versions. It would be an underestimation of the robustness of printed media to assume that digital libraries could vanquish them in the foreseeable future. It can be argued that the creation of paperback books, which are routinely pulped, reduced the respect for books and possibly for learning. Anything that is cheap and common must gain respect and dignity from something other than scarcity. Indeed, some paperbacks have become collectors' items, and important writers have established themselves in the paperback market. These facts give hope to digital writers, whose work can be eliminated with the flip of a switch. A direct aesthetic benefit of digital libraries is environmental—they keep paper out of landfills and leave trees standing. The aesthetic potential of digital libraries should not be discounted. The medium is still young. It will undoubtedly spawn
its own artists and designers who will take hyperlinking, multimedia, and all the other tools the technologists have provided to create something beautiful. Chances are that this work, a very human endeavor, will grow up to be a valued part of our culture. —PETER ANDREWS
Further Reading
Arms, William Y. Digital Libraries. Cambridge, Mass.: MIT Press, 2000.
Bellinger, Meg. "The Transformation from Microfilm to Digital Storage and Access." Journal of Library Administration 25, no. 4 (1998): 177–85.
Berger, Marilyn. "Digitization for Preservation and Access: A Case Study." Library Hi Tech 17, no. 2 (1999): 146–51.
Berkeley Digital Library SunSITE. "Preservation Resources."
Borgman, Christine L. From Gutenberg to the Global Information Infrastructure: Access to Information in the Networked World. Cambridge, Mass.: MIT Press, 2000.
Brancolini, Kristine R. "Selecting Research Collections for Digitization: Applying the Harvard Model." Library Trends 48, no. 4 (Spring 2000): 783–98.
Coleman, James, and Don Willis. SGML as a Framework for Digital Preservation and Access. Washington, D.C.: Commission on Preservation and Access, 1997.
De Stefano, Paula. "Digitization for Preservation and Access." In Preservation: Issues and Planning, ed. Paul N. Banks and Roberta Pilette, 307–22. Chicago: American Library Association, 2000.
———. "Selection for Digital Conversion in Academic Libraries." College and Research Libraries 62, no. 1 (January 2001): 58–69.
DeWitt, Donald L., ed. Going Digital: Strategies for Access, Preservation, and Conversion of Collections to a Digital Format. New York: Haworth Press, 1998.
Gilliland-Swetland, Anne J. Enduring Paradigm, New Opportunities: The Value of the Archival Perspective in the Digital Environment. Washington, D.C.: Council on Library and Information Resources, 2000.
Graham, Peter S. "Long-Term Intellectual Preservation." Collection Management 22, no. 3/4 (1998): 81–98.
Jephcott, Susan. "Why Digitise? Principles in Planning and Managing a Successful Digitisation Project." New Review of Academic Librarianship 4 (1998): 39–52.
Johns, Adrian. The Nature of the Book: Print and Knowledge in the Making. Chicago: University of Chicago Press, 1998.
Kenney, Anne R., and Paul Conway. "From Analog to Digital: Extending the Preservation Tool Kit." Collection Management 22, no. 3/4 (1998): 65–79.
Lazinger, Susan S. Digital Preservation and Metadata: History, Theory, Practice. Englewood, Colo.: Libraries Unlimited, 2001.
Lesk, Michael. "Going Digital." Scientific American 276, no. 3 (March 1997): 58–60.
Levy, David M. "Digital Libraries and the Problem of Purpose." D-Lib Magazine 6, no. 1 (January 2000).
Lynn, M. Stuart. "Digital Preservation and Access: Liberals and Conservatives." Collection Management 22, no. 3/4 (1998): 55–63.
Marcum, Deanna B., ed. Development of Digital Libraries: An American Perspective. Westport, Conn.: Greenwood Press, 2001.
Negroponte, Nicholas, and Michael Hawley. "A Bill of Writes." Wired (May 1995).
Ogden, Barclay W. "The Preservation Perspective." Collection Management 22, no. 3/4 (1998): 213–16.
Postman, Neil. Technopoly: The Surrender of Culture to Technology. New York: Vintage Books, 1993.
Stam, David H., ed. International Dictionary of Library Histories. Chicago: Fitzroy Dearborn, 2001.
Stern, David, ed. Digital Libraries: Philosophies, Technical Design Considerations, and Example Scenarios. New York: Haworth Press, 1999.
Stoner, Gates Matthew. "Digital Horizons of the Copyright Frontier: Copyright and the Internet." Paper presented at the Western States Communication Association Conference, Vancouver, B.C., February 1999.
Teper, Thomas H. "Where Next? Long-Term Considerations for Digital Initiatives." Kentucky Libraries 65, no. 2 (Spring 2001): 12–13.
Tiwana, Amrit. The Knowledge Management Toolkit: Practical Techniques for Building a Knowledge Management System. Upper Saddle River, N.J.: Prentice Hall, 2000.
Waters, Donald J. "Transforming Libraries through Digital Preservation." Collection Management 22, no. 3/4 (1998): 99–111.
MEDICINE
Historic Dispute: Were yellow fever epidemics the product of locally generated miasmas?
Viewpoint: Yes, prior to the twentieth century, most physicians thought that yellow fever epidemics were the product of environmental factors, including locally generated miasmas.
Viewpoint: No, yellow fever epidemics were not the product of locally generated miasmas; evidence eventually proved that yellow fever is spread by the mosquito Aedes aegypti.
The origin of yellow fever is almost as mysterious and controversial as that of syphilis and concerns the same problem of Old World versus New World distribution of disease in pre-Columbian times. Until the twentieth century, the cause and means of transmission of the disease were also the subjects of intense debate. Some historians believed that the Mayan civilization was destroyed by yellow fever and that epidemics of this disease occurred in Vera Cruz and Santo Domingo (Hispaniola) between 1493 and 1496. Others argue that the first yellow fever epidemics in the Americas occurred in the 1640s in Cuba and Barbados and that the disease came from Africa. Waves of epidemic yellow fever apparently occurred throughout the Caribbean Islands during the seventeenth and eighteenth centuries. By the eighteenth century, yellow fever was one of the most feared diseases in the Americas. In the United States, yellow fever epidemics always broke out in the summer or autumn and disappeared rapidly as the weather turned cold. In some tropical zones, however, the disease was never absent. Epidemiologists say that "humans suffer most from those illnesses for which they are not the intended host." This is true for yellow fever, which is transmitted to humans by mosquitoes from its normal reservoir in nonhuman primates. Some aspects of the eighteenth and nineteenth century debates about the cause of yellow fever epidemics can be clarified by reviewing current knowledge of the disease. Yellow fever is an acute viral disease usually transmitted to humans by the mosquito now called Aedes aegypti. Jungle yellow fever remains endemic in tropical Africa and the Americas, but historically, the urban or epidemic form has been most important. Dense populations of humans or other primates are needed for transmission of the disease from one victim to another. The mosquito, which only bites when the temperature is above 62°F (16°C), has a very limited range and needs stagnant water to live and breed. However, the mosquito hibernates during periods of low temperature, and the eggs can withstand severe drying conditions for several months before hatching. The virus must be incubated in a mosquito for 9 to 18 days before the mosquito can infect another person. During the first stage of the disease the virus circulates in the blood and a mosquito can become infected by biting the patient. Chills, headache, severe pains in the back and limbs, sore throat, nausea, and fever appear after an incubation period of three to four days. Physicians who were well acquainted with the disease might note subtle diagnostic clues such as swollen lips, inflamed eyes, intense flushing of the face, and early manifestations of jaundice. The second stage was an often deceptive period of remission, in which the fever diminished and the symptoms seemed to subside.
In many patients, however, the disease would enter the third phase, which was marked by fever, delirium, jaundice, and the "black vomit." Death, with profound jaundice, was usually caused by liver failure, but the disease could also damage the kidneys and heart. In the absence of modern diagnostic aids, physicians often found it difficult to distinguish between yellow fever, dengue fever, malaria, and influenza. Since many diseases may coexist in impoverished tropical countries where yellow fever is most common, differential diagnosis remained a problem well into the twentieth century. Because yellow fever is caused by a virus, no specific remedies have ever been available. Complete rest, good nursing care, and symptomatic relief remain the most effective therapies. Determining the mortality rate of yellow fever is difficult because many mild cases may not be reported. Estimates in various epidemics have ranged from 10% to as high as 85%. Statistics from some nineteenth century hospitals reveal mortality rates as high as 50%. However, physicians generally claimed that the mortality rate for their private patients was closer to 10%. Presumably these differences reflect the different health statuses of wealthy private patients and poor hospitalized patients. Despite the antiquity of yellow fever epidemics, the cause and means of dissemination of the disease were not understood until the twentieth century. Nevertheless, debates about the nature of the disease and the best method of treatment and control were widespread and sometimes hostile. Most eighteenth and nineteenth century physicians thought that endemic and epidemic diseases were caused by environmental conditions, especially bad air, heat, humidity, and filth. Thus, the collection of meteorological and geographical information was of great interest to physicians who hoped to find the critical correlations between disease and environmental factors. Debates about the causes and means of dissemination of disease, however, included contagion theory as well as the miasma theory. Contagion referred to transfer by contact. The idea that disease, impurity, or corruption can be transmitted by contact is very old, but it was generally ignored in the Hippocratic texts. According to the miasma theory, disease was caused by noxious vapors that mixed with and poisoned the air. The Italian physician Girolamo Fracastoro (1478–1553) published a classic description of these theories in 1546. In De contagione et contagiosis morbis (On Contagion and Contagious Diseases), Fracastoro proposed an early version of the germ theory. He suggested that diseases were caused by invisible "seeds." Some diseases, such as syphilis and gonorrhea, were only spread by direct contact, whereas other diseases, such as malaria, were transmitted by noxious airs. The seeds of some diseases, however, could contaminate articles that came in contact with the sick person. These articles, usually referred to as fomites, could then cause sickness in another person. Although seventeenth century microscopists had seen bacteria, protozoa, and molds, the connection between microorganisms and disease was not established until the second half of the nineteenth century. Even during the "golden age" of medical microbiology in the late nineteenth century, the contagion/miasma controversy concerning yellow fever could not be resolved until studies of insect vectors explained the chain of transmission.
Ancient writers had speculated about the role of insects in the transmission of disease, but scientific confirmation was not established until the late nineteenth century through the work of British parasitologist Sir Patrick Manson and the bacteriologist Sir Ronald Ross. Nevertheless, several clues to the puzzle of yellow fever had been collected long before the “mosquito theory” was subjected to experimental trials.
Yellow fever became a disease of special interest to the United States as a result of the Spanish American War in 1898. After the war, the U.S. Army occupying Cuba found yellow fever a constant threat. Improvements in the sanitary status of the island failed to decrease the threat of yellow fever. In 1900 a special commission, headed by Major Walter Reed, a U.S. Army pathologist and bacteriologist, was sent to investigate the causes of infectious diseases in Cuba. Following leads provided by a Cuban epidemiologist, Carlos J. Finlay, Reed and his colleagues carried out carefully controlled experiments to determine whether yellow fever was transmitted by miasma, direct contagion, or the bite of infected mosquitoes. Reed’s results clearly disproved the miasma theory and established rigorous proof that the disease was transmitted by the bite of infected mosquitoes. Acceptance of the mosquito theory made efforts to control yellow fever possible, but did not fully eliminate the threat of this viral disease. The discovery of jungle yellow fever in the 1930s among tree-dwelling primates in South America and Africa meant that an inexhaustible reservoir of disease would thwart all efforts at eradication. Safe and effective yellow fever vaccines were not widely available until the 1940s. Epidemics of yellow fever in Africa in the 1970s stimulated renewed interest in the yellow fever virus. The disease also remains a threat in South America. The reservoir of yellow fever virus in South American rain forests and the resurgence of Aedes aegypti in urban areas led some public health specialists to warn that it is only a matter of time before urban centers again experience the threat of yellow fever. Mysteries about the distribution of the disease remain. For example, the disease is unknown in Asia even though Aedes aegypti are common.
The 1793 yellow fever epidemic in Philadelphia, which was then the capital of the new republic, was the first major outbreak in the United States. Later epidemics occurred in New York, Baltimore, Norfolk, and other urban centers. The effects of this epidemic on Philadelphia, and on medical thought and practice, were profound, as indicated by the debates about whether the epidemic was the product of locally generated miasmas. Yellow fever epidemics have been studied from many perspectives. Historians have been especially interested in examining the relationship between health, politics, public health regulations, professionalism and medical practice, especially during times of crisis in the early days of the United States. The debate about the cause of yellow fever was much more complex than a debate between narrow interpretations of contagion and miasma. The controversy about the origins and treatment of yellow fever in Philadelphia became part of the wider political debates of the 1790s, i.e., the conflict for political leadership between supporters of Alexander Hamilton (George Washington's secretary of the treasury), who advocated a strong central government, and those of Thomas Jefferson, who did not. Perhaps because Philadelphia was home to the most learned medical community in the nation, the debates about the epidemic were particularly heated, and, because of the political climate, often quite hostile. Even decisions about treatment regimens became embroiled in political disputes. Nevertheless, neither side could provide conclusive proof as to whether the disease was the product of contagion or miasma, imported or generated by local conditions. After the epidemic of 1793, Philadelphia reached for a pragmatic compromise and attempted to institute both quarantine and sanitary reforms when dealing with later epidemics. Although yellow fever no longer excites the fear that it inspired in the eighteenth century, understanding the historical debate about this disease might provide cautionary lessons for dealing with the threat of West Nile fever in New York and other parts of the United States. —LOIS N. MAGNER
Viewpoint: Yes, prior to the twentieth century, most physicians thought that yellow fever epidemics were the product of environmental factors, including locally generated miasmas.
When a community is struck by epidemic disease, one of the first priorities is to identify the source of the illness. By being able to explain how the epidemic started, those responsible for public health can take action to contain the epidemic and prevent future outbreaks. In the case of the epidemics of yellow fever that struck the urban centers of Europe and the Americas during the eighteenth and nineteenth centuries, the debate over etiology reflected some of the fundamental issues in Western medical theory at this time. Until the middle of the nineteenth century, most medical observers were convinced that yellow fever epidemics were the result of locally generated miasmas. This was not a new medical theory, but part of a long tradition, dating back to early Greek medicine. Many of the epidemiological characteristics of yellow fever, how it appeared and spread through the community, were puzzling to physicians and public officials. As a result, they examined their surroundings and tried to construct explanations that were rooted in the specific conditions that they found in their urban environments. The hypothesis that epidemics of yellow fever were
the products of locally generated miasmas was convincing, because it accounted for many aspects of the epidemics that the idea of an outside contagion, spread from person to person, could not. Therefore, many people in the early nineteenth century regarded it as the most convincing explanation for the epidemics that threatened their communities.
Miasma and the Environment
For many centuries, the concept of miasma had been important in debate over the causes of disease. Miasma is a Greek term, which in its most basic sense referred to any kind of pollution or polluting agent. In regard to late eighteenth and early nineteenth century discussions about disease, the term specifically referred to noxious vapors that tainted the air, causing yellow fever and other epidemic diseases. These vapors could arise from various sources, such as stagnant water and marshes, dead animals or corpses, rotting food and vegetable matter and any other kind of filth or decaying material. Once the air was polluted in this manner, it could have an adverse effect on the human body, resulting in illnesses throughout the community. Changes in weather conditions could also affect the air, producing miasmas that could create epidemics. For this reason, many epidemics were associated with climatic events, such as great thunderstorms.
Those medical theorists who argued that yellow fever was the product of local miasmas had centuries of medical theory to support their arguments, reaching back to antiquity. Theories of miasma are to be found amongst the Hippocratic writings of early Greek medicine, as part of their general focus on the role of the environment in causing disease in the human body. The main thrust of this approach is that the source of disease exists not within the sick person, but in the external environmental conditions, which create pathological changes within the human body. In particular, many early medical writings put a great deal of emphasis on the role of the air in affecting health and causing disease. As the human body is dependent upon air for life, and because air is everywhere, the idea that polluted air could cause epidemic disease has always been very compelling. Identifying the air as the source of disease provided an ideal explanation for the simultaneous appearance of the same disease in many different people, which made it particularly relevant in understanding disease epidemics.
Yellow fever victims in New Orleans during an 1878 outbreak. Many cities in the southern United States suffered from periodic epidemics of yellow fever until the advent of modern sewage systems. (© Bettmann/CORBIS. Reproduced by permission.)
Urban Society: Filth and Fear
The theory of miasmas was also compelling as an explanation for disease to people who were concerned about the conditions of urban life in the eighteenth and nineteenth centuries. Given the lack of adequate sewerage systems or a means of disposing of household and industrial wastes, cities were places of foul smells and disgusting filth. In the era before the knowledge of the role of bacteria in causing disease, the effect of the filthy environment upon the air was the major public health concern. It required little imagination on
the part of concerned citizens to believe that the piles of rotting offal and carcasses of dead animals that accumulated in the streets could let off stinking fumes that would taint the air and cause diseases. In Philadelphia during the 1793 yellow fever epidemic, observers criticized the usual methods of disposing of household wastes and food scraps, because they created a poisonous stench that had an unhealthy effect on the air. Sanitation was therefore a crucial issue for those who believed that epidemics were a product of unclean conditions. When cases of yellow fever would start to appear in a city, one of the first actions usually taken by city officials was to order the streets to be cleaned up. The threat of yellow fever within a city inevitably brought with it criticism of public cleaning measures. Often, epidemics were blamed upon the laziness and ignorance of public officials, whom people held responsible for the filthy conditions that were believed to create epidemics. Such criticisms indicate the close association in many people's minds between the local environmental conditions and disease epidemics. Therefore, urban officials came under increased pressure during the nineteenth century to provide a hygienic environment for their citizens.
Philadelphia 1793
The yellow fever epidemic that struck Philadelphia in 1793 was not the largest in terms of loss of life, but it was significant for a number of reasons. Philadelphia was
the capital of the United States at the time and the major political and economic center of the young nation. As a result, it was also home to some of the leading physicians in the country. This was the first major epidemic of yellow fever to strike a United States city, and it highlighted crucial divisions between medical thinkers. Physicians split over the issue of whether the disease was a contagion that had been brought in from outside the city, or whether it had arisen within the city as a result of the conditions there. Many people claimed the disease had been imported with shiploads of French refugees escaping a revolution on the Caribbean island of Hispaniola (now Haiti and the Dominican Republic). However, a prominent Philadelphia physician, Dr. Benjamin Rush, argued in favor of the local origins. Rush regarded the disease as a product of conditions within Philadelphia, and particularly pointed the finger at a cargo of rotting coffee that had been left for weeks on the wharves near the neighborhood where the first cases of fever were reported. This refuse had created a noxious miasma that polluted the air, causing the epidemic. This hypothesis did not meet universal acceptance from Rush’s medical peers. The Philadelphia College of Physicians, which represented medical authority in the area, maintained the disease was an outside contagion. However, there were a number of reasons why Rush’s explanation of the cause of the epidemic was compelling. First, the pattern of the spread of the disease did not support the theory that the disease had been transmitted from person to person through direct physical contact. People in various parts of the city contracted the disease simultaneously, without any chain of direct contact between them. In an age before an understanding of viruses and the role that mosquitoes played in spreading some diseases, there was no adequate theory of contagion that could explain how a yellow fever victim in one street could infect another person living two streets away without any physical contact. Therefore, the presence of a pathological element in the general atmosphere seemed to offer a more convincing explanation.
KEY TERMS
CONTAGION: Illness that can be spread from one person to another, usually through physical contact.
ENDEMIC: Disease that occurs within a specific area, region, or locale.
ETIOLOGY: Study of the nature and causes of disease.
EPIDEMIOLOGY: Study of how diseases spread from person to person and from place to place.
HIPPOCRATIC TEXTS: Collection of early medical writings which served as the basis of medical thought until the seventeenth century; named after the Greek physician Hippocrates (c. 460–377 B.C.), although not all of them were written by him.
MIASMA: Pollution or poison, arising from rotting or unclean material, which taints the air and causes disease.
PATHOLOGICAL: Altered or caused by disease.
VECTOR: Organism that transmits disease-causing microorganisms.
Secondly, those who attended the sick did not always contract the disease themselves. If the disease was contagious, why didn't everyone who had contact with the sick get it? One of Rush's students claimed that various experiments, such as drinking the vomit of infected victims, were carried out during the epidemic to support the theory that the disease was not contagious. The idea that the fever was a product of the local environment rather than carried by the victim and transmitted to others was also supported by the fact that the disease did not spread out of Philadelphia. Those who contracted the disease and were taken outside the city did not appear to transmit it to others, and the epidemic was largely confined to the environs of the city. This seemed to suggest that the disease was a product of the specific conditions within Philadelphia that summer and autumn. Anticontagionists such as Rush pointed to these issues when arguing against the theory that the epidemic was caused by a contagion introduced from an outside source. To find the cause of the epidemic, people needed to look no further than the poisonous vapors produced by the refuse within their own city. The connection that was drawn between the epidemic and the specific local conditions of the city is reflected in part of a poem written by a local newspaper editor, Philip Freneau, during the epidemic:
Nature's poisons here collected
Water earth and air infected
O, what a pity
Such a City
Was in such a place erected.
In the poem, the environmental conditions of Philadelphia itself, rather than external sources of infection, were being blamed for the disease.
Benjamin Rush. (UPI/Corbis-Bettmann. Reproduced by permission.)
Anticontagionists versus Contagionists
As epidemics of yellow fever became a regular occurrence in nineteenth-century urban life, anticontagionist arguments gained wide support in the medical community, both in the United States and in Europe. In the first half of the nineteenth century, many physicians were
convinced that yellow fever was not contagious. Anticontagionists noted other details that seemed to support their position. Yellow fever epidemics usually occurred during the summer months, when the summer heat was most likely to cause putrefaction in the environment and create dangerous miasmas. The fever receded with the coming of colder weather, when the air was purified of noxious vapors by frost and colder temperatures. The connection between the incidence of epidemics and the seasons further encouraged people to identify environmental conditions as the source of the disease. The failure of quarantines to prevent outbreaks of yellow fever was also seen as evidence that epidemics were created by conditions in the city, rather than being brought in from the outside. Quarantines had been part of the standard public health response to epidemic disease since the plagues of the Middle Ages, but anticontagionists argued that they were an unnecessary disruption to trade and commercial enterprise, therefore having a negative effect on morale during an epidemic. In response to claims that epidemics usually coincided with the arrival of ships from cities hit by yellow fever, they pointed to instances when quarantines against such ships had failed to prevent epidemics. Instead, anticontagionists focused upon preventing epidemics by eliminating the sources of the miasmas that caused disease, such as piles of filth and stagnant water. Ironically, such measures would have succeeded in destroying some of the habitat of the mosquito, which, unbeknown to medical theorists at the time, was the main culprit in the spread of yellow fever. The debates over the measures that should be taken in response to the threat of yellow fever continued for most of the nineteenth century.
Conclusion
As yellow fever became a regular feature of the urban environment in the nineteenth century, public health officials tried to take actions that might protect their communities from the disease. In most cases, sanitation measures were combined with quarantines in an effort to ward off the threat. However, while cities might enforce quarantines, the theory that the disease was spread from person to person was difficult to sustain, given the way an epidemic started and progressed. To many medical observers, locally generated miasmas provided the best explanation for the characteristics of a yellow fever epidemic. In the context of late eighteenth and early nineteenth century ideas about disease, Dr. Rush's claim that a rotting cargo of coffee could infect the quality of the air, which in turn could poison the human body, was convincing. Such theories also had the benefit of centuries of medical authority, an important consideration in eighteenth and nineteenth century medicine. Most of all, the miasma theory was compelling because it offered an explanation rooted in the contemporary understanding of the interaction between the environment, the air, and the human body in causing disease. —KATRINA FORD
Viewpoint: No, yellow fever epidemics are not the product of locally generated miasmas; yellow fever is spread by the mosquito Aedes aegypti.
Physicians in the late eighteenth and nineteenth centuries were divided over the etiology and epidemiology of yellow fever. Debate centered on the question of the origins of yellow fever epidemics. Were they a product of miasmas, poisons that arose from the fetid environmental conditions and tainted the air? Or was yellow fever a contagion, carried into an area by an infected person and spread to others? For many medical observers, the idea that such a terrible disease could be a product of the local conditions was unthinkable. Instead, they regarded the disease as foreign, something that was brought into the community from the outside. As a result, local authorities enforced quarantines to protect an area from epidemics that
might be raging in other cities and towns. However, the argument that miasmas were the cause of epidemics had centuries of medical authority in its favor, and had broad support among medical experts in the early nineteenth century. Yet, as medical research began to focus on the search for the specific microscopic entities that caused diseases, the theory of miasmas became outmoded. Finally, the discovery and confirmation of the role of the mosquito as a vector in spreading the virus that causes yellow fever rendered both miasmas and earlier ideas of contagion obsolete in explaining yellow fever epidemics.
Contagion
Like the theory of miasmas, the concept of contagion has had a long and complex role in the history of medical thought. The term generally referred to a disease that could be passed from person to person, causing the same illness in each one. However, in an age before scientists understood how specific diseases were caused by bacteria or viruses, exactly how a disease could be transferred from one person to another was a source of enormous debate and inquiry. Nevertheless, in the case of epidemic illness in the eighteenth century, it was generally thought that such diseases were contagious. The measures that communities took against epidemics were based on the belief that physical contact with a sick person would cause another person to become ill. Therefore, for an epidemic of a disease such as yellow fever to occur in an area where the disease had not existed before, it was believed that it had to have been carried from the outside by an infected person.
It was not difficult for communities to identify foreign sources of the yellow fever contagion. In the case of the 1793 epidemic in Philadelphia, the recent arrival of refugees from the island of Hispaniola in the Caribbean, an area notoriously infested with yellow fever, was regarded by many prominent physicians as the source of the epidemic. Later epidemics in the cities of the American South were also traced to the arrival of ships and trade from the fever zones of the Caribbean and South America. Such beliefs were no doubt informed by xenophobia and parochialism. Communities have always tended to blame outbreaks of diseases on foreign scapegoats, and people also feared the damage the belief that the disease had arisen out of an unhealthy environment could do to the reputation and economic growth of their city. However, the idea that the disease was a contagion was also founded on the observation that epidemics appeared to develop following the arrival of ships, goods, and people from other infected areas.
Many medical theorists challenged the belief that yellow fever was a contagion. During the 1793 Philadelphia epidemic, Dr. Benjamin Rush argued that yellow fever was the result of
miasmas produced by a filthy environment. While Rush met opposition for his theories, particularly from the Philadelphia College of Physicians, anticontagionist ideas gained support amongst many physicians and medical theorists in the early nineteenth century. However, popular belief continued to assume that the disease was a contagion and could be caught through contact with an infected person. Contemporary accounts of the Philadelphia epidemic indicate that people were convinced that the disease could be caught through physical contact. People avoided friends and acquaintances in the street and shaking hands was frowned upon for fear it would spread the disease. Some observers also noted the cruel treatment that Philadelphians received from people outside the city, due to the popular fear that they carried the disease and could spread it to others. While miasmas may have gained ground among medical observers as an explanation for yellow fever, quarantines against people and goods from infected areas continued to be enforced by cities in North America and Europe. This indicates a continued belief in the contagious nature of the disease, despite what medical wisdom may have claimed.
The Decline of Miasma
By the middle of the nineteenth century, as yellow fever ravaged communities in the American South, serious doubts were raised about the theory that epidemics were caused by locally generated miasmas. Some observers questioned the association between yellow fever and a dirty environment, particularly when a city contained all the elements that supposedly generated miasmas, but no outbreak of the disease occurred. Such questions raised important issues. If the disease was a product of miasmas rising from putrefying filth, why did outbreaks not occur whenever such pollution was present? It was not clear why some towns had epidemics while others, with the same or even worse sanitary conditions, might escape altogether. It was becoming apparent to many people that miasmas from rotten garbage and filth were alone not enough to explain an outbreak of yellow fever. There had to be some other factor or element present to account for an epidemic. However, exactly what that factor was remained unresolved, and produced much debate and speculation until the beginning of the twentieth century. Medical observers were also coming to accept that the disease, if not directly contagious, was at least transportable. Although it did not seem to spread directly from person to person, it was apparent that epidemics did follow transport routes, from port to port, along railway tracks, and up rivers. In particular, in a variant of the idea of contagion, clothes and possessions that had come into contact with yellow fever victims were believed to be the major culprit in transporting the disease, rather than the people themselves. The idea that the disease was transportable was not always inconsistent with the emphasis placed upon the role of the local environment. Many claimed that while the disease could be imported into an area, local conditions determined whether an epidemic would take hold. Others remained committed to the idea that epidemics in some cities arose as a result of poisonous exhalations from urban filth, but allowed that the disease could then be spread to other towns and cities through infected goods. In the attempt to provide explanations for the characteristics of yellow fever epidemics, miasma and contagion were not necessarily contradictory approaches. This justified the combination of sanitation and quarantine measures in fighting off the threat of yellow fever.
Aedes aegypti, the mosquito that carries yellow fever. (© Bettmann/CORBIS. Reproduced by permission.)
208
SCIENCE
IN
DISPUTE,
VOLUME
2
The increasing focus of medicine in the nineteenth century upon specific diseases was important to the development of new epidemiological theories. Previously, diseases were generally not conceived of as specific entities with particular causes and cures. Yellow fever was regarded as but one of the many types of disease classed as fevers, which might differ in their symptoms but not in their essential causes. However, as physicians began to think about yellow fever and other illnesses such as malaria or cholera as distinct diseases, they began to focus on identifying the specific causes of these diseases. By the 1870s, medical opinion suggested that specific microscopic “yellow fever germs” were the cause of epidemics, not general miasmas that tainted the air. The hunt was on in the latter decades of the nineteenth century to identify this germ, although at this time the exact definition of what a germ was varied widely. However, because it was believed that germs thrived in an unclean environment, filth and unhygienic conditions remained a focus for public health authorities attempting to prevent epidemics. A shift had occurred from seeing the environment as the source of miasmas that poisoned the air to seeing it as the breeding ground for germs that spread the disease. As it turned out, both theories were incorrect.
The Hidden Culprit: A Mosquito Some early observers of yellow fever had noted the connection between the onset of epidemics and the presence of large numbers of mosquitoes. Supporters of the miasma theory explained this as further evidence of the fetid and poisonous nature of the air. In 1881, the Cuban epidemiologist Dr. Carlos J. Finlay presented a paper arguing that yellow fever was spread by the mosquito Aedes aegypti. However, Finlay was unable to present experimental proof of his hypothesis, and nothing came of his theory until 1900, when a team of medical experts headed by Dr. Walter Reed, a U.S. Army pathologist and bacteriologist, was appointed by the U.S. Army Medical Corps to investigate yellow fever epidemics and their causes. At this time, U.S. Army bases in Cuba were badly affected by yellow fever epidemics, and the disease also stood in the way of plans to build the Panama Canal. The Reed commission, with Finlay’s help, carried out experiments to test the different hypotheses of the cause and transmission of the disease. In one experiment, a group of army volunteers was isolated in a hut with the bedding and clothing of yellow fever patients, while another group was housed in clean surroundings and exposed to the bite of a mosquito that had bitten a yellow fever patient. The first group remained healthy, while the second contracted the disease, putting to rest the argument that the disease was spread through contact with the clothing, bedding, and belongings of yellow fever victims. These findings were developed into an intensive public health campaign by U.S. Army surgeon William Crawford Gorgas, who from 1898 to 1902 was in charge of sanitation measures in the Cuban city of Havana. Gorgas’s campaign was based on exterminating the mosquito and its habitat within the city’s urban areas, and any source of standing water where the mosquito might breed, such as jars, pitchers, and basins, was destroyed. The army hierarchy and public health officials did not immediately accept Gorgas’s campaign. Many ridiculed the idea that the disease was spread by the bite of the mosquito and criticized the campaign for ignoring the filth and rubbish that had traditionally been regarded as the source of yellow fever. However, Gorgas was able to practically eliminate yellow fever from Havana, and he repeated this success in Panama, opening the way for the completion of the canal. A similar campaign was carried out during the last outbreak of yellow fever in the United States, in New Orleans in 1905, where schoolchildren were rewarded for bringing in dead mosquitoes. The Reed commission had concluded that the agent causing the disease was a virus, although the exact identification and classification of the virus could not be established until the late 1920s. A vaccine for yellow fever was developed in the 1930s. Although control of the mosquito means that yellow fever has disappeared from cities in the United States, the disease continues to be present in some tropical areas of the world, and travelers to these areas require vaccinations to be protected from it.
Without knowledge of the role of mosquitoes in transmitting diseases, epidemics of yellow fever were profoundly baffling to physicians and other medical observers. Later developments in medical science showed that epidemics of yellow fever were not the result of miasmas generated in the local environment. The anticontagionists could not explain the connection of epidemics with the movement of people and goods, nor could they offer an explanation as to why communities had outbreaks in some years and not others. Yet neither was there a model of contagion that could adequately account for the epidemiological characteristics of yellow fever. As ways of thinking about diseases and their causes were transformed in the nineteenth century, such hypotheses became inadequate as explanations for disease. Medical thought began to focus upon specific diseases and their causes, and the idea of miasmas was too vague and general to function as an adequate explanation for epidemics of yellow fever. It could be said that those who advocated the miasma theory were correct in their belief that yellow fever was caused by an element present in the air. However, that element was not a poison produced by local wastes, but a virus made airborne within the body of its mosquito vector. In the late eighteenth century, such a suggestion would have seemed ludicrous to most medical theorists. A century later, it was the basis of a public health campaign that led to the eradication of yellow fever from the cities of the Americas and the Caribbean. —KATRINA FORD
Further Reading
Carey, Matthew. A Short Account of the Malignant Fever, Lately Prevalent in Philadelphia. 4th ed. New York: Arno Press Inc., 1970.
Estes, J. Worth, and Billy G. Smith, eds. A Melancholy Scene of Devastation: The Public Response to the 1793 Philadelphia Epidemic. Canton, MA: Science History Publications, 1997.
Hannaway, Caroline. “Environment and Miasmata.” In Companion Encyclopedia of the History of Medicine, W. F. Bynum and Roy Porter, eds. London and New York: Routledge, 1993, pp. 292–308.
Humphries, Margaret. Yellow Fever in the South. New Brunswick, NJ: Rutgers University Press, 1955.
McCrew, Robert E. Encyclopedia of Medical History. New York: McGraw-Hill Book Company, 1958.
Pelling, Margaret. “Contagion/Germ Theory/Specificity.” In Companion Encyclopedia of the History of Medicine, W. F. Bynum and Roy Porter, eds. London and New York: Routledge, 1993, pp. 309–34.
Powell, J. H. Bring Out Your Dead: The Great Plague of Yellow Fever in Philadelphia. Philadelphia: University of Pennsylvania Press, 1949.
Do current claims for an Alzheimer’s vaccine properly take into account the many defects that the disease causes in the brain? Viewpoint: Yes, current claims for an Alzheimer’s vaccine properly take into account the many defects that the disease causes in the brain—the claims are based on sound experimental results regarding beta-amyloid plaques. Viewpoint: No, a vaccine based on preventing the formation of beta-amyloid plaques is premature and could well prove ineffective—and possibly even harmful to humans.
By the end of the twentieth century, Alzheimer’s disease, a condition once considered very rare, had emerged as the most common cause of dementia. Alzheimer’s disease affects approximately four million people in the United States alone and accounts for almost half of all cases of dementia. The disease primarily attacks those over the age of 65: about 4% of the population over age 65 is affected, and by age 80 the prevalence is about 20%. Researchers predict that, unless methods of cure or prevention are discovered very soon, the disease will afflict about 14 million Americans by 2050. Although patients in the later stages of Alzheimer’s often succumb to infection, the disease itself is probably the fourth leading cause of death in the United States. Unfortunately, despite advances in diagnostic testing, a definitive diagnosis of Alzheimer’s disease can only be made at autopsy.

The National Institute of Neurological Disorders and Stroke defines Alzheimer’s disease as a “progressive, neurodegenerative disease characterized by memory loss, language deterioration, impaired visuospatial skills, poor judgment, indifferent attitude, but preserved motor function.” The early symptoms of the disease, such as forgetfulness and loss of concentration, are often ignored because they appear to be natural signs of aging. A clinical diagnosis of Alzheimer’s disease is generally made by excluding other possible factors, such as fatigue, depression, hearing loss, or reactions to various drugs, and by applying internationally recognized criteria for Alzheimer’s.

The disease is characterized by a gradual onset of subtle intellectual and memory problems, which become progressively severe over a period of about 5 to 15 years. As symptoms progress, the patient may display increasing loss of memory, diminished attention span, confusion, inability to recognize friends and family, restlessness, problems with perception and coordination, inability to think logically, irritability, and the inability to read, write, or calculate. Eventually, the patient will need full-time supervision. During the late stage of the disease the patient may be unable to recognize family members, communicate, or swallow. On average, patients die seven to eight years after diagnosis.
Descriptions of senility and dementia are ancient, but the first modern description has been attributed to the French psychiatrist Jean Etienne Esquirol, who wrote of a progressive “senile dementia” in 1838. Alzheimer’s disease is named after the German psychiatrist Alois Alzheimer, who described a neurological disorder of the brain associated with progressive cognitive impairment in 1906. Alzheimer’s mentor Emil Kraepelin, one of the founders of modern psychiatry, had previously identified the clinical symptoms of the condition.
After the death of a 55-year-old patient, Alzheimer studied the patient’s brain under the microscope and described the abnormal structures now called senile plaques. According to Alzheimer, these abnormal structures led to a shrinking of the brain.

Theories about the cause of the disease are still a matter of dispute, but since the 1980s researchers have focused their attention on deposits known as senile plaques, composed of a protein called beta-amyloid, found in the brains of Alzheimer’s patients. Many researchers believe that the formation of beta-amyloid protein plaques sets off a series of pathological reactions in the brain that ultimately lead to the death of brain cells and the development of the disease. This is known as the amyloid cascade hypothesis. The disease process also involves the formation of neurofibrillary tangles, caused by abnormal tau protein. Scientists are still uncertain as to whether the deposition of plaques leads to the formation of tangles or vice versa. Researchers believe that discovering ways to control the formation of beta-amyloid deposits should lead to effective therapies for curing or preventing the disease.

Other theories about the cause of Alzheimer’s disease suggest quite different strategies for cure or prevention. Some scientists believe that Alzheimer’s disease is caused by inflammatory processes associated with aging rather than by the formation of beta-amyloid plaques in the brain. Another theory ascribes the development of Alzheimer’s disease to the formation of toxic proteins in the brain, possibly derived from beta-amyloid, rather than the buildup of plaques and tangles. The amyloid cascade hypothesis remains controversial, primarily because of the weak correlation between the severity of neurological impairment in the patient and the extent of the deposition of amyloid plaque found at autopsy. Critics of the amyloid cascade hypothesis also point out that amyloid plaque can be found in the brains of people with normal intellectual function. Moreover, it is often found at sites far from the areas with the characteristic nerve loss found in patients with Alzheimer’s. Some researchers argue that the action of soluble toxins could explain the lack of correlation between the deposition of amyloid plaque and the progress of the disease. Other scientists suggest that Alzheimer’s is caused by the gradual deterioration of the blood vessels that normally maintain the blood-brain barrier, thus allowing toxic substances to enter and accumulate in the brain. The situation might be further complicated by the possibility that Alzheimer’s disease is not really a single disease, but a diagnostic term that has been applied to many different diseases with different causes.

One of the most promising but controversial avenues of research is a vaccine (AN-1792) that appears to prevent or reverse the formation of the amyloid plaques associated with Alzheimer’s disease. In testing this strategy in genetically altered mice that developed the amyloid plaques associated with Alzheimer’s disease, researchers found evidence that the vaccine appeared to retard or even reverse the development of the amyloid plaques. When immature mice were inoculated with a peptide that caused the formation of antibodies that attacked the precursor of brain plaques, little or no plaque was later found in the brains of the treated mice. Significant amounts of plaque developed in the control group.
Further experiments suggested that the vaccine could reduce the amount of plaque in older mice. However, researchers acknowledged that the results in mice might not be directly applicable to the situation in human patients. Although the transgenic mice used in these experiments produce amyloid plaque, they do not exhibit the extensive neuronal loss and other signs found in Alzheimer’s disease patients. Some scientists, therefore, believe that vaccines may ultimately cause extensive damage in human patients. That is, if the vaccines cause the production of antibodies that penetrate the blood-brain barrier, the antibodies may trigger a massive and dangerous immune response. Despite the uncertainty about the safety and efficacy of the experimental vaccine, human clinical trials for AN-1792 were initiated in December 1999. Early trials indicated that the vaccine appeared to be safe and that further tests to determine efficacy could be conducted. Evaluating the efficacy of the vaccine in humans, especially the impact of treatment on the subtle defects in memory, mood, and logical thinking, will be a time-consuming and difficult enterprise.
Although, as yet, no drugs can cure or prevent Alzheimer’s disease, physicians believe that early diagnosis and treatment are important in delaying the onset of severe symptoms. Patients with mild to moderate symptoms may benefit from medications such as tacrine (Cognex), donepezil (Aricept), and rivastigmine (Exelon). According to a survey released by the Pharmaceutical Research and Manufacturers of America in 2001, 21 new drugs for Alzheimer’s disease were either in clinical trials or awaiting final approval by the Food and Drug Administration. Those who question the safety and efficacy of experimental vaccines believe that more resources should be devoted to the search for more conventional therapeutic drugs.

Despite continuing debates about the cause of Alzheimer’s and the most appropriate strategy for treating and preventing the disease, all researchers, physicians, and concerned citizens agree that finding ways to prevent or cure Alzheimer’s disease must be a key health-planning priority. — LOIS N. MAGNER
Alois Alzheimer (Photograph by Mark Marten. Photo Researchers. Reproduced by permission.)
Viewpoint: Yes, current claims for an Alzheimer’s vaccine properly take into account the many defects that the disease causes in the brain— the claims are based on sound experimental results regarding beta-amyloid plaques.
Scientists from Elan Pharmaceuticals in South San Francisco, California, stunned the world in July 1999 with their announcement: A new vaccine tested in mice seemed to prevent the buildup of beta-amyloid deposits in the brain, one of the hallmarks of Alzheimer’s disease. Suddenly, there was a glimmer of hope where before there had been nothing but dread. Alzheimer’s is an incurable brain disorder that usually occurs in people over age 60. The effect on the brain has been likened to gradually turning off all the lights in a house, room by room. Living with Alzheimer’s disease is a very difficult experience, not only for people with the disease, but also for their family members and friends. At present, there is nothing that can be done to stop it. Not surprisingly, then, word that there might be a vaccine on the horizon was greeted with great excitement. The news was especially welcome at a time when the number of older adults in the United States is growing rapidly. It is estimated that about 4 million Americans already have the disease, and that number is expected to rise to 14 million by midcentury unless a means of curing or preventing the disease is found. But is an Alzheimer’s vaccine a realistic hope or just hype? Today, many scientists say the promise is quite real.

Beta-Amyloid Plaques To understand how a vaccine might work, it helps to know a bit about how Alzheimer’s affects the brain. Our knowledge of this comes largely from scientists who have looked through microscopes at slices taken from the brains of people who died from one cause or another while having the disease. Scientists have noted two main hallmarks of Alzheimer’s: dense clumps, called plaques, found in the empty spaces between nerve cells, and stringy tangles found within the cells themselves. For decades, a debate has raged about which of these features is more important in causing the disease. The Alzheimer’s vaccine targets the plaques, which consist of a protein fragment called beta-amyloid. Research on the vaccine may go a long way toward settling the dispute.
Plaques show up early in Alzheimer’s disease, forming first in parts of the brain used for memory and learning. Eventually, the spaces between nerve cells can become cluttered with them. There is only a weak relationship between the density of the clumps and the severity of a person’s symptoms, however. Also, plaques are found even in the brains of healthy older people, although in smaller quantities. Still, the plaques seem to play a crucial role in the disease process. Perhaps the most telling evidence comes from genetics. Beta-amyloid is a short fragment of a larger protein called beta-amyloid precursor protein (bAPP). One rare, inherited form of Alzheimer’s is caused by mutations in the gene that carries the instructions for making bAPP. Others are caused by defects in genes that carry the instructions for an enzyme that snips apart bAPP to make beta-amyloid. How does beta-amyloid do its damage? One theory is that the brain sees tiny bits of beta-amyloid as foreign invaders, so it mounts an immune system attack against them. Immune cells called microglia are called out to clear away the beta-amyloid. As the microglia continuously go about their mission, the result is a state of chronic inflammation—the same immune response that causes a cut to become red, swollen, and tender when it is infected. Over time, the constant inflammation is thought to lead to the death of nearby nerve cells. Among the strongest pieces of evidence to support this view are studies suggesting that the long-term use of anti-inflammatory drugs such as ibuprofen reduces the risk of getting Alzheimer’s.
Of course, the formation of beta-amyloid plaques is not the only change seen in the brains of people with Alzheimer’s. Scientists have not forgotten about the twisted threads, called neurofibrillary tangles, that form inside the nerve cells. The chief component of these tangles is a protein called tau. In the central nervous system, tau is best known for its ability to bind and help stabilize microtubules, which are part of a cell’s internal support structure. Think of the transport system within a nerve cell as a railroad. The microtubules are the tracks, while tau makes up the ties that hold them together. When the tau gets tangled up, it no longer can hold the tracks together, and communication between the cells is derailed. The Story of AN-1792 Given the presence of tangles, can a vaccine that targets just the plaques really work? Many scientists now believe it can. Perhaps the most vocal advocate is Dr. Dale Schenk, a neurobiologist at Elan Pharmaceuticals, who with his colleagues developed the groundbreaking vaccine called AN-1792. Schenk’s inspired idea was deceptively simple: If vaccines could prevent everything from measles to polio, could one help stop Alzheimer’s disease as well?
Schenk’s team first genetically altered mice so that they developed the same kinds of plaques in their brains as people with Alzheimer’s. The scientists then injected some of the mice with a synthetic form of beta-amyloid, in hopes that their immune systems would mount an attack against it. When the immune system detects foreign invaders in the body, it produces tailormade molecules called antibodies to counteract them. Most vaccines are made of weakened or killed bacteria or viruses, which cause the body to make antibodies against a particular disease. Schenk thought the body might view the injected beta-amyloid as foreign and make antibodies against it, just as if it were a disease-causing germ.
KEY TERMS
ANTIBODY: A molecule that the immune system makes to match and counteract foreign substances in the body (antigens).
BETA-AMYLOID: A protein fragment snipped from a larger protein called beta-amyloid precursor protein (bAPP).
BETA-AMYLOID PLAQUES: Dense deposits, made of a protein fragment called beta-amyloid, found in the empty spaces between nerve cells.
COGNITIVE FUNCTION: The mental processes of knowing, thinking, learning, and judging.
IMMUNE RESPONSE: The way an organism responds to the presence of an antigen (a foreign invader in its body).
NERVE GROWTH FACTOR (NGF): One of several naturally occurring proteins, found in the brains of all vertebrate animals, that promote nerve cell growth and survival.
NEUROFIBRILLARY TANGLES: Abnormal structures in various parts of the brain consisting of dense arrays of paired helical filaments (threadlike structures), composed of any of a number of different proteins (such as tau), that form a ring around the cell nucleus.
TAU PROTEIN: A family of proteins made by alternative splicing of a single gene. Although found in all cells, they are major components of neurons and are predominantly associated with axons.
TRANSGENIC: Describes an organism that has had genes from another organism deliberately and artificially inserted into its genetic makeup.
VACCINE: A preparation that prompts the immune system to make antibodies against a particular disease-causing agent.
The gamble paid off. The results, published in July 1999 in the scientific journal Nature, were astounding. In one experiment, the researchers gave monthly injections of the vaccine to a group of mice starting at six weeks old, when they had yet to form plaques in their brains. Other groups of young mice did not get the vaccine. By the time all of the mice were 13 months old, those who had received the vaccine still had virtually no plaques, while those in the other groups had plaques covering 2% to 6% of their brains. In another experiment, the researchers gave injections of the vaccine to mice starting at 11 months old, when plaque formation already had begun. Once again, other groups of same-age mice did not get the vaccine. After seven months, plaque development was greatly slowed in the treated mice compared to the untreated ones. Some even seemed to show a decrease in the plaques that had existed before the treatment started.

Schenk and his colleagues still were missing some key pieces of the puzzle, however. For one thing, the mice in the studies did not develop the neurofibrillary tangles seen in the brains of people with Alzheimer’s. The only way to know for sure whether AN-1792 would be safe and effective for humans was to try it. The first step in human testing for a new drug in the United States is a phase-one clinical trial. These small, early studies are designed to find out how a drug acts in the human body and whether it is safe. The phase-one trials for AN-1792 involved 100 people with mild to moderate Alzheimer’s disease. No obvious safety problems were found, and some people did make antibodies in response to the vaccine.
Testing then moved ahead to the next step, phase-two trials. These are larger studies designed to test a drug’s effectiveness as well as its safety. A two-year phase-two trial for AN-1792, launched in 2001, is expected to include 375 people with mild to moderate Alzheimer’s disease in the United States and Europe. Even if the results are positive, the drug still must go through lengthy, larger phase-three trials before it can be approved for sale.

More Vaccine Research Meanwhile, other research teams had begun testing the vaccine in mice with similarly encouraging results. First and foremost, there was the chicken-or-egg question to answer: Since it had never been proved for certain which came first, plaques or Alzheimer’s, just showing that a vaccine could reduce the plaques was not enough; scientists also needed to show that it could decrease Alzheimer’s-like changes in thinking and behavior.
Two studies published in Nature in December 2000 showed just that. One study was led by Dr. Dave Morgan at the University of South Florida at Tampa; the other was led by Dr. Christopher Janus at the University of Toronto in Canada. In both cases, the researchers gave genetically altered mice repeated injections of the beta-amyloid vaccine, much as Schenk had done. They then gave the mice different versions of learning and memory tests in which the animals had to swim through a water maze until they learned the location of an underwater platform. Later, the mice were tested to see how well they remembered where the platform was. Both studies found that mice that received the vaccine did much better than those that did not. Yet a mystery remained: although the improvement in memory was large, the reduction in plaques was not. One possible explanation is that there is a critical threshold of plaques needed to cause learning and memory problems, and even a relatively small decrease in plaques can push the number below that level. Another possibility is that there is a toxic subset of beta-amyloid that is not being teased out by current methods. These studies were a vital link in the chain, since they showed that a beta-amyloid vaccine alone could lead to behavioral improvements, at least in mice.

The case for the vaccine also was strengthened by a third study that appeared in the same issue of Nature. This study, headed by Dr. Guiquan Chen at the University of Edinburgh in Scotland, focused on the genetically altered mice that have played such a crucial role in other research. The scientists found that the mice, which develop plaques but not tangles in their brains, do indeed show more Alzheimer’s-like changes in behavior as they age than normal mice.
This finding backed up the belief that beta-amyloid may be at the root of the disease.

Granted that the vaccine seems to work in mice, how exactly does it do so? That remains an open question. One possible explanation: When the vaccine is injected into the body, the immune system makes antibodies to it. These antibodies start circulating throughout the body in the blood. A few of them leak into the brain, where they may bind to plaques and act as flags for microglia to come to the area and clean it up. At first, some scientists feared that this process might actually make symptoms worse, by leading to the kind of inflammation that is thought to cause the death of nearby neurons. Fortunately, that did not prove to be the case. According to Morgan, “We started out expecting that the vaccine would overactivate the microglia, causing inflammation and perhaps neuron death, and certainly not helping the mice. It did just the opposite. While the microglia were activated, it didn’t seem to be at a high enough level to cause neuron loss.” Another possibility is that the 99.9% of antibodies that stay in the blood are the critical ones, rather than the 0.1% that leak into the brain. Antibodies bind to beta-amyloid in the bloodstream. In an effort to restore levels of the substance in the blood, the body may withdraw some beta-amyloid from the brain.

Problems and Solutions In all of the animal studies described so far, mice were given the vaccine in multiple injections. Similarly, in the phase-one human trials of AN-1792, people were given shots in the arm. However, a study led by Dr. Howard Weiner of Harvard Medical School holds out the hope that the vaccine might one day be given by a less painful means. In Weiner’s study, published in the October 2000 issue of Annals of Neurology, scientists used the same type of beta-amyloid vaccine as previous researchers, but gave it to the genetically altered mice in a nasal spray. They found that treated mice had 60% fewer plaques than untreated ones—not as dramatic a decrease as that seen with injections, but still impressive.
One concern about the beta-amyloid vaccine is that the vaccine itself may prove toxic. When the whole beta-amyloid molecule is used, it can form tiny fibers called fibrils. These fibrils can attract other molecules and cause them to clump together. If some of the beta-amyloid in a vaccine were to cross from the blood into the brain, as it is capable of doing, it might even contribute to plaque formation there. Dr. Einar Sigurdsson and his colleagues at the New York University School of Medicine think they may have found a novel solution to this problem. Their new vaccine is a modified form of beta-amyloid. Because the vaccine does not have fibrils or create clumps, it may be a safer alternative.
A study published in the August 2001 issue of the American Journal of Pathology found that the new vaccine was very effective, reducing plaques in genetically altered mice by 89%.

What works in mice does not always work in humans, of course. There still is much research to be done on the Alzheimer’s vaccine. Even if all goes well in human trials, the vaccine will have its limits. For some older people, it may not be effective, since the immune system’s ability to mount an antibody response tends to decline with age. For those who already have advanced Alzheimer’s disease, it simply may be too late. Although the vaccine may halt or slow the disease, it will not bring dead nerve cells back to life. Yet the potential value of such a vaccine is enormous. In people with early stages of the disease, it may keep the symptoms from getting worse or at least slow them down. In people with rare genetic defects that cause inherited Alzheimer’s, it may keep the disease from ever starting. For the rest of us who could one day develop garden-variety, noninherited Alzheimer’s, the prospects are bright as well. According to Sigurdsson, “Right now, scientists are trying to find ways to diagnose Alzheimer’s disease before any symptoms appear. If you could detect plaques with a brain scan, for instance, before a person’s memory and thinking become impaired, you could start the vaccine then and perhaps prevent the disease from occurring.” —LINDA WASMER ANDREWS
Viewpoint: No, a vaccine based on preventing the formation of beta-amyloid plaques is premature and could well prove ineffective—and possibly even harmful to humans.
Alzheimer’s Disease and Its Manifestations Alzheimer’s disease (AD) is the most common cause of dementia—the general loss of cognitive, or intellectual, ability. As of 2001, AD is irreversible and incurable. This degenerative neurological disease primarily affects individuals in the over-65 age group, causing progressive loss of cognitive, functional, language, and movement ability, and ultimately death. According to the Alzheimer’s Association, in the late 1900s and early 2000s, 1 in every 10 people over the age of 65 and half of those over the age of 85 suffered from the disorder. While its exact cause remains unknown, researchers know that the disease is manifested by the degeneration and death of cells in several areas of the brain. The only way to diagnose AD positively is to perform an autopsy on the brain to identify telltale protein deposits called amyloid plaques and neurofibrillary tangles consisting of a protein called tau. Researchers believe these plaques and tangles cause the degeneration and death of brain cells that result in AD.

Cognitive, behavioral, and functional deficits are universal in patients with AD and progress as the disease advances. Early signs include forgetfulness, lack of concentration, and loss of the sense of smell. Deficits increase gradually over time, at a rate that varies from patient to patient, but the disease eventually causes severe memory loss, confusion, personality changes, impaired judgment, language defects, and behavioral disorders, among many other deficits. Sufferers lose the ability to pay attention, respond to visual cues, recognize faces (even of closest relatives), express thoughts, and act appropriately. Behavioral deficits include restlessness and extensive wandering, physical and verbal aggression, inappropriate social behaviors, and disruptive vocalizations. Functionally, tasks that usually require little thought are often performed backward, such as sitting down before getting to the chair or, when dressing, putting on outerwear before underwear. In advanced stages, the verbal repetition of overlearned or ritual material, such as prayers, songs, or social responses, occurs. The wide range of functional and behavioral deficits seen in AD may be, at least in part, caused by changes in thought processes that result in the inability to monitor and control behavior.

Searching for Causes and Cures While there are many theories about what causes AD, no specific cause has yet been identified. Similarly, while there are several treatments in the clinical trial stage, none has yet been proven to arrest, reverse, or cure the disease in humans. There is considerable optimism about a vaccine called AN-1792, an experimental immunotherapeutic agent being developed by the Elan Corporation of Ireland in collaboration with the Wyeth-Ayerst Laboratories in the United States. The development of this drug is based on the hypothesis that beta-amyloid (Aβ42), a prominent protein fragment in the amyloid plaques seen in AD, is not just a marker (identifier) of AD but is essentially the cause of the disease; this hypothesis has not yet been proven.
The purpose of AN-1792, a form of Aβ42, is to stimulate an individual’s immune system to prevent the development, and possibly to reverse the buildup, of the beta-amyloid plaques associated with AD. According to a fact sheet published by the Alzheimer’s Association, researchers hope that treatment with AN-1792 will “produce antibodies that would ‘recognize’ and attack plaques.”
Promising results of animal studies using the drug were published in Nature in July 1999. In this study, Dr. Dale Schenk, the vice president of discovery research at Elan Pharmaceuticals, and his team tested AN-1792 on transgenic (TG) mice—animals specially bred and genetically engineered to overproduce beta-amyloid plaques that are structurally and chemically similar to those found in the brains of humans with AD. According to Kenneth J. Bender, writing for the Psychiatric Times in September 2000, vaccinating these mice with AN-1792 “appeared to prevent plaque deposition in the brains of the youngest animals. The vaccine also appeared to markedly reduce the extent and progression of the plaques and associated neuropathology in older specimens.” In two subsequent studies, one at the University of Toronto and another at the University of South Florida at Tampa, behavioral tests showed that TG mice immunized with AN-1792 had better short-term memory performance than mice treated with a “similar but nonactive vaccine.” In an elaborate water maze test, the location of an exit ramp was frequently changed, so that relying on short-term memory was the only way the mice could determine the current exit location. Changing the exit location caused mice with elevated beta-amyloid levels to become confused.
However, David Westaway of the University of Toronto research team urged caution, stating that the relationship between the accumulation of plaque and memory and learning may not be straightforward. Bender, in his Psychiatric Times article, wrote, “While such evidence of beneficial effect on function as well as pathology is encouraging, Schenk cautioned against yet assuming that it is a harbinger of clinical improvement in patients with AD. ‘Of course, mice aren’t humans,’ he [Schenk] remarked to an Associated Press reporter.” Again, other experts warned that the mouse maze test did not address other key mental abilities destroyed by AD, including language and judgment.

From Animals to Humans The first clinical trial of the vaccine in humans, a phase-one study aimed at assessing the safety of the drug, was completed in July 2001. Approximately 100 patients with mild to moderate AD participated in the trial in both the United States and the United Kingdom. Dr. Ivan Lieberburg, the executive vice president and chief scientific and medical officer of the Elan Corporation, announced the trial results with enthusiasm.
“The product showed that it was safe for patients and we didn’t see any significant problems with it other than sore arms at the injection site. . . . More importantly, as well we saw that in a significant proportion of the patients they were able to demonstrate an immune response. Their antibody levels went up and that indicates that this was having an effect in these patients.” However, the scientists noted that no cognitive or memory improvements were seen in those patients.

According to an Alzheimer’s Association fact sheet, much research is yet required to determine the drug’s efficacy in AD patients. “One key question about AN-1792,” reads the fact sheet, “is whether it will actually improve mental function in humans.” Also, the role beta-amyloid plaques play in AD remains to be determined. While evidence suggests that the abnormal deposits contribute to brain cell damage, neurofibrillary tangles also appear to contribute, and may even be the culprit. Again, according to the Alzheimer’s Association fact sheet, even if AN-1792 does prevent accumulation of—or even clears—plaque from the human brain and therefore arrests brain cell damage, “we cannot predict the degree of the drug’s effectiveness, and it will not target other disease mechanisms that may be at work in Alzheimer’s disease.” A further concern regarding the vaccine’s use in humans “is the possibility that by provoking an immune reaction to one of the body’s own proteins, AN-1792 could stimulate an autoimmune reaction in which the body mobilizes a wholesale assault on its own tissues.” Yet another question remaining is how effectively the drug will stimulate antibodies in humans, as antibody production was detected only in a portion of phase-one trial participants. “AN-1792 may create a stronger immune response in mice, for whom the substance is a foreign protein, than it produces in humans,” reads the Alzheimer’s Association fact sheet. Also, while AN-1792 generates antibodies against Aβ42 in some individuals and is an important element in breaking down or slowing down the disease process, “Aβ42 is only one element in complex molecular disruptions that occur in Alzheimer’s.”

In a report on the Web site ABCNews.com entitled “Memory in Mice Improved,” the authors quote Dr. Karl Herrup, the director of the Alzheimer’s Center at Case Western Reserve School of Medicine in Cleveland, Ohio: “The Alzheimer’s model was engineered in mice, and while it bears many similarities to the human disease, there are differences as well. The biology of the human disease may or may not be blocked by plaque removal. . . . The human animal may respond differently in some if not all cases.”
The report also quotes Dr. Zaven Khachaturian, a senior science adviser to the Alzheimer’s Association in Chicago, who said, “Although the findings that vaccination as a treatment strategy eliminated or reduces the amyloid in the brain has great scientific significance, its ultimate clinical value remains to be determined. No patient goes to the doctor for the ‘amyloid in their brain,’ they go to get help for memory loss or behavioral problems.”

Mixed Feelings about Possible Outcomes The next step, a phase-two clinical trial, will assess AN-1792’s effectiveness and identify the optimal dosage in an attempt to determine whether it will improve mental function in humans. This trial is expected to begin sometime near the end of 2002 and will last approximately two years. “We’re hoping that if we see anything like what we saw in our mice experiments in people in phase two clinical study, that this would be a truly remarkable result,” said Lieberburg. However, until the human trial is completed, the only thing yet understood about the vaccine as it relates to humans is that it is “safe and well tolerated.” The question of its effect on cognitive ability, let alone on behavioral and functional ability, remains unanswered.
In a July 23, 2001, article written for CNN, Rhonda Rowland quoted Dr. William Thies, the vice president of medical and scientific affairs at the Alzheimer’s Association, who said, “While we don’t know whether the product is going to work, we’re going to find out an awful lot of valuable information no matter what the outcome of the trial is. . . . If it turns out that the vaccine clears the protein out and it still doesn’t affect the disease, then that’s a clear indication that amyloid is not the causative factor.” As Rowland then pointed out, even if the vaccine does interrupt the disease process and arrest it in its current stage, arrest does not imply a cure. “For people who have well-established disease, the vaccine can do nothing to return dead brain cells and certainly can’t return memories,” commented Thies.
Different Experiment, Similar Concerns A recent surgical study that has completed phase-one clinical trials supports Thies’s statement about dead brain cells. Under the guidance of Dr. Mark Tuszynski at the University of California, San Diego School of Medicine, researchers hope to “protect and even restore certain brain cells and alleviate some symptoms, such as short-term memory loss, for a period that could last a few years.” The procedure consists of taking a small sample of the patient’s own skin cells and inserting nerve growth factor (NGF) genes taken from the patient’s nervous system tissue. These genetically modified cells multiply in vitro (in a culture medium in a laboratory), producing large quantities of NGF. The NGF cells are then surgically implanted at the base of the frontal lobe of the patient’s brain, an area that undergoes extensive degeneration and death of cells during the course of AD.

The human trial was based on results from experiments in normal aging rats and monkeys in which 40% of the cholinergic neuron cell bodies had atrophied. In his study findings published in the September 14, 1999, edition of the Proceedings of the National Academy of Sciences, Tuszynski reported that cholinergic neuronal cells were returned to almost normal following the implantation of NGF cells. In a February 2001 report in the same journal, Tuszynski’s team also reported that axons (necessary in carrying messages from one neuron to the next) that had shriveled and even disappeared in old monkeys were actually restored to, and sometimes exceeded, normal levels following implantation.

In this study, as in the AN-1792 study, human application may have different results than in animal models. Tuszynski pointed out, “Animals do not suffer from Alzheimer’s disease.” This is an important factor to keep in mind in the AN-1792 trials, particularly in light of the fact that rats do not naturally produce Aβ42 plaques and must be genetically modified to do so. Again, as researchers have noted, these plaques are only similar to those seen in human AD patients, and the animals do not literally suffer from AD. Hopefully, however, with continued studies such as those referred to above, and perhaps with a combination of treatment techniques, the devastating effects of AD eventually can be brought under control and maybe even prevented. —MARIE L. THOMPSON
Further Reading
Alzheimer’s Association.
Alzheimer’s Association, Greater San Francisco Bay Area. “What’s New in Research: Alzheimer’s Vaccine? Facts: About AN-1792, the ‘Alzheimer Vaccine.’” October 2001.
Alzheimer’s Disease Education and Referral Center.
Bender, Kenneth J. “Progress against Alzheimer’s Disease Includes New Research on Possible Vaccine.” Psychiatric Times 17 (September 2000).
Nash, J. Madeleine. “The New Science of Alzheimer’s.” Time 17 (July 2000).
Schenk, D., et al. “Immunization with Amyloid-Beta Attenuates Alzheimer-Disease-Like Pathology in the PDAPP Mouse.” Nature 400 (1999): 173–77.
Smith, D. E., et al. “Age-Associated Neuronal Atrophy Occurs in the Primate Brain and Is Reversible by Growth Factor Gene Therapy.” Proceedings of the National Academy of Sciences 96 (1999): 10893–98.
St. George-Hyslop, Peter H. “Piecing Together Alzheimer’s.” Scientific American 283 (December 2000): 76–83.
Svitil, Kathy. “Clearing the Deadly Cobwebs.” Discover 20, no. 8 (August 1999).
Terranella, Scott, et al. “Memory in Mice Improved.” ABCNews.com, 20 December 2000.
Should human organs made available for donation be distributed on a nationwide basis to patients who are most critically in need of organs rather than favoring people in a particular region?
Viewpoint: Yes, a nationwide system—made possible through advances in the transportation and storage of organs—would be the most equitable method to distribute human organs. Viewpoint: No, a nationwide distribution system would introduce new inequities to organ donations. The current system of regional and local distribution is superior.
During the second half of the twentieth century, surgeons developed procedures for transplanting kidneys, livers, and hearts into patients whose own organs were failing. The American physician Joseph E. Murray performed the first successful human kidney transplantation in 1954. In this case, Murray was able to take a kidney from the patient’s healthy twin brother, but transplantation largely depended on the use of organs from those who had recently died. Liver transplants for patients with end-stage liver disease were performed in the 1960s, but all these early procedures ended in failure. Public excitement about organ transplants, however, was raised when South African surgeon Christiaan Barnard performed the first human heart transplant operation in December 1967. Ten years later, the heart transplant field experienced a wave of disappointment and disillusionment, primarily because the immune response invariably led to the rejection of foreign organs. Optimistic surgeons nevertheless predicted that organ transplants would one day be as commonplace as blood transfusions.

During the 1980s, drugs like cyclosporin, which suppresses the immune response, transformed organ transplants from experimental procedures into routine operations. In the 1990s increasingly complex multiple organ transplants were performed, such as kidney-pancreas, heart-lung, and kidney-liver combinations. The success of transplant operations continued to improve, as measured by the survival rates of organs and patients. Health-care prophets warned that in the not too distant future the supply of money, rather than the supply of transplantable organs, might become the rate-limiting factor. By the end of the twentieth century, despite the enormous costs involved, the demand for organs continued to exceed the supply.

As surgeons began their dramatic attempts to save lives by means of organ transplants, U.S. federal government efforts to regulate organ and tissue donation led to the passage of the Uniform Anatomical Gift Act (UAGA) of 1968. By 1972 every state had adopted the provisions of the UAGA. This legislation established the legality of donating a deceased individual’s organs and tissues for transplantation, medical research, or education. Another important goal of UAGA legislation was to protect health-care personnel from the potential liability that might arise from acquiring organs for approved purposes.
Because uncertainties about determining the moment of death were exacerbated by the need to secure functional organs, the Uniform Determination of Death Act of 1980 was important to advances in transplant surgery. This act recognized brain death, as well as cessation of the heartbeat and respiration, as death. Under the new definition, irreversible cessation of all brain function, including that of the brain stem, was established as a valid criterion for determining death.

Because only a few medical centers performed organ transplants in the 1960s and 1970s, and donor organs could only be kept functional for a very short period, the allocation of organs was generally handled on a local or regional basis. As procedures and outcomes improved, more patients became transplant candidates and the competition for donor organs increased. In 1984, just one year after the FDA approved cyclosporin, the National Organ Transplant Act was signed into law in order to establish an equitable and efficient national system to match organ donors and recipients. The act prohibited the buying and selling of organs. To protect dying patients who might become organ donors, the law prohibited the doctor who determined brain death from involvement in organ procurement. The law also initiated the formation of a veritable alphabet soup of agencies and committees charged with carrying out the nationally recognized goals of organ procurement and allocation. The act also established the Organ Procurement and Transplantation Network (OPTN) in order to provide a system for the equitable allocation of donated organs. In 1986, the contract to operate the OPTN was awarded to the United Network for Organ Sharing (UNOS), which manages organ allocation and facilitates communication and cooperation among members of the transplant community. One year later, UNOS was also given responsibility for maintaining the national Scientific Registry for Organ Transplantation, to facilitate the compilation and analysis of data about solid organ transplants (kidney, kidney-pancreas, liver, pancreas, heart, heart-lung, lung, and intestinal transplant procedures). To coordinate organ sharing throughout the United States, UNOS divided the country into 11 geographic regions and set professional standards for transplant centers, organ procurement organizations (OPOs), and tissue-typing laboratories involved in transplantation.

Policies formulated in accordance with the National Organ Transplant Act call for the equitable allocation of organs to patients who are registered on waiting lists on the basis of medical and scientific criteria, without regard to race, sex, financial status, or political influence. Efforts to establish a national distribution system have been opposed by state governments and those organizations that believe such a system will have an adverse effect on their own transplant candidates and OPOs. The ever-increasing disparity between the supply of and demand for organs continues to create tensions within the transplant community. Despite decades of appeals for organ donations, the need for organs continues to grow about twice as fast as the supply. In 1990, about 15,000 organs were transplanted, but about 22,000 people were listed as in need of an organ. Almost 23,000 organs were transplanted in 2000. By the end of 2001, there were about 79,000 people on the national transplant waiting list for a kidney, liver, heart, lung, pancreas, or intestine.
Attempts to balance the goals of achieving optimum patient outcome, equitable distribution of organs, and decreased organ wastage have generated often bitter controversies because of the scarcity of transplantable organs. Members of the transplant community are divided on the issue of establishing a nationwide list for organ distribution, but all agree that recruiting more organ donors is the key to resolving the debate about organ allocation. Alternatives to donor organs, such as xenotransplantation (using organs from pigs or other animals), artificial organs, and growing or repairing organs through the use of stem cells, are remote and still uncertain measures. Recognizing the importance of increasing organ donation, on his first day as Secretary of Health and Human Services in April 2001, Tommy G. Thompson urged all Americans to “Donate the Gift of Life.” Through a national campaign called “Workplace Partnership for Life,” Thompson urged employers, unions, and other employee organizations to join in a nationwide network to promote organ donation. The goal of Thompson’s Gift of Life Donation Initiative is to encourage Americans to donate blood, tissue, and organs. The secretary also directed the Health Resources and Services Administration to organize a national forum to study organ registries and policies in all states. —LOIS N. MAGNER
Viewpoint: Yes, a nationwide system—made possible through advances in the transportation and storage of organs—would be the most equitable method to distribute human organs.
In the black-and-white, heavily political world of the debate about organ allocation, there exists the perception that “sickest first” and “geographical boundaries” are mutually exclusive terms. From a purely medical point of view, that is not always the case. When a donor organ is removed, it is put into a cold nutrient solution and transported to the hospital that will perform the transplant.
Different organs have different “endurance limits,” the length of time they can exist without blood supply in cold temperatures before they are damaged and become unusable. There are some circumstances in which a long-distance transplant is not feasible, because transporting the organ would exceed this “endurance limit,” and any organ-allocation policy must take into account the medical limits on organ transportation. These limits, however, keep changing with the development of faster airplanes and the availability of more private charters for transport. No allocation policy should limit, in advance, the area in which an organ must remain. As the American Medical Association stated, donated organs “should be considered a national, rather than a local, or regional, resource.”

The term “sickest first” is inaccurate in the context of organ transplantation; a more accurate term would be the patient “with the greatest medical urgency.” What opponents of the “sickest first” rule justifiably claim is that the sickest people are not good transplant candidates. The bodies of these patients will most likely be unable to cope with the trauma of transplant surgery and its aftermath. A realistic “sickest first” rule actually favors patients who are in the most urgent need of transplants but are still viable candidates. An allocation policy favoring these patients should be based on medical criteria agreed upon by transplant surgeons and other experts in the field. In addition, any organ-allocation policy should apply different standards to different organs. Liver patients in complete liver failure need immediate transplants, whereas patients with what is known as end-stage kidney disease can be sustained for a time on dialysis treatments. Obviously, the definition of, and criteria for, “greatest medical urgency” in a liver patient will differ from that of a kidney patient.
KEY TERMS
ALLOGRAFT: Organ transplanted into an individual from another person who is genetically different from the recipient.
ANTIGEN: Any molecule that, when encountered by the body's immune system, will invoke an immune response.
HLA: Human leukocyte antigens. A group of six antigens that play an important role in tolerance or rejection of a transplanted organ.

The Inequity of Geographical Boundaries
A system based solely on geographical boundaries calls for organs available in one zone to be offered to patients in that zone first, regardless of medical urgency. Some zones, for a variety of reasons, have a higher organ-donation rate than others, or better transplant programs. When patients become aware of these zones, financial status comes into play. Patients who can afford to travel for medical treatment can list themselves at one—or more—transplant centers outside their area of residence. To be listed at a center, a patient must be examined in person by that center's doctors, and must then be able to travel on a moment's notice if an organ becomes available. The net result is that more affluent people, or people with better insurance coverage, can often get an organ transplant far more quickly than those who have to rely solely on their local organ-procurement system.

Allocating transplant organs on the basis of geographic boundaries can create absurd situations. An organ can be transplanted into a patient who could still be sustained by medical treatment, while a patient in the next state who could have been saved by the very same organ will die, simply because he or she lived in the wrong zip code, so to speak. In 1998, 71% of liver transplants were carried out on patients in the least urgent category, while 1,300 people died that same year waiting for a liver. Three Institute of Medicine (IOM) experts, working on their own "as an intellectual exercise," found that 298 of these 1,300 patients would have received a transplant under a broader sharing policy.

The Department of Health and Human Services (DHHS) presented data it collected on liver and heart transplants that were done under the geographic boundaries system. These results were presented at a congressional hearing on the proposed "Final Rule" that would change the organ-allocation system in the United States. The department looked at rates of transplant within a year of a patient being listed as a candidate, the one-year survival rates following transplants, and the risk of dying while on the waiting list in the various transplant programs across the country. The results were adjusted to account for differences in patients' health status (risk adjustment). The risk-adjusted rates of transplant within a year ranged from 71% of liver patients in some programs to less than 25% in others. In patients with heart disease, the range of transplants within a year was 36% to 72%. The risk of dying while on the waiting list was less than 8% in some liver-transplant programs, and more than 22% in others. The same measure in heart-transplant programs ranged from 9% to 23%. The one-year survival rates after transplants ranged from 65%
to 86% in liver-transplant programs, and 67% to 84% in heart-transplant programs. Obviously, geographical boundaries create a staggering inequality in access to transplants, and in the outcome of these procedures.

Does Sharing Beyond Geographic Boundaries Work?
Many opponents of the "sickest first" rule fear that broader sharing will lead to the closing of many small transplant centers. If the small centers close, they say, access to transplants may be restricted for those patients who live in rural or remote areas, or have limited means of travel. In fact, according to transplant experts, broader sharing will benefit the small centers, which will receive more organs for their patients. The IOM's independent review agreed that broader sharing would not have an adverse effect on small centers.
The organ at the center of most debates about allocation policy is the liver. With most other organs, severely ill patients can be assisted by methods such as dialysis and ventricular assist devices. However, there are, to date, no available options for patients whose livers stop working. As a result, those who discuss organ-allocation policies use the liver as a key component in the debate. In computer modeling done by the United Network for Organ Sharing, broader sharing of livers reduced the number of deaths in patients with liver disease. This finding was confirmed by the IOM's independent analysis of liver-transplant data across the country. Based on this analysis, the IOM recommended changing the allocation policy for livers and establishing broader sharing areas. The Campaign for Transplant Fairness, a lobbying group in Washington, DC, claims that even this recommendation does not go far enough, since it maintains regional boundaries, albeit larger ones. In a November 2000 interview, Dick Irving, then president of the National Transplant Action Committee, said: "There is plenty of evidence that the current system of organ allocation is not fair and that patients are needlessly dying because they are being overlooked in favor of healthier patients."
Kidney Allograft Survival Rates
But could the shipment of an organ across wider geographic areas damage the organ to the point where it would become useless to the recipient? Although every organ has a different survival time outside the body, a recent study published in the New England Journal of Medicine on October 25, 2001, is worth mentioning.
The study examined the survival rates of kidney allografts, organs transplanted from another person who is genetically different from the recipient. Survival rates were compared between local and distant transplants. The study looked at pairs of kidneys from the same donors, where one kidney
was transplanted locally and the other was shipped to a different area of the country. The results (adjusted for factors such as age, race, etc.) showed that locally transplanted kidneys fared better (fewer were rejected) in the first year in individuals who did not have a complete antigen match with the donor. There was no difference in the survival rates of locally transplanted and shipped kidney allografts after the first year. We must bear in mind that, since 1990, priority in kidney transplants (from cadaveric donors, which the study looked at) is given to individuals who have a complete antigen match with the donor. Antigens are responsible for invoking the body's immune responses, and a certain group of antigens, called HLA, plays an important role in the rejection of transplanted organs. When the study looked at the rate of allograft survival in the first year in HLA-matched individuals, there was no difference between locally transplanted and shipped kidneys. In addition, the study found that shipped kidneys transplanted into HLA-matched individuals fared better than kidneys transplanted locally into mismatched individuals.

This study illustrates a few important points. The first (and obvious) point is that the success of an organ transplant depends on many factors. In this study, a complete antigen match (the preferred situation) offsets the adverse effects of long travel time for a shipped kidney. The study does not prove that geographic boundaries are better for transplant patients. What the study does prove is that if organ-allocation policy is amended based on sound medical criteria—for instance, kidneys will be shipped only to completely matched individuals, with few exceptions—shipped kidneys can be as useful to transplant candidates as locally available ones, and sometimes even more useful, depending on the degree of antigen matching.

The other point this study illustrates is the inflammatory nature of the organ-allocation debate. When the study was published, a headline in the mainstream media proclaimed, "Shipping Kidneys Elsewhere Raises Failure Risk, Study Says." Actually, the study stated that there was no significant association between one-year allograft survival rates and shipping status in completely matched individuals. Yet according to the Associated Press story, the conclusions of the study support the argument that organs should be kept locally.

Public Perception
Proponents of allocating organs based on geographical boundaries say that broader sharing would lead to decreased organ donation, as people would want to benefit their community first. This claim has been disproved in several surveys. A survey reported in Transplantation, the official journal of the Transplantation Society, found that "responders were willing to allocate a portion of organs to
older and sicker patients even when they felt that, overall, the allocation system should direct organs to patients with the greatest potential to benefit." In 1998, Peter A. Ubel of the Philadelphia Veterans Affairs Medical Center and Dr. Arthur Caplan of the University of Pennsylvania Center for Bioethics cited studies finding that the public does not see maximizing outcome as the sole goal of transplantation, but rather prefers "to distribute resources to severely ill patients, even when they benefit less than others." Moreover, the IOM report found that in areas of broader sharing, donations actually increase.

The problem of organ allocation is so hugely controversial because we are faced with a very small supply and a very great demand. Clearly the biggest help to patients needing organ transplants would be to increase the number of donated organs. States with aggressive "Donate for Life" campaigns have seen the numbers of organ donations increase dramatically. However, a change in federal donor policy might be called for. In Belgium, citizens have to "opt out" of being organ donors, as opposed to the "opt-in" requirement in the United States. In the first ten years of the opt-out program in Belgium, only 2% of the population chose to opt out, and donation increased by 183%. A survey done in the United States in 1993 shows that 75% of the eligible population would want to donate organs. Many, however, never discuss their wishes with their families, and fail to otherwise indicate that they wish to become donors. If we simply asked the remaining 25% to opt out, we would automatically eliminate these missed opportunities and greatly increase the supply of badly needed organs. We may not be able to resolve the debate about organ-allocation policy, but we can greatly alleviate the problem that led to the debate in the first place.

Interestingly enough, the "sickest first" rule has always been applied in the United States under the regional boundaries system. Under that system, the most medically urgent patients in the region where a donor organ was available were offered the organs first. If the most urgent patient was ruled out as a candidate, the organ was offered to less urgent patients in the same region, even if more urgent patients were dying elsewhere in the country. One has to wonder whether the current debate over organ-allocation policy is motivated more by territorial or financial concerns than by altruism. —ADI R. FERRARA
Viewpoint: No, a nationwide distribution system would introduce new inequities to organ donations. The current system of regional and local distribution is superior.

Throughout most of the twentieth century, advances in medicine—in vitro fertilization; frozen embryos; surrogate childbearing, whereby a fertilized human ovum is implanted in the womb of a third party; human cloning; human gene therapy; and genetic engineering—outstripped the ability of laws to meet the challenges created by these advances. However, in one area, lawmakers and the government seemed to have moved swiftly. With the enactment in 1984 of the National Organ Transplant Act, the United States moved to introduce order and fairness into a patchwork system of transplant centers, organ-procurement organizations, and associated transplant laboratories clustered in some major cities and scattered around the country. Above all, the new law sought to ensure that citizens would be able to receive transplants of organs based on medical need rather than on wealth, status, and other accidents of birth or fortune. The law sought to encourage donation of organs by individuals and their families when a tragic sudden and unforeseen death occurred. Individuals were encouraged to clearly indicate their desire to will organs to recipients with medical need; families were urged to cooperate when the possibility of recovery from an accident was no longer even a slim hope.

These efforts have been successful in many ways. Medicine and surgery have made enormous strides in perfecting transplant techniques and in improving the continuing care of individuals who receive the organs. Campaigns to promote and encourage greater involvement of the general public in agreeing to donate organs upon death have been less successful. By early January 2002, nearly 80,000 individuals were waiting for an organ to become available. In 2000, the number of donors reached only 11,684, and some 13 people die each day while awaiting an organ transplant. In the face of this disappointment, the public, the government, the U.S. Congress, the organizations that provide the organs and carry out the transplants, experts in the field and, above all, the individuals (and their families) whose lives could be saved by a transplant, demand more.

Legal and regulatory reforms over the past two decades have been launched to remedy factors in the Organ Procurement and Transplantation Network that interfere with the focus of the original legislation—saving and improving lives. However, legal and regulatory reforms alone may not be the best way to make the system work. Neither may nationwide distribution of organs based on medical need be the answer. Many of the impediments to making organs available to anyone with an urgent need wherever
they are in the United States have been overcome. There are systems to register patients and to contact the transplant centers when organs do become available. Improvements in the testing of organs and recipients to determine suitability of the organs to be transplanted have reduced the chance of mismatches or unsuitable transplants. In the past, the rejection of organs by the new recipients was a problem in all but the most genetically similar individuals, such as identical twins. Now, powerful new immunosuppressive drugs reduce the likelihood of rejection. Other advances in medical technology permit organs to be preserved without the deterioration that might jeopardize the success of transplantation, prolonging the time available to transport the organs from place to place. Laws and regulations demand improvements in the standards of medical criteria for placing patients on transplant lists and for determining medical need, so that subjective or nonmedical factors are eliminated from decisions when registering patients for transplants.
Ideally, patients with the most critical need, those facing imminent death, would be the top candidates for receiving organs as soon as organs become available, as long as there was an appropriate match. However, a number of voices have advocated that preference be given to specific regions in allocating scarce organs, particularly livers from cadavers, and allocations on the basis of medical need have not always prevailed. For example, the 1996 Code of Medical Ethics of the American Medical Association states: "organs should be considered a national, rather than a local or regional, resource. Geographical priorities in the allocation of organs should be prohibited except when transportation of organs would threaten their suitability for transplantation." This same position is embodied and endorsed in final regulations issued by the U.S. Department of Health and Human Services (DHHS) that govern the Organ Procurement and Transplantation Network (OPTN). The DHHS has been given authority by the U.S. Congress to exercise oversight of the OPTN. As amended, the Organ Transplant Act makes clear that its principal aim is to ensure a nationwide system that fairly distributes organs for transplant based on need and not location. However, local priorities often undermine the intent of the law. In 1998, Dr. Arthur Caplan, a prominent philosopher in the field of bioethics (the area of ethics that helps unravel complex life-and-death issues emerging from scientific advances), spoke out in support of the DHHS position in the New England Journal of Medicine: "The federal government's tough stance against geographic favoritism should be embraced. It is inherently unfair to give priority to some patients merely because of where they live. . . . The geographic
basis of the allocation system . . . is ethically indefensible." Underlying this philosophical view is the notion that the best way to allocate a scarce resource is to save a life in immediate danger.

Why, then, in the face of such a strong ethical imperative, have a number of states enacted laws to prohibit the transport of organs out of the state? In fact, the states have been responding to another perceived unfairness. Waiting lists for organs in some areas of the United States show clear disparities; in some regions, patients wait a median of 30 days for a liver, while in others the median is over 200 days. Many patients in areas with long waiting lists will die before they receive a liver. Faced with the prospect of its own citizens and residents being deprived of life-saving intervention, is it any wonder that state lawmakers have responded by putting the interests of their own residents first?

Organs are in short supply—each year need exceeds supply by a factor of 5 to 10. Although it seems fair to treat the sickest patient first, that reasoning assumes that patients who are not as ill will be able to receive treatment later. However, this is not the reality in the case of liver transplants. When one patient receives a liver, that means another may never receive one. Patients could wait years, becoming increasingly ill and even risking death until they were at a critical point, and there is no guarantee that a suitable liver would become available even then.

Another way of looking at the ethics of allocating scarce resources is to consider how to bring about the best overall result for everyone. From this perspective, a number of factors come into play, as patients who are close to death do not respond well to new organs. Frequently the donated organ fails, and the patient requires a fresh transplant to survive; but even when the organ functions, the patient's general health is so poor that he or she may survive for only a year or less. These sick patients may never regain a satisfactory quality of life; they remain hospitalized and require expensive care. Consequently, in 75% of the cases the result of the transplant is only a postponement of death and the loss of an organ for someone else, and even, at times, the need for another organ transplant to try to save the life of the individual who received the organ. From this perspective, does it make sense to risk "wasting" organs on the most ill when a greater overall benefit can result from a successful transplant?

Under mandatory liver sharing, when the geographic region is expanded, the waiting time for the gravely ill is reduced, although even then some of the gravely ill do not receive a liver. But the overall net effect is even more discouraging. In one study, the overall wait for the gravely ill was cut from five to three days. But the cost was significant. Twenty-one of the gravely ill did not
receive a transplant even with a greater geographic pool. Of those further down the waiting list, 132 died. Thus, for a shortened waiting time, the tradeoff was increased mortality for a larger number. The remedy of mandatory wider geographic or nationwide sharing saves only a small number of lives, mainly in the short term, and results in prolonged morbidity and, ultimately, death for a greater number. The shortage of organs is so great that any improvement that might be achieved through nationwide sharing is cancelled out. The number of transplants in some areas may increase, but there is no increase in the number nationwide, nor in nationwide survival rates.

Some types of liver failure can only become worse due to the longer waiting times that result from mandatory nationwide sharing. For patients with liver cancer, the average waiting time is only about a year, but during the wait the cancer grows, and perhaps spreads, rendering the tumors incurable. However, evidence shows that the five-year tumor-free survival rate for patients who receive a transplant at an early stage of liver cancer is remarkably high—90 to 100%. Even in more advanced cases, this is a better survival rate than that of the sickest patients who received liver transplants near the point of death.

Economic considerations also support the case for earlier transplant, even if nationwide sharing might help save the lives of the gravely ill. Earlier transplants can eliminate the expenses associated with making organs available nationwide to the most seriously ill: lengthy hospitalization and the recovery and transport of organs to and from distant locations. Furthermore, early transplants mean that patients are able to return to normal and productive lives. These individuals contribute to the economy, pursue career goals, and are there to provide loving support to their families. Patients who receive transplants near death seldom return to normal life, and caring for them drains the financial and emotional resources of their families.
It is not altogether comfortable to place a dollar value on human life, and the economic issues should be subordinated to the goal of saving lives and restoring health. However, economic considerations are part of the equation of making the best use of scarce resources—organs to be transplanted. Even some who have argued against the apparent benefits of geographic organ allocation have acknowledged that a decreased emphasis should be placed on giving organs to the most severely ill patients.

Under the present system, although 98 to 99% of patients are listed in only one place, the alert or more affluent can arrange to place themselves on more than one list to increase their chances for a transplant. A system of mandatory nationwide sharing could eliminate this source of unfairness. However, the problems created in centralizing to a "national list" would prove logistically and administratively out of proportion to the problem of multiple listing.

The Organ Procurement and Transplantation Network is a patchwork system. Although they aspire to do an outstanding job and gain satisfaction from success, organizations in the network do not always have clear or rational lines of jurisdiction or responsibility, and there are rivalries and areas of personal and professional involvement. But that is not to belittle the energy that goes into local operations. These organizations take on the task of encouraging and promoting organ donations and conduct delicate negotiations to obtain family agreement under the most challenging of circumstances. Many represent families of persons who need a transplant or have received one. Transplant physicians and surgeons in the network are dedicated to saving and improving lives. Therefore, the whole system is not a feat of "engineering," but of intensely involved stakeholders. One could not expect many of the "local" energies and commitments that make the operation successful to be as effective if the organs donated and procured were "placed" in some other location.

The arguments for a nationwide listing often cite surveys of potential donors and families of donors, in which they express no desire to restrict the use of the organs to a specific location or region when urgent need exists elsewhere, and one would expect this altruistic view. But we should be skeptical of surveys of donors or potential donors, indeed of surveys in general. What attitudes do the professionals and volunteers who are involved in securing agreement to donation hold? It is their energies that stand to be undercut by a system that diminishes their immediate investment in the outcome. One hesitates to speculate that the enthusiasm and commitment of these organizations might be impaired by a nationwide mandatory listing and availability. Likely it would not. But the many voices who have expressed concern that limiting or diminishing local energies can affect the overall system need to be acknowledged.

In the end, the arguments are a matter of consensus and sound negotiation. What is working well does not have to be totally revised. However, a mandatory nationwide sharing of organs will not address the scarcity. The real issue is not local or geographic favoritism; the use of that description has only served to obscure the real issue. We need more societal investment to encourage organ donation, and funding to ensure greater access and availability to underserved groups and areas. Simply redrawing maps is not the answer. —CHARLES R. MACKAY
Further Reading
Bonfield, Tim N. "The Wait for Organs To Change." Cincinnati Enquirer (November 24, 2000).
Committee on Organ Procurement and Transplantation Policy, Institute of Medicine. Assessing Current Policies and the Potential Impact of the DHHS Final Rule. Washington, DC: National Academy Press, 1999.
Hostetler, A. J. "Organ-Sharing Changes Urged; Study Supports Move to Need-Based System." Richmond (Virginia) Times-Dispatch (August 2, 2000).
Kahn, Jeffrey P. "States' Rights or Patients' Rights? The New Politics of Organ Allocation." CNN.
"Reorganizing the System." The NewsHour with Jim Lehrer.
"Background and Facts/Liver Allocation and Transplant System." University of Pittsburgh Medical Center.
Ubel, Peter A., and Arthur L. Caplan. "Geographic Favoritism in Liver Transplantation: Unfortunate or Unfair?" New England Journal of Medicine 339, no. 18 (1998): 1322–25.
At this stage of our knowledge, are claims that therapeutic cloning could be the cure for diseases such as diabetes and Parkinson’s premature and misleading?
Viewpoint: Yes, therapeutic cloning is so fraught with controversy, and the technique is so far from perfect, that it may be decades before it is put into practical use.

Viewpoint: No, recent scientific advances in the area of therapeutic cloning indicate that cures for diseases such as diabetes and Parkinson's are possible in the not-too-distant future.
In July 2001, after a heated debate about human cloning, the U.S. House of Representatives voted to institute a total ban on all forms of human cloning. The House bill classified human cloning as a crime punishable by fines and imprisonment. The House rejected competing measures that would have banned cloning for reproductive purposes, but would have allowed therapeutic cloning. President George W. Bush immediately announced his support for the ban on human cloning and research involving human embryos. Opponents of all forms of experimentation on human embryos believe that therapeutic cloning eventually would lead to reproductive cloning. After the House vote, many scientists predicted that a ban on therapeutic cloning and stem cell research in the United States would lead pharmaceutical and biotechnology companies to move their laboratories to other countries. The purpose of therapeutic cloning, also known as somatic cell nuclear transfer, is to generate embryonic stem cells that can be used in the treatment of human disease and the repair of damaged organs. Embryonic stem cells were first isolated from mouse embryos in the 1980s. Human embryonic stem cells were not isolated until 1998. Although therapeutic cloning is conceptually distinct from reproductive cloning, both techniques have generated major scientific, technical, political, and ethical disputes. Much of the controversy associated with therapeutic cloning is the result of confusion between reproductive cloning, in which the goal is to create a baby, and therapeutic cloning. Advocates of therapeutic cloning claim it is essential to the development of regenerative medicine, i.e., repairing the body by using immunologically compatible stem cells and signaling proteins. Biopharmaceutical companies believe therapeutic cloning and regenerative medicine could transform the practice of medicine within a decade. Even in the early phases of development, cloned human cells could be used in drug discovery and to screen out potentially dangerous drugs and toxic metabolic drug products. In order to generate embryonic stem cells, a cell nucleus taken from the patient would be inserted into an enucleated donated human egg cell. Factors in the cytoplasm of the egg cell would reprogram the patient’s cell nucleus so that it would behave as if it were the nucleus of a fertilized egg. However, because such cells are the result of nuclear transplantation rather than the
product of the union of egg and sperm nuclei, some scientists suggest that they should be called activated cells rather than embryos. After about five days of development the egg cell would become a blastocyst, a hollow ball made up of some 250 cells. The inner cell mass of the blastocyst contains the embryonic stem cells. These so-called master cells can produce any of the approximately 260 specialized cell types found in the human body. That is, the original egg cell is totipotent (capable of forming a complete individual), but embryonic stem cells are pluripotent (able to form many specialized cells, but unable to form a new individual). Stem cells would be harvested, cultured, and transformed into specialized cells and tissues for use in the treatment of diseases such as diabetes, Parkinson's, and Alzheimer's, and in the repair of damaged organs such as the heart, pancreas, and spinal cord. Replacement cells and tissues generated by means of therapeutic cloning would match the patient's genotype. Therefore, the patient's immune system would not reject the new cells. In addition to producing possible medical breakthroughs, therapeutic cloning research could provide fundamental insights into human development. Indeed, therapeutic cloning might disclose mechanisms for reprogramming adult cells, thus eventually obviating the need for embryonic stem cells.

Stem cells also are found in differentiated tissues, but their ability to form different kinds of cells and tissues appears to be rather limited. Nevertheless, research carried out in 2000 suggested that adult-derived stem cells had previously unexpected possibilities. For example, neural stem cells differentiated into skeletal muscle cells and skeletal muscle stem cells differentiated into blood cells. The ethical and moral implications of these reports overshadowed their scientific and medical importance. Stem cell researchers warned that although adult stem cells might have interesting properties, they are far more limited in their ability to differentiate into various tissue types than embryonic stem cells. Opponents of embryonic research have argued that the potential uses of adult stem cells mean that banning therapeutic cloning would not inhibit the development of medicine. Researchers warn, however, that adult stem cells are very rare, difficult to isolate and purify, and may not exist for all tissues.

The House vote to ban all forms of human cloning demonstrated how controversial issues surrounding human cloning, stem cell research, and the abortion debate have become confounded and inextricably linked. Although many Americans believe that stem cell research and therapeutic cloning are morally justifiable because they offer the promise of curing disease and alleviating human suffering, others are unequivocally opposed to all forms of experimentation involving human embryos. In contrast, after assessing the risks and benefits of new approaches to research using human embryos, the United Kingdom concluded that the potential benefits of stem cells derived from early embryos justified scientific research. In January 2001 the United Kingdom approved regulations that allowed the use of therapeutic cloning in order to promote embryonic stem cell research. A High Court ruling in November 2001 on a legal loophole in the January law pushed the government to explicitly ban reproductive cloning in the United Kingdom.
In June 2001, under the sponsorship of the National Research Council, the National Academy of Sciences established a panel on the scientific and medical aspects of human cloning. The panel’s final report, released in January 2002, recommended that human reproductive cloning be banned, but endorsed therapeutic cloning for generating stem cells “because of its considerable potential for developing new medical therapies to treat life-threatening disease and advancing our fundamental biomedical knowledge.” However, a White House spokesman reiterated President Bush’s opposition to all forms of human cloning. —LOIS N. MAGNER
Viewpoint: Yes, therapeutic cloning is so fraught with controversy, and the technique is so far from perfect, that it may be decades before it is put into practical use.

Inside a forming embryo, no bigger than the period at the end of this sentence, lie hundreds of tiny stem cells. Initially these cells are undifferentiated, meaning that their fate is undecided. The power of a stem cell lies in its pluripotency: its potential to develop into every tissue, muscle, and organ in the human body. For decades, scientists have been attempting to discover these fledgling
cells and then to harness their power. If scientists could direct stem cells to develop into specific tissues or organs in the lab, they could replace cells damaged by disease or injury. Imagine the potential of restoring motility to someone who has been wheelchair-bound by a spinal injury, returning memory to an Alzheimer’s patient, or replacing skin that has been horribly burned. Now imagine having a virtually limitless supply of stem cells, created in the laboratory by cloning an embryo out of a cell from a patient’s own body and a donated egg. The resulting cells could then be coaxed into forming whatever tissue was needed, and there would be no risk of rejection because the cells and tissue would be made up of virtually the same genetic material as the patient.
KEY TERMS
DIFFERENTIATION: The process by which an unspecialized cell becomes specialized to perform a particular function.
EMBRYONIC STEM CELLS: Undifferentiated cells that are unlike any specific adult cell in that they have the ability to differentiate into any of the body's 260 different types of cells (e.g., bone, muscle, liver, and blood cells).
OOCYTE: A cell from which an egg or ovum develops by meiosis; a female gametocyte.
PLURIPOTENT: The term used to describe cells with the potential to develop into all of the cell types of an organism. In humans, pluripotent cells are able to develop into any and all of the body's 260 different types of cells.
SOMATIC CELL: Any cell of a mammalian body other than egg or sperm cells.
SOMATIC CELL NUCLEAR TRANSFER: The transfer of a cell nucleus from a somatic cell into an oocyte (or egg cell) from which the nucleus has been removed.
STEM CELL: A cell taken from an embryo, fetus, or adult that can divide for indefinite periods in culture and create the specialized cells that make up various organs and tissues throughout the body.
TELOMERES: Strands of DNA (deoxyribonucleic acid) that tie up the ends of chromosomes in a cell.
THERAPEUTIC CLONING: The use of somatic cell nuclear transfer for the therapeutic purpose of providing cells, tissues, and organs for patients requiring replacement or supplementation of diseased or damaged tissue, rather than for reproduction purposes.

Stem cell research is highly controversial in the United States. (Photograph by Kamenko Pajic. AP/Wide World Photos. Reproduced by permission.)

What sounds like a miracle cure for everything from Alzheimer's to diabetes to Parkinson's disease is undoubtedly promising, but therapeutic cloning is so fraught with controversy, and the technique is so far from perfected, that it may be decades before it is put into practical use. Standing in the way of therapeutic cloning research is a trifold hurdle, one built of ethical, political, and scientific obstacles. To harvest an embryo's stem cells, scientists must destroy it. Does that amount to "playing God" by, in effect, killing a potential human being? Will governments allow scientists to make such life-and-death decisions? Most countries already have said a vehement "no" to embryonic cloning. If governments withhold financial backing and restrict experimentation, will research organizations have the funding and the freedom to continue embryonic stem cell research? Even if they are permitted to pursue this line of research, can scientists harness the technology to put stem cell–based treatments into practical use in the foreseeable future?

Moral Debate
Where does life begin? This fundamental question is at the root of the debate over embryonic stem cell research. If you believe that life begins at conception, as many people do, then creating an embryo solely for the purpose of harvesting its stem cells, even if the embryo is only a few days old and no bigger than a speck, amounts to murder.

It is important to differentiate what we traditionally define as conception from cloning, however. A human being is created through the sexual conjoining of sperm from the father and egg from the mother. The resulting child inherits genetic material from both parents. In the case of cloning, scientists use a process called somatic cell nuclear transfer, in which the nucleus of the woman's egg is removed and replaced with the nucleus of a cell from another person. The resulting embryo carries the genetic material of the cell donor. Regardless of the genetic distinctions between cloned and conceived embryos, the idea of tampering with human life in any way has been met with vehement debate in political and religious communities. At a U.S. House of Representatives subcommittee meeting in February 1998, religious groups and medical ethicists were furious even with the idea of cloning a human embryo. "This human being, who is a single-cell human embryo, or zygote, is not a 'potential' or 'possible' human being, but is an already existing human being—with the 'potential' or 'possibility' to simply grow and develop bigger and bigger," said Dianne N. Irving, a biochemist and professor of medical ethics at the Dominican House of Studies, a Roman Catholic seminary in Washington, D.C.

Take the issue of embryonic cloning one step further, and it comes under even greater fire. If scientists can create an embryo in the lab, wouldn't the next step be to implant that
embryo into a surrogate mother’s womb and allow it to develop into a baby? The idea of cloning human beings calls up frightening scenarios of designer babies, genetically engineered with the agility of Michael Jordan and the intellect of Stephen Hawking, or of parents trying to bring back a child who has died. An even more frightening, but far-flung, scenario is one in which scientists would genetically alter a forming embryo to create a brainless body that could be harvested for its organs. In 1997, news that a group of Scottish researchers had successfully cloned the sheep Dolly from the cells of a six-year-old ewe incited fears around the world that human cloning would not be far behind. Those fears reemerged in early 2001, when Panayiotis Zavos, a former University of Kentucky professor, and Dr. Severino Antinori, an Italian fertility specialist, announced their intention to clone the first human being. But other scientists in the field were quick to point out the improbability that such attempts would be successful. Cloning Dolly took 277 attempts. Research with sheep, goats, mice, and other animals has shown that about 90% of cloned embryos die within the first trimester. Animals that make it to full term are often born abnormally large or with ill-functioning organs. Believing such research to be unethical and irresponsible, most scientists adhere to a self-imposed moratorium on cloning humans, and most governments have banned such experimentation.
Scientists could avoid the moral issue of creating and destroying embryos by making only the cells they need through cloning. A good idea, but one whose time has not yet come. Currently, there is no way to manipulate a cloned cell without creating an embryo.

Freedom to Clone—Will Governments Fund or Forbid?
The ethical debate alone may bring research into therapeutic cloning to a halt. But when bioethical concerns enter the public arena, the political fallout may sound the death knell for cloning research, as governments restrict the freedom and funds of research institutions.
In July 2001, the U.S. House of Representatives passed the Human Cloning Prohibition Act of 2001, which condemned all forms of human cloning, even therapeutic, as "displaying a profound disrespect for life," setting penalties at more than $1 million and 10 years of imprisonment. President George W. Bush said his administration was unequivocally "opposed to the cloning of human beings either for reproduction or research," and proved it by prohibiting the use of public funding for embryonic research, with the exception of 64 existing stem cell lines pulled from embryos that already had been destroyed. Other governments also have passed cloning research restrictions. In January 2001, the United Kingdom became the only country to legalize the creation of cloned human embryos for therapeutic research, but
the practice is strictly regulated. Before gaining approval for their research, U.K. scientists must prove that no alternatives to embryonic research will achieve the same results.
A National Academy of Sciences panel on human cloning, January 2002. From left: Maxine Singer of the Carnegie Institution, Irving Weissman of Stanford University, and Mark Siegler of the University of Chicago. (Photograph by Doug Mills. AP/Wide World Photos. Reproduced by permission.)

A Technology in Its Infancy
Cloning has, in effect, been in use for centuries with plants, yet human embryonic stem cell research is still in its infancy. In 1981 researchers learned how to grow mouse embryonic stem cells in the laboratory, but it was not until 1998 that scientists at the University of Wisconsin-Madison were able to isolate cells from an early embryo and develop the first human stem cell lines. Since most of the research thus far has been limited to animals (usually mice), any existing knowledge of how stem cells replicate must still be translated into human terms. Human and mouse cells do not replicate the same way under laboratory conditions, and researchers do not know whether cells will behave the same way in the human body as they do in a mouse.

Scientists also are limited in their ability to direct a stem cell to differentiate into specific tissue or cell types, and to control that differentiation once the cells are transplanted into a patient. For example, in the case of diabetes, scientists must first create insulin-producing cells, then regulate how those cells produce insulin once they are in the body. Before gaining control over cell differentiation, scientists need to learn what triggers a cell to transform into, say, brain tissue over liver tissue. They also need to understand how that differentiation is controlled by environmental factors, and how best to replicate those factors in a laboratory. Before implanting the cells into a human patient, they also must prove that the cells can serve their intended function, by implanting them into an animal or other surrogate model.
The ability to steer differentiation becomes even more intricate when attempting to trigger cells to develop into human organs. Cloning a heart or a kidney from cultured cells could save some, if not all, of the estimated 4,000 people who die in the United States each year while awaiting an organ transplant. The complexity of these organs clearly necessitates years of further research before accurate replication can occur. Once scientists are able to create the right type of stem cells and to direct them in completing a specified function, they still must address safety issues before putting the cells to use. Every step of the process gives rise to potential safety concerns—from the health of the egg donors to the way in which cells are manipulated in the lab. Embryonic stem cells are created using a patient’s own cell and a donor egg. That creates a problem in and of itself, as human eggs are not easy to come by and require willing female donors. Once donors are located, they must be carefully screened for hereditary diseases. Although the nucleus is removed from the donor egg, the egg still retains the mother’s mitochondrial DNA (deoxyribonucleic acid). That genetic material can carry with it a hereditary defect—for example, if the mother had a predisposition to heart disease, she could theoretically pass that faulty gene to the patient receiving her egg.
Another source of concern is the way cloned cells and tissues will behave once they are implanted. Will they age too quickly if they are drawn from an older person's cells? Research already has proven that in clones created from an older person, the telomeres (DNA-containing ends of chromosomes) are shorter, reflecting the aging process. Also, will the cloned cells develop normally or will they be subject to malformations? If cells are implanted before becoming fully differentiated, they have the potential to give rise to teratomas (cancerous tumors), which could threaten the life they were designed to save. Scientists still do not know at which stage in the differentiation process the tumor risk declines.

Scientists also must improve the process by which they differentiate cells in culture. First, all procedures must be standardized, because cells differentiated by a variety of means might grow at varying rates or not be equally effective. Second, researchers must gain more control over the differentiation process and find an alternate means of keeping cells from differentiating until researchers are ready to steer them in the proper direction. Currently, scientists use mouse feeder cells to stop cells from differentiating. The feeder cells contain necessary materials to maintain the pluripotent capacity of undifferentiated cells. But mouse cells, as with any animal
cells, can carry animal-borne viruses to a human stem cell recipient.

Promising Therapy Still Years Down the Road
When talking about therapeutic cloning, as with any new technology, it is important to maintain an historical perspective. Most scientific discoveries that seem to have occurred overnight actually were years in the making.
Even as scientists move forward in their understanding of and ability to master the technique of therapeutic cloning, the ethical and political debates will rage all the more fervently. Religious groups and many among the public and scientific communities stand firm in their belief that life begins at conception, and that to create life only to destroy it is morally wrong. Governments, mindful of the bioethical issues at stake, are unlikely to bend on previously enacted anticloning legislation. Stem cells hold enormous potential. One day, they may be used to repair or replace cells and tissue lost from diseases like heart disease, Parkinson’s, Alzheimer’s, and cancer. It is more likely, at least in the near future, that stem cell research will be directed down less controversial avenues. One of the most promising alternate therapies involves adult stem cells, trained to act like embryonic stem cells without the need to create a new embryo. While not pluripotent like embryonic cells, stem cells culled from adult tissue or blood may be more malleable than scientists originally believed. In October 2001 researchers at the H. Lee Moffitt Cancer Center and the University of South Florida identified a stem cell gene shared by both embryonic and adult stem cells, indicating a similarity between the two types of cells. In laboratory experiments, scientists have been able to coax adult mouse bone-marrow stem cells into becoming skeletal muscle and brain cells, and to grow blood and brain cells from liver cells. If scientists can reprogram human adult stem cells to act like embryonic stem cells, perfecting embryonic cloning and stem cell technology and overcoming the accompanying ethical and political hurdles may never be necessary. —STEPHANIE WATSON
Viewpoint: No, recent scientific advances in the area of therapeutic cloning indicate that cures for diseases such as diabetes and Parkinson’s are possible in the not-too-distant future.
Because of their potential to develop into many—if not all—cell types of the human body, embryonic stem cells have become the focus of much medical and research attention. The latest research has extended the study of embryonic stem cells to another level: therapeutic cloning, in which a patient's own cells would be transformed into a living embryo whose stem cells could be used to create tissue to treat such diseases as diabetes and Parkinson's. Because the therapeutic cloning procedure would involve the cells of a given patient, stem cells derived from the embryo would be genetically identical to the patient's cells, and therefore would enable replacement or supplementation of the patient's diseased or damaged tissue without the risk of immune incompatibility and the lifelong treatment with immunosuppressive drugs and/or immunomodulatory protocols that such incompatibility requires.
Severino Antinori (left) and Panayiotis Zavos (right). (Photograph by Joe Marquette. AP/Wide World Photos. Reproduced by permission.)

When researchers first considered the prospect of using human stem cells to cure disease in 1975, they were deterred by the ordeal of isolating the cells from the embryo. For a number of years thereafter, researchers' attempts to isolate embryonic cells from the model of choice—a mouse model—proved frustrating, as did attempts with other kinds of animals. Then, in the late 1990s and the early years of the new millennium, a number of developments involving stem cell research offered new information that set the stage for therapeutic cloning as a principal means for treating human disease.

Three research developments that have triggered a wave of excitement in this area are the derivation of stem cells from human embryos, the cloning of animals via somatic cell nuclear transfer, and evidence suggesting that stem cells from human embryos may have the potential to cure disease. In the minds of supporters, these three developments—when taken together—challenge the notion that claims of therapeutic cloning as a cure for diseases such as diabetes and Parkinson's are premature and misleading.

Development I: Derivation of Stem Cells from Human Embryos
Researchers had, for several years, been isolating embryonic stem cells from mice, hamsters, and other animals when in the late 1990s two teams of researchers announced that they had successfully produced human stem cells in their laboratories. In 1997 a research team led by John Gearhart, professor of gynecology and obstetrics at Johns Hopkins University, isolated stem cells from aborted fetal material and in a petri dish was able to differentiate the cells into several kinds of tissue. In late 1998 a research team led by James Thomson, a University of Wisconsin-Madison developmental biologist, was able to derive stem cells from surplus human embryos obtained from in vitro fertilization clinics and to produce a viable stem cell line—in other words, to keep the cells in their undifferentiated state in culture. This achievement by Thomson and his team was singular,
because in nature embryonic stem cells exist in their undifferentiated state for a very short time before developing into other cells. Embryonic stem cells, taken from blastocysts (early stage embryos consisting of fewer than 100 cells), are useful entities because they are pluripotent; that is, they have within them the capacity to grow and specialize into any and all of the 260 cell types of the human body, such as cardiac and skeletal muscle, blood vessels, hematopoietic cells, insulin-secreting cells, and various neural cells. Stem cells, unlike somatic cells such as liver cells, brain cells, and skin cells that have taken on a specific function, are "blank" cells that have not gone through the differentiation process.
At this stage of their knowledge, many researchers believe that embryos undergoing early stage cell division (usually between the fifth and seventh day of development) are the best and only source of pluripotent stem cells. It is true that stem cells are also present in children and adults, but these stem cells (which are referred to as adult stem cells) are multipotent—that is, they are able to form only a limited number of other cell types, in contrast to embryonic stem cells' potential to grow and specialize into more than 200 separate and distinct cell types. In their May 2001 report in the journal Cell, the pathologist Neil Theise of New York University and the stem cell biologist Diane Krause of Yale University and their colleagues confirmed the multipotency of adult stem cells when they claimed that an adult stem cell from the bone marrow of mice had the capacity to form only seven tissues—blood, lung, liver, stomach, esophagus, intestines, and skin. Besides being able to form only a limited number of other cell types, adult stem cells are not very plentiful, they are not found in the vital organs, they are difficult to grow in the laboratory, their potential to reproduce diminishes with age, and they have difficulty proliferating in culture (as do embryonic stem cells).

Development II: Cloning of Animals via Somatic Cell Nuclear Transfer
In 1997 scientists used somatic cell nuclear transfer to create Dolly, a Scottish lamb cloned from a cell taken from an adult sheep. Since this initial success, researchers have used the somatic cell nuclear transfer technique to produce a range of mammalian species, including goats created by geneticists at Tufts University in 1999 and rhesus monkeys created by researchers at Oregon Regional Primate Research Center in 2000. More specifically, Dolly was created by Ian Wilmut, an embryologist at the Roslin Institute in Edinburgh, Scotland, by taking a somatic cell from a ewe's mammary gland and fusing it with a denucleated oocyte (egg cell) to make a sheep embryo with the same genetic makeup (i.e., a
clone) as that of the ewe. This ability to reprogram cells, researchers say, has brought the prospect of making human replacement cells for the treatment of degenerative diseases significantly closer to reality. In 1999 Wilmut stated, "Human stem cell therapy has the potential to provide treatments for several diseases for which no alternatives are available, such as Parkinson's disease and diabetes. Some research on human embryos is essential if these important therapies are to become available." As Wilmut indicated, one purpose of therapeutic cloning and stem cell research is to serve as a principal means for treating human disease. There exists, however, a more fundamental purpose: to serve as a learning tool that would enable researchers to understand how a somatic cell (a cell that has only one purpose and does not have the ability to differentiate into other types of cells) can be programmed to forget its narrow destiny and to differentiate into other types of cells. Opinion leaders such as the Royal Society in London argue convincingly that, for now, therapeutic cloning is a necessary route to greater understanding of the stem cell differentiation process, and contend that therapeutic cloning eventually will put itself out of business as it unlocks the secrets of cell reprogramming and enables trained professionals to use healthy somatic cells to create needed stem cell replacements. Researchers anticipate, for example, that a somatic cell, such as a cheek cell, could be reprogrammed to develop into a brain cell that would produce dopamine to alleviate or even cure Parkinson's disease.

Development III: Evidence Suggesting that Stem Cells from Human Embryos May Have the Potential to Cure Disease
The findings of three 2001 studies using mouse embryos provide evidence suggesting that stem cells from human embryos may have the potential to cure disease.
In the first study, Kiminobu Sugaya, assistant professor of physiology and biophysics in psychiatry at the University of Illinois, and his colleagues showed that old rats (24 months old, equivalent in age to 80-year-old humans) performed better on a test of memory and learning after researchers injected neural stem cells from aborted human fetuses into the rats. The researchers explained the rats' improved performance in this way: the injected stem cells developed into neurons and other brain cells, providing added brain power and stimulating secretion of natural chemicals that nurtured the older brain cells. In another study Ron McKay, a molecular biologist at the National Institutes of Health (NIH), and his colleagues converted noncloned mouse embryonic stem cells in laboratory dishes
into complex, many-celled structures that looked and acted like the islets of Langerhans, the specialized parts of the pancreas responsible for regulating blood sugar levels. This work has been described by Doug Melton, the chairman of molecular and cell biology at Harvard University, as the most exciting study in the diabetes field in the last decade—partly because it supports the possibility that similar structures grown from human embryonic cells could be transplanted into diabetic children, who lack functioning islet cells.
In a third study Teruhiko Wakayama, a reproductive biologist at Advanced Cell Technology in Worcester, Massachusetts, and his colleagues showed that mouse embryos created in their laboratory contained true stem cells that could be converted into all of the major cell and tissue types. In a related mouse experiment, Wakayama and Peter Mombaerts from Rockefeller University worked with researchers from Rockefeller and the Sloan-Kettering Institute to create a blastocyst via somatic cell nuclear transfer from an easily accessible source of adult cells—the mouse's tail—and converted the extracted stem cells into a chunk of dopamine-secreting neurons (the type of brain cell that degenerates in Parkinson's disease). The work of Wakayama and his colleagues, say supporters, moves medical research a giant step closer to the ultimate, long-term goal for Parkinson's patients—to be able to grow compatible replacement neurons from their own cells.
NUCLEAR CELL TRANSFER
Nuclear cell transfer, a much-used technique in the cloning of adult animals, requires two cells: a donor cell and an oocyte (egg cell). For the technique to be successful, the two cell cycles must somehow be synchronized. Some researchers accomplish synchronization by enucleating, or removing the nucleus from, the unfertilized egg cell and coaxing the donor cell into a dormant state—the state in which an egg cell is more likely to accept a donor nucleus as its own. Then, the dormant donor nucleus is inserted into the egg cell via cell fusion or transplantation. Scientists have used different nuclear cell transfer techniques. The Roslin technique was used by Ian Wilmut, the creator of the sheep Dolly. Wilmut used sheep cells, which typically wait several hours before dividing. He accomplished dormancy in the donor cell by in vitro starvation. That is, he starved the cell outside the body of the donor before injecting it into an enucleated egg cell. The Honolulu technique was used by Teruhiko Wakayama, who used three different kinds of mouse cells, two of which remained in the dormant state (not dividing) naturally, while the third was always in either the dormant state or the nondormant (dividing) state. Wakayama injected the nucleus of a naturally dormant donor cell into an enucleated egg cell—without the added step of starving the donor cell. Wakayama's technique is considered the more efficient of the two, because he was able to clone with a higher rate of success (three clones in 100 attempts) than Wilmut (one clone in 277 attempts).
—Elaine Wacholtz
Conclusion Three recent research developments—the derivation of stem cells from human embryos, the cloning of animals via somatic cell nuclear transfer, and evidence from three 2001 studies (which suggests that stem cells from human embryos may have the potential to cure disease)—dispute the notion that claims of therapeutic cloning as a cure for diseases such as diabetes and Parkinson’s are premature and misleading. With these developments in mind, researchers have expressed increased hopes for the prospect of one day growing spare cells for tissue engineering and spare parts for transplantation medicine over and above their immediate reservations. One reservation has to do with the perfection of the therapeutic cloning technique, which has largely been tested on mice. “More work has to be done before the therapeutic cloning technique can be successfully adapted to humans,” explained McKay, the leader of the NIH study where mouse embryonic stem cells were converted into specialized pancreatic cells, “because the jump from animal to human studies is a tricky one.” As mentioned earlier, Oregon researchers have done therapeutic cloning on rhesus monkeys, but that cloning did not focus on converting stem cells into pancreatic cells or any other specialized cells, as did McKay’s research at the NIH.
Another reservation has to do with the ethical and legal controversy associated with therapeutic cloning. The controversy stems from the fact that the embryo must be destroyed in order
to retrieve the stem cells. Opponents of therapeutic cloning consider the destruction of the embryo morally objectionable because they believe that human personhood begins at conception or, as in cloning and nuclear transfer, at the genetic beginning. Supporters of therapeutic cloning, on the other hand, believe that human personhood is not at stake if the embryos used are less than 14 days old— because at such an early stage of development, embryos are too immature to have developed any kind of individuality. In fact Michael West, the president and chief executive officer of Advanced Cell Technology, explained in late 2001 that before 14 days, embryos can split to become two or can fuse to become one. “There is no human entity there,” said West. A third reservation has to do with tissue engineering. While many of the cell types differentiated from therapeutic cloning will likely be useful in medicine as individual cells or small groups of cells, as in the treatment of diabetes and Parkinson’s disease, a bigger remaining challenge will be to learn how to reconstitute in vitro simple tissues, such as skin and blood vessel substitutes, and more complex structures—vital organs, such as kidneys, livers, and ultimately hearts. At this stage of their knowledge, researchers concede that some of these reservations may challenge their scientific ingenuity. But they vehemently assert, on the basis of the research developments discussed in this paper, that they do not consider claims that therapeutic cloning could be the cure for diseases such as diabetes and Parkinson’s to be premature or misleading. Robert Lanza, the vice president of medical and scientific development at Advanced Cell Technology, recently called attention to a similar situation— when Dolly the lamb was cloned in 1997. Lanza pointed out that the cloning event brought to the scientific community a powerful new technology at a time when many thought that such a feat was impossible. —ELAINE WACHOLTZ
Further Reading
Edwards, Robert G., and Patrick C. Steptoe. A Matter of Life: The Story of a Medical Breakthrough. London: Hutchinson, 1980.
“Embryonic Stem Cells. Research at the University of Wisconsin-Madison.” . Hall, Stephen S. “Adult Stem Cells.” Technology Review (November 2001): 42–9.
Ho, Mae-Wan, and Joe Cummins. “The Unnecessary Evil of ‘Therapeutic’ Human Cloning.” Institute of Science in Society, London, 23 January 2001. Kato, Y., et al. “Eight Calves Cloned from Somatic Cells of a Single Adult.” Science 282 (11 December 1998): 2095–98. Kind, A., and A. Colman. “Therapeutic Cloning: Needs and Prospects.” Seminars in Cell and Developmental Biology 10 (1999): 279–86. Lanza, Robert P., et al. “Human Therapeutic Cloning.” Nature Medicine 5, no. 9 (September 1999): 975–77. Naik, Gautam. “‘Therapeutic Cloning’ Holds Promise of Treating Disease.” Wall Street Journal (27 April 2001), sec. B, p. 1. Rantala, M. L., and Arthur J. Milgram, eds. For and Against. Vol. 3, Cloning. Chicago: Open Court, 1999. Thomson, James A., et al. “Embryonic Stem Cell Lines Derived from Human Blastocysts.” Science 282 (6 November 1998): 1145–47. U.K. Department of Health. Chief Medical Officer’s Expert Advisory Group on Therapeutic Cloning. Stem Cell Research: Medical Progress with Responsibility, 16 August 2000. London, 2000. U.S. Department of Health and Human Services. National Institutes of Health. Stem Cells: Scientific Progress and Future Research Directions, June 2001. Bethesda, Md., 2001. Wakayama, Teruhiko, et al. “Differentiation of Embryonic Stem Cell Lines Generated from Adult Somatic Cells by Nuclear Transfer.” Science 292 (27 April 2001): 740–43. ———. “Full-Term Development of Mice from Enucleated Oocytes Injected with Cumulus Cell Nuclei.” Nature 394 (23 July 1998): 369–74. Wilmut, Ian. “Cloning for Medicine.” Scientific American (December 1998). ———, et al. “Viable Offspring Derived from Fetal and Adult Mammalian Cells.” Nature 385 (1997): 810–13. ———, Keith Campbell, and Colin Tudge. The Second Creation: Dolly and the Age of Biological Control. New York: Farrar, Straus, and Giroux, 2000.
PHYSICAL SCIENCE Historic Dispute: Are atoms real?
Viewpoint: Yes, atoms are real, and science has developed to the point that atoms can not only be seen, but can also be individually manipulated. Viewpoint: No, many pre-twentieth-century scientists, lacking any direct evidence of the existence of atoms, concluded that atoms are not real.
At the start of his Lectures on Physics, the 1965 Nobel Laureate in Physics Richard Feynman asks what one piece of scientific knowledge the human race ought to try to preserve for future generations if all the other knowledge were to be destroyed in some inevitable cataclysm. His answer, that the single most important scientific fact is that all matter is composed of atoms, now seems completely reasonable. How is it, then, that less than 100 years before, the very existence of atoms could be disputed with some vehemence? Although the notion of atoms has been around for a long time—over 2,500 years—it is important to note that it has meant different things in different epochs and to different thinkers. It meant one thing to the ancient Greek matter theorists, another to the Epicurean philosophers, something else to early modern scientific thinkers, yet another thing to nineteenth-century chemists, and means something a bit different again to contemporary atomic physicists. The atomic hypothesis, that all matter is composed of tiny indestructible particles, is generally attributed to Democritus (c. 460–370 B.C.), a Greek philosopher writing in the fifth century B.C., although the idea was not entirely new with him. A century later, another Greek, Epicurus (341–270 B.C.), adapted the idea to his philosophical system, which argued against an active role for God or gods in determining the course of events in the world and denied the possibility of life after death. Plato (c. 428–348 B.C.) accepted the existence of atoms and tried to explain the properties of the four classical elements—air, earth, fire, and water—in terms of the shapes of their atoms. His student Aristotle (384–322 B.C.) dismissed this idea in favor of a metaphysics in which the form of objects was imposed on an underlying continuous substance. Although the ideas of Aristotle were at first regarded with suspicion by church authorities, they were eventually embraced as consistent with Christian belief. In the thirteenth century the Italian Saint Thomas Aquinas (1225–1274) adopted the metaphysics of substance and form to explain the sacraments of the Catholic Church, and theologians introduced the term transubstantiation to describe the transformation of the substance, but not the form or appearance, of the bread and wine used in the Mass. To advocate that matter was an aggregate of unchanging atoms became heretical and therefore dangerous, at least in Christian Europe. Scientific and philosophical interest in the atomic hypothesis revived in the Renaissance. The Italian mathematician Galileo Galilei (1564–1642), the English physicist Isaac Newton (1642–1727), and the Anglo-Irish physicist Robert Boyle (1627–1691) all advocated the existence of atoms. Real progress toward the modern concept of the atom could not occur without the modern notion of chemical element. In the Skeptical Chymist, published in
1661, Boyle argued that there were many more elements than the four accepted in antiquity and that the list of elements could only be established by experiment. More than a century later, in his Elementary Treatise on Chemistry, the French chemist Antoine-Laurent Lavoisier (1743–1794) published what is considered to be the first modern list of elements—"modern" in that it included oxygen rather than the problematic phlogiston, but still included light and caloric (heat) as elements. As the nineteenth century began, an English schoolteacher and tutor, John Dalton (1766–1844), began to consider the quantitative consequences of the existence of atoms in chemical analysis. According to Dalton's Law of Definite Proportions, the ratios of the weights of the elements that formed any particular compound were fixed and represented the ratio of the weights of the atoms involved. A second law, the Law of Multiple Proportions, dealt with the case in which two elements formed more than one compound. In this case the weights of one element that combined with a fixed weight of another would always be in the ratio of small whole numbers. For example, the weight of oxygen combined with one gram of nitrogen in the compound NO would be half that which combined with one gram of nitrogen in the compound NO2.
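A brief worked illustration may make the arithmetic of the Law of Multiple Proportions concrete; it uses modern approximate atomic weights (about 14 for nitrogen and 16 for oxygen), numbers that Dalton himself did not yet possess:
\[
\text{NO: } \frac{16\ \text{g O}}{14\ \text{g N}} \approx 1.14\ \text{g O per g N},
\qquad
\text{NO}_2\text{: } \frac{2\times 16\ \text{g O}}{14\ \text{g N}} \approx 2.29\ \text{g O per g N},
\qquad
\frac{2.29}{1.14} = 2.
\]
The two oxygen weights stand in the simple ratio 1:2, exactly the kind of small-whole-number relationship the law predicts.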
Dalton’s understanding of compound formation underlies the discussion of the difference between physical and chemical change with which most modern chemistry texts begin. Chemical changes are more drastic and involve more energy. They also typically yield compounds with qualitatively different properties than those of the original substances. Expose a piece of soft, shiny sodium metal in an atmosphere of the irritating, green chlorine gas and one obtains the common salt that adds flavor to food. Dissolving sugar in water, in contrast, yields a solution that tastes sweet like sugar and is transparent like water. Chemical changes produce new compounds that obey Dalton’s laws, physical changes do not.
But the textbooks oversimplify the reality. The weight of sodium that combines with one gram of chlorine will vary slightly depending on the exact conditions of preparation. The range of deviation from the ideal, or stoichiometric, weight ratio will be small for most ionic solids, but is measurable by the careful analytical chemist. We now understand that the deviation from the ideal ratio of atoms arises from the presence of defects that all solid structures tolerate to some extent. However, in the early days of chemical analysis these small exceptions were enough to call into question the assumption that atoms combine in definite small-number ratios. A further complicating factor is that the distinction between mixtures and compounds breaks down in some metal alloys. The forces between atoms in some metallic mixtures are as strong as those in the pure metals, and the components of the alloy are not readily separated, even though the chemical composition is quite variable. The existence of deviations in stoichiometry and the development of a thermodynamic formalism that accounted for the stability of these materials without invoking the existence of atoms caused a number of the most eminent physical chemists of the nineteenth century, including the French chemist Pierre-Eugène Marcellin Berthelot (1827–1907) and the German physical chemist Friedrich Wilhelm Ostwald (1853–1932), to remain skeptical about the existence of atoms. The observation of the French chemist Joseph-Louis Gay-Lussac (1778–1850) in 1808 that chemical reactions between gases involve volumes combining in small-number ratios, and the explanation provided in 1811 by the Italian physicist Amedeo Avogadro (1776–1856) that equal volumes of gas contained equal numbers of molecules, strengthened the case for belief in atoms appreciably. It nonetheless left open the possibility that the liquid and solid states might be continuous in nature, with atoms and molecules only forming on evaporation. The fact that atoms could not be directly observed was still troublesome to the Austrian physicist and philosopher Ernst Mach (1838–1916) and his disciples, who cautioned against attributing reality to such
“unreal” concepts as phlogiston, caloric, and the luminiferous aether. Mach’s stature as a highly regarded physicist, however, delayed general acceptance of the existence of atoms until early in the twentieth century. In the end, atoms became accepted not because they were eventually observed but because they provided such a powerful and coherent explanation of the phenomena of physics and chemistry. In contrast to the original notion of indivisible particles, a detailed picture of the atom as composed of more elementary particles emerged. Further, transformations of the atoms of one element into those of another were found to occur in radioactive elements. The development, in the later twentieth century, of techniques that could form images of the atoms on a solid surface only confirmed the existence of those atomic particles that explained so much about the behavior of matter. —DONALD R. FRANCESCHETTI
Viewpoint: Yes, atoms are real, and science has developed to the point that atoms can not only be seen, but can also be individually manipulated.
The idea that matter was not continuous but consisted of discrete particles was first proposed by the Greek philosopher Anaxagoras (c. 500–428 B.C.). He claimed that matter consisted of infinitely small particles which he called omiomeres. He believed that these small particles contained the quality of all things, and had developed a theory for the creation of matter from omoiomeres. It was, however, another Greek philosopher, Leucippus (5th century B.C.), who actually used the term atom. He applied the term to describe a particle that was indivisible, compact, without parts, and that had a homogeneous composition. These atoms differed by their qualities, such as size and shape. Leucippus also maintained that there were an infinite number of atoms in constant, random motion. If a collision occurred, the atoms could be scattered, or they could coalesce to form an aggregate. In addition to the existence of atoms, Leucippus postulated the existence of a void that allowed the atoms uninhibited random movement and was central to the theory of atomism, as Leucippus’s hypothesis came to be called. The successor of Leucippus, another Greek philosopher named Democritus (c. 460–370 B.C.), refined the concept of atomism to the extent that it is difficult to distinguish his thoughts from those of Leucippus. Epicurus (341–270 B.C.), also a Greek, added weight as yet another defining characteristic of an atom. He modified the hypotheses of Leucippus and Democritus, saying that all atoms moved at the same speed, irrespective of their weight or volume. Based on the physics of the time, this conclusion led Epicurus to believe that all atoms moved in a slow, but definite, downward direction. This view of atomic movement made collision between atoms difficult to imagine. Epicu-
rus, realizing this, applied a modification stating that occasionally, an atom would “swerve,” allowing it to collide with another.
Unfortunately, this modification to the atomism of Leucippus and Democritus gave their detractors enough ammunition to ignore its many merits. As a result, the Greek philosopher Aristotle (384–322 B.C.) was able to propagate his antiatomistic theory. Primarily, Aristotle denied the existence of a void and affirmed the continuous nature of matter. He refused to accept any limits on the divisibility of matter, saying that matter had inherent qualities, such as color, smell, and warmth. He subscribed to the theory of four elements, namely, earth, air, fire, and water, and introduced a fifth element, ether, which governed celestial qualities. In effect, Aristotle's ideas resulted in the separation of terrestrial and celestial laws. The church accepted his views, and the combined effect was that Aristotle's influence essentially halted the development of atomic theory until after the Renaissance. Atomism developed separately in India during the same time period and appeared in the Middle East during medieval times. It may seem strange to mention religion in the discussion of the atom, but the development of atomic theory was often frustrated by the inability of its developers to separate religion and science. Many of the odd ideas and wrong turns in the history of science were made in the name of unifying science and religion. The Development of Atomic Theory The Seventeenth Century. Experimental work performed by the Italian mathematician and physicist Evangelista Torricelli (1608–1647) and the French scientist and philosopher Blaise Pascal (1623–1662) on air pressure was a driving force behind the renewal of atomism. Pierre Gassendi (1592–1655), a French scientist, was a leader in refuting Aristotelian theory and relied on the work of Torricelli and Pascal to return to the atomic concepts of Democritus and Epicurus. Gassendi was the first to use the term molecule to describe a group of atoms acting as a unit. He also returned to the concept of random atomic motion described by Democritus, rather than the downward movement of Epicurus.
Many other scientists and philosophers contributed to the slow redevelopment of atomic theory, but the next gigantic leap was made by the English physicist Isaac Newton (1642–1727). Of his laws, the gravitational law had the effect of once again unifying terrestrial and celestial science. This central development overrode Aristotle's influence, and atomic theory began to develop in earnest. It was proposed at this time that Newton's gravitational law could be applied to describe the attraction between atoms, but Newton himself did not believe this. He suspected that electrical and magnetic forces were important at such small scales, but he had no idea how true his thoughts would prove to be. It must be remembered that while the particulate nature of matter was beginning to gain popularity, all of the atomic theory proposed by Democritus was still not accepted. For example, the Anglo-Irish physicist Robert Boyle (1627–1691) supported atomism, but not the concept of random motion of atoms. He believed that atoms could not move without reason, and that it was God who decided how and where atoms moved. The Eighteenth and Nineteenth Centuries. The eighteenth and nineteenth centuries saw great advances in atomic theory. These theories are all the more convincing because the scientists who made them did not do so blindly. They debated endlessly among themselves about the truth of their own hypotheses, and tried often to disprove their own developments!
The French chemist Antoine-Laurent Lavoisier (1743–1794) firmly defined an element as a substance that had not yet been decomposed by any means. He also clearly stated the law of conservation of matter—during a chemical reaction, matter is neither created nor destroyed. Another French chemist, JosephLouis Proust (1754–1826), stated the law of constant proportion in 1806—irrespective of its source, a substance is composed of the same elements in the same proportions by mass. These two laws enabled the English chemist John Dalton (1766–1844) to propose his atomic theory and state the law of multiple proportions. Dalton’s atomic theory contained four statements: 1) All matter is composed of atoms, very tiny, indivisible particles of an element that cannot be created nor destroyed. 2) Atoms of one element cannot be converted to atoms of another element. 3) Atoms of one element are identical in mass and other properties and are different from atoms of other elements. 4) Compounds are formed when specific ratios of different elements chemically combine. In spite of Dalton’s insight in developing the atomic theory, he incorrectly assumed that elements such as hydrogen and oxygen were monatomic. It was the work on gas volumes of two chemists, Joseph-Louis Gay-Lussac (1778–1850) from France, and Amedeo Avogadro (1776–1856) from Italy, that led to the hypothesis that gases such as oxygen and hydrogen were formed from two atoms of the same element combined and were thus diatomic, not monatomic. In 1869, Dmitry Ivanovich Mendeleyev (1834–1907), a Russian chemist, published his periodic table of the elements, in which he ordered the known elements based on their masses and their chemical properties. In fact, he was bold enough to switch certain elements where he thought the properties belonged in a different column, and history proved his thinking correct. Mendeleyev’s greatness lies in the fact that he used his table to predict the properties of elements that had not yet been discovered. With the introduction of the concept of valency—the property of an element that determines the number of other atoms with which an atom of the element can combine—the French chemist Joseph-Achille Le Bel (1847–1930) and Dutch physical chemist Jacobus Hendricus van’t Hoff (1852–1911) were able to imagine the concept of molecules with three-dimensional structures. Even at this date, detractors of the atomic theory existed. Adolf Wilhelm Hermann Kolbe (1818–1884), a German organic chemist described as one of the greatest of that time,
issued scathing comments regarding van't Hoff's vision of three-dimensional molecules.
KEY TERMS
ATOM: Originally believed to be the smallest indivisible particle of which matter was composed, it is now known to consist of protons, neutrons, and electrons.
BROWNIAN MOTION: Phenomenon discovered by the Scottish botanist Robert Brown (1773–1858) in 1828 where tiny particles in a dilute solution constantly move in random motion. Brown first used pollen, and so thought that some spark of life was making the particles move, but later work showed that inanimate substances also had the same effect. In the early twentieth century Brownian motion was shown to be a product of atomic collisions. The slides used by the French physicist Jean Perrin in his 1905 investigations into Brownian motion contain moving particles to this day.
ELECTRON: Negatively charged subatomic particle found in an area around the nucleus determined by the orbital it occupies. Its mass is approximately one two-thousandth that of the proton. In a neutral atom, the number of electrons equals the number of protons.
NEUTRON: Subatomic particle that has no charge; a component of the nucleus, its mass approximately equals that of the proton. A variation in the number of neutrons in a particular element leads to the formation of isotopes.
NUCLEUS: Dense core of the atom containing almost all the mass of the atom and consisting of protons and neutrons.
MICHELSON-MORLEY EXPERIMENT: Attempt to detect a difference in the speed of light in two different directions: parallel to, and perpendicular to, the motion of Earth around the Sun. First performed in Berlin in 1881 by the physicist Albert A. Michelson (1852–1931); the test was later refined in 1887 by Michelson and Edward W. Morley (1838–1923) in the United States. Michelson and Morley expected to see their light beams shifted by the swift motion of Earth in space, but to their surprise, could not detect any change.
POSITIVISM: Philosophy, most popular in the nineteenth century, that denies the validity of speculation or metaphysics, and stresses scientific knowledge. The English philosopher Francis Bacon (1561–1626) and the Scottish philosopher David Hume (1711–1776) were early positivists, but it was the French philosopher Auguste Comte (1798–1857) who developed positivism into a coherent philosophy.
PROTON: Positively charged subatomic particle; a component of the nucleus. The number of protons in an element determines its atomic number, and each element has a unique number of protons.
QUANTUM: Small packet of energy. Its energy is in multiples of hν, where h is Planck's constant and ν is the frequency of the radiation being described.
QUANTUM MECHANICS: Most current model of the atom. Quantum mechanics uses wave functions to describe the region of greatest probability of finding the electrons in an atom.
SPECIFIC HEAT: The amount of heat needed to raise the temperature of a unit mass by one temperature unit (degree).
THERMODYNAMICS: Branch of physics concerned with the nature of heat and its conversion to other forms such as mechanical, chemical, and electrical energy.
WAVEFUNCTION: Mathematical solution to the wave equation developed by Erwin Schrödinger (1887–1961). The wavefunction mathematically contains the limitations originally set out by Danish physicist Niels Bohr (1885–1962) to describe the energy states of electrons in an atom.
Several schools of thought existed at this time. Some were enthusiastically in support of atomism. Others weakly supported or remained neutral on the thought, and some supported conflicting thoughts. Of the latter category, two major groups, called the equivalentists and the energeticists, were particularly vocal. The French
chemist Pierre-Eugène Marcellin Berthelot (1827–1907), an equivalentist, exerted his considerable power as a government official to prohibit the teaching of atomic theory. In fact, the mention of atoms was avoided and many texts contained the idea of atomic theory merely as an appendix, if at all. French physicist Pierre-Maurice-Marie Duhem (1861–1916), Austrian physicist Ernst Mach (1838–1916), and German
physical chemist Friedrich Wilhelm Ostwald (1853–1932), all energeticists, preferred the consideration of perceived data to that of hypothetical atoms. Ostwald was said to have denied the existence of matter! Physicist Albert Einstein (1879–1955) was extremely critical of Mach’s ideas. The Austrian physicist Ludwig Boltzmann (1844–1906), whose development of the kinetic molecular theory of gases relies extensively on the existence of atoms, supported Einstein. While German physicist Max Planck (1858– 1947) initially accepted Mach’s ideas, he later changed his mind and refuted his theories. Of Duhem, Mach and Ostwald, only Ostwald later openly accepted atomic theory. The others staunchly denied it until the end. The Twentieth Century. The combined efforts of English physicist Sir Joseph John (J.J.) Thomson (1856–1940), American physicist Robert Andrews Millikan (1868–1953), German physicist Wilhelm Conrad Roentgen (1845–1923), Dutch physicist Antonius van den Broek (1870–1926), British physicist Lord Ernest Rutherford (1871–1937), and English physicist Henry Moseley (1887–1915) led to the discovery of the electron and the realization that it was a component of all atoms. Further, it was realized that the number of electrons was proportional to the atomic mass, although the electron itself was much smaller than an atom. The first model of the atom, proposed by British physicists J.J. Thomson and Lord William
Thomson (Lord Kelvin) (1824–1907), was of a positively charged cloud containing the negatively charged electrons, much as a plum pudding contains raisins.
Rutherford’s famous gold foil experiment provided the first real model of the atom. In this experiment, Rutherford found that positively charged particles directed at a thin piece of gold foil were sometimes deflected rather than going straight through consistently. At times, the particles were deflected straight back to the source. He proposed that the atom contained a dense central portion that he called the nucleus. The nucleus was much smaller than the atom, but contained the majority of the mass and had a strong positive charge. Rutherford also envisioned the electrons orbiting the nucleus in the way that the planets orbit the sun. Since the atom is electrically neutral, the number of electrons was such that the negative charge of the electrons would balance the positive charge of the nucleus. The problem with this model was that according to classical physics, the orbiting electron would constantly emit electromagnetic radiation, lose energy, and eventually spiral into the positive nucleus. If this were so, all matter would eventually self-destruct. Also, this model did not explain known spectroscopic observations. If the model were correct, the electron would pass through all different energy levels, thus emitting spectral lines at all frequencies and causing a continuous spectrum to be observed. Instead, a distinct set of spectral lines was observed for each different element. Max Planck essentially solved the second problem when he proposed that energy was emitted not continuously, but in small packets, which he called quanta. Rather than being able to assume any value on a continuous scale—like being able to assume any position on a ramp— energy was limited to certain values—like being able to stand on one step or another, but not between steps. This concept was so radically different that Planck himself barely believed it. Einstein immediately found an application of Planck’s quantum in his explanation of the photoelectric effect. He suggested that light consisted of photons, each having the energy of a quantum. The Danish physicist Niels Bohr (1885–1962) combined the observations of both Planck and Einstein to propose a new model for the atom. He stated that instead of circling the nucleus emitting energy randomly, the electrons could only assume certain discrete energy values (i.e, quantized) that were at specific distances from the nucleus. In applying the concept of quantization, to the electron, he effectively removed the problem of the electron spiraling into the nucleus. Further, Bohr postulated that spectral lines resulted from the movement of the electron from one energy level to
another. This explained the presence of discrete lines rather than a continuous band in atomic spectra. Bohr revolutionized atomic theory with his model and with the fact that it gave the correct values for the observed spectra of hydrogen. However, Bohr’s model was not successful with other elements. Further refinements were made when the English physicist Sir James Chadwick (1891– 1974) discovered the neutron, thus accounting for the entire mass of the atom and explaining the existence of isotopes. French physicist LouisVictor de Broglie (1892–1987) took the next step when he inverted Einstein’s observation that light behaved like particles. He stated that particles could behave like light and exhibit wave properties. While this is true for large objects like footballs, their wavelength is insignificant due to their large mass. However, for tiny particles such as electrons, the wavelength was no longer insignificant. This revelation established the concept of wave-particle duality, the fact that matter could behave as a wave and vice versa.
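A rough numerical sketch shows why the wavelength matters only for tiny particles (the masses and speeds below are illustrative assumptions, not figures from the essay). Using de Broglie's relation \( \lambda = h/mv \),
\[
\lambda_{\text{football}} \approx \frac{6.6\times10^{-34}\ \text{J·s}}{(0.45\ \text{kg})(30\ \text{m/s})} \approx 5\times10^{-35}\ \text{m},
\qquad
\lambda_{\text{electron}} \approx \frac{6.6\times10^{-34}\ \text{J·s}}{(9.1\times10^{-31}\ \text{kg})(10^{6}\ \text{m/s})} \approx 7\times10^{-10}\ \text{m}.
\]
The football's wavelength is immeasurably small, while the electron's is comparable to the size of an atom, which is why wave behavior becomes significant only at atomic scales.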
Erwin Schrödinger (1887–1961), an Austrian theoretical physicist who had been a vocal critic of Bohr’s theory of movement between levels, or electrons “jumping,” developed the model of the atom that is with us today. His wave mechanical model of the atom was made possible by de Broglie’s work. It is mathematically complex, and yet extremely elegant. Essentially, Schrödinger described certain mathematical functions, which he called wavefunctions, and upon which he placed the normal mathematical restrictions of continuity, consistency, uniformity, and finite nature. Under these conditions, only certain values of energy would be possible for the energy of the electron, thus creating a natural path for quantization, unlike Bohr’s imposed quantization. The wavefunctions, also called orbitals, described the electrons in that particular energy state. The values that determine these wavefunctions are known as quantum numbers. The first is n, the principal quantum number that determines the energy level of the electron. The second is l, the azimuthal quantum number that determines the shape of the orbital. The third is ml, the magnetic quantum number that determines the multiplicity of the orbital. All these numbers are integers. Several interesting conclusions resulted from Schrödinger’s wave mechanical model of the atom. The first was that little doubt was left that the atom had to be described in three dimensions. The second was that the orbital described a region in space rather than a specific path. The third was that the square of the wavefunction described the region of highest probability, of finding the electron. Once again, rather than being definite, the interpretation had reverted to randomness, as described by Dem-
ocritus. The work of two Dutch-born American physicists, Samuel Goudsmit (1902–1978) and George Uhlenbeck (1900–1988), added yet another quantum number, ms, the spin quantum number, the only one of the four quantum numbers that is nonintegral. This fourth quantum number allowed the Austrian-born physicist Wolfgang Pauli (1900–1958) to clarify the electronic structure of atoms using his exclusion principle, stating that no two electrons in an atom could have the same four quantum numbers. This principle led to the concepts of spin coupling and pairing of electrons and completely explained the valence structures of atoms. The valence structures were instrumental in determining periodicity of the elements, the observation that elements in certain families had very similar physical and chemical properties. Just when it was believed that the nature of the atom had been resolved, German physicist Werner Heisenberg (1901–1976), using a different method, declared that, mathematically, a limitation was inherent in the extent to which we could determine information about the atom. His uncertainty principle stated that the product of the uncertainty in the position and the uncertainty in the velocity (or momentum) of the particle had to be greater than or equal to a constant, h/4π. This constant is very, very small. This would not normally be considered a problem, since it is possible to know exactly how fast a football is flying and also exactly where it is.
The mass of a football is so large that its momentum is large. The mass of an electron is so small, however, that the product of the uncertainties in its position and momentum comes very close to this constant. Essentially, Heisenberg maintained that if we knew one value (such as the position), we could not be certain of the other (such as its momentum). Further, this result indicated that by the very act of observing the position of an atom, we interfere in its behavior. With this statement, Heisenberg brought atomic theory back to the region of philosophy! Einstein never accepted this limitation.
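A back-of-the-envelope comparison makes the contrast concrete (the speeds and tolerances here are illustrative assumptions, and the modern statement of the principle, \( \Delta x\,\Delta p \geq \hbar/2 = h/4\pi \approx 5.3\times10^{-35}\ \text{J·s} \), is used). For a 0.45-kg football whose speed is known to within 0.01 m/s,
\[
\Delta x \gtrsim \frac{5.3\times10^{-35}}{(0.45)(0.01)} \approx 1\times10^{-32}\ \text{m},
\]
an utterly unobservable blur, whereas for an electron whose speed is known to within \(10^{5}\) m/s,
\[
\Delta x \gtrsim \frac{5.3\times10^{-35}}{(9.1\times10^{-31})(10^{5})} \approx 6\times10^{-10}\ \text{m},
\]
a distance comparable to the size of an atom itself.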
The determination of whether atoms are real thus centers on the clarification of what “real” is. If reality is determined by perception, then a perception that is universal is truly real. A color described objectively using a wavelength is more real than one described by eye since no two people see the same way. The universe is made of matter. We can touch and see it and subjectively, we know it exists. The fact that certain types of matter behave identically has been shown above. The experimental evidence clearly demonstrates that matter can be divided into the distinct classifications of pure substances (elements and compounds) and mixtures (physical combinations of elements and/or compounds). Further, the above scientists categorically determined that compounds are formed from combinations of elements whose relative proportions can be measured, and that elements are com-
posed of atoms. Later experiments have demonstrated that the atom itself is composed of a number of subatomic particles, the major three being the proton, the neutron, and the electron. The fact that, irrespective of the element, all these constituents are present indicates that they are objectively real. Today, it is possible to see atoms, not with our eyes, but using sophisticated technology called scanning tunneling microscopy, or STM. Not only can we resolve surfaces to the extent that the individual atoms can be seen, but we can manipulate the atoms on the surface, pluck an atom from one place, and place it elsewhere. It is possible to construct a circuit using one molecule connected to an electrode consisting of one atom! It is also possible to design and construct molecules that have very specific properties and structures. None of this would be possible if atoms were not real. Are atoms real? For those of us who do not have access to a scanning tunneling microscope, we have only to look at the world around us to say yes. —RASHMI VENKATESWARAN
Viewpoint: No, many pre-twentieth-century scientists, lacking any direct evidence of the existence of atoms, concluded that atoms are not real. Today the reality of the atom is taken for granted. Pictures taken by tunnelling electron microscopes can even “show” individual atoms. However, while now the reality of the atom is accepted as commonplace, it was not always so. Only at the turn of the twentieth century were experiments conducted that gave any direct evidence of atoms. Before that, the atomic hypothesis was a “best guess,” and was opposed by many scientists, since atoms could not be seen, felt, or sensed in any manner or form. Given the state of evidence at the time, the scepticism shown towards atoms was completely justified, and helped provide the impetus for the theoretical and experimental innovations that led to the existence of atoms being proved. Philosophical Atoms The atom as we understand it today is a recent invention, a product of experimental evidence and quantum theory. Yet the general idea of atoms, small invisible particles as the building blocks of the universe, is a very old one. The Greek philosopher Democritus (c. 460–370 B.C.) expanded on earlier ideas to give an atomic theory of matter, reasoning that it was impossible to divide an object for-
ever; there must be a smallest size. Other Greek philosophers developed the atomic theory into an all-encompassing idea that even explained the soul, which was supposed to consist of globular atoms of fire. However, the atomic hypothesis had powerful opponents in the Greek philosopher Aristotle (384–322 B.C.) and his followers, who strongly denied such entities for their own philosophical reasons. Aristotle's ideas were to become the dominant school of thought, entrenched in the Middle Ages when Aristotle's teachings were linked to the Bible, and atomism disappeared from intellectual thought. The idea of the atom was rediscovered in the Renaissance during the sixteenth and seventeenth centuries. Like the Greeks, the supporters of seventeenth-century atomism were more concerned with ideas than experiments, and the indivisibility of matter was the key philosophical argument for the existence of atoms. Two of the biggest names in seventeenth-century science, the French mathematician René Descartes (1596–1650) and the English physicist Sir Isaac Newton (1642–1727), both endorsed atomism, and with such authority behind the idea, it soon became a given. However, as was typical with the ideas of the two great men, each formulated a version of atomism that contradicted the other's. Various followers of Newton and Descartes debated the finer points of atomism and atomic collisions for over two centuries, until a compromise was finally reached. Although experiments played a part in the development of such theories, much of the debate ran along philosophical, and even nationalist, lines, with many French scientists supporting Descartes's ideas, and the majority of British scientists slavishly following Newton's. The authority of the two men was considered greater than some experimental evidence by their supporters, and many false paths were followed in the cause of championing one over the other.
Dalton's Chemical Atom It was the field of chemistry, not physics, that mounted the strongest scientific campaign for the existence of atoms. The English chemist John Dalton (1766–1844) observed that, in some chemical reactions, there are no fractions when the chemicals combine, and concluded that atoms bond in set integer ratios according to the compound produced. Importantly, Dalton's model allowed for predictions to be made, and he proposed some general rules for chemical combinations. However, Dalton did not give any reason for the validity of his rules, and many of his conclusions appeared arbitrary to most early readers. Also, his work on specific heats was by no means convincing, and seemed to contradict his own rules. While hindsight has proved Dalton mainly right, the strength of his arguments, and those of his few supporters, did not convince the majority of his contemporaries of the reality of atoms.
Antiatomism That there were strong objections raised against the atomic hypothesis should not be surprising. There were many different hypotheses about atoms circulating towards the end of the nineteenth century, a number of them contradictory, and often supporters of atomism seemed to invoke philosophy and authority over experiment. Perhaps the most surprising thing is that such opposition to atoms was not taken seriously until the 1890s. One of the foremost voices in opposition was Ernst Mach (1838–1916), an Austrian physicist who also dabbled in psychology and physiology and had a strong interest in the philosophy and history of science. Mach argued that while the idea of atoms explained many concepts, this did not mean they were to be considered real. Mach’s scientific philosophy owed a great deal to that of the Scottish philosopher David Hume (1711–1776), and the German philosopher Immanuel Kant (1724–1804). Mach placed observation at the forefront of the scientific process, and demanded that scientific assertions about nature be limited to what could be experienced. Mach rejected causes in favor of laws, but he did not see these laws as true; rather, they were to be thought of only as an economical way of summarizing nature. Such ideas brought Mach into furious debate with German physicist Max Planck (1858–1947), for whom laws such as the Conservation of Energy were to be considered real, not just a convenient fiction.
Mach’s view has been called a phenomenological philosophy of science, as he stressed observations of actual phenomena, and claimed if something could not be sensed, it could not be called real. Mach claimed that science should not proceed from objects, as they are only derived concepts. The only thing that can be known directly is experience, and all experience consists in sensations or sense impressions. Hence Mach denied the existence of atoms simply because they could in no way be sensed. However, he did allow the notion of atoms to serve as a useful and economical method of explaining certain observations. To Mach atoms were a mathematical shortcut, much like the symbols used in algebra. However, to claim they were real was, Mach argued, empty theorizing, since there was no way of experiencing them. Such positivist, or antimaterialist, views were popular toward the end of the nineteenth century, and Mach’s ideas were shared by many others. Energeticists such as the German physical chemist Friedrich Wilhelm Ostwald (1853–1932), and the French physicist PierreMaurice-Marie Duhem (1861–1916), who SCIENCE
Dalton’s Chemical Atom It was the field of chemistry, not physics, that mounted the strongest scientific campaign for the existence of atoms. The English chemist John Dalton (1766–1844) observed that, in some chemical reactions, there are no fractions when the chemicals combine, and concluded that atoms are bonding in set integer ratios according to the compound produced. Importantly, Dalton’s model allowed for predictions to be made, and he proposed some general rules for chemical combinations. However, Dalton did not give any reason for the validity of his rules, and many of his conclusions appeared arbitrary to most early readers. Also, his work on specific heats was by no means convincing, and seemed to contradict his own rules. While hindsight has proved Dalton mainly right, the strength of his
arguments, and those of his few supporters, did not convince the majority of his contemporaries of the reality of atoms.
unlike Mach considered energy to be real, shared Mach’s antiatomism, and mounted strong, sustained, and successful attacks on those supporting the reality of atoms. Energeticists argued that there was no need to reduce thermodynamics to the statistical motion of theoretical atoms, when all could be explained in terms of energy. Ostwald wrote that the “atomic hypothesis had proved to be an exceedingly useful aid. . . . One must not, however, be led astray by this agreement between picture and reality and combine the two.”
While denying the existence of atoms may seem wrong today since we “know” that atoms exist, the sceptical scientific approach of Mach, Ostwald, Duhem, the French mathematician Jules Henri Poincaré (1854–1912), and many others, proved correct when applied to other constructions in science such as the notion of absolute space and the concept of the ether. In the nineteenth century it was assumed that light waves travelled through a medium, like other waves, and this was dubbed the ether. Mach argued that while the notion of a substance for light to move through was useful, this did not mean the ether was real, as it could not be detected in any way. As it turned out, he was completely correct, as the Michelson-Morley experiment was to show, and Einstein’s theory of relativity was to explain. Mach’s interest in the history of science led him to attack the mystical elements he saw as leftovers from past giants such as Newton and Descartes. Newtonian concepts such as action at a distance could not be experienced, and Mach showed that mechanics could have been developed into just as reliable a science without Newton’s assumption of absolute space. The Statistical Atoms of Boltzmann and Maxwell At the same time that Mach and others were insisting that atoms could not be said to be real, the work of the Austrian physicist Ludwig Boltzmann (1844–1906), and Scottish physicist James Clerk Maxwell (1831–1879), attempted to show that atoms had certain specific characteristics and obeyed Newton’s laws. Boltzmann used statistical mechanics to predict the visible properties of matter. His work described things such characteristics as viscosity and thermal conductivity in terms of the statistical analysis of atomic properties. Maxwell, better known for his work on electricity, also formulated a statistical kinetic theory of gases independently of Boltzmann. However, while the mathematics and physics in their work is now recognized as outstanding, the BoltzmannMaxwell atom hypothesis failed to convince opponents. Their work was strongly attacked, and was often misunderstood, partly because neither was a clear writer nor an accomplished self-publicist. There were also a number of problems that arose from Boltzmann and
Maxwell’s work. For example, the form of entropy that Boltzmann derived contradicted the Newtonian notion of reversibility in mechanics. Some critics argued that Boltzmann’s work was incompatible with the second law of thermodynamics, and although he tried to defend his theories against these charges, they remained serious defects for atomism. Boltzmann attempted to write and teach philosophy to counter the views of Mach, Ostwald, and other antiatomists, but failed to formulate a convincing philosophical explanation. He became embroiled in private debates with Mach, through an exchange of letters, which show his frustration and confusion with many of the developments of late nineteenth-century physics. Boltzmann also had some very public debates with Ostwald, which were seen by many as too vicious in nature, and which often disguised the close friendship of the two men. Indeed, Mach was so worried that the arguments were getting out of control that he proposed a compromise theory in an attempt to cool the situation. However, Boltzmann became tormented by his failures to convince the majority of scientists of the reality of atoms, and began to feel that the antiatomists were winning. Indeed, he became something of a lone voice in the wilderness, the last staunch atomist when the majority saw atoms as an arcane notion, a hangover from the mysticism of the Greeks. Boltzmann suffered from bouts of depression, and eventually committed suicide, sadly only a few short years before the final victory of the atomic hypothesis. Near the end of his life Boltzmann wrote: “In my opinion it would be a great tragedy for science if the [atomic] theory of gases were temporarily thrown into oblivion because of a momentary hostile attitude toward it, as was for example the wave theory because of Newton’s authority.” Yet it was precisely because of the weight of authority of Newton and Descartes and others that the atomic hypothesis had been so accepted, and it was the inability of experimental and theoretical science to show the effects of atoms that had led to such damning criticism. The Atom Is Victorious Experimental work that finally showed direct evidence of atoms began to emerge at the turn of the twentieth century. Work on Brownian motion, where small particles in a dilute solution “dance” in constant irregular motion, showed it to be an observable effect of atomic collisions. Theoretical calculations by Albert Einstein (1879–1955) were supported by experimental work by the French physicist Jean Perrin (1870–1942), and led many antiatomists to concede defeat. New research in the field of radioactivity was also providing strong evidence for small particles that were even smaller than atoms, such as English
physicist Sir Joseph John (J.J.) Thomson (1856–1940) discovery of the electron in 1897 (which took some time to be fully accepted). In 1908 Ostwald became convinced that experiments had finally given proofs of the discrete or particulate nature of matter. In 1912 Poincaré declared that “[A]toms are no longer a useful fiction . . . The atom of the chemist is now a reality.” Mach never seems to have been totally convinced—after all atoms could still not be experienced directly—but he ceased to pursue the antiatomist case with any vigor. Although the antiatomists were shown to be wrong, their stand against atoms was still an important one for science in general. The ideas of Mach in particular were to have a lasting influence on physics, and his attack on Newtonian concepts such as the absolute character of time and space were important to development of relativity theory. Einstein acknowledged his debt to Mach in 1916, saying “I even believe that those who consider themselves to be adversaries of Mach scarcely know how much of Mach’s outlook they have, so to speak, absorbed with their mother’s milk,” and noted that Mach’s writings had a profound early influence on him, and were a part of the puzzle that led to the theory of relativity. Mach also had a founding influence on the Vienna Circle of logical positivists, a group of philosophers, scientists, and mathematicians formed in the 1920s that met regularly in Vienna to investigate the language and organizing principles of science. The antiatomists were correct to question the existence of atoms, and it must be remembered that their views held sway at the end of the nineteenth century, and for good scientific and philosophical reasons. At the very least, the scepticism of the antiatomists pushed others to search for strong experimental proof of atoms, and thereby put the whole of physics and chemistry on a much stronger base. —DAVID TULLOCH
Further Reading American Chemical Society. Chemical & Engineering News 79, no. 50 (December 10, 2001). Bradley, J. Mach’s Philosophy of Science. London: The Athlone Press, 1971. Brock, W. H., ed. The Atomic Debates. Great Britain: Leicester University Press, 1967. Cercignani, Carlo. Ludwig Boltzmann: The Man Who Trusted Atoms. Oxford: Oxford University Press, 1998. Cohen, Robert S., and Raymond J. Seeger, eds. Ernst Mach: Physicist and Philosopher. Dordrecht, Holland: D. Reidel Publishing, 1970. Lindley, David. Boltzmann’s Atom. New York: Free Press, 2001. MacKinnon, Edward M. Scientific Explanation and Atomic Physics. Chicago: The University of Chicago Press, 1982. Pullman, Bernard. The Atom in the History of Human Thought. New York: Oxford University Press, 1998. Sachs, Mendel. Ideas of Matter: From Ancient Times to Bohr and Einstein. Washington: University Press of America, Inc., 1981. Sambursky, Shmuel, ed. Physical Thought: From the PreSocratics to the Quantum Physicists: An Anthology. London: Hutchinson & Co., 1974. Scott, Wilson L. The Conflict between Atomism and Conservation Theory 1644–1860. London: McDonald, 1970. Silberberg, Martin. Chemistry: The Molecular Nature of Matter and Change. St. Louis: Mosby-Year Book, Inc., 1996. Solomon, Joan. The Structure of Matter. New York: Halsted Press, 1974.
Does the present grant system encourage mediocre science?
Viewpoint: Yes, by attenuating peer review mechanisms, grant evaluation systems encourage mediocre science.
Viewpoint: No, although far from perfect, the present grant system acts to promote science.
In economic terms, the knowledge obtained through scientific research usually falls in the category of "public good," that is, a product of labor that benefits the population as a whole and for which there is no particular ownership. Although some scientific discoveries are treated as "intellectual property" through the use of patents or trade secrets, at least for a time, scientific societies and academic institutions have, since the scientific revolution of the seventeenth and eighteenth centuries, placed a premium on the open publication of results. The questions of who should pay for the expense of producing research results and how to allocate resources among the putative producers of them have no obvious or easy answers. Centuries ago, scientific work was often limited to those who could find patrons among the ruling class or the very wealthy. As the potential of systematic research to benefit industry, commerce, health, and agriculture became apparent in the eighteenth century, governments established bureaus of standards, geographic surveys, and technical schools at which research could be carried out. In the nineteenth- and early twentieth-century United States, the accumulation of massive wealth by a few families led to the endowment of major universities (Vanderbilt, Duke, Carnegie-Mellon, Stanford) and foundations (Ford, Rockefeller, Sloan). Both the government agencies and private foundations sometimes solicited proposals for research projects or new facilities, at times calling on external experts to decide between meritorious proposals. Thus the grant system was born.
The United States government played a relatively minor role in making grants to academic institutions prior to World War II. During the war, the massive mobilization of scientific talent and rapid development of the atomic bomb, radar, and other military hardware made a strong impression on the national leadership. At the same time, the scientists and academic managers who had worked shoulder to shoulder with high-ranking military officials and other government bureaucrats gained substantial political savvy and clout. The outcome was the establishment of a government commitment to fostering research and the education of research scientists at United States universities. The National Science Foundation (NSF) was established to fund basic science research, and the role of the National Institutes of Health (NIH) in funding medical research, both on its own campus and at universities, was greatly expanded. Other agencies also began or expanded their own grant programs. The federal government commitment to support research and increase the pool of scientists and engineers was strengthened in 1957
when the Soviet launch of Sputnik I, the first earth satellite, reminded Americans that technical feats were not their exclusive preserve. Among the most remarkable features of the postwar grant programs was the understanding that the government would pay the full cost of the research. For a successful researcher on a university faculty, this might include: full salary and fringe benefits for the summer months, partial salary during the academic year if the work involved "release time" from teaching duties, stipends and tuition payments for graduate students working as research assistants, full salaries for postdoctoral fellows and technicians, the cost of supplies and equipment, the cost of travel to related professional meetings, and a subsidy (page charges) of the cost of publishing the results in a scientific journal. To this the university was permitted to add a percentage for "indirect costs" such as building maintenance and depreciation and the purchase of library materials. In the mid-1970s the rate of indirect cost charged by some elite private universities exceeded 100%. The availability of such large sums of money was quickly factored into the planning process at many universities. New doctoral programs were added, new faculty hired, and new buildings built, often with federal assistance. There is little debate today that the system did increase the supply of scientists and produce significant new discoveries. But the system was bound to suffer some strain from its own success, as rapidly increasing numbers of scientists had to compete for, at best, slowly increasing funds. By the end of the 1970s new questions were being asked about the grant system. Is it fair? Is it efficient? While it produces much good science, does it actually encourage mediocre science? A case could be made that the system is not only fair but benevolent. Even if a proposal is not funded, the submitter is afforded the benefit of review by a number of experts at no cost, and is welcome to submit a revised proposal addressing any objections raised. On the other hand, new investigators will be at a disadvantage compared to established scientists, since the expert reviewers are more likely to have confidence in the ideas of an investigator with a proven track record. Further, there is also a danger of grants being made on the basis of style more than substance. Many books and short courses on "grantsmanship," the craft of writing proposals and securing grants, are now offered to teach scientists how to write proposals most likely to be rated highly by reviewers. As the Yes essay that follows notes, even the benefit of expert review is no longer guaranteed by agencies such as the NIH, where a triage system of ranking proposals allows some to be discarded without the deeper study that others will receive. The grant system is certainly inefficient, in that a great many scientists will spend from several days to several weeks out of each year reviewing the proposals of other scientists and several weeks more writing their own proposals. When, as at present, the vast majority of proposals are not funded, some scientists will feel obliged to generate a much larger number of proposals so that at least some funding will be obtained. Does the grant system favor mediocre science over risk-taking, truly innovative research? The likelihood of funding is certainly higher for a proposal for which the reviewers can see a plausible outcome.
Also, more funds will be available for research obviously connected to a desirable social or medical goal, such as environmental remediation or a cure for breast cancer. Thus the setters of the political agenda often steer the direction of research towards certain applications. On the other hand, the recipients of research grants have considerable latitude in the actual implementation of their program. Frequently, some more exploratory work is done along with that outlined in the proposal. Program officers, the people who make the actual funding decisions and monitor the recipients' reports, generally have no interest in micromanaging the day-to-day course of research and allow their grantees considerable leeway as long as some progress is reported toward the goals of the original proposal.
Another way in which the grant system could be said to encourage mediocre science is the way it limits access to academic careers for scientists whose interests lie outside "popular" areas of research. Candidates for teaching positions at major universities are often told that they will be expected to bring in a preset number of dollars in research grant support in the first five or six years or they will not be recommended for tenure. Once tenure is achieved, the investigator's freedom to change direction in research is, in principle, guaranteed. The rewards that accompany bringing in more funding, however, work against major changes in research direction unless the new area has also become "fashionable."

The vast majority of scientists would agree that some aspects of the grant system are clearly counterproductive. Whether there is a better system that can in fact be implemented in United States institutions without disrupting many highly productive researchers is another matter. One hopes that the adoption of a new system will not occur without consideration of the pros and cons of the present one. —DONALD R. FRANCESCHETTI
KEY TERMS
FOR-PROFIT ORGANIZATIONS (FPOS): Companies that seek corporate profit for the benefit of their organizers or shareholders. In most cases, public grant sources require such entities to provide their own funding. Under unusual circumstances where the proposed research is, as termed in NSF guidelines, "of special concern from a national point of view," or is "especially meritorious," such projects may receive public funds.
FOUNDATION: Legal entity operating to promote research and/or development. Foundations may be general or specific in their distribution of grants. There are more than 250,000 private foundations chartered to operate in the United States. A small number of these foundations, however, contribute over one-half of all grant monies.
GRANT PROPOSAL: Written proposal that outlines a research topic or question. The proposal contains preliminary data related to the projected research (i.e., outlines what is already known) and formulates a hypothesis regarding the subject of the research.
GRANT PROPOSERS: Individuals and organizations submitting proposals for grants. In many cases, academic institutions (universities and colleges) make such proposals on behalf of the investigators who will actually conduct the research. In most cases, the grant proposer is responsible for administering and accounting for grant expenditures.
NONPROFIT ORGANIZATIONS (NPOS): Organizations that do not seek profit for the benefit of their organizers. NPOs range from independent museums with broad interests to research institutes and laboratories focusing on very specific areas of science. NPOs usually do not include academic institutions. NPOs may also be grant proposers and grant recipients.
TRIAGE: From the French meaning "to divide into three," the term has its origin in the treatment of battlefield wounds. Wounded soldiers are usually triaged, or divided into three groups: (1) those who should receive only minimal attention, (2) those with wounds not severe enough to demand immediate attention, and (3) those to whom prompt medical attention can mean the difference between life and death. Whenever medical resources are scarce and the number of patients is more than staffing or supplies can handle, triage procedures are instituted to most fairly and effectively allocate resources. As applied to the grant awarding system, triage refers to disposing of those proposals not likely to be recommended for funding prior to a full review.
Viewpoint: Yes, by attenuating peer review mechanisms, grant evaluation systems encourage mediocre science.
Although grants are designed to promote "good" science, the process has become so cumbersome, clogged, and confused that, despite noble intent, it increasingly encourages mediocre or "safe" science. Science research, especially basic science research, is heavily dependent on grants from public and private institutions. By the year 2000, universities and colleges depended on almost $20 billion annually to fund their research programs, and most of this money was invested in science research.
Grant-derived funding paid a range of research costs, from test tubes to the salaries of technicians and professional investigators. Foundation funding in the private sector is directly related to economic growth. Private foundations are required by law to annually distribute 5% of the valuation of foundation assets. Accordingly, during times of economic growth, foundation giving, which is in many cases dependent on underlying investments (e.g., stocks), must increase. During economic recessions, giving usually decreases. Federal grant funds, although not legally tied to economic growth, also historically mirror economic trends. Accordingly, funding for science research derived from both private and federal grants increased throughout the 1990s. The rate of growth in the competition for those grant dollars, however, vastly exceeded the rate of real growth in funds available.
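As a worked illustration of the 5% payout rule just described (the asset figures here are invented for the example):

\[
\text{minimum annual payout} = 0.05 \times \text{assets}, \qquad 0.05 \times \$400\ \text{million} = \$20\ \text{million}.
\]

If a market downturn shrank the same endowment to $300 million, the required minimum would fall to $15 million, which is the coupling between foundation giving and economic conditions described above.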
Increasing competition and dependence on grants to fund increasingly complex and expensive research programs exacerbated pre-existing weaknesses in strained grant evaluation systems. Moreover, specific reforms such as triage and electronic submissions, designed to cope with increasing numbers of grant applications, are proving to have the unintended side effect of profoundly shaping the kinds of science research funded. Grant awards are rapidly becoming a contest of grantsmanship, the ability to write proposals and secure grants, rather than being decided on scientific merit. This emphasis on the form and procedures of the grant evaluation process, rather than on the substance of the science proposed, continually forces researchers away from the lab and into seminars on the craft of grant writing. More ominously for science, investigators are forced in many cases to develop research proposals specially designed to please grant review committees. When such an emphasis is placed upon politics over scientific merit, science research loses in several significant ways. First, there is a loss of scientific diversity, as grant evaluation committees view proposals that have predictable outcomes as less of a risk to precious investment capital. Critics of the current grant process consider this trend a hidden drive toward safe science, away from the more adventurous research that throughout history has been the path to spectacular insights and advances in science.
Second, as grantsmanship becomes increasingly important, new investigators fight an uphill battle to gain funding and build labs. Already several steps behind seasoned principal investigators who know how to craft strong proposals, new researchers often struggle along on grants designed for new scientists. These grants, although of generous intent, are paltry on the pocketbook and unrealistically low with regard to the real costs of research. Although there is some funding of dissertation research, organizations such as the National Science Foundation (NSF) actively discourage graduate students from submitting grant proposals. Although often highly touted, early education grants and grants to scientists starting out on research programs are often insufficient. In fact, only about one of four researchers seeking initial National Institutes of Health (NIH) funding actually apply for the "easier-to-obtain" grants designed for researchers making their first application for funding as a principal investigator. More confining and debilitating to new researchers are early development grants, which carry restrictive clauses that prohibit researchers from seeking other types of funding.

Third, revisions in the grant review process—no matter how well intended—both entrench established lines of research and disproportionately fund research into politically "in vogue," publicity-driven topics.
Despite the lofty rhetoric of federal programs charged with funding science research, the numbers regarding actual funding reflect an increasingly brutal reality for investigators at all levels. The grant process is extremely competitive. Most grant proposals are not funded, and the percentage of proposed projects funded has steadily declined since the mid-1980s to current levels at which only 10% of proposals are ultimately funded. In this environment, some scientists and their sponsoring institutions become proposal mills—putting out a shotgun pattern of tens of proposals in hope that one or two may get funded. The time cost is a staggering drain on scientists and scientific research. For many months of the year, investigators may spend more time on the grant application process than on actual research.

It would be unreasonable to expect that any evaluation system dependent upon human judgment could be free of bias and prejudice. Regardless, recent attempts to reform and streamline the grant evaluation process actually make the system more fallible and less reliable. For example, the NIH—the single largest source of grant funds for research in the biomedical sciences—has moved to a triage system for deciding which proposals are worthy of funding, one that essentially labels some proposals as unfundable without even carrying out a full peer review of their merits. NIH grants and funding have unquestionably led to significant and revolutionary advances in biomedicine and healthcare. The NIH deserves ample credit for these, just as the NSF deserves praise for past success. Regardless, it remains debatable whether the current system, beset by a crush of applications and threatened declines in funds available to researchers in less robust economic times, can continue to promote good science—and aggressive science.

Under the current grant review process, researchers or their sponsoring institutions (the proposers) submit their research ideas and strategy; NIH proposals, for example, are directed to various study sections composed of members with expertise in the area of research. Other foundations and institutes have similar procedures, often based upon NIH or NSF models. In the specific case of NIH, the study section members score the proposals and then rank them using a priority score. NIH priority scores range from 1 to 5, with 1 being the highest priority; thus the higher the priority score, the lower the chances of funding. The funding process eventually establishes cutoffs, priority scores over which no funding will be given. In recent years, no proposals over 2.1 were funded.
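The scoring-and-cutoff mechanics described above can be summarized in a short sketch. This is an illustration only, not NIH's actual procedure: the 2.1 payline echoes the figure quoted in the essay, while the triage fraction, the proposal identifiers, and the reviewer scores are invented for the example.

    # Illustrative sketch of the scoring-and-cutoff logic described above.
    # The payline (2.1) echoes the figure quoted in the essay; the triage
    # fraction and all proposal data are invented for illustration.
    from statistics import mean

    def review_cycle(proposals, payline=2.1, triage_fraction=0.5):
        """proposals: dict mapping proposal id -> list of reviewer scores (1 = best, 5 = worst)."""
        # Average each panel's scores into a single priority score.
        priority = {pid: round(mean(scores), 2) for pid, scores in proposals.items()}
        # Rank proposals; lower priority scores are better.
        ranked = sorted(priority, key=priority.get)
        # Triage: the bottom fraction is returned without full review.
        keep = ranked[: max(1, int(len(ranked) * (1 - triage_fraction)))]
        triaged = [pid for pid in ranked if pid not in keep]
        # Of the fully reviewed proposals, only those at or under the payline are funded.
        funded = [pid for pid in keep if priority[pid] <= payline]
        return priority, funded, triaged

    if __name__ == "__main__":
        scores = {
            "R01-A": [1.8, 2.0, 1.9],
            "R01-B": [2.3, 2.6, 2.4],
            "R01-C": [1.4, 1.6, 1.5],
            "R01-D": [3.8, 4.1, 3.9],
        }
        priority, funded, triaged = review_cycle(scores)
        print("priority scores:", priority)
        print("funded:", funded)                     # the two best-scored proposals
        print("triaged without full review:", triaged)

Running the sketch funds the two best-scored proposals and returns the weakest without full review, mirroring the triage outcome the essay criticizes.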
Regardless of the exact procedures, protocol, and terminology, similar models of evaluation often work against more open-ended, basic-science-oriented research proposals. Basic science proposals usually contain a wider range of possible outcomes than do more narrowly focused goal-oriented projects (e.g., projects regarding a specific clinical application). Reviewers tend to regard this unpredictability as a negative quality, and the emphasis on lowered risk results in a higher priority score, reflecting a lower project priority, for the proposal under review. It is certainly difficult to evaluate and assign a priority to an unknown outcome, but the essential and long-cherished concept of scientific adventurism lies at the heart of empirical science. To slant research toward a particular goal usually casts a pall on the interpretation of data. In a very real sense, it upends the classic scientific method of probing for fallibility. Moreover, it is a well-known axiom of science that researchers, regardless of discipline, often find the results they are looking for. With a specific goal in mind, even the most intellectually honest of researchers is prone to shade and interpret data in ways that conform to expected results. The present grant system encourages mediocre science because it encourages predictability. Regardless of the terminology of the particular grant foundation, researchers who fail to predict all possible outcomes for a project receive worse priority scores. As a result, the project becomes less a scientific inquiry and more an exercise designed to validate predicted results.
Political factors also influence funding. While including nonscientists on grant evaluation teams is a popular trend at many foundations, the inclusion of nonspecialists often means that evaluations of proposals are swayed and influenced by trendy, fashionable, or "politically correct" factors. Hot topics in the news or issues of special social concern often receive elevated evaluations that result in a lack of funding for more scientifically worthy projects. Much as financial stock prices may soar on naked speculation about any company with the word "biotech" in its title or prospectus, speculation—in the form of a prospective judgment on the priority or worthiness of a planned research project—can also soar because a project is related to several other already funded research projects. Funding allocated in this way simply entrenches established lines of research. If a proposal falls outside the interests of the evaluating committees or study group members, it is inevitable that an evaluation of the project's potential, particularly when graded by a priority score, will diminish. That bias exists within evaluating groups is evidenced by the observation that substantially identical proposals can receive a wide range of evaluative scores.
In an effort to counter this and reach the most target-specific evaluating groups, some researchers now attempt to limit the scope of their research and avoid interdisciplinary proposals. In an increasingly global research environment, restrictive and burdensome clauses with regard to foreign versus domestic organizations are at best cumbersome. Public sources such as NSF will usually support only those portions of internationally collaborative research projects that are considered to be conducted by United States citizens. Inundated with grant requests, larger granting institutions such as the NIH have instituted various triage-type procedures to handle the influx. Whether triage-based policies and procedures work to alleviate problems of bias and prejudice remains a highly contentious issue. Many investigators contend that the attenuated review process hampers funding for innovative projects that might benefit from fuller consideration under a complete review. The difficulty in obtaining grants has created a vicious cycle. Because of the large number of grant applications, proposals that are deemed "noncompetitive" at NIH are returned without full review. At a minimum, this eliminates much of the potential benefit traditionally derived from an investigator's ability to refine and resubmit improved proposals responsive to full peer criticism. Triage systems attenuate the peer review process and place an increased dependence on assessments of anticipated benefits or results. Some critics of the present system argue that, in spite of the merits of peer review, the system is ripe for abuse in a modern era of intensely competitive science. Stripped of full review, basic science proposals increasingly suffer, in part due to the trend of emphasizing clinical, or applied, science. This trend fundamentally reshapes the intent of research, and results in mediocre science and a weak foundation upon which to build future "applied" research. The American astronomer Carl Sagan (1934–1996) often asserted, "There are many hypotheses in science which are wrong. That's perfectly all right; they're the aperture to finding out what's right. To be accepted, new ideas must survive the most rigorous standards of evidence and scrutiny." —K. LEE LERNER
Viewpoint: No, although far from perfect, the present grant system acts to promote science.
Although there is no question that the grant application system can become a fickle procedural minefield for scientists, it is quite another matter to contend that the existing grant system encourages mediocre science. To analyze such a contention, it is important to clearly distinguish between the process of obtaining a grant and the scientific outcome produced by grant-funded research. And it is also important not to portray a diversity of funding sources as a one-eyed, one-source monolithic monster. Funding sources such as nonprofit organizations (NPOs), for-profit organizations (FPOs), and government entities (e.g., the NIH and NSF) provide billions of dollars to facilitate research in basic and applied sciences. A wide variety of other governmental agencies and private foundations also supply grants to advance scientific research. There is no question that there are problems in the methodologies of large granting agencies such as the National Institutes of Health (NIH) and the National Science Foundation (NSF). Review process problems, for example, may lead to grants being easier to obtain for studies involving the application of research rather than for basic science research. However, arguments that assert the existing grant system encourages mediocre science are contradicted by the only true measure of the current system—the steady output of scientific research, innovation, and advances. Criticism of a process that is based upon anticipated results can be, at best, uncertain. In the worst cases, it represents a misguided attempt to change an already functioning system without any real data to support the proposed modification. There are admittedly often maddening, time-consuming hurdles for investigators to vault, and the grant allocation process is far from perfect. Regardless, as with all areas of politics—that being the appropriate term for a debate on the allocation of resources—interpretations regarding the current grant system are often "self-serving" rather than "science serving."
Science research grants fund proposals submitted by individual researchers or organizations, such as specialized research institutes, that submit proposals on behalf of individual researchers or research groups. The costs of doing research have spiraled upward in the past decade. With some notable exceptions in fields such as pharmaceuticals or human genetics, where companies expect to make eventual profits, the process and progress of research are often dependent upon a mixture of public and private grant money. In the basic sciences, the dependence on research grant money is almost absolute, with little funding to be obtained from private sector sources.

The grant notices in any of the major journals or professional science publications of broad scope, such as The Scientist, reveal the typical spread of grant awards. On the monetarily modest end of the scale, a basic science research project in paleontology might receive an NSF grant of a few thousand dollars, while a botanical research project on the genetics of plants that might eventually prove beneficial in improved crop yields garners more than $100,000 in multiyear funding from the same foundation.
Critics of the grant system often cite such examples in support of a contention that basic science often suffers to the benefit of research tied to potential applications. Such assertions often overlook the fact that studies with a wide gap in funding may vary significantly in duration and anticipated expense. More importantly, with finite resources, it is fair to argue that it is often prudent to place a value on the potential outcome of a research project. When research has a direct link to biomedical issues, grant amounts usually soar. A recent study on the genetic basis of skin cancer, announced simultaneously with the grants cited above, provided approximately $1 million to researchers. Again, elevated grant awards may reflect some combination of cost and an estimation of potential benefit, but, as in the case of the grant for skin cancer research, the million-dollar grant came from a private foundation that makes awards in many areas of science and biomedicine.

Critics of the current grant process often focus on the problems related to obtaining federal grants. To focus criticism on the grant process, and to inflate the rhetoric of criticism to claim that the present system promotes mediocre science, denies the reality that a wide variety of funding sources are available to researchers. Furthermore, alleged defects in the federal review process must be weighed against the advantages offered by the system and the fact that there are other sources of funding for research projects.

Grants may be relatively narrow in focus, but of broad humanitarian and international application. For example, a research program seeking to combat the transmission of parasites by developing ways to control insect vectors—the insects responsible for the transmission of particular diseases—garnered multiyear support from the private John D. and Catherine T. MacArthur Foundation of Chicago. This type of research is goal-oriented, with a specific target for investigators established at the outset. Such projects are often funded by grants from private foundations with broader humanitarian aims. In contrast, groups with a narrow focus tend to selectively support the types of research most directly related to their specific interests. For example, a group dedicated to the study of paralysis, such as the American Paralysis Association, may provide hundreds of thousands of dollars toward finding a cure for paralysis caused by spinal cord trauma or stroke.
These types of awards, while goal oriented, may still foster basic medical research. For example, the grant for research related to paralysis specifically funded inclusive projects on topics of basic medical research related to in vitro analysis of spinal cord regeneration mechanisms, or the biochemical analysis of specific proteins found in high concentrations following spinal cord trauma. Many grants awarded from public and private funds act to support good science at the most fundamental level. For example, some grants are designed solely to prepare undergraduate students for graduate education in science. Although certainly not basic science research, these types of grants are important in the training of future scientists. In a sense, they are the most basic and most fundamental investments in science. The present system also contains checks and balances that promote good science by ensuring separation between the research lab and the marketplace. Programs designed strictly for the marketplace are the antithesis of rigorous scientific endeavor, in which, as American physicist and Nobel Prize winner Richard Feynman (1918–1988) once asserted, “one must be as ready to publish one’s failures as one’s successes. . . .” Recognizing this truth, many grant agencies, including the NSF, specifically refrain from funding research designed to develop products for commercial purposes or markets.
There has certainly been a shift away from basic science research toward applied science research within granting agencies such as the NIH. Regardless, this trend is balanced by the actions of other agencies to specifically encourage rigorous “high” science, or the pursuit of basic science knowledge. For example, the NSF specifically discourages proposals involving particular medical goals in which the aim of the project may be the diagnosis or treatment of a particular disease or disease process.
It is also unfair to assert that requiring researchers to evaluate the potential merits of their work encourages mediocre or goal-oriented "safe" science. Being able to clearly state the objectives and potential of science research is neither imprudent nor constraining. Scientific serendipity has always been a factor in the advancement of science—but it is, by definition, something that takes place along a path initially intended to lead elsewhere. In almost all cases, it is difficult to evaluate the potential of a true unknown. Arguments against the current grant system also generally ignore the fact that part of the grant review system is designed to assess the suitability of the methods researchers intend to apply to test their hypotheses. Lacking such oversight, the grant proposal process
breaks down into political infighting regarding the value of potential outcomes. With attention to such review, the grant process encourages the foresightful and prudent application of science data and techniques. It is easy to argue that the current system may discourage the brilliant undiscovered scientist. However, the resources and funds available to support science research are finite, and some rational process must exist to allocate resources economically. The NIH and the NSF often fund science education programs. As long as faculty oversight is provided, some granting foundations may even award grants to graduate students conducting research programs intended to culminate in their doctoral dissertation. The present grant system also seeks to provide special support for women, minority scientists, and scientists with disabilities. As with direct grants to students, grants to faculty at nonresearch institutions, primarily those teaching at undergraduate colleges, are designed solely to provide the most fundamental support of science in the development of the next generation of researchers. Grants can also be used to remedy a shortage of investigators in a particular area of research. Far from being exclusive, the present system of public funding also provides a mechanism for unaffiliated scholars and scientists, especially those with a demonstrated facility and capacity to perform the type of research proposed, to apply for support. These types of grant awards are admittedly increasingly rare, but a procedural mechanism does remain whereby significant proposals can at least be reviewed. The current grant system—for all of its faults and political infighting—adds a layer of protection to scientific research by weeding out proposals with little merit. In addition, daunting as the grant process may be, even a failed proposal can be valuable to the earnest researcher. Because experts in the particular area of inquiry often review grant proposals, researchers may discover facts or data that can better shape future proposals or even assist in research. Reviewers also often help shape proposals by critically evaluating theoretical or procedural defects. Reflecting a variety of procedures, most grant review processes promote good science by allocating resources based upon the significance of the project (including its potential impact on science theory) and an evaluation of the capability and approach of the investigator or investigative team. In particular, evaluating committees, especially when staffed with experts and functioning as designed, can help fine-tune research proposals so that methodologies are well integrated and appropriate to the hypothesis advanced. In cases of clinical and potentially dangerous research, such as the genetic alteration of microorganisms, the grant review process
provides supervision of procedures to assure that research projects are conducted with due regard to ethical, legal, and safety considerations.
With regard to promoting science, the present grant system is apparently the worst possible—except for all others. Although far from perfect, the present system represents a workable framework of critically important peer review. Admittedly subject to all the human fallibilities, it requires refinement, rather than general reproach, because it provides a measure of safety, quality control, and needed economy to science research. —BRENDA WILMOTH LERNER
Further Reading
Düzgünes, N. "History Lesson." The Scientist 12, no. 6 (March 16, 1998): 8.
McGowan, J. J. "NIH Peer Review Must Change." Journal of NIH Research 4, no. 8 (August 1992).
Mohan-Ram, V. "NSF Criteria." Science (October 8, 1999).
———. "Grant Reviews, Part Two: Evolution of the Review Process at NIH and NSF." Science (September 10, 1999).
Rajan, T. V. "Would Harvey, Sulston, and Darwin Get Funded Today?" The Scientist 13, no. 9 (April 26, 1999).
Smaglik, P. "Nobelists Beat Adversity to Advance Science." The Scientist 11, no. 24 (December 8, 1997).
Swift, M. "Innovative Research and NIH Grant Review." Journal of NIH Research 8, no. 12 (1996): 18.
Is a grand unified theory of the fundamental forces within the reach of physicists today? Viewpoint: Yes, history, recent advances, and new technologies provide reasonable hope that a grand unified theory may be within reach. Viewpoint: No, a grand unified theory of the fundamental forces is not within the reach of physicists today. The belief that a small number of organizing principles underlies the immense variety of phenomena observed in the natural world has been a constant theme in the history of science. One of the first unification theories was introduced in the fifth century B.C. by Empedocles: a system of four elements— air, earth, fire, and water—with qualities of coldness or hotness and moistness or dryness. This system soon was supported by a geometrical model outlined in Plato’s Timaeus (ca. 360 B.C.), and extended to the medical realm with Hippocrates’ theory of the four humors. While the four-element theory bears little resemblance to the modern periodic table of chemical elements or its mathematical explanation in terms of quantum mechanics, it represents the same basic drive toward unification. It also provides an important lesson: unification theories, no matter how impressive, may be misleading. The human tendency to perceive patterns in the data of experience is strong, and may have something to do with the high value placed on unifying theories in science. Ancient Europeans pondered the night sky and grouped stars into mythical figures. Astronomers still sometimes make use of this ancient system of constellations to locate objects. Other cultures found other figures. The tendency to find patterns in random data has been well documented in psychological studies. The modern notions of force and the relationship between force and motion first were established in Isaac Newton’s Philosophiae Naturalis Principia Mathematica (Mathematical principles of natural philosophy), published in 1687. Newton’s three laws of motion and his force law for universal gravitation provided a single framework within which the motion of both terrestrial objects (footballs and cannonballs) and celestial objects (moons and planets) could be computed with unprecedented accuracy. Like the unification theories to follow, in addition to integrating two realms of phenomena previously thought to be governed by different laws, the Newtonian synthesis allowed predictions that were subsequently confirmed, including the occurrence of eclipses, cometary returns, and the existence of the planets Neptune and Pluto, which were discovered through their effect on the orbits of other planets.
The investigation of electric and magnetic effects in the nineteenth century led to a unified theory of the electromagnetic force in the set of equations developed by the Scottish physicist James Clerk Maxwell in 1864. Maxwell's equations allowed for solutions that described waves traveling through space at a speed determined by the basic constants of the electric and magnetic force laws; this speed is exactly the observed speed of light. Because the solutions were not restricted to any specific wavelength, Maxwell's equations stimulated the investigation of the entire electromagnetic spectrum from radio waves to gamma rays.
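The speed referred to here follows directly from the two constants in the electric and magnetic force laws. Stated as the standard textbook relation (added for reference; the formula is not part of the original essay):

\[
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx \frac{1}{\sqrt{(4\pi \times 10^{-7}\ \mathrm{T\,m/A})(8.85 \times 10^{-12}\ \mathrm{C^2/N\,m^2})}} \approx 3.00 \times 10^{8}\ \mathrm{m/s},
\]

which matches the measured speed of light, the agreement that identified light itself as an electromagnetic wave.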
Attempts to reconcile the wavelike character of light with Newtonian mechanics led to a belief in a "luminiferous aether" that permeated all space, but allowed the passage of material objects without resistance. A true unification did not occur until 1905, when Albert Einstein's special theory of relativity replaced the absolute notions of time and space underlying Newton's mechanics with postulates that allowed for different measurements by observers in motion with respect to each other. After Einstein's development of the special theory of relativity there was no need for the concept of an "aether."

Dmitry Mendeleyev's periodic table classifying the chemical elements was another impressive unification achieved in the nineteenth century. Like the syntheses of Newton and Maxwell, it made predictions, in this case of new chemical elements and their properties, that were soon confirmed. The discovery of the electron in 1897 by the English physicist Joseph John Thomson brought about the hope that the chemical properties of the elements could be explained from the structures of their atoms. Experimental studies of the atom's structure revealed that all of the positive charge in the atom was confined to a very tiny nucleus, with the negatively charged electrons distributed in space around it. Unfortunately, the known laws of electromagnetism and dynamics allowed no such arrangement of charges, moving or stationary, to be stable. Again the basic Newtonian mechanics had to be revised. The resulting quantum mechanics theory, which reached its definitive form around 1925, provided a workable picture of atomic structure and explained both chemical bonding and the characteristic optical spectra of the elements. The quantum mechanical synthesis made it clear that, outside of the nucleus, only two fundamental forces were at work. One, the gravitational force, was extremely weak, but nonetheless was the only active force in the universe on the astronomical scale. The other, the electromagnetic force, was responsible for chemical bonding, tension and compression in materials, friction, and the contact force between material bodies.

The two twentieth-century modifications of Newtonian mechanics—special relativity and quantum mechanics—had yet to be reconciled with each other. A major step in this direction was the development of the equation for the behavior of electrons by the British physicist Paul Dirac in 1928, about the same time as the more general nonrelativistic quantum theory was taking form. Dirac's equation predicted the existence of strange "negative energy" states for the electron—states that in fact described the positron, the antiparticle to the electron discovered in the early 1930s in the cosmic ray experiments of the U.S. physicist Carl Anderson. The discovery that an electron and a positron could annihilate each other and convert their entire mass-energy into electromagnetic radiation—while sufficiently energetic quanta of electromagnetic energy could, under the right circumstances, create an electron-positron pair—called for the development of a comprehensive theory of electrons and positrons. This theory, now called quantum electrodynamics, provided a picture of the photon, or quantum of electromagnetic energy, as the carrier of the electromagnetic force. Charged particles attract or repel each other by the exchange of photons, and the photon itself in a sense carries the potential for pair creation with it.
The basic model of material particles exerting forces on each other through the exchange of particles also provided the key to understanding the short-range forces between nuclear particles. In 1935 the Japanese physicist Hideki Yukawa proposed a family of medium-weight particles, or mesons, that carried the strong but short-range nuclear force between protons and neutrons. Accelerator experiments eventually yielded direct evidence for the meson, as well as the existence of particles much heavier than the proton that decayed quickly into protons and mesons. Eventually enough particles were found in the proton-meson or baryon family that it was proposed that all of them were constructed from even more elementary particles—a family of quarks that exchanged particles called gluons as they interacted with each other.
A third fundamental force, the weak nuclear force that is responsible for beta decay, also proved amenable to the force-carrier interpretation, although the force-carrying particles, the W+, W-, and Z0 bosons, were confirmed experimentally only in 1983. Since the electromagnetic, strong, and weak forces all appeared to work by the same mechanism, it was only natural for physicists to seek a further unification of these forces. A unified theory of the weak and electromagnetic forces was proposed by the U.S. physicist Sheldon Lee Glashow in 1961 and worked out in detail over the next decade. The unified electroweak theory required the existence of a new type of particle, the Higgs boson, which had not been found as of 2001. Several unified theories for the electroweak and strong forces have been put forward, and the interpretation of each of these within the force-carrier or standard model makes many physicists confident that a unified theory will be confirmed.

The stumbling block, then, to a true grand unified theory or "theory of everything" is the integration of the last fundamental force, gravity, with the standard model. While a force-carrying particle, the graviton, has been proposed, it has not been demonstrated to exist. Proposals that on theoretical grounds might be acceptable as a theory of everything (such as string theory) appear to require energies many orders of magnitude higher than can presently be confirmed. The question addressed in the following articles is whether a preponderance of evidence supporting such a grand unification theory can be gained by experiments that will be feasible over the next decade or two. —DONALD R. FRANCESCHETTI
KEY TERMS
ELECTROWEAK FORCE: A unification of the fundamental force of electromagnetism (that light is carried by quantum packets called photons, manifested by alternating fields of electricity and magnetism) and the weak nuclear force.
FIELD THEORY: A concept first advanced by the Scottish physicist James Clerk Maxwell as part of his development of the theory of electromagnetism to explain the manifestation of force at a distance without an intervening medium to transmit the force. Einstein's general relativity theory is also a field theory of gravity.
FUNDAMENTAL FORCES: The forces of electromagnetism (light), weak force, strong force, and gravity. Aptly named, the strong force is the strongest force but acts over only the distance of the atomic nucleus. In contrast, gravity is 10^39 times weaker than the strong force and acts at infinite distances.
GRAVITATIONAL FORCE: A force dependent upon mass and the distance between objects. The English physicist and mathematician Isaac Newton set out the classical theory of gravity in his Philosophiae Naturalis Principia Mathematica (1687). According to classical theory, gravitational force, always attractive between two objects, increases directly and proportionately with the mass of the objects, but is inversely proportional to the square of the distance between the objects (see the formula following this list). According to general relativity, gravity results from the bending of fused space-time. According to modern quantum theory, gravity is postulated to be carried by a particle called the graviton.
LOCAL GAUGE INVARIANCE: A concept that asserts that all field equations ultimately contain symmetries in space and time. Gauge theories depend on the difference between values as opposed to absolute values.
STRONG FORCE (OR STRONG INTERACTIONS): A force that binds quarks together to form protons and neutrons and holds together the electrically repelling positively charged protons within the atomic nucleus.
UNIFIED FIELD THEORY: A theory describing how a single set of particles and fields can become (or underlies) the observable fundamental forces of the electroweak force (electromagnetism and weak force unification) and the strong force.
VIRTUAL PARTICLES: Particles that are emitted and then reabsorbed by other particles involved in a force interaction (e.g., the exchange of virtual photons between charged particles involved in electromagnetic force interactions). Virtual particles do not exist outside of the force interaction exchange. Indeed, if manifested outside the virtual exchange process, they would no longer be virtual and would violate the laws of conservation of energy. Virtual particles do not need to follow the laws of the conservation of energy if the exchange of virtual particles takes place in such a manner that the product of the energy discrepancies (energy imbalance) and the duration of the imbalance remains within Planck's constant, as dictated by the Heisenberg uncertainty principle.
WEAK FORCE: The force that causes transmutations of certain atomic particles. For example, weak force interactions in beta decay change neutrons and protons, allowing carbon-14 to decay into nitrogen at a predictable rate, which is the basis of carbon-14 dating.
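Two of the sidebar entries above describe standard quantitative relations in words. Written out as formulas, purely as an editorial gloss and not part of the original sidebar, they read

\[
F = \frac{G\,m_1 m_2}{r^2}
\]

for the classical gravitational force between masses m_1 and m_2 separated by a distance r, and

\[
\Delta E\,\Delta t \lesssim \hbar
\]

for the energy-time condition under which a virtual particle carrying an energy imbalance \Delta E may exist for a duration \Delta t, where \hbar is Planck's constant divided by 2\pi.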
Viewpoint: Yes, history, recent advances, and new technologies provide reasonable hope that a grand unified theory may be within reach.
The ultimate step awaiting modern physicists is the unification of the theories that are
part of the quantum-based “standard model,” which encompasses electromagnetism, the weak force, and the strong force, with a quantum theory of gravity in a way that is consistent with the theory of general relativity. There are great difficulties and high mountains of inconsistency between quantum and relativity theory that may put a “theory of everything” far beyond our present grasp. However, physicists may soon be able to take an important step toward this ultimate goal by advancing a grand unified theory
that, excepting quantum gravity, would unite the remaining fundamental forces. Although a theory of everything is tantalizingly beyond our present grasp, it may well be within the reach of the next few generations. Advances in the last half of the twentieth century, particularly in the work of the Pakistani physicist Abdus Salam and the U.S. physicist Steven Weinberg, already have provided a base camp for the assault on a grand unified theory. Their work has united two of the fundamental forces—electromagnetism and the weak force—into electroweak theory. Such unifications are not trivial mathematical or rhetorical flourishes; they evidence an unswerving trail back toward the beginning of time and the creation of the universe in the big bang. The electroweak unification reveals that, at higher levels of energy (e.g., the energies associated with the big bang), the forces of electromagnetism and the weak force are one and the same. It is only at the present state of the universe—far cooler and less dense—that the forces take on the characteristic differences of electromagnetism and the weak force.

Physics as a History of Unification
The history of physics, especially following the publication in 1687 of Isaac Newton's Philosophiae Naturalis Principia Mathematica (Mathematical principles of natural philosophy), reveals a strong tendency toward the unification of theories explaining different aspects of the universe. The work of Newton, an English physicist and mathematician, advanced that of the Italian astronomer and physicist Galileo Galilei and unified the theories of celestial mechanics with empirically testable theories of gravity into a theory of universal gravitation. By asserting a mutual gravitational attraction between all particles in the universe, Newton's theory brought a mathematical and scientific unification to the cosmos for the first time.
In the early years of the twentieth century, the German-American physicist Albert Einstein unified Newton’s universal gravitation with theories of space-time geometry (made possible by revolutionary advances in nineteenth-century mathematics) to assert his general theory of relativity in 1916.
At the dawn of the twenty-first century, general relativity remains the only theory describing the gravitational force. This is significant because the other great theory explaining the cosmos—quantum theory and the resulting standard model—does not yet include a theory of gravity. The standard model presently accounts for the strong and electroweak forces. Because quantum theory and relativity theory are inconsistent and mutually exclusive on some key postulates, their reconciliation and fusion to unify all four underlying forces has occupied the bulk of theoretical physics during the twentieth century.

Development of the Standard Model
The standard model of particle physics is the result of more than a century of theoretical unifications. The inclusive sweep toward the standard model began with James Clerk Maxwell's work of 1864, in which he combined the empirical laws of electricity and magnetism into a single set of equations. These included as solutions electromagnetic waves that traveled through space and interacted with matter exactly as light does. The theory of electromagnetism removed the need for the concept of an ether (a medium in the vacuum of space akin to the medium of water through which water waves travel), and provided an elegant elaboration of light in an electromagnetic spectrum. Indeed the electromagnetic spectrum, ranging from radio waves to x rays and gamma rays, continues to provide the most accessible and profound evidence of the unification of natural phenomena. Radio waves, microwaves, infrared, the visible light of our everyday existence (including the colors of the rainbow), ultraviolet light, x rays, and gamma rays are all forms of light that differ only in terms of wavelength and frequency.

Abdus Salam (The Bettmann Archive/Newsphotos, Inc. Reproduced by permission.)
Advances in quantum theory—made possible by the work of the German physicist Max Planck and Einstein, and subsequently in the SCIENCE
1920s by the Austrian physicist Erwin Schrödinger and the German physicist Werner Heisenberg—established the photon or light quantum as the carrier particle (boson) of the electromagnetic force. In the 1940s and 1950s, the theory of electromagnetism was reconciled with quantum theory through the independent work of the U.S. physicist Richard Feynman, the U.S. physicist Julian Schwinger, and the Japanese physicist Shin’ichiro Tomonaga. The reconciled theory was termed quantum electrodynamics (QED), and asserts that the particle vacuum consists of electron-positron fields. Electron-positron pairs (positively charged electron antiparticles) manifest themselves when photons interact with these fields. The QED theory describes and accurately predicts the subsequent interactions of these electrons, positrons, and photons. According to QED, electromagnetic force-carrying photons, unlike solid particles, can exist as virtual particles constantly exchanged between charged particles such as electrons. The theory also shows that the forces of electricity stem from the common exchange of virtual photons, and that only under special circumstances do photons become observable as light.
The development of a weak force theory (or weak interaction theory) built upon theories describing the nature and interactions of beta particles (specifically beta decay) and neutrinos. The existence of this different set of force-carrying particles (termed W+, W-, and Z0 bosons) was verified in 1983 by the Italian physicist Carlo Rubbia and the Dutch physicist Simon van der Meer at CERN (the European Organization for Nuclear Research in Geneva, Switzerland). Weak force interactions such as those associated with radioactive decay also produce neutrinos, whose existence was first postulated by the Italian-American atomic physicist Enrico Fermi in the 1930s. Fermi spurred the experimental quest and discovery of the neutrino by asserting that neutrinos must exist in order to explain what would otherwise be a violation of the law of conservation of energy in beta decay.
In 1967 the theories of electromagnetism and weak forces were unified by Weinberg, Salam, and the U.S. physicist Sheldon Lee Glashow. The Glashow-Weinberg-Salam theory states that both forces are derived from a single underlying electroweak force. Accordingly, photons, W+, W-, and Z0 particles are lower-energy manifestations with a common origin. Electroweak force interactions, predicted by electroweak theory, have been observed and verified by experiments in the largest particle accelerators. The best conceptual example, often advanced by Weinberg, the British physicist Stephen Hawking, and others, likens the higher-energy electroweak state to a ball spinning
rapidly around the top track of a whirling roulette wheel. At this high-energy state the ball takes on no particular number; it is only when the energy drops and the wheel slows that the ball drops into a characteristic state described as "12" or "16." At the higher-energy states achievable in large particle accelerators, photons, W+, W-, and Z0 particles lose their individual characteristics. Correspondingly, weak force and electromagnetic interactions begin to act the same and are unified into an electroweak force. This concept of the electroweak force is joined by the strong force theory (or strong interaction theory)—a theory derived from theories and data describing the existence and behavior of protons, neutrons, and pions—to form the standard model.

Unifying the Electroweak, Strong, and Gravitational Forces
Following research lines similar to the development of the QED and electroweak theories, physicists at the beginning of the twenty-first century are seeking a unified theory of the electroweak and strong forces that can be reconciled with relativity theory. A very large roulette wheel in the form of a large particle accelerator will be needed to achieve the energy levels that mathematical calculations predict are required to fuse the electroweak and strong forces. Although physicists cannot currently construct accelerators that are capable of achieving such high energy levels, they may be able to advance and verify a unified theory based upon interactions and phenomena that are observable at realistically achievable energy levels.
Other unified theories, including quantum chromodynamics (a quantum field theory of strong force interactions), establish an undeniable trend toward a unification of forces that is consistent with the big bang theory. Our experience with the four fundamental forces breaks down with the increasing temperature and pressures of a condensed universe. Within the first few millionths of a second of the big bang, these forces evolved from underlying unified forces. Following the ascending energy tail simply retraces the path of evolution of these forces. Precise and quantitative observations of particles at the achievable energy levels of our current large accelerators may yield evidence of a grand unified theory, by leading to explanations for particle behavior that is inconsistent with the standard model. For example, recent experiments at the Fermi National Accelerator Laboratory in Batavia, Illinois, have shown inconsistencies in the rare interactions of neutrinos. By extending existing data, it is already possible to argue for string and supersymmetry theories. Assuming that grand unified forces and fields do exist, a number of exotic particles may provide evidence of these theories. Most impor-
tantly, if these exotic particles exist in the low ranges of their predicted energies, physicists may be able to detect those particles in large-diameter accelerators. Moreover, the standard model has been repeatedly confirmed by experiment. At energies up to 200 gigaelectronvolts (GeV) the model accounts for all the observed particles, and there is no reason to doubt that the as-yet-undiscovered particles predicted by it will eventually turn up. The Dutch physicist Gerardus ’t Hooft and the Dutch-American physicist Martinus J. G. Veltman have been studying unification energies and problems dealing with the reconciliation of quantum and relativity theory. Their work has allowed very precise and accurate mathematical calculations of the energy levels at which particles may exist. Analyses by ’t Hooft and Veltman indicate that the Higgs particle—which is needed to verify a critical part of gauge theory (assertions concerning the similarity of space) and is an important milestone on the path to a grand unified theory—may be observable at the Large Hadron Collider set to be working at CERN by 2005. Along with advances in technology and the expected experimental harvest of new particles, there are also theoretical advances, such as the loop quantum gravity hypothesis, that may provide an alternative to current string-theory unifications of quantum theory with general relativity. Regardless, arguments against an achievable grand unified theory fail to look back upon the long path of theoretical unification in physics, and simply make the eventual conquest of a theory of everything more tantalizing and challenging. —BRENDA WILMOTH LERNER
Viewpoint: No, a grand unified theory of the fundamental forces is not within the reach of physicists today.
Steven Weinberg (Photograph by Kevin Fleming. CORBIS. Reproduced by permission.)
As traditionally used by physicists, a grand unified theory is a theory that would reconcile the electroweak force (the unified combination of electricity, magnetism, and the weak nuclear force) and the strong force (the force that binds quarks within the atomic nucleus together). A grand unified theory that could subsequently incorporate gravitational theory would become the ultimate unified theory, often referred to by physicists as a “theory of everything.” The Standard Model Quantum field theory (how subatomic particles interact and exert forces on one another) is part of the so-called standard model of atomic particles, forces, and interactions developed by the U.S. theoretical physicist Murray Gell-Mann and others in the latter half of the twentieth century (i.e., the standard model is a field theory). Quantum field theory remains an area of intense theoretical and experimental research. Until field theory is itself fully reconciled with relativity theory, however, it is impossible to achieve the type of synthesis and reconciliation accomplished by other partial unification theories, such as the unification of electromagnetism and relativity found in quantum electrodynamics (QED theory), quantum chromodynamics (QCD theory), or the unification of electromagnetism with the weak force (which was achieved by the U.S. physicist Sheldon Lee Glashow, Pakistani physicist Abdus Salam, and U.S. physicist Steven Weinberg in the advancement of electroweak theory).
There are both technological and theoretical obstacles to a reachable grand unified theory of physics. Fundamentally, one of the major theoretical hurdles to a reachable synthesis of current theories of particles and force interactions into a grand unification theory is the need to reconcile the evolving principles of quantum theory with the principles of general relativity that were advanced by the German-American physicist Albert Einstein in 1916. This synthesis is made difficult because the unification of quantum mechanics (a unification of the laws of chemistry with atomic physics) with special relativity to form a complete quantum field theory consistent with observable data is itself not yet complete. Thus, physicists are at least a step away theoretically from truly attempting a grand unified theory of the electroweak and strong forces.
Albert Einstein (The Bettmann Archive/Corbis-Bettmann. Reproduced by permission.)
According to modern field theory and the standard model, particles are manifestations of fields, and particles interact (exert forces) through fields. For every particle (e.g., quarks and leptons—one form of a lepton is the electron), there must be an associated field. Forces between particles result from the exchange of particles that are termed “virtual particles.” Electromagnetism depends upon the exchange of photons (QED theory). The weak force depends upon the exchange of W+, W-, and Z0 particles. Eight different forms of gluons are exchanged in a gluon field to produce the strong force. Technological Constraints The technological barriers to a unified theory are a consequence of the tremendous energies required to verify the existence of the particles predicted by the theory. In essence, experimental physicists are called upon to recreate the conditions of the universe that existed during the first few millionths of a second of the big bang—when the universe was tremendously hot, dense, and therefore energetic. Experiments at high energy levels have revealed the existence of a number of new particles, but there is a seeming chaos to the mass of particles unless explained by the presence of other, nondirectional fields termed “scalar fields.” The fields must be scalar, meaning directionless, or else space itself would seem to be directional—a fact that would contradict many physics experiments that establish the uniformity or nondirectionality of space. The particles sought after to verify the existence of the scalar fields are Higgs particles. A Higgs particle is theorized to be the manifestation or quantum particle of the scalar field, much as the photon is the manifestation or quantum particle of the electromagnetic field. Higgs particles must have masses comparable to the mass equivalent to 175 gigaelectronvolts (GeV)—the mass-energy contained in the heaviest known particle, the top quark. This is already about 175 times the mass energy of the proton. According to some mathematical models, the Higgs particles may be hundreds of times more massive than protons and much more massive than the top quark.
Fully exploring and verifying the existence of scalar fields, and the particles associated with them, will require accelerating particles to tremendous energies. This is the research goal of high-energy physicists, especially those working with larger accelerators such as those at the Fermi National Accelerator Laboratory in Batavia, Illinois, and CERN (European Organization for Nuclear Research) in Geneva, Switzerland. If Higgs particles exist at the lower end of the mass energy scale (“low end” only in comparison to the energies needed for strong force fusion, but still 350,000 times greater than the mass energy of the electron), they may be detectable at the accelerators presently conducting high-energy physics experiments. If greater energies are required, the near-term development of a grand unified theory will depend upon the completion and results of the Large Hadron Collider at CERN. Regardless, the energies required to identify the particles associated with a unified field required by a grand unified theory are greater still. Most mathematical calculations involving quantum fields indicate that unification of the fields may require an energy of 10¹⁶ GeV. Some models allow the additional fusion of the gravitational force at 10¹⁸ GeV. Supersymmetry and technicolor force theories (a variation of strong force theory) may offer a solution to the hierarchy problem of increasing masses and energies because they may permit the discovery of particles that are a manifestation of fused strong force at much lower masses and energies than 10¹⁶ GeV. However, these theoretical refinements, if valid, still predict particles too massive and energetic to create and thus are unverifiable at our present levels of technology.
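The energy scales quoted above can be put side by side with a few lines of arithmetic. The figures used below (electron and proton rest energies, a representative superconducting bending-magnet field, the textbook relation between beam momentum, field, and ring radius) are standard reference values rather than numbers taken from the essay, and the ring-size estimate is only an order-of-magnitude check of the claim, made in the next paragraph, that an accelerator reaching grand-unification energies would dwarf the solar system.

```python
# Rough comparison of the energy scales mentioned in the text (all in GeV).
electron_rest = 0.511e-3   # electron rest energy
proton_rest = 0.938        # proton rest energy
higgs_scale = 175.0        # top-quark mass-energy quoted above
gut_scale = 1e16           # unification energy quoted above

print(f"Higgs scale / electron rest energy: {higgs_scale / electron_rest:,.0f}")  # ~342,000
print(f"GUT scale / Higgs scale: {gut_scale / higgs_scale:.1e}")

# For an ultrarelativistic particle in a circular machine, momentum, magnetic
# field, and bending radius are related (in convenient units) by
#   p[GeV/c] ~ 0.3 * B[T] * r[m].
bending_field = 10.0  # tesla, an optimistic superconducting magnet (assumed)
radius_m = gut_scale / (0.3 * bending_field)
print(f"Bending radius needed for 1e16 GeV at {bending_field} T: {radius_m:.1e} m")
# ~3e15 m, hundreds of times the radius of Pluto's orbit (~6e12 m).
```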
The higher energies needed are not simply a question of investing more time and money in building larger accelerators. Using our present technologies, the energy levels achievable by a particle accelerator are proportional to the size of the accelerator (specifically its diameter). Alas, achieving the energy levels required to find the particles of a grand unified force would require an accelerator larger than our entire solar system. Although there are those who argue that a grand unified theory is within our collective theoretical grasp because of the success of QED, QCD, and electroweak theory, the technological constraints of energy production mean that any grand unified theory will be based upon evidence derived from interactions or anomalies in the standard model that are observable at our much more modestly attainable energy levels. In contrast, the partial unified theory obtained by reconciling electromagnetism with the weak force (a force that acts at the subatomic level to transform quarks and other subatomic particles in processes such as beta decay) is largely testable by the energies achievable by the largest particle accelerators. When quantum electrodynamic (QED) theory was made more reliable by a process termed “renormalization” (allowing positive infinities to cancel out negative infinities) by the U.S. physicist Richard Feynman, U.S. physicist Julian Schwinger, and Japanese physicist Shin’ichiro Tomonaga, the theory reconciling quantum theory with relativity theory advanced by QED was highly testable. Accordingly, there is a valid question as to whether we have, or can even envision, the type of technology and engineering that would allow testing of a grand unified theory. Theoretical Constraints There are also theoretical constraints. Perhaps more important than technological limitations, there are fundamental differences in the theoretical underpinnings of how the fundamental forces of nature (i.e., the electromagnetic, weak, strong, and gravitational forces) are depicted by quantum theory and relativity theory.
Quantum theory was principally developed during the first half of the twentieth century through the independent work on various parts of the theory by the German physicist Max Planck, Danish physicist Niels Bohr, Austrian physicist Erwin Schrödinger, English physicist Paul Dirac, and German physicist Werner Heisenberg. Quantum mechanics fully describes wave-particle duality and the phenomenon of superposition in terms of probabilities. Quantum field theory describes and encompasses virtual particles and renormalization. In contrast, special relativity describes space-time geometry and the relativistic effects of different inertial reference frames (i.e., the relativity of describing motion) and general relativity describes the nature of gravity. General relativity fuses the dimensions of space and time. The motion of bodies under apparent gravitational force is explained by the assertion that, in the vicinity of mass, space-time curves. The more massive the body, the greater the curvature or force of gravity. Although both quantum and relativity theories work extremely well in explaining the universe at the quantum and cosmic levels respectively, the theories themselves are fundamentally incompatible. Avoiding the mathematical complexities, a fair simplification of the fundamental incompatibility between quantum theory and relativity theory may be found in the difference between the two theories with respect to the nature of the gravitational force. Quantum theory depicts a quantum field with a carrier particle for the gravitational force that, although not yet discovered, is termed a “graviton.” As a force carrier particle, the graviton is analogous to the photon, which acts as the boson or carrier of electromagnetism (i.e., light). In stark contrast, general relativity theory does away with the need for the graviton by depicting gravity as a consequence of the warping or bending of space-time by matter (or, more specifically, mass).
The attempt at a fusion of gravitation with electromagnetism is not new. In fact, during the first half of the twentieth century, Einstein devoted considerable time to an attempted unified field theory that would describe the electromagnetic field and the gravitational field as different manifestations derived from a single unified field. Einstein failed, and at the beginning of the twenty-first century there remains no empirical basis for a quantum explanation of gravity. Although a quantum explanation of gravity is not required by a grand unification theory that seeks only to reconcile electroweak and strong forces, it is important to acknowledge that the unification of force and particle theories embraced by the standard model is not yet complete, and that gravity cannot be absolutely ruled out of the advancement of future unified theories. The problem is that it may not be possible to rule out gravity and to develop a unified theory of electroweak and strong forces that ignores gravity. Moreover, although electromagnetism and weak interactions coherently combine to form electroweak theory, strong force actions among quarks (the fundamental particles comprising neutrons and protons) are not fully mathematically reconcilable with electroweak theory.
M theory is a fusion of various string theories that postulate that particles exist as different vibrations of an underlying fundamental string entity.
An energy of 10¹⁶ GeV is still required to open the six or more extra dimensions required by M theory. The development of relativity theory was guided by the development of gravitation theory, and the development of the quantum standard model was facilitated by the gauge theory of electrodynamics and the assertion of local gauge invariance. As Weinberg points out in his elegant essay on the achievability of a grand unified theory (1999), in essence, string and M theories are not so guided—in fact, they require us to guess at the physical realities of extra dimensions that, by definition, we cannot access. This only compounds the already profound difficulties of verifying principles of quantum and relativity theory grounded in accessible space-time dimensions. Conclusion Given our modest technologies, the only way to verify a grand unified theory will be to advance a theory that accounts for all measurable values and constants. This approach imparts a testability/disprovability defect into such a theory. In keeping with the principle that a scientific theory must account for all data as well as be able to make accurate predictions about the nature and interactions of particles, such a unified theory may be as shaky as the “ether” of physics before relativity. The best we may be able to hope for in the near future is a unified theory that seems pretty good, but will fall short of the verification of relativity and quantum theory as they now exist.
All the hope for a reachable unified theory depends upon whether the ultimate unified field is consistent and in accord with relativity theory (e.g., renormalizable). Recent experiments indicating a possible small mass for the neutrino, if verified, may allow us insight into whether or not nonrenormalizable interactions extend beyond the gravitational force. Nature may dictate that the gravitational force remains nonrenormalizable. If that is the ultimate truth, then the quest for unification must come to a dead end. The question is whether that dead end is past the point where a grand unified theory might be obtainable. Predicting the proximity of any major theoretical advance is usually folly, but the chances are very good that any formulation of a unified theory will prove elusive, at least through the first half of the twenty-first century. The irreconcilability of current quantum theory with general relativity theory seems clear. Those who argue that a grand unified theory is reachable in the near future, however, hang their theoretical hat on the fact that quantum theory, with its dependence on particles and quantum fields to manifest forces, does not require a quantum theory of gravity. Nevertheless, they are also
dangerously and somewhat blindly counting out any role for gravity at the energy levels required by a grand unified theory. —K. LEE LERNER
Further Reading
Bohr, Niels. The Unity of Knowledge. Garden City, N.Y.: Doubleday, 1955.
Davies, Paul. Superforce: The Search for a Grand Unified Theory of Nature. New York: Simon and Schuster, 1984.
Feynman, Richard. The Character of Physical Law. Cambridge, Mass.: M.I.T. Press, 1965.
———. QED: The Strange Theory of Light and Matter. Princeton, N.J.: Princeton University Press, 1985.
———, and Steven Weinberg. Elementary Particles and the Laws of Physics: The 1986 Dirac Memorial Lectures. New York: Cambridge University Press, 1987.
Georgi, H., and S. L. Glashow. “Unity of All Elementary Particle Forces.” Physical Review Letters 32 (1974): 438–41.
Greene, Brian. The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory. New York: W. W. Norton, 1999.
Hawking, Stephen W. A Brief History of Time: From the Big Bang to Black Holes. New York: Bantam Books, 1988.
Howard, Don, and John Stachel, eds. Einstein and the History of General Relativity: Based on the Proceedings of the 1986 Osgood Hill Conference, North Andover, Massachusetts, 8–11 May 1986. Boston: Birkhäuser, 1989.
Hoyle, Fred. Astronomy. New York: Crescent Books, 1962.
Kobayashi, M., and T. Maskawa. “CP Violation in the Renormalizable Theory of Weak Interaction.” Progress of Theoretical Physics 49 (1973): 652–7.
Pais, Abraham. “Subtle Is the Lord—”: The Science and the Life of Albert Einstein. New York: Oxford University Press, 1982.
Rees, Martin, Remo Ruffini, and John Archibald Wheeler. Topics in Astrophysics and Space Physics. Vol. 10, Black Holes, Gravitational Waves, and Cosmology: An Introduction to Current Research. New York: Gordon and Breach, 1974.
Stachel, John J. “How Einstein Discovered General Relativity: A Historical Tale with Some Contemporary Morals Regarding General Relativity and Gravitation.” In General Relativity and Gravitation: Proceedings of the Eleventh International Conference on General Relativity and Gravitation, Stockholm, 6–12 July 1986, ed. M. A. H. MacCallum, 200–08. New York: Cambridge University Press, 1987.
Weinberg, Steven. Dreams of a Final Theory. New York: Pantheon Books, 1992.
———. “A Unified Physics by 2050?” Scientific American December 1999, 68–75.
Can radiation waste from fission reactors be safely stored?
Viewpoint: Yes, radiation waste from fission reactors can be safely stored using existing technical expertise and drawing on the experience of test facilities that are already in operation. Viewpoint: No, radiation waste from fission reactors cannot be safely stored, given the ever-present danger of human error and natural catastrophe during the thousands of years in which the waste must be stored.
The development of human societies has been characterized by an ever-increasing demand for energy. Until the industrial revolution this meant the muscle power of humans and animals, augmented by wind and flowing water for limited applications such as sailing and milling. Fuels—wood and sometimes coal—were burned for warmth but not for mechanical work. With the invention of the steam engine, the energy content of wood and coal became available for such work, and the consumption of fuel dramatically increased. Forests were decimated and coal mines dug in great numbers. Eventually petroleum and natural gas deposits augmented coal as energy reservoirs, as the development of electrical technology put numerous horsepower at the disposal of the ordinary citizen in the industrialized world. However, the stores of these fuels would eventually not be enough to keep up with demand, and pollution of the air and water with the wastes of combustion would threaten the environment. In the 1930s, physicists realized that the fission of uranium nuclei could be stimulated by neutron bombardment, and that the fission process released neutrons, which could stimulate additional fission events. The possibility of a self-sustaining chain reaction releasing immense amounts of energy was thus at hand. This discovery occurred, however, as the world headed towards World War II. In August 1939, a month before Germany invaded Poland, a number of leading physicists prevailed upon Albert Einstein to write a letter to President Franklin D. Roosevelt, recommending that the United States government fund nuclear research. This letter led to the development of the atomic bomb and its use as a weapon of war. Following victory over Japan in 1945 and a period of debate between the proponents of military control over atomic energy and those favoring civilian control, the United States government set up the Atomic Energy Commission. New reactors were built: for research, to generate isotopes for medical use, and to generate electrical power. An “Operation Plowshare” was established to research civilian uses of nuclear explosives. Nuclear power plants were initially considered quite attractive as an energy source. They produced none of the air pollution inherent in the burning of fossil fuels, and the energy produced per pound of fuel was a million times greater. However, nuclear power had both political and practical drawbacks.
There are always political issues associated with nuclear power. Many Americans are intimidated by anything “nuclear” or radioactive. The cold war, with its nuclear bomb tests, air-raid drills in schools, and discussion of the
need for shelters against radioactive “fallout,” certainly did not make for peace of mind. But elements of overreaction can also be noted. When engineers developed a way of using the well-established technique of nuclear magnetic resonance (NMR) to image the soft tissues of the human body without exposure to x rays, hospitals soon found it necessary to rename the NMR imaging technique as “magnetic resonance imaging (MRI)” to reduce patient anxiety.
A nuclear waste storage site in the Netherlands. (Photograph by Yann Arthus-Bertrand. CORBIS. Reproduced by permission.)
The practical drawbacks to nuclear power include the possibility of accident, the possible theft of nuclear materials or sabotage by terrorists, and the problem of waste disposal and storage. Accidents and theft can in principle be prevented by proper diligence and an engineering “fail-safe” approach. The problem of waste storage and disposal has proven more difficult to resolve. Reactor fuel elements eventually become unsuitable for continued use in the reactor but remain highly radioactive, while other shorter-lived radioactive waste is produced as a byproduct of reactor operation and fuel extraction. Radioactive wastes are classified as low-level waste (LLW), intermediate-level waste (ILW), or high-level waste (HLW), depending on the time it will take for the radiation emitted to decay to background levels. This time can be calculated from the half-lives of the radioactive elements contained in the waste. Half-lives can vary from fractions of a second to millions of years. Low-level wastes contain short half-lived elements and typically decay to background levels in less than 100 years. Intermediate-level waste requires a somewhat longer storage period, but both LLW and ILW, which constitute the greatest volume of waste, can be buried in sites that can be trusted to be stable for a century or so.
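The decay arithmetic behind this classification is easy to make explicit. The short sketch below is a generic illustration, not part of the essay: the half-lives used are standard reference values for two common reactor by-products, and the point is simply that after n half-lives only (1/2)^n of a radionuclide remains, which is why waste dominated by short-lived isotopes fades toward background within decades while long-lived isotopes persist over geological spans of time.

```python
def fraction_remaining(elapsed_years, half_life_years):
    """Fraction of a radionuclide left after elapsed_years, given its half-life."""
    return 0.5 ** (elapsed_years / half_life_years)

# Illustrative half-lives in years (standard reference values, not from the essay).
isotopes = {"cobalt-60": 5.27, "cesium-137": 30.2}

for name, t_half in isotopes.items():
    left_100 = fraction_remaining(100, t_half)
    left_500 = fraction_remaining(500, t_half)
    print(name, "after 100 years:", f"{left_100:.1e}", "- after 500 years:", f"{left_500:.1e}")
```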
High-level wastes, which include spent fuel rods, will remain radioactive for hundreds of thousands of years. Furthermore, uranium and plutonium are two of the most toxic substances known, so that a discharge of any amount into the environment presents chemical health risks in addition to those risks associated with radiation. The safe storage of HLW in a remote location would require that the cumulative risk of release from even highly improbable events remain small over time periods of hundreds of thousands of years. The risks involved in shipping HLW to this long-term storage would also have to be eliminated.
Essentially all the HLW generated by nuclear power plants in the United States is stored at the sites at which it is generated, and storage space is running out. The notion of deep burial in geological formations believed to be stable has some support, but responsible opponents point to the difficulty of assuring that the waste will not be released by earthquakes or seepage into groundwater. There is also the possibility that radiation released by the waste will, given enough time, compromise the physical integrity of the disposal site. However, even if nuclear power generation is abandoned, enough HLW already exists that there is no alternative to finding an adequate long-term storage or disposal method. The dispute over what that method is will be a major issue confronting twenty-first century science. —DONALD R. FRANCESCHETTI
Viewpoint: Yes, radiation waste from fission reactors can be safely stored using existing technical expertise and drawing on the experience of test facilities that are already in operation.
Radioactive waste from fission reactors is already being stored safely all over the world. Although the majority of radioactive waste (radwaste) is buried, a small amount of the most dangerous waste is currently stored by the reactors that produce it, and it is the permanent disposal of this waste that is currently causing concern. If politics were left out of the equation, then the safe permanent disposal of all radioactive waste would be possible today. The scientific knowledge and technical know-how already exist, the plans have been on the drawing board for many years, and test facilities have been running in a number of countries. However, the political will to make legislative changes, and the public acceptance of such disposal sites, are necessary before these plans can be put into operation.
Storage Versus Disposal It is important to note the difference between the temporary storage of radwaste and the permanent disposal of such waste. Most reactor-made radwaste is first stored on-site at the reactor facilities. The vast bulk of radioactive waste, about 95% by volume, is stored only for a few months or years before being buried. The remaining radwaste, which mainly consists of spent fuel rods, is stored in temporary facilities. This waste has been mounting up for decades, as there are currently no permanent disposal sites in operation. Spent fuel rods are the most active and “hot” waste. Within their metal cladding, they contain numerous fuel pellets, each about the size of a fingertip; each pellet contains potential energy equivalent to 1,780 lb (808 kg) of coal, or 149 gal (565 L) of oil. Because of their small size, 40 years’ worth of spent fuel rods in the United States is only about 44,000 tons (40,000 t), and would cover an area the size of a football field to a depth of about 5 yd (4.5 m).
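As a rough consistency check on the pellet comparison above, both quantities can be converted to joules. The heating values used below (roughly 24 MJ per kilogram for coal and 37 MJ per liter for fuel oil) are generic reference figures assumed for the sketch, not numbers given in the essay; the masses and volumes are the ones quoted above.

```python
# Sanity check: do 1,780 lb of coal and 149 gal of oil represent similar energy?
coal_mass_kg = 808          # from the text (1,780 lb)
oil_volume_l = 565          # from the text (149 gal)

coal_energy_density = 24e6  # J/kg, a typical value for coal (assumed)
oil_energy_density = 37e6   # J/L, a typical value for fuel oil (assumed)

coal_joules = coal_mass_kg * coal_energy_density
oil_joules = oil_volume_l * oil_energy_density
print(f"coal: {coal_joules:.1e} J, oil: {oil_joules:.1e} J")  # both roughly 2e10 J
```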
Spent fuel rods are classified as high-level waste (HLW), for despite their small size they are very radioactive, produce a large amount of heat, and remain dangerous for 100,000 years before their activity finally falls to background levels.
The most common type of radwaste is low-level waste (LLW), consisting of solid material that has levels of radioactivity that decay to background levels in under 500 years. Ninety-five percent of LLW decays to background levels within 100 years or less. Large volumes of LLW are produced in the mining of radioactive material, the storage and transport of such goods, and even by being in close proximity to radioactive sources. LLW can include anything from equipment used in a nuclear power plant to the seemingly harmless contents of a wastepaper basket from a laboratory. Practically, for radwaste disposal purposes, intermediate-level waste (ILW) and LLW are grouped together, while the more dangerous HLW is treated separately.
Radwaste disposal aims to keep the waste secure until such time as its emissions reach background levels. For LLW and ILW, the methods are simple, cheap, and very effective, with the vast majority of waste being disposed of in “shallow” burial sites. While the term “shallow” may give some cause to worry, it is only shallow in comparison to the proposed deep disposal methods for HLW discussed in the following section, and can actually be up to 110 yd (100 m) below the surface. Simple trenches are used for some types of very short-lived LLW, and engineered trenches are created for other waste. An engineered trench may be many tens of yards deep and lined with a buffer material such as clay or concrete. Containers of waste, themselves highly engineered, are then placed in the trench, and the trench can then be backfilled with a buffer material, adding yet another level of protection. There are also other, less common, methods for disposing of LLW, such as concrete canisters, vaults, and bunkers. The sites for burial must be away from natural resources, isolated from water, and in areas of low geological activity. Such sites come under strict national and international controls, and are constantly monitored.
High-Level Waste Disposal High-level waste does not leave the nuclear power plant in which it is produced. The spent fuel is stored in steel-lined concrete vaults filled with water, which cools the fuel and never leaves the plant itself, to avoid any possible contamination. However, such monitored storage is not an acceptable long-term solution, and nuclear power plants
were not designed to be the final resting places for such waste. Because of its extreme longevity, HLW requires a disposal solution that will remain safe and secure without human monitoring. Many options have been suggested, but all have their problems.
KEY TERMS
BACKGROUND RADIATION: The natural amount of radiation that surrounds us every day. It comes from a variety of sources, but mostly from natural sources in Earth left over from its creation, and also from space.
GEOLOGICAL DISPERSAL: Method of disposing of radioactive waste by burying it deep within a stable geological structure, such as a mountain, so that it is removed from the human environment for a geological timespan, allowing it to decay to background levels.
HALF-LIFE: Time for a given radioisotope in which half the radioactive nuclei in any sample will decay. After two half-lives, there will be one-fourth of the radioactive nuclei in the original sample, after three half-lives, one-eighth the original nuclei, and so on.
HIGH-LEVEL WASTE: Highly radioactive material resulting from the reprocessing of spent nuclear fuel.
RADIOACTIVITY: Spontaneous disintegration of an unstable atomic nucleus.
RADWASTE: Commonly used abbreviation of the phrase “radioactive waste”; applied to everything from spent fuel rods to train carriages used in the transportation of radioactive material.
REM: Derived from the term “Roentgen equivalent man,” a rem is a radiation unit applied to humans. Specifically, a rem is the dosage in rads that will cause the same amount of damage to a human body as does one rad of x-rays or one rad of gamma rays. The unit allows health physicists to deal with the risks of different kinds of radiation on a common footing.
The idea of launching radwaste into space, completely outside Earth’s environment, is in some ways a very attractive option. The waste would have little or no effect on space, which is already a radioactive environment. However, the possibility of a launch accident such as the 1986 Challenger space shuttle disaster highlights the key problem with such a method. In addition, even if a completely safe launch method could be devised, the cost of space disposal would be astronomical. Many early efforts to dispose of HLW focused on the notion of reprocessing spent fuel rods so they could be used again. However, reprocessing proved to be very expensive and produced vast quantities of highly toxic liquid waste, which was even more problematic than the used fuel rods. Although a number of reprocessing plants were built, and a few operated, their use has more to do with politics than with a sensible solution to HLW disposal. Nuclear incineration, a process that can turn HLW into LLW, suffers from similar cost and contamination problems. Some countries, including Sweden and Canada, have suggested disposing of HLW in ice formations. One advantage of this method is that the heat-producing radwaste would sink itself deeper. However, the costs of transporting the waste to such remote areas would be prohibitive. In addition, there is no scientific data concerning the extreme long-term behaviour of ice formations, and there are a number of legal constraints.
Sea burial of radwaste is one of the few proposed methods for which there is a large amount of data, mainly thanks to the ill-advised, and sometimes illegal, sea dumping of the past. Sea burial relies not only on containment, but also on the diluting power of the oceans to quickly reduce any leaks to background levels. The data show that sea burial is a very effective method that provides safety to the human environment and has only a localised effect on the immediate surroundings. However, all nuclear waste–producing nations have signed international agreements banning the use of the sea for radwaste disposal. In part this is due to the reckless early dumping done before the impact had been studied, and also because of political pressure from non-nuclear waste–producing countries.
A low-level nuclear waste site in the state of Washington. (Photograph by Roger Ressmeyer. CORBIS. Reproduced by permission.)
Geological Disposal of HLW The most promising potential solution is deep geological disposal, which basically means burying HLW deep underground. The aim of deep disposal is to isolate the HLW from the environment for as long as possible, partly by mimicking natural processes, and also by over-engineering to ensure a higher degree of safety. A number of natural burial sites have kept naturally occurring radioactive material shielded. At Oklo, Gabon, Africa, a large amount of radioactive ore has been shielded by bedrock for two billion years, and under Cigar Lake in Canada, a body of uranium ore embedded in clay acts as a natural repository. Deep burial sites can be engineered to avoid the shortcomings of other proposed methods. Multiple barriers would separate the radwaste from the outside world, much like a set of Russian dolls, where removing one doll reveals another. The first level of protection would be the packaging of the HLW itself, such as a metal canister. This would in turn be overpacked inside a second metal or ceramic container. These canisters would then be placed inside the underground facility, up to 0.33 mi (0.5 km) underground. Once full, the facility would be back-filled with buffer material to fill up the spaces between the canisters and the walls. Over the next 100,000 years, the contents of the deep disposal facility would slowly decay to background levels, and eventually become one with the surrounding rocks. Underground burial protects the waste from both natural disasters and human interference. For example, underground sites are much more resistant to earthquake damage than surface sites. Physical barriers, the remote location, and the choice of a site well away from potential resources will deter all but the most determined and large-scale attempts to gain access to the waste. Deep geological disposal is the preferred HLW disposal method for all countries producing nuclear waste, but it does have its opponents. Some critics have suggested that more needs to be known about the implications of deep disposal before such facilities are opened. To that end test sites have been collecting data around the world, and hundreds of studies and experiments have been carried out modelling potential hazards and their solutions. The impact of everything from volcanoes to meteorites has been considered, as well as more likely concerns such as climate and groundwater changes.
Groundwater is the most likely method for disposed radwaste to move back to the human environment, and the biggest concern for those involved in the design of disposal sites. All rocks contain some amount of water, and it moves through pores and fissures, leaching out mineral deposits as it flows. Such is the power of water flow that, no matter how over-engineered a deep disposal facility may be, water will eventually cause the release of waste. The aim is to make the timescale during which such leakages occur a geological one by selecting dry sites well above the water table, and using innovative engineering, such as the Swedish WP-cave design, which isolates the entire facility from any possible groundwater leakage by using a hydraulic cage.
Politics and Public Relations However, the biggest obstacles by far for deep geological disposal are not technical, but political and public-relations problems associated with radwaste. In many ways the nuclear power industry has only itself to blame. Early burials were often haphazard and made with little concern for the environment, and many early LLW trenches were prone to flooding and leaching of radioactive material into the local groundwater. Since the 1970s there have been much stricter regulations controlling the disposal and storage of all industrial waste, and a much greater appreciation for the danger these represent to humans and the environment. As a result the burial of nuclear waste is one of the most regulated activities on the planet, and the nuclear power industry can proudly claim to have all of its recent waste safely contained, unlike many other industries.
However, the nuclear power industry has a history of poor public relations. Combined with early mistakes, this has led many to protest against nuclear power and the dumping of waste, and to resist moves to open burial sites in their vicinity. The nuclear power industry is now trying to redefine its image within the community, stressing the clean air aspects of nuclear generation, and the benefits to consumers in terms of quantity of power and cost savings. These public relations moves are important, as deep disposal sites must gain broad public consent. Even though a disposal site may benefit a nation, it is often hard to convince those who will be living closest to the site to see the positive side. Another reason for opposition to deep disposal sites is the issue of the transportation of HLW to such areas, which in some cases passes close to human settlements. Yet the safe and efficient transportation of hazardous waste (including ILW and LLW) already occurs in many countries, and there are about 100 million shipments of toxic waste annually in the United States. The safety record for radwaste transports in the United States is astoundingly high, with only four transportation accidents since 1973 and no resulting injuries. Other countries have similar, or better, safety records. However, accidents in other industries, most notably with oil shipping, have left the public unwilling to accept industry assurances about the safety of waste transport. Only one country, Finland, has made a positive decision regarding the final disposal of high-level radwaste: it hopes to have a final disposal facility operating around 2020. Other countries seem poised to follow suit, but face further political and public hurdles.
The safe disposal of radwaste is not a technical problem, but rather a political one. The scientific and technical details for the disposal of nuclear waste have been studied in extraordinary detail, and the need for permanent disposal is pressing. Even if all nuclear waste production (industrial, medical, military, and scientific) were to cease tomorrow, there would still be a legacy of waste that will last over 100,000 years. In addition, the growing demands on power consumption, and public demands for more power stations and cleaner, cheaper power, all suggest that the number of nuclear power stations around the world is likely to increase. When this happens, the need for permanent disposal sites will become more urgent. —DAVID TULLOCH
Viewpoint: No, radiation waste from fission reactors cannot be safely stored, given the ever-present danger of human error and natural catastrophe during the thousands of years in which the waste must be stored.
Radioactive waste storage continues to be highly controversial. Shown here is a sign erected by opponents to a waste repository near Canyonlands National Park in Utah. (Photograph by Galen Rowell. CORBIS. Reproduced by permission.)
After the detonation of the first atomic bomb in 1945, Robert Oppenheimer, the key scientist in the Manhattan Project, which developed the bomb, was quoted as saying, “I have become Death: the destroyer of worlds.” Oppenheimer could not have known exactly how prophetic these words were. From that fateful moment in the middle of the last century, we have been living in the nuclear age. However, with the end of the Cold War in the early 1990s, the threat of nuclear war was greatly diminished. Although we no longer had to focus on the destructive power of the atomic bomb, we suddenly found ourselves with a much greater problem right in our own backyards—what to do with radioactive waste. This waste came not from the development of weapons, but rather from the production of electrical power. Because we thought science would soon come up with a solution, we weren’t too alarmed at first. However, as it turned out, the problem would be with us for a long time to come.
What Makes Radioactive Waste Such a Problem? Nuclear power plants produce power through the use of a fission reactor. Uranium is processed into fuel rods, which are then placed into the reactor core. The heat created by the nuclear reaction in these rods, the splitting of atoms, provides us with electricity. However, this reaction begins to diminish as fuel is “spent.” Before long, usually 12 to 18 months, the rods must be withdrawn from the reactor and replaced with new ones. Although they are no longer useful in producing energy, these spent fuel rods are now high-level waste (HLW) and incredibly dangerous.
To understand how dangerous these rods are, one must consider that an exposure to 5,000 rems (a unit of radiation dosage applied to humans) will instantly debilitate a human being, with death following within seven days. On average, an unshielded spent fuel rod can provide a 20,000 rem dose per hour of exposure at a distance of a few feet. A significantly smaller dosage of radiation can cause cancer years—or even days—later. Nor do these spent fuel rods lose their radioactivity in a short period of time. Radioactive elements are characterized by a quantity called a half-life, which determines the amount of time it would take for half the element to decay into a more stable form. HLW normally takes 10 to 20 half-lives to lose its hazardous qualities. To put this in perspective, Plutonium-239, a common element in spent fuel, has a half-life of approximately 24,400 years. This means it could provide a lethal dose of radiation well over 200,000 years after it is removed from a reactor core. Anyone who interacts with this material, even millennia from now, must remain shielded or risk deadly exposure.
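The figures quoted in the two paragraphs above can be combined into a simple worked example. The arithmetic below uses only numbers given in the essay (a 5,000 rem debilitating dose, a 20,000 rem-per-hour field near an unshielded rod, and a 24,400-year half-life for plutonium-239); it is an illustration of scale, not a health-physics calculation.

```python
# Time to accumulate a debilitating dose next to an unshielded spent fuel rod.
lethal_dose_rem = 5_000        # figure quoted in the essay
dose_rate_rem_per_hr = 20_000  # figure quoted in the essay
print(f"Minutes to a 5,000 rem dose: {60 * lethal_dose_rem / dose_rate_rem_per_hr:.0f}")  # 15

# Plutonium-239 decay over the 10 to 20 half-lives the essay cites as needed
# for high-level waste to lose its hazardous qualities.
half_life_years = 24_400       # figure quoted in the essay
for n in (1, 10, 20):
    years = n * half_life_years
    remaining = 0.5 ** n
    print(f"{n:>2} half-lives = {years:,} years, fraction remaining = {remaining:.2e}")
```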
With the vast potential for harm this HLW possesses, as well as the incredible period of time for which it remains dangerous, the question of safe disposal methods has become a serious matter. However, even now the question remains, for the most part, unanswered. Many suggestions have been made, but as yet none have been viable. This is a terrifying prospect considering the fact that although HLW makes up only 3% of all radioactive waste, it contains 95% of the dangerous radioactivity. Problems Associated with Current Storage Methods Currently, spent fuel rods are stored on-site at their producing reactors. Not one ounce of this HLW, during 40 years of production, has been placed into a permanent disposal facility. For now, the rods remain in cooling pools, where they are stored until they become safe for transportation. Because of the heat and
radiation they produce, the rods will have to remain in this form of storage for several decades before they can be considered manageable. This wet form of storage produces numerous problems and concerns. For example, this form of storage is only a temporary solution. It was, indeed, intended to be temporary, as it was assumed that a better solution would be discovered as time passed. Optimists thought that scientists would work on the problem and the temporary storage method would buy them time to find a solution. Time is, however, running out. Although the pools are able to contain the HLW for the moment, space is limited, as each pool can only hold a certain number of rods at one time. Each year, more HLW is created and thus requires storage. With the incredibly slow turnover rate (the time it takes to move the old rods out to make space for the new ones), HLW will begin to back up much sooner than most would like. Some facilities in the United States have already reached their pool capacity. In these cases, the rods have been placed into dry storage—contained in a cask made of metal or concrete, which is then filled with inert gases. Even so, this form of storage can only be used several years after the rods have been cooled off in storage pools. The cooling pools themselves are also at risk from damage by outside influences, such as
earthquakes, tornadoes, or hurricanes. Improper storage or human error could also create serious problems, as the pools require constant upkeep and surveillance. Should the water levels drop too low or the rods become too close to one another, the rods will begin the early stages of a nuclear reaction. At best, this will produce massive amounts of dangerous heat and radioactivity. At worst, it would lead to a horrific meltdown that could not be contained.
The Yucca Mountain Depository—Is It the Answer? Every country in the world has different options for its long-term HLW disposal plans, ranging from reprocessing to burying it deep in the earth. The United States has begun to focus on the Direct Disposal option. In this plan, the spent fuel rods progress through the cooling-off phase in on-site standing pools until they eventually become ready for shipment. Once manageable, the rods are transported across the country to an underground depository. The proposed location of this depository is beneath Yucca Mountain in Nevada, where emplacement tunnels about 985 ft (300 m) beneath the surface will then serve as the final resting place for the HLW.
The U.S. Department of Energy has considered Yucca Mountain in Nevada as the site for high-level radioactive waste. (Photograph by Roger Ressmeyer. CORBIS. Reproduced by permission.)
There are two major problems with this plan, both short term and long term. In the short term, one must consider the extreme risks inherent in the transportation of the HLW to the disposal site. In some cases, this could involve distances of thousands of miles. Transportation of the spent fuel rods will be done with large trucks. The HLW itself will be contained in casks similar to those used for dry storage, which must remain completely intact for several thousand years to prevent the radioactive material from escaping. The longer these casks are in transit, the greater the risk of them being damaged through human error or natural disasters. Consider the damage that might be unleashed by a road accident, during which a breach could spill deadly radioactivity into the environment. Should several casks be damaged, the rods contained within could react to one another just as violently as they could in a cooling pool accident. In both cases, the result would be catastrophic, with long-lasting ramifications.
Even if the HLW should reach its destination at the Yucca Mountain depository without incident, numerous other problems remain. When considering the risks of permanent HLW storage, the dangers cannot be thought of in terms of years, decades, or even centuries. Indeed, we must think in terms of millennia, perhaps even longer than the human race has been on Earth. In the tens and hundreds of thousands of years it will take for the HLW to become safe, the planet itself can change drastically. The geologic processes that shaped Earth are still at work, and could have dramatic consequences on a facility such as the Yucca Mountain depository.
There are 33 known geologic fault lines near or in the Yucca Mountain area, and each has the potential of unleashing a devastating earthquake. Should an earthquake occur, the storage facility could be severely damaged and, in turn, break the HLW casks. The water table beneath the facility, which feeds the Amargosa Valley, risks serious contamination in such an event. Farming communities making use of this water would be devastated and turned toxic for generations. While the fault lines may not be active at this moment, this does not mean they could not become active in the future. There is another worry, a volcano that is only 10 miles away from the site. By itself, it has the potential of creating seismic events that could damage the facility and its contents irreversibly. The human element in the equation must also be considered. For one, sabotage or human error could be viable threats to such a facility. It would only take one mistake to unleash a disaster of unimaginable proportions, although admittedly the facility would be well guarded. Also, the facilities must be maintained throughout several generations to make sure none of the casks are leaking and to keep an eye on potential geological threats. Even though such a staff would be small and easy to manage, they would have to be vigilant long into the future. Also, the casks containing the HLW will eventually erode and begin to leak after about 1,400 years. Although several elements will have become inert before this time, others, like plutonium, will hardly be through their first half-life and still be extremely dangerous. Nothing but bare rock would stand between these elements and the outside world. Nor will there be anything between them and other HLW, posing a risk for nuclear reaction.
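To set the 1,400-year cask lifetime mentioned above against the plutonium-239 half-life quoted earlier, a one-line calculation (using only the essay's own figures) shows how little of the plutonium will have decayed by the time the casks are projected to fail.

```python
# Fraction of plutonium-239 remaining when the casks are projected to leak.
half_life_years = 24_400     # figure quoted earlier in the essay
cask_lifetime_years = 1_400  # figure quoted above
remaining = 0.5 ** (cask_lifetime_years / half_life_years)
print(f"Pu-239 remaining after {cask_lifetime_years} years: {remaining:.1%}")  # about 96%
```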
In the end, we are left with a deadly threat that will not go away for thousands of years. At
the moment, the only solutions for dealing with it are unsafe and temporary. Even in the long term, there are no trustworthy resolutions, and quick fixes cannot be relied upon to solve this potentially catastrophic problem. The situation worsens with each passing day, as the amount of HLW increases. Because of the devastating effects of radioactivity on the environment, the storage of HLW cannot be considered to be solely a national problem—it is a global threat, and one that must be addressed immediately. Safe solutions must be found. Otherwise, the ramifications could be felt not just beyond this generation and the next, but also for a thousand generations to come. The risk is just too high for half-measures. —LEE A. PARADISE
Further Reading
Chapman, Neil A., and Ian G. McKinley. The Geological Disposal of Nuclear Waste. Chichester: John Wiley & Sons, 1987.
Garwin, Richard L., and Georges Charpak. Megawatts and Megatons. New York: Alfred A. Knopf, 2001.
Olson, Mary. High-Level Waste Factsheet.
Pusch, Roland. Waste Disposal in Rock. New York: Elsevier, 1994.
Steele, James B. Forevermore, Nuclear Waste in America. New York: Norton, 1986.
Warf, James C., and Sheldon C. Plotkin. Disposal of High-Level Nuclear Waste.
Is DNA an electrical conductor?
Viewpoint: Yes, DNA is an electrical conductor, despite inconsistent experimental results and disagreement about how it conducts. Viewpoint: No, experiments have not conclusively proved that DNA is an electrical conductor; furthermore, there is no universally accepted definition of a wire conductor at the molecular level.
In 1941, long before the historic determination of the double helix structure of DNA by James Watson, Francis Crick, and Rosalind Franklin, Albert Szent-Györgyi, the Nobel laureate biochemist who had discovered vitamin C, proposed that some biological molecules might exhibit a form of electrical conductivity. This proposal was based on the observation that x-ray damage—the knocking out of an electron by an x ray—at one part of a chromosome could result in a mutation in a gene located some distance away through motion of the electrons in the molecule. As the structure and genetic function of DNA became understood, it seemed natural to expect that the proposed conductivity would be found in the DNA molecule itself.

DNA, the fundamental information storage molecule in all self-reproducing life-forms, is an interesting polyatomic assembly in its own right. DNA is not, however, a single compound, but rather a family of polymeric molecules in which strands of alternating deoxyribose and phosphate groups carrying purine and pyrimidine bases are (at low temperatures and in an aqueous medium at the right pH and ionic strength) bound to strands bearing complementary base sequences. With modern technology, DNA strands of any predetermined sequence can be synthesized and "cloned" in vitro at relatively low cost, and the availability of such carefully defined molecular material has led scientists to seek novel applications of DNA outside the biological realm. A number of DNA-based computing schemes have been proposed and tested. There is also interest in using DNA as a building material on the nanometer scale.

Interest in DNA conductivity, apart from its biological function, has heightened in recent years as computer-chip technology continues to develop ways of making chips from smaller and smaller components. The components still need to be wired together, and the size of the wires has to decrease if more powerful chips with larger numbers of components are to be made. Thus the prospect of being able to use individual DNA molecules as "wires" to connect the elements in future generations of integrated circuits (chips) is quite attractive.

To describe an individual molecule as a conductor requires carefully defining what being a conductor might mean on a molecular level. The most common form of electrical conductor is a macroscopic piece of a metallic element or alloy. Conduction in such materials can be viewed as the result of the component atoms participating in a special "metallic" form of bonding in which the electrons involved in bonding remain nonetheless free to travel long distances in response to an applied electric field. Some metallic conductors become superconductors below a characteristic temperature. In superconductors, electrons travel in pairs, and their motion is correlated with the vibration of the component atoms in such a way that no energy is lost as heat. They exhibit no electrical resistance.

Semiconductors are another important class of electrically conducting materials. Semiconductors are elements or chemical compounds held together by localized electron-pair bonds from which electrons can escape if they gain a small amount of additional energy. Chemically pure semiconductors are devoid of conductivity at low temperatures but become conducting as they are warmed. Their conductivity is enhanced greatly if a small fraction of the atoms is replaced by impurity, or "dopant," atoms that have one more or one fewer electron. Other forms of conduction are also possible, for example, by a kind of electron "hopping" between impurity or defect sites.
Albert Szent-Györgyi (© Bettmann/CORBIS. Reproduced by permission.)
Most of the proposed mechanisms for conduction in DNA would involve electron motion along the axis of the double helix, although whether the motion would resemble that in metallic conductors, superconductors, semiconductors, or some form of hopping has not been definitively resolved. Conduction is assumed to take place at right angles to the planes of the guanine-cytosine (G-C) and adenine-thymine (A-T) pairs that hold the two helical strands together. The paired bases are themselves aromatic organic compounds, which means that some of their most loosely bound electrons occupy the so-called pi molecular orbitals that extend above and below the molecular planes. Depending on how the electrons in these pi orbitals interact, one could have any of the types of conduction just described, or none.

As the Yes essay indicates, it is clear that one can introduce ions between the stacked base pairs to render DNA conducting. Whether DNA is a conductor in the absence of such doping ions is less clear. Claims for normal conduction, hopping conduction, and super- and semiconduction have emerged from different laboratories. It is reasonable that DNA with different ratios of G-C to A-T pairs would have different conductivity characteristics, and it is not out of the question that DNA strands with certain base sequences will differ markedly from each other in conductivity. Measurement of the conductivity of a molecule will be influenced by the position and type of conducting contacts that have been made with it. The variation in results reported to date may in part be attributable to differences in both sequence and electrical contact.
If DNA, at least with some base sequences, is shown to be conducting, there is still the question of whether the conductivity has anything to do with the suitability of DNA as the genetic material. Does the conductivity in some way reduce the incidence of harmful mutations or the frequency of mistakes in reproduction? The true extent and full significance of DNA conductivity may not be fully understood for some time. —DONALD R. FRANCESCHETTI
Viewpoint: Yes, DNA is an electrical conductor, despite inconsistent experimental results and disagreement about how it conducts.

Does DNA (deoxyribonucleic acid) conduct electricity? Yes. Researchers are not entirely in agreement as to how it conducts, but there is convincing evidence that it does. DNA research has attracted attention around the globe. Swiss researchers have demonstrated that DNA conducts electricity in the same way as a wire, while Dutch researchers have found it is a semiconductor. The Swiss team also found a clue that might explain why there have been inconsistent results in the research to determine the conductivity of DNA. Scientists at labs in France and Russia working together have found that DNA conducts electricity, and they believe contradictory
results may be related to how it is connected. Canadian scientists are patenting a novel form of DNA that they developed by chance, which has definite commercial potential.

The Canadian Connection In 2001 a Saskatchewan provincial government agency provided $271,000 (Canadian) to manufacture and test new light-based electronic transistors that will employ a novel conductive form of DNA, dubbed M-DNA by the team of researchers who discovered it at the University of Saskatchewan under the direction of Jeremy Lee, professor of biochemistry at the university's College of Medicine. A Toronto-based company is adding another $277,000 (Canadian) to develop and commercialize a new biosensor tool based on the M-DNA molecule.
The novel form of DNA was an unexpected discovery, according to Dr. Lee. In what he describes as curiosity-driven research, the group found that DNA readily incorporates metal ions at a high pH (a very basic solution). Conducting metal ions such as zinc, cobalt, or nickel were incorporated into the center of the DNA helix between the base pairs. Researchers then found the new DNA not only conducts electricity, but it does so without losing its ability to bind to other molecules.

DNA normally exists as two intertwined strands of many connected nucleotides. Each nucleotide consists of a 5-carbon sugar, a nitrogen-containing base attached to the sugar, and a phosphate group. The bases are adenine, cytosine, guanine, and thymine, but are usually identified as A, C, G, and T. The DNA double helix looks much like a twisted ladder, with the rungs formed by connecting G-C and A-T base pairs. The metal ions in M-DNA are in the center of the rungs.
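The base-pairing rule described above is simple enough to state in a few lines of code. The Python sketch below is purely illustrative (the function name and example sequence are invented for illustration, not taken from the research described here); it ignores strand direction and shows only which base matches which.

    # Watson-Crick pairing: A matches T, G matches C.
    PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def matching_strand(sequence):
        """Return the base sequence that would pair with the given strand (direction ignored)."""
        return "".join(PAIRS[base] for base in sequence)

    print(matching_strand("GCGCAT"))  # prints CGCGTA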
In March 2000, Lee said of their discovery in an interview for SPARK (Students Promoting Awareness of Research Knowledge), "M-DNA is the smallest wire that you can imagine because it's only one molecule thick. And the beauty of DNA is that it self-assembles. You don't need a machine to put it together. It can make itself. You throw the sequences together and the base pairs automatically match up." SPARK is part of NSERC, the Natural Sciences and Engineering Research Council of Canada, a national body for making strategic investments in Canada's capability in science and technology. Among its functions, NSERC provides research grants. Canadian, U.S., European, and Japanese patent applications have been filed on M-DNA. The University of Saskatchewan Technologies, Inc. (UST), the technology commercialization arm of the University of Saskatchewan, listed electronic applications, along with biosensing and microarray applications, as among available technologies for partners to develop or license in 2001.

An M-DNA molecule acts as a semiconductor. It has distinct advantages for the miniaturization of electronics in its size and its ability to self-assemble and create highly organized and predictable structures.
Biosensors have applications in medicine, environmental monitoring, biological research, process control, security, and national defense. When groups of biosensors that can test for multiple substances are assembled, they are called microarrays. M-DNA has advantages over current DNA biosensors in that it could potentially test samples from more sources and is more sensitive and versatile. The electronic signal can be quantitatively as well as qualitatively measured, which would increase the data provided by an M-DNA biosensor, according to the researchers.
Specific applications of M-DNA in biosensing could include screening for genetic abnormalities. It could also be used to identify environmental toxins, drugs, or proteins, and to search for new antitumor drugs that work by binding to DNA. As promising as the applications of M-DNA to biosensors are, the potential to use M-DNA as "wires" in integrated circuits may be even more lucrative.

DNA provides the molecular blueprint for all living cells. It is also now recognized as an ideal tool for making nanoscale devices. The term nano comes from the Greek word for dwarf. It is also a prefix meaning one billionth, as in the word nanometer (nm), one billionth of a meter (m). To put that into perspective, a nanometer is about the width of 3 to 5 "average" atoms, or 10 hydrogen atoms, hydrogen being the smallest of all atoms. DNA molecules are about 2.5 nm wide. To the world of technology, nano is key to a scientific revolution based on very small things.

DNA as a Conductor Suggested by a Nobel Laureate The biochemist Albert Szent-Györgyi (1893–1986) was awarded the 1937 Nobel Prize in Physiology or Medicine for his discoveries about the roles played by organic compounds, especially vitamin C, in the oxidation of nutrients by the cell. Born in Hungary, he emigrated to the United States in 1947 for political reasons.
The idea that DNA might work like a molecular wire can be traced back to 1941, when Szent-Györgyi suggested that biological molecules could conduct electricity, although he added, "It cannot be expected that any single observation will definitively solve this problem." As evidence Szent-Györgyi offered instances of genetic mutation that occur in one place when something such as irradiation with x rays is inflicted on chromosomes some distance away. He noted it was as if the radiation had sent an electric signal along the DNA to cause disruption at a distance. Szent-Györgyi saw the question of whether DNA conducts electricity as important to understanding the mechanics of genetic mutation. In more recent work, optical experiments with fluorescence quenching using DNA molecules have encouraged research into DNA as an electrical conductor.
KEY TERMS

ANGSTROM: 1 × 10⁻¹⁰ (one ten-billionth) of a meter.
BASE: Molecule or ion that can combine with a hydrogen ion. The four bases in DNA are all nitrogen-containing organic compounds; "organic" means they contain combined carbon.
BASE-PAIR: The pairing of two nucleotide bases. The base guanine pairs only with cytosine, creating the G-C or C-G base pair. The base adenine pairs only with thymine, creating the A-T or T-A configuration. The chemical bonds between a string of nucleotide base pairs hold the two strands of DNA material together in a shape described as a double helix.
CONDUCTOR: Material or substance that transfers electricity, heat, or sound.
CRYSTALLINE: In a solid state (not a liquid, solution, or gas).
DNA: Deoxyribonucleic acid.
INSULATOR: Substance that prevents or reduces the transfer of electricity, heat, or sound.
KELVIN: Scale of temperature (abbreviated as K, with no degree symbol) in which the size of the degree is the same as in the Celsius system, but where zero is absolute zero, not the freezing point of water. Absolute zero is the theoretical point where a substance has absolutely no heat energy (i.e., there is no molecular motion). 32°F (0°C) is 273K.
NANOMETER: 1 × 10⁻⁹ (one one-billionth) of a meter.
NUCLEOTIDE: Nitrogen-containing molecules, also called bases, which link together to form strands of DNA. There are four nucleotides, named for the base each contains: adenine (A), thymine (T), cytosine (C), and guanine (G).
OHM: Unit of electrical resistance.
OXIDATION-REDUCTION: Any process that makes an element, molecule, or ion lose one or more electrons; reduction is the gain of one or more electrons. Because electrons have a negative charge, gaining electrons reduces (i.e., makes less positive) the charge on the element, ion, or molecule. Oxidation, being the opposite process, raises the charge (i.e., makes it more positive).
SEMICONDUCTOR: Material or substance that has electrical properties between those of a conductor, through which charges move readily, and those of an insulator, through which the flow of charges is greatly reduced. A semiconductor can conduct electricity under some, but not all, conditions, and its properties can depend on the impurities (called dopants) added to it. Silicon is the best-known semiconductor and forms the basis for most integrated circuits used in computers. The electrons in the molecular structure of a semiconducting piece of DNA can delocalize and hop through the double-helix structure.
Another Nobel Laureate Suggests the Nano Connection The nano part of the DNA story can be traced back to 1959 and another Nobel laureate, Richard P. Feynman, who presented a classic lecture at the California Institute of Technology that year titled "There Is Plenty of Room at the Bottom." Feynman used biological systems as an example of working on a very small scale. He said, "[Biological systems] manufacture substances; they walk around; they wiggle; and they do all kinds of marvelous things—all on a very small scale." Feynman predicted there would be a day when we could arrange atoms the way we want, once we got the tools. These tools started to be available in 1981, when an IBM team invented a scanning tunneling microscope.
Part of the momentum for the rise in the "nanoage" has come from the semiconductor industry's concerns that Moore's law is reaching its limits. Moore's law is not a law of nature or government, but an observation made in 1965 by Gordon Moore, the cofounder of Intel, an American manufacturer of semiconductor computer circuits, when he plotted the growth of memory chip performance versus time. Moore observed that the number of transistors that can be fabricated on a single integrated circuit was doubling every 18 to 24 months. However, making components smaller has limits. As components get down to the 100 nm (0.00001 cm) range, they approach the quantum mechanical world of atoms and molecules, and the laws of physics for larger structures no longer apply. Research on materials that can be used in the quantum range, such as DNA, is going on in key labs around the globe. Jeremy Lee's Saskatchewan team sees DNA as a self-replicating semiconductor with a very attractive potential for future molecular computers.
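To see what doubling every 18 to 24 months implies, the short Python sketch below compounds Moore's observation over ten years. The starting transistor count and the exact doubling periods are illustrative assumptions, not figures taken from the essay.

    # Moore's law as compound doubling: count doubles every `doubling_months` months.
    def projected_transistors(start_count, months, doubling_months):
        return start_count * 2 ** (months / doubling_months)

    # Example: a chip with 1 million transistors, projected 10 years (120 months) out.
    for period in (18, 24):
        print(period, "month doubling:", round(projected_transistors(1_000_000, 120, period)))

With an 18-month doubling period the count grows roughly a hundredfold in a decade; with 24 months it grows about thirty-twofold.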
Richard Feynman (© Bettmann/CORBIS. Reproduced by permission.)
The Swiss Story In 1999, physicists Hans-Werner Fink and Christian Schonenberger of the University of Basel in Switzerland reported that they were able to measure conductivity in bundles of DNA that were 600 nm (0.00002 in, 0.00006 cm) long. They grounded one end of the bundle on a carbon grid and applied a voltage to the other end through a tungsten tip, then measured the conductivity as the voltage was varied. The team reported electrical measurements that suggested DNA is a good linear conductor, and as efficient as a good semiconductor. Their interests in DNA as a conductor are particularly focused on its use in wiring ultrasmall electronic devices.
Also at the University of Basel, Bernd Giese and colleagues suggested why there are differing results being reported on DNA as a conductor. They suggest the difference may be in the sequencing of the nucleotide base pairs, A-T and G-C. Their research indicates that charge is best carried by G bases, and that G-C pairs work best where they are not separated by many A-T pairs. They concluded the sequencing made a difference, although they agreed that more research is needed.

The Dutch Connection A team of scientists at the Delft University of Technology in the Netherlands, including Cees Dekker, a professor of physics and the recipient of awards for his work in nanotechnology, together with researchers from the Dutch Foundation for Fundamental Research into Matter, devised an experiment to study the conductivity of DNA. They prepared an artificial DNA fragment 10.4 nm long to bridge an 8 nm gap between two electrodes and demonstrated it consistently acted like a semiconductor over a range of conditions. The conductivity was observed at ambient conditions, in vacuum, and at cryogenic temperatures. Their work was reported as a letter in Nature in 2000.
Dekker said the team used a prepared double-stranded DNA molecule of 30 G-C pairs because they learned from the earlier research that this configuration was most likely to conduct. Normal DNA contains both A-T and G-C pairs with a maximum of 15 repeats of one pair. As a semiconductor, the DNA strand acts much like the silicon used in computer chips. According to Dekker, it may be possible some day to make smaller chips using DNA. However, he adds, there is still research to be done to determine exactly how DNA transports electricity.

More on Global Research In 2000, researchers under the direction of physics professor Tomoji Kawai at Osaka University in Japan prepared networks of DNA strands linked together in a single layer on mica. The researchers have been able to change the thickness of the DNA network, making DNA networks with 10- to 100-nm mesh up to about 1.8 in (4.5 cm) square. Kawai says their work could possibly lead to a method to produce high-density electronic devices and ultimately to integrated circuits created out of DNA. In fact, he is optimistic that a DNA memory device could be made using their techniques to deposit DNA networks early in the twenty-first century.
Kawai suggests DNA conducts electricity in complex ways, depending on the oxidation-reduction potential of the bases and the distance between them. The group tested specific complementary pairs of bases within their DNA networks. They attached gold particles to serve as one electrode, while the tip of an atomic force microscope made the second contact.

In early 2001, Alik Kasumov and colleagues at the Solid Physics Laboratory in Orsay, France, and the Moscow Academy of Science, Russia,
demonstrated that DNA conducts as a metal conducts at temperatures above –457.87°F (1 K). Below that temperature, DNA molecules connected to superconducting electrodes 0.5 micrometers (1 micrometer is approximately 0.000039 in) apart become what they describe as proximity-induced superconductors.
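The parenthetical unit conversions that appear throughout this essay (kelvins to degrees Fahrenheit, micrometers to inches) follow from the standard definitions. A minimal Python sketch, with values chosen only to reproduce the figures quoted above:

    # Temperature: degrees Fahrenheit = kelvins * 9/5 - 459.67.
    def kelvin_to_fahrenheit(kelvin):
        return kelvin * 9.0 / 5.0 - 459.67

    # Length: 1 inch is defined as 0.0254 meter.
    def meters_to_inches(meters):
        return meters / 0.0254

    print(round(kelvin_to_fahrenheit(1.0), 2))   # -457.87, the 1 K threshold above
    print(round(meters_to_inches(0.5e-6), 6))    # about 0.00002 in for 0.5 micrometer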
A predicted model for M-DNA. (Illustration by Jeremy Lee. Reproduced by permission.)
In 2001 and 2002, Jacqueline K. Barton and her colleagues at the California Institute of Technology published reports that offer considerable evidence to indicate that DNA conducts electricity like a metal wire. They report on the charge transport—the conductivity—of DNA in a variety of multiple-stranded DNA assemblies. The Barton Group is focusing on understanding how DNA conducts. They too have suggested there is a relation between conductivity and how the base pairs are stacked. The group is also exploring the design and application of DNA-based electrochemical sensors. They are using DNA films to develop a completely new family of DNA-based sensors. —M. C. NAGEL

Viewpoint: No, experiments have not conclusively proved that DNA is an electrical conductor; furthermore, there is no universally accepted definition of a wire conductor at the molecular level.
An artist's drawing of a DNA molecule. (© Digital Art/CORBIS. Reproduced by permission.)
Physicists, chemists, and radiation biologists have long been fascinated with the question "Is DNA a conductor or insulator?" In 1941, Nobel laureate Albert Szent-Györgyi (1893–1986) proposed that biological molecules such as chromosomes (which are composed of DNA) could conduct electricity along a chainlike form for a certain distance, after observing that x-ray radiation focused on one part of a chromosome could lead to damage on another section of the chromosome. How did the radiation get from one section of the chromosome to another? A rationalized answer: the radiation kicked off an electron that traveled along the DNA chain.

In 1962, after the 1953 James Watson, Francis Crick, and Rosalind Franklin x-ray crystallography discovery of the double-helix structure of DNA, physicists Daniel Eley and D. I. Spivey measured the DC conductivity of dried DNA samples. Their results suggested that DNA, with its unique stacking of bases, could serve as an electrical conductor. In the 1990s scientists began to vigorously reinvestigate the electron transfer conductivity and insulating properties of DNA with a variety of research techniques. These physicists, chemists, and radiation biologists particularly focused on experiments that study how the DNA nucleotide sequence, structure, length, buffer, conducting material, and environment (e.g., liquid, air, vacuum) affect the electrical properties of DNA. The results of the many experiments have been inconclusive. Since DNA has both insulating and conducting properties, today many scientists agree that DNA is a semiconductor. According to Gary Schuster at the School of Chemistry and Biochemistry at the Georgia Institute of Technology in Atlanta, "The problem is that there is no universally accepted definition of a wire on a molecular scale. Certainly, DNA is not a classical wire in the sense that copper is."
Inconclusive Experimental Results A series of DNA electron transfer donor-acceptor experiments (conceptually similar to having a cathode and anode attached by a wire, where the piece of DNA is the wire) in the 1990s support the hypothesis that DNA is not a conducting wire. The electron transfer rate is used to calculate beta (β), a constant that can be used to characterize the electrical conductivity of materials. A lower beta value indicates higher electrical conductivity.

In 1994, chemists Anne M. Brun and Anthony Harriman followed up on Jacqueline Barton's 1993 initial electron transfer donor-acceptor experiments at the California Institute of Technology and investigated how the length of a double-stranded DNA helix affected the rate of electron transfer along the DNA chain. They attached electron donor and acceptor groups to the DNA and measured the electron transfer rate on varying lengths of pieces of DNA. For a 17-Å piece of DNA, the beta constant was 0.9 Å⁻¹. The Brun and Harriman beta value of 0.9 Å⁻¹ differs from Barton's 0.2 Å⁻¹ beta value. In 1997, Frederick D. Lewis and colleagues at Northwestern University in Illinois used photo-oxidation techniques and investigated electron transfer between a donor molecule, in this case a guanine base in the DNA strand, and an attached acceptor. The photo-oxidation experiments yielded a beta value of 0.7 Å⁻¹. The 1998 oxidation electron transfer experiments of Keijiro Fukui and Kazuyoshi Tanaka at Kyoto University, Japan, yielded a beta value of 1.42 Å⁻¹ between an intrinsic guanine base (donor) and an introduced dye molecule (acceptor). At the University of Basel, Switzerland, Bernd Giese's electron-transfer-hopping work in 1998 with guanine radicals as donors and guanine bases as acceptors in different lengths of DNA sequences yielded a beta value of 1.0 Å⁻¹. In 1998, Satyam Pryadarshy and Steven M. Risser at the University of Pittsburgh performed theoretical quantum chemical calculations using the same experimental electron transfer donor-acceptor DNA systems and found beta values in the range of 0.6 Å⁻¹ to 1.4 Å⁻¹. The experimental and theoretical beta values for the DNA electron donor-acceptor systems are in contrast to the beta values of conducting carbon nanowires, which are 0.0 Å⁻¹ to 0.2 Å⁻¹.
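The beta values quoted above come from fitting measured electron transfer rates to an exponential distance decay, k(R) = k0 exp(-beta R). A minimal Python sketch of that fit; the rates and distances in the example are invented for illustration, not taken from the published experiments:

    import math

    # Distance decay of donor-acceptor electron transfer: k(R) = k0 * exp(-beta * R),
    # so beta can be estimated from rates measured at two donor-acceptor separations.
    def beta_from_two_rates(k1, r1_angstrom, k2, r2_angstrom):
        """Return beta in inverse angstroms."""
        return math.log(k1 / k2) / (r2_angstrom - r1_angstrom)

    # A rate that drops 1,000-fold over 10 angstroms gives beta of about 0.69 per angstrom,
    # within the 0.6 to 1.4 range reported for the DNA systems above.
    print(round(beta_from_two_rates(1.0e9, 5.0, 1.0e6, 15.0), 2))  # 0.69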
Mobility, a measure of an electron's travel velocity in a medium, can also be calculated from electron transfer experiment data. For conducting materials (metals), the mobility is on the order of 10³. For semiconductor materials, the mobility is 10⁻⁶ to 10². For insulating materials, the mobility is less than 10⁻¹⁴. The calculated mobilities in the DNA systems are in the range of 10⁻⁵ to 10⁻⁷ cm²/V/s, which characterizes the DNA electron-transfer systems as semiconductors, not conductors.
The Work of Fink and Schonenberger In 1999, physicists Hans-Werner Fink and Christian Schonenberger at the Institute of Physics in Basel, Switzerland, continued the investigation of the importance of the length of DNA and conductivity. Fink and Schonenberger were able to set up a 600-nanometer (nm) (0.00002 in, 0.00006 cm) piece of DNA and attempt to measure a current through the DNA with a fine needle-like tip of tungsten. The experimental results showed that the resistivity values were comparable to those of conducting polymers. Therefore, DNA transports electrical current much as a good semiconductor does.
With all the experimental and theoretical data about DNA conductivity that does not classify DNA as a classical conductor or insulator, trying to understand the mechanism by which electrons travel through the DNA can help better define DNA as a molecular wire or conductor. The electron transfer experiments by Jacqueline Barton and coworkers led the scientists to propose that electrons delocalize and transfer through what she terms the "pi-way" orbital structure of the duplex DNA. The work of Giese and other researchers led scientists to suggest a parallel super-exchange sequential charge-hopping mechanism, in which the electrons hop as discrete electronic entities when there is not significant electronic overlap between the pi-orbitals of the adjacent base pairs. The photochemistry experiments of Henderson, Schuster, and colleagues have led to a suggested mechanism of phonon-assisted polaron-like hopping. In the polaron-like hopping mechanism, the researchers propose that small domains (perhaps a string of five bases) form a delocalized polaron, and the polaron hops from domain to domain on the DNA duplex strand. However, DNA does not show a sufficient degree of polaron delocalization or a sufficient rate of hopping to be classified as having wirelike conductance similar to that of conducting polymers such as doped polyacetylene and polythiophene, or copper wire.
Many scientists have also been interested in deciphering the role that the sequence of nucleotides in a strand of DNA plays in modulating the electrical insulating or conducting properties. Bernd Giese's electron transfer work with guanine bases shows that the electron "hopping" transfer between guanine-cytosine (G-C) base pairs is less efficient the more adenine-thymine (A-T) pairs there are in between. Cees Dekker and his colleagues at the Delft University of Technology in the Netherlands have used varying lengths of strands of DNA that are made only of guanine-cytosine base pairs in electrostatic trapping and scanning tunneling experiments that measure the resistance as greater than 10¹³ ohms for pieces of DNA that are 40 nm and longer. Erez Braun and coworkers at the Technion in Israel have also observed DNA that is 16 μm long acting as a conductor. Researchers in Tomoji Kawai's group at Osaka University in Japan also showed that sequence and length of DNA affected the resistance properties and measured values that range from 10⁹ ohm to 10¹² ohm using scanning tunneling microscopy.
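To put resistances of 10⁹ to 10¹³ ohms in perspective, Ohm's law (current = voltage / resistance) gives the current such a strand would carry at a modest bias. The 1 V bias in the Python sketch below is an assumption chosen only for illustration:

    # Ohm's law: current in amperes = voltage in volts / resistance in ohms.
    def current_at_bias(voltage_volts, resistance_ohms):
        return voltage_volts / resistance_ohms

    # Currents at a 1 V bias for the resistances quoted above.
    for resistance in (1e9, 1e12, 1e13):
        print(f"{resistance:.0e} ohm -> {current_at_bias(1.0, resistance):.1e} A")

At 10¹³ ohms the current at 1 V is only a tenth of a picoampere, which is why such measurements read as essentially insulating.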
William A. Bernhard and his biophysicist colleagues at the University of Rochester School of Medicine in New York have approached the question of whether DNA is a conductor or an insulator by measuring the electrical conductivity (resistance) of DNA at different temperatures. The biophysicists irradiated crystalline DNA with x rays at 4K (-452.47°F, -269°C) and measured electrons that were trapped in the DNA with electron paramagnetic resonance. They found that the crystalline DNA trapped as much as 60% of the electrons. The measurements of trapped electrons are characteristic of an insulator or poor conductor rather than a conductor. The scientists argue that the total number of radicals trapped in the DNA appears to be relatively independent of factors such as DNA conformation, sequence, and water content, and primarily a function of the density of packing. Mobility measurements from the x-ray irradiated crystalline DNA indicate an increase in charge mobility for DNA that is warmed to room temperature, which is characteristic of a semiconductor.
Conclusion What will scientists do with the information about the electrical conductivity and resistance properties of DNA? Understanding these properties leads to developments in the field of nanotechnology. Nanoelectronics consists of wires, transistors, and other components that have dimensions measured in billionths of a meter. Today, DNA is already being used in biosensor technology. The role the DNA plays in each device is different. Scientists hope to use knowledge about DNA's electrical conductivity and resistance to build more specific and precise nanocircuits, perhaps from DNA-like molecules. —LAURA RUTH
Further Reading

Ball, Philip. "Switched On." Nature.

de Bakker, Liesbeth. "DNA Goes Electric."

Dekker, Cees. "Electronic Properties of DNA."

"DNA Used to Create Self Assembling Conducting Wire: Breakthrough Will Lead to Next Leap in Electric DNA." InSCIght.

"Emerging Nanoelectronics." Science Daily Magazine.

Giese, Bernd. "Hop to It." Chembytes ezine.

Grigorenko, Elena V. DNA Arrays: Technologies and Experimental Strategies. Boca Raton, FL: CRC Press, 2001.

Kasumov, A. Yu., et al. "Proximity-Induced Superconductivity in DNA." Science 291 (2001): 280–82.

Kreuzer, Helen, et al. Recombinant DNA and Biotechnology: A Guide for Students. 2nd ed. Washington, DC: ASM Press, 2001.

Porath, D., et al. "Direct Measurement of Electrical Transport Through DNA Molecules." Nature 403 (2000): 636–38.

Thiel, Karl A. "The Body Electric: How DNA May Build the Nanoelectronics of the Future."

Wang, Linda. "Live Wires." Technology Review. <www.technologyreview.com/magazine/sep00/benchmark2.asp>.

Wilson, E. K. "DNA: Insulator or Wire?" Chemical and Engineering News (1997): 33–39.
INDEX
A Aaronson, Marc, 25 Abdel-Malek, Linda, 189 Abiogenesis, 109 Accretion hypothesis (moon formation), 2–3, 6 Acetabulum replacement. See Total hip replacements Acoustic energy and ancient stonework, 73 Acrylic bone cement bone cement implantation syndrome, 96 cement debris, 89–90, 95–96 PMMA, 90, 91–93 total hip replacements, 89–97 Acrylic polymers in total hip replacements, 92 Addition polymerization, 92 Adenine-thymine pairs, 276, 280 Adhémar, Joseph, 79 Adult stem cells. See Stem cells Adventures of Huckleberry Finn (Twain), 192 Aedes aegypti. See Mosquitoes Aggression and sociobiology, 120–121 Akiyama, Toyohiro, 41 Alcock, John, 121 Alderman, Ellen, 183–184 Aldrin, Buzz, 41 ALH 84001 (meteorite), 15 Allee, Warder C., 117, 123 Altruism and genetics, 117, 120 Aluminum oxide in total hip replacements, 93 Alvarez, Luis, 54, 56 Alvarez, Walter, 54, 56 Alzheimer, Alois, 210–211, 212 Alzheimer’s disease beta-amyloid plaques, 211 mice studies, 211, 213–215, 216 nerve growth factor, 217 neurofibrillary tangles, 211 symptoms and progression, 210–211, 215 vaccine, 210–217 American Medical Association, 224 Ammonia in fly ash cement, 102 AN-1792 (vaccine), 210–217 Anaxagoras, 239 Anderson, Carl, 257 Animalcules, 112 Anthropology data collection, 130–132 Mead’s methodology, 127–135 See also Sociobiology Antiatomism, 240–242, 245–247 Antinori, Severino, 230, 233 Apollo missions, 3–4 Aquinas, Thomas, 237 Arapesh tribe, 130 Archaeology of the Great Sphinx of Giza, 70–75, 72 Archimedes, 168–169 Ardipithicus ramicus, 143 Ardipithicus ramicus dadabba, 143 Aricept, 211 Aristotle, 238 atomic theory, 237, 239, 245 spontaneous generation, 107, 109 Armstrong, Neil, 40 Arsenic in drinking water, 60, 63, 65–66 Arthroplasty and hip replacements, 89–97 Asaro, Frank, 56
Asteroids ALH 84001 meteorite, 15 collisions with Earth, 14, 18 See also Meteorites Astronomy ancient astronomy, 2 giant impact theory, 1–9 Hubble constant, 20–28 Nabata neolithic structures, 73 at-Tusi, Sharaf ad-Din, 169 Atmosphere, Martian, 13 Atoms antiatomism, 240–242, 245–247 development of atomic theory, 237–247 gold foil experiment, 242 Greek theories, 237, 239, 244–245 quantum theory, 239, 242–243 uncertainty principal, 243–244 Australopithecus afarensis, 142 discovery, 139–140 species designation, 136–144 Australopithecus africanus discovery, 139, 141 species designation, 136–144 Australopithecus anamensis, 140 Australopithecus boisei discovery, 139 species designation, 136–144 Australopithecus ramidus discovery, 140 species designation, 136–144 Australopithecus robustus, 141 Avogadro, Amedeo, 238, 240 Ayerst, Debs, 175
B Backfill using controlled low-strength materials, 105 Background extinction rate, 49, 53 Baer, Karl Ernst von, 108 Bagley, Bruce, 188 Barnard, Christiaan, 219 Barrow, Isaac, 166–167, 169 Barton, Jacqueline K., 281, 282, 283 Barton, John Rhea, 94 Bastian, Henry Charlton, 108, 111, 115 Bateson, Gregory, 127 Bateson, Mary Catherine, 127 Batteries, chemical, in space probes, 32, 35 Batteries, radioactive. See Radioisotope thermal generators Bender, Kenneth J., 216 Benedict, Ruth, 127–128 Berkeley, George, 171 Bernhard, William A., 283 Bernoulli, Jacob, 167, 170 Bernoulli, Johann, 167, 170–171 Berry, Caroline, 124–125 Berthelot, Pierre-Eugéne Marcellin, 238, 241 Beta-amyloid plaques amyloid cascade hypothesis, 211 fibrils, 214–215 formation, 211 mice, 213 role in Alzheimer’s disease, 212–213 vaccine, 210–217 Bethell, Tom, 125
Bias (data collection) anthropology, 130–132, 134–135 fossils, 48, 54, 57 Billington, James, 199 Binary accretion hypothesis (moon formation), 2–3, 6 Binder, Alan, 5 Bioceramics in hip replacements, 89–90, 92, 94–96 Biodiversity benefits, 149 ecosystem stability, 146–154 experiments, 149–151 intermediate-disturbance hypothesis, 153 mass extinctions, 47–57 mathematical models, 148–149 paramecium studies, 151–152 rain forests, 150 See also Evolution; Extinction Biological control agents cane toads, 162–163 dangers, 160–163 English sparrow, 162 invasive species, 155–163 mongoose, 157, 160, 162 myna birds, 161 Old World climbing fern, 159 purple loosestrife, 157–159, 163 testing, 160, 163 thistles, 159–160, 162 See also Invasive species Biological determinism and sociobiology, 117–125 See also Cultural determinism Biology biodiversity, 146–154 biological control agents, 155–163 mass extinctions, 47–57, 50t sociobiology, 117–125 spontaneous generation, 107–115 See also DNA; Medicine Biosensors and DNA, 278, 283 Bipedalism, 139 Birds and extinction, 53 Blackmore, Susan J., 124 Blast furnace slag in cement, 105 Blastocysts, 228 Blossey, Bernd, 158, 163 Boas, Franz, 127, 128, 133 Boeree, C. George, 124 Bohr, Niels, 240 atomic theory, 242–243 quantum mechanics, 263 Boltzmann, Ludwig, 242, 246 Bone cement. See Acrylic bone cement Books aesthetic value, 199–200 archival value, 194–196 deacidification of paper, 195 functionality, 196 Bottled water, 68, 68 Boyle, Robert, 237–238, 240 Brace, C. Loring, 142 Bradburn, Norman M., 186, 189 Bradshaw, Jerald, 68 Brains and evolution, 139, 144 Braun, Erez, 283 Broecker, Wallace S., 81, 86 Broek, Antonius van den, 242 Broglie, Louis-Victor de, 243 Broom, Robert, 139 Brownian motion, 246 Brun, Anne M., 282 Buffon, Georges-Louis Leclerc de, 108, 110, 112 Bufo marinus, 162–163 Bullock, J.M., 161 Bush, George W., 227, 228, 230
C Cadmium in drinking water, 68 Calculus, 165–171 Cameron, Alfred, 7, 8 Cameron, James, 43 Campaign for Transplant Fairness, 222 Canali on Mars, 10 Cancer and arsenic, 60, 66 Cane toads, 162–163 Canup, Robin, 6, 8 Caplan, Arthur, 223, 224 Capture hypothesis (moon formation), 2–3, 6
Carbon cycle and biodiversity, 147 Carbon dioxide ice ages, 81, 83, 86 Carlos, Juan (King of Spain), 199 Carson, Rachel, 156 Cassini (space probe), 33 design and mission, 32–33, 34 Jupiter, 36, 37 plutonium power source, 29–38 Cataclysta camptozonale, 159 Catholic Church. See Roman Catholic Church Cauchy, Augustin-Louis, 171 Cavalieri, Bonaventura, 169 Cayce, Edgar, 75 CD-ROMs, 193 Cement. See Acrylic bone cement; Fly ash cement Cepheid variables, 21, 26, 26–27 CERN (European Organization for Nuclear Research), 260, 262 Chadwick, James, 243 Challenger accident, 39, 40–41 Charnley, John, 90, 91–92, 94, 95 Charon (moon), 8 Charpentier, Jean de, 79 Chen, Guiquan, 214 Chimpanzees and evolution, 137 Chiu, F.Y., 96 Chloride and fly ash cement, 104 Chlorination of drinking water, 64 Chromium in hip replacements, 91, 92 Churchill, H.V., 67 Cigar Lake (Canada) uranium ore, 269 A Civil Action, 60, 65 Civil engineering and fly ash cement, 99–105 Cleopatra’s palace, 73 Climatology and ice ages, 77–86 Cloning, Therapeutic. See Therapeutic cloning Cloud forests, 51 See also Rain forests Coaccretion hypothesis (moon formation), 2–3, 6 Coal and fly ash, 100 Cobalt in hip replacements, 91, 92 Cognex, 211 Cohn, Ferdinand, 115 Collins, Francis, 125 Collins, John, 167 Colonization of Mars, 10–19 Coming of Age in Samoa (Mead), 127–135 Commoner, Barry, 147 Computer modeling ecosystem stability, 148–149, 152 giant impact theory, 5–6 ice ages, 80–81 Computerization. See Digitization Computers with DNA circuits, 275, 279–280 Concrete construction, 102 fly ash cement, 99–105 See also Fly ash cement Condensation polymerization, 92 Conductivity of DNA, 275–283 Confidentiality, Doctor-patient de-identification of records, 185–186, 188 digitalization of medical records, 181–189 norm of confidentiality, 186 Connell, Joseph, 153 Construction fly ash cement, 99–105 stadiums, 102 Contagion theory. See Germ theory Controlled low-strength materials, 105 Cooling pools (fuel rod storage), 273–274 Cooper, Ted, 187 Copland, Aaron, 198–199 Coral reefs and biodiversity, 51, 153 Core samples ice caps, 82, 83, 86 marine sediments, 79–81 Vostok ice core, 83 Cosmological constant, 27 Cosmology and the Hubble constant, 20–28 Costa Rican cloud forests, 51 Cottony cushion scale, 160 Creationism and spontaneous generation, 111 Creative Teaching Associates, 176 Cretaceous-Tertiary extinction, 48, 55–56 Crick, Francis, 275, 282 Croll, James, 79–80
D
E Early magma ocean (Moon), 5 Earth collisions with asteroids, 14, 18 core, 3, 5 giant impact theory, 1–9 ice ages, 77–86 Easiteach (educational service), 176 Ecklund, Larry, 176 Ecosystems biodiversity and stability, 146–154 biological control agents, 155–163 mathematical models, 148–149 See also Extinction; Invasive species Edge effect in rain forests, 52 Education using whole-class teaching, 172–179 Efflorescence, 103 Egyptian artifacts Great Sphinx of Giza, 70–75, 72 Inventory Stela, 74 Einstein, Albert atomic theory, 242, 246, 247 general relativity, 2, 22, 259, 261 photoelectric effect, 242 quantum mechanics, 259–260 special relativity, 257 unified field theory, 263 Elan Pharmaceuticals, 212, 215–216 Electrical conductivity of DNA, 275–283 Electromagnetic theory, 256, 259 Electroweak theory, 259–264 Eley, Daniel, 282 Eli Lilly and Co., 184–185 Elton, Charles, 147, 148 Embryonic stem cells. See Stem cells Emerson, Alfred E., 123 Empedocles, 256 English sparrows, 162 Environmental Protection Agency and drinking water standards, 59–69 Epicurus, 237, 239 Epidemiology of yellow fever, 201–209 Equirol, Jean Etienne, 210 Erosion of Great Sphinx of Giza, 70–71, 73–75 Erwin, Terry, 51 Ethylene oxide in hip replacements, 93 Eudoxus of Cnidus, 168 Eugenics cultural determinism, 133 sociobiology, 122 Evolution extinction, 48 gradualism, 144 hominid species, 136–144 punctuated equilibrium, 144 sociobiology, 117 spontaneous generation, 111–112 See also Biodiversity; Extinction Exelon, 211 Extinction background extinction rate, 49, 53 Cretaceous-Tertiary extinction, 48, 55–56 extrinsic catastrophism, 55 ice ages, 56 instantaneous extinction, 51–52 intrinsic gradualism, 55 key species, 52 mass extinctions, 47–57, 50t, 53t Permian-Triassic extinction, 48, 49–50, 56 rain forests, 47–48, 53 See also Biodiversity; Evolution; Invasive species Extrinsic catastrophism, 55
Dalton, John, 238, 240, 243, 245 Dart, Raymond, 137, 138–139, 141 Darwin, Charles human evolution, 136, 140 sociobiology, 117 spontaneous generation, 111 Darwinism. See Evolution Davis, Donald R., 4, 7 Dawkins, Richard, 118 DDT (pesticide), 156 de Lubicz, R.A. Schwaller, 71 de Vaucouleurs, Gerard, 21, 24–25 Deceleration parameter, 27 Deep burial of radioactive waste, 269–270, 273–274 Dekker, Cees, 280, 283 Della Valle, Alejandro Gonzales, 96 Democritus, 237, 239, 244–245 Deoxyribonucleic acid. See DNA DES (hormone), 188 Descartes, René analytical geometry, 169 atomic theory, 245 Deuterium on Mars, 14 Devlin, Keith, 178 Diabetes and stem cell treatments, 231 Differentiation of stem cells, 231–232, 234 Digital libraries. See Libraries Digitization copyright, 197–198 format migration, 192–194 goals, 194–196 libraries, 191–200 medical records, 181–189 purpose, 188, 196–199 Dioxin and fly ash cement, 101–102 Dirac, Paul, 257, 263 Diseases Alzheimer’s disease, 210–217 cancer, 60, 66 cryptosporidium, 68 diabetes, 231 drinking water, 60, 62, 66, 68 fluorosis, 67 therapeutic cloning treatments, 227–236 tooth decay, 67 yellow fever, 201–209 See also Medicine; Organ transplants The Diversity of Life (Wilson), 47–48, 52, 57 Divorce, 121 DNA (deoxyribonucleic acid), 282 biosensors, 278, 283 computer circuits, 275, 279–280 conductivity, 275–283, 277, 278 donor-acceptor experiments, 282–283 M-DNA, 277–278, 281 mutation and conductivity, 278–279 nanotechnology, 277, 279–280, 283 semiconductivity, 280, 283 superconductivity, 281 Doctor-patient confidentiality. See Confidentiality, Doctor-patient Dolly (sheep), 230, 235 Domingo, Frank, 74 Donepezil, 211 Donk, Jan van, 81 Donor-acceptor systems in DNA, 282–283 Doppler shift, 20–22 Drinking water, 60 arsenic, 60, 63, 65–66 bottled water, 68, 68 cadmium, 68 cryptosporidium parvum, 68 fluoridation, 67 lead, 60, 67–68 MTBE contamination, 60, 66 nitrates, 67 standards, 59–69
treatment, 63, 63–64 turbidity, 67 See also Groundwater; Water Duhem, Pierre-Maurice-Marie, 241–242, 245–246 Duke, Michael, 13 Dusch, Theodor von, 112
Crow, Cameron, 41 Cryptosporidium parvum contamination, 68 Cuba and yellow fever, 202, 208–209 Cultural determinism, 128, 133 See also Biological determinism Cuvier, Georges, 137 Cyclosporin, 219 Cytosine-guanine pairs, 276, 280
F Fa’apuna’a, 134 Family violence, 120–121 The Fateful Hoaxing of Margaret Mead (Freeman), 128, 134 Federal funding of science. See Research grants Femoral head replacement. See Total hip replacements Fermat, Pierre de, 169
Fermi, Enrico, 260 Fermi National Accelerator Laboratory, 262 Feynman, Richard, 280 atoms, 237 nanotechnology, 279 publishing, 254 quantum electrodynamics, 260, 263 Film, 193–194 Filtration of drinking water, 64 Fink, Hans-Werner, 280, 283 Finland and radioactive waste disposal, 271 Finlay, Carlos Juan, 202, 207, 208 Fish biodiversity, 152–153 Fisher, Helen E., 121 Fisher, Richard, 25 Fission hypothesis (moon formation), 2–3, 6 Fission reactors. See Nuclear reactors Floppy disks, 193 Fluoridation, 67 Fluorosis, 67 Fly ash cement construction, 100–101 environmental issues, 100–101, 105 flowability, 104–105 quality, 99–105 strength, 103 See also Concrete Fomites, 202 Format migration, 192–194 Fortune, Reo, 127, 128, 131 Fossils hominid species, 136–144 mass extinctions, 48, 54 observational biases, 48, 54, 57 See also Paleoanthropology Foundation funding of research. See Research grants Fracastoro, Girolamo, 202 Franklin, Benjamin, 195 Franklin, Rosalind, 275, 282 Free will, 118–119, 123 Freeman, Derek, 128, 131, 133–135 Freneau, Philip, 205 Frisch, Karl von, 118 Fuel rods, 268, 271–273 Fukui, Keijiro, 282 Funding of research. See Research grants
G G-forces, 42 Gagarin, Yuri, 41 Galaxies Hubble constant, 20–28 standard candles, 24 Galerucella beetles, 159, 159 Galilei, Galileo atomic theory, 237 celestial mechanics, 259 Gamma irradiation and hip replacements, 93 Garn, Jake, 41, 42 Gassendi, Pierre, 239–240 Gay-Lussac, Joseph-Louis, 238, 240 Gearhart, John, 233 Gell-Mann, Murray, 261 Gellman, Robert, 187–188 Geology of Egypt (Hume), 75 Germ theory spontaneous generation, 107–115 yellow fever epidemics, 201–209 Giant impact theory (moon formation), 1–9 Giese, Bernd, 280, 282, 283 Giza (Egypt) and archaeology, 70–75, 72 Glaciation, global. See Ice ages Glashow, Sheldon Lee, 257, 260, 262 Glenn, John, 41, 42 Global warming ice ages, 81, 83, 85–86 mass extinctions, 49–50 Globular clusters, 27–28 Goddard Space Flight Center, 12, 14 See also NASA Goldin, Daniel Mars exploration, 16 space tourism, 45 Goodman, Daniel, 153 Goodman, Richard A., 134 Gorgas, William Crawford, 209 Gorillas and evolution, 137
Gott, J. Richard, 14 Goudsmit, Samuel, 243 Gould, Stephen J., 118 Gradualism, 144 Grand unified theory current progress, 256–264 electroweak theory, 259–264 standard model, 259–261, 261–262 theoretical constraints, 263–264 theory of everything, 258–259, 261 Grants, Research. See Research grants Gravitational slingshots, 30 Great Sphinx of Giza, 70–75, 72 Greek science atomic theory, 237, 239, 244–245 mathematics, 168–169 spontaneous generation, 107, 109 unification theories, 256 Greenhouse effect and ice ages, 86 Gregory, James, 166, 169 Groundwater drinking water standards, 59–69 radioactive waste disposal, 270 Woburn (MA) contamination, 60, 65 See also Drinking water; Water Guanine-cytosine pairs, 276, 280 Gulf Stream, 77–78, 85–86
H Haas, Otto, 93 Haboush, Edward J., 91 Haeckel, Ernst, 146 Haile-Selassie, Yohannes, 137, 140–141, 143 Hairston, N.G., 151–152 Hamilton, Alexander, 203 Hamilton, William Donald, 118 Handler, Philip, 146 Harr, Jonathan, 60, 65 Harriman, Anthony, 282 Hartmann, William K., 4, 7 Hawass, Zahi, 75 Hawking, Stephen, 260 Hay bacillus endospores, 114–115 Hays, James D., 81 Health Insurance Portability and Accountability Act, 187, 188–189 Health maintenance organizations, 182 Heisenberg, Werner, 242, 243–244, 259–260, 263 Helmont, Jan Baptista van, 109 Henderson, Gideon, 78–79, 81, 83 Henderson, R.W., 162 Hernandez, Sonia, 177 Herrup, Karl, 216 Heterogenesis, 109 Higgs bosons, 257, 261, 262 High-level radioactive waste. See Radioactive waste Hip replacement surgery. See Total hip replacements HIPAA (Health Insurance Portability and Accountability Act), 187, 188–189 Hippocrates four humors, 256 Hippocratic Oath, 181, 182 HLW (high-level waste). See Radioactive waste HMOs (health maintenance organizations), 182 Hoff, Jacobus Hendricus van’t, 240–241 Hollingshead, James, 12 Holmes, Lowell D., 134 Hominids, 136–144 Homo erectus, 139 Homo habilis discovery, 139, 141 species designation, 136–144 Hooft, Gerardus ‘t, 261 Hooke, Robert, 166 Hooker Telescope, 23 Hopper, Keith R., 161 Hubble, Edwin, 20–22, 21, 25 Hubble constant, 20–28 Huchenski, Jacki, 189 Huchra, John, 25 Human Cloning Prohibition Act (2001), 230 Human Genome Project, 123, 125 Hume, David, 245 Hume, W.F., 75 Huygens, Christiaan, 166 Huygens (space probe), 33, 34, 37 Hyperlinking, 197
I Ice ages
J Janus, Christopher, 214 Jefferson, Thomas, 203 Jericho (ancient city), 73, 75 Joblot, Louis, 108 Johanson, Donald, 136–144, 139 Juan Carlos (King of Spain), 199 Judet, Jean, 91 Judet, Robert, 91 Jupiter (planet), 36, 37
K
L Lanza, Robert, 236 Large Hadron Collider, 261, 262 Lavoisier, Antoine-Laurent, 238, 240
M M-DNA, 277–278, 281 M theory (physics), 263–264 MacArthur, Robert, 146–147 Mach, Ernst, 238, 241–242, 244, 245–247 Malaysian rain forest destruction, 49 Malik, Kenan, 122, 124, 125 Malpighi, Marcello, 112 Managed health care, 182 Manchester, S.J, 161 Manned space flight civilians, 39–46 costs, 11–12, 16–18 G-forces, 42 lunar landings, 3–4 Mars, 10–19 risks, 39–46 training, 40–42, 42, 44, 44–45 See also International Space Station; Space stations Manson, Patrick, 202 Margaret Mead and Samoa (Freeman), 128, 133–134 Margaret Mead and the Heretic (Freeman), 128, 133–134 Marinas, Benito, 68 Marine biology biodiversity, 153 mass extinctions, 48, 50, 50t sediment cores, 78–80 Mars (planet), 11 atmosphere, 13 life, 10, 14 manned exploration, 10–19 meteorites, 15 surface features, 13 water, 13, 19 Mars Society, 12, 15, 18 Maspero, Gaston, 71 Mass extinctions. See Extinction Mathematical modeling. See Calculus; Computer modeling Mathematics development of calculus, 165–171 flexible groupings in classrooms, 178–179
K-T (Cretaceous-Tertiary) extinction, 48, 55–56 Kaku, Michio, 35 Kant, Immanuel, 245 Karlsbad Decrees (1819), 192 Kasumov, Alik, 280–281 Kawai, Tomoji, 280, 283 Keeling, Ralph, 83 Keesing, Roger, 131 Keill, John, 168 Kelvin, William Thompson, 242 Kennedy, Caroline, 183–184 Key species in extinctions, 52 Khachaturian, Zaven, 216–217 Khafre (pharaoh), 70–71, 73–75 Khufu (pharaoh), 74 Kidney allografts, 222 Kirby, David, 189 Kirchheimer, Barbara, 186, 187 Kitcher, Philip, 125 Kolbe, Adolf Wilhelm Hermann, 240–241 Kraepelin, Emil, 210–211 Krause, Diane, 234 Kropotkin, Peter, 118
causes, 77–86 mass extinctions, 56 ILW (intermediate-level waste). See Radioactive waste Imbrie, John, 81 Informed consent, 183–184, 185 Infusoria. See Microorganisms Insects biodiversity, 51 biological control agents, 155–163 mosquitoes and yellow fever, 201–209 spontaneous generation, 107–108 Insolation and ice ages, 77, 77–86 Intermediate-disturbance hypothesis, 153 Intermediate-level radioactive waste. See Radioactive waste International Mars Society, 12, 15, 18 International Space Station, 17 civilian space flight, 39–46 cost overruns, 16–18 See also Manned space flight; Space stations Intrinsic gradualism, 55 Invasive species biodiversity, 148, 150–153 biological control agents, 155–163 cane toads, 162–163 cottony cushion scale, 160 English sparrows, 162 mongooses, 157, 160, 162 myna birds, 161 Old World climbing ferns, 159 purple loosestrife, 157–159, 163 thistles, 159–160, 162 zebra mussels, 148 See also Biological control agents; Ecosystems; Purple loosestrife; Zebra mussels Inventory Stela (Egypt), 74 Iridium in extinction boundaries, 56 Irving, Dianne N., 229 Irving, Dick, 222 Isotopes Earth and Moon, 3–4, 5 radioactive decay, 272 radioisotope dating, 81 Ives, Anthony, 153
Le Bel, Joseph-Achille, 240 Lead in drinking water, 60, 67–68 Leakey, Jonathan, 139 Leakey, Louis, 136–144, 142 Leakey, Maeve, 138, 140 Leakey, Mary, 136–144, 142 Leakey, Richard, 138, 140 Lee, Jeremy, 277–278, 280 Leeuwenhoek, Antoni van, 108, 109–110, 110, 112 Legacey, Denis, 18 Lehner, Mark, 75 Leibniz, Gottfried Wilhelm von, 165–171, 167 Leshin, Laurie, 13 Leucippus, 239 Lewis, Frederick D., 282 Lewontin, Richard C., 118, 121–122, 123 l’Hôpital, Marquis de, 167, 171 Libraries, 196 digitization, 191–200 format migration, 192–194 mission, 192–193 Trinity College (Dublin, Ireland), 195 University of Southwestern Louisiana, 197 weeding of materials, 195 Library Company of Philadelphia, 195 Library of Alexandria, 196 Lichtenberg, Byron, 43 Lieberburg, Ivan, 216 Life on Mars, 10, 14 Light and Doppler shift, 20, 21 Lime (substance), 103–104, 105 Liver transplants, 222, 224–225 LLW (low-level waste). See Radioactive waste Lorenz, Konrad, 118 Love and sociobiology, 121 Low-level radioactive waste. See Radioactive waste Lowell, Percival, 10 Lowrance, William W., 186 Lubicz, R.A. Schwaller de, 71 Lucy (hominid fossil), 139, 140, 142 Lunar landings, 3–4 Lunar Prospector (moon probe), 5 Lygodium microphyllum, 159 Lythrum salicaria. See Purple loosestrife
Greek origins, 168–169 whole-class teaching, 172–179, 177 Maximum contaminant levels, 59–69 Maxwell, Bo, 12 Maxwell, James Clerk atomic theory, 246 electromagnetic theory, 256, 259 May, Robert M., 152 Mayr, Ernst, 141, 142–143, 146 McCauliffe, Christa, 39, 41, 45, 46 McCluney, Ross, 35–36 McDowell, Nancy, 130–131 McGuffin, Peter, 122 McInerney, Joseph, 124 McKay, David, 14 McKay, Frederick, 67 McKay, Ron, 234–235 McKie, Robin, 123–124 MCLs (maximum contaminant levels), 59–69 Mead, Margaret, 131 criticism of her methodology, 127–135 data collection, 130–131 influences, 129 New Guinea study, 130–131 Samoan study, 129–130, 133 Mead’s Coming of Age in Samoa: A Dissenting View (Goodman), 134 Medco, 184 Medical records de-identification, 185–186, 188 digitization, 181–189, 183, 185 informed consent, 183–184 Medicine Alzheimer’s disease vaccine, 210–217 digitization of medical records, 181–189 organ donation distribution, 219–225 yellow fever epidemiology, 201–209 See also Biology; Diseases Medium-level radioactive waste. See Radioactive waste Meer, Simon van der, 260 Melton, Doug, 235 Mendeleyev, Dmitry Ivanovich, 240, 257 Merck and Co., 184 Meteorites age of solar system, 27 ALH 84001 meteorite, 15 mass extinctions, 55–56 See also Asteroids; Planetoids Methyl tert-butyl ether in drinking water, 60, 66 Miasmas and yellow fever, 201–209 Mice Alzheimer’s disease research, 211, 213–215, 216 stem cells, 233, 235 Michel, Helen, 56 Microorganisms biodiversity experiments, 150, 151–152 spontaneous generation, 107–115 Milankovitch, Milutin, 77, 80, 84–85 Milankovitch cycles, 77, 80–81, 84–85 Miller, Joel, 124 Millikan, Robert Andrews, 242 Milton, John, 192 Milwaukee (WI) cryptosporidium outbreak, 68 Mirsky, Steve, 159 Mombaerts, Peter, 235 Mongooses, 157, 160, 162 Moon, 4 colonization, 18–19 core, 3, 5 early magma ocean, 5 giant impact theory, 1–9 lunar landings, 3–4 myths, 7 oxygen isotope ratios, 3–4, 5 water, 19 Moore, Austin T., 91 Moore, Gordon, 279 Moore’s law, 279–280 Morgan, Dave, 214 Moseley, Henry, 242 Mosquitoes and yellow fever, 201–209, 208 Mould, Jeremy, 25 Mount Wilson observatory, 23 MTBE in drinking water, 60, 66 Mueller, Joseph, 67 Muller, Richard A., 80–81 Multiple barrier treatments of drinking water, 64 Multipotency of stem cells, 234
Mundugumor tribe, 130–131 Murray, Joseph E., 219 Myers, Norman, 47, 52 Myna birds, 161, 162 Mystery of the Sphinx (television show), 71
N Nabata neolithic structures, 73 NADW (North Atlantic Deep Water) circulation belt, 77–78, 85–86 Naeem, Shahid, 149–150 Naish, Tim, 85 Nanotechnology and DNA, 277, 279–280, 283 Napoleon III, 113 NASA (National Aeronautics and Space Administration) civilian space flight, 39–46 “faster, better, cheaper,” 37 Mars exploration, 12–19 radioisotope thermal generators, 29–38 See also Goddard Space Flight Center NASCAR and math curricula, 176 National Academy of Sciences, 231 National Council of Teachers of Mathematics, 176, 177, 178 National Drinking Water Advisory Council, 66 National Environmental Education and Training Foundation, 61 National Institutes of Health, 248, 251–255 National Numeracy Strategy (Great Britain), 174–175 National Organ Transplant Act (1984), 220, 223 National Primary Drinking Water Regulation, 66–67 National Research Council, 60, 66 National Science Foundation, 248, 251–255 National Secondary Drinking Water Regulation, 66 National Transplant Action Committee, 222 Natural Sciences and Engineering Research Council of Canada, 278 Naturphilosophie, 110 Needham, John Turberville, 108, 110, 112 NEETF (National Environmental Education and Training Foundation), 61 Nelson, Bill, 41 Nelson, Marc, 68 Nerve growth factor, 217 Neurofibrillary tangles, 211, 213, 215 Neutrinos and mass, 264 New Guinea tribal life, 130 New Orleans (LA) yellow fever epidemic, 204 Newton, Isaac, 170 atomic theory, 237, 240, 245 calculus, 165–171 gravity, 1–2, 256, 259 motion, 256 NIH (National Institutes of Health), 248, 251–255 Nitrates in drinking water, 67 Noble, David, 193 North Atlantic Deep Water circulation belt, 77–78, 85–86 Northern Hemisphere and ice ages, 77–86 Noxious vapors and yellow fever, 201–209 NRC (National Research Council), 60, 66 NSERC (Natural Sciences and Engineering Research Council of Canada), 278 NSF (National Science Foundation), 248, 251–255 Nuclear cell transfer, 235 See also Therapeutic cloning Nuclear reactors radioisotope thermal generators, 32 spent fuel rods, 268, 271–273 waste storage and disposal, 266–274 Nuclear waste. See Radioactive waste Nutcracker man. See Australopithecus boisei
O Oath of Hippocrates, 181, 182 Observatories, 23 Ocean currents and ice ages, 77–78, 85–86 OCLC World Cat (database), 195 Ohio College Library Center, 195 Oklo (Gabon) radioactive ore, 269 Old Kingdom (Egypt), 71, 73–75 Old World climbing ferns, 159 Oldenburg, Henry, 166–167 Olson, Erik D., 61 Omiomeres, 239 Oonishi, Hironobu, 96 Oppenheimer, Robert, 271
P
Q Quantum mechanics atomic theory, 239, 242–243 electromagnetic theory, 259–260 field theory, 261–262 grand unified theory, 257 quantum chromodynamics, 260, 262 quantum electrodynamics, 257, 260
R Radioactive batteries. See Radioisotope thermal generators Radioactive waste current storage methods, 272–273 dangers, 271–272 disposal in space, 269 geological disposal, 268, 269–270, 273–274 high-level waste, 268–269 ice burial, 269 low-level disposal sites, 268, 270 public perception, 270–271 sea burial, 269 storage and disposal, 266–274 Utah, 272 Yucca Mountain, 273 See also Radioisotope thermal generators Radioactivity. See Radioactive waste; Radioisotope thermal generators Radioisotope thermal generators accidents, 31, 35, 37–38 alternatives, 32–33, 35–37 Cassini, 29–38 design, 30–31, 32, 34–35 safety, 29–38 Ragsdale, David, 160, 163 Rain forests Amazon river, 150 biodiversity, 151 cloud forests, 51 destruction, 49 edge effect, 52 mass extinctions, 47–48, 51–52, 53 preserve patches, 52, 53 Rats and stem cell research, 234 Raup, David, 18, 49, 53 Reagan, Ronald, 16 Red shift. See Doppler shift Redi, Francesco, 107, 109, 112 Reed, Walter, 202, 208–209 Reice, Seth R., 153 Reproductive cloning regulations, 227, 228, 230–231 techniques, 235 Research grants award system, 248–255 Canada, 278 National Institutes of Health, 248, 251–255 National Science Foundation, 248, 251–255 peer review, 249, 251–252 triage systems, 252 Revolutions de la Mer, Deluges Periodics (Adhémar), 79 Reynolds, David, 174 Rhinocyllus conicus, 159–160, 162 Rice crops, 151 Riemann, Georg Friedrich Bernhard, 171 Riley, Brien, 122 Risser, Steven M., 282 Rivastigmine, 211 Road construction using fly ash cement, 101, 105 Robert E. Nolan Co., 187 Robins, Donna, 65 Robins, Kevin, 65
Robischon, Noah, 188
Roche limit, 7
Roentgen, Wilhelm Conrad, 242
Rohm, Otto, 93
Roman Catholic Church
  atomic theory, 237
  censorship, 192
  spontaneous generation, 113
Ross, Ronald, 202
Royal Society (England)
  development of calculus, 166–167, 168
  therapeutic cloning, 234
RTGs (radioisotope thermal generators). See Radioisotope thermal generators
Rubbia, Carlo, 260
Rush, Benjamin, 205, 206, 206, 207
Russian Space Agency, 40, 41, 43
Rutherford, Ernest, 242

S
Safe Drinking Water Act, 59, 62, 63, 64, 66
Safronov, V.S., 7
Sagan, Carl, 11, 252
Salam, Abdus, 259, 259, 260, 262
Salt and fly ash cement, 104
Samoan adolescence studies, 127–135
Sandage, Allan, 21, 22, 24, 24–25, 27
Saturn (planet), 29
Savage, Paul B., 68
Savas, Stephen, 186
Scanning tunneling microscopy, 244, 279
Schenk, Dale, 213–214, 216
Schiaparelli, Giovanni, 10
Schoch, Robert, 73, 74–75
Schonenberger, Christian, 280, 283
Schröder, Heinrich, 112
Schrödinger, Erwin
  atomic theory, 243
  quantum mechanics, 259–260, 263
Schülze, Franz, 112
Schuster, Gary, 282
Schwann, Theodor, 112
Schwinger, Julian, 260, 263
Scientific grants. See Research grants
Scientific theory, 1–2, 252
Segerstrale, Ullica, 121
Seismicity of the Great Sphinx, 73
Self-sacrifice and genetics, 117, 120
Semiconductivity of DNA, 276, 280, 283
Senile dementia. See Alzheimer’s disease
Senile plaques. See Beta-amyloid plaques
Senut, Brigitte, 137, 140
Sepkoski, J. John, 18
September 11, 2001, terrorist attacks, 196
Sexual behavior
  Mead’s Samoan study, 127–135
  sociobiology, 120–121
Shackleton, Nicholas, 81
Shankman, Paul, 128
Shugart, Alan, 193
Shuttleworth, Mark, 43
Siegler, Mark, 231
Sigurdsson, Einar, 214–215
Silent Spring (Carson), 156
Silvani, Harold, 176
Singer, Maxine, 231
Sipos, Attila, 124–125
Skeletal fluorosis, 67
Slipher, Vesto Melvin, 20, 22
Slowey, Niall, 78–79, 81, 83
Sluse, René-François de, 167
Smith-Peterson, Marius Nygaard, 90, 94
Sociobiology, 117–125
  See also Anthropology; Cultural determinism
Sociobiology: The New Synthesis (Wilson), 117–125
Soerens, Thomas, 68
Solar concentrators, 35–36
Solar panels on space probes, 29, 32–33, 35–36
Somatic cell nuclear transfer. See Therapeutic cloning
Sound and Doppler shift, 20
Sousa, Wayne P., 153
Southern Hemisphere and ice ages, 77–86
Space flight. See Manned space flight; Unmanned space flight
Space probes
  Cassini, 33
  gravitational slingshots, 30
  Huygens, 33
  power demands, 32–33, 36–37
  safety, 29–38
Space shuttles
  civilian space flight, 39–46
  training, 44
Space stations, 17
  civilian space flight, 39–46
  cost overruns, 16–18
  See also International Space Station
Space tourism, 39–46
  Dennis Tito, 40, 41, 43, 45
  public opinion polls, 43, 46
Space Transportation Association, 42–43
Spallanzani, Lazzaro, 108, 110, 112
Sparrows, English, 162
Species designation, 136–144
Species diversity. See Biodiversity
Sphinxes, 70–75, 72, 74
Spivey, D.I., 282
Spontaneous generation
  Darwinism, 111–112
  definition, 109
  history of theory, 107–115
Sputnik (satellite), 39
Squire, Peter, 189
STA (Space Transportation Association), 42–43
Standard candles (astronomy), 24, 26–27
Standard model (particle physics), 259–261, 261–262
Star Wars (motion picture), 194
Steinhardt, Bernice, 185
Stem cells
  diabetes, 234–235
  differentiation, 231–232, 234
  disease treatments, 227–236
  ethics, 229–230
  forecasts, 231–232, 235–236
  memory, 234
  protests, 230
  regulations, 227–228, 229–230
Stephens, Briton, 83
Sterilization
  spontaneous generation, 110–111, 113–114
  total hip replacements, 93
Sternlof, Kurt, 81
Stevenson, David, 8–9
Stevenson, Harold, 174
Stiegler, Jim, 174
String theory, 260–261, 263–264
Strong, Donald, 159, 161–162
Strong nuclear force, 257, 260, 262
Sugaya, Kiminobu, 234
Sulfates and fly ash cement, 104
Superconductors, 275–276, 281
Supernovas, 27
Supersymmetry theory, 260–261, 262
Szent-Györgyi, Albert, 275, 278–279, 281–282

T
’t Hooft, Gerardus, 261
Tacrine, 211
Tanaka, Kazuyoshi, 282
Tap water. See Drinking water
Tau protein, 211, 213, 215
Tchambuli tribe, 130
Teaching, Whole-class. See Whole-class teaching
Teflon in hip replacements, 91, 94, 95
Telescope, Mount Wilson, 23
Telomeres, 232
Theise, Neil, 234
Theory of everything, 258–259, 261
  See also Grand Unified Theory
Therapeutic cloning
  disease treatments, 227–236
  ethics, 229–230
  forecasts, 231–232, 235–236
  protests, 230
  regulations, 227–228, 229–231
Thies, William, 217
Third International Mathematics and Science Study, 173–174
Thistles and biological controls, 159–160, 162
Thompson, Frederick R., 91
Thompson, Tommy G., 220
Thomson, James, 233–234
Thomson, Joseph John, 242, 247, 257
Thorium and radioisotope dating, 81
Thutmose IV (pharaoh), 74
Thymine-adenine pairs, 276, 280
Tiemann, Mary, 61
Tilman, David, 149, 151
Tilt, Orbital. See Orbital eccentricities
TIMMS (Third International Mathematics and Science Study), 173–174
Tinbergen, Nikolaas, 118
Titan (moon), 34, 37
Tito, Dennis, 40, 41, 43, 45
Tobias, Phillip, 142, 143
Tomonaga, Shin’ichiro, 260, 263
Tong, Rosemarie, 132
Tooth decay and fluoridation, 67
Torricelli, Evangelista, 239
Total hip replacements, 89–97
  complications, 94–96
  durability, 93
  history, 90–92, 94, 95
  improvements, 95
  materials, 92–93
Trichloroethylene in groundwater, 60, 68
Trinity College (Dublin, Ireland), 195
Tully, Brent, 25
Tumlinson, Rick N., 43
Turbidity of drinking water, 67
Tuszynski, Mark, 217
Twain, Mark, 192
Tyndall, John, 114, 115

U
Ubel, Peter A., 223
Uhlenbeck, George, 243
Uncertainty principle, 243–244
Uniform Anatomical Gift Act (1968), 219–220
Uniform Determination of Death Act (1980), 220
United Kingdom cloning regulations, 228, 230–231
United Network for Organ Sharing, 220, 222
United States Office of Ground Water and Drinking Water, 59, 62
United States Public Health Service, 59, 61
University of Saskatchewan Technologies Inc., 278
University of Southwestern Louisiana, 197
Unmanned space flight
  safety, 29–38
  space probes, 11
UNOS (United Network for Organ Sharing), 220, 222
Uranium
  radioisotope dating, 81
  underground ore, 269
Utah radioactive waste site, 272

V
Vaccines
  Alzheimer’s disease, 210–217
  nasal sprays, 214
  yellow fever, 209
van’t Hoff, Jacobus Henricus, 240–241
Vaucouleurs, Gerard de, 21, 24–25
Vedalia beetles, 160
Velikovsky, Immanuel, 2
Veltman, Martinus J.G., 261
Venter, Craig, 123–124
Vitallium in hip replacements, 94
Volcanos, 56
  hominid fossils, 139
  mass extinctions, 49–50
Vostok ice core, 83

W
Wakayama, Teruhiko, 235
Ward, William, 7
Washington nuclear waste site, 270
Water
  drinking water standards, 59–69
  Mars, 13, 13, 19
  Moon, 19
  See also Drinking water; Groundwater
Water erosion of Great Sphinx of Giza, 70–71, 73–75
Watson, James, 275, 282
Weak nuclear force, 257, 260
Weathering of Great Sphinx of Giza, 70–71, 73–75
Weeden, C.R., 160
Weinberg, Steven, 259, 260, 261, 262, 264
Weiner, Howard, 214
Weissman, Irving, 231
Welles, Orson, 10
Wells, H.G., 10
Wertheim, Margaret, 124
West, John Anthony, 71, 73–74, 75
West, Michael, 236
Westaway, David, 216
White, Timothy, 140, 143
Whitfield, John, 81, 83
Whitman, Christine Todd, 60, 66
Whitman, Royal, 94
Whole-class teaching
  alternatives, 178–179
  Great Britain, 174–175
  mathematics, 172–179, 177
  methodology, 173–174
  problems, 177–178
  programs, 176
  technology, 175–176
Wichman, Harvey, 42
Wiebe, Arthur, 176
Wiechert, Uwe, 5
Wilmut, Ian, 234, 235
Wilson, Edward O., 121
  biodiversity, 148
  mass extinctions, 47–48, 52, 57
  sociobiology, 117–125
Wind erosion of Great Sphinx of Giza, 70–71, 75
Wobble, Orbital. See Orbital eccentricities
Woburn (MA) groundwater contamination, 60, 65
Wyeth-Ayerst Laboratories, 215

Y
Yellow fever, 201–202
  mosquitoes, 201–209, 208
  New Orleans, 204
  Philadelphia, 203, 204–205, 207
  quarantine, 206
  vaccine, 209
Yucca Mountain (NV), 273–274, 273
Yukawa, Hideki, 257

Z
Zaret, Thomas, 152
Zavos, Panayiotis, 230, 233
Zebra mussels, 148
Zinjanthropus boisei. See Australopithecus boisei
Zubrin, Robert, 15, 16