Darwinian Reductionism
Or, How to Stop Worrying and Love Molecular Biology

Alex Rosenberg

The University of Chicago Press
Chicago and London
Alex Rosenberg is the R. Taylor Cole Professor of Philosophy and Biology at Duke University. He has published ten books, including Economics—Mathematical Politics or Science of Diminishing Returns? and Instrumental Biology, or The Disunity of Science, both published by the University of Chicago Press.

The University of Chicago Press, Chicago 60637
The University of Chicago Press, Ltd., London
© 2006 by The University of Chicago
All rights reserved. Published 2006
Printed in the United States of America
15 14 13 12 11 10 09 08 07 06    1 2 3 4 5
ISBN-13: 978-0-226-72729-5 (cloth)
ISBN-10: 0-226-72729-7 (cloth)

Library of Congress Cataloging-in-Publication Data
Rosenberg, Alexander, 1946–
Darwinian reductionism, or, How to stop worrying and love molecular biology / Alex Rosenberg.
p. cm.
Includes bibliographical references and index.
ISBN 0-226-72729-7 (hardcover : alk. paper)
1. Molecular biology—Philosophy. 2. Biology—Philosophy. 3. Reductionism. I. Title: Darwinian reductionism. II. Title: How to stop worrying and love molecular biology. III. Title.
QH506.R654 2006
572.8—dc22
2005037319

♾ The paper used in this publication meets the minimum requirements of the American National Standard for Information Sciences—Permanence of Paper for Printed Library Materials, ANSI Z39.48-1992.
For David Hull
ὁ φιλόσοφος τῶν ζῳολόγων
and David Sanford
ὁ φιλόσοφος τῶν φιλοσόφων
Contents
Preface
Introduction: Biology's Untenable Dualism
1 What Was Reductionism?
2 Reductionism and Developmental Molecular Biology
3 Are There Really Informational Genes and Developmental Programs?
4 Dobzhansky's Dictum and the Nature of Biological Explanation
5 Central Tendencies and Individual Organisms
6 Making Natural Selection Safe for Reductionists
7 Genomics, Human History, and Cooperation
8 How Darwinian Reductionism Refutes Genetic Determinism
References
Index
Preface
I would like to think that my thirty years of reflection on the relationship of molecular biology to the rest of the discipline has consisted in a series of views successively approximating the version of reductionism defended in this book. Alas, that would be a species of self-deception. What is true is that over this period, during which I despaired of finding an argument that would vindicate reductionism about nonmolecular biology, the prospect of irreducibility had weighed heavily on my epistemological and metaphysical conscience. For a biological science that cannot be systematically connected to the rest of natural science gives hostages to mystery mongering or worse: creationism, "intelligent design," and their new-age variants.

In The Structure of Biological Science (1985) and Instrumental Biology or the Disunity of Science (1994), I identified impediments to reduction and attempted to draw out the force of their metascientific implications. But neither was a stable equilibrium in the resolution of forces pulling toward the autonomy of biology and pushing toward its integration within physical science. The reason, I now see, for the inadequacy of these views, different though they were from one another, was their failure fully to appreciate the role of Darwinian theory in biology "all the way down" to the level of the macromolecule, along with the recognition that biology is history.

The former thesis must be credited to Theodosius Dobzhansky, and in this book has accordingly been dubbed Dobzhansky's dictum: "Nothing in biology makes sense except in the light of evolution," where, of course, evolution means the Darwinian mechanism of blind variation and natural selection. The latter thesis, that biology as a discipline is history, should probably be credited to Charles Darwin himself, though the philosopher whose writings have most impressed it upon me is Elliott Sober. Probably more of
my disagreements with him over the years have been due to my neglect of this insight than to any other. Both of these insights together enabled me finally to see how, in fact, matters stand between biology and the physical sciences, and to see clearly that they vindicate a strong but subtle reductionism. Whence the present work.

The other philosophers from whose constructive disagreements about reductionism I have most benefited are Philip Kitcher, Robert Brandon, Kenneth Waters, Ken Schaffner, Paul Griffiths, Peter Godfrey-Smith, Dan Dennett, Mohan Matthen, Marcel Weber, Bill Wimsatt, Kim Sterelny, Michael Ruse, Roberta Millstein, and Samir Okasha. The whole debate, of course, took off owing to the insights of David Hull, to whom this book is codedicated. I have also profited from detailed comments on the argument by Andre Ariew and Sahotra Sarkar.

Among biologists with whom I have debated these ideas, my greatest debts are to Dan McShea and especially to Fred Nijhout, neither of whom should be expected to endorse these views any more than the philosophers named previously. I must also record debts here to two other evolutionary biologists, Dan Promislow and Wyatt Anderson.

Moreover, I am indebted to coauthors on several papers whose descendants have found their way into a number of chapters in this book. I must thank Fred Bouchard, David Kaplan, Stefan Linquist, and Philip Rosoff for permission to use material that we developed together. In addition, many of the ideas that found later form in the present text were first tried out in discussion and debate among students and faculty in the Duke Center for the Philosophy of Biology, including especially Tamler Sommers, Marion Hourdequin, Grant Ramsey, Marshall Abrams, and Sunny Yu. I have also profited from feedback by Marc Lange, John Roberts, and John Carroll in the Research Triangle Philosophy of Science Consortium.
To Lange I owe the challenge whose solution seems to me to have untangled the Gordian knot of biological antireductionism.

Geneva, Switzerland
August 2005
Introduction
Biology's Untenable Dualism

The molecular revolution in biology is now more than fifty years old. Most people date it from the day in April 1953 when Nature published "A Structure for Deoxyribose Nucleic Acid" by one J. D. Watson and one F. H. C. Crick. Watson and Crick were rather muted about the significance of their proposal. With a British sense of understatement one is inclined to attribute to Crick, they wrote in the second sentence, "This structure has novel features which are of considerable biological interest" (737). And almost at the end of their one-page paper, they coyly admit, "It has not escaped our notice that the specific base pairing we have postulated immediately suggests a possible copying mechanism for the genetic material."

Nonmolecular biologists may be excused if they remark that these passages are almost the last time that discoveries in molecular biology have been announced with such becoming modesty. Indeed, there is a joke that reflects the degree to which molecular biology has tried the patience of all the rest of the discipline over the last six decades. It goes like this: The evolutionary biologist and the molecular biologist go to heaven. They are met at the pearly gates by Saint Peter, who announces, "Sorry, but there is at the moment room for just one of you." In consternation, they both ask, "Well, what are you going to do?" Saint Peter replies, "Each of you will tell me why I should admit you, and then I will choose one. You first," he says, indicating the molecular biologist. At this point the evolutionary biologist steps forward. "Send me to hell!" she shrieks.
Surprised, Saint Peter inquires, "But why, I haven't even heard your case yet." To which the evolutionary biologist replies, "I'd rather go straight to hell than have to listen one more time to someone telling me how wonderful molecular biology is."

My aim in this book is to convince the reader that heaven does belong to the molecular biologist, and to convince nonmolecular biologists that there is room for them in the molecular biologist's heaven.

The relationship between molecular biology and the rest of the discipline is not really a joking matter. It is a scientific and philosophical problem that rivals the original untenable dualism: the mind/body problem. In fact, in some ways the problem about biology is more perplexing than the problem about psychology. The mind/body problem is simple to state, and so far impossible to resolve. On the one hand, modern science—and, of course, neuroscience—is committed to "physicalism": the assumption that there is only one kind of stuff, substance, or thing in the universe, from matter, material substance, and physical objects, all the way down to quarks. On the other hand, there is the mind: mental states, thoughts, feelings, sensations, emotions, consciousness. So, physicalism is committed to the conclusion that the mind just is the brain and "nothing but" the brain. However, beyond the hand-waving of the neuroscientist and the philosopher's arguments for global physicalism, there is no evident way decisively to show that these mental things are "nothing but" physical things. And there are arguments, still unrefuted, advanced by dualists since Descartes, against the claim that the mind is "nothing but" the brain.

Those "substance dualists," who oppose physicalism and deny that the mind is identical to the brain, are in just as bad a fix as the monists who assert physicalism. For they have to explain how physical events in the body can have nonphysical effects in the mind, and how nonphysical events in the mind can have physical effects in the body. For example, breaking a bone in your leg is a physical process that involves changes to the composition of matter in the body, and has effects in the mind—intense feelings of pain.
But if the onset of feelings of pain is not a physical event because the mind is not material, it is hard to see how the causal signal from the breaking bone could “jump the gap” from physical, chemical, mechanical processes in the brain to allegedly nonphysical states in the mind, states like being in pain. Simply to say, as the dualists do, that the causal signal just does jump that gap is as much a matter of sheer assertion and hand-waving as the physicalist is guilty of. Neither physicalism nor dualism has satisfactory answers to its problems.
reductionism as a research program

Of course, the success of the scientific worldview's program of research since the seventeenth century strongly suggests that the physicalist view of the mind will eventually be vindicated. Ever since then, more and more phenomena—physical, chemical, and, apparently, biological—have been shown to be "nothing but" matter in motion. But showing that psychological processes are "nothing but" physical ones remains a hard problem. So hard that when Francis Crick first faced it squarely, almost sixty years ago, he decided it was premature to tackle the dualism of mind and body. Instead, he decided to apply himself to another, perhaps more tractable, problem: unraveling the dualism of the biological and the chemical. If he could show that heredity is a wholly physical process, he thought he would do much to vindicate physicalism about the biological realm.

You might suppose that Watson and Crick's achievement, and the subsequent successes of molecular biology in uncovering the physical mechanisms of so many biological processes, have provided the greatest vindication for physicalism since the atomic theory showed that all chemical reactions were "nothing but" physical processes. It's certainly true that the discovery of the structure and chemical functioning of DNA has done a great deal to vindicate in the public mind the "nothing but" thesis. Certainly Crick thought so. And of course it is not just discoveries and well-confirmed theories in molecular genetics that have substantiated physicalism. Much of molecular biology is devoted to elucidating the chemical reactions through which all bodily processes are realized: the action of enzymatic catalysts, hormonal messengers and "second" messengers, the neurotransmitters and ion channels between and within nerve cells. Everywhere it has turned its hand, molecular biology has uncovered the purely physical character of biological phenomena. As we shall see hereafter, it has done so largely with the tools that Crick, Watson, and subsequent nucleic acid biologists provided. This is what makes molecular biology appear even more "genocentric" than it is.
Having vindicated physicalism about the biological to his satisfaction, Crick has spent most of the rest of his life on the harder problem of dissolving the dualism of mind and body. Meanwhile, Crick’s standards of satisfaction for the solution of the problem of how heredity can be physical have been much too low for many biologists and philosophers. It’s not just that they have refused to generalize from the demonstration that heredity is “nothing but” a physical process, to the conclusion that all biological processes are “nothing but” physical processes. These philosophers and biologists have even rejected Crick’s conclusion about heredity being shown to be “nothing but” a physical process. The funny thing about their rejection is that, unlike Descartes’ dualist followers, almost every one of these philosophers and biologists who deny the “nothing but” thesis is a self-proclaimed physicalist, at least about biology. How can you be a physicalist and deny the “nothing but” thesis? And why would you do so? It is this paradoxical state of affairs that makes a perplexing mystery out of the problem of exactly how molecular biology relates to the rest of biology.
Substance dualism about biology would not be hard to express, though it would be hard to find anyone who owns up to believing it. The idea that biological processes are not physical, that they involve vital spirits, divine sparks, entelechies or omega points, is clear enough. But it was extirpated from science soon after the beginning of the last century. It's just not a live option. The only biologists who deny physicalism are an assortment of cranks and creationists to whom serious science pays no heed. We're all physicalists now.

On the other hand, there are almost no "reductionists"—biologists or philosophers who spend time publicly endorsing and arguing for any version of the "nothing but" thesis. About the only prominent biologists who might revel in the label, besides Watson and Crick, are E. O. Wilson and Richard Dawkins. Everyone else in the public debates about molecular biology seems to be both a physicalist and an antireductionist. They hold that the adequacy, accuracy, correctness, and completeness of biological theories and explanations need not and in most cases do not hinge on the provision of theories and explanations from physical science that show how biological phenomena are physical. Reductionism is the thesis that biological theories and the explanations that employ them do need to be grounded in molecular biology and ultimately physical science, for it is only by doing so that they can be improved, corrected, strengthened, made more accurate and more adequate, and completed.

But, I suspect, though most molecular biologists are reductionists, or would be if they thought much about it, they are not parties to the debate. I also suspect that most of them are more interested in getting on with the reduction of the rest of biology to molecular biology than arguing about whether it's possible or desirable. But if pressed, molecular biologists would provide something like the following argument for reductionism.
To begin with, the history of science, or at least physical science since the seventeenth century, is the history of successive successful reductions. Kepler began the process by identifying the roughly elliptical paths of the planets around the sun and Galileo followed by identifying the roughly constant acceleration of bodies in the vicinity of the Earth. The Newtonian revolution consisted in reducing both of their discoveries to a single set of fundamental laws of motion. In doing so, Newton was able both to increase the precision of predictions of the motion of bodies, both terrestrial and celestial, and to unify their disparate explanations of the behavior of planets and cannon balls as special cases of a single phenomenon. The subsequent two centuries saw a persistent increase in the explanatory range and predictive precision of Newtonian mechanics as it subsumed more and more phenomena—the tides, eclipses, buoyancy, aerodynamics, until by the end of the nineteenth century, heat was shown to be a mechanical process and thermodynamics was absorbed
into the Newtonian worldview. That left electromagnetism to be reduced to Newtonian mechanics. Successful explanation and precision in prediction have been closely connected in physical science ever since Newton, largely because of the explanatory and predictive successes of his theory. However, as measurement precision increased through the period, the predictive accuracy of Newton's theory declined, so that at the beginning of the twentieth century it faced serious explanatory problems. These problems arose in regard to both very large-scale phenomena such as the orbit of Mercury and very small-scale phenomena such as radiation, as well as difficulties posed by attempts to bring together mechanics and electromagnetism into one theory. But the solution to these predictive and explanatory problems confronting Newton's theory was a new wave of reductions, to the theory of relativity and of quantum mechanics, which explained both the accuracy and the errors of Newton's theory by reducing it to special cases of each of them, while both absorbed different parts of electromagnetic theory.

The resulting problem facing physics was that these two theories—quantum mechanics and the theory of relativity—are incompatible with each other, and much twentieth-century physical research has been devoted to attempts to reduce one of these two theories to the other. In particular, physicists sought to show that there is a single theory, which explains how gravitational force and the forces between subatomic particles are all variations on a single underlying process that manifests itself in a variety of ways. This reductionist program has, of course, not yet succeeded. But its urgency shows the importance of reductionism as a research program in physics.

Meanwhile, the history of chemistry has shown a quite similar trend over the last two hundred years. First, Mendeleev formulated the periodic table of the elements in the second half of the nineteenth century.
Then, starting early in the twentieth century, physicists and chemists began to show that the regularities of chemical synthesis could increasingly be explained and predicted by reducing them to regularities of atomic and subatomic bonding, which in turn were reduced to regularities of quantum mechanics. The result of all this reductive unification has been a synthesis of chemical and physical theories with an explanatory range and predictive precision that is reflected everywhere we turn in twenty-first-century technology. Until 1953, biology seemed recalcitrant both to the reductionistic trend evinced by physical science and to its predictive payoff that makes for increasingly reliable technological application. Of course, if reduction and predictive precision are as closely related as physics suggests, this dual recalcitrance will have been no accident.
Before 1953, there were a number of widely accepted explanations for biological processes; some of them, especially in physiology, made indispensable use of chemical and physical theory. And there was general theory—the theory of natural selection, as well as narrower theory—for instance, Mendel's laws of segregation and assortment of genes. But general theories, exceptionless laws, and quantitative regularities were few and far between in biology. Few enough to encourage philosophers in the 1950s and '60s to deny that biology was very much like the physical sciences, to suggest that it had its own unique and distinctive explanatory strategy, and to insist that these explanations were not to be tested by predictions, since biological phenomena were not predictable.

Molecular biologists will admit that prior to 1953, biology's most basic theories lacked features characteristic of theory in physical science: direct evidential support, explanatory generality, predictive precision, or all three. But on their view, these are defects to be corrected by a reductionistic research program. Consider Mendel's laws: almost immediately after their rediscovery in the early twentieth century, exceptions to them began to pile up: crossover, linkage, meiotic drive, autosomal genes, and so on. If we could reduce Mendel's laws to their macromolecular foundations, then presumably both their range of application and the exceptions to them would be explained, and for that matter might then be predicted. We could then employ Mendel's laws with confidence in areas where we knew their exceptions would not arise, and avoid reliance on them in applications where they are likely to play us false. Such precision would be of considerable value in agriculture and medicine, just for a start.

Darwin's theory offers the reductionist another example. One of the longstanding complaints made against the theory of natural selection is that it is almost without predictive content.
If predictive content is a measure of explanatory power, then the theory's claims to explain diversity, complexity, and adaptation on the Earth are open to challenge—one creationists have often made their own. Anyone familiar with the details of terrestrial evolution will understand why it is hard to make evolutionary predictions about the future of terrestrial flora and fauna: the number of factors which conjunctively determine fitness is vast, and their strength varies over evolutionary timescales that we cannot easily measure during single human life spans. But if we could ground the principles of the theory of natural selection on more fundamental principles from, say, chemistry and physics, that were themselves well supported by the confirmation of their precise predictions, we would have provided very powerful evidential support for the theory, in spite of its predictive difficulties for plants and animals.

The molecular biologist will hold that the road to wider explanatory power, greater predictive precision, and an ever-increasing payoff in reliable technological application is paved by reduction. If there are obstacles to the reduction of the biological to the macromolecular, they are temporary, or at least not logical or physical obstacles. But until these obstacles are surmounted, the technological payoff for human welfare of biology will be limited. As they are removed it will rapidly accelerate. As the rest of scientific change through three centuries has shown, reduction is the most powerful way to correct, deepen, and broaden scientific theory. And, by and large, it is the only way to make technology based on it reliable enough to employ.

It is still early days for molecular biology as a subdiscipline. A half century or so from Watson and Crick's breakthrough is not exactly the blink of an eye, but it is a short time by the standards of scientific change. Yet molecular biologists would be right to feel vindicated in their commitment to reductionism as a research strategy. Their discoveries have been surprising and have begun to do the work of correcting prior theory and enhancing predictive success in at least some areas of medicine, agriculture, and bioengineering. And nothing in the laboratory has arisen to suggest impediments to the research program of reductionism.
antireductionism: physicalism with a human face

But, as noted above, the remarkable fact is that most participants in the public discussion of reductionism—including even a number of distinguished and vocal molecular biologists—reject it. And they reject reductionism notwithstanding their commitment to physicalism. Now, on the face of it, the combination of physicalism and antireductionism is a surprising and perplexing intellectual package. It surely looks like an "unstable equilibrium," a position in which no one can long remain. The physicalist antireductionist claims that all biological facts are physical ones, that there are no ghosts in the biological machinery, and yet physical science—physics and chemistry—cannot now and never will be able to explain or even express all these physical facts.

These physical facts about the world, which physics and chemistry, even in their completed form—whatever that might look like—will be unable to explain are not facts about the basement level of physical processes, the behavior of strange and colored quarks. That there are basement-level physical facts, which cannot be further explained, everyone will grant. Explanations have to come to an end somewhere. But if physicalist antireductionism about biology is right, then besides these unexplainable facts about the unobservable basic building blocks of the universe will be another set of physically unexplained facts. And these facts not explainable by physics and chemistry will be facts about the flora and fauna we can see, feel, hear, and smell. What is even more mysterious, these biological facts will have biological explainers, not physical ones. And these explainers
will be basic facts just like the unexplained explainers of physics about quarks, but they won't be facts about quarks. But, of course, what sort of facts will they be? If they are physical facts that can't be explained by facts about quarks, then there will be another range of physical facts not explained by physics besides the basic facts about quarks, and this range of facts will be a set of basic unexplained facts too. But then what makes this second range of facts physical if they are not facts about quarks or explained by facts about quarks? And, on the other hand, if this set of physically unexplained facts that explain the biological ones are not physical ones, we have to give up physicalism and admit that there are nonphysical facts—ghosts in the machine.

You can see the problems to which physicalist antireductionism seems to lead. At a minimum, the burden of proof rests on those who hold that physicalist antireductionism is a coherent position. It looks like a view that is going to need a lot of argumentation. And yet, despite the burdens of physicalist antireductionism, almost all philosophers of biology have wrapped themselves in its mantle, many nonmolecular biologists have embraced it with enthusiasm, and even some world-famous molecular biologists have defended it at length. They have done so from the best of motives. And because interested spectators share these motives, and take experts' words for it, they are inclined to share the apparent public consensus of informed scientists' and philosophers' rejection of reductionism.

It's not difficult to identify motives for antireductionism: wants, desires, wishes that would be more nearly attainable if reductionism were false. Some biologists will oppose the increasing incursion of molecular biology in their parts of the discipline, just because they fear being made obsolete by the advance of science.
In a world of scarce resources, the more importance is attached to one area of research, the less money there will be for another area. For example, students of systematics and taxonomy may conclude that a DNA-based phylogeny may have no further use for their particular skills and knowledge of subtle differences in the phenotypic traits of closely similar but distinct species. Thus, they will admit that information about distinctive DNA sequences may be helpful. But they also have an incentive to argue that distinguishing organisms and families of them by nucleotide sequence alone would produce the wrong taxonomy even if, as they would in any case deny, it were feasible to do so. The notion of DNA as a bar code (Hebert et al. 2003) that would enable biologists to "automate" taxonomy would be anathema to them.

Ecologists and biologists concerned with environmental preservation may fear that funds directed away from their part of the discipline and toward ever-faster gene-sequencing machines will have a baleful effect on the livability of this planet for creatures like us. Spending money to elucidate the molecular details of photosynthesis instead of testing the hypothesis of global warming may
be a case of fiddling while Rome, and everywhere else, burns. Understanding the full biosynthetic pathway from the introduction of a pollutant into a stream to the mutation that renders a species extinct is less important than determining whether the extinction will lead to the collapse of the stream's entire ecology—perhaps a hundred different niches.

Then there are those whose religious convictions lead them to reject reductionism as the latest and most threatening version of the theory of natural selection's banishment of purpose, intelligence, and meaning from the universe and its history. Theism could remain unruffled while Newtonian mechanism deprived the physical realm of the ends and purposes which Aristotle's physics had accorded it. For it was confident that, as Kant put it, "there could never be a Newton for the blade of grass." That is, no one could deprive flora and fauna of their "designed-ness," and show the hypothesis of God's purposes to be scientifically gratuitous. Darwin, of course, should have put an end to these dogmatic slumbers. But creationism, and its latter-day variant, "intelligent design" theory, remain vivid testimony to the triumph of providential hope over ever-mounting evidence. A reduction of the adaptations of organisms, organs, tissues, and cells to features of macromolecules might be too much even for providential hopes to hold out against. (Even where theology has made room for Darwinism [in, for example, the address of Pope John Paul II to the Pontifical Academy of Sciences on October 22, 1996], cognition and consciousness are exempt from creation or operation in accordance with a nonpurposive mechanism.)

Theologians and others, including, of course, public intellectuals on the political Right, will find a complete physico-chemical understanding of humanity threatening to human dignity, individual responsibility, and divine agency.
They will oppose the claims of macromolecular reduction because it might provide such an understanding. (See, for example, Fukuyama 2002.) As La Rochefoucauld said, to explain is to excuse, and a complete explanation of human action in terms of the interaction of macromolecules excuses us from any credit for our accomplishments and blame for our misdeeds. For surely we cannot control all or even most of the macromolecular processes critical to the neural causes of our behavior. Thus, the reduction of apparently free choice to fixed and determined behavior will deprive people of the political Right of the premises needed for their argument that inequalities in wealth or income between individuals are morally permissible. For determinism about all human conduct (and its causes) makes it impossible to claim that some outcomes of individual choice are earned or deserved, because there is no such thing as really free choice.

Similarly, macromolecular determinism undercuts the classical liberal (contemporary conservative) claim that equality of opportunity is the most society is obliged to provide, that equality of outcome—egalitarianism and redistribution to attain it—immorally deprives those who have earned more than others through their own free choice. If our temperament, habits, abilities, and tastes are all to be fully explained by the macromolecular neurobiology of our brains and the rest of our bodies, and we have no control over any of them, if indeed they can all be adjusted by macromolecular intervention after our conception and/or birth, depending on how much money our parents have and what tastes and preferences they have been determined to bear, then surely the differences of outcome to which talents, tastes, abilities, and capacities lead simply reflect differences in opportunity which were never equalized.

Ironically, those, typically on the Right, who attach greatest weight to human agency, free choice, and individual responsibility make common cause with public intellectuals on the Left in opposing reductionism in biology. These philosophers, scientists, and others on the Left oppose a stronger doctrine distinct from mere physical determinism. They reject the view known as genetic determinism, according to which human traits, such as a disposition toward violence, or the division of gender roles which characterize most societies; or xenophobia, racism, alcoholism, and mental illness; or intelligence and industriousness, are fixed by genetic inheritance and impervious to environmental changes, that is, to social intervention, learning, reform, treatment. (Classic statements of the view are to be found in Gould 1981; Lewontin and Levins 1985, for example.) The doctrine of genetic determinism is morally nefarious in their view especially because it encourages complacency about inequalities, both social and natural, by attributing the former to the latter, and suggesting that natural, that is, genetic, inequalities are ineradicable.

Genetic determinism not only encourages complacency about the status quo, it discourages attempts at reform or revolution by its suggestion that the status quo reflects strategies and institutions adapted by eons of evolution through natural selection. Opponents of genetic determinism view the success of genomics and molecular biology generally as providing a halo around these morally repugnant claims by relentlessly uncovering the gene for this and the gene for that, or at least alleging to do so. Opponents of genetic determinism do not wish merely to show that the scientific evidence is against it; they would like to pull the conceptual rug out from under it altogether by showing that genetic determinism is incoherent, conceptually confused, and resting on a logical as well as a moral mistake. And they believe that the falsity of reductionism in biology would provide that demonstration. If the genes and the macromolecules cannot by themselves account for the biological traits of organisms in general, they can hardly be expected to do so for human behavior, human affairs, and human institutions. It is this opposition to genetic determinism which most powerfully fuels the motivation to reject reductionism about biology.
Genetic determinism not only encourages complacency about the status quo, it discourages attempts at reform or revolution by its suggestion that the status quo reflects strategies and institutions adapted by eons of evolution through natural selection. Opponents of genetic determinism view the success of genomics and molecular biology generally as providing a halo around these morally repugnant claims by relentlessly uncovering the gene for this and the gene for that, or at least alleging to do so. Opponents of genetic determinism do not wish merely to show that the scientific evidence is against it; they would like to pull the conceptual rug out from under it altogether by showing that genetic determinism is incoherent, conceptually confused, and resting on a logical as well as a moral mistake. And they believe that the falsity of reductionism in biology would provide that demonstration. If the genes and the macromolecules cannot by themselves account for the biological traits of organisms in general, they can hardly be expected to do so for human behavior, human affairs, and human institutions. It is this opposition to genetic determinism which most powerfully fuels the motivation to reject reductionism about biology.
Biology’s Untenable Dualism
Now, of course, each of these arguments about the consequences of reductionism, for funding elsewhere in science, for our understanding of human traits, behaviors, and institutions, and perhaps even for atheism, has weighty counterarguments; and for all we know, reductionism has no such implications anyway. But whether there are valid arguments about the implications of reduction or not is not so important to many of the opponents of reduction. What is important is the prospect that educated and influential people will find them plausible, embrace their conclusions, and be forced to choose between endorsing what they suppose is scientific advance and accepting the received wisdom, the values of our civilization. History has all too often shown that when people are forced so to choose, they adopt the scientific worldview. It is to prevent people from having to choose between humanistic commitments and the scientific worldview that opponents of reductionism are motivated to deny it scientific support. Identifying the motives people have for the views they hold is almost always the prelude to a scientific and philosophical mistake. Just because we can show that the motives for someone’s advancing a view are suspect, we have done nothing to show the view is false. And, of course, if a view is defended out of the most honorable of motives, it could still be quite false. Ironically, in logic, the mistake of inferring the falsity (or the truth) of a conclusion from the history of the motives or interests of those who advance it is called “the genetic fallacy.” (Genetic here derives from genesis—the origin of some belief does not reflect its soundness or truth.) An assessment of reductionism must focus on its strengths and defects, not on the motives of its supporters or detractors.
what’s wrong with reductionism?

What exactly is reductionism, and what’s so wrong, dangerous, and dogmatically narrow-minded about it? Most of the time, reductionism is a term of abuse. It is employed indifferently to identify two different explanatory strategies in science. One of them is clearly a mistaken strategy, and rightly criticized. This is the temptation to simply ignore causal variables in explaining some outcome. For example, Marxian historians used to be criticized for wrongly reducing all the causes in history to economic ones. It would be equally wrong and just as reductionistic to “reduce” the causes of some historical event such as the outbreak of World War I simply to the assassination of the Austrian archduke in Sarajevo. There were other factors: the economic competition and the naval armaments race between Germany and Britain, the kaiser’s fear of encirclement by the Entente, the willingness of the Russian government to protect the Serbs, its surprising speed in mobilizing, the British guarantee to Belgium, and so on.
Closer to the present dispute, it would be wrongly reductionistic to trace the causes of a chocolate Labrador’s coat color solely to its genes. Not even all of its genes alone will produce pigment or pattern by themselves. They may be necessary, but they are not sufficient for coat color. They require a lot of additional cellular machinery, substrate amino acid molecules, and thermodynamic noise to produce coat color. Similarly, an attack of sickle-cell anemia is not just a matter of there being a valine residue in the wrong place on the outside of each one of the sufferer’s hemoglobin molecules, which causes them to clump together and block the arteries. To begin with, there need to be, well, arteries, and then there is the question of where the arterial blockage takes place and when. The sort of reductionism that simply neglects causally necessary factors is one that doesn’t require too much refutation. But the sort of reductionism found to be threatening in biology is quite a different doctrine. The reductionism that opponents of the hegemony of molecular biology need to refute is the claim that in the case of the sickle-cell anemia episode or in the case of the Labrador’s coat color, the complete or whole causal story is given at the level of macromolecules. This sort of reductionism holds that there is a full and complete explanation of every biological fact, state, event, process, trend, or generalization, and that this explanation will cite only the interaction of macromolecules. This is the reductionism that a variety of biologists and their sympathizers who identify themselves as antireductionists need to refute. But then, why should we suppose that this sort of reductionism is correct?
On whom does the burden of proof lie: the reductionists, to show that there is no biological process beyond the scope of macromolecular explanation; or the antireductionists, to show that some biological processes are beyond the reach of macromolecular explanations? At least some antireductionists do accept the burden of proof, and adduce considerations in favor of their view. Perhaps the most widely known popular “argument” against reductionism is the claim that it is false since in biology, and elsewhere, “the whole is more than the sum of its parts.” The trouble with this claim is that with almost any intelligible interpretation of this slogan, reductionists can simply concur, and deny that it undermines their views. Of course, the whole is more than any individual part, and the whole has properties which no individual part has: the property of wetness that water has is not a property that any H2O molecule has. But this is no reason to deny the reducibility of wetness to properties of H2O molecules. Wetness is identical to the weak adhesion of these molecules to one another, owing to the polar bonds between the molecules; and these bonds are the result of the distribution of electrons in the outer shells of the constituent atoms of the H2O molecules. Sure, wetness only
emerges when lots of H2O molecules aggregate, but that’s no reason to deny its reduction to these relations—the well-understood spatial relations and polar bonds among them. But, the antireductionist may continue, wetness is not just the spatial distribution and bonding of the H2O molecules, it’s also or instead the feeling we get when we touch water, and this property of water can’t be reduced to relationships among the molecules. Well, that’s right. But the feeling of wetness is a complex relation between the molecules and our neurological system—more molecules, of course. It’s not an isolated property of the water. Reducing the feeling of wetness of water is a matter altogether different from reducing its wetness. Reducing feelings and sensations is a task that science has yet to accomplish, owing to the incompleteness of our understanding of neurology. But it would be blatantly question-begging to assert that no macromolecular—that is, reductionistic—explanation of our sensations can ever be provided. Making such an unargued assumption is very far from taking on the burden of proof. It is, in fact, a matter of shifting the burden of proof onto the reductionist, and demanding an impossibly high standard of proof: that science should complete the reduction of human neurology in order to show that the wetness of water is identical to the relations among its molecules. Although reductionists cannot accept so high a standard, they can and do argue that the whole history of science since the seventeenth century has been a continuing empirical vindication of reductionism. This, of course, is a matter on which reductionists and antireductionists can have a meaningful dispute. Does the successful reductionist trend in physical science extend to biological, social, and behavioral science? Ever since Immanuel Kant, antireductionists have insisted that it does not. It was Kant who, we noted above, famously held that the blade of grass will never have its Newton.
Properly understood, this is a far less question-begging slogan for antireductionism than “the whole is more than the sum of its parts.” That there will never be a Newton for the blade of grass was meant by Kant to suggest that the kind of explanatory and predictive precision to which physical phenomena submit will never be attained for living things by human cognitive and computational powers. Suppose this claim were true. Would it be enough to reconcile physicalism and antireductionism? What everyone, reductionist or antireductionist, will grant is that there are many explanations now provided for biological processes which are accepted as appropriate to the contexts in which they are given. That is, there are explanations at various levels of detail for, to take a well-known example, sexual recombination. Some are appropriate to the secondary-school student’s context of biological inquiry; some to the university student’s background knowledge and ability to understand; and there is a context of inquiry characteristic of
the biological specialist, to which a more complicated and more detailed explanation of recombination in terms of the stages and substages of meiosis is appropriate. Now, it is claimed by the antireductionist that in addition to being appropriate to the background knowledge and interests of the trained biological specialist, this explanation is adequate, complete, and correct. In particular, it will not be replaced in the future by an explanation of recombination in terms of macromolecular phenomena, nor will it be corrected or completed by macromolecular discoveries, nor will its explanatory principles themselves be explained by more fundamental principles adverting to the behavior of macromolecules either largely or exclusively. Replacement of the meiosis explanation of recombination is, of course, not what the reductionist envisions, nor is the reductionist committed to a macromolecular correction or completion of the meiosis explanation of recombination (though reductionism doesn’t exclude these possibilities). It’s the last of the three claims, that the meiosis explanation of recombination will not itself ever be explained by macromolecular principles, that is in dispute. The antireductionist has to hold that the meiosis explanation of recombination is “rock-bottom” biology. In Kant’s terms, no “Newton for the blade of grass”: no explanation of meiosis in terms of purely physical science. Of course, the antireductionist accepts that there will be major discoveries at the macromolecular frontier of biology. It’s just that no matter how impressive, they will neither replace (eliminate) nor explain (reduce) meiosis. Why not?
Here is one reason that might be offered, which we should reject, and which indeed antireductionists have explicitly rejected: the cognitive and computational capacities of Homo sapiens (or, for that matter, the maximally cognitively powerful creatures biology could allow for anywhere in the universe) are too puny to either discover or hold together in memory all the molecular details of every way in which meiosis is actually accomplished, and too puny to deploy this information in explanations of meiosis that we (or more powerful agents) could recognize to be explanatory for the process of meiosis as it is variously realized on this planet. Call this the epistemic argument, since it makes reduction an epistemic impossibility. The epistemic argument’s conclusion may, for all we know, be true. On the other hand, it may be false. For all we know, there are limits to the complexity and diversity of the natural realm, and what is more important, technological advance in information storage and processing may substantially enhance our capacity to understand macromolecular processes and their combinations. Consider how much of an advance bioinformatics has made in the time since the early 1980s, when sequencing ten base pairs a week was an accomplishment. By the early years of the twenty-first century, computational biology was able, by computational algorithm, to identify all the genes on a chromosome from the brute nucleotide-sequence data. It would be a mistake to underestimate the power of the human mind and its prostheses. Even if there are limits to our cognitive and computational powers that make reductive explanations of some biological processes impossible for us, the epistemic argument’s conclusion isn’t a strong enough foundation to underwrite antireductionism, or at least an antireductionism that the “philosophical” antireductionist “needs.” For a merely contingent biological fact about us, together with a contingent fact about some other biological process, facts that jointly put that process beyond macromolecular explanation by us, would not be a strong enough basis to secure the autonomy of all parts of biology from physical reduction. And the fact that those parts are not reducible in practice, owing in part to contingent facts about us, would be a positive vindication for reductionism both in theory and in much biological practice. This would be especially so if a reductive explanation for this practical recalcitrance were available. Compare a similar situation in physics. The motion of bodies is reducible to Newton’s three laws plus the inverse square law of gravitation. And yet, once more than two bodies are involved, the calculation of the forces on and the path of any one of them cannot in general be carried out exactly. The physicist has to resort to methods of approximation. But this is no reason to deny that the motion of bodies is fully reducible to Newton’s laws. Indeed, Newton’s laws help us explain why minds capable only of a finite number of calculations must resort to approximation in these cases.
The antireductionist needs there to be something in the nature of all truly biological phenomena that acts as a barrier to reduction, not a more or less leaky sieve that allows for some reductions of the biological to the physical and blocks others. Anything less gives hostages to the fortunes of scientific advance.
dobzhansky’s dictum

There is a better, stronger argument for antireductionism than the merely epistemic one. Unlike the epistemic argument, it does not rely on human cognitive and computational limitations. It relies instead on the most important single fact shared by all biological systems: they are all members of lineages of descent which are subject to evolution by natural selection. The world-famous geneticist Theodosius Dobzhansky once titled a brief, semipopular article, “Nothing in Biology Makes Sense except in the Light of Evolution.” Of course, Dobzhansky meant “evolution by Darwinian natural selection.” Whether he meant this slogan as pardonable hyperbole or subdisciplinary hubris, the fact is, this claim
is literally true, and most of the advance in the philosophy of biology over the last half century has vindicated its literal truth. Nothing I will say in this work should be taken to challenge its literal truth. Indeed, most of the arguments for reductionism to follow hinge on the literal truth of Dobzhansky’s dictum, as I shall call it. This will be surprising to many in light of the role it is usually accorded in arguments against reduction. Given Dobzhansky’s dictum, an argument for antireductionism that turned on the irreducibility of the theory of natural selection would close the case against reductionism. For if everything biological reflects the operation of natural selection, and natural selection is not a process reducible to physical science, nothing biological is completely reducible, without at least some irreducible remainder, to the physical. And for all we know, there may be a great deal of the biological that is hardly reducible at all, or in fact completely irreducible, to physical processes. Notice that the dialectical situation is asymmetric. If this argument is sound, the irreducibility of natural selection will be sufficient for the irreducibility of biology altogether. On the other hand, even if the reductionist can provide a reduction for the theory of natural selection, this would not be sufficient to vindicate reductionism beyond question. But it should shift the burden of proof to the antireductionist and make the physicalist antireductionist intellectually uncomfortable. The argument against reduction from Dobzhansky’s dictum turns on another distinction, drawn most persistently and perhaps also originally by another immortal of twentieth-century biology: Ernst Mayr, who employed the distinction in the very way now to be illustrated to argue for the characteristic difference between biology and physical science. This is the distinction between proximate and ultimate explanations (Mayr 1982).
It is a distinction easy to grasp in an illustration. Consider the interrogatory sentence “Why does the buckeye butterfly have spots that resemble owl eyes on its wings?” This query expresses at least two different questions. You can see that it does so by considering the different answers that might be given in response to it. One answer traces out the sequence in which certain nucleotide sequences—the regulatory genes—in certain cells of the developing butterfly’s wing express certain proteins—promoters and repressors—and how these proteins switch on and off other nucleotide sequences—the structural genes—in such a way that their joint products build up new cells and color the already existing ones in the pattern of pigments that constitute the eyespot. This answer traces out the immediately prior events that cause the appearance of the eyespot in (almost all) buckeye butterflies. Since these events constitute a spatiotemporally linked chain of causes and effects, or better, a spatiotemporally linked network of feed-forward
and feed-back events that result in the anatomical structure of the eyespot, they are aptly called its “proximate” cause. Proximate causation is not limited only to development. Almost all biologically significant events, states, processes, and conditions have such proximate causes. (The qualification “almost” reflects the extremely infrequent but occasionally significant quantum events such as mutations.) We can identify the proximate causes of particular (token) biological events, states, and conditions—for instance the proximate cause of the eyespot in the only buckeye butterfly flying around inside this room. Or we can identify the type or types of proximate causes for a type of biological event, state, or process—for instance the characteristic eyespot found on almost all buckeye butterflies. For reasons that will become clear below, biology differs from physical science in that when we seek a proximate explanation for the instances of a biological kind, almost always we will have to provide more than one distinct network of feed-forward/feed-back elements to do anything like the complete job. By contrast, explaining why all samples of copper are conductors involves identifying just one proximate mechanism. By now it should be obvious that there is an altogether different answer to a different question expressed by the same form of words, “Why does the buckeye butterfly have spots that resemble owl eyes on its wings?” The other question is appropriately answered by noting that the eyespot is an adaptation, as it camouflages the buckeye butterfly from birds, which are its natural predators and the natural prey of owls.
To call the eyespot an adaptation is, of course, to confer upon it an evolutionary etiology—a history of successive random variations and natural selection by environments through which the lineage of contemporary buckeye butterflies and their parent species lived over geological timescales, environments filled with predators and predators of predators, which filtered out alternative, less effective means of survival and favored the spread of this particular strategy of camouflage among the buckeye butterflies. Of course, in the hands of sophisticated evolutionary biologists such explanations—especially when explaining intraspecific evolution of populations—will take on a fairly precise statistical character. But the explanatory strategy is very much the same: it is one that adverts to a long-term process, many of whose details are unknown and need not be known, so confident are evolutionary biologists of the general process. Despite our inevitable ignorance of many of the links in the causal chain, we know that ultimately there must be such a story of variation and selection that brought about the eyespot in the buckeye butterfly. Whence the ultimate explanation, which is quite different from the proximate one. Unlike physical science, biology is the discipline that seeks ultimate evolutionary explanations along with proximate explanations. In fact, properly understood, even biology’s proximate explanations presuppose further ultimate
explanations. Recall Dobzhansky’s dictum: nothing in biology makes sense except against the background of evolution. It will now become evident why this claim is literally true. Biological explanations, even proximate ones, begin with a description of what is to be explained, the so-called explanandum. In this case the explanandum is the fact that buckeye butterfly wings evince eyespots. Now, what exactly is a butterfly wing, where does the wing start on a butterfly, why treat the wing of a butterfly as the same anatomical part as the wing of a moth, the wing of a dragonfly, or, for that matter, the wing of a bird? Well, obviously, the butterfly’s wing is distinct from the thorax and similar to the wing of the moth, the dragonfly, and the bird in virtue of its function. To call something a wing is not to describe it in terms of its composition or structure, but in terms of the effects of having a wing. But which effect? Obviously, the effect of flight! Among all the many effects of having a wing, the one which confers its function, flight, is the one selected for because it and/or its precursors were an evolutionary adaptation.1
1. This and the very few subsequent footnotes in this book record largely philosophical digressions from the argument. In this case, the subject is a competing analysis of functional language and functional explanation, originally advanced by R. Cummins (1975) in connection with functional attribution in psychology, as an alternative to the so-called etiological or selected-effects analysis, due to Larry Wright, which I have in effect taken over above. According to this alternative “causal-role” analysis of functional description and explanation, terms like gene, for example, have no implicit or explicit teleological content. Rather, they advert to “nested capacities,” that is, to components of larger systems to whose behavior (whether goal-directed or not) they make a causal contribution. Cummins’s analysis makes the attribution of a function f to x relative to an “analytical account” of how x’s f-ing contributes to the “programmed manifestation” of some more complex capacity by a system that contains x. Thus, for example, some quantity of nucleic acid is a gene relative to an analytical account of how the sequence’s capacity to record and transcribe the primary sequence of a protein contributes to the development and hereditary contributions of the organism that contains it. The reason such an analysis attributes no teleological content to a functional attribution is reflected in the fact that it can be realized by any number of completely nonbiological systems in which contained capacities contribute to the manifestation of containing capacities. Thus, for example, there is an analytical account of how the position and composition of a boulder in a stream contributes to the capacities of the stream’s rapids to capsize canoes or to power turbines or make it difficult for salmon to swim upstream, even though no one would suppose that it is the function of the boulder to do so.
Defenders of the “causal-role” account argue that far from being an objection, this fact simply shows that there is a continuum from less interesting to more interesting functional attributions which largely reflects the complexity of contained and containing capacities; moreover, they argue, biology requires such a teleology-free analysis of functional description. According to Amundsen and Lauder (1998), there are compartments of the discipline, such as functional anatomy, in which items with nested capacities are accordingly given functions without any commitment to their “selected effect” etiologies, if any. Amundsen and Lauder advance a potentially more powerful argument in favor of the place of “causal-role” functions in biology: the distinction between homology and homoplasy. The wing, for example, has evolved 40 or more separate times. Accordingly, many wings are not homologies but homoplasies. Not only are the etiologies that gave rise to wings independent, but the particular “design problems” successively solved by wings as diverse as the flying fish’s, the dragonfly’s, the sparrow’s, and the bat’s are diverse as well. Biologists need to consider the alternative hypotheses that the wings of two creatures constitute a homology—type identity owing to descent—as opposed to a homoplasy—type identity owing to convergence on the same solution to a “design problem.” The only way this is possible is, presumably, by making a prior independent identification of the parts, components, or traits, which may be similar owing to either descent or convergence. Such independence means independent of etiology—that is, of homologous descent. The “causal-role” analysis and defenses of it, such as those of Amundsen and Lauder, are not difficult to reconcile with the claim that biology’s taxonomy reflects the selected-effects etiology advanced originally by Wright. Consider, for example, the claim that the homology/homoplasy distinction requires neutrality on whether etiology individuates a kind or not. Of course, the selected-effects analysis doesn’t commit its exponents to any particular etiology, only to the generic claim that each item in biological taxonomy has some etiology or other.
The analysis is not even committed to the claim that all items characterized as wings have the same etiology. Moreover, there is some reason to suspect that, at least as Cummins originally stated his analysis, it may presuppose a potentially question-begging teleological component. What Cummins calls the “analytical account” of the contribution of x’s f-ing to system s’s G-doing is required to be a “programmed manifestation” of G by s. Now, unless Cummins can show that the notion of “programming” is itself free of teleological content, the “causal-role” analysis will still have an undischarged commitment to teleology that can be cashed in by application to it of the selected-effects analysis I advocate here. Even if programmed manifestations need not always be teleological, we may reconcile the Cummins analysis with the selected-effects view by noticing that almost all kinds in biology are functional in Cummins’s sense, and that for such biological kinds f, the agency responsible for the existence of the programmed manifestation of G by s to which x’s f-ing contributes is a past process of natural selection. This will not only vindicate Dobzhansky’s dictum but, as we shall see in the next chapter, also explain why, even in the psychological processes to which Cummins originally intended to apply his analysis, multiple realizability is the rule. And, as will become apparent in chapter 1, it is on the multiple realizability of biological kinds (including psychological kinds) that the arguments about reduction turn. Even in those few biological cases where Cummins’s account might be preferred, as for example Amundsen and Lauder claim for functional anatomy, the real issue is multiple realizability and its sources.

Biology “taxonomizes” the phenomena in which it interests itself functionally, not structurally. Physics and chemistry taxonomize their phenomena in terms of physical composition and spatial relations—that is, structurally. Biology taxonomizes all the way down to the level of molecular biology. In fact, what makes something a matter of molecular biology instead of a matter of organic chemistry is that the objects of the former subdiscipline, unlike those of the latter, are individuated functionally. Molecular biology is interested in enzymes; organic chemistry, in amino acid catalysts. What is the difference? An enzyme is an amino acid catalyst with an adaptation role, a catalytic effect that, among its many actual and potential catalytic effects, has been selected for in the evolution of biological systems. The very name biology confers on an enzyme reflects its selected effect, the one among its many effects by virtue of which it has persisted through geological time. Now the argument against reductionism anywhere in biology can be framed: since Dobzhansky’s dictum is literally true, every proximate explanation in biology is implicitly ultimate; every such explanation includes an implicit commitment to the theory of natural selection. The successful reduction of biology and its explanations to physical science therefore requires the successful reduction of the theory of natural selection to physical science. But this is what no reductionist can do. Indeed, no reductionist has ever tried to do it. And with good reason, the antireductionist insists: it cannot be done, and it is obvious to reductionists that it cannot be done. Reductions cannot be completed. Reductionism is false, QED. Case closed.
There are just two minor consequences of this conclusion which should trouble the antireductionist. One of these should be disturbing to those who seek from science the assurance of the continued amelioration of the human condition. The other worry is a philosophical one. Since technological improvements of the sort required for amelioration in medicine, agriculture, and elsewhere require the sorts of increases in precision and control that only reduction provides, antireductionists will have to resign themselves to limits, and rather immediately evident limits, to medical and agricultural benefits from biology. And then there is the problem we faced at the beginning of this debate: when it comes to reconciling physicalism and antireductionism, it looks like by winning their argument, antireductionists have painted themselves into a logical corner. For their only convincing argument for in-principle antireductionism requires there to be at least one set of biological facts—the general processes of natural selection—which are not fixed by the physical facts. Unless the antireductionist can draw the force of this admission, the unstable equilibrium of physicalism and antireductionism must shift catastrophically (in the mathematical sense at least) to vitalism—the existence of non-physical facts (about life) that explain biological processes—or eliminativism—the denial that biological descriptions pick out real properties of biological systems. Neither option is in the slightest bit tenable.
darwinian reductionism

The combination of physicalism and antireductionism is, as I said, an untenable dualism. In this book, I argue that the physicalist must be a reductionist. But the physicalist must be a reductionist of a sort not hitherto seen among biologists and philosophers of biology. The reductionism defended here is not tempered or qualified, partial or tentative. But it is "nuanced." It has some twists; it has learned some important lessons from antireductionism and from those biologists and philosophers who have defended it. Most important, it accepts Dobzhansky's dictum as literally true, and faces squarely the crucial problem of how to provide a reduction of the biologist's theory of natural selection to the scientific foundations recognized by physics and chemistry. The reductionists' solution to this problem then becomes a crucial tool in their defense of the research program of the molecular biologist, a defense that should convince even Saint Peter that the molecular biologist belongs in heaven. But, of course, the payoff may be more terrestrial than providential. For if the argument for reductionism that I attributed to the molecular biologist above is even roughly correct, reductionism holds out the best hope for the sort of technological improvements in applied biology we have so far seen only in applied physics and chemical engineering. And if reductionism is impossible as a scientific research strategy, we must resign ourselves to an unpredictable and therefore uncontrolled future along with the rest of the Earth's flora and fauna. If the present work can reduce antireductionist-inspired pessimism about the prospects for biological prediction and its technological application, it will have been more than a merely philosophical exercise.
In the next chapter, I try to show how the philosophical problem of reductionism has been modified over the course of the last half century by discoveries in biology and advances in the philosopher’s understanding of biology. The chapter’s title, “What Was Reductionism?” is meant to emphasize that the terms in which the dispute was traditionally expounded by the logical empiricists and their postwar successors rest on presuppositions about biology that have been superseded, and biological findings that have been overtaken by events. Philosophers will recognize the obstacle to reductionism that biology erects under
the labels of "supervenience" and "multiple realizability." In chapter 1 I explain how and why the Darwinian character of biological phenomena precludes any straightforward reduction of the sort we are familiar with from physical science. The key is to recognize that once nature begins to select among traits of organisms, it does so by selecting them for their effects on survival and reproduction. Natural selection cannot discriminate between structurally different traits with the same effects, especially when the structural differences are slight. Since many different structures will have the same or similar effects on the survival and reproduction of the organisms that bear them, structural heterogeneity among equally well adapted organisms will be commonplace. And the same will be true of the components of organisms. In light of natural selection's blindness to structural differences, the redundancy of the genetic code, for example, would be no surprise. As we shall see in the next chapter, the sort of reduction physical science leads us to expect cannot deal with the prospect that processes to be explained by reduction to their components are the products of a diverse and unmanageably large number of different, more basic processes, that is, are "multiply realized." Biologists need not grapple with the details of the varying "supervenience" theses that philosophers have employed formally to reconcile physicalism with the obstacle to reduction that "multiple realizability" raises. But they do need to see what problems multiple realizability makes for any smooth reduction of biology to physical science. As I shall argue, the impossibility of postpositivist reduction reveals the irrelevance of that account of reduction to contemporary biology, not the impossibility of biology's reduction to physical science. Thus, the debate between reductionists and antireductionists must be completely reconfigured.
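The point about the redundancy of the genetic code can be made concrete with a toy computation, a minimal sketch rather than anything in the text itself: several structurally distinct DNA sequences are translated into one and the same peptide, so the functional kind (the peptide) is multiply realized by the structural kinds (the sequences). The codon table below is deliberately only a fragment of the standard one, covering just the codons used here.

```python
# Toy illustration of multiple realizability via codon redundancy.
# Only a small fragment of the standard genetic code is included.
CODON_TABLE = {
    "ATG": "M",  # methionine (start)
    # six synonymous codons for serine:
    "TCT": "S", "TCC": "S", "TCA": "S", "TCG": "S", "AGT": "S", "AGC": "S",
    # six synonymous codons for leucine:
    "CTT": "L", "CTC": "L", "CTA": "L", "CTG": "L", "TTA": "L", "TTG": "L",
}

def translate(dna: str) -> str:
    """Translate a DNA coding sequence into a peptide, codon by codon."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

# Structurally distinct sequences, functionally one and the same peptide:
variants = ["ATGTCTCTT", "ATGAGCTTA", "ATGTCACTG"]
peptides = {translate(v) for v in variants}
print(peptides)  # every variant realizes the single peptide "MSL"
```

Selection "sees" only the peptide and its effects, so it cannot discriminate among the sequence-level variants, which is exactly the blindness to structure described above.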
Once the reductionist’s and the antireductionist’s theses have been restated to make them relevant to the contemporary understanding of biology, instead of the twentieth century’s understanding of physical science, we may get on to adjudication of the issue between them. Chapter 2 offers biology’s strongest vindication of reductionism: the power of the nucleic acids, taking amino acids and polypeptides as inputs, literally to program the embryo. I provide a fairly detailed and intentionally tendentious account of the genetic program in the development of the Drosophila embryo, and then go on in that chapter and chapter 3 to consider several widespread and influential objections to the idea that the genes orchestrating the macromolecules are pretty much the whole story of organismal growth. Among these objections, perhaps the most important are those that deny that the genes play any special informational role in development, and those that even more radically deny that there really is any such thing as the gene—above and beyond the heuristic notion that recent history shows to be an increasingly inconvenient fiction. These latter two claims are the focus of chapter 3.
Biology’s Untenable Dualism
Having vindicated the gene and clarified its special role in development, in chapters 4 through 6 I wrestle with problems for the version of reductionism defended here that are alleged to arise from the nature of the theory of natural selection. As we have seen, Darwin’s theory is literally the keystone of every structure biology studies, from the macromolecule to the entire species and beyond. Unless reductionism as a research program can be shown to be consistent with the theory of natural selection, it is a program biologists will have to forego. Physicalists should find intolerable the idea that somehow the biological process of natural selection floats free, untethered in physical processes. So, in showing how the theory and the process can be understood and explained by appeal to wholly physical processes, reductionists accomplish three things: they draw the force of the strongest argument against reductionism—the argument from ultimate explanations; they reconcile Darwinism with physicalism, something even latter-day antireductionists need to do; and most important, they help us understand how the process of natural selection really operates at the many different levels, and indeed in different directions at these levels, that the biologist explores. In the last two chapters I address first indirectly, and then directly, the anxiety that reductionism raises about genetic determinism. Chapter 7 offers a glimpse into what taking gene-sequence data seriously can show us about the nature and history (as opposed to the natural history) of one particular terrestrial lineage, Homo sapiens. I report on the light that gene-sequence data can shed on cultural evolution, a process which no responsible reductionist supposes to be a matter of genetically encoded inheritance. If it turns out that culture holds our genes on a leash (instead of the reverse), then at least some twitches on the leash will show up in gene-sequence changes over time. 
And this record of human affairs, written in the genes, is just what chapter 7 reports. Finally, in chapter 8 I turn directly to the subject of genetic determinism, on which earlier chapters of this book remain silent. Since eagerness to refute the thesis motivates so much of the resistance to reductionism about biology, it is truly ironic that the strongest argument against the thesis rests on evidence we would never have acquired but for pursuit of the reductionistic research program of molecular genetics. I conclude this book's articulation and defense of Darwinian reductionism by showing exactly how its research program makes genetic determinism untenable even for its paradigm cases, the so-called inborn errors of metabolism.

The aim of Darwinian reductionism is in large measure to make each of these research programs safe for one another, but also to show how one of them, Darwinism, in fact is the means to vindicate the other. But before going further, it's worth noting that the reductionism introduced here, and to be defined and
expounded more fully in the next chapter, is a claim about the role of all of molecular biology in deepening, improving, completing, and correcting the rest of biology. It is not just a claim about molecular genetics' powers to do so. Nucleic acid chauvinism is no part of the view to be defended. Reductionism accords an equal role to amino acids, polypeptides, proteins, organometallics, metal ions, lipids, and every other biologically active organic and inorganic molecule. Nevertheless, a great deal of the focus in the chapters to come will be on genes and nucleic acids. This is unavoidable given the state of play in contemporary molecular biology. Not only is it the nucleic acids in the germ-line genes that, as I shall argue, program development; the genes also regulate the processes occurring within and among the somatic cells. Moreover, most of what we know about proteins and enzymes nowadays—their primary structure, their concentrations in various cells, and much of their interactions—we know because we can read it off the structure and function of the nucleic acids from which these proteins are synthesized. Someday, proteomics and the rest of molecular biology will be less beholden to genomics than they are today. But, for the moment, the potential of the reductionistic program of molecular biology stands or falls with genetics' prospects of reducing development. For that reason, molecular genetics takes center stage in an argument about molecular biology.
1
•
What Was Reductionism?
Let us distinguish functional biology from molecular biology. Functional biology is the study of phenomena under their functional kind-descriptions—for example, organism, organ, tissue, cell, organelle, gene. Molecular biology is the study of certain classes of organic macromolecules. This distinction is not entirely satisfactory, for many of the kinds identified in molecular biology are also individuated functionally, as the example of DNA replication at the end of the introduction illustrated. What makes a kind functional is that its instances are the products of an evolutionary etiology—a history of random variation and natural selection. Since natural selection operates at the macromolecular level, some of its kinds will be functional too. But the functional/molecular distinction is a convenient one that reflects widespread beliefs about a real division in the life sciences. Let's employ it as a handy label for the two parts of biology whose relationship is disputed between reductionists and antireductionists. Employing it, let's briefly review the nature of the dispute about how functional and molecular biology are related.

Reductionism is a metaphysical thesis, a claim about explanations, and a research program. The metaphysical thesis that reductionists advance (and antireductionists accept) is physicalism, the thesis that all facts, including all functional biological facts, are fixed by the physical and chemical facts; there are no nonphysical events, states, or processes, and so biological events, states, and processes are "nothing but" physical ones. The reductionist argues that the metaphysical thesis has consequences for biological explanations: they need to be completed, corrected, made more precise, or otherwise deepened by more fundamental explanations in molecular biology. The research program that reductionists claim follows from the conclusion about explanations can be framed as the methodological moral that biologists should seek such macromolecular explanations. (Note that reductionism is not the evidently indefensible thesis that all biology is molecular biology, that molecular biology not only provides the explanans [what does the explaining], but also uncovers all the facts to be explained [the explanandum; plural, explananda]. This is not reductionism, for it affords no role to functional biology. It is some kind of eliminativism no reductionist has ever advocated.)

Antireductionism does not dispute reductionism's metaphysical claim, but denies that it has implications either for explanatory strategies or methodological morals. The antireductionist holds that explanations in functional biology need not be corrected, completed, or otherwise made more adequate by explanations in terms of molecular biology. The disagreement over the adequacy of explanations in functional biology drives a significant methodological disagreement with consequences for the research program of biology. The reason is simple: if the aim of science is explanation and at least some explanations in functional biology are adequate, complete, and correct, then the methodological prescription that we must search for molecular completions, corrections, or foundations of these functional explanations in molecular processes will be unwarranted. Consequently, molecular biology need not be the inevitable foundation for every compartment of functional biology.
If the aim of science is explanation, and functional explanations are either false or incomplete and molecular explanations are either (more) correct or (more) complete, and thus more adequate, then biology must act on the methodological prescription that we should seek macromolecular explanations. If at its explanatory base all biology is molecular biology, then all biologists, or at least all those who seek complete and correct explanations, will have eventually to be molecular biologists as well as functional biologists. This reductionist conclusion is one that most philosophers of biology, and those biologists sympathetic to them, believe to have been safely disposed of. Once we recognize the proximate/ultimate distinction in explanation and Dobzhansky’s dictum about biology, the demonstration that reductionism is false can be left as an elementary exercise in Introductory Philosophy of Science class. Reductionism requires laws to be reduced. But, as will soon be clear, there are no laws in biology to be reduced; or if there are any laws in biology, they govern evolution by natural selection and are not open to reduction to physical laws. No laws, no reductionism. QED. If this argument is too telegraphic, we may, trespassing on the impatience of
au courant biologists and philosophers, rehearse some of its details—suppressed premises, enthymematic inferences, conversational implicatures, and all. We may divide our down-and-dirty history of reduction into two parts: the vicissitudes of reduction in general, and its narrowly biological problems in particular.
the eclipse of postpositivist reduction

To begin with the general problems, we need to recall how the original exponents of reduction, certain logical empiricists and their successors, supposed reduction was to proceed. And we need to remember some of the qualifications added to the original model in order to bring it into contact with the history of science. Reduction is supposed to be a relation between theories. In the anglophone locus classicus, Ernest Nagel's Structure of Science (1961), reduction is characterized by the deductive derivation of the laws of the reduced theory from the laws of the reducing theory. The deductive derivation requires that the concepts, categories, and explanatory properties or natural kinds of the reduced theory be captured in the reducing theory. To do so, terms that figure in both theories must share common meanings. Though often stated explicitly, this second requirement is actually redundant, as valid deductive derivation presupposes univocality of the language in which the theories are expressed. However, as exponents of reduction noted, the most difficult and creative part of a reduction is establishing these connections of meaning, that is, formulating "bridge principles," "bilateral reduction sentences," "coordinating definitions" that link the concepts of the two theories. Thus, it was worth stating the second requirement explicitly. Indeed, early and vigorous opponents of reduction as the pattern of scientific change and theoretical progress argued that the key concepts of successive theories are in fact incommensurable in meaning, as we shall see immediately below. From the reception of Watson and Crick's discoveries, reductionists began to apply their analysis to the putative reduction of Mendelian or population genetics to molecular genetics. The difficulties they encountered in pressing Watson and Crick's discovery into the mold of theoretical reduction became a sort of poster child for antireductionists.
In an early and insightful contribution to the discussion of reduction in genetics, Kenneth Schaffner (1967) observed that reduced theories are usually less accurate and less complete in various ways than reducing theories, and therefore incompatible with them in predictions and explanations. Accordingly, following Schaffner, the requirement was explicitly added that the reduced theory needs to be "corrected" before its derivation from the reducing theory can be effected. This raised a problem that became nontrivial in the fallout from Thomas Kuhn's Structure of Scientific Revolutions (1962) and Paul Feyerabend's "Reduction, Empiricism and Laws" (1964). It became evident in these works that "correction" sometimes resulted in an entirely new theory, whose derivation from the reducing theory showed nothing about the relation between the original pair. Feyerabend's examples were Aristotelian mechanics, Newtonian mechanics, and relativistic mechanics, whose respective crucial terms, impetus and inertia, absolute mass and relativistic mass, could not be connected in the way reduction required. No one has ever succeeded in providing the distinction that reductionism required between "corrections" and "replacements." Thus, it was difficult to distinguish reduction from replacement in the crucial cases that really interested students of reduction. This was a matter of importance because of reductionism's implicit account of scientific change as increasing approximation to more fundamental truths.

It was also Schaffner who coined the term "layer-cake reduction" to reflect the notion that synchronically less fundamental theories are to be explained by reduction to more fundamental theories—at the basement level, some unification of quantum mechanics and the general theory of relativity; above these, physical and organic chemistry; then molecular biology and functional biology; at the higher levels, psychology, economics, and sociology. Synchronic reduction is supposed to be explanatory because on the account of explanation associated with reduction, the deductive-nomological (D-N) model, explanation was logical deduction, and the explanation of laws required the deduction of laws from other laws. Synchronic reduction is mereological explanation, in which the behavior of more composite items described in reduced theories is explained by derivation from the behavior of their components by the reducing theory. Thus, reduction is a form of explanation.
Diachronic reduction usually involves the succession of more general theories that reduce less general ones by showing them to be special cases which neglect some variables, fail to measure coefficients, or set parameters at restricted values. As the history of science proceeds from the less general theory to the more general, the mechanism of progress is the reduction of theories. But if there is no way to distinguish reduction from replacement, then the incommensurability of replacing theories makes both the progressive diachronic and synchronic accounts of intertheoretical relations impossible ideals. More fundamentally, reductionism as a thesis about formal logical relations among theories was undermined by the increasing dissatisfaction among philosophers of science with the powers of mathematical logic to illuminate interesting and important methodological matters such as explanation, or theory testing. Once philosophers of science began to doubt whether deduction from laws was always sufficient or necessary for explanation, the conclusion that intertheoretical explanation need take the form of reduction was weakened. Similarly, reductionism is closely tied to the axiomatic, or so-called syntactic, approach to theories, an approach that explicates logical relations among theories by treating them as axiomatic systems expressed in natural or artificial languages. Indeed, "closely tied" may be an understatement, since deduction is a syntactic affair, and is a necessary component of reduction. But, for a variety of reasons, the syntactic approach to theories has given way among many philosophers of biology to the so-called semantic approach to theories. The semantic approach treats theories not as axiomatic systems in artificial languages but as sets of closely related mathematical models. The attractions to philosophers of biology of the semantic approach must be manifest. For a science like biology, without laws of the sort we meet with in physical science, can hardly display axiomatized theories; and one in which mathematical models figure so centrally in explanations is immediately amenable to analysis from the semantic perspective. (For a useful introduction to this conception and its attractions for biology, see Thompson 1988; Lloyd 1993.) But, on the semantic approach, the very possibility of reduction by deductive derivation of the axioms of one theory as theorems of the other became moot. The semantic approach treats theories as families of models, and models as implicit definitions, about which the only empirical question is whether they are applicable to phenomena. For reduction to obtain among theories semantically characterized requires an entirely different conception of reduction. On the semantic view, the reduction of one theory to another is a matter of employing one (or more) model(s) among those that constitute the more fundamental theory to explain why each of the models in the less fundamental theory is a good approximation to some empirical process, showing where and why it fails to be a good approximation in other cases. The models of the more fundamental theory can do this to the degree that they are realized by processes underlying the phenomena realized by the models of the less fundamental or reduced theory.
There is little scope in this sort of reduction for satisfying the criteria for postpositivist reduction. (We will return to the role of mathematical models in the explanation of biological processes in chapter 4.) To the general philosophical difficulties that the postpositivist account of reduction faced, biology provided further distinct obstacles. To begin with, as Hull first noted (1974), it is difficult actually to define the term gene as it figures in functional biology by employing only concepts from molecular biology. In other words, the required “bridge principles” between the concept of the gene as it figures in population biology, evolutionary biology, and elsewhere in functional biology and as it figures in molecular biology could not be constructed. And all the ways philosophers contrived to preserve the truth of the claim that the gene is nothing but a (set of) string(s) of nucleic acid bases could not provide the systematic link between the functional “gene” and the macromolecular “genes” required by a reduction (see chapter 4 below for more details). There is, of course, no trouble identifying “tokens”—particular bits of matter we can
point to—of the population biologist's genes with "tokens" of the molecular biologist's genes. But token-identities won't suffice for reduction, even if they are enough for physicalism to be true.

The second problem facing reductionism in biology is the absence of laws, either at the level of the reducing theory or the reduced theory, or between them. If there aren't any laws in either theory, there is no scope for reduction at all. That there can be no laws in biology, with the one exception of the laws that govern evolution by natural selection and their consequences, is the major conclusion of the next chapter. For the moment, let's assume there are none; antireductionists will by and large grant the assumption, for it strengthens their case for autonomy and against reduction (see Kitcher 1984, for example). Of course, the first problem, that of "defining" functional genes in terms of macromolecules, is really not very different from the problem of identifying laws linking functional genes to macromolecules, since the "bridge principles" reduction requires will have to be laws of nature. Thus, the argument (to come in detail in chapter 4) that there are no biological laws makes it impossible to fulfill either reductionism's criterion of connection or its criterion of derivation by deduction. Whereas the antireductionists were at most able to show that the criterion of connectability with respect to the Mendelian and the molecular gene was not fulfilled as the two theories were in fact stated, we can go much further in vindicating their conclusion. We can demonstrate that the criterion required by "layer-cake" reductionism cannot be satisfied as a fundamental matter of biological process. As we saw, individuation of types in biology is almost always via function: to call something a wing, a fin, or a gene is to identify it in terms of its function. But biological functions are naturally selected effects.
And natural selection for adaptations—that is, environmentally appropriate effects—is blind to differences in physical structure that have the same or roughly similar effects. Natural selection "chooses" variants by some of their effects, those which fortuitously enhance survival and reproduction. When natural selection encourages variants to become packaged together into larger units, the adaptations become functions. Selection for adaptation and function kicks in at a relatively low level in the organization of matter. Accordingly, the structural diversity of the tokens of a given Mendelian or classical or population-biological or generally "functional" gene will mean that there is no single molecular structure or manageably finite number of sets of structures that can be identified with a single functional gene.1

Functional biology tells us that there is a hemoglobin gene, and yet there is no unique sequence of nucleic acids that is identical to this hemoglobin gene—nothing that could provide a macromolecular definition of the hemoglobin gene of functional biology. Of course, there is some ungainly disjunction of all the actual ways nucleic acid sequences nowadays do realize, or in the past have realized, the hemoglobin gene—that is, all the sequences that can be transcribed into RNA which, in a local ribosome, will be translated into one or another of the different types: fetal, adult, or the varying defective hemoglobin-protein sequences. But this ungainly disjunction, even if we knew it, and we don't, won't serve to define the functional hemoglobin gene. The reason is obvious to the molecular biologist. An even vaster disjunction of nucleic acid sequences than the actual one would work just as well, or indeed just as poorly, to constitute the functional hemoglobin gene (and probably will do so in the future, given environmental contingencies and mutational randomness). Just think of the alternative introns that could separate exon regions of the sequence (and may do so in the future, given mutation and variation). And then there are all the promoter and repressor genes and their alternative sequences, not to mention the genes for producing the relevant ribosomal protein-synthesizing organelles, all equally necessary for the production of the hemoglobin protein, and so claiming as much right to be parts of the functional hemoglobin gene as the primary sequence of the coding region of the structural gene itself. Just as the actual disjunction is too complex to state and yet not biologically exhaustive of the ways to code for a working hemoglobin protein, so also all these other contributory sequences don't exhaust the actual biological alternatives, and so make the macromolecular definition of the functional hemoglobin gene a will-o'-the-wisp.2

This structural diversity explains why no simple identification of molecular genes with the genes of population genetics of the sort postpositivist reduction requires is possible. More generally, the reason there are no laws in biology is thus the same reason there are no bridge principles of the sort postpositivist reduction requires. (This result will be even less surprising in light of the postpositivist realization that most bridge principles in science will be laws, not definitions.) The unavoidable conclusion is that as far as the postpositivist or "layer-cake" model of intertheoretical reduction is concerned, none of its characteristic preconditions are to be found in theories of functional biology, theories of molecular biology, or for that matter in any future correction of one or the other of these theories.

problems for postpositivist antireductionism

If antireductionism were merely the denial that postpositivist reduction obtains among theories in biology, it would be obviously true. But recall, antireductionism is not merely a negative claim. It is the thesis that (1) there are generalizations at the level of functional biology, (2) these generalizations are explanatory, and (3) there are no further generalizations outside functional biology which ex-

1. Philosophers will recognize the relationship between the functional gene and the DNA sequence as one of "multiple realization," common to the relation functionalism in the philosophy of psychology alleges to obtain between psychological states and neural ones. The blindness of selection for effects to differences in structure provides the explanation for why multiple realization obtains between genes and polynucleotide molecules. Indeed, almost every functional kind in biology will be multiply realized, owing to the fact that the kind has an evolutionary etiology. This claim is argued in greater detail and generality in chapter 4 below. Meanwhile, it is worth noting that, if correct, the evolutionary etiology of functional kinds will also explain the multiple realization of the kinds of all the "special sciences" insofar as they are sciences which treat a biological system, viz. Homo sapiens.

2. In other words, being a hemoglobin molecule "supervenes," in the philosopher's term, on being a particular sequence of amino acids, even though no complete specification of all the alternative particular sequences of amino acids that could constitute (i.e., realize) the function of the hemoglobin molecule in oxygen transport is possible or scientifically fruitful. Roughly, a biological property P supervenes on a (presumably complex) physical and/or chemical property Q if and only if, whenever anything has property P, it has some physical/chemical property Q, and anything else that has physical/chemical property Q must also have biological property P. (See Rosenberg 1978.) There is among philosophers a fairly sustained debate about the force of the "must" in this formulation. Does the supervenience of the biological on the physical/chemical have to obtain in virtue of natural laws, or even some stronger sort of metaphysical necessity? As I argue here, and many philosophers hold independently, biological properties are "local"—they make implicit but ineliminable reference to a particular place and time (the Earth since 3.5 billion years ago). Thus, it may be that biological properties are only locally supervenient, a much weaker thesis than one that makes it a matter of general law everywhere and always in the universe. (See "Concepts of Supervenience" in Kim 1993.) When a biological property is supervenient on more than one complex physical/chemical property, then it is also a multiply realized property. The supervenience of the biological on the physical is a way of expressing the thesis of physicalism. The blindness of natural selection to differences in structure is what turns the supervenience of the biological on the physical into the multiple realization of the biological by the physical. A philosophically adequate account of physicalism would require one to adjudicate among supervenience theses of varying strength and content. Because such a project is beyond the scope of this work, I try to avoid even using the term supervenience more than once or twice hereafter.
Nevertheless, there is a more extensive discussion of the thesis of physicalism in chapter 6, which seeks to ground the theory of natural selection in physical theory as reductionism ultimately requires.
What Was Reductionism?
plain the generalizations of functional biology, (4) there are no further generalizations outside functional biology which explain better, more completely, or more fully what the generalizations of functional biology explain. All four components of antireductionism are daunted by at least some of the same problems that vex reductionism: the lack of laws in functional biology and the problems facing an account of explanation in terms of derivation from laws. If there are no laws and/or explanation is not a matter of subsumption, then antireductionism is false too. But besides the false presuppositions antireductionism may share with reductionism, it has distinct problems of its own. Indeed, these problems stem from the very core of the antireductionist’s argument, the appeal to ultimate explanations underwritten by the theory of natural selection. To see the distinctive problems that an appeal to the proximate/ultimate distinction raises for biology’s autonomy, consider a paradigm of putative irreducible functional explanation advanced by antireductionists. Our example is due to one of the most influential of antireductionist physicalists, Phil Kitcher. It is one that has gone largely unchallenged in the almost two decades between the first and the latest occasion in which it has been invoked in his rejection of reductionism. The example is the biologist’s explanation of independent assortment of functional genes: The explanandum is (G) Genes on different chromosomes, or sufficiently far apart on the same chromosome, assort independently. According to Kitcher, the functional biologist proffers an explanans for (G), which we shall call (PS): (PS) Consider the following kind of process, a PS process (for pairing and separation). There are some basic entities that come in pairs. For each pair, there is a correspondence relation between the parts of one member of the pair and the parts of the other member. 
At the first stage of the process, the entities are placed in an arena. While they are in the arena, they can exchange segments, so that the parts of one member of a pair are replaced by the corresponding parts of the other member, and conversely. After exactly one round of exchanges, one and only one member of each pair is drawn from the arena and placed in the winners' box. In any PS process, the chances that small segments that belong to members of different pairs, or that are sufficiently far apart on members of the same pair, will be found in the winners' box are independent of one another. (G) holds
because the distribution of chromosomes to gametes at meiosis is a PS process. Kitcher writes, "This I submit is a full explanation of (G), an explanation that prescinds entirely from the stuff that genes are made of" (1999, pp. 199–200).

Leave aside for the moment the claim that (PS) is a full explanation of (G), and consider why, according to the antireductionist, no molecular explanation of (PS) is possible. The reason is basically the same story we learned above about why the kinds of functional biology cannot be identified with those of molecular biology. Because the same functional role can be realized by a diversity of structures, and because natural selection encourages this diversity, the full macromolecular explanation for (PS) or for (G) will have to advert to a range of physical systems that realize independent assortment in many different ways. These different ways will form an unmanageable disjunction of alternatives so great that we will not be able to recognize what they have in common, if indeed they have anything in common beyond the fact that each of them will generate (G). Even though we all agree that (G) obtains in virtue of macromolecular facts alone, we can see that, because of their number and heterogeneity, these facts will not explain (PS), still less supplant (PS)'s explanation of (G), or, for that matter, supplant (G)'s explanation of particular cases of genetic recombination. This is supposed to vindicate antireductionism's theses that functional explanations are complete and that functional generalizations can be neither explained by nonfunctional ones nor replaced by them.

But this argument leaves several hostages to fortune. Begin with (G).
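Since (PS) is in effect an algorithm, its independence claim can be checked mechanically. The following sketch (plain Python; the function names and the one-segment "chromosomes" are illustrative assumptions, not anything in Kitcher's text) simulates many PS processes and confirms that segments belonging to members of different pairs turn up in the winners' box independently.

```python
import random

def ps_process(pairs, exchange_prob=0.5):
    """Run one PS process: a round of exchanges of corresponding
    segments within each pair, then a draw of exactly one member
    per pair into the winners' box."""
    winners = []
    for left, right in pairs:
        left, right = list(left), list(right)
        for i in range(len(left)):          # corresponding parts may swap
            if random.random() < exchange_prob:
                left[i], right[i] = right[i], left[i]
        winners.append(random.choice((left, right)))
    return winners

random.seed(0)
n = 100_000
count_a = count_b = count_ab = 0
for _ in range(n):
    # two pairs; track segment 'A' on the first pair and 'B' on the second
    box = ps_process([(["A"], ["a"]), (["B"], ["b"])])
    got_a, got_b = box[0][0] == "A", box[1][0] == "B"
    count_a += got_a
    count_b += got_b
    count_ab += got_a and got_b

p_a, p_b, p_ab = count_a / n, count_b / n, count_ab / n
# (G) predicts independence: P(A and B) should approximate P(A) * P(B)
assert abs(p_ab - p_a * p_b) < 0.01
```

The same harness extends to multi-segment pairs for the "sufficiently far apart on the same pair" half of the claim, on the simplifying assumption that distant segments exchange independently of one another.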
If the argument of the previous section is right, (G) is not a law at all but the report of a conjunction of particular facts about a spatiotemporally restricted kind, “chromosomes,” of which there are only a finite number extant over a limited time period at one spatiotemporal region (the Earth). Accordingly, (G) is not something that we can expect to be reduced to the laws of a more fundamental theory, and the failure to do so constitutes no argument against reductionism classically conceived—nor is the absence or impossibility of such a reduction much of an argument for antireductionism. The antireductionist may counter that regardless of whether (G) is a generalization, it has explanatory power and therefore is a fit test case for reduction. This, however, raises the real problem that daunts antireductionism. Antireductionism requires an account of explanation to vindicate its claims. Biologists certainly do accord explanatory power to (G). But how does (G) explain? And the same questions are raised by the other components of the antireductionist’s claims. Thus, what certifies (PS)—the account of PS processes given above—as explanatory, and what prevents the vast disjunction of macromolecular accounts
of the underlying mechanism of meiosis from explaining (PS), or, for that matter, from explaining (G) and indeed whatever it is that (G) explains?

There is one tempting answer, which I shall label "explanatory Protagoreanism": the thesis that "some human or other is the measure of all putative explanations, of those which do explain and those which do not."3 Thus, consider the question of why a macromolecular explanation of (PS) is not on the cards. One answer is, presumably, that it is beyond the cognitive powers of any human contemplating the vast disjunction of differing macromolecular processes, each of which gives rise to meiosis, to recognize that, conjoined, they constitute an explanation of (PS). Or, similarly, it is beyond the competence of biologists to recognize how each of these macromolecular processes gives rise to (G). This is explanatory Protagoreanism. That the disjunction of this set of macromolecular processes implements PS processes, and thus brings about (PS) and (G), does not seem to be at issue. Only someone who denied the thesis of physicalism—that the physical facts fix all the biological facts—could deny the causal relevance of this vast motley of disparate macromolecular processes to the existence of (PS) and the truth of (G). In fact, there is something that the many macromolecular realizations of (PS) have in common that would enable the conjunction of them to explain (PS) fully to someone with a good enough memory for details. Each was selected for because each implements a PS process, and PS processes are adaptive in the local environment of the Earth from about the onset of the sexually reproducing species to their extinction. Since selection for implementing PS processes is blind to differences among macromolecular structures with the same or similar effects, there may turn out to be nothing else completely common and peculiar to all macromolecular implementations of meiosis besides their being selected for implementing PS processes. But this will be a reason to deny that the conjunction of all these macromolecular implementations explains (PS) and/or (G) only on a Protagorean theory of explanation.

3. Perhaps the best-known argument for what I call "explanatory Protagoreanism" is Putnam's (1975, pp. 295–98) "square peg–round hole" argument. On this view, explanations of why a particular square peg does not go through the round hole in a board, based on considerations from geometry, are superior to explanations of the same event that advert to quantum mechanics; the former explanations are entirely adequate and correct, and require no supplementation, correction, or deepening by more fundamental considerations about the material composition of the peg and board, or the laws and generalizations that they instantiate. The reason given for this conclusion is that the latter explanation provides irrelevant detail and fails to identify features of the explanandum that are shared with other similar cases. This argument endorses explanatory Protagoreanism in two closely connected ways. First, it makes the adequacy of one explanation and the inadequacy of the other turn on whether information is "relevant," not relevant only to the causal process involved, but relevant to something else as well—presumably our interests. No one could deny that the material composition of peg and board and the laws governing it are causally relevant to the explanandum. If relevance is to be judged by other criteria, I can think of none but our interests. Second, the similarity of the explanandum to other cases must be understood as relative, so the appeal to similarity also drags in our interests. Wittgenstein noted that anything can be similar to anything else, and that criteria of similarity reflect our interests. Of course, I am also inclined to argue that explanations of why square pegs don't go through round holes which advert to geometry alone are either seriously incomplete or false. We need to add information that assures us of the rigidity of the materials under the conditions which obtain when the peg is pushed through the hole; and once we begin trying to make our explanation complete and correct, the relevance of the more fundamental physical facts and the laws governing them becomes clearer. Sober (1999) advances a slightly different argument against Putnam's conclusion that the geometrical explanation is superior; his conclusion, however, is similar to my own. He notes that Putnam's argument begins by conceding that both explanations are correct, or at least equally well supported. Accordingly, he infers that the only reason Putnam can offer for preferring the broader geometrical explanation to the deeper physical one is our "subjective" interests. Putnam would be better advised simply to deny that the quantum-theoretical description of the causal process instantiated by the peg and hole is explanatory at all. But it is hard to see how one could disqualify the quantum story as not explanatory at all, even if it were guilty of irrelevant detail and of silence on an objective pattern instantiated by this and other peg-and-hole cases.
Antireductionists who adopt what is called an erotetic account of explanation (in preference to a unification account, a causal account, or the traditional D-N account of explanation) will feel the attractions of explanatory Protagoreanism. For the erotetic account treats explanations as answers to "why" questions posed about a particular occurrence or state of affairs, answers which are adequate—that is, explanatory—to the degree that they are appropriate to the background information of those who pose the why question and to the degree that the putative explanation excludes competing occurrences or states of affairs from obtaining. Since it may be that we will never know enough for a macromolecular answer to the question of why (G) obtains, no macromolecular explanation of why (G) obtains will be possible. Similarly, we may never know enough for a macromolecular explanation of (PS) to be an answer to our question, why do PS processes occur? But this seems a hollow victory for antireductionism, even if we grant the tendentious claim that we will never know enough for such explanations to succeed. What is worse, it relegates antireductionism to the status of a claim about biologists, not about biology. Such philosophical
limitations on our epistemic powers have been repeatedly breached in the history of science.

Antireductionists wedded to alternative, nonerotetic accounts of explanation cannot adopt the gambit of a Protagorean theory of explanation in any case. They will need a different argument for the claim that neither (G) nor (PS) can be explained by its macromolecular supervenience base (see note 2 above), and for the claim that (PS) does explain (G) and that (G) does explain individual cases of recombination. One argument such antireductionists might offer for the former claim rests on a metaphysical thesis: there are no disjunctive properties, or, if there are, such properties have no causal powers. Here is how the argument might proceed. The vast motley of alternative macromolecular mechanisms that realize (PS) have nothing in common. There is no property—and, in particular, no property with the causal power to bring about the truth of (G)—which they have in common. Physicalism (which all antireductionists party to this debate embrace) assures us that whenever (PS) obtains, some physical process, call it Pi, obtains. Thus, we can construct the identity (or at least the biconditional)

(R) (PS) = P1 v P2 v . . . v Pi v . . . v Pm

where m is the number, a very large number, of all the ways macromolecular processes can realize PS processes. The Protagorean theory of explanation tells us that (R) is not explanatory, roughly because it is too long a sentence for people to keep in their heads. A causal theory of explanation might rule out (R) as explaining (PS) on the ground that the disjunction, P1 v P2 v . . . v Pi v . . . v Pm, is not the full cause.4 This might be either because it is incomplete—there is always the possibility of still another macromolecular realization of (PS) arising—or because disjunctive properties just aren't causes, have no causal powers, and perhaps aren't really properties at all.
4. This tendentious expression is likely to cause philosophical controversy. We assume here a distinction between causes and standing conditions also necessary for an effect, but not part of its cause. Additionally, a philosophically perspicuous account would have to take sides on whether P1 v P2 v . . . v Pi v . . . v Pm names a disjunction of gerundial phrases or is a disjunction of statements naming facts. For a treatment of the ontological issues, see Bennett 1989. Again, like most other technical issues in metaphysics, we sidestep these matters here. The reduction of the biological does not turn on how they are adjudicated.

A unificationist theory of explanation (or, for that matter, a D-N account) might hold that since the disjunction cannot be completed, it will not effect deductive unifications or systematizations. Thus, (PS) and (G) are the best and
most complete explanations biology can aspire to. Antireductionist versions of all three theories, the causal, the unificationist, and the Protagorean, need the disjunction in (R) to remain uncompleted in order to head off a reductionist explanation of (PS) and/or (G).

Consider the first alternative, that (R) is not complete, either because some disjuncts haven't occurred yet or perhaps because there are an indefinite number of possible macromolecular implementations of (PS). This, in fact, seems to me to be true, just in virtue of the fact that natural selection is continually searching the space of alternative adaptations and counteradaptations, and that threats to the integrity and effectiveness of meiosis might in the future result in new macromolecular implementations of (PS) being selected for. But this is no concession to antireductionism. It is part of an argument that neither (PS) nor (G) reports an explanatory generalization, that they are in fact temporarily true claims about local conditions on the Earth.

On the second alternative, (R) can be completed in principle, perhaps because there are only a finite number of ways of realizing a PS process, but the disjunction is neither a cause nor a real property at all, and therefore cannot figure in an explanation of either (PS) or (G). There are several problems with such an argument. First, the disjuncts in P1 v P2 v . . . v Pi v . . . v Pm do seem to have at least one, perhaps even two, relevant properties in common: each was selected for implementing (PS), and each causally brings about the truth of (G). Second, we need to distinguish predicates in languages from properties of objects. It might well be that in the language employed to express biological theory, the only predicate we employ that is true of every Pi is a disjunctive one, but it does not follow that the property picked out by the disjunctive predicate is a disjunctive property.
Philosophy long ago learned to distinguish things from the terms we hit upon to describe them.5

How might one argue against the causal efficacy of disjunctive properties? One might hold that disjunctive properties will be causally efficacious only when their disjuncts subsume similar sorts of possible causal processes. If we adopt this principle, the question at issue becomes whether the disjunction P1 v P2 v . . . v Pi v . . . v Pm subsumes similar sorts of causal processes. The answer to this question seems to be that the disjuncts share the features of having been selected for, and of resulting in the same outcome—PS processes. Thus, the disjunctive predicate names a causal property, a natural kind. Antireductionists are hard-pressed to deny the truth and the explanatory power of (R).

5. In fact, the physicalist is committed to treating "some nucleus n undergoes a PS process" as what Kim calls a "second order predicate," which names the property described by the "first order" disjunction of statements to the effect that n undergoes P1 v n undergoes P2 v . . . v n undergoes Pi v . . . v n undergoes Pm. (See Kim 1998, 2005.) The details of why the physicalist is so committed would require an excursion into metaphysics for which few readers would have the patience.

Besides its problems in undermining putative macromolecular explanations of (PS), (G), and what (G) explains, antireductionism faces some problems in substantiating its claims that (PS) explains (G) and that (G) explains individual cases of genetic recombination. The problems, of course, stem from the fact that neither (PS) nor (G) is a law, and therefore an account is owing of how statements like these can explain. Of course, it is open to the antireductionist to embrace the account of biological explanation as implicitly invoking the theory of natural selection to connect initial-condition statements about how one frozen accident selected for another historical state of affairs. Biological explanation is evolutionary explanation, and evolutionary explanation is historical explanation, in which the implicit laws are those of Darwinian theory. This will be true even in molecular biology. To cite a favorite example of mine (first elaborated in Rosenberg 1985, chap. 3), the explanation of why DNA contains thymine while messenger RNA (mRNA), transfer RNA (tRNA), and ribosomal RNA (rRNA) contain uracil is a thoroughgoing historical one. Long ago on Earth, DNA won the selective race as the best available solution to the problem of high-fidelity information storage; meanwhile, RNA was selected for low-cost information transmission and protein synthesis. Uracil is cheaper to synthesize than thymine, because thymine has a methyl group that uracil lacks. Cytosine spontaneously deaminates to uracil. Uracil produced in DNA by deamination results in a point mutation in the conjugate DNA strand on replication, since cytosine pairs with guanine, while uracil and thymine both pair with adenine.
A repair mechanism evolutionarily available to DNA removes uracils and replaces them with cytosines to prevent this point mutation. The methyl group on thymine molecules in DNA blocks the operation of this repair mechanism when it attempts to remove thymines. Employing this relatively costly molecule was a cheaper and/or more attainable adaptation than DNA's evolving a repair mechanism that could distinguish uracils that are not the result of cytosine deamination from those that are. So it was selected for. Meanwhile, the spontaneous deamination of cytosine to uracil on one out of hundreds or thousands of RNA molecules engaged in protein synthesis will disable that molecule, but results in only a negligible reduction in the production of the protein it would otherwise build. Ergo, natural selection for economical RNA transcription resulted in RNA's employing uracil instead of thymine.

Notice how the explanation works. First, we have two "generalizations": DNA contains thymine, RNA contains uracil. They are not laws but, in fact, statements about local conditions on the Earth. After all, DNA can be
synthesized with uracil in it, and RNA can be synthesized with thymine. Second, the explanation for each appeals to natural selection for solving a design problem set by the environment. Third, tRNA, mRNA, and the various rRNAs are functional kinds, and they have their functions as a result of selection over variation. Fourth, we can expect that in nature's relentless search for adaptations and counteradaptations, the retroviruses, in which hereditary information is carried by RNA, may come to have their RNAs composed of thymine instead of uracil if and when it becomes disadvantageous for retroviruses to maximize their rates of mutation. At that point, of course, the original generalizations will, like other descriptions of historical patterns, cease to obtain; but we will have an evolutionary explanation of why they do so, and we will be able to retain our original explanation of why these generalizations about the composition of DNA and RNA obtained during the period and in the places where they did. In these respects, explanation in molecular biology is completely typical of explanation at all higher levels of biological organization. It advances historical explanation sketches in which the principles of the theory of natural selection figure as implicit laws. And explanation sketches do not admit of deductive derivation one from another.
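The pairing facts this explanation turns on (cytosine pairs with guanine; uracil and thymine both pair with adenine) can be made concrete in a toy sketch. The code below is illustrative Python, not a model of real repair enzymes; in particular, the blanket replace-every-U-with-C rule is a deliberate simplification of uracil-excision repair. It shows an unrepaired C-to-U deamination becoming a fixed point mutation after two rounds of replication, while repair before replication preserves the original sequence.

```python
# Watson-Crick pairing used for (template -> new strand) replication
PAIR = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G',
        'U': 'A'}  # uracil, like thymine, pairs with adenine

def replicate(strand):
    """Build the complementary strand from a template."""
    return ''.join(PAIR[base] for base in strand)

def repair(strand):
    """The uracil-excision repair sketched above: any U found in DNA
    is assumed to be deaminated cytosine and is replaced by C.
    Thymine's extra methyl group marks it as legitimate, so T stays."""
    return strand.replace('U', 'C')

original = 'GATC'
damaged = original.replace('C', 'U')   # cytosine deaminates to uracil

# Without repair: after two rounds of replication, the lineage of the
# damaged strand carries T where the original carried C -- a fixed
# G:C -> A:T point mutation.
unrepaired = replicate(replicate(damaged))
assert unrepaired == 'GATT'

# With repair before replication, the original sequence is preserved.
repaired = replicate(replicate(repair(damaged)))
assert repaired == original
```

Running the unrepaired branch through one replication yields the conjugate 'CTAA' (adenine opposite the uracil, where guanine should be), and the second replication fixes the mutation in the descendant strand.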
reconfiguring the debate about reduction We can conclude that so far as the “layer-cake” reductionism of postpositivist philosophers of science and its antireductionist opposition are concerned, both views are irrelevant to the real issue about the relation between functional and molecular biology. If there is a real dispute here, it is not about the derivability or nonderivability of laws in functional biology from laws in molecular biology, as there are no laws in either subdiscipline. Nor can the real dispute turn on the relationship between theories in molecular and functional biology. As chapter 4 argues, there is only one general theory in biology, Darwinism. As Dobzhansky recognized, it is equally indispensable to functional and molecular biology. Once this conclusion is clear, the question of what reductionism was in the postpositivist past can be replaced by the question of what reductionism is now. For the obsolescence of the postpositivist model of reduction hardly makes the question of reductionism or its denial obsolete. The accelerating pace of developments in molecular biology makes this question more pressing than ever. But it is now clear that the question has to be reformulated if it is to make contact with real issues in biology. As we shall see much more fully in chapter 4, biology is unavoidably terrestrial. Its explanatory resources are spatiotemporally restricted in their meanings. Thus, the debate between reductionism and antireductionism will have to
be one about the explanation of particular historical facts,6 some obtaining for longer than others, but all of them ultimately the contingent results of general laws of natural selection operating on boundary conditions. Reductionism needs to claim that the most complete, correct, and adequate explanations of historical facts uncovered in functional biology are provided by appeal to other historical facts uncovered in molecular biology, plus some laws that operate at the level of molecular biology. Antireductionism must claim that there are at least some explanations in functional biology which cannot be completed, corrected, or otherwise improved by adducing wholly nonfunctional considerations from molecular biology. One way to do this would be to show that there are some functional biological phenomena that cannot in principle be decomposed or analyzed into component molecular processes. But such a demonstration would threaten the antireductionist's commitment to physicalism. A more powerful argument for antireductionism would be one which shows that even in macromolecular explanations there is an unavoidable commitment to ultimate explanation by (implicit) appeal to irreducible functional—that is, evolutionary—laws. Reductionists can provide a strong argument for their view and rebut antireductionist counterarguments effectively. But to do so, they need to show that

6. Yet another philosophical digression: the debate cannot be a dispute about "explanation," for example a disagreement about pragmatic, erotetic, or Protagorean versus nonerotetic accounts of explanation. For that is a general problem in the philosophy of science, not a problem about reductionism in the philosophy of biology. One way to prescind from the disagreement between reductionists and antireductionists about the further issue of explanation is to borrow Railton's (1981) notion of an "ideal explanatory text." Railton writes, "This full blown causal account would extend, via various relations of reduction and supervenience, to all levels of analysis, i.e. the ideal text would be closed under relations of causal dependence, reduction, and supervenience. It would be the whole story concerning why the explanandum occurred, relative to a correct theory of the lawful dependencies of the world" (1981, p. 247). In terms of the notion of an "ideal explanatory biological text," the antireductionist holds that such a text need not advert to descriptions and generalizations about macromolecular processes, and that a text adverting only to nonmolecular biological considerations could be ideal. The reductionist denies this thesis. This way of drawing the distinction is inspired by Salmon's (1989, p. 161) observation that "the distinction between the ideal explanatory text and [less than complete] explanatory information can go a long way . . . in reconciling the views of the pragmatists [about explanation] and the realists," or "objectivists," as Salmon elsewhere calls them. Exponents of an "ontic" view of explanation, such as Salmon, will presumably be satisfied with the epistemic/ontological distinction drawn above. Hereafter, following Salmon, I will assume that the ideal/nonideal explanatory text distinction will enable us to prescind from disputes about explanation irrelevant to reduction.
ultimate explanations in functional biology are unavoidably inadequate, and inadequate in ways that can only be improved by proximate explanations from molecular biology. This would indeed refute antireductionism. Or it would do so if the reductionist can show that these proximate explanations are not just disguised ultimate explanations themselves. It is the literal truth of Dobzhansky’s dictum that, of course, will make one suspect that this cannot be done. What reductionists must ultimately argue is that the laws of natural selection to which even their most macromolecular explanations implicitly advert are reducible to laws of physical science. This second challenge is the gravest one reductionism faces. For if at the basement level of molecular biology there is to be found a general law not reducible to laws of physics and chemistry, then antireductionism will be vindicated at the very core of the reductionist’s favored subdiscipline. The remainder of this chapter takes up the first challenge, that of showing what makes ultimate explanations in functional biology inadequate in ways only proximate molecular explanations can correct. The next chapter is devoted to showing that the only laws of nature to be found in biology are the regularities about evolution by natural selection that Darwin uncovered. It is they that must ultimately be grounded in physical science, if reductionism is to be vindicated. Recall Mayr’s distinction between proximate and ultimate explanation.7 We 7. Mayr’s distinction is subject both to misunderstanding and to dispute. It will be well to dispel the former and identify the latter. Besides claiming that biology is distinct from physical science in its employment of ultimate explanations, Mayr was also prescient in his insistence that evolutionary explanations focused on the statistical distribution of traits in populations by contrast to the “essentialist” or “typological” explanations characteristic of physics. 
These latter explained why all normal objects of a certain kind had a given property. Biology, however, is the domain of variation, and here normality is a matter of a statistical distribution in a population. Sometimes Mayr’s proximate/ultimate distinction is confounded with his population/typological distinction. The confusion is not unnatural, as Mayr held that ultimate explanations will have population-level explananda, while explanations in physical sciences are all proximate and typological. For an illuminating discussion of Mayr’s population/typological distinction, see Sober (1984), who follows Lewontin (1974). Ariew (2003) attacks Mayr’s proximate/ultimate distinction as illuminating about biology, and argues that evolutionary explanations are not really ultimate but always and only statistical accounts of changes in distributions of a trait in a population over time. Chapter 5 treats the role of probability in evolutionary theory and addresses Ariew’s important claims at some length. There it is argued that claims about populations need to be explained in terms of more basic claims about individuals, i.e. reduced to them. Thus, Mayr’s proximate/ultimate distinction is vindicated. Whether it will support an antireductionist account of biology is what we now explore.
What Was Reductionism?
illustrated it by invoking the question, why do butterflies have eyespots on their wings? This question may express a request for an adaptationalist explanation that accords a function, in camouflage for instance, to the eyespot on butterfly wings, or it may be the request for an explanation of why at a certain point in development eyespots appear on individual butterfly wings and remain there throughout their individual lives. The former explanation is an ultimate one; the latter is a proximate one. Reductionism in biology turns out to be the radical thesis that ultimate explanations must give way to proximate ones and that these latter will be molecular explanations. To expound this thesis about explanations, reductionism adduces another distinction among explanations. It is a distinction known to philosophers of history, a relatively uncultivated division of philosophy over the last half century, but perhaps one whose relevance to biological explanation may become increasingly apparent. The distinction is between what are called “how-possible” explanations and “why-necessary” explanations. A how-possible explanation shows how something could have happened, by adducing facts which show that there is, after all, no good reason for supposing it could not have happened. A why-necessary explanation shows that its explanandum had to have happened. The two kinds are distinct and independent of each other. Each kind of explanation will be appropriate to a different inquiry, even when the two different inquiries are expressed in the same words.
For example, “Why did the kaiser’s army violate Belgian neutrality at the beginning of World War I?” might be answered by the response, “The kaiser had reason to think the British wouldn’t honor their guarantee to the Belgians, or if they did, that their few divisions wouldn’t have made a difference in the first month on the western front.” If this answer is satisfactory, then it is an answer to a how-possible question, which shows that the actual is possible, that what really happened could have happened. On the other hand, the same question, when asked by someone who already knows about the kaiser’s beliefs, will be a request to show why they actually led to the German army’s violation of Belgian neutrality, including why the German general staff agreed with the kaiser’s assessment, what orders they gave the divisional commanders, and so on. It’s worth noting that even professional historians are usually satisfied with how-possible answers to explanatory questions, both because the amount of information needed to answer the why-necessary version of the question will be vast, and because most of it has usually been lost from the historical record. Nevertheless, there is an important asymmetry between how-possible and why-necessary explanations that philosophers of history recognized. Once a how-possible explanation has been given, it makes perfect sense to go on and ask for a why-necessary explanation. But the reverse is not the case. Once a
chapter one
why-necessary explanation has been given, there is no point asking for a how-possible explanation. For in showing why something had to happen, we have removed all obstacles to its possibly happening. Some philosophers of history went on to suggest that why-necessary explanations are “complete.” But this is a notion hard to make clear in the case of, say, causal explanations, in which it is impossible to describe all the conditions, positive and negative, individually necessary and jointly sufficient for the occurrence of an event which we seek to explain. For our purposes, all that will be required is the observation that a why-necessary explanation is more complete than a how-possible explanation, and that is the source of the asymmetry between them. It is not difficult to graft this distinction onto the one broached above between erotetic and pragmatic approaches to explanation. On the erotetic view, whether a question expresses a request for a how-possible explanation or a why-necessary one is a matter of the context in which the question is put, the information available to the interlocutors, and their aims and interests. Accordingly, sometimes a why-necessary explanation will not be an appropriate response to an explanatory question. But all this is compatible with the fact that a why-necessary explanation provides more information about the causally necessary conditions for the matter to be explained. The exponent of a nonerotetic approach to explanations will hold that there is such a thing as a complete and correct explanation independent of contexts of inquirers’ questions, and that insofar as they are both incomplete, the how-possible explanation is more incomplete and the why-necessary closer to the whole story. The reductionist will sympathize with this view, as we shall now see. Consider the ultimate explanation for eyespots in the buckeye butterfly species Precis coenia.
Notice to begin with that there is no scope for explaining the law that these butterflies have eyespots, or patterns that may include eyespots, scalloped color patterns, or edge bands, even though almost all of them do have such markings. There is no such law to be explained, as there are no laws about butterflies, still less any species of them. That the buckeye butterfly has such eyespots is, however, a historical fact to be explained. The ultimate explanation has it that eyespots on butterfly and moth wings have been selected for over a long course of evolutionary history. On some butterflies, these spots attract the attention and focus the attacks of predators onto parts of the butterfly less vulnerable to injury. Such spots are more likely to be torn off than more vulnerable parts of the body, and this loss does the moth or butterfly little damage while allowing it to escape. On other butterflies, and especially moths, wings and eyespots have also been selected for taking the appearance of an owl’s head, brows, and eyes. Since the owl is a predator of those
birds that consume butterflies and moths, this adaptation provides particularly effective camouflage. Here past events help to explain current events via implicit principles of natural selection. Such ultimate explanations have been famously criticized as “just-so” stories, allegedly too easy to frame and too difficult to test (Gould and Lewontin 1979); though its importance has been exaggerated, there is certainly something to this charge. Just because available data or even experience shows that eyespots are widespread does not guarantee that they are adaptive now. Even if they are adaptive now, this is by itself insufficient grounds to claim that they were selected because they were the best available adaptation for camouflage, as opposed to some other function—or, for that matter, that they were not selected at all but are mere “spandrels,” or traits riding piggyback on some other means of predator avoidance or some other adaptive trait. Reductionists will reply to this criticism that adaptationalist ultimate explanations of functional traits are “how-possible” explanations, and the “just-so-story” charge laid against ultimate explanation on these grounds mistakes incompleteness (and perhaps fallibility) for untestability. The reductionist has no difficulty with the ultimate functional how-possible explanation, as far as it goes. For its methodological role is partly one of showing how high fitness could in principle be the result of purely nonpurposive processes. More important, on the reductionist’s view, such a how-possible explanation sets the research agenda that seeks to provide why-necessary explanations. It is these why-necessary explanations that cash in the promissory notes offered by the how-possible explanations. But if we are not already convinced reductionists, we may well ask, why must such why-necessary explanations be macromolecular?
The reason is to be found in a limitation on ultimate explanations recognized by many: their silence regarding crucial links in the causal chains to which they advert. The how-possible explanation leaves unexplained several biologically pressing issues, ones implicit in biologically well-informed requests for an ultimate explanation. These are the question of which alternative adaptive strategies were available to various lineages of organisms, and which were not, and the further question of how the feedback from adaptedness of functional traits—like the eyespot—to their greater subsequent representation in descendants was actually effected. The most disturbing lacuna in a how-possible explanation is its silence on the causal details of exactly which feedback loops operate from fortuitous adaptedness of traits in one or more distantly past generations to improved adaptation in later generations, and how such feedback loops approach the biological fact to be explained as a locally constrained optimal
design. Dissatisfaction with such explanations, as voiced by those suspicious of the theory of natural selection, those amazed by the degree of apparent optimality of natural design, as well as the religious creationist, stems from a single widely shared and very reasonable scientific commitment. It is the commitment to complete causal chains, along with the denial of action at a distance, and the denial of backward causation. Long before Darwin, or Paley for that matter, Spinoza diagnosed the defect of purposive or goal-directed explanation: it “reverses the order of nature,” making the cause the effect. Natural selection replaces goal-directed processes. But natural selection at the functional level is silent on the crucial links in the causal chain which convert the appearance of goal-directedness to the reality of efficient causation. Therefore, explanations that appeal to it sometimes appear to be purposive or give hostages to fortune by leaving too many links in their causal chains unspecified. Darwin’s search for a theory of heredity reflected his own recognition of this fact. The charge that adaptational explanations are unfalsifiable or otherwise scientifically deficient reflects the persistent claim by advocates of the adequacy of ultimate explanations that their silence on these details is not problematic. Only a macromolecular account of the goal-directed process could answer these questions. Such an account would itself also be an adaptational explanation: it would identify strategies available for adaptation by identifying the genes (or other macromolecular replicators) that determine the characteristics of lepidopterans’ evolutionary ancestors, and that provide the only stock of phenotypes on which selection can operate to move along pathways to alternative predation-avoiding outcomes—leaf-color camouflage, spot camouflage, or other forms of Batesian mimicry; repellent taste to predators; Müllerian mimicry of bad-tasting species; and so on.
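The population-level logic that any such account must ultimately underwrite, namely that a heritable variant with even a modest fitness advantage comes to predominate, can be sketched in a few lines. The fitness values below are purely illustrative assumptions, not measured parameters for any lepidopteran:

```python
# Haploid selection sketch with made-up fitness values (illustrative only;
# nothing here is a measured lepidopteran parameter). An "eyespot" variant
# with a small survival advantage spreads until it predominates.
def next_freq(p, w_eyespot=1.05, w_plain=1.00):
    """One generation of selection on a variant at frequency p."""
    mean_fitness = p * w_eyespot + (1 - p) * w_plain
    return p * w_eyespot / mean_fitness

p = 0.01                      # the variant starts rare
for _ in range(400):
    p = next_freq(p)
print(f"frequency after 400 generations: {p:.4f}")
```

With a 5 percent advantage, the variant goes from rarity to near fixation within a few hundred generations. The sketch displays only the arithmetic of selective feedback; it is precisely the molecular links behind those fitness numbers that the reductionist insists must be filled in.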
The reductionist’s why-necessary explanation would show how the extended phenotypes of these genes competed and how the genes that generated the eyespot eventually became predominant, that is, selected for. In other words, the reductionist holds that (1) every functional ultimate explanation is a how-possible explanation, and (2) there is a genic and biochemical pathway selection process underlying the functional how-possible explanation. As we shall see below, reduction turns the merely how-possible scenario of the functional ultimate explanation into a why-necessary proximate explanation of a historical pattern. Note that the reductionist’s full explanation is still a historical explanation in which further historical facts—about genes and pathways—are added, and are connected by the same principles of natural selection that are invoked by the ultimate functional how-possible explanation. But the links in the causal chain of natural selection are filled in to show how past adaptations were available for and shaped into today’s functions. Antireductionists will differ from reductionists not on the facts but on
whether the initial explanation was merely an incomplete one or just a how-possible explanation. Antireductionists will agree that the macromolecular genetic and biochemical pathways are causally necessary to the truth of the purely functional ultimate explanation. But they don’t complete an otherwise incomplete explanation. They are merely further facets of the situation that molecular research might illuminate (Kitcher 1999, p. 199). The original ultimate answer to the question of why butterflies have eyespots does provide a complete explanatory answer to a question. Accordingly, how-possible explanations are perfectly acceptable ones, or else the ultimate explanation in question is something more than a mere how-possible explanation. Who is right here?
how-possible versus why-necessary explanations in evolutionary biology
On an erotetic view, how-possible and why-necessary explanations may be accepted as reflecting differing questions expressed by the same words. The reductionist may admit that there are contexts of inquiry in which how-possible answers to questions satisfy explanatory needs. But the reductionist will insist that in the context of advanced biological inquiry, as opposed to, say, secondary-school biology instruction, the how-possible question either does not arise or, having arisen in a past stage of inquiry, no longer does. How-possible questions do not arise where the phenomena to be explained are not adaptations at all, as in constraints or spandrels, and the only assurance that in fact how-possible explanations make true claims is provided by a why-necessary explanation that cashes in their promissory notes by establishing the adaptive origins of the functional traits in molecular genetics. This will become clearer as we examine proximate explanation in biology. Consider the proximate explanation from the developmental biology of butterfly wings and their eyespots. Suppose we observe the development of a particular butterfly wing, or, for that matter, suppose we observe the development of the wing in all the butterflies of the buckeye species, Precis coenia. Almost all will show the same sequence of stages, beginning with a wing imaginal disk eventuating in a wing with such spots; and a few will show a sequence eventuating in an abnormal wing or one without the characteristic eyespot, and so maladapted to the butterfly’s environment. Rarely, one may show a novel wing or markings fortuitously better adapted to the environment than the wings of the vast majority of members of its species. Let’s consider only the first case. We notice in one buckeye caterpillar (or all but a handful) that during development, an eyespot appears on the otherwise
unmarked and uniform epithelium of the emerging butterfly wing. If we seek an explanation of the sequence in one butterfly, the general statement that in all members of its species, development results in the emergence of an eyespot on this part of the wing is unhelpful. First, because examining enough butterflies in the species shows it is false. Second, even with an implicit ceteris paribus clause or a probabilistic qualification, we know the “generalization” simply describes a distributed historical fact about some organisms on this planet around the present time and for several million years in both directions. One historical fact cannot by itself explain another, especially not if its existence entails the existence of the fact to be explained. That all normal wings develop eyespots does not go very far in explaining why one does. Most nonmolecular generalizations in developmental biology are of this kind. That is, they may summarize sequences of events in the lives of organisms of a species or, for that matter, in organisms of higher taxa than species. Here is an example of typical generalizations in developmental biology from Wolpert (1998, p. 320): Both leg and wing discs [in Drosophila] are divided by a compartmental boundary that separates them into anterior and posterior developmental regions. In the wing disc, a second compartment boundary between the dorsal and ventral regions develops during the second larval instar. When the wings form at metamorphosis, the future ventral surface folds under the dorsal surface in the distal region to form the double layered insect wing. Despite its singular tone, this is a general claim about all (normal) Drosophila embryos, and their leg- and wing-imaginal discs. And it is a purely descriptive account of events in a temporal process recurring in all (normal) Drosophila larvae. 
For purposes of proximate explanation of why a double layer of cells is formed in any one particular embryo’s imaginal disc, this statement is no help. It simply notes that this happens in them all, or that it does so “in order to” eventually form the wing, where the “in order to” is implicit in the small word to. How is the pattern of wing development described in the extract from Wolpert in fact to be proximally explained? The logic of how the genes explain development is the subject of chapter 2. Here, some of the details of a developmental explanation may be given in order to show its special relevance to the proximate/ultimate distinction. Having identified a series of genes that control wing development in Drosophila, biologists then discovered homologies between these genes and genes expressed in butterfly development, and found that whereas in the fruit fly they control wing formation, in the butterfly they also control pigmentation. The details are complex, but following out a few of them shows us something important about how proximate why-necessary explanation can cash in the promissory notes of how-possible explanation and in principle reduce ultimate explanations to proximate ones. In the fruit fly, the wing imaginal disk is first formed as a result of the expression of the gene wingless (so called because its deletion results in no wing imaginal disk and no wing), which acts as a position signal to cells directing specialization into the wing-disc structure. Subsequently, the homeotic selector gene apterous is switched on and produces apterous protein only in the dorsal compartment of the imaginal disk, controlling formation of the dorsal (top) side of the wing and activating two genes, fringe and serrate, which form the wing margin, or edge. These effects were discovered by preventing dorsal expression of apterous, which results in the appearance of ventral (bottom) cells on the dorsal wing, with a margin between them and other (nonectopic) dorsal cells. Still another gene, distal-less, establishes the fruit fly’s wing tip. Its expression in the center of the (flat) wing imaginal disk specifies the proximo-distal (closer to body/farther from body) axis of wing development. It is the order in which certain genes are expressed and the concentration of certain proteins in the ovum that explain the appearance of eyespots in the buckeye butterfly. The elucidation of its details continues to be reported in the scientific literature from month to month. (It’s worth recording here the naming convention for many genes. A gene is named for the phenotypic result of its deletion or malfunction. Thus, wingless builds wings. Note that genes are individuated functionally and evolutionarily. Wingless is so called because of those of its effects which were selected by the environment to provide wings.
Similarly for distal-less.) Once these details were elucidated in Drosophila, it became possible to determine the expression of homologous genes in other species, in particular in Precis coenia. To begin with, nucleic acid sequencing showed that genes with substantially the same sequences were to be found in both species. In the butterfly, these homologous genes were shown to also organize and regulate the development of the wing, though in some different ways. For instance, in the fruit fly, wingless organizes the pattern of wing margins between dorsal and ventral surfaces, restricts the expression of apterous to dorsal surfaces, and partly controls the proximo-distal axis where distal-less is expressed. In the butterfly, wingless is expressed in all the peripheral cells in the imaginal disk which will not become parts of the wing, where it programs their deaths (Nijhout 1994, p. 45). Apterous controls the development of ventral wing surfaces in both fruit flies and butterflies, but the cells in which it is expressed in the Drosophila imaginal disk are opposite those in which the gene is expressed in Precis imaginal disks. As Nijhout describes the experimental results, The most interesting patterns of expression are those of Distal-less. In Drosophila Distal-less marks the embryonic primordium of imaginal disks and is also expressed in the portions of the larval disk that will form the most apical [wing-tip] structures. . . . In Precis larval disks, Distal-less marks the center of a presumptive eyespot in the wing color pattern. The cells at this center act as inducers or organizers for development of the eyespot: if these cells are killed, no eyespot develops. If they are excised, and transplanted elsewhere on the wing, they induce an eyespot to develop at an ectopic location around the site of implantation . . . the pattern of Distal-less expression in Precis disks changes dramatically in the course of the last larval instar [stage of development]. It begins as broad wedge shaped patterns centered between wing veins. These wedges gradually narrow to lines, and a small circular pattern of expression develops at the apex of each line. . . . What remains to be explained is why only a single circle of Distal-less expression eventually stabilizes on the larval wing disks. (Ibid., p. 45) In effect, the research program in developmental molecular biology is to identify genes expressed in development, and then to undertake experiments—particularly ectopic gene-expression experiments—that explain the long-established observational “regularities” reported in traditional developmental biology. The explanantia uncovered are always “singular” boundary conditions insofar as the explananda are spatiotemporally limited patterns, to which there are always exceptions of many different kinds.
The reductionistic program in developmental molecular biology is to first explain the wider patterns, and then explain the exceptions—“defects of development” (if they are not already understood from the various ectopic and gene-deletion experiments employed to formulate the why-necessary explanation for the major pattern). Is there an alternative to the reductionist’s why-necessary explanation in terms of the switching on and off of a variety of genes that control the emergence and activity of cells of certain types at the eyespots? Some antireductionists seek such an alternative in explanatory generalizations that cut across the diverse macromolecular programs which realize development. For example, Kitcher (1999) identifies certain mathematical models as regularities important to “growth and form” in development (consciously echoing D’Arcy Thompson [1917]), models that suggest a multilevel process, one in which levels above the macromolecular really are explanatory. In particular, Kitcher cites the work of J. D. Murray (1989).
Murray elaborated a set of simultaneous differential equations reflecting relationships between the rates of diffusion of pigments on the skin and the surface areas of the skin. By varying the ratio of skin surface to diffusion rates, Murray’s equations can generate patterns of spots, stripes, and other markings in a variety of mammals. As Kitcher has pointed out (1999, p. 204), Murray’s system of equations, together with some assumptions about the ratio of surface area to diffusion rates of pigments, implies that there are no striped animals with spotted tails—an apparently well-established observational regularity. Though Kitcher does not mention it, Murray goes on to develop another system of differential equations for the relation between surface area and pigment that produces eyespots on butterfly wings. What is of interest in the present debate is Murray’s assessment of the explanatory power of these mathematical models—sets of differential equations together with restrictions on the ratios among their variables: Here we shall describe and analyze a possible model mechanism for wing pattern proposed by Murray (1981b). As in [mammalian coat color], a major feature of the model is the crucial dependence of the pattern on the geometry and scale of the wing when the pattern is laid down. Although the diversity of wing patterns might indicate that several mechanisms are required, among other things we shall show here how seemingly different patterns can be generated by the same mechanism. (Murray 1989, pp. 450–51) Murray concludes, The simple model proposed in this section can clearly generate some of the major pattern elements observed in lepidopteran wings. As we keep reiterating in this book, it is not sufficient to say that such a mechanism is that which necessarily occurs. . . . 
From the material discussed in detail in [another chapter of Murray’s book] we could also generate such patterns by appropriately manipulating a reaction diffusion system capable of diffusion driven pattern generation. What is required at this stage if such a model is indeed that which operates, is an estimate of parameter values and how they might be varied under controlled experimental conditions. (p. 465) . . . It is most likely that several independent mechanisms are operating, possibly at different stages, to produce diverse patterns on butterfly wings. . . . Perhaps we should turn the pattern formation question around and ask: “What patterns cannot be formed by such simple mechanisms?” (p. 451)
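What such a “mechanism” amounts to formally can be made concrete with a toy calculation. In a generic two-morphogen reaction-diffusion system, pattern arises when the spatially uniform state is stable but perturbations at some finite wavelengths grow: Turing’s diffusion-driven instability. The sketch below checks the textbook instability conditions for illustrative kinetic and diffusion parameters; these numbers are assumptions for the sake of the example, not values from Murray’s actual wing-pattern model:

```python
import cmath

# Made-up linearization of a two-morphogen reaction-diffusion system
# u_t = f(u,v) + DU*u_xx, v_t = g(u,v) + DV*v_xx. A, B, C, D are the
# Jacobian entries of the kinetics at the homogeneous steady state;
# the inhibitor v diffuses much faster than the activator u.
A, B, C, D = 1.0, -1.0, 2.0, -1.5
DU, DV = 1.0, 10.0

def growth_rate(k):
    """Largest real part of the eigenvalues of the linearized system
    at spatial wavenumber k."""
    p = A - DU * k * k          # activator term, damped by diffusion
    q = D - DV * k * k          # inhibitor term, damped by diffusion
    tr, det = p + q, p * q - B * C
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(((tr + disc) / 2).real, ((tr - disc) / 2).real)

# The uniform mode (k = 0) is stable, so no patterning without diffusion;
# but a band of finite wavenumbers grows: diffusion-driven instability.
rates = [(k / 100.0, growth_rate(k / 100.0)) for k in range(1, 201)]
k_star, lam_star = max(rates, key=lambda kr: kr[1])
print(f"fastest-growing wavenumber ~{k_star:.2f}, growth rate {lam_star:.3f}")
```

The positive growth rate at a finite wavenumber selects a characteristic spacing of spots or stripes; varying the size of the domain relative to that wavelength is how one and the same set of equations yields different patterns. That is Murray’s point about the crucial dependence of pattern on geometry and scale.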
Murray treats his sets of simultaneous equations not as generalizations with independent explanatory power but as parts of a how-possible explanation that needs to be cashed in by developments which convert it to a why-necessary explanation or supplant it with such an explanation. In the period after Murray first produced his models, molecular biology has provided more and more of the proximate why-necessary explanations the reductionist demands for the historical facts about butterfly eyespots. This program is by no means complete, and the reductionist’s why-necessary explanations are not yet in. But they are obviously coming. In providing them, the reductionist also pays the promissory notes of the ultimate how-possible explanations biologists proffer. Recall that the ultimate how-possible explanation of the eyespot appeals to its predator-distraction and camouflage properties, but is silent on why this adaptation emerged instead of some other way of avoiding predation. Consequently, it is vulnerable to question, and invulnerable to test. Developmental molecular biology can answer questions about adaptation by making its historical claims about lines of descent open to test. The developmental molecular biologists S. B. Carroll and colleagues, who reported the beginnings of the proximal explanation sketched above, eventually turned their attention to elucidating the ultimate explanation. They write, The eyespots on butterfly wings are a recently derived evolutionary novelty that arose in a subset of the Lepidoptera and play an important role in predator avoidance. The production of the eyespot pattern is controlled by a developmental organizer called the focus, which induces the surrounding cells to synthesize specific pigments. The evolution of the developmental mechanisms that establish focus was therefore the key to the origin of butterfly eyespots. (Carroll et al. 1999, p. 
532) What Carroll’s team discovered is that the genes and the entire regulatory pathway that integrates them and that controls anterior/posterior wing development in the Drosophila (or its common ancestor with butterflies) have been recruited and modified to develop the eyespot focus. This discovery of the “facility with which new developmental functions can evolve . . . within extant structures” (p. 534) would have been impossible without the successful why-necessary answer to the proximate question of developmental biology. Besides the genes noted above, there is another, Hedgehog, whose expression is of particular importance in the initial division of the Drosophila wing imaginal disk into anterior and posterior segments. As in the fruit fly, in Precis the Hedgehog gene is expressed in all cells of the posterior compartment of the wing, but its rate of expression is even higher in the cells that surround the foci of the eyespot. In Drosophila, Hedgehog’s control over anterior/posterior differentiation
appears to be the result of a feedback system at the anterior/posterior boundary involving four other gene products, and in particular one, engrailed, which represses another, cubitus interruptus (hereafter ci for short), in the fruit fly’s posterior compartment. This same feedback loop is to be found in the butterfly wing posterior compartment, except that here the engrailed gene’s products do not repress ci expression in the anterior compartment of the wing. The expression of engrailed’s and Ci’s gene products together results in the development of the focus of the eyespot. One piece of evidence that switching on the Hedgehog-engrailed-ci gene system produces the eyespot comes from the discovery that in those few butterflies with eyespots in the anterior wing compartment, engrailed and ci are also expressed in the anterior compartment at the eyespot foci (but not elsewhere in the anterior compartment). “Thus, the expression of the Hedgehog signaling pathway and engrailed is associated with the development of all eyespot foci and has become independent of the [anterior/posterior] restrictions [that are found in Drosophila]” (Carroll et al. 1999, p. 534). Further experiments and comparative analysis enabled Carroll and coworkers to elucidate the causal order of the changes in the Hedgehog pathway as it shifts from wing production in Drosophila (or its ancestor) to focus production in Precis eyespot development. “The similarity between the induction of engrailed by Hedgehog at the [anterior/posterior] boundary [of both fruit fly and butterfly wings, where it produces the intervein tissue in wings] and in eyespot development suggests that during eyespot evolution, the Hedgehog-dependent regulatory circuit that establishes foci was recruited from the circuit that acts along the Anterior/Posterior boundary of the wing” (ibid.).
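The rewiring Carroll’s team describes can be caricatured as a small boolean circuit. This is a deliberately crude sketch of the logic just summarized, not their regulatory model; in particular, the repression_lifted switch is a stand-in for whatever evolutionary change decoupled ci from engrailed repression at presumptive foci:

```python
# Crude boolean caricature (illustrative only): Hedgehog signaling induces
# engrailed in posterior cells; in the fruit fly engrailed then represses
# ci there, while in the butterfly (at a presumptive focus) that repression
# is lifted, so engrailed and Ci are coexpressed and a focus is specified.
def wing_cell(compartment, repression_lifted):
    hedgehog = (compartment == "posterior")    # Hedgehog in posterior cells
    engrailed = hedgehog                       # induced by Hedgehog signaling
    ci = (not engrailed) or repression_lifted  # repressed by engrailed unless lifted
    focus = engrailed and ci                   # coexpression specifies a focus
    return {"en": engrailed, "ci": ci, "focus": focus}

fly = wing_cell("posterior", repression_lifted=False)
butterfly = wing_cell("posterior", repression_lifted=True)
print(fly["focus"], butterfly["focus"])  # → False True
```

The point of the caricature is only this: a single change in the circuit’s wiring converts a wing-patterning module into a focus-specifying one, which is what makes the recruitment hypothesis testable.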
Of course, the full why-necessary proximate explanation for any particular butterfly’s eyespots is not yet in, nor is the full why-necessary proximate explanation for the development of the Drosophila’s (or its ancestor’s) wing. But once they are in, the transformation of the ultimate explanation of why butterflies have eyespots on their wings into a proximate explanation can begin. This fuller explanation will still rely on natural selection. But it will be one in which the alternative available strategies are understood and the constraints specified; the time, place, and nature of mutations are narrowed; the adaptations are unarguably identifiable properties of genes—their immediate or mediate gene products (in Dawkins’s [1982] terms, their extended phenotypes); the feedback loops and causal chains will be fully detailed; and the scope for doubt, skepticism, questions, and methodological critique that ultimate explanations are open to will be much reduced. At the outset, I claimed that reductionism is a methodological dictum that follows from biology’s commitment to provide explanations. This claim can now be made more explicit, even against the background of an erotetic theory of
what explanations are adequate and when. Everyone should agree that biology is obliged to provide why-necessary explanations for historical events and patterns of events. The latter-day reductionist holds that such why-necessary explanations can only be provided by adverting to the macromolecular states, processes, events, and patterns on which these nonmolecular historical events and patterns supervene. Any explanation that does not do so cannot claim to be an adequate, complete why-necessary explanation. The reductionist does not claim that biological research or the explanations in which it eventuates can dispense with functional language or adaptationism. Much of the vocabulary of molecular biology is thoroughly functional. As I have noted, the reductionist needs the theory of natural selection to make out the case for reduction. Nor is reductionism the claim that all research in biology must be "bottom-up" instead of "top-down" research. So far from advocating the absurd notion that molecular biology can give us all of biology, the reductionist's thesis is that we need to identify the patterns at higher levels because they are the explananda for which molecular biology provides the explanantia. What the reductionist asserts is that functional biology's explanantia are always molecular biology's explananda. So, why isn't everyone a reductionist? Why, indeed, is antireductionism the ruling orthodoxy among philosophers of biology and even among biologists? Because, in the words of one antireductionist, again invoking D'Arcy Thompson's expression, reductionism's alleged mistake "consists in the loss of understanding through immersion in detail, with concomitant failure to represent generalities that are important to 'growth and form'" (Kitcher 1999, p. 206). 
The reductionist rejects the claim that there is a loss of biological understanding in satisfying reductionism’s demands on explanation, and denies that there are real generalities to be represented or explained. In biology there is only natural history—the product of the laws of natural selection operating on macromolecular initial conditions. Reductionism accepts that selection obtains at higher levels, and that even for some predictive purposes, focus on these levels often suffices. But the reductionist insists that the genes and the proteins they produce are still the “bottleneck” through which selection among other vehicles is channeled. Without them, there is no way to improve on the limited explanatory power to be found in functional biology. Insofar as science seeks more-complete explanation for historical events and patterns on this planet, with greater prospects for predictive precision, it needs to pursue a reductionistic research program. That is, biology can nowhere remain satisfied with how-possible ultimate explanations—it must seek why-necessary proximate explanations, and it must seek these explanations in the interaction of macromolecules.
What Was Reductionism?
But there remains a serious lacuna in this argument for reductionism in a historical science like biology, one large enough to drive home a decisive antireductionist objection. Although the reductionism here defended claims to show that the how-possible ultimate explanations must be cashed in for why-necessary ultimate explanations, these explanations are still ultimate, still evolutionary—they still invoke the principle of natural selection. And until this principle can show unimpeachable reductionist credentials, it remains open to say that even at the level of the macromolecules, biology remains autonomous from physical science. Without a reduction of the laws governing natural selection, Dobzhansky's dictum ensures that even at the bottom of molecular biology, physicalism is still a problematic commitment. The grounding of the laws of natural selection in physical science is a task postponed to chapters 4 and 5. Meanwhile, we need to defend in much greater detail the claim made here: there are no explanations in developmental biology except for those provided by molecular genetics.
2
• •
Reductionism and Developmental Molecular Biology

In 1953, reductionism was mere philosophy. Today it is a successful research program. Tomorrow it will be an obvious truism. What will make it so is not the sequencing of the human and other genomes accomplished by the Human Genome Project, though it was probably a necessary condition. Nor will it be the biotechnology that recombinant genetics puts at our disposal, though its prospect of material reward may have energized more than a few molecular biologists. What will have turned reductionism from an exercise in philosophical handwaving into "normal science" is the sudden success of molecular developmental biology. In the first part of this chapter, I illustrate the success it has already achieved. Then, in the second half of this chapter and in all of the next, I respond to the arguments of those who would deny the promise or indeed even the achievements of the explanation of development by the activity of macromolecules. The reductionist needs to demonstrate the feasibility of the program by a worked-out example. And the example has to withstand the critical scrutiny, skepticism, and alternative interpretation of the antireductionist. The only way to do the former, short of sending the reader to the scientific literature, is to report it myself. In order to do the latter, to show that it withstands the antireductionist's objections, I need to give the example in more detail than perhaps some nonspecialists will want. But reading through the detailed results of discoveries honored by Nobel Prizes may be repaid at least by an increased sense of the powers and prospects of a reductionist research program. Molecular developmental biologists may without loss skip the second section below. But if they read on through the rest of the chapter and the next, they will be surprised and perplexed to find that claims about molecular developmental biology which they think obvious and well established are controverted at every point by philosophers of science and biologists eager to undermine reductionism. The first thing to show, however, is that before molecular developmental biology there were no explanations of embryological development at all, let alone ones fully or even partially adequate.
the explanatory vacuum of developmental biology

The success of molecular developmental biology is so consequential just because the concerted, apparently holistic teleology of multicellular fertilization, embryogenesis, and maturation has been an impenetrable mystery to causal science since Aristotle. How development as a process of efficient causation was even possible, let alone actual, seemed to transcend human understanding. It was almost as logically impenetrable as the question of how physical matter can embody thought. Unraveling the mystery of development would also bode well for a full understanding of the operation of and relationship among the somatic cells in the mature adult. For surely operating a cell is a simpler matter than building it from scratch. In fact, a causal account of embryological development may even embolden us to think that the mind/body problem might someday be solved. A candid examination will, I think, reveal that until the advent of molecular biology, there were really no explanations at all in developmental biology. This subdiscipline consisted in a set of ceteris paribus generalizations about the sequence of events in embryogenesis in a few experimentally tractable model systems, and about the observed consequences of their perturbation. At most, explanations in developmental biology appealed to dispositions with little more empirical content than the stages the dispositions were disposed to produce. Once it became clear that these generalizations reported events resulting from macromolecular processes, a series of speculative mechanisms were postulated whose empirical content differed from the dispositions previously postulated only in their explicit commitment to the existence of some molecular occurrent properties or other to realize these dispositions. It was only when the molecular details came to be given that something approaching satisfactory explanation began to be available in developmental biology. 
The typical developmental generalization traces out exactly 66 stages in the
normal development of the Xenopus laevis (a frog species) from fertilization through cleavage, blastula, and gastrula, to neuralization and organogenesis, and finally metamorphosis from the tadpole to the adult. The generalization that describes the stages in development of the normal X. laevis was the result of decades of painstaking microscopic observation, as were the remarkable generalizations about frog embryos and sea urchin development that resulted from the work of Wilhelm Roux and Hans Driesch at the end of the nineteenth century. The former discovered that destruction of one frog embryo cell at the two-cell stage produced normal development of one-half the embryo. The latter discovered that destruction of one cell of a two-cell sea urchin embryo resulted in the development of a small but complete and otherwise normal embryo. Roux's explanation for the frog result is that development is mosaic: the fate of a cell is determined by the immediately prior state of the cell. But this contradicts Driesch's explanation that each cell embodies a complete recipe for regulation of development. Subsequent research reconciled frog and sea urchin results by discovering other anatomical generalizations. But these generalizations remained resolutely descriptive, while the hypothesized explanations lacked much content beyond the anatomical generalizations they explained. Thus, in the late 1940s Nieuwkoop eventually located a signaling center in the frog embryo (named for the discoverer), which explains Roux's result consistently with Driesch's insight about regulation: the first cell division usually divides the egg through the point of sperm entry and also the Nieuwkoop center. The cells with the Nieuwkoop center develop into the dorsal half, while those at the sperm entry point develop into the ventral half of the embryo. In Roux's experiment, the cell killed was at the ventral side, but it remained attached to the cells with the Nieuwkoop center. 
These living cells then developed or "dorsalized" normally into one-half the embryo. Had the killed cell been severed from the Nieuwkoop center, Driesch's result—a small but whole embryo—would have emerged. Remove the Nieuwkoop center or graft it elsewhere, and other irregularities of nondevelopment or double development occur. A generation before Nieuwkoop's work, Spemann and Mangold showed that a partial second embryo would result from the grafting of a small region from the dorsal lip of the blastopore of one species of newt into a randomly chosen site on the embryo of another species. Spemann and Mangold labeled this region the "organizer" and hypothesized that it directs the "induction" of one tissue's development by another. But the labels can provide no more explanation for the observed generalizations of development in frogs or sea urchins than can the "dormitive virtue" explain why opium makes people sleepy. Consider the question: why does the dorsal lip of the blastopore produce a partial second embryo when grafted? Answer: because it is the "organizer"; that is, in normal development it has the power to induce the surrounding tissue to develop into the normal embryo, and thus will do so when grafted onto any region in another embryo. And what is induction? It is what organizers do. Like the "dormitive virtue" explanation, the appeal to organizers and induction is not completely vacuous; it labels a capacity, and therefore presupposes that there are manifest properties of embryos that realize these capacities. The explanatory work consists in identifying the mechanism whereby these manifest properties discharge the capacities. There may be other (contained) capacities that intervene between the occurrent properties of embryos and their developmental capacities. But for an explanation to have sufficient empirical content, these contained capacities must ultimately be cashed in for occurrent or manifest properties. By "sufficient empirical content," I mean enough specific observable information so that we can employ components of the explanandum to decide when development will and will not proceed normally before it does so, that is, predictive information. It is in this sense that the generalizations of nonmolecular developmental biology lack much empirical content. The notion that these "explanations" of developmental biology are a natural stopping place or that they are adequate, appropriate, or correct as answers to questions biologists pose to themselves and one another seems without merit. To hold that the dorsal lip of the frog blastopore is an organizer because it or structures from which it is descended were selected for this function is true enough. But it provides no insight into how the dorsal lip does this, why it makes normal embryological development necessary. 
Nothing better shows that concepts like "organizer" or "induction" are almost completely nonexplanatory, at most setting the agenda for explanation, than the concept of "positional information." The concept, first introduced by Wolpert in 1969, captures the notion that the cell knows what developmental pathway it should follow in light of its position relative to other cells in the embryo. This information about position is expressed in a coordinate system that cells can recognize. The phenomenon Wolpert's notion of positional information is meant to capture is in some respects the very opposite of the phenomenon of induction—in transplanted material, "induced" cells are made to develop in ways different from surrounding tissue. That cells "know their location with respect to other cells"—that is, have positional information—is shown by the fact that exchanging the positions of cells which are normally destined to become, say, the front of a chick's wing, with cells that normally become its back, results in these cells developing in accordance with their new
positions, as though they recognized their new locations and as though this information, not some internally determined fate, drives their development. Now, what is evident about the employment of the range of intentional metaphors—recognize, know, information—is that cells don’t have the cognitive equipment required literally to know or recognize, and insofar as “information” is treated as the content of mental states, they can neither acquire nor employ information. So, what is the point of such description? It is, pretty clearly, to set the standard for an explanation, to tell us what would count as an explanation of development: the explanation will have roughly to show how something that doesn’t have the cognitive capacities required to know and act on positional information can act in a way so similar to a cognitive agent who could have such information and act on it! This is, in fact, the entire methodological role of the natural, inevitable, almost unavoidable recourse to intentional language in molecular biology. It is not employed out of some misguided anthropomorphism, nor as a shorthand for an unintentional description of purely chemical and physical processes. Rather, it expresses the explanatory obligations that the reductionist research program in molecular biology imposes on itself: to show how nonintentional systems can, by dint of purely physical and chemical means, engage in performances which intentional agents apparently accomplish by intentional means. We will return to this point in chapter 3. For the moment, it suffices to see that the premolecular invocation of notions such as “organizer,” “induction,” or, for that matter, “knowledge of positional information” is at best minimally explanatory, and in fact is a rhetorically powerful means of identifying the explanatory challenge which developmental biology faces. What could possibly discharge the explanatory obligation these explanatory proxies incur? 
Where should we seek the manifest properties these dispositional generalizations require? The obvious answer is in the molecular biology of the cell. Indeed, once he introduced the notion of “positional information,” Wolpert (1969) almost immediately coined the term diffusible morphogen to indicate a chemical whose differential concentration across a region of space is a signal to cells to take on different shapes. The notion of a diffusible morphogen has a bit more structural content than the concepts of induction and organization; after all, if it is diffusible, a morphogen must have spatial existence and be in a gaseous or liquid state of matter. But its “morphogenic” role is as much a placeholder for some physical or chemical mechanism as the term organizer. Wolpert’s introduction of the notion of a diffusible morphogen was tantamount to the recognition that only a macromolecular mechanism could explain embryological development, and that developmental biology should set out to find it.
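Wolpert's proposal can be given a minimal numerical caricature. The sketch below (every name and rate is hypothetical, not drawn from the text) relaxes a one-dimensional source-diffusion-decay system to a graded steady profile; the only point is that a localized source plus diffusion yields the position-encoding concentration gradient the concept of a diffusible morphogen requires.

```python
# Minimal 1-D morphogen sketch (illustrative only): a substance is held at
# fixed concentration at the anterior end, diffuses along a row of
# positions, and decays everywhere. The steady result is a graded profile
# that encodes position in concentration.

def morphogen_gradient(n=30, D=0.2, decay=0.05, steps=4000):
    c = [0.0] * n
    c[0] = 1.0  # anterior source
    for _ in range(steps):
        nxt = c[:]
        for i in range(1, n - 1):
            # explicit finite-difference step: diffusion in, decay out
            nxt[i] = c[i] + D * (c[i - 1] - 2 * c[i] + c[i + 1]) - decay * c[i]
        nxt[0] = 1.0          # source held fixed
        nxt[-1] = nxt[-2]     # no-flux posterior boundary
        c = nxt
    return c

gradient = morphogen_gradient()
# concentration falls off monotonically from the anterior source
```

With these (invented) rates the profile decays smoothly toward the posterior, so each position sees a distinct concentration — the raw material for the "coordinate system" metaphor.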
programming the drosophila embryo

Let's consider an actual explanation of embryological development. In particular, let's focus on Drosophila embryological development, as this model system has been hitherto most extensively studied by molecular biologists. The explanation whose details I will trace out here is remarkably at variance with a variety of antireductionist theses in the philosophy of biology. Once we see how the explanation proceeds, we will turn to the antireductionist theses it undermines. Developmental biologists' observations of Drosophila embryogenesis enabled them to conclude that, like other such processes, it is constituted by a stable sequence of unambiguously countable steps. On fertilization, the single nucleus divides quickly into nine copies; these nine nuclei move toward the fertilized egg's surface, where they divide five more times. Only then are they each encapsulated separately by cytoplasm to form cells. At cell cycle 13, the cellular blastoderm then divides several more times before gastrulation, and then its germ band extends. Of course, the regularity of this general description confers upon it no explanatory force. Here molecular developmental biology enters the story in a way that proved worthy of the Nobel Prize. As Nüsslein-Volhard has written, the number of genes specifically involved in the establishment of positional information in the egg is quite small. About 30 genes have been identified so far, and the total number is not likely to be much greater than this. Second the two body axes are established independently, as mutations either effect the anterior-posterior pattern, or the dorsal-ventral pattern, but never both. Third, the number of embryonic phenotypes observed is much smaller than the number of genes. (Nüsslein-Volhard 1992, p. 203) By the end of the last century, Nüsslein-Volhard and other molecular biologists had learned the exact role of most of these genes in the early stages of Drosophila development. 
In retrospect, we should think of their research strategy as that of reverse-engineering a piece of hardware to extract the software that it implements. More than a decade later, their work continues to be vindicated by such surprising discoveries as the regulatory role of microRNAs. And it was during this decade or so that it became apparent to the molecular biologist that in building the embryo, the genes operated in accordance with Boolean switching rules in a small number of relatively simple linear programs. It bears emphasis that I do not mean this claim to be metaphorical. As I shall illustrate and then
argue, the genes literally program the construction of the Drosophila embryo in the way the software in a robot programs the welding of the chassis of an automobile.1 A Boolean switching rule is one with which logicians and computer programmers are familiar. An example of such a rule is one of the rules of a two-valued logic that determines how the truth-value of a complex statement is fixed by the truth-values of its simpler component statements. For example, "P and Q" is true just in case P is true and Q is true, and is false otherwise; "P or Q" is true just in case at least one of P and Q is true, and false otherwise. Computer programs are all composed exclusively of steps that reflect these simple rules. The development of the Drosophila embryo realizes, exemplifies, instantiates a sequence of stages completely describable by a set of Boolean rules that reveal how each stage is an algorithmic output function of previous input stages in accordance with these rules. In essence, therefore, Drosophila development simply follows a program that at each molecular process delivers an output which is operated upon by the next process in accordance with Boolean (extensional) rules that can also be implemented by any of a number of different desk- or laptop computers. Each state of each cell in the embryo is fixed by the concentration of gene product and by the chromatin configuration of the genes at the immediately prior state. The chromatin configuration of a gene—roughly whether that nucleotide sequence is bound or unbound by a chromatin protein molecule—determines whether the gene can produce mRNA for a protein product. Chromatin bound to a stretch of DNA makes mRNA copying of that sequence impossible. The initial chromatin configuration of a newly fertilized diploid cell itself is determined by the presence of (maternal or embryo) gene products during the S phase of meiosis. 
Subsequently, each gene can be switched on or off once in each cell cycle by alteration of its chromatin state. (This "subprogram" regulating the chromatin states has begun to be reverse-engineered as well. It appears that states of the chromatin are regulated by genes which express relatively small microRNAs that are not translated into proteins but act directly, along with proteins, to regulate chromatin states. See Nelson et al., 2003.)

1. There are philosophers and biologists eager to controvert this claim on the grounds that the genes do not carry information, and this is a sine qua non for literal programming. See, for example, Griffiths 2001; Sarkar 2005. I take this matter up in detail in the next chapter, and argue that while they are correct to say that the genes do not carry information, this is no reason to deny that they program development. For, in the relevant sense, computer software does not carry information either.

In general, the state of a gene, call it B—whether switched on and producing
protein or switched off—depends on two inputs: the state of its activator or repressor gene, say, A being on or off, and the chromatin concentration surrounding B in the previous cell cycle. A 4-valued "truth table" makes B's current state a function of A's state and B's local chromatin concentration at the immediately prior cell cycle. Gene B can be in one of four states: 0—total repression of its mRNA transcription and protein concentration; 1—low expression of B's product; 2—intermediate expression; or 3—high expression of its mRNA and consequent protein product. A can have one of eight effects on B: it can be either a very weak, weak, intermediate, or strong activator of B, or a very weak, weak, intermediate, or strong repressor of B. Given A's relation to B as repressor or activator, there are four Boolean rules that produce an output value for B's protein expression—3 strong, 2 intermediate, 1 weak, or 0 very weak—as a function of A's state and B's prior chromatin state, which also comes in four values: 0 very low, 1 low, 2 intermediate, 3 high. The Boolean rules for the output values of B as a function of its initial state and the values of A are given in the following table, which lists the basic programmed components of the larger structured programs to be introduced below. Thus, if A is a strong activator of B, then when B is in state 0 in cell cycle n, B's expression level in cell cycle n + 1 will remain unchanged from its value in n. If B's value is anything above 0 at n, then a strong activator will change its value to 3 at n + 1. 
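These rules can be rendered directly as a lookup; a minimal Python sketch (values transcribed from table 1, adapted from Bodnar 1997; None marks "no effect," meaning B retains its input level):

```python
# The 4-valued activation/repression rules as a lookup table.
# RULES[effect][b_input] gives B's output level; None means "no effect",
# i.e. gene B keeps its input expression level.

RULES = {
    "very weak activator":    [None, None, None, 1],
    "weak activator":         [None, None, 1, 2],
    "intermediate activator": [None, 1, 2, 3],
    "strong activator":       [None, 3, 3, 3],
    "very weak repressor":    [None, None, 2, 1],
    "weak repressor":         [None, 2, 1, 0],
    "intermediate repressor": [None, 1, 0, 0],
    "strong repressor":       [None, 0, 0, 0],
}

def next_level(effect_of_A, b_input):
    """B's output expression level (0-3), given A's effect and B's input level."""
    out = RULES[effect_of_A][b_input]
    return b_input if out is None else out
```

On these rules a strong activator leaves a fully repressed gene alone (input 0 stays 0) but drives any expressed gene to maximum, exactly as described in the text.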
table 1 Boolean 4-valued table for activation/repression of gene B by gene A. Entries give B's output expression level for each of B's input expression values (or initial chromatin binding); "no effect" leaves B's input level unchanged. Source: Adapted from Bodnar 1997.

                                 B's input value:  0          1          2          3
If A is a promoter gene
  A very weakly activates B                        no effect  no effect  no effect  1
  A weakly activates B                             no effect  no effect  1          2
  A intermediately activates B                     no effect  1          2          3
  A strongly activates B                           no effect  3          3          3
If A is an inhibitor gene
  A very weakly represses B                        no effect  no effect  2          1
  A weakly represses B                             no effect  2          1          0
  A intermediately represses B                     no effect  1          0          0
  A strongly represses B                           no effect  0          0          0

If A weakly represses B, then when B's nth cycle input level
is 2, its nth cycle output value will be 1; if its input value is 3, its output value is reduced to 0. With activation and repression governed by Boolean rules, each cell cycle of each cell (through its stages G1, S, G2, M) in the developing embryo instantiates the same program, in particular a "DO loop":

• DO for each cell cycle:
• Display current value for all gene products.
• (G1, transactivator synthesis) Calculate protein concentrations as determined by chromatin state for each cell.
• (S/G2, DNA synthesis) Calculate new chromatin state for each gene.
• (M, cell division; daughter cells have the protein and chromatin states of the parent at time of division) Determine chromatin and protein values of new nuclear genes, divide, and migrate.
• Repeat for each new cell.

This basic program is instantiated repeatedly in a larger program over thirteen nuclear divisions that begins with initial input states determined by a distribution of Bicoid, Torso, and Tailless proteins deposited by the maternal-effect genes bicoid, torso, and tailless. (By convention in the molecular biological literature, genes are designated in italics, and the proteins they express are named by proper nouns in bold. Genes are often named after the defects caused by their excision or mutation: for example, tailless is so called because it was first individuated by a knockout that resulted in no tail on the adult animal.) These proteins are inputs to the gap genes—hunchback, Kruppel, and knirps—and to the terminal genes—giant and Kruppel again, as well as tailless and torso (switched on a second time). These genes also initially program the homeotic genes (deformed, sex combs reduced, antennapedia, ultrabithorax, abdominal-A, abdominal-B). The terminal genes then activate the pair-rule I genes—tramtrack, hairy, even-skipped, Fushi tarazu—which switch on the pair-rule II genes—hairy, runt, even-skipped, Fushi tarazu, paired—and these then program the segment polarity genes—engrailed and wingless. 
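The control structure of the DO loop (though not, of course, the real update rules) can be sketched in a few lines; the two update callables below are hypothetical stand-ins for the Boolean rule table applied gene by gene, and the cell representation is invented for illustration.

```python
# Skeletal rendering of the cell-cycle "DO loop": per cycle, each cell
# computes protein levels (G1), recomputes chromatin states (S/G2), then
# divides, with both daughters inheriting the parent's states (M).
# Structural illustration only, not Bodnar's actual model.

def run_embryo(cells, cycles, g1_update, s_g2_update):
    """cells: list of {'proteins': dict, 'chromatin': dict} states."""
    for _ in range(cycles):
        next_gen = []
        for cell in cells:
            # G1: transactivator synthesis from the current chromatin state
            proteins = g1_update(cell)
            # S/G2: new chromatin state for each gene
            chromatin = s_g2_update(cell, proteins)
            # M: division; each daughter inherits the parent's states
            for _daughter in range(2):
                next_gen.append({"proteins": dict(proteins),
                                 "chromatin": dict(chromatin)})
        cells = next_gen
    return cells

# one fertilized nucleus carried through thirteen divisions
final = run_embryo([{"proteins": {}, "chromatin": {}}], 13,
                   lambda cell: {}, lambda cell, proteins: {})
# len(final) == 2**13 == 8192
```

Even with trivial update functions, the loop reproduces the exponential doubling over the thirteen nuclear divisions; the biology lives entirely in what the two callables would compute.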
The pair-rule II genes also refine the initial pattern of expression of the homeotic genes initially activated by the terminal genes. At each box in the flowchart (see figs. 1 and 2), the component genes in the cell employ the basic program to set the values of the genes they activate or repress, and to set the values of their own successor genes, which move to daughter cells during the cell cycles. The full program is structured in that it contains repeated subprograms (built from the basic one): in particular, it contains two copies of a subprogram for solving the positional information problem which Wolpert named the “French flag” problem; four “stripe-doubling” problems; a line-drawing problem; and a final selector genetic network problem,
figure 1 The French flag program for the blastoderm (a slightly different program is implemented in the growth zone). Arrows indicate activation, turnstiles indicate repression, and double turnstile indicates strong repression. (After Bodnar 1997)
all solved by three distinct but similar programs whose components repeatedly invoke the Boolean table above. It will be worthwhile to outline two of these important subprograms, for it is here that the molecular explanations for development finally begin to discharge the promissory notes of "positional information" and other such metaphoric placeholders. The "French flag" was first broached by Wolpert (1969) as a positional information problem that development had to solve: given a row of apparently undifferentiated cells, devise a program which will differentiate them into equal bands of blue, white, and red—that is, the French tricolor—or, for that matter, equal bands (for instance, two parasegments wide) of any sort of three distinct developmental outcomes along the anterior/posterior axis. The flowchart for French flag patterning in the blastoderm involves four genes, A through D, in the pattern of activation and repression shown in figure 1, following the basic Boolean rules given above. We can walk through the way this program builds a three-unit structure from an undifferentiated input step by step. We begin with the unfertilized egg, in which the input protein for gene D is distributed homogeneously, while the input protein for A is distributed in a gradient that divides the egg into three areas of different concentration—3 (high), 2 (medium), and 1 (low)—with A absent from the posteriormost region. Thus, the egg can be digitally distinguished into four sections, each with a specified amount of protein expressed by each of A, B, C, and D: Anterior [3003] [2003] [1003] [0003] Posterior, where [3003] indicates that the anteriormost region has a high concentration of A's and of D's product, and no B- or C-expressed protein. Since A's and D's proteins are present in high concentration, they operate to
block chromatin shutoff, and regulate B and C in accordance with the flowchart in figure 1 above. A switches on B and represses D in the first three compartments, and the first cell division results in eight regions, determined by the Boolean rules above: Anterior [3300] [3300] [2302] [2303] [1303] [1303] [0003] [0003] Posterior. The interaction of the four levels of expression of the four genes in accordance with the Boolean rules produces sixteen cells in the next cycle, Anterior {4 copies of [3300], 4 copies of [2313], 4 copies of [1313], 4 copies of [0003]} Posterior. At the fourth iteration, 32 cells are produced with the required tricolor pattern: Anterior {8 copies of [3300], 16 copies of [(2 or 1)010], 8 copies of [0003]} Posterior. The anterior eight cells all have a distinctive expression of B, the middle 16 a distinctive concentration of C's product, and in the posterior eight cells D is expressed strongly. Further cell divisions in accordance with the Boolean rules will maintain the distinctive levels of B, C, and D throughout development. Of course, an amended program can produce additional stripes: add a gene B′ activated by B, repressed by A and C in a forward feed. At the outset of Drosophila development, A, B, C, and D are instantiated by bicoid, hunchback, Kruppel, and knirps. In particular, bicoid's three levels of expression (high, low, null) act (A in the diagram above) on hunchback (B), Kruppel (C), and knirps (D) in accordance with the French flag program to set up the initial pattern of three differentiated segments from front to rear. Later in development, when the embryo has reached 72 cells, this same program structures the dorsal/ventral pattern of the embryo. Here A is dorsal, B is twist, C is snail, and D is both decapentaplegic and zerknüllt. 
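Abstracting from the A-through-D network, the end state a French flag program must reach can be caricatured as threshold-reading of a graded input. The sketch below is illustrative only: the thresholds, gradient, and band labels are invented, and in the real blastoderm the job is done by the Boolean gene network rather than by literal thresholding.

```python
# Caricature of the French flag outcome: each cell reads a graded input
# against two hypothetical thresholds and adopts one of three discrete
# fates, yielding equal tricolor bands.

def french_flag(concentrations, hi=2.0, lo=1.0):
    fates = []
    for c in concentrations:
        if c >= hi:
            fates.append("blue")    # e.g. distinctive B expression
        elif c >= lo:
            fates.append("white")   # e.g. distinctive C expression
        else:
            fates.append("red")     # e.g. distinctive D expression
    return fates

# a linear anterior-to-posterior gradient over a row of 30 cells
gradient = [3.0 - (i + 0.5) / 10 for i in range(30)]
pattern = french_flag(gradient)
# pattern: 10 "blue", then 10 "white", then 10 "red" cells
```

The contrast with the walkthrough above is instructive: the gene network achieves this digital three-band outcome without any cell ever measuring a continuous value against a threshold, purely by iterated Boolean activation and repression.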
The stripe-doubling program given below is repeated four times: relatively early on, at the same time as bicoid produces the French flag patterns, high concentrations of terminal gene products at each end of the egg form two stripes, between which Krüppel is expressed, followed by the double striping of giant between each terminal stripe and the middle Krüppel stripe. Then later, the gap genes program the pair-rule stripes by three iterations of the stripe-doubling program. Once sections two parasegments in length have been laid down in the Drosophila embryo, a program for “line-drawing,” which delineates the segments of the embryo, comes into play.
Reductionism and Developmental Molecular Biology
figure 2 The stripe-doubling program. (After Bodnar 1997)
The further details of these programs carry the development of the Drosophila embryo beyond the syncytial blastoderm to the cellular blastoderm. A general flowchart for the programming of the Drosophila embryo up to the formation of the cellular blastoderm (adapted from Bodnar 1997) is given below (fig. 3). The initial gradient is set by the maternal-effect genes, bicoid, torso, and tailless. These provide inputs to the gap genes, maternal hunchback, Krüppel, knirps, and zygotic hunchback, and the terminal genes, torso, tailless, giant, and Krüppel, whose interaction with one another sets up a pattern of seven stripes, each two parasegments wide. The program implements the French flag subprogram described above. The gap and terminal genes then program the first set of pair-rule gene programs involving tramtrack, hairy, even-skipped, and Fushi tarazu that produce two alternating broad seven-stripe patterns, each one parasegment wide. Here the subprogram employed is the double-striping program diagrammed above, in which Krüppel acts as both a gap gene and a terminal gene. The interaction of the French flag program and the double-striping program results in the two-segment-wide pattern the gap genes produce. Feedback among the pair-rule II genes sharply refines the pattern into demarcated stripes and sets the polarity within the stripes by switching on the segment polarity genes, engrailed and wingless. The gap and terminal genes, meanwhile, also activate the homeotic genes, deformed, sex combs reduced, antennapedia, ultrabithorax, abdominal-A, and abdominal-B, in the broad stripes that the gap
figure 3 The Drosophila developmental program to cell cycle 14. (Adapted from Bodnar 1997)
and terminal genes establish. The pair-rule II genes, hairy, runt, even-skipped, Fushi tarazu, and paired, refine these stripes in the cellular blastoderm at the fourteenth cycle of nuclear division. There is a powerful independent confirmation that this hypothetical program is implemented by the process of embryological development. This evidence provides two related reasons to conclude that the program explains why the development of the Drosophila body plan goes through precisely the stages that have long been known through microscopic observation, but not previously provided with any explanation whatever. Both involve modifying the program in biologically significant ways. First, in many cases, modifying the actual program by changing the initial chromatin value for a particular gene in one or more cells at the right point in their cell cycle produces developmental variations that are already known to be associated with mutations at that very gene. Of 84 known mutations among the 28 genes that operate in the Drosophila blastoderm developmental program, 66 operate to deflect embryological development in a way that knowing the program enables us to predict as experimental outcomes (Bodnar 1997, p. 416). Second, and perhaps even more remarkably, the program can be modified in ways that shed light on how Drosophila, a so-called long-germ-band insect, can evolve from an ancestor it shares with short-germ-band organisms such as the red flour beetle, Tribolium castaneum. In this way, the program enables us to make visible progress in satisfying the demand that evolutionary biology’s ultimate how-possible explanations be converted to more causally complete why-necessary explanations. A long-germ-band insect is one whose blastoderm stage corresponds to the whole future embryo: all its segments form at the same time after gastrulation. 
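The stripe-doubling subprogram described earlier (figure 2) can be sketched in the same spirit. The midpoint-insertion rule below is an assumption standing in for the published diagram: each round, a newly activated gene forms a stripe in every gap between existing stripes.

```python
# Hedged sketch of the stripe-doubling idea: a newly activated gene forms a
# stripe midway between every pair of adjacent existing stripes. The
# midpoint rule is illustrative, standing in for figure 2's details.

def double_stripes(stripes, new_gene):
    """stripes: list of (position, gene) pairs ordered along the
    anterior/posterior axis; returns the pattern with the new gene's
    stripes inserted in every gap."""
    out = [stripes[0]]
    for left, right in zip(stripes, stripes[1:]):
        midpoint = (left[0] + right[0]) / 2
        out.append((midpoint, new_gene))   # new stripe in the gap
        out.append(right)
    return out

# Terminal gene products form a stripe at each end of the egg; Kruppel is
# then expressed between them, and giant doubles the stripes between each
# terminal stripe and the middle Kruppel stripe.
pattern = [(0.0, "terminal"), (1.0, "terminal")]
pattern = double_stripes(pattern, "Kruppel")   # 2 stripes -> 3
pattern = double_stripes(pattern, "giant")     # 3 stripes -> 5
print([gene for _, gene in pattern])
# ['terminal', 'giant', 'Kruppel', 'giant', 'terminal']
```

Iterating the same routine is what turns the broad gap-gene bands into the finer pair-rule stripes.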
Other insects’ development shows a so-called short-germ-band pattern of development in which the blastoderm is the source only of the anterior segments of the embryo; posterior segments are formed after blastoderm formation and gastrulation (Wolpert et al. 1998, p. 158). The short-germ-band pattern is an evolutionarily ancestral pattern from which the long germ band evolved. One possible mechanism for how the long-germ-band pattern evolved from the short-germ-band pattern involves a mutation in the mechanism that regulates sequential growth of the segments that characterizes the short germ band. Such a mutation would permit simultaneous segment determination. In particular, a mutation that allows maternal nurse cells to inject the torso-like gene’s product from both ends of the cell instead of just the anterior end would allow the posterior system to preempt the anterior system’s programming of the abdomen, and to define all the segments at the same time instead of sequentially. Such a mutation would result in posterior development simultaneous with the development of the anterior system, as in the long-germ-band insects.
It is not difficult to make slight changes in the structured subprograms of the 28-gene program for Drosophila embryogenesis that will result in the embryogenesis of short-germ-band insects such as the contemporary Tribolium beetle. The changes begin with the deletion of the posterior Torso and Tailless gradients, leaving only an anterior gradient and a uniform distribution of Bicoid protein. They involve mechanisms known to be present in the Drosophila embryo along with the suppression of the hairy gene’s enhancers and a slight change in the French flag program operating at cycle 9, to permit continued division of posterior cells in the short-germ-band embryo. One interesting difference between Drosophila embryogenesis and Tribolium embryogenesis is that the program of the gap genes is regulated by spatial gene-product gradients in the former, but is regulated by temporal gradients of the same gene products in the latter. Instead of being switched on in a single spatial pattern all at the same time, as in Drosophila, the gap genes are switched on in that pattern sequentially in Tribolium. The relative simplicity of the changes in the Drosophila embryogenesis program to produce the Tribolium embryogenesis program suggests a natural line of inquiry about the genetic program of the common ancestor of fruit flies and beetles, and some hypotheses about how each of the lineages which eventuated in these two species did so. In effect, we have here the beginnings of a research program that can convert the adaptationalists’ “how-possible” stories of the evolution of beetles and fruit flies to the why-necessary explanations that the reductionist research program seeks. Of course, it may be impossible to fill out all the details of this evolutionary history, but molecular evolutionary biology is the only way to do so.
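The spatial-versus-temporal contrast just described can be made concrete in a small sketch: the same threshold rule yields the same pattern whether the input gradient is read across space all at once (long germ band) or across time sequentially (short germ band). The levels and the threshold are illustrative values, not measured concentrations.

```python
# Sketch of the spatial-versus-temporal gradient contrast: one threshold
# rule ("switch the gap gene on where/when the input exceeds 2"), read two
# ways. All numbers are illustrative.

def gap_gene_on(input_level, threshold=2):
    return input_level >= threshold

# Long germ band (Drosophila): a spatial gradient is read by all nuclei
# at once, so the whole pattern appears in a single step.
spatial_gradient = [3, 3, 2, 1, 0, 0]        # anterior -> posterior
pattern_at_once = [gap_gene_on(level) for level in spatial_gradient]

# Short germ band (Tribolium): each position sees the same input levels
# as a gradient in time, so the identical pattern is laid down sequentially.
temporal_gradient = [3, 3, 2, 1, 0, 0]       # early -> late
pattern_over_time = []
for level in temporal_gradient:              # one position per cycle
    pattern_over_time.append(gap_gene_on(level))

print(pattern_at_once == pattern_over_time)  # True: same pattern, two readings
```

The point of the sketch is only that a small change in how the gradient is delivered, not in the rules themselves, converts one developmental mode into the other.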
The program for Drosophila embryological development reported here, some of whose components have been detailed, is of course only a “snapshot” or “freeze frame” of how experimental data have been and are being brought together to provide an explanation for what nonmolecular developmental biology can describe yet not explain. As with other computers, the Drosophila embryo implements not only this program but many others at the same time. Indeed, some of these are subprograms that the embryo must run in order to run the higher-level program of development. It appears that in the Drosophila embryo, as in other eukaryotic systems, a small number of genes regulate structural gene expression, but do so by transcribing short microRNAs which regulate directly and without further translation into protein, thereby digesting “unwanted” messenger RNA, for example (see Mattick 2003). Which directed program is (erotetically) explanatory depends on the level of organization at which the inquiry is made. Computers familiar to us employ very high-level programming languages, lower-level languages to compile calculations, and assembly language programs at even lower levels, all the way down to the simplest Boolean languages at the level of individual gates. Mutatis mutandis for the Drosophila embryo. There are more details to uncover in the Drosophila embryological program at stages of the calculation not yet identified, and revisions to be made in the flowchart as further evidence about newly discovered genes and their regulation emerges. Moreover, there are levels of implementation below that of the gene and the protein: at the level of DNA, the many different kinds of processed and unprocessed RNAs, amino acids and polypeptides, pre- and pro-proteins. These lower levels of programming are to the higher-level program what the assembly language program in our computers is to C++, Pascal, Java, Windows, or whatever highest-level program is employed in our laptops.
five challenges for the genetic program of development
But the high-level program sketched here and the process it reflects raise many serious questions for both proponents of reductionism and its opponents. These questions almost leap out of the pages in which J. W. Bodnar, the molecular biologist whose reverse engineering from the data to the program is expounded here, summarizes the story of Drosophila development up to the end of the last century:
Experimental results pointed towards a theoretical model which accounts for a cascade of transcriptional activation with multiple temporal levels of genes each of which is programmed by threshold mechanisms using a small subset of the previous genes. Gene switching in genetic networks is coupled to intracellular molecular events through the cell cycle. A cell is an active chemical system that controls intracellular concentrations of regulatory molecules. Second messenger and transactivator concentration or activity is modulated directly in various compartments within a single cell by mechanisms such as transcription, nuclear transport, or phosphorylation and can vary widely within a single cell throughout a single cell cycle. . . . Therefore, a concentration gradient is “read” independently by the individual cells or nuclei—and ultimately by the chromatin structure of the individual cells—as they progress through their individual cell cycles. . . . Each individual cell senses a gradient to switch chromatin, protein, and cell state both in space and time. Consequently, experiments indicate that
an integrated model must account for the gene switching events in each individual cell during each cell cycle. Gene switching logic in genetic networks is stored as chromatin states, and cells contain a memory which allows sequential genetic networks to be coupled together into developmental programs. Experiments suggest that each cell “remembers” its state in the combination of equilibrium chromatin configurations and transactivator concentrations as it progresses through the cell cycle. All the rules for activation of any individual gene are contained in all cells but only recalled as the input gene products become available during a developmental program. Patterns take a fixed number of cell cycles to develop—fourteen from the egg to the Drosophila blastoderm and eleven from the egg to the mature C. elegans. Therefore experiments point toward models in which genes are switched between defined states at the cellular level—based on a memory stored in the nuclear structure. (Bodnar 1997, p. 419)
Bodnar concludes,
In the midst of writing the computer programs for a developmental program it became apparent that the double-entendre of “programming” reflects a fundamental characteristic of information systems, and that biological information systems are in essence biological computers. Much is known about the “hardware” of biological computers, but currently little is known about the “software.” As I wrote the computer program in Pascal, the logic and syntax of the Pascal programming closely paralleled the logic and syntax of the developmental program . . . : DNA domains correspond directly to string variables, gene products to procedures, cell cycles to DO-loops, patterns of nuclei to arrays, and growth of the organism to running the compiled computer program. (Ibid., p. 421)
Much more, of course, was learned in the half decade between the time Bodnar wrote out the high-level program of Drosophila development and the time the present book was written.
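Bodnar's correspondences can be rendered in a few lines of code, here in Python rather than his Pascal, with every name and rule serving only as a stand-in: a DNA domain as a string variable, a gene product as a procedure, cell cycles as loops, and a pattern of nuclei as an array.

```python
# Illustrative rendering of Bodnar's programming correspondences, in Python
# rather than his Pascal. All names and rules here are stand-ins.

dna_domain = "ABCD"                      # a DNA domain as a string variable

def gene_product(state):                 # a gene product as a procedure:
    a, b, c, d = state                   # where A's product is present it
    return (a, a, c, 0) if a else state  # switches on B and represses D

nuclei = [(3, 0, 0, 3), (0, 0, 0, 3)]    # a pattern of nuclei as an array

for cycle in range(3):                   # cell cycles as DO-loops: each pass
    nuclei = [gene_product(n)            # divides every nucleus and re-runs
              for n in nuclei            # the procedure in both daughters
              for _ in range(2)]

print(len(nuclei))                       # "growth of the organism" is just
                                         # running the program: 2 -> 16 nuclei
```

Nothing in the sketch is biology; the point is only that the syntax of the two "programs" lines up the way Bodnar says it does.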
In particular, subprograms are being elucidated whereby two dozen or so microRNAs work to implement parts of the higher-level program. This further reverse-engineering of software has only strengthened the conclusion that the fundamental nature of development is the implementation of a program (see Mattick 2003). So, these passages raise several questions, ones that have vexed philosophers of biology and biologists attempting to understand the nature of the gene, the course of embryological development, and how we should theorize about it. Some of them are of direct relevance to the explanatory reductionism that such
a macromolecular developmental program seems to vindicate. Among these questions are the following: 1. Even supposing that the action of the 28 genes identified up to the end of the twentieth century in the development of the Drosophila embryo is properly understood as realizing a computer program, why suppose that the rest of the details of Drosophila development and behavior are equally intelligible from a purely macromolecular perspective? Why suppose that development among vertebrates should be anything like Drosophila embryogenesis? Is it reasonable to extrapolate from one case to the claim that a macromolecular program will explain development everywhere and always? 2. Why suppose that, even in this case, the whole story is macromolecular? Even reading the summary and conclusions quoted, one notes repeated reference to cells, their properties and behavior. If the role of whole cells is indispensable to the program for Drosophila embryogenesis, we will have to be confident that there is an adequate, purely macromolecular explanation of the role of the cell and its cycle before we can conclude that molecular biology alone, and unaided, provides the explanation for development. And if the explanatory role of the whole cell or its nonmolecular parts is both indispensable and irreducible, then reductionism’s explanatory claim must be surrendered. 3. A variant on question 2 expresses doubts of a coalition of biologists and others about the genetic-program explanation of development: why suppose that the genes have any special role in development? There is a vast range of other conditions—physiological and environmental—causally necessary for fertilization and embryogenesis along with the products of the genes. 
According to the “causal democracy thesis” advanced, for example, by developmental systems theorists (such as Griffiths and Gray [1994]), each of these causally necessary factors is on a par with the others; none is even primus inter pares; and, depending on the explanatory interests and practical focus of a biologist, any one of them can take center stage in one or another explanation of development. Accordingly, “genocentrism”—the attribution of a special role in development to the genes—is unwarranted, and along with its eclipse we should reject reductionism as well. 4. The passage extracted is, like so many descriptions of macromolecular processes, replete with what philosophers call “intentional idiom.” That is, a large number of cognitive processes and cognitively controlled actions are attributed to macromolecules that they could not literally undertake: recognition, memory, signaling, reading, “knowledge of position.”
Most prominent of all among the terms used to describe the role of the gene is information, and the most frequently employed descriptive term in the particular passage and the macromolecular processes reported is program. Exponents of genocentrism will ground the special explanatory status of the genes on their roles in an informational program. But besides the fact that macromolecules can’t think, the developmental systems theorists will make common cause with other opponents of genocentrism to deny that the gene bears any special informational role in any biological process. Does this view, if correct, threaten the reductionism here defended, and is it correct? More important, how should we understand the intentional idiom of molecular biology: as metaphor run amok, as implicit redefinition of intentional words from ordinary language into technical terms, or as the expression of an explanatory demand that molecular biology must satisfy—that is, showing how the appearance of intentionality at the cellular level can be explained unintentionally at the molecular level? 5. The genetic program as elaborated here proceeds on the assumption that the notion of the gene is itself quite unproblematic, that there are genes, that they can be distinguished, individuated, counted, and otherwise treated as the relevant units of hereditary transmission and developmental control. But, it has been alleged, the history of genetics in the twentieth century has shown that the notion of the gene is one which, though for a long time productive and fertile in its encouragement of scientific progress, has been overtaken by events. The complexities in heredity and development that molecular biology has uncovered, on this view, make the gene an obsolete idea. 
It is an idea that will be eclipsed in the next century by other notions, perhaps ones which will vindicate antireductionist conclusions to the same extent that the twentieth century’s fixation on the gene reflected reductionistic ones. And when the gene is superseded, so too will be the explanation of development elaborated here. The remainder of this chapter is devoted to answering the first three of these five challenges. The next chapter treats the remaining two questions. The division of labor between these chapters reflects the fact that the first three matters are ones which have given biologists, and among them molecular biologists, pause, while the last two are ones with which philosophers have been equally concerned. Nevertheless, the package of all five constitutes a serious challenge to the claim that it is the genetic program which explains the regularities of development, and only the genetic program which can do so. Vindication of genocentrism in the face of these challenges may not be as strong an argument
for its truth as the success of the empirical program so far described. But it will certainly help!
can the whole story of development be a matter of programming?
First question: why suppose that the whole story of development is just an elaboration of this small part of it? Why suppose that all the action is in the genetic network, or can be described as a program realized in nucleic acids? Why suppose on the basis of the evidence of about three dozen genes in the first fourteen cell cycles of one species, as the opening of this chapter dared to suggest, that the regulation of the somatic cell in the adult of this and all other species is just a matter of the operation of a program? Perhaps the most dramatic evidence the molecular biologist could cite to support such extrapolations begins with some of the genes involved in the program of embryogenesis described above, and which have been shown to be central to development up and down the phylogenetic spectrum: the homeotic selector genes, which as we have seen are programmed in early Drosophila development by the gap, terminal, and pair-rule II genes. There is, in fact, fairly startling evidence that these genes produce the most complex organs following a program that will turn out to be similar to the one which fixes the segments of the fruit fly embryo. Gehring, Halder, and Callaerts (1995) have reported experiments in which a previously identified homeotic gene, eyeless, when activated in somatic cells all over the body of adult Drosophila, results in the growth of complete eyes. These eyes, including cornea, pseudocone, cone cells, and primary, secondary, and tertiary pigment cells, are functional at least to the extent that their photoreceptor cells respond to light. Gehring’s team has induced eyes in the wings, antennae, halteres, and in all six legs, and they were able to do so in 100% of the flies treated under conditions in which the eyeless promoter functions.
Gehring describes eyeless as a “master-control” gene whose activation by itself is necessary and sufficient to trigger a cascade of genes harbored in all the cells (1995, p. 1791), but is normally silent in all but those which give rise to eyes. Presumably, a protein coded by eyeless binds to some set of genes, switching them on and producing a cascade of proteins that ectopically build an eye on the fly’s back, under its wing, on its haltere, or even on the end of one of its antennae. And this set of genes is, of course, to be found in every nucleus in the fruit fly’s body. Gehring estimates that the number of genes required for eye morphogenesis is 2500 (out of approximately 17,000 genes in the Drosophila
genome), and that all are under direct or indirect control of eyeless. Moreover, eyeless appears to directly control later stages of eye morphogenesis. Apparently, the same master-control gene functions repeatedly to switch on later genes crucial to eye development, suggesting that evolution has employed the same developmental switch several times in selecting for eye-developing mechanisms. What is more, Sey, the mouse gene homologous to the fruit fly’s eyeless gene, will produce the same result when inserted into fruit fly somatic cells and switched on. And there is evidence that in the mouse, Sey is a master-control gene as well. Proteins encoded by the homologous genes in the two species share 94% sequence identity in the paired domains. Gehring’s laboratory has identified counterparts to the fruit fly’s eyeless gene, which are implicated in eye development across the whole range of species from planaria to squid to humans. Eyeless and its homologues may be present in all metazoans. These results suggest that one of the most intricate of organs is built by the switching on of complex programs by a relatively small number of the same genes across a wide variety of species. The fact that the mouse’s Sey and Drosophila’s eyeless gene both work to produce ectopic eyes in Drosophila despite the great differences between, say, mammalian eyes and insect eyes suggests that eye development in both species is the result of the implementation of different high-level programs employing a large number of iterations of a relatively small set of the same subprograms in a variety of different orders and with various feedback and feed-forward loops. It is also known that the regulatory genes involved in the development of the Drosophila eye are all relatively close together on the chromosome, and that these genes build the eye without the intervention of specialized cellular structures beyond those required for any developmental process.
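The "master-control" picture lends itself to a simple model: activating one gene activates everything reachable from it in a regulatory network. The miniature network below is purely illustrative; its three downstream targets stand in for the roughly 2,500 genes Gehring estimates eyeless directly or indirectly controls.

```python
# Toy regulatory network for the master-control-gene picture: activating
# eyeless recursively activates everything downstream of it. The network
# is invented for illustration, not Gehring's actual gene set.

regulates = {
    "eyeless": ["so", "eya"],   # hypothetical direct targets
    "so": ["dac"],
    "eya": ["dac"],
    "dac": [],
}

def cascade(network, start):
    """Return the set of genes switched on once `start` is activated."""
    active, frontier = set(), [start]
    while frontier:
        gene = frontier.pop()
        if gene not in active:
            active.add(gene)
            frontier.extend(network[gene])
    return active

print(sorted(cascade(regulates, "eyeless")))
# ['dac', 'eya', 'eyeless', 'so']
```

On this picture, ectopic eyes fall out naturally: the same cascade fires wherever the master switch is thrown, because every nucleus carries the whole network.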
Identifying the other genes in the program that produce the entire eye should in principle be a piece of normal science. As Gehring concludes, “The observation that mammals and insects, which have evolved separately for more than 500 million years, share the same master-control gene for eye morphogenesis indicates that the genetic control mechanisms of development are much more universal than anticipated” (1995, p. 1792). Once eyeless is switched on by regulatory proteins, nothing else beyond the constituent macromolecules is needed, apparently, to program the eye’s development. Of course, a recalcitrant antireductionist could with logical consistency resist forever the conclusion that the whole story of development in all creatures great and small is a matter of genetic programming. As we have seen, biology is a historical science in which the development and behavior of each organism and each set of organisms must be explained separately, and in which, owing to the blindness of selection for differences in structure, there is no law or theory to explain common themes; and even very similar outcomes will often be the
result of different causal pathways, and will thus be different in the physical facts that constitute them. Thus, no matter how much we pile up details vindicating the explanatory strategy of uncovering the genetic program, we leave formally open the existential claim that there exists somewhere a counterexample to this program of research. After a certain point, it is clear that the burden of proof must shift to those who resist the reductionist conclusion. Have we reached that point yet? One reason to think so is that when we explore other programs so far assembled that explain later development in the Drosophila embryo or in other completely different vertebrate systems, such as the limbs of the chick embryo, the details of macromolecular processes are largely the same. Many of the same subprograms implicated in early segmentation of the Drosophila are also involved in chick limb-bud development. That this will be the case is exactly what the explanation of development as a program composed of subroutines iterated in evolution would lead us to expect. In particular, it is well established that in programmed expression the same three genes—engrailed, decapentaplegic, and hedgehog—control such disparate developments as wing and leg emergence in Drosophila and the coloration and patterning of the wing in the buckeye butterfly. Two of these genes, engrailed and decapentaplegic, figure in the earliest stages of Drosophila embryogenesis to produce the segmentation pattern as described above. Homologues of these genes also figure in vertebrate limb development. In the Drosophila wing imaginal disc cells, the posterior compartment first expresses engrailed gene products along with the gene products of the segment polarity gene hedgehog, which divides the wing into front and back (using the same line-drawing program employed in blastoderm segmentation).
Hedgehog protein in turn induces adjacent cells in both the anterior and posterior sections to express decapentaplegic’s product by inhibiting decapentaplegic’s inhibitor. The same program begins the development of the Drosophila leg. In Drosophila leg development, a set of programs results in growth “proximodistally,” that is, from the body outwards, by establishing a gradient from the center of the leg imaginal disc. This center is caused to rise up from the body at the apex of a mound, with the leg built beneath it as it rises through a temporally ordered program. Recall the shift to spatial orderings in the long-germ-band Drosophila from temporal orderings of development in the short-germ-band insects. In the development of eyespots in the butterfly—an organism that also has a common ancestor with the fruit fly and the beetle—this program runs synchronously instead of diachronically, and instead of building a limb by outward protrusion, it builds an eyespot by spatial diffusion of the same and similar gene products.
There is one thing to bear in mind in answering the question: is the program described above the whole story? Of course, it is not the whole genetic program. The whole genetic program’s story must be understood to include the regulatory and structural RNA and protein products that the switched-on and switched-off genes produce and don’t produce. And it must be understood to include the operation of the maternal genes and their microRNA and polypeptide products, which regulate the building of the unfertilized egg, fill it with a gradient of Bicoid protein, and make it “fertilizable.” The genetic program sketched out above begins at an initial state that includes the already “built” egg at the moment of fertilization; the chromatin state of the nucleus, which results from fusion of egg and sperm; and the distribution of gene products such as Bicoid and maternal microRNAs in the ovum. If, however, this initial state is not itself the product of an equally algorithmic program, then little of the general claim of reductionism is established by showing that embryological development is such a program. But, of course, the reductionist claims that this initial state is as much a result of programmed molecular processes as the embryological development it makes possible. It is true that the reductionist has not shown that prefertilization structures are so produced. But then the reductionist has not shown that postfertilization development is a matter of molecular genetic programs for any but a handful of model systems through a limited range of development. The question therefore arises again: on whom does the burden of proof now rest? And a further question needs to be at least explored: how much evidence is enough to lead the developmental biologist to conclude with confidence that all development is a matter of the implementation of a genetic program? There are two questions here.
First is the question about whether prefertilization, the egg’s developmental scenario, is given by a genetic program, and the second is whether all embryology works the way Drosophila embryology does up to the development of the first 72 cells. Consider the second question. Why suppose that even in this case, the whole story everywhere in the biological realm is macromolecular? The insight that biology is a historical science should enable us to answer the question about how much evidence will suffice to conclude that the reductionist’s hypothesis is vindicated. Were biology a nomological discipline, in which we could expect substantial unification of generalizations into a small number of laws, the amount of data sufficient to confer confidence on general hypotheses about molecular development in all organisms would be relatively modest and manageable. Compare the amount of evidence required to strongly confirm laws in physical science. Though a hypothesis about, say, chemical synthesis makes a claim about what obtains everywhere and always, it requires only a modest number of experiments to secure it considerable credibility. On
the other hand, in history a generalization about a local historical trend (“European revolutions result from rising expectations”) is vindicated only when every putative instance has been examined, and the established historical trends do not suggest between them any more than a longer or more widespread trend; they don’t support any nomological generalizations. The same must be expected in biology. Species are not kinds. Each species is a spatiotemporally extended particular object, a lineage of organisms with a sometimes vague start and sometimes an equally vague end point. How development proceeds in one species is a matter of how it proceeds among the normal members of the species; and owing to the blindness of selection for differences in structure, every description of development for every species will reflect variation in physical detail (even among the normal members). So, no molecular generalizations should be anticipated by the reductionist or demanded by the antireductionist from molecular developmental biology. At most, one should expect a detailed molecular narrative about development in each “normal” case for the salient environmental differences conspecifics find themselves in, along with details about important variations and mutational variants. Under these circumstances, then, vindicating the reductionist thesis about molecular developmental biology is a matter of piling up details about enough model systems over a long-enough period from prefertilization to adulthood so that the burden of proof shifts to those who would deny this reductionist conclusion. It is not for the reductionist philosopher to say that this has already happened. All we can do is remove alleged conceptual obstacles to its happening. The rest of this and all of the next chapter deal with the most widely alleged of these obstacles.
Recall the first question: is the highly specialized organization of the ovum, which is required for the embryological program to start running after fertilization, equally the result of a macromolecular program? And even prior to this, are oogenesis and spermatogenesis fully fixed by the algorithmic operation of macromolecular programs alone? In seeking the explanation for oogenesis, the first thing one notes is another example of the fact that premolecular developmental biology provides no explanantia (the explainers), only explananda (that which requires explanation): Oocyte development begins in a germarium, with stem cells at one end. One stem cell will divide four times to give 16 cells with cytoplasmic connections between each other. One of the cells that is connected to four others will become the oocyte, the others will become nurse cells. The nurse cells and the oocyte become surrounded by follicle cells and the resulting structure buds off from the germarium as an egg chamber. . . . The oocyte grows as the nurse cells provide material through the cyto-
chapter two
plasmic bridges. The follicle cells play a key role in patterning the oocyte. (Wolpert et al. 1998, p. 136) Once the oocyte is formed, the anterior-posterior (top to bottom) and the dorsal-ventral (back to front) structure of the oocyte is determined by gene regulation and gene expression in these follicle cells. Asymmetrical development is due to the asymmetry in the spatial configuration of the nurse cells at one end of the oocyte blocking contact with follicle cells. (Of spatial configuration, more below.) This asymmetry in contact between the follicle cells and the oocyte results in the gradient of Bicoid protein that serves as the input for the program of initial differentiation in the fertilized embryo. Nurse cells secrete Bicoid mRNA into the end of the oocyte that they face. Where follicle cells and the oocyte are in contact (everywhere but the anterior end facing the nurse cells), the protein product of mRNA transcribed from the gurken gene in the oocyte is secreted. This protein, Gurken, binds to a receptor on the adjacent follicle cell, called Torpedo. This binding results in a signal back through the oocyte, reorganizing its microtubular cytoskeleton. This array in turn keeps the greater concentration of Bicoid protein at the anterior end of the oocyte, where it triggers the gap genes of the fertilized nucleus to begin differentiation in the embryo. One is inclined to say that building the oocyte is child’s play compared to building the entire embryo. Similarly, the program implemented by the genes in the various somatic cells to regulate their responses to one another and to the environment must be no more complex than that which they implement in development. Reverse-engineering the genetic program of Drosophila development should make molecular biology confident about the fruits of a reductionistic research program everywhere. 
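The threshold logic of Bicoid-triggered differentiation can be put in a toy model (my illustration, not the author’s; the exponential gradient profile and the particular numbers are assumptions chosen only for concreteness): Bicoid concentration falls off from the anterior pole, and a gap gene is switched on only where the concentration exceeds its activation threshold.

```python
import math

# Toy model: an anterior-posterior morphogen gradient in the egg.
# Bicoid concentration decays roughly exponentially with distance from
# the anterior pole; a gap gene is expressed only where the local
# concentration exceeds its activation threshold.

def bicoid_concentration(x, source=100.0, decay_length=0.2):
    """Concentration at fractional egg position x (0 = anterior, 1 = posterior)."""
    return source * math.exp(-x / decay_length)

def gap_gene_on(x, threshold=30.0):
    """True wherever Bicoid exceeds the expression threshold."""
    return bicoid_concentration(x) > threshold

anterior = gap_gene_on(0.1)   # high Bicoid near the anterior pole: expressed
posterior = gap_gene_on(0.8)  # Bicoid has decayed far below threshold: silent
```

On these assumed numbers, expression is confined to roughly the anterior quarter of the egg, which is all the threshold picture requires.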
It is certainly open to ask whether the differentiation of nurse cells and oocyte cells, the formation of follicle cells, or, for that matter, the development of the germarium where all this takes place is also a matter of the operation of a macromolecular program. Indeed, these are questions that molecular developmental biology must eventually answer. But surely it is asking too much of the advocate of reductionism to await the completion of these tasks before expressing confidence that the story will turn out to differ only in the macromolecular details of each of these programs. Again, once the embryo’s program is elucidated, it must be asked where the burden of proof now lies: with reductionists or with their opponents?
does the genetic software require irreducible hardware?
This brings us to the second and third of the five questions raised above. Can the genes and their program claim center stage in the explanation of develop-
ment when so much else is causally necessary for it? Opponents of reductionism will deny that their view requires further empirical evidence that outweighs the macromolecular details reported above. They will argue that reductionism—genetic or broadly macromolecular—is self-refuting, that the very descriptions of the macromolecular process invoked in the last section reveal the causal indispensability of biological entities, processes, events, and conditions to the operation of the macromolecular program. If the program’s operation requires such irreducible wholes in order to produce the adult from the embryo, reductionism as a research program can be at best a delusion fostered by blindness to everything but the genes and their products. The antireductionist rejects the claim that macromolecular explanations adverting to the causal role of larger structures than molecules—say, organelles, cells, tissues, and so on—can be completed, deepened, corrected, or otherwise improved by further macromolecular explanations. Antireductionism denies the claim that reductionism can help itself to such structures on the understanding that it will eventually explain their features reductively as biological science grows. These antireductionist claims were influentially advanced by Philip Kitcher in the 1980s: “Antireductionism construes the current division of biology not simply as a temporary feature of our science stemming from our cognitive imperfections but as the reflection of levels of organization in nature.” Kitcher’s articulation of this view, in fact, briefly invokes explanations from molecular developmental biology: To understand the phenotype associated with a mutant limb-bud allele, one may begin by tracing the tissue geometry to an underlying molecular structure. The molecular constitution of the mutant allele gives rise to a non-functional protein causing some abnormality in the internal structure of cells. 
The abnormality is reflected in peculiarities of cell shape, which in turn, affects the spatial relations among the cells of the embryo. So far we have the unidirectional flow of explanation which the reductionist envisages. However, the subsequent course of the explanation is different. Because of the abnormal tissue geometry, cells that are normally in contact fail to touch: because they do not touch, certain important molecules, which activate some batteries of genes, do not reach crucial cells; because the genes in question are not “switched on” a needed morphogen is not produced; the result is an abnormal morphology in the limb. Reductionists may point out, quite correctly, that there is some very complex molecular description of the entire situation. The tissue geometry is after all a configuration of molecules. But this point is no[t] relevant. . . . Certain genes are not expressed because of the geometrical
structure of the cells in the tissue: the pertinent cells are too far apart. However this is realized at the molecular level, our explanation must bring out the salient fact that it is the presence of a gap between cells that are normally adjacent that explains the nonexpression of the genes . . . [W]e lose sight of the important connections by attempting to treat the situation from a molecular point of view. . . . The point can be sharpened by considering situations in which radically different molecular configurations realize the crucial feature of tissue geometry: situations in which heterogeneous molecular structures realize the breakdown of communication between the cells. (Kitcher 1984, pp. 371–72)2 Kitcher concludes that so far from vindicating reductionism, explanations in developmental biology require a species of “downward causation”: Hence, embryology provides support for the stronger anti-reductionist claim. Not only is there a case for the thesis of autonomous levels of explanation, but we find examples on which claims at a more fundamental level (specifically, claims about gene expression) are to be explained in terms of claims at a less fundamental level (specifically, descriptions of the relative positions of pertinent cells). (Ibid., p. 372) The defender of reduction will reply to this argument that it neither substantiates Kitcher’s antireductionism about development nor vindicates any downward causation. For the crucial role that geometry plays in the case he describes is just the sort of spatiotemporal, that is, physical, fact which reductionism requires we cite in satisfactory explanations of development: in the case of developmental abnormality, the pertinent cells are indeed too far apart; but the causally relevant feature of the situation is the spatial property of being too far apart, not the biological property of being the pertinent cells, nor the combined spatial 2. This argument has remained popular. 
See, for example, Laubichler and Wagner (2001), who employed it to dispute a less well-developed version of the argument of this chapter in Rosenberg 1998. Defects and oversights in their counterargument are identified in G. Frost-Arnold 2004, “How to Be an Anti-Reductionist about Developmental Biology: Response to Laubichler and Wagner.” Frost-Arnold notes the emphasis on the role of spatial position that Laubichler and Wagner share with Kitcher, and shows how the counterexamples to reduction they construct turn on tendentious assumptions about the availability of spatial position as a variable in reductionistic explanations. Frost-Arnold goes on to identify the sort of counterexample to reduction Laubichler and Wagner would need to refute reductionistic explanations of development. He notes that there is no reason to suppose that such processes as they require to really undercut the present argument actually obtain.
and biological property of the pertinent cells being too far apart. Moreover, the pertinent property of spatial separation is realized by the pertinent molecules that constitute the cell. Antireductionism requires that in the case of abnormal embryological development, the causally relevant property is not that the source of the chemical gradient is too far from the location of the gene whose expression it controls. It requires that the source of the gradient is in a cell that is too far away; furthermore, antireductionism requires that the source’s being in a cell, and not just in a region surrounded by a lipid bilayer that hinders diffusion, is causally indispensable to the developmental abnormality (and mutatis mutandis for normal development). But this latter claim is presumably false, for it can be experimentally established that the presence of a lipid-bilayer barrier to diffusion is sufficient to suppress the concentration of a gradient below the level required for gene expression. In other words, the effect that the presence of the cell has on development is identical to the effect that a purely macromolecular structure has. If such a structure is in fact present when a developmental abnormality ensues (as indeed it is), then either (1) the abnormality is overdetermined—both the distance of the cell from the gene and the distance of the lipid bilayer from the gene brought about the abnormality, and each would have done so had the other condition not obtained; or (2) the cause is indifferently and equivalently described as the distance of the macromolecular lipid bilayer from the gene or the distance of the cell from the gene, and the pertinent causal property is the distance between the macromolecules, since the cell membrane is a lipid bilayer. 
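The reductionist’s point here can be put in a toy calculation (mine, not the text’s; the attenuation factors are arbitrary assumptions): whether we describe the barrier as “a cell membrane” or “a lipid bilayer,” the same physical variables, distance and the number of bilayers crossed, fix whether the signal reaching the gene clears the expression threshold.

```python
# Toy model: the causal property at issue is purely physical. Concentration
# falls off with distance, and each lipid bilayer the signal must cross
# attenuates it by the same factor, however the barrier is described.

def signal_at_gene(source_conc, distance, n_bilayers,
                   decay_per_unit=0.5, bilayer_permeability=0.1):
    """Steady-state concentration reaching the gene, in arbitrary units."""
    return source_conc * (decay_per_unit ** distance) * (bilayer_permeability ** n_bilayers)

THRESHOLD = 1.0

# Adjacent cells, one bilayer between source and gene: signal clears the threshold.
normal = signal_at_gene(100.0, distance=1, n_bilayers=1)
# A gap opens: greater distance and an extra bilayer push it below threshold.
abnormal = signal_at_gene(100.0, distance=3, n_bilayers=2)

expression_proceeds = normal > THRESHOLD   # normal geometry
expression_blocked = abnormal < THRESHOLD  # abnormal geometry
```

Substituting a cell-free bilayer for the cell leaves every term of the calculation unchanged, which is the sense in which the two descriptions pick out one causal structure.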
Besides its intrinsic implausibility, the notion that normal and abnormal embryological development is overdetermined by distinct macromolecular processes and cellular ones also burdens the antireductionist with the claim that cells’ effects on development do not proceed from their occurrent physical and chemical properties—in this case their membranes’ being lipid bilayers. This consequence will strike the philosopher or biologist committed to physicalism or materialism as no less “spooky” than nineteenth-century vitalism. For given the evidence that the separation of the pertinent cells is regularly associated with the distance between their lipid bilayers and the developmental abnormalities, the only way to deny that the cells’ separation is identical to the distance between their bilayers is to assert that the cell separation has two distinct effects: the distance between bilayers and the developmental abnormalities. But even if there were no independent evidence that bilayer separation (in cell-free in-vitro experiments) is sufficient to suppress gene expression, the antireductionist would now have to answer the question, what is it about the distance between cells that could be distinct from the separation of the bilayers of mol-
ecules that make up their membranes? And the only answer that will do the work the antireductionist needs is that the spatial separation of the pertinent cells is not a physical fact at all. It is worth repeating here that as a research program, reductionism does not eschew the employment of concepts, terms, kinds, and taxonomies that characterize phenomena in nonmolecular terms. Reductionism is not eliminativism. Besides countenancing terms like cell as acceptable expressions in biological description, reductionism accepts the reality of cells and their causal roles. Arguments against reductionism that turn on the indispensable role of such terms in biological explanations offered and accepted in contemporary biology wrongly presume that it is incompatible with such indispensability. So far from eliminating “cells” as causal agents in biological processes, reductionism expresses a commitment to explain their causal roles more fully and with more predictive precision. Moreover, reductionism is a program that must proceed in a piecemeal and opportunistic fashion. In its explanation of how the genetic program controls Drosophila embryological development, it will help itself to many concepts familiar from nonmolecular cell physiology. What reductionism denies is that there are distinct causal properties of the items such terms name that are not open to identification in macromolecular terms. (These issues are taken up again at some length in chapter 6; see especially note 1.)
the epigenetic counterexample to genocentrism
Recall the third of the five questions broached about the genetic program as developmental explanation above. More than one of the stern opponents of reductionism will condemn everything said up to this point just on the grounds that molecular biology has no right to claim any special role for the genes in the causation or explanation of development, and in particular no right to wrap itself in the meretricious mantle of information, computer programs, artificial intelligence, or, for that matter, real intentionality, representation, and action that only human beings are capable of. It is by no means clear that the reductionist must respond to these arguments against genocentrism. For reductionism need hold no special brief for the genes, or even the nucleic acids, in the causation and explanation of development. Its commitment is to molecules—amino acids, organometallics, and atomic ions, along with nucleic acids. It is appeal to all of these, reductionism claims, that provides the improvements, completion, deepening, and enhancements of predictive confirmation and application that scientific explanation in biology requires. Thus, note the role of chromatin states at the outset of each subroutine in the program for building the Drosophila embryo given in the sec-
ond section above. The chromatin state of the cell—how tightly and to which stretches of nucleic acid its chromatin proteins are bound—plays as crucial a role as that of any maternal RNA, chemical gradient, or nucleic acid molecule in embryological development. The rejection of the claim that the genes play a special role in development, one expressed in concepts like “code,” “information,” and “program,” is one to which reductionism can certainly be reconciled. A large part of the next chapter is devoted to examining the strengths and weaknesses of the claim made on behalf of the genes as expressing development-controlling information in a code which programs the organism. The aim is in part to adjudicate contrary claims about the genetic program as literal or metaphorical, and as apt or inapt; and in part to explore the implications of the dispute for reductionism. We will see that exponents of the informational-intentional interpretation of the genes have a great deal of philosophical work to do in order fully to vindicate their views, but that the failure to complete this project raises no doubts about molecular developmental explanations that reductionists need concern themselves with. Still, the vindication of a special role for the genes in development would strongly substantiate reductionism; holding that an argument against it would have no impact on the attractiveness of reductionism as a research strategy would be disingenuous. Such an argument against the special role of the genes in development is advanced by the developmental systems theorists (for example, Griffiths and Gray [1994]). Invoking the “causal democracy principle,” these biologists and philosophers note rightly that many things are causally necessary along with the genes for development. If all are equally necessary, and none sufficient for development, then all are on a par: each factor is the equal of the others and has, so to speak, one vote in determining the developmental outcome. 
Whence “causal democracy.” Developmental systems theorists go on to maintain that those factors which we identify as crucial or specially important will depend on the interest among those seeking explanation and will be matters of emphasis in their questions. According to developmental systems theory, the unit of development is the life cycle; each life cycle includes anything in the organism and the environment that causally participates in the construction of those traits which foster the fitness of the lineage of reproducing life cycles. These life cycles will sometimes coincide with the lifetimes of individual organisms, and sometimes they won’t. Not all opponents of the genocentric explanation of development go this far, but many will cite the developmental systems theorists’ favorite exception to the hegemony of the genes: the process of epigenesis, in which some developmentally important traits are not coded by the genes at all. Surely epigenesis undercuts genetic heredity.
Epigenesis is a notion invoked initially by Waddington (1957) to describe cases of inheritance that do not involve transmission through the genes. If there is such a thing, then the genes are not the sole vehicles of hereditary transmission, and what is hereditarily transmitted is not always nucleic acid–encoded information about proteins, enzymes, and their complexes, still less information about features of the body or behavior of organisms. Opponents of genocentrism cite alleged epigenetic systems of transmission among insects, birds, mammals, and humans in which developmental and adult traits are transmitted across generations without the need for genes to code for them. The most common example of epigenetic inheritance offered as an alternative to genetic inheritance widely to be found in nature is host imprinting. The European cuckoo is a brood parasite, and acquires a relatively strong fixation on the species of bird in whose nests it is laid by its parents. It will preferentially lay its eggs in nests of birds on whose appearance it imprinted at an early age. In a more extreme case, the parasitizing nestlings of each different subspecies of the African widowbird (Viduinae), which brood-parasitizes a finch species, learn and use the songs of the host subspecies, and even have mouth markings similar to those of the host subspecies’ nestlings, which encourages food regurgitation by host parents. The mouth-marking similarity is presumably a genetically coded adaptation, but the vocalization and the imprinting are learned. The female widowbird will ovulate only when she detects reproductive behavior in her host subspecies, and male widowbirds will attract females of the same subspecies by the finch song they learned as nestlings in the finch subspecies’ nests. The result is a new generation of widowbirds with the right markings. 
Thus, the nongenetic environment of one generation of each widowbird subspecies transmits its features—the appearance and vocalization of the finch subspecies it parasitizes—to the next generation of widowbirds, via the widowbird’s imprinted egg-laying behavior. Even if the genes program the widowbird, it is the finch subspecies’ song and appearance that “program” the song, mate selection, and egg-laying preference of every generation of the widowbird subspecies that parasitizes it. (“Program” in quotes, since the exponents of epigenetic transmission generally deny there are any programs, genetic or otherwise, in heredity.) This sort of epigenetic inheritance certainly shows that genetic inheritance cannot be the whole story of heredity, and therefore genetic transmission cannot be the whole story of evolution either. As such, epigenetic processes are another component in the argument for the uncontroversial conclusion that evolution requires phenotypic vehicles or interactors as well as genomic replicators. And if epigenetic inheritance is widespread, then a full account of evolution by natural selection will have to be reconciled with such processes.
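The transmission channel at work here can be caricatured in a few lines (a deliberately crude sketch of my own; the host name is a placeholder, not a claim about actual widowbird hosts): the chick’s host preference is copied from the nest it was reared in, not from anything in its genome.

```python
# Toy model: host imprinting as a nongenetic inheritance channel.
# A nestling imprints on the host species that reared it, and later lays
# its own eggs in that host's nests, so the preference recurs in each
# generation without ever being encoded in nucleic acid.

def imprint(host_species):
    """The nestling fixes on whatever host species reared it."""
    return {"host_preference": host_species}

def next_generation(parent):
    # The parent lays eggs in nests of its imprinted host; its chicks are
    # reared there and imprint on the same host in their turn.
    return imprint(parent["host_preference"])

bird = imprint("host finch A")
for _ in range(5):                # five generations, no genes consulted
    bird = next_generation(bird)
# The preference is stably transmitted down the lineage.
```

The point of the caricature is only that the recurrence of the trait is fixed by the rearing environment, which is exactly what makes it a candidate counterexample to genocentrism.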
Does the biological actuality of a few or even a large number of cases of epigenetic heredity undermine the claim that it is the genes which program the organism, and therefore it is the genes which explain development? Epigenesis is touted as an alternative to genetic heredity, but do the epigenetic causes of inheritance have the role in building the organism that the genes have? If the answer is yes, then the gene’s claim to explanatory uniqueness and centrality in development may be threatened. And if the answer is no, what are the consequences for the claim that epigenesis is an alternative to the genes as a vehicle of hereditary transmission and developmental control? Reductionists will find epigenetics to be a surprising source of skepticism about the explanatory power of the genetic program, for it is something they are familiar with as a consequence of genetic programming. As the term is employed among molecular biologists, epigenetics is defined as the study of heritable changes in gene function that occur without a change in the DNA sequence. But it is always a change brought about by a direct and immediate modification of the DNA. Epigenetic inheritance is a mechanism that occurs variously in prokaryotic and eukaryotic genomes. For present purposes, it is best illustrated in the phenomenon of genomic imprinting. Normal embryos begin with one set of genes from each parent. Developmental abnormalities arise when both sets come from only one parent, even though that parent, with the same two sets—the same nucleotide sequences (bar single-nucleotide polymorphisms)—has experienced a perfectly normal development. In the mouse, for example, an embryo developing from two paternal genomes instead of one from the mother and one from the father develops poorly and then dies, but has a relatively well developed placenta. By contrast, an embryo with two maternal genomes experiences strong development, while its placenta is poorly developed. 
The cause in both cases, of course, is some abnormality in gene expression: genes provided for the embryo-building program by the sperm are not functioning to program the embryo correctly; genes provided by the ovum for the placenta program are not functioning to build the placenta normally. And the cause of this failure must be something different that happens to the genomes in germ cell development in the parents. This difference occurs not in the nucleotide sequence, but in some other chemical modification of the genes that parents contribute to offspring, since the cause is preserved through meiosis, fertilization, and early development of the next generation; that is, it is inherited, and inherited without being coded in the gene sequence (epigenetically inherited). Of course, epigenetic inheritance is required for normal mouse development, even when the two gene sequences contributed by the parents are identical, nucleotide for nucleotide; there must be some chemical difference between the sequences in virtue of their maternal
or paternal origin that makes for normal placental and embryological development. This is the phenomenon of gene imprinting.3 What is the chemical difference between identical nucleic-acid sequences on which imprinting relies? Here is one such difference: one of the four nucleotide bases, cytosine, can be methylated; that is, methyl groups (a carbon atom covalently bound to three hydrogen atoms) can be attached to the cytosine pointing outside, away from its base-pair opposite, guanine. A methyl group oriented away from the double-helix backbone of the DNA blocks the operation of a promoter at that point and so prevents the associated gene from being expressed. In fact, between 2% and 7% of the cytosines in mammalian cellular DNA are methylated, and almost all of the methylated cytosines are followed immediately by a guanine (CG pairs). Maternal- and paternal-nucleotide sequences are methylated at different cytosine bases, and so permit expression of different genes in their respective somatic cells, thus ensuring normal development. For example, in mouse embryos, both the genome from the mother and the one from the father carry genes for IGF-2, one of two insulin-like growth factors that encourage growth of the placenta; and both also carry the H19 gene, which expresses a product that controls the degradation of IGF-2. The Igf-2 gene from the mother is methylated so that it cannot be expressed, and the H19 gene from the father is methylated so that it cannot be expressed. Regulation of the Igf-2 gene is required for normal development of the placenta. The silenced paternal H19 gene does not provide this regulation, and so allows for a large placenta taking nutrients from the mother to the embryo. The maternal genome, in which the Igf-2 gene is switched off and the H19 gene is switched on, controls placental growth in the interests of the mother. 
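The parental-origin logic of this example can be captured in a small sketch (my own illustration of the rule just described; real imprinting works through differentially methylated control regions, which this ignores): expression depends not on the sequence but on a methylation mark set according to which parent contributed the copy.

```python
# Toy model: imprinting of the mouse Igf-2 / H19 pair. The two parental
# copies are sequence-identical; what differs is a methylation mark set
# by parental origin, and a methylated copy is silenced.

def methylated(gene, parent_of_origin):
    """The maternal Igf-2 copy and the paternal H19 copy carry the mark."""
    return (gene == "Igf-2" and parent_of_origin == "mother") or \
           (gene == "H19" and parent_of_origin == "father")

def expressed(gene, parent_of_origin):
    return not methylated(gene, parent_of_origin)

# Identical sequences, opposite expression, fixed by parental origin alone:
# Igf-2 is expressed only from the paternal copy, H19 only from the maternal.
```

On this rule, an embryo with two paternal genomes would express a double dose of Igf-2 and no H19 at all, which is the unregulated placental-growth pattern described above.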
The correct pattern of methylation is maintained in mitosis by a methylating enzyme present at DNA replication forks, which recognizes the methyl group on the CG pair of the template-DNA strand and adds a methyl group to the cytosine of the corresponding CG pair of the new DNA strand. There are two “design problems” raised by this method of controlling and coordinating the expression of multiple genes. The first is that copies of the gene sequence which figures in the meiotic development of germ cells need to be stripped of these methyl groups, because any of the chromosomes on which they reside may end up in male or female germ cells. The second problem is that shortly after fertilization, each of the genomes must be remethylated in 3. A formal treatment of the mathematical theory of epigenetic imprinting in genes is provided in Haig 1997, which advances an inclusive fitness-based account of the phenomenon, and concludes by exploring why epigenetic imprinting is so infrequently to be met with in nature.
the same pattern as the methylation patterns of its source’s genes, to assure normal development. It is these methylation patterns that are inherited epigenetically, that is, without being expressed in nucleotide sequences. How are these two tasks discharged? The demethylation problem seems relatively easy to solve. A demethylating enzyme present and active after meiosis but early in the development of germ cells removes all methyl groups from cytosines in genomes, whether they were synthesized from maternal-origin DNA or paternal-origin DNA. Primordial germ cells develop into oocytes and sperm, depending on whether the embryo in which they figure is male or female. Primordial sperm (prospermatogonia) genomes are methylated at CG pairs by de novo methylases prior to sperm formation, while methylation of oocyte-gene sequences occurs after birth of the female. In oocyte development, the methylation is controlled by the product of the Dnmt3L gene. (Dnmt stands for DNA methyltransferase, of which at least four have been identified as active in embryological development. Though similar in sequence to them, Dnmt3L is, however, not a methyltransferase but probably regulates them.) Let’s ask the ultimate, or evolutionary, question about epigenetic imprinting of the Igf-2 gene in the female and the H19 gene in the male. There is a widely recognized evolutionary explanation for these differences in the reproductive strategies of males and females or, alternatively, the reproductive strategies of the genes with male parentage and those with female parentage. The products of one of these genes, Igf-2, have effects that enhance the reproductive fitness of the paternal genes and reduce that of the maternal genes. The other gene, H19, has effects that enhance the reproductive fitness of the maternal genes and reduce that of the male parent’s genes. 
The former gene helps produce a strong placenta which advantages the fetus carrying the paternal genes at the expense of the mother, since better nutrient transfer weakens her ability to survive into future breeding seasons. The latter gene limits placental development, with fitness-enhancing effects for the mother and her genes. This will be especially true in breeding systems such as that of the mouse, where a single litter may have mixed paternity, and there is little likelihood that the mother’s future offspring will share paternity. Thus, paternal genes will be selected for securing maximal resources from the mother, and maternal genes for retaining resources for the mother. One mechanism for accomplishing these goals is parental genetic imprinting. The competition between these two genes is a classic case of genomic conflict, of an arms race at the level of the polynucleotides, with consequences for mouse embryological development and for modification of the genes that program it. Thus, in the “first round,” males switch off the genes advantageous to females by methylation, and females do the reverse. There is substantial evidence of a “second round,” in which females reprogram genes
advantageous to males: paternal genes are particularly subject to remethylation during the period they spend in the maternal cytoplasm before their gene products reach effective concentrations. The result is a “balance of power” in which normal placental/embryonic development takes place. The epigenetic inheritance phenomenon just described is a thoroughly molecular affair. It is also a thoroughly genetic affair. That is, although differential inheritance patterns here are not matters only of the order of nucleic acid bases, they very clearly are matters of the chemical modification of nucleic acids by the relatively direct action of other nucleic acids. That is, genes are methylated by regulators produced by other genes. Molecular biologists studying the program of microRNA processing and the genes which code for these enzymatically active microRNAs (especially sense-antisense transcriptional units) are adding details to this reduction of epigenesis to the work of the nucleic acids (see Herbert 2004). So, the causal pathway from one generation of methylated-DNA sequence to another generation of such sequence runs via the action of other DNA sequences, whose products do the methylating and which thereby serve as regulatory genes. Methylation patterns turn out to be garden-variety, genetically encoded, genetically heritable phenotypes. There is nothing in this story to undermine a genocentric view of inheritance. That this is so is, if anything, made clearer when imprinting by epigenetic methylation is viewed from the evolutionary point of view, as the result of strategies realized by competing gene-sequence lineages engaged in an evolutionary arms race. Reductionists promoting biological explanation and genocentrists promoting heredity will both draw attention to the biomedical aspects of gene imprinting. There are a small number of medical syndromes that reflect non-Mendelian inheritance and that medical researchers have traced to defects in imprinting via methylation. 
Prader-Willi syndrome and Angelman syndrome are genetic disorders that arise, respectively, from paternally and maternally inherited defects on chromosome 15, and they result in different symptomatology. Each can result from a number of different genetic defects. In most cases of Prader-Willi and Angelman, the cause is the deletion of a 4,000-base-pair DNA sequence on chromosome 15. But in 5% of the cases, the cause is a defect in the gene that controls their remethylation and that is also located on chromosome 15. Either way, the disorders' etiologies begin with a change in the nucleotide sequence (most likely in a gene for sense-antisense transcription), and not with the epigenetic differences. With these considerations about what molecular biology treats as the paradigm case of epigenesis in mind, let's return to the sort of examples that supposedly undermine genocentrism in heredity and the genetic programming explanation of development. The first thing to notice is the longevity of nucleic
acid–based genetic inheritance by comparison to molecular epigenesis, and the longevity of molecular epigenesis by comparison to host-imprinting epigenesis. The first mechanism of inheritance has been around so long that it is now ubiquitous in the biosphere. It has been around so long that it has become the sole extant solution to the design problem of high-fidelity hereditary transmission. With the possible exception of the prion, every reproducing system employs nucleic acids for hereditary transmission. The inheritance of methylation, by contrast, is a much newer phenomenon, one that probably dates to no earlier than the emergence of sexual reproduction, if the arms-race theory of gene competition is to be credited. The genes that code for microRNAs apparently responsible for genetic imprinting are not to be found among prokaryotic genomes. The cases of host-imprinting that ethologists and others bring to our attention have been around only for a very short period. What can we infer from these differences in longevity? One thing is certain: as in physics, so also in evolution; for every action there is a reaction, though not always equal and opposite. Just as genomic imprinting by methylation patterns is a move in a strategic game between genes, so also host imprinting is a move in a strategic game between each widowbird subspecies lineage and each finch subspecies lineage it parasitizes. The theory of natural selection tells us that in the long run, the host-organism lineages will respond to this parasitizing strategy in a way that reduces its costs to the host-species lineage. And on a geological timescale, this will happen sooner rather than later. Even where cases of host imprinting are cases of symbiosis instead of parasitism, the equilibrium is never so stable that it will not eventually succumb to a new strategy. Once a new strategy triumphs, the epigenetic phenomenon vanishes. 
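The turnover just described, in which an epigenetic pattern persists only until the exploited lineage finds a counter-strategy, can be illustrated with a simple deterministic simulation. All parameters here (the fitness cost, mutation rate, and generation count) are invented for illustration:

```python
def simulate(generations=12, cost=0.3, mut=0.01):
    """Frequency of a mutant host tune 'B' invading a population whose
    resident tune 'A' is mimicked (parasitized) by another lineage."""
    pB = 0.0                     # frequency of the mutant tune
    parasite_targets_A = True    # parasites currently mimic the resident tune
    history = []
    for _ in range(generations):
        # hosts singing the currently mimicked tune pay a fitness cost
        wA = 1.0 - cost if parasite_targets_A else 1.0
        wB = 1.0 if parasite_targets_A else 1.0 - cost
        mean_w = (1 - pB) * wA + pB * wB
        pB = (pB * wB) / mean_w            # selection
        pB = pB + mut * (1 - pB)           # recurrent mutation A -> B
        parasite_targets_A = pB < 0.5      # parasites track the majority tune
        history.append(round(pB, 3))
    return history

print(simulate())  # the mutant tune spreads; the old mimicry pattern erodes
```

Once the parasitized tune loses its majority, the old transmission pattern (parasites reliably acquiring the resident tune) is gone, which is the sense in which each such epigenetic phenomenon is a temporary solution.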
Each particular case of epigenesis is thus a temporarily successful solution to a local design problem. By contrast, genetic inheritance is, so far at least, a permanently successful solution to a global design problem. Furthermore, as is particularly clear in the case of genomic imprinting, epigenesis is a solution to a design problem faced by genomes—nucleic acid sequences—in competition with one another. The detailed ultimate evolutionary explanation of genomic imprinting eventually ends at the gene sequence after all. Though the explanandum is nongenetic—that is, non-nucleic-acid-based—inheritance, the explanans in the end comes down to nucleic acid–based genetic inheritance: in the particular case discussed above, selection for the Dnmt3L gene, which controls remethylation of the paternal genome in the fertilized embryo. Will the ultimate explanation be any different in the case of host imprinting? The question is not rhetorical. Given the temporary character and the fragility of the transmission pattern from, say, a particular adult widowbird's (finch-subspecies-like) vocalizations to its offspring's similar vocalizations, the real work in explaining the hereditary pattern here is going to be done by an account of the genetically encoded program for the neurology of singing in general, environmental tune-learning by both finches and widowbirds, and a suite of neurological capacities that will be in play and will continue to explain the traits of widowbird and finch species long after finch subspecies have found a strategy for reducing the parasitism that lowers their fitness. Consider: widowbirds in each generation of a subspecies sing the same tune. But the tune sung in widowbird generation 1 doesn't cause the tune sung in widowbird generation 2. The tune sung in widowbird generation 2 is caused by the tune sung by finches in generation 1. How can we tell? Because we can switch finch hosts and widowbird parasites to different subspecies, and the latter will sing the song of the new host subspecies. The finch tune causes the widowbird tune. But, presumably, the finch tune is programmed by finch genes in standard (non-fitness-reducing) environments. So, would it be far-fetched to infer that the widowbird tune is programmed by finch genes? Certainly it is the stability of finch genes that explains the stability of the widowbird tune. Now, in the competition between finch genes and widowbird genes, there is pressure on finch genes to change the finch tune so that widowbird genes cannot enable widowbirds to mimic it. If such a variation in finch genes does not arise, finch fitness must fall. How is this nongenetic inheritance? Significant changes in either gene sequence will destroy the inheritance pattern of widowbird tunes. Significant changes in finch tunes will cause significant changes in widowbird tunes, without changing widowbird genes. But if the finch tune changes are caused by significant finch gene changes, then it is gene changes after all that program the widowbird's tune.
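The host-switching test in the passage above can be rendered as a minimal sketch. The subspecies and tune names are invented placeholders; the one substantive assumption, taken from the text, is that a chick's adult tune is learned from the finch subspecies that rears it rather than transmitted by its widowbird parent:

```python
# Finch tunes are (by hypothesis) programmed by finch genes.
FINCH_TUNE = {"finch_subspecies_1": "tune-X",
              "finch_subspecies_2": "tune-Y"}

def chick_tune(host_subspecies, parent_widowbird_tune):
    # The causal arrow runs from the host's tune; the widowbird
    # parent's own tune is causally idle in this model.
    return FINCH_TUNE[host_subspecies]

# Generation 1: a lineage parasitizing subspecies 1 sings tune-X.
gen1 = chick_tune("finch_subspecies_1", parent_widowbird_tune=None)
# Generation 2: switch the same lineage to a new host. Widowbird genes
# are unchanged, yet the "inherited" tune tracks the new host's song.
gen2 = chick_tune("finch_subspecies_2", parent_widowbird_tune=gen1)
print(gen1, gen2)  # tune-X tune-Y
```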
Another way to put it is that the widowbird tune is a maladaptation of the finch, resulting from the expression of a finch gene, one that will be selected against. Alternatively, the widowbird tune is an adaptation of the widowbird genes that won't last long. As noted above, the reductionist can accept the reality of epigenesis with a degree of equanimity that may not be open to the genocentrist concerning development. So long as the ultimate details of epigenetic transmission and developmental control turn out to be macromolecular, reductionism will be vindicated. The genocentrist will reject epigenetic transmission as a ground to deny explanatory uniqueness to the genome, owing to the failure of epigenetic causes to bear information and to program their effects in anything like the way that the genes carry information that enables them to program the organism. However, that the genes do this themselves is just what is denied by those who appeal to epigenesis as a counterexample to genocentrism. And some of these opponents go on to argue that in fact there really is no such thing as the gene
at all. The notion has been a very useful fiction, a device that recent work in molecular biology has shown to be of decreasing heuristic value. On their view, the notion of the gene is likely to be superseded in twenty-first-century biology, thereby making moot the debate about genocentrism and obviating the confusions and misunderstandings that characterize public discussion of the role of genes in the determination of character and behavior. It is to these two broad issues that we turn in chapter 3.4

4. Epigenesis is one of a broader set of processes that evolutionary biology must accommodate in the theory of natural selection. Other processes that are similar to epigenesis and await a canonical Darwinian account include niche construction, group selection, and cultural evolution. Once these processes, which involve blind variation and natural selection among traits that are not genetically encoded, become well understood from the perspective of evolutionary biology, the Darwinian reductionist faces the further challenge of reductive explanation. This is, however, not the challenge of explaining selection at any one level by appeal to selection operating in the same direction at the very "next" lower level—even if we could individuate levels so neatly, as we shall see in chapter 6. It is also important to recall that reducing these processes does not mean reducing them to the behavior of genes: reductionism is not the thesis that the only molecules that matter are the nucleic acids! In the primate case, the full and complete account of these epigenetic processes will involve at a minimum the neurotransmitters. A full discussion of the prospects for the reduction of epigenetic processes is beyond both this work and our current understanding. But some of the pertinent issues are broached in chapters 6 and 7.
3

Are There Really Informational Genes and Developmental Programs?

As noted in the introduction, the achievements of molecular developmental biology have by no means been welcomed among philosophers of biology and biologists. This has been especially true among those anxious about two related matters: first, the potential for public misunderstanding of scientific findings, and second, the willful misrepresentation and misuse of such findings in the deformation of public debate and policy. The anxiety has arisen owing to the encouragement that molecular developmental biology, and its interpretation among biologists, may give to versions of "genetic determinism." If, after all, the genes program the embryo, and, as the saying goes, the child is the father of the man, then at the very least it may seem that the genes hold our potentials on what E. O. Wilson has called a short leash. Even worse, if research on "master control genes" such as eyeless is widely confirmed in other organs and other species, as may safely be assumed, two further conclusions will become hard to avoid: (1) the maintenance of bodily integrity in the adult and its behavior are as much a matter of genetic programming in the somatic cells as development is the outcome of programming by the genes of the fertilized germ cells; and (2) the degree of homology of the regulatory genes, such as those in the homeobox, across distantly related species from the Drosophila to the mouse to us, is so great and so much a matter of gene duplication as opposed to real sequence variation, that there will scarcely be a naturalistic perspective from which to argue for genuine qualitative human exceptionalism.
Thus, the motivation for seeking a refutation of the claimed uniqueness of the gene in the causation of development is very strong. The details of attempts to refute genetic determinism by undermining genocentric accounts of development resolve themselves into the five questions outlined in chapter 2, and especially into arguments against the informational role of the gene, and indeed against the very coherence of the notion of the gene altogether. This chapter is devoted to exploring these latter two issues.
genocentrism and information

Both genocentrists and their opponents agree that the genes are not causally sufficient for any outcome, whether genetic or epigenetic. Their dispute concerns whether the genes do or do not contribute something so special and distinctive that they are explanatorily more basic than anything else. We have already seen that nucleic acid has a much longer track record of conveying heredity than anything else, from methyl groups to finch tunes. Moreover, it is both ubiquitous and indispensable even in epigenetic transmission. And the genocentrist will argue that the nucleic acids have another property, lacking elsewhere in nature, which confers on them the central explanatory role in both heredity and development. Without this special role, there may be no particular reason to single out the nucleic acids as causally special in development and heredity, and without it certainly the claim that molecular biology does all the explaining in development will look less plausible to many. After all, the genetic program includes two very powerful explanatory components: one is that the order in which the parts of the embryo are built is represented in the nucleic-acid sequence; the second is that what each component is made of and does is represented in the same sequence. Since nothing else in the biosphere can make the same claim about representations, the genome does have a special explanatory role in both heredity and development. It is this last claim that the opponents of genocentrism reject. They argue that nothing in the biosphere has the status of representing development-guiding or heredity-guiding information, including the genes. Ergo, claims based on its unique informational role are unwarranted.
At most, all the talk about genes as information, as programming development, as blueprints or recipes for the body, as sending signals or being proofread or decoded, and, for that matter, as “master controls,” is so much mere metaphor—and seriously misleading metaphor at that. Here the exponents of the genetic-program explanation of development will respond that calling the DNA sequence a coded message is no metaphor, that the genes carry information in a literal sense not shared by other causally necessary conditions for development. They will argue that there is a program in
operation here, not just in the sense that any functionally described process realizes a program, but in the sense that the laptop on which these sentences are composed follows a program. And they will hold that it is this program that explains development, that breakdowns in this program explain abnormalities in development, that mutations in it explain differences in development; and it is this program that provides the basis for converting the how-possible explanations of non-molecular evolutionary biology to the why-necessary explanations of molecular evolutionary biology. Opponents of genocentrism will grant that there is an attenuated sense of information in which many other things besides the genome carry information about development and hereditary traits. This is the sense of information as it figures in mathematical information theory. It is true that the mathematical theory of information due to Shannon and Weaver (1963) applies to any causal chain, including ones in the environment transmitting epigenetic "hereditary" information, just as well as it does to genetic transmission of hereditary information. This is simply because the Shannon-Weaver formalism is a way of measuring the quantity of information that any causal chain can be employed to transmit, and the reliability of that transmission. But the transmission of information requires more than a causal chain. Otherwise, the causal chain that transmits a scrambled TV signal would count as transmitting information (beyond the "information" that the signal was scrambled).1 Unlike the other causes of development, genes are claimed to be information-bearing causes because the nucleotide sequence of adenine, thymine, guanine, and cytosine (hereafter A, T, G, C) constitutes a code in which the structure of proteins is expressed, and from which that structure can be read off. That it is literally a code is reflected in several facts about it; for one, the genetic code is redundant.
Though with four units one can send 4³ (that is, 64) different messages, only 20—the 20 amino acids—are actually encoded, and some amino acids can be signaled by any one of several different messages (three-nucleotide "codons"). Histidine's codons are CAT and CAC, for example.

1. Philosophers' worries about how best to understand the use of informational and other intentional descriptions of the behavior and constitution of macromolecules long antedate the debate about genocentrism. Rosenberg (1986) argued that most of the ordinarily intentional predicates that molecular biologists employ were implicitly redefined by them in ways that purged their intentional character, thus rendering terms such as information, proofreading, recognition site, and so on unproblematical. Anxiety about the aid and comfort genocentrism might give to genetic determinism made the issue increasingly salient among philosophers and biologists, and eventuated in an exchange in the pages of Philosophy of Science among Maynard Smith, Sterelny, Godfrey-Smith, Sarkar, and Winney. See also Sarkar 1996, and especially Griffiths 2001. As the reader will see below, the question of whether genes really carry information turns out to be immaterial to the question of whether they program the embryo.

More important, as Monod (1974) first held and Maynard Smith (2000) argues, the genetic code is informational in a sense nothing else in development is informational, in large measure because, like a signal system (say, Morse code), it is arbitrary. It is held to be arbitrary in the following sense: as a matter of physical possibility, any particular triplet, say CAT, which codes for histidine, could just as well have coded for the amino acid glutamate. That is, we can imagine a process of protein synthesis that attaches a glutamate molecule, rather than a histidine molecule, to the transfer RNA whose anticodon pairs with the CAT codon. There is, of course, an explanation for why CAT codes for histidine and not glutamate, and that explanation involves natural selection and drift operating on the initial conditions obtaining at the time the coevolution of nucleic acids and amino acids began. It will presumably show that the actual coding of 20 amino acids by a particular redundant pattern of 64 nucleic acid codons is a "frozen accident": it could, consistent with the laws of chemistry and physics, have turned out that a different coding emerged. Can epigenetic codings make the same claim of physical arbitrariness? That is, would it be the case that, holding the laws of nature constant and merely changing the initial conditions at the time that an epigenetic hereditary mechanism kicked in, it would have resulted in the epigenetic transmission of the same traits? Almost certainly not. Well, actually it might be quite difficult to show positively that the code which translates CAT into histidine is really arbitrary in the way epigenetic transmission of songbird melodies, for example, is not arbitrary.
And in that case, the genetic code will turn out to be no more arbitrary than many other information-bearing things, such as clouds, which bring information about rain. What exactly does arbitrary mean in the present connection? Compare the word cat to the word ouch. There is a full and complete explanation of why cat names cats, in English. But consistent with all the laws of nature, cat could have carried information about hats or bats or cots or cabs or casts, and so on. That it carries information about cats is presumably fixed by some laws or other operating on some set of initial conditions, facts about fairly local conditions that obtained on the Earth at some time in the past. By contrast, that ouch carries information about pain seems somewhat less arbitrary: consistent with the laws of nature, it could have meant what ah means. That it carries information about pain is still a matter of initial conditions, but perhaps less local ones than the ones that resulted in ‘cat’ meaning cat. And the fact that in very many languages, both Indo-European and Asian, the /m/ sound figures prominently in the word referring to the female early childhood caregiver (mother, madre,
amma) may be a matter of even less local initial conditions. By contrast, clouds carry information about rain independent of any local conditions: the connection is explained by laws alone, and initial conditions are not involved. Roughly, a connection is arbitrary in the present context if initial conditions play at least some part in its explanation. The more local the conditions, the more arbitrary the connection. The trouble is that among the competing explanations for the genetic code, several do not require initial conditions at all, and others exhibit only the most minimal reliance on such conditions. At least five or six accounts of the origin of the genetic code have been offered. The stereochemical theory of the origin of the code requires as explanans only the laws of chemistry and physics to bring about the particular distribution of codons to amino acids that figures in the genetic code as we know it. Indeed, this theory even provides a scenario according to which a natural sequence of chemical events builds derived amino acids out of basic ones and changes existing codon sequences in step with the amino acid syntheses to build up the genetic code. As such, it gives the code a naturalness and inevitability that approaches the periodic table of the elements' association between atomic structure and chemical properties, a highly nonarbitrary relationship. Rival evolutionary accounts trade on features of the genetic code noticed soon after its discovery: in general, structurally similar amino acids share more-similar codons, and amino acids with pyrimidine (T and C) nucleic-acid-based codons are more similar to one another than ones coded for by purine (A and G) bases. Accounts trading on these facts suggest that the code developed in the particular order it did because of the operation of natural selection at the level of interacting amino acids and nucleic acids.
Indeed, it is now known that the code is not universal, and that there are a small number of exceptions to it (just what we would expect of a historical trend, not a real general law [Jukes 1985]). More recently, adaptational hypotheses have been advanced to explain why certain exceptions to the code have appeared—for example, the shift of the codon CUG from coding for leucine to serine in certain Candida species (Santos et al. 1999). If the distribution of the chemical substrate on which natural selection operates is almost everywhere in the universe the same, and if there is a purely chemical affinity between nucleic acid codons and amino acid molecules, then we can expect the same genetic code to develop ubiquitously. Even if chemical affinities do not result in just one code, natural selection operating at the level of macromolecules could choose among competing codes. That the code is nearly ubiquitous among all biological systems on Earth is good evidence that if it is the result of evolution, it must have won the competition among alternative codes so completely that no competitor was left in the field. Since natural selection among vast numbers of molecules over evolutionary timescales allows only vanishingly small scope for drift, the conclusion that the code is more than a frozen accident is not surprising, nor is the persistent search among molecular biologists for an explanation of the code that reveals its nonarbitrary character. The explanation of the code is a question to which much current research among molecular biologists is devoted, and for all we know the "frozen-accident" view (Crick 1968) that would vindicate arbitrariness may be correct. On this view, there are many equally chemically or chemically and evolutionarily feasible genetic codes, and the emergence of a single code for all life forms on Earth is largely a matter of initial conditions peculiar to the Earth, much as the emergence of cat as code for Felis domesticus is arbitrary. But pending the outcome of this debate, the exponents of DNA's distinctive informational role cannot help themselves to arbitrariness as evidence of the genetic code's informational character.
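Both the redundancy of the code and the sense in which it might be called arbitrary can be made concrete in a short script. The codon-table fragment below uses DNA-strand codons, matching the text's usage; the "alternative code" is a bare bookkeeping permutation, offered only to illustrate that nothing in the mapping's format forbids a different assignment:

```python
from itertools import product

bases = "TCAG"
codons = ["".join(t) for t in product(bases, repeat=3)]
print(len(codons))  # 64 possible triplets, but only 20 amino acids (plus stop)

# A fragment of the standard (DNA-strand) code, enough to show redundancy:
code = {"CAT": "His", "CAC": "His",                   # histidine: two codons
        "GAA": "Glu", "GAG": "Glu",                   # glutamate: two codons
        "TTA": "Leu", "TTG": "Leu", "CTT": "Leu",
        "CTC": "Leu", "CTA": "Leu", "CTG": "Leu"}     # leucine: six codons

# "Arbitrariness": swapping assignments yields an equally well-formed code.
alt_code = {c: {"His": "Glu", "Glu": "His"}.get(aa, aa) for c, aa in code.items()}

def translate(dna, table):
    return [table[dna[i:i+3]] for i in range(0, len(dna), 3)]

print(translate("CATGAG", code))      # ['His', 'Glu']
print(translate("CATGAG", alt_code))  # ['Glu', 'His']: CAT "could have" coded Glu
```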
does the genetic code have (original) intentionality?

But even granting arbitrariness, there is another, more significant requirement that must be satisfied by the genes if they are literally to be said to carry information in a way that the rest of the causally necessary conditions for development and heredity do not. At a minimum, the three nucleotides of a codon must carry information about a particular amino acid. The codon has to be "about" the amino acid in the way that cat is "about" felines. A codon's being about an amino acid is a matter of the codon's containing information that, for example, the anticodon of the amino acid's transfer RNA should have a certain sequence. The "aboutness" or "content" of a codon is the requirement that the informational state of the nucleic-acid sequence must be, in the philosopher's argot, "intentional." Molecular biology is, of course, riddled with intentional expressions: we attribute properties such as being a messenger ("second messenger") or a recognition site; we ascribe proofreading and editing capabilities; and we say that enzymes can discriminate among substrates (as when "synthetase avoids hydrolyzing isoleucine-AMP, a desired intermediate"). Even more tellingly, as we have seen, molecular developmental biology describes cells as having "positional information," meaning that they know where they are relative to other cells and gradients. The naturalness of the intentional idiom in molecular biology presents a problem. All these expressions and ascriptions involve the representation, in one thing, of the way things are in another thing. A human messenger carries a message, which represents the sender's thoughts; recognition involves representing the item recognized as falling under a certain class or satisfying a description; proofreading and editing require making a comparison
between the way things are written and the way they were intended to or should be written. Positional information is used to recognize whether you are in the right location. The naturalness of this idiom in molecular biology is so compelling that merely writing it off as a metaphor seems implausible. Be that as it may, when it comes to information in the genome, the claim manifestly cannot be merely metaphorical, not, at any rate, if the special role of the gene is to turn on its informational content. But to have a real informational role, the genome must have intentional states. So, at any rate, both the genocentrists and their opponents agree (compare Griffiths 2001 and Maynard Smith 2000). Exactly what is intentionality? Intentionality is best introduced by examining a paradigm case of informational content, such as belief states. If Joe believes that Superman was born on Krypton, then his state of belief "contains" the proposition that Superman was born on Krypton; the belief is "about" Superman, and attributes to him a property, that of being born on Krypton. The example uses Superman in order to reflect the fact that what a belief is about need not exist, nor need it attribute a property that anything actually does have. What most clearly reflects the intentionality of beliefs, their containing propositions and being about or "directed at" objects that may or may not exist, is an extremely interesting logical feature they all share. Suppose that Lois Lane believes that Superman was born on Krypton. Now substitute for Superman the words Clark Kent. Now consider the claim that Lois Lane believes that Clark Kent was born on Krypton. This claim, derived by substituting "equals for equals"—Clark Kent for Superman—is presumably false. Here is another way a statement about what Lois believes can be changed from true to false just by substituting equivalents.
Since Krypton is the only planet that blew up just after Superman's departure from it as an infant, we can substitute "was born on the only planet which blew up just after Superman's departure from it as an infant" for "was born on Krypton." Since Lois presumably doesn't know anything about the history of Krypton, the resulting statement that she believes Superman was born on a planet that blew up just after his departure is false. By making an innocent substitution in the contained statement (Superman was born on Krypton) that preserves its truth, we have changed the containing statement (Lois believes that Superman was born on a planet which exploded . . . ) into a falsehood. (Note that for the points made here, it is irrelevant that Superman, Lois Lane, and Krypton are all fictional items. The same conclusions are illustrated by Cicero and Tully, the morning and evening stars, and other coreferring terms familiar to philosophers since the work of Frege and Russell.) This feature of intentional states is not the whole story about them, but for our purposes it may be enough, as we now have a useful test of whether a state is intentional. If the state of one thing is about another thing, or represents it, or has informational content, then the truth or falsity of a description of the
representing state should be sensitive to the way its content—the represented state—is described. In the case of beliefs, desires, hopes, fears, plans, actions, and other human psychological states and their effects in action, the source of intentionality, and of the sensitivity of these states and actions to how they are described, is, of course, their all being states that rely on thought. Since thinkers are not omniscient, there will be lots of ways of describing their objects of thought that thinkers don't recognize as true of their objects of thought. Consequently, we get the sensitivity to substitutions of descriptions that is the hallmark of intentionality. Now, for nucleotide sequences to carry intentional information, as those who accord them an informational role in development require, the descriptions of the information the sequences contain must be sensitive to the terms in which we describe that information. When we say that CAT means "histidine," or refers to histidine, or carries information about histidine, it will have to be the case that there are some ways of describing histidine which we could substitute for the word histidine in the statement "CAT means histidine" or "CAT represents histidine" or "CAT is about histidine" that would convert the statement from true to false. But there are no such descriptions. Consider the following attempts to produce such a falsehood. They will be bizarre, perhaps even funny, certainly without scientific interest. But none of them will be false. CAT means the only amino acid spelled with an initial h in English; CAT represents Francis Crick's favorite molecule; CAT's informational content is about a molecule whose chemical structure is symbolized like home plate is in baseball, with a cross in the right-hand batter's box. There is a more general way to see the problem if we consider how advocates of the informational character of the genetic code argue for its having intentional content.
The distribution of black marks on white paper that forms cat means the domesticated feline owing to our endowing the marks with meaning, interpreting them as a sign for cats. Roughly speaking, the intentionality of the marks-on-paper cat is derived: it gets its intentionality from our beliefs about the references of particular English words and our desires to communicate our thoughts about cats to others. What gives the codon CAT its meaning? The standard argument is that natural selection does so. That is, CAT has been selected to mean histidine by a schedule of variation and selection, which led to the ubiquity of the code in which CAT is the histidine codon. Notice this claim assumes that several theories of the origin of the code are false, for example the stereochemical theory, and it may be incompatible with the claim that the code is arbitrary. Let's leave these problems aside in what follows. There are more serious problems for this proposal about how CAT gets to mean "histidine." So, CAT's functional role in protein synthesis, which is the result of natural selection, confers upon individual CAT codons their intentionality. This intentionality is presumably not derived but original intentionality. For presumably, Mother Nature doesn't have her own "free-floating" desires, beliefs, or other intentional states. To suppose otherwise is to accord to nature, or whatever it is that natural selection depends on, the very sort of mentality, purpose, or design that William Paley hoped for and Charles Darwin expunged from nature. So, the question that needs to be addressed is whether and how natural selection can produce intentionality. And on this question, the jury is, so to speak, in: it cannot. And the reason is that the functions which natural selection accords to structures—molecules, organelles, cells, tissues, and, for that matter, organs and organisms, and so on—don't produce the sensitivity to alternative descriptions of their functional "content" which intentionality requires.2 To see why, consider a case where content seems attributable without any controversy. The brain of the frog has been programmed by natural selection to cause the frog to flick its tongue out in exactly the right direction and at exactly the right time to trap a fly at location (x,y,z,t). Can we attribute to the state of that brain—or some component of it, perhaps a few thousand of its neurons—the intentional content "fly at x,y,z,t"? One would think so. After all, that brain state was selected for phylogenetically in the frog lineage, and ontogenetically in this particular frog because it led to environmentally appropriate tongue-flicking. But consider: as a result of the same history of selection, the same brain state will also cause the frog to flick its tongue out at a BB at x,y,z,t. So, the brain state's content must be "fly or BB at x,y,z,t." And, of course, it will also stick out its tongue at a black currant at x,y,z,t. So, is the brain state's intentional content about the disjunction fly or BB or black currant?
2. Philosophical digression. Here I appreciate that I take sides on the future success of an important and attractive research program in psychology: teleosemantics, to which philosophers such as Dennett, Dretske, Millikan, and Neander among others have committed themselves. While I believe that teleosemantics will give us the right theory of cognition, I doubt it will give us referential opacity, and therefore I doubt it will give us intensionality. Accordingly, it cannot explain intentionality. Instead, teleosemantics will explain away the appearance of intentionality in mental representation. But even if I am wrong about the future success of teleosemantics as a research program in psychology, for reasons to be given in the next three sections, it would still be gratuitous to describe a gene sequence as carrying information with intensionality and/or intentionality.

Of course, our imagination is the only limit on our ability to further and indeed endlessly expand this disjunctive list of what the content of the frog's brain state is. In other words, by making truth-preserving substitutions for fly in the sentence "There is a fly at x,y,z,t," we cannot convert to a falsehood the statement that the frog's brain state contains "fly at (x,y,z,t)." And, of course, there is another problem: fly means the same as mouche in French, Drosophila in Latin, and so on. Thus, we can substitute the word for "fly" in any human language into the sentence that gives the content of the frog's brain state, without changing from a truth to a falsehood the claim that the brain state contains the statement that there is a fly at x,y,z,t (compare Lois Lane's beliefs about Superman). The upshot is that if natural selection does accord the frog's brain state with content, it is not intentional content. And, of course, the same argument can be advanced for the claim that the codon CAT is about, means, or represents histidine. The codon's content is not intentional. So, it's not informational in the required sense.

There is another way to see that the genetic program that builds the Drosophila embryo does not bear information in the way required to differentiate it from other factors necessary for development. This way trades on an argument famous in the philosophy of mind for purporting to show that an important research program in cognitive science is based on a puerile mistake. The research program is that of so-called strong artificial intelligence (AI), which treats the mind as software and the brain as hardware. It argues that we can understand cognition without first learning a lot about the brain, by identifying the program—the software—which runs on our "wetware," our brains. Computers, after all, engage in computation, detection, and information storage, all by following a program. And many physically different computers (Macs and PCs, for example) do so by following the same program. So, the argument goes, if AI can uncover the program that human cognition instantiates in the brain, we will have understood the mind. Cognition is, of course, "about" something, has content, and so is intentional.
If strong AI’s research program were to succeed, then showing that the brain’s following a program may be sufficient for intentionality might provide a powerful argument for the claim that the embryo’s following a program may be sufficient for intentionality, albeit of a far less “thoughtful” kind. The conception of cognition as following a program has, however, been subjected to the following counterexample due to the philosopher John Searle (1980).3 Suppose you or I are placed in a windowless room; cards containing 3. The next few pages rehearse Searle’s infamous “Chinese-room” argument against the claim that following a program has anything much to do with cognition. Philosophers who accept the conclusions of this argument can skip them. Those who reject Searle’s argument should read on to see that the present employment of Searle’s argument does not require that one accept its conclusions about strong AI. Rather, it is used to show that the debate about whether genes carry information in the intentional sense that both genocentrists and their opponents require is irrelevant to the claim that they have a distinctive role in programming development.
ink marks are slipped into the room through a slot. The marks are meaningless to us, like inkblots or Jackson Pollock reproductions or just random squiggles. In the room with us are a large file cabinet filled with other inkblot cards and a volume the size of a large phonebook; on each page of this volume are two matched columns of pictures of squiggle cards. We are instructed to examine each incoming card, find its picture in the left-hand column of a page in the volume, and then, seeing what inkblot it matches up with in the right-hand column, we must seek the matching card in the file cabinet and slip it out the slot through which the first card came. After a long time, we become rather good at this; we're able to locate pictures of cards quickly and put our hands on the matched cards with unerring accuracy. These cards contain Chinese pictogram sentences. We don't know this, however. It's not just that we can't read Chinese—we don't know that there is such a language, or that some languages use pictograms, or even that these inkblots are the products of human artifice. All we are doing is, in effect, following a program that can be given in a standard computer flowchart: look at incoming card; compare incoming card to pictures in book; identify outgoing card-picture; find corresponding card in file cabinet; put outgoing card through slot. Now, as it happens, the cards are Chinese pictograms, and they are being painted and then fed into the slot by Chinese people. Mostly these cards are questions; some are answers to questions. The cards we slip through the slots are also Chinese answers to questions, though some are themselves questions the Chinese speakers paint cards to answer.
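The flowchart just given is literally programmable: a pure lookup from incoming card to outgoing card, executed with no grasp of what any card means. A minimal sketch (Python; the card names are placeholders, not real Chinese):

```python
# The room's rule book: a pure lookup pairing incoming squiggle-cards
# with outgoing squiggle-cards. Card names are invented placeholders;
# only the shape of the program matters.
RULE_BOOK = {
    "squiggle-1": "squiggle-7",
    "squiggle-2": "squiggle-4",
    "squiggle-3": "squiggle-9",
}

def chinese_room(incoming_card: str) -> str:
    # The five steps: look at the incoming card; compare it to the
    # pictures in the book; identify the outgoing card-picture; find
    # that card in the file cabinet; put it through the slot.
    return RULE_BOOK[incoming_card]

# The room answers "correctly" every time, yet nothing in it grasps
# what any card means.
print(chinese_room("squiggle-2"))
```

The occupant, like this function, operates purely on the shapes of the inputs; that is the feature Searle's argument exploits.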
Remarkably, to someone who can understand Chinese, the sequence of cards is a perfectly intelligible conversation between the Chinese speakers, who have the appropriate Chinese intentional states that provide the cards they slide into the slot with meaning—that is, derived intentionality—and whose intentional states accord the cards we slide out with meaning—again, derived intentionality. But, of course, you or I in the room don't have any of the relevant intentional states that would enable us to accord the cards meaning; all we are doing is following a program! Ergo, following a program is not sufficient for intentionality of its inputs and outputs. It may not even have anything to do with intentionality, content, meaning, "aboutness," information, and so on. Even if following a program is what the mind does when it is engaged in cognition, program-following will at most be an uninteresting, merely necessary condition for intentionality. So, at any rate, Searle concludes. More broadly, he argues, computers don't compute, if by compute we mean a cognitive process that is intentional. On his view, we, whose brains manifest nonderived, original intentionality, use computers to compute: we interpret the pixels on their screens and the printouts from their printers and the synthesized noise coming out of the speakers attached to them as having derived intentionality, no different from the derived intentionality we attribute to the ink marks you are seeing now. How our brains implement, realize, or instantiate nonderived intentionality is a mystery Searle does not solve. There are many responses in the philosophical literature to Searle's "Chinese-room" argument against strong AI's claim to explain intentional cognition as following a program. And the defenders of the informational role of the genetic code had better concern themselves with these replies, for they will need them. Searle's argument may not prove that cognition is not a matter of following a program, but it does show that following a program is not sufficient for intentionality in the sense required for the transmission of information. However, it turns out that genocentrists need not concern themselves with these matters, because for their purposes Searle's argument may have proved too much. For it, in fact, gives us a way to see the genetic program's special role in development even while it has, at most, derived intentionality—derived, that is, from our interpretation—and does not transmit information, at least not independently of us and our interpretations. The genocentrists, it turns out, can claim for the genes the special role in development they require without needing to argue that the genes bear information at all.
dna computing and the irrelevance of original intentionality

Let's assume that Searle is correct: we compute, using computers to do so, and we accord content to their inputs, their outputs, and the programs they run by interpreting these as representing cognitive states of our own, which have nonderived intentionality. The software/hardware distinction familiar to us reflects the fact that many physically different pieces of hardware can run the same programs. Both Macs and PCs, for example, can run Word for Windows programs, but "realize," "instantiate," and "implement" these programs on physically different types of microprocessors and other pieces of hardware. They do so in part because higher-level programs like Word for Windows get rewritten in the proprietary assembly languages that differ from manufacturer to manufacturer of the computers we employ. Indeed, for the purposes of high-speed intensive calculation of the sort required to deal with the most difficult problems, a high-level program can be implemented on, say, a single Cray supercomputer and on a system of PCs running in parallel. The architecture and the components of the hardware will be quite different, even as some of the programs running on them are the same. In recent years, computers have been used to help complete proofs in mathematics for theorems that otherwise would have remained in doubt: the four-
color theorem and Fermat's theorem are two examples that spring immediately to mind. Here, of course, following Searle, we cannot say that the computers proved lemmas or intermediate theorems needed by the mathematicians proving the theorems. Rather, the mathematicians used the computer programs, interpreting the computer inputs and outputs in such a way as to themselves prove the theorems (theorem-proving is an intentional act). One interesting fact about even the most powerful of current computers is that there are some computational problems which are too complex for them to be employed to solve. Among these are calculations involving the so-called NP-hard problems (NP from nondeterministic polynomial). Among the most famous of these NP-hard problems is the traveling salesman problem: for any finite number of cities to be visited by a traveling salesman, given the distances between them, find the route that minimizes travel distance while enabling the salesman to visit them all and return to his starting point. There is no efficient general method for finding the shortest route. Once the number of cities grows to more than several dozen, finding the answer to this question will require more computer power and time than is available on even the fastest supercomputers in the largest network of parallel computers employing deterministic programs. Approximation methods on nondeterministic (parallel) computers must be employed to solve this problem, and there is no implementable algorithm for establishing that the solution derived is optimal. What does all this have to do with the genetic code? Well, it has been shown that when the number of cities to be visited grows beyond 50 or so, a computer composed of strands of nucleic acids can provide a better approximate solution to this problem faster and more cheaply than any computer based on a silicon-chip microprocessor.
Of course, both computers will run the same high-level program (though their "assembly languages" will implement the higher-level program differently), but the DNA computer produces a result faster and more cheaply. The high-level program consists in the following five steps. Suppose the number of cities to visit is n. First, one generates a large number of random paths through some or all of the n cities. Next, screen the set of paths, eliminating any path that does not return to the starting point; then eliminate all paths that do not pass through exactly n cities; then eliminate all those which miss any city; and finally, eliminate those paths which do not enter all cities at least once. Any remaining paths will be solutions, though perhaps not optimal solutions to the problem. It sounds simple, but the combinatorial possibilities rapidly become unmanageably great. The fastest supercomputers can perform 10⁹ calculations per second, and this is too slow to provide an answer when the number of cities begins to exceed 70. But a DNA computer can be constructed out of strands of molecules which can be combined, preferentially
annealed to one another, amplified by the polymerase chain reaction, cut by restriction enzymes, ligated at their sticky ends, and detected by electrophoretic techniques. By the mid-nineties, molecular biologists and computer scientists had designed DNA computers that can solve simple versions of the traveling salesman problem and can in theory solve complex ones that are beyond the powers of the silicon-based supercomputer. This should, of course, not be surprising. Molecular interactions enable the DNA computer to perform 10¹⁴ calculations per second, a hundred-thousand-fold improvement over conventional microchip computers, at an energetic cost that is a staggering 10¹⁰ times lower than that of a contemporary silicon-based computer. Moreover, the DNA computer stores data at a density of approximately 1 bit per cubic nanometer, compared with existing storage media, which record 1 bit per 10¹² cubic nanometers. It's no surprise, therefore, that a DNA computer can provide results of equal accuracy by extraordinarily cheaper and faster means than a conventional computer (see Adleman 1994). The upshot is obvious. If a DNA computer can implement the same program that a silicon-chip-based computer can, then in whatever sense a computer follows the program, the DNA computer does too. If a DNA computer can more effectively and efficiently deal with a larger problem (for instance, an NP-hard problem) than any silicon-chip-based computer—serial or parallel, super or not—then surely it can deal with a simpler problem such as implementing the program that builds the Drosophila embryo. How can we be confident of this? Well, the program that the DNA implements can also be run as a Pascal program on a Mac or PC laptop or desktop.
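The five-step generate-and-filter program described above can indeed be rendered in any conventional language. A toy silicon sketch (Python; the city names, path counts, and random seed are arbitrary illustrative choices):

```python
import random

# A toy silicon rendering of the generate-and-filter program described
# in the text. In Adleman's experiment the "generation" step is massively
# parallel DNA annealing, and the screening steps are carried out with
# PCR, restriction enzymes, and gel electrophoresis; here plain lists
# stand in for molecules.
CITIES = ["A", "B", "C", "D", "E"]
n = len(CITIES)
random.seed(0)

# Step 1: generate a large number of random paths of varying length.
paths = [
    [random.choice(CITIES) for _ in range(random.randint(3, 8))]
    for _ in range(200_000)
]
# Step 2: eliminate any path that does not return to its starting point.
paths = [p for p in paths if p[0] == p[-1]]
# Step 3: eliminate paths that do not pass through exactly n cities
# (a closed tour of n cities has n + 1 entries).
paths = [p for p in paths if len(p) == n + 1]
# Steps 4 and 5: eliminate paths that miss any city, i.e., that do not
# enter every city at least once.
paths = [p for p in paths if set(p) == set(CITIES)]

# Whatever survives is a tour, though not necessarily a shortest one.
print(f"{len(paths)} candidate tours remain; one of them: {paths[0]}")
```

The brute-force character of the procedure is the point: the silicon version examines candidates serially, while the DNA version generates and screens them in massively parallel chemical steps.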
And, of course, if the program that the Mac runs to produce a printout of successive stages of the embryo is the same one that the embryo's genome itself runs to build successive stages of the Drosophila, then they must literally both be running the same software. Recall that many physically different computers hooked up to quite different input and output peripheral devices can run the same programs. But the silicon-chip-based computer is an information storage and processing device owing to our interpreting its inputs, outputs, and internal states as having derived intentionality, like any other artifact. The Drosophila genome was constructed not by us but by evolution, and evolution, I have argued, cannot confer intentionality, derived or nonderived. Ergo, the objection concludes, the Drosophila genome does not bear information, whether it is literally a program or not. This objection misses several points. First, it needs an argument to show that the causal origin of a computer is relevant to its functioning as one, that is, to our regarding the program it runs as having derived intentionality. Second, we may even grant that before the revolutionary developments in molecular
biology, no cognitive agent knew the program which the Drosophila genome runs, and it is only in the last decade or so that we have been able to accord the program derived intentionality. This means, however, that long before its first interpretation by molecular biologists, the program the genome implements could have been accorded a kind of possible derived intentionality or capacity for derived intentionality, and this is a counterfactual property that apparently nothing else in the developmental life cycle of the Drosophila could have had. For those, like me, to whom counterfactual properties provide cold comfort, there are more powerful considerations to mount against the argument that the genome’s recent acquisition of derived intentionality disqualifies it from a unique role as the program of development. First, notice that derived intentionality requires original intentionality in us. And if natural selection is the process that made our brains, it is hard to see where they got their original intentionality. The problem of naturalistically explaining the original intentionality of the human (and infrahuman) brain is perhaps the most serious fundamental challenge facing neuroscience and its philosophy. No one has yet solved it. Pending the solution, it would be unwarrantably complacent to help oneself to the problematical notion of a mysterious nonderived original intentionality in us in order to argue that the Drosophila genome cannot have the derived intentionality of our artifacts. Indeed, if human intentionality turns out to be derived from some evolutionary process as yet unimagined (and it will have to be unimagined so far, if it is to prove “unmysterious”), it will turn out that both artifacts and genomes will be on a par, neither of them deriving their intentionality directly from something with nonderived intentionality, and both tracing their intentionality back to evolution by natural selection. 
Of course, as I indicated above, I am dubious that natural selection can actually produce original intentionality in the brain or anywhere else, and so it cannot produce derived intentionality either. Both will, on my view, turn out to be illusions, like the purposes we overlay on nature and that natural selection has dispelled. But this is another story in the philosophy of psychology, and we need not pursue it further for present purposes. The crucial question is not intentionality but programming! Whether intentional or not, what seems hard to deny is that the Drosophila genome programs the embryo. The only real issue is whether it is unique in doing so. Opponents of the genocentric approach to development will want to deny either of these statements. The argument given above, that the genome has no privileged informational role, will not support the denial of the claim that it programs the embryo, since a (nonderived) informational role is evidently not required for the genome to program the embryo. Is it plausible to suppose that other components of the life cycle of the Drosophila have a coequal role in programming
the embryo? Could the various gradients and other distributions of maternally laid-down molecules and maternal mRNA in the ovum also be said to program the embryo, or, for that matter, as Evelyn Fox-Keller suggests, to program the genes? She writes, “Does the word genetic refer to the subject or to the object of the program? Are the genes the source of the program, or that upon which the program acts?” (Fox-Keller 2000, p. 87) One reason Fox-Keller seems inclined to treat the genes as the program’s subject or input is that she treats the notion of a program here as a lively metaphor, reflecting provocative analogies from computer science to molecular biology: “Without question, computers have provided an invaluable source of metaphors for molecular biology, the metaphor of a program being only one of many. . . . Compelling as the analogy may be, equating the genetic material of an egg with the magnetic tape of a computer does not imply that the [genetic] material encodes a program” (Fox-Keller 2000, p. 81). The word program is here just a metaphor for “to effect,” “change,” or “bring about.” But, of course, Fox-Keller’s mistake is not taking the language of program literally. Taken literally, there is no question of the genome being programmed by the resources that it works with to create the Drosophila embryo. Not to put to fine a point upon it, we can be confident of this conclusion just because other than the gene, nothing else involved in the creation of the Drosophila embryo could be employed to calculate the solution to a particular NP-hard problem. Of course, we can exploit almost any regular natural process to compute some function or other. Thus, by employing Ohm’s law, we can compute products and quotients given readings of voltage, wattage, and resistance. We can use the turning of leaves from green to red in the autumn and the rings the tree trunks form to compute seasons and years. 
Accordingly, it is not beyond the bounds of human ingenuity to employ regular biological processes besides those implicating the genes to solve some computational problems. These processes too will therefore turn out to be programs. If there are regularities relating nongenetic factors to developmental outcomes, then presumably these too may be construed or actually employed as programs. And, of course, we are familiar with such nongenetic factors—the epigenetic ones, such as methylation or the finch song on which widowbirds imprint. Will epigenetic regularities we are ingenious enough to employ to effect certain computations—ones that we could in any case carry out much more easily by other means—suffice to deny the genome its claim to uniqueness as the program for development? Well, it will certainly be sufficient for the developmental systems theorist, or others who wish to deny the genome an in-principle uniqueness. But I dare say it will not move the scientist, either the computer scientist or the molecular biologist. The reason is obvious. DNA computation is a practical reality, the basis of a device with profound advantages over silicon-chip-based computers, which can be programmed and reprogrammed to realize any algorithm silicon-based computers can implement. Moreover, we can even bug and debug the program, or at least its parts. As noted above, one change we can make in the program will produce the short-germ insect, a presumptive ancestor of the Drosophila; several others will produce outcomes we previously labeled as mutations. Even if we grant that there are generalizations about some biological systems regular enough to realize some mathematical function or other, and thus to be employed by us as computers, no one could reasonably describe these systems as capable of implementing a program of any real flexibility, power, and computational utility. Indeed, it could be argued that merely realizing a mathematical function that we could (though never would actually) employ to compute some ordered pairs of numbers is insufficient to mark something out as implementing a program. For some physical configuration of matter to constitute a system that implements a program, it should nontrivially and literally instantiate the hardware/software distinction. That is, we should be able to identify other physical configurations that will implement the same mathematical function, reflecting the multiple realizability of programs by physically different processors; and we should be able to identify other programs that changes in the original physical configuration will enable it to realize. Notice that the genome and the genetic code satisfy both of these requirements. Doubtless, clever philosophers will be able to identify a small number of cases elsewhere in the biosphere that can also do so. There are two things worth bearing in mind about such cases. In order to be reliable and useful in real computation, they will have to reflect biological regularities that operate with close to invariable regularity and at fairly high speeds.
Ergo, they will involve molecular interactions, not the sort of factors that developmental systems theorists are likely to attempt to put on a par with the genome (for example, beaver dams or finch songs). They will not be anything as ubiquitous in the biosphere as the hardware/software configuration of the genome and the genetic code. The genocentrist can accept such examples of other programs with equanimity, for the assertion that the genome is unique in its programming of all development cannot be undercut by the recognition that there may be other programs operating at the molecular level, or even by one or two such molecular programs also being involved, along with the genome, in the programming of development for some of the organisms that the DNA programs.
the once and future gene

The final problem facing the genetic program as an explanation of development is, according to some, the demise of the gene concept and the denial that there
are any such things, properly so called, as genes. According to influential commentators on the subject, gene is a kind-term that has outlived its usefulness, and will shortly go into eclipse:

Even though the message has yet to reach the popular press, to an increasingly large number of workers at the forefront of contemporary research, it seems evident that the primacy of the gene as the core explanatory concept of biological structure and function is more a feature of the twentieth century than it will be of the twenty-first. What will take its place? Indeed, we might ask, will biology ever again be able to offer an explanatory framework of comparable simplicity and allure? (Fox-Keller 2000, p. 9)

No gene, no genome. No genome, no hardware for the genetic program to run on. No program, no simple and alluring explanation for development. And with no simple and alluring explanation for development, reductionism as a research program must be wrong. The argument for this conclusion traces the changes in the concept of the gene over the course of its hundred-year history, and concludes that it does not carve nature at the joints: there is no one type of thing that is properly called a gene. And if gene doesn't name a type of thing, it does not name a thing with the sort of causal role that explanations in molecular developmental biology require. It is certainly correct that the concept of the gene has undergone very great changes over the course of the century recently ended. Many firmly held beliefs about the gene have had to be surrendered, and surprises have repeatedly staggered geneticists just when they thought they had finally come to grips with the complications the gene presents. But complicated though the vicissitudes are through which our best guesses about the genes have gone, reporting the gene's imminent death would be premature, to say the least. The premises in Fox-Keller's argument are correct; it is the conclusion that does not follow.
In the remainder of this chapter, I trace out philosophically significant episodes in the twentieth-century history of the gene, treat some of the apparent barriers to a reductionistic research program it has been thought to raise, and show that in the end, a proper view of biology as history ensures the continued relevance of the concept "gene" and makes it hard to deny the reality of the genes. That contemporary biology views the reports of the gene's death as exaggerated seems obvious. Consider only the general dispute about exactly how many—or rather, how surprisingly few—genes there are in the human genome, somewhere between 29,000 and 35,000. This claim, which some sharply dispute, would indeed be without sense if there were really no such things as genes. Of course, no one can predict the future course of scientific theorizing, and this includes those who foresee the eclipse of the gene as much as those who do not. So, a lengthy report about the indispensable role that the concept has recently played in biology will prove little about the future. But it can show that for all the scientific complications in our concept of the gene, there are no logical, metaphysical, epistemic, methodological, or evidential barriers to its unity, coherence, existence, and unique causal role as providing the program of development and the unit of hereditary transmission. Of course, the denial that there is any such thing as the gene might just be a picturesque way of making the philosophically and methodologically important point that the concept of the gene is not a natural kind, not a type of thing which we should expect to figure in fundamental biological theory or strict laws of nature. This is a conclusion that may surprise many biologists. But it will be easy to show that the notion of "gene" does not cut nature at the joints. That is, it is not one that will figure among the basic explanatory kinds in natural science's final and fundamental description of the furniture of nature. In this sense, however, the notion that there is no such thing as the natural kind "gene" is perfectly compatible with the existence of 29,000 individual genes in the individual human's genome, and half that number in the individual Drosophila's genome. It is these gene tokens to which biology is committed, and it is owing to these genes that molecular biology's explanatory framework of "comparable simplicity and allure" remains intact. The history of the gene concept is a history of increasing success in one respect and increasing failure in another. It has been a success insofar as the relentlessly reductionistic project of genetics has successfully located, counted, and finally taken apart the genes into their component pieces.
It has been a failure in that the same relentlessly reductionistic program has shown that the molecular details are too complex, disjunctive, and synergistic to vindicate any single, manageably long but complete reductionistic account of what a gene is. The history is also well known, both to molecular biologists and to philosophers of biology who have struggled to explicate the former’s conviction that molecular biology tells us exactly what a gene is, what it’s made of, and how it works. Having hypothesized genes to explain the assortment of traits in Mendelian ratios, geneticists were obliged to say what exactly genes are, to provide an account of how they work independent of the traits whose assortment they are called upon to explain as phenotypes. Otherwise, genetic theory would be taxed with the same charge of circularity that vexed premolecular developmental biology. It would be too much like Dr. Pangloss’s explanation of why opium puts people to sleep: owing to its dormitive virtue. How do we know opium has a dormitive virtue? Because it puts people to sleep. Why do the offspring of purebreds share the same phenotype? Because they share the
same genotype. How do we know they share a genotype? Because they breed true. Since the gene was originally defined in terms of its function—its effects in inheritance and development of phenotype—early on, geneticists recognized the need to explain how it did so. The first step in doing so was to locate the genome; the second was to identify its structure. The task did not look much different from that facing atomic theory in light of Mendeleev’s successful organization of the elements into the periodic table. In that case, what was needed was an account of the fundamental constituents of the elements which would explain their known chemical affinities to one another; their physical properties, such as state, specific gravity, and conductivity; and, finally, their order in Mendeleev’s table of the elements. The solution to almost all of these explanatory challenges was subsequently provided by atomic theory, which identified the structure of the fundamental units of the elements—atoms—and showed how their individual structures explained their aggregate effects, as reflected in Mendeleev’s table. The same challenge appeared to face genetics. In retrospect, it should not have been surprising that this challenge of characterizing the gene’s structure would be far more difficult to meet. After all, unlike the element or the atom, the gene, along with everything else biological, is a local phenomenon, produced by natural selection working on local conditions. Since natural selection is blind to physically different structures with identical or even just similar effects for their own survival and replication, it’s no surprise that even at the level of nucleic acids, many different structures will be selected to perform the function of the gene.
If the number is large enough, then the complete account of the structure and operation of the gene, that is, whatever it is that performs the pertinent hereditary and developmental roles, will be far from simple and uniform. Indeed, in retrospect, it should be surprising that the facts of heredity and development don’t turn out to be even more complicated than they are! We might well have expected that genes are composed of many different molecules in quite different arrangements. The fact that all genes are composed of nucleic acids, and almost all genes of DNA, is by itself significant in light of natural selection’s blindness to structure. For it suggests that either (1) there really is only one way to perform the hereditary and developmental functions, so that if there is life anywhere else in the universe, its genetic machinery is nucleic acid based; or (2) here on Earth at least, DNA-based heredity and developmental control is so much better than every other competing physical system of heredity and developmental control that it defeated all these rivals a very long time ago. In the task of providing an account of the gene, location came first. Weismann had localized the hereditary factors to the chromosomes by the late 1880s. Since
in experimental systems employed in the early twentieth century the chromosomes reflected the same sort of segregation and independent assortment as do genes (identified by their apparent phenotypical effects), the localization of genes to the chromosomes was firmly established. With the sustained focus on Drosophila as the model system of choice, and the discovery of a large number of Drosophila traits that honor Mendelian regularities, along with the polytene chromosomes in their salivary glands, it became feasible to measure relative distances among genes on chromosomes in recombination units of centimorgans. Still there was no way to divide up the chromosome into genes except by tracking the chromosomal segments’ effects on phenotypes. To begin to do this, one first has to be able to count the number of genes on a chromosome. To do this, to “individuate” a gene, to be sure that one has not double-counted or missed one out, requires that each gene have a unique effect or product. If gene is a “count noun,” such as chair or electron, as opposed to a “mass noun,” such as wood or snow, then individuating genes requires either knowing their composition so that we can tell where one ends and the one next to it begins, or knowing their unique effects. Answering the composition question about unobservable entities seems harder than answering the unique-effects question, since these effects need not be unobservable, and in some cases were known not to be. But it was always obvious that most traits of organisms are quantitative, come in degrees or amounts, and involve the effects of many genes. No such quantitative polygenic traits can enable us to individuate genes. And most observable traits we pick out are known not to be phenotypes, just because they don’t assort in anything like Mendelian ratios.
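The recombination mapping just mentioned can be put in a line or two of code. The sketch below is illustrative only (the function name and the test-cross counts are invented); the one substantive assumption carried over from the text is the classical convention that one centimorgan corresponds to a 1% recombination frequency.

```python
# Illustrative sketch: estimating the map distance between two linked loci
# from a test cross, on the classical convention that 1 centimorgan (cM)
# corresponds to a 1% recombination frequency. All counts are invented.

def map_distance_cM(recombinant_count, total_count):
    """Recombination frequency between two loci, expressed in centimorgans."""
    return 100.0 * recombinant_count / total_count

# Hypothetical test-cross data: 830 parental-type offspring, 170 recombinants.
distance = map_distance_cM(170, 830 + 170)
print(distance)  # 17.0 cM
```

In practice this simple proportionality holds only over short distances, since double crossovers go undetected; classical mapping functions correct for that.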
If by phenotype is meant the single effect or product in development and adult function that assorts in the same way the gene does, and thus can distinguish the presence from the absence of the gene, that is, can individuate and enable us to count genes, then almost nothing we can observe with the unaided eye is a phenotype. But, of course, it’s no requirement on phenotypes that they be so detectable. So, the conclusion widely accepted by the 1940s, that for each gene there is one enzyme which it produces that counts as its phenotype, should not be surprising. Demands on individuation, together with what Tatum and Beadle (1941) discovered in the early 1940s about how mutations in distinct genes in Neurospora interrupt synthesis of distinct enzymes, made it hard to avoid the conclusion that the only candidates for phenotypes strictly so called would be the enzymatic products of gene expression. This is a conclusion biologists may want to embrace for other reasons. Over the period of the twentieth century, geneticists persistently found more and more apparent exceptions to Mendel’s laws, that is, traits treated as phenotypic but whose segregation and assortment violated these laws: crossover, linkage, meiotic drive, autosomal phenotypes, and a host of other exceptions are well known. One way
to preserve Mendel’s laws in the face of these counterexamples to them was to deny that the traits which behaved in these unruly ways are phenotypes. Such a claim would be ad hoc, of course, unless we had a good reason independent of their violation of Mendel’s laws to exclude them. The one-gene/one-enzyme hypothesis had substantial evidence in its favor and stood a chance of individuating genes; moreover, the genes and enzymes it correlated would presumably both honor Mendel’s laws, or at least come closer to doing so than other gene/trait combinations. So, armed with the one-gene/one-enzyme hypothesis, or a slightly broader version, one gene/one polypeptide or protein, the biologist could in principle count the number of genes in a genome by isolating all the distinct enzymes or proteins synthesized at the ribosomes. Never mind that the ribosomal machinery was not discovered until two generations after the one-gene/one-enzyme theory was advanced. If one could extract all the enzymes or proteins that are not themselves intermediate or ultimate products but basic ones, then one could hope to count the number of different genes in the system that produced them. Or at least one could do so if there were only one copy of any gene in the genome. And this, too, was much later learned to be false. Still, the one-gene/one-enzyme hypothesis provided a start on individuating the genes, a way of getting at least an order-of-magnitude estimate of their number in any genome. Meanwhile, there was still the alternative approach of trying to individuate genes by figuring out where one ends and the next one begins. Of course, the one-gene/one-enzyme or polypeptide hypothesis could help in this project. If we know the number of genes on a chromosome by knowing all the polypeptides they express, then by knocking parts out of the chromosome and noting which enzymes are not produced, we can in principle learn whether any two parts of the chromosome are parts of the same gene.
If they have the same effect in enzyme nonexpression, they will be parts of the same gene; otherwise, not. But this approach would be made much more powerful by combination with the direct one of identifying the material composition of the chromosome and considering how the material structure could give rise to distinct enzymes. Of course, this is exactly what eventually happened. First Chagraff confirmed the genes’ chemical composition out of purines and pyrimidines, and their suggestively constant ratios. Then Watson and Crick provided the material structure, showing how the purines and pyrimidines were put together, concluding archly that the implications of this structure for hereditary transmission and developmental control “had not escaped” their notice. Molecular biology seemed well on its way to successfully individuating genes, and so vindicating the explanatory role that had already been accorded them in respect of heredity
and development. It was this culmination of the half century of success along the path to complete individuation that established the reality of the gene as the basic entity of heredity and development. It is worth momentarily contemplating what the consequence of failure in this enterprise of characterizing the gene’s structure would have been. Such failures are by no means unprecedented in science, and they pretty much have a common outcome. A predictively or otherwise instrumentally successful theory postulates items to which it gives names—sometimes mass nouns, such as phlogiston, more rarely count nouns, such as engram or crystalline sphere. The research program associated with, for example, the theory that memories in the brain are stored in engrams had the obligation to provide independent evidence for the existence of its postulated explanatory variable. Such evidence, in the case of a theory employing count nouns, such as engram, consists in the provision of criteria of individuation for the items its noun phrase refers to, criteria that enable us to improve on the predictions of the theory and to refine the precision of its generalizations as a consequence of locating, counting, and understanding how the “parts” of the entity fit together. Failure to provide such criteria of individuation with the anticipated improvement in predictive power is usually taken as strong evidence against the existence of the items postulated by the theory. In the case of the engram, no one thinks there are any. Had geneticists been unable to move successfully in the direction required to structurally individuate genes, with a payoff for enhanced prediction and explanation, the notion would have gone the way of the “engram” concept. But after 1953, the complications in the story began to set in. Again, they are well known to molecular biologists and to the philosophers who have sought to clarify the relationship between molecular genetics and the rest of biology. 
In the fifty years after 1953, it became increasingly apparent that the individuation project was going to be much more fraught than originally expected. Although it was now known that genes are sequences of nucleic acids, by itself this information provided no practical basis for individuation. Molecular biology was still limited to counting genes by counting their distinctive enzymes. There is a philosophical sidelight here, well known among philosophers of biology, that is worth briefly reporting to biologists. As noted briefly in the introduction, in the first decade and a half after Watson and Crick’s discovery, philosophers of biology were confident that it would enable them to show how Mendelian genetics is reducible to molecular genetics in a way that vindicated reduction by deductive derivation in physical science. All we needed to do was formalize Watson and Crick’s discovery into a definition of the gene in terms of nucleic-acid sequences. Once this was accomplished, we could show how
the behavior of the nucleic-acid sequences in meioses realizes Mendel’s laws of segregation and assortment. Two obstacles immediately intervened. First, there was the problem that the gene was defined as a unit of hereditary recombination, of phenotypic control, and of mutation; but it was known that each of these functions was realized by quite different nucleic-acid sequences, so that no simple identification of genes and nucleic-acid sequences was possible. Some philosophers explored the possibility of replacing the concept of “gene” with more fine-grained functional concepts, such as Benzer’s cistron, the recon, and the muton. But as these terms never really caught on among geneticists and, what is more, couldn’t deal with the second of the two problems a reductive definition faced, philosophers refocused their interest on a more nuanced analysis of the original concept of the gene. The second problem a reductive definition of the gene faced emerged with the redundancy of the genetic code and the functional neutrality of many nucleotide substitutions. One consequence of these two discoveries is that a given gene type, identified by its function in polypeptide synthesis, such as the hemoglobin gene or the insulin gene, could be structurally constituted (or as the philosophers came to put it, “supervened upon”) by a vast disjunction of different nucleic-acid sequences (which “multiply realize” it).4 While any one of these sequences could discharge the function of the gene, the list of all such sequences would at a minimum be very long, and potentially incapable of completion. But in that case, the prospects for a complete structural definition of any gene were bleak. Philosophers were quick to point out that this problem bedevils only the traditional reductionist requirement that the kind-terms gene and gene for . . . be connected systematically with a kind of nucleic acid structure. 
The complex nucleic acid chemistry of the genes does not undercut the conclusion that each particular “gene token,” on a particular chromosome in an individual cell of a single actual organism’s body, is “nothing but” a sequence of particular molecules. It only undercuts the integrity of the gene as a natural kind of the sort we are familiar with in physical science, such as atom, element, acid, catalyst. The identity of tokens in the absence of the identity of types was, however, disquieting to postpositivist reductionists. As we saw in chapter 1 (and will explore further in chapter 4), their felt need for type-identities reflects a number of misunderstandings about biology, notably its character as a historical science.

4. These philosophers’ notions, “supervenience” and “multiple realization,” are introduced in footnotes in chapter 1.
Meanwhile in biology over the next two decades, work to elucidate the mechanism of developmental control required the multiplication of types of genes: at first, regulatory versus structural genes. The distinction is not conceptually problematical, even though regulatory gene products (such as repressors) are polypeptides which operate to switch other, structural genes on and off. Regulatory gene products may not be garden-variety enzymes catalyzing reactions downstream from the genome, but their peptide products’ mode of interaction with the genes is still governed by standard enzyme kinetics. Besides regulatory and structural genes, there would have to be genes that code not for proteins or enzymes but for various kinds of RNA: messenger (mRNA), transfer (tRNA), ribosomal (rRNA). Much later, it was shown that mRNA could catalyze its own splicing, thus falsifying the generalization that all enzymes are proteins; the tRNAs and rRNAs are also not proteins. And in the decade after 1992, many more microRNAs (miRNA) essential to gene regulation but not translated into proteins were discovered, with many of their genes located in the exons, of all places, of protein-coding genes. But again, adding exceptions to the one-gene/one-enzyme hypothesis to accommodate these genes that did not produce enzymes was only a complication in the individuation project. Individuation can proceed on the “one-gene/one-enzyme or one-gene/one-regulatory-protein or one-gene/one-RNA-molecule” hypothesis. Of course, if regulatory proteins and various RNAs are necessary for the synthesis of an enzyme by a structural gene, then the DNA sequences that express these molecules may well be reckoned parts of the structural gene itself. Or if they are not so counted, a principled reason will be required for excluding them. Otherwise, the criterion of individuation for one structural gene will count all these gene sequences as part of it.
For without sequences coding for tRNAs and rRNAs, there would be no structural protein product attributable to any sequence of DNA. And once miRNAs are taken into account, no structural sequence by itself is sufficient for its polypeptide product. But we cannot just add these sequences to the structural gene with equanimity. To begin with, many of the sequences for the RNAs required, and many of the sequences for the regulatory proteins that turn a structural sequence on and off, are physically distant from the structural sequence. To include some or all of these spatially separated sequences in one gene would deny the gene the kind of spatiotemporal integrity, as a discrete thing, that common sense and previous science assumed it had. This is not an irremediable difficulty, merely an inconvenience. But there were several more to come. Of course, many of the genes for the RNAs needed to make one enzyme, and even some of the regulatory genes needed for it, are involved in the synthesis of other enzymes. So, it will turn out that a given DNA sequence—even quite a
long one—will be part of several different genes, on this view. And this would be another wrinkle in the individuation project. Is it better to exclude from the gene those sequences that play the same role in the production of more than one enzyme even if they are causally required for the synthesis of that enzyme? This tactic is, in effect, the surrender of the one-gene/one-enzyme hypothesis as a causal claim, since it will now turn out that the nucleic-acid sequence which matches the amino-acid sequence of a polypeptide is not enough to produce that polypeptide. We could, of course, treat the hypothesis as a slogan for the much more complicated procedure of counting genes by first, elucidating the causal roles of all the sequences required for a product; and second, excluding from the sequence that constitutes the gene for a given product any sequence which also plays a causal role with respect to the production of some other gene product, be it enzyme, regulatory protein, or RNA molecule. The one-gene/one-protein hypothesis thus identifies the gene by virtue of the only gene product it is causally necessary for. Genes causally necessary for more than one product won’t be correctly counted by this hypothesis. Will there be many such genes, besides those which code for RNAs, and multigene promoters and repressors? At this point, it began to seem to some philosophers that there might be no fact of the matter here concerning exactly how many genes there are, for different proposals about how to combine sequences into functional genes produce different counts of the number of genes, and there appears to be no nonarbitrary way to choose among them. Biologists were, however, not yet daunted. More complications were to come. 
And the complications are to be found at both ends of the causal chain, from DNA to protein end products, and in the middle of the chain as well: among the protein end products; in the middle, at the messenger RNA transcribed from the gene sequence; and within the gene sequence itself. First of all, there is the problem for individuating genes raised by the discovery of introns and exons. These are not new problems for counting genes. As we have already seen, individuation can live with a gene composed of spatially distributed noncontiguous sequences. But the existence of introns certainly doesn’t add to the physical integrity of the gene. Second, there is the self-splicing of mRNA to remove introns, and more important, the posttranscriptional modification of messenger RNAs (mRNAs) prior to their translation into proteins. Here too there will be a variety of genes producing the machinery for posttranscriptional modification. These will presumably be genes for enzymes that catalyze the modification of mRNAs, thus necessary for the ultimate product but not part of the nucleic-acid sequence that the product will individuate. Then there is posttranslational modification of inactive proteins into active ones
and the silencing of some genes by microRNAs digesting their mRNA. Again, the nucleic acid machinery necessary for this modification cannot be counted as part of the gene for the active enzyme, even though it is indispensable to the production of the enzyme that individuates the gene. An even more serious problem for individuation of genes arose with the discovery that the genetic material contains a start codon (ATG) and three stop codons (TGA, TAG, and TAA). Employing these, why not just read the genes off the nucleic-acid sequence: wherever you read a start codon, a new gene begins, and it ends at the first stop codon one comes to. Such a sequence is, of course, an open reading frame. For any nucleotide sequence, there will be six possible open reading frames. It is often assumed that the longest open reading frame in a sequence is a gene, and sometimes it turns out this way. If only matters were so simple. To begin with, 95% of the genome in humans, for instance, is widely supposed to be junk DNA of either no function or unknown function. It certainly does not code for proteins (though some of it now appears to code for microRNA, which has important roles in development and evolution). Finding start and stop codons in this junk DNA will not individuate genes. So, it appears we still need to approach matters from the prior identification of enzymes, proteins, and other gene products. If we know the amino-acid sequence of the protein, we can read back the alternative nucleic-acid sequences that code for it. Alas, given the code’s redundancy, there will be a staggeringly large number of nucleic-acid sequences for any enzyme, and several different sequences can be expected actually to have been realized in the nucleic-acid sequences of different individuals, even in the same small population, let alone different individuals in a species, order, family, or higher taxa. And, of course, there are few proteins whose amino-acid sequence is known.
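How staggering the code’s redundancy is can be made vivid with a back-of-the-envelope calculation: the number of distinct coding sequences for a peptide is the product of the codon degeneracies of its residues. The sketch below is illustrative (the ten-residue peptide is invented; the degeneracy table follows the standard genetic code).

```python
# Illustrative sketch: counting the distinct nucleotide sequences that encode
# a given peptide, as the product of per-residue codon degeneracies.
# Degeneracy counts follow the standard genetic code (one-letter amino acids).

CODON_DEGENERACY = {
    'L': 6, 'S': 6, 'R': 6,                      # leucine, serine, arginine
    'A': 4, 'G': 4, 'P': 4, 'T': 4, 'V': 4,
    'I': 3,                                      # isoleucine
    'F': 2, 'Y': 2, 'H': 2, 'Q': 2, 'N': 2,
    'K': 2, 'D': 2, 'E': 2, 'C': 2,
    'M': 1, 'W': 1,                              # methionine, tryptophan
}

def coding_sequences(peptide):
    """Number of distinct nucleotide sequences encoding the peptide."""
    n = 1
    for residue in peptide:
        n *= CODON_DEGENERACY[residue]
    return n

# Even a ten-residue toy peptide admits hundreds of thousands of codings.
print(coding_sequences('MLSRAGVITF'))
```

This invented ten-residue peptide already admits 331,776 codings; for a 300-residue protein the count is astronomical, which is why reading back from a protein to a unique nucleic-acid sequence is hopeless.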
Indeed, much of the interest in the genome stems from the fact that we can much more easily go from codon sequence to amino-acid sequence than the other way. Combine the multiplicity of reading frames with the existence of introns (and genes for microRNAs within exons), and another whole dimension of problems for gene individuation arises. Within an open reading frame, there can often be a dozen or more introns. It is easy to deny membership in the relevant gene to these introns, since their sequences are not represented in the gene product; but what are we to say when alternative excision of introns and splicing of exons produces two quite different mRNAs, and consequently two distinct protein products from the same open reading frame, that is, the same nucleic-acid sequence? This is uncommon but not unknown. Moreover, there are at least two other ways in which the same nucleotide sequence can produce
two different products. First, a sequence beginning with one start codon may have a second before the first stop codon, and so encode two different products. Second, the same sequence, read in different reading frames, will contain different start and stop codons and so code for different products. It is plain that the methods of individuation by function and by structure simply do not line up together in a division of the genetic material into component genes. We may as well end our history of complications here (excluding RNA editing on the grounds that it happens only in mitochondria), though we can be certain that more wrinkles in the story will emerge. The upshot is that there can be no single complete and operationally manageable individuating description of the count noun gene. That is, there is no characterization of a list of properties common and peculiar to all genes, in terms of either their causal roles in the synthesis of proteins, enzymes, RNAs, and so on or their number and kind of nucleotide sequences, or both. A list of features of either types of sequences or types of gene products individually necessary and jointly sufficient for being a gene is just not available, or likely to become available by dint of further research. Indeed, further research is going to push such a definition further out of reach. And we know very well why this is so: natural selection.
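The frame-dependence of such readings is easy to exhibit concretely. The sketch below is illustrative only (the toy sequence and function names are invented, and it reports just the outermost ATG-to-stop run in each frame, ignoring the nested-start complication noted above): it scans a sequence for open reading frames in the three forward frames and the three frames of the reverse complement.

```python
# Illustrative sketch: open reading frames (ATG ... first in-frame stop)
# in all six reading frames of a toy DNA sequence.

COMPLEMENT = str.maketrans('ACGT', 'TGCA')
STOPS = {'TGA', 'TAG', 'TAA'}

def orfs_in_frame(seq, offset):
    """Yield (start, codons) for each ATG-to-stop run in one reading frame."""
    codons = [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]
    i = 0
    while i < len(codons):
        if codons[i] == 'ATG':
            for j in range(i, len(codons)):
                if codons[j] in STOPS:
                    yield offset + 3 * i, codons[i:j + 1]
                    i = j  # resume scanning after this ORF
                    break
        i += 1

def six_frame_orfs(seq):
    frames = []
    rev = seq.translate(COMPLEMENT)[::-1]  # reverse complement
    for strand in (seq, rev):
        for offset in range(3):
            frames.extend(orfs_in_frame(strand, offset))
    return frames

# The same nucleotides can house different "genes" in different frames.
dna = 'ATGAAATGACCCATGTTTTAA'
for start, codons in six_frame_orfs(dna):
    print(start, codons)
```

For this toy sequence only the first forward frame contains complete ORFs (two of them); note that start positions reported for reverse-strand hits would be indices into the reverse complement, not the original strand.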
natural selection and the individuation of genes

Natural selection has been operating at the level of the nucleotide sequence longer and faster than at any other level of biological organization. Since natural selection is blind to structural difference so long as its effects on fitness are the same or similar, the rate and duration of evolution on the Earth has had maximal opportunity to generate a wide variety of structures with similar effects and, similarly, to fine-tune the Rube Goldberg solutions to successive design problems in such a way as to produce the combination of complexity and ubiquity we find in the genomes on the Earth. Natural selection was no more likely to build homogeneous natural kinds at the molecular level than anywhere else in the biocosm. And even when it produced a unique solution (the double-stranded DNA), it probably did so by making it win a competition against alternatives, and it did not allow the uniqueness to remain unexploited for long (consider the RNA virus and the prion). Some philosophers of biology have declined to despair of a definition or characterization of the gene in light of the jungle of complexity the twentieth century has uncovered in the biosynthetic pathway from nucleic acids to finished proteins and enzymes. One proposal, due to C. K. Waters (1990, 1994), accepts that genes can only be individuated by their products, and that the products (if any) of any nucleotide sequence will themselves vary in amino-acid sequence
(depending on intron excision and posttranscriptional and posttranslational modification, along with other sorts of processing further downstream). Accordingly, each of these products, up and down the stream from the genome, individuates a gene. Thus, the answers to questions like how many genes are there? or is the third intron in the mouse Beta-globin sequence part of the Beta-globin gene or not? are always, it depends . . . And what the answers depend on is which gene products are being employed to individuate genes. It turns out, therefore, that all individuation claims about genes will be “relational.” That is, the question, how many genes are there in the nucleic-acid sequence from base-pair 1 to base-pair 165,000 of the Phage T4 virus? is not a well-formed question. The well-formed question must begin by listing the gene products with respect to which the count is to be made. Different lists of gene products will give different counts, and there is no such thing as the correct or most complete list, and therefore no such thing as the correct or complete count of the Phage T4’s genes. Biologists who have followed the philosophers’ discussion this far may be excused for concluding that Waters’s proposal is not so much a definition of the gene as a denial that there really is any such thing. Waters’s proposal is better viewed as the claim that there is no fact of the matter concerning how many genes there are, or whether any sequence is a gene; the nucleic-acid sequence may be subject to a variety of divergent taxonomies that are all relative to the interests and information of the molecular geneticist, and disputes and disagreements about matters of individuation can largely be settled by identifying different background assumptions about the gene product of interest.
The scientific realists among these biologists, those who are interested in the biological facts independent of our interests and the current state of our knowledge and ignorance, may join some realist philosophers of biology in concluding that “gene” is a heuristic device, and the theories in which it figures are useful instruments. They will go on to hold that the only biological realities here are the phenomena which molecular biology describes: sequences—repetitive and nonrepetitive, coding and noncoding, structural and regulatory, introns and exons, nuclear and extranuclear. It is these concepts that taxonomize the genome into its actual, objective, interest-free parts. One trouble with this bracing conclusion is that along with making short work of the controversy about what the gene is and how many there might be in any genome, it makes equally short work of practically the rest of biology. For it is a version of eliminativism, the thesis that kind-terms which do not figure in general laws true everywhere and always throughout the universe do not identify real things, but merely reflect our interests and cognitive limitations. Eliminativism is not an option for reductionists. Besides, as I will show, here
and elsewhere in biology it is a mistake that results from a misunderstanding of biology. Rejecting Waters’s account as promiscuously polymorphous, Griffiths and Neumann-Held have advanced a more radical reconstruction of the gene concept, not as a type of thing or entity but as a type of process:

The sequence of the DNA can . . . be compared to a sequence of letters without spaces or punctuation marks. The state of the developmental system is then analogous to a scheme imposed on these letters—grouping letters into words, adding punctuation marks and editing notes. A different developmental system imposes a different scheme over the letters, that is, over the DNA sequence. It is therefore misleading to think of functional descriptions of DNA, such as “promoter region,” as explicable solely in terms of structural descriptions of DNA, such as “sequence.” The structural description is, at best, a necessary condition for the functional descriptions to apply. These considerations lead away from . . . [Waters’s] classical molecular gene concept to what we have christened the “molecular process gene concept.” According to this concept, “gene” denotes the recurring process that leads to the temporally and spatially regulated expression of a particular peptide product. . . . This gene concept allows for alternative mRNA splicing as well as for mRNA editing by including the particular processes involved in either. There is a great deal of continuity between this proposal and the classical molecular conception of the gene: the gene still has the function of coding for a polypeptide, and it still includes specific segments of DNA. However, the gene is identified not with these DNA sequences alone but rather with the process in whose context these sequences take on a definite meaning. . . .
When one speaks of the "gene for" a particular product, one is implicitly referring not only to DNA sequences but also to all the other influences that cause that sequence to give rise to this product. The molecular process gene concept stresses these connections and helps scientists bear in mind the easily overlooked fact that the production of this polypeptide product is the result, not of the presence of the DNA sequence alone, but of a whole range of resources affecting gene expression. If there is anything that is "for" a gene product, it is the molecular process that produces the product rather than a sequence of nucleotides which . . . just "is." (Griffiths and Neumann-Held 1999, p. 661)

The holism of this proposal is unsurprising in light of Griffiths's own allegiance to developmental systems theory; its commitment to "causal democracy,"
which accords equal standing to anything causally necessary for development; and its denial that the gene has a special informational or other role in development. Insofar as this conceptualization would deny to the genome its role in programming the embryo, it fails to reflect the literal role of the genome as program. Moreover, to shift the term gene from labeling a kind of thing to labeling a kind of process may turn out to be no simplification, improvement, or clarification. After all, the class of biosynthetic processes that the kind will include may have all the heterogeneity of the class of things the term gene was originally coined to label. Notice that all the work in this individuation is done by the spatial and temporal distribution of polypeptide products, and thus it adds several more relational contingencies to the individuation of the genome into genes beyond the multiplicity of different proenzymes, preproteins, and posttranslational and -transcriptional products that relativize Waters’s proposed definition. Like Waters’s account, it will not give us a single answer to a simple nonrelational individuation question. Moreover, the proposal packs a great number of things—gene products, biosynthetic pathways, regulatory mechanisms, whose properties and behavior are to be explained by appeal to the gene and its properties—into the definition of gene, thereby threatening to trivialize these explanations. The proposal is, in any case, too revisionary to find favor among molecular biologists as a reconstruction of their notion of “gene.” It foregoes at least half of the basis on which individuation has always been assumed to proceed: the claim that genes are made up exclusively of DNA (and RNA, in the case of some viruses). 
By shifting to the treatment of the gene as a process, an event, in which the participating objects include nucleic acids, amino acids, proteins, and enzymes composed of them, as well as whatever else is implicated in the biosynthetic pathways that characterize the relevant process, this proposal might more properly be said to change the subject than to characterize genes. In that sense, Griffiths and Neumann-Held's proposal is another version of sheer eliminativism about genes. Though it retains the word, it gives up the kind that the word gene is used to pick out. One tip-off that the proposal is eliminativist is that it does away with so many of the explanations to which the genes as things have been party. And it does away with the explanatory strategy that treats the genome as hardware that realizes a program for development. This, of course, is part of Griffiths's agenda, as expressed elsewhere in his argument for the developmental systems theory's approach to development: one that treats the genes as just another coeval component of the life cycle which builds the embryo (Griffiths and Gray 1994). If we decline to move in this direction with the developmental systems theorists, Griffiths's and Neumann-Held's eliminativism about genes as things might just as well lead one to embrace the view that we can retain our explanations
and their strategy while surrendering the gene as a coherent natural kind. All we have to do is substitute, for the gene as the hardware that realizes the developmental programs, the various nucleic-acid sequences that do so: repetitive and nonrepetitive, coding and noncoding, structural and regulatory, introns and exons, nuclear and extranuclear. It will then be these concepts that taxonomize the genome into its actual, objective, interest-free parts. The gene will have dropped out of our explanatory ontology in favor of the nucleic-acid sequence. All this seems to be making much too heavy weather of the complex details of the genome, its parts, and their causal roles in polypeptide synthesis. The trouble with the entire debate, from the assimilation of Watson and Crick's discovery to the end of the century, has been its steady refusal to take seriously Dobzhansky's dictum. As elsewhere in biology (and in the philosophy of biology, for that matter), nothing much makes sense here except in light of evolution. If we take seriously the recurrent process that leads from the nucleotide sequence to the spatial and temporal expression of particular polypeptide products as an adaptation, we may be able to reconcile the reality of the genes as individuated things with the complexity and heterogeneity of their molecular realizations. Once we have identified a recurring process from the nucleotides to the place and time a polypeptide is expressed, we are in a position to ask the biologist's central question, the adaptational question: why was this process selected for? At the level of macromolecular processes that produce proteins and enzymes, the number of interactors and the rate of interaction have been high enough so that almost everything that has persisted has had an adaptational etiology. There is precious little room for drift at the level of the macromolecules.5 The adaptational question is therefore always apposite. And the question may be

5. Biological digression. 
This claim is not meant to gainsay Kimura’s important theory about the neutral character of most nucleotide substitutions in the genome. That most nucleic acid substitutions have no effect on fitness should be no surprise in light of two considerations: first, since so much of the eukaryotic genome is apparently “junk,” which codes for no product—neither protein nor RNA—neither these sequences nor changes in them will have any adaptational significance, favorable or unfavorable; second, even within regions of the DNA which do code for gene products with adaptational significance, the redundancy of the code, and the fact that natural selection for the protein products of these sequences will be blind to many differences in their structure, make the sequence heterogeneity of even closely related genomes inevitable. However, as Kimura (1961) was himself quick to note, selection will reduce variation at those locations with adaptational significance. Thus, there will be little variation among those sequences that fix the allosteric and active site properties of the enzymes which DNA codes for. And where such variation obtains, the likelihood of maladaptive variants being fixed by drift will be vanishingly small, since the total population of genes for one of these adaptationally important gene products will be extremely large.
easier to answer, since at the level of the macromolecules identifying design problems is far less likely to be distracted by anthropocentrism than it is at the level of creatures who live in the same environment as us. We have, of course, elucidated only a few of the detailed adaptational explanations for why a particular biosynthetic pathway has been preserved through evolution. But for each of them, we have thereby acquired good evidence that the nucleotide sequences necessary for them have been treated by natural selection as unified entities "worth" preserving through the vicissitudes of evolution. Perhaps the most venerable of these pathways and sequences as objects of scientific scrutiny are the process of respiration and the family of hemoglobin genes, all of whose expressions share similar polypeptide outcomes, and all of which share homologous DNA sequences at loci crucial to the smooth operation of the allosteric and active sites of their peptide outcomes. It is safe to say that these nucleic-acid sequences and the processes they are implicated in are typical enough to be a fair basis for generalization. Natural selection has done a great deal to genomes over the period since nucleic-acid sequences first appeared. One thing it has done is preferentially to preserve sequences owing to their contributions to the solution of a myriad of design problems that emerged over the course of the last 3.5 billion years on Earth. Many of these perennially preserved sequences are indeed genes. Perhaps most of them are. Indeed, the way in which contemporary molecular biology estimates the number of genes in a species' specimen organism turns on this assumption. Distinctive gene products have behind them evolutionary etiologies, and each one of these must pass back through at least one RNA molecule, usually a messenger RNA, less often a transfer RNA, occasionally a ribosomal RNA, a microRNA, and perhaps a few more even less common RNAs. 
As natural selection is a relatively nearsighted process, it seems safe to say that each unprocessed RNA is the transcript of at least one gene, and the number of RNAs, processed or not, gives a good estimate of the total number of genes. It is, of course, this fact on which the counting of genes by the employment of "expressed sequence tags," or ESTs, turns. An EST is a labeled fragment of DNA synthesized (via complementary DNA) from the mRNAs expressed in somatic cells. When combined with the genome of an organism, these ESTs will preferentially link up to—hybridize with—portions of open reading frames of the genes that express the mRNAs and polypeptides the ESTs are synthesized from. It is by the use of ESTs constructed from the mRNAs preferentially synthesized in the various tissues of the human body that the estimate of 30,000 genes was derived. ESTs are only a rough guide to the mRNAs, however. Low-concentration mRNAs with very short half-lives will escape the notice of EST isolating methods. mRNAs
with high sequence similarity will be confounded by this method, while ESTs from preprocessed and processed, modified and unmodified copies of the same mRNA will result in overcounting the genes. If matters were otherwise, we could just employ the RNAs to individuate, count, and locate the genes by constructing full-length complementary DNAs from them and hybridizing those with the genomes we wish to "annotate"—that is, decompose into their component genes. But, of course, matters are not this simple. Much of the structure of the eukaryotic genome created by natural selection does not have the function of producing proteins and enzymes. Many of the preferentially preserved sequences are distinct parts of genes: some are introns within genes that do not code for any gene product. Why has natural selection preserved these noncoding regions within genes? Presumably because they make for "evolvability." They foster the possibility of shuffling coding regions from various genes together to make new genes. Even among the exons, some sequences are allowed to vary a certain amount, within a range determined by the redundancy of the code or the indifference of a gene product's function to minor changes in its amino acid structure. And then there are the highly repetitive sequences to be found at centromeres and telomeres, which are plainly not genes. Natural selection has been sculpting the genome out of the DNA for several billion years; the result has been a division mainly into genes, but a lot of other things have been created too, both as components of genes and as parts of the architecture of the genome presumably to support the genes—not to mention the by-products selection has introduced, and the appearance of selfish DNA sequences which natural selection has produced or at least permitted throughout the 95% of the human DNA sequence that appears to be mere junk. 
Because natural selection has shaped the genome, the impossibility of providing a neat individuating characterization of the gene was only to be expected. Owing to the persistence of variation, there are no neat, nondisjunctive natural kinds anywhere else in the biological realm. No one should expect them in molecular biology either. And this is just another way of saying that we can no more give a general molecular definition or characterization of gene as a type than we can give such a definition of species as a type. Consider the challenge facing biologists' attempts to define the general kind-term species. One is inclined to help oneself to Ernst Mayr's definition of the term as "any interbreeding population reproductively isolated from other populations" (Mayr 1982, p. 273), and for many cases this definition will do nicely. But it will not accommodate asexual species; and while this is not a severe defect in a world that is mainly populated with sexual ones, at least for the first half of the history of life on Earth, sexual reproduction was nonexistent.
Moreover, Mayr's definition will force us sometimes, against our taxonomic inhibitions, to split anatomically diverse populations into two species and to assimilate sibling species and interbreeding ones into one species. The solution, of course, is not another, better definition that gets at the essence of being a species; it's the recognition that natural selection is too much of an opportunist to employ just one or any small number of ways of making species. Once we accept that particular species, like Cygnus olor, Didus ineptus, and Homo sapiens, are spatiotemporally extended and distributed individuals composed of their parts and not kinds or classes exemplified by their instances, the Latin terms we use to describe them turn out to be proper names, like Marco Polo, Napoleon Bonaparte, or Madonna. And the general term under which these names fall—species—is no more likely to be a natural kind than the general term (thing, entity, particular object) under which all proper nouns fall is likely to be a natural kind that figures in a general law or theory. One need not deny the reality of particular species to accept that the general term species does not name a natural kind. Mutatis mutandis for genes and gene. Individual genes are to the general category of gene as individual species are to the general category of species. A gene name, such as Hbf (the human fetal hemoglobin gene), names not a kind but a spatiotemporally distributed particular object, whose parts are the nucleotide sequence tokens that produce the mRNAs, most of which are eventually translated into human fetal hemoglobin subunit molecules. Each of these sequence tokens is related to the others via a line of descent no different from that which relates each of the parts of the species Homo sapiens. But, of course, each of the lineages of sequence tokens, like each of the lineages of people, has experienced environmental vicissitudes. 
In the case of the sequences, these vicissitudes are ones that may change their nucleotide sequence; indeed, some have been changed in such ways that they no longer produce normal fetal hemoglobin or anything for that matter. Owing ultimately to variation and selection, the set of all human fetal hemoglobin genes (normal and defective) has no single trait in common (though fortunately for us, most of them share many traits, including the production of polypeptide molecules with oxygen affinities greater than those of most adult hemoglobins). What makes them all fetal hemoglobin genes is their evolutionary etiology, just as what makes all people Homo sapiens is not some property common and peculiar to us all but our common descent. Thus, if we attempt to give a set of necessary and sufficient features shared by all fetal human hemoglobin genes, we will fail. And, of course, the spatiotemporally restricted particular named by Hbf is part of a larger spatiotemporally restricted particular, the line of descent shared by all the human (adult, fetal, sickle cell, thalassemia, and other) hemoglobin genes; and this in turn is part of an even larger set, the mammalian hemoglobin genes, and this is part of a
still larger set, including, presumably, plant leghemoglobin. It is the case that none of the nucleic-acid sequences which are members of these successively larger sets have common and peculiar properties, "essential properties" which characterize them the way, for example, that all members of the set of oxygen molecules share an essential property: being composed of atoms with atomic number 8. It is for this reason, of course, that there is no way to organize the types of particular genes (at any level of generality) that parallels the way the periodic table organizes elements. Every gene name, like every species name, designates a spatiotemporally distributed particular object. The more general term gene is a kind-term that names the set of all spatiotemporally distributed particular genes. But it is not one that will admit of a general characterization that gives the essence or the necessary and sufficient conditions for being a gene, either in terms of some order of nucleotides, size, proportion, or nucleic acid composition or in terms of common features of a gene product. And the reason is the same as the reason that there is no similar definition for the concept of "species." It is natural selection that creates species and that creates genes. And natural selection operates in such an open-ended diversity of methods to produce species, one from another, and genes, one from another, that there will be nothing common and peculiar to all species or all genes upon which we can construct a general and adequate individuating definition, or a single operational characterization that will enable us unambiguously to count them all, to say when each one ends and the next one begins. This does not mean that we cannot count species or genes. It means that the methods we need to use to do so are various and determined by local outcomes of the evolutionary process. I have used the metaphor of natural selection sculpting genes out of the nucleic-acid sequence. 
But given the dynamic character of the way natural selection shapes and reshapes lineages of nucleic-acid sequences into lineages of genes over time, a better metaphor is given in the notion of a “jurisdiction.” The physical geography of a landmass can be individuated by surveys based in natural features. It can also be individuated or even divided up into units of various kinds: nations, states or provinces, departments, counties, townships, wards. It can be divided into legislative districts, executive and judicial districts, appellate court circuits, postal codes, and zoning districts. Some of these units are related to one another as whole and part; others are mutually orthogonal. Each of the divisions is reflected in the causes and effects of location in one or another of these units. Over time, every one of these units changes—grows, shrinks, becomes extinct, or otherwise evolves—as do their component units, not as a result of natural selection but owing to human institutional selection. For example, at the level below the national boundary, the jurisdictions drawn on the
North American continent in 1800 are utterly different from those drawn upon it today. In Europe, the changes at the infranational level may be fewer, but the changes in national borders are even greater. The nucleic-acid sequence that controls the development and the cellular physiology of organisms in any lineage is like the physical geography of a landmass which over time is divided into different jurisdictions—in the case of the nucleic-acid sequence, by the natural selection of its distinct effects in polypeptide and RNA synthesis. At any one time, these jurisdictions will include gene families, individual regulatory and structural genes, promoter and repressor sites, introns and exons, highly repeated sequences, selfish DNA, junk DNA, pseudogenes, and so on. Thus, some of the jurisdictions will be parts of other, larger jurisdictions, and some will be parts of several jurisdictions, while others will be proper parts of none and will not have any proper parts. Jurisdictional part-hood here is a matter of contributing as a unit to the adaptation of the larger jurisdiction. Over evolutionary time, of course, more local jurisdictions can recombine into different regional jurisdictions, they can become independent units, or they can be eliminated altogether, all depending on the environmental appropriateness of their effects. By constantly redrawing the jurisdictions on the DNA sequences of various lineages, natural selection makes at best temporary the numbering of these units, and may make jurisdictions that were at one time more important less so, or vice versa. Presumably at the outset of evolution on this planet, the most important jurisdiction on the hereditary material was nothing as complicated as the hemoglobin gene, nor even anything compact enough to be called a gene at all. 
At present, the most numerous jurisdictions appear to be ones that are composed of a thousand nucleotides or more, include some introns and exons, may overlap with one another, code for polypeptides, and are selected for in packages that include nucleic acids coding for promoters and RNAs. That is, the most numerous jurisdictions on the genome appear to be the structural genes. But natural selection may not always select for such large jurisdictions, or it may begin to select for larger ones. In short, to reconcile the structural heterogeneity of the nucleic acids with the molecular biologist's functional individuation of the gene, we need to attend to the evolutionary biology of both the structure and its function. And we need to keep firmly in mind the blindness of selection for function to differences in structure. This approach is by no means entirely novel among reductionists. Richard Dawkins (1982) defines a gene as a DNA sequence that is an active replicator, in other words a sequence competing with other sequences to be selected for; that is, a sequence whose products have some favorable influence over whether it gets copied and thus represented in the next generation of nucleotide sequences. It has been advanced as an objection to Dawkins's account that there will be sequences far shorter than anything we are willing to call a gene which satisfy this definition. Thus, consider the single-codon locus, in the DNA for hemoglobin, that normally codes for glutamate. A single nucleotide change in this codon will result in the replacement of glutamate by valine in the sixth position of the beta-chain subunit of hemoglobin. Valine is a nonpolar amino acid that will cause the beta subunit to stick to other beta subunits and thus deform the red blood cell into a sickle. The single nucleotide in this long sequence that makes for glutamate thus has a favorable influence over whether it gets copied or not, thereby satisfying Dawkins's definition of gene. This, Dawkins complains, is "an absurdly reductionistic reductio ad absurdum" (Dawkins 1982). But Dawkins and his critics are too hasty. First of all, his claim that genes are those sequences preserved by selection must be treated as giving a necessary, not a sufficient, condition for being a gene. That preservation by selection is not sufficient for being a gene is something the existence of selfish DNA (not to be confused with selfish genes) already makes clear. There are many different lengths of DNA sequence, some smaller than any gene, some larger than any gene, which face selection together. For an example of a larger package, just combine a structural gene with its regulatory sequence and the ancillary genes for the RNAs required to make its product. Among the smaller packages will be the exons within a gene, which can combine in various combinations to code for differing gene products. Even smaller will be the microRNA genes less than 22 nucleotides—just over 7 codons—in length. But mostly, an evolutionary individuation of the genome will concur with the ones molecular biologists make simply by reading back from unprocessed transcript sequences—including mRNAs. Mostly, but not perfectly. 
For the evolutionary individuation will count as a unit (though perhaps not as a whole gene) any DNA sequence which discharges one or more determinate subroutines in the program of development. By hybridizing the RNAs (or their ESTs) and the ribosomal and transfer RNAs to the nucleotide sequence, molecular biology will be able to divide up the genome into open reading frames. Sometimes, especially in eukaryotic genomes, these open reading frames will be like distinct political “jurisdictions” that overlap: the same sequence of nucleotides will belong to more than one gene. The possibilities for complication are enormous: a single sequence even hundreds of base pairs long could easily belong to six or more open reading frames. Beyond that, it could belong to as many different genes as there can be further start codons between the first one in any reading frame and the first stop codon in that frame. Thus, at any point in time, there is an answer to the question, how many genes are there, in fact, in any sequence? But the answer
may be different at different times in the evolutionary history of the sequence, depending on what natural selection has so far selected for among the sequences in light of their effects. And, as noted above, natural selection will not just structure the genome into genes. It will structure the genome more finely than just into genes: there will be subsequences separated by introns that may be the result of shuffling and may later result in new combinations; there will be the introns that allow for the shuffling, and the highly repetitive sequences selected for the structural integrity they provide the whole genome; and there will be sequences of selfish DNA that get selected for no advantage to anything else besides themselves. Because of the heterogeneity of outcomes that natural selection produces wherever it operates, there will be no particularly greater uniformity of size, location, or composition among these smaller "jurisdictions" of the sequence than there is among the genes. And so, when it comes to individuation, there will be no particular advantage in switching from gene talk to a more fine-grained level of description. What remains to be said is that the DNA sequence in the genome of a fertilized ovum programs the embryo, and that the program is encoded in this sequence by its structure of separated (and occasionally overlapping) structural and regulatory genes, whose (derived intentional) information-encoding role is maintained by other genes (such as the repair, proofreading, histone, and copying-machinery genes, and so on) and by nongene sequences (at the telomeres and centromeres, and so on). And it is by uncovering this program that molecular biology provided the explanatory resources that developmental biology requires. 
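The reading-frame arithmetic invoked above, three frames on each strand of the double helix, and potentially a distinct open reading frame for every further start codon preceding the first in-frame stop, can be made concrete with a short sketch. The following Python is purely illustrative and appears nowhere in the text; the toy sequence and the function names are invented for the example, and real gene-finding is of course vastly more complicated.

```python
# Illustrative sketch: enumerate the six reading frames of a toy DNA
# sequence and the open reading frames (ORFs) each contains.

START, STOPS = "ATG", {"TAA", "TAG", "TGA"}
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def codons(seq, offset):
    """Split seq into codons beginning at the given frame offset (0, 1, or 2)."""
    return [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]

def orfs_in_frame(seq, offset):
    """Return (start, stop) codon indices for ORFs in one reading frame.
    Every ATG before the first in-frame stop opens its own ORF, so a
    single stretch of sequence can belong to several overlapping ORFs."""
    cs = codons(seq, offset)
    found = []
    for i, c in enumerate(cs):
        if c == START:
            for j in range(i + 1, len(cs)):
                if cs[j] in STOPS:
                    found.append((i, j))
                    break
    return found

def six_frame_orfs(seq):
    """Map each of the six frames (strand, offset) to its list of ORFs."""
    rev = seq.translate(COMPLEMENT)[::-1]  # reverse-complement strand
    return {(strand_name, offset): orfs_in_frame(strand, offset)
            for strand_name, strand in (("+", seq), ("-", rev))
            for offset in range(3)}

toy = "ATGATGAAATAAGCATGCCCTGA"
# In frame ("+", 0) two ATGs precede the same TAA, so the same stretch
# of nucleotides belongs to two overlapping ORFs.
for frame, orfs in six_frame_orfs(toy).items():
    print(frame, orfs)
```

Even in a 23-base toy sequence, one frame contains two overlapping candidate "genes"; this is the combinatorial point made in the text, scaled down from sequences hundreds of base pairs long.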
It will not have escaped the reader’s notice that throughout the last two chapters, in advancing the reductionist research program of molecular biology I have felt perfectly free to invoke the process of natural selection repeatedly. Natural selection has been invoked not only to help explain molecular phenomena but also to make these phenomena relevant to functional biology. Moreover, at several turns I have had recourse to Darwinian theory to elaborate the version of reductionism here defended and to respond to objections against it. But the defender of biology’s autonomy will argue that when the reductionist invokes the theory of natural selection, the result must be self-refuting. For nothing makes more manifest the irreducibility of biology to physical science than its dependence on the theory of natural selection. Insofar as I have repeatedly endorsed Dobzhansky’s dictum as literally true, I need to show that there is no difficulty, still less logical inconsistency, in combining reductionism and Darwinism. And this is the task of the next three chapters. Chapter 4 shows why biological explanation requires causal laws and that the only such laws in biology are the ones Darwin discovered. Chapter 5 argues that a proper understanding of these laws
and how they are applied requires us to reject a holistic approach to evolution in favor of one that grounds the process on causal relationships among lineages, groups, families, organisms, genes, and other individuals. Chapter 6 completes the project by showing how natural selection as a biological process can be seen to be the result of processes that physical science will recognize as completely unproblematic. This conclusion will vindicate the reductionist’s appeal to Darwinism and show clearly how biology can deal with its untenable dualism.
4
• • • •
Dobzhansky's Dictum and the Nature of Biological Explanation

The argument for reductionism in biology has to begin by conceding to the discipline the truth of the theory of natural selection and the research strategy this theory inspires. Operating on this concession, it must show how reduction can complete, strengthen, and otherwise improve the results unreduced biology has secured employing this theory. The immediate problem that faces reductionism as a research strategy in biology is that this fundamental theory figures in the chief bulwark erected against any such reductionistic prospect. This bulwark, as we saw in the introduction, is the argument that biological explanations are all explicitly or implicitly ultimate explanations, relying on the theory of natural selection, and this theory is not reducible to any more-fundamental physical theory. Well, why not? Why should we be confident, independent of whatever theoretical advance physical science may make, that no such reduction is in the cards? Here the antireductionist consensus among philosophers steps forward and provides an argument that would be logically decisive if it were sound:

1. The reduction of one theory to another requires that the laws of the less basic theory be explained by appeal to the laws of the more basic theory. (Recall the history of reduction in physics over the four hundred years since 1600 sketched in the argument for reduction I foisted on the molecular biologist in the introduction.)

2. Biology, especially the theory of natural selection, embodies no laws or other generalizations of the sort suitable for such explanation.
Ergo,

3. Biological theory cannot be reduced to any more fundamental set of laws.

Philosophers of biology don't flatly deny that there are statements in biology that are labeled laws. But they do recognize that these statements aren't enough like the laws of physics and chemistry for the latter to be employed to correct, complete, and deepen explanations provided by the former.
there are (almost) no exact laws in biology

But the absence of laws also makes biological explanation mysterious. For if there is one thing we know from the study of other sciences, it's that explanations need laws or something like them. One set of facts does not explain another set unless a link—causal or otherwise—can be made between them. And that link can only be made by some broader general relationship between the types of facts that explain and those which are explained.1 What that relationship could be in the absence of laws is a deep problem among philosophers of biology. Here are two quite different responses to the problem, advanced by philosophers who otherwise share a commitment to physicalism and antireductionism. It is easy to see on the one hand why their suggestions don't solve the problem of how biology could explain without causal laws, and on the other hand that both philosophers recognize that the absence of laws is a serious problem for biological explanation. In The Advancement of Science (1993), Philip Kitcher offers an analysis of evolutionary explanation that quite explicitly eschews laws. His model of explanation in evolutionary biology involves a schematic pattern for the deduction or derivation from ecological conditions to reproductive outcomes via premises of the following schematic form:

(2) Analysis of the ecological conditions and the physiological effects on the bearers of P, P1, . . . , Pn [traits whose distribution in any generation is to be explained]
1. Philosophers will recognize here a commitment to a Humean view of causation that is so widely accepted, few will think it requires further argument. For the record, there is a minority view among twentieth-century philosophers stretching from C. J. Ducasse to Elizabeth Anscombe holding that singular causal claims can be established independent of laws or even empirical regularities. I cannot argue against this view here, but I have done so at the length of a book elsewhere (Beauchamp and Rosenberg 1981). There Davidson’s (1967) view, that singular causal claims imply the existence of a law without implying any particular generalization, is defended.
Showing
(3) Organisms with P had higher reproductive success than organisms with Pi (i from 1 to n).
(4) P1, . . . , Pn are heritable.
Therefore,
(5) P increased in frequency in each generation of the lineage [of the organisms in question].
(6) There are sufficiently many generations [in this lineage].
Therefore,
(7) (Virtually) all members of the lineage now have P (Kitcher 1993, p. 28).

Such inferences require either a set of substantive inference rules or a major premise embodying a generalization, either of which must embody contingent truths about the relevant causal processes, or else the pattern of reasoning will be incapable of explaining contingent facts, what Kitcher elsewhere calls “objective dependencies.” But, as Nagel pointed out in his treatment of ampliative inference rules (1961, pp. 66–67), the difference between such rules and substantive general laws is largely notational. In this case, the substantive inference rule is going to be reliable only if it could also be expressed as a law or laws about how heritable variations in fitness result in descent with modification, that is, evolution. Kitcher’s explanatory schemas do not mention laws, but their reliable applicability requires them.

A similar problem daunts Elliott Sober’s apparently quite different approach. Sober has argued that there are laws in biology, but these laws are mathematical truths, which biologists call models. He writes,

Are there general laws in biology? Although some philosophers have said no, I want to point out that there are many interesting if/then generalizations afoot in evolutionary theory. Biologists don’t usually call them laws; models is the preferred term. When biologists specify a model of a given kind of process, they describe the rules by which a system of a given kind changes. Models have the characteristic if/then format we associate with scientific laws . . . they do not say when or where or how often those conditions are satisfied. (Sober 1993, p. 15)

Sober provides an example: “R. A. Fisher described a set of assumptions that entail that the sex ratio in a population should evolve to 1:1 and stay there. . . . Fisher’s elegant model is mathematically correct.” Fisher’s model is a mathematical truth, as Sober himself recognizes:

Are these statements [the general if-then statements] that models of evolutionary processes provide empirical? In physics, general laws such as
Newton’s law of gravitation, and the Special Theory of Relativity are empirical. In contrast, many of the general laws in evolutionary biology (the if/then statements provided by mathematical models) seem to be nonempirical. That is, once an evolutionary model is stated carefully, it often turns out to be a (non-empirical) mathematical truth. I argued this point with respect to Fisher’s sex ratio argument in sec. 1.5. . . . If we use the word tautology loosely (so that it encompasses mathematical truths), then many of the generalizations in evolutionary theory are tautologies. What is more, we have found a difference between biology and physics. Physical laws are often empirical, but general models in evolutionary theory typically are not. (Ibid., pp. 71ff.)

It should be evident that mathematical models cannot explain contingent processes, for the same reason purely logical, nonampliative inference rules cannot. What makes Fisher’s sex-ratio model explanatory is the further empirical claim that it is natural selection acting on a population that results in its maintaining a 50:50 sex ratio. And this causal claim itself will have to rely on an explanatory generalization, presumably some basic principle of natural selection. We will consider models and other surrogates for the role of explanatory generalization again below. Meanwhile, the important point is that explanation in biology cannot really get along without laws or something very like them.

Actually, Sober’s denial that there are laws in biology of the sort we are familiar with in physical science is almost right, as we shall see. And the reason it’s almost right is owing to Dobzhansky’s dictum. For it is the pervasive operation of natural selection that makes for the absence of more than one law in biology. It’s in the nature of a domain governed by natural selection over blind variation that no (other) laws will arise.
To see why is relatively easy and of the profoundest importance for reductionism’s biological prospects and obstacles. Natural selection is a mechanism, or better, a filter that selects for effects, and not just any effects, but those which are adaptive. These adaptive effects that the environment selects for are the properties of biological systems which have functions. The functions are just the selected effects: wings, fins, legs— structures with many different effects for the systems that have them. But what makes them wings, fins, or legs is one or a small number of their effects—the ones that produce locomotion through the air, the water, or on solid ground. As noted in the introduction, every biologically interesting structure is labeled by the term that expresses its selected effect; how a structure is “individuated”— how the borders between it and other structures in the same animal or plant (or fungi) are drawn—depends on its selected effect, its function. And when we bring diverse biological structures into single natural kinds, for example “organ of locomotion,” which includes wings, fins, and legs, we do so in virtue of a
selected effect they all share, despite their diversity of structure.2 By contrast, in physical science few things are identified and individuated via their effects (selected or otherwise), and functions are never to be met with. Now, any mechanism that selects for effects naturally cannot discriminate between differing structures with the same effects. Such “functionally equivalent” structures will make it past exactly the same environmental filters, just because the effect one of them is selected for is also present in the other, despite their structural disparity. For example, the structures of the eye in insects, octopi, birds, and mammals differ in many respects of ocular anatomy. But these different structures pass at least some environmental tests for adaptation well enough to have survived several geological epochs and to still be available for further adaptation. In fact, their differences reflect another aspect of the blindness of natural selection to differences in structure. Two different structures don’t have to have exactly the same selected effects in order to survive in an evolutionary contest. It is enough for them to have roughly similar effects, to be functionally similar, not functionally equivalent. After all, the environment rarely remains constant for as long as is required for the survival of only the uniquely best among alternative structures that happen to meet some environmental need to varying degrees. If at some time before all the “also rans” are eliminated the environment changes and begins filtering for another adaptive effect, multiple structures will survive even when they are not functionally equivalent but only functionally similar. But functional equivalence and functional similarity combined with structural difference will always increase as physical combinations become larger and more physically differentiated from one another.
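This blindness of an effect-filter to structural differences has a familiar analogue in software: an interface collects structurally disparate implementations under one functional kind. The sketch below is an illustrative analogy only; the classes, the structure strings, and the crude environmental filter are invented for the purpose, not drawn from the text:

```python
# A functional kind ("organ of locomotion") realized by structurally
# different organs; the filter tests only the selected effect.

class Wing:
    structure = "feathers and hollow bone"
    def locomote(self):
        return "moves the organism around"

class Fin:
    structure = "rays and membrane"
    def locomote(self):
        return "moves the organism around"

class Leg:
    structure = "bone and muscle"
    def locomote(self):
        return "moves the organism around"

def passes_filter(organ):
    # The environmental filter "sees" only the selected effect,
    # never the structure that produces it.
    return organ.locomote() == "moves the organism around"

organs = [Wing(), Fin(), Leg()]
print(all(passes_filter(o) for o in organs))   # True: one functional kind
print(len({o.structure for o in organs}))      # 3: heterogeneous structures
```

The filter accepts all three organs because it tests only the selected effect; the set of underlying structures nevertheless has three members.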
Since selection for function is blind to differences in structure, it is easy to see that there will be no laws in any science which, like biology, individuates kinds by naturally selected effects, that is, by functions. A biological law will have to have a form something like “All Fs are Gs,” where the Fs will have to be a functional kind, while the G will either be another functional kind or a structural kind. The Fs will have to be functional, since the generalizations are about biological systems, and all biological systems are functionally individuated. The Gs can be either functional or structural. For example, All amphibians reproduce sexually
2. Note 1 of the introduction treats both Cummins’s alternative causal-role analysis of function and the objections to the etiological account advanced there that are due to Amundsen and Lauder (1998). Note that the argument of this book requires only that the functional character of biological systems be the result of selection for effects which is blind to differences in structure.
links the functional kinds amphibian and sexual reproducer. The generalization

All genes are composed of DNA

links a functional kind, gene, with a nonbiological molecular structure. But neither of these statements can be a strict law, because of the blindness of natural selection (which forms structurally heterogeneous functional kinds) to structure (which will heterogeneously realize functional kinds). The “All Fs are Gs” generalization is usually given the following symbolization among philosophers:

(x)(Fx → Gx)

to be read as “For all x, if x is an F, or has property F, then x is a G, or has property G.” Fx has to be a functional property or kind, since the law in question is biological. Gx will itself be either a structural predicate or a functional one. Either it will pick out Gs by some physical attribute common to them, or it will pick out Gs by descriptions of one of the causes or effects that everything in the extension of Gx possesses. But could there be a (biologically significant) physical feature common to all items that have property F or are Fs (or, as the philosopher would say, are in the extension of Fx)? Probably not in a million years . . . for that is usually too short a time for nature to winnow diverse physical structures with selectively similar effects. Fx will have to be a physically heterogeneous class, since its members have been selected for their effects. To say Fx is physically heterogeneous is just to say that there is no structural property its instances all share. So, if Gx is true of all of the items in the extension of F, Gx cannot be a structural predicate. Of course, some structural feature may be shared by all of the members of F. But it will not be a biologically significant one. Rather, it will be a property shared with many other things—like mass or electrical resistance. These properties will have little or no explanatory role with respect to the behavior of members of the extension of Fx.
For example, the generalization that “all mammals are composed of confined quarks” does relate a structural property (quark confinement) to a functional one (mammality) and is exceptionlessly true. But if it is a law at all, it is not a law of biological interest. True universal laws about the structure or composition of a functional kind in biology are in principle possible, but in fact ruled out because of the operation of random variation and natural selection. Dobzhansky’s dictum in action. The existence of a functional property different from Fx that all items in its extension share is even more improbable than the existence of a structural property all Fs share. If Fx is a functional kind, then, owing to the blindness of selection to structure, the members of the extension of Fx are physically diverse. As such, any two Fs have nonidentical (and usually quite different) sets of
effects. Without a further effect common to all Fs, selection for effects cannot produce another selected effect different from F and common to all; the environment cannot uniformly select all members of F for some further adaptation, since the members of F don’t have a further effect universally in common to be selected.3 Or at least the probability that they do is the product of the already vanishingly small probability that all members of Fx have a structural feature in common responsible for their all being Fs, and that this brings about another structural feature in common responsible for their all being Gs. Not in a million million years.
and no inexact laws either

Biology does embody a large number of lenient, ceteris paribus generalizations, that is, propositions of the form “All Fs are Gs, other things being equal.” These generalizations are to be found all the way from the obvious ones like “The robin’s egg is blue, ceteris paribus” to “The human has 23 pairs of chromosomes, other things equal” to “The gene is composed of DNA” to “All enzymes are proteins,” generalizations that figure in a lot of biological explanations. But if the argument against strict laws given above is sound, then we cannot expect any strict or exact laws to lie behind these “other things equal” generalizations. In fact, Dobzhansky’s dictum can be used again to provide still another argument: that there are no such strict laws behind biology’s ceteris paribus generalizations; that these generalizations’ “other things equal” clauses cannot be progressively narrowed down and eventually eliminated; and most important, that the explanations biologists advance employing these ceteris paribus generalizations leave hostages to fortune that can only be ransomed by molecular biology’s reductive explanations.

The argument begins with an observation about physical laws. There is in the physical realm a finite (indeed a small) number of forces—mechanical, electromagnetic, thermodynamic—that all work together to produce actual outcomes we seek to explain. To the extent a textbook generalization of mechanics, like F = Gm1m2/d2, is silent on these other forces, it is not a completely true description of physical processes, but rather a ceteris paribus law. As we add

3. It is worth noting here that natural selection will package together a functional trait and a nonfunctional one over a long enough period to give the correlation the appearance of a law, when the two traits are phenotypes resulting from genes closely linked on the same chromosome, whence the phenomenon of pleiotropy.
Linkage can even package two traits equally adaptive in the same environment. But when environments change, the package is broken up, as I explain below.
one after the other of this finite number of different physical forces together in a single law, we are in effect narrowing down the range of interferences and exceptions that our ceteris paribus physical law excludes. If the number of forces is finite, we can identify a strict law behind our ceteris paribus generalization that backs it up and explains both its errors and exceptions and its successful applications in explanation and prediction. Indeed, the direction in which contemporary physics is moving suggests that eventually the number of distinct physical forces we need to factor into our most fundamental explanations is fewer than four. For physics is committed to reducing the strong, weak, electromagnetic, and gravitational forces to a smaller number of fundamental physical forces—or perhaps to no forces, but rather a smaller number of particles, or dimensions for that matter.

Compare the situation in biology. The role of natural selection makes practically unlimited the number of interfering forces that we would need to list in order to turn a ceteris paribus generalization into a strict law. The reason is to be found in the role of the environment in setting adaptational or design problems for evolving lineages to solve. At a relatively early stage in evolution, these design problems take on the reflexive character of what have been called arms races, dynamic strategic competitions in which every move generates a countermove, so that conditions are never constant, other things are never equal, and ceteris is never paribus. Ever since Darwin’s focus on artificial selection, it has been recognized that in the evolution of some species, other species constitute the selective force channeling their genetic changes. The interaction of predator and prey manifests the same relationship.
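The contrast can be made concrete with a toy two-body calculation: the gravitational law alone is a ceteris paribus description, and because the interfering forces are finitely many, adding the remaining terms (here, just the electrostatic one) narrows its exceptions. The masses, charges, and one-dimensional geometry are invented for illustration; only the physical constants are standard:

```python
# Gravity alone, F = G*m1*m2/d**2, is a ceteris paribus description;
# adding the finitely many other force terms makes the law stricter.

G = 6.674e-11  # gravitational constant, N*m^2/kg^2
K = 8.988e9    # Coulomb constant, N*m^2/C^2

def gravity_only(m1, m2, d):
    return G * m1 * m2 / d**2

def with_electrostatics(m1, m2, q1, q2, d):
    # Positive = net attraction; like charges contribute repulsion.
    return G * m1 * m2 / d**2 - K * q1 * q2 / d**2

# Two 1 kg spheres 1 m apart, each with a nanocoulomb of like charge:
# the interference the ceteris paribus law is silent on dominates.
print(gravity_only(1.0, 1.0, 1.0) > 0)                     # True
print(with_electrostatics(1.0, 1.0, 1e-9, 1e-9, 1.0) < 0)  # True
```

The ceteris paribus law predicts attraction; the stricter two-term law correctly reverses the sign, which is exactly the sense in which adding the finite list of forces "backs up" the lenient generalization.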
Since the importance of frequency-dependent selection became apparent, it has been recognized that an interbreeding population can be an environmental force influencing its own evolutionary course. This strategic interaction process isn’t limited to predator and prey, or even competing populations: it can be found in the ways in which genes for adult traits interact over evolutionary time with genes for their fetus’s traits, like the hemoglobin genes. Among the environmental features that filter genetic variations and allow comparatively more adaptive ones to pass through are other genes, both within a gene’s own cellular milieu and beyond it in competing as well as cooperating organisms. Competition for limited resources is endemic to the biosphere. Any variation in a gene, individual, line of descent, or species that enhances fitness in such a relentlessly competitive environment will be selected for. Any response to such a variation within the genetic repertoire of the competitor gene, individual, lineage, or species will in turn be selected for by the spread of the first variation, and so on. One system’s new solution to a design problem is another system’s new design problem. If the space of adaptational moves and countermoves is very large, and the time available for trying out these stratagems is long enough, we will be able to add to the “and so on” of the penultimate sentence the words ad infinitum.4

What this means, of course, is that any functional generalization in biology will be a ceteris paribus generalization in which, over evolutionary timescales, the number of exceptions will mount until its subject becomes extinct. Take a simple example, such as “Zebras have black and white vertical stripes.” The explanation for why they do is that lions are colorblind and the stripes tend to provide camouflage, because individual zebras will be hard to detect in high grasses and because zebras grazing together will be hard to differentiate. This strategy for survival can be expected in the long run to put a premium on the development of ocular adaptations among lions—say, color vision—that foil this stratagem for zebra survival. This in turn will lead to either the extinction of zebras or the development of still another adaptation to reduce lion predation, say, green stripes instead of black and white ones. And in turn this stratagem will lead to a counterstroke by the lion lineage. The fantastic variety of adaptational stratagems uncovered by biologists suggests that there is a vast space of available adaptive strategies among competing species, and that large regions of it are already occupied. The upshot is that to the extent that general laws must be timeless truths to which empirical generalizations approximate as we fill in their ceteris paribus clauses, no such laws are attainable in biology, because we can never fill in these clauses. One nice set of examples of this state of affairs is to be found at the basement level of molecular biology, where it was once assumed that we would discover exceptionless, strict general laws.
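Before turning to those molecular examples, the zebra-and-lion dynamic just described can be caricatured as a loop in which each side’s adaptation creates a fresh exception to the generalization describing the other. The update rule and numbers below are invented; the only point the sketch makes is that the tally of exceptions grows without bound for as long as the strategic interaction continues:

```python
def arms_race(generations):
    """Count how often the generalization "prey camouflage defeats
    predator vision" fails as each side answers the other."""
    camouflage, vision = 1.0, 1.0
    exceptions = 0
    for _ in range(generations):
        if vision >= camouflage:        # the "law" about prey fails here
            exceptions += 1
            camouflage = vision + 0.5   # counter-adaptation by the prey
        else:
            vision = camouflage + 0.5   # counterstroke by the predator
    return exceptions

# The tally of exceptions keeps growing with evolutionary time:
print(arms_race(100) < arms_race(1000))  # True
```

No matter how far the loop runs, the ceteris paribus clause is never finished being filled in, which is the point of the argument in the text.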
4. This argument, that the persistence of a strategic interaction problem deprives generalizations about functional traits of nomological status, needs to be distinguished from a different argument due to J. Beatty (“The Evolutionary Contingency Thesis” [1995]), with a quite similar conclusion. Beatty’s argument turns on the environmental initial conditions that produce initial correlations of traits, not on the reflexive role of natural selection in breaking them up. His argument is, of course, compatible with and reinforces the present one.

Consider the generalizations that all enzymes are proteins, that hereditary information is carried only by nucleic acids, or the so-called central dogma of molecular genetics: DNA is transcribed to RNA and RNA is translated into protein. Each of these apparently exceptionless generalizations has been discovered in recent years to be subject to exceptions. It turns out that RNA catalyzes its own self-splicing, prions (proteins responsible for mad cow disease) carry hereditary information, and retroviruses carry their own hereditary material in RNA and transcribe it to DNA. In each case, the full story of how these exceptions to the relevant generalizations emerged is a story that reflects the operation of natural selection finding strategies in adaptational space that advantage one or another unit of selection in the face of stratagems employed by others. Over the long run, the number of exceptions to any functional generalization will increase, and increase in ways we cannot predict. If laws are timeless truths, then there will be no laws in biology, or at least none to which our generalizations will visibly approach in approximation. For the ceteris paribus clause of every biological statement is subject to a huge number of qualifications, from which some drop out and others are added, as a result of the vagaries of local environmental changes.

But wait, you may say: over time the number of exceptions to most biological ceteris paribus clauses may be very large, may grow, and may change in ways we cannot predict; but we are talking geological epochs here, periods of time that dwarf human lifetimes and require vast environmental changes and large numbers of extremely infrequent mutations. Surely on this scale, many of the ceteris paribus generalizations of biology are reliable, whether we call them laws or not? True enough. But it is hardly a reason for biological or philosophical complacency. To begin with, we know from elsewhere in science and ordinary experience that a generalization with a long list of mostly unsystematized and indeed unknown qualifications on its truth lacks explanatory power. It lacks such power because it doesn’t report the factors and forces which make it (mostly) true. More often, such generalizations turn out to be “accidental truths.” The difference between these generalizations and biological ceteris paribus generalizations can only be a matter of degree, not a matter of kind. The biological ones report accidents frozen in evolutionary time, while the others report accidents frozen only for much shorter periods.
Both describe finite “local” trends, and differ only on the dimensions of the locality. If short-term accidents lack explanatory power, then either (1) long-term ones lack such power as well, or (2) we need an explanation for why they don’t. But then there are the problems raised by those biological “other things being equal” generalizations that fail before our eyes, ones we start out thinking are reliable but lose confidence in as we learn more, generalizations like “Humans have 23 pairs of chromosomes” or the central dogma, both of which looked so firm and exceptionless when first propounded, and lost their explanatory power as the exceptions to them piled up. Apparently, as we test generalizations about smaller and smaller biological systems and their components—tissues, organs, cells, organelles—the frequency with which they are disconfirmed increases even as the kinds of disconfirming instances decrease. An example will illustrate this trend. Consider Mendel’s laws of independent
assortment and segregation of genes. Within two decades of their rediscovery in the early twentieth century, exceptions to the “laws” began to pile up: crossover and linkage showed that the “laws” were at best ceteris paribus generalizations. And the frequency of linkage and crossover was employed to measure the physical distance between genes in units of morgans and centimorgans, depending on the frequency with which pairs of them violated Mendel’s laws. And, of course, more and more exceptions to Mendel’s laws turned up over time, owing to meiotic drive, autosomal genes, and so on, and we still don’t know how large the range of phenomena the “and so on” covers. But when investigation shifted to a lower level of functional individuation—for example, the structural gene and the generalization “one gene, one enzyme”—in relatively short order exceptions began to turn up. They did so in relatively manageable numbers—two genes to produce one working enzyme, the same sequence coding for two distinct proteins, regulatory genes that code for regulatory proteins which may or may not qualify as enzymes at all. Complications, yes, but not paradigm-shifting. By the time the existence of introns and exons was established, we could be confident in the truth of the generalization that prokaryotes lack introns and eukaryotes bear them. But it certainly does not look like this generalization reflects some significant basic fact of biology. Even the RNA-virus exception to the central dogma is not conceptually disturbing. We could perfectly well imagine the discovery tomorrow of a prokaryote with an intron in one of its genes. It would be a discovery of roughly the same order of magnitude as Thomas Cech’s discovery that proteins were not the only enzymes, that RNA is a self-splicing catalytic molecule. This was a Nobel Prize–winning discovery, but it did not shake the foundations of biology. Why not?
Because, as elsewhere, the discovery of exceptions to accidental regularities is not really very surprising. In fact, the explanation for why there are fewer exceptions at lower levels than at higher levels of biological organization involves another invocation of Dobzhansky’s dictum. The smaller biological systems have been around longer and reproduce more rapidly than the larger ones. Accordingly, though natural selection is equally blind to structural differences among them as it is to such differences among multimolecular and multicellular systems, there has been a good deal more time as measured in reproductive generations, and a good deal more fundamental environmental (that is, chemical) stability, for evolution to operate at these levels. It has had enough time to select for relatively fine differences in adaptational fitness among differing macromolecules and small packages of them with similar effects. As a result, at the level of the macromolecule there are just fewer ways left to skin the cat: fewer alternative structures with the same effects, and so fewer types of qualifications, exceptions, and counterexamples to generalizations biologists uncover, but a large enough number of copies of the same exceptions that when we begin to frame generalizations at these low levels of biological organization, they confront us immediately.

But the crucial thing to note is that these generalizations about which polynucleotide sequence has introns and which doesn’t, or whether amino acids and nucleic acids can be enzymes, are not particularly explanatory. In fact, they are more suited to being explananda—descriptions of what biology needs to explain—than to being explanans—biology’s explainers. The lower-level generalizations of molecular biology, with a small number of relatively easy-to-uncover exceptions, differ only in degree from those uncovered at much higher levels of biological organization. If the former don’t have much or any explanatory power, it remains a mystery as to why. Thus, biology must take seriously the explanatory and predictive problems that face ceteris paribus “laws.” For it looks as though they are almost all reports of frozen accidents—whether momentary, as with the nucleotide sequence of the AIDS virus, or glacial, as in the stripes of the tiger.

These problems come under two headings. First is the problem of empirically testing the ceteris paribus clauses, whose very nature insulates their generalization from such testing: when an empirical test appears to falsify the generalization, it is always possible that conditions were not equal, so the generalization is not really disconfirmed. If the conditions to be held constant are huge in number and variable from occasion to occasion, then there is no way to tell when apparent falsification is real falsification. This makes for equally serious problems when it comes to verification. How are we to tell that the circumstances in which an apparent verification obtains will be repeated . . . ever? We cannot if the number of conditions we need to establish is huge and variable.
And, of course, we cannot rely very heavily on ceteris paribus generalizations in agriculture, medicine, and other pursuits where costs of employing unreliable means are high. Predictive reliability is just the same thing as an extremely high degree of very precise confirmation. It’s obvious that no such reliability is available for many of the biological generalizations that bear on our own health and welfare. Second, there is the philosopher’s problem of how explanation works in biology. Why suppose that the explanations these generalizations are employed in are correct, complete, general, or otherwise really explanatorily adequate? Even more seriously, why suppose that they can be corrected, completed, or made more general by increasing the precision of and decreasing the exceptions to the generalizations which do the explaining? These are not rhetorical questions. Answering the second question about the nature of biological explanation will enable us to frame the limits of biology’s predictive power and technological application as well.
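As a coda to the Mendel example above: the bookkeeping by which violations of independent assortment were converted into map distances is simple enough to state directly. The progeny counts below are invented for illustration; the convention that a 1 percent recombination frequency defines one centimorgan is the standard one (and is reliable only for loci that lie fairly close together):

```python
def map_distance_cM(recombinant_offspring, total_offspring):
    """Map distance in centimorgans: recombination frequency x 100,
    i.e. the rate at which two loci 'violate' independent assortment."""
    return 100.0 * recombinant_offspring / total_offspring

# 170 recombinants among 1,000 progeny puts two loci about 17 cM apart:
print(map_distance_cM(170, 1000))  # 17.0
```

The exceptions to Mendel’s “laws” are thus regular enough to serve as a measuring instrument, which is just the sense in which they are explananda rather than explanantia.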
how biological models explain

Biologists, both molecular and nonmolecular, are not likely to give too much pause to either the conclusion that there are no laws in biology of the sort we are familiar with in scientific explanation elsewhere, or the suggestion that without such laws, biological explanations are unfounded. If pressed, biologists of both sorts will invoke their models as carrying the explanatory burden in the discipline. Nonmolecular biologists will especially advert to mathematical models such as those due to Hardy and Weinberg, or Fisher’s sex-ratio model, or perhaps Alan Turing’s equations for developmental processes. Molecular biologists will invoke the model systems they elucidate in order to identify threads common across a range of macromolecular mechanisms.

Let’s consider the mathematical models first, as they have attracted the interest of physicalist-antireductionist philosophers not otherwise able to identify explanatory laws in biology. Fisher’s sex-ratio model is an oft-cited example of a biological model that provides for the sort of explanations prized in the discipline as particularly insightful. The model shares a form with general laws common in other disciplines. It is an “if-then” statement, whose antecedents include the assumption that there are two sexes; that the ratio of male to female offspring of an organism is a fixed lifetime probability, which varies between p(female) = 1 and p(male) = 1; and that natural selection obtains among reproducing individuals. Given these antecedents, we may logically infer that as the ratio of males to females in the whole population diverges from 50:50, organisms that have a higher probability of having the minority offspring will be favored: if there are more females than males in the population, then competition to mate with the scarcer males will increase their chances of having offspring, and vice versa. As a result, swings away from a 50:50 sex ratio will be dampened.
Thus, the consequent of the model, the clauses following the "then," will express the conclusion that the sex ratio will remain stable, around 1 to 1 males to females; and the greater the departure from this ratio at any one time, the stronger the selection for organisms that preferentially give birth to members of the minority sex. Thus, we may use this model to explain the apparently miraculous pre-established harmony of the sex ratio in mammals. Models like Fisher's share another feature with physical laws besides logical form. We can be confident that it is true everywhere and always, across time and space: wherever its assumptions obtain, the sex ratio will be 1:1. On the other hand, there appears to be a very significant dissimilarity between the mathematical models of biology and the otherwise similar laws of physics. In a mathematical model, we can derive the consequent from the assumptions in the antecedent by logic and mathematics alone. We need not subject the if-then statement to empirical testing in order to establish its truth. It's a necessary truth, if ever there were one. By contrast, once we have stipulated the conditions under which Newton's law of gravitational attraction holds, we cannot derive by logic alone the consequence that gravity varies inversely as the square of the distance. Or even that there is such a thing as gravity. We need empirical data to formulate the law, and mechanical contrivances like Atwood's machine to test it. Well, what's so bad about that? A biological model will turn out to be more secure than a physical law. After all, unlike physical laws, it cannot be refuted by the facts! But it is largely because physical laws can be refuted by the facts that they explain the facts which confirm them. A proposition that cannot be refuted by any particular fact is compatible with anything's happening at all. Because it cannot rule anything out as not happening, it also cannot explain anything that does happen. To explain what happens is to rule out other things as not happening. A law or model or anything else that can't rule anything out can't explain anything either. Consider the definition "All bachelors are unmarried." It cannot explain why Noel Coward was a bachelor, and it would have remained true had Mr. Coward married. It owes its certainty to the same thing that makes it explanatorily impotent: its status as a necessary truth. What all this means is that the mathematical models of biology are necessary truths, and that if they are parts of explanations, then there must be something else in the explanations that makes them explanatorily relevant to what they purport to explain. What follows is an analogy that may make the point more sharply. A biologist's mathematical model is a necessary truth that works like a rule of chess: if the king is in check, and there is only one space to which it can move without remaining in check, then the king must be moved to that space.
This if-then statement will explain the motion of a particular piece of wood only if we add that the piece of wood is being treated as a king in a chess game between two persons, who want to play chess, believe they are playing chess, understand the rules of chess rightly, believe that one has the other in check, and so on and so forth. In other words, our rule seems to explain the move, but only against the background of a large number of complex assumptions about the chess players’ beliefs and desires, only some of which are about the rule, along with a large number of laws (most of which we do not know) that connect the beliefs and desires to the movements of the players’ hands. Appealing to the rule explains only because we know enough about these unstated assumptions. If mathematical models in biology explain in the same way as the constitutive rules of chess explain, we need to ask, what are the unstated assumptions that make them explanatory? There is one set of assumptions which would make biological models
explanatory that we can rule out immediately: if mathematical models in biology worked like the mathematical models in physics, there would be no mystery about how they explain. Physical models explain because they are related to physical laws; there are no biological laws for biological models to be related to. Consider the ideal-gas law and the Bohr atom—two mathematical models in physics. An ideal gas is by definition a system that acts in accordance with the equation PV = nRT; a Bohr atom is by definition one whose electron energy states are quantized to produce the Balmer lines in a spectrum. These are necessary truths. Why do they explain? Well, for each of these physical models, there is a set of contingent generalizations about gases or atoms, from which it follows that "Within a given range of T, P, and V, real gases come close enough to instantiating PV = nRT" or "Real atoms approximate to Bohr atoms." These statements are lawlike generalizations, despite their "close enough" or "approximate to" or "other things equal" clauses, just because there are a number of relatively complex generalizations in physical theory that gases and atoms figure in and that explain these ceteris paribus generalizations. But we know that neither ceteris paribus laws nor strict laws are to be found in biology. So, this explanation for why physical models work is not available to explain why biological ones do. In the physical sciences, models have turned out to be way stations toward general laws about the way the world works. The sequence of equations of state for a gas moves from the ideal-gas model toward successively greater predictive accuracy and explanatory unification. Any such equivalent expectation in biology is ruled out by the absence of nomological generalizations of the familiar sort we know and love in physics and chemistry.
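The "close enough" relation between real gases and the ideal-gas model can be given a numerical face. This is my illustration, not the author's: the van der Waals equation stands in for a "real" gas, using rough handbook-style constants for CO2, and the comparison shows how small the deviation from PV = nRT is in a moderate regime.

```python
# Compare a van der Waals gas with the ideal-gas law at moderate conditions.

R = 0.08314            # gas constant, L·bar/(mol·K)
a, b = 3.64, 0.0427    # van der Waals constants for CO2 (approximate values)

def p_ideal(n, V, T):
    """Ideal-gas pressure: P = nRT/V."""
    return n * R * T / V

def p_vdw(n, V, T):
    """(P + a n^2/V^2)(V - n b) = nRT, solved for P."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

n, V, T = 1.0, 25.0, 300.0   # 1 mol in 25 L at 300 K
deviation = abs(p_vdw(n, V, T) - p_ideal(n, V, T)) / p_ideal(n, V, T)
print(f"{deviation:.1%}")    # → 0.4%
```

At these conditions the real-gas correction is well under one percent; at high pressures or low temperatures it grows, which is exactly why the "within a given range of T, P, and V" clause is needed.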
Consider the set of models that characterize population biology—models which begin with a simple two-locus model that reflects Mendel's "laws" of independent assortment and segregation. After the first disconfirming complication was discovered—gene linkage—geneticists added a ceteris paribus clause to Mendel's laws. Then genetic crossing-over was discovered. After a certain point, geneticists ceased adding qualifications to Mendel's laws, and began to treat them as the historically earliest and simplest in a sequence of models that have been continually complicated as research has uncovered the multitude of different ways in which natural selection has explored adaptational space. Because there are so many survival/reproduction strategies available to nucleic acids, Mendel's two original laws have been so riddled with exceptions that it isn't worth revising them to accommodate exceptions. Biologists ceased adding qualifications to them, and instead began to construct other models, which introduce more and more loci, probabilities, recombination rates, mutation rates, population size, and so on. But they have done so without elaborating a single population-genetic theory that could underlie and systematize these models the way that physical theory underwrites its models. For what could the theory that underlies and systematizes these Mendelian models be like? Since the models' predicates—their Fx's and Gx's—are all functional, the theory systematizing them will be expressed in functional terms as well. But we know already that any theory so expressed will itself not provide the kind of strict or lenient laws that a systematization of the models requires, that is, a set of laws that will explain when they obtain and when they do not obtain. Let's take stock: we began by noting an argument against the reduction of biology to physical science that turns on the absence of biological laws to be reduced to physical ones, and then developed an argument from Dobzhansky's dictum to the absence of such laws. Without them, however, biological explanation appears to be mysterious. What is more, biological explanation does employ mathematical models that look rather like the models we find in physics. The appearance is deceptive, though, because there are no laws standing behind these models as there are standing behind physical models. And this just increases the mystery, since without them mathematical models by themselves have no explanatory power for empirical happenings at all.
the only laws in biology are darwin’s The source of demystification here can only be the theory of natural selection. In On the Origin of Species, Darwin made two broad claims, both of them about what happened on the Earth over a long period and what caused it to happen: the common descent of the large but finite number of particular biological systems on the planet, and the importance of natural selection as the source of their diversity, complexity, and adaptation. In other words, Darwin made a claim about (natural) history. But he also made claims about the features common to the instances of evolution he reported, and credited these common features to the operation of general laws—he specifically mentioned the law of “unity of type,” and the law of “conditions of existence”: It is generally acknowledged that all organic beings have been formed on two great laws: Unity of Type, and the Conditions of Existence. By unity of type is meant that fundamental agreement in structure which we see in organic beings of the same class, and which is quite independent of their habits of life. On my theory, unity of type is explained by unity of descent. The expression of conditions of existence . . . is fully embraced
by the principle of natural selection. For natural selection acts by either now adapting the varying parts of each being to its organic and inorganic conditions of life; or by having adapted them during past periods of time. . . . In fact, the law of the Conditions of Existence is the higher law; as it includes, through the inheritance of former variations and adaptations, that of Unity of Type. (1859, p. 206)

Let's try to state the law of the conditions of existence a little less telegraphically and more explicitly, though still in terms Darwin employed; and then let's consider whether this putative law suffers the same fate as other purported laws in biology. Darwin might have stated his laws of natural selection thus:

1. Biological systems not on the verge of extinction or fixity reproduce with heritable variations.
2. If heritable variation obtains among biological systems, then there will be fitness differences among these biological systems.
3. In the long run, the more-fit variants will leave a higher proportion of descendants than the less-fit variants.

Among the conclusions Darwin derived from these principles of his theory is the following:

4. Until fixity or extinction is attained, there will be descent with modification, that is, evolution.

This presentation of the theory of natural selection is not the only one, nor perhaps the most perspicuous or economical one. Some contemporary versions will substitute replicators and interactors or vehicles for biological systems in order to draw the distinction Darwin made between the two processes required for evolution: inheritance, accomplished by replicators, and adaptation, accomplished by the replicators' interactions with the environment or by separate interactors that the replicators give rise to through variation and selection acting on them.
Employing this terminology, we may simplify the purported laws of natural selection into a single central generalization: If there is random variation among replicators, then there will be selection for fitness differences between them or between their interactors. Or again, if x and y are replicators or their interactors, we can frame this generalization in more familiar terms as a principle of natural selection (PNS):

PNS: (x)(y) [If x is fitter than y in generation n, then probably (there is some future generation, n′, in which x will have more descendants than y)]
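The "probably" in the PNS can be illustrated with a small stochastic simulation. This is my sketch, not the book's: the population size, fitness values, and number of runs are arbitrary choices. In a finite Wright-Fisher-style population, the fitter variant usually, but not always, ends up with more descendants, because random drift can defeat selection.

```python
# The fitter variant x prevails in most runs, but drift defeats it in some.
import random

def wright_fisher(N=20, w_x=1.1, w_y=1.0, x0=10, gens=200, rng=None):
    """Return the final count of x after repeated resampling with selection."""
    if rng is None:
        rng = random.Random()
    x = x0
    for _ in range(gens):
        if x in (0, N):            # fixation or loss: nothing left to select
            break
        # Selection-weighted chance that a given offspring descends from x.
        p = x * w_x / (x * w_x + (N - x) * w_y)
        x = sum(rng.random() < p for _ in range(N))   # binomial resampling
    return x

rng = random.Random(0)
runs = 2000
wins = sum(wright_fisher(rng=rng) > 10 for _ in range(runs))
print(wins / runs)   # well above 0.5 but below 1: x usually, not always, wins
```

The fraction of runs in which x outnumbers y comes out far above one-half yet short of one, which is just what the hedged consequent of the PNS asserts.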
Dobzhanksy’s Dictum and the Nature of Biological Explanation
The operation of this principle on successive generations of replicators and interactors produces "descent with modification," also known as evolution. Do either of these two versions of the PNS—or, for that matter, the generalizations 1–4 above—run afoul of the arguments against laws in biology, from the functional individuation of their kind-terms to the blindness of natural selection to structure, and the strategic character of selection? They do not seem to do so. In particular, none of them are subject to qualifications or ceteris paribus clauses in virtue of the operation of selective forces on the Earth. After all, these principles constitute the mechanism of natural selection itself; there is no scope for natural selection to qualify, limit, or shape its own operation. Biologists will, of course, have no trouble accepting the truth of the PNS, nor will philosophers. For the latter, the issue is not its truth but (1) whether it is a law or not, and (2) whether it is the law that confers explanatory power on the theory of natural selection. These two issues are intimately connected for philosophers. If the answer to (1) is that the PNS is no law because it is true by definition, owing to the meaning of fitness, then the answer to question (2) cannot be affirmative, and we are no further along in our quest for how biology explains than we were when exploring how models explain. The charge widely employed among biologists that the PNS is no law owing to the meaning of fitness is a serious and long-standing one. Suppose we define greater fitness as greater reproduction; then the PNS will read

(x)(y) [If x reproduces more than y in generation n, then probably (there is some future generation, n′, in which x will have more descendants than y)]

It is clear that this statement will turn out to be true by definition just as surely as Fisher's sex-ratio model is.
The probability mentioned in its consequent will always be 1.0, for n′ will just be the immediately next generation after n. So understood, we could drop the "probably," and this would make manifest the vacuous character of the PNS when fitness is so defined. More ink has been spilled dealing with this problem than any other in the philosophy of biology; so much so that the whole subject was once referred to as the philosophy of fitness. The potential implications of the allegation that the PNS is a necessary truth, a mere consequence of a definition, are grave. In particular, it provides ideological opponents of evolutionary theory a convenient basis on which to deny that it is an empirical claim of science and so to exclude it from the biology curriculum of schools and universities. The claim also has been used to insist that other nonscientific alternative accounts of diversity, complexity, and adaptation, such as creationism or "intelligent design theory," as it latterly has been styled, be given equal standing in these curricula. The whole issue of defining fitness is of sufficient importance that much of the next chapter will be devoted to it. For the moment, I put it aside on the ground that both reductionists and antireductionists have an equal stake in the search for an account of fitness which rebuts this charge. For the antireductionist needs an irreducible but explanatory evolutionary theory to ground the argument from the proximate/ultimate distinction to the autonomy of biology from physical science. And the reductionist needs a law to be reduced to the laws of physical science. Or so it seems. Still a further reason for assuming, at least for the nonce, that this problem is soluble is that, as I now hope to show, if the PNS is a law, then the mystery of how biological explanation proceeds—including the mystery of how mathematical models explain—can be solved.
biological explanation is historical, all the way down to the molecules

Assume that the PNS is a law of nature at the core of Darwin's theory. Then, following Dobzhansky's dictum, the PNS is implicitly involved both in every biological explanation and indeed in all of biology's functional descriptions of phenomena that are to be explained or do any explanatory work. In other words, the lion's share of biological explanations that don't explicitly invoke the PNS are, in a certain innocent sense of the term, "incomplete." They assume the truth of the PNS, and it is to be understood as background information in every biological explanation, background information that need not be trotted out on every occasion to eke out the full explanation. So understood, the role of the PNS in biological explanation illuminates exactly in what sense biology is, as Darwin would have recognized, a historical science, and how the more limited generalizations of biology as well as its mathematical models explain. If the PNS is a law of nature, it is the only one in biology, or at least the only fundamental one. Other laws, if they are distinct laws, such as the competitive exclusion principle or the laws of island biogeography, will be laws in virtue of their being logical consequences derivable from the PNS alone. Beyond the theory of natural selection, the rest of biology is a set of subdisciplines whose domains are historically delimited by the operation of natural selection on local conditions (the Earth). To begin with, biology is a historical science, since all functional individuation reflects the vagaries and vicissitudes of natural selection over limited time periods; almost all biological kinds are the result of selection over variation in order to solve design problems. Second, solutions to the same problem are multiple, and one biological system's solution sets another
Dobzhanksy’s Dictum and the Nature of Biological Explanation
biological system’s next design problem. Thus, each system’s environment varies over time in a way that makes all putative biological generalizations specifically about it merely frozen “accidents.” Any subdiscipline of biology—from paleontology to developmental biology to population biology to physiology to molecular biology—can uncover at best historical patterns, owing to the fact that (1) its kind vocabulary picks out items generated by a historical process, and (2) its generalizations are always open to being overtaken by evolutionary events. When a historically limited biological generalization of the form “All Fs are Gs” obtains, this fact is to be explained by appeal to the operation of the principles of natural selection on local conditions—some of these “All Fs are Gs” statements will describe long-established and widespread historical facts, such as the fact that over at least 3 billion years on Earth, all hereditary materials have been composed of nucleic acids (prions excepted); other such local historical trends of them will be very local and equally transitory, such as the description of the primary sequence of the latest AZT-resistant mutation of the virus that causes AIDS. In most cases, the explanations of why these generalizations obtain will be at most “explanation sketches”—incomplete explanations that cannot be completed because the completing details will be too numerous and long ago effaced in the course of evolution. In order to complete an ultimate or adaptational explanation of any particular “All Fs are Gs” statement, it would be necessary to show why Fs being Gs, or having property G, rather than property H or J or K or . . . came to be the actual solution to the design problem set by F’s environment. 
This would require an identification of the in-principle alternative solutions to the “design problem” that being a G or having property G solves; an account of which of them were available to Fs; details which show why G solved the problem better than the other available solutions; and an account of the subsequent environment of Fs which shows why G is maintained even after local environmental conditions (and their adaptational problems) have changed. When such auxiliary information is neither available nor otherwise worth securing, the most we can expect are adaptational explanation sketches with assumptions about particular past environments, recombinations, and mutations that are not open to direct and obvious test. But at least their antecedents and consequents would be linked by nomological generalizations in the way required for scientific explanations. Or at least they would be so linked were we to accept the PNS implicit in these explanation sketches as a law. Though biology cannot fill in the details, it can be confident that the nomological generalizations involved are known and have been at least since 1859. When “All Fs are Gs” explains, in spite of its ceteris paribus clauses and its known exceptions, it does so because there is a complete explanation that
includes among its initial conditions that F obtains for some finite class of biological systems, along with other known and more usually unknown initial conditions. The other component of the explanans is the PNS. Together with the initial conditions, it implies that the members of the class of biological systems that have F and satisfy the other unknown conditions also have G. Mathematical models differ in two respects from these explanation sketches that implicitly invoke the PNS along with the initial condition that other things are equal: the models are explicit in their appeal to the PNS, and given the PNS, the list of initial conditions sufficient for the explanandum is fully known. What makes a mathematical model a necessary truth is that the stated (initial) conditions of its antecedent, including the PNS or one of its deductive consequences, logically imply the consequent condition. This means, of course, that for every mathematical model, we can derive one or more ceteris paribus empirical generalizations. Suppose we detached one or more of the initial conditions in the explanation provided by a mathematical model, such as Fisher’s sex-ratio model, and combined it with the consequent condition, deleting the PNS and the other stated conditions from the model. Then we should be able to produce one of those exception-ridden ceteris paribus generalizations that figures in explanation sketches and describes a more or less frozen accident. 
Thus, consider the result of stripping away parts of the Fisher model, including its explicit appeal to the PNS: we get the generalization that "the sex ratio in species with two sexes hovers around 1:1." This is indeed a fairly reliable claim, on the order of "The eggs of robins are blue," "Arctic species have a higher volume-to-surface-area ratio than non-Arctic species of the same family," or "The buckeye butterfly has an eyespot on its wing." It has several known exceptions (meiotic drive, for example) and probably many more unknown ones, but for the most part it is true most of the time. Thus, the role of the PNS as the implicit law in biological explanation sketches solves the mysteries of how both biology's ceteris paribus generalizations and its necessary truths explain. The antireductionist will observe and the reductionist must admit that the character of biological explanations as sketches will be reflected in molecular biology as much as anywhere in biology. Here too the explanatory power is carried by an often unmentioned implicit appeal to the PNS. What follows is a particularly clear example of how the PNS works into every nook and cranny of a molecular explanation. Consider the explanation of how genes are copied that appeals to semiconservative chemical synthesis in the 5′-to-3′ direction of a double-stranded DNA molecule. This synthesis is initiated by the action of an RNA primer and a set of proteins that untwist the molecule. It is completed by DNA polymerases, which stitch the nucleotides together. There is, of course, a process described by organic chemistry that causally explains the physical steps in the process. But it is in virtue of natural selection that this macromolecular process constitutes gene copying. And, of course, the description of the chemical constituents and the chemical process of how genes are copied turns out to have known and unknown exceptions. For example, the genes in an RNA virus are not double-stranded DNA molecules to begin with. Of course, we can accommodate quite easily this exception in a generalization about how genes are copied (and RNA genes actually require the DNA replication process as a component of their copying). But there are almost certainly already existent unknown exceptions, and almost certainly there will be new exceptions in the future due to the operation of natural selection continually searching adaptational space. Many of the items to which a macromolecular explanation adverts, for example, gene, primer, polymerase, are functional kinds produced by natural selection, though its role is unmentioned in the explanation sketch. Because they are naturally selected kinds, they will be structurally heterogeneous, and pending the discovery of all the structurally diverse ways macromolecules can realize these kinds, the biochemical explanation of gene duplication will be a sketch. Finally, and most crucially, even in molecular biology, proximate explanation turns out to be implicitly evolutionary. Here is a particularly nice "textbook" illustration of how proximate explanation in molecular biology invokes connections effected by the theory of natural selection to answer an (italicized) question about a process:

A striking feature of [the process of replication] is the intricate interplay of many proteins. Genetic analysis suggests that at least fifteen proteins directly participate in DNA replication. Why is DNA replication so complex? In particular why does DNA synthesis start with an RNA primer that is subsequently erased?
An RNA primer would be unnecessary if DNA polymerases could start de novo. However, such a property would be incompatible with the very high fidelity of DNA polymerases. . . . DNA polymerases test the correctness of the preceding base pair before forming a new . . . bond. This editing function markedly decreases the error frequency. In contrast, RNA polymerase can start chains de novo because they do not examine the preceding base pair. Consequently, their error rates are orders of magnitude higher than those of DNA polymerase. The ingenious solution . . . is to start DNA synthesis with a low fidelity stretch of polynucleotide but mark it “temporary” by placing . . . [short RNA primer] sequences in it. These short RNA primer sequences are then excised by DNA polymerase I and replaced with a high fidelity DNA sequence. . . . Much of the complexity of DNA replication is imposed by the need for very high accuracy. (Stryer 1983, p. 587)
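The quantitative stakes of the fidelity difference Stryer describes can be sketched with a back-of-the-envelope calculation. The figures below are my rough, order-of-magnitude illustrations, not values from the passage: multiplying a per-base error rate by a bacterial genome length gives the expected number of copying errors per replication at each level of error correction.

```python
# Expected copying errors per genome replication at illustrative error rates.

genome_length = 4.6e6          # roughly the E. coli genome, in base pairs
rates = {
    "no editing (~RNA polymerase)":    1e-4,
    "base selection alone":            1e-5,
    "with 3'->5' proofreading":        1e-7,
    "plus mismatch repair":            1e-9,
}
for label, rate in rates.items():
    print(f"{label:32s} ~{genome_length * rate:.4g} errors/replication")
```

Without editing, every replication would scatter hundreds of mutations across the genome; with proofreading and repair, the expected number drops well below one, which is why selection for long-term information storage demands the extra machinery.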
The PNS haunts this entire discussion: it is natural selection that imposes the demand for very high fidelity in information storage by genes; it is natural selection that relaxes the demand for information transmission and protein synthesis by RNA. DNA polymerases are part of nature's solution to the problem of providing for high fidelity in long-term information storage; RNA polymerase is nature's solution to the problem of producing large numbers of short-lasting signals quickly and cheaply. In this and in other proximate explanations in biology, the connection between the explanandum and the explanans is effected by the principles of natural selection so clearly that, like principles of rational action in history, they need not even be mentioned to eke out the explanation. Biology is history, but unlike human history, it is history for which the "iron laws" of historical change have been found, and codified in Darwin's theory of natural selection. And because everything else in biology is history—the description and explanation of local accidents—there are no laws in biology other than Darwin's. But owing to the literal truth of Dobzhansky's dictum, these are the only laws biology needs. This conclusion raises a challenge for the antireductionist: to show that the PNS is in fact innocent of the charge of tautology owing to the biologist's definition of fitness. If the antireductionist declines this challenge, two other alternative challenges must be faced: either identify another law or laws that will carry biology's explanatory burden, or show how biology can explain without laws at all. If, however, the antireductionist embraces the PNS as a law, then the reductionist must deal with a potentially more daunting challenge. For reductionism is committed to the notion that biological laws can be reduced to physical ones. And there seems scant prospect that the PNS is open to such a reduction.
Dobzhansky’s dictum may after all make the world safe for autonomous biology. But, of course, it would have to live unreconciled with physicalism. Taking stock, we have identified a proposition that stands a chance of being the sort of law that biology needs, a law about which the reductionist and the antireductionist can make the dispute between them explicit. The next chapter explores an interpretation of the PNS that will sustain the antireductionist’s claim. We will see that so treated, the theory of natural selection in which the PNS figures will not meet the needs for which modern biology employs it. This result, however, is only half of what is required to vindicate reductionism. The other half of what we need is a positive account of how the PNS, and with it the rest of Darwinian theory, can be adequately grounded in physical science.
5
• • • • •
Central Tendencies and Individual Organisms

chapter 4 argued for the indispensability and uniqueness of the principle of natural selection (PNS) at every level of biology, right down to the macromolecular. But in doing so, it has given important hostages to the antireductionist's argument. Recall from the introduction that this argument turns on the claim that ultimate or evolutionary explanations are what make the difference between biology and physical science. If the antireductionist can show that the PNS by itself demands an approach to biology that is incompatible with reductionism, or that it can only be defended against serious criticisms on an interpretation that makes reductionism impossible, no more convincing argument for the autonomy of the biological could be given. Now it is in fact the case that over the century and a half since Darwin first advanced the theory of natural selection, it has faced serious interpretive problems and significant conceptual challenges. These are problems that reductionists and antireductionists about biology both need to deal with, either responding to the charges jointly, in ways that are neutral on the dispute between them, or separately, on the basis of their different approaches to the reducibility of biology to physical science. Since, by and large, antireductionism is a far more popular approach among philosophers of biology than reductionism, few have sought to deal with biology's problems from a perspective that is neutral as between reductionism and its rejection.
Indeed, several thoughtful approaches to the interpretation of the theory of natural selection have primarily been supposed to substantiate a holistic view of the theory and the process of evolution incompatible with its reduction not just to physics but even to the ecology of competing individual lineages, populations, groups, organisms, or genes. Exponents of this interpretation of the theory of natural selection treat it solely as a claim about the "central tendencies" in evolution. In the words of Kitcher and Sterelny, "Evolutionary theory, like statistical mechanics, has no use for such a fine grain of description [as the biography of each organism]: the aim is to make clear the central tendencies in the history of evolving populations" (1988, p. 345). If the theory of natural selection has no use for the biography of individual organisms, it will have no more use for anything at a still lower level of organization, such as the individual genotype or gene, for that matter. And, of course, if the theory of natural selection really is like thermodynamics, then it won't just be a matter of the theory's "having no use for" individuals. The theory will be blind to them, and the fate of individuals—whether particular groups, organisms, or genes—will be invisible to the theory. Many a philosopher and some biologists have followed R. A. Fisher in comparing the theory of natural selection to thermodynamic theory. They have held that just as the second law of thermodynamics' claims about the entropy of a gas are not made true by any thermodynamic properties of individual particles, similarly, the theory of natural selection's claims about fitness differences and its population-level consequences are not made true by any claims about fitness differences among individuals.
Such a holistic, antireductionist view of the theory of natural selection is said to have many advantages, not least of which is solving the long-standing problem of defining fitness in such a way as to preserve the contingent, empirical, explanatory status of the theory. For the reductionist who is committed to Dobzhansky’s dictum, even at the level of the macromolecular, such an interpretation will be hard to accept. For if it is correct, then not only will nonmolecular functional biology not be reducible to physical science, but so also will molecular biology be irreducible. This conclusion would, of course, make even more untenable the joint commitment of many biologists and philosophers to physicalism and antireductionism. But, given the importance of the theory of natural selection to their disciplines, it would probably tip the scales against physicalism altogether, were they really forced to choose. In this chapter, therefore, I consider the holistic “central-tendencies” interpretation of the theory of natural selection, and the way in which it employs probabilities both to deal with the interpretative problems of the theory and to underwrite its holism. My conclusion is that the holistic interpretation and
its parallel to thermodynamics are seriously mistaken and prevent the theory from coherently defining fitness or making a principled distinction between selection and drift. But these are two things any successful interpretation of the theory of natural selection must do. I show that these two things—defining fitness and empirically distinguishing drift and selection—can only be accomplished by a theory that does have some use for individual organisms and their biographies. In chapter 6, I carry the reductionist’s argument by showing that the theory of natural selection’s claims about both individuals and populations are unproblematically grounded in the physical processes that obtain between macromolecules. In effect, the program of these two chapters is to make natural selection safe for reductionists and to show how doing so rids us of the untenable dualism.
could fitness be a probabilistic propensity?

Chapter 4 introduced the PNS as the explanatory core of Darwinism and defended its status as the unique nonderived general law in biology. One relatively direct consequence of the treatment of the theory of natural selection as a claim about central tendencies among populations is to interpret the PNS in the following way:

PNSpop. (x)(y)(E) [If x and y are competing populations and x is fitter than y in E at generation n, then probably (x’s size is larger than y’s in E at some generation n′ later than n)]

PNSpop is relativized to environment E, since fitness is relative to an environment. We cannot narrow down the later generation beyond “some” generation or other, for reasons that will be made clear below. Some exponents of the central-tendencies approach (such as Matthen and Ariew 2002, pp. 72ff.) substitute a deductive consequence of the PNS, a version of Fisher’s fundamental theorem: “In a subdivided population the rate of change in [overall population] growth rate [that is, fitness] is proportional to the variance in growth rates [that is, fitnesses].” They write, “The objective of natural selection is to explain and predict changes in the relative frequencies of heritable traits within a population. The change that selection explains is a consequence of variation in fitness.” Exponents of the “central-tendencies” interpretation of the theory of natural selection will stop here, and deny that the PNSpop need be further grounded on claims about the fitness of individuals. Thus, Walsh, Lewens, and Ariew (2002, p. 469) write, “Natural selection explains changes in the structure of a population, but not by appeal to the individual-level causes of births, deaths, and reproductions.”
Biologists and philosophers who reject the temptation to stop with this principle may wish to endorse something like the following PNS for individuals:

PNSind. (x)(y)(E) [If x and y are competing organisms in generation n, and x is fitter than y in E, then probably (there is some generation n′, at which x has more descendants than y)]

These biologists and philosophers hold that, when PNSpop obtains, the PNSind is the most important part of the explanation, and the explanation is a matter of straightforward aggregation from pair-wise comparisons to population statistics. Both principles employ the relational property “x is fitter than y” in their antecedents and the sentential operator “probably (p)” in their consequents. What these terms mean remain two of the most vexed questions in the philosophy of biology. There are two proposed answers to the question of what fitness in the antecedent and probably in the consequent of the PNSpop (and the PNSind, if there is one) mean that are popular among philosophers of biology. First, the so-called probabilistic-propensity definition of fitness:

“x is fitter than y in E” = “x has a probabilistic propensity >.5 to leave more offspring than y.”

The loci classici of this definition are Brandon 1978 and Beatty and Mills 1979. (See also Brandon 1990, chapter 1; Sober 1993, p. 71; and Matthen and Ariew 2002.)1 Sober defines the comparative fitness of traits, not individuals or populations. “Trait X is fitter than trait Y if and only if X has a higher probability of survival and/or reproductive success than Y” (2000, p. 71). Traits are types, that is, abstract properties. Their survival and/or expected reproductive success is a matter either of the individuals that instantiate them or the individuals or populations that manifest these traits. Trait fitness differences require individual or population fitness differences.
Second, the relative-frequency interpretation of the consequent’s probability:

“Probably (__)” = “The relative frequency in the long run of (__) is greater than .5.”

1. Some philosophers and biologists may wish to define fitness in terms of the (probabilistically) expected number of offspring, instead of as a probabilistic propensity. Indeed, I argued for this view in Rosenberg 1993. Of course, if we follow Lewis’s (1986) “principal principle,” the expected value of the probability that x is fitter than y will equal the probabilistic propensity (the chance) that x is greater than y. However, defining fitness in terms of expected values is to covertly import our subjective beliefs into the definition of a relation which obtains even when there are no such beliefs. For expected values are understood in terms of a Bayesian interpretation of probability.
If we plug these two proposals into either or both the PNSpop and PNSind, at least three questions arise. First and perhaps most obvious is the question of how the consequents of either PNSpop or PNSind are related to the finite actual sequences if they are claims about the relative frequency in the long run, that is, about infinite sequences. The problems here are well known. (For a general introduction to these problems, see Salmon 1966, pp. 83–95.) We need a way of applying a claim about infinite sequences to the actual finite sequences which the PNSs are to explain. Although no uncontroversial solution to this problem is available, there must be one. For none of the alternative analyses of probability will preserve the explanatory character of the two PNSs. A Bayesian interpretation makes the PNSs into claims about the expectations and preferences of actual or possible rational cognitive agents. Yet surely the existence or possibility of such agents is not among the truth-makers for the theory of natural selection. The other alternative, that the probability in the consequent is a propensity, has already, so to speak, been spoken for. It is the interpretation of probability that figures in the fitness relations reported in antecedents of the PNSpop and PNSind. If we adopt the same meaning for the probability in the consequent, then when the appropriate grammatical changes are made to accommodate this interpretation by attributing probabilistic dispositions to x and y, the two PNSs will turn out to be tautologies. The second issue is closely related: for either version of the PNS to be a contingent truth, there must be a difference between “x has a probabilistic propensity >.5 to leave more offspring than y in every generation after n” and “the long-run relative frequency of (x’s having more offspring than y in any generation after n) is more than .5.” If there is no difference between these two probabilities, both versions of the PNS become tautologies.
Another way to put the point is that, in the two PNSs, the antecedent is supposed to identify a cause and the consequent an effect. Accordingly, there must be at least in principle a difference between them in conceptual if not empirical content. What would show that there is a difference between these two kinds of probabilities? There certainly are philosophers of science who deny that an empirical distinction between probabilistic propensities and long-run relative frequencies is in general possible (see Earman 1986, pp. 147–51). Putting aside empiricist strictures, would it suffice to claim that, here as in quantum mechanics, we find a brute, unanalyzable, probabilistic dispositional property of a particular item, which generates long-run relative frequencies? Among philosophers of quantum mechanics, some hold that probabilistic propensities can explain actual frequencies (compare Railton 1981, p. 216), and some hold that they do so via a detour into long-run relative frequencies. But, owing to empiricist commitments, few are comfortable with such arguments and adopt them only because, at the level of the quantum mechanical, probabilistic propensities are indispensable and irreducible (compare Lewis 1986). Proponents of probabilistic propensities in the PNSs may envision two possibilities here. One is that probabilistic propensities at the levels of phenomena that constitute the biological are the result of quantum probabilities “percolating up,” in Sober’s (1984) and Brandon and Carson’s (1996) phrase; the second is that there are brute, unexplainable, probabilistic propensities at the level of organismal fitness differences. No one doubts the possibility of quantum percolation at the biological level. It is likely one of the sources of mutations (compare Stamos 1999 for a discussion). But the claim that it has a significant role in fitness differences is not supported by any independent evidence (compare Glymour 2000 for a discussion). The claim that there are brute probabilistic propensities at the level of organismal fitness differences (Brandon and Carson 1996) is only slightly more tenable. No one has adduced any evidence that, for instance, the probabilistic generalizations about the behavior of animals that ethology and behavioral biology provide are irreducibly statistical. Rather, they are expressions of the current state of our knowledge and ignorance of the causes and conditions of the behavior in question. Empiricist-inspired suspicion of dispositions without manifest-trait foundations seems well grounded in biology.
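The distance between a probabilistic propensity and the finite frequencies it is supposed to explain can be made vivid by simulation. The sketch below is purely illustrative (the offspring model, success probabilities, and “litter” size are assumptions of the example, not anything in the text): the organism stipulated to have the higher propensity outreproduces its competitor in most, but not all, finite runs, which is all the “probably” in the consequents of the PNSs can deliver.

```python
import random

def binomial(n, p, rng):
    """Number of successes in n independent Bernoulli(p) trials."""
    return sum(1 for _ in range(n) if rng.random() < p)

def freq_x_outreproduces_y(px, py, litter=10, trials=10_000, seed=0):
    """Relative frequency, over a finite run of trials, with which an
    organism whose per-attempt reproductive success is px leaves
    strictly more offspring than one whose success is py.
    px > py stands in for 'x has the higher probabilistic propensity'."""
    rng = random.Random(seed)
    wins = sum(
        binomial(litter, px, rng) > binomial(litter, py, rng)
        for _ in range(trials)
    )
    return wins / trials
```

With px = 0.5 and py = 0.4 the estimated frequency comes out well above .5 but well short of 1: in a sizable minority of finite runs the less fit organism does better, which is why no finite sequence conclusively settles what the underlying propensity is.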
And it leads inexorably to the conclusion that far from providing the theoretical meaning of fitness, the probabilistic propensity “definition” is a set of an indefinitely large number of operational measures of fitness. Moreover, identifying which of these measures to use turns on prior determinations of whether natural selection obtains and what has been selected. The upshot will be that the probabilistic propensity “definition” does not figure in either the PNSpop or the PNSind. The first thing to notice about the “definition”

“x is fitter than y in E” = “x has a probabilistic propensity >.5 to leave more offspring than y”

is that it makes the PNSs into falsehoods. That is, there are many circumstances in which the organism with the higher number of expected offspring is the less fit, not the more fit organism. For example, Gillespie (1977) has shown that there are cases in which the temporal and/or spatial variance in number of offspring may also have an important selective effect, which swamps mere numbers in any given generation. To take a simple example from Brandon (1990), if organism a has 2 offspring each year, and organism b has 3 offspring in odd-numbered years and 1 in even-numbered ones, then, ceteris paribus, after one generation, b will be fitter than a; after two generations, a will be fitter than b, after which b will again be fitter; but by nine generations, there will be 512 descendants of a and 243 descendants of b. The same holds if a and b are populations, and/or b’s offspring vary between 1 and 3, depending on location instead of period. If the question were simply which generation’s numbers give the correct measure of fitness values, the correct answer would be, “It depends.” But our question is about the meaning of fitness, not its measurement. To accommodate these biological cases, we need to qualify the “definition” to include the effects of variance:

x is fitter than y = probably x will have more offspring than y, unless their average numbers of offspring are equal and the temporal and/or spatial variance in y’s offspring numbers is greater than the variance in x’s, or the average numbers of x’s offspring are lower than y’s, but the difference in offspring variance is large enough to counterbalance y’s greater number of offspring.

It is also the case that in some biologically actual circumstances—for example, in circumstances in which mean fitnesses are low—decreased variance is sometimes selected for (see Ekbohm, Fagerstrom, and Agren 1980). Indeed, as Beatty and Finsen (1989) have noted, sometimes the “skew,” or geometric means, of offspring numbers and variance may effect selection. Thus, the “definition” of fitness must take these conditions into account on pain of turning the PNSs into falsehoods. One simple way to protect the PNSs from falsehood is to add a ceteris paribus clause to the definition. But the question must then be raised of how many different exceptions to the original definiens need to be accommodated.
If the circumstances under which greater offspring numbers do not make for greater fitness are indefinitely many, then this “definition” will be unsatisfactory. Some proponents of the propensity definition recognize this difficulty and are prepared to accept that at most, a “schematic” definition can be provided. Thus, Brandon (1990, p. 20) writes,

We can . . . define the adaptedness [a synonym for expected fitness] of an organism O in an environment E as follows:

A*(O,E) = Σ P(Q_i^OE) Q_i^OE − f(E, σ²).

Here Q_i^OE are a range of possible offspring numbers in generation i; P(Q_i^OE) is the probabilistic propensity to leave Q_i^OE in generation i; and most important, f(E, σ²) is “some function of the variance in offspring numbers for a given type, σ², and of the pattern of variation” (ibid.). “Some function” here must be understood as “some function or other, we know not what in advance of examining the case.” Moreover, we will have to add to variance other factors that determine the function, such as Beatty and Finsen’s skew, or the conditions which Ekbohm, Fagerstrom, and Agren identify as making higher variance adaptive, and so on. Thus, to be correct, even as a schematic expression, the final term in Brandon’s definition will have to be expanded to f(E, σ², . . . ), where the ellipses indicate the additional statistical factors that sometimes combine with or cancel the variance to determine fitness levels. But how many such factors are there, and when do they play a nonzero role in fitness? The answer is that the number of such factors is probably indefinitely large, and the reason is given by a fact about natural selection recognized by Darwin and his successors. This fact about selection, which fates our “definition” to being either forever schematic or incomplete, is the “arms-race” strategic character of evolutionary interaction. Since every strategy for enhancing reproductive fitness (including how many offspring to have in a given environment) calls forth a counterstrategy among competing organisms (which may undercut the initial reproductive strategy), the number of conditions covered by our ceteris paribus clause, or equivalently, the number of places in the function f(E, σ², . . . ), is equal to the number of strategies and counterstrategies of reproduction available in an environment. That this number of strategies and counterstrategies may be indefinitely large forms a crucial component of chapter 4’s argument that there are no biological laws beyond the PNSs and their deductive consequences.
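Brandon’s schematic definition can be rendered as a minimal sketch in code. The particular penalty function below stands where Brandon’s unspecified f(E, σ², . . . ) stands, and is an assumption for illustration only; the text’s point is precisely that no single choice of f will serve for every selective scenario.

```python
def adaptedness(offspring_propensities, f):
    """Schematic A*(O,E): the propensity-weighted expected offspring
    number, minus some function f of the variance of the offspring
    distribution.  offspring_propensities maps each possible offspring
    number Q_i to its propensity P(Q_i)."""
    expected = sum(p * q for q, p in offspring_propensities.items())
    variance = sum(p * (q - expected) ** 2
                   for q, p in offspring_propensities.items())
    return expected - f(variance)

# Two hypothetical distributions with the same expected offspring
# number (2.0) but different variance.
steady = {2: 1.0}           # always two offspring
risky = {1: 0.5, 3: 0.5}    # one or three, equiprobably

def penalize_variance(var):
    # Placeholder for Brandon's f: here a penalty proportional to variance.
    return 0.1 * var
```

With this f, adaptedness(steady, penalize_variance) = 2.0 exceeds adaptedness(risky, penalize_variance) = 1.9; swap the penalty for a reward, as in the low-mean-fitness cases Ekbohm, Fagerstrom, and Agren describe, and the ranking reverses, which is why f must remain schematic.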
In each particular selective scenario, a different specification of Brandon’s definition, A*(O,E), figures in the antecedent of different versions of the PNSpop and PNSind. Properly restricted to the right function f(E, σ², . . . ) and the right set of statistical features of its reproductive rate for a given environment, these versions of the PNS will presumably each be a nomological generalization about natural selection for a given population in a given environment. And the set of these narrowly specific PNSs (each different in the subject matter and the functional form of its antecedent’s fitness measure) will disjunctively constitute a general PNS for populations and/or individuals. The notion that there is no single PNSpop or PNSind, but a family of them, each with a restricted range of application, will be attractive to those biologists uncomfortable with a single principle or law of natural selection, and to those philosophers of science who treat the theory of natural selection as a class of models (Beatty 1980; Lloyd 1993; Thompson 1988). But, one will want to ask,
does this set of generalizations with mathematically similar antecedents and identical consequents have something in common, which in turn explains and unifies them all? Or is each one an equally fundamental principle of the theory of natural selection? The question is obviously rhetorical. Of course, these restricted generalizations have something in common that needs to be explained. For each of the members of the set of functions [f1(E, σ², . . . ), f2(E, σ², . . . ), f3(E, σ², . . . ) . . . ] measures the same thing, comparative fitness, and identifies it as the cause of the probabilistic claim in each of their consequents. And what is comparative fitness, as opposed to its effects in reproduction which measure it? One possible answer is the following:

a is fitter than b in E = a’s traits result in its solving the design problems set by E more fully than b’s traits.

This formula (or any of its terminological equivalents) provides a definition of what we label “ecological fitness,” which supervenes on all those relations between an individual and its environment that contribute to the individual’s success. Fitness as design-problem solution is, however, famously unattractive to philosophers and biologists (see, for instance, Lewontin 1978). The problems vexing this definition include at least the following ones: (1) it is not obvious how to individuate and count distinct design problems; (2) nor is it clear how to measure the degree to which they are solved by individual organisms; (3) aggregating solutions into an overall level of fitness is difficult in the absence of a common unit to measure ecological fitness; (4) comparing conspecifics that solve different problems to differing extents is equally perplexing.
“x solves more design problems than y” is at least as recalcitrant to measurement as “x is fitter than y.” Besides the difficulties facing any attempt to operationalize the concept of “ecological fitness,” there is the objection to its suggestion of teleology in the notion of a “design problem,” and the definition’s consequent vulnerability to charges of Panglossian adaptationism. It is, apparently, cold philosophical comfort to defend the design-problem-solution definition of fitness by arguing that this litany of difficulties trades on the assimilation of the meaning of a term to its measurement, and fails to recognize the theoretical character of the concept of “fitness.” Objections to this definition are unlikely to be answered by pointing out that definitions have to stop somewhere, that the definition of a theoretical term must be distinct from the operational measure of the property it names, and that testability is not a matter of theory meeting data one proposition, still less, one term at a time. Or at least none of these considerations have convinced philosophers of biology to give up the project of defining fitness in terms of its effects.
fitness, entropy, and the second law of thermodynamics

Perhaps the most serious obstacle to accepting the ecological-fitness concept is that it is impossible to reconcile with the “central-tendencies” account of the claims of the theory of natural selection now so widely endorsed. For ecological fitness is a relationship between organisms taken two at a time, not a statistical property of populations. Thus, there is among exponents of the “central-tendencies” approach a strong incentive to deal with the problem of defining fitness by simply expunging the concept altogether from the theory of natural selection. No fitness, no fitness problems. This strategy is adopted explicitly by Matthen and Ariew (2002). But, as we shall soon see, expunging ecological fitness from the theory of natural selection makes the theory unrecognizable. This means that despite its measurement problems, the ecological-fitness concept, whether or not it must ultimately be understood in terms of the solution to design problems, turns out to be indispensable to the theory of natural selection. At least since the work of Peirce, philosophers have been trying to understand the claims of the theory of natural selection by treating it on analogy with the second law of thermodynamics. Matthen and Ariew (2002) write, for instance, “As Fisher kept emphasizing, it is statistical thermodynamics—not Newtonian dynamics—that provides the closest parallel in physics to the theory of natural selection” (p. 72). Philosophers seeking to treat the theory of natural selection as a claim about central tendencies exclusively have reason to pursue this similarity, for (1) both the PNSpop and the second law of thermodynamics have probabilistic consequents not open to interpretation as subjective degrees of belief or probabilistic propensities; and (2) the second law is a regularity about ensembles, not the individuals out of which they are composed.
We may state the second law of thermodynamics as follows:

2nd law. (x)(y)[x, y are states of a closed thermodynamic system and y is later than x → Probably (the entropy of y is greater than the entropy of x)].

The two PNSs have a probabilistic consequent isomorphic to the second law’s consequent:

→ Probably (x’s size is larger than y’s in E at some generation n′ later).

It is this similarity in probabilistic consequents that seems to have encouraged philosophers to treat the PNS as a claim about ensembles, like the second law,
and to treat fitness as a property of ensembles, on a par with the concept of “entropy.” But the trouble is that there are important disanalogies between the PNSpop and the second law of thermodynamics. In particular, the PNS is a claim about later demographic effects of earlier fitness differences between individuals or populations. But the second law of thermodynamics makes no causal claim: earlier entropy levels are lower than later entropy levels, but are not their causes. Thus, some advocates of the central-tendencies interpretation have sought to substitute a version of R. A. Fisher’s fundamental theorem of natural selection (FFT) for a principle such as the PNS:

fft. In a subdivided population, the rate of change in the fitness of the whole population is proportional to the variance in the fitness of the subpopulations.

Like the second law, the FFT makes no causal claim. It relates simultaneous values. And it is silent on selection at the level of the individual organism. On Matthen and Ariew’s view, the FFT “tells us nothing about the causes of [population] growth: it is a general truth about population growth regardless of how it is caused” (Matthen and Ariew 2002, p. 74). Indeed, selection is not a cause of population growth (or of the changes in other population characteristics) on this conception: it is merely “the mathematical aggregate of growth taking place at different rates” (ibid.). But the FFT is just that, a derived consequence of what Fisher recognized is the more fundamental truth, the PNS. The theorem states something that Darwin explicitly recognized to be a consequence of natural selection: the more variation in a heritable trait, the more rapidly it will evolve under natural selection. Darwin, however, treated this fact as a subordinate consequence, and when we consider the Darwinian assumptions about selection for ecological fitness, from which the FFT is derived, its derivative status becomes evident.
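The derivative character of the FFT does not make it false, and its content can be verified numerically in a toy model of a subdivided population (the subpopulation sizes and growth rates below are arbitrary assumptions). For subpopulations growing exponentially at fixed rates r_i, the rate of change of the population-weighted mean growth rate equals the population-weighted variance of the r_i, which is Fisher’s proportionality with the constant equal to one:

```python
import math

def mean_rate(sizes, rates):
    """Population-weighted mean growth rate ('mean fitness')."""
    total = sum(sizes)
    return sum(n * r for n, r in zip(sizes, rates)) / total

def rate_variance(sizes, rates):
    """Population-weighted variance in subpopulation growth rates."""
    total = sum(sizes)
    m = mean_rate(sizes, rates)
    return sum(n * (r - m) ** 2 for n, r in zip(sizes, rates)) / total

sizes = [100.0, 100.0, 100.0]   # three equal subpopulations (assumed)
rates = [0.1, 0.2, 0.4]         # fixed exponential growth rates (assumed)

# Advance the population a tiny time step and measure how fast the
# mean growth rate is changing.
dt = 1e-6
later = [n * math.exp(r * dt) for n, r in zip(sizes, rates)]
observed_change = (mean_rate(later, rates) - mean_rate(sizes, rates)) / dt
```

observed_change matches rate_variance(sizes, rates) to within the finite-difference error, and nothing in the calculation says what causes the differential growth, which is exactly the feature Matthen and Ariew stress.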
The very expression of the theorem makes clear its status as derivative from one or more fundamental, dare one say, postulates or axioms of the theory of natural selection, which, of course, requires that there exist some amount of natural selection whose rate can change. The existence of this process is, of course, vouchsafed by a principle like the PNS. The reasoning from the PNS to the FFT is fairly direct and intuitive. Depew and Weber describe it aptly:

Fisher is painting a picture in which natural selection speeds up as useable variation is fed into it. Moreover, he means to say that as natural selection acts on variation, it necessarily does so in such a way that it increases the fitness of the population from what it was an instant before
the integration of the action of selection on the genetic array. The system moves naturally towards a state of maximal fitness, even if it never quite arrives because as it approaches maximal fitness, it runs, by definition, out of fuel. (Depew and Weber 1997, p. 251)

Besides the independent prior assumptions about natural selection in general required to derive the theorem, there are other reasons to forgo it as a characterization of natural selection’s most basic properties. First, there are circumstances in which selection operates, but there is no response to selection of the sort required by the theorem. As Sober’s treatment of the FFT and its implications shows, “Selection may increase or decrease the value of w (average fitness). Once frequency dependent selection is taken into account no general statement can be made as to whether selection tends to improve” (Sober 1984, p. 182). As in the subversion of a population of altruists by a selfish organism, there are conditions of frequency dependence in which natural selection can lower average fitness of a population. It is easy to reconcile this and other such cases with the FFT by bringing them under a “change-of-environments” clause; after all, an individual’s or group’s environment does include the population of its conspecifics. This is what a more basic PNS tells us about environments. The important conclusion is that Fisher’s theorem cannot serve as the touchstone of a theory of natural selection, because the FFT’s truth is a qualified consequence of more fundamental truths about natural selection. However, whether we choose the FFT or the PNS as the central nomological generalization of the theory of natural selection does not in the end matter in the present connection.
The trouble with the analogy between the PNSpop or the FFT and the second law of thermodynamics is that the features that make for the emergent mysteries of the second law are largely absent from the foundations of the theory of natural selection. Once we understand the differences between entropy and fitness, the temptation to treat the theory of natural selection as a claim solely about ensembles disappears. The emergent character of the second law is generated by the fact that entropy is a property not of the individual components of an ensemble but of the ensemble as a whole. The standard explanation of how entropy emerges from the behavior of the members of the ensemble remains highly problematical. To see why, consider the simplest case in which a thermodynamic system—say, a quantity of a gas in a container—is treated as an ensemble of particles moving in accordance with Newtonian dynamical laws. Following Albert (2000, pp. 43ff.), call a specification of which particles are where in the container, and what their specific momenta are, an “arrangement,” and a specification of how many particles are within a given region of the container and a given range of
momenta a “distribution.” The entropy of the system depends on the distribution of the particles, not the particular arrangement of them. Any one distribution is, of course, compatible with more than one arrangement of particles. The particles change position and momenta in accordance with deterministic Newtonian laws, and the number of physically possible arrangements of particles that realize any one distribution increases as the particles spread out in space and in momentum values. The increase in entropy the second law reports results from this fact about arrangements and distributions: in the long run, later distributions supervene on a larger number of arrangements than earlier ones do. The larger the number of arrangements for a given distribution, the higher the entropy. Entropy is thus accounted for in terms of Newtonian concepts of position and momentum via the concepts of “distribution” and “arrangement.” The flaw in this story is that we have no right to hold that the number of arrangements at the earlier time is less than the number of arrangements at the later time. Since Newtonian momentum and space-time location can take on a continuum of values, the number of arrangements compatible with (almost) any single distribution is infinite, and there is no unique way to measure the size of these infinities. Within any given region of space and range of momentum values for any one particle, the position and momentum of the particle can take up a continuum of values. If the earlier, “smaller” number of arrangements compatible with a given distribution is infinite in number, and the later, larger “number” of arrangements is also infinite in number, we cannot appeal to differences in the number of arrangements on which given distributions supervene to explain the increase in entropy reported in the second law of thermodynamics. Thus, both entropy as a property and the second law as a regularity are said to be irreducible ensemble-level matters. 
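The arrangement/distribution machinery can be made concrete in a discretized toy model, where, unlike the continuum case just described, the counting is well defined (the two-cell, four-particle setup is an assumption for illustration): a distribution is a profile of occupation numbers, and its Boltzmann entropy is the log of the number of arrangements, that is, assignments of labeled particles to cells, that realize it.

```python
from math import comb, log

def arrangements(distribution):
    """Number of arrangements (assignments of distinguishable particles
    to cells) realizing a given distribution (occupation number per
    cell): the multinomial coefficient."""
    remaining = sum(distribution)
    count = 1
    for occupancy in distribution:
        count *= comb(remaining, occupancy)
        remaining -= occupancy
    return count

def entropy(distribution):
    """Boltzmann entropy: log of the number of realizing arrangements."""
    return log(arrangements(distribution))

# Four particles, two cells.  The spread-out distribution is realized
# by more arrangements than the bunched one, so it has higher entropy;
# no property of any single particle corresponds to it.
bunched = (4, 0)
spread = (2, 2)
```

Here arrangements(bunched) is 1 while arrangements(spread) is 6; with finitely many cells the comparison is unproblematic, and the text’s point is that it is the continuum of Newtonian positions and momenta, not the probabilistic form of the law, that blocks this bookkeeping in the real case.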
But the theory of natural selection is not vexed by the problems that bedevil a reduction of thermodynamic properties to Newtonian dynamics, that make entropy an emergent property of an ensemble, and that prevent us from turning the schematic derivation of the second law into a complete explanation. In evolutionary theory, all we need in order to understand where the fitness coefficients of populations come from is the “concession” that there is such a thing as comparative differences in (ecological) fitness between pairs of individual organisms, and that these differences can be aggregated into fitness differences between populations. Recall the two versions of the PNS, the PNSpop and the PNSind, introduced in the first section above. Treat fitness as it figures in both PNSs as a matter of solving design problems (measured by some demographic statistic). Then the truth of the PNSpop follows from the truth of the PNSind by simple arithmetical aggregation. There is no difficulty explaining where “comparative fitness”
Chapter Five
in the PNSpop “comes from”: it’s just the average, over the compared populations, of the comparative fitnesses of the individual members of the populations. There is nothing at the ensemble level here emergent from the properties at the individual level the way there is in thermodynamics. There is no new property of the whole ensemble—like entropy—utterly dissimilar from any properties at the level of the components of the ensemble. There is just the average of actual comparative-fitness relations among pairs of organisms. It is true that measuring comparative fitness as it figures in the PNSpop and the PNSind is a matter that moves in the opposite direction from the direction of explanation as it obtains between these principles. That is, to get a quantitative handle on the degree to which one organism solves the design problems set by the environment more fully than another, one must aggregate over like creatures, whence the attractions of the probabilistic propensity “definition”—or rather, one or another of its disjuncts—to measure values of ecological fitness. When this requires actually collecting data about reproduction rates, variances in them, skews, and so on over multiple generations, independent evidence for the explanatory role of the PNSind is rendered invisible. As noted, the PNSs all do share with the second law of thermodynamics a probabilistic operator in their consequents. But this probabilistic operator is not the feature of the second law that obscures its foundations in Newtonian dynamics. The distinctive problem of the second law is that we would like to be able to say that states of higher entropy of an ensemble depend on distributions which are realized by a large number of arrangements of its components. We cannot say this, because every distribution includes an infinite number of physically possible arrangements, and there is no nonarbitrary measure on these infinities that will enable us to compare their size. 
This problem for thermodynamics, of identifying a measure on infinite sets of different cardinalities, simply does not occur in the theory of natural selection. The fitness of an ensemble is just nothing like the entropy of an ensemble, just because unlike entropy, fitness is a calculable value of the properties of the components of the ensemble. There is a parallel between the PNSpop and the second law. But it does not substantiate the conclusion that the former is, like the latter, a law about irreducible ensembles. The significant parallel between the PNSs and the second law is to be found in the probabilistic operators in their consequents. It is this probability concept that makes ecological fitness indispensable to the theory of natural selection’s claims about ensembles and populations, as we now see.
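The aggregation claim above, that the PNSpop's comparative fitness is nothing over and above the average of individual comparative fitnesses, can be sketched in a few lines. All fitness values here are hypothetical, chosen only to make the arithmetic exact:

```python
# Hypothetical individual ecological fitness values for two populations.
pop_a = [1.0, 0.75, 1.25, 1.0]
pop_b = [0.5, 0.375, 0.25, 0.5]

# The PNSpop-level comparison is just the arithmetical average of the
# individual-level values: no emergent ensemble property, unlike entropy.
mean_a = sum(pop_a) / len(pop_a)
mean_b = sum(pop_b) / len(pop_b)
print(mean_a, mean_b)  # prints: 1.0 0.40625
```

Nothing in the population-level fact outruns the individual-level facts it is computed from; that is the disanalogy with entropy.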
equiprobability, drift, and ecological fitness The probabilistic character of the consequents of the PNSs is what makes room for drift. If the long-run relative frequency of some event is greater than .5,
Central Tendencies and Individual Organisms
then this frequency is compatible with any actual finite frequency. When finite actual frequencies approach the long-run relative frequencies cited in the PNSs, the principles explain these finite actual frequencies. When the finite actual frequencies do not approach the long-run frequencies, the alternative explanations are (1) the PNS is false or (2) the divergence between the long-run and the actual frequencies is a matter of drift. Exclude the first alternative. As we shall see, drift plays its role in natural selection only against the background of disaggregated pair-wise ecological fitness differences among individual biological entities that cause differential reproduction. Despite the heavy weather made of it in the philosophy of biology, drift is perfectly easy to understand. Consider everyone’s favorite example: coin tossing. A fair coin has a long-run relative frequency of coming up heads equal to .5. When tossed 1000 times in batches of ten, it comes up heads, say, a total of 491 times, but in some of the batches, it will often come up heads 6, 7, or even 8 times. The (weak) law of large numbers tells us that if the long-run relative frequency of heads is .5, then the subjective probability of the actual frequencies approaching .5 converges on 1.0 as the number of coin flips increases. By contraposition, as the actual number of fair coin flips decreases, the probability that the actual frequency of heads equals .5 will decrease. It is fallacious to infer from the law of large numbers, a theorem of the calculus of probabilities, that actual frequencies approach the long-run relative frequency as the number of tosses grows larger. It is equally fallacious to infer that the failure of actual frequencies to approach the long-run frequencies shows that the coin is not fair. 
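The coin-tossing illustration is easy to reproduce. In this sketch (the seed and the resulting counts are illustrative, not the text's hypothetical 491-head run), the whole-run frequency sits near .5 while individual batches of ten swing widely:

```python
import random

# Fair coin, 1000 tosses in batches of ten; seed fixed so the run repeats.
rng = random.Random(0)
flips = [rng.random() < 0.5 for _ in range(1000)]
batches = [sum(flips[i:i + 10]) for i in range(0, 1000, 10)]

total_heads = sum(flips)
# Batches with 7 or more (or 3 or fewer) heads: common in small samples
# even though the aggregate frequency stays close to .5.
extreme_batches = sum(1 for b in batches if b >= 7 or b <= 3)
print(total_heads, extreme_batches)
```

The same point in both directions: convergence in the large sample is no deduction from the law of large numbers, and divergence in the small batches is no evidence of bias.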
The causal explanation of the divergence of a finite sample from the long-run relative frequency of coin tosses is to be sought in the fact that the initial conditions of the actual coin tosses were not representative of the set of initial conditions that give rise to the long run. To see this, imagine a spring-loaded apparatus for tossing quarter-sized disks, and a single, physically bilaterally symmetrical, quarter-sized disk marked H and T, such that whenever the disk sits in the apparatus with the H side up and the spring is released, the disk is shot out on a single parabolic trajectory with three rotations of the disk that always result in its landing H side up (and vice versa, if it starts out T side up in the apparatus). There is nothing counterfactual about this physical system. It is a deterministic one in which all the actual sequences of H flips come up 100% H, and similarly for T flips; obviously, so long as the spring retains its elasticity, the disks are not worn, and so on, the long-run relative frequency, P(the disk comes up H on landing/the disk is H side up in apparatus) = 1. The apparatus is deterministic with the qualification that the actual world, which is quantum-indeterministic in its fundamental laws of working, asymptotically approaches Newtonian determinism for objects as large as our coin-tossing device. This is
owing to the fact that the probabilities of violation of Newton’s laws by macroscopic objects are so low that there is not a single actual violation in the amount of time taken up by the whole history of the actual world. Now, consider a real quarter, and a real thumb-and-forefinger coin-flipping “device.” This physical system does not differ from our machine-and-disk system in any physically relevant way. Accordingly, it must also be a deterministic system. But when the quarter is flipped head side up, say, 100 times, it lands heads 47 times and tails 53 times; and when it is flipped 1000 times, it comes up heads 502 times, and so on. We infer that the long-run relative frequency P(the quarter comes up H on landing/the quarter is H side up on the forefinger) = .5, and we know perfectly well where this probability “comes from.” It is the result of the fact that the initial conditions of the coin-flipping which deterministically bring about an outcome of H or T in each case are distributed into two sets. One of these sets of initial conditions together with the relevant Newtonian laws determines a set of paths from thumb to tabletop which results in heads, while the other set of initial conditions together with the same set of laws determines paths to the tabletop resulting in tails. If there were 47 heads out of 100 tosses, then there were 47 initial conditions in the former set. If it is a fact that as the number of tosses increases, the number of initial conditions in the heads-outcome set approaches 50%, then the number of heads outcomes approaches 50%. When the ratio of heads to tails varies from exactly 50:50, we can be sure that the cause is that the distribution of initial conditions is not 50:50. Thus, when an actual series results in 50% heads, the explanation is that 50% of the initial conditions were of the heads-resulting sort, and when the actual series is not 50:50, the explanation is that the initial conditions were not distributed 50:50. 
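The partition of initial conditions into heads-resulting and tails-resulting sets can be caricatured in code. Everything here is a toy: the "dynamics" mapping an initial condition to an outcome is invented, but it shows how an observed H:T ratio simply reports how the realized initial conditions happened to be partitioned:

```python
# Toy "dynamics" (wholly invented, standing in for the Newtonian laws):
# each initial condition deterministically fixes the outcome; even rotation
# counts land heads, odd counts land tails.
def outcome(initial_spin):
    return "H" if int(initial_spin) % 2 == 0 else "T"

# Ten realized initial conditions (hypothetical values).
initial_conditions = [3.1, 4.2, 6.8, 7.5, 2.0, 9.9, 8.4, 5.5, 1.7, 0.3]
results = [outcome(ic) for ic in initial_conditions]

# The observed ratio just reports the partition of the realized conditions.
heads = results.count("H")
print(heads, len(results) - heads)  # prints: 5 5
```

Had the realized initial conditions fallen 47:53 across the two sets, the outcomes would have fallen 47:53, and that would be the whole deterministic explanation.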
Compare the case of a set of 100 uranium atoms, each with a 50% chance of emitting an alpha particle in a period of time t. If only 47 atoms emit alpha particles, there is no reason to assign a cause in the initial conditions realized by those 47 uranium atoms. For alpha-particle decay is a fundamentally indeterministic process. The initial conditions of those 47 atoms do not differ from the initial conditions of the 53 atoms that did not emit alpha particles in the time period in question. And there is no explanation of why 47 of the atoms emitted and 53 of them did not emit alpha particles. Suppose we have evidence that the set of initial conditions of a real series of coin flips is divisible into two equal sets—one of which results in H and the other T. This evidence will consist in the bilateral symmetry of the coin, the inability of the flipper to control initial conditions very accurately, and so on. And suppose that the series of flips results in 50 H and 50 T. Well, then, the explanation is the equal size of the two sets of realized initial conditions. Suppose
that among the set, however, the tosses 20 through 23 yielded 4 consecutive heads. This is an improbable event, P(H,H,H,H) = .0625, and not explainable by appeal to the equal distribution of initial conditions into H-resulting and T-resulting sets. It is explained by showing how the initial conditions in tosses 20 through 23 together with Newton’s laws resulted in Hs. It is certainly true that in the long run, when the initial conditions are equally distributed between heads-resulting and tails-resulting initial conditions, 4 consecutive heads come up 6.25% of the time. But this is either no explanation of why 4 heads came up when they did on tosses 20 through 23; or only a small part of the explanation; or an explanation of something else (namely that 6.25% of large numbers of fair coin tosses result in 4 consecutive heads); or an explanation that satisfies very unstringent standards on explanatory adequacy. By contrast, in alpha-particle emission among uranium atoms, that 4 contiguous atoms emitted alpha particles in the same period when each had only a 50% probability of doing so is maximally explained by the calculation that there was an objective and not further explainable 6.25% chance of its happening to every 4 contiguous uranium atoms. When we are presented with various actual sequences of Hs and Ts, we frame explanations of them that vary in the stringency of the conditions on explanatory adequacy they are expected to meet as a function of our interest in particular series of outcomes. Usually, our interests in the details are so weak that we are satisfied with an explanation for why a particular series of Hs and Ts approaches a 50:50 ratio which appeals to a division of initial conditions into sets whose sizes approach 50:50. 
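The 6.25% figure is the same simple product in both cases: four independent events, each of probability .5, all coming out the designated way, whether heads on tosses 20 through 23 or alpha emission by each of 4 contiguous atoms in the period.

```python
# Probability that four specified independent trials, each with chance .5,
# all come out the designated way.
p = 0.5 ** 4
print(p)  # prints: 0.0625
```

What differs between the two cases is not the arithmetic but its explanatory standing: for the uranium atoms the chance is objective and not further explainable, while for the deterministic coin it is at best a placeholder for the facts about initial conditions.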
The role in the explanans of the premise that the initial conditions of the coin tosses are equally divided between those that result in heads and those that result in tails is just a special case of the appeal to randomness in an experimental treatment. The empirical generalization which explains why the coin-tossing ratio approaches 50:50 tells us that if a random trial is repeated over and over, independently and under conditions otherwise identical, the fraction of trials that result in a given outcome converges to a limit as the number of trials grows without bound. Two things to note. First, when, as in the case of 4 consecutive heads, the fraction of trials does not converge, it follows from the empirical generalization mentioned above that the trials are not random or not independent or conditions have changed. And these facts must take part in the explanation of the 4 consecutive heads. More important, the explanation of why large numbers of tosses of fair coins approach 50:50 relies on the randomness of the trials. What does randomness consist in when it comes to coin-flipping? Randomness consists in each of the physically possible initial conditions of a coin-flipping system being equiprobable (whence the equality of the number of initial
conditions resulting in heads and in tails). Since coin-flipping is a deterministic affair, the source of the equiprobable randomness cannot be anything like the probabilistic propensities resulting from quantum processes. And while it may be reasonable, ceteris paribus, to adopt subjective probabilities or betting odds that are the same for all possible initial conditions of coin-flipping, the equal distribution of all physically possible heads-causing and tails-causing initial conditions does not turn on anyone’s epistemic states. It seems to be a fact about the world independent of subjective probabilities and betting odds that in the long run, the physically possible initial conditions of fair coin-tossing are equiprobable. Here (unlike the entropy-fitness disanalogy) we do have the same problem that vexes the long-run relative frequency probabilistic operator in the consequent of the second law of thermodynamics: the claim that probably, entropy will increase. As with fair coin-flipping, we need to assume that all the actual dynamic states of the constituents of the ensemble are distributed equally into all the possible dynamic states. But no one in physics or its philosophy, from Gibbs to Sklar, has been able to ground the assumption of equiprobability to general satisfaction. The situation is no different in coin-flipping. The claim that all the possible initial conditions are equiprobable might well be called a metaphysical commitment. What is the bearing of this discussion of coin-flipping on the PNSs? Fitness differences are much more like coin biases than they are like differences in alpha-particle emission. Suppose we have a heads-biased coin, one biased because it is asymmetrical in shape, density, magnetic charge, and so on. This coin comes up heads with a long-run relative frequency of .7, when flipped often enough by a given thumb-and-forefinger apparatus on random independent trials—that is, when the initial conditions of flipping are equiprobable. 
Evolutionary fitness differences have the same consequences as coin biases. If an organism of type a has a fitness coefficient of 1 and an organism of type b has a fitness coefficient of .4285, then, as a matter of long-run relative frequency, the a type will have 7 offspring to the b type’s 3 offspring, just as a coin biased .7 to heads will in the long run come up 7 heads for every 3 tails. Assuming that selection is a deterministic process that differs only by degree from coin-flipping, fitness differences will have results of the same character as tossing biased coins has. It will be an empirical fact that when initial conditions are random and trials are independent, actual frequencies of .7-biased coin flips approach the long-run relative frequency of 7:3 as the number of tosses increases. Similarly, actual numbers of offspring of organisms whose fitness ratios are 7:3 will approach the long-run relative frequency of 7:3 in offspring numbers as they increase in number. In both cases, divergence from the 7:3 ratio will be deemed to be drift, in retrospect at least, if the divergence declines as numbers of tosses or generations increases. And each divergence will be in principle explainable deterministically by identifying its initial conditions. The explanation of the divergence will presumably show that the divergence does not disconfirm the long-run relative-frequency hypothesis, as the initial conditions in the divergence were rare, improbable, unrepresentative of the whole population of initial conditions. In practice, of course, these initial conditions are not in fact epistemically accessible either before or after the events in the divergent series (this is what makes coin-flipping a useful device for gambling). How do we decide whether a divergence from a long-run relative-frequency prediction about fitness differences is a matter of drift, a disconfirmation of the hypothesis of natural selection, or a reflection of a mismeasurement of fitness differences to begin with? Suppose we measure the fitness differences between population a and population b to be in the ratio of 7:3, and suppose further that in some generation, the actual offspring ratio is 5:5. There are four alternatives: (1) the fitness measure of 7:3 is correct, but there was drift—that is, the initial conditions at this generation are unrepresentative of those which obtain in all relevant generations; (2) the fitness measure of 7:3 was incorrect and there was no drift; (3) there was both drift and a wrong fitness measure; or (4) the PNS is disconfirmed. How do we discriminate among the first three of these four alternatives? The answer is critical for seeing the role of ecological fitness in the theory of natural selection. In the absence of information about the initial conditions of the divergence, there is only one way empirically to choose among the first three alternatives. This way requires access to ecological fitness differences. This access we have, at least in principle, when we make comparisons between the degree to which compared individuals solve specified design problems that biologists identify. 
These comparisons give the independent empirical content to the notion of “ecological fitness” while allowing for it to be (fallibly) measured by probabilistic propensities to leave offspring. For example, we can tell that white-coated Arctic prey are fitter than their dark-coated competitors, since they have solved a pressing design problem better. We can make this fitness judgment without counting offspring, though barring drift we expect such head counts to measure the ecological fitness difference instantiated. If the theory of natural selection adverts to ecological fitness differences, it has the resources, at least in principle, to decide whether the divergence from predicted long-run relative frequencies, especially where small populations are concerned, is a matter of drift or selection, that is, whether demographic changes stem from ecological fitness differences or the unrepresentativeness of the initial conditions of individual births, deaths, and reproductions. The problem of distinguishing drift from selection in ensembles—large populations—has the same character, and is in principle susceptible to the same
solution. We can make this distinction in ensembles if we accept that there is such a thing as ecological fitness differences; if we have access, at least in principle, to the initial conditions of births, deaths, and reproductions, taken one at a time; and if we accept that these individual differences aggregate into ensemble differences. That the solution is often available only in principle, and not to be obtained in practice, is reflected in our willingness to be satisfied by explanations that pass only the lowest of stringency tests. But at least in principle, in these cases there must be a causal explanation of the individual fitness differences; for without it we cannot distinguish drift from selection among ensembles, and the combination of both (which always obtains, since populations are not infinite and no actual run is a long run) from the falsity of the theory of natural selection altogether. Because there is always some drift, there is in the end no substitute for ecological fitness and no way to dispense with its services to the theory of natural selection. And since ecological fitness is ultimately a relationship between organisms taken two at a time, the theory is as much a set of claims about pairs of individuals as it is about large ensembles of them. Moreover, since fitness is ecological, it must be distinguished from “probabilistic propensities” or “expected reproduction rates.” This result, thankfully, frees us to treat selection as a contingent causal process in which individual fitness differences are the causes and subsequent population differences are the effects. 
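The analogy between fitness ratios and coin biases lends itself to a simulation sketch. All numbers here are illustrative (the seed, the cohort sizes, and the .7 bias standing in for the 7:3 fitness ratio): small cohorts can wander far from the long-run frequency, which is read retrospectively as drift, while large cohorts approach it.

```python
import random

rng = random.Random(42)  # fixed seed so the illustration is repeatable

def share_of_type_a(n_births):
    """Fraction of n_births that are type a; each birth is a trial with
    probability .7 of favoring type a (the coin-bias stand-in for fitness)."""
    return sum(rng.random() < 0.7 for _ in range(n_births)) / n_births

small_cohort = share_of_type_a(10)       # may sit far from 0.7
large_cohort = share_of_type_a(100_000)  # hugs the long-run frequency
print(small_cohort, large_cohort)
```

The simulation also marks the limit of the analogy: in it the .7 is stipulated, whereas in biology the bias must be independently certifiable as an ecological fitness difference if drift is to be told apart from mismeasurement.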
Biologists and philosophers who seek an understanding of the theory of natural selection and its application to the natural history of this planet require a concept of “ecological fitness.” If the best way to define this term is by way of the notion of overall design-problem solution, then biologists and philosophers will have to decide if they can live with such a definition, despite its teleological suggestion and its measurement difficulties. Leaving this matter open, we need at least to see that attempts to define fitness as a probabilistic propensity are unavailing for biological reasons, and attempts to treat the concept as a property of ensembles, along the lines of “entropy,” obscure fundamental differences between fitness and entropy. And finally, the evolutionary contrast between selection and random drift makes indispensable a causal concept of comparative ecological fitness differences between pairs of organisms. This is enough to undermine the central-tendencies approach, which treats natural selection as a holistic process that cannot be grounded in the fitness of individuals. But it is only half of what the reductionist needs to dissolve the untenable dualism of physicalism and antireductionism. The next chapter takes us the rest of the way, grounding natural selection in a relationship between macromolecules recognizable in physical science.
6
• • • • • •
Making Natural Selection Safe for Reductionists

Ever since chapters 1 and 2, it has been clear that the viability of reductionism as a research program turns on the status and the foundations of the theory of natural selection. The reductionist cannot give a coherent account either of nonmolecular biology or, for that matter, molecular biology without embracing the process of natural selection as a fundamental explanatory assumption. In fact, if the analysis of chapter 4 is correct, all explanations in biology and indeed the appropriateness of much of the descriptive vocabulary and taxonomy of biology requires that the principle of natural selection (PNS) exemplify a law—indeed, the only fundamental law in the discipline. But these considerations commit the reductionist to providing an account of the PNS or the PNSs that reveals their foundations in molecular biology along with the foundations of the rest of biology. The obligation to provide such an account in order to substantiate reductionism is matched by a parallel obligation to draw the force of the strongest argument against reductionism, the argument from the alleged autonomy of ultimate (that is, evolutionary) explanations. Chapter 5 was devoted to accomplishing a substantial measure of these two tasks. There I dealt with the holism of the central-tendencies approach to evolutionary theory, which treats the PNS as a process operating only among populations. On this view, natural selection is no more reducible to facts about individual organisms than the second law of thermodynamics is
a claim about the individual particles out of which a gas is composed. Having shown this view to be an inadequate interpretation of evolutionary theory, I now need to ground the PNS as a claim about individuals in even more fundamental features of physical science. Otherwise, the indispensability of natural selection everywhere in biology, including in molecular biology, which I acknowledge, will ensure its autonomy from physical science and its irreducibility. Even if everything else in biology can be explained by appeal to the behavior of macromolecules, if the behavior of these latter is a matter of natural selection irreducible to processes recognizable in physical science, the reduction of biology will not dispose of the untenable dualism, which I described in the introduction, between biology’s acceptance of physicalism and its apparent autonomy. In this chapter I address that untenable dualism directly, and try to show how the reductionist can learn to stop worrying and love Darwinism as a purely physical process. I begin by reviewing in greater detail the argument given briefly in the introduction, that the ruling orthodoxy of physicalism and antireductionism is untenable. I then show how the reductionist can assimilate the theory of natural selection to an unproblematical place in physical science.
physicalism and natural selection
Physicalism is roughly the thesis that the physical facts fix all the facts (the slogan echoes Hellman and Thompson [1975, p. 552; 1977, p. 310]). We can be a bit more specific: physicalism has two components: (1) ontological physicalism, in which every (concrete) thing is physical; and (2) physical determination, in which the current state of things is determined by the current and/or past state of physical things. The physical determination thesis cannot simply be the claim that the physical facts fix all the facts by fiat. The fixing must reveal the physical character of the fixed facts (as, for example, chemical facts about the elements are fixed by physical facts about their atomic, that is, physical, structure). Otherwise, physical determination is compatible with the thesis that there are nonphysical facts (that is, biological, psychological, social facts) which are fixed by the physical facts, but which are distinct from them and not explained by them. This would be a physicalism even Cartesians can embrace. Some physicalists once hoped to exclude such a ploy by appeal to economy and simplicity: “In the absence of positive arguments for extra entities, Occam’s razor (sound scientific procedure) will dictate commitment to the sparser ontology” (Hellman and Thompson 1975, p. 561). Such appeals are too general, too vague, and too controversial to do the work physicalism requires. Physicalism must show how at least in principle facts are fixed, by showing how the kinds they instantiate and the laws that explain them are or can be physically fixed. Physicalism enjoys near universal acceptance in the philosophy of biology.
But despite this agreement, philosophers of biology are divided about whether biological properties and processes are autonomous from physical ones. Most philosophers of biology hold that biology is autonomous from physical science, while a minority deny this claim. Both parties to this dispute face difficulties, however. The antireductionist claim that biology is autonomous from physical science requires that its general theories (if any) not be, at the very least, reducible to those of molecular biology. This denial has an epistemic and a metaphysical version. The former is not controversial even among reductionists; it is the latter, ontological version of the claim, however, that antireductionists are committed to. Epistemic antireductionism begins with reasonable assumptions about the presuppositions, interests, cognitive characteristics, and background knowledge of informed inquirers, in this case biologists. It holds that in light of these assumptions, for these inquirers’ questions, nonmolecular, biological answers are adequately explanatory, and need no completion or correction by information from molecular biology. But this claim is too weak to express the thesis in dispute between reductionists and antireductionists. The reductionist can embrace this thesis and go on to observe that the absence of molecular details from many biological explanations simply reflects temporary or merely anthropocentric limitations on biological inquiry. Kitcher (1984, p. 348) expresses the reductionist’s challenge thus: “There is a natural reductionist response. . . . 
After all, even if we became lost in the molecular details, beings who are cognitively more powerful than we could surely recognize the explanatory force of the envisaged molecular derivation.” The antireductionist replies that the autonomy of the biological from the molecular reflects the existence of biological natural kinds, and generalizations about these kinds that obtain independent of our knowledge or beliefs about them. Again, Kitcher (1984, p. 350) voices the antireductionist’s claim: “This response misses a crucial point. The molecular derivation forfeits something important. . . . The molecular account objectively fails to explain because it cannot bring out that feature of the situation which is highlighted in the [biological] cytological story.” Similar arguments with different examples are advanced in Kitcher 1999. Sober (1984, section 4.3, pp. 128 and 130, for example) endorses the ontological version of antireductionism on the ground that a physical approach to biological phenomena would miss generalizations, presumably generalizations that obtain independent of us. Sober (1993, p. 78) writes, “Fisher’s generalization about natural selection cannot be reduced to physical facts about living things precisely because fitness supervenes on these physical facts.” (For a brief introduction to the concept of “supervenience,” see chapter 1, note 2.) The dispute is thus not epistemic but ontological: as Kitcher writes, “Antireductionism construes the current division of biology not simply as a temporary feature of our science stemming from our cognitive imperfections but as the reflection of levels of organization in nature.” It is this metaphysical thesis, that there are facts, kinds, and generalizations that a molecular biological approach would miss, which is in dispute between reductionists and antireductionists (Kitcher 1984, p. 350). As we shall see below, the ontological character of Kitcher’s antireductionism is also reflected in a commitment to “downward causation” from the biological to the chemical. Alternatively, as noted way back in chapter 1, the disagreement between reductionists and antireductionists can be drawn in terms of a distinction between explanations instead of one between epistemic and ontological versions of antireductionism by adapting Railton’s (1981) notion of an “ideal explanatory text.” In chapter 1 I quoted Railton’s observation that the “full blown causal account would extend, via various relations of reduction and supervenience, to all levels of analysis, i.e. the ideal text would be closed under relations of causal dependence, reduction, and supervenience. It would be the whole story concerning why the explanandum occurred, relative to a correct theory of the lawful dependencies of the world” (1981, p. 247). Exploiting this notion of an “ideal explanatory biological text,” the antireductionist claims that in biology, at least sometimes if not often such a text will not employ descriptions and generalizations about macromolecular processes; many such texts adverting only to nonmolecular biological considerations will be ideal. The reductionist denies this thesis. Salmon (1989, p. 161) observes that “the distinction between the ideal explanatory text and [less than complete] explanatory information can go a long way . . . in reconciling the views of the pragmatists [about explanation] and the realists,” or “objectivists,” as Salmon elsewhere calls them. 
As in chapter 1, we can help ourselves to Salmon’s approach here in order to avoid irrelevant debates about the nature of explanation. The antireductionists’ ontological thesis is difficult to reconcile with physicalism. In fact, it is, as I have said, an untenable dualism in the ruling orthodoxy of the philosophy of biology. If the direction of fact-fixing is exclusively from the physical to the biological, neither downward causation nor the causal-mereological independence of the biological from the physical seems possible. Perhaps the most powerful recent arguments for this claim are those advanced by Jaegwon Kim against arguments that purport to reconcile physicalism with antireductionism in psychological theory (Kim 1992, 1993, 1998). The arguments are easily adapted to physicalist antireductionism in biology and in fact are more forceful, as they are not vexed by problems of intentionality that bedevil the issue in the psychological case. Briefly, if the physical facts at time t1 fix the biological facts at t1 and the biological facts at that time fix the biological facts at t2, as the autonomy of biology would have it, they can only do so by fixing the physical facts at t2 (as physicalism requires). But the physical facts at t1 fix
Making Natural Selection Safe for Reductionists
the physical facts at t2. Accordingly, the biological facts at t1 either (1) overdetermine the occurrence of the biological facts at t2 or (2) provide an explanation of the biological facts at t2 that is incompatible with and competes with the explanation in terms of the physical facts at t1. Neither alternative is acceptable to the physicalist. Downward causation is similarly excluded: were the biological facts at t1 to cause or explain the physical facts at t2, they would compete with the explanation of these facts by appeal to the physical facts at t1 or overdetermine the occurrence of the physical facts at t2. Again, neither alternative is acceptable to the physicalist. On the other hand, despite its philosophical problems, antireductionism does appear to have Darwinism on its side. And this fact by itself may seem to many to tip the scales in favor of the autonomy of biology. To reprise the argument for autonomy from the introduction, biological explanations implicitly or explicitly invoke what Mayr (1982, pp. 67–69) has called “ultimate,” adaptational causes, by contrast with physical science’s “proximate,” mechanistic causes. Equally, natural selection is widely and rightly held to be responsible for the functional kinds in which biology trades. (The locus classicus of this view is, of course, Wright 1973.) The role of natural selection in the autonomy of the biological is not just a matter of ultimate explanation. As noted in chapter 4, the environment is blind to differing structures with similar effects when it selects (or better, filters) among hereditary variants. Thus, it often selects for equally effective but slightly different physical variants, resulting in a disjunction of physical structures on which functionally characterized systems supervene, but to which they are not exhaustively reducible. 
In light, then, of the literal truth of Dobzhansky’s dictum that “nothing in biology makes sense except in the light of evolution,” the autonomy of the discipline and its subject matter will turn on the reducibility or irreducibility of this theory to physical science. The theory of natural selection tells us that if there is phenotypic variation in hereditary traits, and these traits have differential fitness, then (probably) there is descent with modification, that is, evolution. Chapters 4 and 5 were devoted to arguing that among candidates for the most central and important nomological generalization at the core of this theory is the following PNS:

PNS: (x)(y)(E) [If x is fitter than y in environment E at generation n, then probably there is some future generation n′, after which x has more descendants than y.]

where x, y, and E range over reproducing systems and environments. For reasons that will be clear below, we do not specify further the range over which x, y, and E quantify. One may treat PNS as implicitly specifying its range.
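The probabilistic force of the PNS's consequent can be illustrated with a toy branching-process simulation. This is my own illustrative sketch, not anything from the text: the offspring rule, founder counts, and fitness values are all invented. Two reproducing types differ only in expected offspring number; the fitter type usually, but not always, ends up with more descendants, which is exactly the hedged "probably" in the PNS.

```python
import random

def next_generation(pop, fitness, rng):
    """Each individual leaves 2 offspring with probability fitness / 2,
    else 0, so the expected offspring number per individual is its fitness."""
    return 2 * sum(1 for _ in range(pop) if rng.random() < fitness / 2)

def compete(fit_x, fit_y, founders=10, generations=30, seed=0):
    """Run the x and y lineages side by side on a fixed seed and return
    their final population sizes (capped to keep runtimes bounded)."""
    rng = random.Random(seed)
    x = y = founders
    for _ in range(generations):
        x = min(next_generation(x, fit_x, rng), 10_000)
        y = min(next_generation(y, fit_y, rng), 10_000)
    return x, y

# x is fitter than y (1.2 vs. 1.0). The PNS says x *probably* ends up with
# more descendants; in most, but not necessarily all, runs it does.
wins = 0
for s in range(100):
    x, y = compete(1.2, 1.0, seed=s)
    wins += x > y
```

Because drift can drive even the fitter lineage extinct when founder numbers are small, `wins` falls short of 100 in principle; the law's consequent is probabilistic, not exceptionless.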
chapter six
As chapter 4 argued, the PNS or some more-complicated variant of it certainly can make some claim to being a law, according to the standard analysis of this notion (universality, qualitative predicates, support for counterfactuals, high degree of confirmation). The PNS has been traditionally identified as the core of the theory at least to the extent that conceptual connections between fitness and reproduction which threaten it with unfalsifiability are alleged to undermine Darwinism altogether. Whether the PNS, interpreted as in the last two chapters, is innocent of the charge of unfalsifiability is a matter that I will not treat further. Assuming that the PNS is a nomological generalization, our present problem is to elucidate the relationship between it and physical science. Three alternatives suggest themselves.

1. The PNS is a nonderived law about biological systems, and is emergent from purely physical processes. This alternative would vindicate the autonomy of all of biology, following Dobzhansky’s (1973) dictum, but leave biological phenomena physically unexplained and/or emergent. (By emergent I mean properties and laws whose role in explanations would compete with those that physical science advances or whose respective instantiation and operation would overdetermine the occurrence of events explained by explanations that advert only to physical properties and laws, in the sense described in Kim’s argument given above.)

2. The PNS is a derived law: it is derivable from some laws of physics and/or chemistry. This alternative would vindicate the reductionist vision of a hierarchy of scientific disciplines and theories, with physics at the foundation.

3. The PNS is a nonderived law about physical systems (including nonbiological ones), and from it the evolution of biological systems can be derived, so that the principle we recognize operating at the biological level is also a nonderived basic law of physical science.
Of course, specifying exactly what a “physical system” is and how it should be partitioned has long been a vexed question for physicalists. Before the eclipse of Newtonian mechanics, it might have been safe to say it is one whose states are completely exhausted by the position and momentum of each “elementary particle” in the system. Now it is recognized that physicalism’s content, that is, what is taken to be “elementary,” turns on developments in physics and is therefore hostage to them. However, since this is a problem for all physicalists, I note it here only to set it aside as irrelevant to the present debate, in which both parties take current physics to limn the limits of physicalism. Physicalists might wish to defend alternative 2, but for reasons that are detailed below, no such derivation is possible. This, one may suspect, is part of the
reason many physicalists hold that there are no laws in biology. After all, if there are no laws, then physicalism has no need to show a systematic relationship between biological theory and physical science. As noted in chapter 4, defending alternative 1 is an option so daunting to the physicalist antireductionist consensus in the philosophy of biology that instead of facing it, most either deny that there are any biological laws of the sort recognized in physical science (Mitchell 2000; Kitcher 1993), or redefine the notion of “law” so that mathematical truths, statements of local invariance, and relatively weak ceteris paribus generalizations will count as laws (see Lange 1995; Sober 1993; and Woodward 1997, for examples). Some also deny that the theory of natural selection has a distinctive nomological component at all. Rather, they treat the theory as a historical claim about the “tree of life” on the Earth. Thus, Sober writes: “The two main propositions in Darwin’s theory of evolution are both historical hypotheses. . . . The ideas that all life is related and that natural selection is the principal cause of life’s diversity are claims about a particular object (terrestrial life) and about how it came to exhibit its present characteristics” (Sober 1993, p. 7). Sober’s view is echoed by Kitcher (1993, p. 21) as well: “The main claim of the Origin of Species is that we can understand numerous biological phenomena in terms of Darwinian histories of the organism involved.” So, the physicalist antireductionist avoids the problem of explaining away emergent biological laws by denying there are any. The fundamental trouble with these approaches to biology in general and the theory of natural selection in particular is that they require us to surrender a fundamental commitment to the need for nomological force in scientific explanation without providing an alternative. 
Matters would be simpler if we could accept the nomological force of some principle of natural selection and reconcile it with physicalism. The obstacles facing alternatives 1 and 2 make 3 at least an alternative worth exploring. But for the reductionist it is more than just an alternative worth exploring. The reductionist is required to make a case for 3, if options 1 and 2 above are excluded. Otherwise, it will turn out that the molecular biology to which the reductionist wants to see the rest of biology reduced is just as autonomous from physical science as the antireductionist has always held it to be.
can the pns be a nonderived law of biology? That option 1 is not open to the reductionist should be perfectly obvious, but it would be well to make the reasons quite explicit. The PNS is not supposed to be a merely “local” truth, one that happened on the Earth over a period of about 3.5 billion years, owing to the operations of fundamental physical laws on the initial conditions which obtained here at or prior to life’s origin. The initial conditions
might well have been quite different, and therefore might have resulted in nothing properly identified as evolution, without this undermining the PNS. It is certainly a part of Darwin’s theory that all biological systems on the Earth share common descent, and that their particular traits are largely the result of adaptation through natural selection. But in addition to these claims, Darwinism includes the claim that natural selection is at least the most significant causal mechanism of evolutionary change instantiated by lineages on Earth’s tree of life. Darwinism must hold that the process could be instantiated elsewhere and “else when,” like any causal process. It follows that the principles describing this general process must have some nomological force, independent of the operation of physical laws on distinctive initial conditions. Of course, as with any law, the principle’s actual instantiation does require that the right initial conditions actually obtain. It is difficult for the physicalist to accept that the biologist’s PNS is a basic nonderived law of nature, whose truth is not contingent on more fundamental laws, in particular those of physical science. For one thing, the history of science has strongly encouraged the view that the existence of and generalizations about larger aggregations of matter, such as biological systems from the virus that causes AIDS to the blue whale (Balaenoptera musculus), can be explained by generalizations about the smaller aggregations of matter that compose them. Surely natural selection is a process that operates on aggregations of matter larger than their physically or chemically characterized component subsystems. Furthermore, neither chemistry nor physics seems to have any explanatory need of the PNS. Accordingly, if it were a nonderived basic law about biological systems, the biologist’s PNS would be quite different from all the other basic laws, which work together to explain physical processes. 
This is a conclusion holists, organicists, and others uncomfortable with physicalism would gladly embrace and one that physicalist antireductionists must wrestle with: if the physical facts fix all the facts, how is it that, once biological systems arise, a nonderived law not dependent on any particular nomological facts about chemistry and physics comes to be instantiated? As noted above, the physicalist’s thesis that the physical facts fix all the facts cannot merely be the claim that once physical laws are fixed, so are all the laws, if any, of other sciences. Physicalism must include the thesis that the obtaining of these particular laws, as opposed to other possible laws, can be shown in principle to be the causal and/or mereological outcome of the operation of physical laws. The physical facts cannot just fix all the facts by fiat; the fixing must reveal the physical character of the fixed facts. Otherwise, physicalism is compatible with epiphenomenalism about nonphysical facts (that is, biological, psychological, and social laws), which are fixed by the physical facts but distinct from them. This would be a physicalism even some Cartesians would accept.
It might be held that an argument for the nonderived basic character of the PNS can be constructed on the basis of a feature of the principle identified by Dennett (1995): its “substrate neutrality.” The PNS is substrate neutral in the sense that it can operate on an indefinitely large number of different objects differently composed. It would be true almost no matter what the objects of natural selection were composed of. Aside from requiring hereditary variation of traits, perhaps the only other requirement the principle makes of its objects is that they be concrete tokens, not abstract types. As such, the principle is indifferent to changes in, for example, the standard model of microphysics, or the acceptance of the periodic table of the elements in chemistry. It can therefore hardly depend on such laws for its foundations. If the content of the PNS is unaffected by actual or plausible changes anywhere in fundamental microphysics, then unlike laws in chemistry, there may be reason to suppose it is not dependent on such laws. If its form and content would be unaffected by changes in chemistry—even ones as profound as the overthrow of the periodic table, then again, it may well be supposed that the principle is not derivable from more basic laws in chemistry either. The substrate neutrality of the PNS would be a good reason to treat the principle as a nonderived basic law of nature if it weren’t for the fact that there is at least one other undoubted law which appears to share the same property of substrate neutrality and yet which few suppose to be a nonderived basic law of nature: the second law of thermodynamics, which we have met before in chapter 5: (x)(y) [If x and y are two states of a closed thermodynamic system and x is later than y, then, probably, x has greater entropy than y.] The second law of thermodynamics is, like the PNS, substrate neutral. Thermodynamic systems can be composed of any concrete objects whatsoever. 
What is more, the formulation and the nomological status of the second law seem impervious even to such vast changes in physics as the shift from Newtonian determinism to quantum mechanical indeterminism. But ever since its establishment, physicists have sought to ground the second law on more fundamental considerations from Newtonian dynamics. They have been entirely unwilling to treat the second law as a basic nonderived law of nature. Why not? The answer seems to be something like this: thermodynamic systems are composed of concrete objects with mass and velocity. Newtonian mechanics tells us that the past and future behavior of each such object is entirely fixed by its present position (relative to all other bodies) and momentum. Accordingly, the behavior of every aggregation of concrete objects should be explainable by disaggregation into the behavior of the aggregation’s constituent members. This commitment was vindicated early on in the case of thermodynamics by the derivation of the ideal-gas law from the assumption that gas molecules honored Newton’s law, and the hypothesis that the temperature of a gas (in degrees kelvin) is equal to the mean kinetic energy of the molecules. It remains true that physics has not succeeded in providing a completely successful grounding for the second law of thermodynamics, but few physicists have any doubt that the principle is a derived and not a basic one. This is evidently owing to the physicists’ commitment to physicalism or, more narrowly, mechanism as the thesis not merely that mechanical facts fix the thermodynamic facts, but also how mechanical facts fix the thermodynamic facts. (The physicists’ commitment to “mechanism” must, of course, be understood as qualified and indeed underwritten and explained by nonmechanistic theories and laws: the inverse-square law of gravitation in Newton’s time, relativity, and quantum mechanics at present.) Substrate neutrality is not by itself a sufficient reason to accept that the PNS is a nonderived law. Indeed, from the point of view of physicalism, arguments suggesting that the second law of thermodynamics should be a derived law of physics have equal force for the conclusion that the substrate-neutral PNS should also be a derived law. After all, the behavior of the concrete objects it deals with is also in principle described by physical theory. Accordingly, if the biologist’s PNS describes the behavior of some of the same objects and/or aggregations of them, then either it must be incompatible with and compete with physical theory to explain their behavior, or somehow it must be derivable from physical theory. The PNS’s independence from fundamental physical theory is the one alternative it would be difficult for a physicalist to accept.
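The kinetic-theory grounding just mentioned, pressure recovered from Newtonian momentum transfer at the walls and temperature identified with mean kinetic energy, can be checked numerically with a toy one-dimensional gas. This is my own sketch under simplifying assumptions (one dimension, hard walls, no intermolecular collisions, invented particle numbers); it is not the standard three-dimensional treatment, only an illustration of how the derivation runs.

```python
import random

def measure_PV(particles, L=1.0, T=200.0):
    """Bounce non-interacting point particles around a 1-D box of length L
    for time T, accumulating the impulse delivered to the right-hand wall.
    In one dimension, the time-averaged force on a wall plays the role of
    pressure, so (impulse / T) * L plays the role of P*V."""
    impulse = 0.0
    for m, x, v in particles:
        t = 0.0
        while True:
            dt = (L - x) / v if v > 0 else x / -v  # time to next wall hit
            if t + dt > T:
                break
            t += dt
            if v > 0:
                impulse += 2 * m * v  # momentum handed to the right wall
                x = L
            else:
                x = 0.0
            v = -v  # elastic reflection
    return (impulse / T) * L

rng = random.Random(0)
# (mass, position, velocity) triples; speeds and count are arbitrary choices
particles = [(1.0, rng.random(), rng.uniform(0.5, 2.0) * rng.choice([-1, 1]))
             for _ in range(500)]
# 1-D equipartition: (1/2)kT = (1/2)m<v^2>, so N*k*T equals the sum of m*v^2
NkT = sum(m * v * v for m, _, v in particles)
PV = measure_PV(particles)
```

Run long enough, `PV` converges on `NkT`: the macroscopic gas law falls out of nothing but Newtonian bookkeeping on the micro-constituents, which is exactly the pattern of derivation the reductionist would like to repeat for the PNS.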
could the pns be a derived law of chemistry or physics?

Dennett has also more controversially suggested that the PNS is an algorithm. What he wanted to emphasize by so treating it is the “mindless,” purely “mechanical” (1995, p. 59) character of natural selection in order to reflect its freedom from teleology, and its “automatic” implementation innocent of any need for “intelligence.” The mindless, mechanical character, the complete absence of purposeful intelligence in its operation, would indeed follow if option 2 were correct, and the PNS were derivable from physical law alone. For the PNS to be derived from more fundamental laws of physics and/or chemistry, we need something like a derivation of the PNS’s consequent “probably, there is some future generation n′, after which x has more descendants than y” from premises that include some set of physical and/or chemical laws, along with the antecedent of the PNS, “if x is fitter than y in E.” In effect, this
would be a conditional proof of the truth of the PNS from premises in physical science. Physicalism should feel the attraction of such a derivation. After all, if the physical facts fix all the facts, then they fix the biological facts, including the biological laws. Moreover, the physicalist can even identify the particular physical processes that realize variation, heredity, and selection on the Earth. Doing so is, in fact, a large part of the revolutionary developments in molecular biology over the last half century. If natural selection is a physical process, surely it must be physically explained. But, for the most well known of reasons, such a derivation (even merely in principle) faces serious difficulties. In order to derive PNS from physics, we need to connect the relation “x is fitter than y in E” to some purely physical facts about x, y, and E. Compare the explanation of the ideal-gas law, PV = nRT, by derivation from Newton’s laws applied to gas particles. It requires that P, the pressure of the gas on the container; V, the volume of the container; and T, the temperature in degrees kelvin be connected to Newtonian properties of the constituent particles of the gas and the container, such equalities as T(kelvin) = (1/2)mv². But, owing to the multiple realizability of the comparative fitness relation, no such connections can be established. The same supervenience relationship obtains for variation, heredity, and selection. Even if the physical mechanism of heredity on Earth is limited to only one or two general processes (the replication of nucleic acids and, more controversially, prions), its specific details are quite diverse (DNA replication, RNA-virus replication by reverse transcriptase, and, arguably, prion replication). Similarly, variation in the hereditary material employed by creatures on this planet will include a vast and heterogeneous disjunction of different physical processes.
As for the way in which the environment filters among hereditary variations, the diversity of physical mechanisms it employs is beyond contemplation. Plainly, no derivation of the PNS can actually be effected from laws that govern the physical processes on which it supervenes. No actual derivation may be effected, owing to the famous “many-one” relation between the physical supervenience base and the biological properties to which the PNS adverts. Elsewhere, I have argued that though this is an obstacle for creatures of our cognitive and computational limitations, a finite-sized supervenience base would make such a derivation at least in principle possible (Rosenberg 1985, 1993). An omniscient cognitive agent, who knows all the ways nature can skin the cat, all the physically possible ways that biological properties can be realized, would be able to effect a derivation of the PNS. This claim is at best cold comfort for the physicalist. To begin with, it would be deemed question-begging if the only grounds for the claim were the
physicalist’s conviction that since the physical facts fix all the facts, they must fix the PNS as well. Second, for all we know, the number of physically possible alternatives that realize biological properties is infinite, owing to the range of possibly continuous values biologically significant physical properties can realize. If there is no way finitely to express the range of continuous values, the derivation of biological laws from physical ones will be at best “schematic” or otherwise tendentious. The physicalist may respond to this last problem with a tu quoque. If the PNS is not derivable from the laws of physics owing to the infinitude of the alternative physical arrangements that will realize a given biological state of affairs, then the PNS’s relation to the laws of physics is no different from that of the second law of thermodynamics. After all, we know that the obstacle to reducing the entropy of a gas, for example, to the dynamic properties of the particles that compose it is quite similar: any quantity of entropy is compatible with an infinite set of equiprobable arrangements of particle positions cum momenta. The problem, recall from chapter 5, is simply that as yet, no one has provided a nonarbitrary measure that will enable us to rank these infinitely membered sets for size. Until we find such a measure, larger quantities of entropy cannot actually be identified with larger infinitely membered sets of such arrangements. It is for this reason that thermodynamics cannot provide a complete reduction of entropy to particle motion or, consequently, the second law to Newtonian mechanics. Nevertheless, there is no disquieting suggestion in physics that the second law of thermodynamics is “emergent” or “autonomous” from mechanics. If the PNS really were no worse off when it comes to reduction than the second law of thermodynamics, this tu quoque would be hard to resist.
Physicists know that the problem facing a reduction of the second law of thermodynamics to mechanics is the “technical” problem about the arbitrariness of alternative measures of the size of infinite sets. They recognize how in principle the reduction of thermodynamics to mechanics should run. It is pretty clear that the situation is quite different in the case here under consideration. We don’t have the slightest idea how the reduction of the PNS to fundamental laws of physics might even in principle proceed. Accordingly, the physicalist’s tu quoque is of limited force. The problems that vex alternatives 1 and 2 for any physicalist make option 3 worth exploring. And yet it is one no one seems to have examined before. Certainly no physicalist—antireductionist or otherwise—has explored the possibility that the PNS may be a nonderived law of chemistry or physics, and therefore unproblematic from a physicalist point of view. Let’s examine this option.
the pns is a basic law of physical science

Consider the following scenario: Begin with a set of atoms interacting in accordance with the laws of chemical stoichiometry to compose molecules of various kinds, which themselves interact chemically to compose other larger molecules. In a given molecular milieu, some of these resultant molecules will be more stable than others, owing to the nature of their bonds; some molecules will find themselves in chemical environments energetically and otherwise more favorable to their persistence than others. Those molecules that are more stable in a given environment are likelier to persist longer than other molecules, provided the environment persists. As the interactions among atoms continue, the number of these more stable molecules will increase over time until local conditions change and begin to favor the chemical synthesis of some other molecules. Among the molecules that emerge from atomic interaction and then from molecular interaction, some will have chemical properties which result in the appearance of more molecules of the same structure as their own: perhaps these molecules are catalysts for some reactions that, under the circumstances and given available substrates, lead to the production of more copies of themselves (more tokens of their types); or perhaps they provide templates for such copying; or their increased concentration shifts other chemical reactions away from equilibria and toward production of some substrate needed for more copies of themselves; or all of the above. Call molecules that (sometimes by the aid of other molecules acting catalytically or otherwise) foster the appearance of more tokens of their chemical types “self-replicating.” Of course, stability and replicability are matters of degree, and molecules will have both properties to varying degrees; but assume for the moment that the self-replicating molecules are unstable and the stable molecules are nonreplicating.
Given a locally finite supply of substrate atoms and molecules out of which both self-replicating and stable molecules are synthesized, the two kinds of molecular products must eventually exhaust the stock of substrate. If stability and self-replication are incompatible and the molecular environment lasts long enough, the number of stable molecules will almost certainly increase while the number of self-replicating ones will eventually fall. For every time a stable molecule is synthesized, some substrate atom or molecule necessary for replication will be tied up and become unavailable. As replicating molecules break up, owing to their instability, they will provide substrates for both new stable molecules and new replicating ones. But the stable molecules will not provide substrate for the replicating ones. In the end, if stability is great enough, only stable ones will be left. Of course, when the molecular environment changes or more
substrate appears, proportions may begin to change again. On the other hand, if the rate of replication is high enough and the half-life of stable molecules is low enough, then the long-term result may be either an equilibrium distribution of both kinds or even the swamping of stable molecules by self-replicating ones. However, given the laws of chemical synthesis, the distribution of substrate atoms and molecules, and the molecular environment, there may arise different molecules that combine stability and self-reproduction in varying proportions. Among these, of course, some will more closely approximate an environmentally optimal combination of stability and replication, so that given a finite stock of substrate atoms and molecules, those closest to optimally stable-and-replicating molecules will predominate. And, of course, if the environment (the local chemical milieu) changes, which molecule will come closest to the optimal combination will change as well. There are in our scenario a large number of strategies of chemical synthesis competing to bind together available substrate into molecules of varying degrees of stability, reproduction, and combinations thereof. In a constant environment, one or more of these strategies must “win”: that is, after a certain point all the molecules will be one kind, either all stable without replication or all highly unstable and constantly self-replicating; or there will turn out to be one or more processes of synthesis which combine varying quantities of stability and reproduction to produce approximately the same number of molecules of each type at the end of each period, given the initial substrate conditions. Or, more likely, there will be “ties” for first place among several strategies. (Whence the supervenience of the biological on the physical, and the blindness of selection for effects to differences in structure.) The result is, of course, the selection of the fittest molecules (types as well as tokens).
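The bookkeeping in this scenario can be made concrete with a toy stochastic simulation. This is my own illustrative sketch; the rate parameters, pool size, and discrete-time update scheme are all invented for the purpose, not drawn from the text or from any real chemical kinetics. Each molecule ties up one unit of a finite substrate pool; stable molecules form spontaneously and hold their unit until they decay; replicators copy themselves at a cost of one unit per copy and release their unit when they break up.

```python
import random

def run(total=300, steps=3000, p_rep=0.3, d_rep=0.2,
        p_stab=0.01, d_stab=0.0, seed=0):
    """Per step: free substrate units spontaneously form stable molecules;
    each replicator copies itself with probability p_rep (one free unit per
    copy); each molecule decays with its type's probability, returning its
    unit to the free pool. Substrate is conserved throughout."""
    rng = random.Random(seed)
    free, rep, stable = total - 5, 5, 0  # seed the pool with 5 replicators
    for _ in range(steps):
        formed = sum(1 for _ in range(free) if rng.random() < p_stab)
        free -= formed
        stable += formed
        births = 0
        for _ in range(rep):
            if births < free and rng.random() < p_rep:
                births += 1
        rep += births
        free -= births
        decayed_r = sum(1 for _ in range(rep) if rng.random() < d_rep)
        decayed_s = sum(1 for _ in range(stable) if rng.random() < d_stab)
        rep -= decayed_r
        stable -= decayed_s
        free += decayed_r + decayed_s
    return rep, stable

# Perfectly stable molecules (d_stab = 0) against unstable replicators:
rep1, stable1 = run()
# Fast, durable replicators against decay-prone stable molecules:
rep2, stable2 = run(p_rep=0.5, d_rep=0.05, d_stab=0.3)
```

With the first parameter set the substrate ends up locked in stable molecules and the replicators dwindle; with the second, the replicators dominate. Which synthesis "strategy" wins is fixed by the rate constants and the environment, which is the point of the scenario.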
In effect, our scenario suggests that molecules can realize a PNS, or equivalently, that there is a PNS among the laws of chemistry, and that it is not itself derivable from other laws of chemistry. Of course, it might be argued that a PNS for molecules would be derivable from chemical and physical laws if all the physically and chemically possible environments and all the combinations of stability and replicability could be specified chemically and physically. For then the comparative fitness relation that figures in the antecedent of the PNS for molecules could be cashed in for purely physical/chemical descriptions of the competing molecules, and the consequent of the molecular PNS would be derivable conditionally from the antecedent and the rest of physical law. If the combinations of differing means of replicability and different types of stability, and the number of differing chemical environments, on all of which comparative fitness differences among molecules supervene cannot be exhaustively characterized, then no such derivation is possible. If the number of combinations of replicability and stability
and the number of discrete chemical environments is finite, then, of course, the reductionist could argue for option 2, the derivability of the PNS from more basic laws of physical science. The argument of this chapter assumes that the antecedent of this conditional is false. If it is true, then ironically the claims of this chapter may have to be surrendered, but the reductionist’s argument will be immeasurably strengthened. If our natural selection scenario obtains for the constituents of atoms, we would have reason to embrace a PNS as among the fundamental laws of physics. What prevents us from finding such a law among the leptons and hadrons is, of course, that these particles are either highly stable or highly unstable. But, either way, they never replicate themselves. Accordingly, there is no scope in microphysics for differences in “fitness” along the lines that molecules exemplify. The PNS is a nonderived law, not of physics but of chemistry. Well, if there is a PNS among the laws of chemistry, why has it never been noted by chemists? Is it an objection to the claim that such a law obtains among the basic laws of chemistry that no chemist has hitherto explicitly recognized it? Perhaps. But it is also evident that the law is not one chemists ever needed to invoke to explain salient chemical processes. It is a law that only acquires explanatory and predictive application when the chemist turns to explaining the distribution of molecules of various types in various locations over long timescales, and this is something chemists scarcely ever concern themselves with. Chemists are certainly interested in stability of molecules; indeed, this has long been a matter of grave importance in research on the synthesis of new molecules. And in recent years, a number of chemists have interested themselves in the synthesis of self-replicating molecules as well. (See Rebek, Park, and Feng 1992.) 
Moreover, catalysis has always been a concern among chemists, and the properties of molecules as templates or support for the synthesis of other molecules have taken on importance especially in chemical engineering. But “pure” chemistry has not traditionally interested itself in the process of “selection for effects” by which an environment filters for stability, self-replication, or combinations of these two traits of molecules. This is an area of chemical inquiry to be found among chemical engineers, petrologists, and geologists interested in the distribution of chemicals on the Earth. It is therefore little wonder that a PNS for molecules is not likely to be found in the basic textbook presentation of chemistry along with the periodic table, the laws of stoichiometry, the gas laws, or the law of mass action equilibrium. Add to this the obviousness of the PNS, which has led repeatedly throughout its history in biology to the charge of tautology, and the invisibility of the law among the laws of chemistry should be no surprise.
The PNS for molecules reflects selection of molecules for stability, replicability, and various combinations thereof, depending on the environment. Selection for molecules, of course, results in selection of larger compounds, again for the stability and replicability they confer on their constituent molecules. But the only way compounds can do this is via their own stability cum replicability. The result is a PNS for compounds, grounded in part on the PNS for molecules. And, of course, there are other ways that ensembles of molecules arise besides via the covalent bond. Compounds also result from van der Waals forces and ionic or electrovalent bonds; additionally, noncompound aggregations of molecules result from nonpolar bonds and—especially important in biology—hydrophobic interactions, which produce solids and layers of lipid molecules. The result at each level of chemical aggregation is the instantiation of another PNS, grounded in, or at least in principle derivable from, the molecular interactions that follow the PNS in the environment operating at one or more lower levels of aggregation. As the size and complexity of the compound molecules increase, we can begin to identify the distinct and different contributions that their various parts make to their stability cum replicability. These parts may be molecules that survived in the previous environment long enough to constitute the substrates for novel chemical synthesis in the current one. After a certain point, large molecules will come to have distinct components, which provide active sites or allosteric sites for catalysis, or make for a favorable local pH, or protect a nucleic acid from deamination, and so on. These components will not themselves be very stable or replicate. But they may come to be shuffled around and attached to molecules that will enhance their stability and replication. 
When such molecules that aid stability and replication of other molecules become available, fitness will shift from simply being a matter of stability and/or replication to stability and/or replication and/or fostering the attachment of molecules that accelerate replication and increase stability. At some point in our scenario, the evolution by natural selection of successively more complex molecules may come into contact with the process that Stuart Kauffman (1993, 1995) has identified as providing “order for free” among molecules. If and when this happens, evolution by natural selection among molecules can be expected to accelerate in the direction of what we recognize as biological. Kauffman (1995, chapters 4 and 5) has produced simulations which suggest that given a large collection of different molecules, each one catalyzing the production of no more than two other molecules, it is highly probable that the resulting network of molecular interactions will show a relatively small number of orderly cycles of states; that the system of molecules so related when
perturbed slightly moves among a small number of basins of attraction; and that this system, when perturbed more seriously, moves with high probability to a new such stable cycle. For, say, a set of 30,000 molecules (roughly the estimated number of human gene products), each one randomly connected to exactly 2 other molecules, the length of a single cycle of states in which the system of molecules finds itself is only about 175 sets of successive molecular states. The trick, of course, is fine-tuning sets of chemical reactions so that each molecule catalyzes the synthesis of only two others. Pace Kauffman, this is a task for natural selection among molecules. What is more, if the present account is correct, natural selection among molecules for stability cum replicability will already have kicked in before the emergence of the spontaneous order which Kauffman seeks to model. Thus, at each level of the organization of matter, there turns out to be a PNS, and each one should be in principle derivable from the PNS for the immediately lower level or some other lower level(s), all the way back down to the PNS for molecules. (I appreciate that “in-principle derivability” is a notion which could benefit from significant amplification. Suffice it to say that the “in-principle derivability” of the second law of thermodynamics from statistical mechanics is an example of this sort of derivability, which sets a relatively high standard for “in-principle” derivability without requiring logical deduction.) The only reason the PNS doesn’t reach down into physics, on this picture, is that the environment that builds microphysical particles only selects for stability, not also for replicability. Indeed, calling the PNS a law of chemistry is just a picturesque way of drawing attention to the fact that selection for effects only begins to operate at the level of chemical reactions; and through its operation here, it also does so at higher aggregations of matter. 
Similarly, we call the second law of thermodynamics a law of physics, even though it obtains for all systems—physical, chemical, biological—since it is at the level of the physical that it begins to operate. The real point is not that the PNS is a chemical law properly so called, but that it describes a well-understood and purely physical process. Repeated cycles of such a process on this planet produced RNA and amino acids. The rest is natural history. The subsequent instantiation of natural selection for molecules and other aggregations of matter that combine stability and replication more optimally than others produces the biological systems that we know. It may not escape the reader’s notice that the scenario here described has affinities with some well-known speculations about the origins of life in stable replicating RNA molecules, which are both templates and catalysts for chemical reactions that multiply copies of themselves (Eigen and Schuster 1977). The
problem Eigen attempts to deal with is that once RNA molecules begin to appear, the error rate in their replication prohibits sufficient stability for selection to act on them, unless other molecules interact with them in a “hypercycle.” This proposal and the difficulties it faces need not concern us here. For the present proposal is a much less speculative claim about the operation of natural selection in the appearance of stable replicating molecules of a much less complex and much smaller kind than five hundred nucleotide-long molecules. However, if the “hypercycle” is as accessible as Kauffman’s (1993, 1995) models suggest, Eigen’s problem may not be insurmountable.
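Kauffman's "order for free" result mentioned above can be illustrated with a toy simulation. The sketch below — a Kauffman-style random Boolean network, with every detail (node count, seeds, update rules) invented for illustration and scaled far below the 30,000-molecule case — builds N nodes, each driven by a random Boolean function of K = 2 randomly chosen inputs, runs the synchronous dynamics from a random initial state, and measures the length of the attractor cycle the system settles into. For K = 2 networks Kauffman's claim is that these cycles are short, on the order of √N, which is where the figure of about 175 states for N = 30,000 comes from (√30,000 ≈ 173).

```python
import random

def random_boolean_network(n, k=2, seed=0):
    """Build a Kauffman-style random Boolean network: n nodes, each
    updated by a random Boolean function of k randomly chosen inputs."""
    rng = random.Random(seed)
    inputs = [tuple(rng.sample(range(n), k)) for _ in range(n)]
    # Each node gets a random truth table over its 2**k input combinations.
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its k inputs' current states."""
    return tuple(
        tables[i][sum(state[src] << bit for bit, src in enumerate(inputs[i]))]
        for i in range(len(state))
    )

def attractor_length(n, seed=0, max_steps=50000):
    """Run from a random initial state until a state repeats; the gap
    between the two visits is the length of the attractor cycle."""
    inputs, tables = random_boolean_network(n, seed=seed)
    rng = random.Random(seed + 1)
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(state, inputs, tables)
    return None  # no repeat found within max_steps

lengths = [attractor_length(100, seed=s) for s in range(10)]
print("K=2 attractor cycle lengths for N=100:", lengths)
```

Even though a 100-node network has 2¹⁰⁰ possible states, the runs settle quickly into short cycles — the spontaneous order Kauffman's models exhibit.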
pns, the levels of selection, and downward causation

The reductionist should be happy to accept the PNS as a nonderived law of physical science (in particular chemistry, if a more exact location is needed). Reductionists should also hold that its operation at higher levels of the aggregation of matter is a consequence of the operation of the nonderived PNS for molecules together with the rest of physical law. Physicalism requires that at any given level of the organization of matter, all the way from the lipid bi-layer to the group of interacting organisms, the operation of the PNS is grounded (in part) on its operation at one or more lower levels of organization of matter, and always on its operation at the level of the molecule. The reductionist can employ this approach to elucidate how the physical facts can fix all the facts, including the appearance of “downward causation” in biology, and multilevel selection, without incurring the obligation to embrace ontological antireductionism. Let’s consider an antireductionist’s argument for “downward causation” and from it the autonomy of biology. It is an argument we have canvassed before, in chapter 2, in some detail. Here I want to focus just on how our treatment of the PNS as a nonderived law of chemistry affects this argument. In “1953 and all that,” Kitcher argues, the geometrical structure of the cells in a developing chick limb—a nonmolecular fact—will have a molecular explanandum, lowering the density gradient of a molecular gene product below a threshold, and so explaining the formation or malformation of the wing.
Kitcher notes that “reductionists may point out, quite correctly, that there is some very complex molecular description of the entire situation.” But “however this is realized at the molecular level, our explanation must bring out the salient fact that it is the presence of a gap between cells that are normally adjacent that explains the non-expression of the genes.” Of course, it is also the spatial contiguity of the cells in the normal case that explains why the genes are expressed in the “right” order and development proceeds. There are thus “examples in which claims at
a more fundamental level (specifically, claims about gene expression) are to be explained in terms of claims at a less fundamental level (specifically, descriptions of the relative positions of pertinent cells)” (1984, 371). Substitute the relation of causation for Kitcher’s relation of explanation, since the issue, as he sees it, is one of ontological antireductionism: we would, Kitcher holds, “fail to identify the causally relevant properties . . . by using the vocabulary and reasoning patterns of molecular biology” (1984, 371). Thus understood, the thesis is that nonphysical, biological facts about spatial relations among biological items cause nonbiological, physical facts about the density gradient of a macromolecule across the space of those cells. It is hard to see how we could reconcile the claim that the direction of causation runs from the biological “down” to the physical, with physicalism’s commitment to physics as fixing all the facts. Locating the PNS among the nonderived laws of physical science will not make “downward causation” possible. But it will help us see clearly where the appearance of downward causation in biology comes from. Consider how the causal process Kitcher describes came about. The story is something like this: At some earlier evolutionary time, environmental factors operating at the level of macromolecules made adaptive the spatial arrangement of molecules into organelles, then into cells, and eventually into tissues. That is, the chemical milieus first at the time of organelle formation, then at the time of cellularization, and eventually at the time of tissue formation made them successively available as attainable solutions to successive problems of stability-cum-reproduction for the nucleic acids (and their molecular products). 
Subsequently, the physical geometry of the tissue structure, together with the chemical density gradient of certain molecules (in Drosophila it would be the gradient of Bicoid protein, translated from maternally deposited bicoid mRNA), provided an environment in which the resulting differential expression of nucleic-acid-molecule sequences is selected for, owing to its developmental effects. Nowadays, the gradual diffusion of a chemical through the space of this tissue, turning some genes off and others on, depending on its concentration, is a purely physical process. Note it is the physical geometry of the structure that is causally relevant here, not the fact that the structure is composed of tissues, or composed of cells, for that matter. Any contained space of the same dimensions will do. As a result of all these selection processes, given the right geometry and chemical gradient, ceteris paribus, normal embryological development ensues. The spatial distance between nucleotide sequences and the gradient of spatial distribution of the maternal protein are jointly causally sufficient in the circumstances of normal embryological development to repress and stimulate a variety of gene sequences that make for development of the chick wing. Note that this latter (nowadays) part
of the story—in which the causal responsibility is borne by diffusion and spatial separation of molecules—is wholly physical. There is nothing less physically “fundamental”—to employ Kitcher’s term—in the cause than the effect. Appearances to the contrary result from the biologists’ descriptive vocabulary of organelles, cells, tissues, and so on, functional terms which reflect the fact that the current proximate physical causes were fixed long ago by natural selection operating at the level of macromolecules. Not so long ago, Donald Davidson (1967, 195) noted that we should not mistake deletions from the description of causes for deletions from the causes themselves. The same advice should be kept in mind when it comes to additions to these descriptions.1

1. Philosophical digression. Davidson’s point and amplifications of it are useful for rebutting arguments that purport to convert reductionism to a species of eliminativism. The threat is serious if only because no biologist is prepared to substitute the molecular descriptions of biology’s explananda for the functional ones, and yet the reductionist’s “nothing-but” argument that the biological just is macromolecular sounds like it may require such a substitution. Some philosophers of psychology “run” a similar argument against the reduction of psychological (especially intentional) processes to neurological ones, under the label “the causal drainage argument” (see Block 2003). Adapted to the biological case, it proceeds roughly like this: if the biological kinds, or, as I have called them, functional kinds, have their causal properties in virtue of being “nothing but” complex molecular kinds, as the explanatory version of reductionism here defended requires, then biological kinds have no causal powers beyond the causal powers of the macromolecular kinds which compose them. Bereft of their own distinctive causal powers, they are scientifically otiose and must be eliminated.
Avoiding eliminativism requires that we accord biological kinds distinctive, irreducible causal powers, among them, for example, downward causal powers. The “causal-drainage” argument fails to notice that the terminology of functional biology and that of molecular biology identify the same causal powers, the only causal powers there are—the physical powers—in different terms. Reductionism does not deny that biological kinds have causal powers—the physical ones; it reveals them. Here we need to heed Kim’s point that functional terms do not identify distinct “higher-level” kinds with distinct “higher-level” causal properties. They are, rather, “higher-order” terms that name the same properties which “lower-order”—macromolecular—terms name. Higher-order terms are often disjunctive in their reference and even embody concealed quantifiers, but they pick out the same properties as lower-order terms, not distinct higher-level ones. (See Kim 2005, especially pp. 52–69 and 108–20.) There is a metaphysical problem in mounting this defense against the threat of eliminativism: it is the need for a general argument against the Platonistic claim that there is a distinct property for every predicate, including the so-called higher-order disjunctive predicates that, I argue, following Kim, name lower-order nondisjunctive properties. This is, of course, hardly a problem in the philosophy of biology, still less in biology.
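The diffusion-and-threshold mechanism in the developmental example above — a chemical spreading from a source, its concentration falling with distance, and genes switching on wherever it exceeds their activation thresholds — can be sketched in a few lines. Everything here (the gene names, the thresholds, the length scale, and the steady-state exponential gradient, which is a textbook solution for diffusion with uniform decay) is an invented toy stand-in, not molecular data:

```python
import math

def morphogen_concentration(x, c0=1.0, length_scale=0.2):
    """Steady-state exponential gradient from a source at x = 0
    (standard solution of diffusion with uniform decay)."""
    return c0 * math.exp(-x / length_scale)

def expressed_genes(x, thresholds):
    """A gene is switched on wherever the local concentration exceeds
    its activation threshold (toy thresholds, purely illustrative)."""
    c = morphogen_concentration(x)
    return [name for name, theta in thresholds.items() if c >= theta]

thresholds = {"gene_A": 0.5, "gene_B": 0.2, "gene_C": 0.05}

for x in (0.0, 0.2, 0.5, 0.8):
    print(f"x = {x:.1f}: concentration = {morphogen_concentration(x):.3f}, "
          f"on: {expressed_genes(x, thresholds)}")
```

The sketch makes the point in the text vivid: which genes are expressed at a position depends only on the purely physical facts of distance and concentration, so a gap between normally adjacent cells (an increase in x) silences genes simply by letting the concentration fall below their thresholds.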
Like other biological explanations, developmental ones give the appearance of downward causation from the biological to the physical for three reasons. First, they do so in part because some of the causes they identify are described biologically, even though their causally relevant properties are purely physical ones. Second, they do so because the physical structures that produce their molecular (or other physical) effects were put in place by natural selection, which has not hitherto been recognized as a wholly physical process. And finally, our ignorance of “all the gory details” makes epistemic antireductionism inevitable. But fortunately for physicalist reductionism, the appearance of “downward causation” is just that: mere appearance. The same perspective will enable the physicalist to accommodate another apparently antireductionist thrust of contemporary biological theory. This is the claim that biological systems such as groups have properties not explainable by the properties of their individual component parts, and so must be accorded existence irreducible to the existence of their immediate component parts. Among the most important examples of this sort of argument is one due to Sober and Wilson (1996), who argue for the autonomous existence of groups by showing that selection for traits of groups in which altruists predominate can be opposite in direction and stronger than selection for individuals with the same (or competing) traits. In such cases, evidently the operation of the PNS at the level of groups cannot be accounted for by the operation of the PNS at the level of individuals who compose them. Ontological reduction is thus supposedly blocked, since apparently it would require that group traits depend on traits of the individuals who compose them.
But with the addition of the PNS to the stock of physical laws, physicalism may countenance a much more complicated relationship between parts, wholes, and their respective properties than this argument against ontological reduction supposes. The substrate-neutral PNS can operate at various levels of organization, moving in different and indeed opposite directions within larger biological systems and the smaller ones they contain, owing to its operation at the level of molecules. To see how, consider the operation of that other substrate-neutral physical principle, the second law of thermodynamics. A thermodynamic system, the Earth, for example, whose entropy is increasing may consist of subsystems or components whose entropy among certain members is not increasing or is even decreasing (as, for example, when biological systems decrease their entropy at the expense of that of their local environments, with a net increase in total entropy). The second law assures the improbability of “local” decreases in entropy unless made good by increases elsewhere, and prohibits permanent local decreases in entropy. Apparent local violations of the second law are reconciled to its general validity when we take a wider, less local
view of changes in the distribution of matter, in which distribution fixes all the thermodynamic facts about the whole system. Thus, there is no difficulty about reconciling the second law’s demand for net or global increases in the entropy of an aggregate with local increases in the entropy of one or more of its components. What the second law requires to allow persistent local entropy decrease is compensating local entropy increase elsewhere in the aggregate, so that the aggregate honors the second law. Much the same can be said for the PNS, though interestingly here the relevant quantity—average fitness—must globally increase (modulo a constant environment) at the level of components, even as it may decrease at the level of some aggregates they compose. (This is just the opposite of the second law’s requirement for entropy increase at the level of aggregates.) Groups of biological individuals may experience fitness increases at the expense of fitness decreases among their individual members for periods of time that will depend on the size and composition of the group and the fitness effects of their traits. What the PNS will not permit is long-term fitness changes at the level of groups without long-term fitness changes in the same direction among some or all of the individuals composing them. The physicalist will explain this in much the same way as the second law of thermodynamics is reconciled to temporary local departures from global entropy increase. In Sober and Wilson’s model for group selection, groups with higher proportions of altruists grow larger in total population from generation to generation than groups with lower proportions of altruists, even as the proportion of altruists within each group declines from generation to generation (owing to the free-riding of the selfish among them). 
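The Sober–Wilson arithmetic just described can be checked with a short simulation. In the sketch below (the fitness parameters are invented, chosen only to make the effect visible), altruists pay a fixed cost while every group member, altruist or free-rider, gains in proportion to the group's altruist share:

```python
def next_generation(n, p, base=2.0, b=2.0, c=1.5):
    """One generation of a single group in a Sober/Wilson-style model.
    n = group size, p = altruist share. Altruists pay a fixed cost c;
    everyone gains b times the group's altruist share. The parameters
    are invented, chosen only so the effect is visible."""
    w_alt = base + b * p - c      # altruist fitness
    w_self = base + b * p         # selfish (free-rider) fitness
    mean_w = p * w_alt + (1 - p) * w_self
    return n * mean_w, p * w_alt / mean_w

# Two groups: one mostly altruist, one mostly selfish.
groups = [(100.0, 0.9), (100.0, 0.1)]

for gen in range(4):
    total = sum(n for n, _ in groups)
    altruists = sum(n * p for n, p in groups)
    print(f"gen {gen}: altruists = {altruists:7.1f}, "
          f"global share = {altruists / total:.3f}, "
          f"within-group shares = {[round(p, 3) for _, p in groups]}")
    groups = [next_generation(n, p) for n, p in groups]
```

Run under these (invented) parameters, the altruist-heavy group outgrows the selfish one and the absolute number of altruists rises from generation to generation, even as the altruist share falls both within each group and in the total population — exactly the temporary divergence between group-level and individual-level fitness described in the surrounding text.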
Thus, in a large population divided into such groups, over a certain number of generations the total number of altruists will rise, even though altruism reduces individual fitness (when free-riding is unchecked), while the proportion of altruists in the total population must decline, in accordance with the PNS. At least temporarily, groups composed mainly of altruists will have higher fitness than groups composed mainly of selfish members, as measured by the total numbers within each group, even as the comparative fitness of altruists declines in each group, as measured by the comparative reproductive success of selfish and altruistic members within each group. But as Sober and Wilson recognize, left alone, this process cannot persist indefinitely. It is important to see that Darwinian reductionism does not require that the operation of selection at one level be explained by the operation of selection at the immediately “lower” level. For example, it is no part of reductionism to insist that group selection be reducible to individual selection. Darwinian reductionism does not require that group selection be explained by individual selection, or that individual selection be explained by genotypic selection, or that genotypic selection be explained by gene (or allele) selection. It does require that regularities about selection operating at any level be explained in terms of regularities operating at the next lower level. But these need not be regularities about selection operating at this level. What it does require is that wherever selection does operate, it must eventually be explained by selection at some “level” in the succession of reductive explanations that eventually terminate at the behavior of macromolecules. (The distinction between reductive explanations of selection and selectionist explanations of selection is emphasized in Okasha 2006, chapter 4.) The local departure from individual fitness maximization that the PNS countenances is like the local departure from entropy increase that the second law countenances. In neither case will the laws allow for permanent departure. The second law will not allow for permanent departure because the closer we come to the global perspective of matter in motion common to all thermodynamic systems, the closer the probability of entropy increase approaches 1.0. The PNS will not allow for long-term fitness increases at one level driven by equally long-term fitness decreases at some lower level. And the reason is that, as physicalism requires, selection at the lowest level (together with initial conditions) must in the long run fix selection at higher levels. At the molecular level, the probability that fitter alternatives will proliferate approaches 1.0, just owing to the very large numbers of molecules involved. Physicalism requires that the selective environment, so to speak, build the genes, organelles, cells, tissues, individual organisms, and groups out of the molecules by selecting on each aggregation as an extended phenotype that the molecules realize.
In some environments, selection may act for a limited time in different directions on parts and their wholes at any of these levels of aggregation. But, ceteris paribus, it cannot do so persistently without the eventual extinction of those individuals whose lower fitness makes for the higher fitness of the aggregations of which they are parts. Thus, for example, with the extinction of altruistic individuals, so too altruistic groups must become extinct. Until they do, their separate existence and divergent adaptational fates will be the result of the PNS operating at levels below that of either group or individual members of it. So, at any rate the reductionist must and can hold. The reductionist may even advance a reconciliation with the antireductionist on the strength of this conclusion. Since the PNS’s operation at any level where it does operate is in part fixed by the operation of a PNS operating unproblematically, unmysteriously, at the level of the molecule, the physicalist can have no qualms about the PNS’s physical foundations. Since the PNS’s operation at any level above the molecular is fixed by the operation of at least one law that
is not reducible to the (other) laws of physics and chemistry, the antireductionist can without discomfort hold that biology is a discipline whose foundations are strictly autonomous from those of physical science (minus the PNS for molecules, of course). Once it is recognized that the PNS is a nonderived but physically unproblematical law of chemistry, the differences between reductionists and antireductionists become easy to reconcile, and the autonomy of the biological from the physical becomes philosophically uncontroversial. As the exponents of biology’s descriptive and explanatory autonomy held, the functional-kind vocabulary of biology will be irreducible to that of physical science. But the reason will simply be that the PNS for molecules, whose action “builds” the functional kinds as adaptations, is not reducible to (the other) laws of physics. But as the reductionist holds, there is nothing ontologically suspect about these functional kinds, since their tokens are “built” by the operation of a physical law—the PNS for molecules. And, of course, as physicalists, all philosophers of biology will now be free to treat the theory of natural selection as a body of nomological generalizations that really vindicate Dobzhansky’s dictum that nothing makes sense in biology (including molecular biology) except against the background of evolution.
7
• • • • • • •
Genomics, Human History, and Cooperation

My case for Darwinian reductionism is now pretty much complete. In the last two chapters of this work, I want to vindicate my subtitle, to give the reader reason to stop worrying and love molecular biology. In this chapter, the aim is the latter: to show why even those of us who are not molecular biologists, or even biologists, can profit by what molecular biology tells us about matters far from the behavior of the organelles, cells, tissues, organs, organisms, and populations that concern the biologist. In the next chapter, I turn to the task of showing why there is nothing to fear from molecular biology, or at least how the Darwinian reductionism advocated here refutes the very thesis that so worries reductionism’s opponents: genetic determinism. In his Notebook, Darwin famously wrote, “Origin of man now proved. . . . He who understands baboon, would do more for metaphysics than Locke” (1989, Notebook M, p. 84). Darwin’s claim is probably guilty of pardonable exaggeration. After all, he didn’t prove the origin of man, and Locke’s greatest contributions were to political philosophy, not metaphysics. But it may turn out that Darwin’s twentieth-century grandchild, genomics, vindicates this claim with respect to both metaphysics and political philosophy, or at least the metaphysical questions about human nature and the political philosopher’s questions about the foundations of human cooperative institutions. And along with providing answers to such questions, genomics sheds light on a great deal else about human cultural, and not just genetic, evolution.
By genomics I mean the comparative and often computational study of the nucleotide sequences and the functional organization of the human genome and the genomes of many other species of animals, plants, and fungi. The Human Genome Project has given us a first and second draft of the 3-billion-base-pair DNA sequence of the human genome. It has so far given us a little more information about the human genome. For instance, it appears that even more of it is “junk” DNA than molecular biologists have thought; “junk” DNA has no role in development or normal human function, and is just along for the ride, so to speak. And it now appears that there are only about 30,000 to 60,000 genes in our genome, which makes it a small multiple of the size of the fruit fly’s genome. But at an accelerating rate, genomics—the comparative study of the DNA sequences of the human and those of other organisms—will begin to give us the sort of detailed information about our genomes we never dreamed of, and will give it to us as the result of methods we can automate and turn over to computers. Learning about our genomes and their protein products will cease to require genius, and at most will demand ingenuity. Learning exactly which DNA sequences among the 3 billion nucleotide bases express genes, and which genes they express, is a matter of “annotating” the DNA sequence the Human Genome Project has provided. Even before the whole sequence came into our hands, comparative genomics was providing evidence about large tracts of history about which only informed speculation had hitherto been possible. So, how can gene-sequence data shed light on events over which the genes presumably have no control? It turns out that E. O. Wilson’s metaphor of the genes holding culture on a leash is also apposite in the other direction. Culture holds the genes on a leash, and its twitch upon the cord is still there millennia later to be read by us.
mitochondria, y chromosomes, and human prehistory

We are inclined to think of history as having begun when written records did, about three thousand years ago in the Near East and a thousand years later in Mesoamerica. But DNA-sequence data already in hand extend our knowledge of the general lines of human history so far back as to turn the Inca Empire, the fall of Rome, the building of the Great Wall of China, or the founding of Sumerian Ur into matters of recent history. DNA-sequence data can answer detailed perennial questions about human origins and prehistory that have hitherto been the domain of pure speculation. Like the bar code on a can of beans on a supermarket shelf, our DNA sequences are labels from which we can read off date and place of manufacture, not just in geological time, but over the last
200,000 years, with resolving power that already approaches only a few thousand years, just beyond the reach of carbon-14 dating. Seeing how fine-grained is the resolving power of the genetic bar code in these cases should give us some confidence it can answer countless other questions hitherto beyond the reach of evidence. But to see this requires a little of the science of DNA sequences. Everyone inherits their cellular mitochondria and the genes these contain only from their mothers, because the mitochondrial genes aren’t in the nucleus of any cell—somatic or germline—and so don’t make it into the sperm, which contains only DNA from the nucleus. But since mitochondria are in every cell, they are in every ovum, and so in every ovum fertilized by a sperm. By contrast, all males and everyone else who has a Y chromosome inherits it from his (or her) father (the parenthetical accommodates the rare XXY females). Mitochondrial genes’ DNA (mtDNA) and Y-chromosome DNA can be sequenced. Since individuals differ from one another in gene sequence, it is easy to order a sample of individuals for greater and lesser similarity in DNA sequences—whether in the nucleus or the mitochondrion. The more similar the sequences, the more closely related two people are. Given an ordering of similarity in mtDNA and Y-chromosome DNA among people living today, and compared to some mtDNA and Y-chromosome DNA sequences in another species whose age is known, geneticists can work backwards to identify an mtDNA or Y-chromosome sequence from which all contemporary sequences must have mutated and descended; in effect, they can draw a family tree of all the main lines of descent among mtDNA or Y-chromosome sequences, and they can date the age of various branches in this family tree.
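The similarity ordering and dating just described can be made concrete. The sketch below compares toy aligned sequences (invented for illustration; real mtDNA comparisons use hundreds of sites from the hypervariable regions) by counting differing sites, then applies the crude molecular-clock reasoning that geneticists refine in practice: if each site mutates at rate μ per year, two lineages separated for t years accumulate roughly 2 × μ × sites × t differences, so t ≈ d / (2 × μ × sites). The rate μ here is an assumed, purely illustrative number.

```python
def hamming(a, b):
    """Number of sites at which two aligned sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Toy aligned sequence fragments, invented for illustration only.
seqs = {
    "person_1": "ACGTACGTACGTACGTACGT",
    "person_2": "ACGTACGTACGAACGTACGT",
    "person_3": "ACGAACGTTCGAACGTACCT",
}

names = list(seqs)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a} vs {b}: {hamming(seqs[a], seqs[b])} differences")

# Crude molecular-clock estimate: two lineages separated for t years
# accumulate about 2 * mu * sites * t differences, so t ~ d / (2 * mu * sites).
mu = 1e-5      # assumed per-site mutation rate per year, purely illustrative
sites = 20
d = hamming(seqs["person_1"], seqs["person_2"])
print(f"rough time to common ancestor of persons 1 and 2: "
      f"{d / (2 * mu * sites):,.0f} years")
```

The pairwise distances order the sample for relatedness (persons 1 and 2 are most similar here), which is the raw material from which the family tree of lineages is drawn; calibrating against a sequence of known age is what turns branch depths into dates.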
mtDNA-sequence data were available much earlier than Y-chromosome data, and they led to the conclusion that every human being now living is descended from one particular woman living in eastern Africa—present-day Kenya and/or Tanzania—approximately 144,000 years ago. She alone, of the approximately 2000 to 5000 women then alive, has had an unbroken line of female descendants from that day to this. Every other woman then alive has had at least one generation of all-male descendants, and so her mitochondrial sequences have become extinct. Moreover, the narrowness in sequence variation among extant people reveals that we are ten times more similar to one another in sequence data than, for example, chimpanzees are to one another. It can also be established that this woman, called "Eve" by biological anthropologists, lived among a relatively small number of Homo sapiens who must have gone through some sort of evolutionary bottleneck—that is, at some point in the recent past the population was reduced so severely that most lineages left no present-day descendants. As a result, there were only about 2000 (±1000) women altogether alive at the time Eve lived. Subsequent sequencing of a portion of the Y chromosome has confirmed these conclusions. Indeed,
as more and more sequence data come in about single-nucleotide polymorphisms and microsatellite loci, the conclusion has become inescapable (in spite of Chinese reluctance to accept it) that all present Homo sapiens are descended from this one African Eve and a relatively small number (about a dozen) of African Adams alive at the same time as Eve.1 Our common descent from African Eve explains why intraracial gene-sequence differences are larger than interracial ones, and why polygenically (many-gene) coded traits have not had sufficient time to assort into separate lineages, thereby explaining why race is not a biologically significant explanatory concept. The genetic similarity among humans suggests further that the obvious visible differences among us in skin color, hair color, facial characteristics, and so on are both of relatively recent origin and most probably the result of sexual and natural selection in local environments and small populations. Besides telling us where and when we started from, geneticists, following out differences in more and more available DNA sequences, have traced the details of early human migration out of eastern Africa into both western and southern Africa and northward, dating the arrival of Homo sapiens on each of the continents to within a few thousand years, and explaining in some detail the peopling of Micronesia, Melanesia, and Polynesia within the last six thousand years (Cann 2001). And beyond chronology, sequence data provide other startlingly detailed revelations about matters of prehistoric narrative hitherto thought forever beyond answers. For example, consider the question that concerned novelists like Auel and Golding, and many others: what became of the Neanderthals? Well, Neanderthal DNA is available in bones from the Neander Valley in Germany.
By comparing mtDNA and the Alu sequence—a bit of junk-DNA sequence that repeats a distinctive number of times in chimp, Homo sapiens, and Neanderthal DNA—it can be shown that these three lines of descent don't share these sequences at all, as they would have to if there were any interbreeding among them. This is not surprising in the case of chimpanzees and Homo sapiens, of course. But that there was no interbreeding between our species and the Neanderthals at all is very significant. It means that Homo sapiens either killed off the Neanderthals, gave them all a fatal disease, or otherwise outcompeted them in a common environment. Probably, Cro-Magnon outcompeted them, because there is archaeological evidence that both populations existed side by side in Europe over many thousands of years. Similarly, the absence of any non-African Y-chromosome sequences among 12,000 Asian

1. For an introduction to the African "Eve" hypothesis and supporting data, see Boyd and Silk 2000, 477–83; Hedges 2000; Stoneking and Soodyall 1996. For Y-chromosome-sequence confirmation and amplification, see Renfrew, Foster, and Hurles 2000; Stumpf and Goldstein 2001.
males from 163 different populations shows that the migrants out of Africa replaced any earlier Asian populations, and did not interbreed with them either (Boyd and Silk 2000; Gibbons 2001). Further research has employed DNA-sequence data to uncover the detailed narrative of events we never dreamed of reconstructing, and of other events our nongenetic records have misrepresented to us. For example, consider the origin of agriculture in Europe about ten thousand years ago. How did it happen? There is some archaeological evidence that farming spread from the Near East northward and westward in Europe. But how? By cultural evolution, one might presume: farming must have spread as people in one European valley noticed the success of those farming in the next valley to the southeast, and copied their discovery. Others have held that the farmers came out of the Near East and—like the Cro-Magnons outcompeting or extirpating the Neanderthals—displaced, pushed out, or decimated local populations, took over their territory, and thus expanded the farming regions. Which hypothesis is right is not a question we could ever have expected to answer, since these events took place before any recorded history, indeed before the invention of writing! But recent studies, first of mtDNA and now of Y-chromosome-sequence differences in contemporary Near-Eastern and European populations, substantiate the latter scenario, the "demic-diffusion" model, a euphemism for the displacement of one whole population by another. mtDNA and Y-chromosome-sequence data show that the earliest migration from the Near East into Europe occurred about 45,000 years ago, and its descendants now account for only about 7% of contemporary European mtDNA. However, the earliest immigrants provide twice that proportion of mtDNA among the isolated Basque, Irish, and Norwegian populations, and only half that frequency in Mediterranean populations.
The next wave of migration, about 26,000 years ago, provided about 25% of current mtDNA in Europe, while the third wave, 15,000 years ago, accounts for about 36% of contemporary European mtDNA. Agriculture arrived with a diffusion from the Middle East about 9,000 years ago, and despite their recent arrival, the mtDNA sequences these immigrants brought with them account for only 23% of the mtDNAs of current European populations, 50% when we exclude the extreme Basque, Irish, and Scandinavian populations. And this wave of migration provides mtDNA and Y-chromosome DNA sequences in a “cline”—a gradient of change in proportions—that moves in the direction from southeast to northwest (Richards and Macaulay 2000). What the sequence data tells us is that Near-Eastern populations displaced indigenous ones year after year in a wider and wider arc of expansion from the Middle East, either driving them west eventually to the extremities of the European continent or killing them off so that the only survivors of the original population of Europe were those inhabiting agriculturally marginal territories.
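A cline of the kind just described is simply a frequency gradient along a geographic axis, and its signature can be computed with elementary statistics. The figures below are invented for illustration, not the measured European frequencies; the point is only the shape of the calculation.

```python
# Illustrative sketch of a "cline": the frequency of a Neolithic-era
# mtDNA lineage falling off with distance from the Near East.
# (distance from Anatolia in km, frequency of the lineage) -- invented data
samples = [(0, 0.40), (1000, 0.32), (2000, 0.25), (3000, 0.18), (4000, 0.10)]

def least_squares_slope(points):
    """Ordinary least-squares slope of frequency against distance."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    return num / den

slope = least_squares_slope(samples)
# A negative slope is the signature of a southeast-to-northwest cline:
# the immigrant lineage thins out along the direction of expansion.
```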
The question arises, then, why didn't the earlier inhabitants acquire farming either independently or by imitation of their neighbors' practices? Surely there is no gene for farming that they lacked. Did farming and the social organization it produced make the Near Easterners that much more formidable than the hunter-gatherers? If so, why? Further thought about this displacement should at least enable theorists of the evolution of cooperation among hunter-gatherer egalitarians to set some constraints on their models. The payoffs to cooperation cannot be so strong as to prevent defeat by less-egalitarian groups with storable commodities. More recent population events, besides revealing who settled Melanesia, Micronesia, Polynesia, and the Western Hemisphere and when they did so, will tell us who arrived later, what groups went back the other way to settle Madagascar (where mtDNA sequences are quite different from mainland African mtDNA; see Gibbons 2001), and why the current residents of the Andaman Islands east of the Indian mainland have mtDNA sequences far closer to those of east Africans than even to those of the inhabitants of their neighboring islands or the Indian Subcontinent. Nonhuman DNA-sequence data will be able to tell us still more about human prehistory. Sequencing the domesticated plants and animals and their extant undomesticated relatives can tell us where and when hunting and gathering first permanently gave way to farming, and thus to the beginnings of hierarchical social, political, and cultural institutions. And it can push these dates further back, and fix them with much greater accuracy, than the archaeological evidence now available can. In fact, what DNA-sequence research thus far has shown is that both wheat and cattle were probably domesticated at least twice independently and at roughly the same time.
Among the earliest domesticated cereals is emmer wheat, which, however, reflects two different sequences that diverged 2 million years ago, one traceable back only to southern and central Europe, including Italy, the Balkans, and Turkey, while the other is ubiquitous in all regions of emmer cultivation. This suggests a double expansion from domestication in the Middle East. There are two distinct types of cattle—the humped breeds of India and the humpless ones of Africa and Europe. They were both domesticated two thousand years after wheat, but their DNA sequences are sufficiently different to support the hypothesis of separate domestication. (See Brown et al. 1998; Noonan et al. 2005.)
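Divergence dates like the 2-million-year figure above come from molecular-clock arithmetic. The sketch below uses an illustrative round-number substitution rate, not the calibrated rate of the cited studies; the point is only the form of the inference.

```python
# Molecular-clock dating: two lineages accumulate substitutions on both
# branches after splitting, so the observed per-site distance d ~ 2 * mu * t,
# giving t = d / (2 * mu).

def divergence_time(per_site_differences, rate_per_site_per_year):
    """Years since two lineages shared a common ancestral sequence."""
    return per_site_differences / (2.0 * rate_per_site_per_year)

# E.g., two wheat lineages differing at 0.4% of sites, under an assumed
# (purely illustrative) rate of 1e-9 substitutions per site per year:
t = divergence_time(0.004, 1e-9)   # on the order of 2 million years
```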
genomics and the emergence of human cooperation

From the year that William Hamilton first introduced the concept of inclusive fitness and the mechanism of kin selection, biologists, psychologists, game theorists, philosophers, and others have been adding details to answer the question of how altruism is possible as a biological disposition. We now have a fairly well articulated story of how we could have gotten from there—nature red in tooth and claw—to here—an almost universal commitment to morality. That is, there is now a scenario showing how a lineage of organisms selected for maximizing genetic representation in subsequent generations could come eventually to be composed of cooperating creatures. Establishing this bare possibility was an important turning point for biological anthropology, human sociobiology, and evolutionary psychology. Prior to Hamilton's breakthrough, it was intellectually permissible to write off Darwinism as irrelevant to distinctively human behavior and human institutions. The major components of the research program, the models and simulations, the comparative ethology, are well known. Once Hamilton showed that inclusive-fitness maximization favors the emergence of altruism toward offspring, a virtual riot of ethological activity began to identify previously known cases of offspring care as kin selected, and to uncover new examples of it. Once Hamilton was joined by Axelrod in identifying circumstances under which reciprocal altruism between genetically unrelated beings would be selected for, the community of game theorists began to make common cause with evolutionary biologists in the discovery of games in which the cooperative solution is a Nash equilibrium. This led in turn to the development of models of evolutionary dynamics for iterated games like cut-the-cake, ultimatum, and hawk versus dove, which show how a disposition toward equal shares, private property, and other norms among genetically unrelated beings may be selected for.
An independent line of inquiry at the intersection of psychology and game theory developed an account of emotions which suggests that they too may have been selected for in order to solve problems of credible commitment and threat in the natural selection of optimal strategies in one-shot games (Frank 1988). But in a sense, all this beautiful research remains what Gould and Lewontin (1979) characterized as a "just-so story": there was no evidence for it, and it seemed unlikely that there ever would be. This is unsurprising; after all, behaviors, dispositions, norms, and social institutions are not among the hard parts preserved in the fossil record. There is, of course, comparative ethology, neurophysiology, and neuroanatomy. But at most, these provide data from which we can reverse-engineer our way into . . . well, into just-so stories—hypotheses among which we cannot choose on the basis of independent evidence. Surely we can excuse gene sequencing from shedding light on a purely cultural phenomenon such as human cooperation. Surely it would be a species of the most puerile "genetic determinism" to suppose that such a socially significant fact about us is "in our genes." Let us leave the Darwinian reductionist's refutation of genetic determinism to the next chapter. Cooperation is almost
certainly not in our genes. But that doesn't mean they can shed no light on how it emerged. The mitochondrial DNA sequences strongly suggest that sometime at or before 144,000 years ago, there was an evolutionary bottleneck through which Homo sapiens came. This was long before the advent of agriculture, and presumably cooperation was already well established at that point. If Homo sapiens is the sole species in which substantial cooperation emerged, and if we could compare gene sequences between extant and extinct hominid lineages, then there would at least be a chance of uncovering a genetic difference that reflects this nonphenotypic difference. There are a lot of "if's" here. But even if sociality is what Dennett (1995) calls a "forced move" that was somehow written into our gene sequence and not those of extinct hominids (a tendentious assumption yet to be discussed), it is obvious from gene-sequence data that these other hominids left no representatives for us to sequence and compare. Or did they? Recall that DNA has been extracted from Neanderthal bones upwards of forty thousand years old. This work is part of a new subdivision of biological anthropology that styles itself the study of ancient DNA. Quantities of DNA to be found in burial-ground bones, around cave- and campfire detritus (and coprolites, for that matter), in fossil skulls, and so on are minuscule; proportions of the full sequence are low, and no particular portion—say, functional genes as opposed to junk DNA—is preferentially preserved. Nevertheless, the prospects of worthwhile data are not entirely unfavorable. The optimism here as elsewhere in the genetic revolution is in the power of a molecular process: PCR—the polymerase chain reaction for the amplification of DNA. This is a process employing an enzyme (a heat-stable DNA polymerase) that can catalyze the amplification (reproduction) of a single nucleotide sequence of any length into more than a billion copies in only thirty rounds of replication, the number of copies doubling each round.
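The amplification arithmetic is simple exponential doubling, sketched below under the idealizing assumption of perfect efficiency each cycle (real reactions fall somewhat short of doubling).

```python
# PCR copy arithmetic under ideal doubling per cycle.

def copies_after(cycles, starting_copies=1):
    """Copies present after a given number of ideal doubling cycles."""
    return starting_copies * 2 ** cycles

def cycles_needed(target, starting_copies=1):
    """Fewest ideal doubling cycles to reach at least `target` copies."""
    cycles, copies = 0, starting_copies
    while copies < target:
        copies *= 2
        cycles += 1
    return cycles

# A single molecule passes a million copies after 20 ideal cycles;
# thirty cycles yield 2**30, over a billion copies.
```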
This means that if a molecular biologist can extract just a single copy of a polynucleotide molecule of the DNA from any specimen, an unlimited number of copies will shortly be available for sequencing, comparing to other sequences, and functional annotation (identifying the part of a gene, if any, it codes for). Naturally, the older a specimen, the smaller the amount of DNA and the shorter the DNA molecules recoverable. Moreover, in sequencing hominid DNA, the greatest stumbling block is contamination with contemporary human DNA, which literally spews from the fingertips of the investigator running the PCR procedure. But (as yet unreplicated) claims of successful amplification and sequencing include 80-million-year-old dinosaur bones, and 130-million-year-old insects trapped in amber (Paabo 2000). There is little reason to suppose that any human activity as complex as social cooperation could have an interesting genetic basis. Even inquiry into the putative genetic basis of much more stereotypical behaviors has made little
progress. Current inquiry into the genetic basis of behavior begins with the assumption that behavioral dispositions which are statistically heritable or disproportionately represented in some genetically homogeneous groups are matters of degree and dependent on a large number of genes. For example, the search for a genetic basis of criminality or intelligence—both taken to be dispositions measurable by criminal records or performance on a test—treats the disposition as a "quantitative trait" and seeks a "locus" in the genome statistically associated with that trait in the populations who manifest it in a high degree. These QTL (quantitative trait loci) studies are both politically and scientifically controversial. (See Lewontin 1985.) Few such studies reveal even a 0.20 correlation between the quantitative trait and some region of the genome on which a detectable marker can be found. QTL studies face two scientific problems.

1. Most traits of interest are hard to operationalize, so that individuals who instantiate them to the greatest degree are hard to identify; in effect, the traits of interest are not themselves phenotypes, but at most packages of phenotypes or the result of phenotypic and environmental interaction.

2. At best, QTL studies will identify a set of loci—perhaps ten or more relatively large stretches of DNA—that are jointly highly correlated with the instantiation of a high degree of some quantitative trait in a "normal environmental range." Nothing will be revealed by such studies about the biosynthetic pathways from these genes to the actual behavior they are supposed to be the "genes for."

It is easy to see how these problems will bedevil the attempt to employ genomics as evidence to test alternative theories about how human cooperation emerged. (For an introduction to these QTL studies, see Plomin et al. 2000.)
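The statistical move at the heart of a QTL study, correlating a marker genotype with a quantitative trait score across individuals, can be sketched in miniature. The data below are invented, and deliberately give a far stronger association than real single-locus studies, which, as noted above, rarely reach even 0.20.

```python
# Toy QTL-style association: Pearson correlation between allele dosage
# at one marker locus (0, 1, or 2 copies) and a quantitative trait score.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented data: allele dosage and trait score for ten individuals.
dosage = [0, 0, 1, 1, 2, 2, 0, 1, 2, 1]
trait = [9, 11, 12, 10, 14, 15, 10, 13, 12, 11]

r = pearson_r(dosage, trait)
# Even a strong correlation like this one says nothing about any
# biosynthetic pathway from the locus to the behavior.
```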
To make matters concrete, suppose the behavioral disposition we seek to explain as an evolutionary adaptation is something as specific as "the disposition to engage in tit-for-tat strategies in iterated prisoner's dilemma games," or "the disposition to ask for 1/2 in iterated cut-the-cake games," or again, "the disposition to reject anything less than 1/2 in an ultimatum game." Now, no one supposes that any of these dispositions is a single gene-controlled phenotype like tongue-rolling. Genes just don't seem likely to code for recognition of a complex environmental condition in which an abstractly described strategy is to be employed. Even when, among infrahuman species, complex behavioral dispositions are genetically "hardwired," identifying these dispositions is not only difficult, it may also require that we first undertake genetic knockout experiments of the sort impossible among humans. Here is a striking example of this sort of thing: The male mouse is disposed to kill all mouse pups that are not its own offspring—a highly adaptive bit of environmentally conditional behavior that maximizes its genetic representation. But how could nature have programmed the male mouse with the power to make the required genealogical discriminations, given the similarity in look, smell, or other features a mouse can detect in pups? It didn't have to. Instead, nature found a quick-and-dirty substitute that does just about as well. Male mice have a genetically hardwired pup-killer disposition. But mice do not live in large colonies, and nature equips the male mouse with a further package of genes that automatically switches off the mouse's pup-killer disposition from day 18 to day 22 after its last ejaculation. This period happens to be the gestation time for female mice. So, pups the male encounters during this period have a high probability of being its own pups and have a chance to escape before the pup-killer instinct returns (Perrigo et al. 1990). For all the world, it looks like male mice execute a complicated strategy requiring considerable genealogical knowledge, when in fact the behavior is hardwired, and the gene that produces it is a down-and-dirty solution to a difficult problem. Similarly, strategic cooperative behavior will be indistinguishable from behavior generated by some much simpler genetically encoded dispositions. In particular, a gene for unconditional kin altruism will produce behavior indistinguishable from a more complicated strategy in iterated prisoner's dilemma circumstances, when all players are close kin. That there is a gene for kin altruism, or any preferential treatment of kin, or, for that matter, some down-and-dirty substitute for it (a gene for altruism toward anything that secretes a certain odor, for example) among the mammals may seem a pretty safe bet.
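The quick-and-dirty rule nature hit upon is simple enough to state as a two-line conditional. The day numbers follow the Perrigo et al. finding cited above; the function name and framing are, of course, an illustrative gloss, not anything encoded as such in the mouse.

```python
# The "down-and-dirty" substitute for genealogical recognition: the
# infanticidal disposition is simply switched off during the window in
# which encountered pups are likely the male's own (days 18-22 after
# its last ejaculation, roughly the gestation period of the female mouse).

def attacks_pup(days_since_last_ejaculation):
    """True if the hardwired pup-killing disposition is currently active."""
    return not (18 <= days_since_last_ejaculation <= 22)
```

Inside the window the likely-own pups are spared; outside it, the disposition returns. A timer, not a pedigree, does all the work.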
But if there is a "gene for" kin altruism or even any down-and-dirty available substitute for it, there is also some considerable evidence that such a gene either never figured among the genotype of primates, or that if it did, it made no significant contribution to cooperation among them. This is due to the fact that long before our last common ancestor with the chimps (about 5 million years ago), the apes had ceased to live in groups in which kin altruism would be selected for. Or at least that is what a comparative analysis of our closest primate relatives suggests. The social structure of almost all extant ape groups reflects female (and often also male) dispersal at puberty, high uncertainty of paternity (except for gibbons), an abundance of weak social ties, and a lack of strong ones. Paleontology reveals that the number of ape species underwent a sharp decline about 18 million years ago, while monkey species proliferated. If this was the result of competitive exclusion of apes toward marginal tree-limb niches, it would explain many of the anatomical similarities between apes and humans. Unlike humans, chimps and gorillas remained in these restricted niches to the present. Humans and chimps are highly individualistic, mobile across wide areas, self-reliant, and independent. By contrast, the monkey species reflect matrifocal social networks that would strongly encourage the selection of kin
altruism (Maryanski and Turner 1992). At a minimum, the pattern of sociality we and the other primates inherited from our last common ancestor makes it highly probable that cooperation among us is not written in the genes, even imperfectly, approximately, by some down-and-dirty exploitation of an already available gene for some form of kin altruism, still less by direct natural selection for the disposition to conditional or strategic cooperation. All in all, it seems more reasonable to assume that cooperative behaviors are the results of the collaboration of a number of different behavioral dispositions all simply reinforced by their environments, that is, dispositions ontogenetically selected for, though not phylogenetically selected for. If so, it would be worthwhile seeking a package of genes that produce the dispositions and capacities that are individually (nontrivially) necessary but not jointly sufficient for these sorts of cooperative behavior. (A gene is nontrivially necessary for a phenotype roughly if it is not also necessary for a large number of other traits, including respiration, metabolism, reproduction, survival, and so on.) In this scenario, a great deal of the burden of explaining the exact shape of cooperation is shouldered by the environment in which hominids must have survived for hundreds of thousands of years. And the degree to which our genomes are explanatorily relevant to cooperative dispositions will turn on whether the genes that subserve cooperative behavior were selected for owing to the fact that they make it overwhelmingly likely that one or more cooperative strategies will be hit on, and make it easy to learn these strategies from others. If they merely make it easy to discover and learn any complex adaptive behavior at all, the notion that cooperation is an evolutionary adaptation directly naturally selected for will be undercut. 
Exponents of an evolutionary account—genetic or cultural—of cooperation will favor a hypothesis according to which dispositions that specifically subserve cooperation are selected for owing to the payoff cooperation provides for fitness. Indeed, some will hold that dispositions and capacities useful for other purposes besides fostering sociality, capacities like memory, speech, and reasoning, have been selected for owing to their contribution to solving the design problem presented by opportunities to cooperate and defect. Suppose the genes for a suite of widely useful capacities such as speech, memory, and a theory of mind were all selected for because together, they made an agent's seeing and choosing the cooperative strategy a "no-brainer," an obvious move, in appropriate circumstances. We might be tempted to say that together the sequences do constitute "a gene for cooperation." How could we show this? First, we need to identify the capacities that make complex cooperative activities among people possible: general capacities such as memory, reasoning, and speech, and ones specific to cooperation, such as
the emotions of anger, shame, resentment, guilt, love, jealousy, and revenge. Gene-sequence comparisons between humans and between us and infrahuman species could shed some light on the genetic bases of these capacities that cooperation requires. However, the sorts of comparative sequence data now available are completely inadequate to test hypotheses about human/primate genetic differences and similarities. As is well known, to begin with, the sequence similarity between Homo sapiens and chimpanzees is something over 98%, and the size of the genomes is immensely greater than that of the mitochondria their cells bear. Moreover, approximately 95% of the sequences in both genomes are "junk" DNA, which does not code for any gene products and whose function, if any, is unknown. Presumably, the differences between Homo sapiens and chimps are to be found among the 5% of coding sequences, the regulatory sequences that control the expression of structural genes which are identical between humans and chimps. But where these coding sequences are across the 3 billion base pairs and how they differ are at issue. What we need is a way of analyzing this vast source of data.
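The scale of the problem can be made concrete with back-of-envelope arithmetic using the figures just cited. The percentages are the round numbers from the text, so the results are orders of magnitude, not measurements.

```python
# Why raw similarity percentages leave the real question open.

GENOME_SIZE = 3_000_000_000   # base pairs
SIMILARITY = 0.98             # "something over 98%" human/chimp identity
CODING_FRACTION = 0.05        # ~95% of both genomes is noncoding "junk"

differing_sites = GENOME_SIZE * (1 - SIMILARITY)   # ~60 million sites
coding_sites = GENOME_SIZE * CODING_FRACTION       # ~150 million sites

# Even at 98% identity, tens of millions of sites differ, and the
# percentages alone do not say which differences fall within the
# functionally relevant 5%.
```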
enter the gene chip

It is at this point that the next generation of genomic data comes into play. For even in the last five years, genomics has moved from comparisons of relatively small amounts of extranuclear DNA to the comparison of entire chromosomes, employing automated "gene-chip" or "microarray" technology. A gene chip is a small piece of glass on which a huge number of gene sequences can be arrayed. These sequences will preferentially bind to sequences that are closely similar to themselves, when a sample of such sequences is washed over the chip. Those sequences which have bound a similar sequence from the sample can be detected. If the sequences on the chip are known, it is trivial to read off the differences between genes originally placed on the chip and those of the sample. So, once we have located some or all of the genes on, say, a human chromosome, we can array sequences from these genes on a chip, wash the sample with DNA sequences from the homologous chimpanzee chromosome, and read off the sequence differences. If enough is known about how the sequences on the human chromosome realize particular genes, we can identify the presence or absence of the same genes in chimps, as well as differences in their structure, number, and location on the chromosome. If enough is known about the biosynthetic pathways into which these genes enter, we are in a position to identify the genetic bases of differences in anatomy and behavior between Homo sapiens and chimps. Such a program of research has already begun to be carried out for the human chromosome 21 and the homologous chimpanzee chromosome 22. Human chromosome 21 is the shortest and was the second to be fully sequenced (by a German-Japanese consortium). It contains only 225 genes (of which 98 are identified only through computerized gene prediction) within 33.5 million base pairs, and is of particular interest owing to its presence in an extra copy in Trisomy 21 (Down syndrome) and the role of genes it carries in Alzheimer's disease, some forms of epilepsy, autoimmune disorders, a form of manic psychosis, and deafness. Recently, a microarray comparison between the human chromosome 21 and the homologous chimp chromosome 22 has been undertaken (Bickerton 1998). What this work so far shows is that there are not just individual polynucleotide differences, but substantial genomic rearrangements—both insertions and deletions—between the two genomes; that these rearrangements account for about 50% of the total gene-sequence differences between chimps and humans on these chromosomes; that the deletions at least appear to be random in origin; and that both deletions and insertions are randomly distributed across the chromosomes (except for one 250-kilobase region). Let's apply these evidential breakthroughs to the study of the evolution of sociality. Assume that cooperative behavior does not result from a single genetically coded behavioral disposition, but rather that it is taught, learned, and culturally selected for, once it appears. However, there is a suite of hereditary phenotypic dispositions on which it depends. What will these look like? Most skeptics about "genetic determinism" will claim that these phenotypes are likely to be at most anatomical structures and in many cases mediate and immediate protein products of regulatory and structural genes at best causally necessary for the behavior, not sufficient for it, even in relatively restricted circumstances.
If these skeptics are right, genomics can do little for our inquiry, nor much for human behavioral biology, evolutionary psychology, biological anthropology, or the rest of social science. But whether they are right or whether there are gene sequences sufficient in normal environmental circumstances for complex behavior is, of course, an empirical question. Begin with the hard data that gene-sequence differences and traditional archaeology already provide: cooperation among Homo sapiens is universal, and all extant members of the species are descendants of an African population that was probably no larger than 5000 or so 144,000 years ago. A body of their descendants left Africa about 80,000 years ago, and made extinct another species of hominid, Homo erectus, which had flourished everywhere but thereafter persisted perhaps only on the island of Flores. That is, across all the ecologies in which Homo erectus had subsisted if not flourished, from the African savannah to the Central Asian steppe and the European forests, the Indian Subcontinent, Siberia, and the Far East, as well as the accessible portions of Australasia, Homo sapiens killed them off, outcompeted them, excluded them from every occupiable niche without interbreeding, even though archaeological evidence shows that the two species cohabitated in Europe at least for 30,000 years. What design problem is to be found in all these niches, and in eastern Africa as well, that a relatively small and apparently unsuccessful species solved, and another, much more populous species did not solve, sometime around or before the moment, 80,000 years ago, when Homo sapiens spread out of eastern Africa? The answer that strongly suggests itself is that the design problem was that of finding means, motive, and opportunity reciprocally to cooperate. The design problem of cooperation has several features that the scenario above suggests. First, it is a problem that obtains in all the ecologies Homo finds itself in—warm, cold, arid, wet, savannah, forest, steppe. Second, it is one that cannot be solved without those capacities that presumably distinguish Homo sapiens from Homo erectus: for example, language and imitation learning. (Here the absence of complex tools among Homo erectus right down to 18,000 years ago provides independent evidence, as we shall see.) Third, there is independent anthropological evidence that Homo did not live in matrilineages that foster extended kin altruism, but lived in solitary pairs for much of the pre-Holocene period. For Homo, cooperative opportunities would have to be reciprocal, not kin altruistic (Maryanski and Turner 1992). Fourth, as noted above, non-kin cooperation must have solved a very big design problem or, equivalently, conferred a great adaptive advantage. For it has such substantial and obvious short-term costs that it would not have long persisted without a great payoff: for example, the ability to exclude another species from every one of the varied niches in which they competed would be such a payoff.
Finally, since reciprocal cooperation among humans is presumably not coded by any genes, it must have spread horizontally, obliquely, and quickly on any evolutionary timescale. Again, a solution to a design problem that can spread faster than any gene is just what we need to explain the rapid (50,000-year) spread of Homo sapiens populations into already occupied niches across the whole world. Thus, we may treat as a general hypothesis the claim that the success of Homo sapiens is owing to its having hit upon the solution to the problem of attaining cooperative equilibria in some social interactions; and we may treat the models of evolutionary game theory as specifications of the particular structures that arrived at these equilibria. These are admittedly highly speculative hypotheses. Let us turn to the matter of how gene-sequence data might test the general hypothesis and its specifications. Consider some of the capacities, dispositions, and traits that are required for one or another of the models of evolutionary game theory to obtain as the actual
course of cultural evolution of cooperation. Among them are, at least, emotion, from jealousy and love to shame and guilt; reliable memories about other agents playing iterated games, the strategies these agents employed, and the game payoffs; a theory of (other) mind(s), or at least of goal-directed behavior; language, in which to exchange ex ante plans and ex post analysis of cooperative activities; and imitation learning—if, as assumed, cooperation spreads by horizontal and oblique as well as vertical cultural transmission. Imitation learning is indispensable to models of the evolution of cooperation such as Skyrms's (2004). For players must detect and successfully imitate the strategies of other players, including players with whom one does not interact at all. So, in testing Skyrms's model or any other that depends on imitation learning, the question arises: how long ago did imitation learning emerge, and did it do so as a response to the environment's putting a premium on cooperation or on some other design-problem solution? One clue that the sort of imitation required for learning appears rather late in hominid evolution is that the earliest date at which composite tools—axes with handles, for example—emerged was 250,000 years ago. For over a million years before that time, hominids were using roughly the same stone axes; their means of manufacture appears to have been repeatedly and independently discovered, shows no cumulative improvement, and appears not to have been widely transmitted. (The recently discovered Homo floresiensis, a putative member of the Homo erectus lineage surviving as late as 18,000 years ago, does not seem to have used composite tools.) It's safe to assume that what was lacking was a suite of cognitive and communication capacities needed to preserve and disseminate technological discoveries.
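The contrast drawn above between genetic (vertical) and cultural (horizontal and oblique) transmission can be made concrete with a toy spread model. This is purely an illustrative sketch: the initial frequency, contact rates, and payoff advantage below are invented parameters, not estimates from the archaeological or genetic record.

```python
# Illustrative sketch: why a culturally transmitted solution can spread
# faster than a genetically transmitted one. Vertical (parent-offspring)
# transmission allows roughly one transmission event per generation;
# horizontal and oblique imitation allow many. All values hypothetical.

def spread(initial_freq, events_per_generation, advantage, generations):
    """Logistic-style spread of a trait through a population."""
    freq = initial_freq
    for _ in range(generations):
        # non-carriers adopt in proportion to contact with carriers
        # and the trait's payoff advantage
        freq = min(1.0, freq + events_per_generation * advantage * freq * (1 - freq))
    return freq

GENERATIONS = 100  # on the order of 2,500 years of human generations
vertical = spread(0.01, events_per_generation=1, advantage=0.05, generations=GENERATIONS)
horizontal = spread(0.01, events_per_generation=20, advantage=0.05, generations=GENERATIONS)

print(f"vertical-only frequency after {GENERATIONS} generations: {vertical:.2f}")
print(f"with horizontal imitation:                              {horizontal:.2f}")
```

With these (arbitrary) numbers, the vertically transmitted trait is still climbing toward fixation after a hundred generations, while the horizontally transmitted one saturates the population within a handful of them—the qualitative asymmetry the argument turns on.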
Before 250,000 years ago, there is no archaeological record of Homo sapiens employing many of the obvious materials from which tools can be made—such as bones and antlers. If, as seems reasonable, the spread of such discoveries requires imitation learning, then we cannot date its emergence much earlier than 250,000 years ago, nor suppose cooperation of the sort that requires it to have appeared earlier. Consider how selection might have acted on an organism that was capable of a moderate degree of imitation learning. The study of imitation in other primates, especially chimpanzees, is helpful here. What we find is that these apes are relatively poor imitators. For example, when the bonobo Kanzi was instructed on how to fashion stone tools, he eventually, with considerable coaching, managed to produce a stone flake sharp enough to cut the string on a box containing a food reward (Mithen 1996). However, this achievement was dwarfed by the number of details Kanzi failed to glean from his lessons. Kanzi never learned how to discriminate between different striking platforms on a stone to choose one that would produce the most effective cutting surface—his
strategy was more to bang stones together until the flakes broke away. Nor did Kanzi manage to control his force in percussion (and this was not due to a lack of dexterity on Kanzi's part, either). It seems prudent to assume that our first tool-fashioning ancestors were not much better than Kanzi at learning the subtleties of technique by watching one another. The basic idea of tool use might have gotten across, but many of the details were lost. This has been proposed as an explanation for why there were virtually no technological advances made to the hand axe for a million years: it is not that innovations were never hit upon; the problem was that they failed to be passed along. Suppose we can locate the suite of genes that subserve our imitation-learning abilities; moreover, suppose we can locate their homologues in chimps and, with a lot of luck, in Neanderthals, in ancient Homo sapiens remains in eastern Africa before and after the evolutionary bottleneck period, and in the newly discovered Homo erectus from Flores. We can then make sequence comparisons between them and, perhaps even more important, between their introns, promoters, and local noncoding regions. From the differences in molecular events we consequently find, we may be able to theorize about what mutations, translocations, duplications, and so on produced these differences in capacities, and perhaps order these molecular events and even date the emergence of the capacities to learn and teach by imitation that they serve. We have assumed that non-kin cooperation really does solve a serious design problem, that the Homo sapiens lineage solved it when competing species did not, and that the solution continued to be selected for in all environments humans came to occupy.
If the assumption is correct, there would have been very strong stabilizing selection among the genes that carry traits needed for cooperation, and probably more DNA-sequence stability and less drift than one might find in other genes, in other sequences, and in homologous sequences among other species. But suppose that, as seems overwhelmingly likely, the number of genes, and the interactions among them, that are necessary for the capacity to learn by imitation, or for the dispositions that underwrite cooperation and that its evolution turns on, is astronomically large. In that case, how can gene sequencing shed light on the actual course of the evolution of cooperation? This question, of course, presumes a negative answer to a prior question: are the genes that subserve capacities needed for the evolution and persistence of cooperation large in number and complex in interaction? Consider linguistic ability and the ability to attribute strategies to conspecifics (part of a theory of other minds). These two traits are arguably necessary for several of the models of the evolution of cooperation we might seek to test. Although these are almost certainly polygenic traits, there are well-known genetic defects associated with human incapacities in speech and strategic interaction, and gene-sequence differences between
us and our closest relatives at these loci. High-functioning autism and Asperger's syndrome prevent normal cooperative behavior, are associated with anatomical and neurological abnormalities in the brain, and (in the case of autism at least) have a substantial hereditary component. There is reason to suppose that autism results from the interactive effects of at least three microrearrangements in genes, some of which produce a serotonin transporter. These genes are probably located on chromosomes 7 and 15, and they are implicated in other rare, genetically caused forms of retardation. We know that normal children develop a "theory of mind," the attribution of intentional states to others, between the ages of two and four; and there has been some empirical investigation and a good deal of debate about whether the primates show a similar capacity. If the capacity to treat others as having intentional states is one lost in autism (Klin et al. 2000), then we are on the way to locating the genes that are at least nontrivially necessary for the capacity in humans. For another example, it has recently been shown that certain significant defects in speech assort in genetically familial patterns, and a technique of genetic localization known as positional cloning has enabled geneticists to locate the particular genes responsible for the defect. Mutatis mutandis, they have located some of the genes whose normal function is necessary for normal speech (Lai et al. 2001). Almost immediately, it occurred to the researchers making the discovery of the "gene for" the hereditary speech disorder in question that genomic comparisons to chimps could reveal important information about the evolution of language competence, a vital necessity for the emergence of complex cooperative capacities.
We know that chimps and gorillas have shown substantial communicative behavior in captivity, and ethological study of vervet monkeys continues to increase our knowledge of their lexicon well beyond the well-known calls for eagle, leopard, and snake. What infrahumans appear to lack is syntactic skill, and that this skill is genetically hardwired in us is suggested not just by Chomsky's speculations but by Derek Bickerton's studies of the transition from pidgin languages to creoles (Bickerton 1998). Consider the idea that cooperation emerges earliest and differentially among hominid females. Suppose, for example, that reciprocal non-kin cooperation gets its start among females owing to the prior selection for dispositions and capacities which subserve kin altruism. Selection for such dispositions and capacities among females, combined with neutrality or selection against them among males over a long-enough period, may be reflected in X-chromosome loci or even dose-dependent expression of genes on X chromosomes. Such differences may support two quite different scenarios for the evolution of cooperation among males and females, which accord different roles to genetically transmitted and culturally transmitted traits in the emergence of cooperation in
males and in females. And these scenarios might leave traces in gene-sequence differences. The crucial thing to see is that we don't need to assume the existence of such "genes for ___" to show the relevance of DNA-sequence data to testing the models in question. All we need is to identify loci that covary with these traits and their absence in other species or in Homo sapiens before the putative date cooperation emerged. And the technology to identify these sequences will soon exist. How soon depends in part on the rapidity with which gene-chip technology is improving and its costs are declining. Gene-chip or microarray technology enables the molecular biologist to identify and locate large numbers of DNA sequences whose expression subserves any particular somatic cellular activity, and to correlate these sets with human disabilities and incapacities as well as with differences between normal humans and our nearest extant relatives, chimpanzees. And this can be done without knowing exactly which genes subserve which capacities, how many do so, or how they do so. What is more, with good luck, this technology may enable us to pinpoint differences between DNA sequences known to have a significant role in human capacities and homologous sequences in Neanderthal, ancient Homo sapiens, recent Homo erectus, or even older genetic material. The gene chip, applied to gene expression in heritable human behavioral deficits and to chimpanzee brain function, enables us to begin to identify sequences which covary with (and so are presumptively distinctively necessary for) the sort of complex behavior that constitutes social cooperation in normal environmental circumstances.
These will be sequences differentially expressed in the brain compared with other organs; they will be ones in which there is less intraspecies sequence variation, owing to the pressure of selection for a function both specially restricted to humans and under tighter selection than the homologous sequence among comparison species. The sequence comparison will have to be three-, four-, or even five-way, including gene expression in the normally functioning brain, in hereditarily malfunctioning brains, in chimp brains, and in ancient DNA from Neanderthal and other Homo erectus remains. Begin by using a microarray to identify the chromosomal locations of genesequence differences between the normal humans and the large range of humans with hereditary neurological malfunctions. Given the location on the normal chromosome of these differences, use the same gene-chip method to establish chromosomal locations of the homologous sequences, if any, in chimps. If one of these sequences is quite similar in size, copy number, relative location, and so on, assume that it is not among those correlated with a distinctive human behavioral disposition which chimps lack. If the gene sequence is absent
or different in number, location, introns, and so on in the chimp, then it is a candidate for being interestingly necessary for distinctive human dispositions. It will take a very long time to identify all the gene sequences nontrivially necessary for complex cooperative behavior, and to learn the functions of the genes they are parts of. But it will not take as long simply to provide a list of locations, alternate sequences, introns, and copy numbers for these sequences without details about their biosynthetic products and, ultimately, their combined behavioral consequences. And computational genomics will soon be able to provide hypotheses about the most likely macromolecular scenarios of how linkages, crossover events, mutations, gene duplications and translocations, and other events produced these nucleotide sequences from the common ancestor of humans and chimps. These genetic differences hold the key to our distinctive capacities and dispositions. For they either were selected for, or were along for the ride with what in the genome was selected for in the differential adaptation of the primate species. Despite the proportionately tiny nucleotide differences between us and the chimps and gorillas, these apes remain relatively unsuccessful species, still restricted to narrow and endangered niches geographically close to the one we started out in, while we bestride the globe. And then there will be the sequence differences between us and the various Homo erectus populations, from the Neanderthals to Flores, that we displaced everywhere without interbreeding. It is hard not to conclude that the sequences in which we differ from our common ancestors, other primates, and other members of the genus Homo must have been subject to selection (selection for or selection of, in Sober's [1984] phrase) in the environments we shared with them.
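The comparative screen just described (rule out a human locus if the chimp genome carries a closely similar homologue; keep it as a candidate if the homologue is absent or differs in size, copy number, or location) can be sketched as a simple decision procedure. The data structures, the locus names, and the 10 percent similarity tolerance below are all hypothetical placeholders; real microarray analysis involves normalization and statistical testing omitted here.

```python
# A sketch of the comparative screening logic described in the text.
# Locus records and thresholds are invented for illustration.

def is_candidate(human_locus, chimp_loci, tolerance=0.1):
    """Flag a human locus (one differing between normal and affected
    humans) as a candidate for being nontrivially necessary for a
    distinctively human disposition: rule it out if the chimp genome
    has a closely similar homologue; keep it if the homologue is
    absent or differs in size, copy number, or chromosomal region."""
    homologue = chimp_loci.get(human_locus["name"])
    if homologue is None:
        return True  # absent in chimp: keep as a candidate
    similar_size = abs(homologue["size"] - human_locus["size"]) / human_locus["size"] <= tolerance
    same_copies = homologue["copies"] == human_locus["copies"]
    same_region = homologue["region"] == human_locus["region"]
    # a closely similar homologue is presumably not what distinguishes us
    return not (similar_size and same_copies and same_region)

chimp = {"FOXP2-like": {"size": 2000, "copies": 1, "region": "7q"}}
print(is_candidate({"name": "FOXP2-like", "size": 2005, "copies": 1, "region": "7q"}, chimp))  # similar -> False
print(is_candidate({"name": "NOVEL-1", "size": 1500, "copies": 3, "region": "15q"}, chimp))    # absent -> True
```

The "FOXP2-like" and "NOVEL-1" labels are invented for the example (the former loosely inspired by the speech-gene work of Lai et al. cited above), not claims about the actual loci.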
Once the list of locations and sequences for genes without a known function but nevertheless implicated in distinctively human behavior is given, the methods employed to date and order mitochondrial and Y-chromosome sequences can be employed to give the order of emergence and perhaps even the ages of these genes. Already, the comparison of human chromosome 21 and the homologous chimp chromosome 22 provides evidence that the genetic differences include rearrangements and duplications; and thus there is reason to think that within homologous sequences, there will be sufficiently many single-nucleotide polymorphisms—neutral point mutations—to provide a molecular clock to date the emergence of each of the distinctively human genes these sequences are part of or lie near on the chromosome. This dating can proceed even before we know much more about the genes than that they produce a protein which functions in the brain cells. Of course, once we have identified the proteins, we will be able to locate the DNA sequences that code for them. Then
the homologous sequences, if any, which in other species are entirely missing or diverge beyond random point mutations, may tell us even more about the genes that figure in the production of these proteins. What will the gene-sequence chronology alone so established show us? It depends on what the chronology looks like. Consider some of the alternatives: the sequence differences enable us to order the chronology in which genes interestingly necessary for distinctively human dispositions emerge. It may show that they all emerge at roughly the same date, that different subsets emerge together, or that each emerged at a different time and in no order to which adaptive significance can be attached. Suppose all of the capacities that subserve cooperation can be dated to as far back as the time Homo sapiens began to make composite tools, say, 250,000 years ago. That would disconfirm any scenario that made solving the design problem of cooperation the occasion for our dispersal from eastern Africa or the competitive advantage which led us to extinguish Homo erectus. Nevertheless, if many of the capacities required—for language, memory, cognitive emotions, and a theory of other minds—did emerge together with imitation-learning capacities and demonstrated rapid spread and minimal drift (shown by little sequence variation at nonfunctional and functional sites, respectively), this would suggest that cooperation emerged much earlier than 80,000 years ago. On the other hand, suppose imitation-learning abilities date to 250,000 years ago, but some other capacity, such as language, appeared much later, spreading rapidly and without sequence drift. We might then infer that game-theory models which rely crucially on learning and speech are more strongly confirmed by the gene-sequence data than other models are.
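The molecular-clock reasoning invoked above reduces to a back-of-the-envelope calculation: if neutral differences accumulate independently in two diverging lineages at a roughly constant rate, the per-site divergence d between homologous sequences satisfies d = 2μt, where μ is the per-site, per-year substitution rate and t the time since the split. The counts and rate below are invented for illustration, not measured values.

```python
# Toy molecular-clock calculation of the kind gestured at in the text.
# Divergence time is estimated from the density of neutral differences
# between two homologous sequences; all numbers are illustrative.

def divergence_time(neutral_diffs, sites, mu_per_site_per_year):
    """Time since two lineages split, assuming neutral differences
    accumulate independently on both branches: d = 2 * mu * t."""
    d = neutral_diffs / sites  # per-site divergence
    return d / (2 * mu_per_site_per_year)

# e.g. 12 neutral differences over 10,000 comparable sites, with an
# assumed rate of 1e-9 substitutions per site per year
t = divergence_time(12, 10_000, 1e-9)
print(f"estimated divergence: {t:,.0f} years")
```

Real applications must correct for multiple substitutions at the same site, rate variation across lineages, and ancestral polymorphism, none of which this sketch attempts.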
Similarly, suppose it turned out that dispositions to have those emotions which we do not share with infrahumans—the desire for revenge, say—emerged together with speech, memory, and a theory of other minds. Then models like Sober and Wilson's (1996) and Fehr and colleagues' (2003) that rely on reputation, secondary reinforcement, strong altruism, or other forms of enforcement that involve commitment problems would be substantiated. And the sudden spread of sequences subserving such emotions, along with the concerted spread of memory capacities, would confirm the importance of iterated play in the emergence of cooperation. By contrast, models like Axelrod's (1984) tit-for-tat make far weaker demands on players' linguistic capacities than, say, Skyrms's stag hunt. If the former models the evolution of cooperation more realistically than the latter, we may expect the capacities it requires to emerge and spread together earlier than language. And these two scenarios should have left different marks in the DNA sequences of Homo sapiens. Of course, if these capacities subserving cooperation appear to have emerged
in various chronological orders among several lineages of Homo sapiens coming out of eastern Africa, and if their spread was slow and they showed average amounts of sequence drift, then doubt would be cast on all models from evolutionary game theory which treat cooperation as a culturally evolved outcome that solved a common design problem faced by our ancestors and solved by them better than by those hominids with whom they competed. This sort of DNA-sequence data would leave us with nothing more than a just-so story, a how-possible account, of why modern human beings are so prone to cooperation. It is, of course, not for philosophers to speculate how this research, once commenced, will eventuate. The speculations here offered may be overtaken by events tomorrow, just as they were encouraged by front-page news (about Homo floresiensis) reported while I was writing this book. Good luck in recovering bone material, and technological breakthroughs in genetic reconstruction, amplification, and sequencing, may provide imaginative scientists with tools to examine new evidence in entirely new ways that test these evolutionary game-theory models. Cleverness in applying known tools and known evidence may enable scientists to do this as well. It will suffice for our purposes if we have shown that gene sequencing at least holds out the best hope of combining with traditional archaeology and anthropology to answer questions about the evolution of human cooperation and the relevance to it of the theoretically beautiful results of evolutionary game theory.
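To make concrete the kind of demands the game-theoretic models discussed above place on players, here is a minimal iterated prisoner's dilemma with Axelrod-style tit-for-tat. The point of the sketch is that tit-for-tat needs only one-move memory of a partner's last play: no language, no theory of mind. The payoff numbers are the conventional ones from the game-theory literature, not anything drawn from this chapter.

```python
# Minimal iterated prisoner's dilemma with tit-for-tat. Payoffs follow
# the conventional T > R > P > S ordering (5 > 3 > 1 > 0).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then simply copy the partner's last move
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    moves_a, moves_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(moves_b)  # each strategy sees only the other's past moves
        b = strat_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

# two tit-for-tat players lock into mutual cooperation
print(play(tit_for_tat, tit_for_tat))      # -> (30, 30)
# and tit-for-tat loses only the first round to a pure defector
print(play(tit_for_tat, always_defect))    # -> (9, 14)
```

Comparing this with Skyrms's stag hunt, where signaling and imitation of successful strategies carry the model, illustrates the chapter's point that different models presuppose different suites of capacities and so predict different genomic chronologies.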
8
• • • • • • • •
How Darwinian Reductionism Refutes Genetic Determinism

Throughout this book, I have more than once noted that molecular biology is a much more general enterprise than molecular genetics and not to be equated with it. For that matter, macromolecular reductionism is not the thesis that all of biology must be grounded in explanations provided by the biochemistry and evolutionary history of the nucleic acids. Accordingly, the reductionist's claim that all of biology must be grounded in explanations provided by molecular biology is not to be saddled with the objection that it embraces a factually false and morally problematical genetic determinism. In this chapter, I want to go further and show that Darwinian reductionism has been and will hereafter be the source of genetic determinism's most compelling refutation.
what exactly is genetic determinism?

Genetic determinism promotes the morally problematical claim that socially significant traits, traits we care about, such as gender roles, violence, mental illness, and intelligence, are fixed by the genes and not much alterable by environment, learning, or other human intervention. This view is morally problematical because it encourages complacency about inequalities in opportunity and in outcome, as well as discrimination in social, economic, and political institutions. Genetic determinism unwarrantably invokes genocentrism far beyond the genes' role in the explanation of embryological development and somatic cell regulation. And it is viewed at least by some as sustained and
encouraged by the successes of the reductionist research program of molecular genetics. Reductionism, I have said, is not genocentrism. Still, this distinction between molecular biology’s research program and that of molecular genetics will strike some readers as rather “precious,” especially in light of molecular genetics’ technological role in advancing all the other research frontiers of molecular biology. If there really is such a clear distinction here, why did the heart of the biological argument for reductionism come in chapter 2’s exposition and 3’s defense of the unique role of the genes in development? Moreover, at more than one point throughout this book I have identified opposition to a morally repugnant genetic determinism as among the chief motives of those who deny the claims here defended. Surely, if I hold these writers are wrong about reductionism, I might be regarded as skeptical about the claim that motivates their disagreement with me as well. We may grant that formally speaking, such a conclusion would be an instance of the genetic fallacy, this time moving from (the falsity of) a conclusion to the (falsity of the) beliefs which motivated it (instead of the usual fallacious reasoning from motive to conclusion). Yet, the reader who has come this far will wonder, is there no payoff from the vindication of reductionism for the assessment of genetic determinism, some insight about its scope and limits? I did say, in chapter 2, that if genes can program embryological development, then for them programming the regulation of somatic cells would be child’s play. Of course regulation is a far cry from determination. But surely reductionism holds that everywhere and always the genetic program puts constraints on subcellular behavior and its supracellular consequences. If the constraints are narrow enough, molecular reductionism may be supposed to substantiate some version of genetic determinism. 
But quite the reverse is the case: there are implications for the scope and limits of genetic determinism to be drawn at least from the centrality of the genetic program to development and the regulation of the molecular biology of the somatic cell. Darwinian reductionism does have a significant consequence for any interesting version of the thesis of genetic determinism. That consequence is nothing less than its refutation as an empirical falsehood. This should be enough to satisfy those who reject genetic determinism, and perhaps provide them with some motive for accepting this version of reductionism after all. To begin with, we need to state a version of genetic determinism worth taking seriously enough to bother refuting. As the label for an unqualified and very global thesis that genes by themselves fix socially significant traits, "genetic determinism" names a thesis no informed scientist seems ever to have held. Everyone even minimally qualified to participate in a debate about this subject knows perfectly well that genes by themselves produce nothing, not so much
as an mRNA. It is only genes plus their cellular and extracellular environments that have any outcome whatsoever. If it is going to be refuted, genetic determinism must be stated so as to accommodate the causal role of the environment, along with the genes, in bringing about phenotypic traits. On one or another understanding of environment, a variety of different theses may be formulated and criticized as more or less egregious versions of genetic determinism. Genetic determinist theses come in various strengths, where strength is measured by the range of environmental variations to which the effects of the gene are alleged to be insensitive. The stronger versions have been advanced, mainly in pseudoscience and pop science, and have been discredited by revealing the scientific misunderstandings on which they rest, as well as the mischievous political, social, racial, and gender interests which they serve. The strongest version of genetic determinism is the claim that the genes or some particular gene gives rise to a particular trait no matter what environment the gene finds itself in; or, in the argot of the debate, the gene has the same phenotypic expression “across the entire norm of reaction.” As noted, this version of the thesis is one no participant in the discussion could hold, as it denies the environment any role in the formation of phenotypic traits. A version of genetic determinism worth arguing about is one that (1) does not narrow the range of environments much beyond what might be called “normal” or “standard”; (2) holds that the gene is tightly correlated with some trait of social significance; and (3) infers that therefore the trait is largely determined by the gene. So understood, genetic determinism will turn out to be a thesis about a particular trait, a particular gene, and a particular environment. 
At the present stage of inquiry for most of the traits that make genetic determinism a scary claim—alcoholism, violence, schizophrenia, risk taking, homosexuality, gender roles, IQ—the thesis will be very short of empirical support. For not only will its advocates be unable to identify the alleged determining gene independent of the trait it is supposed to determine, but it will be equally difficult to identify the environmental conditions that are "normal" or "standard." Of course, as we have seen, the demand that the gene be identified independently of its effects is in general too stringent to be insisted upon at the outset of inquiry. On the other hand, one fairly intuitive idea of what is meant by the terms normal or standard is an environment not too far different from the one in which the phenotype was selected for. But "not too far different" is another weasel phrase. Perhaps we can make the notion of "normal" or "standard" a little more precise if we define it as denoting an environment in which the trait, and the other traits it requires in order to be manifest, is not obviously selected against (see Griffiths and Gray 1994; Kitcher and Sterelny 1988). So, for a trivial example, genetic determinism about any of our traits will require that the environment be one in which sufficient oxygen for respiration is present. But
genetic determinism about most of our traits can't require an environment with a gravitational field no weaker than the earth's. This account of the environment will only work for the genetic determination of adaptive traits. We need an account that will allow us to consider whether maladaptive traits are equally genetically determined. To say the polar bear has a gene for white fur of course assumes that its environment includes light of the sun's typical wavelengths, but it doesn't include the local landscape being snowy, even though it was in this landscape that the gene was selected for. A polar bear that wanders into the north temperate zone's forests owing to the paucity of seals in the Arctic still has the gene for white fur, even though forests are seriously fitness-reducing environments for any white-coated animal. Indeed, polar bear lineages trapped in nonpolar environments will quickly become extinct. But even here, the zoo provides an exception: it is a nonpolar environment in which extinction is not threatened and the polar bear still has the gene for white fur. So, genes for traits aren't just the ones that result in adaptations; they also include the ones whose phenotypic expression is maladaptive or even lethal in a roughly normal environment. The moral of the story for a version of genetic determinism worth disputing is clear. When a human gene can be said by itself to determine a trait, adaptive or maladaptive, the features of the environment that are also required to produce the phenotypic trait must be the "normal" ones in which human genes, both adaptive and maladaptive, are expressed. Normality is a concept that needs to be cashed out to understand both functional and dysfunctional traits. And as philosophers have learned, both in the study of biology and in medical ethics, "normality" is a vexed concept! This is where reductionism can contribute something important to the debate.
At least in molecular biology, and perhaps only in molecular biology, do we have a prayer of characterizing normality with both precision and any pretense at completeness. For purposes of molecular genetics, the normal environment of any structural gene, at least, is going to be the minimal molecular milieu needed by that structural gene to be transcribed and translated into its (pre)protein product. Against this background of normality, it seems safe to say there is a gene for hemoglobin, or for pro-insulin, and, perhaps a little more speculatively, for insulin itself, assuming that the required posttranslational machinery for making pro-insulin into insulin can be treated as part of the environment of the gene for insulin.
are there genes for traits?

If the molecular biologist's identification of a "gene for ___" is coherent and legitimate, then is there some basis for a minimal or default version of genetic
determinism? There had better be, if the opponents of genetic determinism are to have a real target and not a stalking horse or straw man. Now, is there also a product further down the causal chain from the enzyme or protein that the molecular biologist must identify a gene for? This would warrant a stronger version of genetic determinism. Most biologists and philosophers of biology, and certainly the antireductionists among them, have supposed that the answer to this question must be yes. For there do seem to be a number of traits that in molecular biology's normal extra- and intracellular environments are tightly correlated with nothing except a particular gene. Most of these traits will be debilitating disorders, so-called inborn errors of metabolism, of which there are several hundred known. Few, however, are as well known as phenylketonuria (PKU), a form of mental retardation caused by the body's inability to metabolize phenylalanine owing to a defect in the gene for the phenylalanine hydroxylase enzyme. It is a widely held view that PKU is a genetically determined syndrome which can easily be treated by environmental modification, and whose management, when untreated, is costly to publicly provided health care. For this reason, neonatal testing for it is mandatory across the developed world. If ever there were a case of genetic determinism, PKU is one. And the successful identification of its cause in a particular gene defect seems to vindicate at least a narrow thesis of genetic determinism. There is an increasing number of such inborn errors of metabolism for which the culprit gene is being uncovered. Their genetic origins, together with their insensitivity to a wide range of environmental variations, suggest that genetic determinism is a perfectly acceptable name for a well-established scientific hypothesis about the inborn errors of metabolism.
As we learn about more and more of these traits that assort in Mendelian patterns and for which we can find a chromosomally located genetic basis, there will be increasing scope for the locution "gene(s) for ___." Or so a widely shared interpretation of recent medical developments encourages us to think. As a claim about inborn errors of metabolism or genetic defects of a similar sort, genetic determinism does not appear to be a controversial, still less a morally dangerous thesis. But its opponents will insist that appearances are deceptive. For we are on a slippery slope from the claim that there are genes for the metabolism of phenylalanine to the claim that there are genes for IQ. And this is a slope the opponent of genetic determinism claims it is hard to avoid sliding down. To avoid doing so, we need to recognize that the whole idea of "gene for ___" rests on unwarrantably apportioning causal responsibility for phenotype between environment and gene, and then rests on the further error of thinking we can intelligibly identify cases in which the predominance of the causal responsibility belongs to the gene. So, the mistake on this view is to hold that
How Darwinian Reductionism Refutes Genetic Determinism
though both environment and gene causally interact to produce the phenotype, in some cases (such as the uncontroversial inborn errors of metabolism), the genes causally contribute more to the outcome than the environment. Once this claim is allowed, the objection runs, the rest of the argument is a mere haggling over details about how thoroughgoingly genetic determinism obtains. But, the argument against genetic determinism continues, there is a trivial conceptual mistake made in drawing the apparently innocent conclusion about the preponderance of genetic responsibility for these traits. And it is the logical mistake that the morally dangerous version of genetic determinism requires. Lewontin (1974, p. 404) has made the argument with great effect and graphic simplicity. When two bricklayers build a wall, each mixing the mortar, troweling it on, placing the bricks, and so on, it makes sense to ask which one builds more of the wall, for the answer will depend on separable facts, such as each bricklayer’s rate of bricklaying and the amount of time each spent on the job. But suppose one bricklayer mixes the mortar, brings the bricks to the wall, cleans off the excess mortar when it oozes out between the bricks as they are laid, and ensures the courses are horizontal to the ground, while the other bricklayer lays, breaks, and taps the bricks with careful attention to the design they make in the wall, and ensures that the top layer is orthogonal to the lower courses. Here the question of who built more of the wall makes no sense, or if it does make sense, there is no ascertainable right answer to the question. Almost all philosophers of biology who have written on this subject (save Sesardic [2005]) concur with Lewontin that the genes and the environment work together like these latter two bricklayers, and not like the former. 
So, separating out their causal contributions makes no sense, or has no ascertainable answer, even presumably in the case of inborn errors of metabolism. It should be clear that the account of the role of the genes in development elaborated in chapters 1 and 2 suggests an analogy quite different from either Lewontin’s division-of-labor model or the duplication-of-labor model he contrasts it with. In light of those chapters, a better (though still imperfect) analogy is that of the journeymen and the master bricklayer: the latter directs all aspects of the project, while the former do the actual building, following the master builder’s instructions. It’s the master builder who takes credit or blame for the resulting wall; everyone else was just following orders. Unless, of course, the wall was built on ground unsuitable for brick walls, in which case some other supervisory authority takes the blame. And, of course, if there is a general contractor who chose the master bricklayer, or supplied the bricks without the master bricklayer’s consent and told him how to design the brickwork pattern, then perhaps the praise or blame accrues to this person. In our story, the bricks and the ground the wall is built on take the role of the environment, and the
genes play all the other roles. We can continue to elaborate the analogy until it captures the roles of structural genes—single and repeated genes, regulatory genes, RNA genes, histone genes, housekeeping genes, repeated sequences, and so on. But it is only an analogy, and its purpose is to highlight just one point in the account of the genes’ role in development. They are parts of a structured program that builds the embryo by directing the synthesis of its components and their integration. The fact that, unlike the bricklayers, the genes do not act from original intentionality is irrelevant. To see this we could just as well have had the wall built by the sort of robots that nowadays build cars. In that case, responsibility for the wall comes via the program from the programmers. The upshot is clear—that is, it is clear if one buys the account of the genes as programming the embryo and, for that matter, as programming the physiology of the somatic cell, in which somatic genes regulate the concentrations of macromolecules that result in subcellular and supracellular movements. It does indeed make sense to say there are genes for some traits. And therefore it seems at least some minimal genetic determinism is in the offing. So, the task of sophisticated opponents of the thesis must be to show why the admission is no first step down a slippery slope to the full-blown and morally unacceptable genetic determinism of socially significant traits. This, at any rate, seems to be the problem faced by its detractors (see Kitcher 2003). But the anxiety actually rests on a badly false premise. Accepting that there are inborn errors of metabolism, and hence genes for traits, is not the first nor even any step down the slippery slope to any even innocent genetic determinism—still less the racist, sexist, or other nefarious conclusions we want at all costs to avoid. 
In fact, a brief history of the molecular biology of PKU shows that Darwinian reductionism is incompatible with even innocent genetic determinism.
is there a gene for pku?
PKU's most significant symptom is mental retardation caused by the buildup of unmetabolized phenylalanine in the body. First identified in 1934 by a Norwegian physician and once eponymously known as Følling's disease, it was very quickly recognized to be an inherited disorder, roughly following a Mendelian pattern of autosomal recessive transmission manifested by a fairly uniform phenotype. Though investigation of PKU revealed that there is a gene for phenylalanine metabolism in the normal case, it has long been cited to undercut genetic determinism. The reason is that the central features of PKU, which have made it a poster child for opponents of genetic determinism, are the "facts" that, though its cause is wholly genetic, the symptoms can be ameliorated by a simple environmental manipulation: removal of phenylalanine from the diet. Thus, children born with PKU, otherwise condemned to a life of severe mental
retardation by their genes, can live normal lives with normal mental functioning. PKU thus serves as an example of how we can reconcile the existence of genes for traits with the denial of genetic determinism. Kitcher provides a good example of this strategy:

To see how simple inferences [from genetic causation to genetic determinism] go awry, consider a well-known case. Most new parents learn about a disease that occasionally afflicts human beings, producing severe developmental problems if it is allowed to run its course. Babies are routinely tested for PKU. There is an allele that, on a common genetic background, makes a critical difference to the development of the infant in the normal environments encountered by our species. Fortunately, we can modify the environments. The developmental abnormalities result from an inability to metabolize a particular amino acid (phenylalanine), and infants can grow to full health and physical vigor if they are kept on a diet that does not contain this amino acid. So it is true that there is a "gene for PKU." Happily, it is false that the developmental pattern associated with this gene in typical environments is unalterable by changing the environment. (Kitcher 1985, p. 128)

Is PKU really such a simple case of one gene and two alternative phenotypes (mental retardation or normal intelligence), depending on the presence or absence of phenylalanine in the diet? Is it really a case of genetic causation that we can reconcile with environmental amelioration to refute genetic determinism? The reductionist research program of molecular biology in fact showed that even the poster child for the uncontroversial case of a gene for a socially significant trait is not one after all. Matters are far more complicated; so much so that there is no slippery slope from "the gene for PKU" to "the gene for ___" (insert your favorite bugbear). This will be no surprise for Darwinian reductionism, of course.
For it recognizes that just as nature is blind to the structural diversity of adaptations, it is also blind to the structural heterogeneity of maladaptations. Keeping this principle in mind not only forewarns us about the complications surrounding PKU, it also reflects molecular biology's ability to explain them. First of all, gene sequencing shows that the disorder contemporary medicine calls PKU is related to one of over four hundred mutations and alleles in the gene for phenylalanine hydroxylase so far described. Moreover, the correlation between genotype and behavioral phenotype in this case has been shown to be problematical. The authors of a study of the disorder published in the Annual Review of Genetics conclude,

Should there be any correlation between genotype and the PKU phenotype when one is derived by reductive analysis and the other reflects integrated physiologic events? Robustness of correlation ought to decline with distance between genotype and level of phenotype. Indeed, correlations are good at the enzyme level, good-to-fair at the metabolic level, and can defy prediction at the cognitive level. Accordingly, search for factors that distort prediction of cognitive development is to be anticipated. A group, which earlier had described a relationship between IQ score in untreated PKU patients and a set of metabolic parameters that reflect transamination of phenylalanine, reported on metabolites in blood and urine in 61 patients with PKU. Levels of the transamination derivatives (phenylpyruvate, phenyllactate, and o-hydroxyphenylacetate) reflect the coexisting plasma phenylalanine value, the state of transaminase induction, and efficiency of renal excretion. However, in well-treated patients, the status of these metabolites is unlikely to explain variation in cognitive outcome. (Scriver et al. 1994)

Some cases of PKU turn out to stem from gene defects elsewhere, as in the gene for phenylalanine decarboxylase. Many of these individuals suffer from either a relatively mild retardation or none at all. Since this gene and the phenylalanine hydroxylase gene can be subjected to mutations at many different points in their gene sequences, each reducing the catalytic activity of the immediate and subsequent gene products to a different degree, variation in phenotype will not be surprising. All this suggests that PKU is not a one mutant gene type/one disease type disorder, but rather a set of symptomatically similar one gene mutation/one dysfunction phenotype tokens. But molecular biology has complicated the picture further: the hyperphenylalaninemia that characterizes the syndrome is employed in assays to diagnose it; and the neurological processes it causes, which subserve the retardation, can be produced by other mutations in other genes besides the phenylalanine hydroxylase and decarboxylase genes.
The causal path to hyperphenylalaninemia may result from a deficiency in dihydropteridine reductase and/or quinoid dihydropteridine reductase, two enzymes involved in producing cofactors essential to phenylalanine hydroxylase's enzymatic activity. These two cofactors are themselves the products of two genes in which several allelic variants can have gene products with reduced enzymatic activity. The resulting syndrome, symptomatically indistinguishable from PKU, is now known as phenylketonuria-II. What is more, PKU can be produced environmentally, even in the presence of dietary control. Pregnant women with homozygous PKU expose their fetuses to increased concentrations of phenylalanine. These children, who will be obligate heterozygotes for the phenylalanine hydroxylase gene, will be phenotypically normal, that is, will not assay positively for hyperphenylalaninemia. Nevertheless, they can manifest similar mental retardation, identified as a variant of genetic PKU (so-called environmental PKU). Molecular biological discoveries have complicated the determination of PKU still further. PKU was initially identified as a phenotypic disorder involving the metabolism of one amino acid (the hydroxylation of phenylalanine). And since many of the sequelae of the untreated disorder can be lessened by restricting dietary phenylalanine, it has been concluded that the story ends there; thus, the genetically determined outcome can be mitigated. But it did not turn out to work that way. Studies of the biosynthetic pathway from the phenylalanine hydroxylase gene to hyperphenylalaninemia indicate that it involves a highly complex hydroxylation of phenylalanine, which requires at least three enzymes. Mutation at two different loci can affect at least two of the genes for these enzymes. Researchers have concluded that "multiple alleles probably exist at the locus (or loci) determining the phenylalanine hydroxylase apoenzyme. Thus, there is much opportunity for many varieties of hyperphenylalaninemia" (Man 2004). The absence of either of these active phenylalanine hydroxylases or the enzymes involved in the generation and regeneration of biopterin (a necessary cofactor in the enzymatic metabolism of phenylalanine) leads not only to a buildup of phenylalanine but also to a deficiency of both tyrosine and dopamine, the downstream products of this metabolic pathway. The lower levels of these metabolites could themselves lead to mental retardation due to decreased neurotransmitter levels (see Diamond et al. 1997). Thus, the reduction of PKU from a medical syndrome to a detailed biosynthetic pathway reveals that it is not one disorder (mental retardation) caused by one defective gene and its enzyme/protein product, which could be reversed with a simple manipulation of the environment by removing phenylalanine from the diet.
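The many-genotypes-to-one-clinical-label structure just described can be pictured with a toy model. The allele names, residual-activity figures, and diagnostic thresholds below are invented for illustration; they are not real data for any enzyme.

```python
# Hypothetical residual enzyme activities (% of wild type) for a few
# alleles of a phenylalanine-metabolizing enzyme. Invented numbers.
ALLELE_ACTIVITY = {'wt': 100.0, 'm1': 0.0, 'm2': 3.0, 'm3': 15.0, 'm4': 50.0}

def residual_activity(allele1, allele2):
    # Crude simplifying assumption: each allele contributes its gene
    # product independently, so activity is the average of the two.
    return (ALLELE_ACTIVITY[allele1] + ALLELE_ACTIVITY[allele2]) / 2

def clinical_label(activity):
    # Coarse diagnostic bins: one label covers many distinct genotypes.
    if activity < 1:
        return 'classic PKU'
    if activity < 10:
        return 'mild PKU'
    if activity < 30:
        return 'mild hyperphenylalaninemia'
    return 'unaffected'

# Distinct genotypes, same coarse-grained diagnosis:
print(clinical_label(residual_activity('m1', 'm2')))  # mild PKU (activity 1.5)
print(clinical_label(residual_activity('m2', 'm2')))  # mild PKU (activity 3.0)

# A carrier of a null allele is clinically unaffected:
print(clinical_label(residual_activity('wt', 'm1')))  # unaffected (activity 50.0)
```

With over four hundred described mutations, the number of possible genotype pairs runs into the tens of thousands, while the clinic employs only a handful of labels; that disproportion is precisely the point of the passage above.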
PKU labels more disorders than there are potential mutation sites in the phenylalanine hydroxylase or decarboxylase gene. In fact, it labels a trait that can be produced even in the presence of the wild-type gene churning out what would otherwise be sufficient gene product to metabolize phenylalanine and avoid any retardation. What molecular biology teaches us is that even the apparently noncontroversial case of a single gene for a trait we care about, one that is medically significant or, for that matter, a socially significant trait, is no such thing. Kitcher concludes a recent discussion of genetic determinism with a caution: Genetic research can hope to discover norms of reaction more directly by finding large numbers of individuals who share a genotype and tracking the variation in phenotype across environment. Of course, our pervasive ignorance of the causally relevant features of the extraorganismic
environment . . . should lead us to be tentative in evaluating the results, for we may well be overlooking some crucial environmental variable. (Kitcher 2003, p. 295) What the case of PKU reveals is that even in the apparently least controversial cases of well-established genetic determinism, it is molecular biology’s study of the intraorganismic, indeed intracellular, macromolecular environment which reveals that there is no single phenotypic trait here at all, and no single genotype responsible for the vast number of cases which medicine lumps together indiscriminately as tokens of a single trait type. If even the inborn errors of metabolism, prototypical cases of “single-gene” disorders, do not provide us with clear cases of genes for traits we care about, what should we expect to learn from the macromolecular bases of the range of capacities that really concern the opponents of genetic determinism—violence, criminality, alcoholism, intelligence, homosexuality, risk taking, schizophrenia? What we know about the way in which natural selection operates at the level of the macromolecules gives us good empirical evidence that when we get much beyond the inborn errors of metabolism, the idea of genes for traits we care about becomes even more problematical. It is no surprise that over the last two decades, behavioral geneticists have repeatedly had to withdraw their claims to have localized a “gene for ___,” where the blank is to be filled in by some socially significant behavioral disposition. Darwinian reductionism gives us a great deal of confidence that this track record will not get any better, ever. Traits of concern to the opponents of genetic determinism are capacities, dispositions that manifest themselves sometimes only rarely in the population and always irregularly and to different degrees. Like other dispositions or capacities, say, being magnetic or fragile, they supervene on occurrent properties of the items that have them. 
For example, an iron bar is magnetic owing to its material structure. Compare an allegedly genetically determined trait, such as schizophrenia. To put it crudely, schizophrenics are delusional owing to the material structure of their mind. Schizophrenia is a diagnosis that is vindicated by a relatively large number of different behaviors. About the only thing that gives us confidence that the common features they share to varying extents constitute a single syndrome is the way a large proportion of schizophrenic patients respond to medication, that is, to some change in their (brain’s) material structure or composition. But even in these cases, we know that the response to medication is not uniform, and this by itself is a tipoff that, just like any other biological trait, the molecular basis of the syndrome is heterogeneous. After all, as noted above, if selection for adaptive traits is blind to differences in structure, so is selection against maladaptive ones. And
this goes for the selection that environments make among nonhereditary as well as hereditary traits. But a capacity or disposition that is very heterogeneous in its occurrent base is ipso facto unlikely to be the result of one or a small number of common causes, whether environmental or genetic. The molecular biologist recognizes that our kind-terms are very coarse grained, and that nature's discriminations are even more coarse grained. We and nature both will treat as identical the end products of quite diverse causal pathways that happen to look, sound, taste, or feel the same or similar to us and the assays we employ. PKU is an obvious example. In our own case, sufficiently similar effects on our sense organs lead to taxonomic assimilation; in nature's case, sufficiently similar effects for selection lead to homoplasies. Molecular biology is able to make far finer discriminations, and Darwinian reductionism expects them. By teasing out the diverse pathways to apparently uniform outcomes, it locates the diverse sets of genes for polypeptides. It can enumerate the molecular milieu within which, when disjunctively packaged together, diverse gene sequences produce traits that we and nature both fail to distinguish from one another, owing to their similarity of effects. Unlike nature and the rest of human science, molecular biology is not blind to differences in structure with similar effects. This is, of course, why the reductionist research program on which it proceeds has the effect of increasing the accuracy, predictive power, and general explanatory adequacy of the explanations functional biology proffers. By the same token, owing to its far greater discriminatory power with respect to underlying causes, its identification of the genes for various polypeptides will explain why the search for the single (or the small number of tightly linked) genes for alcoholism, schizophrenia, violence, risk taking, and so on is a will-o'-the-wisp.
Yet it will also show why often the most effective route to treating these syndromes medically is through knowledge of the genes! The genes in the germ cells program development. The genes in the somatic cells regulate function as well, though they do not program it the way they program development. Rather, they strongly constrain its behavior by the feedback and feed-forward loops through which they produce gene products that enable the cell to interact with its milieu. They do both of these tasks via their role as discrete modularized subprograms in a large number of structured programs operating throughout multicellular systems. Because they are discrete physical structures that embody separable subprograms, they provide a distinctive opportunity to respond to malfunction that other, less molecular links in the causal chain that produces the malfunction do not provide. Consider an individual case of schizophrenia. The reductionist research program of molecular biology, and in particular microarray or gene-chip technology, can uncover the one or more particular mRNAs whose overexpression or
underexpression distinguishes the patient as beyond some mean value for the population as a whole. This may enable us to identify the gene in this patient that, together with the patient’s environment, results in the syndrome. We can use this mRNA to locate the somatic and the germ-line genes responsible for this level of expression. Sometimes, as in most cases of PKU, there is an effective immediate treatment for “the” disorder in an environmental intervention: something like “avoid phenylalanine in the diet.” This intervention may sometimes be impossible, or impractical, as when the environmental trigger is pervasive (for example, when the only soft drink around is diet soda. Check the label on the diet pop can nearest you: it contains a warning to phenylketonurics). Environmental intervention may be impractical if the substrate it normally makes present is also required as input for many other genetic programs. You can’t treat sickle-cell anemia by withholding valine from the sufferer’s diet. In the long run, a treatment that changes the body’s response to the environment, instead of changing the environment, will often be more reliable and more effective than changing the environment. We can rarely control our environments in the way we control our responses to them. And it is just because the body’s response is the consequence of the operation of a structured program of subroutines that such intervention is in the long run feasible for almost all syndromes. Once molecular biology uncovers the details of the program of development and somatic cell regulation, treatment becomes equivalent, literally, not figuratively, to debugging programs and finding patches for them. Debugging a program is something the writer of the software must do if the program does not produce the intended outcome, if it makes the hardware run in infinite loops, for example. Patching a program installed on a computer is a matter of adding some new lines, some subroutines to the program. 
In essence, these are activities that will eventually be mirrored in germ-line and somatic gene therapy. Somatic gene therapy is a matter of inserting a gene—rewriting a program module in the nuclei of relevant somatic cells in which a line of code has been corrupted—in order to prevent the breakdown from recurring. Germ-line gene therapy is more like writing a patch, a new bit of software that will enable a large number of machines on which it runs to function in ways they or their predecessors could not before. The analogies are imperfect, but they reflect the fact that in most cases it's easier to fix the central processing unit than to move it to an environment in which it won't have a chance to demonstrate its malfunction. It is important in the present connection to recall that molecular biology provides the resources to diagnose and treat patients one at a time. Given the heterogeneity of the hardware that realizes the same set of programs, and given the heterogeneity of the programs that realize the same outcomes, few reliable inferences can be drawn from detailed research on single-model systems about the gene for anything more complicated and distant in effects from the gene than the polypeptide. Reductionism does not uncover the gene for schizophrenia. It uncovers the gene(s) for the polypeptide/some polypeptides (or their absence or their concentration, catalytic effectiveness, and so on) which in some environments result in some small number of individuals exhibiting the syndrome. It does not follow that all schizophrenias (or most or many) are the result of the same gene, or even that schizophrenia is in general a genetic disorder. What molecular biology reveals is the heterogeneity of genetic and environmental routes to what we coarsely identify as a single disorder. In another patient it may be another gene, and quite a different one at that, in a different location, with a different polypeptide product, that produces the "same" syndrome. And in a third individual it might be an environmental input, whose concentration is too great or too small for normal functioning (compare how environmental lead levels lower IQ by varying amounts across all genetic backgrounds). For reasons that were given in chapter 3, it is in the nature of the biological that as we move downward to the molecular details, the generalizations become much more reliable but at the cost of being much narrower and more specific in their antecedent conditions and in the systems whose behaviors they describe. Therefore, though molecular biology is committed to the existence of "genes for individual polypeptide traits," and thereby to genetic determinism for polypeptides and for early developmental structures, in the end it enables us to refute the sort of genetic determinism which reductionism's opponents are frightened it will encourage.
That is, the reductionist research program shows to be false any version of the thesis that assures us of a small number of genes which even within the normal intracellular environmental (let alone the normal extracellular) norm of reaction result in any capacity or incapacity of potentially policy-relevant interest. The way natural selection operates at the level of the macromolecule makes it highly improbable that there could be the package of a small number of genes and a broad norm of reaction that scary genetic determinism requires. It is only findings yielded by a reductionistic research program that could have provided the opponent of morally obnoxious genetic determinism with this reassurance. On the other hand, the technology that molecular biology has provided and will hereafter provide does equip medicine to respond—individual by individual—to each of the different causal pathways to the many incapacities lumped together by our coarse-grained medical classifications, by suiting the genetic program to the environmental conditions when the reverse is inconvenient or impossible.
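The gene-chip diagnostic step described earlier—flagging the mRNAs whose expression in a given patient lies well beyond the population mean—can be sketched as a simple z-score screen. The gene names and numbers below are made up for illustration.

```python
from statistics import mean, stdev

def flag_outlier_transcripts(population, patient, z_cut=2.0):
    """Flag genes whose expression in `patient` lies more than `z_cut`
    standard deviations from the population mean.

    `population`: gene -> list of expression values across reference subjects.
    `patient`: gene -> this patient's expression value.
    """
    flagged = {}
    for gene, values in population.items():
        mu, sigma = mean(values), stdev(values)
        z = (patient[gene] - mu) / sigma
        if abs(z) >= z_cut:
            flagged[gene] = 'overexpressed' if z > 0 else 'underexpressed'
    return flagged

population = {'geneA': [10.0, 11.0, 9.0, 10.0, 10.5],
              'geneB': [5.0, 5.2, 4.8, 5.1, 4.9]}
patient = {'geneA': 10.2, 'geneB': 9.0}
print(flag_outlier_transcripts(population, patient))  # {'geneB': 'overexpressed'}
```

This is only the screening step, of course: the flagged transcript then points back to the somatic and germ-line genes whose level of expression it reflects, one patient at a time.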
The results and the further prospects of the technological application of reductionistic developmental biology and cell physiology have already raised some hard questions in bioethics, questions that have little to do with genetic determinism. Two of them will be especially important for applied and theoretical molecular biology. Assume that gene-chip diagnosis can be combined with techniques that enable us to debug or patch genetic programs to prevent inborn errors of metabolism and other polypeptide problems. This would enable us cheaply and effectively to tailor treatments to individuals at the exact point in the biosynthetic pathway where their idiosyncratic differences from the population would result in a significant deficit or dissimilarity. But this raises the question—as we begin to be able to respond to inequalities in the natural lottery in the way in which we have been able to respond in the social lottery—of whether the obligation we have to level the social playing field extends to the natural one (compare Buchanan et al. 2000). This obligation, if it exists, in turn raises the question of distinguishing between the treatment of defects, deficiencies, and syndromes that obstruct normal functioning, and the enhancement of normal functioning to levels that might inequitably advantage individuals. These are both serious problems, and I have no solution to them. But there would be no point in worrying about them if Darwinian reductionism had not led us to the analysis and treatment of these traits. Another problem is raised by those who fear the advent of what Kitcher (1997) calls laissez-faire eugenics or what Stock (2003) would call free-market genetic experimentation. 
This is the threat that there will be a general medicalization of natural and normal human differences, in which individuals seek for themselves in somatic gene therapy and for their children in germ-line gene therapy the elimination of the very differences that make our culture interesting. This homogeneity may threaten the genetic variability required by our long-term survival in the face of environmental changes. Both of these concerns rest on exaggerated fears of the effect of the dominant culture on the variegated values of the many different minorities who participate in it. Be these fears as they may, both reflect the false assumption that genetic determinism about many traits of individual human interest and importance for social policy in fact obtains. If I am correct about the real upshot of molecular biology for genetic determinism, perhaps it is reassuring that the falsity of this assumption means we need not worry about these unhappy eventualities. Still, how should we respond to the argument that even if the sort of genetic determinism its opponents fear is not true, advances in genomics might well lead to mistaken beliefs that it is true by those who do not understand the real implications of advances in molecular biology? For that matter, commitment to
genetic determinism might be fostered among the general population by those whose interests may be served by belief in ineradicable differences in IQ, violence, gendered sex roles, and so on. How should science respond to such threats? Is it the case, as S. J. Gould (1981) has argued, that scientists are responsible for the predictable misuse and/or misunderstanding of their findings? Is it the case, as Kitcher (2000a, 2000b) has suggested, that scientists have the obligation to avoid certain questions or adopt higher standards of evidence in the examination of certain matters, owing to their potential broader social impact? I believe that fairly simple answers can be given to both questions. As to the claim that some areas of research in molecular biology should be avoided owing to the likelihood of undesirable consequences, it may be replied that science, like nature, abhors a vacuum. Given the technology and the in silico availability of complete human and other species' genome-sequence data and annotations, vast resources are no longer required to explore any question in molecular biology. Accordingly, someone somewhere is likely to be exploring any question of scientific interest, either regardless of the consequences or in some cases with an eye to the consequences, whether beneficial or otherwise. If it is going to happen anyway, better that a research question be publicly pursued under the scrutiny of science's system of objectively controlled inquiry, which allows for maximal healthy skepticism, persistent demands for replication, and critical analysis of findings and interpretation. No self-denying ordinance against any research question is really enforceable in molecular biology. As to the demand that higher standards of evidence be adopted in some lines of inquiry than those usually imposed by the epistemic institutions of science, this is an invitation to second-guess, and ultimately to unravel, public confidence in science.
Recall the problem of arms races, which arises once biological systems begin to interact strategically in their search for the best solutions to design problems. Humans are, of course, biological systems, and ones in which arms races, and strategic interaction in general, take place with the greatest intensity and complexity, and at the greatest speed. There is no doubt that once one set of humans adopts a strategy, such as censoring oneself or others by imposing a higher standard for scientific certification of their own or other people’s findings and theories, others will begin to second-guess them. The others will in this case be the scientists with whom they compete, and the various communities and individuals that consume scientific information. What is the best outcome one can hope for as a result of each of these agents making appropriate adjustments to their own interpretation of the actions of the research group that moves first in the strategic interaction? Will it be a new equilibrium, one that is better or worse in its consequences than the previous
one? It is quite possible that a new equilibrium will be worse for all concerned. A still worse outcome would be the absence of any equilibrium: in cases like these, there is no assurance that another equilibrium exists. For all we know, such second-guessing among scientists could send their social institutions of epistemic certification spiraling out of control, with the result that nothing is believed on its merits. If the community of qualified molecular biologists declines to investigate some question as too dangerous, others will undoubtedly make claims about it that fill the vacuum left behind. If highly qualified, distinguished scientists with a wide public following make false though well-meaning claims about the evidence bearing on various versions of genetic determinism (compare Gould 1981 and Sesardic 2005 [commenting on Gould]), they are more likely to undermine the credibility of better-supported objections to exaggerated versions of the thesis. Or they may undercut true and socially innocuous but medically useful applications of Darwinian reductionism. Or they may even do both.
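The possibility that mutual second-guessing settles into no equilibrium at all is a familiar point in elementary game theory, and it can be made vivid with a toy model (an illustrative sketch added here, not an example from the text): in a matching-pennies-style game, in which one player profits by matching the other's move and the other by mismatching it, best-response adjustment cycles forever, since the game has no pure-strategy equilibrium for the process to converge on.

```python
# Illustrative sketch only (not Rosenberg's own model): best-response dynamics
# in a matching-pennies-style game. One player (the "matcher") wants to copy
# the other's last move; the other (the "mismatcher") wants to do the
# opposite. Because no pure-strategy equilibrium exists, mutual
# second-guessing cycles forever instead of converging.

def matcher_response(other_last):
    # Best response of the matcher: copy the other player's last action.
    return other_last

def mismatcher_response(other_last):
    # Best response of the mismatcher: play the opposite action.
    return 1 - other_last

def best_response_dynamics(rounds=8, start=(0, 1)):
    a, b = start
    history = [(a, b)]
    for _ in range(rounds):
        # Each player simultaneously best-responds to the other's last move.
        a, b = matcher_response(b), mismatcher_response(a)
        history.append((a, b))
    return history

history = best_response_dynamics()
# The joint play cycles with period 4 -- (0,1) -> (1,1) -> (1,0) -> (0,0) ->
# (0,1) -> ... -- so no strategy profile is ever a resting point.
print(history)
```

Nothing turns on the particular payoffs: any game without a pure-strategy equilibrium exhibits the same endless cycling under naive best-response adjustment, which is the formal analogue of the unraveling described above.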
References
Adleman, L. 1994. “Molecular Computation of Solutions to Combinatorial Problems.” Science 266:1021–24.
Albert, D. 2000. Time and Chance. Cambridge, MA: Harvard Univ. Press.
Amundsen, R., and G. Lauder. 1998. “Function without Purpose.” In Nature’s Purposes, ed. C. Allen, M. Bekoff, and G. Lauder, 335–70. Cambridge, MA: MIT Press.
Ariew, A. 2003. “Ernst Mayr’s ‘Ultimate/Proximate Distinction’ Reconsidered and Reconstructed.” Biology and Philosophy 18:553–65.
Armstrong, D. 1983. What Is a Law of Nature? Cambridge: Cambridge Univ. Press.
Axelrod, R. 1984. The Evolution of Cooperation. New York: Basic Books.
Beatty, J. 1980. “What’s Wrong with the Received View of Evolutionary Theory?” In PSA 1980, vol. 2, ed. P. Asquith and R. Giere. East Lansing, MI: Philosophy of Science Association.
———. 1992. “Fitness: Theoretical Contexts.” In Keywords in Evolutionary Biology, ed. E. Fox-Keller and E. Lloyd, 115–19. Cambridge, MA: Harvard Univ. Press.
———. 1995. “The Evolutionary Contingency Thesis.” In Concepts, Theories and Rationality in Biology, ed. G. Wolters and J. Lennox, 45–81. Pittsburgh: Univ. of Pittsburgh Press.
Beatty, J., and S. Finsen. 1989. “Rethinking the Propensity Interpretation: A Peek inside the Pandora’s Box.” In What the Philosophy of Biology Is, ed. M. Ruse, 17–30. Dordrecht: Kluwer.
Beatty, J., and S. Mills. 1979. “The Propensity Interpretation of Fitness.” Philosophy of Science 46:263–88.
Beauchamp, T., and A. Rosenberg. 1981. Hume and the Problem of Causation. New York: Oxford Univ. Press.
Bickerton, Derek. 1998. “How Protolanguage Became Language.” In The Evolutionary Emergence of Language, ed. C. Knight, J. R. Hurford, and M. Studdert-Kennedy, 32–64. Cambridge: Cambridge Univ. Press.
Block, N. 2003. “Do Causal Powers Drain Away?” Philosophy and Phenomenological Research 67:133–50.
Bodnar, J. W. 1997. “Programming the Drosophila Embryo.” Journal of Theoretical Biology 188:391–445.
Bouchard, F., and A. Rosenberg. 2004. “Fitness, Probability and the Principles of Natural Selection.” British Journal for the Philosophy of Science 55:693–712.
Boyd, R., and J. Silk. 2000. How Humans Evolved. New York: Norton.
Brandon, R. 1978. “Adaptation and Evolutionary Theory.” Studies in the History and Philosophy of Science 9:181–206.
———. 1990. Adaptation and Environment. Princeton, NJ: Princeton Univ. Press.
Brandon, R., and S. Carson. 1996. “The Indeterministic Character of Evolutionary Theory.” Philosophy of Science 63:315–37.
Brown, Jennifer R., H. Ye, R. T. Bronson, P. Dikkes, and M. E. Greenberg. 1996. “A Defect in Nurturing in Mice Lacking the Immediate Early Gene fosB.” Cell 86:297–309.
Brown, P., T. Sutikna, M. J. Morwood, R. P. Soejono, E. Jatmiko, S. Wayhu, and R. Awe. 2004. “A New Small-Bodied Hominid from the Late Pleistocene of Flores, Indonesia.” Nature 431:1055–61.
Brown, T. A., R. G. Allaby, R. Sallares, and G. Jones. 1998. “Ancient DNA in Charred Wheats: Taxonomic Identification of Mixed and Single Grains.” Ancient Biomolecules 2:185–93.
Buchanan, A., D. W. Brock, N. Daniels, and D. Winkler. 2000. From Chance to Choice. Cambridge: Cambridge Univ. Press.
Cann, R. L. 2001. “Genetic Clues to Dispersal in the Human Populations: Retracing the Past from the Present.” Science 291:1742–48.
Carroll, S., D. Keyes, D. L. Lewis, J. E. Selegue, B. S. Pearson, L. V. Goodrich, R. L. Johnson, J. Gates, and M. P. Scott. 1999. “Recruitment of a Hedgehog Regulatory Circuit in Butterfly Eyespot Evolution.” Science 283:532–34.
Cartwright, N. 1983. How the Laws of Physics Lie. Oxford: Oxford Univ. Press.
———. 1998. “Do the Laws of Physics State the Facts?” In Philosophy of Science: The Central Issues, ed. J. Cover and M. Curd, 865–77. New York: Norton.
Crick, F. 1968. “The Origins of the Genetic Code.” Journal of Molecular Biology 38:367–79.
Cummins, R. 1975. “Functional Analysis.” Journal of Philosophy 72:741–65. Reprinted in E. Sober, Conceptual Issues in Evolutionary Theory (Cambridge, MA: MIT Press, 1983).
Darwin, C. 1859. On the Origin of Species. Facsimile 1st ed. Cambridge, MA: Harvard Univ. Press.
———. 1989 [1836–44]. Charles Darwin’s Notebooks, 1836–44: Geology, Transmutation of Species, Metaphysical Enquiries. Edited by P. H. Barrett and P. J. Gautrey. Ithaca, NY: Cornell Univ. Press.
Davidson, D. 1967. “Causal Relations.” Journal of Philosophy 64:691–703.
Dawkins, R. 1982. The Extended Phenotype. San Francisco: Freeman.
Dennett, D. 1995. Darwin’s Dangerous Idea. New York: Simon and Schuster.
Depew, D., and B. Weber. 1997. Darwinism Evolving. Cambridge, MA: MIT Press.
Diamond, A., M. B. Prevor, G. Callender, and D. P. Druyn. 1997. “Prefrontal Cortex Cognitive Deficits in Children Treated Early and Continuously for PKU.” Monographs of the Society for Research in Child Development 62:i–v, 1–208.
Dobzhansky, T. 1973. “Nothing in Biology Makes Sense except in the Light of Evolution.” American Biology Teacher 35:125–29.
Dray, W. 1957. Laws and Explanation in History. Oxford: Oxford Univ. Press.
Earman, J. 1986. A Primer on Determinism. Dordrecht: Kluwer.
Eigen, M., and P. Schuster. 1977. “The Hypercycle: A Principle of Natural Self-Organization.” Naturwissenschaften 64:541–65.
Ekbohm, G., T. Fagerstrom, and G. Agren. 1980. “Natural Selection for Variation in Offspring Number: Comments on a Paper by Gillespie.” American Naturalist 115:445–47.
Fehr, E., H. Gintis, S. Bowles, and R. Boyd. 2003. “Explaining Altruistic Behavior in Humans.” Evolution and Human Behavior 24:153–72.
Feyerabend, P. 1964. “Reduction, Empiricism and Laws.” Minnesota Studies in the Philosophy of Science, vol. 3. Minneapolis: Univ. of Minnesota Press.
Fisher, S. E., F. Vargha-Khadem, K. E. Watkins, A. P. Monaco, and M. E. Pembrey. 1998. “Localisation of a Gene Implicated in a Severe Speech and Language Disorder.” Nature Genetics 18:168–70.
Fodor, J. 1975. The Language of Thought. New York: Crowell.
———. 1981. “Special Sciences.” In Representations. Cambridge, MA: MIT Press.
Fox-Keller, E. 2000. The Century of the Gene. Cambridge, MA: Harvard Univ. Press.
Frank, R. 1988. Passions within Reason. New York: Norton.
Frost-Arnold, G. 2004. “How to Be an Anti-Reductionist about Developmental Biology: Response to Laubichler and Wagner.” Biology and Philosophy 19:75–91.
Fukuyama, Francis. 2002. Our Post-Human Future. New York: Farrar, Straus and Giroux.
Gehring, W., G. Halder, and P. Callaerts. 1995. “Induction of Ectopic Eyes by Targeted Expression of the Eyeless Gene in Drosophila.” Science 267:1788–92.
Gibbons, A. 2000. “The Peopling of the Pacific.” Science 291:1735–37.
———. 2001. “The Riddle of Co-Existence.” Science 291:1725–29.
Gillespie, J. H. 1977. “Natural Selection for Variance in Offspring Numbers: A New Evolutionary Principle.” American Naturalist 111:1010–14.
Glymour, B. 2001. “Selection, Indeterminism, and Evolutionary Theory.” Philosophy of Science 68:518–35.
Godfrey-Smith, P. 2000. “Information, Arbitrariness, and Selection.” Philosophy of Science 67:202–7.
Goudge, T. 1961. The Ascent of Life. Toronto: Univ. of Toronto Press.
Gould, S. J. 1981. The Mismeasure of Man. New York: Norton.
Gould, S. J., and R. Lewontin. 1979. “The Spandrels of San Marco and the Panglossian Paradigm.” Proceedings of the Royal Society of London B 205:581–98.
Griffiths, P. 2001. “Genetic Information: A Metaphor in Search of a Theory.” Philosophy of Science 67:26–44.
Griffiths, P., and D. Grey. 1994. “Developmental Systems Theory and Evolutionary Explanation.” Journal of Philosophy 91:277–304.
Griffiths, P., and E. Neumann-Held. 1999. “The Many Faces of the Gene.” Bioscience 49:656–63.
Haig, D. 1997. “Parental Antagonism, Relatedness Asymmetries, and Genomic Imprinting.” Proceedings of the Royal Society of London B 264:1657–62.
Hamilton, W. D., R. Axelrod, and R. Tanese. 1990. “Sexual Reproduction as an Adaptation to Resist Parasites (A Review).” Proceedings of the National Academy of Sciences 87:3566–73.
Hebert, P. D. N., A. Cywinska, S. L. Ball, and J. R. de Waard. 2003. “Biological Identifications through DNA Barcodes.” Proceedings of the Royal Society of London B 270:313–21.
Hedges, B. 2000. “A Start for Population Genomics.” Nature 408:652–53.
Hellman, G., and F. W. Thompson. 1975. “Physicalism: Ontology, Determination, and Reduction.” Journal of Philosophy 72:551–64.
———. 1977. “Physicalist Materialism.” Noûs 11:309–45.
Hempel, C. G. 1942. “The Function of General Laws in History.” Journal of Philosophy 39. Reprinted in Aspects of Scientific Explanation, 231–44 (New York: Free Press, 1965).
Herbert, A. 2004. “The Four Rs of RNA-Directed Evolution.” Nature Genetics 36:19–25.
Hong, J. I., Q. Feng, V. Rotello, and J. Rebek. 1992. “Competition, Cooperation, and Mutation—Improving a Synthetic Replicator by Light Irradiation.” Science 255:848–50.
Hull, D. 1974. The Philosophy of Biological Science. Englewood Cliffs, NJ: Prentice Hall.
———. 1989. Science as a Process. Chicago: Univ. of Chicago Press.
Jukes, T. 1985. “A Change in the Genetic Code in Mycoplasma capricolum.” Journal of Molecular Evolution 22:361–62.
Kauffman, S. 1993. The Origins of Order: Self-Organization and Selection in Evolution. Oxford: Oxford Univ. Press.
———. 1995. At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. Oxford: Oxford Univ. Press.
Kim, J. 1992. “‘Downward Causation’ in Emergentism and Nonreductive Physicalism.” In Emergence or Reduction? ed. A. Beckermann, H. Flohr, and J. Kim, 119–38. Berlin: de Gruyter.
———. 1993. Supervenience and Mind. New York: Cambridge Univ. Press.
———. 1998. Mind in a Physical World. Cambridge, MA: MIT Press.
———. 2005. Physicalism, or Something Near Enough. Princeton, NJ: Princeton Univ. Press.
Kimura, M. 1961. “Natural Selection as a Process of Accumulating Genetic Information in Adaptive Evolution.” Genetical Research 2:127–40.
Kitcher, P. 1984. “1953 and All That: A Tale of Two Sciences.” Philosophical Review 93:353–73.
———. 1985. Vaulting Ambition: Sociobiology and the Quest for Human Nature. Cambridge, MA: MIT Press.
———. 1989. “Explanatory Unification and the Causal Structure of the World.” In Scientific Explanation: Minnesota Studies in the Philosophy of Science, vol. 13, ed. W. Salmon and P. Kitcher, 410–505. Minneapolis: Univ. of Minnesota Press.
———. 1993. The Advancement of Science. New York: Oxford Univ. Press.
———. 1997. The Lives to Come. New York: Free Press.
———. 1999. “The Hegemony of Molecular Biology.” Biology and Philosophy 14:195–210.
———. 2000a. Mendel’s Mirror. Oxford: Oxford Univ. Press.
———. 2000b. Science, Truth and History. New York: Oxford Univ. Press.
———. 2003. In Mendel’s Mirror. New York: Oxford Univ. Press.
Kitcher, P., and K. Sterelny. 1988. “Return of the Gene.” Journal of Philosophy 85:339–61.
Kitcher, P., K. Sterelny, and K. Waters. 1990. “The Illusory Richness of Sober’s Monism.” Journal of Philosophy 87:158–61.
Kittler, R., M. Kayser, and M. Stoneking. 2003. “Molecular Evolution of Pediculus humanus and the Origin of Clothing.” Current Biology 13:1414–16.
Klin, A., W. Jones, R. Schultz, and F. Volkmar. 2000. “The Enactive Mind, or From Actions to Cognition: Lessons from Autism.” Philosophical Transactions of the Royal Society of London B 358:345–60.
Kuhn, T. S. 1962. The Structure of Scientific Revolutions. Chicago: Univ. of Chicago Press.
Lai, C. S. L., S. E. Fisher, J. A. Hurst, F. Vargha-Khadem, and A. P. Monaco. 2001. “A Forkhead-Domain Gene Is Mutated in a Severe Speech and Language Disorder.” Nature 413:519–23.
Lange, M. 1995. “Are There Natural Laws concerning Particular Species?” Journal of Philosophy 92:430–51.
———. 2004. “The Autonomy of Functional Biology: A Reply to Rosenberg.” Biology and Philosophy 19:93–109.
Laubichler, M., and G. Wagner. 2001. “How Molecular Is Molecular Biology?” Biology and Philosophy 16:53–68.
Lawrence, P. 1992. The Making of a Fly: The Genetics of Animal Design. Oxford: Blackwell Scientific Publishers.
Levins, R., and R. Lewontin. 1985. The Dialectical Biologist. Cambridge, MA: Harvard Univ. Press.
Lewis, D. 1986. Philosophical Papers. Vol. 2. Oxford: Oxford Univ. Press.
Lewontin, R. 1974a. “The Analysis of Variance and the Analysis of Causes.” American Journal of Human Genetics 26:400–411.
———. 1974b. The Genetic Basis of Evolutionary Change. New York: Columbia Univ. Press.
———. 1978. “Adaptation.” Scientific American 239:156–69.
———. 1980. “Theoretical Population Genetics in the Evolutionary Synthesis.” In The Evolutionary Synthesis, ed. E. Mayr and W. Provine. Cambridge, MA: Harvard Univ. Press.
Lewontin, R., and E. Sober. 1982. “Artifact, Cause and Genic Selection.” Philosophy of Science 49:157–80.
Lloyd, E. 1993. The Structure and Confirmation of Evolutionary Theory. Princeton, NJ: Princeton Univ. Press.
Man, O. M. I. [Online Mendelian Inheritance in Man (OMIM)]. 2004. Phenylketonuria. Baltimore: Johns Hopkins Univ. Press.
Margulis, L., and D. Sagan. 1986. Microcosmos: Four Billion Years of Microbial Evolution. New York: Simon and Schuster.
Maryanski, A., and J. Turner. 1992. The Social Cage. Palo Alto, CA: Stanford Univ. Press.
Matthen, M., and A. Ariew. 2002. “Two Ways of Thinking about Fitness and Natural Selection.” Journal of Philosophy 99:58–83.
Mattick, J. S. 2003. “Challenging the Dogma: The Hidden Layer of Non-Protein Coding RNAs in Complex Organisms.” Bioessays 25:930–39.
Maynard Smith, J. 2000. “The Concept of Information in Biology.” Philosophy of Science 67:177–94.
Mayr, E. 1982. The Growth of Biological Thought. Cambridge, MA: Belknap Press of Harvard Univ. Press.
Mitchell, Sandra. 2000. “Dimensions of Scientific Law.” Philosophy of Science 67:242–65.
Mithen, S. 1996. The Prehistory of the Mind: The Cognitive Origins of Art, Religion and Science. London: Thames and Hudson.
Monod, J. 1974. Chance and Necessity. London: Fontana.
Murray, J. D. 1981. “A Prepattern Formation Mechanism for Animal Coat Markings.” Journal of Theoretical Biology 88:161–99.
———. 1989. Mathematical Biology. Berlin: Springer-Verlag.
Nagel, E. 1961. The Structure of Science. New York: Harcourt, Brace and World. Reprinted, Indianapolis: Hackett, 1991.
———. 1977. “Teleology Revisited.” Journal of Philosophy 74:261–301.
Nelson, P., M. Kiriakidou, A. Sharma, E. Maniataki, and Z. Mourelatos. 2003. “The MicroRNA World: Small Is Mighty.” Trends in Biochemical Sciences 28:534–40.
Nijhout, F. 1994. “Genes on the Wing.” Science 265:44–45.
Noonan, J. P., M. Hofreiter, D. Smith, J. Priest, N. Rohland, N. Rabeder, J. Krause, C. Detter, S. Paabo, and E. M. Rubin. 2005. “Genomic Sequencing of Pleistocene Cave Bears.” Science 309:597–99.
Nusslein-Volhard, C. 1992. “Determination of the Embryonic Axes of Drosophila.” Cell 68:201–19.
Okasha, S. 2006. The Levels of Selection Question: Philosophical Perspectives. Oxford: Oxford Univ. Press.
Paabo, S. 1999. “Human Evolution.” Nature 404:453–54.
Perrigo, G., W. C. Bryant, and F. S. vom Saal. 1990. “A Unique Timing System Prevents Male Mice from Harming Their Own Offspring.” Animal Behaviour 39:535–39.
Pinker, S. 2001. “Talk of Genes and Vice Versa.” Nature 419:465–66.
Plomin, R., J. C. DeFries, G. E. McClearn, and P. McGuffin. 2000. Behavioral Genetics. San Francisco: Freeman.
———. 2001. Behavioral Genetics. 4th ed. New York: Worth.
Putnam, H. 1975. Mind, Language, and Reality. New York: Cambridge Univ. Press.
Railton, P. 1981. “Probability, Explanation, and Information.” Synthese 48:233–56.
Rebek, J. 1996. “The Design of Self-Replicating Molecules.” Current Opinion in Structural Biology 4:629–35.
Rebek, J., T. K. Park, and Q. Feng. 1992. “Synthetic Replicators and Extrabiotic Chemistry.” Journal of the American Chemical Society 114:4529–32.
Renfrew, C., P. Foster, and M. Hurles. 2001. “The Past within Us.” Nature Genetics 36:253–54.
Richards, M., and V. Macaulay. 2000. “Tracing European Founder Lineages in the Near Eastern mtDNA Pool.” American Journal of Human Genetics 67:1251–76.
Rosenberg, A. 1978. “The Supervenience of Biological Concepts.” Philosophy of Science 45:368–86.
———. 1983. “Fitness.” Journal of Philosophy 80:457–73.
———. 1985. The Structure of Biological Science. Cambridge: Cambridge Univ. Press.
———. 1986. “Intention and Action among the Macromolecules.” In Current Issues in Teleology, ed. N. Rescher, 65–76. Lanham, MD: Univ. Press of America.
———. 1993. Instrumental Biology, or The Disunity of Science. Chicago: Univ. of Chicago Press.
———. 1998. “Computing the Embryo: Reduction Redux.” Biology and Philosophy 12:445–70.
Salmon, Wesley. 1966. Foundations of Scientific Inference. Pittsburgh: Univ. of Pittsburgh Press.
———. 1989. “Four Decades of Scientific Explanation.” In Scientific Explanation: Minnesota Studies in the Philosophy of Science, vol. 13, ed. Wesley Salmon and Philip Kitcher. Minneapolis: Univ. of Minnesota Press.
Santos, M., C. Cheesman, V. Costa, and P. Moradas. 1999. “Selective Advantages Created by Codon Ambiguity Allowed for the Evolution of an Alternative Code in Candida spp.” Molecular Microbiology 31:937–47.
Sarkar, S. 1996. “Decoding ‘Coding’—Information and DNA.” Bioscience 46:857–64.
———. 2000. “Information in Genetics and Developmental Biology.” Philosophy of Science 67:208–13.
———. 2005. Molecular Models of Life. Cambridge, MA: MIT Press.
Schaffner, K. 1967. “Approaches to Reduction.” Philosophy of Science 34:137–47.
———. 1993. Discovery and Explanation in Biology and Medicine. Chicago: Univ. of Chicago Press.
Scriver, C. R., R. C. Eisensmith, S. L. C. Woo, and S. Kaufman. 1994. “The Hyperphenylalaninemias of Man and Mouse.” Annual Review of Genetics 28:141–65.
Searle, J. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3:417–57.
Sesardic, N. 2005. Making Sense of Heritability. Cambridge: Cambridge Univ. Press.
Shannon, C. E., and W. Weaver. 1963. The Mathematical Theory of Communication. Urbana: Univ. of Illinois Press.
Skyrms, B. 2004. The Stag Hunt and the Evolution of Social Structure. Cambridge: Cambridge Univ. Press.
Sober, E. 1984. The Nature of Selection. Cambridge, MA: MIT Press. Reprint, Chicago: Univ. of Chicago Press, 1994.
———. 1993. The Philosophy of Biology. Boulder, CO: Westview Press.
———. 1999. “The Multiple Realizability Argument against Reductionism.” Philosophy of Science 66:542–64.
Sober, E., and D. S. Wilson. 1996. Unto Others. Cambridge, MA: Harvard Univ. Press.
Stamos, D. 2001. “Quantum Indeterminism in Evolutionary Biology.” Philosophy of Science 68:164–84.
Sterelny, K. 2000. “The ‘Genetic Program’ Program: A Commentary on Maynard Smith on Information in Biology.” Philosophy of Science 67:195–201.
Stock, G. 2003. Redesigning Humans. New York: Mariner Books.
Stoneking, M., and D. Soodyall. 1996. “Human Evolution and the Mitochondrial Genome.” Current Opinion in Genetics and Development 6:731–36.
Stryer, L. 1983. Biochemistry. San Francisco: Freeman.
Stumpf, M., and D. Goldstein. 2001. “Genealogical and Evolutionary Inference with the Human Y-Chromosome.” Science 291:1738–42.
Tatum, E. L., and G. Beadle. 1941. “Genetic Control of Biochemical Reactions in Neurospora.” Proceedings of the National Academy of Sciences 27:499–506.
Thompson, D’Arcy. 1942. On Growth and Form. Cambridge: Cambridge Univ. Press.
Thompson, P. 1988. The Structure of Biological Theories. Albany: SUNY Press.
Turner, C. L., A. Grant, J. Bailey, G. A. Dover, A. Gabriel, and G. Barker. 1998. “Patterns of Genetic Diversity in Extant and Extinct Cattle Populations: Evidence from Sequence Analysis of Mitochondrial Coding Regions.” Ancient Biomolecules 2:235–50.
van Fraassen, B. 1980. The Scientific Image. Oxford: Oxford Univ. Press.
Waddington, C. H. 1957. The Strategy of the Genes. London: Allen & Unwin.
Walsh, D., T. Lewens, and A. Ariew. 2002. “The Trials of Life: Natural Selection and Random Drift.” Philosophy of Science 69:452–73.
Waters, C. K. 1990. “Why the Antireductionist Consensus Won’t Survive: The Case of Classical Mendelian Genetics.” In PSA 1990, Proceedings of the 1990 Biennial Meeting of the Philosophy of Science Association, vol. 1, ed. Arthur Fine, Mickey Forbes, and Linda Wessels, 125–39. East Lansing, MI: Philosophy of Science Association.
———. 1991. “Tempered Realism about the Forces of Selection.” Philosophy of Science 58:552–73.
———. 1994. “Genes Made Molecular.” Philosophy of Science 61:163–85.
Watson, J. D., and F. H. C. Crick. 1953. “Molecular Structure of Nucleic Acids.” Nature 171:737–38.
Winne, J. 2000. “Information and Structure in Molecular Biology.” Philosophy of Science 67:517–26.
Winter, A. E. 1996. “Autocatalysis and the Generation of Self-Replicating Systems.” Acta Chemica Scandinavica 50:469–85.
Wolpert, L. 1969. “Positional Information and the Spatial Pattern of Cellular Differentiation.” Journal of Theoretical Biology 25:1–47.
———. 1994. “Do We Understand Development?” Science 266:571–72.
Wolpert, L., et al. 1998. Principles of Development. Oxford: Oxford Univ. Press.
Woodward, James. 1997. “Explanation, Invariance and Intervention.” In PSA 1996, Proceedings of the 1996 Biennial Meeting of the Philosophy of Science Association, vol. 2, ed. Lindley Darden. East Lansing, MI: Philosophy of Science Association.
Wright, L. 1973. “Functions.” Philosophical Review 82:139–68.
———. 1976. Teleological Explanation. Berkeley and Los Angeles: Univ. of California Press.
Index
abdominal-A and -b genes, 64, 67 aboutness, 99, 103 adaptation, 47, 52 adaptational behavior, 211 adaptationalism, 125; explanations, 46 adenine, 96 Africa, 205, 206, 220, 221 African widowbird, 86, 91, 92, 109 Agren, G., 163, 164 agriculture, 208 AIDS virus, 145, 153, 184 Albert, D., 168 alcoholism, 224, 232, 233 algorithm, 110 alpha-particle decay, 172, 173 alternative mRNA splicing, 123 altruism, 197, 199, 210 amino acids, 20, 71, 98, 231 amplification, 208 Amundsen, R., 19n, 20n, 138n ancient DNA, 208 Angelman syndrome, 90 annotation, 202; functional, 208 Annual Review of Genetics, 229 Anscombe, E., 135n antennapedia gene, 64, 67 anterior/posterior development, 52–53, 61, 65–66, 80 anthropocentrism, 60, 126 anticodon, 99 apterous gene, 49 arbitrariness, of code, 97–99 archaeology, 221
Ariew, A., 42, 159, 167 Aristotelian mechanics, 28 Aristotle, 9, 57 arms race, 89, 90, 237 arrangements and distributions, 168–69 Asperger’s syndrome, 217 assembly language programs, 71, 105, 106 atomic structure, 98 atomic theory, 3 atoms, 113, 172, 173, 189, 191 Australasia, 214 autism, 217 autonomous levels of explanation, 82 autonomy of biology, 33, 157 autonomy of second law of thermodynamics, 188 autosomal phenotypes, 114; recessive transmission, 228 auxiliary information, 153 Axelrod, R., 220 AZT, 153 Balaenoptera musculus, 184 balance of power, 90 bar codes, 8, 202 Basque, 205 Bayesian probabilities, 160n, 161 Beadle, G., 114 Beatty, J., 142n, 160, 163, 164 Beauchamp, T., 135n behavioral biology, 162, 213; disposition, 208, 209
Bennett, J., 37n Benzer, S., 117 Bickerton, D., 213, 217 bicoid gene, 66, 67 Bicoid mRNA, 80 Bicoid protein, 64, 70, 78, 80 bilateral symmetry, 171 biochemical pathway, 47 bioethics, 236 bioinformatics, 14 biological anthropology, 213 biological computers, 72 biological explanation, 134ff., 197; as historical, 152–56; PNS’s role in, 152; as sketches, 154 biological generalizations, 110 biological kinds, 196n biological laws, nonexistence of (other than Darwin’s), 26, 30, 32, 135–45, 149–52, 183; reduction of, 156 biological models, 146–49 biological understanding, loss of, 54 biology: as historical science, 55, 76, 78, 117, 152, 153, 156; as terrestrial, 40–41 biosynthetic pathway, 9, 209, 210, 219, 231, 236 biotechnology, 56 birdsongs, 86 blastoderm, 61, 69, 72 blastopore, 59 Block, N., 196n blue whale, 184 Bodnar, J. W., 68, 69, 71, 72 Bohr atom, 148 Boolean language, 71 Boolean switching rule, 62–64 Boolean table, 65 bottom-up research, 54 Boyd, R., 204, 205 brain, 10, 104, 108, 217, 218, 232 Brandon, R., 160, 162, 163, 164 bricklayers, 227, 228
bridge principles, 27, 29, 30 Brown, J., 206 Buchanan, A., 236 buckeye butterflies, 17, 44 butterfly eyespots, 16, 17, 43, 44, 47, 48, 52, 53, 77 butterfly wing, 16, 18 C. elegans, 72 calculus of probability, 171 Callaerts, P., 75 camouflage, 43 Candida, 98 capacities, 233 carbon-14 dating, 203 Carroll, S. B., 52 Carson, S., 162 Cartesians, 184 cat codon, 101 catalysis, 189, 191, 192, 193 causal democracy thesis, 73, 85, 124 causal drainage, 196n Cech, T., 144 cell, 84, 194 cell cycle, 62–63, 64, 71 cell membrane, 83 centimorgans, 114, 144 central dogma, 142, 144 central tendencies, 157ff., 176, 177 ceteris paribus clauses, 140–45, 154 challenges to genetic program of development, stated, 71–75 chemical differences in imprinting, 88 chemical environments, 189, 190 chemical events, 98 chemical laws, 200 chemical processes, 155 chemical synthesis, 190 chemistry, 5, 6, 7, 19, 135, 184, 185, 186, 188, 191 chess, 147 chick embryo, 77 chick wing, 59, 195
chimpanzee, 210, 212, 215, 217, 218, 219 Chinese pictograms, 104 Chinese room, 103n, 104–5 Chinese speakers, 104 Chomsky, N., 217 chromatin configuration, 62–64, 84; shutoff, 66 chromosome 15, 34, 76, 88, 90, 114 chromosome 21, 213, 219 chromosome 22, 213, 219 chromosomes, 140, 143 cistron, 117 classical egalitarianism, 9 classical liberalism, 9 cleavage, 58 codons, 97–101 cognition, as following program, 103–4 cognitive development, 230 cognitive processes, 60, 73; and computational powers, 15, 36, 187 coin tossing, 171–74 comparative fitness. See fitness, comparative computational biology, 15 computer, 103, 104 computer program, 84, 107, 108, 109 conditions of existence, law of, 149–50 content, 99, 102 contingent truth, PNS as, 161 cooperative behavior, 206–12, 213, 214, 216, 219, 220, 221 count noun, 114, 116, 121 counting genes, 119, 122, 129, 131 covalent bonds, 192 Coward, Noel, 147 creationism, 9 Crick, F. H. C., 1, 3, 4, 7, 27, 99, 115, 116, 125 criminality, 209, 232 criterion of connectability, 30 criterion of individuation, 118 Cro-Magnon, 204 cubitus interruptus gene, 53
cultural evolution, 93, 201, 211, 215, 221 cultural transmission, 217–18 Cummins, R., 18–19n, 138n cut-the-cake, 207, 209 cycles, 192 Cygnus olor, 128 cytosine, 39, 88, 96 dangerous questions, 238 Darwin, C., 6, 9, 42, 46, 102, 132, 149, 150, 152, 156, 157, 164, 167, 184, 201 Darwinian reductionism, aim of, 23–24 Darwin’s theory, 23. See also theory of natural selection data storage, 107 Davidson, D., 135n, 196, 196n Dawkins, R., 4, 53, 130–31 debugging programs, 234 decapentaplegic gene, 66, 77 deductive derivation, 27 deductive-nomological explanation, 28 definition of fitness, 162, 163 deformed gene, 64, 67 demic diffusion, 205 Dennett, D., 102n, 185, 186, 208 Depew, D., 167–68 derived intentionality, 101, 104, 105, 107, 108, 132 Descartes, René, 3 descent with modification, 151 design problem, 88, 91, 126, 141, 152, 153, 165, 175, 214 determinism, 9, 174–75. See also genetic determinism development, as programming, 75–80 developmental abnormalities, 82, 83 developmental biology, explanatory vacuum of, 57–60 developmental control, 118 developmental generalizations, 57 developmental systems theory, 85, 124 developmental variations, 69 Diamond, A., 231
Didus ineptus, 128 differential equations, 51 diffusible morphogen, 60 diffusion of pigments, 51 dihydropteridine reductase, 230 disjunction, of macromolecular accounts, 31, 34–35, 36; and predicates, 37, 38; of genetic structure, 112; of physical processes, 187 distal-less gene, 49 DNA, 1, 3, 8, 25, 30, 39, 71–72, 87–88, 90, 95, 99, 107, 113, 119, 121, 124, 142, 154, 155, 204, 208, 216; computing, 105–9, 218 DNA polymerases, 154, 156 DNA replication, 155, 187 dnmt3l gene, 89, 91 Dobzhansky’s dictum, 15–21, 20n, 26, 55, 125, 132, 134ff., 140, 144, 149, 158, 181, 182, 200 DO loop, 64, 72 domesticated plants and animals, 206 dominant culture, 236 dopamine, 231 dormative virtue, 58, 59, 112 dorsal lip, 59 dorsal-ventral structure, 80 double-striping program, 67 Down syndrome, 213 downward causation, 82, 181, 194–96 Dr. Pangloss, 112 Driesch, H., 58 Dretske, F., 102n drift, 125, 171, 220; and ecological fitness, 170–74; versus selection, 159, 175–76 Drosophila melanogaster, 22, 48–49, 50, 52, 53, 73, 75, 76, 77, 78, 84, 103, 108, 110, 112, 195; embryo program of, 61–71, 94, 107 dualism, 2, 3 Ducasse, C., 135n dysfunctional traits, 225
Earman, J., 161 Earth, 15, 98, 99, 113, 126, 127, 130, 183, 184, 187 East Africa, 214 ecological fitness. See fitness, ecological ecologists, 8 Eigen, M., 193–94 Ekbohm, G., 163, 164 electromagnetism, 5 eliminativism, 26, 84, 122, 124, 196n embryo, 95 embryogenesis, 57, 70 embryological development, 57, 61, 69, 84, 222, 223 emergence of entropy, 168–70 emergence of second law, 188 emotions, 207, 212 empirical content, 59, 161 empiricism, 162 energetic cost of computing, 107 engrailed gene, 53, 64, 77 ensemble-level properties, 169, 170 ensembles, 176 entropy, 166–70, 176, 197–98, 199 environment, 164, 225, 229 environmental PKU, 231 environmental preservation, 8 epigenesis, 84–93, 95, 97, 109; molecular, 90 epiphenomenalism, 184 epistemic argument against reduction, 14 epistemic reductionism, 179–80 equilibrium, 7, 191, 237–38; cooperative, 214 equiprobability, 170–74 erotetic account of explanation, 36, 41, 44, 47, 53, 70 error rates, 155 essentialism, 42 essential properties, 129 ESTs. See expressed sequence tags ethologists, 91, 162
eugenics, 236 European forests, 213 “Eve,” African, 203 even-skipped gene, 64, 67 evolution of cooperation, 216 evolutionary bottleneck, 203, 208 evolutionary etiologies, 31, 126, 128; history of, 70 evolutionary game theory, 214, 221 evolutionary psychology, 213 evolvability, 127 exact laws in biology, 135–40 exons, 31, 118, 119 expected reproductive rates, 176 expected values, 160n explananda, 18, 26, 79, 145 explanans, 26, 98 explanation sketch, 40, 153 explanation, as aim of science, 26 explanations: adequacy of, 13–14, 145; interests and, 35; irrelevance of, 35; power of, 34, 143; reduction of, 72 expressed sequence tags, 126–27, 131 extended phenotype, 53, 199 extinction, 150 eye morphogenesis, 76 eyeless gene, 75, 94 eyeless protein, 75 eyespots. See butterfly eyespots Fagerstrom, T., 163, 164 Far East, 213 farming, 205 feedback/feed-forward loops, 233 feedback loops, 45, 53 Fehr, E., 220 Felis domesticus, 99 females, 217 Feng, Q., 191 Fermat’s theorem, 106 fertilization, 58, 61, 87 fetal hemoglobin, 128
fetus, 230 Feyerabend, P., 27 FFT. See fundamental theorem of natural selection finch species, 86, 92 finite actual frequencies, 171 Finsen, S., 163, 164 Fisher, R. A., 136, 146, 151, 158, 167 fitness, 134ff., 144, 152, 199; and entropy, 166–70; as probabilistic propensity, 159–66; comparative, 186–87; defined, 151; differences in, 158, 167; ecological, 165, 169, 170–76; measure of, 162; of ensemble, 170; of groups, 197; of molecules, 190; of traits, 160; reproductive, 164 fitness coefficients, 169 follicle cells, 80 fossils, 208 Foster, P., 204 four-color theorem, 105–6 Fox-Keller, E., 109, 111 Frank, R., 207 free-floating mental states, 102 Frege, G., 100 French flag, 64–65, 66, 67, 70 frequency dependence, 141, 168 frog, brain states of, 102–3 Frost-Arnold, G., 82n frozen accident, 97, 99, 143, 153 Fukuyama, F., 9 full explanation, 34, 46 functional biology, 31, 33, 40; description of, 18n, 19, 30, 43, 102; explanation, 45; functional equivalence in, 138; generalizations in, 142, 143; kinds in, 139, 181; laws of, 41; properties of, 139; terminology in, 54 functions, 18–20n; as selected effects, 137–38; of gene, 113 fundamental theorem of natural selection, 159, 167–68 fushi tarazu gene, 64
Galileo, 4 game theory, 207, 220. See also evolutionary game theory gap genes, 64, 67, 75 gases, 187 gastrula, 58 gastrulation, 61, 69 Gehring, W., 75–77 gender roles, 224 gene, 22–23, 33, 48, 62–68, 92, 194; eliminativism about, 111; expression of, 87; function of, 87, 113; number of, 61, 111, 115, 126; sequencing of, 8; as heuristic device, 122; as kind-term, 129; history of concept, 110–22 gene chip, 212, 218, 233 gene-chip diagnosis, 236 gene copying, 155 gene expression, 83, 114 gene for ____, 123, 209, 210, 217, 218, 225–28, 229, 232 gene for cooperation, 211 gene for PKU, 228–33 gene individuation, functional, 49. See also counting genes generalizations, 134, 145, 165, 235 gene sequence, 102n, 207, 212 gene-sequence chronology, 220 gene therapy: germ line, 234; somatic, 234 gene switching, 71–72 genetic code, 85, 96–110; and original intentionality, 99–105 genetic defects, 226 genetic determinism, 10, 23, 94, 96, 201, 207, 211, 213, 222ff., 228, 235; defined, 222–24 genetic versus epigenetic heredity, 85, 87 genetic evolution, 211 genetic fallacy, 11; hardware and, 80–84; networks and, 71–72; program and, 74, 76–77, 78; software and, 80–84 genetic knockout, 209
genetic program of development, 56ff., 223 genetic regulation of function, 233 genetics, history of, 74 genic selection, 198 genocentrism, 3, 73, 74, 85, 86, 90, 92, 95–110, 221; and epigenesis, 84–93; and information, 95–99 genomes, 213 genomic conflict, 89 genomic imprinting, 87–90, 91 genomic rearrangements, 201ff., 213 genomics: defined, 202; and cooperation, 212 genotype, 158 geological time, 202 geometry, of cells and tissue, 81–82, 194, 195 germarium, 79 giant gene, 64, 66 Gibbons, A., 205, 206 Gibbs, J. W., 174 Gillespie, J., 162 global warming, 8 glutamate, 97, 131 Glymour, B., 162 Godfrey-Smith, P., 96n Goldstein, D., 204 gorilla, 210, 217, 219 Gould, S. J., 10, 45, 237, 238 gradient, 71, 195; cellular, 83 Gray, R., 73, 85, 124, 224 Griffiths, P., 62, 73, 85, 96n, 100, 123, 124, 224 group selection, 93, 197 Growth and Form, 50 guanine, 88, 96 gurken protein, 80 h19 genes, 88, 89 hadrons, 191 Haig, D., 88 hairy gene, 64, 67
Halder, G., 75 Hamilton, W. D., 206, 207 hardware, 234 hardware/software distinction, 72, 105, 110 Hardy-Weinberg law, 146 hawk versus dove game, 207 hbf gene, 128 heaven, 1, 21 Hedgehog gene, 53, 77, 117, 130, 131 Hedgehog protein, 77 Hedges, B., 204 Hellman, G., 178 hemoglobin gene, 31, 117, 130, 131 hemoglobin protein, 31, 225 Herbert, A., 8, 90 hereditary variation, 185 heterozygotes, 230 heuristic device, 122 high-accuracy replication, 155 high-fidelity hereditary transmission, 91; storage of, 156 higher-level kinds, 196n higher-level programs, 70, 71, 105 higher-level selection, 54 higher-order kinds, 196n histidine, 97, 101 historical explanation, 46 historical facts, 41, 48 historical hypotheses, 183 historical patterns, 40 historical processes, 153 history, 11, 39, 43, 79, 177, 184, 202; of gene concept, 110–21; of life on earth, 127; of science, 13 homeotic selector genes, 49, 64, 67, 75 hominid evolution, 211, 215 Homo erectus, 213, 214, 216, 218 Homo floresiensis, 215, 216, 219, 221 Homo sapiens, 14, 128, 204, 212, 213, 214, 215, 216, 218, 220 homologous sequence, 94, 126, 220 homosexuality, 224, 232
horizontal transmission, 215 host imprinting, 86, 91, 109 how-possible explanations, 43–54, 70; versus why-necessary explanations, 47–53 Hull, D., 29 human chromosomes, 212 human genome project, 202 human intentionality, 108 human prehistory, 202–6 Humean causation, 135n hunchback gene, 64, 66, 67 Hurles, M., 204 hybridizing nucleic acid sequences, 131 hydrophilic interactions, 192 hypercycle, 193 hyperphenylalaninemia, 230, 231 ideal explanatory text, 41 ideal gas law, 48, 186, 187 igf-2 genes, 88, 89 imaginal disk, 77 imitation learning, 214, 215, 216, 220 implicit definitions, 29 inborn errors of metabolism, 23, 226, 227, 232 inclusive fitness, 207 incommensurability, 28 indeterminism, 185. See also quantum mechanics Indian Subcontinent, 213 individuation, 137; of genes, 114–16, 121, 124. See also counting genes induction, of embryo, 58, 59, 60 inequalities, 222 inexact laws, 140–45. See also ceteris paribus clauses inference rules, 136 information, 60, 74, 84, 85, 95–99; content, 100–101, 103; role of genes and, 22, 94ff., 125; storage of, 14, 39, 107, 108; transmission of, 39
initial conditions, 98, 99, 172 institutions, human, 10, 11 insulin gene, 117 intelligence, 209, 232 intelligent design theory, 9, 152 intentionality, 73, 84, 103, 217; characterization of, 100–101; idiom and, 73; metaphors and, 60 interactors, 86, 150 intercellular communication, 82 interfering forces, 141. See also ceteris paribus clauses intertheoretical explanation, 28 intracellular environment, 226, 232 introns, 31, 119, 122, 127, 145 IQ, 224, 226, 230, 235, 237 Irish, 205 irreducible hardware, 84 island biogeography, laws of, 152 iterated games, 207; prisoner’s dilemma, 210 Jukes, T. H., 98 junk DNA, 120, 127, 130, 202, 204, 208, 212 jurisdictions, 129–30 just-so stories, 45, 207 Kant, I., 9, 13, 14 Kanzi, 215 Kauffman, S., 192, 193, 194 Kenya, 203 Kepler, J., 4 Kim, J., 32, 38, 180, 196n Kimura, M., 125n kin-altruism, 211, 217 kind-terms, 233 Kitcher, P., 30, 33, 47, 50, 51, 54, 81–82, 135, 158, 179–80, 183, 194–95, 196, 224, 229, 231–32, 237 knirps gene, 64, 66, 67 knock-out experiments, 64 krüppel gene, 64, 66, 67 Kuhn, T., 27
La Rochefoucauld, Duc de, 9 Labrador (dog), 12 laissez-faire eugenics, 236 language, 214, 215, 216 Laubichler, M., 82n Lauder, G., 19–20n, 138n law(s), 134–46; exceptionless, 6; and initial conditions, 98; of large numbers, 171; of mass action, 191; of natural selection, 54, 168; of physics, 146; standard analysis of, 182; as timeless truths, 142. See also individual names of laws layer-cake reduction, 28, 32, 40 leptons, 191 levels of selection, 194–97 Levins, R., 10 Lewens, T., 159 Lewis, D., 160n, 162 Lewontin, R., 10, 42, 45, 165, 207, 209, 227 life cycles, 85 limb development, 77 line-drawing program, 64, 66 linkage, 114 lions, 142 lipid bilayers, 83, 192, 194 Lloyd, L., 29 local conditions, 98; on earth, 97; as trends, 143; as truths, 183 Locke, J., 201 logical empiricists, 21, 27 logical implication, 154 Lois Lane, 100 long-germ-band insects, 69, 77 long-run relative frequency, 159, 160, 171, 175 low fidelity, 155 lower-level programs, 71 Macaulay, V., 205 macromolecular explanations, 26, 39, 73, 81
major premise, 136 maladaptive traits, 232 male dispersal, in primates, 210 mammalian coat color, 51 Mangold, H., 58 Marxian historians, 11 Maryanski, A., 211, 214 mass noun, 114, 116 master builder, 227 master control genes, 75, 94, 95 maternal effect genes, 64, 67, 89; mRNA, 85, 109; nurse cells, 69 mathematical logic, 28 mathematical models. See models mathematicians, 106 matter in motion, 3 Matthen, M., 159, 167 Mattick, J. S., 70, 72 Maynard Smith, J., 96n, 97, 100 Mayr, E., 16, 42, 127, 128, 181 measures, on infinite sets, 188 meiosis, 14, 36, 87, 117 meiotic drive, 114 Mendel’s laws, 6, 114–15, 117, 143, 148, 149 Mendeleev, D. I., 5, 113 Mendelian gene, 30 Mendelian ratios, 112 Mendelian transmission, 228 mental retardation, 226, 228 messenger RNA, 39, 62, 63, 118, 123, 126, 224, 233–34 metabolites, 230 metaphor, 74, 100 metaphysical reductionism, 179–80 metaphysics, 25, 37, 201 methylation, 88, 90, 91, 109 Micronesia, 206 microarray, 212, 217, 218, 233 micro-RNA, 61, 62, 70, 72, 78, 90, 118, 120, 131 Middle East, 205, 206 Millikan, R., 102n Mills, S., 160
mind/body problem, 2 miRNA. See micro-RNA Mitchell, S., 183 Mithen, S., 215 mitochondria, 121 mitochondrial DNA, 203, 204, 208, 219 mitotic drive, 154 models, 29, 136, 137, 146, 154, 164; biological, 146–49; in physics, 148; systems of, 79 molecular biologists, as reductionists, 4–8 molecular biology, putative laws of, 142; and structural differences, 233 molecular developmental biology, 56ff., 94ff. molecular milieu, 233 molecular process gene concept, 123 molecules, 113, 188–92 momentum, 169, 182, 185 monkey species, 210 Monod, J., 97 morality, 207 morgans, 144 motives, for antireduction, 8–11 mouse, 87, 94, 122; male, 122 mRNA. See messenger RNA mtDNA. See mitochondrial DNA multiple real, 187 multiple realizability, 21, 30, 110, 117 Murray, J. D., 50–52 mutations, 53, 69, 117, 153, 216, 229 Nagel, E., 27, 136 Nash equilibrium, 207 natural history, 54; and kinds, 121, 124; as lottery, 236 natural selection, 15, 17, 38, 40, 98, 101–2, 108, 141, 162, 164, 204; as filter, 137; and individuation of genes, 121–33; irreducibility of, 16; mindlessness of, 186; at molecular level, 191; and physicalism, 178–83; and structural differences, 22, 31, 79, 113, 121, 130, 138–40, 151
Neander, K., 102n Neanderthal, 204, 208, 216, 218, 219 necessary truths, 147, 154 Nelson, P., 62 Neumann-Held, E. M., 123, 124 neurological capacities, 92; malfunction of, 218 neurological system, 13 neuroscience, 2, 10 neurotransmitters, 93 neutralism, 125n Newton, I., 4, 5; versus Kant, 9, 13, 14; and laws of motion, 4, 15, 72, 137, 147, 173, 186 Newtonian determinism, 171, 174 Newtonian dynamics, 169, 185 Newtonian mechanics, 28, 182 niche construction, 93 Nieuwkoop center, 58 Nieuwkoop, P., 58 Nijhout, H. F., 49–50 1953, 5, 6, 116 Nobel Prize, 56, 61, 144 nomological force, in explanations, 134ff., 183 nomological generalizations, 153. See also law(s) nonampliative inference, 137 nonerotetic accounts of explanation, 37 nonhuman DNA, 206 nonpolar bonds, 192 Noonan, J., 206 norm of reaction, 224, 235 normal science, 56 normality, 224, 225 Norwegians, 205 “nothing but” thesis, 2, 3, 4, 25, 117 nouns, 114 NP-hard problems, 106–7, 109 nuclear structure, 72 nucleic acids, 22, 90, 95, 98, 113, 117, 195, 222; chauvinism toward, 24 nucleotide, 87
nucleotide sequence, 195 nurse cells, 79 Nüsslein-Volhard, C., 61 oblique transmission, 215 Occam’s razor, 178 occurrent properties, 232 Ohm’s law, 109 Okasha, S., 199 omega points, 4 On the Origin of Species, 149 one-gene/one-enzyme hypothesis, 114, 115, 118, 119, 144 one-gene/one-protein hypothesis, 119 one-gene/one-RNA molecule, 118 ontic theory of explanation, 41 ontological reduction. See metaphysical reductionism oocyte, 80, 89 oogenesis, 79 open reading frame, 120, 131 operationalization, 165, 209 “order for free,” 192–93 organic chemistry, 20 organizer, 58, 59, 60 organogenesis, 58 original intentionality, 99–108 osmosis, 83 overdetermination, 83 ovum, 79 owls, 44 oxygen, 129 Pääbo, S., 208 Prader-Willi syndrome, 90 pain, 2 pair-rule I genes, 64, 67 pair-rule II genes, 64, 69, 75 paired gene, 64 Paley, W., 46, 102 paradigm, 144 parasitizing strategy, 91 Park, T., 191
paternal genes, 87, 89, 90 PCR. See polymerase chain reaction periodic table, 185 Perrigo, G., 210 pH, 192 Phage T4 virus, 122 phenotype, 81, 114, 209, 227, 229 phenotypic control, 117 phenotypic expression, 224 phenotypic variation, 181 phenylalanine, 228, 229, 234 phenylalanine decarboxylase, 230, 231 phenylalanine hydroxylase, 229 phenylketonuria, 226, 233, 234; gene for, 228–32 phenylketonuria-II, 230 philosophers of history, 43–44 philosophy of mind, 103 philosophy of psychology, 30 Philosophy of Science, 96n phlogiston, 116 physicalism, 2, 3, 7, 8, 20, 21, 25, 32, 41, 135, 187; fact fixing by, 20, 25, 178, 184, 188; and natural selection, 178–83 physical laws, 42, 140, 141, 177ff., 197, 200 physical process, 178, 184, 187 physical science, 5, 16, 22, 42n, 156, 157, 179, 186 physicalist antireductionism, 7–11 physicists, 5, 185 physics, 6, 7, 8, 15, 19, 135, 136, 141, 182, 184, 186, 188, 193 PKU. See phenylketonuria placenta, 87, 88, 89 pleiotropy, 140n Plomin, R., 209 PNS, 150–53, 154, 156, 157, 158–65, 169, 187, 188, 197, 198, 199, 200; as basic to biology, 177; as basic law of chemistry, 189–94; for compounds, 192; as derivable, 191; as derived, 186–88;
for molecules, 190; as underived law, 181–86 polar bear, 225 polarity genes, 67 political philosophy, 201 Pollack, J., 104 polygenetic traits, 114, 216 polymerase chain reaction, 208 Polynesia, 206 polynucleotide sequences, 145 polypeptides, 235 polytene chromosomes, 114 pop science, 224 Pope John Paul II, 9 population genetics, 148–49 populations, 167 position, 182, 185 positional information, 59, 99, 100 postpositivist reduction, 22, 27–32, 40 posttranscriptional modification, 121 posttranslational modification, 119, 121 Precis coenia, 44, 47, 49–50, 52 predictive power, 145 prefertilization, 78 prehistory, 202 pre-Holocene, 214 primates, 210 principle of natural selection. See PNS probabilistic operator, 170 probabilistic propensities, 161, 162, 176 probability, 151; interpretations of, 160–62 programmed manifestation, 18–19n programming behavior, 86 programs, 103–10 pro-insulin, 225 promoter, 118 proofreading, 95, 99 proofs, in mathematics, 105 propensity definition of fitness, 160, 163, 164, 170 Protagoreanism, 35–36, 41 protein, 24, 64; synthesis, 101, 130
proteonomics, 24 proximate cause, 17 proximate explanation, 17, 155 proximate/ultimate distinction, 26, 33, 42, 181 PS process, 33, 36, 37 pseudoscience, 224 psychology, 178, 184, 206 purines, 98 Putnam, H., 35n PV = nRT, 148, 187 pyrimidines, 98 QTL. See quantitative trait loci quantitative trait loci, 209–10 quantum indeterminism, 171 quantum mechanics, 5, 35, 161, 185 quarks, 8, 139 quinoid dihydropteridine reductase, 230 Railton, P., 41, 161, 180 random variation, 17 randomness, 173 Rebek, J., 191 recombinant genetics, 56 recombination, 14, 117 reduced gene, 67 reduction, as a form of explanation, 28 reductionism, defined, 4; as form of explanation, 28; as research program, 2–7, 50, 56, 81, 84; as term of abuse, 11; of laws, 55; obstacles to, 7 redundancy of code, 120 referential opacity, 102n regulatory genes, 16, 77, 94, 118; pathway, 52; protein, 118; sequence, 212 relativity, theory of, 5 religion, 9 remethylation, 90 Renfrew, C., 204 repair mechanisms, 39 replication. See self-replication
replicators, 150 representation, 84, 95, 99 repressors, 118; genes, 63 reproductive fitness. See fitness, reproductive research program, 23, 25, 84, 134, 207, 223; reductionism as, 2–7 retardation, 217 reverse engineering, 71, 80, 187, 207 ribosomal RNA, 39, 118 ribosomes, 31 Richards, M., 205 risk taking, 224, 232, 233 RNA, 31, 39, 118, 124, 129, 130, 142; and amino acids, 193; primer, 154, 155; processing, 71; viruses affecting, 121, 144, 155 robot, 62 Rosenberg, A., 32, 39, 82n, 96n, 135n, 187 Roux, W., 58 runt gene, 64 Russell, B., 100 S phase of meiosis, 62 Saint Peter, 1, 21 Salmon, W., 41, 161, 180 Santos, M. A., 98 Sarkar, S., 62, 96n Schaffner, K., 27, 28 schizophrenia, 224, 232, 233, 235 Schuster, P., 193 scientific realists, 122 Scriver, C., 230 sea urchin, 58 Searle, J., 103, 104, 105 second law of thermodynamics, 166–71, 174, 177, 185, 188, 193, 197–98, 199; derivability of, 193 second order predicate, 38 selector gene network, 64 self-denying ordinance, 237
selfish DNA, 130, 131, 132 self-replication, 155, 189, 190 semantic account of theories, 29 semiconservative replication, 154 sensitivity to descriptions, 102 sequence comparison, 218 sequence similarity, 212 serotonin, 217 Sesardic, N., 227, 238 sex combs gene, 67 sex combs reduced gene, 64 sex-ratio model, 136, 146, 151, 154 sexual reproduction, 91, 127 sexual selection, 204 sey gene, 76 Shannon-Weaver information, 96 short-germ-band insects, 69, 70, 110 Siberia, 213 sickle-cell anemia, 12, 131, 234 silicon chips, 107 Silk, J., 204, 205 single-gene disorders, 232 single-nucleotide polymorphism, 204 skew, 163 skinning the cat, 144 Sklar, L., 174 Skyrms, B., 215, 220 SNP. See single-nucleotide polymorphism Sober, E., 35, 136, 168, 179, 183, 197, 219, 220 social facts, 178 social laws, 184 social science, 213 software, 62, 234 somatic cells, 57, 94, 218, 228; regulation of, 222, 223, 234 songbird tune, 92 Soodyall, D., 204 spandrels, 45, 47 spatial arrangement. See geometry special theory of relativity, 137 species, 79, 127–28, 129
speculative mechanisms, in development, 57–58 speech, 211, 220 speech defects, 217 Spemann, H., 58 sperm, 89 spermatogenesis, 79 Spinoza, B., 46 square peg–round hole argument, 35n stability and replication, optimal combination, 189, 190, 192 stabilizing selection, 216 Stamos, P., 162 standard model of microphysics, 185 standards of evidence, 237 standing conditions, 37n start codon, 120 statistical mechanics, 158 Sterelny, K., 96n, 158, 224 stereochemical theory of genetic code, 98 Stock, G., 236 stoichiometry, 191 Stoneking, M., 204 stop codon, 120 strategic interaction, 141, 142n strict laws, 139 stripe doubling, 64, 65, 66 structural genes, 16, 70, 77, 118, 130, 144, 228 structural heterogeneity of biological kinds, 155 structural property, 139 Stryer, L., 155 Stumpf, M., 204 subjective probability, 174 subprograms, 70, 72, 233 substrate neutrality, 185, 186, 197 substrates, 189 subsumption, 33 supercomputers, 105, 106 Superman, 100
supervenience, 21, 31, 117, 179, 187, 190, 232 syntactic account of theories, 28 tailless gene, 64, 67 tailless protein, 64, 70 Tanzania, 203 Tatum, E., 114 tautology, 156 taxonomy, 19, 122, 125, 128, 233 technological improvements, 20, 21, 145 teleology, 18–19n teleosemantics, 102n templates, 189, 193 terminal genes, 64, 67, 75 “the gory details,” 197 theologians, 9 theories, general, 6 theory of natural selection, 39, 46, 132, 134, 156, 157, 177 theory of other minds, 211, 215, 216, 217, 220 thermodynamics, 4, 159, 170, 185–86 Thompson, D’Arcy, 50, 54 Thompson, F. W., 178 Thompson, P., 29, 164 thymine, 39, 96 tissues, 195 tit-for-tat strategy, 209, 220 token/type distinction, 117 tokens, 185 top-down research, 54 torpedo protein, 80 torso gene, 64, 67 torso protein, 64, 70 torso-like gene, 69 tramtracks gene, 64, 67 transamination, 230 transcription, 142 transfer RNA, 39, 118 translocations, 216
traveling salesman problem, 106–7 tree of life, 183 Tribolium castaneum, 69, 70 truth table, 63 Trisomy, 213 Turner, J., 211, 214 types, biological, 30 typological explanation, 42 tyrosine, 231 ultimate explanations, 17, 42, 46, 55, 89, 134, 181 ultimatum game, 207, 209 ultrabithorax gene, 64, 67 unfalsifiability, 46 unfertilized egg, 65 unity of type, law of, 149–50 untenable dualism, 25ff., 178 uracil, 39 uranium, 172, 173 useful fiction, 93 valine, 131, 234 van der Waals forces, 192 variance, 163, 164 variation, 79 vehicles, 86 vertebrates, 73 Viduinae. See African widowbird violence, 224, 232, 233 viruses, 124 vital spirits, 4 vocalization, 92 Waddington, C. H., 86 Wagner, G., 82n Walsh, D., 159 Waters, C. K., 121, 123, 124 Watson, J. D., 1, 3, 4, 7, 27, 115, 116, 125 Weber, B., 167–68 Weisman, F., 113 Western hemisphere, 206
wetness, of water, 12–13 whole-greater-than-parts argument, 12 why-necessary explanations, 43–54, 55, 70 Wilson, D. S., 197, 220 Wilson, E. O., 4, 94, 202 wing imaginal disk, 47–48, 49 wingless gene, 49, 64 Wolpert, L., 48, 59, 60, 65, 69, 80 Woodward, J., 183 World War I, 11, 43
Wright, L., 18–19n writing, 202, 205 X chromosome, 217 Xenopus laevis, 58 Y chromosome, 203, 219 zebras, 142 zerknüllt gene, 66 zygotic hunchback genes, 67
263
“For most philosophers, reductionism is wrong because it denies the fact of multiple realizability. For most biologists, reductionism is wrong because it involves a commitment to genetic determinism. In this stimulating new book, Rosenberg reconfigures the problem. His Darwinian reductionism denies genetic determinism, and it has no problem with multiple realizability. It captures what scientific materialism should have been after all along.”
Elliott Sober, University of Wisconsin
“Alex Rosenberg has been thinking about reductionism in biology for a quarter of a century. His latest discussion is many-sided, informed, and informative—and extremely challenging.”
Philip Kitcher, Columbia University
“Over the last twenty years and more, philosophers and theoretical biologists have built an antireductionist consensus about biology. We have thought that biology is autonomous without being spooky. While biological systems are built from chemical ones, biological facts are not just physical facts, and biological explanations cannot be replaced by physical and chemical ones. The most consistent, articulate, informed, and lucid skeptic about this view has been Alex Rosenberg, and Darwinian Reductionism is the mature synthesis of his alternative vision. He argues that we can show the paradigm facts of biology—evolution and development—are built from the chemical and physical, and reduce to them. Moreover, he argues, unpleasantly plausibly, that defenders of the consensus must slip one way or the other: into spookiness about the biological, or into a reduction program for the biological. People like me have no middle way. Bugger.”
Kim Sterelny, author of Sex and Death
The University of Chicago Press
www.press.uchicago.edu
isbn-13: 978-0-226-72729-5
isbn-10: 0-226-72729-7