Distributed Cognition and the Will
Individual Volition and Social Context

edited by Don Ross, David Spurrett, Harold Kincaid, and G. Lynn Stephens

A Bradford Book
The MIT Press
Cambridge, Massachusetts
London, England
© 2007 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

MIT Press books may be purchased at special quantity discounts for business or sales promotional use. For information, please e-mail [email protected] or write to Special Sales Department, The MIT Press, 55 Hayward Street, Cambridge, MA 02142.

This book was set in Stone Serif and Stone Sans on 3B2 by Asco Typesetters, Hong Kong, and was printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Distributed cognition and the will : individual volition and social context / edited by Don Ross . . . [et al.].
p. cm.
"A Bradford book."
Includes bibliographical references and index.
ISBN-13: 978-0-262-18261-4 (hardcover : alk. paper)
ISBN-13: 978-0-262-68169-8 (pbk. : alk. paper)
1. Will. 2. Act (Philosophy) 3. Distributed cognition. I. Ross, Don, 1962–
BJ1461.D57 2007
128'.3—dc22
2006030813

10 9 8 7 6 5 4 3 2 1
Contents

Contributors

1 Introduction: Science Catches the Will
Don Ross

2 The Puzzle of Coaction
Daniel M. Wegner and Betsy Sparrow

3 What Kind of Agent Are We? A Naturalistic Framework for the Study of Human Agency
Paul Sheldon Davies

4 The Illusion of Freedom Evolves
Tamler Sommers

5 Neuroscience and Agent-Control
Philip Pettit

6 My Body Has a Mind of Its Own
Daniel C. Dennett

7 Soft Selves and Ecological Control
Andy Clark

8 The Sources of Behavior: Toward a Naturalistic, Control Account of Agency
Mariam Thalos

9 Thought Experiments That Explore Where Controlled Experiments Can't: The Example of Will
George Ainslie

10 The Economic and Evolutionary Basis of Selves
Don Ross

11 Situated Cognition: The Perspect Model
Lawrence Lengbeyer

12 The Evolutionary Origins of Volition
Wayne Christensen

13 What Determines the Self in Self-Regulation? Applied Psychology's Struggle with Will
Jeffrey B. Vancouver and Tadeusz W. Zawidzki

14 Civil Schizophrenia
Dan Lloyd

Index
Contributors
George Ainslie is chief psychiatrist at the Veterans Affairs Medical Center, Coatesville, Pennsylvania, and studies the motivational implications of hyperbolic discounting (picoeconomics). His Breakdown of Will was recently a target book in Behavioral and Brain Sciences.

Wayne Christensen is Lecturer in Philosophy at the University of Adelaide and studies the evolution of cognition and agency. His recent articles include "Neuroscience in context: the new flagship of the cognitive sciences" (coauthored with Luca Tommasi), appearing in Biological Theory, and "Cognition as high order control," to appear in New Ideas in Psychology.

Andy Clark is Professor of Philosophy in the School of Philosophy, Psychology, and Language Sciences at Edinburgh University in Scotland. He is the author of several books, including Being There: Putting Brain, Body and World Together Again (MIT Press, 1997) and Natural-Born Cyborgs: Minds, Technologies and the Future of Human Intelligence (Oxford University Press, 2003). His research interests include robotics and artificial life, the cognitive role of human-built structures, specialization and interactive dynamics in neural systems, and the interplay of language, thought, and action.

Paul Sheldon Davies is Associate Professor of Philosophy at The College of William and Mary. Author of Norms of Nature: Naturalism and the Nature of Functions (MIT Press, 2001), he is presently writing a book on naturalism and human agency.

Daniel C. Dennett is University Professor and Austin B. Fletcher Professor of Philosophy, and Co-director of the Center for Cognitive Studies, at Tufts University, Medford, Massachusetts.

Harold Kincaid is Professor and Chair of the Department of Philosophy and Director of the Center for Ethics and Values in the Sciences at the University of Alabama at Birmingham. He has published widely in the philosophy of science, particularly on issues in the philosophy of social science and the unity of science. His coedited volumes Value Free Science: Ideal or Illusion? and The Oxford Handbook of Philosophy of Economics are forthcoming from Oxford University Press.

Lawrence Lengbeyer is Associate Professor of Philosophy at the United States Naval Academy. His recent published and presented work has dealt with a wide variety of topics, including selflessness, courage, racist belief, sexist humor, anti-Semitism, artists' rights, self-deception, make-believe, reactions to fictions, cognition and emotion, and mental compartmentalization.

Dan Lloyd is a professor in the Department of Philosophy, Trinity College, Hartford, Connecticut, and the author of Simple Minds (MIT Press, 1989) and Radiant Cool: A Novel Theory of Consciousness (MIT Press, 2003).

Philip Pettit is the Laurance S. Rockefeller University Professor of Politics and Human Values at Princeton University, where he teaches political theory and philosophy. His recent books include A Theory of Freedom (OUP, 2001) and a selection of papers, Rules, Reasons and Norms (OUP, 2002). He is the co-author of a number of books, including, recently, The Economy of Esteem (OUP, 2004), with Geoffrey Brennan, and Mind, Morality and Explanation (OUP, 2004), a selection of papers co-authored with Frank Jackson and Michael Smith.

Don Ross is Professor in the Departments of Philosophy and Finance, Economics, and Quantitative Methods at the University of Alabama at Birmingham and Professor in the School of Economics at the University of Cape Town, South Africa. His book Economic Theory and Cognitive Science: Microexplanation was published by MIT Press in 2005. Every Thing Must Go, with J. Ladyman, and Midbrain Mutiny, with C. Sharp, R. Vuchinich, and D. Spurrett, will appear in 2007.

Tamler Sommers is Assistant Professor at the University of Minnesota, Morris. He is writing a book on moral responsibility and the emotions.

Betsy Sparrow is a doctoral candidate in Psychology at Harvard University. She studies the experience of authorship in action and thought, and has published a series of recent research papers in the Journal of Personality and Social Psychology.

David Spurrett is Professor of Philosophy at the Howard College Campus of the University of KwaZulu-Natal, where he is director of the Mind and World working group and of the cognitive science program. His active research areas relate to phenomena of disordered agency (including addictions), philosophy of science, and metaphysics. He is also one of the editors of the South African Journal of Philosophy.

G. Lynn Stephens is Professor of Philosophy at the University of Alabama at Birmingham. He specializes in the philosophy of mind, consciousness, and psychiatry, and is author, with George Graham, of When Self-consciousness Breaks: Alien Voices and Inserted Thoughts, published by MIT Press.

Mariam Thalos is Professor of Philosophy at the University of Utah. Her research focuses on foundational questions in the sciences, especially the physical, social, and decisional sciences. She is the author of numerous articles on causation, explanation, and how relations between micro and macro are handled by a range of scientific theories, as well as articles in political philosophy and action theory. She is a former fellow of the National Endowment for the Humanities, the Institute of Advanced Studies of the Australian National University, and the Tanner Humanities Center, and is currently working on two book projects, The Natural History of Knowledge and The Natural History of the Will.

Jeffrey B. Vancouver is Associate Professor of Industrial/Organizational Psychology at Ohio University. He studies the dynamics underlying human motivation in work contexts and recently published a series of papers in the Journal of Applied Psychology on the role of beliefs in motivation and behavior, using a control theory perspective.

Daniel M. Wegner is Professor of Psychology at Harvard University. He studies the role of thought in self-control and social life, and is author of White Bears and Other Unwanted Thoughts and of The Illusion of Conscious Will.

Tadeusz W. Zawidzki is Assistant Professor of Philosophy at George Washington University in Washington, DC, and studies the relations between, and the evolution of, language and cognition. He is recent author of several articles on these issues and forthcoming author of the monograph Dennett (Oneworld).
1 Introduction: Science Catches the Will
Don Ross
If there were an all-star list of concepts from the history of western philosophy based on the volume of attention over the years, the concept of will would be among the list's elements. The reasons for this are not obscure. It has been taken as a principal source of human specialness that we are putatively original authors of some events, which themselves then get dignified as a select subset called "actions." Exercises of will have been regarded as a sui generis type of process, events of "agency." This in turn opens a raft of questions as to who else besides the prototypes of agency, individual human beings, might also partake wholly or partly of it. Animals? Infants and cognitively impaired people? Suitably organized and structured groups of people? Integral parts or functionally distinctive parts—'faculties'—of people? These questions in turn seem to rest on resolution of others. Does agency essentially involve rationality, and if so, of what kind and beyond what threshold? How can the apparently close relationship between agency and motivation by reference to reasons be squared with the intuition that spontaneous, subjectively nondeliberative decisions to act seem like the paradigmatic exercises of will—even though someone who always and only acted spontaneously and unpredictably would likely not be deemed an agent at all, and so not be thought to properly will anything? During every age of western philosophy without exception these questions, and the assumptions required to make sense of them, have been in prominent play. The concept of will is central in pre-modern Christian philosophy for obvious reasons: the main theme of the Christian myth is a gift of free will to humans by an omnipotent god, resulting in a drama that is complex, to put it mildly. The consequent entrenchment of a conceptual network based around the will in all aspects of western moral culture left the notion less vulnerable than most of its fellow pre-modern conceptual all-stars when the scientific revolution arrived. The majority of early modern philosophers were inclined to accommodate the concept of will into scientific metaphysics
rather than aim to displace it or explain it away. Hume, as usual, is the great exception, while his foil Descartes also plays his customary role as the exemplary synthesizer. Ingeniously, Descartes borrowed the Christian conception of will as the basis of both human specialness and human sinfulness so as to buttress his version of science-friendly epistemology. That is, will was presented by him as a necessary part of the source of all error, from which it required rescue and discipline by the faculty of understanding. But because he allowed the will to retain its role as the source of sui generis mental activity, he was able to appeal to it to try to insulate his program for sweeping mechanization of nature from both apparent counterexamples based on nonmechanical human agency and from dangerous conflicts with the prevailing moral and social order. There were too many sharp minds in the church of Descartes’s day for this cunning to turn the intellectual authorities then and there, but despite losing almost all battles to his critics in the debates following the Meditations, he gradually—posthumously—won the war. The efforts of Hume and Nietzsche notwithstanding, Cartesian dualism became the basis for the prevailing popular metaphysics of both the natural order and of morality in western culture. So it clearly remains. However, the academy—or at least that part that is led by science—has almost all defected from the popular picture. Among professional students of mind and behavior, dualism has few adherents. However, support for the idea of free will, under some alternative interpretation, probably still commands majority assent, at least if the alternative is taken to be the thesis that the appearance of agency is an illusion. Two simple propositions stand in the way of banishing the concept of will to the realm of caloric and demonic possession—and the Cartesian faculty of understanding. First, it is difficult to see how attributions of moral responsibility can be justified if no one is really the author of their actions. It is, of course, generally recognized among philosophers and psychologists that discomfort at a belief’s consequences cannot be invoked to refute it. If we have been blaming and punishing people, on a vast scale, who in fact cannot possibly deserve it, then we should be prepared to see our moral culture and the institutions that rest on it revised. However, contemplation of this vertiginous challenge is discouraged by the second simple proposition: no argument against the existence of the will, however cogent, seems to carry conviction stronger than everyone’s sense that they can, for example, decide to raise their left arm and then feel and watch it go up. And if we can each autonomously choose to raise or not raise our arms, presumably we
can autonomously choose to pilfer or not pilfer the pension fund and poison or not poison our enemies. Thus, although dualism is a marginalized intellectual preference these days, the pivotal concept of will on which it has always rested remains philosophically current. We can note signs of unease in semantic trends, however. Philosophers and scientists tend no longer to use the word "will" by itself—that is, apart from its occurrence in the phrase "free will." This reflects the fact that no one continues to believe in classical faculty psychology (popularity of cognitive modules in some quarters notwithstanding). Thus while free will might be the most natural name for a putative real phenomenon, the idea that it resides in the capacities of some sort of entity is disreputable. When philosophers are inquiring into the nature of the will itself, as opposed to free will, they are more likely to say they are investigating agency. I do not advise them against this more cautious language, but it can obscure the fact that in the worldviews of many avowedly naturalistic philosophers a traditional pre-modern concept still influentially lurks. Among philosophers who have recently wrestled with this challenge, the two most widely read are Donald Davidson and Daniel Dennett. Davidson's approach is analytic: he tries to break down the monolithic problem of how will can exist in a natural world into less grandiose constituent problems that, while still hard, might be tractable.1 Such problems include "Can reasons be causes?" and "How is it possible that people sometimes act so as to disappoint themselves with respect to their own standing preferences?" If these questions and others like them can be given sensible naturalistic answers, then in the course of working them out we are in effect learning how to reconcile the familiar inherited conceptual network around the will with our scientific metaphysics. Dennett, by contrast, has preferred to tackle what I called the monolithic problem synthetically and head-on: in two books (1984, 2003) and many supporting articles, he has argued that much of what has traditionally been said about the will is indeed shown by science to be illusory, but that the consequences of this for everyday moral life have been greatly exaggerated. We have, as Dennett famously puts it in the earlier book, all the free will worth wanting. Endorsing this conclusion, if we do so, can still leave us wondering whether we have all the will, simpliciter, worth wanting from the point of view of a metaphysics of the person that is both comprehensible and in accord with scientific evidence; this is the issue that motivates Dennett's second book on the subject.
The pressure of the scientific evidence in question has recently become much more acute than in the past. The main source of early modern tension, the fact that mechanical determinism was a broadly successful program in post-Newtonian physics, might have made the will into an object of philosophical suspicion, but it can hardly be said to have constituted true scientific evidence against it. However, the vast increase in the sophistication of the brain and behavioral sciences over the past few decades changes the situation dramatically. We are beginning to understand, on the basis of direct empirical investigation, how human behavior at various scales of analysis is controlled and influenced. From this perspective, the will has always mainly been a black box into which have been bundled all efferent behavioral control factors that have been inferred to exist but remained unexamined by science. As long as that set included most of the hypothesized factors, philosophers could freely speculate without inviting justified charges of fecklessness. This circumstance no longer prevails. The factors that are lately being dragged out of the black box arise on both micro and macro scales. By "micro" factors I refer to influences on short-term decisions and fine-grained calibration of action: that is, within-brain causal antecedents of raising or not raising arms, pulling or not pulling triggers, and, in a realm of pressing policy relevance, taking or not taking another drink, cigarette, slice of pie, or pull on the slot machine lever. By "macro" factors I refer to influences on the gradual sculpting of personalities and selves, as manifest in dispositions to particular patterns of action, operating at time scales measured in months and years. I do not intend here to urge a binary distinction, but to draw attention to a continuum of scales by focusing on two contrasting points. My examples above do not capture the extrema on the continuum. For those, consider on the one side a baseball player "deciding" to tip his bat just up or just down as the pitch crosses the plate, which cannot possibly (because of processing speed considerations) be a personal decision in the sense of involving his deliberative consciousness or even his frontal cortex. On the other extreme, a human personality is partly constrained by patterns in natural selection that have unfolded over hundreds of millions of years. On both of these limiting sides, many think or feel that we have passed beyond the domain of will and into that of exogenous causation. Moral culture is heavily preoccupied with aspects of agency, and with its limits, on all scales. We hold people responsible for pulling triggers and taking drinks. We massively reward good characters and try to blight the lives of bad ones. Controversies over the relative contributions of "nature" and "nurture" to human character are conducted tirelessly and with moral passion on both sides, and social scientists are often resented for promoting evidence in favor of structural rather than agent-driven causes of outcomes. Most societies allow insanity defenses to reduce or remove criminal liability, but this is controversial with conservatives. Millions of people around the world deny rationally incontrovertible evidence for evolution because they fear that natural selection threatens the sense of human and divine autonomy. Issues around the nature of will and agency thus embroil leading fronts of scientific progress directly in ideological, political, and legal tempests. Nearer to the micro end of the continuum, science is opening the black box of the will in two main ways. First, ingenious behavioral experiments of a kind pioneered by Libet (e.g., 1985), and extended by subsequent researchers whose leading representative is Wegner (2002), have confounded basic widespread assumptions supposedly grounded in everyday unreflective experience to the effect that events such as conscious subjective choices to raise arms must temporally precede all arm-raisings that are not brought about by extra-personal forces. The brain, it turns out, prepares such actions before its personal, conscious "operator" is aware that it is doing so. Furthermore, false impressions of being consciously and subjectively in control of micro-scale actions as they unfold can reliably and relatively easily be induced by manipulation of environmental conditions. I will not try to further summarize this work here, since the reader is about to encounter extended descriptions of it in several of the chapters to come. Suffice it to say that it seems on its face to undermine, or at the very least to greatly complicate, the second of what I identified above as the two "simple propositions" that have allowed the Cartesian will so much longer a life in serious inquiry than most of its conceptual kin. The other avenue along which science is disrupting traditional ideas of will and agency at the micro scale is through attention to the architecture of the mind/brain. Descartes believed that the body, including what we now call the nervous system, was a mechanical system, basically a hydraulic network of pulleys and levers. The will's task in administering this network was simply to increase and decrease distributions of tension at the site in the brain where cords and cables converged. This assumption contributed greatly to the plausibility of the simplicity and unity of the Cartesian will, from whence it derived most of its explanatory power. As vividly described by Dennett (1991, and elsewhere) and Glimcher (2003), Descartes's conception of the will as a fused control point that coordinates and regulates reflexes in discharge of action plans it itself spontaneously originates not only survived through many decades of early neuroscience, it
positively shaped the dominant research paradigm in that discipline due to Sherrington. Over the past fifteen to twenty years, this paradigm has been blasted to ruins. First, experience in artificial intelligence demonstrated that real-time management of complex behavior involving networks of interrelated subroutines comes with stringent limitations on the extent to which sequencing of actions can be allowed to bottleneck at central control points. Early AI designs for performing even severely limited subsets of the everyday human task repertoire tended to resemble mini Soviet Unions, grievously incapable of supplying behavioral responses adequate to meet environmental demands the moment the latter were allowed to become the least bit difficult to predict and monitor (Brooks 1999). The nexus of brain and environment, it came generally to be recognized, is a complex system, dominated by nonlinear feedback, amplification, and damping, that can be (imperfectly) managed only by information-processing systems that are themselves complex (Thelen and Smith 1994; Port and van Gelder 1995). Several philosophers of mind and cognitive science, including Dennett, have contributed significantly to developing the implications of complex, distributed models of cognition and control. Most would agree that the foremost figure among these philosophers is Andy Clark. After devoting his early work to issues surrounding the kind of cognitive architecture needed to account for manifest behavioral patterns (both competencies and characteristic breakdowns and errors), Clark turned to cultivating a truly radical consequence of the recognition that the will cannot be a simple lever or identified with a spatiotemporal point. This is that the will, insofar as it is identified with the efferent aspect of the agent, person, self or mind, cannot nonarbitrarily be contained within the brain and body of the human organism (Clark 1997, 2003; Clark and Chalmers 1998). I will again not attempt to summarize this perspective or the arguments for it here, since Clark himself attends to this task in a later chapter. The basic argument schema may be stated quite straightforwardly, however. When the Cartesian will is regarded as the nonphysical principle of activity, the whole brain is effectively made part of the external environment. But once we abandon dualism and begin distributing control of behavior, including such behavior as we deem fit to call "action," around in the brain, our hold on the boundaries of the will begins to slip, along two dimensions. First, a point implicit in all distributed-control models, it becomes entirely unclear how to distinguish the will, now no longer a faculty that is among the mind's components, from the whole agent, which is in turn often treated as synonymous with the self and the person. Second, and
more distinctive of Clark's position, once we allow that "devices" within the brain, such as perceptual systems, can be aspects of the self because they exercise control functions (both directly and via their roles in complex feedback loops), it emerges as arbitrary if we try to necessarily consign external prostheses and environmental scaffolds to the will's external environment merely because they are not implemented in the body's cells. People, like many other animals, reorganize their environments in idiosyncratic ways to help regulate their own behavior. Is a typical person's inner ear less a part of herself than her computer, organized as it is to remind her of her projects, organize and conserve her latest thoughts and cue new ones that cohere with them, and manage a large part of her communication with others? Are you performing an action as an agent, responsive to reasons rather than causes, when you multiply two large numbers? You know you couldn't do this without your frontal cortex, and you're inclined to take your frontal cortex as part of the system that implements your will (or selfhood, agency, etc.). But could you multiply the numbers without the pen and paper or calculator you use? Could you do it without a culture that provided a symbolic notation for conceptualizing your task in the first place, and then for keeping track of the subroutines it requires? So if cortex is part of the system that implements your will—which implements you—why not your essential cultural technology for doing arithmetic? If we are persuaded by Clark and like-minded thinkers to allow the self/person/mind to be distributed outside the organism casing, then the undermining of the traditional will from the micro direction meets and makes common cause with such undermining from the macro side. As the examples above are intended to illustrate, the exploding new interdisciplinary sciences (evolutionary psychology and anthropology, behavioral and institutional economics, multi-agent modeling, complex system simulation) that study cultural evolution, imitation, convergence to equilibria in games, and the unplanned origins of institutions and norms, all emphasize the extent to which interpersonal dynamics establish the enabling conditions for the actions of persons qua relatively behaviorally unified agents. But then, and especially in conjunction with micro-scale influences from subpersonal processes in the brain, they seem to make will and agency redundant. The work of Libet, Wegner, and Dennett suggests that your brain is built to be able to initiate and steer many of your actions with little or no deliberation or phenomenal awareness. Then consideration of social dynamics suggests that a necessary condition for your brain's ability to stay on track is a relatively stable social environment, furnishing norms and targets for
imitation that have evolved in the culture to get precisely such jobs done. At this point there seems no evident task left for a vestigial will. Just as well, one might think—another bit of prescientific metaphysical litter goes into the bin. But now recall that when we put pressure on the will as a Cartesian point-mass it became increasingly difficult to distinguish from the concept of the self. If the will is eliminated, does not the self disappear with it, and for the same basic reasons? Can the progress of science convince us that there are no individual persons but only biological individuals pushed into certain sorts of behavioral dispositions rather than others by the complex dynamics interlocking their brains and their cultural/social environments? Note that in wondering about the existence of persons without calling into question the existence of distinguishable instances of H. sapiens we are not raising doubts about the status of any biochemical or biopsychological facts. Some might take us to be wondering about a metaphysical fact, but this does not interest scientists, and even many philosophers deny that there are such facts. If the question seems important, it is because persons—as opposed to human organisms—are at the center of moral culture. It is indeed difficult to imagine how people would function, would indeed even be people, if they did not suppose that they were substantially responsible for at least their own moral and social choices in adolescence and adulthood. The concepts of the will and the self appear to rise or fall together.

Among contributions to this volume, I have explicitly mentioned the work of Dennett, Wegner, and Clark as leading elements of the book's intellectual background. One further scene-setter now needs introduction. I noted earlier that although talk of "free will" is still common at least in philosophy, the "will" simpliciter is rarely invoked due to the conviction that it is not a mental organ, and that there's no evident other sort of entity it could be instead. Recently, however, the psychiatrist and behavioral economist George Ainslie has rehabilitated frank talk of the will. He has been able to do this because he has found a new ontological interpretation for it. According to Ainslie, the will is not an organ but a habit, a recurrent (to considerably varying extents from person to person) pattern in people's behavior. Development of this perspective begins from observing that people's (and other animals') preferences are sensitive to anticipated delays of rewards in a way that makes them singularly ill-suited to maximization. In particular, they are discounted into the future by hyperbolic-shaped functions that lead preferences to temporarily reverse as an individual's temporal distance to the elements in her reward stream changes (see Ainslie's chapter for details). This goes a long way, Ainslie argues, toward explaining addiction and other pathologies of rational choice identified by behavioral economists. But then it raises a puzzle as to how most of us rise above our natural inconsistency so as to approximate the prudent investors recognized as agents in microeconomic theory. Ainslie proposes that the answer to this puzzle lies in the ability to see that inconsistency now predicts defeat of currently entertained long-term projects. As a result, disappointment over future prospects can arise not just in the future, about which discounting makes us unduly cavalier, but in the more highly motivating present. Furthermore, the tendency to pay attention to the predictive aspect of preference consistency and inconsistency is something a person can cultivate and become habituated to. Such cultivation and habituation amounts, says Ainslie, to creation and maintenance of will. So here we have a perspective in which the will is in no way metaphysically mysterious but is also cut loose from its conceptual background in Cartesianism and faculty psychology. It is particularly interesting in this connection that Ainslie further models the will as emerging from the bargaining activity of subpersonal interests—something that follows, he argues, from hyperbolic discounting—thus echoing the general theme of distributed cognition theorists. Simultaneously, his picture depends on the idea that what induces the bickering and selfish subpersonal units to settle their differences and, in so doing, give rise to willpower are social pressures: if your gang of interests fails to coordinate while your neighbor's coalition (forced together like yours through inhabiting the same agent at the macro scale) succeeds, then yours will all jointly stand to be exploited by the more consistent team. So, far from taking the concept to be rejected by new theories of distributed cognition and social-dynamical influence on stabilization of selves and characters, Ainslie offers a new model of will that revives the idea by appeal to those very resources.
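The mechanics of the reversal are easy to exhibit. As a minimal numerical sketch, assume the hyperbolic form standardly used in this literature, V = A/(1 + kD), for a reward of amount A at delay D; the amounts, delays, and discount parameter k below are illustrative values chosen for this sketch, not figures taken from Ainslie's chapter:

    # Hyperbolic discounting sketch: V = A / (1 + k * D).
    # Illustrative values only; k and the two rewards are made up.

    def hyperbolic_value(amount, delay, k=1.0):
        """Present value of a reward of size `amount` arriving after `delay`."""
        return amount / (1.0 + k * delay)

    smaller_sooner = (5.0, 2.0)    # (amount, delay measured from t = 0)
    larger_later = (12.0, 5.0)

    for t in [0.0, 1.0, 1.9]:      # the agent moves toward both rewards
        v_ss = hyperbolic_value(smaller_sooner[0], smaller_sooner[1] - t)
        v_ll = hyperbolic_value(larger_later[0], larger_later[1] - t)
        winner = "larger-later" if v_ll > v_ss else "smaller-sooner"
        print(f"t={t:.1f}: SS={v_ss:.2f}, LL={v_ll:.2f} -> prefers {winner}")

    # From a distance (t=0.0) the larger-later reward is preferred
    # (2.00 vs. 1.67), but as the smaller reward becomes imminent the
    # preference temporarily reverses (t=1.9: 4.55 vs. 2.93).

An exponential discounter, by contrast, ranks the two rewards the same way at every temporal distance; it is the crossing of the hyperbolic curves that produces the reversals on which Ainslie's account of will is built.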
Thus inspired by the possibility that the apparent threats to the cogency of the idea of will might instead turn out to be the basis of its reemergence as a scientific subject, we gathered together the contributors represented in this volume to evaluate the prospects. The banner for our deliberations was the Mind and World (mAw) working group founded at the University of KwaZulu-Natal in eThekwini (Durban), South Africa, in 2002. mAw's aim is to develop and apply models of mind as distributed and extended. It has built a close working relationship with the Center for Ethics and Values in the Sciences at the University of Alabama at Birmingham. The mAw conference on Distributed Cognition and the Will, co-sponsored by UKZN and the Center, was held in Birmingham on March 18–20, 2005. The papers gathered as chapters in this volume are descendants of the presentations given there, refined by virtue of the extended critical discussion they individually and collectively provoked. The chapters address the status of the concepts of the will and the self on the basis of first taking for granted that the recent scientific developments that put pressure on them must not be denied, avoided, or lazily interpreted so as to seem harmless to traditional views. This volume thus does not represent a debate between naturalists and others; all contributors are motivated by a commitment to take science seriously. Only about half of the chapters are by philosophers; the remaining half are by behavioral scientists of various disciplinary persuasions. I will briefly survey the contents to come in order to give the reader an idea of the thematic trajectory we had in mind in choosing an order for them.

Chapter 2 by Daniel Wegner and Betsy Sparrow summarizes the most basic source of the new scientific challenge to the traditional concept of the will: the empirical evidence, gathered to a significant extent in Wegner's lab, that our standard perception of the subjective experience of will as causing our voluntary behavior on micro scales is an illusion. Wegner and Sparrow then propose an explanation for this illusion that introduces the book's other main themes. The explanation in question is that the illusion of will has been selected (both biologically and culturally) because in underwriting our sense of responsibility for our effects on social stability, it promotes maintenance of that stability, from which individuals in turn derive benefits. Thus the relationships among neural dynamics, behavioral dynamics, and social dynamics as the basis for the phenomenon of the self as a stabilization device are put onto the table at the outset. They remain there throughout the volume.

In chapter 3 Paul Sheldon Davies raises reasons for fearing that Wegner and Sparrow are too quick to seek an ecumenical accommodation of our traditional cultural model of the agent in the face of their data. This is a recurrent issue for naturalistic philosophers: to what extent should we aim to domesticate the counterintuitive findings of science so as to go on feeling comfortable in our conceptual skins, and to what extent should we embrace the exposure of our folk myths as we learn more about a universe that was not designed, and in particular was not designed to be understood by us (since natural selection didn't directly favor those of our ancestors who could explain any but a handful of ecologically local phenomena)?
Davies’s chapter 4 leads straightforwardly into Tamler Sommers’s application of the same general concern to the more specific question of whether our scientific knowledge is compatible with the idea that people as moral agents are causally responsible for their actions. Sommers doesn’t just wonder about this: he argues forcefully for incompatibility, for what might be called cold-water naturalism about moral responsibility understood on the model of causal responsibility. Though defending this conclusion is Sommers’s main purpose in the chapter, he follows the general line of Dennett’s 1984 book on free will—but not the line of Dennett (2003). He suggests that if we take his advice and give up our illusion of responsibility, this need not have the corrosive effects on social stability that the (conceptual) conservative fears, and that might be thought to follow from Wegner and Sparrow’s explanation of the illusion of will. Phillip Pettit, in chapter 5, provides a much more sustained argument for Sommers’s second conclusion above. Pettit has long been among the foremost developers of the theory of agency as a social phenomenon. He extends that development in showing how the will need not cause any actions on micro scales in order for our responsibility as agents to be truthfully attributed and to be functionally selected by cultural evolution so as to regulate behavior that underwrites social stability. Agency, according to Pettit, merely requires our conforming our socially significant behavioral dispositions to answerability to reason on relatively macro scales, and this in no way requires control of fine-grained actions by the will on micro scales. It might be said that this idea emerges as nearly a consensus in the book. Of all the authors here, only Davies seems to be somewhat skeptical on the point. The chapters to this point, after Wegner’s and Sparrow’s survey of evidence in the first half of theirs, all concentrate on the implications for moral and social agency of skepticism about willful causation on the micro scale. In chapter 6 Daniel Dennett addresses this too, but also begins to widen the target to consider the implications of recent behavioral and cognitive science for our ability to understand ourselves not just as moral agents but as agents or selves at all. He is not in the least skeptical about this prospect, but he thinks we have hard work to do in order to recover a clear picture. Like most authors in the book, Dennett thinks that our microscale agency is based on a foundation of macro-scale agency. He agrees with Pettit in emphasizing the crucial role of interpersonal communicative coordination. But whereas Pettit emphasizes coordination on standards of reason for the sake of justification of action, Dennett draws attention to a logically prior level on which we are under pressure to make agents of
ourselves: if we didn’t, we couldn’t engage in complex communication in the first place, and such communication is basic to the construction of the distinctive ecological niche our species has filled. Having Andy Clark’s chapter 7 on selfhood as an aspect of niche construction follow Dennett’s allows us to recapitulate the wider history of the extended and distributed mind hypothesis. Dennett originally suggested and inspired it, but Clark fully developed and elaborated it. In his chapter Clark explains how selves exercise will (at appropriate scales) by means of what he calls ‘‘soft’’ or ‘‘ecological’’ control. Selves, he argues and explains, are problem-solving assemblages of expropriated ecological resources. Crucial among these resources, we might suppose, are culturally evolved standards of publicly sanctioned reasons for action. If this supposition is added to Clark’s account, then it and Pettit’s can naturally be conjoined with Dennett’s to form a model of the self and its control capacities that replaces the one demolished by the evidence of Wegner and colleagues. By this point in the book, more traditional philosophers might be grumbling that their patient analytical labors over the years are being casually brushed aside here by scientistic show-and-tell. Where, they might ask, are the careful arguments required for systematically showing, in detail, that the distributed, extended, virtual, nonfacultative self and will can in fact perform the conceptually unifying function that motivated the oldfashioned versions of the concepts? Mariam Thalos’s chapter 8 is the longest in the book because providing this patient engagement is the task she takes on. Agency, she shows analytically, must be distributed if philosophers are to have any hope of backing out of various logical cul-de-sacs into which they have driven themselves in their attempts to come to grips with the concept, and if they are to grasp the possibility of a scientific account of it. While nonphilosophers might find themselves skimming this chapter, readers and critics who are analytic philosophers are likely to regard it as the serious core of the book. Philosophers will join everyone else in appreciating George Ainslie’s chapter 9. It does two main things. First, it provides a concise introduction to his theory of the will based on hyperbolic discounting as sketched above, and his theory was the single most important motivator for the questions asked at our conference and in this book. This part of the chapter will be a valuable resource for readers new to Ainslie’s model. Second, the chapter defends the value of thought experiments, often employed by Ainslie in developing his ideas, as guides to the nature of the mind and the person. Ainslie’s thought experiments will intrigue philosophers because
most will be aware that the kind of naturalism promoted by Dennett, which forms another foundation block for the account of self and will in the book, has promoted skepticism about the possibility for thought experiments to contribute to empirical knowledge; Dennett has said on other occasions that most philosophers' thought experiments are failures of imagination disguised as insights into necessity. In this context it is surely interesting to find the importance of some thought experiments being defended by a scientist. Ainslie does some useful philosophy himself here by analyzing some of the features that distinguish illuminating thought experiments from those that indeed obfuscate in the ways Dennett has cataloged. If the reader accepts Ainslie's conclusion, it will be mainly because the sample thought experiments he provides indeed seem irresistible. To the extent that they are, the reader will discover that while her attention was officially on thought experiments, she imbibed compelling reasons to also endorse Ainslie's theory of the will.

Among those who agree that Ainslie's is the most promising and exciting positive model of the will now on offer, it might be seen as surprising that it comes from the discipline of behavioral economics. If any discipline has over its history treated the will as a pure black box, it is economics, and as Davis (2003) cogently argues, its leading neoclassical incarnation has not incorporated even a substantial concept of the individual person, let alone the self. However, while economists have indeed tried to get by with a psychologically thin conception of the person, they have made the concept of the agent fundamental to their science, and have done more careful modeling of agency in a vast range of real and hypothetical scenarios than any other inquirers. In chapter 10, I sketch a framework by which selves can be endogenously generated in a class of dynamic game-theoretic models that break no foundational rules of neoclassicism (thus asking us to surrender none of its axiomatic power). My approach specifically exemplifies the general theme of many papers in the book (including Wegner's, Pettit's, Dennett's, Clark's, Thalos's, and Ainslie's): the self is depicted as virtually created in niche construction, in order to perform the function of simultaneously stabilizing and intermediating the micro-scale dynamics of the distributed individual mind/brain and the macro-scale dynamics of society and culture. Selves, in my view, do not exist despite the complexity of these dynamics at both scales; they exist because of it.

The three following chapters lend symmetry to the book's organization. Early chapters (Sommers, Pettit, Dennett, Clark) concentrate on macro-scale issues. Middle papers (Thalos, Ainslie, Ross) work on the seam between macro- and micro-scale phenomena. The final three chapters focus
on the micro side and go deeper into the mind/brain. In chapter 11 Lawrence Lengbeyer provides a schematic model of how the distributed will might actually go about allocating its attention and control functions. Though his account is couched in conceptual terms and doesn't attempt to specify measurable parameters, he is careful to attend to its motivation in phenomena from experimental psychology. A natural question the reader can carry forward from Lengbeyer's chapter to the following one, chapter 12, by Wayne Christensen is: To what extent is Lengbeyer's schematic model at the mental/functional level consistent with Christensen's detailed survey of what we know about the distribution of control in the vertebrate brain? Christensen sounds a strong warning that, as with most new and exciting ideas, enthusiasm for distributed control in cognitive science can carry theorists farther than the evidence warrants, and indeed to extremes where evident falsehoods are embraced. Christensen's sketch of the architecture of neural control is compatible with a good deal of functional distribution and decentralization, as he makes clear. However, he makes equally clear that there is a straightforward sense in which the brain incorporates a control hierarchy with an executive, subordinates, and a vector of information flow that makes attribution of these roles nonmetaphorical. After considering both Lengbeyer's and Christensen's chapters, readers might naturally want to ask: Does Lengbeyer's pmanager system have the sorts of attention-allocating and plan-guiding capacities to be the right functional specification of the executive neural system characterized by Christensen?

Christensen's sketch of neural control architecture is broadly cybernetic, in the rigorous sense of Wiener (1948). It can thus be taken as a solid empirical platform for Vancouver and Zawidzki's defense in chapter 13 of cybernetic control theory in applied psychology as against a dominant stream among practitioner advisors who seem to believe that modeling people as agents requires that we reject cognitive science. Vancouver and Zawidzki provide no evidence that their foils are familiar with Wegner's work; one surmises that if they were, they would be deeply disconcerted by the evidence, while at the same time confirmed in their view of the cognitive and behavioral sciences as dehumanizing. I hope that the chapters in this book—Vancouver's and Zawidzki's specifically, on one level, but then all the chapters together, on another level—provide compelling evidence that such a view is simply ignorant. Real science does not imply a threat to our status as agents answerable to social demands for justification, for the reasons explored by Sommers, Pettit, Dennett, Clark, Thalos, and Ross. It does imply a challenge to lazy "morality" that simply follows convention, endorses familiar platitudes, and thereby invites sleepy quietism in the face of problems.

The book closes with a riveting application of this general perspective, in chapter 14, to a genuinely tragic problem, schizophrenia. Dan Lloyd first makes clear how devastating this widespread condition is, how resistant it is to effective intervention, and how baffling it must seem if one insists on cleaving to conventional models of agency and its relationships to reason and to the unity of the self. He then sketches an original model, supported by simulation evidence of his own and some clinical support, according to which schizophrenia is the symptom of dynamic coordination failure among parts of the distributed mind with respect to self-monitoring of attention and action cueing against time. This micro-scale breakdown pattern in dynamics is then suggested by Lloyd to be potentially analogous to loss of a democratic society's ability to function as a coherent agent—that is, as a genuine community—when standards of answerability to evidence in public discourse are debased, especially by the very officials most clearly responsible for maintenance of them. Having followed Lengbeyer, Christensen, and Vancouver and Zawidzki into micro-scale manifestations of the distributedness of agency, Lloyd thus ends the book by returning the reader to its opening focus on the relationship between social stability and answerability to reason that, far from being overthrown along with the simple Cartesian will, is given increased emphasis when we buttress philosophical speculation with science and take the consequences seriously.

Despite the different and occasionally conflicting perspectives on display in this book, we find here a firm consensus that we are collectively and individually better off for acknowledging that the comfortable conceptual image of the will and the self that has been characteristic of western culture for several centuries is paying diminishing returns, and for aiming to marshal our species' talent for coordinated collective agency to build a more accurate one.

Note

1. Davidson's most important work on the will (agency) is gathered in his 1980 book.

References

Brooks, R. 1999. Cambrian Intelligence: The Early History of the New AI. Cambridge: MIT Press.
Clark, A. 1997. Being There. Cambridge: MIT Press.
Clark, A. 2003. Natural-Born Cyborgs. Oxford: Oxford University Press.
Clark, A., and D. Chalmers. 1998. The extended mind. Analysis 58: 7–19.
Davidson, D. 1980. Essays on Actions and Events. Oxford: Oxford University Press.
Davis, J. 2003. The Theory of the Individual in Economics. London: Routledge.
Dennett, D. 1984. Elbow Room. Cambridge: MIT Press.
Dennett, D. 1991. Consciousness Explained. Boston: Little, Brown.
Dennett, D. 2003. Freedom Evolves. New York: Viking.
Glimcher, P. 2003. Decisions, Uncertainty and the Brain. Cambridge: MIT Press.
Libet, B. 1985. Unconscious cerebral initiative and the role of the conscious will in voluntary action. Behavioral and Brain Sciences 8: 529–66.
Port, R., and T. van Gelder, eds. 1995. Mind as Motion: Explorations in the Dynamics of Cognition. Cambridge: MIT Press.
Thelen, E., and L. Smith. 1994. A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge: MIT Press.
Wegner, D. 2002. The Illusion of Conscious Will. Cambridge: MIT Press.
Wiener, N. 1948. Cybernetics. New York: Wiley.
2 The Puzzle of Coaction
Daniel M. Wegner and Betsy Sparrow
Forward movement creates so many chances for awkward stalls and collisions, decisions about who goes first, right or left, mini-crises that make one conscious of authority and position. —Francine Prose 2000, The Blue Angel
When people do things together, who is the author? There is a puzzle of coaction, for example, in whether Fred Astaire or Ginger Rogers should be credited as the artist who inspired the couple's classic dance duets. Who was really doing the dancing? Cartoonist Bob Thaves remarked that even though Astaire was great, "Rogers did everything he did, backwards and in high heels." In the whirl of their tightly coupled performances, who was to know that he truly led and she followed? And just like their footwork, their romance in the film musical Top Hat posed the same puzzle. One reviewer noted: "He plays the role of the pursuer and she the pursued, but it is Rogers who controls the game" (Mueller 1985). The fact is that for Fred and Ginger, and for every coacting couple, group, or collective, the puzzle of coaction is open to multiple possible solutions at every moment. The perception of the authorship of coaction can shift and shimmer from one coactor to another in our minds, and we may even perceive that the action is being produced by no individual—and instead by the group as a unit. This chapter is concerned with how people go about solving the puzzle of coaction in everyday life, particularly when it comes to perceiving their own role in coaction. As we will see, understanding how people perceive coaction will take us well beyond the matter of sorting out credits among agents—leading eventually to a key insight into the vexing question of why people experience their own actions as consciously willed. The puzzle of coaction may be the reason we each have the experience of acting.
Forms of Coaction

Coaction occurs when one agent's action is influenced by or occurs in the context of another agent's action—and together they do something that is not fully attributable to either one alone. A prototypical case of coaction occurs when a bride and groom together grasp a knife and cut into their wedding cake. The movement is jointly determined, giving immediate rise to the question of authorship. There are several possible forms of coaction, varying in the nature of the linkage between coactors. Although these forms are not necessarily mutually exclusive, they are worth enumerating to review the range of potential coactions.

1. Physically coupled action  The most obvious form of coaction occurs when agents are physically linked in some way. Fred and Ginger often held hands, for example, and such coupled action appears as well in many other instances: wrestlers in a clinch, pallbearers carrying a casket, children playing ring-around-the-rosy, a couple having sex, a group at a séance guiding the planchette on a ouija board, runners in a three-legged race, or teams in a tug of war all share mechanical connections that influence their actions. Coupled action may occur with a designated leader, or it can arise without recognized leadership.

2. Psychologically coordinated action  When an agent moves in a way that takes into account the action of another agent, coaction occurs through coordination. This is perhaps the broadest class of coaction, including every sort of turn-taking and all the coactions that involve the synchronized pursuit of action plans. People marching in a band, playing tennis, doing the wave, having a snowball fight, or having a conversation are examples. These kinds of coactions occur by plan or impromptu, with or without leaders.

3. Mimicry  When one agent copies the action of another, there is coaction. There may also be partial mimicry, as when one agent copies another's action and transforms it. In the case of mimicry, of course, it is typically understood that the first agent to act is the leader. Leadership of mimicry can be unclear when the mimicking is not consciously undertaken, however, and there are also cases of mutual mimicry (After you . . . no, after you . . . no really, after you, etc.) when it becomes progressively more difficult to discern who started the coaction.

4. Obedience and conformity  When one agent issues a suggestion or command to another and so guides the other's action, coaction can also be said to have occurred. Obedience includes a whole realm of social activities, from teaching, parenting, and following a recipe or written instructions, to participating in hypnosis, yielding to the influence of the mass media, or following orders to commit an act of war.

Obviously the analysis of coaction involves most of the field of social psychology, not to mention pretty much every other social science. Much is known, then, about why and how people engage in coaction. The specific concern here is how, in the context of coaction, individuals perceive their own authorship and assign causal responsibility to self and others for the actions in question. The point of noting these forms of coaction is to highlight the fact, at the outset, that life is full of opportunities in which it may be unclear who is responsible for what is being done, and in which one's own action can be embedded in complex surroundings that might make the authorship of that action difficult to distinguish. All that is needed to blur one's authorship is a physical connection to another person, an attempt to coordinate action with a person or mimic what is done, or a circumstance in which one follows a person's explicit direction. Every such case occasions the question "Whodunit?"

Authorship Processing

We each have ready answers to questions of our own authorship. Asked whether you are the one reading this sentence, you could quickly concur. This facility for knowing what we are doing suggests that the human being has evolved a set of systems for the purpose of establishing knowledge of authorship of own action (Wegner and Sparrow 2004). These systems are exquisitely crafted, and include ways of learning from the body itself what it is doing (sometimes known as proprioception or introception), ways of establishing authorship by examining how the mind may have contributed to the action, and ways of incorporating external information about the social circumstance of the action—in particular, the presence and potential contribution of other agents. These systems typically operate together to produce one's sense of authorship. The sources of information they provide are independent, so they may add or subtract from each other, but they come together to produce the experience of consciously willing the action. The experience of conscious will, in this sense, is the final common pathway served by multiple indicators of authorship. These indicators are integrated to produce the sense that "I did it," that "I didn't do it," or any of the gradations in between.
Body Information

The body learns what it is doing through several sensory channels. Proprioception or introception of own movement involves systems sensitive to information fed back from the muscles to the brain (Georgieff and Jeannerod 1998), sensory information from joints, tendons, and skin (Gandevia and Burke 1992; Matthews 1982), visual information encountered as the eyes watch the body in motion (Daprati et al. 1997; Nielson 1963), and information fed forward by the brain to the muscles (Frith, Blakemore, and Wolpert 2000).

The nature of feedback from body to brain has been studied at length. The history of psychology and physiology features classic works by Sherrington (1906), von Holst and Mittelstaedt (1950), Sperry (1950), and others investigating the role of efference (brain-to-body pathways) and afference (body-to-brain) in the production of action (Scheerer 1987). This research shows that people are able to sense joint positions and muscle movements directly. Still, the sense of our own bodies can be remarkably weak when inconsistent visual information is encountered (Graziano 1999). People who are shown mirror or video representations of others' limbs in place of their own, for instance, can develop illusions that they are authors of the others' action (Nielson 1963), and this illusion is particularly compelling in people with schizophrenia (Franck et al. 2001; Daprati et al. 1997), movement disorders (Sirigu et al. 1999), and phantom limb experiences (Ramachandran and Rogers-Ramachandran 1996).

The experience of bodily feedback can also be overridden by feed-forward processes. The brain produces sensory images of completed action, a kind of movement plan or template, and authorship experiences are attenuated when these expectations are not met (e.g., Blakemore and Decety 2001; Blakemore, Wolpert, and Frith 2000). The interplay between feed-forward and feedback is responsible, for example, for the fact that one cannot tickle oneself, in that knowledge of the feed-forward plan apparently leads to the cancellation of the feedback of the sensation (Blakemore, Wolpert, and Frith 1998, 2000).

Studies of proprioception only tell part of the story of authorship processing, for two reasons. First, research on the body does not explicate how we come to experience authorship for actions that seem to take place inside the mind (e.g., adding up numbers in your head). How do you know that you did that? Second, research on the body does not encompass the key role of perceptions of intention, planning, and premeditation in the attribution of authorship.
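The feed-forward cancellation described above can be made concrete with a minimal sketch. What follows is our own schematic illustration of the comparator idea, not a model taken from the cited studies; the function name and the numbers are invented for exposition.

    # Schematic sketch of feed-forward cancellation (illustrative only).
    # A sensation is attenuated to the extent that the motor plan predicts it.

    def felt_intensity(actual: float, predicted: float) -> float:
        """Return the sensation that survives prediction-based cancellation."""
        return max(0.0, actual - predicted)

    stimulus = 1.0  # intensity of a touch on the skin

    # Self-produced touch: the feed-forward plan predicts the sensation,
    # so most of it is cancelled and the tickle feels weak.
    print(felt_intensity(stimulus, predicted=0.9))  # ~0.1

    # Externally produced touch: no motor plan, so nothing is cancelled.
    print(felt_intensity(stimulus, predicted=0.0))  # 1.0

On this picture, another person's touch tickles because nothing is subtracted from it, while one's own touch arrives largely pre-cancelled.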
Mind Information

The assessment of one's authorship is not just a polling of the body. It also takes into account information about one's thoughts, favoring actions as authored by self when thoughts occur prior to the action and are consistent with the action. The theory of apparent mental causation proposes that the self-perception of the relationship between conscious thoughts and consciously cognized actions underlies the experience of consciously willing the action (Wegner 2002; Wegner and Wheatley 1999). Normally, of course, we experience our actions as being caused by our intentions, and we interpret the intention–action sequence as evidence in favor of the efficacy of our conscious will; philosophers have often taken this personal observation as the basis for theories of human action (Bratman 1987; Searle 1983), and there is continued debate on the validity of this interpretation (Pockett, Banks, and Gallagher 2006). If mental causation is only apparent, however, we should expect that experiences of conscious will might be enhanced or undermined as a result of experimental manipulations of the apparent co-occurrence of thought and action. A number of studies reveal that the experience of conscious will is susceptible to just such manipulation.

Research testing this theory has found that when people are primed to think about actions just prior to their occurrence, they experience an enhanced sense of authorship for those actions—even when the actions are not their own. One set of studies explored whether people might come to experience the actions of others as under their control—when they are watching in a mirror as the other person standing close behind them extends arms on each side and makes a series of gestures (in a pantomime known as "Helping Hands"). These studies revealed that people report an enhanced sense of control over the other's arm motions when they hear the instructions for movement that the other is following—as compared to when the instructions are not heard (Wegner, Sparrow, and Winerman 2004). Simply knowing the actions in advance seems to yield an enhanced sense of authorship for them. Moreover this enhanced control was accompanied by an experience of empathy for the other's hands as well; participants who had heard the hand movement instructions were particularly likely to show emotional reactions (skin conductance responses) when they watched while one of the hands snapped a rubber band on the other's wrist.

In other studies, research participants have been led to think incidentally about some item—say, a frog or a deer—and then were asked to "type randomly" at a computer keyboard for several minutes and subsequently rate
the likelihood that they typed a number of different words. These participants rate their likelihood of typing "frog" or "deer" more highly than the likelihood of typing comparable words, and are particularly likely to report such authorship if they had encountered the primed word just prior to the random typing task (Gibson and Wegner 2003). Similarly, when people are asked to discern which of two moving blocks on a computer screen they are controlling, they are inclined to select the one whose eventual resting place they were primed to think about by a prior presentation of a block on screen (Aarts, Custers, and Wegner 2004). Reduced levels of thought relevant to an action have the expected complementary effect: when people doing a series of simple tasks (e.g., winding thread on a spool) are asked to try not to think about what they are doing during a task, they become less likely to report intending to perform that task (Wegner and Erskine 2003).

These findings all involve people performing actions alone, but similar observations have been made in cases of coaction. Individuals placed in hand-to-hand contact at a computer keyboard and asked to "read the muscle movements" of another person, for example, have been found to report being influenced by that person in their keyboard responses. They answer a series of questions correctly at the keyboard even though the person whose muscles they are ostensibly reading never hears the questions and was instructed to remain inert. The participants were entirely responsible for the communications and yet experienced their own actions as being performed by the other (Wegner, Fuller, and Sparrow 2003). This failure to recognize self as the source of coaction is the basis of the discredited technique of "facilitated communication," the practice of holding the hand of an autistic or otherwise communication-impaired person in the attempt to help that person type at a keyboard. Messages "facilitated" in this way are typically produced by the facilitator—who nonetheless experiences the messages as issuing from the autistic client (Jacobson, Mulick, and Schwartz 1995).

Coaction that occurs when people operate a ouija board together is open to a similar analysis. People moving a computer mouse together in an experiment simulating the ouija board report greater intention to stop an onscreen cursor on a particular item when they have been prompted to think about that item and know that the other person has not experienced this prompt (Wegner and Wheatley 1999). And when people are given to believe that their actions have helped or hurt another, they become prone to report that they intended and caused such a result when they are led to have thoughts relevant to the action before its apparent occurrence (Pronin et al. 2006). It seems there are many instances in which people take their
own thoughts into account on the way to inferring their intention and developing an experience of will for an action—even when those thoughts have no authentic causal relation to their action. These illusions of conscious will suggest that the experience of will is not an authentic indicator of how action is caused (Wegner 2002, 2003, 2004).

Social Information

In addition to information from the body and mind, information gleaned from the social circumstances in which an action is performed has a remarkably strong effect on the experience of conscious will for the action. A renowned instance of this effect occurred in the obedience studies conducted by Stanley Milgram (1963). Research participants were led to believe that they were teaching another participant in an experiment by applying electrical shocks whenever he performed incorrectly, and many were found to apply such shocks willingly—to the point of apparently placing him in grave danger and possibly causing his death. Yet these people were only willing to accept a modicum of responsibility for this action. Participants obeying the experimenter reported what Milgram called an agentic shift: "the person entering an authority system no longer views himself as acting out of his own purposes but rather comes to see himself as an agent for executing the wishes of another person" (Milgram 1974, p. 133).

The Milgram experiment represents an unusually powerful psychological situation (Ross and Nisbett 1991). When people are pressed by strong social circumstances into doing things they would not otherwise do, it makes sense that they would recognize these circumstances and view their own authorship as reduced. The experimental literature on causal attribution in social psychology reveals that people are indeed influenced by the presence of external forces and agents to attribute less causality to self (Gilbert 1998; Heider 1958; Jones et al. 1972; Kelley 1967). What this literature also shows is that social circumstances often do not need to be very powerful to undermine the experience of authorship.

Human sensitivity to the influence of social circumstances on authorship is highly acute, as people respond to subtle cues that can rapidly and radically alter perceptions of who is in charge. Perceivers' attributions of authorship are influenced, for example, by very small perturbations in the relative salience of individuals' contributions to coaction. A person wearing a brightly colored shirt is more likely to be held responsible for the direction of a group discussion than someone dressed so as to blend in, even when contributions are the same (McArthur and Post 1977). A person looking into a mirror is more inclined to rate self as
Figure 2.1 The alphabet maze.
responsible for hypothetical actions than someone without a mirror (Duval and Wicklund 1972, 1973), apparently because of increased salience of self as a causal agent. And such salience also operates when people are viewed from different physical perspectives. Perceivers looking at someone face-on, rather than over the person's shoulder, are more inclined to hold the person responsible for action (Taylor and Fiske 1978).

Subtle variations in the timing of contributions to coaction also appear to influence authorship judgments. When two people are walking along the sidewalk hand in hand, for example, and they take the opportunity of a break in traffic to cross to the other side of the street, they have engaged in a coaction. The question of which one decided to make the crossing just then is likely to be determined by what could be very small differences in the timing of their actions. The one who is perceived to have made the move first—even by a matter of a split second—will tend to be seen as the leader of this particular segment of their walk. A mere glance at one's cowalker might be enough to signify that the move is to be made, and people often seem aware of such subtleties as they maneuver on walks together. On occasion the walkers may discover that neither was really leading and they've become lost, but the far more common pattern is for both to know at each turn who is author of their coaction.

The influence of such fine differences in the timing of action and gaze on the experience of authorship for coaction was examined in a series of studies recently conducted in our laboratory (Sparrow and Wegner 2006). For these studies a participant was asked to perform the action of tapping out the letters of the alphabet in order with a conductor's baton by following a line connecting letters on a maze (see figure 2.1). The task involved tapping the letters on the click of a metronome set to click once per second. The participant and experimenter sat on either side of a Plexiglas
Figure 2.2 Experimenter (left) and participant (right) coacting.
sheet showing the maze (see figure 2.2), with the participant facing the letters. The participant completed the maze several times. This task was entirely straightforward for participants, as all were happily quite familiar with the alphabet.

After each completion of the maze, participants took a minute to rate the action on a set of scales measuring their experience of authorship. The scales included measures of the degree to which they felt responsible for the action, felt in control of the action, performed the action deliberately, performed the action voluntarily, and caused the action. Ratings on these scales were highly correlated, and were summed to yield an overall measure of authorship.

The basic idea of the studies was to examine how various activities performed by the experimenter during the maze task might impinge on the participant's sense of authorship for this simple movement. In a first experiment, participants completed the maze three times. For half of these participants, the experimenter also had a pointer and pointed at letters in time with the participant and metronome. For the other participants, the experimenter directed her gaze letter by letter through the maze in time with the participant. Perceivers are typically able to read the eye movements of a person to infer what is intended (Bekkering and Neggers 2002), so this study examined whether the gaze of the experimenter might be read and understood as might the experimenter's pointer movement.
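The summed authorship measure described above is simple arithmetic, and a minimal sketch may make it concrete. The following illustration is ours, not the study's analysis code; the scale names and the assumption of one numeric rating per scale are invented for exposition.

    # Illustrative sketch of the overall authorship measure: the five
    # correlated ratings described above, summed into a single score.
    # Scale names and rating values here are hypothetical.

    SCALES = ["responsible", "in_control", "deliberate", "voluntary", "caused"]

    def authorship_score(ratings):
        """Sum one rating per scale into an overall authorship measure."""
        return sum(ratings[scale] for scale in SCALES)

    # Example: a participant who gave a rating of 5 on every scale.
    print(authorship_score({s: 5 for s in SCALES}))  # -> 25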
In these between-subject sets of pointer and gaze trials, the experimenter carried out three different coactions (counterbalanced in order across participants). For one trial (with pointer or gaze), the experimenter pointed or looked at exactly the same letter as the participant. On this simultaneous trial, each person started on "A" and proceeded through the alphabet. For another trial, the participant started on "B" and the experimenter started on "A"—for this leading trial, then, the participant was always one step ahead of the experimenter. For a third trial, the participant started on "A" and the experimenter started on "B"—for this following trial, the participant was always one step behind the experimenter. This variation in the sequence of their activities was the only indication to participants of what was different across the three trials. At no point in the introduction of the tasks did the experimenter ever mention the terms "leading" or "following" or explicitly direct the participant's attention to the experimenter's gaze. All the tasks were introduced solely by reference to the details of the participant's required movement.

The results revealed that participants' reports of their experience of authorship were remarkably sensitive to the experimenter's coaction. Participants reported experiencing greater authorship for action during leading than during following, with simultaneous trials between these extremes. This was true whether the experimenter used the pointer or gaze to trace the alphabet. So, as in the case of the walkers stepping off the curb to cross the street, the one who does so first experiences greater authorship for the action, and the one who does so second experiences less authorship for the action—even though both are performing a coordinated coaction and their individual movements are indistinguishable. These findings also indicate that the experimenter's timed gaze placement has the same effect. Perhaps this is why it feels annoying to have someone read over your shoulder or watch as you work on a puzzle; their gaze exerts a kind of pressure that makes it less clear that you are the boss.

In this series of studies a second experiment arranged for participants to complete the same alphabet trials, but the experimenter always used a pointer, while some participants moved a pointer and others moved their own gaze. As in the initial study, leading produced greater authorship than following, and this was no different for participants using pointers or gaze. Apparently the person who is leading or following another's pointer movement with his or her own gaze experiences the same relative changes in authorship as the person leading or following with the pointer; watching ahead of someone's action makes you feel more the author of your eye movements than watching behind it.
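The three trial structures amount to relative offsets in the alphabet, which the sketch below makes explicit. This is our illustration of the design, not software from the study; the condition names follow the text, and everything else is invented.

    # Illustrative sketch of the three coaction conditions as paired
    # letter sequences, one pair per metronome click. In the leading
    # trial the participant starts on "B" and the experimenter on "A";
    # in the following trial the starting letters are reversed.

    import string

    ALPHABET = string.ascii_uppercase

    def trial(condition):
        """Return (participant_letter, experimenter_letter) pairs per click."""
        offsets = {"simultaneous": (0, 0), "leading": (1, 0), "following": (0, 1)}
        p, e = offsets[condition]
        n = len(ALPHABET) - max(p, e)
        return [(ALPHABET[p + i], ALPHABET[e + i]) for i in range(n)]

    print(trial("leading")[:3])  # [('B', 'A'), ('C', 'B'), ('D', 'C')]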
We also wondered whether these effects were due to the participant's awareness that the experimenter could perceive their relative position—or if this effect might also occur in participants who were working with an experimenter who could not see their relative position. In a third experiment, then, participant and experimenter were placed in separate rooms with an embedded one-way mirror between them. Participants could see the experimenter, but knew the experimenter could not see them. Under these conditions, leading again induced greater authorship than following, so it appears that having one's coaction monitored by the experimenter was not essential to the influence of asynchronous action on authorship attributions. There need not be mutual knowledge of the relative timing of coaction to produce these social effects of relative timing.

Finally, we were curious about the degree to which these effects might be general across people, or whether they might be limited to people who are particularly attuned to the perception of other agents. Perhaps people who are "mindblind" (Baron-Cohen 1995) might not have the same acute sensitivity to the relative position of the experimenter's movement. Their experience of authorship might be determined more by their body and mind inputs and less by perceptions of the social circumstances of action. To assess this possibility, we had participants in a fourth experiment complete a pretest questionnaire to assess their autism-spectrum quotient (AQ) (Baron-Cohen et al. 2001), a self-report measure of individual differences in the sensitivity to social cues and other autism-relevant characteristics. The study revealed that participants who scored high on autism-like traits reported authorship that was unaffected by their position relative to the experimenter, whereas participants with higher levels of social sensitivity showed the effects of experimenter position as in the prior studies. The participants with autism-like traits had reduced authorship levels for coaction regardless of position.

These experiments, taken together, suggest that the authorship experience of normal adults is highly sensitive to relatively minor variations in the social context of a simple action. The alphabet maze presented no challenge to any of the participants, so it is unlikely that any real leading or following was taking place. Yet participants reported a reduced sense of authorship when their action occurred after that of the experimenter, and an enhanced sense of authorship when it occurred before. Apparently it doesn't take much in the way of social circumstances to override the multiple sources of authorship information in body and mind and produce an alteration in overall authorship experience. Someone else doing an action just before or after we do it makes us feel differently about whether we did it.
The susceptibility to social information in the experience of authorship is reflected in unique patterns of brain activation that occur during actual leading and following. PET scans were made for participants in one experiment while they imitated an experimenter's actions, were imitated by an experimenter, produced actions without imitation, or produced actions while the experimenter performed other actions (Decety et al. 2002). The same brain area was found to be activated both when one was imitated and when one imitated the other (left superior temporal sulcus, STS). Meanwhile there were also differential activations in these circumstances (the right or left parietal area, respectively). This pattern suggests that the STS is involved in understanding both one's own and another's actions, whereas the parietal area helps to differentiate which actions originate with the self and which originate with the other. The sensitivity to social circumstances in experiencing authorship is subserved by an intricate neural architecture that operates to incorporate this information into one's overall experience of authorship of action.

Coaction and Conscious Will

So why all this fuss about social effects on authorship? In one sense these effects are fascinating in themselves. It is interesting to observe that you can be driving merrily along the freeway, notice a car in a lane next to you pull slightly ahead and stay there—and get the odd sense that you've been bested somehow. Seemingly minute social maneuvers affect our inner sense of what we're doing, and this strange power is disconcerting.

But there is another reason to be thinking about social effects on authorship feelings, a reason that turns us toward the more general issue of why we have authorship feelings at all. Think of it this way: if the experience of conscious will is a veridical reflection of the inner causal processes whereby action is produced, why is this experience so radically sensitive to the presence of external social pressures that may in fact be irrelevant to the production of the action? Another driver's position has nothing to do with your travel down this road at this moment. You were going there anyway. Why does this position impinge at all on your experience of authorship for driving?

The Official Doctrine

One clear implication of the influence of social irrelevancies on our experience of authorship is to undermine any wish we might have to accept what Ryle (1949) called the "official doctrine" of the mind—the idea that we
know what we are doing. In a well-crafted critique Ryle wondered at the widespread assumption that the mind is "self-luminous"—knowing all that it does, including its causation of action. Ryle swept such ideas into a pile he called the "official doctrine." He produced a series of arguments suggesting that the person is not intrinsically informed of the mind's workings in this way, and instead is merely in a good position to observe the mind and draw inferences about its operation. In philosophy, Dennett (1984, 1987, 1991, 1996, 2003) drew from this observation the general insight that mind is in the eye of the beholder (his conception of the "intentional stance"). And in psychology, Bem (1967, 1972), and then Nisbett and Wilson (1977), offered a range of research findings consistent with Ryle's critique. These various themes all center on the notion that the person is a self-perceiver rather than a self-knower—particularly when it comes to authorship.

If the official doctrine were correct, the mind would always know what it does. After all, if the experience we have of consciously causing an action were a valid reflection of the entire causal enterprise producing our action, it should never be mistaken. The mind should be able to sort out and discount irrelevant circumstances such as the presence of another person performing the action before or after our own performance. At best, reports of the feeling of conscious will should reflect only the authorship information arising from the body—the proprioceptive or introceptive information noted earlier. The experience of conscious will shouldn't be fooled by experiments that introduce extraneous information about the state of the mind before the action, and it certainly shouldn't be pushed around by information indicating that another person is in the area and might be doing something similar! If the experience of conscious will is a conduit into the soul of our actions, an indicator of the very wellsprings of what we do, immaterial facts should not perturb it.

But they do. We've all had the experience of deciding to order something in a restaurant, only to find that someone dining with us orders this item from the server before we've had a chance to speak up. At this moment, although we usually don't mention it to anyone, we would like to be able to take the server aside and note that we had already thought of this dish and are not slavishly copying the other diner. The other diner's actions do not have any sensible bearing on the authorship we should feel for our selection—but we find our authorship feelings are warped nonetheless. This kind of thing suggests that the official doctrine simply can't be right. If the mind knows what it is doing as it causes its action, it can't be swayed at every turn by inconsequential matters such as who ordered first. The social
sources of authorship feelings in coaction, then, help us to realize that the experience of conscious will is a feeling our minds construct to reflect where they think authorship lies—not a feeling that radiates from a self-luminous mind as it reports its own causal power.

Authorship and Social Accounting

As this chapter is being written, there are several controversies in the news about claims of plagiarism. A Harvard undergraduate, Kaavya Viswanathan, has been accused of plagiarizing segments of her novel and the novel has been retracted by the publisher. The chairman of Raytheon Co., William Swanson, stands accused as well, in his case for parts of his Unwritten Rules of Management that were found to have indeed been written, but by someone else. The chairman took an immediate cut in pay. Claiming authorship for items that are not one's own is a risky business, bringing social opprobrium or worse when it is discovered. And these examples are just today's top stories.

The fact is that the accounting of who has done what is a central task of every human system, the foundation of justice and morality. Our tendency even to name or think about actions seems to arise from our need to figure out who did what (Feinberg 1970). The elements of social exchange that allow us to function as a society would be upended immediately if we weren't all keeping track at all times, at least at some level, of who does everything. Slips in accounting can destroy our system of interaction; we become concerned when authorship is confused. It bothers us when a person with schizophrenia claims to hear inner "voices" from others, as we know there is a lapse in appreciating authorship (Graham and Stephens 1994; Hoffmann 1986). We find ourselves getting nervous when someone such as President George W. Bush claims that God is the author of his actions (Suskind 2004). People who claim too little authorship for self, like the plagiarists who claim too much, upset our sense of justice and threaten to undo our society (Homans 1961). As a result we will forever be worrying about who called the cab, who burned the turkey, who left the cat out overnight, who clicked the remote, who lost the twist tie to the ginger snaps . . . and on and on.

The importance of authorship accounting in social life makes the role of social cues in authorship judgments far more understandable. It's not just that other people might blur for us what we think we did—it's that everyone is continually trying to keep clear on what everyone is doing. There is a great social task at hand, the maintenance of society. With this in mind, the finding that people are influenced by who goes first in the alphabet
task begins to take on some meaning. In essence, this realization suggests that authorship judgments have evolved in humans not merely as a way of keeping track of personal causation as compared with the forces of physics in the world, but additionally, and more profoundly, as a way of accounting for own agency in a social world where agency in coaction is the measure of all things. The experience of willing an action is enhanced when others follow, and is reduced when others lead, even when the leading and following are causally irrelevant, because the experience is produced as a reportable measure of one's potential claim to causal influence. Conscious will is an experiential indicator that lends a sense of "me-ness" to some portion of the maelstrom of coaction in which the self is involved.

And this feeling produces something more than a rational judgment of authorship. It also yields a kind of authenticity, a sort of embodiment (Niedenthal et al. 2005) to the judgment that gives it much the same punch as an emotion. Like a feeling of joy or fear or anger, a feeling of doing marks an action as one's own in a way that is easily perceived and highly memorable. Imagine that a toddler is tugging to get your attention and you push her away, perhaps with a bit too much vigor. She falls back on her bottom and begins to cry. Yes, you feel bad because she's crying. You feel worse because you were partly the author of that coaction, and you are responsible for what happened. The feeling that is produced in this setting, a sense of willfulness about the action, along with guilt and embarrassment for doing it, makes your authorship all too clear to you. You experience a sense of first-person responsibility for this event that is more telling about your reactions and subsequent feelings than any amount of third-person responsibility (responsibility attributed to you by others) that might be allocated in the court of toddler justice downtown (Wegner 2004).

The maintenance of social justice and the fabric of society, as well as the continued operation of morality in human affairs—these are all things that some commentators would have us believe are lost if there arises widespread belief in a deterministic model of human action (e.g., Nahmias 2002). The implosion of these social necessities does not hinge, however, on the existence of real responsibility. Thoughts do not have to cause actions for morality and human value to continue. Rather, there must be a personal perception of own responsibility. The real linchpin holding together society is first-person responsibility, the kind of thing that happens in your head and your heart when you nudge that toddler and know you did it. We could have a world of legal systems and punishment machinery
in place to hold people responsible for what society thinks they did, but if people never had any deep inner sense that they authored their actions, all of this would be for naught. Many prisoners may believe they don't deserve to be in prison—but what if nobody believed they ever had done something to deserve what they got? The experience of conscious will, along with its attendant moral emotions, is what gives us each an inner sense of our part in the grand scheme of coaction and allows us to accept graciously the deserts that society, by its own calculations, thinks we should get for our authorship.

Conclusion

The experience of consciously willing an action has long been understood as an expression of the true source of action: Actions we consciously will are those that indeed were caused intentionally by us. This is what Ryle called the "official doctrine," and it is an intuition that continues to be espoused by a number of psychologists and philosophers. It turns out, though, that people in psychological experiments are remarkably open to illusions of their own authorship, coming to conclude they have done things they did not, or to believe that they have not done things they actually did do, as a result of the introduction of misleading information about the relationship that exists between their own thoughts and actions, or about the relationship between their own actions and the actions of others.

In particular, the experience of coaction can be difficult to read properly. It is easy to become convinced that the irrelevant actions of others have implications for one's own authorship. When another person mimics your action before you do it, it seems that you are less responsible for your own action; when the other mimics it afterward, in turn, you feel an enhanced sense of responsibility for the same motions. These feelings reflect erroneous authorship judgments, and so indicate that the experience of consciously willing an action is only a fallible estimate of authorship—not a direct expression of the operation of personal causation.

Human sensitivity to social cues in judging personal authorship in coaction also points to a useful observation about why such authorship feelings have evolved at all. The experience of authorship seems to have developed in humans as a building block toward the creation of social exchange systems. Knowing who does what may keep society together. Fred Astaire and Ginger Rogers must have had views of who was doing what in their dance team that led
to similar cohesion—a solution to the puzzle of coaction that kept them dancing cheek to cheek.

Acknowledgment

This research was funded in part by NIMH Grant MH-49127.

References

Aarts, H., R. Custers, and D. M. Wegner. 2004. Attributing the cause of one's own actions: Priming effects in agency judgments. Consciousness and Cognition 14: 439–58.

Baron-Cohen, S. 1995. Mindblindness. Cambridge: MIT Press.

Baron-Cohen, S., S. Wheelwright, R. Skinner, J. Martin, and E. Clubley. 2001. The autism-spectrum quotient (AQ): Evidence from Asperger's syndrome/high functioning autism, males and females, scientists and mathematicians. Journal of Autism and Developmental Disorders 31: 5–17.

Bekkering, H., and S. Neggers. 2002. Visual search is modulated by action intentions. Psychological Science 13: 370–73.

Bem, D. J. 1967. Self-perception: An alternative interpretation of cognitive dissonance phenomena. Psychological Review 74: 183–200.

Bem, D. J. 1972. Self-perception theory. In L. Berkowitz, ed., Advances in Experimental Social Psychology, vol. 6, pp. 1–62. New York: Academic Press.

Blakemore, S.-J., D. M. Wolpert, and C. D. Frith. 2002. Abnormalities in the awareness of action. Trends in Cognitive Sciences 6: 237–42.

Blakemore, S. J., and J. Decety. 2001. From the perception of action to the understanding of intention. Nature Reviews Neuroscience 2: 561–67.

Blakemore, S. J., D. Wolpert, and C. Frith. 1998. Central cancellation of self-produced tickle sensation. Nature Neuroscience 1: 635–40.

Blakemore, S. J., D. Wolpert, and C. Frith. 2000. Why can't you tickle yourself? NeuroReport 11: 11–16.

Bratman, M. E. 1987. Intentions, Plans, and Practical Reason. Cambridge: Harvard University Press.

Daprati, E., N. Franck, N. Georgieff, J. Proust, E. Pacherie, J. Dalery, and M. Jeannerod. 1997. Looking for the agent: An investigation into consciousness of action and self-consciousness in schizophrenic patients. Cognition 65: 71–86.
Decety, J., T. Chaminade, J. Grezes, and A. N. Meltzoff. 2002. A PET exploration of neural mechanisms involved in reciprocal imitation. NeuroImage 15: 265–72.

Dennett, D. 1984. Elbow Room: The Varieties of Free Will Worth Wanting. Cambridge: MIT Press.

Dennett, D. 1987. The Intentional Stance. Cambridge: Bradford Books/MIT Press.

Dennett, D. 1991. Consciousness Explained. New York: Basic Books.

Dennett, D. 1996. Kinds of Minds. New York: Basic Books.

Dennett, D. 2003. Freedom Evolves. New York: Viking.

Duval, S., and R. A. Wicklund. 1972. A Theory of Objective Self Awareness. New York: Academic Press.

Duval, S., and R. A. Wicklund. 1973. Effects of objective self-awareness on attribution of causality. Journal of Experimental Social Psychology 9: 17–31.

Feinberg, J. 1970. Doing and Deserving. Princeton: Princeton University Press.

Franck, N., C. Farrer, N. Georgieff, M. Marie-Cardine, J. Daléry, T. d'Amato, and M. Jeannerod. 2001. Defective recognition of one's own actions in patients with schizophrenia. American Journal of Psychiatry 158: 454–59.

Frith, C. D., S.-J. Blakemore, and D. M. Wolpert. 2000. Abnormalities in the awareness and control of action. Philosophical Transactions: Biological Sciences 355: 1771–88.

Gandevia, S., and D. Burke. 1992. Does the nervous system depend on kinesthetic information to control natural limb movements? Behavioral and Brain Sciences 15: 614–32.

Georgieff, N., and M. Jeannerod. 1998. Beyond consciousness of external reality: A "who" system for consciousness of action and self-consciousness. Consciousness and Cognition 7: 465–77.

Gibson, L., and D. M. Wegner. 2003. Believing we've done what we were thinking: An illusion of authorship. Paper presented at the Society for Personality and Social Psychology, Los Angeles.

Gilbert, D. T. 1998. Ordinary personology. In D. T. Gilbert, S. T. Fiske, and G. Lindzey, eds., Handbook of Social Psychology, pp. 89–150. New York: McGraw-Hill.

Graham, G., and G. L. Stephens. 1994. Mind and mine. In G. Graham and G. L. Stephens, eds., Philosophical Psychology, pp. 91–109. Cambridge: MIT Press.

Graziano, M. S. A. 1999. Where is my arm? The relative role of vision and proprioception in the neuronal representation of limb position. Proceedings of the National Academy of Sciences USA 96: 10418–21.
Heider, F. 1958. The Psychology of Interpersonal Relations. New York: Wiley.

Hoffmann, R. E. 1986. Verbal hallucinations and language production processes in schizophrenia. Behavioral and Brain Sciences 9: 503–48.

Homans, G. C. 1961. Social Behaviour. New York: Harcourt, Brace, and World.

Jacobson, J. W., J. A. Mulick, and A. A. Schwartz. 1995. A history of facilitated communication: Science, pseudoscience, and antiscience. American Psychologist 50: 750–65.

Jones, E. E., D. E. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins, and B. Weiner. 1972. Attribution: Perceiving the Causes of Behavior. Morristown, NJ: General Learning Press.

Kelley, H. H. 1967. Attribution theory in social psychology. In D. Levine, ed., Nebraska Symposium on Motivation, vol. 15, pp. 192–240. Lincoln: University of Nebraska Press.

Matthews, P. B. C. 1982. Where does Sherrington's "muscular sense" originate? Muscles, joints, corollary discharges? Annual Review of Neuroscience 5: 189–218.

McArthur, L. Z., and D. L. Post. 1977. Figural emphasis and person perception. Journal of Experimental Social Psychology 13: 520–35.

Milgram, S. 1963. Behavioral study of obedience. Journal of Abnormal and Social Psychology 67: 371–78.

Milgram, S. 1974. Obedience to Authority. New York: Harper and Row.

Mueller, J. 1985. Astaire Dancing: The Musical Films of Fred Astaire. New York: Knopf.

Nahmias, E. 2002. When consciousness matters: A critical review of Daniel Wegner's The Illusion of Conscious Will. Philosophical Psychology 15: 527–41.

Niedenthal, P. M., L. W. Barsalou, P. Winkielman, S. Krauth-Gruber, and F. Ric. 2005. Embodiment in attitudes, social perception, and emotion. Personality and Social Psychology Review 9: 184–211.

Nielson, T. I. 1963. Volition: A new experimental approach. Scandinavian Journal of Psychology 4: 215–30.

Nisbett, R. E., and T. D. Wilson. 1977. Telling more than we can know: Verbal reports on mental processes. Psychological Review 84: 231–59.

Pockett, S., W. P. Banks, and S. Gallagher, eds. 2006. Does Consciousness Cause Behavior? Cambridge: MIT Press.

Pronin, E., D. M. Wegner, K. McCarthy, and S. Rodriguez. 2006. Everyday magical powers: The role of apparent mental causation in the overestimation of personal influence. Journal of Personality and Social Psychology, forthcoming.

Prose, F. 2000. The Blue Angel. New York: Harper Collins.
Ramachandran, V., and D. Rogers-Ramachandran. 1996. Synaesthesia in phantom limbs induced with mirrors. Proceedings of the Royal Society of London 263: 377–86.

Ross, L., and R. E. Nisbett. 1991. The Person and the Situation. New York: McGraw-Hill.

Ryle, G. 1949. The Concept of Mind. London: Hutchinson.

Scheerer, E. 1987. Muscle sense and innervation feelings: A chapter in the history of perception and action. In H. Heuer and A. F. Sanders, eds., Perspectives on Perception and Action, pp. 171–94. Hillsdale, NJ: Erlbaum.

Searle, J. R. 1983. Intentionality: An Essay on the Philosophy of Mind. New York: Cambridge University Press.

Sherrington, C. S. 1906. The Integrative Action of the Nervous System. New York: Charles Scribner's Sons.

Sirigu, A., E. Daprati, P. Pradat-Diehl, N. Franck, and M. Jeannerod. 1999. Perception of self-generated movement following left-parietal lesion. Brain 122: 1867–74.

Sparrow, B., and D. M. Wegner. 2006. The experience of authorship in coaction. Unpublished manuscript. Cambridge, MA.

Sperry, R. 1950. Neural basis of spontaneous optokinetic responses produced by visual inversion. Journal of Comparative and Physiological Psychology 43: 482–89.

Suskind, R. 2004. Faith, certainty and the presidency of George W. Bush. New York Times Magazine, October 17.

Taylor, S. E., and S. T. Fiske. 1978. Salience, attention, and attribution: Top of the head phenomena. In L. Berkowitz, ed., Advances in Experimental Social Psychology, vol. 11, pp. 249–68. New York: Academic Press.

von Holst, E., and H. Mittelstaedt. 1950. Das Reafferenzprinzip. Naturwissenschaften 37: 464–76.

Wegner, D. M. 2002. The Illusion of Conscious Will. Cambridge: MIT Press.

Wegner, D. M. 2003. The mind's best trick: How we experience conscious will. Trends in Cognitive Sciences 7: 65–69.

Wegner, D. M. 2004. Precis of The Illusion of Conscious Will. Behavioral and Brain Sciences 27: 649–92.

Wegner, D. M., and J. Erskine. 2003. Voluntary involuntariness: Thought suppression and the regulation of the experience of will. Consciousness and Cognition 12: 684–94.

Wegner, D. M., V. Fuller, and B. Sparrow. 2003. Clever hands: Uncontrolled intelligence in facilitated communication. Journal of Personality and Social Psychology 85: 1–15.
Wegner, D. M., and B. Sparrow. 2004. Authorship processing. In M. Gazzaniga, ed., The New Cognitive Neurosciences, 3rd ed., pp. 1201–09. Cambridge: MIT Press.

Wegner, D. M., B. Sparrow, and L. Winerman. 2004. Vicarious agency: Experiencing control over the movements of others. Journal of Personality and Social Psychology 86: 838–48.

Wegner, D. M., and T. P. Wheatley. 1999. Apparent mental causation: Sources of the experience of will. American Psychologist 54: 480–92.
3 What Kind of Agent Are We? A Naturalistic Framework for the Study of Human Agency

Paul Sheldon Davies
What kind of agent are we? My answer is twofold. First, we do not know. At this point in the history of inquiry, traditional notions of agency are dead or dying and their replacements have yet to be born or yet to reach maturity. Second, although we are in a period of conceptual transition and thus have no developed concept of human agency, we do know what kind of agent we are not. We do know of some elements in our traditional concept that they are no longer tenable. It is important to be self-conscious about these deaths in our conceptual scheme, as they generate intellectual tension that may lead to conceptual creativity.

I aim to flesh out this twofold answer to my title question. My discussion divides into four main parts. The first sketches in broad strokes a contrast between three different orientations toward inquiry—a contrast, I think, that captures some central features of the contemporary philosophical and perhaps general intellectual landscape. Of these three orientations, one is deeply naturalistic, another naturalistic but insufficiently so, and the third nonnaturalistic. The second part of my discussion introduces some of the elements of the naturalistic orientation I wish to recommend, while the third part illustrates this orientation by reviewing some of the central claims in Daniel Wegner's theory of the will, which he defends in his splendid (2002) book The Illusion of Conscious Will. In the fourth part, however, I focus on Wegner's discussion of the concepts 'control' and 'responsibility' from the ninth chapter of his book and argue that his claims concerning these concepts conflict with the theory of will developed in the first eight chapters. I also argue that we should resolve this conflict by taking the theory of conscious will to its full naturalistic conclusion and altering our notions 'control' and 'responsibility' even further than Wegner suggests. We should do this, I argue, in light of the main elements of my naturalistic orientation. By way of conclusion, I return to the question, What kind of agent are we? and conclude that the implications of Wegner's
theory, particularly the conceptual uncertainty that the theory thrusts upon us, make plausible both parts of the answer I give to my title question.

Orientations toward Inquiry

The first orientation, what I call conceptual conservatism, presumes that the aim of inquiry is to preserve as far as possible apparently important concepts. Two species within this genus include (1) the presumption that we should save concepts that appear important within some well-developed scientific theory and (2) the presumption that we should save concepts that appear important within our general worldview. An example of the first presumption is evident in Michael Ruse's recent book Darwin and Design, in which he claims that the concepts 'design' and 'function' must be preserved within evolutionary theory (Ruse 2003).1 The second presumption might explain the conflict that arises in Wegner's discussion of 'control' and 'responsibility'. I am going to suggest, at any rate, that while Wegner's theory of will threatens to undermine traditional notions associated with agency, his final chapter reverses direction. It is a move away from a naturalistic orientation toward the second form of conceptual conservatism.

The second orientation, what I call conceptual imperialism, presumes that some concepts possess a special right among or dominion over all other concepts and all methods of inquiry. The aim is not to try to "save" apparently important concepts. Nor is the aim to try to better conform our concepts to the world. Just the opposite in fact. The aim is to delineate the contours of certain concepts and thereby specify what the world must be like in order that it fit those concepts. On this view, our concepts are not open to negotiation with respect to their presumed core content. With such concepts, the aim is not to put them to any test but rather to hold them fast and insist on classifying the world in terms of them.

Lest anyone suspect me of caricature in the case of imperialism, consider the imperialist's orientation toward the concept 'free will'. The imperialist presumes that some apparently important concepts are necessary in some sense or other and hence nonnegotiable, at least at their core. The claim concerning 'free will' is that if we alter or relinquish our traditional concept, we thereby lose our grip on a host of concepts clustering around the notions of control and responsibility. Altering or relinquishing those concepts eviscerates our conceptual scheme to such an extent that we can no longer think or offer meaningful assertions about human agency. We will
What Kind of Agent Are We?
41
have changed the subject and lost our way. From the imperialist's point of view, there is something inherently self-destructive about a form of inquiry in which a concept so near the core can be altered or eliminated.

What then is the positive aim of imperialist inquiry concerning free will? The goal is to reflect on our concept by way of thought experiments and articulate conditions necessary and sufficient for the application of the concept. Once we have accomplished that, we can then specify what the world must be like in order that something actually fall under the concept. On the imperialist assumption that our concept must be retained, and on the basis of conditions deemed necessary and sufficient, we are forced to conclude, as Roderick Chisholm (1964) does, for example, that human beings must be akin to God with respect to the will. The imperialist claims in this case to have discovered that the core of the human self must include the capacity to initiate causal sequences leading to free actions without that capacity itself being subject to efficient causation. This is a claim about the nature of human beings based solely upon the (allegedly) nonnegotiable content of our (alleged) concept 'free will'.

Now, I will have nothing further to say about conceptual imperialism except that it contrasts starkly with the third orientation—what I call conceptual progressivism. According to the progressivist, inquiry is largely a matter of exploration. Naturalism is exploration in the way that the naturalists of the nineteenth century—Charles Darwin and Alfred Wallace, for example—were explorers. To adopt this orientation is to embark on the study of natural systems with the positive expectation that our settled view of the world is about to be unsettled. It is to proceed with a heightened sense that, as we approach the bounds of current knowledge, we are crossing into lands that may challenge the very categories in which our knowledge is presently expressed. This is to face the crucial possibility that we are conceptually ill-equipped to assimilate new discoveries, that we may be thrown into conceptual disarray and forced to flounder, even forced to create new conceptual categories. Genuine inquiry, for the progressivist, is always a looming threat to our current worldview and a looming challenge to our creative powers.

Naturalism

I turn now to describing some of the main elements of the naturalistic orientation I endorse. Naturalism-as-exploration is not a metaphysical thesis. It is concerned rather to answer the question, By what methods are we most likely to discover the truth? Applied to the study of human nature,
42
Paul Sheldon Davies
the question concerns methods most likely to produce knowledge of our nature. So far as I can see, there are only two remotely plausible sources for answering this question: (1) lessons culled from human history, particularly the history of modern science and the history of culture, and (2) lessons concerning our constitution, particularly our neurology, psychology, and sociology. Considerations drawn from our history and our constitution make plausible several directives for naturalistic exploration. I will describe four such directives.

Before describing my directives, however, a brief word about tactics. The directives I offer are utterly banal, and intentionally so. I want constraints on inquiry based on considerations so uncontroversial that even the imperialist cannot reject them out of hand. That has obvious strategic payoff when dealing with nonnaturalists. The downside, of course, is that serious naturalists may find these directives tiresome. But I beg your indulgence. For if I am right, the directives I offer, despite their banality, are not without power. There may even be room for serious naturalists to become better naturalists.

History of Science

The first directive derives from the history of science. We make progress in our knowledge of natural systems to the extent we analyze inward and identify low-level systemic mechanisms and interactions that instantiate high-level capacities. We also make progress as we synthesize laterally across related domains of inquiry, as we look for coherence among taxonomies of mechanisms postulated in associated areas of study. Such analyses and syntheses serve as mutual checks on one another. The importance of these checks is demonstrated throughout the history of science, including, for example, the evolution in our concept 'life' from Blumenbach (1781) and Kant (1790) to the modern synthesis in biology.

The important claim is that the initial conceptual categories with which we begin such inquiry are often altered and sometimes destroyed as we analyze inward and synthesize laterally. It is an historical fact that as biology matured in the first half of the twentieth century, Bergson's (1907) 'life force', a notion inherited from Blumenbach and Kant, could no longer be sustained because scientists discovered low-level physical mechanisms implementing precisely those capacities which, according to Blumenbach, Kant, and Bergson, could not be understood mechanistically. A full defense of this claim requires detailed, historical case studies2 and, on the basis of such studies, it is reasonable to accept the following directive:
What Kind of Agent Are We?
43
(E) For systems we understand poorly or not at all, expect that, as inquiry progresses—as we analyze inward and synthesize laterally—the concepts in terms of which we conceptualize high-level systemic capacities will be altered or eliminated.

It is important that (E) expresses an expectation and nothing stronger. But it is also important that the expectation expressed can have substantive consequences for the way we frame our intellectual tasks. We should expect our initial conceptual categories to change as we analyze into complex natural systems of which we are presently ignorant. And, in light of this expectation, we should calibrate our intuitions and hunches so that an explicit openness to change in our concepts becomes our default position.

It is worth pointing out that a commitment to the fecundity of analyzing inward is not a commitment to ontological reductionism. This is clear from what we have recently learned about reductionism in biochemistry. Consider the following form of reductionism:

(R) For any living system S, all high-level capacities are explicable in terms of generalizations quantifying over intrinsic properties of low-level systemic mechanisms.

The properties of low-level mechanisms are "intrinsic" in the relevant sense just in case they do not depend on the mechanisms being part of system S—just in case, that is, they are features that these low-level mechanisms possess even when extracted from S or placed in some other type of system. The lead intuition behind (R) is that high-level systemic capacities are wholly explicable in terms of certain core properties of low-level mechanisms; stability in high-level capacities is, presumably, a direct product of stability in the core properties of low-level mechanisms.

But we now know that (R) is false, at least for certain kinds of biochemical systems. The work of Boogerd et al. (2005) shows that the behavior of certain complex cells decomposes not simply into the intrinsic causal properties of low-level enzymes but also into properties that enzymes acquire only in so far as they are related in the right way to certain other chemical agents. Boogerd et al. are clear that the falsity of (R) does not entail the failure of mechanistic explanation. The high-level systemic capacities of the cells they discuss are entirely explicable in terms of the causal and mechanical effects of low-level mechanisms and their relations. Nevertheless, the kind of reductionism expressed in (R) is demonstrably false. Some of the high-level capacities exemplified by living systems are explicable only in terms of the properties of low-level mechanisms that depend crucially on their
44
Paul Sheldon Davies
place in precisely those kinds of living systems. This is enough to show that a commitment to analyzing inward, to explaining high-level capacities in terms of low-level mechanisms and their interactions, in no way entails a commitment to the form of reductionism expressed in (R). And it is also worth emphasizing that the discovery reported by Boogerd et al. was accomplished precisely because they approached the study of complex cells by following the lessons that motivate the directive in (E)—by analyzing inward and synthesizing laterally.

History of Culture

The history of (at least) western culture should make us self-consciously suspicious of what I call dubious concepts. One category of dubious concepts comprises concepts that are dubious by virtue of descent. Concepts that descend from a worldview we no longer regard as true are dubious in this sense, provided they have not been vindicated or superseded by concepts of some well-developed scientific theory. The traditional concept 'free will' as elaborated by Chisholm (1964) and others is dubious in this sense, since it derives mainly from our theological ancestry and has not been vindicated by any contemporary form of inquiry. These sorts of reflections on the history of culture, properly fleshed out, warrant the following directive for inquiry:

(D) For any concept dubious by virtue of descent, do not make it a condition of adequacy on our philosophical theorizing that we preserve or otherwise "save" that concept; rather, bracket the concept with the expectation that it will be explained away or vindicated as inquiry progresses—as we analyze inward and synthesize laterally.

The importance of (D) consists not simply in a general mistrust of concepts with a theological heritage but also in serving as a check on the intellectual biases bequeathed to us by our predecessors. The hope is that we become self-conscious of the prejudices we may otherwise fail to notice in ourselves. In doing so, we create the intellectual space within our own inquiries to apply the lessons that motivate the expectation in (E). After all, we are the products of our history, including our cultural history, and it is naïve to think that we can, simply by way of self-reflection, identify and correct the retarding effects of our culturally instituted categories. Perhaps the most pernicious effect of concepts dubious by descent is that we tend not to see the need for, or even the possibility of, analyzing inward and synthesizing laterally. We may tend to see certain concepts as inescapable and constitutive of what we are even when those concepts are the products
of historical and cultural forces that are escapable after all. The importance of (D), at any rate, is that it broadens the efficacy of the lessons that motivate (E).3

Human Psychology

Some of our most basic conceptualizing capacities lead us astray. Our tendency to over-apply the perceptual concept 'face'—to clouds, automobile grilles, abstract paintings, and so forth—is a simple but vivid example. But there are other, more substantive cases. Contemporary cognitive psychologists maintain that the human mind comprises a host of conceptual and inferential capacities. Some of these capacities, it is claimed, dispose us to apply the concept 'cause' under a range of conditions, including conditions in which no causal relation obtains. Other capacities dispose us to apply one or more of a cluster of mental concepts, including 'intention', even when the attributed intention is absent. In general, concepts for which our conceptualizing capacities are apt to generate false positives are dubious by virtue of their psychological role.4

Now, even if you have a healthy skepticism toward contemporary cognitive psychology, the point here is still significant. It is plausible that some of our most basic cognitive and affective capacities enable us to anticipate and navigate our environments under a limited range of conditions. It is also plausible that we find ourselves, often enough, in conditions outside those limits. The resulting false positives may be, from an evolutionary point of view, a cost worth paying; the false positives may be a nuisance or even deleterious in some instances, but the presumption is that being possessed of different psychological structures would be far worse. However, from the point of view of trying to acquire knowledge, the false positives may be intolerable. As Robert McCauley (2000) has recently argued, the pursuit of scientific knowledge is probably unnatural—contrary to some elements of our nature—in precisely this sort of way. If so, then progress in knowledge is limited by the retarding effects of our own psychological architecture or, less pessimistically, by the bounds of our best efforts to creatively think or feel our way around our own structural limitations.5

If all this is right, then it is crucial from the point of view of inquiry that we become self-conscious about the conditions under which such conceptualizing capacities may lead us astray. The relevant directive is this:

(P) For any concept dubious by psychological role, do not make it a condition of adequacy on our philosophical theorizing that we preserve or otherwise "save" that concept; rather, require that we identify the
conditions (if any) under which the concept is correctly applied and withhold antecedent authority from that concept under all other conditions.

As with the directive in (D), the importance of (P) is that it serves as a check that enables us to better apply the lessons that motivate (E). It serves as a check against the biases of our own psychological structures; it creates the space, by way of analyzing inward and synthesizing laterally, for us to better understand the limitations of our own abilities. Of course, (P) may be difficult to implement. It may be difficult because the capacities engage nonconsciously and because they are entrenched and thus constitutive of the way we orient ourselves to the world. It may also be difficult because the concepts, even if they reach the level of conscious reflection, are central to how we portray ourselves as agents and as inquirers. As I say, implementing (P) may require that we devise strategies for thinking or feeling our way around certain naturally distorting dispositions of thought and feeling.6 And we must be prepared to discover that implementing (P) is beyond our powers with respect to some psychological capacities.

Evolutionary History

At least two elementary facts concerning our evolutionary history are important. The first is that we are evolved animals. All of our capacities, including our capacities to generate and apply concepts, and including the conceptual schemes we create, have a long evolutionary history. The second is that we, along with other primates that exist today, evolved from a common hominine7 ancestor, in which case the study of our cousins is sometimes an aid to the study of ourselves. These facts, coupled with claims from psychology or neurology, enable us to formulate reasonable hypotheses concerning the structure of our psychology. They enable us to distinguish structure from noise. They do this by making it rational to choose hypotheses that fit what is known about our evolutionary history over hypotheses that do not. If taken in isolation, such evolutionary considerations are weak, but if combined with the lessons that motivate the expectation in (E)—if combined with the fruits of analyzing inward and synthesizing laterally, as ethologists and neuroscientists do—then considerations of our evolutionary history have force. A plausible evolutionary hypothesis serves as a general framework for the pursuit of low-level mechanisms and interactions; the more plausible the framework, the more compelling the low-level taxonomy. We thus should require that
(H) For any hypothesis regarding any human capacity, make it a condition of adequacy that, as we analyze inward and synthesize laterally, we do so within a framework informed by relevant considerations of our evolutionary history.

In addition to helping us to distinguish structure from noise, this directive also has implications for conceptual conservatism and imperialism. Neither the conservative nor the imperialist pays sufficient heed to the fact that our concepts are the products of our evolutionary history; the mere fact that we take ourselves to have the relevant concept is usually thought to be sufficient. But surely that is false. Surely the fact that our concepts evolved bears on the authority such concepts should have in our inquiries. Moreover, neither the conservative nor the imperialist pays heed to the fact that our conceptualizing capacities are the products of our evolutionary history; the mere fact that we take ourselves to have the relevant capacity is usually taken at face value. But, once again, that is surely a mistake. The fact that our capacities evolved indicates something about their scope and limitations; the fact that they evolved should, for example, cast doubt upon certain claims concerning the sorts of properties to which our capacities are responsive.

Wegner's Naturalism

Wegner (2002) employs all four directives to great effect in his approach to the topic of free will.8 Although the topic is usually broached as a problem in metaphysics, a fully satisfying theory would need to explain how freedom of will is implemented in human psychology. Indeed, it is a reasonable constraint on any metaphysical theory concerning the mind that it apply to organisms with the cognitive and affective capacities we in fact have. Perhaps a metaphysical theory of free will need not directly answer the question of implementation, but it must at least presuppose the availability of an answer. A central question, then, concerning free will is: How can we study the alleged human capacity for free will scientifically? It is this question Wegner sets out to answer. His answer rests upon the theory of apparent mental causation.

We frame our inquiry by fixing attention not on the concept 'free will', but on a capacity we can be confident most humans in fact possess. We fix attention on the felt experience of willing or of having willed an action. We focus on a feeling associated with experiencing one's self as the causal impetus and conduit of one's own action, a feeling with the oomph of effort as its immediate object.
Here then is a strategy for the scientific investigation of our alleged capacity for free will: study the felt experience of willing. And this takes us immediately to the lessons that motivate the expectation expressed in (E), for we may now bring to bear models from cognitive and social psychology, as well as neurology; having settled on an actual high-level capacity of humans, we may begin the process of analyzing inward and synthesizing laterally. This also enables us to adhere to the directive expressed in (D), for we set to one side the concept, dubious by descent, that drove Chisholm and others to conclude that our wills are akin to God's. Most important, once we adhere to the lessons that motivate (E)—once we turn to what is known about our neurological, psychological, and social constitution—the directive in (P) rises to the top.

The theory of apparent mental causation asserts that the same properties involved in the perception of causality generally apply to the particular case of perceiving a causal relation between the pair

the conscious intention to do action A

and

the perception of one's self performing action A

Taking his cue from the work of Albert Michotte (1954), Wegner offers the following model. Normal humans perceive instances of the above-cited pair of events—particular intentions and actions—as causally related to the extent the following conditions obtain:

1. Priority: One event is perceived as occurring prior to the other.
2. Consistency: Both events are conceptualized in semantically related terms.
3. Exclusivity: No third event is perceived as an alternative cause.

Wegner's hypothesis is that these are features of situations to which we are, by virtue of our psychological constitution, perceptually attuned; these are features in response to which we conceptualize a pair of events as related causally. The stronger our perception of priority, consistency, and exclusivity between a conscious intention and the perception of one's self acting, the more likely we are to infer that the conscious intention caused the action. And when we make that inference, we tend to experience the feeling of willing.

The theory, however, is not simply about mental causation; it is about the appearance of mental causation. This is crucial, for it is here that the directive in (P) exerts its full force. In the course of introducing his theory,
Wegner says, ". . . the present framework suggests that the experience of will [the feeling of consciously willing] may only map weakly, or at times not at all, onto the actual causal relation between the person's cognition and action" (p. 66). And, on the next page, "This theory of apparent mental causation depends on the idea that consciousness doesn't know how conscious mental processes work. . . . The conscious will may . . . arise from the person's theory designed to account for the regular relation between thought [the conscious intention to do A] and action [the perception of one's self performing A]. Conscious will is not a direct perception of that relation but rather a feeling based on the causal inference one makes about the data that do become available to consciousness—the thought and the observed action" (p. 67).

The point of the first passage is that the causal inference we draw concerning the relation between intention and action need not track the actual facts of our psychology. We may infer that our intention caused our action even when it did not, since we draw the inference on the basis of perceived priority, consistency, and exclusivity, and not on the basis of actual, low-level causes. The point of the second passage is to help explain the first passage. The appearance of mental causation may mislead because the psychological system involved in perceiving causality operates according to its own principles and on the basis of only partial information that reaches conscious awareness. Few of our cognitive and affective processes rise to the level of conscious awareness. The interpretive system involved in the perception of causality cannot operate on information it cannot access. That system sometimes receives information concerning intentions—those of which we become consciously aware—as well as information concerning actions—those we consciously perceive ourselves performing. But besides these two sources, our interpretive system operates with paltry inputs. No wonder, then, that the appearance of mental causation maps only weakly onto what Wegner calls the empirical will, that is, the actual low-level processes that cause us to act.
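The architecture of the theory can be made vivid with a toy computational sketch. The sketch below is my illustration, not Wegner's formalism: the three cues are encoded as booleans, the two-of-three threshold is an arbitrary assumption, and all the names are invented for the example. Its only point is structural—the routine that generates the feeling of willing never consults the record of what actually caused the action.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    intention_before_action: bool   # priority: the thought preceded the act
    thought_matches_action: bool    # consistency: thought and act semantically related
    no_rival_cause_perceived: bool  # exclusivity: no salient third cause
    actual_cause: str               # the "empirical will" -- never read below

def feeling_of_willing(ep: Episode) -> bool:
    """Infer authorship from the three cues alone; ep.actual_cause is never consulted."""
    cues = (ep.intention_before_action,
            ep.thought_matches_action,
            ep.no_rival_cause_perceived)
    return sum(cues) >= 2  # illustrative 2-of-3 threshold, not part of Wegner's theory

# A nonconsciously primed action can still satisfy all three cues, so the
# feeling of willing arises even though consciousness did no causing.
primed = Episode(True, True, True, actual_cause="nonconscious prime")
print(feeling_of_willing(primed))  # True: feeling and empirical will dissociate
```

However the details are filled in, any mechanism with this shape will generate the feeling of willing from the perceptual cues alone, which is why the feeling can come apart from the empirical will.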
Wegner offers a battery of evidence for the thesis that the system that generates the feeling of willing operates more or less independently from the processes that cause us to act. Consider the mechanisms that help us "protect the illusion" of conscious will. The illusion is the felt sense we have that our conscious intentions are the cause of our allegedly free actions. Wegner hypothesizes that our interpretive system operates with a template of the "ideal causal agent"—an agent who is, among other things, always consciously aware of her goals prior to pursuing them. The facts of the matter indicate, however, that we often act in the absence of prior
conscious goals—when, for example, we do things influenced by nonconscious priming. Other times we act with some intention, but after the action we unknowingly revise our previous intention, only to deny or forget ever having had the initial intention. This is the case, for example, in instances of cognitive dissonance. What is striking in these and related cases is that the subjects of the relevant experiments are psychologically healthy and neurologically intact, and that the phenomena involved—nonconscious primes and cognitive dissonance—are far from uncommon. And there are, furthermore, the fantastic fabrications of intentions among split-brain patients reported by Gazzaniga and LeDoux (1978). Now I am unsure what to make of these latter cases, although Gazzaniga himself (e.g., in Gazzaniga 1997) appears confident that human beings are, by virtue of our neurological constitution, inveterate bullshitters. That may be an overstatement—though working in academe gives one pause!—but what is clear is the necessity of the directive expressed in (P). The interpretive system that generates conscious intentions appears to operate at some considerable distance from the nonconscious, low-level mechanisms that cause us to act.

Wegner's Conceptual Conservatism?

As I see it, the power and momentum of Wegner's argument drives us to the edge of the precipice, forcing a cluster of traditional concepts to the very brink. But, as I also see it, Wegner pulls us back at the last moment. We should not, of course, leap blindly off the edge. Our initial problem revolves around this cluster of concepts, and unless we understand the conceptual revisions forced upon us by our discoveries, we may be in no position to judge whether and how our initial problem has been solved. That said, there is still a difference between retaining concepts without affording them any antecedent authority in our inquiries and retaining concepts in order to preserve or "save" them. We do the former in order to understand whether we are making progress. The latter, by contrast, is objectionable from the naturalistic point of view. And my concern is that in the end Wegner's discussion is conservative in this sense. So let us have a look.

The question Wegner poses for his final chapter is this:

Why, if this experience of will is not the cause of action, would we even go to the trouble of having it? What good is an epiphenomenon? The answer becomes apparent when we appreciate conscious will as a feeling that organizes and informs our understanding of our own agency. Conscious will is a signal with many of the qualities of an emotion, one that reverberates through the mind and body to indicate when we sense having authored an action. (p. 318)
And, in the same paragraph, Wegner points out that his answer to the question is novel. He says,

The idea that conscious will is an emotion of authorship moves beyond the standard way in which people have been thinking about free will and determinism, and presses toward a useful new perspective. This chapter explores how the emotion of authorship serves key functions in the domains of achievement and morality. (ibid.)
To the best of my knowledge, Wegner's theory of conscious will is indeed new, but does it nevertheless try to conserve what ought to be discarded? Does it, in particular, try to save what it has already called into question? The thesis of Wegner's final chapter is that, although the felt experience of will is illusory, this feeling nevertheless serves the important functions of, first, reliably indicating to the agent when she is the author of an action and when she is not, and, second, signaling to the agent actions for which it is appropriate to feel moral emotions and attribute moral responsibility.

I begin with the claim about achievement and work my way toward responsibility. The claim is that the feeling of willing is a reliable guide to what we have indeed accomplished. This is not to say that the feeling of willing is always correct; often our perceptions of control do not match the causal facts involved, as the literature on "perceived control" suggests. Still, as Wegner puts it:
The claim is that the feeling of conscious willing fulfills the function of indicating to an agent which actions she has authored and which she has not. But it fulfills this function only to the extent that the feeling is reliably correlated with the actual causes of our actions, with the empirical will. In a remarkable passage, Wegner puts it this way: Conscious will is the somatic marker of personal authorship, an emotion that authenticates the action’s owner as the self . . . . Often, this marker is quite correct. In many cases, we have intentions that preview our actions, and we draw causal
inferences linking our thoughts and actions in ways that track quite well our own psychological processes. Our experiences of will, in other words, often do correspond correctly with the empirical will—the actual causal connection between our thought and action. The experience of will then serves to mark in the moment and in memory the actions that have been singled out in this way. We know them as ours, as authored by us, because we have felt ourselves doing them. (p. 327; italics added)

The story, then, is this: often the feeling of willing correctly corresponds with the actual low-level causes of our action and, by virtue of this correspondence, the feeling of willing reliably indicates the extent of our control or achievement, while the absence of this feeling reliably indicates that we have hit upon something real that we cannot control.

I do not think we can accept the claim concerning correspondence—not, at any rate, if we accept the theory of apparent mental causation. Nor can we accept that the feeling of willing serves the functions that Wegner says it serves. I will focus my criticisms on the correspondence claim. There are three points.
The story, then, is this: often the feeling of willing correctly corresponds with the actual low-level causes of our action and, by virtue of this correspondence, the feeling of willing reliably indicates the extent of our control or achievement, while the absence of this feeling reliably indicates that we have hit upon something real that we cannot control. I do not think we can accept the claim concerning correspondence—not, at any rate, if we accept the theory of apparent mental causation. Nor can we accept that the feeling of willing serves the functions that Wegner says it serves. I will focus my criticisms on the correspondence claim. There are three points. 1. To see the first point, consider the following levels of processing in Wegner’s model: i. The feeling of willing (arises from) ii. drawing a causal inference between conscious intention and perceived action (caused by) iii. the perception of priority, consistency, and exclusivity between conscious intention and perceived action Now, the correspondence claim asserts that there is a positive correlation—a ‘‘correct correspondence’’—between i and a further level of processing, namely: iv. the empirical will—nonconscious, low-level causes of action But this claim of correspondence is implausible on its face. Wegner has provided a theory concerning the relations between i, ii, and iii. If his development of Michotte’s theory concerning the perception of causality is correct, then correlations between i, ii, and iii are nonaccidental. They hold because of our psychological architecture. They hold because an interpretive system responsive to the perception of priority, consistency, and exclusivity among pairs of events generates a causal inference, which in turn generates the cognitive emotion of authorship. None of this, however, can be said about the relation between i and iv. We have no psychological theory—no theory concerning low-level architecture—linking the con-
conscious feeling of willing to the nonconscious, empirical will. In fact, to the contrary! We have a wealth of theoretical and experimental considerations suggesting that, within the architecture of the human mind, these two processes—the feeling of will and the empirical will—run on more or less independent tracks. This, as I read it, is the take-home lesson of chapters 4, 5, and 6 of Wegner's book. The suggestion at the close of chapter 4, in light of the ideomotor theory of action and other considerations, is that we should conceptualize our psychology as consisting fundamentally of automatic, nonconscious processes, with the conscious will being an add-on in need of special explanation. Automatisms are the rule; consciousness, the exception. The suggestion in chapter 5 is that our interpretive system operates with a template of "the ideal causal agent" and that this system works incessantly to forget or fabricate conscious intentions in order to conceal the extent to which our agency is nonideal—hence the efficacy of mechanisms such as cognitive dissonance. And the suggestion in chapter 6 is that our interpretive system, under a range of conditions, undermines the emotion of authorship even when our nonconscious processes are clearly the cause of the action. The well-known case of "facilitated communication" and the experiments reported in Wegner and Fuller (2000) make this clear. So we cannot accept the theory of apparent mental causation and also accept that the feeling of willing corresponds with the empirical will.

2. My second point is that, even if there were some sort of correspondence between i and iv—even if my first point could be defeated—we nevertheless have good grounds for denying that the correspondence is ever "correct." Three brief considerations support this point.

First, recall that on the theory of apparent mental causation, the causal relata are consciously accessible intentions and perceptions of oneself acting. So, in any given case, the inference drawn must be something like

(CI) My consciously accessible intention to do A caused my doing A.

The subsequent emotion of authorship, therefore, should be something akin to

(CF) The feeling that I, by virtue of conscious, intentional effort, caused my doing A.

That is, the feeling of willing is a feeling of consciously exerting the effort of initiating and executing A. But, of course, it is false that what occurred in consciousness caused A; apparent mental causation is, after all, apparent causation. The emotion of authorship mistakenly leads us to specify the
cause of our action in consciousness, while the actual cause operates beneath the level of conscious awareness in the empirical will. So we have two quite different sorts of objects, a conscious intention that does not cause our action and a nonconscious process that does cause our action. In what sense then does the former "correctly" correspond to the latter? Even if we grant that a correspondence of some kind obtains, on what grounds should we agree that the correspondence is "correct" as opposed to "incorrect" or as opposed to "neither correct nor incorrect"?

The second consideration goes further. The problem is not simply that the content of the conscious feeling of willing is in error. Suppose we assume that nonconscious processes that cause us to act—those operating in the empirical will—also have intentional content. Still, the intentional content of the conscious feeling is something like "I did A," while the content of the nonconscious process would be distinct. In many cases the nonconscious content would have nothing to do with A. The conscious, intentional contents offered by Gazzaniga's split-brain patients hardly correspond to what appears to have been the actual cause of their behavior.9 Consider too all the experiments in which nonconscious primes cause subjects to act in ways they do not consciously recognize or understand. In these sorts of cases there is a radical mismatch between conscious and nonconscious contents. But even in cases where both contents represent the same action, their full intentional contents diverge nonetheless. In the content of the conscious feeling of willing—in (CF)—I am represented as the cause of my action, but that cannot be part of the content of nonconscious processes, since it is implausible that I, the agent, am represented at that level of processing.

The third consideration is that the empirical will is distributed. The actual causal processes are susceptible to social, causal influences. The processes that cause us to act are driven not merely by nonconscious factors internal to the agent but also by factors external to the agent—including, for example, a range of social factors described in Ross and Nisbett (1991). This means that the empirical will sometimes operates at an even greater distance from the feeling of willing. The feeling, after all, is a feeling of my having caused my own action. The feeling is that I, by virtue of a conscious intention, caused my action. And there is nothing in this feeling to even suggest that the action was caused by factors external to my own conscious awareness.

In pressing these last two points, it is perhaps worth pointing out that I am not making an illicit appeal to anything akin to a Cartesian theater (as described in Dennett 1991). The conscious feeling of will isolated by
Wegner may be described with the sentence "I authored my own action," but it may also be described with the sentence "this bit of internal processing caused that bit of behavior." The second sentence assumes a capacity for representing some bits of internal cognitive processing; it assumes a capacity to take one's own internal processes as objects of thought or feeling. That requires no theater unless the metaphor is stretched so thinly that every instance of having a thought or feeling somehow requires a spectator viewing a spectacle. And the first sentence may be permissible on the ground that we do not yet know which bits of internal processing we ought to designate, in which case the second sentence asserts more than we can justifiably claim to know at present.10 At any rate, since the feeling isolated by Wegner is basically the feeling that the cause of my action is wholly internal, it follows that if the empirical will is distributed over external causal factors—as Wegner claims it is—then this feeling does not correctly correspond, but rather conflicts, with the empirical will.

These three considerations—that the feeling of conscious willing is false, that the intentional content (if any) of the empirical will often diverges from that of the conscious will, and that the empirical will is determined by factors external to the agent—all support the general claim that there is no meaningful sense in which the feeling of willing can correspond "correctly" with the empirical will.

3. But suppose I am wrong. Suppose there is some sort of "correct" correspondence between the feeling of will and the empirical will. Even so, such correspondence would be too weak to underwrite Wegner's claim about achievement or control and, in consequence, too weak to underwrite ascriptions of moral responsibility. The reason is this: even if it were true that the feeling of willing often corresponds correctly with the empirical will, that is not enough to justify the claim that the agent, in performing a given action, knowingly exerted the right kind of control. To justifiably claim that the agent was in control and hence deserves praise or blame, it must be the case that the agent can identify the action as her own. Put in terms of Wegner's theory, the agent must have good evidence that her action emanated from features of her empirical will that she recognizes as belonging to her character, as in some sense constituting who she is. But, as Wegner so delightfully describes, such evidence is not available from the first-person perspective, for there exists a host of conditions under which we sincerely, but erroneously, take ourselves to be the authors of our actions. A human agent cannot justifiably claim to know, merely on the basis of how things feel, that her action emanated in the right way from her character. To put the point in epistemological terms, the theoretical and
experimental work to which Wegner appeals in the first eight chapters of his book constitutes an overwhelming defeater of the claim that one's action emanated in the relevant way from one's empirical will. We simply do not know our selves to that degree; the structure of our psychology works against it.

My conclusion concerning the concepts 'control' and 'responsibility' is threefold. First, we should reject the claim that the feeling of willing serves the function of telling us when we have exercised control and when we have not. That claim is plausible only if we accept the correspondence claim and, as I have just argued, the correspondence claim is not plausible if we accept the theory of apparent mental causation. We should also reject the claim that the feeling of willing underwrites attributions of responsibility, on the ground that the correspondence claim, even if true, is too weak. The conclusion to draw is that if we accept apparent mental causation, we should not afford antecedent authority to either concept in framing our inquiries. That is the force of the directive in (P). We should also insist that with respect to these concepts, we first try to analyze inward and synthesize laterally. If that succeeds, the concepts will be vindicated. If it fails, the concepts will die or be transformed into something else. That is the force of (E).

Second, it would indeed be useful to know the conditions under which the feeling of willing is a reliable guide to actual authorship. But if the theory of apparent mental causation is correct, and if the naturalistic directives in (P) and (E) are on the right track, then the only way to discover whether the feeling of will ever corresponds correctly with the actual causes of our actions is to resist the apparently natural tendency to trust that our feeling matches the actual causes of our action. In order to discover the significance of the feeling of consciously willing, we must first achieve some distance from that feeling. We must try to discover whether and under what conditions the feeling ever correlates positively with nonconscious processes that cause us to act. And, of course, to accomplish that, we will have to discover which low-level, nonconscious processes are the actual causes of our actions.

Third, suppose we eventually discover that the feeling of willing rarely corresponds positively with the causes of action. What should we conclude? One suggestion is that we try to conform our conceptual and affective capacities to better fit the world. Golfers must learn to associate a host of conscious feelings with the actual movements of the club and with the direction, distance, and spin of the ball. They do this by developing "grooves" in their nervous systems with hundreds of hours of repetition. Something similar is true of opera singers, who must learn to associate the
conscious feelings of producing and hearing their own voices with the pitch, tone, and volume of the sound they produce. Perhaps there is something analogous to be learned about the feeling of willing. Perhaps we can learn to correlate certain feelings of authorship with specific low-level causes of our actions. Perhaps. But the theory of apparent mental causation should make us cautious; we should doubt that the feeling of willing locks onto the actual causes of our actions. And most important, from the naturalistic point of view, we should relinquish these cautious doubts only if we discover, as the result of adhering to the directives in (E) and (P), that the feeling does, or can be trained to, correspond with the empirical will. In the meantime we must withhold antecedent authority from the concepts 'control' and 'responsibility' as they apply to claims about human agency. We must be prepared to discover that the feeling of willing and the empirical will do not fit and cannot be made to fit. To that end, we should calibrate our expectations and hunches in such a way that we are open to the loss of certain concepts, including ones that presently strike us as ineliminable in some way.

In the final paragraph of the book, Wegner says, "Our sense of being a conscious agent who does things comes at a cost of being technically wrong all the time. The feeling of doing is how it seems, not what it is—but that is as it should be. All is well because the illusion makes us human" (p. 342; italics added). But I wonder. Is it really the illusion that makes us human—as opposed, say, to the determination to live without illusions? More important, why afford any authority to our current understanding of what "makes us human," since we have good evidence that dubious concepts persist in our current conceptualization of human nature? At any rate, the attitude of exploration that drove Darwin and Wallace should incline us—as it inclined them—toward the construction of new concepts rather than the preservation of old ones. And we may, in the course of such explorations, run up against the limits of our nature—limits on our abilities as practical agents or our abilities as inquiring agents—but we should not pretend to know them in advance. We certainly should not claim to know our limits on the basis of concepts so patently dubious by virtue of descent and by virtue of psychological role.

Conclusion

To return, then, to the question, What kind of agent are we? The most immediate answer is: we do not know. But, in more or less the same breath,
we may add: we are not the ideal causal agents of our mostly theological ancestral worldview. Indeed, if the theory of apparent mental causation is true, we are not even good approximations to that ideal. The discovery of this rather basic fact forces us to re-orient the way we study our selves. And the four directives described above provide at least some of the framework of that orientation. Insofar as our cultural and psychological ideals of agency have been undermined by progress in neurology, psychology, and sociology, we must take care to resist the temptation to try to save whatever remnants appear within reach. Conceptual conservatism, with respect to concepts dubious by psychological role or by descent, is antithetical to genuine naturalism.

Acknowledgments

This essay was first presented at the second meeting of the Mind and World Working Group in March 2005 at the University of Alabama, Birmingham. The topic of the conference was distributed cognition and individual volition. My thanks to Harold Kincaid, Don Ross, and David Spurrett for conceptualizing and organizing a first-rate conference. George Ainslie, Don Ross, Philip Pettit, Daniel Dennett, and Daniel Wegner offered questions and comments for which I am most grateful. Special thanks to David Spurrett for detailed criticisms and suggestions on the penultimate draft.

Notes

1. See Davies (unpublished).

2. Studies of the sort presented, for example, in Bechtel and Richardson (1993). See also the discussion in the next paragraph of Boogerd et al. (2005).

3. Both (E) and (D) are wielded against Ruse's (2003) theory of design in Davies (unpublished).

4. Michotte (1954) discusses several experiments in which subjects apply the concept 'cause' in the absence of relevant causal relations. Sperber et al. (1995) offer recent perspectives on the same theme, and Wegner (2002) acknowledges a debt to Michotte. Malle et al. (2001) discuss our capacity to apply the concepts 'desire', 'belief', 'intention', and the like.

5. Wolpert (1992) also discusses the unnaturalness of science.

6. Humphreys (2004) suggests that technology already provides such devices. The epistemology of science, he says, is no longer restricted to the epistemology of human beings.
7. According to Lewin (1998), the term "hominine" has replaced the earlier term "hominid" used to refer to the species in the human family or clade.

8. The Illusion of Conscious Will is a remarkable book in several ways. It is accessible and playful—rare virtues among academic writers—and at the same time it poses a powerful challenge to our current view of human agency. If Wegner's theory is right, much of how we currently understand our selves is deeply mistaken. Moreover, Wegner's argumentative strategy is formidable. He offers a view of agency that unifies, or at least pulls into focus, a wide range of experimental evidence concerning a host of human capacities and infirmities. It is all but impossible to dismiss his theory by quibbling with this or that bit of evidence; one must wrestle with his interpretation of the whole body of evidence.

9. One experimental subject, after reading the command "stand up and leave the room" with his left eye only (thus receiving the information in the nonlinguistic, right hemisphere only), stood up and walked toward the door. When asked by the experimenter why he was leaving, the subject sincerely replied "I need to get a drink."

10. I am indebted in this paragraph to a question posed by Don Ross at the Mind and World conference cited in my acknowledgments. Wegner's thoughts on this issue are given in the "Author's Response" section of Wegner (2004).

References

Bechtel, W., and R. Richardson. 1993. Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research. Princeton: Princeton University Press.

Bergson, H. 1907. Creative Evolution, trans. A. Mitchell (1911). New York: Holt.

Blumenbach, J. F. 1781. Über den Bildungstrieb (On the Formative Power). Göttingen: Johann Christian Dieterich.

Boogerd, F. C., F. J. Bruggeman, R. C. Richardson, A. Stephan, and H. V. Westerhoff. 2005. Emergence and its place in nature: A case study of biochemical networks. Synthese 145: 131–64.

Chisholm, R. 1964. Human freedom and the self. The Lindley Lecture. University of Kansas.

Davies, P. S. n.d. Conceptual conservatism: Naturalism, history, and design. Unpublished manuscript.

Dennett, D. 1991. Consciousness Explained. Boston: Little, Brown.

Gazzaniga, M. 1997. The Mind's Past. Berkeley: University of California Press.

Gazzaniga, M., and J. LeDoux. 1978. The Integrated Mind. New York: Plenum Press.
Humphreys, P. 2004. Extending Ourselves: Computational Science, Empiricism, and Scientific Method. New York: Oxford University Press.

Kant, I. 1951 [1790]. Critique of Judgment, trans. J. H. Bernard. New York: Hafner Press.

Lewin, R. 1998. Principles of Human Evolution. Malden, MA: Blackwell Science.

Malle, B., L. Moses, and D. Baldwin. 2001. Intentions and Intentionality: Foundations of Social Cognition. Cambridge: MIT Press.

McCauley, R. 2000. The naturalness of religion and the unnaturalness of science. In F. Keil and R. Wilson, eds., Explanation and Cognition. Cambridge: MIT Press, pp. 61–85.

Michotte, A. 1963 [1954]. The Perception of Causality, trans. T. R. Miles. New York: Basic Books.

Ross, L., and R. Nisbett. 1991. The Person and the Situation: Perspectives of Social Psychology. Philadelphia: Temple University Press.

Ruse, M. 2003. Darwin and Design: Does Evolution Have a Purpose? Cambridge: Harvard University Press.

Sperber, D., D. Premack, and A. J. Premack. 1995. Causal Cognition: A Multidisciplinary Debate. Oxford: Oxford University Press.

Wegner, D. 2002. The Illusion of Conscious Will. Cambridge: MIT Press.

Wegner, D. 2004. Précis of The Illusion of Conscious Will and frequently asked questions about conscious will. Behavioral and Brain Sciences 27: 649–92.

Wegner, D., and V. A. Fuller. 2000. Clever hands: Action projection in facilitated communication. Unpublished manuscript.

Wolpert, L. 1992. The Unnatural Nature of Science. Cambridge: Harvard University Press.
4
The Illusion of Freedom Evolves
Tamler Sommers
All theory is against free will; all experience is for it. —Samuel Johnson
"All Theory Is against Free Will . . ."

Powerful arguments have been leveled against the concepts of free will and moral responsibility since the Greeks and perhaps earlier. Some—the hard determinists—aim to show that free will is incompatible with determinism, and that determinism is true. Therefore there is no free will. Others, the "no-free-will-either-way theorists," agree that determinism is incompatible with free will, but add that indeterminism, especially the variety posited by quantum physicists, is also incompatible with free will. Therefore there is no free will. Finally, there are the a priori arguments against free will. These arguments conclude that it makes no difference what metaphysical commitments we hold: free will and ultimate moral responsibility are incoherent concepts. Why? Because in order to have free will and ultimate moral responsibility we would have to be causa sui, or "cause of oneself." And it is logically impossible to be self-caused in this way. Here, for example, is Nietzsche on the causa sui:1

The causa sui is the best self-contradiction that has been conceived so far; it is a sort of rape and perversion of logic. But the extravagant pride of man has managed to entangle itself profoundly and frightfully with just this nonsense. The desire for "freedom of the will" in the superlative metaphysical sense, which still holds sway, unfortunately, in the minds of the half-educated; the desire to bear the entire and ultimate responsibility for one's actions oneself, and to absolve God, the world, ancestors, chance, and society involves nothing less than to be precisely this causa sui and, with more than Baron Münchhausen's audacity, to pull oneself up into existence by the hair, out of the swamps of nothingness. (1992, pp. 218–19)
The conclusion of all of these arguments is that what we do, and the way we are, ultimately comes down to luck—the luck of the nature draw, the nurture draw, the brain-state draw, and perhaps the quantum indeterministic event draw. So while it may be of great pragmatic value to hold people responsible for their actions, and to employ systems of reward and punishment, no one is truly deserving of blame or praise for anything.

My aim here is not to argue directly for this conclusion.2 I will say only that 2,500 years have passed and the reasoning behind it has never been refuted or, in my view, even seriously undermined. Yet the view that we lack free will and moral responsibility is seldom taken seriously. On the contrary, it is often dismissed out of hand, even by those who recognize the force of the arguments behind it. Philosophers who reject God, Cartesian dualism, souls, noumenal selves, and even objective morality cannot bring themselves to do the same for the concepts of free will and moral responsibility. The question is: Why?

". . . All Experience Is for It"

We feel free. When we face a situation in which we have to make a decision, we feel, at that moment, like we can act with deep metaphysical "contra-causal" freedom. We believe ourselves to be in total control of the decision. We feel like a centralized "will" is about to cause the choice, and if it is a moral choice, we feel that we will be morally responsible for whatever choice we make. Galen Strawson (1986) gives a vivid description of this experience in the following example. "Imagine," he writes, "that it is Christmas Eve and you are going to buy a bottle of scotch with your last twenty-dollar bill. Right outside the liquor store, there is a man with an Oxfam cup, or a beggar who is clearly in need. You must decide now whether to spend your money on the scotch, or give to the beggar. You stop, and it seems quite clear to you—it surely is quite clear to you—that it is entirely up to you what you do next—in such a way that you will have deep moral responsibility for what you do, whatever you do. The situation is in fact utterly clear: you can put the money in the tin (or give it to the beggar) or you can go in and buy the scotch. You're not only completely, radically free to choose in this situation. You're not free not to choose. That's how it feels." (G. Strawson 1986, p. x)

Strawson's point is that the phenomenology of decision-making leads us to believe that we are radically free and responsible—at least in the immediate moment. Philosophical theories of free will have understandably tried to save this phenomenon, or phenomenology, but at the expense of either
dodging the central objections raised by anti–free will arguments or by lapsing into a stubborn mysterianism. (Samuel Johnson's other famous quotation on this subject is "I know we're free and there's the end on't.") There is another option, however—one that remains naturalistic but faces the problem squarely. We may accept the soundness of the arguments against free will, but instead of trying to justify our belief in it, we try to explain it away.

An Error Theory of Free Will and Moral Responsibility

Doing this means providing an 'error theory' of free will and robust moral responsibility (RMR).3 The error theory makes the following claims:

1. We commonly suppose ourselves and others to have the type of free will that would make us RMR for our behavior.
2. RMR does not exist—no one is ever deserving of blame or praise for any action whatsoever.
3. The widespread belief and experience that we have free will and RMR can be 'explained away', that is, accounted for in a way that does not involve the actual existence of free will and RMR.

Free will skeptics have developed strong arguments for the first two claims, but they have not paid sufficient attention to claim 3. The result is that the skeptical position is dismissed out of hand. However, if we can provide a plausible explanation for why we mistakenly believe ourselves to be free and morally responsible, the first two claims gain even greater support.4 And the skeptical position can no longer be ignored.

This project has been undertaken by a few others. Spinoza, for example, in the appendix to part I of the Ethics, offers the following explanation for our belief in free will: "Men believe that they are free, precisely because they are conscious of their volitions and desires; yet concerning the causes that have determined them to desire and will they have not the faintest idea, because they are ignorant of them" (Spinoza [1675] 1982, p. 57). Charles Darwin presented a similar type of explanation in his Notebooks: "The general delusion about free will [is] obvious—because man has power of action, & he can seldom analyze his motives (originally mostly INSTINCTIVE, & therefore now great effort of reason to discover them . . .) he thinks they have none" (Darwin [1839] 1987, p. 608).

This simple and rather profound idea may well be part of the correct explanation. We are aware of our desires and volitions and that they cause
our behavior. But in most cases we are ignorant of the causes or motives behind the desires and volitions themselves. Thus, as reflective and self-conscious creatures, we have developed this view of free will, this idea that certain volitions have no causes or hidden motives—that they derive from us, from the self, and only there. We believe we are causa sui because we don't know what else could have caused our volitions.

Spinoza and Darwin were, in addition, skeptics about moral responsibility. Darwin, for example, writes: "This view should teach one profound humility, one deserves no credit for anything . . . (yet one takes it for beauty and good temper), nor ought one to blame others . . . . One must view a wicked man like a sickly one . . ." ([1839] 1987, p. 608). But their explanations for why we attribute robust moral responsibility to ourselves and others are derivative of their explanations of why we believe we have free will. Our ignorance of the causes of our volitions leads to the erroneous belief in free will. And the belief in free will leads in turn to the erroneous belief in moral responsibility. What I want to propose is that a large part of the explanation may be the other way around: the belief in robust moral responsibility leads to the belief in free will.5 So then the question becomes: What causes the belief in robust moral responsibility?

Point of Departure: P. F. Strawson and the Reactive Attitudes

P. F. Strawson's magnificent essay "Freedom and Resentment" suggests a way to answer this question. At the time he wrote this paper, Strawson was frustrated with the free will debate, with its emphasis on the question of determinism and on metaphysical debates over the meaning of concepts such as can and possibility—Strawson thought all of this missed the point. He sought instead to locate our commitment to the concepts of free will and RMR within the general framework of human attitudes and emotions—the reactive attitudes, as he called them. Attitudes like resentment, gratitude, forgiveness, guilt, and love are, according to Strawson (1982), "given with the fact of human society." The human commitment to these reactive attitudes, Strawson argued, is too deeply rooted to take seriously the idea that a general theoretical conviction (like a belief in the truth of determinism) can cause us to abandon it altogether. It is these reactive attitudes, and not any particular metaphysical theory, that ground our attributions of moral responsibility.

Strawson and the free will error theorist agree on two very important claims: (1) that the reactive attitudes are deeply rooted in human psychology and (2) that the reactive attitudes are fundamentally connected to the
widespread belief in free will and moral responsibility. But whereas Strawson sought to use these facts to ground or justify free will and RMR, the error theorist sees them as part of the explanation for why we mistakenly believe ourselves to be free and responsible. The reactive attitudes and our proneness to experience them, according to the error theorist, can help to explain why we commit the error in the error theory of free will.

The Reactive Attitudes as Adaptations

The next question we need to ask, then, is why the reactive attitudes are so deeply rooted in human psychology. Strawson never once refers to evolutionary theory in his essay, yet every one of the reactive attitudes he describes has an adaptive rationale. Evolutionary theorists since Darwin have argued that certain emotions and attitudes have been naturally selected to motivate behavior that improves social coordination. The emotions are especially valuable for their ability to motivate behavior that goes against our immediate self-interest but that serves long-term reproductive or material gains. Frank (1988), for example, argues that certain problems cannot be solved by rational action. To solve these problems—what he calls commitment problems—we have to commit ourselves to behave in ways that prove contrary to our short-term self-interest. Frank then develops what he calls the 'commitment model', which is "shorthand for the notion that seemingly irrational behavior is sometimes explained by emotional predispositions that help solve commitment problems" (Frank 1988, p. 11). Frank argues that emotions like anger, outrage, guilt, and love serve as 'commitment devices': psychological mechanisms designed to counteract the allure of immediate self-interest in favor of long-term gains. Frank's 'commitment devices' and Strawson's reactive attitudes are virtually identical.

The reactive attitudes also match up almost perfectly with the emotions that the evolutionary biologist Robert Trivers (1971) hypothesized would be necessary for creating and enforcing reciprocally altruistic behavior in hominids. According to Trivers, attitudes like resentment motivate what he calls 'moralistic aggression': retributive acts that are often out of all proportion to the offense committed (but that serve notice not to repeat the offense in the future). The 'self-reactive attitude' of guilt serves two purposes: (1) to prevent individuals from engaging in cheating behavior that can harm the individual in the long run, and (2) to motivate cheaters, after the deed is done, to compensate for their behavior so that future reciprocity can be preserved.
Another role for the reactive attitudes has recently been suggested by the economist Ernst Fehr. Fehr and colleagues (2002, 2004) argue that reciprocal altruism, kin selection, and Frank's theory are part of the story but insufficient to explain human cooperation in large groups. They argue that a phenomenon called 'altruistic punishment' plays an important role in norm enforcement, working to penalize free-riders enough for cooperation strategies to be adaptive. To support this view, Fehr conducted a number of public good experiments in which subjects can cooperate or defect to various degrees. After a certain number of rounds, cooperators are given the opportunity to punish defectors at a cost to themselves. Cooperators willingly suffer costs in order to punish defectors in these experiments, even when they know that they will never interact with the defectors again. (This is what makes the punishment 'altruistic'—they will never benefit from inflicting it.) Why do they do this? Fehr argues that negative emotions, emotions like resentment, serve as the proximate mechanisms for this behavior.

Even a reactive attitude as complex as forgiveness has an adaptive function. Individuals who hold long-lasting grudges lose out on too many cooperative opportunities in Prisoner's Dilemma–like situations (Axelrod 1984). Indeed, in Robert Axelrod's famous Prisoner's Dilemma tournaments, one of the primary characteristics of successful strategies was 'forgivingness.' Tit-for-Tat, the winner of the tournament, punishes (resents?) one defection. But if the defection is followed by cooperation, then all is forgiven—at least until the next defection.
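The strategy is simple enough to state exactly. The sketch below is my illustration rather than code from Axelrod's tournament; it uses the standard Prisoner's Dilemma payoffs, and the particular opponent sequence is an arbitrary example of mine.

```python
# Tit-for-Tat in an iterated Prisoner's Dilemma: cooperate first, then mirror
# the opponent's previous move. Payoffs are the standard tournament values.
C, D = "cooperate", "defect"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate on the first move; thereafter copy the opponent's last move.
    # A single defection is punished exactly once, then forgiven if
    # cooperation resumes -- the 'forgivingness' described above.
    return C if not opponent_history else opponent_history[-1]

def play(opponent_moves):
    history, score = [], 0
    for their_move in opponent_moves:
        my_move = tit_for_tat(history)
        score += PAYOFF[(my_move, their_move)][0]
        history.append(their_move)
    return score

# One defection followed by renewed cooperation: Tit-for-Tat retaliates once,
# then returns to mutual cooperation rather than holding a grudge.
print(play([C, D, C, C, C]))
```

The design point is the one in the text: a strategy that never forgives loses the gains from restored cooperation, while one that never retaliates invites exploitation; Tit-for-Tat's single-round "resentment" balances the two.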
There is good reason to believe, then, that the attitudes and emotions that Strawson linked with the concepts of free will and moral responsibility were selected for their contributions to biological fitness in hominids.6

If Free Will Did Not Exist, We Would Have to Invent It

One might wonder what this discussion of retributive attitudes has to do with the belief in free will and moral responsibility. Frank never mentions such a belief, nor does Trivers. Furthermore, as Frans de Waal (2000) and other primatologists have argued, chimpanzees may very well feel a kind of moral outrage, yet we do not attribute a belief in free will and responsibility to them. All of this is true. But we differ from chimpanzees and other intelligent social creatures in a crucial and relevant respect: we are able to question the rationality of our attitudes and the accompanying behavior.

First, what do I mean by rationality? For the purposes of this argument, I will divide 'rationality' into two parts: 'a-rationality' and 'b-rationality.' To believe something is a-rational is to believe that it 'makes sense,' namely that it is consistent with other beliefs that one holds to be true. So, for example, when I am mad at my wife for not sitting in our green rocking chair during the bottom of the ninth inning in game 4 of the Red Sox/Cardinals World Series (because that is where she sat in the bottom of the ninth inning in game 7 against the Yankees), I am being a-irrational. My anger contradicted a belief I hold to be true: namely that the actions of two people in Hillsborough, North Carolina, have no causal effect on a baseball game being played in St. Louis (especially with the five-second satellite delay). The second sense of rationality I employ, 'b-rationality', is instrumental. To believe that something is 'b-rational' is to believe that it serves our short-term material self-interests. So, if my goal is to get to class on time, and driving will be faster than walking, then taking the car is b-rational. (And walking is b-irrational.)

Let us call creatures who have the capacity to question the a-rationality and b-rationality of their emotions cognitively sophisticated (CS) creatures. My central contention in this essay is this: for CS individuals the commitment devices, or reactive attitudes, cannot work as effectively to motivate adaptive behavior unless they are accompanied by a belief in free will and RMR. Why is that? Well, let us first consider creatures who lack this capacity, who cannot question the rationality of their attitudes. De Waal (2000), for example, tells a story of chimpanzee indignation or outrage and the retributive behavior it motivates. One of the female chimpanzees de Waal studied, Puist, had earlier supported a male, Luit, against his rival Nikkie—she had helped chase Nikkie away. Later, when Nikkie displayed at Puist, an act of aggression, Puist turned to Luit in search of support. But Luit did nothing to protect her. This so infuriated Puist that, rather than attack Nikkie, the aggressor, she turned on Luit, barking, chasing him across the enclosure, and even hitting him. De Waal interprets Puist's behavior as follows: she experienced a type of indignation or outrage at Luit for breaking a chimpanzee social norm, one that simply states "one good turn deserves another." This indignation in turn motivated her to punish Luit, even at risk to her own personal safety. Luit will think twice about breaking that norm again in the future, as will any other chimpanzee who witnessed the incident. But if chimpanzees are able to experience indignation or outrage, they are certainly incapable of assessing whether or not the indignation is rational. Puist was not capable of asking herself whether the attitude and the accompanying behavior 'made sense', nor, in any sophisticated way, whether they served
believe something is a-rational is to believe that it is ‘makes sense,’ namely that it is consistent with other beliefs that one holds to be true. So, for example, when I am mad at my wife for not sitting in our green rocking chair during the bottom of the ninth inning in game 4 of the Red Sox/Cardinals world series (because that is where she sat in the bottom of the ninth inning in game 7 against the Yankees), I am being a-irrational. My anger contradicted a belief I hold to be true: namely that the actions of two people in Hillsborough, North Carolina, have no causal effect on a baseball game being played in St. Louis (especially with the five-second satellite delay). The second sense of rationality I employ, ‘b-rationality’, is instrumental. To believe that something is ‘b-rational’ is to believe that it serves our short-term material self-interests. So, if my goal is to get to class on time, and driving will be faster than walking, then taking the car is b-rational. (And walking is b-irrational.) Let us call creatures who have the capacity to question the a-rationality and b-rationality of their emotions cognitively sophisticated (CS) creatures. My central contention in this essay is this: for CS individuals the commitment devices, or reactive attitudes, cannot work as effectively to motivate adaptive behavior unless they are accompanied by a belief in free will and RMR. Why is that? Well, let us first consider creatures who lack this capacity, who cannot question the rationality of their attitudes. De Waal (2000), for example, tells a story of chimpanzee indignation or outrage and the retributive behavior it motivates. One of the female chimpanzees De Waal studied, Puist, had earlier supported a male, Luit, against his rival Nikkie— she had helped chase Nikkie away. Later when Nikkie displayed at Puist, an act of aggression, Puist turned to Luit in search of support. But Luit did nothing to protect her. This so infuriated Puist that rather than attack Nikkie, the aggressor, she turned on Luit, barking, and chasing him across the enclosure, even hitting him. De Waal interprets Puist’s behavior as follows: She experienced a type of indignation or outrage at Luit for breaking a chimpanzee social norm, one that simply states ‘‘one good turn deserves another.’’ This indignation in turn motivated her to punish Luit, even at risk to her own personal safety. Luit will think twice about breaking that norm again in the future, as will any other chimpanzee who witnessed the incident. But if chimpanzees are able to experience indignation or outrage, they are certainly incapable of assessing whether or not the indignation is rational. Puist was not capable of asking herself whether the attitude and the accompanying behavior ‘made sense’, nor in any sophisticated way, whether they served
The attitude did all the motivating work, with no backtalk or interference from the reasoning faculty.7

Now consider a similar scenario involving a CS individual. Luke, a hunter-gatherer, has been wronged by Gus. Gus did not defend Luke when he should have. (Or maybe he tried to free-ride during the hunt, or attempted to steal Luke's mate.) Luke gets a visceral feeling of indignation or outrage, ''mostly instinctive'' as Darwin says, and the feeling motivates him to act with moralistic aggression the next time he sees Gus. If Luke were a non-CS creature, that would be the end of the story. The indignation would motivate the retributive behavior, and Luke would act. And sometimes no doubt that will still happen. But Luke, unlike Puist the chimpanzee, can question the rationality of this attitude. He can recognize that Gus is larger than he is, and that he may well get the short end of any attempt at revenge. He can also realize that what's done is done, that there is little point in dwelling on something that already happened. Yes, he has learned to stay away from Gus, and not to trust him. But there is no point in risking his own safety, perhaps his own life, to punish him. These rational considerations may well undermine the link between attitude and behavior. And if Gus knows this, he will be more likely to commit the offense, for the risk is lower. That is the essence of a commitment problem. Now, I am not suggesting that early hominids were reflective enough to have all of these thoughts, or to think of them in such a collected manner. But it seems plausible that considerations like these have undermined, to some degree, the link between the commitment devices—the reactive attitudes—and the accompanying behavior.

If this is correct, then a new design problem arises for CS creatures. The reactive attitudes or the commitment devices are still adaptive for their role in solving commitment and coordination problems. But greater cognitive sophistication has diminished the capacity of these emotions to motivate the accompanying adaptive behavior. CS creatures, then, need something else to offset the dampening effect of increased cognitive sophistication. This something else, I suggest, is an independent belief in the robust moral responsibility of other agents.

How would this work? Well, what if, in addition to the visceral anger, Luke also had a belief or feeling that Gus deserved blame, that he ought to be punished for what he did? What if Luke felt that, his own interests aside, something would be deeply wrong with the world if Gus got away with the offense? Now the outrage is no longer a-irrational. It makes perfect sense. Luke is outraged because Gus deserves blame and punishment for his offense. And although it still might be b-irrational, contrary to his self-interest, to act retributively, this consideration might be offset by the belief that something would be deeply wrong if Gus's offense went unpunished. This belief would then fortify the link between the outrage and the retributive act.
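The structure of Luke's predicament can be put in crude decision-theoretic terms. The numbers below are invented for illustration; nothing in the chapter fixes them. The point is only structural: adding a desert-based term to the payoff of punishing can flip its sign, and so restore the deterrent that bare self-interest dissolves.

```python
# Toy payoff sketch (my numbers, not the chapter's) of Luke's commitment problem.

COST_OF_REVENGE = 5   # Gus is larger; retaliating risks injury
MATERIAL_GAIN = 0     # what's done is done; revenge recovers nothing
DESERT_WEIGHT = 8     # felt badness of Gus's offense going unpunished

def retaliates(believes_in_desert):
    """Retaliate only if the perceived payoff of punishing is positive."""
    payoff = MATERIAL_GAIN - COST_OF_REVENGE
    if believes_in_desert:
        payoff += DESERT_WEIGHT   # punishing 'sets the world right'
    return payoff > 0

print(retaliates(False))  # False: the b-rational Luke lets it go, so defection is cheap
print(retaliates(True))   # True: the belief re-fortifies the attitude-behavior link
```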
We can see this belief in RMR in action by referring to an example Frank (1988) uses to illustrate his commitment model. Jones has a $200 leather briefcase that Smith wants to steal. If Smith steals it, Jones will have to go to court to recover it and to force Smith to go to jail for 60 days. But the day in court will cost Jones $300 in lost earnings, not to mention the tediousness of a court trial. Since this is more than the briefcase is worth, it is clearly not in Jones's material self-interest to press charges. The problem, of course, is that if Smith knows that Jones is going to be rational in this way, then he can steal the briefcase with impunity. There's no risk. But Frank writes: ''Suppose that Jones is not a pure rationalist; that if Smith steals his briefcase, he will become outraged, and think nothing of losing a day's earnings, or even a week's, in order to see justice done. If Smith knows Jones will be driven by emotion, not reason, he will let the briefcase be. If people expect us to respond irrationally to the theft of our property, we will seldom need to, because it will not be in their interests to steal it. Being predisposed to respond irrationally serves much better here than being guided only by material self-interest'' (1988, p. x).

From this passage one might think that the outrage alone is sufficient to predispose Jones to act 'irrationally.' Frank makes no explicit reference here to any beliefs about moral responsibility or free will. But the belief is there—implicit in the remark about Jones's need to see ''justice done.'' If Jones did not believe that Smith deserved blame and punishment for stealing the briefcase, then he would feel no need to see justice done. And the outrage, no matter how fierce, might very well not be enough. Suppose that Jones viewed Smith as he viewed a dog. He might be angry and frustrated about losing the briefcase, but he would feel no burning need to sacrifice his own interests to pursue and punish the dog who stole his briefcase. Without the belief, then, that other human beings deserve blame or punishment for their actions, Jones's outrage would be insufficient to solve this commitment problem. With the belief, the link between attitude and adaptive behavior is fortified.

So where does the experience of free will come into the picture? Taking this account even further, we could say that the phenomenology of free choice is a very complex adaptation that allows us to attribute robust moral responsibility to ourselves and others. This part of the story owes what Shaun Nichols (2004) has termed, in a slightly different context, ''a perverse debt to Kant.''8 To paraphrase Kant, the moral life of rational creatures cannot get off the ground unless we experience ourselves as indeterministically free agents.
If we did not believe that we could have acted otherwise, or intended otherwise, or been other than we are, then how could we see ourselves as RMR for an action? And if we did not believe that others were indeterministically free agents, then how could we see offenders as deserving of blame or praise for their behavior? If Kant is right that libertarian freedom is a necessary condition for RMR, then possessing the illusion of free will would allow us to believe that attributing RMR to ourselves and others was a-rational. And if, as I have argued, the attribution of RMR is adaptive for CS creatures, then the phenomenology of free will, the illusion that we can act freely and be ultimately responsible for our actions, is adaptive as well.

To recap, my argument makes the following claims:

1. Retaining the link between the reactive attitudes and the accompanying behavior is important for biological fitness in hominids.
2. Greater cognitive sophistication, which evolved for other reasons, undermines this link. The ability to assess the a-rationality and b-rationality of our attitudes makes it less probable that the attitude motivates the accompanying adaptive behavior.
3. A belief in the robust moral responsibility of oneself and others offsets this undermining effect, and therefore is adaptive.
4. CS creatures nevertheless do not consider it a-rational to attribute robust moral responsibility to agents who are not radically free.
5. Unless we believe ourselves and others to be radically free agents, we cannot consider it rational to attribute RMR.
6. Experiencing ourselves as radically free agents allows us to believe that we and others are radically free.
7. The experience of radical free agency, the phenomenology of free choice, and the illusion of the conscious will are adaptive for CS creatures.

The conclusion, I have to stress, is not that the sole cause of the illusion of free will is the need to attribute RMR to ourselves and others. (Nor am I in any way claiming that cultural forces and social structures have had no effect on these concepts as we understand them today. Surely they have.) The phenomenology of free will may have multiple causes and origins, and the advantages outlined in my hypothesis may be only a small part of the explanation for why we possess it.9 Don Ross's theory, chapter 10 in this volume, that selves arise as stabilizing devices for social dynamics is a highly plausible—and, I believe, complementary—explanation for why we experience ourselves as having an autonomous, if predictable, self.
sider Ross’s claim that ‘‘the massive interdependency among people incentivizes everyone to regulate the stability of those around them through dispensation of social rewards and punishments’’ [my italics] (Ross, chapter 10 in this volume). My hypothesis is that the belief in RMR enables this dispensation of social rewards and punishment to occur more effectively. And again, perhaps Spinoza and Darwin were right as well: the origin of the illusion of free will is that we are aware of the desires and volitions that cause our actions, but unaware of what causes the desires and volitions themselves. My theory, then, point outs the adaptive advantages of having this phenomenology once it is in place. And the belief in free will would arise from this phenomenology and the need to justify our belief in RMR. Another point I wish to stress is this. The ‘‘explaining away’’ of RMR should not be taken to support the claim we lack other important (nondesert-related) varieties of responsibility. As compatibilists from Hume to Frankfurt to Dennett to Pettit have observed, there are numerous valuable types of responsibility that are entirely compatible with the most uncompromising naturalism imaginable. The error theorist does not disagree, for example, with Pettit’s claim that we are ‘‘conversable’’—that is, that we have the ability to track and conform to relevant reasons given to us in conversation—or that this capacity is fully compatible with determinism (see Pettit, chapter 5 in this volume). Rather, the error theorist and the compatibilist disagree on the question of whether this capacity can truly justify robust desert-entailing moral responsibility. Elsewhere (Sommers 2005) I have argued that compatiblist forms of freedom and responsibility are insufficient for this task. The argument presented in this chapter supports this claim at best indirectly—by showing why people (including philosophers) are so resistant to give up the concept of robust moral responsibility. Testing the Hypothesis To fend off some inevitable ‘‘just-so story’’ objections, I want to highlight three claims I’ve made that are testable. First, human beings (CS creatures) who deem an attitude, in a certain situation, to be irrational are less likely to perform the behavior that is often motivated by this attitude. This is a key claim but one that seems amenable to fairly straightforward empirical investigation. Experiments in social psychology might help to determine the degree to which our behavior is governed by what we deem to be rational or irrational, and especially what occurs when we think an attitude or emotion is irrational.
Second, we believe we act with libertarian (indeterministic) free will. Third, we do not believe it rational to attribute RMR to an agent unless we believe they have libertarian free will. These are essentially questions about what our intuitions are regarding free will and moral responsibility. Are they compatibilistic or incompatibilistic? Do we really think an agent needs to have acted with indeterministic free will for our assignments of blame to be rational and justifiable? Until recently, pronouncements on this question were made from the philosopher's armchair. Now, however, philosophers and psychologists are conducting studies that will help determine whether our intuitions are compatibilist or incompatibilist regarding our own actions, and about the actions of others.10 Much more should and will be done on this front.

In addition, more can be said on the more general conditions required for attributing blameworthiness. Malle and Nelson (2003), for example, contend that in order to hold someone blameworthy, we must believe they acted intentionally. But sometimes we first assign blame and afterward apply to the behavior the concept of intentionality, whether it fits or not, in order to justify the blame already assigned (Knobe 2006). This model is quite congenial to the error theory I've described, for it shows how the assignment of moral responsibility can come first and the explanations for why agents are responsible come afterward, even if the explanations are false. My contention is that in addition to coming first in the psychological process, assignments of blame can come first in the evolutionary process as well. The phenomenology of free will and RMR emerged as a means of justifying blame-linked attitudes like resentment.

Conclusion

Of course, this account of the origins of our beliefs regarding free will and moral responsibility does not, on its own, demonstrate that these beliefs are false or illusory. But a plausible 'explaining away' of free will and RMR, when combined with the powerful negative arguments against these concepts, makes a good case for an error theory. I have argued that Darwinian theory can be of great help in providing this explanation. It is important to note, however, that my evolutionary account differs crucially from other theories (e.g., Dennett 2003) that attempt to ''naturalize'' freedom and responsibility. Although I agree that the experience of free will and RMR can be explained naturalistically, I am nevertheless convinced that it is an illusion—that true moral responsibility, and the
retributive attitudes that presuppose it, are unjustifiable. I am also convinced that the unraveling of the illusion can have significant ethical, legal, and practical implications, both for the individual and for a society that embraces an error theory of free will.

The prevailing view among philosophers—and nonphilosophers, for that matter—is that these implications would be disastrous. Everything would be permitted. Life would lose most or all of its meaning; we would be puppets on a string, living a mockery of a real human existence. As I have argued elsewhere, however, this pessimism is little more than a contemporary prejudice—it is seldom argued for, and almost always based on a distorted view of what free will skepticism really entails (Sommers 2005; Sommers, in press). A denier of free will and RMR can live a happy, fulfilling, love-filled, moral life without in any way contradicting his or her principles. Indeed, one may say, as Darwin himself said of his theory of natural selection, that there is grandeur in this view of life. Or as Einstein has written: ''This realization [that there is no such thing as free will] . . . prevents us from taking ourselves and other people too seriously; it is conducive to a view of life which, in particular, gives humor its due'' (Einstein 1982, pp. 8–9).

Acknowledgments

I am grateful to Alex Rosenberg, Owen Flanagan, Eddy Nahmias, Shaun Nichols, Joshua Knobe, and Manuel Vargas for comments on earlier drafts of this chapter. I have also profited immensely from discussions at the Mind and World conference at the University of Alabama at Birmingham.

Notes

1. Thanks to Galen Strawson for drawing my attention to this tactful passage.

2. For recent theoretical arguments in support of this conclusion see G. Strawson (1986) and Pereboom (2001).

3. By robust moral responsibility I mean the type of responsibility that would make us truly deserving of blame or praise for our actions. I add ''robust'' in order to distinguish this desert-entailing type of moral responsibility from other uncontroversial varieties—causal responsibility, the capacity to act according to reason, the capacity to form second-order volitions, and other types of ''compatibilist responsibility.''

4. The explanandum here, in other words, is not the existence of free will. Rather, the explanandum is our belief that we have free will.
One explanation might be that we have free will and that leads us to believe that we do. But there are other competing, and perhaps more plausible, explanations for our belief, and these explanations need to be considered as well.

5. This is not to claim that the explanation offered by Spinoza and Darwin is false. The illusion of free will may have multiple causes (see below). What I want to suggest is that the need to see ourselves and others as RMR could also have been a factor in the evolution of this (illusory) phenomenology.

6. Flanagan (2000) is, to my knowledge, the first to explicitly consider (albeit in a different context) whether the reactive attitudes might be biological adaptations. See also Wright (1994) for an interesting discussion of the adaptive value of the moral emotions.

7. You do not have to agree with De Waal's interpretation of Puist's behavior to appreciate my point here. If you do not believe that chimpanzees can feel moral outrage, then simply imagine an early hominid reacting to a situation similar to the one De Waal describes.

8. Perverse, because neither he nor I endorse Kant's (libertarian) conclusions.

9. Furthermore, the illusory phenomenology surely did not come in one fell swoop after the reactive attitudes were in place. Thanks to Andy Clark for pointing out how the theory could be interpreted in this (highly implausible) manner. If my account is correct, or approximately correct, there must have been a co-evolution of sorts between the phenomenology of free will and the increasing complexity of the reactive attitudes.

10. See, for example, Nahmias et al. (2005), Nichols (2004), Nichols and Knobe (forthcoming). The data are, in my view, insufficient for any side to claim victory at present.

References

Axelrod, R. 1984. The Evolution of Cooperation. New York: Basic Books.

Darwin, C. 1987. Charles Darwin's Notebooks, 1836–1844. Ithaca: Cornell University Press.

Dennett, D. 2003. Freedom Evolves. New York: Viking.

De Waal, F. 2000. Chimpanzee Politics. Baltimore: Johns Hopkins University Press.

Double, R. 1996. Metaphilosophy and Free Will. Oxford: Oxford University Press.

Einstein, A. 1982. Ideas and Opinions. New York: Crown.

Fehr, E. 2004. The neural basis of altruistic punishment. Science 305: 1254–58.
Fehr, E., and S. Gächter. 2002. Altruistic punishment in humans. Nature 415: 137–40.

Flanagan, O. 2000. Destructive emotions. Consciousness and Emotion 1: 259–81.

Frank, R. 1988. Passions within Reason. New York: Norton.

Knobe, J. 2006. The concept of intentional action: Case studies in the uses of folk psychology. Philosophical Studies 130: 203–31.

Malle, B. F., and S. E. Nelson. 2003. Judging mens rea: The tension between folk concepts and legal concepts of intentionality. Behavioral Sciences and the Law 21: 563–80.

Nahmias, E., T. Nadelhoffer, S. Morris, and J. Turner. 2005. Surveying free will: Folk intuitions about free will and moral responsibility. Philosophical Psychology 18: 343–54.

Nichols, S. 2004. The folk psychology of free will: Fits and starts. Mind and Language 19: 473–502.

Nichols, S., and J. Knobe. 2007. Moral responsibility and determinism: Empirical investigations of folk intuitions. Nous, forthcoming.

Nietzsche, F. 1992. Beyond Good and Evil. In W. Kaufmann, ed., The Basic Writings of Nietzsche, pp. 179–427. New York: Modern Library.

Pereboom, D. 2001. Living Without Free Will. Cambridge: Cambridge University Press.

Sommers, T. 2005. Beyond Freedom and Resentment: An Error Theory of Free Will and Moral Responsibility. Dissertation, Duke University.

Sommers, T. The objective attitude. The Philosophical Quarterly, in press.

Spinoza, B. [1675] 1982. The Ethics and Selected Letters, S. Feldman, ed. Indianapolis: Hackett Publishing.

Strawson, G. 1986. Freedom and Belief. Oxford: Oxford University Press.

Strawson, P. F. 1982. Freedom and resentment. In G. Watson, ed., Free Will, pp. 59–80. Oxford: Oxford University Press.

Trivers, R. 1971. The evolution of reciprocal altruism. Quarterly Review of Biology 46: 35–56.

Wright, R. 1994. The Moral Animal. New York: Vintage.
5
Neuroscience and Agent-Control
Philip Pettit
Neuroscientific and related studies have sharpened the debate about how we, creatures composed out of neurally organized matter, can exercise the sort of agent-control over our actions that makes it possible for us to be held responsible for them. In this chapter I look briefly at some results that sharpen this problem, argue that the problem arises because we think of agent-control on what I call the act-of-will picture, and then try to show that by moving to an alternative conception of agent-control we can resolve the difficulty; this conception of agent-control I have already defended elsewhere (Pettit and Smith 1996; Pettit 2001a, b). The scientific results suggest not that agent-control is an illusion but that having agent-control is best conceived in the manner I favor, or in some closely related fashion.

The chapter is in five sections. In the first three, I describe and present the challenge, identify an assumption that it presupposes and propose an alternative, and then argue that under this alternative assumption the challenge can be satisfactorily met. In the fourth section, I look more generally at the way the alternative picture relates to the standard one it displaces. And in the fifth, I offer a comment on the sort of lesson that neuroscience teaches us in this and other cases.

The Challenge

One of the standard assumptions in traditional philosophical lore is that actions are subject to agent-control if and only if they are the products of acts of will. Acts of will, on this approach, satisfy three crucial conditions. First, they reveal the mind of the agent in a pure or direct manner, not in the manner of the actions they prompt; they cannot be actions themselves, since they would then require prior acts of will and a regress would open up. Second, they are essentially capable of being brought to awareness by the agent, even in the case of habitual behavior; they are so intimately bound to the agent that no ignorance of their reality is inevitable and no error about their identity possible.
And third, they are acts that the agent can initiate on demand, as choice is called for; the agent can will or not will, just as fancy goes.

The act-of-will picture is highly plausible from our point of view as agents. We know what it is just to will something; we know that we are usually quite capable of telling what our will is, and what it is not; and we feel, perhaps as deeply as we feel anything, that we can display our capacity in these regards by consciously making a choice on demand: say, raising the right hand rather than the left, or the left rather than the right. 'I' am in charge, so the intuition has it, and being in charge, being in agent-control, means that for the relevant domain—however restricted this may be—things happen according to my will. My will is where I am, and what I do, consciously or otherwise, I do by willing it. The picture is beautifully caught in Ian McEwan's novel Atonement, when the twelve-year-old Briony reflects as follows:

She raised one hand and flexed its fingers and wondered, as she had sometimes before, how this thing, this machine for gripping, this fleshy spider on the end of her arm, came to be hers, entirely at her command. Or did it have some little life of its own? She bent her finger and straightened it. The mystery was in the instant before it moved, the dividing moment between not moving and moving, when her intention took effect. It was like a wave breaking. If she could only find herself at the crest, she thought, she might find the secret of herself, that part of her that was really in charge. (McEwan 2001, p. 33)
On this picture it ought to be possible, at least in principle, to track the moment at which a person’s will forms and then to identify in the activity of the brain the process that the formation of will initiates en route to the willed action. And on this picture, equally, it ought to be possible to get someone to will or not to will a certain response, and to identify in the brain the way things switch as the will comes on stream or does not come on stream. Such possibilities have been explored in recent neuroscience and related work, but the upshot is in tension with our sense of ourselves; the experiments on record throw serious doubt on the role of will, as that has been sketched so far. They suggest either that there is no such thing as agent-control or that agent-control should not be pictured on the act-ofwill model. I hold by the second possibility, as I will be arguing here. The range of experiments I have in mind is well reviewed by Daniel Wegner (2002, ch. 2), and I will just mention two varieties. Any particular
experiment can be questioned, of course, but it is hard to imagine that the observations made in these and related studies are not broadly correct; they are now part of a rapidly developing, mutually supportive body of results.

The first sort of experiment challenges the claim that when we act we have more or less infallible access to the acts of will that lie, purportedly, at the origin of our choices. Here are three fairly familiar examples:

- Subjects, with a glove on the relevant hand, are asked to draw a line on a sheet of paper in a transparent drawing box. As they draw, the line they see is actually drawn by another—this is due to an arrangement of mirrors—and begins to move away from the line they try to draw. They compensate with their own drawing and experience the gloved hand seen, and the line drawn, as under their personal control (Nielson 1963).
- Subjects are placed within an apparatus that magnetically stimulates the motor center of the brain, now on one side, now on the other. They are asked in a sequence of choices, each triggered by the sound of the magnet being activated, to raise either the right or the left index finger. Although the magnetic stimulation allows a fairly reliable prediction as to which finger will rise, subjects experience the movement as one that they control by will (Brasil-Neto et al. 1992).
- The subjects here are amputees with a phantom arm. They see an arm move where their own phantom arm, were it real, would be; this appearance is achieved with a mirror box in which the arm seen is their own nonphantom arm or someone else's arm. They experience the movement of the arm they see as a movement they themselves are controlling in the phantom arm (Ramachandran and Rogers-Ramachandran 1996).
The second variety of experiment challenges the claim that acts of will lie at the origin of what we do voluntarily and is particularly associated with the work of Benjamin Libet and his colleagues (Libet et al. 1983). Prior to that work it had been established that between 800 and 500 milliseconds before voluntary action some characteristic brain activity occurs. This came to be described as the formation of a readiness potential for action, and it was identified by many with the act of will. John Eccles, a Nobel laureate, spoke of the latency as ''the mental act that we call willing'' (Wegner 2002, p. 52). But in a series of experiments, Libet presented evidence that people experience themselves as willing an action—an action they are asked to perform—only about 200 milliseconds before the action itself: well after the appearance of the readiness potential. He drew a conclusion in radical disagreement with Eccles: ''the initiation of the voluntary act appears to be an unconscious cerebral process. Clearly, free will or free choice of whether to act now could not be the initiating agent, contrary to one widely held view'' (Wegner 2002, p. 54).
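The arithmetic behind 'well after' is worth making explicit. Using only the figures just reported, with times measured in milliseconds before the movement, the brain activity precedes the reported act of willing by roughly a third of a second or more:

```python
# Timeline in milliseconds before the movement, using the figures reported above.
RP_ONSET_EARLIEST, RP_ONSET_LATEST = 800, 500   # readiness potential appears
FELT_WILLING = 200                              # subjects report the urge to act

print(f"The brain leads the felt act of will by "
      f"{RP_ONSET_LATEST - FELT_WILLING} to {RP_ONSET_EARLIEST - FELT_WILLING} ms")
# -> The brain leads the felt act of will by 300 to 600 ms
```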
These results establish that if willing is always needed to put an agent in control of action, and the act of will is subject to conscious experience and activation, then the act of will is an illusion. We agents might have a generally reliable, if fallible, sense of whether something we do is a voluntary exercise. But it cannot be a general axiom, as under the act-of-will story, that an action is a voluntary exercise, and is consciously a voluntary exercise, in virtue of the presence of an act of will at its origin. The first sort of experiment suggests that we identify an action as voluntary, rightly or wrongly, on the basis of something other than access to such an unmissable, unmistakeable element. And the second sort shows that even if there is such an accessible element present, it comes too late in the process to do the required causal work.

These observations undermine the story that acts of will are necessary in the initiation of voluntary action and that they make the voluntary character of actions manifest and unmistakeable for their agents. One likely reaction to the observations will be to think that agent-control is therefore an illusion too, since the traditional model identifies agent-control with control by acts of will. But I think that this is not an inescapable conclusion and that it is possible to remain upbeat about our capacity for agent-control.

A Response Strategy

The challenge presented is typical of a way in which science often challenges common sense: a way in which the recondite scientific image, as Wilfrid Sellars (1997) puts it, challenges the image that is manifest to us in our ordinary life. A simple example of the challenge was presented by the British physicist A. S. Eddington, when he argued that science shows that common sense is wrong even in thinking that some things are solid, others not; physics shows that everything is composed of atoms and that atoms are composed mainly of empty space.

The obvious reaction to Eddington suggests a way in which we might also react to the neuroscientific challenge. He assumed that solidity according to common sense requires that solid things be solid the whole way down, with every part being solid in the same sense as the whole. And so he argued that science shows that solidity is an illusion when it demonstrates that this connotation of solidity is unsatisfied.
The obvious reaction among those of us who continue to think that 'solid' serves a useful function in commonsense discussion is to deny that this is truly a connotation of the idea or, if we think it is, to revise the notion of solidity so that it ceases to be a connotation. We may say that solidity is a property of bodies in virtue of which two or more such bodies compete for the occupation of space, and we may then argue that subatomic physics does not eliminate that property but rather serves to explain it.

When neuroscientific findings are invoked to challenge our idea of agent-control, that is because of an assumption, parallel to that which Eddington made, that according to common sense, agent-control requires a history of action-production in which the formation of a consciously accessible will plays an initiating, causal role. We can escape Eddington's challenge by thinking of solidity without any connotation that solid things are solid the whole way down. Can we avoid the neuroscientific challenge by thinking of agent-control without a connotation to the effect that will is present in the etiology of an agent-controlled action? Can we develop a plausible conception of agent-control or freedom under which the important feature is something other than the presence of an act of will in the history of the action? I think we can.

The act-of-will story supposes that control belongs in the first place to actions and only in the second to agents. Actions display control so far as they are produced by acts of will, and agents display control in virtue of their actions displaying control. But an alternative to the act-of-will picture is made salient by the observation that perhaps things are the other way around. Perhaps agents display control in virtue of having a relevant capacity for control, and perhaps actions display control so far as they are performed in the domain or scope of such an agential capacity. The act-of-will story suggests that in every action I produce as an agent, I am in the loop, given a place there by my generative act of will. This alternative, in a phrase from Daniel Dennett (2003), would suggest that far from being in the loop, I am the loop. My character and capacity as an agent are what give me control over my actions, not the particular histories of their generation. I now proceed to explore such an agent-centered view of agent-control.

If a horse runs a mile in one minute, we could say that it has the capacity to do this—the performance was characteristic of the animal—or we could say that it doesn't: the fact that it ran a one-minute mile was a fluke, not a manifestation of a standing ability. Or if the horse fails to run a mile in one minute, we could say that this shows its lack of capacity, or we could hold that it has that capacity but its failing to run the one-minute mile was a contingent, not a characteristic, failure.
The horse with the relevant capacity, then, could or could not run a mile in a minute, for it could or could not manifest or display its capacity. Given that the capacity is real, the horse runs on each occasion in the presence of the capacity—its performance is within the domain of the capacity—but on the one occasion it manifests the capacity, on the other it fails to do so.

We can carry over this logic to the area of agent-control if we seek to identify agent-control with an agential capacity. The control of an agent in a given domain will be constituted by a standing capacity, and the actions of the agent that are performed in the presence of that capacity will be agent-controlled. Those actions will divide, then, into two kinds. First, those that give evidence of the capacity that is present, being characteristic of the agent-controlled agent; and second, those that fail to give evidence of the capacity: those that are performed in the presence of the capacity but that fail to show what the agent is capable of.

But what capacity, if any, has a claim to be relevant to agent-control in this manner? In answer to this question, I suggest that we should look to ordinary human practice, and to the sort of capacity that we expect to find in those, including ourselves, whom we treat as agents who enjoy control. The locus where we most explicitly display our assumption that people enjoy control is in ordinary, noncoercive conversation (Pettit 2001a; Pettit and Smith 1996). Such conversation presupposes, and continually replenishes, a currency of reasons for thought and action that are recognized as relevant on all sides, even if they are sometimes weighted differently; they are considerations that are recognized as legitimating or requiring various responses. Participants in a conversation treat one another as agent-controlled to the extent that they assume of each other—and indeed of themselves—that they have a capacity to track and conform to the demands of such reasons. The demands involved will range from requirements of respect for evidence, to requirements of consistency and valid argument, to requirements of fidelity to avowals and promises.

Imagine I authorize you as an interlocutor, treating you as someone with whom I can do conversational business. I will take you to have a capacity to think with recognizable reason—reason recognizable to you—about what is the case and to act with recognizable reason on the basis of how you think. By my lights you will be the sort of creature who is generally susceptible to reasons, and to the perception of what reasons demand: in particular, to the sort of perception that I think I can elicit in you by testimony or argument or persuasion. If I did not impute this capacity to you, I would not find you worth talking to: I might as well be talking to the wall.
I call the capacity I ascribe to you in taking this view 'conversability'. Some agents may lack conversability entirely, being out of their minds or beyond the reach of reason. And all agents are liable to lack it in some domains or in some circumstances; they will be beyond the reach of reasons on certain restricted subjects, or under certain temporary pressures. But we assume that most people have the capacity associated with conversability most of the time; that is why we think they are worth talking to. And we have no option but to assume that we ourselves generally have such a capacity, even if temporary distraction or temptation or fatigue can get in the way of its exercise. We could not sincerely put ourselves forward as candidates for conversational address by others if we did not make this assumption about ourselves. And, more important still, we could not carry on the sort of internal dialogue in which we debate with ourselves as to what we ought to think or do; we could not represent ourselves as worthy of such self-address.

I suggest that we ought to identify the capacity for agent-control with the capacity to go with recognizable reasons, in awareness of what reasons are; we might think of it, more specifically, as 'orthonomy', the ability to be ruled by the 'orthos' or the right (Watson 2005; Pettit and Smith 1996). The suggestion fits with a now well-established tradition of discussion about free will, inaugurated by Peter Strawson's 1962 essay ''Freedom and Resentment'' (Watson 1982, 2005; Wallace 1996; Wolf 1990; Dennett 1984). If we go along with the suggestion, then we will say that to be agent-controlled in any range of action is to be conversable or orthonomous in those actions, acting in the presence of the relevant capacity: remaining, as one acts, within the reach of reason. And we can acknowledge that people who are agent-controlled in this sense may or may not always manifest conversability; they may act against recognizable reason, while retaining the orthonomous capacity to act in accord with it. We acknowledge this possibility routinely, holding people to higher expectations than their flawed performance satisfies. We recall them in such cases to the standards to which they represent themselves as faithful when they purport to be worthy interlocutors; we remind them, if you like, that they are capable of better: that, at least when exceptional cases are put aside (Frankfurt 1969), there is a sense in which they could have done otherwise than they actually did.

The capacity associated with conversability is one that we naturally conceive in a realistic, down-to-earth way. We freely admit that it comes in degrees, with each of us having bad days, bad topics, and facing hurdles of varying difficulty as we seek to prove conversable.
We freely admit that it is often exercised in the manner of an editor rather than an author, a conductor rather than a composer, with autonomously emerging behaviors—for example, the words that naturally come to us as we speak—being shaped to appropriate effect (Pettit 2001a, ch. 7). And we freely assume, perhaps most important, that people can have and exercise the partial, editorial control that the capacity gives them even when they are acting more or less habitually and unthinkingly. This is possible so far as the discipline of conversable reason is in 'virtual' control of action (Pettit 2001a, ch. 1). Although unthinking habit shapes what agents do, the discipline of reason will be in virtual control so far as it is ready to be activated and take charge in the event of habit failing to keep the agent in line. In that event, at least in general, the ''red lights'' will go on and ensure that the agent remains faithful to the perceived demands of reason (see Thalos, chapter 8 in this volume).

Beyond the Challenge

The points I have made so far are these:

- According to neuroscientific findings, agents can make mistakes about whether an action is willed, and even when agents correctly see an action as willed, they do so later than when the brain launches the action. These findings challenge the commonsense intuition that we agents have a distinctive sort of control over our actions.
- But they pose this challenge only on the assumption that if an action is agent-controlled, then its etiology includes the formation of an act of will, where this sort of act is necessarily accessible to the agent and can be formed on demand.
- That assumption is not compulsory. In principle, we might think that an action is agent-controlled, not in virtue of the elements in its particular etiology but in virtue of the nature or constitution of the agent in whom it is produced.
- There is a good case for thinking that in practice we identify actions as agent-controlled in virtue of identifying their agents as having a suitable constitution. We do this when we identify agents as conversable or orthonomous, operating within the reach of conversationally recognizable reason.
We are now in a position to connect up these points. Suppose that we go along with the picture of agent-control as a feature of agents in the first place—their conversability or orthonomy—and a feature of actions in the
second. We can then hold that actions are agent-controlled, not because of their particular neural etiologies but because the agents are neurally so constituted that they are conversable in relation to the actions. That means in turn that we can reject the assumption made by those who pose the neuroscientific challenge. So we can escape that challenge; we can accept the findings given and still maintain a belief in agent-control.

The first sort of finding was that agents are far from infallible in identifying actions as willed or unwilled. This is not surprising under the picture presented. Assume that we learn to become relatively conversable or orthonomous over a certain range of action in the course of normal development. With any action performed within that range, we must be able to see where it is going—where our goal lies—since otherwise we would not be able to submit it to reason-based assessment. But the required ability to see where the action is going—to see, if you like, what we want or will to do—does not have to involve immediate access to an element in the etiology of the action, as under the rival picture. On the contrary, it may be an ability that is only exercised fallibly, as the action more or less gradually evolves. It may depend on a variety of cues from muscular sensation and sensory feedback, for example, not on immediate access to a presumptive act of will. And if it depends on such cues, it will be subject to precisely the sorts of error that are elicited in experiments of the first variety.

The second sort of neuroscientific finding was that agents become aware of actions as willed at some time later than when the brain begins to launch the actions. Again, this should come as no surprise under the picture presented here. What makes an action one in which I have agent-control is the fact that in this area I can be treated successfully, whether by myself or others, as conversable and orthonomous. That I can be treated that way is a function of how I am neurally constituted: a function, for example, of the possibilities of response and reform associated with the perception that what I am doing is contrary to the reasons I recognize; a function of the fact that I am not a hopeless case with whom there is no point in remonstrating, whether the challenger be myself or another. It should not be a scandal, therefore, that before ever I become aware of myself as initiating an action of that type, my brain will have done much of the work required to orchestrate the behavior. It would be surprising were it otherwise.

The experimental results discussed should no longer scandalize us, then, if we adopt the agent-centered view of control. Under this view of things, I will enjoy control of what I do so far as I can be effectively brought to book
on my actions—brought to the book of conversable reason. I will claim such control so far as I invite others to hold me to that book, as I do in presenting myself as a conversable interlocutor. To display and claim this sort of agent-control is perfectly consistent with not being the originary, self-transparent cause of what I do, as in the act-of-will picture. It only requires that however complex and opaque the process in which actions are produced, my makeup is such that the process remains sensitive to the factors that must be in place for conversability.

How should we think of decision-making under this shift of perspective? I imagine agents responding to various aspects of their surroundings, aware of prompts that would take them in one direction or another, though not perhaps aware of being aware of them. I imagine the prompts congealing to elicit a given course of action, still without the agent being aware of this activating cascade of cues. And then, as the process unfolds, I imagine an awareness of the prompt as a prompt evolving in agents, giving them a sense of where they are behaviorally headed, and taking them into the space where they can exercise the capacity that makes them conversable. Agents may or may not be able to inhibit or reinforce the unfolding action at this point of awareness. Even if they have no such ability, however, their action will be agent-controlled, so far as they can review it in the light of the reasons available overall. They can endorse or disendorse their action, and in the case of disendorsing it, they can set themselves more or less successfully against doing the same sort of thing in the future. This is what ensures that the action is performed within the domain where conversability or orthonomy rules.

A General Perspective

The central idea in the notion of agent-control is that I am present in the performance of an action. It is not just something that happens in or to my body, unlike a spasm or reflex or compulsion. The agent-controlled action, we all think, is me; it carries my first-person signature.

The standard picture of the will in action casts the self or the 'I' as the unit of production. And in that role it represents the self as condensing—magically, as I think—in the formation of will. This act of the will, as we saw, is meant to be an event in the natural etiology of action but an event with very special properties. Although it is not itself something I do—it cannot be, on pain of infinite regress—it is a manifestation or epiphany of my self. Where it materializes, I am. And where it materializes, I can consciously be, standing in full awareness at the well-spring of behavior.
The story I prefer keeps the self in the picture, and represents voluntary action as carrying my signature. But according to this story the self is not the unit of production in action so much as the unit of accountability. Although the agent-controlled action is produced by a neural complex to which I have limited access, it remains something for which I am able to assume responsibility in the forum of exchange with myself and others. As a conversational participant, actual or potential, I purport to be someone who can generally speak for what he believes and desires; can elicit expectations about how he will behave; and can give others good reason, on pain of accepting censure or penalty, to hold him to those expectations. What it means to say that the action is agent-controlled is simply that it falls within the sphere where I prove myself to be the center of responsibility—the conversable interlocutor—that I purport to be. My agent-control has little to do with the production of action, and much to do with how effectively and authoritatively I can speak for it. This control will put requirements on the way in which the action is generated, of course—it will rule out hypnotic inducement for example—but it will not be defined by the nature of that process. While the picture I have defended fits in many ways with our manifest sense of ourselves, as well as fitting with what we know from neuroscience and related work, it has one surprising implication (see Greene and Cohen 2004). A capacity like the capacity to be conversable or orthonomous is inevitably the product, not just of native makeup, but also of cultural development. We are not born responsible, any more than we are born free. We have to learn what responsibility requires and how to display it (McGeer 1996); as criminologists say, we have to be ‘responsibilized’ (Garland 2001). Because conversability is subject to this sort of formation, it is almost bound to come in degrees. One person may be conversable over a larger or smaller domain than others, as we saw, and one person may be conversable in a higher or lower degree within the domain where the capacity is exercised. This means that whether or not people exercise agent-control in an action is never a black or white question. At this point our picture of the self in action breaks quite cleanly with the act-of-will picture. On that picture, there is bound to be a fact of the matter as to whether or not the agent’s will was present in a piece of behavior; there is a bright line that will divide voluntary and properly responsible behavior from behavior that is not like this. Under our picture, by contrast, it may often be relatively indeterminate whether or not the agent was properly or fully conversable in the domain of the action; the bright line will fade into a blurry
area. But this implication of indeterminacy is surely quite plausible, however troubling it may be for moral and legal accounting. We are creatures composed out of neurally organized matter, as I said at the beginning. It would be a miracle if we turned out to display firmer contours of control than such plastic material is capable of sustaining.

The Lesson from Neuroscience

I have used the neuroscientific challenge—and, in effect, the associated findings—as a ground for arguing in favor of the view that links agent-control with conversability or orthonomy. But is there any more pointed lesson that neuroscience teaches us in this area? I believe there is. Neuroscience has taught us in many domains that what presents itself in common sense as a controlling factor in mental performance is often a side effect of a deeper, more inaccessible center of control. And it teaches precisely the same sort of lesson here.

Consider the findings of a recent study in which subjects are asked to gamble with four decks of cards, where two of the decks are stacked against them (Bechara et al. 1997). While the subjects do eventually come to register those two decks as stacked, and so to resist them, it turns out that the resistance response—or at least the disposition toward that response—is present long before any perception of the decks as stacked, or even as suspect, emerges. This is evident from fMRI imaging of what is going on in their brains when they are dealing with those decks, as distinct from the fair ones, and from associated skin conductance responses. Unconscious resistance materializes in these subjects on the basis of a subpersonal registering of the fact that things are going wrong with the suspect decks. And the eventual registering of the decks as suspect—'there's something I don't like about them'—takes the form of registering them simply as decks that occasion that resistance.

The lesson of this discovery is that what appears in common sense as the factor that controls one's response—the suspect look of the stacked decks—is not really a controlling factor but rather something that becomes available in virtue of the subpersonal controls that are operative. I do not resist the stacked decks because they look suspect, as seems to be the case from the first-person point of view; I see them as suspect because, at a deeper level, I resist them. The perception of the decks as suspect may reinforce my resistance, of course, but it is not the originating cause of resistance; that cause operates below the level of my awareness and before any awareness ever comes on stream.
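A toy model can illustrate the two-threshold structure of this finding. The model below is my own construction, not Bechara and colleagues' analysis; the outcome sequence, learning rate, and thresholds are all invented. The idea is just that a running subpersonal estimate of a deck's value can cross an 'avoid this deck' threshold several trials before it crosses a 'this deck is stacked' threshold.

```python
# Toy model (my construction, not Bechara et al.'s analysis): a leaky running
# average of outcomes drives avoidance before it supports an explicit report.

BAD_DECK = [100, -250, 100, -250, -250, 100, -250, -250, 100, -250]  # mean -110

estimate = 0.0
avoid_at = report_at = None
for trial, outcome in enumerate(BAD_DECK, start=1):
    estimate += 0.3 * (outcome - estimate)    # update toward the latest outcome
    if avoid_at is None and estimate < -30:   # weak signal: bias away from the deck
        avoid_at = trial
    if report_at is None and estimate < -90:  # strong signal: 'something's wrong here'
        report_at = trial

print(avoid_at, report_at)  # -> 2 5: the aversion precedes the explicit suspicion
```

On the reading defended here, the second threshold corresponds to the deck's coming to look suspect: it registers a resistance that the first, subpersonal signal has been driving all along.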
What is true in this case is true in many others. I have argued elsewhere, for example, that it is not because we see something as red that we can sort it out visually and track it across different backgrounds, under different lights, from different perspectives. Rather, it is because we are equipped at a subpersonal level to do that visual sorting and tracking that we see the object as red; the red look is not a controlling factor in my mental performance but something that becomes available in virtue of the operation of lower level controls (Pettit 2003). The brain often works behind the back of the mind, and we minds—we brain-bearers—are very likely to miss the fact (Gazzaniga et al. 1998).

The picture of agent-control that we have been led to adopt in order to avoid the neuroscientific challenge bears out this general lesson. Assume that if an unfolding action is perceived by me as subject to my control, then it will present itself as subject to a control I perceive myself to have: as subject, so we can put it without equivocation, to my perceived or phenomenal control. On the act-of-will picture, it is in virtue of the fact that an unfolding action is subject to my perceived or phenomenal control that it counts as agent-controlled. But, if the argument here is correct, it is not in virtue of being subject to that phenomenal control that the action is agent-controlled. Rather, it is in virtue of being agent-controlled—in virtue of being performed in the presence of a neurally supported capacity for conversability or orthonomy—that it has such a perceptual or phenomenal profile. I see an object as red so far as I am able to sort and track it appropriately—I leave out some complications—and not the other way around. In the same manner, I perceive an action as conforming to my perceived will, so far as I perform it in the presence of a capacity for agential agent-control, and not the other way around. That an action is under my agent-control is the work of a vast orchestration of neural factors: those that shape the behavior but make me at the same time into a conversable subject in the domain of the behavior. When, in the absence of deceptive setups, I perceive an action as agent-controlled—when it presents itself as subject to my perceived control—this is a case of registering a sort of control that is already assured. It is not a case of assuming control for the first time.

Acknowledgment

I benefited from comments received from Walter Sinnott-Armstrong and Victoria McGeer. I am also very grateful for discussions at a University of
Alabama workshop about related matters, for responses received at a conference on the unity of science in Venice, and for a vibrant interdisciplinary discussion at a meeting of Old Dominion fellows at Princeton University.

References

Bechara, A., H. Damasio, D. Tranel, and A. R. Damasio. 1997. Deciding advantageously before knowing the advantageous strategy. Science 275: 1293–95.

Brasil-Neto, J. P., A. Pascual-Leone, J. Valls-Solé, L. G. Cohen, and M. Hallett. 1992. Focal transcranial magnetic stimulation and response bias in a forced choice task. Journal of Neurology, Neurosurgery, and Psychiatry 55: 964–66.

Dennett, D. 1984. Elbow Room: The Varieties of Free Will Worth Wanting. Cambridge: MIT Press.

Dennett, D. 2003. Freedom Evolves. New York: Viking.

Frankfurt, H. 1969. Alternate possibilities and moral responsibility. Journal of Philosophy 66: 828–39.

Garland, D. 2001. The Culture of Control: Crime and Social Order in Contemporary Society. Chicago: University of Chicago Press.

Gazzaniga, M., R. Ivry, and G. Mangun. 1998. Cognitive Neuroscience. New York: Norton.

Greene, J., and J. Cohen. 2004. For the law, neuroscience changes everything and nothing. Philosophical Transactions, Royal Society 359: 1775–85.

Libet, B., C. A. Gleason, E. W. Wright, and D. K. Pearl. 1983. Time of conscious intention to act in relation to onset of cerebral activity. Brain 106: 623–42.

McEwan, I. 2001. Atonement. New York: Anchor.

McGeer, V. 1996. Is ''self-knowledge'' an empirical problem? Renegotiating the space of philosophical explanation. Journal of Philosophy 93: 483–515.

Nielson, T. I. 1963. Volition: A new experimental approach. Journal of Psychology 4: 215–30.

Pettit, P. 2001a. A Theory of Freedom: From the Psychology to the Politics of Agency. Cambridge and New York: Polity and Oxford University Press.

Pettit, P. 2001b. The capacity to have done otherwise. In P. Cane and J. Gardner, eds., Relating to Responsibility: Essays in Honour of Tony Honoré on his 80th Birthday, pp. 21–35. Oxford: Hart.

Pettit, P. 2003. Looks as powers. Philosophical Issues 13: 221–52.
Pettit, P., and M. Smith. 1996. Freedom in belief and desire. Journal of Philosophy 93: 429–49.

Ramachandran, V. S., and S. Rogers-Ramachandran. 1996. Synaesthesia in phantom limbs induced with mirrors. Proceedings of the Royal Society 263: 377–86.

Sellars, W. 1997. Empiricism and the Philosophy of Mind. Cambridge: Harvard University Press.

Strawson, P. 1982. Freedom and resentment. In G. Watson, ed., Free Will, pp. 59–80. Oxford: Oxford University Press.

Wallace, R. J. 1996. Responsibility and the Moral Sentiments. Cambridge: Harvard University Press.

Watson, G. 2005. Agency and Answerability: Selected Essays. Oxford: Oxford University Press.

Wegner, D. M. 2002. The Illusion of Conscious Will. Cambridge: MIT Press.

Wolf, S. 1990. Freedom within Reason. Oxford: Oxford University Press.
6
My Body Has a Mind of Its Own
Daniel C. Dennett
In life, what was it I really wanted? My own conscious and seemingly indivisible self was turning out far from what I had imagined and I need not be so ashamed of my self-pity! I was an ambassador ordered abroad by some fragile coalition, a bearer of conflicting orders, from the uneasy masters of a divided empire. . . . As I write these words, even so as to be able to write them, I am pretending to a unity that, deep inside myself, I now know does not exist.
—William Hamilton 1996, p. 134

Language was given to men so that they could conceal their thoughts.
—Charles-Maurice de Talleyrand
‘‘My body has a mind of its own!’’ Everybody knows what this exclamation means. It notes with some surprise the fact that our bodies can manage many of their key projects without any conscious attention on our part. Our bodies can stride along quite irregular ground without falling, avoiding obstacles and grabbing strategic handholds whenever available, pick berries and get them into the mouth or the bucket with little if any attention paid, and—notoriously—initiate preparations for sexual activity on a moment’s notice without any elaborate decision-making or evaluation discernible, to say nothing of the tight ship run by our temperature maintenance system and our immune system. My body can keep life and limb together, as we say, and even arrange for its own self-replication without any attention from me. So what does it need me for? This is another way, perhaps a better way, of asking why consciousness (our human kind of consciousness, at least) should evolve at all. The sort of consciousness (if it is a sort of consciousness) that is manifest in alert and timely discriminations for apt guidance of bodily trajectory, posture, and resource allocation is exhibited, uncontroversially, by invertebrates all the way down to single-celled organisms, and even by plants.1 Since the
self-protective dispositions of the lobster can be duplicated, so far as we can tell, in rather simple robots whose inner states inspire no conviction that they must ''generate phenomenology'' or anything like it, the supposition that nevertheless there must be ''something it is like to be'' a lobster begins to look suspicious, a romantic overshooting of anthropomorphism, however generous-spirited. If a lobster can get through life without a self (or very much of a self), why should it have a 'selfy' self (Dennett 1991)? Maybe it doesn't. Maybe it (and insects and worms and . . .) is ''just a robot.'' However, leaving matters here leans on an unsatisfactory assumption to the effect that being robotic and being conscious are mutually exclusive. The ultimate issue, really, is to explain first the possibility and then the (evolutionary) actuality of consciousness in some robots (if not lobsters). After all, we are conscious, and if materialism is true, we are made of nothing but robots—cells—and is such a complex not itself a robot?2 If we want to have a way of expressing our supposition that lobsters, say, are ''mere'' automata, we need to anchor that ''mere'' to some upper bound on complexity. Let's say that mere automata are entities (organisms or artifacts) that are incapable of higher order self-monitoring; while they engage in behaviors that benefit from feedback guidance, these behaviors and the states that give rise to them aren't further represented by the entities in such a way as to permit reflective or retrospective self-evaluation. A mere automaton is stuck with whatever behaviors it can perform—though these might be conditionable, capable of being shaped or extinguished by feedback—but it cannot, for instance, note that ''it seemed like a good idea at the time.'' This would enable us to investigate different styles of nervous system: from simple and relatively low-priced arrangements lacking central self-monitoring capabilities to more expensive and sophisticated arrangements capable of significant varieties of higher level self-monitoring, without having to confront in advance the vexing question of whether or not any of these sophistications amounted to, or were sure signs of, consciousness. As we learn more about the adroitness and versatility and internal organization of spiders or octopuses (or bats or dolphins or bonobos), we might uncover some further impressive thresholds that eventually persuade us to grant consciousness (of the impressive kind—whatever that means) to these creatures, but for the time being, we are the only species that everyone confidently characterizes as conscious. Our confidence is grounded in the simple fact that we are the only species that can compare notes. Consider a remark by Daniel Wegner: ''We can't possibly know (let alone keep track of) the tremendous number of mechanical influences on our
behavior because we inhabit an extraordinarily complicated machine'' (2002, p. 27; italics added). Wegner presumably wouldn't have written this if he hadn't been comfortable assuming that we all know what he is talking about, but just who is this we that ''inhabits'' the brain? There is the Cartesian answer: each of us has an immortal, immaterial soul, the res cogitans or thinking thing, the seat of our individual consciousness. But once we set that answer aside—as just about everybody these days is eager to do—just what thing or organ or system could Wegner be referring to? My answer, compressed into a slogan by Giulio Giorello, who used it as a headline for an interview with me in Corriere della Sera in 1997, is this: Sì, abbiamo un'anima. Ma è fatta di tanti piccoli robot. Yes, we have a soul. But it's made of lots of tiny robots. Somehow, the trillions of robotic (and unconscious) cells that compose our bodies organize themselves into interacting systems that sustain the activities traditionally allocated to the soul, the ego or self. But since we have already granted that simple robots are unconscious (if toasters and thermostats and telephones are unconscious), why couldn't teams of such robots do their fancier projects without having to compose me? If the immune system has a mind of its own, and the hand–eye coordination circuit that picks the berries has a mind of its own, why bother making a super-mind to supervise all this? George Ainslie notes the difficulty and has a suggestion: ''Philosophers and psychologists are used to speaking about an organ of unification called the 'self' that can variously 'be' autonomous, divided, individuated, fragile, well-bounded, and so on, but this organ doesn't have to exist as such'' (2001, p. 43). How could this be? How could an organ that doesn't have to exist as such exist at all? And, again, why would it exist? Another crafty thinker who has noticed the problem is the novelist Michael Frayn, whose narrator in Headlong muses: ''Odd, though, all these dealings of mine with myself. First I've agreed a principle with myself, now I'm making out a case to myself, and debating my own feelings and intentions with myself. Who is this self, this phantom internal partner, with whom I'm entering into all these arrangements? (I ask myself.)'' (Frayn 1999, p. 143). Although Frayn might not have intended to answer his own question, I think he has in fact provided the key to the answer in his parenthesis: ''I ask myself.'' It is only when asking and answering are among the projects undertaken by the teams of robots that they have to compose a virtual organ of sorts, an organ that ''doesn't have to exist as such''—but does have to exist.
This, in any case, has been suggestively argued by the ethologist and roboticist David McFarland (1989) in a provocative, if obscure, essay. According to McFarland, ''Communication is the only behavior that requires an organism to self-monitor its own control system.'' I've been musing over this essay and its implications for years, and I still haven't reached a stable conviction about it, but in the context of the various intricate constructions around selves and their volition offered in the present volume, including the juxtaposition of Wegner's and Ainslie's perspectives, I think it is worth another outing. Organisms are correctly seen as multicellular communities sharing, for the most part, a common fate (they're in the same boat). So evolution can be expected to favor cooperative arrangements in general. Your eyes may, on occasion, deceive you—but not on purpose! (Sterelny 2003). Running is sure to be a coordinated activity of the limbs, not a battle for supremacy. Nevertheless, there are bound to be occasions when subsystems work at cross purposes, even in the best-ordered communities of cells, and these will, in general, be resolved in the slow, old-fashioned way: by the extinction of those lineages in which these conflicts arise most frequently, are most costly to fitness, and aren't ineliminable byproducts that come with the numerous gains from going multicellular in the first place (see the papers in part III of Hammerstein 2003). The result is control systems that get along quite well without any internal self-monitoring. The ant colony has no boss, and no virtual boss either, and gets along swimmingly with distributed control that so far as we can tell does not engage or need to engage in high-level self-monitoring. According to McFarland, organisms can very effectively control themselves by a collection of competing but ''myopic'' task-controllers that can interrupt each other when their conditions (hunger or need, sensed opportunity, built-in priority ranking, . . .) outweigh the conditions of the currently active task-controller. Goals are represented only tacitly, in the feedback loops that guide each task-controller, but without any global or higher level representation. (One might think of such a task-controller as ''uncommented'' code—it works, but there is nothing anywhere in it that can be read off about what it does or why or how it does it.) Evolution will tend to optimize the interrupt dynamics of these modules, and nobody's the wiser. That is, there doesn't have to be anybody home to be wiser!
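McFarland's picture of interrupt-driven control lends itself to a toy rendering. The sketch below is my own illustrative reconstruction, not McFarland's model: the controller names and the urgency arithmetic are invented. Its one structural point is that coherent behavior-switching can emerge even though the goals are represented nowhere except tacitly, in each controller's feedback loop.

```python
import random

class TaskController:
    """A 'myopic' task-controller. Its goal lives only tacitly in its
    feedback loop; nothing in the system says what it is for."""
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority  # built-in priority ranking
        self.need = 0.0           # drifts upward until discharged

    def urgency(self):
        return self.need * self.priority

    def act(self):
        self.need = 0.0           # acting discharges the need

def tick(controllers):
    """One control cycle: needs drift, the most urgent controller
    interrupts whatever was running, and no record is kept of why."""
    for c in controllers:
        c.need += random.random()
    winner = max(controllers, key=lambda c: c.urgency())
    winner.act()
    return winner.name

controllers = [TaskController("forage", 1.0),
               TaskController("flee", 3.0),
               TaskController("groom", 0.5)]
print([tick(controllers) for _ in range(8)])
```

Notice that nothing in this system can report, or even access, which controller is running or why it won: the interrupt dynamics do all the work, and there is nobody home.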
But communication, McFarland thinks, is a behavioral innovation that changes that. Communication requires a central clearinghouse of sorts in order to buffer the organism from revealing too much about its current state to competitive organisms. In order to understand the evolution of communication, as Dawkins and Krebs (1978) showed in a classic article, we need to see it as manipulation rather than as purely cooperative behavior. The organism that has no poker face, that communicates its state directly to all hearers, is a sitting duck, and will soon be extinct. What must evolve instead is a communication-control buffer that creates (1) opportunities for guided deception, and coincidentally (2) opportunities for self-deception (Trivers 1985), by creating, for the first time in the evolution of nervous systems, explicit (and more ''globally'' accessible) representations of its current state, representations that are detachable from the tasks they represent, so that deceptive behaviors can be formulated and controlled. This in turn opens up structure that can be utilized in taking the step, described in detail by Gary Drescher (1991), from simple situation-action machines to choice machines, the step I describe as the evolutionary transition from Skinnerian to Popperian creatures (Dennett 1996). I wish I could spell all this out with the rigor and detail that it deserves. Fortunately, some of the other papers in this volume make a careful collective start. The best I can do for my part, at this point anyway, is simply gesture in the directions that strike me as theoretically promising and encourage others to continue mining this fine vein. What follows are some informal reflections that might contribute. Consider the chess-playing computer programs that I so often discuss. They are not conscious, even when they are playing world-class chess. There is no role for a user-illusion within them because, like McFarland's well-evolved noncommunicators, they are more or less optimized to budget their time appropriately for their various subtasks. Would anything change if the program were enabled/required to communicate with others—either its opponent or other kibitzers? Some programs now available have a feature that permits you to see just which move they are currently considering (e.g., see http://chess.captain.at/), but this is not communication; this is involuntary self-exposure, a shameless display that provides a huge source of valuable information to anyone who wants to try to exploit it. In contrast, a program that could consider its communications as informal moves, social ploys, in the enlarged game of chess—the game that some philosophers (e.g., Haugeland 1998) insist is real chess, unlike the socially truncated game that programs now play—would have to be able to ''look at'' its internal states the same way a poker player needs to look at his cards in order to decide what action to take. ''What am I now trying to do, and what would be the effect of communicating information about that project to this other agent?'' (it asks itself).
In other words, McFarland imports Talleyrand's cynical dictum about language and adapts it to reveal a deep biological truth: explicit self-monitoring was invented to conceal our true intentions from each other while permitting us to reveal strategic versions of those intentions to others. If this is roughly right, then we can also see how this capacity has two roles to play: export and import. It is not just that we can use communication to give strategic information about what we are up to, but we can be put up to things by communication. ''A voluntary action is something a person can do when asked'' (Wegner 2002, p. 32). The capacity to respond to such requests, whether initiated by others or by oneself, is a capacity that must require quite a revolutionary reorganization of cerebral resources.3 This prospect is often overlooked by researchers eager to stress the parallels and similarities between human subjects and animal subjects when they train a monkey (typically) to 'indicate' one thing or another by moving its eyes to one or another target on screen, or to press one of several buttons to get a reward. A human subject can be briefed in a few minutes about such a task, and thereupon, with only a few practice trials, execute the instructions flawlessly for the duration of the experiment. The fact that preparing the animal to perform the behavior usually involves hundreds or even thousands of training trials does little to dampen the conviction that the resulting behavior counts as a ''report'' by the animal of its subjective state.4 But precisely what is missing in these experiments is any ground for believing that the animal knows it is communicating when it does what it does. And if it is not in the position of an agent that has decided to tell the truth about what is going on in it now, there is really no reason to treat its behavior as an intentional informing. Its carefully sculpted actions may betray its internal state (the way the chess program willy-nilly divulges its internal state), but this is not the fruit of self-monitoring. If something along these lines is right, then we have some reason to conclude that, contrary to tradition and even ''common sense,'' there is scant reason to suppose that it is like anything to be a bat. The bat's body has a mind of its own, and doesn't need a ''me'' to inhabit it at all. One might thus say that only we who compare notes (strategically) inhabit the complicated machines known as nervous systems. But on the whole this metaphor may by now be causing more confusion than enlightenment. Perhaps we should say instead: we strategic signalers may be the only selves that nervous systems produce.
Notes

1. This robotic sensitivity-cum-action-guidance is what I used to call awareness2, to distinguish it from the reportable kind of consciousness that might be only a human gift, awareness1 (Dennett 1969), a pair of awkward terms that never caught on, though the idea of the distinction is still useful, in my opinion.

2. I find it strategically useful to insist that individual cells, whether prokaryotic, archaeal, or eukaryotic, are basically robots that can duplicate themselves, since this is the take-home message of the last half-century of cell biology. No more romantic vision of living cells as somehow transcending the bounds of nanobothood has any purchase in the details of biology, so far as I can see. Even if it turns out that quantum effects arising in the microtubules that crisscross the interiors of these cells play a role in sustaining life, this will simply show that the robotic motor proteins that scurry back and forth on those microtubules are robots with access to randomizers.

3. Thomas Metzinger (Being No One, 2003) has some excellent suggestions about what he calls the phenomenal self-model and its revolutionary capacities.

4. See Sweet Dreams (Dennett 2005, pp. 169–70) for earlier remarks on this. I myself have given more credence in the past to this proposal than I now think appropriate. See Dennett (1991, pp. 441–48).

References

Ainslie, G. 2001. Breakdown of Will. Cambridge: Cambridge University Press.

Dawkins, R., and J. R. Krebs. 1978. Animal signals: Information or manipulation? In J. R. Krebs and N. B. Davies, eds., Behavioural Ecology: An Evolutionary Approach, pp. 282–309. Oxford: Blackwell.

Dennett, D. C. 1969. Content and Consciousness. London: Routledge and Kegan Paul.

Dennett, D. C. 1991. Consciousness Explained. New York: Little Brown.

Dennett, D. C. 1996. Kinds of Minds. New York: Basic Books.

Dennett, D. C. 2005. Sweet Dreams: Philosophical Obstacles to a Science of Consciousness. Cambridge: MIT Press.

Drescher, G. 1991. Made-up Minds: A Constructivist Approach to Artificial Intelligence. Cambridge: MIT Press.

Frayn, M. 1999. Headlong. London: Faber and Faber.

Hamilton, W. 1996. Narrow Roads of Gene Land, vol. 1. Oxford: Freeman.

Hammerstein, P., ed. 2003. Genetic and Cultural Evolution of Cooperation. Cambridge: MIT Press.
Haugeland, J. 1998. Having Thought: Essays in the Metaphysics of Mind. Cambridge: Harvard University Press.

McFarland, D. 1989. Goals, no-goals and own goals. In A. Montefiore and D. Noble, eds., Goals, No-Goals and Own Goals: A Debate on Goal-Directed and Intentional Behavior, pp. 39–57. London: Unwin Hyman.

Metzinger, T. 2003. Being No One. Cambridge: MIT Press.

Sterelny, K. 2003. Thought in a Hostile World: The Evolution of Human Cognition. Oxford: Blackwell.

Trivers, R. 1985. Social Evolution. Menlo Park, CA: Benjamin/Cummings.

Wegner, D. 2002. The Illusion of Conscious Will. Cambridge: MIT Press.
7
Soft Selves and Ecological Control
Andy Clark
These bodily members are, as it were, no more than garments; which, because they have been attached to us for a long time, we think are us, or parts of us [and] the cause of this is the long period of adherence: we are accustomed to remove clothes and to throw them down, which we are entirely unaccustomed to do with our bodily members.1
—Avicenna, De anima, V.7

As interface, the skin is obsolete. . . . The clothing of the body with membranes embedded with alternate sensory and input/output devices creates the possibility of more intimate and enhanced interactivity. Subjectively, the body experiences itself as a more extruded system, rather than an enclosed structure. The self becomes situated beyond the skin.2
—Stelarc, ''Parasite Visions''
Introduction

Advanced biological brains are by nature open-ended opportunistic controllers. Such controllers compute, pretty much on a moment-to-moment basis, what problem-solving resources are readily available and recruit them into temporary problem-solving wholes. Neural plasticity, exaggerated in our own species, makes it possible for such resources to become factored deep into both our cognitive and physical problem-solving routines. One way to think about this is to depict the biological brain as a master of what I here dub ''ecological control.'' Ecological control is the kind of top-level control that does not micromanage every detail, but rather encourages substantial devolvement of power and responsibility. This kind of control allows much of our skill at walking to reside in the linkages and elastic properties of muscles and tendons. And it allows (I claim) much of our prowess at thought and reason to depend on the robust and reliable operation, often (but not always) in dense brain-involving loops, of a variety of
nonbiological problem-solving resources spread throughout our social and technological surround. Are the complex distributed systems that result in some sense ''out of control,'' beyond the reach of useful (you might even, though problematically, say, ''personal'') governance? I will argue that they are not, although understanding them requires us to rethink some key ideas about control and the nature of the self. To (try to) make this case, I will first examine some strategies for efficient, external-opportunity-exploiting control in simple systems. I will then argue that many of the same lessons apply to the case of higher level human problem-solving.

Ecological Control

Consider Shakey. Shakey, circa 1970, was the mobile robotic darling of the Stanford Research Institute: one of the very first computer-controlled mobile robots and a locus of hard (nonecological) control. Armed with a camera, wheels, and a laser range finder, and controlled by a big old mainframe whizzing along at a quarter million calculations per second, Shakey could obey typed commands such as ''Push the cube to the pyramid.'' To do so, the system would sense, process, plan, and then act out its plan. For Shakey, the body and the environment were first and foremost problems to be solved. The environment was the problem arena. The sensors detected the layout in that arena and the reasoning system planned a solution. To a large degree, Shakey's body was just another part of the problem space: a part that needed to be sent detailed, micromanaging control signals so as to put the reasoned-out solution into practice. A contemporary analogue, though vastly faster and more sophisticated, is Honda's Asimo. Asimo is a walking robot in the mainstream control paradigm, one that uses precise joint-angle control to mimic human walking. The solution looks pretty good but is massively energy and computation expensive. Contrast these solutions with the kinds of ecological control deployed by passive dynamic walkers3 (PDWs). Passive dynamic walkers are simple-looking two-legged devices that employ no actuation except gravity, have no control system, and as a result can enforce no joint-angle control at any time. Yet surprisingly, PDWs are capable (when set on a gentle incline) of very stable, human-like walking. Now imagine that you (playing evolution) want to exploit that kind of surprising capacity but in the context of a self-powered locomoting agent. The solution, which looks to be Nature's own, is to walk using a kind of controlled falling-over. Powered (level terrain) walking can thus be brought about by the brain and CNS
systematically pushing, damping and tweaking a system in which passive dynamic effects continue to play a very major role. In such cases a low-energy source, a simple control system, and the body (and gravity!) ''collaborate'' to solve the walking problem. This strategy has recently been implemented in a variety of simple robots (see Collins et al. 2005) and described as ''a new design and control paradigm'' for walking robots (2005, p. 1083). As a second example, Tedrake et al. (2005) have given us ''Robotoddler.'' Robotoddler uses actor-critic reinforcement learning to acquire a control policy that exploits the passive dynamics of the body. The robot learns to change speeds and to go forward and backward, and can adapt on the go to different terrains, including bricks, wooden tiles, carpet, and a variable speed treadmill. By using passive dynamic strategies, the robot's power consumption is dramatically reduced (to about one-tenth that of a standard robot like Honda's Asimo). These examples serve to introduce the notion of soft or ''ecological'' control. This is the kind of control that occurs when a system's goals are not achieved by micromanaging every detail of the desired action or response, but by using a strategy that devolves a great deal of problem-solving responsibility, making the most of some robust, reliable source of relevant order in the body, elsewhere in the brain, and/or in the local environment (notice that ecological control thus names a type of effect, not a single mechanism). The effect of an ecological control strategy is often (though not always) the soft-assembly of a solution. Ecological controllers that can learn on the go promote soft-assembled solutions, that is to say solutions that comprise a temporarily stable assembly of factors and forces recruited from whatever happens to be available. Soft-assembly itself is a notion developed in movement science according to which:

Movements can be seen as ''softly-assembled'' patterns created and dissolved as tasks and environments change, with some patterns easy and preferred, and others more difficult and unstable. . . . Moreover, as these synergies are assembled, they also take advantage of the non-neural aspects of movement: effects of gravity, elastic properties of muscles and inertial effects. (Thelen and Bates 2003, p. 381)
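The difference between micromanaging a trajectory and nudging a passive dynamic can be seen in a toy simulation. The sketch below is not a model of any of the robots just mentioned: it is a lightly damped pendulum standing in for a swinging limb, with invented constants, and an ''ecological'' controller that never commands a joint trajectory at all. It merely injects a tiny, well-timed push when the swing decays, letting gravity and inertia do the rest.

```python
import math

# Lightly damped pendulum standing in for a swinging limb; all
# constants are invented for illustration.
g, length, damping, dt = 9.8, 1.0, 0.05, 0.01
theta, omega = 0.3, 0.0          # modest initial swing
target, nudge = 0.6, 0.1         # desired amplitude, tiny push torque
effort = 0.0

for _ in range(5000):            # 50 simulated seconds
    # Energy-based amplitude estimate (small-angle approximation).
    amplitude = math.hypot(theta, omega * math.sqrt(length / g))
    # Ecological control: push with the motion only when the swing is
    # too small; otherwise let the passive dynamics run free.
    u = nudge * math.copysign(1.0, omega) if amplitude < target else 0.0
    alpha = -(g / length) * math.sin(theta) - damping * omega + u
    omega += alpha * dt
    theta += omega * dt
    effort += abs(u) * dt

print(f"sustained swing ~{amplitude:.2f} rad with total effort {effort:.2f}")
```

A joint-angle micromanager in the Asimo style would instead compute and enforce a full reference trajectory at every tick; here the controller's entire contribution is an occasional signed constant.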
Humans belong to the interesting class of what I'd like to call open-ended ecological controllers. These are systems that seem to be specifically designed so as to constantly search for opportunities to make the most of body and world, checking for what is available, and then (at various timescales and with varying degrees of difficulty) integrating it deeply, creating whole new unified systems of distributed problem-solving. Robotoddler,
as I just showed, does this open-endedly for walking but can apply that kind of learning to nothing else. We humans seem able to apply the same kinds of ecologically controlled learning much more widely. In fact human agents are highly engineered so as to be able to learn to make maximal problem-simplifying use of an open-ended variety of internal, bodily, or external sources of order. For example, we can learn to use tools, sports racquets, and musical instruments in ways that exploit the intrinsic dynamics of those material structures (for discussion and lots of examples, see Clark 2003 and Clark, forthcoming). But more important for present purposes, there seems to be something analogous to (possibly identical with; see Christensen 2004) ecological control that operates in the cognitive domain. Thus suppose we ask: Could human cognition involve ecologically controlled, soft-assembled, distributed systems built of heterogeneous parts? I believe (see Clark 1997, 2003; Clark and Chalmers 1998) that the answer is yes and that minds like ours are distinguished, in part, by the ease with which brains like ours form larger problem-solving wholes that incorporate and exploit extra-neural stores, strategies, and processes. These larger problem-solving wholes, I would like to argue, are not simply extended cocoons for the 'real' selves, choosing agents and cognitive engines hidden deep within. Rather (or so I wish to suggest), they are those selves, agents, and cognitive engines. To creep up on this initially unsettling idea, let's next turn to the very idea of the self.

Situating the Self

Contemporary work on the nature of the self, and on the origins of the sense of self, tends to distinguish between two major contributing elements. First, there is something like a core sense of being, which involves having a point of view on the world (I see the world from over here), having a sense of what I can and cannot do (I can reach and grasp that cup, I cannot fly or jump to the roof, etc.) and having a sense of the limits and placement of my own body in space. Dogs, cats and rabbits, as well as human beings, have this core sense of their own embodiment and abilities. But second, in the human case, at least, there is also something like a complex narrative self.4 This is (roughly speaking) an understanding, co-constructed by myself and others, of the kind of person I am, the kinds of projects and interests I have, the shape of my life so far, and so on. Both elements can be significantly impacted by the opportunistic recruitment of external props and aids. Fitted with a shiny new prosthetic arm
that can lift heavier weights than before, my direct, automatic sense of what I can and can't do must rapidly alter and catch up. Fitted with a cochlear implant that cures my deafness and (as a kind of added extra) allows me to hear sounds in ranges that most adult humans cannot detect, my core sense of my own auditory potential again changes. Accustomed to the (now automatic and unreflective) use of, say, a retinal display that allows me to invisibly retrieve information from a plug-in or courtesy of a wirelessly accessible database, it seems less and less clear where what ''I'' know stops and what ''it'' (the plug-in) makes available starts. If the latter claim seems less plausible than the others, imagine an agent who knows, pretty much by rote, a lot of facts about women's basketball. Now imagine that instead of storing all of those facts in your head, you deploy a kind of heads-up display that provides instant access to the main performance statistics of key players over the last twenty years. The display might be delivered by eyeglasses, or even courtesy of a wireless implant sending signals directly into visual cortex (rather like certain new-generation cochlear implants; see Clark 2003, ch. 1). The system is set up so that the visual sighting of a player's name, or the auditory pickup of that name, or simply mouthing the name, activates a kind of augmented reality visual overlay displaying all the key facts and figures. Imagine too that the system is fairly flexible, allowing you also to start with categories (e.g., three-point field goal percentages in the year 2000) or with specifics (players with three-point field goal percentages of 0.350 or above) and to then retrieve information accordingly. Over a period of use, you become so accustomed to this easy, on-demand access that the plug-in (for want of a better word) becomes automatically, unthinkingly deployed, and that you usually trust what it delivers. As a result perhaps you start to behave, and subsequently to feel, as if you simply know which of any two players had the best three-point field goal percentage in any given season, and so on. Would you be wrong to feel that? The answer is by no means cut-and-dried. True, your knowing these things depends on the proper operation of the plug-in. But your knowing other things depends, equally, on the proper operation of parts of your brain. And in each case, damage and malfunction is always a possibility. True, you need to retrieve the information before it is present to your conscious awareness. But knowledge stored in long-term biological memory is in much the same boat, until some kind of retrieval process poises it for the control of verbal report and intentional action. (For a longer and more philosophically nuanced version of this much-compressed argument, see Clark and Chalmers 1998).
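The parity intuition being pressed here can be put in quasi-computational terms. In the hedged sketch below (the stores, the basketball entries, and the interface are all invented for illustration, not drawn from Clark and Chalmers), the retrieval routine exposes one uniform ''recall'' operation and is indifferent to whether the answering store happens to be onboard or worn; from the caller's point of view, both are just poised, trusted resources.

```python
class TransparentMemory:
    """One retrieval interface over heterogeneous stores; the caller
    cannot tell, and need not care, which store supplies the answer."""
    def __init__(self, *stores):
        self.stores = stores

    def recall(self, query):
        for store in self.stores:       # biological first, then plug-in
            if query in store:
                return store[query]
        return None                     # the familiar drawing of a blank

# Invented contents, purely for illustration.
biological = {"own phone number": "555-0100"}
plug_in = {"player A 3pt pct, 2000": 0.41,
           "player B 3pt pct, 2000": 0.38}

memory = TransparentMemory(biological, plug_in)
print(memory.recall("own phone number"))        # answered 'from the head'
print(memory.recall("player A 3pt pct, 2000"))  # answered by the plug-in
```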
A helpful, though at best partial, parallel is with the way our sense of ''seeing the whole visual scene before us'' depends, on some contemporary models, on the way information that is not currently represented in conscious visual awareness, and that is ''stored'' only in the external scene itself, is nonetheless (thereby) poised for easy access by a simple saccade. That poise-for-easy-retrieval is taken for granted in our daily planning and acting, and may be the source of our feeling that we see detail and color throughout the whole of the visual field (for discussion, see Clark 2002a; Noë 2002). This is not to say that there are no interesting differences. For example, knowledge stored in long-term biological memory is open to all kinds of subterranean processes of integration and interference (with both old and newly acquired knowledge). And neurally stored information is fluently accessible by an amazing variety of routes, and in a wide variety of situations.5 Nonetheless, the simple feeling of ''already knowing'' the answer to a question as soon as it is asked is surely the knowledge-based equivalent of the more familiar notion of transparent equipment: equipment (like the carpenter's hammer) with which we are so familiar and fluent that we do not think about it in use, but rather rely on it to mediate our encounters with a still-wider world. Easy access to specific bodies of information, as and when such access is normally required, is all it takes for us to factor such knowledge in as part of the bundle of skills and abilities that we take for granted in our day-to-day life. And it is this bundle of taken-for-granted skills, knowledge, and abilities that—or so I am suggesting—quite properly structures and informs our sense of who we are, what we know, and what we can do.

The Shrinking Chooser

If this still feels unnatural, it is largely (I suggest) because we have in any case only the most tenuous collective grip on what it means to be a choosing, acting 'self', or a unified 'mind', and because we suffer from a chronic tendency to misconstrue the relations between our self-conscious choosings and the vast webs of nonconscious processing activity (all those whirrings and grindings of machinery, neural and perhaps nonneural, internal and perhaps external) that also structure and determine our own actions and responses. Until we form a better, more consistent image of the relationship between these factors, we cannot hope to know ourselves. There is, most (though not all) theorists will agree, a genuine (though not always sharp and all-or-nothing) distinction between those things of
which I am consciously aware and those things of which I am not.6 Right now, for example, I am conscious of the page in front of me, of the glare of my desk lamp, and of the difficulty of formulating this particular thought! I am not, however, conscious of all the complex low-level visual processing (the parallel processing of multiple differential equations) that supports and makes possible my conscious visual awareness of the page and the glare. Nor am I conscious of whatever complex internal machinations underlie my sudden sense that I am here tiptoeing into difficult and dangerous territory. Certainly, at any given moment, not all the cognitively important goings-on in my brain are present as contents of my current conscious awareness. That is why we sometimes find thoughts and ideas, ones that we nevertheless recognize as originated by ourselves, simply ''popping up in our heads'': they are the intrusive conscious fruits of some ongoing, subterranean, nonconscious information processing. It is hard to overestimate the significance of these nonconscious cognitive processes in the determination of the mental character of a persisting and identifiable thinking being. We must reject the seductive but ultimately barely intelligible idea that we (qua individual, thinking things) are nothing more than a sequence of conscious states. The identification of the agent or chooser with such a thin slice of themselves obscures the full suite of mechanisms whose coordinated action is responsible for much of what is distinctive of an individual chooser. The shrinkage would leave us with no real grip on the cohesion and continuity that we naturally associate with the idea of a single mind, self, or chooser persisting through time. It is illuminating, I think, to actually try the following simple experiment. For just ten minutes, keep track (as far as you can) of the contents of your conscious awareness. Unless you are totally engaged in a single all-absorbing task, you will probably end up with a sequence of often unconnected thoughts. A feeling of hunger, a thought about consciousness and the self, a worry about a forthcoming lecture, a glimmer of sexual arousal, a pang of anxiety, an urge to write to an old friend, another thought about the self (a good one—but where did it come from?), the strong desire for a coffee, and so on and on. The sequence of conscious contents is highly varied (in type) and radically discontinuous (in content). Themes persist, and whole trains of thought are sometimes, painfully, birthed. But the true principles of continuity, and the bulk of the thinking, choosing self, must lie largely underground. We cannot, it seems, afford to identify ourselves with the conscious contents of momentary time-slices. With this in mind Dennett (2003) responds to worries (Libet 1985, 1999) concerning the time-lag between onset of
action and conscious awareness of a decision by noting that, ''Our free will, like all our other mental powers, has to be smeared out over time, not measured at instants. Once you distribute the work done . . . in both space and time in the brain, you have to distribute the moral agency around as well. You are not out of the loop; you are the loop'' (Dennett 2003, p. 242). Taking the whole loop of temporally and spatially spread cognitive activity seriously is, however, already to take the crucial step toward understanding ourselves as ecological control systems capable of incorporating external structures deep into their cognitive routines. For the choices before us are now relatively stark:

Either, treat the mind and self as nothing but the shifting set of momentary ''conscious'' contents (thus shrinking mind and self beyond recognition).

Or, allow mind and self to depend on the ongoing coordinated activity of multiple temporally spread conscious and nonconscious processes, thus inviting us to consider certain nonbiological members of the class of nonconscious processes as contributing as deeply to the mechanical underpinnings of our minds and selves as do the nonconscious neural processes.

The challenge, in other words, is this. Given the profound role of nonconscious, opportunistically recruited neural resources in the intentional origination of action, show us why (apart from some unargued prejudice) the machinery of mind and self should be restricted to the neural, the inner, or the biological. We need to seriously question the idea that neural, inner, and/or biological goings-on are in some way incredibly special. We need to rid ourselves of the idea that our brains are somehow touched with the magic dust that makes them suitable to act as the physical machinery of mind and self, while the nonbiological stuff must forever remain mere slave and tool. The relations between our conscious sense of self (our explicit plans and projects, and our sense of our own personality, capacities, bodily form, location and limits) and the many nonconscious neural goings-on that structure and inform this cognitive profile are, it seems to me, pretty much on a par with the relations between our conscious minds and various kinds of transparent, reliable, robust and readily accessed nonbiological resources. When those resources are of a recognizably knowledge- and information-based kind, the upshot is an extended cognitive system: a biotechnologically hybrid mind, a biotechnologically hybrid self.
Is this merely arguing over words? Why should we worry whether we accept well-integrated bio-external elements as aspects of the physical machinery of self and mind? The deepest reason to care, it seems to me, is that to fail to do so is to implicitly accept a model according to which the machinery of the self becomes identified with the machinery of conscious reason. This leads directly to a variety of fears and worries (Wegner 2002 is a nice example) concerning the authorship of many perfectly intentional actions. The remedy is to see that the choosing agent is not somehow hidden within the machinery whose operations are most accessible to consciousness. Rather, she is the whole well-tuned ensemble capable of taking responsibility for actions and of initiating actions that make sense in the light of her long-term projects and commitments. Gallagher (2005) extends the Dennettian view in just this way, writing that, ‘‘I don’t disagree with Dennett concerning the role played by nonconscious elements, except that I think we are even larger than he thinks—we are not just what happens in our brains. The ‘loop’ extends though and is limited by our bodily capabilities, into the surrounding environment, which is social as well as physical, and feeds back through our conscious experience into the decisions we make’’ (Gallagher 2005, p. 242). Confronted with this general class of proposal, according to which the machinery of mind extends out into the surrounding world, many people feel deeply uncomfortable. How, they ask, could something to which I (they mean the conscious mind) may have so little detailed access count as in any way a proper part of me? How could, to take an admittedly extreme case, the ongoing daily computations of a software agent that only ever reports back when it has found some tasty mercantile morsel count as part of my own mechanistic underpinnings? We may feel easier about this, I suspect, once we face up to the fact that this kind of relation already obtains between the conscious mind and sizable chunks of our onboard neural machinery. To see this quite concretely, imagine you are now reaching for that coffee cup sitting before you on the desk. It may seem to you that your hand and finger motions are being sensitively guided by your own conscious seeing. This, however, is not really the case. In fact fine-tuned reaching and grasping involves the delicate use of visually received information by functionally and neuro-anatomically distinct subsystems operating, for the most part, outside the window of conscious awareness. It is this nonconscious circuitry that guides the most delicate shape- and position-sensitive aspects
of reach and grasp. (For a radical version of this, see Milner and Goodale 1995; for a more balanced account, see Jacob and Jeannerod 2003.) In a similar way it is the nonconscious use of visual information that is responsible for many of our fine postural corrections and compensations (e.g., standing up while riding a bus). Even in the case of our own biological brains, then, the conscious self is in (direct, micromanaging) control of much less than we think. Not just the ''autonomic'' functions (breathing, heartbeat, etc.) but all kinds of human activities turn out to be partly supported by quasi-independent nonconscious subsystems. This is no surprise, I am sure, to any sports player: it doesn't even seem, when playing a fast game of squash, as if your conscious perception of the ball is, moment by moment, guiding your hand and racket. Nor should it come as a surprise to artists and scientists, who are often painfully aware that the bulk of their own (intentional, owned, self-expressing) creative activity flows from subterranean and nonconscious sources. What seems to matter, for our daily sense of effective agency and choice, is (1) that the conscious mind has a rough and fallible sense of what she (the embodied, embedded, perhaps technologically extended, agent) knows, wants, and can and can't do, and (2) that sometimes at least, conscious rehearsal can be an active part of the process that leads, within that complex economy, to intentional action. The conscious contribution here need amount to no more than a gentle but subjectively experienced nudge that tips the balance of a complex, and to a large extent unconsciously self-organizing, system. The conscious mind, I am thus suggesting, is well placed to act as an ecological controller in the sense outlined in the opening sections. It is for this reason that the fear of ''loss of control,'' as we cede more and more to nonconscious inner processes and to a supporting web of nonbiological scaffolding, is misplaced. For what matters is not that the conscious self be micromanaging every detail of every subroutine but that, working together, the conscious mind and a variety of nonconscious subsystems provide usable, robust support for the kinds of life we lead and the kinds of activity we value. The conscious mind, on this model, finally emerges as something like a new-style business manager whose role is not to micromanage but to set long-term goals, pursue some slow deliberative reasoning, and gently nudge the larger system in certain directions, all the while actively creating and maintaining the kinds of conditions in which the overall distributed cognitive economy performs at its best.7
The Buck That Never Stops

A common objection, at about this point, goes something like this: even if external, nonbiological elements do sometimes help us—quite profoundly—in our problem-solving activities, still isn't it always at least our biological brains that have the final say? The mental buck stops there. The brain is where I am because the brain is the controller and chooser of my actions in a way this other stuff (software, pen, paper, palm-pilot) is not. And that, it may be suggested, is why the nonbiological stuff should not count as part of the real self or cognitive system, and why our minds are not hybrids built of biological and technological parts. Human minds, so the obvious objection goes, are good old-fashioned biological minds, albeit ones that enjoy a nice wraparound of power-enhancing tools and culture. This sounds sensible and proper. Until we turn up the magnification on the biological brain itself. Notice first that many processes involved in the selection and control of actions are routinely off-loaded, by the biological brain, onto the nonbiological environment. Think of the knot in the hanky, the automated desktop diary, and the software agent empowered to purchase. In reply to such an observation the sceptic is likely to invoke some kind of ultimate authority: Who was it that decided to knot the hanky, the sceptic demands? The biological brain, that's ''who,'' and that's YOU. Who was it that empowered the software agents to purchase? The same old brain, the same old YOU! But this reply is a major hostage to fortune. Suppose we now ask some parallel questions within the neurobiological nexus itself. Do we now conclude that the real ''me'' is to be identified only with those select elements of the neural machinery involved in ultimate decision-making? Suppose only my frontal lobes have the final say? Does that shrink the physical machinery of mind and self to just the frontal lobes? What if, as Dennett suspects, no neural subsystem has always and everywhere the final say? Has the mind and self simply disappeared? As Jerry Fodor once said, ''If, in short, there is a community of computers living in my head, there had also better be somebody who is in charge; and, by God, it had better be me'' (Fodor 1998, p. 207). Perhaps not. What we really need to reject, I suggest, is the seductive idea that all these various neural and nonneural tools need a kind of stable, detached user. Instead, it is just tools all the way down. Some of those tools are indeed more closely implicated in our conscious awareness of the world than others. But
those elements, taken on their own, fall embarrassingly short of constituting any recognizable version of a human mind or of an individual person. Some elements, likewise, must be more important to our sense of self and identity than others. And some elements will play larger roles in control and decision-making than others. But this divide, like the ones before it, tends to crosscut the inner and the outer, the biological and the nonbiological. Different neural circuits provide different capacities, and contribute in different ways to our sense of self, of where we are, of what we can do, and to decision-making and choice. External, nonbiological elements provide still further capacities, and contribute in additional ways to our sense of who we are, where we are, what we can do, and to decision-making and choice. But no single tool among this complex kit is intrinsically thoughtful, ultimately in full control, or plausibly identified as the inner 'seat of the self'. We (we human individuals) just are these shifting coalitions (see Ainslie 2001, Ross chapter 10 in this volume) of tools. We are ''soft-selves,'' continuously open to change and driven to leak through the confines of skin and skull, annexing more and more nonbiological elements as aspects of the machinery of mind itself. The metaphor of 'tools all the way down' may seem to threaten, as Ross (personal communication) notes, to deconstruct itself. How can 'tools' be the right notion in the absence of a central intelligent user? The air of paradox is intentional. The user is nothing but the cumulative effect of the coactive unfolding of the various resources supporting different aspects of adaptive response. This unfolding is determined by a delicate mix of sparse ecological control and pure self-organization. The user is what we see (in others and ourselves) when all this is working properly: a more-or-less rational being pursuing a more-or-less unified set of goals and projects. (See Rovane 1998.) The role of others in all this is not to be underappreciated. Ross (2004) makes a powerful case that humans (like other animals) communicate and bargain nonlinguistically using multiple analogue signaling systems (think of the continuous variety of small facial and bodily motions that may convey pleasure or displeasure, or encouragement to approach or retreat). But when such agents are also able to label their own states and those of others (e.g., pinning them down as ''attracted'' or ''unattracted,'' ''interested'' or ''not interested,'' and so on), they enter into a new kind of arena, one whose dynamics can be stabilized by a series of such all-or-nothing (digital) commitments (e.g., these are achieved whenever one agent labels another's state and the other does not reject the label). Further negotiations and coordinations among this group can then be predicated upon this stable baseline of publicly endorsed digital commitments.
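The labeling dynamic Ross describes can be rendered schematically. The sketch below is my own gloss, not Ross's formal model: the continuous signal is stood in for by a single invented number, and the one rule encoded is the one just stated, that a proposed label not rejected by its target becomes a standing, all-or-nothing commitment on which later coordination can be predicated.

```python
def propose_label(commitments, labeler, target, label, rejected=False):
    """A label the target does not reject becomes a public, digital
    commitment: an all-or-nothing fact both parties can build on."""
    if not rejected:
        commitments[(target, label)] = labeler
    return commitments

commitments = {}
analogue_signal = 0.73   # B's continuous, many-valued display (invented)
label = "interested" if analogue_signal > 0.5 else "not interested"
propose_label(commitments, "A", "B", label)   # B lets the label stand
# Later coordination rests on the standing commitment, not on
# re-reading the original analogue signal.
if ("B", "interested") in commitments:
    print("A proceeds on the shared understanding that B is interested")
```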
Ross depicts such practices as recursively agent-generating: that is to say, the agents that enter into the digitally modulated negotiation (or other coordination project) are defined, in part, by these very sets of commitments, and the new utility functions that accompany them. Despite all this we are prone, it seems, to a particularly dangerous kind of cognitive illusion. Because our best efforts at watching our own minds in action reveal only the conscious flow of ideas and decisions, we mistakenly identify ourselves with this stream of conscious awareness. Then, when in our more scientific moments we begin to inquire into the material and physical underpinnings of the mind and self, it can quickly seem as if much (though not all) of the brain and all the rest of the body, not to mention the surrounding social and technological webs, are just tools for that conscious user. This is the mistake that led Avicenna, the Islamic philosopher quoted at the start of the chapter, to depict his own bodily limbs as ''no more than garments.'' But garments for what? A conscious Cartesian self perhaps? To pursue this route is to embrace a hideously disfigured image of the mind and self, privileging a vanishingly small piece of the true personal and cognitive pie. A better bet, as we have already begun to see, is the de-centralized, distributed, heterogeneous vision of the machinery of mind and self powerfully championed by Dennett (see especially Dennett 1991). Much of Dennett's work sets out to oppose the persuasive image of the ''Cartesian theater'': the mythical place inside our brains where sensory inputs, thoughts and ideas are all inspected by a ''central meaner'' whose well-informed choices determine our deliberate actions. Dennett marshals a plethora of philosophical, psychological, and neuroscientific evidence against such a view. His target is often thought to be simply the idea of a neural or functional center of consciousness. But Dennett's deeper quarry, or so it has always seemed to me, has been the idea of a central self, a small-but-potent internal user relative to whom all the rest—be it neural, bodily or technological—is mere toolbox. Where we hallucinate a central self, some spiritual or neural point wherein our special individual essence resides, Dennett finds only a grab bag of tools, and an ongoing narrative: a story we (the ensemble of tools) spin to make sense of our actions, proclivities and projects. According to Dennett, we are our own best story, and our sense of self is a kind of artifact, useful for many purposes, but best taken with a pinch of salt. I will not rehearse or critique Dennett's arguments here (though I have done so at some length elsewhere—for example, see Clark 2002b).
Instead, I simply note that our earlier reflections lead us to the very same conclusions, unpalatable though they may initially seem. There is no self, if by self we mean some central cognitive essence that makes me who and what I am. In its place there is just the ‘‘soft-self’’: a rough-and-tumble control-sharing coalition of processes—some neural, some bodily, some technological—and an ongoing drive to tell a story, to paint a picture in which ‘‘I’’ am the central player. Giving up on the image of a hard central self raises a thorny problem. What, then, makes a grab bag of tools (a grab bag whose specific elements may shift and change over time) into a unified, cohesive self? Part of the answer, to be sure, is that we simply hallucinate more unity and cohesion than in fact exists. Related to this is the pragmatic point that for many social and legal purposes, it is convenient to simply identify the agent with the core biological ensemble. We imprison the body and brain, not the laptop! But we do this despite knowing that individual bits of neural circuitry (my hippocampus, let’s say) are themselves as incapable of being ‘guilty’ as the laptop! What we are really doing is rejecting a pattern of behavior that has itself emerged from a whole social and biotechnological matrix. But another, perhaps more interesting, part of the answer is that the unity and cohesion of the self, and the distinctness of the self (the sense we have of being individual agents, located thus and so, confronting a wider world) are not simple givens. Instead, they are (imperfect and constantly vulnerable) achievements. In other work I have discussed how the sense of location and body boundaries is constructed on the basis of coordinated signals arising within perception–action cycles. (See Clark 2003, ch. 4). The preconditions for the emergence of a rich sense of self begin to be met, I suspect, when on the basis of such information a loose-knit system begins to stabilize itself and to actively protect its own problem-solving infrastructure. Thus reflect on the (superficially disproportionate) vexation of the child whose parents enter and slightly re-arrange her bedroom when she is not around. The feeling is one of almost personal assault. The room, organized in a certain way, was integral to the child’s modes of play and study. To borrow an even simpler example, most of us keep our drinking glasses in a certain cupboard in the house. By actively stabilizing this environmental structure (putting clean glasses back in that same cupboard) we simplify the problem of future glass-location. Or consider the way files are arranged and stored in your own office. Our offices are organized in highly individual ways, dovetailed to our specific needs and to our different neurobiological profiles. We human beings actively organize our own local environments
for cognitive purposes, and then take steps to protect this achieved organization (woe to the cleaner who disturbs the piles). Again and again, we act so as to stabilize our local environments in ways that simplify or enhance the problem-solving that needs to be done. All this is a close cousin, I claim, to our carefully constructed and defended notion of a bounded self. The narrative-spinning drive (clearly evidenced in well-known studies of the tendency toward confabulation; e.g., see Nisbett and Ross 1980), when confronted with such active efforts at stabilization, tends (I conjecture) to project the principle of stabilization further and further inward, inclining us to hallucinate a single, central organizing self.

Imagine a pile of sand, deposited roughly on the ground and slowly settling into a stable arrangement of grains. Were the pile of sand self-aware, it too might hallucinate a kind of inner essence: a special grain or set of grains whose deliberate actions sculpt the rest into a stable arrangement. But there is no such essence. The sand pile simply self-organized into a more-or-less stable coalition of grains. Similarly (and here I also refer the reader to Dennett's classic 1974 discussion of the 'law of effect') certain coalitions of biological and nonbiological problem-solving elements ('grab bags of mind-tools') prove more stable and (hence) enduring than others. These configurations have a tendency to preserve and even repeat themselves. When viewed by a conscious, narrative-spinning element, this all looks like the work of some central organizer: the real self, the real mind, the real source of all that observed order. Thus is born perhaps the image of the self as a critical yet vanishingly slim slice of the overall problem-solving ensemble (brain, body, cognitive technologies): a slice so slim and elusive that our own neural circuits ('my' hippocampus, 'my' frontal lobes) can quickly seem like its tools!

That little story is mere speculation. But however it arises, this notion of a real, central, yet wafer-thin self is a profound mistake. It is a mistake that blinds us to our real nature, and leads us to radically undervalue (and misconceive) the roles of context, culture, environment and technology in the constitution of individual human persons. To face up to our true nature (soft-selves, distributed de-centralized coalitions) is to recognize the inextricable intimacy of self, mind, and world.

Finding the Balance

But not so fast. Isn't there a tension in the story as I have told it? On the one hand, I want to speak of the conscious mind as a source (or sources) of ecological control, adding crucial nudges to the complex dynamics of
much larger (potentially hybrid) cognitive and behavioral systems. On the other hand, I want to depict us as soft selves: de-centralized, distributed, and self-organizing systems that just happen to have a perspective on their own activity and an accompanying story to tell. How can both stories be true?

A suggestive move is made by Velleman (2000), who notes that our own verbal utterances, encountered in the context of a narrative-spinning drive, may at times serve the purpose of helping to bring about the very thing they depict us as about to do. This is a function they could perform even in the context of a system that is largely self-organizing in just the way Dennett suggests. For example, if late at night I say to you "I'm leaving now," I may say this in part as a means of bringing it about that I do, in fact, leave—that I leave, that is to say, despite my self-perceived urge to continue our conversation. This strategy can work, according to Velleman, because as a rampant narrator of my own life story, I am driven to strive for consistency, for alignment between behavior and narrative. So by saying that I am about to do so and so, I increase the costs to myself of not so doing. This trick (the addition of a new narrative-induced cost to not doing so and so) would work for silent inner rehearsal as well as for overt speech. The conscious thought or overt utterance that depicts me, to myself, as about to do so and so thus makes it more likely that I will do what I think or say. The key (as Ismael 2004, 2006, nicely notes) is simply to make the narrative run ahead, into the future, so that the drive for consistency and alignment acts as a causal influence on what we then do, rather than a post hoc (and sometimes demonstrably ad hoc8) commentary on what we have already chosen or done.

Ismael (2004, 2006) then uses this as the springboard for a related but much stronger move. What is right in Velleman's position, she suggests, is his recognition that "if the brain is generating a self-representation, there's no reason that the thing can't have a role in the determination of behavior, indeed, no reason that it can't acquire an increasingly prominent role" (2004, p. 6). (By a 'self-representation' Ismael seems to mean any kind of internal model that contains even a partial representation of the agent as a distinct property-bearing individual, so any Dennett-style narrative-spinning engine should count as a self-representation.) But as soon as such a self-representation assumes some kind of causal role (whether by the neat trick of forward-narrating or any other means), we are no longer (according to Ismael) dealing with a simple self-organizing system that just happens to spin a story about itself. Rather, we are dealing with a system that really does include a self-model as an active principle of organization. Not self-organization but organization by a self-model! Such systems may behave in ways that are more flexible (so the argument continues) than any first-order self-organizing system because they can use the self-model to select among multiple responses in ways that are driven by that model, hence by stored memories, representations of goals, and so on.
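Velleman's forward-narration trick lends itself to a simple rendering in code. The sketch below is mine, not Velleman's or Ismael's: the action names, the utilities, and the size of the consistency cost are invented for illustration. The point is only that a narrated intention, by attaching a cost to narrative-behavior mismatch, can tip an otherwise self-organizing choice.

    # A minimal sketch of forward narration as a consistency cost
    # (illustrative values only). The agent picks whichever action has
    # the highest adjusted utility; narrating an action in advance
    # penalizes every alternative to it.

    def choose(utilities, narrated=None, consistency_cost=1.5):
        """utilities: dict mapping action -> baseline utility.
        narrated: the action the agent has depicted itself as about to do."""
        def adjusted(action):
            penalty = consistency_cost if narrated and action != narrated else 0.0
            return utilities[action] - penalty
        return max(utilities, key=adjusted)

    late_night = {"keep_talking": 2.0, "leave": 1.0}
    print(choose(late_night))                    # -> 'keep_talking'
    print(choose(late_night, narrated="leave"))  # -> 'leave'

Saying "I'm leaving now" changes nothing about the baseline attractions; it simply makes staying more expensive, which is all the causal purchase the self-narrative needs.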
In this story the stress on various forms of higher level control seems to me exactly right, as does the observation that such control may often require the system to deploy some kind of model of its own nature, history, or dynamics. But for my own part, I remain wary of the idea of any single such self-model, and much more inclined (see Clark 1997) toward a vision of multiple partial models, most of which may be relatively low level (e.g., a variety of forward models of bodily dynamics; see below). As far as the most inclusive and abstract such 'self-model' goes (the one that corresponds most closely to the narrative self), I wonder what the notion of a model here adds to the simpler notion of systems that know some stuff about themselves and use that knowledge to help select actions? Perhaps then the contrast between self-organizing and self-governing systems is not as sharp as it first seems?9

Ismael thus depicts a kind of conflict between what she calls self-governing models (ones in which a self-representation plays a crucial, flexibility-promoting causal role) and truly self-organizing ones, in which there is only input-driven self-organization plus a spun narrative. The notion of ecological control, I want finally to suggest, begins to show us how to reconcile these two notions,10 and hence how to reconcile Dennett's emphasis on the unreality of selves with Ismael's recognition that agents with some kind of self-model may exhibit complex dynamics that are the causal consequence of that very self-model. For what I am calling ecological control is what you get when you add an inter-animated and changeable variety of thin slices of self-governance to a system in which the intrinsic, self-organizing properties of many subsystems are allowed to bear a lot (but not all) of the problem-solving strain. The self-actuating passive dynamic walker, whose direction of travel and speed of walking is selected by a local supervisory system empowered to nudge and tweak the mechanically coupled leg system, exploits a potent combination of self-organization and self-governance. Importantly, self-governance, as I am understanding it, may be the province not of any single inner self-model but of a variety of partial11 self-governing strategies, some of which may themselves involve circuitry spanning brain, body, and world, and themselves orchestrated via a higher level kind of self-organization. According to such a picture, overall self-governance, though real and important, is the emergent outcome of the action of a whole complex of partial self, body, and world models acting as mini ecological controllers in a distributed cognitive economy. Some of these may be simple forward models of the dynamics of bodily subsystems (e.g., see Miall and Wolpert 1996; Clark and Grush 1999), while others may be more tightly woven with systems for episodic memory and for categorization. In all cases, though, the self-governance works only because it is delicately and continuously keyed to (and often highly exploitative of) a variety of sources of order inherent in the rest of the system. True (flexible, efficient, robust) self-governance thus positively requires, or so I want to suggest, the use of 'soft' ecological control strategies: ones that are maximally exploitative of the order and intelligence that are distributed throughout the larger system.
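A toy simulation may help fix ideas about this division of labor. The sketch is my own, with invented dynamics and numbers (nothing here is drawn from Clark or from the walker literature): the subsystem settles itself on every tick, while the ecological supervisor intervenes only occasionally, resetting a single target parameter.

    # A toy model of ecological control (invented dynamics, for
    # illustration). The subsystem is self-organizing: on each tick its
    # own damped dynamics pull it toward the current target. The
    # supervisor never drives the trajectory step by step; it only
    # nudges the target now and then.

    def subsystem_step(x, v, target, stiffness=0.1, damping=0.8):
        """One tick of intrinsic dynamics: a damped pull toward target."""
        v = damping * v + stiffness * (target - x)
        return x + v, v

    def run(nudges, ticks_per_nudge=50):
        """nudges: the sparse sequence of targets the supervisor sets."""
        x, v = 0.0, 0.0
        history = []
        for target in nudges:                  # the supervisor's rare tweak
            for _ in range(ticks_per_nudge):   # self-organization does the rest
                x, v = subsystem_step(x, v, target)
                history.append(x)
        return history

    path = run(nudges=[1.0, -0.5])
    print(round(path[49], 2), round(path[-1], 2))  # settles near 1.0, then near -0.5

The ratio of effort is the point: two interventions from the supervisor against a hundred ticks of intrinsic dynamics, which is what makes the control 'ecological' rather than micromanaging.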
This suite of soft, hybridization-favoring, partial self-governing routines cannot reconstruct a stereotypic Cartesian mind or a traditional single central self. To that extent we may appreciate what is correct in Dennett's depiction of (that kind of) self as a kind of illusion, while embracing the causal potency of (perhaps a variety of) empirically real partial self (and world) models, some of which link memory and motivation to action and choice.

Conclusions: Situations and Persons

How can we reconcile the vision of human agents as distributed, hybrid problem-solving ensembles with the vision of human agents as, indeed, agents: as autonomous individuals exercising control and choice? Viewing ourselves as loci of multiple systems of ecological control may provide a useful tool to attempt such a reconciliation. Ecological control systems are, first and foremost, essentially opportunistic and exploitative. They take whatever is around, and build it into fluent problem-solving routines. But they are nonetheless controllers in good standing, able to tweak and nudge complex systems in ways that promote goal-driven activity and flexible adaptive response. Seeing ourselves as biologically based (but not biologically imprisoned) engines of ecological control may help us to develop a species self-image more adequate to the open-ended processes of physical and cognitive self-creation that make us who and what we are.

Acknowledgments

This project was completed thanks to teaching relief provided by Edinburgh University and by matching leave provided under the AHRC
Research Leave Scheme (grant reference number: 130000R39525). Some of the material is drawn from Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence (Oxford University Press, 2003). Thanks to the publishers for permission to use this material here. Thanks also to Don Ross, Jenann Ismael, and Harold Kincaid for invaluable feedback and advice.

Notes

1. Ibn Sina Avicenna was a Persian philosopher-scientist who lived between AD 980 and 1037. The quote is from R. Martin's unpublished translation of his De anima (Liber de anima seu sextus de naturalibus), vol. 7.

2. Stelarc is an Australian performance artist whose work explores the technological transformations of embodied experience. The quote is from the Stelarc Web site at www.stelarc.va.com.au.

3. Much of this work originated in the Andy Ruina Lab, Cornell, and with original contributions by Tad McGeer.

4. See Dennett (1991, ch. 13) and Damasio (1999, ch. 7).

5. For discussion of the impact of such differences on the arguments for the 'extended mind', see Adams and Aizawa (2001) and Clark (2005).

6. Dennett might seem to reject this claim, as when he writes that "we cannot draw the line separating our conscious mental states from our unconscious mental states" (1991, p. 447). But he does not deny that many key processes operate at what he himself initially dubbed the 'subpersonal level'. And certainly, Dennett would not deny that much of what structures my behavior emanates from purely subpersonal processes. I think that Dennett's real target is the idea that within the smaller class of potentially directly reportable goings-on, there is a neat line between those that are at this moment conscious and those that are not. See, for example, Dennett (1997).

7. The skill of successful self-management is thus pretty much the same skill as management in certain new business sectors. In each case what matters is knowing how to exercise rather indirect forms of intervention and control: what Kevin Kelly (1994, p. 330) nicely dubs 'co-control'. Co-control is what you get when an ecological controller has opportunistically recruited a motley of resources to deal with a current need or problem.

8. I am thinking here of the empirical evidence (though see the preceding section) to suggest that our chosen actions are frequently the results of unconscious activity that precedes the experience of conscious will. See, for example, Libet (1985), and more recent demonstrations such as Wegner (2002). For an interesting review, see Haggard (2005).
9. Indeed, Ismael notes that the contrast between self-governing and self-organizing systems is 'not a dichotomy' and that 'most animate systems fall somewhere in the space between those with fixed and those with flexible response functions' (2004).

10. One potentially important difference is that Ismael thinks that the relation of true dynamical coupling, in which 'controller' (ecological or otherwise) and 'controlled' are engaged in a continuous reciprocal exchange, introduces important complexities and undermines the attempt to couch the relation in quite those terms. The notion of an ecological controller, Ismael might object, implies a kind of divisibility that ongoing coupled influence (between self-model and whole embodied system) cannot underwrite. Thus she writes that in such cases "there is no simple way to decompose the system into dynamically separable units" (2004, p. 10). For my own part, I don't yet see why this makes a significant difference. The roles of the parts seem distinct enough, even if their ongoing co-evolution resists decomposition. For example, one part of the system may have access to stored memories and a self-profile while another, though constantly coupled to it, does not.

11. For something like this vision of multiple partial models as a mode of control, see Arbib (1993).

References

Adams, F., and K. Aizawa. 2001. The bounds of cognition. Philosophical Psychology 14: 43–64.

Ainslie, G. 2001. Breakdown of Will. New York: Cambridge University Press.

Arbib, M. A. 1993. Book review: Allen Newell, Unified Theories of Cognition. Artificial Intelligence 59: 265–83.

Christensen, W. 2004. Self-directedness, integration and higher cognition. In D. Spurrett, ed., special issue on distributed cognition and integrational linguistics. Language Sciences 26: 661–92.

Clark, A. 1997. Being There: Putting Brain, Body, and World Together Again. Cambridge: MIT Press.

Clark, A. 2002a. Is seeing all it seems? Action, reason and the grand illusion. Journal of Consciousness Studies 9: 54–71.

Clark, A. 2002b. That special something: Dennett on the making of minds and selves. In A. Brook and D. Ross, eds., Daniel Dennett, pp. 187–205. New York: Cambridge University Press.

Clark, A. 2003. Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. New York: Oxford University Press.
Clark, A. 2005. Intrinsic content, active memory, and the extended mind. Analysis 65: 1–11.

Clark, A. 2007. Re-inventing ourselves: The plasticity of embodiment, sensing, and mind. Journal of Medicine and Philosophy, forthcoming.

Clark, A., and D. J. Chalmers. 1998. The extended mind. Analysis 58: 7–19.

Clark, A., and R. Grush. 1999. Towards a cognitive robotics. Adaptive Behavior 7: 5–16.

Collins, S., A. Ruina, R. Tedrake, and M. Wisse. 2005. Efficient bipedal robots based on passive-dynamic walkers. Science 307: 1082.

Damasio, A. 1994. Descartes' Error. New York: Putnam.

Damasio, A. 1999. The Feeling of What Happens. New York: Harcourt Brace.

Dennett, D. 1974. Why the law of effect will not go away. Journal of the Theory of Social Behaviour 5: 166–87.

Dennett, D. 1978. Where am I? In D. Dennett, Brainstorms. Cambridge: Bradford Books/MIT Press.

Dennett, D. 1991. Consciousness Explained. New York: Little Brown.

Dennett, D. 1997. The path not taken. In N. Block, O. Flanagan, and G. Güzeldere, eds., The Nature of Consciousness, pp. 417–20. Cambridge: MIT Press.

Dennett, D. 2003. Freedom Evolves. London: Penguin.

Fodor, J. 1998. In Critical Condition. Cambridge: MIT Press.

Gallagher, S. 2005. How the Body Shapes the Mind. Oxford: Oxford University Press.

Haggard, P. 2005. Conscious intention and motor cognition. Trends in Cognitive Sciences 9: 290–95.

Ismael, J. 2004. Emergent order: The limits of self-organization. Unpublished manuscript.

Ismael, J. 2006. The Situated Self. New York: Oxford University Press.

Jacob, P., and M. Jeannerod. 2003. Ways of Seeing: The Scope and Limits of Visual Cognition. Oxford: Oxford University Press.

Kelly, K. 1994. Out of Control. Reading, MA: Perseus Books.

Libet, B. 1985. Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences 8: 529–66.

Libet, B. 1999. Do we have free will? Journal of Consciousness Studies 6: 47–57.
Miall, R. C., and D. M. Wolpert. 1996. Forward models for physiological motor control. Neural Networks 9: 1265–79.

Milner, A., and M. Goodale. 1995. The Visual Brain in Action. Oxford: Oxford University Press.

Nisbett, R. E., and L. Ross. 1980. Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice-Hall.

Noë, A. 2002. Is the visual world a grand illusion? In A. Noë, ed., Is the Visual World a Grand Illusion? pp. 1–12. Thorverton, UK: Imprint Academic.

Ross, D. 2004. Meta-linguistic signalling for coordination amongst social agents. Language Sciences 26: 621–42.

Rovane, C. 1998. The Bounds of Agency. Princeton: Princeton University Press.

Tedrake, R., T. Zhang, and H. S. Seung. 2005. Learning to walk in 20 minutes. Proceedings of the Fourteenth Yale Workshop on Adaptive and Learning Systems. New Haven: Yale University Press.

Thelen, E., and E. Bates. 2003. Connectionism and dynamic systems: Are they really different? Developmental Science 6: 378–91.

Velleman, J. D. 2000. The Possibility of Practical Reason. Oxford: Clarendon Press.

Wegner, D. 2002. The Illusion of Conscious Will. Cambridge: MIT Press.
8 The Sources of Behavior: Toward a Naturalistic, Control Account of Agency

Mariam Thalos
Often enough in my life I have done things I had not decided to do. Something—whatever that may be—goes into action; "it" goes to the woman I don't want to see anymore; "it" makes the remark to the boss that costs me my head; "it" keeps on smoking although I have decided to quit, and then quits smoking just when I've accepted the fact that I'm a smoker and always will be. I don't mean to say that thinking and reaching decisions have no influence on behavior. But behavior does not merely enact whatever has already been thought through and decided. It has its own sources, and is my behavior. . . .
—Bernhard Schlink, The Reader (1997, p. 20)
Introduction: Inquiring into Agency

It is commonly assumed that the best route to characterizing agency—the best route to marking the difference between actions and mere happenstances in a life—is by focusing on quite unchallengeable cases where an agent voluntarily produces an action, through first voluntarily producing an act of willing it. I will refer to these as "at-will" behaviors or performances. Focusing on at-will behaviors proves to be a precarious strategy. Doing so clearly assumes that at-will performances are paradigm, or at least sufficiently characteristic of those behaviors that deserve labeling actions (as contrasted with mere happenstances in a life). But in fact they are not. One big difference between those things we should like to categorize as events, and at-will performances, is that the latter have an arbitrary, spontaneous or capricious quality—and are not sufficiently grounded in the soil of one's deepest needs and aspirations. Whereas we should hope that the bulk of the actions in a life deserve such a characterization. And the strategy of focusing on at-will behaviors is thus open to the retort that very little in a life deserves calling an action, because very little in a life is performed "at-will."1
Still, at-will performances had better come out as actions, if anything does. And so an element of at-will thinking continues to motivate philosophical accounts of the subject: the focus is now on appropriate antecedents to action that are supposed to confer proper credentials on the behaviors that deserve the label. And so a certain consensus has also been reached—itself an astonishing thing in the discipline of philosophy—to the effect that no account of action, as action, may leave out the grounds on which the agent (as a unitary, indivisible thing) takes ownership of it. G. E. M. Anscombe (1963) spoke of a "desirability characteristic" from the point of view of the agent. In the same vein, one contemporary school of thought—found in law, philosophy, and social science2—directs attention to practices of re-description that allow us in ordinary life to grasp the point (the purpose) of an action in a wide setting of rules, practices, conventions, and expectations. In other words, it places action in a wider setting that affords evaluation as to its rationality, permissibility, or as to some other form of appropriateness. The wider setting is considered a normative rather than a purely descriptive or explanatory setting: for it must be wide enough to encompass everyday practices of evaluation pertaining to ownership, and the normative reasoning that goes with it regarding what is appropriate. Such reasoning is evaluative rather than purely descriptive in character.

A certain line of argumentation routinely surfaces along the way to these wider settings: in the face of the fact that ownership of a piece of behavior is to be adjudicated according to standards internal to a set of practices that range over a wide variety of social conventions, we must abandon hope of a science of agency, as such. Like it or not, the language or discourse of agency is, on this account, proprietarily philosophical, as contrasted with scientific. Thus on this reasoning, the nature of agency transcends the realm of the purely empirical, and so must any account that handles it correctly. Conversely, science cannot transcend the realm of mere behavior to cavort among the likes of agents and their deeds; its methodology is inapt. This transcendental turn vis-à-vis agency has been with us in one form or another since the schools of philosophy founded in the nineteenth century by Kant and his immediate successors.3 And if we accept it, we end up with an account of agency that is not portable from one area of inquiry into another—at least, not in the way scientific theories routinely are.

Donald Davidson, to his credit, sought to destabilize this argumentation. He sought to do justice to the idea of agency as ownership over behavior, while nonetheless preserving a basis for some semblance of a science of action. His efforts were directed at defending, as he put it in a landmark essay,
"the ancient—and common-sense—position that rationalization is a species of causal explanation" (2001/1980, p. 3). And he thought that doing this answered the question of "What is the relation between a reason and an action when the reason explains the action by giving the agent's reason for doing what he did?" In advancing this proposal, Davidson set a new standard of philosophical practice that has penetrated as deep as the very heart of the discipline of cognitive science. What is "mine" about a piece of behavior, among the scientifically minded nowadays, is the pro-attitude that I direct toward it, in the form of a mental state.4 And because mental states, on this account, have causal consequences, a causal account of agency is not foreclosed. Thanks to Davidson, the line between what I do and what merely happens to me or in my life is now in very wide circles being drawn according to a very specific criterion—namely according to whether I had a reason for (construed as a mental state that stands in a causal relation to) the behavior to be categorized.

But Davidson in effect restricts the science of action to cases where the agent has a reason for the performance, and owns up to this reason somewhere in the process of intellection, at least in private moments. Davidson's account is therefore criterial, proceeding—as we will see—by drawing the boundary arbitrarily and in the wrong place. I will be arguing that this move on Davidson's part is no harmless piece of accommodation (for the sake of securing ownership): it is instead a critical strategic error. It does not preserve a robust future for the nascent science of action but instead deprives the fledgling science of prospects for proper development. The science of action deserves better. I will show how we can put it back onto a more worthy developmental trajectory, one that will eventually converge—as it must—with that trajectory upon which travel theories of the evolution of culture itself.

The passage from Bernhard Schlink that I quote at the start of this chapter augurs badly for Davidsonian theory of action. The novel from which it hails falls in the category of Holocaust perpetrator literature—literature that explores perpetrator psychology and circumstances, in search of a compelling account of how certain Holocaust events might have come to pass. The narrator of the novel is not himself a perpetrator, but a young German who, through no fault of his own, inherits the legacy of the Holocaust and who subsequently seeks to understand the deeds of a perpetrator who happens to be a former lover. The narrator speaks in the passage I quote about actions that he acknowledges as his own but is at some loss for why he performed them; he is quite sure he had not decided explicitly in their favor. In the passage he asserts decisively that he is not even certain there
was something—even at the time of action—deserving of the name of a reason for the performances. Even so, he has no difficulties of any kind taking ownership of these behaviors, certainly as his own, and some as actions. The point for now is that our conception of action—certainly in highly charged cases involving the commission of atrocity, in which the question of ownership is enormously important—does not rule out as actions, behaviors in which what moves us is dark even to ourselves, and hardly deserving the name of a reason. And this violates Davidson's criteria for action. The point is that not even common practice is straightforwardly criterial vis-à-vis the category of action, and certainly not in agreement with Davidson's criterion. Ownership of a piece of behavior, as an action, does not depend on identification of grounds, much less those that we are not too ashamed to confess as our own—at least to ourselves in private rooms. This, as will become clear presently, is the beginning of wisdom for a true science—and possibly even a predictive one—of action.5

A true science of action must be open to the possibility of actions not bearing the characteristic of being transparent to their perpetrator as to why they were performed. It cannot follow Davidson's criteria, any more than a science of biology can be similarly criterial vis-à-vis its target species. For exactly the same reasons biology must be open to the possibility of, for example, non-black ravens. But then how will a science of action proceed, if not by laying down criteria of what is to fall under its purview? If not by delimiting its own scope, how does a science of action begin?

Motivationism

My central criticism of Davidsonian action theory, to which I will refer as "motivationism" because it directs attention to the "engine" of action without providing analysis of the "steering," is that the transparency it wishes to accommodate—the unbreakable link it wishes to forge between agency and reason-bearing behavior—makes it ill-suited to handling routinized behavior, the highly trained, "expert" behavior, that comes in many varieties. Walking, dancing, driving an automobile, speech production, and more parcelized stretches of behavior like personal mannerisms and driving to one's place of work are habituated behaviors, trained over time, and to the point where questions of reasons for their performance, or at least for performance of their parts or segments, become inapt. So let us begin by taking stock of what must be avoided. A fresh look at a principle of Davidson's will furnish a vantage point on this space.
Davidson's principle If an agent wants to do x more than he wants to do y, and he believes himself free to do either x or y, then he will intentionally do x if he does x or y intentionally (2001/1980, p. 23).

Davidson's principle says that strength of motivation is a motive force—a causal force—that functions to produce action. (It is Davidson's way of fleshing out the idea that mental states matter in this causal way.) Davidson's principle also says that intentions are side-taking things: they take the side of the want possessed of the greatest motivational strength. The model is quasi-Newtonian: wants (mental states with a representational character) are forces, and intentions fall in the same category of entities—they just happen to be resultant forces. We don't require a new category of mental state in order to characterize "the will" because the ingredients of will are already at hand. (This strategy is in some sense reductionistic: it seeks to show how we do not require anything more to account for the category of will and willing.) In the case of wants, by contrast with the case of Newtonian forces, rather than the resultant falling somewhere intermediately between two potential opposites, the resultant force that drives action will simply be the "winner," and winners in this realm take all; it is therefore co-extensive with the want possessed of the greatest strength: it is, as we might wish to call it, the preponderant motivation. The notion of strength functions within this account as a theoretical primitive.

But such an account faces certain theoretical puzzles, for example, how to handle cases with the structure of Buridan's mythical ass, which, for want of a larger motivational force in favor of one of two bales of hay, goes hungry. The ass, of course, has no means, in the stock of motivations, of breaking a perceived tie. And then how to handle so-called weakness of the will: how can a nonpreponderant want triumph over a want with a preponderance of motivational force behind it? One resolution of these matters, favoring the Davidsonian model, is just to declare that there really is no weakness of will: the preponderant force always wins (since there's no will, over and above individual wants, and so nothing to be weak). And likewise that there really are no asses like Buridan's: all real-life asses manage to manufacture—artificially, to be sure—a larger motivational force behind the want for one bale over the other.6

The first lesson is clear: where there are no theoretical resources to handle weakness of will, there also are no resources for breaking ties between wants of equal strength. The first reply I suggested to the Davidsonian—that what explains my so-called weak-willed eating of cake is that (at the moment when I succumb to temptation) my desire for it is more potent than my desires against it—might have merit on its own. But it does not explain why "weak-willed" performances will be experienced differently—as defective vis-à-vis willing.
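The quasi-Newtonian picture can be put in a few lines of code. The rendering below is my illustration, not Davidson's own formalism; the option names and strength values are invented. It implements the winner-take-all rule and shows where Buridan-style ties leave the model with nothing to say.

    # A toy rendering of the preponderant-motivation model (illustrative
    # only): wants are strengths, intention sides with the strongest
    # want, and a perfect tie yields no winner at all.

    def preponderant(wants):
        """wants: dict mapping option -> motivational strength.
        Return the option backed by the strongest want, or None on a tie."""
        best = max(wants.values())
        winners = [option for option, strength in wants.items() if strength == best]
        return winners[0] if len(winners) == 1 else None  # the ass goes hungry

    print(preponderant({"eat_cake": 0.9, "keep_diet": 0.7}))    # -> 'eat_cake'
    print(preponderant({"left_bale": 0.5, "right_bale": 0.5}))  # -> None

On this rendering there is nothing for weakness of will to be, since whatever wins was by definition the strongest want; and nothing in the stock of motivations can break a tie.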
The Davidsonian might at this point benefit from appeal to recent work by Wegner, who has sketched a model of willing easily adapted to the needs of a motivationist account of action. On Wegner's view, willing is an experience normally conjoint with (and more or less contemporaneous with) the behavior experienced as willed, but not a causal precursor to it. The experience—or "computation," as Wegner sometimes refers to it—of will is exactly like the behavior experienced as willed: the two items are common effects of the same physical causes (whatever they are) in the brain. But it might be that certain processes can interfere with the "computation" of will, that may or may not interfere with the normal processes that bring off behavior experienced as willed (had there been no interference with the normal computation). And the reverse is also true. This explains defective experiences: events correctly described as our behavior but experienced as unwilled, or events experienced as willed but not actually caused by processes originating in our brains. So weak-willed behavior might be explained as behavior that is ours (caused by our brain processes) but not experienced as originating from processes transpiring in our brains (as it ought to be). It is experienced as alien, but actually is not alien. The experience can therefore be said to be defective as an experience because it fails to track motivational facts (in the cake case, that I wanted cake more than anything else) in the usual or normal way.

However, even theorists prepared to side with Davidson on the point of weakness of will are likely to find the Davidsonian way of handling Buridan's ass cases much harder to accept: that way is to suppose that the difficulty is overcome by an appeal to irrelevant asymmetries (e.g., via a coin flip in favor of one bale but not the other) as capable of increasing the motivation toward one bale of hay while leaving the motivation for the other unaltered. Motivational strength—a primitive on the account—turns out to be dependent on factors that are as foreign to motivation as anything can be.7 Those who follow Davidson here may acknowledge that they have wandered rather too far afield at a heavy cost to their concern for science. The Davidsonian paradise in which motivation is sufficiently malleable8 leads to Popperian purgatory.

In the next section I will introduce a principle, even more innocent-seeming, of a broadly motivationist cast of mind, offer a compelling counterexample, and then explain how the counterexample urges an account
of action that is completely different from the familiar ones involving motivation.

Hijacked: Motivationism Undone

To what sort of story about action does Davidson's principle lead? Consider this one:

(Simple) If an agent intends to do A, and is able to do it, then providing no new considerations arise (causing the agent to reconsider the intention) and no external agency intervenes to prevent the agent going forward with the intention, the agent will perform A.

We might here say that "Simple" stands to Newton's laws of motion as Davidson's principle above stands to the law of vector addition for forces. "Simple" goes with a very basic model of behavior: I will call it the simple motivationist model (figure 8.1).

Figure 8.1 The simple motivationist model.

But Simple—and the model associated with it—is straightforwardly incorrect. Consider this case: You are throwing a dinner party and need a few more items from the local grocery. Acting on your reasons for proceeding straightaway to the store, you get behind the wheel of your automobile, pull out of your driveway, and while driving become engrossed in mapping out the remaining preparations for your dinner party. Twenty minutes later you arrive, not at the grocery as you wished, but at your office door.

Behavior is not founded upon motivation in the simple way set out in this model. There's a large philosophical literature that acknowledges that a simple motivationist account of behavior is inadequate, and reserves an important role for will to play in bringing off behavior, once motivation has done its part. Perhaps the best known of these is due to Michael Bratman,9 who offers a model of action that includes intentions and intention-like things (e.g., plans and "policies").10 These things, he says, mediate between motivation and behavior so as to make it possible to coordinate with our future selves—something that a purely motivationist account cannot
accommodate in any straightforward way. The model is now more like that shown in figure 8.2.

Figure 8.2 The fancier motivationist model.

This fancier model too is straightforwardly incorrect, and this can be demonstrated by the same scenario as unseated the simple motivationist model. You are throwing a dinner party and need a few more items from the local grocery. Reflecting on your reasons for going to the store, you form the intention so to act, planning accordingly. You get behind the wheel of your automobile, pull out of your driveway, and while driving become engrossed in mapping out the remaining preparations for your dinner party. Twenty minutes later you arrive, not at the grocery, as you intended and planned, but at your office door. Plans are susceptible to being hijacked. A clear and familiar reality can subvert both the simple and fancier models.

It can be objected at this point that the "you" of the alleged counterexample is not behaving intentionally at all, so that no actions of any kind were performed in the story, much less intentional ones. And so no model appropriate to action is appropriate here. This is an unworthy and implausible response. Whatever it was, intentional or not, driving to your office was an action of yours. If, on the way, you had become involved in a collision with another vehicle, we would be as correct in holding you accountable for this mishap, and in the same degree, as if you had been driving to the grocery instead. The fact of the matter is that you were on what in common parlance gets called "autopilot." And if you had arrived at the grocery, as intended, this complaint would have been invalid: the behavior would have qualified as action, and even as intentional action, unconditionally. And yet the proceedings from your perspective went on pretty much the same way as had you performed the action intended (except of course that you went right instead of left at the third light—a fact that had no special impact on you at the crucial time). So the criteria we have for dividing between intentional and unintentional in common parlance are purely behavioral. What we have to say, in common parlance, about such cases as involve acting in semi-automation is so meager as to be thoroughly useless for scientific purposes. Accordingly common sense (how we might reply in common parlance) should bear no weight when it comes to a scientific taxonomy of behaviors where these things are involved. Ordinary parlance only stands in the way of understanding how automation is an integral dimension of routine behavior.
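The hijacking scenario itself can be modeled in miniature. The sketch below is mine, not a formalization Thalos offers; the junctions, the routes, and the attention test are all invented. It shows how a trained habit layer can preempt a plan wherever attention is engaged elsewhere, reproducing the wrong turn at the third light.

    # A toy model of plan hijacking (invented details, for illustration).
    # A deliberate plan fixes the grocery route, but at each familiar
    # junction a trained habit proposes the well-worn turn toward the
    # office. Where attention is absent, the habit's proposal goes
    # unchecked.

    HABIT = {"light_1": "straight", "light_2": "straight", "light_3": "right"}  # office route
    PLAN  = {"light_1": "straight", "light_2": "straight", "light_3": "left"}   # grocery route

    def drive(attending_at):
        """attending_at: the set of junctions at which the driver attends."""
        turns = [PLAN[j] if j in attending_at else HABIT[j]
                 for j in ("light_1", "light_2", "light_3")]
        return "grocery" if turns[-1] == "left" else "office"

    print(drive(attending_at={"light_3"}))  # -> 'grocery' (plan checked in time)
    print(drive(attending_at=set()))        # -> 'office'  (hijacked on autopilot)

Notice that the two runs differ at only one junction, which is why, from the inside, the hijacked trip feels just like the intended one.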
It may be clear that in the example we do not have a case of intentional action. So it may also be clear that errors performed while on autopilot tend to undermine a behavior's credentials as intentional action. (Actually, it is not so clear. Consider a case where I intend to perform both A and B, but to proceed with A first. However, while on autopilot, I am hijacked and arrive at the destination for B first, whereupon I go ahead and perform B before proceeding to the destination for A. Was arrival at the B destination so clearly unintentional?) In hijacking cases all the agent's reasons are directed at performance of something entirely different, which as it happens did not get performed. But this fact has no bearing of any kind on whether the example might nonetheless contain a case of action, unqualified. Cries of "not intentional, hence not action!" suffer from a restricted diet of examples for a theory of action, and deserve the following reply: we rule this case out as a case of action (intentional or not) on pain of failing to seriously confront the question of how all the sources of behavior are so integrated as to make that range of behaviors possible—errors, flaws, and all.

The complaints thus far rehearsed suffer from an insufficiently empirical approach to the topic. A better procedure is to take seriously a broader slate of examples of behavior, to examine whether the decision to divide sharply between intentional and unintentional when it comes to credentials for action is sufficiently well motivated for purposes of a science, or whether it makes better sense to treat the range of cases as forming a continuum—or better still, a space. This constitutes a more empirically minded approach to the topic. It is progress through taxonomic refinement: more often than not, taxonomy is more than halfway toward a science of the subject. Before proceeding, it will help to examine more closely the phenomenon responsible for the failure of motivationism.

Molarization of Behavior

The phenomenon of expertise (sometimes referred to as "flow") that makes the hijacking of the examples above possible is interesting in its own right. (Some have even thought that it refutes the computational theory of mind
altogether; see Dreyfus and Dreyfus 1986.) In expert performance—of activities such as walking, riding a bicycle, playing tennis, performing on a musical instrument, driving a car, skilled typing—the body, rather than the mind, seems to be the locus of control of behavior. J. Jastrow's description (penned in 1906) of this phenomenon is (I think) best: "At the outset each step of the performance is separately and distinctly the object of attention and effort; and as practice proceeds and expertness is gained . . . the separate portions thereof become fused into larger units, which in turn make a constantly diminishing demand upon consciousness" (quoted by Wegner 2002, p. 106). Expert performance is molarization of behavior.

An adult's life is composed in large part of extended sequences of expert performances, broken now and again with novel (or novice) performances. High function and expert performance are, for all intents and purposes, synonymous.11 What sets apart the sequence of behaviors comprising my performances on the stage of life from the sequence comprising yours is not so much that the elements are different (although there are probably elements in your life that are not in mine, and vice versa), but that they are differently sequenced, coordinated, and orchestrated, both together and in relation to the behaviors of others. From this fact it is only a small philosophical move to the idea that full-fledged agency is the stuff of upper management, much more than it is the stuff of the rank and file. The work of being an agent is the work of higher-ups, conducting and directing, much more than it is the squeaks and squawks, the grinds and starts on stage. The work of agency is thus the work of controlling subordinates. This is a key idea. And in this light something else comes into sharper focus: the fact that training steps must be undertaken at critical points in this process to ensure that trained subordinates are "on line" when needed. These ideas will guide our considerations from here onward.

The Trouble with Causation

Case: Man angers dog, which bites child who dies of rabies.

In this example, what is the cause of death? The trouble is that there are multiple causal stories ripe for the telling. There is a medical story, an evolutionary story, a psychodynamical story, and of course a narrative of coincidences (rather like a historical narrative) detailing how each player came on the scene. Each story is causal, capable of filling some request or other for a causal explanation. And more than one such story involves psychological states of one sort or another. But not all of the stories represent the relevant events in question as actions. The point is that identifying a story
as causal does not make it fit for all explanatory purposes: understanding of the explanatory purpose must come first. More specifically, when we are after an empirical account that explains behavior as proceeding from an agent, we are not after any and every causal explanation in which a psychological state plays a causal role.12 We are after a specific type of explanation. Furthermore, if we adopt a regularity conception of causation (as we ought to do), conceiving of a cause as (roughly) a chance-raiser (a causes b when a raises the chances of b) as they do in the applied sciences of engineering and medicine,13 it becomes quite clear that being a cause is emphatically not a mark of being an agent: many more things can serve as causes than can qualify as agents. And this is perhaps the most profound reason for seeking a different paradigm of dependence relation between agents and their deeds.

Naturally the friends of doing-as-causing will reply that causing may not be sufficient to qualify something as an agent, but surely it is necessary. Notice first that the friends of doing-as-causing actually do hold that causing, at least of a certain kind, is sufficient (as did Davidson). Second, it cannot be, not even according to the standard doing-as-causing picture, that everything that has a causal bearing has some credentials or qualifications, however minuscule, in favor of its being an agent; rather, it is that some factors with causal bearing on an incident have no qualifications whatsoever (e.g., those unnoticed features of the landscape that cause me to trip and fall down stairs).

Causal theorists judge that events must be produced by their authors in the "normal" causal way in order to qualify as actions.14 Examples of the following sort, they think, seem to suggest this: Alfred is driving to his office. Midway a Martian technology interferes with the neurological transmissions from his brain to the muscles controlling his movements. Then another Martian technology transmits signals that mimic the transmissions of his native neurological equipment so seamlessly that Alfred can continue (and does) to believe that he is in control of his vehicle all the way to his office. Causalists will insist that the resulting events do not qualify as actions of Alfred, if they qualify as actions at all. And the reason is that they do not come on the scene via the "normal" route from a motivational state to a behavior. This is entirely true. But what does such a case display? Just that in the judgment in question, the notion of "causal route" is playing no role at all, and that all the heavy lifting is done by the judgment of "normality." This leads to the conclusion that the use of causal language in the causal account is playing a purely referential role—the role of picking out without actually describing the "normal production process," and
simply gesturing at the "cause" as the producer. But what if causes are not the only producers theories of production can countenance? Our job now is to give an account of what a producer might be in a way that displays sensitivity to the right range of cases. We have been arraigning cases (with the Martian chronicle being the most recent) in which causing is not sufficient, and this casts some doubt on its being even necessary. The natural move is to consider whether an account that never adverts to causes at all can succeed nonetheless—and this is something that causalists never even consider doing. This is where we will begin to blaze a trail.

Before we proceed, it is worthwhile reflecting briefly on the natural destinations to which a causalist model can lead, so as to appreciate what is at stake. As we begin to acknowledge the nature of our species' dependence on technological tools, how technologies of every kind (from small projectile artifacts, to electronic implants, to language itself) alter the way human brains utilize and process information, and how they change the organism's very sense of itself—its capacities, its possibilities, its very extension in space and time—we are made aware of how "plastic" or "soft" our selves are. Dennett (1991) and Clark (2003) have made this point vivid. As Clark puts it, "we just are . . . shifting coalitions of tools" (p. 137). If we conceptualize the relations among the tools that constitute us as purely causal, then we soon reach a place where we can no longer draw a boundary between the self and the rest of the world. We will instead acknowledge nothing but causal relations and these (as everyone knows) are positively everywhere without distinction. Now this, as a Zen Buddhist might insist, is a beginning of wisdom. But for those who prize the concept of authorship and responsibility, it is a dangerous road to travel. It is therefore worthwhile examining whether it is the only road available.

Control

What is more immediately relevant to explaining behavior is the notion of control, and how it is exercised by and passed among sites of agency (e.g., in those "coalitions" of which Clark speaks): who or what is in control, how is that control organized in the circumstances? This helps settle the question of who deserves to be called the author of the action, as well as how the author manages to pull it off. This is much more pertinent than (and, as I will show presently, orthogonal to) the enterprise of identifying and enumerating causal factors, most of which are irrelevant to the standard questions of agency. Now, if there had existed even the most insubstantial and sketchy account of control, one that is markedly different from
standard accounts of causation, philosophers (and especially Davidson) might have fastened onto it from the start without delay. (Both Dennett 1991 and Clark 2003 recognize the need to talk about control, but neither distinguishes between control and causal structures.) Fortunately, we can now repair this gap as follows. The account here must of necessity be preliminary.

Let us begin by calling attention to three dimensions of a certain picture, familiar from first-person phenomenological reflections, of the problem-solving process that preludes exemplars of action. These are three aspects also of a distinguished model of mind—the computational model according to which the distinctively mental is to be construed in terms of formal operations upon a system of symbols (Newell and Simon 1972). The point of this exercise is not to commend the features, nor to link them together, and especially not to insist on them as inalienable to our enterprise of handling minds within an account of agency. The point, rather, is to draw them to attention as distinct features, and thereby to display their differences, so that we can focus on the ones salient for our account of agency.

The first and most manifest feature is algorithmicality. Problem-solving among humans is evidently, at least on occasions where the label "reasoning" is explicitly applied, subject to being performed in accordance with a procedure, a recipe, or a rule. Sometimes reasoning presents itself as a step-by-step process, decomposable into logical components or parts, and so susceptible of logical and computational analysis. Following Schank (1975, 1986), I will sometimes refer to such algorithms as script. Algorithmicality is a logical feature of processes that manifest it: it is a feature of their abstract form, as contrasted with their "machine" properties. The contrasting (also logical) feature might be characterized as brute reaching for the goal in one step—"guessing," as we might call it in certain cases—of the sort that afflicts many young children. Now, when someone becomes an expert at some task, that person's performance of the task becomes automated, so that the agent no longer experiences breaking down the task into parts. This is the molarization upon which we have already remarked at some length.

The second dimension of the familiar picture is what I will refer to as the "implementational" or "hardware" dimension, following a standard practice in AI, philosophy of mind, and philosophy of computation. And this refers to the physical, material, chemical, biological, and other "machine"-specific features of the agent, as well as to pertinent laws governing their behaviors. This might also include such things as hungers, pains, processes of emotional contagion or expression-imitation, and other
physiological processes that impact the process of situated reasoning, since they have bearing upon either cognition or affect.15

The third dimension of the familiar and distinguished picture is the one of central interest here: I will call it organization of control. A deliberative process can be centralized or, by contrast, decentralized, localized, or distributed. The question whether or not some process is centralized is not so much about where processing takes place as it is about the structure of its organization—its control structure. The question of whether a process is centralized or distributed also concerns a broadly logical or computational feature of it—its organizational attribute rather than its features special to material implementation. A centrally organized deliberative or problem-solving entity is a hierarchically (or top-down) organized entity, with a single "processor" residing at the topmost level of control, whereas a local or distributed process is one in which control is distributed horizontally at the topmost level among a plurality of processors equally ranked from the point of control. Organization is a matter of the ranks created by the entity's "constitution," and how these are permitted to interact according to that constitution. Organization is a matter of protocol. We might say of distributed processing that the initiative is "grassroots." And obviously control can, at least logically speaking, be distributed at intermediate levels as well: a process can be centralized at the topmost level but distributed at lower levels, and vice versa.16 Again, the point here is just to bring the distinction between centralized and localized to attention. Notice that this characterization of control applies equally to individuals and to multi-individual deliberative and agentic bodies. (For purposes of explaining rational behavior, the typical Anglo philosopher simply assumes central processing in cases of true agency.)

A few preliminary remarks about algorithmicality and centralization are in order. First, they do not come to the same thing.17 An operation can be algorithmic without being centralized, if at the top there are equally ranked processors that run algorithms locally but are not themselves organized in a hierarchical fashion. And in principle at least, an operation can be centralized without being algorithmic, such as when there is only one top processor whose operation is not recipe-driven (think about the guessing children again). Next, algorithmicality, on its own, lends some credibility to the doctrine that reasoning is purely procedural, owing nothing to the machine characteristics of the entities carrying it out. And centralization, taken on its own? Centralization—or, more generally, organization of control—is a procedural matter too, rather than a feature of the matter or
the physics of the entities that carry it forward. Centralization too might suggest—though again it does not on its own prove—that body is one thing and function or activity another. The hierarchical qualities of the US military, for example, are not inseparable from the military "machine"—at least as a matter of logic the machine and the hierarchy are separate, since the soldiers can (and do) operate outside of the military context.

I've now laid out three aspects or dimensions of a process that very often preludes paradigm cases of action. Which, if any, of these—script, implementation or organization—deserves attention for the purposes of an account of agency? My answer is: Decisively not the implementation! It must be some combination of the other two, with the organization of control being the more important: it is the structure that implements the algorithm or script. Before deepening the analysis on this point, it is worth lingering somewhat longer here on the idea of rank. Rank suggests authority, and high rank executive authority. This is in fact what I have in mind. But my notion of executive authority is managerial—it is a management concept. This concept, as I will now argue, stands in stark contrast to the transcendental notion certain philosophers have taken vis-à-vis first-personal authority. Richard Moran (2001), for example, understands executive authority in terms of what he refers to as a "transcendental stance" toward the question of what one's attitudes ought to be—one "turns the question of what one's attitudes are to be over to reason." According to Moran, I speak my mind with special authority because I am the one who makes up my mind, and I make up my mind with an eye to what reason dictates.

This transcendental conception is inadequate for the purposes of a theory of control in our sense. First, the evidence is that people's reasoning about their attitudes and preferences is far less a matter of attention to "what my attitudes ought to be" than it is a matter of what I happen to have available to reflection at the moment.18 But, even more important, my attitudes now, even those we should like to refer to as avowals, need have no special rank, status or veto-capability in relation to other factors that might affect my behavior: there is no assurance in avowals, as such, that they characterize the workings of a ranking entity. Different agentic systems are organized differently; in some systems avowals enjoy a ranking office, whereas in others they do not. Some systems might be so organized that avowals have only marginally more than a chance relation to the actions they avow. That is what is so uncanny about the weakness of will. And no less so (albeit in a different way) in the case of hijacking. Without language that allows us to expose these facts as facts, the science of action will be lost.
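The idea that organization is a matter of protocol rather than machinery can be put schematically. The sketch below is my rendering, not a formalism from this chapter, and its units, ranks, and directives are invented; it merely anticipates the rough analysis of ranking offered below. The same processors, under two different constitutions, settle the same conflict differently, or fail to settle it at all.

    # A schematic sketch of organization as protocol (invented details).
    # Directives are (rank, unit, directive) triples. A hierarchical
    # constitution settles conflicts by rank; a flat constitution has no
    # ranking unit, so a genuine conflict goes unresolved at the top.

    def hierarchical(directives):
        """The highest-ranked unit's directive wins out."""
        rank, unit, directive = max(directives)
        return directive

    def flat(directives):
        """Equally ranked processors: a conflict has no winner."""
        proposals = {directive for rank, unit, directive in directives}
        return proposals.pop() if len(proposals) == 1 else "unresolved conflict"

    # An avowal need enjoy no ranking office relative to a trained habit.
    directives = [(2, "habit system", "keep smoking"),
                  (1, "avowal", "quit smoking")]

    print(hierarchical(directives))  # -> 'keep smoking' (rank decides)
    print(flat(directives))          # -> 'unresolved conflict'

Which function describes a given agent is a fact about its constitution, not about which signals happen to flow where; that is the contrast the next section develops.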
And no less so (albeit in a different way) in the case of hijacking. Without language that allows us to expose these facts as facts, the science of action will be lost.

Control, Not Causality

Let us give closer attention still to the contrast between control structure and causal structure by focusing again on familiar examples. Military organizations are paradigm examples of top-down control structure. Imagine that the commanding officer in a certain military system exercises a dictator's control over the members of the unit under his command. Consider an information-gathering unit in that system. When information flows up to the central processor from the subordinate, this is a causal relation originating from the subordinate processor. Even so, we do not have control being exercised by the subordinate over the executive. This is an important point, and one routinely underappreciated: control is not a local feature of an event-causal picture of a network of intersecting events. Controlling does not consist in standing in a causal relationship to some event, or even to a network of such events. Control is a matter of authority, a matter of which (or, as in the multi-organism cases, whose) directives win out when there is conflict. Control is essentially not a local matter (as all causal matters are; see Thalos 2003) of where a process originates, nor is it a matter of which features of the world ultimately explain what the products of the process ultimately look like. Control is a global affair, a matter of rank and protocol, and not well correlated with the outcome or goal of a process or procedure. So it is very difficult to diagnose when encountered on the ground—and this explains why the notion is not as prevalent as it might otherwise be. Control is best illustrated by examples where by hypothesis we know what the protocol is, or simply where we stipulate the control structure: the military is perhaps the best illustration of hierarchical control, and other examples of control are contrasts to it. The question of protocol is not concerned so much with the forces, factors, or influences that are actively exercised during the relevant real-time process as with a global feature of the organization of that process. Control over my car, for example, is what I lose when the brakes of my car fail at the top of the hill, even if I have occasion to call upon their services (and then entirely without success when I do) only at the bottom. During the period of time when I have no occasion to call on their service, matters will proceed exactly as they might have done had I indeed enjoyed control over them; still I do not enjoy that control.

For the sake of concreteness let us adopt the following rough
and provisional analysis of authority, keeping in mind that this is still only preliminary to a mature account:

Axiom A: An entity, unit, or function A ranks over a second entity, unit, or function B when the edicts or instructions or processes of A win out over those of B in cases where the edicts or instructions or processes of the two are in conflict or cannot go forward simultaneously.

This axiom is palpably rough. Nothing yet has been done to take account of three rather important qualifying ideas: (1) that ranking can be complete or partial, (2) that it may be circumscribed rather than global (applying to a finely delimited range of operations), and (3) that these two ways that a rank can be modulated or tempered might intersect or overlap in such ways as to confound diagnosis of the precise balance of partiality and circumscription, on the ground. These details will be worked out in time. For now what is on offer is simply the conception of ranking, as a piece of metaphysics for the sake of a science of agency.

Here are further considerations to help make that conception more clear. Suppose that we are talking about causal interaction among such things as psychological or cognitive processes. We will articulate the conception, just as Newton did his theory of forces, as a (contoured) story about competitions. Causes of the sort that inhabit brains are entities that enter arenas of competition, and might do so in the following way:

Axiom Causality: When process A causes neurons upwind of (for example) the hand to send an impulse with characteristics w_A to neurons that attach to the hand, and process B causes other neurons to send an impulse with characteristics w_B, the neurons of the hand will receive an impulse w_NET = F(w_A, w_B).

Here F(w_A, w_B) signifies the form of a functional combination of the two impulses that emerges when two such impulses meet on the ground, context-free. Axiom Causality thus signifies a law that could be referred to as the law of neuronal impulse combination. I have no intention of challenging the possibility or form of such a law. I am only challenging the picture on which causes always meet context-free. On the picture presented by this axiom, causes meet on the ground, on a level playing field where there are no ground rules of any kind. Causal laws are uncontextualized laws. Such is not a picture of anything under control. Processes under control look much more like those explored below.

Causes can also meet according to engineered (designed or evolved, as contrasted with nonexistent) ground rules. This happens when there are
processes of higher rank that dictate which of the lower ranking, competing processes go forward and which must hold back. So, if processes A and B of the preceding axioms are in competition, a system with higher order control structures might be governed by a rule of the following form:

Axiom Control: When process A causes neurons upwind of (for example) the hand to send an impulse with characteristics w_A to neurons that attach to the hand, and process B causes other neurons to send an impulse with characteristics w_B, control structures C will suppress one of those impulses, and the neurons of the hand will receive an impulse w = w_A ∨ w_B.

In other words, the structures C of this hypothetical example will act as an OR gateway. A gateway is a control structure. It governs the pathways by which lower level (causal) structures interact. This is the domain of engineering, not the domain of contextless neurons. Note that w and w_NET may coincide (may be equal in value for all actual input values), and the facts will still be different. In one case, the output is under control, while in the other, the outcome emerges as a result of an uncontrolled process. And this is a difference that makes a difference from the point of view of agency.

The point then is this: where A ranks over B, this is an asymmetric affair. And the difference between the asymmetry of ranking and the asymmetry of causality is that, whereas there is nothing to prevent a causal relation being reciprocal (e.g., in those cases where w_A = f(w_B) and w_B = g(w_A)), reciprocal arrangements violate the concept of ranking. And so it makes sense to conceive of control structures as structures that overlay lower level causal structures, and govern when and where lower level structures interact. They choreograph. They direct. They are the stuff of upper management.

One concept that has been common in the disciplines of psychology and neurophysiology since the 1950s is that of inhibition.19 This concept is indeed a genuine control concept, much more than it is a causal one. The concept of inhibition is (roughly) the concept of veto or brake: the controlling entity or process prevents an action, even one that is already going forward, from going to completion. This is roughly one-half the concept of ranking. The full concept also includes the controlling entity being in a position to "arouse" as well as inhibit. And perhaps to get a physiological grip on the idea of control, one needs to break it down into these two component ideas. When once a full complement of control concepts has been articulated, it will be possible to give a detailed account of homeostasis and similar systems notions in purely metaphysical terms.20
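To see the two axioms side by side, here is a minimal sketch in Python. It is entirely my own illustration, not anything from the text: the combination rule F and the gateway's suppression rule are arbitrary stand-ins, not claims about actual neuronal law. What it encodes is the point just made, that controlled and uncontrolled outputs can coincide in value while the facts about control differ.

```python
# Hypothetical illustration of Axiom Causality versus Axiom Control.

def F(w_a, w_b):
    # Axiom Causality: impulses meet context-free, on a level playing
    # field; summation stands in for the law of impulse combination.
    return w_a + w_b

def gateway(w_a, w_b):
    # Axiom Control: a higher-ranking structure C suppresses one
    # impulse, so the hand receives w_A or w_B, never a free mixture.
    return w_a if w_a is not None else w_b

w_net = F(0.3, 0.3)      # uncontrolled outcome: 0.6 merely emerges
w = gateway(0.3, 0.3)    # controlled outcome: 0.3, B's impulse suppressed

# For some inputs w and w_net may even coincide; the difference lies
# not in the output but in whether the meeting of causes was governed
# by a protocol of rank.
```

The design point of the sketch is that the gateway is an overlay: it does not change what the lower level processes do, only which of them is permitted to go forward.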
Now someone is sure to complain that this analysis of control is just a very elaborate rendition (and possibly even an unintelligible one) of standard causal reasoning. This complaint is difficult to dispatch, precisely because causal reasoning is many things to many people. One can say something to repel the criticism. But before proceeding, it is worth emphasizing that the point of repelling this criticism is not to demonstrate that the science of agency is or ought to be a sort of transcendental exercise. The point is rather to illuminate in one more way the fact that the empirical realm is not purely a realm of causes. The account of control developing here is by every standard a naturalistic account.

It is useful to look at how control and causation might diverge in a simple example. In a series of recent articles, Christopher Hitchcock (2001a, b) has made much of a distinction between (on the one side) the net effect of a certain factor or event C on a second factor or event E, and (on the other side) the component effect of C on E along a particular causal route. Hitchcock argues that this distinction is required when C affects E in more than one way, to capture all our causal judgments about a given scenario. Hitchcock proceeds to use this distinction to resolve long-standing difficulties in the analysis of causation, and in particular, some cases regarding so-called preemption. In preemption, one potential cause, C1, works along two causal routes: (1) it directly brings about an effect E, but (2) it also prevents a second potential cause of E, C2, that was poised to bring about E had C1 failed to do so. So imagine that two assassins conspire to shoot a victim. The plan is for assassin 1 to fire upon the victim (C1), but assassin 2 will act as a backup and fire (C2) in the first assassin's place should assassin 1 fail to do so. By bringing about C1, assassin 1 also contributes to prevention of E, through acting as a cause of the nonoccurrence of C2. By distinguishing between the two kinds of effect of C1, Hitchcock proposes to offer a more satisfying account of all the relations of causation involved in cases of preemption.21

Hitchcock's analysis, illuminating as it is otherwise, still leaves out a good deal that is relevant to how E (the assassination) is being brought about. In addition to the causes identified in this example, there is an organization of those causes that we can refer to as the structure of control. (This case is an instance of the organization of control in its simplest form, since there is very little in the way of a training process preceding it, and even less in the way of precedent set by it.) These things have yet to be accounted for, once we have identified the causes. Here is what is left out: according to the protocol (i.e., as told in the original "story problem" in the language of
plans and counterfactuals), C1 is a controlling factor in the situation, and whether it obtains determines which causal processes go forward. Structures of control are relations among causes pertaining to potentialities and potential events as well as actualities and actual events. In Hitchcock's preemption case, it has specifically to do with relations between C1 and C2 when the first assassin fails to fire; C1's failure brings on C2, just as C1's occurrence preempts it. This is (at least in part) what is involved in C1's being a controlling factor in this scenario. So, just as C1 acts along different causal pathways in the actual sequence of events, it also acts along different potential pathways in the entire system of factors that converge and yield a true-life assassination episode. Assassin 1's actions control assassin 2's, as the story is told. And this is just like the story about the light switch that operates the table lamp: if the switch is switched to ON, the lamp comes on; otherwise, it stays off. And just like the lamp switch, assassin 1 occupies a ranking position vis-à-vis assassin 2. This fact answers the question: Could assassin 2 go ahead and fire first? (Answer: No, by protocol, he is to wait until assassin 1 has taken action at the first action node.) This fact does not come out in Hitchcock's analysis of the story. It cannot be captured in that counterfactual account: it is not a counterfactual. Still, capturing it is essential to capturing the relations in which those events stand: illuminating this fact brings out the bearing that events—both actual and potential—have upon one another. And it sheds light on the nature of preemption, explaining why handling preemption, as such, in a purely causal story, has been difficult and contested. There is a tendency in philosophical treatments for nonoccurrences of events to be left out unless absolutely unavoidable. (Philosophical resources for handling them are scarce because philosophers typically do not access the mathematical technology of variable calculus, but rely altogether too often on the more limited technologies of the first-order logical calculus.) But this is the kind of fact that the language of control brings to the table.
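The control relation in the two-assassin story can likewise be written down. The sketch below is my own illustration, not Hitchcock's formalism nor a piece of the eventual theory, and its function names are hypothetical. What it makes explicit is the protocol: C2 goes forward only if C1 fails, a ranking fact that is part of the story whether or not assassin 1 actually fires.

```python
# Hypothetical illustration: the preemption protocol as a control
# structure. C1 ranks over C2, as a lamp switch ranks over the lamp.

def assassination(assassin1_fires: bool) -> str:
    if assassin1_fires:
        # C1 occurs, bringing about E and preempting C2.
        return "E brought about via C1; C2 suppressed"
    else:
        # C1's failure brings on C2, which brings about E instead.
        return "E brought about via C2"

print(assassination(True))
print(assassination(False))

# On either input E occurs; what the protocol fixes is *which* causal
# process goes forward. The ranking of C1 over C2 is present even in
# the actual run where assassin 1 fires and C2 never occurs.
```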
simply say that there is a question whether it can. And that at least some of the difficulties are bound up with how we will handle such relations as I am seeking to designate as control relations. The friends of counterfactuals have high aspirations: they aspire to tame these relations (the precise term is "reduce" them) to their favorite relations—counterfactual relations—so as to be able to say that causal relations too are nothing but a configuration of these favorite relations (and consequently that causation is not mysteriously noumenal). These are not universal aspirations. For one thing, so-called counterfactual relations are beasts of a kind altogether unknown to the sciences; they are philosophers' ways of handling in one fell swoop any true generality that a science might wish to deal with.22 But to cast all relations among merely potential facts as of one kind is to assume entirely too much: these relations might come in a variety of flavors and colors, and the varieties might make a substantive difference. The counterfactual approach to all high-order generalities in the natural world seeks to paper over these differences. I, by contrast, am calling attention to only one such set of differences among generalities, and signaling that they function in a way especially salient to a theory of agency. And the point is that papering over these differences will fail to account accurately for the phenomenon of agency.

More modestly, an account of control, particularly in the early stages, should not seek to reduce control relations to something else familiar from some other line of inquiry, but to leave them as it finds them, cataloging and drawing them together, illuminating their distinctions. It should leave reductions to those whose aspirations rest more with metaphysics than with explanation of empirical phenomena. We are still at the very earliest stages of articulating an account of control that can serve the needs of a theory of action. Reduction is not yet in order. It can be postponed, if not permanently then at least until we have identified fundamental principles adequate to the phenomenon. And so at this stage of our proceedings, modesty dictates that when asked whether Axiom A is realized at some "lower" level by concrete causal facts, we must answer that our account does not depend on it; we must insist that focus on realization theses of this sort only obscures (puts out of focus) the features of the phenomenon of agency to which the analysis must attend.

Interlude: Aristotelian Reflections

Aristotle's point of departure on the topic of the Will is an anti-Platonic thought to the effect that reason by itself cannot move anything.23 The
question that he subsequently allows to guide his proceedings is: What is it in the soul that originates movement? He acknowledges that reason can issue commands but that oftentimes its commands are not obeyed. The incontinent follows desire wherever it leads, commands of reason notwithstanding. On the other hand, the recommendations of reason allow an agent to resist desire. Hence the urgings of desire are not obligatory in and of themselves. Aristotle draws the conclusion that desires do not originate movement either, and thereby discovers the will, the seat of agency. He is subsequently careful to distinguish between will and inclination, and the distinction is taken up in transcendental terms and much elaborated by Kant and those who nowadays tread in his footsteps.24 But for Aristotle, the distinction does not signal a break with the empirical world and mark a gateway into the transcendental. The will for Aristotle is as much an empirical object as are the desires against which it sometimes strives. Aristotle is (as usual) concerned with the empirical facts of agency.

I venture the thesis that Aristotle's discovery is better understood as a discovery of the need to take account of the structure of control, within a purely naturalistic account of agency. And this discovery requires departures from the metaphysics of beliefs and desires as conceptual technology for explaining the operations of will—and mind. There are signals in the Aristotelian corpus that Aristotle was not satisfied with what he ended up with in the way of an account of agency: an opposition between reason and desire. I propose that what would have satisfied him is an account in the spirit here suggested, one that distinguishes between sources of movement and processes that control them.

Molarization, Modularity, and Consciousness: The State of Play in Psychology

A preponderance of work on the problem of explaining behavior has focused on consciousness and its role in laying groundwork or catering for behavior. But while this is an important question, it is not the central issue of how to model the control of behavior, and now we can explain why. An important feature of a great majority of human behaviors is their degree of automation or expertise. And the striking feature of expert behavior is the degree to which consciousness is not required for its performance—at least, for performance of the details. Expert drivers are not conscious of the details of driving any more than expert walkers or swimmers are conscious of the details of walking or swimming. Consciousness of the details is clearly not key to carriage of expert performance. What consciousness does for expert performance is unclear, but one thing is certain: it is not
what controls, or even what launches or initiates it (as the hijacking cases show). To explain the control of behavior—executive function—we will be required to illuminate features that underpin both motivation and consciousness, and tie them together as threads in a whole cloth.

The psychologist Bernard Baars (1988) has articulated what he sees as a "gathering consensus" on the subject of cognition: the whole affair is orchestrated by a "distributed society" of specialists operating upon a shared space consisting of working memory—what he refers to as a "global workspace." This consensus has been forged by the work of many neuroscientists and psychologists, often studying subjects with brain damage or cognitive deficit, or who have had surgical procedures such as a cutting of the corpus callosum. This body of research suggests that the human mind comprises a vast number of more elementary units, carrying out sophisticated cognitive tasks for which they are specialized, that these activities go on outside the awareness and/or the access of the unit in charge of verbalization and explicit memory, and that their management is not under the control of the verbalizing unit. This is the doctrine of "massive modularity": as Michael Gazzaniga puts it, "the human brain is more of a sociological entity than a psychological entity" (1985, p. 28). This sort of account (which has a more modest precursor in Fodor 1983) has been taken up and propounded in a radical form by Dennett (especially 1991), and by a variety of camps of researchers studying the "human nature" question. (There are, to be sure, differences of emphasis, of methodology, and of reliance on bodies of empirical findings in comparative phylogeny, primatology, developmental psychology, and human and animal cognition, that go along with the choice of labels among "evolutionary psychology," "behavioral ecology," and "evolutionary anthropology." And exchanges among those who adhere to different labels—or who have the labels adhered to them—have been positively acrimonious; see Downes 2001.25) Nonetheless, they have in common an adherence to some form of the doctrine of dedicated, special-purpose cognitive mechanisms.

But once it is agreed that special-purpose mechanisms are the order of the day, this can only be the beginning of the story on the question of cognition, its control over behavior, and the role of consciousness in all of that. Once we recognize the fact of modularity, modest or massive, the real issue becomes: How are special-purpose mechanisms so organized as to produce what appears to be seamless and orderly, and even linear, organization of behavior? And how are departures and systematic deviations from seamless and linear orderliness to be taxonomized and explained? The problem, of course, is that "modules," however realized in the brain,
must draw on the same pool of resources available to the organism. Unless the demands of the modules are adjudicated in some fashion, chaos would result in overwhelming abundance.26 However, chaos in overwhelming abundance is not observed, and this simple fact is what requires explaining. Of course, this point has not gone unnoticed, particularly in the computer science community. Teams of engineers and artificial intelligence researchers have devised architectures (in some ways reminiscent of the old "drive" reduction architectures that still figure in many models of animal behavior) that put into action principled systems of conflict resolution. (Two examples are Anderson's 1983 ACT* and Rosenbloom, Laird, and Newell's 1987 SOAR. A psychological model, utilizing feedback mechanisms but with less attention to control structures, is Carver and Scheier 1998, which builds on Miller, Galanter, and Pribram 1960.) And Gazzaniga (1985), though he does not offer an architecture, nonetheless recognizes the need for one when, drawing on Festinger (1962), he describes a process of resolving "cognitive dissonance" in which the language "module" (the "interpreter," as Gazzaniga calls it) systematically gives up ground by "revising" its narrative about the self.27 Nowhere before have we had a philosophical treatment of control structures or architecture, and nowhere (even now) is there a rigorous and complete taxonomy of control structures.

Now, modularity and molarity are related, most obviously through the idea that a module is a good candidate for automation, and that automated and un-automated tasks must be approached differently by a supervisory entity (something of higher rank in the control architecture). But this idea needs working through. To my knowledge, there is nowhere—not in computer science and certainly not in philosophy—a treatment of the automated system as distinct in type or category from un- or semi-automated ones. Similarly nowhere is there a treatment of the relationship of automation to modular architecture. As I see it, there is important foundational work here as well for philosophy to do.

Making Things Happen

An agent makes things happen. To say "makes a thing happen" is to say something ambiguous as between "triggers a causal chain culminating in that event's occurrence" and "brings the thing about in a controlled fashion." To do the latter, on the present account, is to perform a management or executive function. Therefore the first item of business in the present account is to lay out clearly the difference between management (or control) and simple causation. This first step is now behind us. The second item of
business is to formulate the thesis of agency as control, while at the same time working out the difference between automated and un-automated processes. To the extent that we succeed in this task, we will have addressed the worry that an agent consists of "tools all the way down," for we will have articulated a distinction between the tools and the (organized) entity under whose control the tools are set in motion. And the final item of business will be to show how these ideas function to model and explain a good many things, vis-à-vis animal and machine behavior, that we do not now have adequate means of explaining.

Before proceeding with what remains to be done, here are a few preliminary remarks regarding studies of executive function in childhood. First, a good deal is now known (thanks to Piaget 1952, Vygotsky 1934, and numerous contemporary developmental psychologists who have made developmental issues focal) about executive function in children. The major findings are these: executive function develops incrementally, stage-like, and in an order that dovetails with development in other cognitive functions, especially in perception, category acquisition, organization of categories, and application of rules (e.g., the rules of logic).28 Second, what a human infant will notice before it notices much else, and before completing the first year of life, is its own capacity to bring things about—and not just to cause things but to bring them about in a controlled fashion. In this way the self is perceived as (among other things) a controlling entity on the ecological landscape. For example, an infant of 3 months who is so situated as to control, either via sucking action or leg movements, an environmental variable (e.g., the onset of a Sesame Street program, or simply the focus function on a device that projects an image onto a screen) will display a great deal of attention to and enjoyment of the visual display, whereas infants receiving the same input, but in a way that is not contingent on their own behaviors, will tire far more quickly of the display.29 Infants in the first condition are noticing something—something to do with their own impact on their environment—that keeps them deeply engaged. Third, the concept of management or control that is at the heart of executive function is what will allow us to handle a range of rather paradoxical applications of will—what Wegner refers to as "ironic processes"—in which the implementation of an intention or plan brings about the opposite of what is aimed at. There is a good deal of research, for example, on the movement of the handheld pendulum (called the Chevreul pendulum after the French chemist who debunked occult interpretations of its movement). One is instructed to look at the pendulum, to imagine a course of
movement, but to try not to move it. Carpenter (1888) summed up the results of findings in his era as follows: "a singularly complete and satisfactory example of the general principle, that, in certain individuals, and in a certain state of mental concentration, the expectation of a result is sufficient to determine—without any voluntary effort, and even in opposition to the Will . . .—the Muscular movements by which it is produced" (quoted in Wegner 2002, p. 115). Daniel Wegner and his collaborators (Wegner, Ansfield, and Pilloff 1998) studied this phenomenon extensively. Participants were specifically admonished not to move the pendulum in a particular direction, and the pendulum's movement was observed by video camera. The repeated finding was that the suggestion not to move in a certain direction yielded significant movement in exactly the forbidden direction, and that this tendency was amplified by mental load (e.g., having to count backward from 1,000 by 3s) or equally by a physical load (holding a brick at arm's length with the other arm). The phenomenon is more robust than one might suppose: it applies even more dramatically to control of the mental. I am instructed to suppress thoughts about elephants, with the result that there is nothing I think about more often than those blasted elephants—contrary to (not merely orthogonal to) all my desires, aims, and intentions. Am I causing myself to think about elephants? Well, of course. But to stop here by way of explanation is perverse: it is not just that my attempt to perform A causes me to perform not-A, for this occurs only in these special cases. We require an understanding of how my effort to suppress elephant thoughts ironically produces the very result it sought to prevent. Only an account of control, as I will soon explain, has elements enough in it for the job.

Sources of Behavior: Anticipations of the Model to Come

Any model of a functional (i.e., nonchaotic) cognitive system that utilizes semi-autonomous modules requires a supervisory system. This can be, though it need not be, separate from those parts of the behaving entity that provide it with motivation. Any model of an agent must therefore say that motivation is only one part or aspect of the action-making works, and that there is a good deal more to explaining behavior than directing attention to the motivational powers of various features of a target state of the model. We may henceforward refer to the functions of an executive "branch" as "agentic functions," while reserving the name "executive" for a central, ranking function that will constitute a component of the larger whole of agentic function. This is the first move. It decidedly should not be regarded
as a move to the effect that we regard the agent as a totalitarian entity, with a despot residing at the top of a control hierarchy and exercising autocratic control over the rest. We cannot foreclose the possibility that control comes in degrees, and is subject to circumscribed ranges of applications. The point is only to appoint a place in our theory for a function (one that corresponds to what we familiarly recognize as willing or intending) distinct from motivation, and thereby to make a space for inquiry as to how this function interacts with other functions in the agentic system.

Of course, no one has ever proposed that nothing but motivation matters, not even friends of the simple or fancier models discussed above. Friends of those models do not deny that a variety of "normal" conditions have to be invoked before calling attention to motivations and their relative strengths could furnish an explanation of behavior. This is how exponents of the motivationist model manage to dismiss certain cases as falling outside the scope of what they propose to account for. But both models do much too little in the way of taking account of these other factors, obscuring the importance that must be placed on these other things for understanding a wide range of behaviors. Where these models have nothing illuminating to offer—as in the cases of weakness of will, say, or expert performances—they place the behavior in question outside the bounds of what a theory of intentional behavior needs to account for. This is a commonsensical way to proceed, perhaps—but (as already remarked) it is a way to proceed that is unworthy of the name of science.

Principles of Modeling Agency

Once identification of executive function as distinct from motivation is made, within a broader range of sources of behavior, we find ourselves modeling an acting entity that has more than one center of agentic gravity. And this then becomes the distinctive guiding principle: that any specific model should depict, not an atom, but a kind of molecule—a distributed entity with more than one locus of agentic functioning, and thus an entity that may operate in a more or less unified/integrated fashion. And thus also an entity that may suffer deformities to various of its agentic parts separately, deformities that may or may not affect their organization but will affect that entity's agentic performances in characteristic ways. This will be the key to accommodating such phenomena as weakness of will and hijacking, among other things. The principles that ought henceforward to guide further development and refinement of a model are four in number:
1. Whatever elements we acknowledge as making a contribution to agentic function broadly construed—whatever elements or functions (perception, logical/mathematical processing, proprioception, hunger, etc.) we find grounds for including in the "boxology" that represents agentic functioning—we must assume they are semi-autonomous and fully interacting with every other element in the model, unless empirical findings indicate otherwise.

2. Having granted semi-autonomy and full (multidirectional) interactivity to the elements, we must anticipate the possibility of interference among their operations.

3. The potential for interference among subsidiary functions is a source of instability in any system. A function, operation, or systemic organization that resolves at least some conflicts will probably be required if the system as a whole is not to self-destruct (and therefore to have disappeared in the course of evolution). A central executive is a function that exercises a certain authority over the elements it governs. We can therefore begin casting about for such a function, and the various specifics of its operation (the precise form of the ranking relations in which it stands to other functions). To achieve a model of this will require a good deal of further empirical inquiry.

4. We will take a developmental perspective, as well as begin to articulate the developmental target of agentic processes. A model that follows these principles as guidelines will begin by marking a distinction between the learned and the not-yet-learned tasks, and then sketch a general picture of how a task moves from being not-yet-learned to being properly learned. Thus a model that follows these guidelines will, among many other things, have to articulate the difference between automated and un-automated. This model will serve the purpose of giving an account of expertise.

Addressing the Hard Question

The project we're undertaking might be considered too difficult: it consists in giving an account of the workings of "central systems."30 What evidence could we possibly amass that would give us some direction in classifying systems as either ranking or ranked over? I would contend that indeed we do have such evidence, and that it is precisely the sort of thing that has to this point eluded explanation. And I will now itemize some of the phenomena that will help us make progress in the task. We should proceed, however,
with caution. We should not expect all agents to have the same structures of control. Since these structures require training, we should expect that differences in training will result in systematic differences in the structures we find on the ground.

Explanations

A/not-B

Piaget discovered a very interesting phenomenon among infants, which continues to be the subject of much theorizing: it is a phenomenon on which new developmental proposals regarding intentional action are frequently tested. At about 6 months of age, an infant will reach for an occluded object, not where it was last visible, but rather where the infant was last successful in finding it. (The experimenter "hides" a toy in spot A in a sandbox, and the infant is successful in retrieving it. But when the experimenter subsequently hides the toy in spot B, the infant reaches for it at A, despite just having seen the experimenter place it at B.) The phenomenon is sometimes referred to as "perseverative," suggesting that the young subjects just can't help repeating a kind of behavioral program, after the appropriate time has passed. What is going on? There is probably more going on here than just perseveration. But persevere infants do—among other things. The model I am sketching has features that can handle perseveration: if the executive module does not disengage or inhibit a motor "program," the motor program repeats until otherwise interfered with. In this case the motor program is more directly controlled by perception than by judgment (as most programs are early in life). And disengagement/inhibition of the variety of programs under its control can itself be something that an executive must learn to perform at the right time.

Zelazo's Incompatible Rules

There is a perseverative phenomenon closely related to A/not-B to which preschoolers of a certain age are subject, even when they have successfully graduated from the A/not-B task. Preschoolers who fail this task cannot switch cleanly between successive card-sorting tasks. The preschoolers are trained on a card-sorting task along each of two dimensions: for example, they learn to sort images according to color and also according to shape. At all times each of the two dimensions of the images is presented to the subjects' perceptual field (they can see both the color and the shape at the same time), and they are proficient in both sorting tasks when performing
them separately. The experiment then begins. Subjects are first asked to sort a stack of images according to one dimension, then to switch mid-task. There is an interval of time during which subjects continue to sort by the first dimension, all the while verbally able to tell the experimenter that they are supposed to be sorting according to the second dimension, and also correctly informing the experimenter where they should be placing the current card. All the while their behavior is in keeping with the first instructions rather than with the second.

What's going on here? Zelazo and his colleagues (Zelazo 1996; Zelazo and Muller 2002; Zelazo et al. 1997) postulate that the sort/switch task is more complex than the simple A/not-B; it requires representational complexity that mirrors the complexity of the task itself. It is therefore no wonder that preschoolers fail at this more complex task even when they have achieved mastery of Piaget's original. I think there's something to this: the second task is in one sense more complex. It demands of subjects at least this much more: that they engage a "rule" (as Zelazo et al. refer to it) that was not in play before, while disengaging one that is currently guiding performance. In A/not-B, no "rule" or concept is intermediating in the monitoring of performance: it is (as I suggested) directly answering to perceptual cues. So if, to master A/not-B, the executive need only master a process of controlling behavior through simple perceptual cues, and must learn to switch control between simple perceptual cues, then mastery of the A/not-B task is not enough to guarantee mastery of switching between sorting tasks that involve placing objects under categories.

Expert Performance

Here automation is the name of the game. How are semi-autonomous motor (and other) programs created out of untrained neurons? There is a neurological dimension to this question, to which I can pretend no answers. The question I want to address, if only in the sketchiest way, is the question of the executive role in the creation of autonomous programs. The answer must be: experts are trained into existence. And this training involves the executive in a possibly painful exercise of resource-intensive monitoring over a considerable period of time (in which the degree of "pain" is dependent on the current condition of the executive as an executive, and what resources might be available to support the performance being trained). The training is normally triggered by or linked to a stimulus or a training exercise of some sort, and it involves dedication of physiological resources (that may or may not be costly, depending on what might be
available at the time of training). Once trained, programs are available on demand, and are controlled by engaging and disengaging (inhibiting and disinhibiting, in the older language). "Micromanagement"—the process of interfering, "internally," as it were, with a semi-autonomous unit—is out of line with a semi-autonomous process, unless the output is somehow suspect. This explanation, obviously, requires making the distinction between automated process and un-automated process, which as we noted before is required of any model that follows these four prescriptions.

Hijacking

Once semi-autonomous programs are available to the executive, hijacking becomes a possibility. The name of the game becomes stress and load. Since the executive has more to manage, monitoring resources are spread more thinly, and the semi-autonomous programs are more subject to going astray.

Weakness of Will

This phenomenon has been the philosophers' touchstone on matters to do with intentional behavior. How can it be that my motivation is a certain way, but my behavior does not follow suit? Folk psychology has it that in such cases the will supports the inferior desire, against the one supported by the better judgment. Puzzlingly, Davidson reckons that it is judgment that is in some sense causally faulty. There are many ways that a model following my prescriptions can go with handling weakness of will, by having any number of things interfere with carriage of an executive order—for example, perception, and all the extant automated motor programs that multiple perceptual cues may trigger. I think this is a good thing. It is possible that not all forms of weakness of will are weaknesses in the same thing. For example, weakness of will can be a form of perseveration: an inability to disengage a certain type of behavior to which (for better or ill) one has grown habituated. This would explain why bad habits (and good ones too!) are hard to break: to break a habit, one has to interfere with programs that operate semi-autonomously. If a bad habit is to cease, the agent must put into practice vigilant and costly monitoring, for which there conspicuously might not exist sufficient resources.

Ironic Processes

You are admonished not to move the Chevreul pendulum in a particular direction. So why does the pendulum move precisely in that direction? Or
you are determined to win a wager that you cannot carry a full cup of coffee across the room without spilling it. Why do you lose the wager? Because you're micromanaging the process, and this perhaps because the level of motivation is higher than normal. If you had simply tried at the normal level of trying, or even not tried (in the active sense of trying) at all, you would have succeeded. Motivation triggers higher levels of executive monitoring. In the case of automatic processes, like keeping cups from spilling, this has a poor outcome: expert performances are after all best left to experts. (This is why you train them in the first instance.)

Now, you are trying not to think about pink elephants for the next 5 minutes, since you will receive a prize if you can pull this off. But instead you think of the dratted things 30 times in that time span. Why? Because you are proceeding the wrong way with it: you are micromanaging. If your executive system is not accustomed to (trained on) this sort of negative task, its first efforts probably involve the usual intermittent "checking." But had you proceeded immediately by taking out an engrossing book, initiating a scintillating conversation about gun control, or turning to a difficult chess puzzle, you might have succeeded. After all, if the prize hung simply on your remaining within the bounds of your office for the next 5 minutes, you'd simply have found something interesting to do in your office for that span of time. But the admonition not to think about pink elephants (whether you have issued it to yourself or accepted it from another source) has triggered a monitoring process, in your case a process that intermittently looks for thoughts about pink elephants. And this "checking" is itself a form of "thinking" pink elephants.

Neurotic Behaviors, or Behaviors due to Phobic Fears

This model can do a great deal more in the way of explaining the neurotic—which, of course, the motivationist model (to its discredit) puts outside the bounds of a theory of agency. Suppose I wash my hands obsessively, or I simply cannot board an airplane, notwithstanding my knowledge of the relative safety of air travel. I think that these phenomena involve a failure of executive function. It will take some further development of the model to show just how.

Bratman's Plans

A kind reader might at this stage observe that my principles constitute a careful collection of friendly emendations, developments, and additions to Bratman's original modification of the simplest model. My proposals make
transparent what is only a verbal promissory note on Bratman's part, to the effect that we are after an empirical (to be read here as the polar opposite of "transcendental") characterization of will. And there is something to this. But this kind reader must be careful: my ambitions are less lofty than Bratman's. Bratman begins where my account leaves off: intentions are for him primitive items, as different from wants as anything cognitive can be. (Similarly, on my account, control structures are as different from the structures that they control as anything structuring can differ from what it structures.) Bratman proceeds to identify multi-intentional objects, intention "molecules" that he refers to as plans. He then seeks to show that for rational, coordinative creatures like ourselves the features of practical judgment that achieve control over behavior are things he calls "policies," that these have the function of producing value, and (furthermore) that these productions of value proceed via productions of obligation (normative structures) and that on at least some occasions they produce motivation as well (Bratman 1999). I have sought to defend no contentions of this sort. Thus my ambitions are lesser: I have sought to provide a metaphysics that might support a position like Bratman's, but not to necessitate it. And if, furthermore, the "policies," values, and obligations that Bratman favors are to be fleshed out in terms of agent avowals of some sort, then we may have reason (having to do with weakness of will) to decline the Bratmanian model. Furthermore, the question of whether avowals and motivations are in some alignment becomes, on my view, a thoroughly empirical matter, subject to empirical study once we have learned to discern, empirically, the processes I have illuminated as distinct. How judgment and motivation interact, to produce value or anything else for that matter, is something that comes open to empirical study, once we learn to discern the variety of distinct agentive processes on the ground.31

Hooking up with Social Science

I have noted how the account sketched here resolves certain standard philosophical problems, like the well-worn problem of how to account for weakness of will.32 But for the social sciences, the significance of the account begun here is even more profound. The model sketched here is pregnant with new possibilities for a science of social phenomena—and in particular, for theories that seek to handle issues of how social decisions are taken and how social entities take action within a social setting. The principles of modeling I am advocating offer the prospect of new means of handling such phenomena. Here is how.
The simple model, even with modest modifications, tells us that agents are essentially extensionless atoms. It tells us that agency is always an individual affair, for atoms are always centrally organized. And you need only a theory of atoms and how they interact at their "peripheries" in order to generate predictions about how a population of them will behave. But the problem is that such predictions are often wrong. For example, the atomic model utilized by rational decision theorists is known to be entirely too simple: not only is it insusceptible to cooperation when there are costs involved, it is not susceptible even to coordination when the costs of failure are themselves very high.33 The model has to be tempered with (as they say) "irrationalities" in order to yield confirmable predictions. My contention is that the failures of this model lie precisely in the fact that the agent of which it speaks enjoys no functional "extension"—the agent exists as an indivisible unity. And as all must acknowledge, this is an idealization. One, I say, that is responsible for the problem, and that can be remedied through adoption of such modeling principles as I am proposing. (The adoption of a model in which the work of agency is distributed also leads away from the false dilemma social psychologists paint between explaining behavior as caused by environment and explaining behavior as caused by the agent, a dispute that has generated more heat than light, as evidenced by Doris 2002.)

But when once agents are modeled in the distributed fashion I have been urging, with perhaps some increment of central processing, very interesting possibilities come open. For if, furthermore, there can be some set of processes that resemble interpenetration of the units, so that an overlap of basic agentic entities is a substantively different agentic entity from two such entities interacting merely at the "peripheries," then genuine molecularization of agents becomes possible. (This can happen if it is genuinely possible for individuals to "pool" agentive resources, such as their resources for judgment, memory, and even motivation.) And no longer need the legal language of (for example) a corporation qualifying as an agent be regarded as metaphorical. If molecularization is a genuine possibility, then one requires substantively more in the way of theory in order to generate predictions of the behaviors of populations—predictions of the sort that are currently formulated by theorists of cultural evolution.34 The model might thus play a role in furthering the cause of theories of cultural evolution. For such theories, one requires a theory of how "molecules" of agents behave as well as how they form. (The formation of such entities is also the stuff of biology, as Sober and Wilson 1994, 1998, have shown.)
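As a toy rendering of molecularization, consider the following sketch. It is my own illustration under strong simplifying assumptions (memory modeled as a set of items, judgment as membership in that set); none of the names come from the text. What it exhibits is why a molecule formed by pooling resources is a substantively different agentic entity from two atoms interacting at their peripheries.

```python
# Hypothetical illustration: agents as "atoms" versus a pooled "molecule."

class Agent:
    def __init__(self, name, memory):
        self.name = name
        self.memory = set(memory)   # a private resource for judgment

    def judge(self, claim):
        # An atom judges from its own resources alone.
        return claim in self.memory

class Molecule:
    """An overlap of agents: judgment issues from pooled resources."""
    def __init__(self, *members):
        self.memory = set().union(*(m.memory for m in members))

    def judge(self, claim):
        return claim in self.memory

ann = Agent("Ann", {"p"})
bob = Agent("Bob", {"q"})
team = Molecule(ann, bob)

# Interaction at the peripheries: neither atom can affirm both claims.
assert ann.judge("p") and not ann.judge("q")
assert bob.judge("q") and not bob.judge("p")

# The molecule can: its judgments are predictable only from the pooled
# resource, not from a theory of the atoms taken severally.
assert team.judge("p") and team.judge("q")
```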
So this distributed model of agency is not intended simply for philosophical consumption, to resolve cases puzzling to philosophers. It can pull its weight in the larger academic world as well.

Reflections on Wittgenstein by Way of Concluding

Wittgenstein, in his later years, struggled visibly with the contrasts between (what might best be referred to today as) formal semantics, on the one hand, and messy reality, on the other. And he gave this struggle very wide scope—viewing it as extending from language to every social practice thought to be rule governed.35 In these struggles Wittgenstein sought to reconcile the demands of a normative theory of language (what he referred to as the "rules of the game"), in the way of an abstract and universal grammar consisting of constraints on sentence and phrase formation, with the realities of learning language (or any other rule system) in a world of inconsistent language speakers (game players) who take shortcuts wherever feasible, whose capabilities are uneven (from moment to moment and from player to player), and whose attentions to language itself are uneven and routinely mixed with other concerns. How is it even conceivable, given the realities of communication and the nature of language transmission by means of flawed episodes of learning, that there be one language (one grammar, one rule system) grasped, mobilized, and propagated, universally and monolithically, throughout a given community? And if such a thing makes any kind of sense, where can its authorities—the slate of normative, regulative principles governing it—reside?

The later Wittgenstein was characteristically critical of the idea that the authorities on language, or any other rule system, reside in the minds of its users or players. The slogan that "meaning is use" was a signpost to divert traffic away from the perilous idea that meaning, or any other abstract authority on a given practice, resides in the heads of users. To my mind, the invaluable contributions that Wittgenstein made to our thinking on this topic are not so much the answers he seemed to favor (and therefore the no-abstractions, no-meanings, no-mental-life doctrines that flourished in the soil he turned), but instead the category of question he pressed in so many different ways and the scope that he brilliantly insisted upon for these questions. In his emphasis on developmental questions, he illuminated the fundamental theoretical need for an account of communication that helps make contact between the messy world of true-life social practices and the abstract objects in which they purport to trade. This class
of important questions elicits analyses of the fundamental ontological question of where "social practices" end and individual cognition begins. And they elicit also attention to the focal question, ontological in character, of what cognition (in its many manifestations—thinking, reasoning, willing) amounts to, in light of these social and developmental realities. Furthermore Wittgenstein's concerns help us confront the fundamental issues of how to classify and study the class of social interactions in which cognitive processes are distributed among the agents carrying them out.36

There's a lesson here that I want to draw out by way of concluding remarks. Wittgenstein, in pursuit of a genealogy of rule-governed practices purporting to govern the behaviors of mental "atoms," found below the surface only a morass of messy, massively distributed processes that seem nonetheless to maintain these practices. The apparent flimsiness of it all astonishes. This is in point of fact the universal experience of the scientist, as contrasted with that of the analytical or transcendental philosopher. The transcendental philosopher in Wittgenstein poses the question, but it is, as I see it, his empiricism that makes the questions so poignant: How is the appearance of uniformity and unitarity possible, in light of what lies just beneath the surface? This is the Wittgensteinian dilemma.37

Our problem here has been a more limited (but only slightly more manageable) version of Wittgenstein's: to explain how unitarity of action is achieved (at least sometimes) in a world of messy, massively parallel processes. And I think our way of proceeding can be a model for how to proceed in other arenas. Whereas those who name Wittgenstein as their hero make a transcendental move in the Wittgensteinian dilemma (much in the spirit of Kant, identifying chasms and dividing the world into realms), we started on the empiricist side with a messy world, and stayed there. What we found is that when we allowed agency at the organismic or individual level to remain a distributed affair, and viewed unitarity as a form of organization (loose or tight, as the case might be) of massively distributed, massively interlinked, messy processes, we were able to see how this could generalize to larger units of agency, through further and more nested (and probably even more messy) processes of molecularization. To attain this picture of things—to keep from fragmenting the world into further and further transcendental realms—we needed to articulate a bit of metaphysics. (In this case we had to discern the structure of control as something that is distinct from and possibly irreducible to causal structures.)
Let me put it another way: the transcendental philosopher, proceeding from common sense, begins with a nonnegotiable conception of the human being as an intellectual center, installed in a biological vessel like a ‘‘ghost in the machine’’—according to Gilbert Ryle’s haunting image. The conception is nowadays fleshed out in terms of interior organizing principles that are ‘‘inalienable’’ from the self (or, perhaps in the language of realpolitik, principles that resist deniability). The self is made a matter of identity, avowal, or ownership over behavior. The empirical turn, which I have been sponsoring, views the self not as a matter of identity in a transcendental sphere, but rather as a matter of constitution in the (one and only) empirical order—and conceives of constitution in the political sense: a constitution is a protocol embodying or realizing the principles that govern our unavoidably parallel proceedings.38 It is a matter of government and structure, not a matter of identity. This shift in focus allows us, finally, to see the commonalities and analogies between the individual and larger entities to which the individual belongs. Rather than viewing the state (the premier social institution) as an individual writ large (as Plato sought to do by way of laying foundations for a theory of the state), we come to view the individual as a state writ small. I think this is an eye-opening spectacle.

Acknowledgments

With pleasure I thank Chrisoula Andreou, Andy Clark, Daniel Dennett, Steve Downes, Fred Dretske, Harold Kincaid, Elijah Millgram, Alex Rosenberg, Barry Smith, and Leo Zaibert, for thoughtful readings and reactions, and audiences in the philosophy departments of the University of Alabama/Birmingham, Duke University, and Texas A&M, for tolerant attention and the occasional patient explanation of how I am straying from the one true path. But one must sometimes stray, if only in the service of the discipline, lest the academic community become impaled upon inadequate treatments through myopic fixation.

Notes

1. Experiments conducted by Benjamin Libet and his colleagues (1993, 1983) have been drawn upon for just this conclusion.

2. The influence of H. L. A. Hart’s ascriptivism (in Hart and Honore 1962) has pushed legal scholarship in this direction.
3. Ginet (1990) and Wilson (1989) are recent articulations of these themes.

4. Notice that this form of ownership is description-relative, and so for this reason it is also inadequate for the purposes of a science of action. But I will not pursue this point here.

5. Interestingly Davidson would probably agree. He writes that ‘‘the practical syllogism [his own account of the reason for which] exhausts its role in displaying an action as falling under one reason; so it cannot be subtilized into a reconstruction of practical reasoning, which involves the weighing of competing reasons. The practical syllogism provides a model neither for a predictive science of action nor for a normative account of evaluative reasoning.’’ (2001, p. 16)

6. This is not the way Davidson himself proceeds on weakness of will. Davidson instead suggests an alternative conception of weakness of will, in which there is no weakness as such. According to Davidson’s considered view on the matter, weakness of will is a matter of poor judgment: an agent is incontinent in those cases where the agent performs A for certain reasons, but nonetheless has a broader base of reasons (including those on which performance of A was premised) on which that self-same agent judges some other action to be superior to A. (Of course, this is also a paradoxical position, as Davidson accepts the principle according to which when an agent judges A to be superior to B, then he also wants to perform A more than he wants to perform B. So the incontinent agent has wants on both sides, and wants to perform an action B more than the action A actually taken. But Davidson maintains his original principle in the face of this characterization of incontinence.) Michael Bratman directs attention to unclarities in Davidson’s account that might allow him a way out of the paradox (‘‘Davidson’s Theory of Intention,’’ in Bratman 1999). I will return to another way out below.

7. Alfred Mele (1992) puts a positive spin on this.

8. For more on malleability, see Mele (1992).

9. A similar account is due to Raimo Tuomela (see Tuomela 1984, 1995, and Mele 1992, 2005). All these studies can be viewed as supported by empirical research such as that conducted by B. Malle (1999, 2001).

10. Some of the more central pieces are to be found in Bratman 1987 and 1999. But see also Bratman (2000a, b, 2002).

11. Annette Karmiloff-Smith (1999) is right in insisting on a connection between expertise and modularity.

12. I have argued elsewhere (2002, 2003) that the causal explanation is inadequate to all the needs of theoretical science, as such.

13. For more on this reductionist account of causation and defenses of it, see Thalos (2003).
14. This appears to be the moral in Alfred Mele’s Motivation and Agency (2005) in which he discusses an example like the one I will be describing in the text to follow.

15. We could extend this list through inclusion of such things as accessible machinery (compasses, computers, telephones, etc.), as the friends of extended cognition might like to do. But it is not necessary for my purposes here.

16. Norman and Shallice (1986) offer an intersection model of top-down and bottom-up processing, without a ranking central processor. Hardcastle (1995) makes a beginning at a philosophical discussion of these topics.

17. Note well: I am hereby defining the terms central, local, and distributed for my use in this essay. They are here my own terms of art. Others who might use similar verbal forms are not necessarily—and in fact usually not—using that language in a sufficiently technical or narrow fashion. And thus only by coincidence will it be true that they are utilizing the same concepts as mine, if perchance their meaning comes to the same as mine.

18. See Krista Lawlor (2007).

19. It originated in the 1950s in the work of Eliot Stellar, in the form of dual-center theory, now generally thought overly simple.

20. The metaphysics of systems theory is currently in its infancy. My own piece ‘‘A World of Systems’’ makes some progress in articulating the metaphysical differences between systems theory and causal analysis.

21. This problem structure, and closely related others, threaten to undo David Lewis’s (1973, 1979) counterfactual account of causation.

22. See Thalos (2002) for a contrasting account of the subject matters of the sciences.

23. De anima 433a21–24; Nicomachean Ethics 1139a35.

24. Someone might judge this Aristotelian point as Humean, but I do not, as I think that Hume in the relevant passages has no ambition to speak on the topic of modeling behavior insofar as he seeks to make remarks on the reach of reason. See Millgram (1995).

25. I owe this potted history to remarks of Hannah Arendt (1978).

26. Indeed there are models of behavior that proceed from the single axiom that self-regulation is a limited resource; see Schmeichel and Baumeister (2004).

27. It’s worthwhile noting that he is repeating a move made by Daryl Bem (1967).

28. See Zelazo and Muller (2002), Zelazo et al. (1997), and Zelazo (1996). Gibson (1993) is a source of other important research on development of perception of control.

29. For references to these and related studies, see Gibson (1993).
30. This is a phrase of Jerry Fodor’s, who has been prominent in his pessimism about its analysis.

31. This brings up a related issue: there is a large body of work pertaining to the bounds or constraints that belief (or in my terminology, judgment) creates for or sets upon intentions: Davidson, Bratman, Mele, and Audi especially. The questions this literature tries to handle are in the following two forms: (1) Can I believe X and still intend Y? and (2) Must I believe X in order to intend Y? (A parenthetical remark: none of what is said in the literature on these questions is in the least bit persuasive, for a whole range of reasons having to do with the methodology of the writers and the readings performed of the belief-constraint principles in question. One simply cannot answer the posed questions in conceptual terms: one cannot examine one’s store of common sense to illuminate the answers; one must perform empirical studies.) It is not clear why these questions are pursued in the first place: nothing of any substance is said on the question of the bearing of said belief constraints on philosophical topics already acknowledged as substantive. And not much in the way of answers to the questions comes out, in any case, nor any substance on the subject of what hangs on answering the belief constraints questions one way or another. Perhaps some of what is fueling the fervor on these questions might be linked to something Davidson says early on in the conversation on intention: ‘‘ ‘Incontinent,’ like ‘intentional,’ ‘voluntary,’ and ‘deliberate,’ characterizes actions only as conceived in one way rather than another.’’ Here Davidson is concerned about how to characterize the contents of intention. There is indeed a substantive philosophical question in the vicinity. It could more explicitly be put as follows: How does the fact that judgment always involves the mobilization of categories, and that these categories might vary by community, or even from individual to individual, impact an executive function that ultimately issues in something (i.e., behavior or action) that belongs on the side of the concrete rather than the abstract—on the side of the empirical world rather than the world of concepts and their ilk? And this question is made even more urgent on my model: according to my model, judgment interacts directly with the executive function, it is absolutely pivotal and indispensable in the learning process, and we can thus anticipate that in a vast proportion of cases judgment will intermediate between other cognitive functions and the executive module. And so the question becomes a very live and a very exciting philosophical question. But no theory of belief constraints will give anything that approaches an interesting answer to it.

32. Elsewhere I am sketching how this account handles certain phenomena in cognitive and social psychology: such phenomena as are associated with development of executive function, with dissociative diseases and with neuroses.

33. In one region of the academy, researchers have been pursuing a manifestly reductive goal: to provide an account of cooperative endeavors (‘‘games’’ as they are
called) in competitive terms. Guided by the idea that cooperative games do not deserve a separate category of their own, rational theorists have sought to assimilate what may be called collective rationality, the process of deliberating collectively to achieve agreement for the sake of coordinating action, to what may be called individual rationality, the process of achieving decision as an (undistributed) individual. The general colonizing move is to handle the overt process of deliberations as a series of strategic bargaining steps, in a competitive game played out amongst the members of the coalition, within the boundaries of the larger game. The bargaining game is itself viewed as governed by independent rules of interaction among multiple players, and therefore clearly not as something that someone can undergo purely as a single, unified psyche. Under this proposal it thus becomes impossible to view the process of deliberation with others as a potential means to reaching collective decision. It becomes impossible to view the process of deliberation as a means to forming a single but larger decision-making body aiming at common goals. Instead, deliberation with others comes to be viewed exclusively as a means for each participating individual to reach an individual end, within a purely competitive framework. The problem of equilibrium selection has become legendary in this context and tomes like John Harsanyi and Reinhard Selten’s (1988) have become monuments to this problem. See Thalos (1999) for more on this topic, and how a distributed-system analysis of agency can be deployed to advantage.

34. See, for example, Boyd and Richerson (1985) and Cavalli-Sforza and Feldman (1981); both projects lie in the tradition of Maynard Smith (1982).

35. ‘‘To follow a rule’’ and ‘‘thinking.’’

36. Hutchins (1995) is a masterful, extended study of distributed cognition in the social world.

37. In an important recent episode touching on this family of questions, Skyrms (1996) has sought to answer the special question of how a rule-defined practice might evolve in an ecological context. He assumes that the rules among which selection operates—we might profitably refer to them as idiolects—are ‘‘original existences’’ (to bend a phrase), transmittable as wholes, and not copied piecemeal in the learning process that involves multiple interactions amongst members of a community. Thus this approach, while asking a certain evolutionary question and offering an evolutionary answer, does not answer what I’m referring to as the developmental questions that Wittgenstein placed before us. Skyrms’s approach to the evolution of rule-following simply takes for granted that behaving according to a rule is an unproblematic phenomenon that can spring up naturally. It volunteers in the garden of life and does not require special cultivation. But this is precisely to take as unproblematic what Wittgenstein found so puzzling. In fact it simply focuses on different questions entirely: namely the questions of how evolutionary forces tend to select one kind of rule over another.
38. Korsgaard (1999) uses similar phraseology, but it is clear that her language of constitution is entirely metaphorical since it is couched within a transcendental account of agency.

References

Anderson, J. 1983. The Architecture of Cognition. Cambridge: Harvard University Press.

Anscombe, G. E. M. 1963. Intention. Ithaca: Cornell University Press.

Arendt, H. 1978. Willing. New York: Harcourt Brace Jovanovich.

Baars, B. 1988. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.

Bem, D. 1967. Self-perception: An alternative interpretation of cognitive dissonance phenomena. Psychological Review 74: 183–200.

Boyd, R., and P. Richerson. 1985. Culture and the Evolutionary Process. Chicago: University of Chicago Press.

Bratman, M. 1987. Intention, Plans, and Practical Reason. Cambridge: Harvard University Press.

Bratman, M. 1999. Faces of Intention: Selected Essays on Intention and Agency. Cambridge: Cambridge University Press.

Bratman, M. 2000a. Reflection, planning, and temporally extended agency. Philosophical Review 109: 35–61.

Bratman, M. 2000b. Valuing and the will. Philosophical Perspectives 14: 249–65.

Bratman, M. 2002. Hierarchy, circularity, and double reduction. In S. Buss and L. Overton, eds., Contours of Agency: Essays on the Philosophy of Harry Frankfurt, pp. 65–85. Cambridge: MIT Press.

Carver, C. S., and M. F. Scheier. 1998. On the Self-regulation of Behavior. New York: Cambridge University Press.

Cavalli-Sforza, L., and M. Feldman. 1981. Cultural Transmission and Evolution: A Quantitative Approach. Princeton: Princeton University Press.

Clark, A. 2003. Natural-Born Cyborgs. New York: Oxford University Press.

Davidson, D. 2001/1980. Actions, reasons, and causes. In Essays on Actions and Events, pp. 3–20. Oxford: Clarendon Press.

Dennett, D. 1991. Consciousness Explained. Boston: Little Brown.

Doris, J. 2002. Lack of Character: Personality and Moral Behavior. Cambridge: Cambridge University Press.
Downes, S. 2001. Some recent developments in evolutionary approaches to the study of human cognition and behavior. Biology and Philosophy 16: 575–95.

Dreyfus, H., and S. Dreyfus. 1986. Mind over Machine. New York: Free Press.

Festinger, L. 1962/1957. A Theory of Cognitive Dissonance. Stanford: Stanford University Press.

Fodor, J. 1983. The Modularity of Mind. Cambridge: Bradford Books/MIT Press.

Gazzaniga, M., ed. 1984. Handbook of Cognitive Neuroscience. New York: Plenum Press.

Gazzaniga, M. 1985. The Social Brain: Discovering the Networks of the Mind. New York: Basic Books.

Gazzaniga, M., ed. 2000. Cognitive Neuroscience: A Reader. Oxford: Blackwell.

Gazzaniga, M., and T. Heatherton. 2003. Psychological Science: Mind, Brain, and Behavior. New York: Norton.

Gazzaniga, M., and J. LeDoux. 1978. The Integrated Mind. New York: Plenum Press.

Gazzaniga, M., D. Steen, and B. Volpe. 1979. Functional Neuroscience. New York: Harper and Row.

Gibson, E. J. 1993. Ontogenesis of the perceived self. In U. Neisser, ed., The Perceived Self, pp. 25–42. New York: Cambridge University Press.

Ginet, C. 1990. On Action. Cambridge: Cambridge University Press.

Hardcastle, V. G. 1995. A critique of information processing theories of consciousness. Minds and Machines 5: 89–107.

Harsanyi, J., and R. Selten. 1988. A General Theory of Equilibrium Selection in Games. Cambridge: MIT Press.

Hart, H. L. A., and T. Honore. 1962/1959. Causation in the Law. Oxford: Clarendon Press.

Hitchcock, C. 2001a. A tale of two effects. Philosophical Review 110: 361–96.

Hitchcock, C. 2001b. The intransitivity of causation revealed in equations and graphs. Journal of Philosophy 98: 273–99.

Hutchins, E. 1995. Cognition in the Wild. Cambridge: MIT Press.

Jastrow, J. 1906. The Subconscious. Boston: Houghton Mifflin.

Karmiloff-Smith, A. 1999. Beyond Modularity. Cambridge: MIT Press.

Korsgaard, C. 1999. Self-constitution in the ethics of Plato and Kant. Journal of Ethics 3: 1–29.

Lawlor, K. 2006. The new privileged access problem. Unpublished manuscript.
Lewis, D. 1973. Causation. Journal of Philosophy 70: 556–67.

Lewis, D. 1979. Counterfactual dependence and time’s arrow. Nous 13: 455–76.

Libet, B. 1993. Neurophysiology of Consciousness. Boston: Birkhauser.

Libet, B., C. A. Gleason, E. W. Wright, and D. K. Pearl. 1983. Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain 106: 623–42.

Malle, B. F. 1999. How people explain behavior: A new theoretical framework. Personality and Social Psychology Review 3: 23–48.

Malle, B. F. 2001. Folk explanations of intentional action. In B. F. Malle, L. J. Moses, and D. A. Baldwin, eds., Intentions and Intentionality: Foundations of Social Cognition, pp. 265–86. Cambridge: MIT Press.

Maynard Smith, J. 1982. Evolution and the Theory of Games. New York: Cambridge University Press.

Mele, A. 1992. Springs of Action. New York: Oxford University Press.

Mele, A. 2005. Motivation and Agency. New York: Oxford University Press.

Miller, G. A., E. Galanter, and K. H. Pribram. 1960. Plans and the Structure of Behavior. New York: Holt.

Millgram, E. 1995. Was Hume a Humean? Hume Studies 21: 75–93.

Moran, R. 2001. Authority and Estrangement: An Essay on Self-Knowledge. Princeton: Princeton University Press.

Newell, A., and H. Simon. 1972. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.

Newell, A., and H. Simon. 1976. Computer science as empirical inquiry: Symbols and search. Communications of the ACM 19: 113–26.

Norman, D. A., and T. Shallice. 1986. Attention to action: Willed and automatic control of behavior. In R. Davidson, G. Schwartz, and D. Shapiro, eds., Consciousness and Self-Regulation: Advances in Research and Theory, vol. 4, pp. 1–18. New York: Plenum Press.

Piaget, J. 1952. The Origins of Intelligence in Children. New York: Norton.

Rosenbloom, P. S., J. E. Laird, and A. Newell. 1987. Knowledge-level learning in SOAR. Proceedings of AAAI, vol. 87, pp. 499–504. Los Altos, CA: Morgan Kaufmann.

Schank, R. 1975. Conceptual Information Processing. New York: Elsevier.

Schank, R. 1986. Explanation Patterns: Understanding Mechanically and Creatively. Hillsdale, NJ: Erlbaum.
Schlink, B. 1997. The Reader, trans. C. Brown Janeway. New York: Pantheon Books.

Schmeichel, B., and R. Baumeister. 2004. Self-regulatory strength. In R. F. Baumeister and K. D. Vohs, eds., Handbook of Self-Regulation: Research, Theory, and Applications, pp. 509–24. New York: Guilford Press.

Skyrms, B. 1996. Evolution of the Social Contract. Cambridge: Cambridge University Press.

Sober, E., and D. S. Wilson. 1994. Reintroducing group selection to the human behavioral sciences. Behavioral and Brain Sciences 17: 585–654.

Sober, E., and D. S. Wilson. 1998. Unto Others. Cambridge: Harvard University Press.

Thalos, M. 1999. Degrees of freedom in the social world. Journal of Political Philosophy 7: 453–77.

Thalos, M. 2002. Explanation is a genus: On the varieties of scientific explanation. Synthese 130: 317–54.

Thalos, M. 2003. The reduction of causation. In M. Thalos and H. Kyburg Jr., eds., Probability Is the Very Guide of Life: The Philosophical Uses of Probability, pp. 295–328. Chicago: Open Court Press.

Thalos, M. 2006. A world of systems. Unpublished manuscript.

Tuomela, R. 1984. A Theory of Social Action. Dordrecht: Reidel.

Tuomela, R. 1995. The Importance of Us: A Philosophical Study of Basic Social Notions. Stanford Series in Philosophy. Stanford: Stanford University Press.

Vygotsky, L. S. 1934/1962. Thought and Language. Trans. E. Hanfmann and G. Vakar. Cambridge: MIT Press.

Wegner, D. 2002. The Illusion of Conscious Will. Cambridge: MIT Press.

Wegner, D., M. E. Ansfield, and D. Pilloff. 1998. The putt and the pendulum: Ironic effects of the mental control of movement. Psychological Science 9: 196–99.

Wilson, G. 1989. The Intentionality of Human Action. Palo Alto: Stanford University Press.

Zelazo, P. D. 1996. Towards a characterization of minimal consciousness. New Ideas in Psychology 14: 63–80.

Zelazo, P. D., and U. Muller. 2002. Executive function in typical and atypical development. In U. Goswami, ed., Blackwell Handbook of Childhood Cognitive Development, pp. 445–69. Oxford: Blackwell.

Zelazo, P. D., A. Carter, J. S. Reznick, and D. Frye. 1997. Early development of executive function: A problem-solving framework. Review of General Psychology 1: 198–226.
9 Thought Experiments That Explore Where Controlled Experiments Can’t: The Example of Will

George Ainslie
Research in decision-making has always followed one of two basic strategies: start with the complex and work toward the simple, or start with the simple and work toward the complex. Since real-life choices are the complex part of the picture, most researchers have chosen the former strategy, particularly since the ‘‘cognitive revolution’’ made subjective processes a legitimate subject of scientific inquiry (Baars 1986; Lachman et al. 1979). These top-down theorists have documented many subtle phenomena in detail, but they have not even tried to find the basic mechanisms of these phenomena. Many of these theorists are fundamentally opposed to the attempt to reduce higher mental processes to simpler elements (Miller 2003; Baars 1986, e.g., quoting Maltzman, p. 109, and Mandler, p. 258). Moving in the opposite direction, the atomists par excellence have been the behaviorists. These bottom-up theorists have isolated atoms of choice, but even slightly complex concatenations of these atoms have had a mechanical feel. And the combining principles that were observed in the laboratory often stretched credibility when applied to the important life incentives—the idea that emotions come from conditioned sensory experiences, for instance, or that human behavior is a straightforward response to environmental contingencies (e.g., Skinner 1953). Recently the neuroeconomists have joined the bottom-up approach—you cannot get much more elementary than single cortical neurons. They have begun to repeat the work of the behaviorists while monitoring CNS activity (Glimcher 2005). However, complex interactions are still beyond the capacity of this research. In recent years findings from parametric behavioral experiments with both humans and nonhuman animals have suggested how the simplest motivated processes may combine into higher functions. The logic of this compounding process predicts the growth of even the ego functions from the interaction of elementary motives. Undoubtedly some of this growth is constrained by inborn predispositions, but the existence of these
predispositions becomes an open question rather than the assumption that more holistic theories have needed. However, insofar as this model predicts higher functions, the method of controlled experiment becomes inadequate to test many of its predictions, for a reason that I will go into presently. In this chapter I will elaborate on an earlier suggestion of mine (Ainslie 2001, pp. 125–39), that a tool of philosophy of mind—the thought experiment—can be a sound way of testing the validity of theories of higher mental functions. The most developed example of both modeling and testing is the will.

Three Components of Will

The will has always been hard to study, to the point that some authors have claimed that it does not exist. Ryle famously called will ‘‘the ghost in the machine’’ (1949/1984), and a recent book by Wegner called it an illusion, at least in the form of which people are conscious (2002). Part of the problem is that the term ‘‘will’’ gets applied to at least three somewhat independent functions: the initiation of movement (which corresponds to the Cartesian connection of thought and action—the function that Ryle found unnecessary), the ownership of actions, which gives you the sense that they come from your true self (the one that Wegner shows to be a psychological construction), and the maintenance of resolutions against shortsighted impulses. The first two functions of will can be studied by experimental procedures and by observing their pathology in ‘‘nature’s experiments.’’ An intention to move can be tracked through electrical activity in cortical neurons, starting from a point before it reaches the stage of being an intention (as in Iacoboni’s mirror neurons, 1999, and Libet’s early movement potentials, 1999). An act of intending can even be isolated experientially by the amputation of a limb, which often leaves the sense of initiating movement without the subsequent stages of movement. Ownership—the integration of choice with the larger self—can be studied in cases where it is absent: in splits of consciousness or activity below the threshold of consciousness. Splits remove the reporting self’s ‘‘emotion’’ of agency by physically (split brain, alien hand) or motivationally (dissociation and probably hypnosis) blocking this part-self’s awareness of the other will functions; however, even without the feeling of agency both of the other components of the will, initiation of movement and the steadfastness of resolutions, may be preserved. Subthreshold phenomena include mannerisms, which can be shaped even in sleep, small drifts of activity that can be summed into
ouija-like phenomena, and the insensible induction of one choice over another, such as by means of powerful transcranial magnetic fields (Brasil-Neto et al. 1992; Wegner 2002 reviews many of these phenomena). The third function of will, the maintenance of resolution—willpower hereafter—has been harder to study, and yet it is arguably the most important of the three. Pathologies of initiation are rare—the ‘‘locked-in’’ syndrome is the most dramatic example—and the pathologies of ownership just mentioned are uncommon. By contrast, as modern society progressively widens our freedom of choice, pathologies of willpower have become the most common preventable cause of death and disability in young and middle-aged people (Robins and Regier 1991). These are not limited to failures of willpower, seen in addictions to alcohol, recreational drugs, cigarettes, food, gambling, credit card abuse, and many less obvious forms of preferring smaller, sooner (SS hereafter) satisfactions to substantially larger but later (LL) ones; in addition to these failures there are the overly narrow, rigid wills seen in obsessive compulsive personality disorder, anorexia nervosa, and many character pathologies both named and unnamed. Data from neuroanatomy and neurophysiology have told us what parts of the brain are essential for willpower. Ablation studies dating back to the now famous Phineas Gage (lobotomy by tamping bar) have shown that without the functioning of the lateral prefrontal cortex people cannot carry out intentions over time, becoming notably impulsive (Burgess 2006). A recent fMRI study by McClure and his collaborators showed that when student subjects chose LL rewards over SS ones their lateral prefrontal cortices and parts of their parietal cortices were more active than when they chose the other way (2004). London and her collaborators have shown that when deprived smokers are told to avoid puffing on a device that delivers cigarette smoke, their dorsal anterior cingulate gyri and supplementary motor areas were active (2006). However, such studies show only where, not how or why, willpower works. It is certainly foreseeable that greatly increased resolution in space and time will let brain imaging answer those questions too, but this may not happen soon. According to the standard of normality that is either explicit or implicit throughout the behavioral sciences, rational choice theory (RCT), willpower should not be necessary at all. Choices are assumed to have inertia so that, once made, they will be steady over time in the absence of new information (Hollis and Sugden 1993). In economics and behavioral psychology normal individuals have been explicitly held to discount future outcomes in an exponential curve; in other fields exponential (or linear) discounting is implied by RCT’s property of consistency, since all
shapes other than the exponential sometimes predict reversals of preference as a function of time. Given exponential discounting, the role of a self or ego is merely to obtain the individual’s greatest advantage by seeking as much information and freedom of action as possible. In RCT the ego is a hierarch that coordinates obedient subordinate processes; will in the sense of willpower is superfluous, and impulsive choices must be explained by some separate motivational principle.

Basic Properties of Hyperbolic Discounting

However, parametric experiments on the devaluation (discounting) of prospective events in animal and human subjects have repeatedly found that an exponential shape does not describe spontaneous choice as well as a hyperbolic shape (inverse proportionality of value to delay—reviews in Green and Myerson 2004; Kirby 1997; figure 9.1). Three implications of hyperbolic discounting—preference reversal toward SS rewards as a function of time (impulsiveness), early choice of committing devices to forestall impulsiveness, and decreased impulsiveness when choices are made in whole series rather than singly—have also been found experimentally (reviewed in Ainslie 1992, pp. 125–42, and 2001, pp. 73–78).

Figure 9.1 (a) Conventional (exponential) discount curves from a smaller-sooner (SS) and a larger-later (LL) reward. At every point their heights stay proportional to their values at the time that the SS reward is due. (b) Hyperbolic discount curves from an SS reward and an LL reward. The smaller reward is temporarily preferred for a period just before it’s available, as shown by the portion of its curve that projects above that from the later, larger reward.
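The contrast between the two shapes is easy to check numerically. The following sketch (Python; the amounts, the dates, and the discount rate K are illustrative assumptions, not parameters from the studies reviewed above) evaluates an SS and an LL prospect on each day leading up to the SS due date, using the exponential form V = A·e^(−KD) and the hyperbolic form V = A/(1 + KD):

```python
import math

# Illustrative parameters only: 100 units at day 30 (SS) vs. 200 units
# at day 90 (LL), valued from day 0 up to the day the SS reward is due.
K = 0.025  # assumed discount rate per day

def exp_value(amount, delay):
    # Exponential discounting: V = A * e^(-K*D)
    return amount * math.exp(-K * delay)

def hyp_value(amount, delay):
    # Hyperbolic discounting: V = A / (1 + K*D)
    return amount / (1 + K * delay)

SS_AMOUNT, SS_DAY = 100, 30
LL_AMOUNT, LL_DAY = 200, 90

for label, value in (("exponential", exp_value), ("hyperbolic", hyp_value)):
    # Does the SS reward look better than the LL reward on each day?
    prefers_ss = [value(SS_AMOUNT, SS_DAY - day) > value(LL_AMOUNT, LL_DAY - day)
                  for day in range(SS_DAY + 1)]
    print(f"{label}: prefers SS at day 0: {prefers_ss[0]}; "
          f"prefers SS at day {SS_DAY}: {prefers_ss[-1]}")
```

With the exponential curve the ratio of the two discounted values, (200/100)·e^(−60K), contains no reference to the evaluation date, so no choice of K can produce a reversal; with the hyperbolic curve the ratio shifts in favor of the SS reward as it draws near, reproducing the temporary preference shown in figure 9.1b.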
Such findings suggest an alternative to the hierarchical model of the self: behavioral tendencies are selected and shaped by reward in a marketplace of all options that are substitutable for one another (developed at length in Ainslie 1992, pp. 144–227, and 2001, pp. 73–104). Temporary preferences for SS options create conflict among successive motivational states, so a currently dominant option must include means of forestalling any incompatible options that are likely to become dominant in the future. Neither better information nor greater freedom of action necessarily serves the person’s longest range interest. The key ego function becomes predicting and forestalling temporary preferences for SS rewards. This function need not involve a special organ, but rather can be expected to be learned by trial and error whenever an individual has adequate foresight. It may entail finding external incentives or constraints, diverting attention from sources of SS reward, or cultivating emotions with some motivational momentum; but the only method that is both powerful and widely applicable is willpower. The finding that discounting the future tends to follow a hyperbolic curve puts even more demands on a theory of willpower. The will is needed not just for heroic cases but as a basic means of knitting a person’s intentions together from one moment to the next. Far from being a ghost in the machine, will may be the most fundamental ego function. Several theories of willpower have been proposed, or at least sketched out: that it is an organ-like faculty that gets stronger with practice but is exhausted by overuse (Baumeister and Heatherton 1996; Kuhl 1994), that it is a skill involving selectively not re-examining decisions once made (Bratman 1999; McClennen 1990), that it is an appreciation of molar patterns of choice that itself adds incentive not to spoil these patterns (Rachlin 1995), or that it comes from the (usually tacit) perception of a limited-warfare relationship among successive motivational states, which makes current choices test cases for future choices in similar situations. Mine is the last of these, and it has the advantage of having been suggested by the shape of the hyperbolic curve itself. I outline its rationale here, but a rationale is not a proof.

Intertemporal Bargaining

Limited warfare occurs where agents share some goals but dispute others (Schelling 1960, pp. 60–90): warring countries share an interest in avoiding
nuclear war, or a couple wants to save money but each member wants to exempt pet projects. Such situations set up a repeated game of Prisoner’s Dilemma. Peace or at least a truce is achieved by establishing clear criteria for what will constitute defection in this game, in such a way that each agent’s long-range interest is served better by continual cooperation than by ad lib defection. My hypothesis is that hyperbolic discount curves are continually putting us into intertemporal Prisoner’s Dilemmas—cooperate with future selves for the long run or splurge for the present moment. The most powerful solution is simply to recognize this state of affairs so that our current decision becomes a test case for how we can expect to decide similar choices generally. With such a perception our expected reward from having a consistent intention is staked on cooperating with our future selves, and the reward is sharply reduced if we defect to an impulsive alternative. Although people conceive the mechanics of this contingency variously, under the rubrics of morality, principle, personal intention, and even divine help, we uniformly experience resolve when we have an adequate stake in a particular plan, and guilt or at least foreboding when a lapse causes loss of part of this stake. That is, the kind of guilt that arises from a failure of resolve represents your accurate perception that you have reduced the credibility of your promises in similar situations and perhaps generally, making intertemporal cooperation harder. The threat of this reduced credibility, I have argued, is the basis of willpower. Empirically this hypothesis has two components:

1. Making decisions between SS and LL rewards in a whole bundle increases the incentive to choose LL rewards; and
2. Perceiving current choices as test cases that predict a series of similar future choices forms these choices into such a bundle.

The Effect of Bundling Rewards

Both controlled experiments and uncontrolled observations have provided good evidence for the first hypothesis. Studies with nonhuman animals show that the hyperbolically discounted effects of each reward in a series simply add (analyzed in Mazur 1997). Thus we should be able to estimate the effect of a series of rewards at a given moment by simply adding the heights of the curves from each reward at that moment. Because hyperbolic curves decay relatively slowly at long delays, bundling rewards together predicts an increase in the hyperbolically discounted value of the LL rewards relative to the hyperbolically discounted value of the SS rewards (figure 9.2a). Thus a bundle of LL rewards may be consistently worth more than a bundle of SS ones even where the discounted value of the most imminent smaller reward greatly exceeds the discounted value of its LL alternative.
Figure 9.2 (a) Summed hyperbolic curves from a series of larger-later rewards and a series of smaller-earlier alternatives. As more pairs are added to the series, the periods of temporary preference for the series of small rewards shrink to zero. The curves from the final (rightmost) pair of rewards are the same as in figure 9.1b. (b) Summed exponential curves from the same series as in figure 9.2a. Summing doesn’t change their relative heights. (This would also be true if the curves were so steep that the smaller, earlier rewards were preferred, but in that case summing would add little to their total height, anyway, because the tails of exponential curves are so low.)
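The additivity of the summed curves makes the bundling prediction easy to verify by hand. The sketch below (Python; the five-week schedule, the amounts, and the discount rate are illustrative assumptions loosely patterned on the weekly-choice design reported next, not the actual experimental parameters) sums the discounted values of each series at the moment of the first choice:

```python
import math

# Assumed setup: five weekly choices between a smaller-sooner reward
# (100 units, available immediately each week) and a larger-later
# reward (200 units, one week afterward). Parameters are illustrative.
K = 0.2  # assumed per-day discount rate
PAIRS = [(7 * week, 7 * week + 7) for week in range(5)]  # (SS day, LL day)

def hyp(amount, delay):
    return amount / (1 + K * delay)        # hyperbolic

def expo(amount, delay):
    return amount * math.exp(-K * delay)   # exponential

for label, value in (("hyperbolic", hyp), ("exponential", expo)):
    # A single, unbundled choice: only the first week's pair counts.
    single = "SS" if value(100, 0) > value(200, 7) else "LL"
    # A bundled choice: the curves from every reward in the series add.
    bundle_ss = sum(value(100, ss_day) for ss_day, _ in PAIRS)
    bundle_ll = sum(value(200, ll_day) for _, ll_day in PAIRS)
    bundled = "SS" if bundle_ss > bundle_ll else "LL"
    print(f"{label}: single choice prefers {single}, "
          f"bundled series prefers {bundled}")
```

Summing leaves the exponential comparison untouched—each LL term is the same constant multiple of its SS partner—while the slowly decaying hyperbolic tails let the later, larger rewards dominate the bundle, as in figure 9.2.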
Experiments in both humans and rats have confirmed this effect. Kirby and Guastello reported that students who faced five weekly choices of an SS amount of money immediately or an LL amount one week later picked the LL amounts substantially more if they had to choose for all five weeks at once than if they chose individually each week (2001). The authors reported an even greater effect for SS vs. LL amounts of pizza. Ainslie and Monterosso reported that rats made more LL choices when they chose for a bundle of three trials all at once than when they chose between the same SS vs. LL contingencies on each separate trial (2003). The effect of such bundling of choices is predicted by hyperbolic but not exponential curves:
exponentially discounted prospects do not change their relative values however many are summed together (figure 9.2b); by contrast, hyperbolically discounted SS rewards, although disproportionately valued as they draw near, lose much of this differential value when choices are bundled into series. These experimental findings confirm a great deal of lore on willpower. From classical Greek times philosophers have claimed to see patterns of will and its failure, and these reports have been consistent. The recommendation that has recurred most regularly through the ages has been to decide according to principle, that is, to decide in categories containing a number of choices rather than just the choice at hand (see Ainslie 2001, pp. 117–20). Aristotle said that incontinence (akrasia) is the result of choosing according to ‘‘particulars’’ instead of ‘‘universals’’ (Nicomachean Ethics 1147a24–28); Kant said that the highest kind of decision-making involves making all choices as if they defined universal rules (the ‘‘categorical imperative,’’ 1793/1960, pp. 15–49); the Victorian psychologist Sully said that will consists of uniting ‘‘particular actions . . . under a common rule’’ so that ‘‘they are viewed as members of a class of actions subserving one comprehensive end’’ (1884, p. 663). In recent years behavioral psychologists Heyman (1996) and Rachlin (1995) have both suggested that choosing in an ‘‘overall’’ or ‘‘molar’’ pattern (respectively) will approach reward maximizing more closely than a ‘‘local’’ or ‘‘molecular’’ one.

Bundling via Recursive Self-Prediction

It is harder to find evidence about the second component of my hypothesis. This describes an internally recursive process, in which changes of current choice change expected future choice, and changes of expected future choice change current choice. There is no way to manipulate this process by varying external incentives; indeed that is one of the defining properties of the will, which the subject must experience as ‘‘free.’’ Controlled experiments can only nibble at the edges of this process. For instance, the subjects in the Kirby and Guastello experiment who had to choose between LL and SS rewards weekly for five weeks had a greater tendency to choose LL rewards if it was suggested to them that their current choice might be an indication of what they would choose in the future. This is suggestive, but hardly proof, that the will depends on self-prediction. Analogues using interpersonal repeated Prisoner’s Dilemmas can test whether the predicted dynamics of limited warfare actually occur. For instance, false feedback that a partner has defected changes a pattern of
cooperation more and for longer than false feedback that a partner has cooperated changes a pattern of defection, an asymmetry that has been described for willpower (Monterosso et al. 2002). Even the multiple-person, single-play games that most closely model intertemporal bargaining can be set up, as in this demonstration: Picture a lecture audience. I announce that I’ll go along every row, starting at the front, and give each member a chance to say ‘‘cooperate’’ or ‘‘defect.’’ Each time someone says ‘‘cooperate’’ I’ll award a dime to her and to everyone else in the audience. Each time someone says ‘‘defect’’ I’ll award a dollar only to her. And I ask that they play this game solely to maximize their individual total income, without worrying about friendship, politeness, the common good, etc. I say that I will stop at an unpredictable point after at least twenty players have played, at which time each member can collect her earnings. Like successive motivational states within a person, each successive player has a direct interest in the behavior of each subsequent player; and she’ll guess their future choices somewhat by noticing the choices already made. Realizing that her move will be the most salient of these choices right after she’s made it, she has an incentive to forego a sure dollar, but only if she thinks that this choice will be both necessary and sufficient to make later players do likewise. If previous players have been choosing dollars she’s unlikely to estimate that her single cooperation will be enough to reverse the trend. However, if past choices have mostly been dimes, she has reason to worry that her defection might stop a trend that both she and subsequent players have an incentive to support. Knowing the other audience members’ thoughts and characters—whether they’re greedy, or devious, for instance—won’t help a person choose, as long as she believes them to be playing to maximize their gains. This is so because the main determinant of their choices will be the pattern of previous members’ play at the moment of these choices. Retaliation for a defection won’t occur punitively—a current player has no reason to reward or punish a player who won’t play again—but what amounts to retaliation will happen through the effect of this defection on subsequent players’ estimations of their prospects and their consequent choices. So each player’s choice of whether to cooperate or not is still strategic. (Ainslie 2001, p. 93)
This exercise has provided a serviceable illustration of how an intertemporal Prisoner’s Dilemma works. However, the pattern of choices both in lecture audiences and roomfuls of volunteer subjects (residents of substance abuse rehabilitation programs) has been greatly affected by the participants’ awareness of the social situation, as they have often said afterward. The emotional incentives are apt to be greater than the monetary ones, and the precedent set by someone’s move affects not only the moves of subsequent players but also her own self-image, as ‘‘a cooperative person’’ (see Ainslie 2005a). Further research along this line is possible, but it promises to be both noisy and expensive.
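The precedent-driven dynamics of the audience game can at least be caricatured in a few lines of code. The toy simulation below (Python; the decision heuristic, window size, threshold, and noise level are all illustrative assumptions, not a model fitted to any actual session) has each successive player cooperate exactly when the recent record suggests that the cooperative trend will hold:

```python
import random

random.seed(1)  # reproducible toy run

def play(n_players=30, window=5, threshold=0.6, noise=0.1, seed_moves=()):
    """Each entry in the history is 1 (cooperate) or 0 (defect)."""
    history = list(seed_moves)
    while len(history) < n_players:
        recent = history[-window:]
        coop_rate = sum(recent) / len(recent) if recent else 1.0
        # Cooperate when precedent suggests later players will follow
        # suit; a little noise stands in for idiosyncratic judgment.
        move = 1 if coop_rate >= threshold else 0
        if random.random() < noise:
            move = 1 - move
        history.append(move)
    return history

for seed in ((1, 1, 1), (0, 0, 0)):
    result = play(seed_moves=seed)
    print(f"opening moves {seed} -> cooperation rate "
          f"{sum(result) / len(result):.2f}")
```

Nothing about any player’s ‘‘character’’ appears in the model; only the visible record of prior moves does, and the two openings typically settle into different regimes. That is the feature the demonstration above is meant to isolate: what amounts to retaliation propagates through each player’s estimate of the trend, not through any punitive motive.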
Despite this suggestive evidence, a critic could still reasonably say that the second component of my hypothesis, recursive self-observation, has not been demonstrated. It might seem that with a process that is so much a part of daily life we could just ask people how it works. People can indeed report using willpower among other ways to avoid impulses, and the pattern of reported endorsement of personal rules grossly correlates with expectable traits—positively with compulsive traits and negatively with reports endorsing external control devices (Ainslie 1987). However, intensive interviews with a variety of subjects have shown that people notice disappointingly little about the properties of their own willpower. For instance, if subjects who are trying to break a bad habit are asked whether backsliding today will make it harder to avoid it tomorrow, they are almost as likely to say no as yes, which even to an intuitive layman standing outside the problem seems unlikely to be true (Ainslie, unpublished data). The common knowledge that must exist about how willpower works does not seem to be easily available to casual introspection. It is ironic that a process that is so familiar can be so elusive. There should be a way to marshal our everyday knowledge to test such a basic hypothesis, at least to rule it out or to rule out some of its alternatives.

Using Common Experience: Lessons from the Sociability Question

Most psychology before the positivist era drew its conclusions from common experience, just as the essayist or novelist did. There was little debate about will, but argument on the basis of experience can be seen in many other topics, such as the mechanism of social feeling. An exchange between Alexander Bain and William James illustrates the failure of this method to get to the roots of theoretical disagreements: Bain argues that the value of human company is based on differential pleasure:

Why should a more lively feeling grow up towards a fellow-being than towards a perennial fountain? It must be that there is a source of pleasure in the companionship of other sentient creatures, over and above the help afforded by them in obtaining the necessaries of life. To account for this, I can suggest nothing but the primary and independent pleasure of the animal embrace. For this pleasure every creature is disposed to pay something, even when it is only fraternal. (Emotions and Will, quoted in James 1890, p. 551, note)
At which William James snorts, Prof. Bain does not explain why a satin cushion kept at about 98 degrees F. would not on the whole give us the pleasure in question more cheaply than our friends
and babies do . . . . The youth who feels ecstasy [when] the silken palm . . . of his idol touches him, would hardly feel it were he not hit hard by Cupid in advance. (ibid., p. 552, note)
And again, With the manifestations of instinct and emotional expression, [pleasures and pains] have absolutely nothing to do. Who smiles for the pleasure of smiling, or frowns for the pleasure of the frown? Who blushes to escape the discomfort of not blushing? (ibid., p. 550)
It is the authors’ assumptions that limit their solutions. Bain assumes that all incentives must be based on the rewards and punishments that come from concrete stimuli; hence he ‘‘can suggest nothing but’’ animal embrace as the hard currency that backs the otherwise flimsy experience of companionship. James assumes that only voluntary behaviors can be shaped by reward, thus ruling out any motivational basis for (involuntary) smiling, frowning, or blushing. These assumptions might have been useful at the time to protect the debate from unmanageable degrees of freedom, but they concealed possible solutions (e.g., the dependence of involuntary behaviors like blushing on intrinsic reward, which I advocate in Ainslie 2001, pp. 48–70). The positivist reaction to these experiential arguments took the form of behaviorism, which rendered many assumptions irrelevant but wound up limited by assumptions of its own—particularly the discipline of ignoring internal experience, which perversely evolved into the assumption that internal experience is not a factor in choice. Academic debate requires conventions of evidence just as organized sports must define boundaries for fair balls, but formalizing the scope of inquiry necessarily limits what it can find. With the ‘‘cognitive revolution’’ researchers again began exploring common experience. But, naturally enough, no experiment has been devised to demonstrate that human companionship has a noninstrumental value, that is, a value beyond what it has as a means to other ends. You cannot put experimental subjects into enough of a social vacuum for that. And yet the fact is obvious. Frustrated by the persistent convention in RCT that social transactions have to have instrumental bases, one author proposed a structured introspection. Eric Posner asked us to imagine the case of the ‘‘Pampered Prisoner: a man lives in a giant building, all alone . . . .’’ It is full of every resource except people. Posner argues that this man lacks ‘‘autonomy:’’ ‘‘An economist might say that the Pampered Prisoner is in an enviable position, but this claim is not plausible. We do not envy the Pampered
Prisoner, and the reason is that much of our sense of accomplishment and well-being comes from our considered approval or rejection of the values to which others expect us to conform, and from our consistent action with these judgments despite the pressures put on us by others’’ (Posner 2000, pp. 207–208). Faced with a counterintuitive assertion that cannot be subjected to controlled experiment, Posner has tried to clarify intuition. He does not appeal to anyone’s actual experience but to an experience that can be built from familiar components, with one element expunged—the instrumental need for other people. I won’t argue whether or not this experiment is adequate to its purpose. I describe it to introduce the method of structuring intuitions to make them into shared observations. Shortly this illustration will also illustrate a hazard of that method. Thought Experiments What Posner proposed was a thought experiment, an exercise of imagination designed to let the reader examine her intuitive or previously unexamined knowledge in a framework that tests a hypothesis—in this case, whether instrumental purposes adequately explain the value of all social transactions. As in this example, thought experiments often suggest a counterfactual condition that removes one kind of doubt or variability in a real-life choice situation, and ask how this condition affects what seems to be the best choice. The intuition sought does not constitute a theory but a finding that can test a theory, just as the finding of a controlled experiment can. Interestingly thought experiments have been used almost exclusively in two disparate fields, physics and philosophy. It was Einstein’s mentor, Ernst Mach, who first called them gedankenexperimente (Sorensen 1992, p. 51), but they date back at least to Galileo. The broadest use of the term includes ways of visualizing logical truths. The proofs of plane geometry might count, but this usage is too inclusive to be meaningful. Most thought experiments in physics involve logical deductions from physical observations, albeit everyday ones. In this category would be Leibniz’s argument against a famous dichotomy—Descartes’s theory that a lighter body striking a heavier one recoils with equal speed whereas a heavier body striking a lighter one moves together with it. Leibniz imagined the continuous variation of the two masses from slightly unequal in one direction to slightly unequal in the other so that at one point, in Descartes’s theory, the striking ball would change suddenly from movement along
with the other one to a movement just as fast in the other direction. Such a change would be logically possible, but unlike anything anyone had observed (ibid., p. 10). Galileo’s famous demonstration that acceleration does not depend on mass reveals a boundary of thought experiments. He demonstrated that heavy objects cannot fall faster than light ones by asking the reader to imagine two such objects connected by a string. If the light one really fell more slowly, he pointed out, it should hold the heavy one back, and this violates intuition (ibid., p. 127). However, it was not clear that all his readers would have this intuition, even if they were aware of wind resistance, since their experiences would have been entirely in an environment that had wind resistance. Real experiments like dropping different sizes of shot or dropping a feather in a vacuum were called for. Thus what I am calling thought experiments do not tap pure reason but extend the reach of induction and hence the set of hypotheses that you can reject with a given degree of confidence. As Mach said, they permit you to draw from a storehouse of unarticulated experience (ibid., p. 4). They may sometimes be unnecessary for this—the laws of motion were soon subjected to physical experiments. Thought experiments in physics have been used mostly as heuristics, stepping-stones to quantified, controlled observations. However, in studying a mental process that is both inaccessible to direct observation and internally recursive, such verification is not available. In philosophy and, I will argue, in behavioral science, thought experiments may take the analysis of everyday experience beyond what controlled experimentation can provide, at least before brain imaging techniques become a great deal more sophisticated. The simplest cases are hardly experiments at all. For instance, where someone has concluded from the common experience of internal dialogue that ‘‘we think in words,’’ the crucial thought experiment might be just to recall occasions where you grasped a concept but ‘‘couldn’t find the words.’’ Thus when Hauser argued that ‘‘public languages are our languages of thought,’’ Abbott proposed a ‘‘moron objection:’’ ‘‘Why is it so difficult to put my thoughts into English prose if they already are in English prose?’’ (Hauser and Abbott 1995, p. 51). This test does not involve a priori logic. We can imagine mechanisms of thought that have only words as their units and that relate these units only by conventional syntax. However, it should not be necessary to survey a group of subjects about whether they have ever had the experience of not being able to find the right word. The moron objection seems to be a basic illustration of how to confront an intuitive hypothesis with an intuitive counterexample. It does not rule out
the possibility that we sometimes think in words, nor does it support a particular alternative, such as ‘‘we think in a kind of machine language,’’ but it clears the way for such alternatives. Many of the experiments that have been conducted in social psychology since the cognitive revolution have actually been surveys of thought experiments. The violations of conventional rationality described by Kahneman, Tversky, and their school were demonstrated by what were essentially thought experiments, in which the authors asked a number of subjects to repeat what must originally have been the experimenters’ personal intuitions (e.g., Kahneman and Tversky 1984). For instance, most subjects said that if they lost money equivalent to the cost of a theater ticket they would still buy the ticket, but if they lost the ticket itself they would not buy another; or they would travel a certain distance to save five dollars on a calculator, but not to save five dollars on the combination of the calculator and a more expensive jacket. These framing effects became known through self-reports; the role of the experimenters was only to find questions that would elicit them—doubtless by their own introspection—ask these questions, and tabulate the data. What makes it necessary to pose questions like these to a number of people, rather than just putting them before the reader? I would suggest that this surveying process is worth the effort in three circumstances:

1. Where a substantial number of subjects might not report it, or where the reader might not be expected to believe that most subjects would report it. Ultimately thought experiments, like logical arguments, convince insofar as the reader replicates them. However, reader and experimenter might share the elicited introspection and still think it was weak enough or counterintuitive enough that other people may not share it. I have had hundreds of people say that they prefer $100 now to $200 in three years, but $200 in nine years over $100 in six years (or similar reversals of preference), even though this is the same choice seen at a greater distance; but some actual subjects have disagreed, and many reverse preferences at different values (see Ainslie and Haendel 1983). Although the change in preference is widespread and might have been persuasive if presented as a thought experiment, it has taken actual experiments to be sure of it.

2. Where quantification would be useful. Parametric presentation of the amount-versus-delay question has been an important tool in establishing the specific shape of the spontaneous discount curve (Green and Myerson 2004).

3. Where the anticipated answers are illogical or otherwise noticeably ‘‘wrong’’ in some sense. Under those circumstances a reader may or may not be able to
Thought Experiments on Willpower
There have been few thought experiments on self-control, let alone on willpower specifically. Some of those that have been proposed have been heuristics rather than tests of hypotheses. They provide illustrations: ‘‘This is imaginable,’’ rather than what is the only decisive outcome of testing a null hypothesis, ‘‘This cannot be true.’’ An example is Quinn's self-torturer model of addiction (1993), which has been used to illustrate the possibility that addiction can be caused in exponential discounters by an intransitivity of choice between single and summed rewards (Andreou 2005): suppose that you are in a lifelong experiment, in which an implanted device can give you continuous electric shocks ranging from imperceptible (level 0) to excruciating (level 1,000). Every week you can try out different levels, but are then given the choice only of whether or not to increase the shock by a single level, which is imperceptible but permanent, and collect ten thousand dollars each time you do. Quinn argues that you will prefer to increase the shock imperceptibly at each level, even though you would not accept level 1,000 even for a thousand times ten thousand dollars. Thus he has created an analogue of an addict, resembling an overeater, say, who is hungry enough (or emotionally needy enough) to gain an imperceptible 100 grams every week but would never choose to gain a massive 100 kilograms in twenty years. This model illustrates a mechanism of addiction, just as my lecture audience example illustrates a mechanism of intertemporal bargaining, but it is not evidence either for the existence of this mechanism or against the existence of alternatives.1
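The intransitivity Quinn exploits can be rendered as a toy calculation. The sketch below is an illustration under assumed numbers only; the crucial assumption, following the story, is that a one-level difference falls below the just-noticeable difference and so carries zero felt cost, while the summed difference is fully felt.

```python
# Toy rendering of Quinn's self-torturer; all numbers are illustrative.
JND = 5                       # smallest shock difference that can be felt
PAY = 10_000                  # dollars per accepted one-level increase

def true_pain(level):
    return level * 20_000     # assumed dollar-equivalent cost of a shock level

def felt_difference(current, proposed):
    # A gap below the just-noticeable difference registers as no cost at all.
    gap = proposed - current
    return 0 if gap < JND else true_pain(proposed) - true_pain(current)

# Weekly marginal choice: one more (imperceptible) level for $10,000?
accepts_every_step = all(PAY > felt_difference(s, s + 1) for s in range(1000))

# One-shot choice: level 1,000 for a thousand times ten thousand dollars?
accepts_whole_package = 1000 * PAY > felt_difference(0, 1000)

print(accepts_every_step, accepts_whole_package)
# True False: each increment is felt as free money, yet the same chooser
# refuses the summed package, an intransitive pattern of preference.
```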
By contrast with heuristic experiments like this, I am suggesting thought experiments that can rule out null hypotheses: hypotheses that stand as alternatives to a target hypothesis, here the recursive self-prediction model of willpower. To do so, each experiment must show that the null hypothesis cannot handle some intuition elicited by the experiment. As with heuristic thought experiments, and laboratory experiments for that matter, the design of the hypothesis-testing thought experiment must overcome two common shortcomings: it must accurately sample the process to be studied, and it must not confound its findings with interpretations of these findings.
Daniel Dennett has shown that many thought experiments intended to rule out the possibility of consciousness, autonomy, and similarly subtle qualities in a strictly deterministic world persuade only by tacitly assuming a lack of subtlety in the world's design (e.g., Dennett 1984). For instance, he comments on Ayer's attempt to demonstrate with an imaginary world that feelings of responsibility and blame would not withstand awareness that it was ‘‘possible to implant desires and beliefs and traits of character in human beings, [so] that it could be deduced . . . how any person who had been treated in this way would most probably behave in a given situation.’’ Dennett complains that ‘‘those are circumstances in which . . . the actual causation of the imagined beliefs and desires has been rendered drastically cruder,’’ that the imagined conditioning process has to be ‘‘powerful enough to override [the person's] delicate, information-gathering, information-assessing, normal ways of coming to have beliefs and desires’’ (ibid., pp. 33–34). In other words, the intuitions elicited by this thought experiment are not about people but about imaginary creatures. The result of such a sampling error is not a shared intuition but a shared convention of science fiction.
Posner's pampered prisoner example illustrates the hazards of claiming the authority of an intuitive finding for an interpretation of the finding. He has ruled out a need for practical cooperation, since the prisoner's environment has ‘‘every resource.’’ But there is no comparison that demonstrates autonomy to be the missing element, as opposed, say, to intimacy or integrity or another Eriksonian quality, or another quality entirely. A diehard conditioning theorist might even assert Bain's idea (and Watson's, 1924) that we value people only for their association with sensory experience, and that we cannot currently imagine how much we would like being a pampered prisoner once we had gotten used to it. The finding of the experiment is just our empathic sense that the pampered prisoner would be lonely. Explanation of this finding requires interpretation—which, of course, is also true of the most positivistic controlled experiment.
In Dennett's view the unjustified simplification of intuition-based thought experiments, ‘‘intuition pumps,’’ allows them ‘‘to entrain a family of imaginative reflections in the reader that ultimately yields not a formal conclusion but a dictate of ‘intuition' ’’ (1984, p. 12). Thus he may mean that all procedures intended to elucidate intuition are misleading, are
pumps. But it is also possible that the problem is not the fundamental unreliability of intuition, but the lack of rigor, which he himself has uncovered, in identifying assumptions, in both the design and the interpretation of such experiments. Assumptions in design prevent these thought experiments from presenting a true sample of the phenomenon under study. Assumptions in interpretation confound what might have been a valuable intuition with additional, untested elements. If thought experiments are ever to serve as means of empirical observation, their degree of deviation from the experiences that presumably formed the reader's intuitions should be limited and clear, the null hypothesis should be specific, and the interpretation of the finding separated from the finding itself. I have selected and interpreted the following thought experiments with these principles in mind.
One more introductory idea: the most useful thought experiments test not just hypotheses but assumptions underlying hypotheses. The greatest obstacles to scientific progress have not been questions badly answered, but questions not asked. Assumptions have always constricted the asking process—for instance, that something solid must hold the planets up and must be both spherical to prevent snags and transparent to show the stars; that there is no way for blood to move between arteries and veins; or that organic matter has a fundamentally different quality from inorganic. These assumptions have plenty of equivalents in modern behavioral science. Elsewhere I have criticized the assumptions (among others) that choices have inertia (that they stay constant until acted upon by new information), that aversiveness is the opposite of rewardingness, that involuntary behaviors cannot be based on motivation, and that ‘‘the’’ will is an inscrutable organ governed by a higher choice-making executive (reviewed in Ainslie 2001, 2005b). The last assumption is especially impenetrable and has elements of all three of the historical assumptions I just mentioned: that the will must be held together by something more solid than sheer incentive, that it can't be dissected and therefore must remain a black box, and that it has a nature that, if not immaterial, is at least outside the chain of causation as otherwise understood. The will is the last seat of vitalism. Thought experiments that can help penetrate this black box will be especially valuable.
We can now return to the second component of the intertemporal bargaining hypothesis, which challenges all the elements of this assumption about willpower. The source of willpower is not a force more powerful than motivation; willpower can be broken into steps, and it does not stand outside ordinary causality. Willpower is simply the perception of current choices as test cases for expectations of future choices. As I have
said, controlled experimentation has been only suggestive. Four thought experiments—or three and a conundrum—call into question assumptions that contradict this hypothesis: Monterosso's problem, Kavka's problem, the free will conundrum, and Newcomb's problem.
Monterosso's Problem
Monterosso's problem is the simplest:
Consider a smoker who is trying to quit, but who craves a cigarette. Suppose that an angel whispers to her that, regardless of whether or not she smokes the desired cigarette, she is destined to smoke a pack a day from tomorrow on. Given this certainty, she would have no incentive to turn down the cigarette—the effort would seem pointless. [Whether she had such an incentive has been put as a question to lecture audiences, resulting in a resounding ‘‘no.’’] What if the angel whispers instead that she is destined never to smoke again after today, regardless of her current choice? Here, too, there seems to be little incentive to turn down the cigarette—it would be harmless. [Again, audiences say there was no such incentive.] Fixing future smoking choices in either direction (or anywhere in between) evidently makes smoking the dominant current choice. Only if future smoking is in doubt does a current abstention seem worth the effort. But the importance of her current choice cannot come from any physical consequences for future choices; hence the conclusion that it matters as a precedent. (Monterosso and Ainslie 1999)
Here imagination adds a single element to real life: certainty about the person's future smoking. The null hypothesis is that a present choice has no necessary motivational relationship with future choices. The finding is the strong conclusion that a person with certainty about her future smoking has no incentive not to smoke today, whichever direction the certainty takes—which is contrary to the null hypothesis. I interpret the finding to mean that self-control on the current day of a self-control plan seems necessary only to influence future choices. That is, if we imagine that our current choice will have no influence on future choices, we get a sense of release—and from what, I would ask, if not from the felt need to avoid setting a bad example for yourself?
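Why should a choice that is valueless as a precedent lose its motivational force? The arithmetic behind this interpretation can be sketched with hyperbolic curves. In the toy calculation below (parameters are arbitrary illustrations, not data), an immediate smoking reward beats a larger delayed health reward when today's choice is taken in isolation, but loses when that choice is treated as predicting the next hundred daily choices, so that whole series of discounted rewards are compared (the bundling effect studied experimentally by Ainslie and Monterosso 2003).

```python
# Illustrative sketch of precedent-setting as bundling; parameters arbitrary.
K = 1.0  # hyperbolic discount-rate parameter (Mazur's V = A / (1 + k * D))

def value(amount, delay_days):
    return amount / (1.0 + K * delay_days)

# One isolated choice: smoke now (immediate relief, 10 units) versus abstain
# (a larger health payoff of 30 units, assumed to be felt five days out).
print(value(10, 0.1) > value(30, 5))   # True: in isolation, smoking wins

# The same choice seen as a test case for the next 100 daily choices:
smoke_series = sum(value(10, 0.1 + day) for day in range(100))
abstain_series = sum(value(30, 5 + day) for day in range(100))
print(smoke_series > abstain_series)   # False: the bundled series favors abstaining
```

Certainty in either direction, as in the angel's whisper, detaches the bundle from today's choice and leaves only the isolated comparison, in which the temptation wins.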
Kavka's Problem
Unlike Monterosso's problem, Kavka's problem has disquieting implications: a person is offered a large sum of money just to intend to drink an overwhelmingly disgusting but harmless toxin. Once she has sincerely intended it, as verified by a hypothetical brain scan, she's free to collect the money and not actually drink the toxin (Kavka 1983). Philosophical discussion has revolved around whether the person has any incentive to actually drink the toxin once she has the money—a majority of audience members initially say that she does not—and whether, foreseeing a lack of such motive, she can sincerely intend to drink it in the first place, even though she would drink it if that were still necessary to get the money. Having had this pointed out, actual audiences tend to get raucous and search for ways that intention is still possible, usually not involving the need for a reassessment of whether to drink the toxin.
In the spirit of reducing unreal elements in thought experiments, it is desirable to replace the brain scan, even though such a capability is probably not far off. I have re-cast the problem in entirely realistic terms: say, you're a mediocre movie actor, and a director casts you, with some misgivings, to play a pipsqueak who gets sent down a terrifying toboggan run. You don't have to go down the run yourself—the director is perfectly happy to have one of the stunt men do it—but you have to play a scene right beforehand in which you are frightened out of your wits. You realize that you cannot fake the necessary emotion, but also that you are genuinely terrified of the toboggan run. The role is your big break, but if you cannot do it convincingly, the director will fire you. Under these circumstances you think it's worth signing up to do the run yourself in order to ace the preceding scene. But if, after playing this scene, you find out you can still chicken out of the toboggan run, is it rational to do so? And if so, will not your realization of that spoil your acting in the previous scene? Much the same discussion ensues as with the toxin, except that someone sometimes comes up with an answer like: if the anticipation scene had to be shot over, and you had chickened out of the run the first time, it would be hard for you to believe any intention to go through with it next time. This answer is a partial solution, but what if you knew that the scene was a wrap?
The null hypothesis of this thought experiment is that volition affects choices but is not affected by them in turn. So your future ability to will difficult behaviors is not affected by any information about this ability that a current choice may convey to you. The findings are not the answers to Kavka's questions, which are usually garbled. Rather the two findings are (1) our lack of comfort with either answer—the sense that we may be doing the wrong thing by reneging but we cannot put our finger on why—and (2) our inability to find a clear reason why intention is possible at all in such a situation. I interpret these findings to show that there is a conceptual piece missing in the common theory of how people intend difficult behaviors. The null hypothesis is wrong. It is not possible to intend to drink or toboggan if you expect to renege, and conversely. Beyond this interpretation, I
argue that hyperbolic discounting makes it possible to commit yourself, more or less, not to renege. You do this by putting up a pledge of sufficient value, and the only pledge available to put up irrevocably in this situation is the credibility of your pledges in difficult situations in the future. This kind of pledge is recursive: the more you believe that you will keep it, the more you can keep it and the more you will subsequently believe; the less you believe you will keep it, the less you can keep it, and so on. The current pledge need not put all future pledges at risk. However, if you intend it to include only choices involving toboggan runs, you will probably expect it to be inadequate from the start, and have to throw in more collateral, as it were, such as the credibility of your intentions to face fears in general, if you are to play that scene with conviction. This description makes the process sound deliberate. For most people, however, it may be better described thus: if you notice that the toboggan choice is similar to other choices where you face major fears, you can expect to believe your intentions less in these cases if you intend and then renege in the present case. You might regard toboggan runs as a special case, of course, perhaps because of the uniqueness of the movie situation, but a future self might or might not exclude this example from her estimate of the credibility of a future intention. Someone who objects to this interpretation needs to propose an alternative mechanism for how a person can expect not to renege in this kind of situation. Someone who needs experimental confirmation of the original finding will need to find naïve subjects: once I have suggested the above logic to an audience they soon accept it as the solution, and stop showing the telltale difficulties about how there can be intention when it is possible to renege. I take this development, too, to be confirmatory, but it is no longer a thought experiment.
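The recursive character of such a pledge admits a minimal formal rendering. In the sketch below, which is an illustration under an assumed logistic functional form rather than a claim about real parameters, belief in keeping the pledge raises the probability of keeping it, the expected outcome feeds the next belief, and the loop settles at either a high- or a low-credibility fixed point depending on where it starts.

```python
import math

# Minimal sketch of a recursive pledge: credibility is self-confirming.
# The steeply belief-dependent resolve function is an assumed logistic form.

def keep_probability(belief):
    return 1.0 / (1.0 + math.exp(-10.0 * (belief - 0.5)))

def settle(belief, rounds=50):
    for _ in range(rounds):
        belief = keep_probability(belief)  # expectation tracks expected performance
    return belief

print(round(settle(0.6), 3))  # ~0.99: modest initial confidence compounds upward
print(round(settle(0.4), 3))  # ~0.01: modest initial doubt compounds downward
```

The two attractors mirror the text's description: the more you believe you will keep the pledge, the more you can keep it, and the less you believe it, the less you can.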
Free Will
The conundrum of free will must have the largest N of any puzzle on record, even if we count only publications on it. But it is essentially a thought experiment without a counterfactual story. The paradox exists in real life. In brief, we feel that our choices are not determined by the incentives we face unless we ‘‘let’’ them be so determined, and that the basis for letting them or not is imponderable, even to ourselves. On the other hand, random indeterminacy, such as the kind that has been supplied by atomic physics (Landé 1961), doesn't feel like the basis for our choices either (Garson 1995). Our experience of free will seems to contradict our belief in universal causality. The null hypothesis, if I may characterize this conundrum as having one, is that choices must either be caused by antecedent factors in linear fashion or not be strictly caused at all.
As in Kavka's problem, the finding here is not any of the solutions that people try out but the discomfort that the question causes. We depend both on our belief in physical causality and on our belief that what we will is not linearly caused, and do not feel that we should have to choose between these beliefs. I interpret this finding, too, as showing a missing piece in our concept of will. It would be impossible to try out all possible pieces to add, but the one suggested by hyperbolic discounting and its implied intertemporal bargaining seems to fit well with both sides. It preserves strict causal determinism, but it also makes individual choices imponderable, for the same reason that they cannot be dissected by controlled experiments: they are recursive—internally fed back—and sensitively dependent on small shifts in the bookkeeping of expectation. The concepts of chaos theory have been suggested before as the root of experienced freedom of the will (Garson 1995), but they have been rejected for the same reason as random indeterminacy: ‘‘If chaos-type data can be used to justify the existence of free will in humans, they can also be used to justify the existence of free will in chaotic pendulums, weather systems, leaf distribution, and mathematical equations’’ (Sappington 1990). That is, chaotic processes that do not somehow engage what feels like ourself will still be experienced as random, ‘‘more like epileptic seizures than free responsible choices’’ (Kane 1989, p. 231). I hypothesize that the location of recursiveness within the will itself supplies the missing sense of ownership. Although we can only guess at our future choices, the fact that these guesses change the incentives that govern those choices creates the compound contingencies that feel like will. By our vigilance about those choices we are actively participating in the choice process, all the more so because of our genuine suspense as to the outcome.
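That strict determinism can coexist with practical imponderability is easy to exhibit formally. The sketch below uses the logistic map purely as a generic stand-in for any recursive, internally fed-back process; it is not offered as a model of volition, only as a demonstration that a fully deterministic recursion is sensitively dependent on tiny shifts in its starting value.

```python
# Generic demonstration (logistic map as a stand-in, not a model of the will)
# that a deterministic but recursive quantity resists prediction in practice.

def step(x, r=3.9):
    return r * x * (1.0 - x)   # a standard chaotic recursion

a, b = 0.500000, 0.500001      # starting values differing by one millionth
for _ in range(40):
    a, b = step(a), step(b)
print(abs(a - b))              # typically of order 0.1 to 1: the millionth
                               # has grown to the scale of the whole range
```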
Newcomb's Problem
Discussions of free will have often involved Newcomb's problem, one of the most irritating thought experiments to people who mistrust them (e.g., Rachlin 2005), perhaps because it postulates a being who knows what you will choose before you do, something harder than usual to picture. Nevertheless, it has had great power to provoke debate, which suggests that it evokes some meaningful part of our choice-making process.
A being in whose power to predict your choices correctly you have great confidence is going to predict your choice in the following situation. There are two boxes, B1 and B2. Box B1 contains $1,000; box B2 contains either $1,000,000 ($M) or nothing. You have a choice between two actions: (1) taking what is in both boxes; (2) taking only what is in the second box. Furthermore, you know, and the being knows you know, and so on, that if the being predicts you will take what is in both boxes, he does not put the $M in the second box; if the being predicts you will take only what is in the second box he does put the $M in the second box. First the being makes his prediction; then he puts the $M in the second box or not, according to his prediction; then you make your choice. Since the money is already in place before you make your choice, orthodox decision theory says that you should choose (1)—both boxes—and get $1,000 more than if you chose (2), whatever the being had decided. The problem is that you believe the being to have anticipated this and to have left nothing in B2; since you feel perfectly able to choose B2 alone, and can believe the being to have anticipated this as well, you may increase your present expectation by choosing B2. (Nozick 1993, p. 41)
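The pull of the two answers shows up in the bare bookkeeping. The sketch below computes expected payoffs on the self-diagnostic (evidential) reading, under my illustrative assumption that the being's prediction is correct with probability p; the orthodox causal reading, which treats the box contents as already fixed, favors two-boxing at every p.

```python
# Expected-value bookkeeping for Newcomb's problem, assuming (for
# illustration only) the being predicts correctly with probability p.

M, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    return p * M                  # $M is present just when you were so predicted

def ev_two_box(p):
    return (1 - p) * M + SMALL    # $M is present only if the being guessed wrong

for p in (0.5, 0.6, 0.9, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
# One-boxing pulls ahead once p exceeds about 0.5005, even though, with the
# boxes already filled, taking both is always worth exactly $1,000 more.
```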
The null hypothesis here is that we will maximize the reward that is literally at stake within the stated problem, and thus unambivalently prefer the two-box option. The finding is our strong hunch that we should pick the single box, even though we can't affect the contents and two boxes will always contain $1,000 more than one box. Shafir and Tversky took the trouble to present this problem to college student subjects, and, not surprisingly, elicited the same hunch (1992). These authors interpreted the finding as evidence of their subjects' magical thinking—their belief that what could objectively be only self-diagnostic thinking could become causal thinking, in that by presenting the appearance of being a one-boxer they would cause the $M to be in that box. It could be that we all have a magical streak, of course, but it seems more likely that we assimilate this unfamiliar situation to some familiar one. The Newcomb problem is an isolated choice with no history and no future, foreign to our experience. If this strange situation is to have any emotional meaning for us at all, we have to look for a similar framework in our experience. I offer an alternative to the magical thinking hypothesis: that we are intuitively familiar with a kind of diagnostic thinking that is also causal, and apply it to the Newcomb presentation. Everyone is familiar with problems that have almost exactly these terms, differing only in their repetitiveness and in the personal salience of the payoffs. Make the first box contain your aggregated expectation of reward over time for resisting your greatest temptation, the second your reward for giving in to it only this time. The being in whose predictiveness you
have great confidence is yourself. It is literally possible to get the contents of both boxes, but this situation, involving as it does your greatest temptation, is not new to you. You know by now whether you are a one- or two-boxer in this situation, and this knowledge governs both your choice (knowing you are a two-boxer undermines any resolve not to give in to your temptation) and the contents of box B2 (empty for the two-boxer, full for the one-boxer). Knowing that you are a one-boxer fills the box with expectations of reward (= the present value of the aggregated actual rewards to come); knowing that you are a two-boxer empties it. Bundling choices not only converts intertemporal conflicts to conflicts of principle, it converts actions to traits, which are choice tendencies with momentum: insofar as you have habitually chosen two boxes you are a two-boxer, a trait that governs your choices unless some factor happens to balance one of them closely. If you know you are a two-boxer you will be unable to motivate yourself to choose only B2—if you know you're a drunkard, there's no point in trying to have one sober day—thus confirming the judgment of the predicting being. If you try to be a one-boxer and succeed, then you must be a one-boxer, also confirming the being's judgment. In the isolated example of the thought experiment, being a one-boxer is easy. In real life it takes the effort of resisting temptation. There are hard things you can do to overcome a two-boxer trait, and tempting things that will wreck a one-boxer trait, but Newcomb's problem elicits the motives contingent on recursive self-diagnosis without this ambiguous middle ground.
Newcomb intended his problem as a model of the Calvinist theory of sin and salvation, whose anti-impulsive effects have also been said to arise from magical thinking—seeking to believe you are saved when your behavior will have no effect on the actual fact (Weber 1904/1958, p. 115). However, I have argued against the magical thinking interpretation here, too:
Under such a belief system doing good is sometimes held to be a superstitious behavior, in that it is purely diagnostic, so that emitting it for the sake of seeing oneself emit it is fooling oneself (e.g., Quattrone and Tversky 1986). The several authors who have pointed this out do not consider an important possibility: doing good for its diagnostic value may not invalidate this diagnostic value. That is, if one can do good for any reason it may be valid evidence of being among the saved; such a situation would not contradict predestination, but only provide another mechanism through which destiny might act. The expectation of salvation might be self-confirming, and the ability to maintain such an expectation might be the defining trait of the saved. This is much the same thing as saying that a higher power will grant sobriety to some alcoholics, and that acknowledgement of one's helplessness against alcohol is a sign that a person may be one of those who will receive this
favor. Such a shift in a person’s concept of causality is not casuistry. It marks the formation of a wider personal rule, one which apparently deters the hedging that weakens personal rules whenever they are recognized as such. (Ainslie 1992, pp. 203–204)
As with the conventional Prisoner's Dilemma game, what is realistic play in the repeated game is not realistic in the one-shot version, but we rarely encounter the one-shot version. Our instinct is to follow the strategy of repeated play.
Conclusion
These four thought experiments produce little specific information about choice in the face of temptation—only one finding each, two in the case of Kavka's problem. Their value is that these findings are anomalous both for RCT (rational choice theory), in which choice simply tracks expected reward, and for the conventional picture of will as an organ that can send orders to lower processes without having to recruit incentive from among these processes. Neither of these is a theory of will; in fact our culture's assumptions about will are antithetical to theories of it: based only on the difficulty of observing it, we have assumed that it has an organ, as other life functions do, that this organ cannot be subdivided into component processes, and that it is outside of the chain of causality that governs everything else. If will is to be a subject for behavioral science, we must have a theory that is not disturbed by these four thought experiments. And to have such a theory, we cannot be bound by these three assumptions. I have proposed intertemporal bargaining theory, an expectable outgrowth of the now well-established phenomenon of hyperbolic discounting, as a basis for both the strength and freedom of the will. This model was not dictated by these thought experiments, but these experiments have revealed intuitive problems with theories based on the currently dominant assumptions, problems that do not arise with the intertemporal bargaining model. Given the intrinsic difficulty of studying intertemporal bargaining by controlled experiment, the very compatibility of this model with these findings is informative. As this bottom-up approach to motivation comes to address higher level mental processes, means of accessing common intuitions about them in a way that can test compatibility will be increasingly important. Suitably focused thought experiments look like a promising addition to the conventional methods of behavioral science.
Note
1. In the terms of the example, you know the contingencies in advance, so it is not an illustration of insidious onset—Herrnstein and Prelec's ‘‘primrose path’’ (1992). I think it would take a survey to see whether many knowing subjects would prefer 1/1,000 of a fortune to avoidance of 1/1,000 of agony if they preferred avoidance of the total agony to the total fortune. I personally doubt that intransitivity alone is an adequate explanation of preference for the weekly addictive increments, without either ignorance of the contingencies or the disproportionate weighting of imminent reward predicted by hyperbolic discounting.
References
Ainslie, G. 1987. Self-reported tactics of impulse control. International Journal of the Addictions 22: 167–79.
Ainslie, G. 1992. Picoeconomics: The Strategic Interaction of Successive Motivational States within the Person. Cambridge: Cambridge University Press.
Ainslie, G. 2001. Breakdown of Will. New York: Cambridge University Press.
Ainslie, G. 2005a. You can't give permission to be a bastard: Empathy and self-signalling as uncontrollable independent variables in bargaining games. Behavioral and Brain Sciences 28: 815–16.
Ainslie, G. 2005b. Précis of Breakdown of Will. Behavioral and Brain Sciences 28: 635–50.
Ainslie, G., and V. Haendel. 1983. The motives of the will. In E. Gottheil, K. Druley, T. Skodola, and H. Waxman, eds., Etiologic Aspects of Alcohol and Drug Abuse, pp. 119–40. Springfield, IL: Charles C. Thomas.
Ainslie, G., and J. Monterosso. 2003. Building blocks of self-control: Increased tolerance for delay with bundled rewards. Journal of the Experimental Analysis of Behavior 79: 83–94.
Andreou, C. 2005. Going from bad (or not so bad) to worse: On harmful addictions and habits. American Philosophical Quarterly 42: 323–31.
Baars, B. J. 1986. The Cognitive Revolution in Psychology. New York: Guilford.
Baumeister, R. F., and T. Heatherton. 1996. Self-regulation failure: An overview. Psychological Inquiry 7: 1–15.
Brasil-Neto, J. P., A. Pascual-Leone, J. Valls-Sole, L. G. Cohen, and M. Hallett. 1992. Focal transcranial magnetic stimulation and response bias in a forced choice task. Journal of Neurology, Neurosurgery, and Psychiatry 55: 964–66.
Bratman, M. E. 1999. Faces of Intention: Selected Essays on Intention and Agency. Cambridge: Cambridge University Press.
Burgess, P. W., S. J. Gilbert, J. Okuda, and J. S. Simons. 2006. Rostral prefrontal brain regions (area 10): A gateway between inner thought and the external world? In N. Sebanz and W. Prinz, eds., Disorders of Volition, pp. 373–96. Cambridge: MIT Press.
Dennett, D. C. 1984. Elbow Room: The Varieties of Free Will Worth Wanting. Cambridge: MIT Press.
Garson, J. W. 1995. Chaos and free will. Philosophical Psychology 8: 365–74.
Glimcher, P. 2005. Neural mechanisms of temporal discounting in humans. Paper presented at the Third Annual Meeting of the Society for Neuroeconomics, Kiawah Island, SC, September 16.
Green, L., and J. Myerson. 2004. A discounting framework for choice with delayed and probabilistic rewards. Psychological Bulletin 130: 769–92.
Hauser, L., and B. Abbott. 1995. Natural language and thought: Point and counterpoint. Behavior and Philosophy 23: 41–55.
Herrnstein, R. J., and D. Prelec. 1992. A theory of addiction. In G. Loewenstein and J. Elster, eds., Choice over Time, pp. 331–60. New York: Russell Sage.
Heyman, G. M. 1996. Resolving the contradictions of addiction. Behavioral and Brain Sciences 19: 561–610.
Hollis, M., and R. Sugden. 1993. Rationality in action. Mind 102: 1–35.
Iacoboni, M., R. P. Woods, M. Brass, H. Bekkering, J. C. Mazziotta, and G. Rizzolatti. 1999. Cortical mechanisms of imitation. Science 286: 2526–28.
James, W. 1890. Principles of Psychology. New York: Holt.
Kahneman, D., and A. Tversky. 1984. Choices, values, and frames. American Psychologist 39: 341–50.
Kane, R. 1989. Two kinds of incompatibilism. Philosophy and Phenomenological Research 50: 220–54.
Kant, I. 1793/1960. Religion Within the Limits of Reason Alone, trans. T. Green and H. Hucken, pp. 15–49. New York: Harper and Row.
Kavka, G. 1983. The toxin puzzle. Analysis 43: 33–36.
Kirby, K. N. 1997. Bidding on the future: Evidence against normative discounting of delayed rewards. Journal of Experimental Psychology: General 126: 54–70.
Kirby, K. N., and B. Guastello. 2001. Making choices in anticipation of similar future choices can increase self-control. Journal of Experimental Psychology: Applied 7: 154–64.
Kuhl, J. 1994. Motivation and volition. In G. d'Ydewalle, P. Bertelson, and P. Eelen, eds., International Perspectives on Psychological Science, vol. 2, pp. 311–40. Hillsdale, NJ: Erlbaum.
Lachman, R., J. L. Lachman, and E. C. Butterfield. 1979. Cognitive Psychology and Information Processing: An Introduction. Hillsdale, NJ: Erlbaum.
Landé, A. 1961. The case for indeterminism. In S. Hook, ed., Determinism and Freedom in the Age of Modern Science, pp. 83–89. New York: Collier.
Libet, B. 1999. Do we have free will? Journal of Consciousness Studies 6: 47–57 (nos. 8–9 bound as The Volitional Brain: Towards a Neuroscience of Free Will, B. Libet, A. Freeman, and K. Sutherland, eds. Thorverton, UK: Imprint Academic).
London, E. D., J. Monterosso, T. Mann, A. Ward, G. Ainslie, J. Xu, A. Brody, S. Engel, and M. Cohen. 2006. Neural activation during smoking self-control. Poster at the 68th annual meeting of the College on Problems of Drug Dependence, Scottsdale, AZ, June 20.
Mazur, J. E. 1997. Choice, delay, probability, and conditioned reinforcement. Animal Learning and Behavior 25: 131–47.
McClennen, E. F. 1990. Rationality and Dynamic Choice. New York: Cambridge University Press.
McClure, S. M., D. I. Laibson, G. Loewenstein, and J. D. Cohen. 2004. The grasshopper and the ant: Separate neural systems value immediate and delayed monetary rewards. Science 306: 503–507.
Miller, W. R. 2003. Comments on Ainslie and Monterosso. In R. Vuchinich and N. Heather, eds., Choice, Behavioural Economics, and Addiction, pp. 62–66. Oxford: Pergamon Press.
Monterosso, J., and G. Ainslie. 1999. Beyond discounting: Possible experimental models of impulse control. Psychopharmacology 146: 339–47.
Monterosso, J., G. Ainslie, P. Toppi Mullen, and B. Gault. 2002. The fragility of cooperation: A false feedback study of a sequential iterated Prisoner's Dilemma. Journal of Economic Psychology 23: 437–48.
Nozick, R. 1993. The Nature of Rationality. Princeton: Princeton University Press.
Posner, E. A. 2000. Law and Social Norms. Cambridge: Harvard University Press.
Quattrone, G., and A. Tversky. 1986. Self-deception and the voters' illusion. In J. Elster, ed., The Multiple Self, pp. 35–38. Cambridge: Cambridge University Press.
Quinn, W. 1993. The puzzle of the self-torturer. In P. Foot, ed., Morality and Action, pp. 198–209. Cambridge: Cambridge University Press.
Rachlin, H. 1995. Self-control: Beyond commitment. Behavioral and Brain Sciences 18: 109–59.
Rachlin, H. 2005. Problems with internalization. Behavioral and Brain Sciences 28: 658–59.
Robins, L. N., and D. A. Regier. 1991. Psychiatric Disorders in America. New York: Free Press.
Ryle, G. 1949/1984. The Concept of Mind. Chicago: University of Chicago Press.
Sappington, A. A. 1990. Recent psychological approaches to the free will versus determinism issue. Psychological Bulletin 108: 19–29.
Schelling, T. C. 1960. The Strategy of Conflict. Cambridge: Harvard University Press.
Shafir, E., and A. Tversky. 1992. Thinking through uncertainty: Nonconsequential reasoning and choice. Cognitive Psychology 24: 449–74.
Skinner, B. F. 1953. Science and Human Behavior. New York: Free Press.
Sorensen, R. A. 1992. Thought Experiments. New York: Oxford University Press.
Sully, J. 1884. Outlines of Psychology. New York: Appleton.
Watson, J. B. 1924. Behaviorism. New York: Peoples Institute Publishing.
Weber, M. 1904/1958. The Protestant Ethic and the Spirit of Capitalism. New York: Charles Scribner's Sons.
Wegner, D. M. 2002. The Illusion of Conscious Will. Cambridge: MIT Press.
10 The Economic and Evolutionary Basis of Selves
Don Ross
Introduction
This chapter forms part of a larger inquiry, whose main product to date is a book (Ross 2005) on the integration of mainstream microeconomic theory with the other cognitive and behavioral sciences. To fix the context for the chapter, I first sketch some of the book's conclusions and their bases. The core contention is that new empirical and conceptual insights flowing out of the cognitive-behavioral sciences, including behavioral economics (e.g., Camerer et al. 2003), should not stampede us into rejecting the formal framework of postwar neoclassical analysis, as has lately been urged by many authors (see especially Lawson 1997; Sugden 2001; Hodgson 2001).1 This is not because neoclassical theory gives a more promising account of human behavior than these authors think, or because it incorporates sound ontological principles for thinking about people. Rather, it is because neoclassical theory, properly understood, is not directly about any specific kind of behavior, and rests on no ontological commitments more definite than the idea that agents can be analytically distinguished from one another. Radical methodological criticism of neoclassical theory generally rests on reading into it strong positive theses about individualism, and about human capacities for optimization and implementation of procedural rationality. I argue in the book that these readings confuse messages with messengers: from the fact that many theorists have exhibited these commitments, it does not follow that the theory itself incorporates them.2 If we are scrupulous about letting the mathematics do the talking, we can recover an adequate—indeed very useful—analytical and conceptual framework from neoclassicism that strengthens, rather than introduces false assumptions into, behavioral inquiry and is fully compatible with the empirical picture of people we are getting from the behavioral sciences.
In this chapter my concern is only with individualism. It is true that all the early neoclassical theorists (e.g., Walras, Jevons, Fisher) were individualists—indeed they took for granted a quite extreme form of early modern atomism, about people and everything else. But then, so did most of their contemporaries in every branch of inquiry, including Karl Marx (Elster 1985), though not their predecessor Adam Smith, as many are now coming to appreciate (Sugden 2000; Rothschild 2001). Many postwar economists have been normative individualists, but this is a quite different thesis from the descriptive individualism—and its denial—that interests me here. In the postwar formalism of neoclassicism there is no thesis at all about who or what agents are; an agent is simply anything whose behavior is well modeled within the constraints of a small set of consistency axioms. How long any given agent, so defined, lasts through a process is also a question on which the mathematics forces no stake. So individual humans can become new agents whenever their preferences change. Agents need not be internally simple—as people are not—so they can, in principle, be firms or households or whole countries or any other sort of unit that acts teleologically. I argue in the book (and in Ross 2002) that baseline or prototypical cases of economic agency should indeed be simple—insects are good prototypes—but not for atomistic or reductionist reasons. The motivation, instead, is that such agents don't raise complications due to apparent preference reversals over their biographies; an entire biological bug does map relatively neatly onto representation as a single agent. Attention to properties of these baseline agents helps us fix state variables for use in more difficult extensions of the formalism to nonprototypical—more complex—agents like people and communities of people. Decades of experimental research by behavioral economists show that individual biological instances of H. sapiens briefly instantiate particular economic agents only episodically, and thus resemble countries more than they do stable microeconomic units like bugs.
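The episodic-agent picture can be made concrete. In the sketch below (my encoding, offered for illustration only; the argument does not depend on this particular axiom or implementation), a choice history is segmented into maximal stretches satisfying the weak axiom of revealed preference (WARP), and each stretch is read as one agent:

```python
# Illustrative segmentation of a choice history into WARP-consistent
# "agent episodes". Each observation is (chosen_item, menu). WARP, in its
# simple form: if x is ever chosen when y is available, y must not later
# be chosen when x is available.

def agent_episodes(history):
    episodes, current, revealed = [], [], set()   # revealed holds (better, worse)
    for chosen, menu in history:
        rejected = set(menu) - {chosen}
        if any((worse, chosen) in revealed for worse in rejected):
            # Reversal: close the episode and start a "new agent".
            episodes.append(current)
            current, revealed = [], set()
        current.append((chosen, menu))
        revealed |= {(chosen, worse) for worse in rejected}
    episodes.append(current)
    return episodes

history = [
    ("apple", {"apple", "beer"}),
    ("apple", {"apple", "beer", "cake"}),
    ("beer", {"apple", "beer"}),   # contradicts the first choice
    ("beer", {"beer", "cake"}),
]
print(len(agent_episodes(history)))  # 2: one agent before the reversal, one after
```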
Finally, I also argue in the book that nonprototypical agents are not mereological compositions of prototypical agents, thus hopefully blocking any temptation to read my suggestions about prototypical agency atomistically.
Debates over these issues should not be encouraged to degenerate into semantic exercises. If someone wants to reserve the adjective ‘‘neoclassical’’ for a certain sort of ideological position, and say that any framework that doesn't encourage that ideological spin is then anti-neoclassical, I would prefer not to quarrel over application of the label. What I want to concentrate on here are some substantive alternatives in modeling that are obscured when core neoclassical formalism is read as if individualism were built into it. This encourages inferences that begin from the premise that (formal) neoclassical economic agents must be equivalent to individual human selves, work through a second premise that individual human willpower is not the main causal engine of human behavioral patterns, and reach the conclusion that selves are epiphenomena insofar as economic causation and explanation are concerned. Variations on this inference are much in vogue, especially in the precincts of Santa Fe. The reasoning has been given explicit expression by Satz and Ferejohn (1994) and Sugden (2001). Their modus ponens has also been expressed as a modus tollens argument by Davis (2003), who first criticizes the idea that selves could be economic epiphenomena, and then moves backward through the premises above, accepts the second one, and so concludes that the alleged neoclassical conception of agency should be rejected. I agree with Davis that selves are causally significant to human behavioral patterns, including economic ones, but since I reject the inference, I need not accept his reductio against the value of neoclassical theory. But I do accept the inference's second premise. Human behavioral patterns are mainly social and collective phenomena.
There are several complementary argument paths by which one can aim to keep this second premise of the inference, thus strongly denying descriptive individualism, while rejecting the inference's conclusion. In my 2005 book I explore historical and conceptual paths, and show how these can help us reinterpret experimental results from behavioral economics and cognitive psychology that have seemed to many to cast doubt on the significance of the self. Here I will emphasize and extend one particular path, that of showing how we can model—and explain the stabilizing function of—selves in an ontological context that treats communities as logically prior for human behavioral explanation, while breaking none of the rules of the neoclassical formalism. The details of what is at stake in this enterprise of recovery will be indicated as I go along, but the main issue can be summarized up front as follows. We should not suppose that we face a choice between an individualistic neoclassical economics and a non-individualistic, heterodox economics that eliminates selves as loci of causal significance. Instead, we can use a refined form of neoclassical analysis to explain how selves evolved to stabilize relationships of economic exchange.
Individualistic and Non-individualistic Models
Social and economic theory in the individualist tradition has tended to take stable selves—used here to mean approximately individual human
personalities—as given, and then understand socialization as describable by some function that aggregates or otherwise composes these selves. Individualism may be thought of as coming in stronger or weaker versions depending on the nature of this function. Where the function is conceived to be linear, as in many neoclassical models, selves preserve their properties intact under socialization. The view that neoclassical models must be restricted to such functions—a view endorsed in their philosophical moments by a number of economists, though fewer than one might expect—has contributed to the interpretation of neoclassicism as necessarily individualistic. Models in the mainstream sociological and social-psychological traditions, on the other hand, often (but usually only implicitly) use nonlinear composition functions. Marxism is less likely to be read as committed to individualism than is neoclassicism, despite the explicit individualism of its founder, because Marxist accounts typically invoke feedback from social phenomena that dynamically modifies the properties of selves. However, such models are still individualistic in what I will call the weak sense because they depict socialized identities as excretions of, or ‘‘wraparounds’’ (Clark 2002) to, pre-social selves. That is, they take individuals to be logically and ontogenetically prior to interactive networks. This distinction between strong and weak individualism, at least in economics, moves less furniture than is often supposed, because the models built around strong individualist assumptions can incorporate feedback with little strain. Individualistic economists and so-called analytic Marxists thus have no serious difficulty talking to one another in a shared formal idiom (Roemer 1981).
I will here take genuine non-individualism to be based on two increasingly widespread, and closely related, theses from cognitive-behavioral science. According to the first thesis (see Clark 1997, 2004; Tomasello 2001; Sterelny 2004; Ross 2005), human personalities—selves, that is—have been made phylogenetically possible and normatively central through the environmental manipulations achieved collectively by humans over their history, while particular people are ontogenetically created by cultural dynamics unfolding in this context. According to the second thesis, individual people are themselves systems governed by distributed-control dynamics (Schelling 1980; Dennett 1991; Clark 1997; Ainslie 2001; Ross 2005), and so must for various explanatory and predictive purposes be modeled as bargaining communities. These theses together imply that adequate models of people—and not just of groups of people—will be social-dynamic models through and through.
Note that interesting individualism cannot just be the truism that biological individuals are importantly distinct from one another in all sorts of ways, so interesting anti-individualism is not the implausible denial of that truism. Furthermore, descriptive anti-individualism is made interesting by normative individualism: it is because we are normatively interested in human personalities, and not mainly in human organisms,3 that the logical and ontogenetic basis of these personalities in social dynamics, and their implementation as bargaining communities, are such striking things to recognize.
The increasingly popular way of formally representing non-individualist models, both of communities of people and of the communities of interests that instantiate people, involves taking the communities in question to be dynamic systems. Such models are developed and promoted in a diverse range of literatures, including evolutionary game-theoretic (EGT) approaches in economics (Young 1998; Gintis 2000) and applications of artificial life (AL) techniques to social theory (Kennedy and Eberhart 2001). These are what I alluded to above as ‘‘the precincts of Santa Fe.’’ My interest in defending the continuing usefulness of some neoclassical principles is inspired by foundational worries about an aspect of these approaches.4
Here is the worry. Whereas individualistic models incorporate pure fictions—strongly unified selves that are established prior to and independently of social dynamics—most current EGT and AL models commit an (implicit and usually unintended) oversimplification in the opposite direction. That is, in treating top-down dynamical influences as the sole sources of nonaccidental causation, and then in addition modeling all dynamic phenomena as irreducibly statistical, they leave selves as, at best, mere cognitive book-keeping devices or, at worst, scientifically mysterious epiphenomena.
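To fix ideas about what such models look like, here is a minimal replicator-dynamics sketch in the generic textbook EGT form (the payoffs are an arbitrary illustrative stag hunt, not a model from the literatures cited). Its only state variable is a population frequency; no individual-level structure, and so nothing recognizable as a self, appears anywhere in it:

```python
# Minimal replicator dynamics for a two-strategy game (generic textbook EGT).
# The only state variable is x, the population share playing "C".

PAYOFF = {("C", "C"): 4, ("C", "D"): 0,
          ("D", "C"): 3, ("D", "D"): 2}

def step(x, dt=0.1):
    fit_c = x * PAYOFF[("C", "C")] + (1 - x) * PAYOFF[("C", "D")]
    fit_d = x * PAYOFF[("D", "C")] + (1 - x) * PAYOFF[("D", "D")]
    mean = x * fit_c + (1 - x) * fit_d
    return x + dt * x * (fit_c - mean)   # the replicator equation

for x0 in (0.6, 0.7):
    x = x0
    for _ in range(500):
        x = step(x)
    print(x0, round(x, 3))
# 0.6 -> 0.0 and 0.7 -> 1.0: which basin of attraction the population starts
# in settles everything; no property of any individual plays a causal role.
```

Everything causally relevant in such a model is carried by x and the basin it starts in, which is just the feature of this modeling style that the worry above targets.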
A social theorist might argue that this needn't much trouble her, on the ground that models of social processes can reasonably abstract away from the influences of selves, the idiosyncrasies of which might be washed out by the law of large numbers on her level of analysis. She need not deny that causally efficacious selves exist, but she can leave their analysis and study to personality psychologists working at microanalytic levels.
I see three problems with this kind of response. First, it ignores the possibility that selves might function as behavior-stabilization devices that in turn contribute (socially important) stabilizing properties to social dynamics. Second, the response fails to acknowledge that different social processes can produce alternative distributions of types of selves within communities,5 and that this might in turn feed back to produce varying distributions of attractors in large-scale dynamics. Extant dynamic-systems models might thus fare poorly in predicting or explaining cross-cultural differences among people. Third, the response invites us to try to finesse rather than address some deep indeterminacies concerning the empirical applications of dynamic models. If we implicitly suppose that complex systems (like social communities) are dynamical systems ‘‘all the way down,’’ then it is unclear how, or whether, we might find general forms for writing down theories, as opposed to merely particular descriptions of histories that don't facilitate formal generalization. This seems to leave something missing from the ontological foundations of social theory.
Individualist neoclassical models show us how we might formally restrict a concept of the self for use in economics: selves are associated with unchanging preference fields (e.g., Stigler and Becker 1977). The anti-individualist argues that this won't work because it relies on utterly fictional objects. However, it is not obvious how to move beyond this purely negative point to state a positive alternative that can be given generalizable, nonmetaphorical content. This worry does not merely represent philosophical nostalgia for theories written in a classical idiom; it expresses itself for practical purposes in EGT and AL models as instability of state variables from one model to another. In comparing these models, one gets the impression that decisions about what to regard as state variables are often driven as much by features of the software packages used for modeling as by any motivated theoretical principles. To the extent that this is so, it is hard to see how we could ever expect a convincing argument for regarding one model or family of models as empirically more persuasive than another.
My aim here is thus to present a sketch of the dynamics of self-formation under social pressure that (1) makes explicitly theorized selves emerge as causally and explanatorily significant, and endogenous, elements of social dynamics, without following individualism in taking them as primitive, and (2) preserves a role for the neoclassical concept of utility, not as a representation of any empirical force or quantity but simply as a formal organizing principle—analogous but not identical to ‘‘fitness’’ in population ecology—that permits development of field theories to ontologically anchor dynamical accounts of societies. Neoclassical theory achieves a balance of the kind that has been of pervasive importance to the progress of science. On the one hand, it is representationally flexible enough to avoid precommitment to strongly restrictive ontological assumptions. As noted above, anything that pursues consistently maintained goals, even if just
for a short interval, can be modeled as a neoclassical agent during that interval. And any set of noncontradictory goals at all can be represented by a utility function.6 On the other hand, such constraints as are imposed by the mathematical properties of neoclassical systems are well understood. Thus we can compare the dynamical properties (relative sizes of basins of attraction, relative sensitivity to quantitative adjustments of parameters, etc.) of any two models constructed in this formalism with maximum clarity. This yields us the basic Popperian virtue of being able to reject models for sound empirical reasons.7
Selves
I derive my target concept of a ‘self' from work by cognitive scientists and philosophers, specifically Bruner (1992, 2002), Dennett (1991, 2003), and Flanagan (2002). That is, I take selves to be narrated structures that enhance individuals' predictability, both to themselves and to others. As emphasized by this literature, selves in this respect resemble characters in novels and plays, in a number of quite specific ways. In particular, they facilitate increasing predictive leverage over time by acquiring richer structure as the narratives that produce them identify their dispositions in wider ranges of situations. On this account, individuals are not born with selves; furthermore, to the extent that the consistency constraints on self-narratives come from social pressures, particular narrative trajectories are not endogenous to individuals. As Dennett (1991) puts the point, selves have multiple authors, even if one author is most important in playing a role across all chapters while collaborators vary.
This philosophical account nicely captures the phenomenology and microstructure of selfhood. A personality is experienced to itself, and to others, as a relatively coherent story; to the extent that it is not, pathologies couched in terms of ‘‘breakdown’’ tend to be diagnosed and—crucially—to trigger social sanctions. Stability is emphasized as an essential normative property of a self, though just as with characters in novels it is not literally to be maximized. (Totally consistent characters in novels are rejected as one-dimensional, and totally consistent people are sanctioned by being labeled boring or, worse, obsessive.) Instead, it is a background condition that makes some desirable extent of novelty from occasion to occasion meaningful and attractive. People evaluating and tinkering with their own personalities are usually acutely conscious of being monitored by others, and of being answerable to social norms and expectations, while doing so. I suggest that the close analogy between psychological and
literary narrative is not just a fortuitous metaphor. As Elster (1999) has argued—and promoted into a methodological motif—literary narrative conventions are likely projections of natural psychological ones, and the creation of literary characters is modeled on the creation of selves. As has been understood since at least Aristotle's time, one can scientifically study some psychological dynamics by studying fiction. The narrative theory of the self helps to explain this otherwise odd fact.
It is not mysterious that natural selection in a social species like H. sapiens could give rise to narrative selves. Because of the complexity of their control systems—their brains and their networks of environmental pressures—people can't simply assume self-predictability; they have to act so as to make themselves predictable. They do this so they can play and resolve coordination games with others. (To be predictable to others, they must be predictable to themselves, and vice versa.) Then all of this is compounded by the fact that nature doesn't neatly partition games the way analysts do in game theory texts. A person can't keep the various games she simultaneously plays with different people in encapsulated silos, so a move in a game G_i with the stranger will also represent moves in other games G_k, ..., G_n with more familiar partners—because these partners are watching, and will draw information relevant to G_k, ..., G_n from what she does in G_i. It is highly unlikely that the system of logical pressures set up by these dynamics is perfectly tractable by any finite information-processor in real time. People navigating in a web of social relationships face a continuous general equilibrium problem, and a nonparametric one at that. People probably do not literally solve such problems, that is, actually find optimal solutions to their sets of simultaneous games (except, sometimes, by luck). As discussed in Ross (2004, 2005), for example, the phenomenon of the ‘‘mid-life crisis’’ picks out the pattern that manifests itself when people regret the formerly open possibilities their self-narratives have closed off, and so try to withdraw some but not all of their investment in their self. But then the bits of the portfolio turn out to be interdependent, so valued stock is unintentionally thrown away with what's deliberately discarded, and personal disasters often result. Nevertheless, most people achieve tolerable success as satisficers over the problem space. They do this at the cost of increasingly sacrificing flexibility in new game situations. This, happily, trades off against the fact that as their selves become more stable, they can send clearer signals to partners, thereby reducing the incidence both of miscoordination within assurance or coordination games, and of inadvertently stumbling into destructive Prisoner's Dilemma scenarios.
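The premium on predictability in such games can be illustrated with a toy assurance game. In the sketch below, the payoffs and the parameter q (the probability that a partner correctly anticipates your action, standing in for the legibility of your self-narrative) are my illustrative choices, not anything from the text:

```python
# Toy assurance (stag hunt) game: why a stable, predictable self pays.
# q = probability the partner correctly anticipates, and so matches, your move.

PAYOFF = {("stag", "stag"): 4, ("stag", "hare"): 0,
          ("hare", "stag"): 3, ("hare", "hare"): 2}

def expected_payoff(action, q):
    matched = action
    missed = "hare" if action == "stag" else "stag"
    return q * PAYOFF[(action, matched)] + (1 - q) * PAYOFF[(action, missed)]

for q in (0.5, 0.8, 0.95):
    print(q, expected_payoff("stag", q), expected_payoff("hare", q))
# At q = 0.5 the safe "hare" move wins (2.0 vs. 2.5); by q = 0.8 aiming for
# the cooperative outcome wins (3.2 vs. 2.2). Only a sufficiently legible
# self makes attempted coordination on the better equilibrium worthwhile.
```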
It is sensible for people to avoid attempts at coordination with highly unstable selves. Given the massive interdependency among people, this incentivizes everyone to regulate the stability of those around them through dispensation of social rewards and punishments (Binmore 1998, 2005; Ross 2006). Thus selves arise as stabilizing devices for social dynamics, and are in turn stabilized by those very dynamics.

However, from the anti-individualist perspective this notion of the self skates over a problem. That is, it seems as if the account requires us to treat individuals as primitive and then describe them as coming to be endowed with selves. In a neoclassical representation, socially sculpted selves will have to be assigned different utility functions from pre-social individuals, since self-sculpting must involve preference modification. Furthermore, if the subject's own (essential) participation in self-narration is a strategic response aimed at coordination with others, then an economic model must interpret selves as products of games played among sets of players that can't include that very self. We can't build models of these games unless we have players with well-defined utility functions to start with. In that case, mustn't a traditional economic model of the dynamics underlying narrative selves be an instance of an individualistic model in the weak sense specified above, that is, one that takes pre-social individuals as logically and ontogenetically primary, though allowing nonlinear composition functions in representing their interactions?

We can carve a path around this conceptual impasse by appeal to two sources of tools and ideas: (1) control theory from AI and neuroscience and (2) behavioristic neoclassical consumer theory (as in Samuelson 1947). Attention to AI and neuroscience forces us to take seriously some limits on the sensitivity of behavior and agency to all the dynamical forces present in an environment. Complex systems can only manifest agency if they achieve stable integration of information in such a way as to shield them, up to a point, from dynamical perturbations. At the sub-personal level, nervous systems have access to information in more restrictive formats than those available to whole socialized people (Clark 1990).8 This enables neuroscientists to account for solutions to special bottleneck problems that arise in modeling the flow of information as it can be used by synaptic networks. Behavioristic consumer theory is useful because it encourages us to separate treatment of utility functions in extension from decision-theoretic conceptions of expected-utility maximization processes (i.e., explicit "inboard" calculations).

I will first explain the relevance of consumer theory. Historically it is the assumption that utility-maximization by people must consist in calculation of expected utility that has led neoclassical theorists to take for granted that
if they’re modeling anything empirically real, this must be direct behavior determination by whole people processing information formatted for personal-level use.9 However, nothing in the neoclassical mathematics of the decade before Von Neumann-Morgenstern utility forces this interpretation. The importance of this point can be emphasized by attention to issues arising in the new neuroeconomics literature (Glimcher 2003; Montague and Berns 2002) that studies individual neurons as economic agents. So far, this exciting work has not been generally careful about keeping personal-level informational content distinct from subpersonal-level content, and so encourages a slide back into an individualist conception in which people are taken to be mereologically composed out of functional modules that locally supervene on neuronal groups. This perspective is explicit in Glimcher (2003), who has had the most to say about the philosophical foundations of neuroeconomics. Let us consider an example. Evidence reviewed by Montague and Berns (2002) suggests that firing rates of neurons in primate orbitofrontal cortex and ventral striatum encode a common currency by which primate brains can compare valuations of prospects over rewards in different modalities. The equation that describes the value the brain attaches to getting a particular predictor signal in a sequence of perceptions turns out to be a generalization of the Nobel-winning Black-Scholes model for pricing assets in financial markets with derivatives. Montague and Berns rightly express some enthusiasm about this fact, since the neural valuation equation and empirical tests of Black-Scholes respectively derive their data from wholly independent domains; the isomorphism between the equations is a discovery, not a construction. A natural explanation of the relationship might be that human investors are using their primate brains to estimate value, on which market prices are in turn based. Now, from this platform consider another striking empirical suggestion coming from neuroeconomic research with both monkeys and people. Montague and Berns report that when the predictor-valuation model is applied to subjects making risky investment decisions under uncertainty, subjects cluster strongly into two groups. One group plays optimally through runs of losses that could have been predicted to occur with positive probability at some point, while subjects in the other group abandon their portfolios too quickly for optimization of expected utility. The intriguing finding from the neuroeconomic research is that one can reliably predict which group a given subject will fall into by examining her brain under fMRI and determining whether neurons in her left nucleus accumbens respond to changes in the market data. Risk-takers seem to be tracking predicted values explicitly with these
Montague and Berns themselves advance no philosophically loaded interpretations of these data. But it is easy to imagine such an interpretation, of just the sort that Glimcher (2003) encourages. Perhaps we should reduce explanations of people's risk-aversion levels to explanations of the risk-attitude dispositions of their brains. Imagine, for example, financial houses thinking that they should screen potential asset brokers under fMRI to make sure that they're not conservatives.

I think that sophistication about the philosophy of mind should discourage such interpretations. For a person, values of assets will be sensitive to ranges of parameters that are strongly controlled by social dynamics in which the person is embedded, but to which the brain won't be sensitive at the same grain of analysis as that at which it tracks frequency of perceptual cues; the person isn't identical to her brain because some counterfactuals relevant to generalizing about her behavior track regularities controlled by her social environment rather than (just) her nervous system. Of course, facts of the sort unearthed by neuroeconomists are relevant to our understanding of the information made available to the person by her brain. A broker who knows she has a conservative brain might have extra reason to rely more heavily on her computer model of asset price estimation than her colleagues whose brains do accurate tracking more directly. But conservative brains need not predict conservative selves.

Taking account of the way in which people are distinct from their brains is the point of my suggested appeal to neuroscientific control theory. In designing more sophisticated nervous systems over time—and thus encountering new risks of inefficiency due to bottlenecks—natural selection could not help itself to top-down control dynamics that arise when systems take the intentional stance (Dennett 1987) toward themselves. Our pre-human ancestors could not assume this stance. Thus evolution had to solve the neural bottleneck problem at the neural level. On the other hand, accounts of selves as devices for integrating internal bargaining communities are often based partly on an argument to the effect that solving control problems in nonparametric environments is what selves evolved to do (Dennett 1991; Sterelny 2004; Ross 2005). This precisely implies the distinction between brain-level individualism and person-level individualism, especially if one of the advantages people bring to the table by contrast with brains is faster response to the flexibility encoded in social learning. Brains bring compensating advantages of their own, as we should expect. As the discussion of asset valuation above suggests, their reduced plasticity relative to
socially anchored selves can help maintain objectivity in circumstances where herd effects occur. It is just when we don't conflate maximization of utility by brains with goal achievement by selves that we have some hope of using data about the former as a source of theoretically independent constraints on processing models of the latter. Thus individualism about people could impede optimal use of neoclassical theory in a promising new domain of application—neuroeconomics—while a view that takes socially sculpted selves seriously as causal influences can focus our attention on what neoclassical theory is tailored for: describing the dynamics of information flow in markets.

So much for motivations. How might we try to take socially sculpted selves seriously in models that are both nonindividualist and respectful of neoclassical restrictions on state variables?

Games, Biological Individuals, and People

I noted earlier that there is one sense of individualism that is truistic rather than false: there are biological individuals. Their important role in biological explanation is ensured by the fact that selection at the genetic level is non-Lamarckian. Although genetic selection is very far from all that is important in biological evolution (Keller 1999), it is important enough to make it essential for many purposes, both practical and theoretical, to keep track of individual phenomes. (See Buss 1987 on the biology of individuals.) Let us then isolate the notion of biological individuality by reference to barriers on transmission of genetic information. (We won't need for present purposes to choose an explicit definition.) This gives us a basis both for treating species as kinds of individuals, following Hull (1976), and for taking organisms—but not people—to be individuals in the traditional sense.10

Now consider standard evolutionary games (Weibull 1995). These will be games in which the expected distribution of strategies at any time t is a function of the expected fitnesses of strategies at times t′ ∈ {n, . . . , t}, n ≤ t.11 Let G″_n denote such a game as played over n generations, where each generational cohort constitutes a round. Let g″_{t+k} denote the (t + k)th round of G″_n, and model g″_{t+k} as a classical game. (Assume subgame perfection as the only refinement on Nash equilibrium in solving each g″_n ⊂ G″_n.) Let i, j denote biological individuals that are among the players of g″_{n+k}, and indicate that i is a player of g″_{n+k} by writing g″_{n+k(i)}.
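To fix ideas, here is a minimal computational sketch of a G″-type game, using the standard discrete-time replicator update from evolutionary game theory (Weibull 1995). The payoff matrix (a Hawk-Dove game) and all numerical values are illustrative assumptions, not taken from the text; the point is only that each generational cohort constitutes one round, and that strategy shares at a round are a function of fitnesses at earlier rounds.

    # Discrete-time replicator dynamics for a symmetric 2x2 game (Hawk-Dove).
    # Payoffs and parameters are invented for illustration.
    payoffs = [[-1.0, 4.0],    # Hawk vs (Hawk, Dove)
               [ 0.0, 2.0]]    # Dove vs (Hawk, Dove)

    def next_round(x):
        """One generational cohort: update the population share x of Hawks."""
        f_hawk = x * payoffs[0][0] + (1 - x) * payoffs[0][1]
        f_dove = x * payoffs[1][0] + (1 - x) * payoffs[1][1]
        f_mean = x * f_hawk + (1 - x) * f_dove
        shift = 2.0   # shift payoffs so fitnesses stay positive
        return x * (f_hawk + shift) / (f_mean + shift)

    x = 0.1                    # initial share of Hawks
    for _ in range(200):       # 200 rounds of the evolutionary game
        x = next_round(x)
    print(round(x, 3))         # approaches the mixed equilibrium, 2/3 Hawk

Notice that nothing inside any organism appears in this model: the only state variable is the strategy distribution itself, which is exactly the restriction made explicit in what follows.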
In standard models of G″_n, no nonparametric problems need to be solved by individual brains; all are solved only at the species level. This is a definitional, not an empirical, claim: it follows from understanding biological individuality in terms of barriers to genetic information flow. To see this, define an agent by reference to neoclassical formalism, that is, as a unit that has a utility function over the outcomes in which the payoffs of some set of games are specified. Distinguish an agent i's agent-specific control system as any nexus of causal influences on i that (1) is sensitive to values of strategic parameters in g″_{n+k(i)} and (2) exerts strategic influence on i without exerting influence on any player j of g″_{n+k(i,j)} except via its influence on i's strategy.12 Then, unless selection is Lamarckian, no agent i's agent-specific control system can introduce, at n + k, new information strategically relevant at round g″_{n+k(i)} into an evolutionary game G″_n ⊃ g″_{n+k(i)} in which i instantiates a strategy.13 Otherwise, we would violate the assumption that no genetic information is transmitted between individuals except by ancestral descent.

Standard EGT models sometimes impose (at least implicitly) restrictions on the causal generation of organism behavior that are a bit weaker than the condition above. That is, they can allow for error terms (denoting what will appear as "trembles" in a forecast of g″_{n+k+1} from G″_{n . . . n+k}) inside which the causal influences of agent-specific control systems might figure. This would be necessary if one wanted to allow for possible mutations that affect dynamics by modifying cognitive dispositions. However, since such influences have to be nonautocorrelated with the original strategic dispositions of players in order to belong in error terms, this way of allowing for them can't introduce systematic roles for selves in evolutionary games. This is why some economists attracted to EGT models have suggested eliminativism about selves and properties of selves in discussions around the methodology of modeling social dynamics, as alluded to earlier (Sugden 2001).

Restrictions of the sort I have just characterized are, in themselves, unobjectionable. All that they do is make explicit the idea that lineages, not organisms, are the proper players of evolutionary games. Players of classically conceived rounds of these games are then either strictly deterministic products of the histories of prior rounds (as in the strict version of the restriction) or stochastic products of such rounds (as in the weaker version). The point I wish to stress by focusing on the restrictions is that in most applications of evolutionary economics it is implicitly supposed that games thus restricted provide sufficient explanations of all observed strategy dispositions. Note that this point applies not only to models in which strategies are taken to spread purely by vertical transmission (i.e., through inheritance), but also to models that are supposed to represent cultural evolution by assigning important roles to imitation. Imitation functions, of the sort studied by Young (1998), amplify and stabilize some effects of past
strategy distributions on future ones, but they leave informational integration by organisms inside black boxes. The motivations for doing this are clear enough: to the extent that one tries to open these boxes, it seems one is no longer trying to get evolution to carry the explanatory burden, but is drifting back toward cognitive—and individualist—modeling.

Now we can more precisely characterize the challenge set in the preceding sections of this chapter. Is there a way to pry open this black box in the modeling framework that lets agent-specific control systems (which might include selves, if these turn out to have any strategic function) exert causal influence on the dynamics of G″-type games while requiring that they emerge endogenously in these games? The question can be operationalized as follows: Can we build a well-defined evolutionary game G″_n in which no properties of agent-specific control systems are relevant to computing equilibria at g″_{n+1}, but some agents i, j arise through the dynamics of G″_n such that properties of the agent-specific control systems of i and j are relevant to computing equilibria in some g″_{n+1+k(i,j)}? (I specify both i and j here because I am interested not just in any old agent-specific control systems but in selves, and am assuming, for reasons discussed earlier, that selves can arise only as elements of reciprocal social relations. That is the very content of denying individualism.) Doing this would show us how to work selves into dynamic social games without having to make any such selves logically prior to the social dynamics.

Another distinction is now in order. Incorporating some conceptual suggestions of Sterelny's (2004), let us distinguish social dynamics, as dynamics that arise whenever biological individuals play games, from cultural dynamics, which presuppose the relevance of agent-specific control systems to evolutionary equilibria. (Sterelny argues that cultural accumulation requires the evolution of cognitive capacities that go beyond the mere capacity for imprinting on others as imitation targets, and permit decoupled representation14 of goals and techniques based on others' behaviors as models of abstract goal achievement.) Then the first step to understanding the logical phylogeny of selves requires explaining the logical phylogeny of games among socialized but unenculturated biological individuals. Modeling the dynamics by which natural selection can generate and solve nonparametric problems for biological individuals, in general, was the basic founding achievement of evolutionary game theory (Maynard Smith 1982), so we can treat this step as taken care of.

Now, nonparametric problems are exponentially more complex than parametric ones. This point has been made often, but Sterelny (2004) again offers a nice conceptual extension of it in arguing that nonparametric selection
factors make environments "translucent" to organisms, and in so doing establish selection pressure for representations of some of their features that are both robust and (relatively) decoupled. An organism deploys robust tracking of a feature when its cognitive architecture allows it to represent the feature independently of a specific perceptual stimulus or cue. The architectural conditions for robust tracking have been discussed in the literature for some years. Gould and Gould (1988) offered behavioral evidence of robust tracking in bees. Lloyd (1989) sketched the generic model of a control system he argued to be the minimal requirement for a simple mind because it allows for at least a minimal degree of robust tracking, and there has been empirical discovery of such architectures in cockroaches (Ritzman 1984), toads (Ewert 1987), and other animals.

Now, robust tracking is required for the implementation of many strategies (which is presumably why it evolved). However, Sterelny argues that humans exhibit to a uniquely high degree the use of a representational genus that goes a level beyond robustness in achievement of abstraction. Many representations in humans are decoupled from specific action responses. Some of these decoupled representations are standing models of how the world is—beliefs—while others are comparative rankings of ways the world might come to be—preferences. Sterelny defends several interlocked theses concerning decoupled representations: (1) though they occur to some degree in other—invariably social—species, they dominate the ecological life of H. sapiens but no other animal, (2) they are necessary for the cumulative transmission of culture, and (3) though the neural platform that made them possible might have resulted from a rapidly but parametrically changing physical environment during the last series of ice ages, their explosive evolution could only have been driven by the pressures of nonparametric cooperation and competition. I will here take all three of these theses on board. They are not special to Sterelny, though I know of no one else who has worked out their interrelationships so clearly.

Sterelny's key accomplishment for the purpose of this chapter is his argument, based on surveyed empirical evidence, that the human capacity for massively decoupled representational scope does not rest on the evolution of special neural mechanisms—though the capacity for robust representation does, and robust representation is necessary for decoupled representation—but on historical dynamics of "downstream niche construction." The idea here is that human activity has progressively reconfigured the environment so that there have been steadily increasing returns over time to investment in decoupled representation, and the environment is an increasingly efficient storehouse of socially deposited
information that cues decoupled representations. This makes such representations efficient enough for reliable use and also—crucially—enables developmental processes in children to come to track the eccentric perceptual saliences that decoupling requires, and that other animals largely miss.

Sterelny's various distinctions help us say a number of things about the nature and role of selves in evolutionary and strategic dynamics. First, we can now be clearer about the relationship between a biological H. sapiens individual and a person. The former is a robust representer who instantiates a battery of strategies in various evolutionary games whose players are coalitions of genes. However, the restriction on agent-specific control, at least in its weak form, applies to her, as it does to most animals. Her basic tool in game-playing is her brain. She is nevertheless a social animal—manipulation of her parents' responses being the main cognitive behavior for which she must be neurally equipped (Spurrett and Cowley 2004)—and so the games she must play are relatively informationally demanding (probably similar to those faced by other apes, and by dogs, whales, elephants, pigs, corvids, and parrots). They require a large brain, and this in turn means that potential bottleneck problems in control must be solved at the level of neural design. Neuroeconomics is beginning to tell us how preference control works in the biological H. sapiens individual.

For the most part, the current state of play in Santa Fe-style modeling of human economic behavior stops here. These models leave the neuroeconomic details inside black boxes, but that's all to the good. The wiring properties on which neuroeconomic facts supervene will change more slowly over time than the selection spaces in which the biological-evolutionary games unfold. So, as neuroeconomists open the black box, the structures and processes they discover will usefully constrain EGT models (by telling us which strategies can and can't be computed) but won't dynamically interact with them on the same time scale. In this regard the relationship between conventional EGT and neuroeconomics exemplifies the kind of strategy that has worked so well so often in governing the relationships among complementary disciplines in the history of science: a higher-level discipline—in this case, evolutionary game theory—isolates and moves around phenomena for a lower-level discipline—in this case, neuroeconomics—to mop up. Then the progressive mopping up feeds back into the higher-level theorizing as nondynamic constraints on possible models. This is great method when you can get it, and people are right to be enthused here.
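The methodological relationship just described can be given a small formal gloss. The sketch below illustrates only the idea that lower-level findings enter as nondynamic constraints on higher-level models; the strategy names, their memory requirements, and the capacity bound are invented stand-ins, not results from the neuroeconomics literature.

    # "Nondynamic constraints": hypothetical neuroeconomic findings enter an
    # EGT model only by restricting the admissible strategy set before the
    # evolutionary dynamics are run. All values here are invented.
    memory_needed = {"tit_for_tat": 1, "grim_trigger": 1, "count_two_back": 2}
    neural_memory_bound = 1   # hypothetical bound supplied by neuroscience

    admissible = {s for s, m in memory_needed.items()
                  if m <= neural_memory_bound}
    print(admissible)         # the two strategies computable within the bound
    # The evolutionary dynamics would then be defined over `admissible` only,
    # and would not feed back into the bound on the same time scale.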
However, the next thing we learn from reflecting on Sterelny's account is that if we want to understand fully human behavior we can't stop with this. A core consequence of the games that our biological H. sapiens instance plays is that she will be enculturated into becoming a person. That is, she will be attuned to perceive and be motivated by a range of cultural distinctions and projects that are informationally stored in the ecological relationships between her brain and her environment, rather than in her brain alone. It is very natural to say that a new agent is brought into being by this process. The basic truth in anti-individualism lies right here: this new agent is recruited into existence for the sake of the contributions she can then make to social dynamics. (See McGeer 2001 for some details on the processes by which human infants are simultaneously led to begin narrating selves and recruited into membership in communities.)

Notice that use of neoclassical formalism will force us to say what I have just argued is the natural thing to say. The person will have a different utility function (speaking more properly, a different sequence of utility functions) from the pre-enculturated biological individual. Indeed, the latter's utility function will range over different goods altogether, since her development involves fundamental re-packaging of perceptual saliences. It is, I suggest, a virtue of the neoclassical formalism that it makes this logical move central and nonoptional. I will now sketch the modeling framework that is implied by all this.

Game Determination

There is an overlooked puzzle that should have struck game-theoretic modelers of human behavior quite independently of the hypotheses about human evolution and development that I have just been discussing. This is the problem I have elsewhere (Ross and Dumouchel 2004; Ross 2004, 2005) called "game determination."15 I will reintroduce it here, because it provides independent motivation for the modeling suggestions I'm about to state.

Game theorists building models have a big advantage over people in everyday life (including the game theorists while they're getting on in everyday life). When a game theorist builds a model, she must know, or have justifiably assumed, the utility functions of the players. Her game can correctly model a given situation only if her assigned utility functions truly describe and predict the players' behavioral dispositions. Of course, most actual game-theoretic models are of stylized or hypothetical agents, since
they are investigations of what agents who did have such-and-such utility functions (in such-and-such institutional settings with such-and-such information) would do. It is because so much game theory activity is of this sort that what I call game determination problems don't loom large in the literature. Game determination names the task confronting agents who encounter each other, recognize strategic significance in their encounter, but don't know enough about each other's utility functions to be able to know which precise games they might play. This describes most people's situation most of the time. Sterelny's identification of "translucence" in social environments as the pressure that fueled the evolution of people as cultural niche-constructors is closely related to the point. Determining which games are possible is typically a harder inferential task than modeling or solving a game once utility functions are known.

The surest way to keep game determination problems tractable is to build institutions that lock in mutual expectations so long as people are strongly incentivized to want to stay within the institutional rules. Thus groups of mutually anonymous stock market investors, sales clerks and shop customers, or trade negotiators at the WTO can get on with their games without having to closely study one another's behavioral idiosyncrasies in advance. The need for manageable constraints on game spaces explains why cultural rules exist (and then the importance of coordination explains why some particular such rules stabilize). In effect, Sterelny's whole hypothesis that evolution invented culture because social coordination otherwise gets impossibly hard for large-brained organisms in rapidly changing environments is recognition of this point in other terms. Even in the absence of this hypothesis, we can recognize that people somehow solve an information problem in working out which games they're playing when and with whom. The social world doesn't present itself as prepartitioned into games. This is what I meant by saying that the phenomenon of game determination arises for our attention independently of Sterelny's arguments.

However, these arguments make the problem more pressing. I argued in the previous section that an implication of Sterelny's hypothesis, one that neoclassical (revealed-preference) formalism forces us to stare in the face, is that as biological individuals are enculturated their utility functions change. Furthermore—as Sterelny also emphasizes—they change from being highly predictable (infants all being quite alike16) to being relatively distinctive. People learn to be coordinators within particular cultures. Their social environments not only make them into German liberals, Alabama fundamentalists, or Masai cattle herders, they can even make them into Masai cattle
herders who care about model trains but not model airplanes and appreciate British film comedies but not Indian ones. People are capable of inhabiting many cultural niches simultaneously. But characters like Woody Allen's Zelig, who was a perfect cultural chameleon, don't really exist. Again, institutions often smooth things along. Our Masai English comedy buff can figure out how to behave at the Monty Python Fan Club meeting because the group will provide him with all sorts of cues in its newsletters and ceremonial protocols. But he could find himself in deep waters when, out there on the savannah one day, a nature photographer from Texas drives into one of his cows and they have a "situation." The two might face a case of "radical" game determination.17

Our players are not biological individuals; both are enculturated people. Let us denote their game type by g′. Therefore g′_{x(i,j)} will name a game played by two strangers to each other who are already distinctive human selves. Its structure is, of course, constrained by their pre-engagement utility functions. But, by hypothesis, they don't know each other's. Crucially, in trying to mutually determine them, they are, being people, bound to act strategically. In particular, they will strategically signal. So the process of game determination will itself be a game. Also being people, the game playing in which they engage to try to find out which games they might play will amount to further enculturation. The Masai might have no goal of adding a bit of Texan to his cultural repertoire as the negotiation goes on. But because he's a person, this might just happen to him nonetheless, and symmetrically for the photographer. Suppose that they relieve the tension by repairing to the Masai's boma to watch a few old Monty Python episodes on DVD, something of which the American was previously ignorant, but learns to decode and enjoy partly by noticing when the Masai laughs most appreciatively.18

Let us put all this first in terms of the narrative theory of the self, then translate that directly into game-theoretic terms. Many engagements among people, where neither detailed mutual personal knowledge nor strict institutional constraints stabilize the dynamics, involve incremental refinements of the selves of the people in question. We might, for useful analytical purposes, build the game g′_{x(i,j)} that would describe their play if, like elephants or chimps, they were social but not fully cultural creatures. However, the model g′_{x(i,j)} only gives us a baseline from which to start modeling the empirical situation, because the people will never actually play g′_{x(i,j)}. They will play instead another game g′_{y(i,j)}—marked here with the same "level indicator" because it is played by the same players i, j who "set out" to play g′_{x(i,j)}—that is distinct from g′_{x(i,j)} because it is played for
payoffs over a different set of outcomes. In particular, it is played to determine which game g_{z(a,b)} will be played over the original outcomes of g′_{x(i,j)} (e.g., who will pay the costs of the dead cow and the wrecked landrover) by the new agents a, b that are sculpted into being by the play of g′_{y(i,j)}.

To bring some analytical order to these complexities, let us define the concept of a "situation" S that remains invariant through game determination processes. The budget constraints that would have faced the players of g′_{x(i,j)} are inherited by the players of g_{z(a,b)} (i.e., the relative costs of cows and landrovers don't change). Note that this is a stipulation, not something that is empirically guaranteed. "Deep" re-enculturation could change relative costs; imagine the Masai being so charmed by the Texan lifestyle that he "goes native" and would rather share the repaired landrover than replace his cow. But, in designing methodology, we get to make practical decisions. We can just stipulate that if an invariant situation fails to describe g′_{x(i,j)}, g′_{y(i,j)}, and g_{z(a,b)}, then it's pointless to go on trying to characterize the history with a single dynamic model. Modelers make these sorts of practical decisions, at least implicitly, all the time. Why model the Uruguay Round GATT negotiations as one game and the Doha Round WTO negotiations as another game instead of modeling them as two rounds of one game (with new players joining for round two)? The answer is that too much changes between the rounds for the second option to be sensible. How much change is too much? There surely can be no general a priori rule to govern this judgment call.

Suppose, however, that we think that we can get good predictive leverage over g_{z(a,b)} by studying g′_{x(i,j)}. (If this were not often true, then Sterelny's hypothesis would amount to an empirically empty—that is, untestable—metaphysical speculation about the ontogenesis of particular people.) In that case, g′_{x(i,j)}, g′_{y(i,j)}, and g_{z(a,b)} must all be models of one S_x.

The requirement that we remain within the mathematical rules of game theory imposes tight constraints on our options for interpreting the relationships among these models of S_x. In the formalism i and j are strictly different agents from a and b, but the whole approach here would make no sense if i and j didn't differentially care about a and b, or if i and j, in playing g′_{y(i,j)}, were myopic with respect to the predicted equilibria of g_{z(a,b)}. The utility functions of i and j, with respect to the goods up for grabs in g′_{y(i,j)}, describe their preferences over which of a range of g-level games get played. A g-level game is a game among players of "fully determined" games, that is, players who have full information about one another's preferences over the possible outcomes of g′_{x(i,j)}. Game theory now gives us two possible options for constraining the solution sets on the games that model S_x.
The first option is what I will call the minimal constraints approach. Here we stipulate that (1) the outcomes over which the utility functions that define g_{z(a,b)} are constructed must include the payoffs available in g′_{x(i,j)}, and (2) the initial state of g_{z(a,b)} is one of the equilibria of g′_{y(i,j)}. This approach has the advantage of allowing the modeler flexibility over the degrees to which i and j predict and are motivated by the utility functions of a and b. Both levels of strategic myopia and the slopes of discount curves in g′_{y(i,j)} are left as free parameters to be determined empirically (by anthropological study of different sorts of mutual enculturation processes). The price of this flexibility is that the modeling framework isn't doing much work in constraining our accounts, by comparison with seat-of-the-pants situational judgment. This is, of course, just the standard trade-off one faces in building formal frameworks for representing classes of phenomena.

The second option, the maximal constraints approach, would incorporate stronger assumptions into the modeling technology. Again, we would stipulate the relationship between g_{z(a,b)} and g′_{x(i,j)} just as in clause (1) of the minimal constraints approach. However, clause (2) will now say that the solution of g_{z(a,b)} must be the subgame-perfect equilibrium of the two-stage game g′_{y(i,j)} ∪ g_{z(a,b)}, where the terminal nodes state payoffs for each of i, j, a, and b. Here i and j have zero myopia with respect to the welfare of a and b. (Discount functions remain free parameters.) On this approach the mutual enculturation described by solving g′_{y(i,j)} is effectively treated as entirely an informational transformation.19
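A toy computation may make the maximal constraints idea concrete. The sketch below is an invented example, not the chapter's model: a two-stage tree whose first stage stands in for g′_{y(i,j)} (choosing which determined game follows) and whose second stage stands in for g_{z(a,b)}, solved by the standard backward-induction procedure that yields subgame-perfect equilibria. Payoffs are stated, as in the text's stipulation, at the terminal nodes only; the numbers are arbitrary.

    # Backward induction over a finite two-stage tree (invented payoffs).
    # Each payoff tuple is (payoff to the i/a side, payoff to the j/b side).
    tree = {"mover": 0, "children": [       # stage 1: the i/j side chooses
        {"mover": 1, "children": [          # stage 2: the a/b side replies
            {"payoffs": (3, 3)}, {"payoffs": (1, 2)}]},
        {"mover": 1, "children": [
            {"payoffs": (4, 0)}, {"payoffs": (2, 2)}]}]}

    def backward_induct(node):
        """Return the subgame-perfect outcome of a finite game tree."""
        if "payoffs" in node:               # terminal node: payoffs stated here
            return node["payoffs"]
        outcomes = [backward_induct(c) for c in node["children"]]
        return max(outcomes, key=lambda p: p[node["mover"]])

    print(backward_induct(tree))            # (3, 3)

The superficially tempting (4, 0) outcome is unreachable, since the second-stage mover would not cooperate in reaching it; that is the sense in which the first-stage players, having zero myopia, choose among determinations by the subgame-perfect continuations of each.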
Note that despite the requirement of subgame perfection, not all coordination signals used in g′_{y(i,j)} need to be costly commitment devices. Recent work by Skyrms (2002) shows in detail how use of costless signals can be relevant to reaching equilibria in noncooperative dynamic games, even if such signals are strategically irrelevant at equilibrium. Thus signals that a or b would not send might be sent by i or j without this violating the subgame-perfection solution and implying that g′_{y(i,j)} is cooperative.

How many real human social exchanges are usefully representable by the maximal constraints approach (by itself) is a strictly empirical question. It will certainly not be fruitful for processes unfolding across generations, or even across major shifts in particular people's maturation cycles or life situations. However, it might provide a powerful source of predictions in application to short-run sequences of encounters. Furthermore, the maximal and minimal constraints approaches could be recursively combined. That is, sequences of games related by maximal constraints could be related to one another by minimal-constraints models. These recursive structures could then be treated as standard formalizations of "analytic narrative" explanations (Bates et al. 1998) of medium-run social processes. In any case, I have specified minimal and maximal grades of determination in order to mark off the endpoints on a continuum; many intermediate cases are possible, and these would perhaps be the tools usually appropriate for modeling actual cases. (For instance, one might model a situation as in the maximal constraints case, but replace subgame perfection with Nash equilibrium, with or without concern about trembling hands, etc.)

Conclusion

All one can ask of a formalism is that it give one a precise way of stating, and thus of testing, hypotheses. The modeling alternatives I have sketched here do this. To the extent that models close to the maximal constraints end of the continuum of possibilities, used recursively with minimal constraints ones, give us strong predictive leverage over medium-run processes, game theory will turn out to give us a strongly improved grip on historical-cultural evolutionary change, holding out hope for powerful and precisely formulated generalizations about the patterns of such change. On the other hand, it could turn out that although maximal constraints approaches work well for short episodes within highly stable institutional settings, attempts to chain these together into sequences of minimal constraints models fare little better than traditional historical narratives guided by approximate speculative intuitions about counterfactuals (see Tetlock and Belkin 1996). (In the modeling methodology described above, this would emerge as inability to reliably induct values of the free parameters in the minimal constraints models from analyses of other models; each minimal constraints model would be built as a largely customized exercise.) In that case we would be left with the current status quo: standard evolutionary games would help predict the relative sizes of basins of attraction in long-run games, with the influences of distributions of types of selves left in black boxes; classical games would help describe short-run interactions among people; and medium-run episodes in which distributions of types of selves matter, but these distributions interact dynamically with cultural processes, would resist systematic characterization.

We can capture some of what is at stake here by contrasting the modeling framework I have described with that suggested by Hollis (1998). Hollis argues that people in social interactions often—at least where institutional rules don't explicitly discourage this, as they do in some capitalist markets—strategize by reference to "team" utility functions that systematically differ from the individual ones they would otherwise manifest.
Sugden (2000) argues that this proposal captures some manifest facts about social processes, and Bruni and Sugden (2000) argue that it recovers a classical insight that neoclassicism lost. As a finished account of social dynamics, however, it must leave game theory useless concerning one part of the analysis. In particular, the relationship between team games and short-run games is difficult to capture in game-theoretic formalism. Do team games impose commitments on players of short-run games among individuals? If so, what makes these commitments binding? In effect, Hollis's proposal makes games among individual people into stages of larger cooperative games. Two unwelcome consequences follow from this. First, it amounts to supposing that cultural pressure for cooperativeness has managed to completely swamp (at least in normal cases) natural-selection pressures that encourage biological individuals to compete with one another. This isn't impossible, but it is a very strong hypothesis. It immediately implies the second problematic feature of the framework, which is that, once again, distinctive selves become causal epiphenomena; individual people are just robotic products of team dynamics.

By contrast, within the framework I have just suggested, social dynamics are logically and ontogenetically prior to individual selves, since selves are sculpted into being by social processes. However, the outcomes of g-level games—the actual bargaining episodes that determine the particular distributions of strategically created and contested assets—are sensitive to the properties of individual narratively generated selves. Furthermore, properties of biological individuals are inputs to the social processes. The framework imposes no a priori view on the relative causal strengths of Darwinian competition among biological individuals and the stabilization of cooperative dispositions under the evolution of institutionalized norms. Finally, no part of the process fails to be describable in the standard formalism of game theory, as interpreted by reference to Samuelsonian neoclassical preference theory. The framework thus satisfies all the desiderata developed in the earlier sections of the chapter. Whether it will help us to predict and explain empirical phenomena that otherwise resist systematic treatment must remain to be seen.

Acknowledgments

I would like to thank Andy Clark, Stephen Cowley, Harold Kincaid, David Spurrett, and attendees at the Collective Intentionality (Siena, 2004) and Distributed Cognition and the Will (Birmingham, Alabama, 2005) conferences for their comments on earlier drafts of this chapter.
Notes

This chapter originally appeared in Cognitive Systems Research, vol. 7, pp. 246–58, and is reprinted here with permission of the editor.

1. Economists who promote evolutionary and behavioral-institutional economics, such as Gintis (2000) and Bowles (2004), often use a good deal of anti-neoclassical rhetoric, but, for reasons that will become clear shortly, aren't really foes of neoclassicism in the sense I intend here.

2. Defenders of "bounded rationality" models of economic behavior, including many so-called behavioral economists, disassociate themselves from neoclassicism on the grounds that it is committed to optimization models. According to bounded rationality hypotheses, most people's economic behavior involves only "satisficing" up to thresholds. However, as argued at length in Ross (2005), the rhetoric involved in presenting behavioral economics as necessarily a radical challenge to neoclassicism rests on conflation of two claims that are often similarly expressed but are logically distinct. Neoclassical formalism indeed demands models in which objective functions are maximized. However, this need not be interpreted as requiring that whole individual people act so as to maximize their stock of any material quantity. Indeed, what I take to be the canonical formulation of the core of neoclassical theory (Samuelson 1947) explicitly denies such an interpretation. Utility, as maximized in neoclassical models, is not a material quantity or an aggregate of material quantities, and models of material satisficers can always be expressed in terms of optimization functions. See Gintis (2004) for an instance of a leading behavioral economist stressing this point.

3. For explanatory contrast: "fundamentalist" moral positions on abortion and stem cell research that are politically influential in the United States fail to track this distinction.

4. "Aspect" needs emphasizing here; I am not encouraging skepticism about the value of these modeling approaches. I think that any young economist who does not learn evolutionary game theory thereby fails to acquire a piece of core professional technology.

5. A newly flourishing area of empirical study is personality differences in nonhuman animals. Natural selection appears to strongly maintain such differences, at least in intelligent social species. Scientists aim to explain this (Bouchard and Loehlin 2001).

6. It might be objected that noncontradiction is a poorly motivated restriction on the flexibility I just lauded. Isn't the whole point of emphasizing behavioral evidence in economics that we stop imposing ideals of rationality on the agents we model? Why should this aspect of the ideal of rationality continue to be privileged? I respond
to this objection as follows. There are well-honed techniques in utility theory for handling limited, local contradictions. But to the extent that contradictions globally iterate across a system's behavioral patterns, the neoclassicists have been right to think that we should surrender our commitment to viewing the system in question as an agent. Resistance to this, where the systems in question are people, stems from an individualistic insistence that biological organisms must be prototypical agents. What else could it be based on? How could one have evidence that a system really is pursuing goals if the goals in question are taken to be globally contradictory? Those who take it as axiomatic that whole people are single agents across time thereby ignore the need for such evidence, which is why for them agency and consistency can seem to come apart.

7. Some critics of mainstream economics (e.g., Rosenberg 1992) argue that neoclassically inspired microeconomics doesn't much manifest this virtue. I think this is empirically false. The history of postwar economics is littered with models that were once thought to be promising accounts of phenomena and are now known to be false. Behavioral and experimental economics has played a leading role in such Popperian progress.

8. The idea alluded to here will be familiar to philosophers of mind and to most cognitive scientists, but probably not to economists. To briefly explain it: communities of people use their shared public language to re-describe things, highlighting different aspects and relations of an event, process, or object on different occasions while still (usually) keeping reference fixed. Coding at the neural level, not being public, cannot be assigned this sort of flexible semantic interpretation. At the same time, people have no direct access to the coding system used by their brains. Thus we say that the two informational formats—neural coding and public language—are distinct, and usable by distinct systems.

9. See the previous note.

10. So, in terms of the earlier explanatory heuristic: a 'pro-life' fundamentalist is right to think that an abortion destroys the integrity of a biological individual, though it destroys the integrity of no individual person (which is what ought to matter morally).

11. This formulation is not the standard textbook one. It deliberately abstracts away from some interesting issues about equilibrium computation in evolutionary games. n denotes the baseline point from which basins of attraction at t are calculated, and it can be any distance into the past history of the lines of replicators interacting during the process. With respect to any model, we can ask about the extent to which its dynamics are path-dependent (i.e., about the extent to which the law of large numbers will minimize the sensitivity of equilibria to low-probability events, as a function of n). To the extent that the model is relatively deterministic (not strongly path-dependent in its dynamics), we should get the same equilibria regardless of
where we choose n. To the extent that we have strong path-dependency, we will get less fine-grained topologies of basins if we increase n while holding fixed the confidence level with which we want to calculate expected fitnesses. All equilibria as n moves closer to t will be refinements of earlier n equilibria (i.e., consistent with those equilibria but representing more information). In a particular model (e.g., any implemented simulation), n will be set by the modeler, so the game can be defined less generally than in the formulation here. These less general formulations are the usual ones found in textbooks.

12. The first condition restricts attention to causal influences that operate by way of i's agency—telling us not to count, say, an asteroid that strikes only i as part of i's agent-specific control system. The second condition makes the control system specific to i. Note that the formulation is carefully neutral as between internalist and externalist interpretations of behavioral control, both in general and in particular circumstances.

13. There is no restriction on biological individuals acting so as to introduce strategically relevant information into games going on among other agents.

14. That is, representations that are not tied to any specific class of actions.

15. This construct was originally developed as part of an approach to representing emotional signaling in game-theoretic terms. Emotional response is another phenomenon that has sometimes been thought to confute neoclassical models of human economic behavior, though in this case with no basis in the history of fundamental economic thought that will survive serious scrutiny; see Ross and Dumouchel (2004). The confusion seems to rest on a common double mistake. First, neoclassical modeling theory gets assimilated to rational choice theory, which makes strong empirical assumptions about human behavior that neoclassicism does not (Ross 2005, chs. 3 and 4). Second, muddled critics read "rational" in rational choice theory as if it were the foil of "emotional." This is a folk idea that has sometimes been echoed by philosophers but has never driven economics. Keynes's famous remarks about "animal spirits" referred to herding behavior, not emotion. In the classical tradition reason is held to be the slave of the passions. Neoclassical theory incorporates no assumptions at all about the relative weights of different cognitive modalities or styles of behavioral motivation.

16. Individuals, like almost all animals with brains, vary in personality. But personalities, as studied by ethologists, vary along far fewer dimensions than selves do. See Ross (2007) for details.

17. My phrase here deliberately follows Quine's (1960) "radical translation." Quine sought to illuminate everyday problems of meaning interpretation by focusing on a case where two people share no lexical conventions. I similarly am interested in everyday problems of strategic coordination, and find it useful to do so by imagining
a case where there is minimal mutual knowledge of shared culturally sculpted utility functions.

18. Notice how endlessly subtle we can make all this: perhaps the Texan becomes someone who enjoys Monty Python in something like the way a Masai cattle herder does. Yes, there can be facts of the matter of this sort. A very experienced anthropologist who studies Masai culture just might be able to spot its influence if she watches the Texan watch Monty Python.

19. Mariam Thalos points out that subgame perfection is a very strong equilibrium refinement to impose in a model. That is just one of the factors that makes it appropriate to call the kind of model in question here one that maximally constrains the relationships among the elements of S_x.

References

Ainslie, G. 2001. Breakdown of Will. Cambridge: Cambridge University Press.
Bates, R., A. Greif, M. Levi, J.-L. Rosenthal, and B. Weingast. 1998. Analytic Narratives. Princeton: Princeton University Press.
Binmore, K. 1998. Game Theory and the Social Contract, Volume 2: Just Playing. Cambridge: MIT Press.
Binmore, K. 2005. Natural Justice. Oxford: Oxford University Press.
Bouchard, T., and J. Loehlin. 2001. Genes, evolution and personality. Behavior Genetics 3: 243–73.
Bowles, S. 2004. Microeconomics: Behavior, Institutions and Evolution. Princeton: Princeton University Press.
Bruner, J. 1992. Acts of Meaning. Cambridge: Harvard University Press.
Bruner, J. 2002. Making Stories: Law, Literature, Life. New York: Farrar, Straus and Giroux.
Bruni, L., and R. Sugden. 2000. Moral canals: Trust and social capital in the work of Hume, Smith and Genovesi. Economics and Philosophy 16: 21–45.
Buss, L. 1987. The Evolution of Individuality. Princeton: Princeton University Press.
Camerer, C., G. Loewenstein, and M. Rabin, eds. 2003. Advances in Behavioral Economics. Princeton: Princeton University Press.
Clark, A. 1990. Connectionism, competence and explanation. In M. Boden, ed., The Philosophy of Artificial Intelligence, pp. 281–308. Oxford: Oxford University Press.
Clark, A. 1997. Being There. Cambridge: MIT Press.
Clark, A. 2002. That special something. In A. Brook and D. Ross, eds., Daniel Dennett, pp. 187–205. New York: Cambridge University Press.
Clark, A. 2004. Natural Born Cyborgs. Oxford: Oxford University Press.
Davis, J. 2003. The Theory of the Individual in Economics. London: Routledge.
Dennett, D. 1987. The Intentional Stance. Cambridge: MIT Press.
Dennett, D. 1991. Consciousness Explained. Boston: Little Brown.
Dennett, D. 2003. Freedom Evolves. New York: Viking.
Elster, J. 1985. Making Sense of Marx. Cambridge: Cambridge University Press.
Elster, J. 1999. Alchemies of the Mind. Cambridge: Cambridge University Press.
Ewert, J.-P. 1987. Neuroethology of releasing mechanisms: Prey-catching behavior in toads. Behavioral and Brain Sciences 10: 337–68.
Flanagan, O. 2002. The Problem of the Soul. New York: Basic Books.
Gintis, H. 2000. Game Theory Evolving. Princeton: Princeton University Press.
Gintis, H. 2004. Towards the unity of the human behavioral sciences. Politics, Philosophy and Economics 3: 37–57.
Glimcher, P. 2003. Decisions, Uncertainty and the Brain. Cambridge: MIT Press.
Gould, J., and C. Gould. 1988. The Honey Bee. San Francisco: Freeman.
Hodgson, G. 2001. How Economics Forgot History. London: Routledge.
Hollis, M. 1998. Trust within Reason. Cambridge: Cambridge University Press.
Hull, D. 1976. Are species really individuals? Systematic Zoology 25: 174–91.
Keller, E. F. 1999. The Century of the Gene. Cambridge: Harvard University Press.
Kennedy, J., and R. Eberhart. 2001. Swarm Intelligence. San Francisco: Morgan Kaufmann.
Lawson, T. 1997. Economics and Reality. London: Routledge.
Lloyd, D. 1989. Simple Minds. Cambridge: MIT Press.
Maynard Smith, J. 1982. Evolution and the Theory of Games. Cambridge: Cambridge University Press.
McGeer, V. 2001. Psycho-practice, psycho-theory and the contrastive case of autism. Journal of Consciousness Studies 8: 109–32.
Montague, P. R., and G. Berns. 2002. Neural economics and the biological substrates of valuation. Neuron 36: 265–84.
Quine, W. V. 1960. Word and Object. Cambridge: MIT Press.
Ritzman, R. 1984. The cockroach escape response. In R. Eaton, ed., Neural Mechanisms of Startle Behavior, pp. 93–131. New York: Plenum.
Roemer, J. 1981. Analytical Foundations of Marxian Economic Theory. Cambridge: Cambridge University Press.
Rosenberg, A. 1992. Economics: Mathematical Politics or Science of Diminishing Returns? Chicago: University of Chicago Press.
Ross, D. 2002. Why people are atypical agents. Philosophical Papers 31: 87–116.
Ross, D. 2004. Meta-linguistic signaling for coordination amongst social agents. Language Sciences 26: 621–42.
Ross, D. 2005. Economic Theory and Cognitive Science: Microexplanation. Cambridge: MIT Press.
Ross, D. 2006. Evolutionary game theory and the normative theory of institutional design: Binmore and behavioral economics. Politics, Philosophy and Economics 5: 51–79.
Ross, D. 2007. H. sapiens as ecologically special: What does language contribute? Language Sciences, forthcoming.
Ross, D., and P. Dumouchel. 2004. Emotions as strategic signals. Rationality and Society 16: 251–86.
Rothschild, E. 2001. Economic Sentiments: Adam Smith, Condorcet, and the Enlightenment. Cambridge: Harvard University Press.
Samuelson, P. 1947. Foundations of Economic Analysis. Enlarged edition (1983). Cambridge: Harvard University Press.
Satz, D., and J. Ferejohn. 1994. Rational choice and social theory. Journal of Philosophy 91: 71–87.
Schelling, T. 1980. The intimate contest for self-command. Public Interest 60: 94–118.
Skyrms, B. 2002. Signals, evolution and the explanatory power of transient information. Philosophy of Science 69: 407–28.
Spurrett, D., and S. Cowley. 2004. How to do things without words: Infants, utterance-activity and distributed cognition. Language Sciences 26: 443–66.
Sterelny, K. 2004. Thought in a Hostile World. Oxford: Blackwell.
Stigler, G., and G. Becker. 1977. De gustibus non est disputandum. American Economic Review 67: 76–90.
Sugden, R. 2000. Team preferences. Economics and Philosophy 16: 175–204.
226
Don Ross
Sugden, R. 2001. The evolutionary turn in game theory. Journal of Economic Methodology 8: 113–30. Tetlock, P., and A. Belkin, eds. 1996. Counterfactual Thought Experiments in World Politics. Princeton: Princeton University Press. Tomasello, M. 2001. The Cultural Origins of Human Cognition. Cambridge: Harvard University Press. Weibull, J. 1995. Evolutionary Game Theory. Cambridge: MIT Press. Young, H. P. 1998. Individual Strategy and Social Structure. Princeton: Princeton University Press.
11 Situated Cognition: The Perspect Model

Lawrence Lengbeyer
Activity and Receptivity

Social psychologists, through decades of research, have documented the common tendency to underestimate how much our subjective construals of the situations that we face shape our thought and behavior (Ross and Nisbett 1991). "[S]ituational factors exert effects on behavior that are more potent than we generally recognize" (1991, p. 28; see also Flanagan 1991, chs. 12–14).1 This general tendency to slight the role of situational influences upon thought and behavior is echoed by a similar neglect within philosophy. A person's cognitive endowment is assumed to provide a single fund of resources whose availability is indifferent to situation. Goals, especially in the form of desires, tend to be treated as givens, with little said about how they are constructed or elicited in reaction to environmental pressures and opportunities.

In effect, the standard philosophical and folk-psychological stories about cognition and action credit the agent with too much spontaneity in his activities and projects. He is taken to be fundamentally active rather than reactive, to project his needs and aims, accompanied by his full supporting arsenal of cognitive instruments, upon an environment that constrains his activities, and of course provides provocations and occasions for those activities, but is essentially a passive medium. There is nothing strictly false in such an approach; it is merely that the one-sided emphasis has served to obscure important features of mental functioning. A corrected point of view must balance the image of the active agent with an appreciation of how we are also continually responding to the world—that is, to the pragmatic situations that present themselves via our experientially molded perceptual systems and that effectively select (in
the first instance, at least) subsets of our cognitive resources to be at our disposal in generating our responses. The result is a model of a structurally divided mind, comprising a multiplicity of diverse, and sometimes conflicting, standpoints and wills whose elicitation is a complex function of agent intentions and plans, the encountered environment, past experience, and temporal sequence.

The Standard View: Cognitive Integrationism

The prevalent ideal-type picture of the cognizing mind, which serves as part of the assumed background framework for a great deal of work in philosophy and psychology, takes it to be a comprehensively integrated network of beliefs. One's mind is thought to provide, at any particular time, and regardless of the particular situation in which one finds oneself, a unified perspective on the world, something that makes one the unique individual one is. This account, of the typical person's cognitive endowment as a functionally integrated unity all of which is available to contribute to each occurring cognitive process, I will term Cognitive Integrationism, or Integrationism for short.

On an Integrationist account, one simply has a single overall perspective or framework (which includes one's values, plans, and policies) that shapes one's construal of, and response to, encountered information. One's current mental activity is thus a function of two variables: one of these for the perspective, which is generally unchanging (aside from occasional permanent changes via learning or permanent forgetting), plus one for the immediate circumstances, which influence which of one's stable values, plans, or policies are to be chosen to be controlling at the moment. Time—precisely when it is in the flow of daily life that one's mind is operating within these parameters—is otherwise irrelevant. On this view, for any thinking task whatsoever the relevant mental tools are roused from their slumbers and mobilized.2 Perceptual construals and other interpretations are formed using the full resources of one's perspective, and reasonings and actions are then executed with the utilization of all its topic-relevant components. The subject matter of a conversation, or the substantive demands and features of a task—which are ascertained in a context-independent manner, the assessments being affected in no special way by the kind of situation one is then in—determine which of one's existing beliefs are relevant, and these then get put in play. True, one does think and act differently in different kinds of situation, but that is simply because the matters thought about in each situation elicit the mental
activity appropriate to them. Once a topic is determined, situation exerts no further cognitive influence.

The basic idea of topic-relevance can be conveyed with the aid of an example (from Cherniak 1986, p. 57). Suppose that one comes to think that one needs more light in order to look within a gasoline tank. One will then bring to bear all of one's beliefs concerning 'light' and 'gasoline tank' (or maybe 'light', 'gasoline tank', 'gasoline', and 'tank'—such details are unimportant). If it occurs to one to strike a match for the illumination, one will then have access to all of one's beliefs concerning 'match' (and perhaps 'striking a match') as well.3

Such a model of cognitive-resource activation may appear to make intuitive sense, but we can already doubt whether it has the resources for handling this example of the Heedless Match-Striker. It in fact repeatedly occurs, in one variation or another, that a nonsuicidal person who knows that matches are used for ignition, that gas fumes can easily ignite, and that he is dealing with a gas tank, nonetheless knowingly lights a match for illumination and blows himself up. The view that topic-relevance guides the marshaling of cognitive resources has no apparent explanation to offer for such tragic absent-mindedness. That view would presumably have to fall back upon some unconnected auxiliary hypothesis about the unusual circumstances of the cognizer: physiological impairment or emotional disturbance, perhaps,4 or (and this seems still less plausible) extreme time constraints that prevent timely activation of the topic-relevant resources. Another response might be to invoke some account of attention focusing,5 but it is not easy to see how to incorporate attention-narrowing principles while maintaining the model of a single, internally unified cognitive system.

In addition, while the Heedless Match-Striker illustrates the difficulty for Integrationism posed by cases where topic-relevant information is not available to cognizers, another challenge comes from cases where cognizers possess topic-irrelevant information that they cannot manage to ignore. For example, one is trying to evaluate a paper on its merits but cannot ignore or suppress knowledge of the identity of the author. In such cases the disavowed factors are not topic-relevant according to the standards and convictions of the persons involved, yet they figure in those persons' cognizings, even against their wills. And it would appear that the more effective the activation-narrowing expedients that are introduced to handle the unavailable-information cases, like Heedless Match-Striker, the more problematic become the insuppressible-information ones, given that these are characterized by failures in cognition narrowing.
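The topic-relevance activation story can be put in a few lines of schematic code. The sketch below is my own illustration, not anything specified in the text: beliefs are tagged with the concepts they involve, and a topic mobilizes every belief sharing a concept with it.

    # A toy rendering of Integrationist activation (illustrative names only):
    # each stored belief is tagged with the concepts it involves, and a topic
    # mobilizes every belief that shares at least one concept with it.

    beliefs = [
        ({"match", "ignition"}, "matches are used for ignition"),
        ({"gasoline", "fumes", "ignition"}, "gas fumes can easily ignite"),
        ({"gasoline tank", "gasoline"}, "this is a gasoline tank"),
        ({"light", "match"}, "striking a match yields light"),
    ]

    def activate_by_topic(topic):
        """Return every belief whose concept tags intersect the topic."""
        return [text for tags, text in beliefs if tags & topic]

    # The Heedless Match-Striker's topic of cognition:
    topic = {"light", "gasoline tank", "gasoline", "match"}
    for belief in activate_by_topic(topic):
        print(belief)
    # All four beliefs come up, danger included, so on this model the
    # explosion should never happen, which is exactly the puzzle just posed.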
There is one further problem that seems quite serious and intractable for the Integrationist account. A direct implication of that account, one that follows from its two main features—the constant accessibility, or ever-activated availability, of all of a person's cognitive resources, and the activation of such resources according to topic-relevance—is that recognizable inconsistencies of belief cannot stably reside within the mind, cases of intentionally inexplicable malfunction or pathology aside. To hold onto, say, both p and not-p as beliefs would be absolutely out of the question. Even if the light of consciousness were to disclose only one at a time, thus averting the challenge of subjectively embracing both simultaneously, the fact of unrestricted access would mean that both would always be ready for use in reasoning and action upon presentation of appropriate topics of cognition. And because each is merely the negation of the other, wherever one would be topic-relevant so would the other, thus causing both always to be invoked together. But then neither one could actually be used in reasoning, and a person could not be said to possess either belief. Thus, if one is rightly attributed a belief that p, one cannot also possess a not-p representation. On the Integrationist picture, then, a person's perspective may evolve over time, but always under the constraint that it remain free of recognizable internal inconsistencies—which would be fine, except that this requirement seems to be not infrequently violated in everyday life.

An Alternative Cognitive Architecture: Perspects

Overview
A different account of cognitive structure, perspect theory, holds promise as an answer to these difficulties. According to this theory, the manifold stored representations constituting a person's cognitive endowment do not form a single integrated network all equally ready for use. Cognition at any given moment is limited in its operations to drawing upon only a subset of one's perspective, the perspect that is activated in accordance with the specific mental task or situation (the pragmatic context) that one perceives oneself to be facing. (This construal of the situation is shaped, naturally, by the immediately prior perspect.) Cognition cannot enlist perspectival elements that have not been activated (i.e., that have not been implicitly deemed to be situation-relevant) even if those elements might happen to be quite relevant or useful for the current topic of reasoning or task were it to be construed differently.

An important consequence of this is that a single mind is nonpathologically capable of, and strongly—perhaps irresistibly—inclined toward, a
diversity of perceptual and interpretive orientations, viewpoints, and behavioral personas. These personas not only can take divergent or opposed views from one another and pursue different practical agendas, but can even scheme against one another. Indeed this is a basic mechanism for self-improvement. One encourages boldness in oneself, say, by curbing the tendency that one has in certain circumstances to activate perspects that cause one to think, and worry, about certain kinds of possible consequences of one's actions. (There are countermeasures available to those other perspects, however.)

Moreover there exist within any person's mind numerous stable, ongoing instances of outright inconsistent cognitive adherence. The conflicting elements do not stand in pressing logical or psychological tension, either because they are seldom or never activated in such a way as to apprise their owner of the clash or simply because this person is content with the conflicted status quo. Sets of cognitive elements are thus partitioned from one another—not in a synchronic, geographic manner, via decoupling from each other and from the rest of the perspectival web, but diachronically, via differing activation conditions (i.e., differing sets of pragmatic contexts in which one is disposed to activate them). Despite the partitioning, cognition is not pervasively fractured and jumpy; perspect activation is tied to pragmatic situations, and these tend to be interlinked, one to the next, in ways that preserve a good deal of cognitive continuity.

With the notion that one actually possesses and deploys an array of more specific perspectives (i.e., perspects), it becomes evident that one's current mental activity is more profoundly a function of time than the Integrationist account allows. It now depends also on one's immediately previous encounters with the world, which in part determine one's particular purposes in the moment as well as the subset of cognitive resources that one will bring to bear upon them. Moreover cases like the Heedless Match-Striker are no longer baffling: a construal of the pragmatic context occasions a cognitive activation that omits some topic-relevant information. Given the Match-Striker's circumstances (including preceding activities), and his prior training and experience, his knowledge about the dangers of explosion was not relevant to the particular purpose that he was pursuing ('Get more light for a look inside the tank'). Though such a situation-relevance system is generally conducive to good mental functioning, in certain cases, like this one, it leaves dormant knowledge that would have been useful, even lifesaving. As for the inverse sorts of cognitive errors, where topic-irrelevant information cannot be ignored as one would like, these result from one's unwitting or
uncontrollable perception of one's situation as possessing certain features, this perception causing certain cognitive resources to be mobilized even if they are not topic-relevant (and perhaps even in the face of one's conscious disapproval of their mobilization).

In More Detail: Shifting Standpoints
The stream of cognizing thus consists of a sequence of occurrent cognizing episodes, each involving one or another subset of the sentences and images that stock one's cognitive cupboard (augmented by new representations that have just been formed by way of sensation or cognitive activity such as reasoning6). Mental life is a continual shifting from perspect to perspect, or, in other words, from one frame of mind or point of view to the next.7 A shift of perspects from P1 to P2 is simply the change from having one set of representations activated to having a different, perhaps overlapping, set activated. Effectuating such a change requires 'switching off' the P1 representations that are not elements of P2, returning them to a dormant, temporarily inaccessible state in memory, and 'switching on' (retrieving from this dormant state) the constituents of P2 that are not elements of P1. In any particular cognitive episode, then, one's cognitive processes are confined to utilizing the representations in the then-activated perspect (along with just-acquired representational resources8).

In a mind structured this way, much of the responsibility for how one thinks, feels, perceives, and behaves belongs not so much to what one knows—the complete set of tools in one's cognitive cabinet—as to how that knowledge is organized and allocated to one's sundry perspect tool kits. What is crucial, that is, are one's implicit policies for applying and not applying certain cognitive tools in particular types of contexts. Accordingly, the heart of the cognitive system is the perspect manager (or pmanager for short), a perspectival subset that has the distinctive role of bringing about perspect shifts. It comprises a collection of switches or transition rules, whose function it is to respond to information about one's environment merely by mechanically prompting the activation and deactivation of particular representational resources and thereby of different perspects. The pmanager might thus be conceived as a (mindless) mechanism responsible for implementing a mapping or function from perspects and environments to other perspects: (P, e) → (P′).9

I assume that, by contrast to the multiplicity of perspects, there is only a single pmanager per person—that one and the same set of perspect-switching elements is always activated—though this special-purpose mechanism may become modified over time.10 (Given the crucial impact of
one's pmanager on one's psychological functioning, such modification via experience and training is a central means of self-improvement and self-creation.)

Still there is one small complication worth keeping in mind: one input to the pmanager is the cognizer's environment, but the environment must be given some particular construal before it can make a determinate impression upon the pmanager, and this construal is shaped by the existing perspect. Hence perspect P1 might, in response to environment e construed as e(P1), be switched by the pmanager to P7, while, given the identical objective environment e, P2 might produce a different construal e(P2) and therefore get transformed into P31.11 The domain of the pmanager function is thus not simply perspects and environments, but perspects and environments as construed via those perspects: (P, e(P)) → (P′).
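The shift-and-switch mechanics just described lend themselves to a schematic rendering. The sketch below is my own illustration, not part of the chapter's apparatus: perspects are modeled as sets of representation identifiers, and the pmanager as a finite table of transition rules keyed to construals.

    # Schematic perspect machinery (illustrative; the chapter specifies no
    # implementation). A perspect is a set of activated representations.

    def shift(p1, p2):
        """Move from perspect P1 to P2: switch off what P1 has and P2
        lacks, switch on what P2 has and P1 lacks."""
        deactivate = p1 - p2                  # returned to dormancy
        activate = p2 - p1                    # retrieved from memory
        return (p1 - deactivate) | activate   # equals p2

    def pmanager(perspect, environment, construe, rules):
        """Implement (P, e(P)) -> P': the environment is first construed
        through the current perspect; only a matching transition rule in
        the finite trigger set {e} prompts a shift, otherwise the
        pmanager stays quiescent."""
        construal = construe(perspect, environment)
        if construal in rules:
            return shift(perspect, rules[construal])
        return perspect                       # current perspect remains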
Not every environment-as-construed initiates pmanager activity (perspect shifting). It is unrealistic to think that a human mind, even once it is fully trained in making situational discriminations, recognizes every variation in the environment as calling for a distinct cognitive approach and therefore is continuously fluttering from one to another (e.g., as the visual field or stream of consciousness changes from moment to moment). Rather, the pmanager ought to be envisioned as responding to only a finite number of triggering conditions {e}. It comprises a limited set of perspect-transition rules, each containing a discrete kind of environment-as-construed within its antecedent. This might in fact allow it to respond productively to an infinite variety of environments never before encountered if, as seems likely, there is some means of recognizing similarities and analogies between situations that are not identical. Such a design makes a good deal of sense, as it permits novel environments to interface with the pmanager system and elicit responses from the cognitive apparatus that are reasonably well suited to those environments. It allows also for some stability in cognitive operation, whereby a construal of the kind of environment that one is currently in—and hence the perspect that one has activated—remains unchanged for a period of time even as one's thoughts or sensory inputs are changing. That is, one's mind need not be continuously 'reconsidering' and revising its view as to the type of environment one is currently in; the pmanager is quiescent, and thus the current perspect remains in place, until a change in perceived circumstances amounts to the perception of an environment type that better matches the antecedent of a transition rule different from the one last executed. (More will be said about this in the next section.)

This is all quite abstract, and an analogy might help convey the central idea. Compare the structure of a person's cognitive endowment to an array
of radio telescopes. Only one subset of telescopes at a time is operative, and the data pattern registered at t1 by the activated subarray automatically causes activation of a different subarray at t2. If each telescope in the array is programmed to operate in a distinctive manner, the system can be designed so that differing subarrays actually perceive differing things (by probing for different inputs, and construing identical inputs differently) and perform differing operations on the data thus perceived. Thus the array, depending on which of its subarrays is operative, might make very different sense of one and the same corner of the heavens.

Situation-Relevance
It was said above that it is "information about one's environment" that prompts a perspect shift. What sort of information? Here perspect theory departs from the simple Integrationist picture in an important way. The set of perspectival resources that can be brought to bear in any cognizing episode in a perspect-structured mind is more circumscribed than the set of all topic-relevant resources. A situational filter comes into play, whereby assets that are topic-relevant are nonetheless not available to participate in a cognizing episode if they are, roughly, not the kinds of items that are taken (subconsciously) by the cognizer to be, or to have been, useful in the sort of circumstances in which the cognizing episode is taking place. The assets brought to bear are determined by the mental task as the cognizer perceives it, something that includes considerations of topic, but limited by additional factors such as physical and social setting, and the cognizer's specific and broader goals.

I refer to the mental task "as the cognizer perceives it," for situations presenting such tasks, like the "environments" mentioned earlier, are not objective facts that impose themselves (their meanings, constraints, and opportunities) on all people identically. They are identified (or individuated) by subjective construal of circumstances and reflect the projection of particular aims, values, concerns, and priorities onto the world.

There is also a historical dimension to situational relevance. Perspect activation at time t is not, in general, a matter of computing de novo what representational resources are relevant given the goal, setting, and topic. Rather, it depends largely on which elements have in the past come to be linked to tasks of this type. On-the-spot adjustments are made only upon such a 'given' foundation, the sediment of past experience.

In sum: at any moment, which of one's potential perspects next gets activated depends on the pragmatic context in which one now takes oneself to be, in light of one's construal of one's present circumstances through the
use of the current perspect. Cognition cannot call upon perspectival elements that have not been implicitly deemed, as a result of past experience, to be relevant to the kind of pragmatic context being currently perceived12—that is, to be situation-relevant. That we subsequently recognize, after stepping back to a different standpoint, how very helpful it would have been to have had certain other specified resources available in those circumstances, cannot undo this selectivity of cognitive activation (though it might motivate us to seek to modify our pmanager system for the future). Indeed even prior anticipation of the usefulness of having certain cognitive resources activated in those circumstances is no guarantee that the automatic operations of our pmanager system will not cause a different set to be activated when the time comes. We can depict this cognitive system diagrammatically as shown in figure 11.1.

Figure 11.1: Mind–world interaction in a perspect-structured cognitive system.
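The situational filter can be thought of as a second gate laid over topic-relevance. Here is a minimal sketch of that idea, under assumptions of my own (in particular, that context-to-resource links are stored as a simple lookup table built by past experience):

    # Situation-relevance as a filter over topic-relevance (illustrative).
    # Only resources historically linked to the perceived pragmatic context
    # are available, however topic-relevant the rest may be.

    situation_links = {
        # built up by past experience; one hypothetical context type:
        "get-light-to-inspect-tank": {"striking a match yields light",
                                      "matches are used for ignition"},
    }

    def available_resources(topic_relevant, context_type):
        """Intersect the topic-relevant assets with those linked to the
        currently perceived context; everything else stays dormant."""
        linked = situation_links.get(context_type, set())
        return set(topic_relevant) & linked

    # The Match-Striker again: "gas fumes can easily ignite" is
    # topic-relevant but was never linked to this context type, so it is
    # filtered out and never reaches the cognizing episode.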
With cognition being conducted through the employment of individual perspects rather than a single, entire perspective, two persons might possess identical collections of representational resources, yet cognize very differently. If their pmanagers were to differ, their resources thereby being wired differently for activation, they might employ different sets of mental states even in identically construed pragmatic contexts. Compounding this difference would be second-order effects, since the already divergent perspects would sometimes go on to produce dissimilar construals of immediately subsequent pragmatic contexts. This would in all likelihood13 cause
Lawrence Lengbeyer
the two persons' perspects to differ again in the next round of activations, only now for a second reason as well.

A perspect system is in effect a tremendously detailed organizational network of cognitive representations. The linking arrangement is itself an asset over and above the representations themselves, one that is generally conducive to conducting14 the kinds of mental tasks through which this network has in the past been shaped.15 Relocate to a very different (sub)culture, and one's mind may undergo substantial devaluation, as much of the value furnished by its particular perspect organization might not be transferable, at least not without significant modification. One may be reduced to the cognitive state of a child or neophyte, bringing inappropriate collections of resources to bear and having to rely inordinately on conscious figuring and on trial and error.

With these added complications, making the perspect picture vivid now requires a more complex analogy than the telescope array. Consider the organization of a football team—one guided by a badly myopic coach. After each play, the subset of the team (the "unit") that has just been on the field informs the coach of the game situation now faced by the team. The coach, squinting at the substitution rules listed on his clipboard, then orders onto the field the appropriate unit for such a situation. Sometimes a switch of units requires wholesale changes of personnel, sometimes only the swapping of a few players. Some players thus see a lot of action, appearing in many units or in commonly utilized ones, while others are benchwarmers whose appearances require unusual game situations. Because of the specific roles that the players are asked to play, a player may find certain teammates appearing on the field with him again and again, and others seldom or never. Systematically separating certain players in this way can sometimes be a matter of some urgency: the player at a certain position in a given unit may play that position in a way that, while it harmonizes effectively with the play of neighboring players in this unit, could clash with the play of teammates were he to see action with a different one. Yet clashing players can coexist on the team because no problems are encountered unless and until game situations arise that call for such player combinations (and which situations arise is to a significant extent under the influence of the team itself, how it plays). The inventory of available units is flexible, as the coach can respond to novel game situations by assembling special-purpose units; it may be that most units are thus adapted at least somewhat. There is even the option of recruiting newly trained players for short-term employment, then dismissing them from the team when their
usefulness has expired—though some of these make such impressions that they are signed onto the permanent team roster.

The theory might also be elucidated by applying it to a concrete experimental result in the psychological literature. One of the well-known Kahneman and Tversky "cognitive heuristics and biases" experiments shows very starkly the influence on decision-making of how outcomes attached to alternative options are framed (2000b, pp. 4–5). Subjects were presented the following problem:

Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. . . . If Program A is adopted, 200 people will be saved. If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. Which of the two programs would you favor?
Seventy-two percent of subjects chose A, the remainder choosing B. But then the options were formulated in terms of deaths that would result, rather than lives that would be saved:

If Program C is adopted, 400 people will die. If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 will die.
The percentages reversed. Only 22 percent chose C, even though it is "undistinguishable in real terms" from A. The experimenters offered a compelling explanation in terms of people's tendencies to be risk-averse regarding perceived alternative gains (hence preferring A over B) but risk-seeking regarding perceived alternative losses (hence preferring D over C).16 The most interesting aspect from the perspect point of view, though, is that:

The failure of invariance . . . is as common among sophisticated respondents as among naïve ones, and it is not eliminated even when the same respondents answer both questions within a few minutes. Respondents confronted with their conflicting answers are typically puzzled. . . . [T]hey still wish to [choose A, and choose D]; and they also wish to obey invariance and give consistent answers in the two versions. In their stubborn appeal, framing effects resemble perceptual illusions more than computational errors. (2000, p. 5)
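The real-terms equivalence of the two framings is quickly verified; the following check is mine, not part of the original presentation:

    # Expected outcomes of the four programs, out of 600 lives at risk.
    saved_A = 200           # so 400 die for certain
    saved_B = 600 / 3       # one-third chance all 600 saved: 200 expected
    died_C = 400            # i.e., 200 saved for certain
    died_D = 2 * 600 / 3    # two-thirds chance all 600 die: 400 expected

    assert saved_A == 600 - died_C   # A and C describe the same outcome
    assert saved_B == 600 - died_D   # B and D describe the same lottery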
The typical subject, S, appears to have mobilized different sets of cognitive tools for the different choice problems (A vs. B, C vs. D), according to
the mental task perceived—as perspect theory would predict. Those mental tasks (i.e., pragmatic contexts) were evidently functions of the problem wordings, what with there being no other apparent differences between the problems. When shown the real-world equivalence between the two sets of options, S was indeed troubled by what she saw as an inconsistency. (She was using still another perspect for this different task of assessing her performance in the two problems taken together, a perspect that enabled her to see both of the problems, both of her responses, and the consistency standard that she had violated.) Yet she stuck to her guns, presumably because the task presented by each problem when considered in isolation remained unaltered, as did the perspect that was accordingly activated to handle it.

S's behavior might perhaps be counternormative or irrational, but from the perspect-theory standpoint it is not utterly mysterious. Nor is the tension that S experienced when revisiting, say, the A-B problem after realizing that the C-D problem was in some sense identical. This appears to have been produced by the way that the experimenters cleverly presented information (the C-D problem) that, although it seemed to S to be clearly topic-relevant for the A-B problem—how could it not be, when it revealed another, objectively identical, description of the outcomes?—nevertheless seemed to S also to be pragmatic-context-irrelevant. This would understandably produce tension if S, lacking a clear understanding of the distinction between the two types of relevance, both found the C-D information irrelevant to her A-B task and also was under some impression that information thus related to her task (i.e., obviously topic-relevant, in the terms we are using) must be relevant. Hence S felt both that she was doing right by each choice task, and that she was being inconsistent. While this leaves much still to be understood about the phenomenon, it is a significant start. Integrationism, on the other hand, seems to lack the resources to begin making any sense of S.

And the same sort of dependence, whereby differing form- and context-influenced perceptions of pragmatic task provoke the application of differing mental processes to what appears to be a single, unchanged decision problem (or topic), turns up in a diverse array of situations.17 The perspect model of cognition in fact accords well generally with the results unearthed by the rich research program on decision-making inspired by Tversky and Kahneman (e.g., see generally Kahneman and Tversky 2000a). The major findings include the following: "The same choice problem may evoke different preferences, depending on inconsequential variations in the formulation of options or in the procedure used to elicit choices" (Kahneman
2000, p. 759, citations omitted); "Different frames [i.e., characterizations, wordings], contexts, and elicitation procedures highlight different aspects of the options and bring forth different reasons and considerations that influence decision" (Shafir, Simonson, and Tversky 2000, p. 618). While the psychologists and behavioral economists pursuing these lines of research have concentrated on uncovering and classifying the phenomena, and describing the patterns of thinking that they reveal, they have tended to be silent on questions regarding how such patterns ought to be modeled and understood as products of the larger mental system. The perspect model offers the beginning of such a systematic account.

A great deal of work needs to be done to elaborate the details of the perspect account, while testing its concurrence with the range of empirical results produced in cognitive science and elsewhere. But we might glean a primitive sense of the likely mechanics of perspect switching by considering an illustration, the Self-Hating Sports Fan:

Milt insists, with full and well-reasoned conviction, that the outcomes of sporting events, for all the attention and hype they receive, are thoroughly unimportant as life matters go, and unworthy of any serious concern. Still, it is clear from his behavior at other times that he is deeply concerned about them. While watching some football contests, for example, he exults and despairs as the tide turns to and fro, and he analyzes the unfolding events with great care and seriousness. Sometimes, following or even during such an involvement in a game, he is again of a mind to deny sports' significance, and at such times he may experience confusion or self-disgust at his own passionate involvement (particularly if the game itself has soured his feelings).
Let's suppose that Milt has just been muttering about how silly sports are, when in his perceptual field there appears a newspaper headline concerning the results of yesterday's Vikings game. Does he say "Big deal," and go on with other things? Or does he shift into a perspect that takes sports seriously, and become engrossed in the article or in speculating about the playoff implications of what he has just seen?

There are various possibilities here, depending on Milt's precise current perspect. Perhaps his spotting of the headline occurs at a moment when he is formulating arguments to express and justify his disgust at the present-day fervid interest in spectator sports, or maybe he is reasoning to himself (or to someone else) how, appearances notwithstanding, he really does not care about sports, and can take them or leave them. In either of these cases, it might well be that Milt will interpret the suddenly noticed headline as material to be worked into the case he is making. A response akin to "Big deal" will be the result. ("Just look at this—how we devote
3-inch headlines to a football game when there's just been a typhoon in Thailand!" or, "See, I don't need to read the sports section. I can just toss it away like this.") Here, there will have been no shift to a caring-about-sports perspect.

On the other hand, Milt's state of mind just before seeing the headline might not involve an activity like those just mentioned. Suppose that he is relaxed, having concluded his anti-sports thoughts and having begun to move on to other things. If so, it might well be that Milt would (subconsciously) construe a suddenly sighted Vikings headline as an opportunity to get valuable information about his favorite team—much as seeing a newspaper headline about Russia would ordinarily draw his attention, given his abiding fascination with Russian affairs. In the cases considered in the previous paragraph, this construal, though the more common one for Milt to make, had been blocked by a different one that was imposed by his current activity. It is not so blocked here. Thus a caring-about-sports perspect would be activated, superseding the sports-denigrating one. Milt might then go on to read about his team contentedly (or frustratedly, if the news is not good), 'forgetting himself' for the time being in that he is entirely untroubled by the sports-antipathetic ideas lying dormant in his memory. Alternatively, he could possibly be prompted to shift back to the original perspect (and think "Big deal" after having initially felt sincere interest in the news) if some change in perceived pragmatic context engages his pmanager in an appropriate fashion. For instance, he might be reminded of his sports-are-worthless outlook by another person's remark, or by his own episodic-memory recollection of having just been thinking such thoughts before he came across the headline.18
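Milt's case slots neatly into the schematic machinery sketched earlier: the same objective event is construed differently through different current perspects, and the construal, not the raw event, is what engages the pmanager. Again a toy sketch, with all labels invented:

    # The Self-Hating Sports Fan in the earlier schematic terms. The same
    # event yields different construals under different current perspects,
    # and hence different (or no) perspect shifts.

    def construe(perspect, event):
        if event == "vikings-headline":
            if perspect == "arguing-sports-are-silly":
                return "material-for-my-argument"
            if perspect == "relaxed":
                return "news-about-my-team"
        return "nothing-salient"

    rules = {
        "news-about-my-team": "caring-about-sports",
        # "material-for-my-argument" triggers no rule: perspect persists.
    }

    for current in ("arguing-sports-are-silly", "relaxed"):
        construal = construe(current, "vikings-headline")
        successor = rules.get(construal, current)
        print(current, "->", successor)
    # arguing-sports-are-silly -> arguing-sports-are-silly  ("Big deal")
    # relaxed -> caring-about-sports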
Perspects and the Frame Problem

We now have the beginnings of an enriched conceptual scheme for describing and understanding cognitive activity, one that supports new kinds of hypotheses about the mind's operations in particular cases. We also have a new conception of relevance in the operation of human cognition, one that might assist in the understanding of how humans by and large finesse the so-called Frame Problem. The Frame Problem is something of a Rorschach blot that has been subjected to quite a variety of interpretations within cognitive science and philosophy (see Pylyshyn 1987; Ford and Pylyshyn 1996). One of the broader construals (e.g., Dennett 1990) sees it as the challenge to build or to explain
an intelligence that can employ huge amounts of stored information in an efficient, timely manner in dealing with its environment. The perspect model can be viewed as offering a distinctive kind of theory about how the human mind more or less solves this Frame Problem, a rough sketch or general blueprint that might be taken up and fleshed out by cognitive scientists.19 The fundamental design point of organizing a mind into perspects is to cause situationally relevant information to be accessed together when another instance of the same type of situation is subsequently recognized. At the same time, other stored information does not get activated, and therefore does not interfere with processing.20 As Clark Glymour (1987, p. 70) observes, "The best solutions to frame problems . . . generate the relevant without ever looking at the irrelevant." At the very least, a perspect structure shrinks the domain of mental representations that needs to be submitted to a relevance-to-topic search. Like earlier-postulated 'frames,' 'scripts,' and 'schemas,' the perspect structure "gets its ignoring-power from the attention-focusing power of stereotypes" (Dennett 1990, p. 163)—stereotypes of pragmatic contexts, and of collections of representational resources that have been matched to them in the pmanager by prior experience. There is also provision for improvising new, somewhat modified, collections on the fly in response to somehow-novel situations, where some of these new collections (those corresponding to situations that are sufficiently distinct and important to deserve their own independent conceptualizations) are then stored and made available for future recognizably similar situations.21 Both the pmanager, and the set of pragmatic-context types or categories to which it is attuned, appear to offer the benefits of both stability and adaptability, providing standardized responses yet also possibilities for modification in order to suit newly emerging cognitive needs.

Not that this system is invulnerable to error, as common experience proves. We succumb to both errors of inclusion and errors of omission, all of which can be classified into four categories, depending on whether the problem concerns the (1) excessive or deficient (2) retention or activation of cognitive resources.22

Errors of Inclusion
• Protraction errors: A newly activated perspect unhelpfully retains elements of its predecessor—for example, one directs unrelated frustrations or resentments against a hapless child or colleague; one cannot relax at an outing in the presence of one's boss.
• Mobilization errors: Counterproductive elements get deployed out of the larger cognitive endowment—for example, one cannot control one's anxious awareness of how much is riding on the musical audition one now faces; one employs out-of-place philosophical or legal reasoning during discussions on the basketball court, with shop clerks, or with one's spouse.
Errors of Omission
• Contraction errors: One fails to keep activated certain useful elements of the previous perspect—for example, one lets down one's guard, which had previously been rightly raised, while interacting with a smooth-talking salesman.
• Inactivation errors: Needed perspectival elements fail to be roused for the mental task at hand—for example, the Heedless Match-Striker.
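The four error types thus fill out a two-by-two grid (my tabulation of the categories just listed, crossing excessive/deficient with retention/activation):

                 Retention              Activation
    Excessive    protraction errors     mobilization errors
    Deficient    contraction errors     inactivation errors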
Each kind of error can appear in isolation, but it may be more typical for two or more to occur together in producing the wrong set of cognitive resources for handling a situation. For instance, if one is arguing a legal case before the United States Supreme Court, and one addresses a Justice in the manner that one would address one's child, then one is apparently utilizing some resources that ought not be available and failing to mobilize some that ought to be.

Errors of inclusion and omission seem traceable to at least seven, and probably more, sources.
• Misperception: Unfavorable perceptual conditions cause misconstruals of pragmatic contexts—for example, one mistakes a friend for a menacing street person.
• Ignorance: Information needed for accurate situation construal is missing or distorted—for example, one is unaware that the person with whom one has struck up a conversation is actually the investigative journalist who has been seeking one out for an interview.
• Fatigue or other physiological disruption: The alertness and attentiveness needed for effective utilization of one's perspect system are compromised—for example, one's mind drifts when one is unable to remain focused on the changing circumstances during a lengthy party or arduous group expedition.
• Emotion-driven inertia: One remains attached to a particular pragmatic-context construal due to a powerful short-term emotional need—as in the case above of ire that is misdirected, perhaps knowingly, against one's child.
• Affect-driven preoccupation: Episodic memories of powerful feelings, or of the triggers for these, keep dragging one back into a perspect from which
one needs to move on—as in the case above of anxiously facing a musical audition.
• Deficient situation sense: One's pmanager lacks sensitivity to certain pragmatically relevant circumstances—as is commonly the case with children, and with travelers in unfamiliar cultures.
• Recalcitrant habits of situation construal: One counterproductively assimilates new situations to a familiar type of pragmatic context despite efforts to eradicate this tendency—for example, the reforming pickpocket who, finding himself in a crowd, unthinkingly begins sizing up potential marks.

It is clear from the existence of these errors that the pmanager perspect-switching system is not some all-knowing homunculus that has the full resources of the whole person at its disposal. It includes only a finite store of precedent pragmatic contexts for matching to encountered situations, and is typically restricted (with limited exceptions) to activating in a given situation only the cognitive resources that have come to be linked to that situation-type in the past. More important, it is not any sort of guiding intelligence, but rather a passive executive process—the 'circulation librarian' for the vault of cognitive assets—that makes no choices, possesses no agenda or motives, and has no preferences or goals of its own.

Diverse, Even Dueling, Personas

Despite the regular occurrence of mishaps of the kinds surveyed, perspects are invaluable for maintaining and deploying a large body of varied information in an efficiently organized and accessible manner. But they do more than this. By keying activation of cognitive assets to situation-relevance (purpose, circumstances, and personal history), rather than topic-relevance (logical relationships among linguistic forms or contents), they also permit inconsistencies to be introduced into a person's perspective in a way that allows these to be controlled and capitalized upon. As long as it is topic-relevance that determines which representations get mobilized, no barrier seems possible between those that stand in close logical relation—if p is relevant, then so must be not-p (and maybe the apparent entailments of p, too). But if relevance is situationally relative, that is, strictly a matter of suitability to (circumstantially and historically situated) purpose or task, then the possibility is open for p, but not not-p, to be relevant at a given moment.

Perspect theory thus raises a serious question about the unity of the agent. With a perspect-structured mind, a person does not cognize from a
single stable vantage point but can undertake differing or even inconsistent approaches to a single subject matter or entity in the world. This is not merely a matter of adopting various reconcilable standpoints as part of a single plan for knowing or dealing with something. The differing standpoints are not, in general, like spies sent from a single headquarters according to one overall, coherent strategy, but more like separate cells capable of acting independently and in pursuit of their own agendas. This independence is a matter of degree, with some people imposing more cognitive unity, via cross-perspect monitoring and control, than others.

In its basic structure, then, the cognitive system is a composite, an evolving collection of diverse instruments for apprehending the world and oneself. It is not merely the Integrationist mind refined by the substitution of a more stringent relevance test. Where the variations in cognizing approach or style are significant enough, ordinary psychologizing often speaks of someone becoming a different "self," "person," or "persona," and I will use the last of these to refer to the somewhat-divergent beings who emerge in different kinds of practical lifeworlds. On this conception, then, a person is to a significant extent divided among differing personas, albeit personas that are overlapping and interrelated rather than entirely dissociated.

The terminology needed for portraying this picture is unusual, and we ought not be misled into exaggerating the degree of cognitive nonintegration. The perspect-structured mind is a unity, if only an imperfectly unified, loosely woven one. It possesses a coherence that derives from several sources: the single cognitive endowment that supplies representational resources for each perspect out of a single, evolving pool; the overlap in elements among perspects in a person's repertoire; the single perspect-switching mechanism; and the systematic interrelations among the personas or roles constituted by the differing perspects, the ways that the habits of activation allow some personas control over others. While personas are often blind to, or uncaring about, one another, they in some cases compete as rivals or antagonists, or support one another as allies, or instruct one another as do friends or siblings. From the vantage point of one perspect, for instance, we might be aware of our tendency to fall into a certain other perspect in certain circumstances, and if we deplore this habit (e.g., of being unrelentingly pessimistic, or indulging in superstitious thinking), we might resolve to correct or eliminate it, and take measures aimed at executing this resolution.23 Alternatively, we may try to fiddle with the pmanager machinery so as to promote the use of a certain persona in a certain type of situation (e.g., so that whenever we find
ourselves becoming exasperated with children, we are prompted to think of their limited capacities and of our own childhoods).

We can see now that the earlier football analogy, though apt in some respects—it did convey the basic contrast with Integrationism, according to which the mind could be modeled as an orchestra under the baton of an all-controlling conductor—was also misleading. It suggested that a single guiding intelligence lies behind the constitution and activation of units/perspects (the coach, or whoever above him draws up the unit-swapping rules). An arrangement more faithful to the mental reality (albeit less faithful to football reality) would emphasize that the coach's role is merely executive and otherwise passive, and that there is no higher-up who pulls the strings. Managerial power is concentrated in the hands of the units—the personas—themselves. (Insofar as a unit is a collection of instrumentalities, it can be thought of as a perspect; insofar as it has a collective way of playing or thinking, it can be thought of as a persona.) The coach simply does the units' bidding. He shifts units on and off the field in accordance with the existing substitution rules upon being informed by the unit on the field of a substitution-triggering change in game situation. He sometimes changes the substitution rules at the behest of one of the units. He seeks out additional players that a unit deems to be needed. He tries to dismiss certain players who are regarded as unworthy by a certain unit. But he has no independent role of his own.

Some units (personas), by contrast, do have independent agendas that they look to pursue. Some of these units are quite thoroughly occupied by trying to control the matters that concern them specifically, and are relatively unaware of the other units. But some take more notice of matters that involve other units besides themselves, and perhaps have visions of what the character of the entire team (the person) should be, of what goals it should pursue and how it should work toward these. Such a unit U may be aware of the performances of certain other units, and how the nature of those performances affects the team's character as U envisions it. U's concerns may lead it to pursue its own agenda for how the team ought to be run—what sort of additional players need to be acquired, what units ought to be used in what sorts of situations, what players should be removed from certain units or even booted off the team, and so forth. Of course, unit U′ might have an agenda that conflicts somewhat with that of U (even if U′ overlaps in personnel with U). In case of such conflict, one unit may seek to have the other's makeup changed or to reduce the opportunities for the other to see playing time—either by having the substitution rules
altered or by doing what it can, when it is on the field, to prevent the development of those game situations that would dictate a shift to the other unit. In some instances, a two-way struggle may develop. We have clearly arrived at a portrayal of the mind quite removed from Integrationism.

Conclusion

If the positing of what is, functionally, cognitive compartmentalization is on the mark, then understanding and agency apparently do not require a cognitively unified psychology. Nor do they demand perpetual spontaneity, the active imposition of a practical agenda and theoretical outlook upon the environment that is encountered. The relationship is two-way; agent-environment influence is mutual. One's construal of one's situation is indeed shaped by the perspect through which one perceives and interprets the circumstances—hence by the particular projects, ideas, and concepts that one brings to these. Furthermore one does have, at least sometimes, a degree of freedom to consciously override the automatic situation-triggered arousals of one's inclinations to thought or action, and this freedom is occasionally exercised. Such factors constitute the active side of the mind's encounter with the world. On the other hand, the circumstances as perceived thrust upon one a partly receptive position as they seize upon one's ingrained perceptual categories and perspect-activation dispositions. The world in this way imposes itself and compels a certain unchosen mental orientation, like a ringing tuning fork that causes the vibration of strings pre-tuned for resonating at certain frequencies. Yes, there is some freedom to redirect one's cognitive processes as one consciously sees fit—but the scope for conscious choice to interject itself into the subconscious dance of perspect system and world is limited, much as is its scope for overriding the subconscious control of breathing or bodily movement.

Acknowledgments

Many thanks to George Ainslie, Wayne Christensen, Stephen Cowley, Paul Davies, Dan Dennett, Andries Gouws, Peter McInerney, Teed Rockwell, Don Ross, and Mariam Thalos for their helpful comments and questions on an earlier version of this chapter; to Don Ross and Harold Kincaid for numerous valuable suggestions on how to carry forward the further
exploration of the perspect model; and to David Spurrett, Don Ross, and the other presenters at the Mind and World II Conference for creating such a fertile environment for the exchange and development of ideas.

Notes

1. Ross and Nisbett (1991, pp. 139–40, 18–19) suggest that the agent tends to draw attention as 'figure', with environmental 'ground' being given less credit as a force (or field of forces) shaping cognition and behavior.

2. The account offered here, as well as the contrasting perspect model, can both be straightforwardly adapted to an 'extended mind' or 'soft self' conception, by allowing that they mobilize and thereby incorporate resources or tools that reside outside of the brain, even nonbiological ones that are situated outside the body. See Clark (chapter 7 of this volume).

3. Little recent philosophical work aims to spell out the cognizing process in such detail, but one careful paper that does (Israel, Perry, and Tutiya 1993) suggests a model of just this kind. In it, beliefs are complex, structured concrete particulars composed of simpler ones among which are "notions" of individuals. Each notion functions like a file folder for storing all known information about the individual under the identity through which the believer has become acquainted with it. (This permits the existence of multiple, unlinked folders/notions concerned with the very same individual, such as Evening Star and Morning Star for the planet Venus, or Caesar the emperor and Caesar the childhood acquaintance.) The important feature from our point of view here is that when a notion is thought about, it is apparently assumed on this model that the notion brings automatically with it all the beliefs that concern it—the entire contents of its folder. Armstrong (1973, p. 18) hints at a similarly structured process of cognitive activation: "A believes that arsenic is poisonous if, and only if, acquiring the belief that a certain portion of stuff is arsenic brings it about, in normal circumstances at least, that A acquires the further belief that that portion of stuff is poisonous."

4. Price (1969, pp. 259–62), from whom Cherniak may have taken over this example, could make no better sense of it than to suggest that Smith must have been distracted by some emotional upset: he "lost his head" due to "fear, anger, anxiety, astonishment or some other 'disconcerting' emotional state (another is excessive eagerness to 'do something about' the situation at once)" (p. 260).

5. After all, cognitive activation seems clearly to be very selective. Our noticings and thinkings often do not spur thoughts, conscious or otherwise, of all things that are related to their contents, even those that are closely related. So, for example, one can think or talk about tomorrow's lunch without therefore thinking (even subconsciously) about everything one knows connected to the concept of lunch. Similarly,
if one is reasoning about how to drive from Boston to Pittsburgh, one need not be sporadically distracted by ideas about the food in Boston or about driving from Denver to Salt Lake City.

6. Another terminological note: while a perspect, by definition, strictly comprises only cognitive resources stored in and activated from long-term memory—the activated portion of the permanent endowment—I will sometimes, where it causes no confusion, use this term in a broader sense that encompasses also the newly formed, occurrently used items that have not (yet) been stored in long-term memory, such as percepts or just-inferred sentences. Despite being created on the spot, such occurrent states can figure crucially in the production and explanation of judgments and actions, among other things, whether they are subsequently preserved or abandoned; it may be helpfully simplifying in such cases, or where distinguishing between permanent assets and temporary constructs is not worth the trouble, to refer to the cognitive collection that generates the judgment or action as a "perspect."

7. A perspect is, strictly, a set of mental representations, whose usage in cognition gives rise to a certain viewpoint on the world (or, more generally, to a certain 'persona' or way of being), but equivocating by sometimes using "perspect" to refer to the viewpoint itself is a harmless enough practice in all cases where the difference is irrelevant or it is clear from the textual context which sense is being used. My use of "perspective" is similarly equivocal.

8. I take it that working memory affords short-term storage for such items so that they need not be used immediately when formed but have some window within which they can be called upon before fading away.

9. It would thus be reasonable to call this mechanism the perspective manager, too, because it actually manages the activation and deactivation of all the representational resources that constitute a person's perspective.

10. I make the one-pmanager assumption because it simplifies matters significantly, and because there is no strong reason—at least none apparent at this time—to opt for a more complicated story. Consider the alternative: each person is attributable with a system of pmanagers, in the plural (only one of which is activated at a particular time), as is the case with perspects. Perhaps the pmanager is not a separate, unvarying operational unit but is simply a part of the current perspect and, like it, ever subject to replacement at any moment by a successor. If so, then whenever the current pmanager initiates a change of perspect (in response to a perceived change of environment), the change potentially extends, reflexively, to the pmanager itself, in which case certain transition rules are activated and others deactivated. A person thus possesses a set of (overlapping) sets of transition rules (i.e., a set of pmanagers) just as he possesses, according to the model, a set of (overlapping) usable sets of cognitive tools (i.e., a set of perspects). At any moment the next perspect activated is, on this account, a function not only of the environmental conditions perceived, and hence of the current perspect that shapes such perception, but also of the set of
11. This is not to say that no other factors need be taken account of in understanding situation construal. For instance, motivational variables—momentary goals or urges, say, such as hunger for food or for sex—could conceivably cause one and the same set of circumstances to be construed differently by the same person.

12. Or, in some cases—where the antecedents of the pmanager's transition rules contain no match for the current pragmatic-context type—to sufficiently similar or closely analogous kinds of pragmatic context that one has previously encountered and incorporated into the pmanager system.

13. The "in all likelihood" qualifier is necessary because of the possibility that the divergent pragmatic-context construals might, due to differences between the pmanagers, just so happen to cause identical sets of cognitive assets to be activated.

14. We cannot say "successfully conducting" because the system merely 'institutionalizes' prior practice, whether or not it has been effective. Over time, we do reflect upon and correct many of our errors, and hopefully thereby improve the pmanager transition rules; but certainly we are capable of simply perpetuating ingrained patterns of thought and behavior that are distinctly suboptimal.

15. I assume here that new situation-perspect connections are established at least on some occasions when one must actively recruit cognitive resources for unprecedented kinds of mental task (or for already encountered tasks whose past pursuits have not produced lasting connections, perhaps because they seemed unimportant or unlikely to be repeated). Using the assemblage of cognitive elements in that task activity establishes the connections in question.

16. Kahneman and Tversky (2000b, pp. 2–5). The offered explanations for experimentally elicited "framing effects" like these do not, however, all come down to "loss aversion." For example, parallel experiments have evoked divergent responses depending on whether options are described in terms of (1) how much of the work force would be employed, vs. how much would be unemployed, and (2) what proportions of certain immigrant communities have criminal records, vs. what proportions have no criminal records. These results have been ascribed to the "ratio-difference principle," whereby choosers care not only about absolute differences in values provided by different options but also about their ratios. Quattrone and Tversky (2000, pp. 461–64).

17. For example, Loewenstein and Prelec (2000a, pp. 570 and 576–77) re: choosing to delay a free French restaurant dinner, when considered alone vs. as part of a sequence that includes dining at home; Loewenstein and Prelec (2000b, p. 591) re:
choosing stipulated payments vs. higher payments lowered to the stipulated levels by rebates; Shafir, Simonson, and Tversky (2000, pp. 602–603) re: preferring one vacation over another vs. canceling one of the two reservations; Fischhoff (2000, p. 627) re: choosing to take a gamble, when the alternative is a sure loss vs. an insurance premium paid to avoid the risk. See also Kahneman and Tversky (2000b, pp. 12–13) re: choosing to buy a $10 ticket for a play despite having lost a $10 bill vs. not choosing to buy a second $10 ticket after losing the first one; Thaler (2000, pp. 244–45) re: valuing the savings on a purchase as an absolute dollar amount vs. as a proportion of the purchase price; Tversky and Griffin (2000, pp. 722–23) and Shafir, Diamond, and Tversky (2000, pp. 341–43) re: choosing one job option over another, while anticipating that the latter would produce more happiness; Tversky, Sattath, and Slovic (2000) re: discrepancies between choices and valuations; Slovic (2000, p. 502) re: viewing lost cash as $100 loss vs. as decrease in wealth from $5,600 to $5,500; Kahneman, Ritov, and Schkade (2000, p. 659) re: judging appropriate punitive damage awards on cases examined in isolation vs. together.

18. To play this out just a step or two further, suppose that Milt does shift back to a contemptuous-toward-sports perspect. Seeing the same headline again now will no longer have the same effect as before, since it constitutes a different situation, one no longer involving apparently new information about a recent game. This is not to say that Milt's attention cannot again be yanked away, and his perspect altered, by another headline that promises new interesting information about the game or the team.

19. The unsung role in cognition of nonsentential representations is also pertinent to explaining how humans largely dodge the broader Frame Problem. The difficulties that would appear to beset a mind faced with operating over massive lists of sentential representations—including problems connected to the Frame Problem narrowly construed, namely how can the mind update its model of the world in response to unfolding events and know in each instance which aspects of the model need no updating? (see Janlert 1987; Haugeland 1987; Dietrich and Fields 1996; Morgenstern 1996)—might be more tractable if the system were to have imagistic representations at its disposal. Imagistic records of earlier experienced episodes of a certain type of action might include within them the information, say, that pulling a wagon with a desired object upon it could well entail dragging along other things or sliding the object off the wagon, while it has no effect on the color of the walls or the shape of the wagon (see Dennett 1990, p. 147). People have tremendous numbers of experiences from which to learn for the future, especially if these are not limited to those that they have taken the effort to cash into explicit sentences. These could be available as precedents for guiding later activity, if they could be stored in a fund of exemplars (or perhaps generalized prototypes) that can be activated upon encountering similar instances in the future. (Their activation need not be confined to this, though; it might be possible to cross-index them sufficiently that they could be made occurrent and usable at will if the right retrieval cue were, say, consciously thought of.)
The original experiences in such a system need not even be hands-on but can involve observations of others (in person, on film, or otherwise), or even merely detailed imaginings of such experiences, perhaps instigated by verbally described scenes in a book, for instance. The current difficulties about how to implement such mechanisms for utilizing imagistic representations ought not prevent us from placing some trust in judgments that such mechanisms may exist and doing our best to specify them in gradually increasing detail.

20. The perspect approach might, say, suggest an account of natural language production and understanding that does not unrealistically posit language-users who possess only a single, canonical manner of working with words, one in which they are always employing, regardless of the purpose of the moment, the full kit of linguistic techniques and resources that they possess. (Thanks to Stephen Cowley for this suggestion.)

21. There are obviously many details and complications here that remain to be addressed. Consider, for example, that the pmanager system is the product of one's own past experiences—but also, apparently, of one's second-hand (or even merely fictional or imagined) experiences and of one's efforts at instituting changes to the set of perspect activation/transition rules. For example, merely having read about the Heedless Match-Striker, and fearing a similar fate, one might mentally concentrate on remembering to activate the Heedless Match-Striker example (or one's related knowledge regarding gas-ignition explosions) should one ever find oneself needing more illumination for examining a gas tank, or indeed should one ever be seeking illumination (or thinking of striking a match, say for lighting a pipe or for creating a pleasing atmosphere) in any sort of place where a gas tank is liable to be present. Can this sort of effort, of imagining oneself being in several scenarios of these kinds, and recalling in each the dangers of lighting a match, succeed in arming one's pmanager with a new transition rule? This needs to be investigated. But let's suppose that it does so succeed. For the new rule to be effective, though, one would seem to need also to modify all existing more-general transition rules that in practice might preempt the new rule and thus result in perspects lacking the crucial danger-avoidance information. One would need to remove the specific set of situations for which the new rule has been designed from the antecedent triggering conditions of the existing transition rules that are actuated by pragmatic contexts of illumination-providing, pipe-lighting, atmosphere-creating, and so on. This would entail adding a complicating exception, creating the modified rule "If . . . , and no gas tank is likely to be present, then activate . . . ." There is a more efficient and reliable solution, however, that would obviate the need for such rule modifications. The pmanager's internal operations need only be designed so that, upon pragmatic-context construal PC, the ensuing perspect shift is guided by the transition rule whose antecedent most closely and specifically matches PC. This would permit the addition of new specific rules tailored to particular kinds of situations without the necessity of modifying the existing rules that would, in their generality, already cover such situations—and, indeed, without the necessity of figuring out just which existing rules would be thereby implicated, a task of great difficulty and one involving a kind of trans-situational surveying for which a system of situation-focused cognition is not well suited. Just how such a pmanager design would function, what sorts of characteristic errors it would produce (e.g., what if, in a given instance, two or more transition rules match different specific aspects of PC?), and whether this theoretical account accords with reality, are obviously matters that should have a place on the agenda of a perspect theory research program.
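The most-specific-match design just proposed is easy to picture in code. The following Python fragment is a sketch of mine, not a claim about mental implementation: antecedents are represented as sets of pragmatic-context features, and the applicable rule with the largest antecedent wins, so new specific rules preempt old general ones without any rewriting of the latter. (The tie-breaking worry raised at the end of the note corresponds to two applicable antecedents of equal size.)

```python
def most_specific_rule(rules, context: set):
    """Given transition rules as (antecedent_features, successor) pairs,
    return the successor of the rule whose antecedent is satisfied by the
    context and matches it most specifically (largest antecedent)."""
    applicable = [(ante, succ) for ante, succ in rules if ante <= context]
    if not applicable:
        return None  # cf. note 12: fall back to analogous context types
    ante, succ = max(applicable, key=lambda pair: len(pair[0]))
    return succ

rules = [
    ({"seeking-illumination"}, "default-illumination-perspect"),
    ({"seeking-illumination", "gas-tank-likely"}, "danger-aware-perspect"),
]
print(most_specific_rule(rules, {"seeking-illumination"}))
print(most_specific_rule(rules, {"seeking-illumination", "gas-tank-likely"}))
# The general rule still covers ordinary cases; the new, more specific rule
# preempts it where a gas tank is likely, with no need to edit the old rule.
```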
22. When, below, we examine the perspect-driven division of the person into multiple personas, it will become evident that what constitutes error is not an objective fact that is independent of point of view. For example, suppose that I have had experiences as the child of an Auschwitz survivor that have resulted in my commonly using perspects that, whatever else they also do, vigilantly monitor my environment for the least sign of possible anti-Semitism. If at some point my vigilance lapses unintentionally for a time, has my cognitive system fallen into error? There is certainly room for disagreement here; even I myself may hold multiple, incompatible views of the matter. One of my personas or persona-types might think, "Yes, I need to be more disciplined in maintaining my focus, given the menacing world out there"; but another one might think, "No, thank goodness I'm starting to relax occasionally, finally overcoming the legacy of my family's horrific experiences with the Nazis."

23. These attempts frequently fail, in part because the resolving persona does not remain at the cognitive helm for long, in part because we are not well-trained or practiced in the process of self-modification, and in part because we do not understand that process at all well. For further analysis of the difficulties involved in achieving the permanent elimination of items from one's cognitive endowment, see Lengbeyer (2004).

References

Armstrong, D. M. 1973. Belief, Truth, and Knowledge. Cambridge: Cambridge University Press.

Cherniak, C. 1986. Minimal Rationality. Cambridge: MIT Press.

Dennett, D. C. 1990. Cognitive wheels: The frame problem of AI. In M. Boden, ed., The Philosophy of Artificial Intelligence, pp. 147–70. Oxford: Oxford University Press.

Dietrich, E., and C. Fields. 1996. The role of the frame problem in Fodor's modularity thesis: A case study of rationalist cognitive science. In K. M. Ford and Z. W. Pylyshyn, eds., The Robot's Dilemma Revisited: The Frame Problem in Artificial Intelligence, pp. 9–24. Norwood, NJ: Ablex.
Fischhoff, B. 2000. Value elicitation: Is there anything in there? In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 620–41. Cambridge: Cambridge University Press.

Flanagan, O. 1991. Varieties of Moral Personality. Cambridge: Harvard University Press.

Ford, K. M., and Z. W. Pylyshyn, eds. 1996. The Robot's Dilemma Revisited: The Frame Problem in Artificial Intelligence. Norwood, NJ: Ablex.

Glymour, C. 1987. Android epistemology and the frame problem: Comments on Dennett's "Cognitive Wheels." In Z. W. Pylyshyn, ed., The Robot's Dilemma: The Frame Problem in Artificial Intelligence, pp. 65–75. Norwood, NJ: Ablex.

Haugeland, J. 1987. An overview of the frame problem. In Z. W. Pylyshyn, ed., The Robot's Dilemma: The Frame Problem in Artificial Intelligence, pp. 77–93. Norwood, NJ: Ablex.

Israel, D., J. Perry, and S. Tutiya. 1993. Executions, motivations, and accomplishments. Philosophical Review 102: 515–40.

Janlert, L.-E. 1987. Modeling change—The frame problem. In Z. W. Pylyshyn, ed., The Robot's Dilemma: The Frame Problem in Artificial Intelligence, pp. 1–40. Norwood, NJ: Ablex.

Kahneman, D. 2000. New challenges to the rationality assumption. In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 758–74. Cambridge: Cambridge University Press.

Kahneman, D., I. Ritov, and D. Schkade. 2000. Economic preferences or attitude expressions? An analysis of dollar responses to public issues. In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 642–71. Cambridge: Cambridge University Press.

Kahneman, D., and A. Tversky. 2000a. Choices, Values, and Frames. Cambridge: Cambridge University Press.

Kahneman, D., and A. Tversky. 2000b. Choices, values, and frames. In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 1–16. Cambridge: Cambridge University Press.

Lengbeyer, L. A. 2004. Racism and impure hearts. In M. P. Levine and T. Pataki, eds., Racism in Mind, pp. 158–78. Ithaca: Cornell University Press.

Loewenstein, G. F., and D. Prelec. 2000a. Preferences for sequences of outcomes. In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 565–77. Cambridge: Cambridge University Press.

Loewenstein, G. F., and D. Prelec. 2000b. Anomalies in intertemporal choice: Evidence and an interpretation. In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 578–96. Cambridge: Cambridge University Press.
Morgenstern, L. 1996. The problem with solutions to the frame problem. In K. M. Ford and Z. W. Pylyshyn, eds., The Robot's Dilemma Revisited: The Frame Problem in Artificial Intelligence, pp. 99–133. Norwood, NJ: Ablex.

Price, H. H. 1969. Belief. London: George Allen and Unwin.

Pylyshyn, Z. W., ed. 1987. The Robot's Dilemma: The Frame Problem in Artificial Intelligence. Norwood, NJ: Ablex.

Quattrone, G. A., and A. Tversky. 2000. Contrasting rational and psychological analyses of political choice. In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 451–72. Cambridge: Cambridge University Press.

Ross, L., and R. E. Nisbett. 1991. The Person and the Situation: Perspectives of Social Psychology. New York: McGraw-Hill.

Shafir, E., P. Diamond, and A. Tversky. 2000. Money illusion. In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 335–55. Cambridge: Cambridge University Press.

Shafir, E., I. Simonson, and A. Tversky. 2000. Reason-based choice. In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 597–619. Cambridge: Cambridge University Press.

Slovic, P. 2000. The construction of preference. In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 489–502. Cambridge: Cambridge University Press.

Thaler, R. H. 2000. Mental accounting matters. In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 241–68. Cambridge: Cambridge University Press.

Tversky, A., and D. Griffin. 2000. Endowments and contrast in judgments of well-being. In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 709–25. Cambridge: Cambridge University Press.

Tversky, A., S. Sattath, and P. Slovic. 2000. Contingent weighting in judgment and choice. In D. Kahneman and A. Tversky, eds., Choices, Values, and Frames, pp. 503–17. Cambridge: Cambridge University Press.
12 The Evolutionary Origins of Volition

Wayne Christensen
A High-Order Control Basis for Volition

It appears to be a straightforward implication of distributed cognition principles that there is no integrated executive control system (e.g., Brooks 1991; Clark 1997). If distributed cognition is taken as a credible paradigm for cognitive science, this in turn presents a challenge to volition because the concept of volition assumes integrated information processing and action control. For instance, the process of forming a goal should integrate information about the available action options. If the goal is acted upon, these processes should control motor behavior. If there were no executive system, then it would seem that processes of action selection and performance couldn't be functionally integrated in the right way. The apparently centralized decision and action control processes of volition would be an illusion arising from the competitive and cooperative interaction of many relatively simple cognitive systems. Here I will make a case that this conclusion is not well-founded. Prima facie it is not clear that distributed organization can achieve coherent functional activity when there are many complex interacting systems, there is high potential for interference among systems, and there is a need for focus. Resolving conflict and providing focus are key reasons why executive systems have been proposed (Baddeley 1986; Norman and Shallice 1986; Posner and Raichle 1994). This chapter develops an extended theoretical argument based on this idea, according to which selective pressures operating in the evolution of cognition favor high-order control organization with a highest order control system that performs executive functions. The core elements of this architecture are presented in figure 12.1. According to the high-order control model, control competency is distributed across multiple systems, but systems are also organized hierarchically such that one or more high-order systems control multiple low-order
Figure 12.1
High-order control architecture. Key properties: (1) Motor and perceptual systems have many degrees of freedom. (2) The first level of control is provided by CPGs, which determine patterns of motor activation. (3) High-order control systems are differentially specialized for increasingly high-order control problems in ascending order; lower order control systems provide constrained stereotypical control, higher order systems provide increasingly flexible high-dimensional control. (4) All control systems can access perceptual information directly and receive descending influence from higher systems. For simplicity only descending connections are shown, but ascending connections are assumed. (5) Perception-action loops are possible for each level of control. (6) Higher level systems are only engaged as necessary. (7) The highest level system can exert top-down influence either via intermediate control systems or via direct control of level 1 controllers, permitting either coarse or fine-grained influence on motor control in varying circumstances. Three levels of control are shown, but in actual cases there will typically be more than this. CPGs: central pattern generators; H-O: high order. (Cf. Swanson 2003a, fig. 6.7, and Fuster 2004, figs. 1 and 2.)
systems, which are responsible for organizing effector output. Perceptual information flows to both low- and high-order control systems, and low-order controllers can be capable of generating action without higher order input. It is assumed that the architecture is the product of an evolutionary process in which higher order control has been progressively added to low-order controllers, which thus have substantial preexisting control capacity. Low- and high-order controllers are differentially specialized: low-order controllers for low-order control problems, and high-order controllers for high-order problems. High-order controllers provide flexible orchestration of low-order controllers, and increased specification and refinement of low-order competencies. For simple or routine activity high-order controllers may be minimally active. High-order controllers become maximally active in novel situations and for problems requiring complex information processing and action coordination.
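The division of labor just described, and the key properties listed in figure 12.1, can be caricatured in a few lines of code. The sketch below is mine and deliberately toy-like; it is not a claim about neural implementation, only an illustration of the organizational idea: each level sees perception directly, routine input is handled low in the hierarchy with higher levels minimally active, and higher levels engage by modulating the levels below rather than micromanaging them.

```python
class Controller:
    """One level in a high-order control hierarchy (cf. figure 12.1)."""
    def __init__(self, level, lower=None):
        self.level = level
        self.lower = lower    # next controller down; None for the CPG level
        self.bias = 0.0       # descending modulation received from above

    def competent(self, percept):
        # Toy criterion: a level copes with novelty up to its specialization.
        return percept["novelty"] <= self.level

    def act(self, percept):
        # Every level accesses perception directly (property 4).
        if self.lower is None:
            # Level-1, CPG-like: constrained stereotypical control (property 2),
            # inflected by whatever descending bias it has received.
            return {"pattern": "stereotyped", "modulation": self.bias}
        if self.lower.competent(percept):
            # Routine case: higher levels stay minimally active (property 6).
            return self.lower.act(percept)
        # Novel case: exert top-down influence (property 7). Only the coarse
        # route (biasing the next level down) is shown here; direct fine-grained
        # control of level-1 controllers would be the other route.
        self.lower.bias += 0.1
        return self.lower.act(percept)

# Three levels, as in the figure; real cases will typically have more.
cpg = Controller(level=1)
mid = Controller(level=2, lower=cpg)
top = Controller(level=3, lower=mid)
print(top.act({"novelty": 1}))   # routine input is resolved low in the hierarchy
```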
The main theoretical proposal of the chapter is an account of the factors that drive the evolution of this architecture. In explaining its evolution the account also provides an explanation of many of the core functional attributes of the architecture. The account of the evolution of high-order control is supported by two sources of evidence. First, it will be shown that it is consistent with the general structure of the evolution of sensorimotor systems in vertebrates. Second, it is consistent with cognitive neuroscience findings that the prefrontal cortex exhibits hierarchical control organization and performs high-level executive functions. This picture provides a framework for understanding volition. The prefrontal cortex performs integrated action control functions, and some of the properties of this control correspond reasonably well to features associated with volition. No developed theory of volition is provided here, but the account blocks the prima facie challenge presented by distributed cognition and offers a platform for further investigation of volition in terms of high-order action control.

Toward a Biologically Based Comparative Framework for Cognitive Architecture

If cognition is notable for being distributed, an appropriate question to ask is "distributed compared to what?" Discussion of whether cognition is distributed or centralized needs to be placed within a conceptual framework that allows for systematic comparison. In fact the frame of reference has been largely shaped by the advocacy of rival theoretical paradigms within cognitive science: the cognitivist symbolic computation paradigm, the connectionist artificial neural network paradigm, the behavior-based robotics paradigm, the dynamical systems paradigm, and the situated cognition paradigm. Collectively the latter four propose conceptions of cognition that are distributed in comparison with the cognitivist model. However, there are significant problems with this situation. Since the units of comparison are whole paradigms, the frame of reference is very coarse; the claim that cognition is distributed thus means something like "more distributed than a von Neumann architecture" or "more distributed than cognitivists thought it was." This offers little basis for addressing structured questions. For instance, are there degrees of organizational distribution in functional
architecture? Is it possible that differences in degree of organizational distribution are cognitively important? The cognitive processes of pencil-and-paper arithmetic show distributed organization, but are these processes as highly distributed as, say, the swimming of a jellyfish? From a conceptual standpoint we need organizational concepts that allow us to specify in more precise ways the respects in which cognitive architectures can be centralized or distributed. From an empirical standpoint claims about the distribution or otherwise of cognition should be placed in a structured comparative framework. It is not difficult to find examples of cognitive processes that show some form of distributed organization, but it is less clear what the exact significance of this is. Simply collecting examples that support a rather broad hypothesis can give a misleading picture, since it can overlook evidence that points in other directions. Making predictions in the context of structured evidence provides a much tougher and more informative test. In this respect the relationship between the range of actual architectures and cognitive abilities is the appropriate frame of reference for comparison. Within this framework questions such as the following arise: Is vertebrate neural architecture more or less centralized than arthropod neural architecture? Can differences in centralization between these taxa be associated with differences in behavioral abilities? A comparative framework of this kind is the bedrock on which a scientific approach to cognitive architecture should be based. We can specify systematically the kind of evidence that should be addressed by a theory of cognitive architecture in the following way. As the highest level theory it should provide structured answers to the most fundamental questions. These include: What is cognition? What determines significant variations in cognitive ability? Which evidence is most relevant follows from the questions. In particular, the most fundamental questions correspond to the most fundamental patterns in the empirical evidence. These are of two kinds: (1) the fundamental features of sensorimotor architecture and (2) the empirical distribution of cognition. The central type of evidence that a theory of cognitive architecture should explain, then, is large-scale patterns in the evolution of sensorimotor organization and behavior in metazoa. Before more complicated questions about human cognitive architecture can be solved, the bread-and-butter issues should be securely handled. This point is worth insisting on: if it cannot explain this kind of evidence, there is reason to think that the theory doesn't have a very good grip on the nature of cognition. If it does have a good model of cognition, the theory should be able to say in a reasonably precise way what it is that is under selection when cognition evolves.
When measured against these conceptual and empirical criteria, the distributed cognition paradigm fares poorly. It does not provide a clear positive account of what cognition is and offers little purchase on the problem of specifying the nature of variations in cognitive ability. Consequently it doesn't provide a structured basis for explaining the empirical distribution of cognition. To be fair, distributed cognition was not framed with these questions in mind; as noted above, it has rather been focused on drawing a contrast with the cognitivist paradigm. However, it is legitimate to assess the strength of distributed cognition against these criteria when it is being used as a basis for inferences about cognitive architecture intended to guide further research. With respect to the topic of this volume the relevant inferences are to the effect that cognitive architecture doesn't exhibit significant hierarchy and that it doesn't feature a central system. Because distributed cognition is conceptually and empirically much weaker than has been supposed, it does not provide the support for these inferences that has been commonly assumed. Moreover, as I argue below, there is substantial counterevidence. By comparison, the high-order control model presented above provides a better account of the core architectural features of cognition. It associates cognition with high-order control ability and so is able to provide a structured explanation of variations in cognitive ability, and of the selection pressures that impact on cognitive ability. Most important, it is consistent with the kind of evidence specified above, namely empirical findings concerning the core features of sensorimotor architecture and large-scale patterns in the empirical distribution of cognition.

The Evolution of High-Order Control

Almost all evolutionary theories of the origins of cognition propose that it arose in response to problems of complexity (Byrne 2000; Roth and Dicke 2005). It is also common to view behavioral flexibility as the main advantage provided by cognition (Roth and Dicke 2005), although behavioral ecologists and evolutionary psychologists have claimed that cognition is primarily an aggregate of special abilities (Lockard 1971; Cosmides and Tooby 1997). The account I now present also sees the origins of cognition in problems of complexity, and identifies the major functional benefit as flexibility. But whereas most accounts focus on external complexity (environmental or social), the present account proposes a prominent role for internal functional complexity, and identifies the evolution of the fundamental mechanisms of cognition as beginning much earlier than most
Figure 12.2
Architectural transformations in the evolution of high-order control. Early multicellular animals had simple homogeneous organization. Articulation pressure drives differentiation and specialization, which creates integration pressure favoring regulative mechanisms. In vertebrates high-order control becomes highly elaborated, permitting increasingly complex and flexible strategic action control. High intelligence has evolved independently multiple times in diverse taxa. The figure depicts only one kind of evolutionary trend, so is not a scala naturae.
accounts, and in response to much more general complexity problems. Indeed, it proposes that the evolutionary process that has given rise to advanced cognition can be traced back to early metazoan evolution. Further, it proposes that the core trait under selection in the evolution of cognition is high-order control capacity, rather than more specific abilities such as spatial cognition, tool use, or theory of mind. Many specific abilities have played a role in the evolution of cognition, but the deepest level of organization is shaped by problems of control that are common across many abilities. The main architectural transitions are presented in figure 12.2, while table 12.1 lists the major forces and constraints operating in the evolution of high-order control. The way in which these forces drive the architectural transitions is described in the following model:

- Selection for improved action targeting creates a need for the differentiation and separate control of aspects of action production; this articulation pressure gives rise to functional complexification.
Table 12.1
Major forces and constraints in the evolution of high-order control

Articulation pressure: As the conditions for successful action become more demanding there is selection for capacity to differentiate and separately control key aspects of action.

Integration pressure: As precise global state gains functional importance there is selection for mechanisms that promote coordination of collective activity.

Functional complexity advantage: Elaboration and specialization of action production mechanisms → increased power, specificity, diversity, accuracy.

Functional complexity downside: Increases degrees of freedom, making global coordination a harder control problem; regulatory infrastructure is required; increased functional interdependence → increased cost of error.

Behavioral and ecological factors: There are high-value, difficult-to-obtain resources; more complex action capacities open up new adaptive possibilities inaccessible to simpler control systems.

Variance factors: The existing architecture has a structurally available pathway for increase of control capacity.
- Complex functional organization offers a range of powerful adaptive benefits, including specificity, power, accuracy, and diversity of action, and these benefits collectively drive continuing complexification.
- Increases in complexity present high-order coordination problems, magnified by increased functional interdependency that increases the cost of error, resulting in pressure for integration.
- Integration pressure selects for regulative mechanisms with both local and wider effects, and for integration between regulatory systems.
- The hierarchical structuring of regulatory systems provides the most effective solution to the high-order coordination problems presented by high complexity, and will be selectively favored as competitive pressure for increased functional capacity continues.
- Selection regimes favoring high-order control are likely to arise in ecological circumstances where there are high-value, difficult-to-obtain resources.
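A deliberately crude numerical sketch can convey how these pressures interact; the functional forms below are arbitrary choices of mine, not part of the model. Let fitness be the articulation benefit of complexity minus a coordination-failure cost that regulation damps, minus a regulatory infrastructure cost. Then regulation pays for itself only as complexity rises, which is the pattern the model describes.

```python
def fitness(complexity, regulation):
    """Toy fitness: articulation benefits grow with complexity, but so does
    the cost of coordination failures; regulation damps that cost at an
    infrastructure price. All functional forms are illustrative only."""
    benefit = 2.0 * complexity                         # power, specificity, ...
    error_cost = complexity ** 2 / (1.0 + regulation)  # integration pressure
    infrastructure = 0.5 * regulation                  # regulatory overhead
    return benefit - error_cost - infrastructure

for c in (1, 3, 6):
    best_r = max(range(20), key=lambda r: fitness(c, r))
    print(f"complexity={c}: optimal regulation investment={best_r}")
# The optimum shifts from (near) zero regulation at low complexity to
# substantial investment in regulation as complexity increases.
```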
Note that while figure 12.2 depicts integrated strategic agency as the outcome of selection for high-order control, a detailed explanation is beyond the scope of this chapter. In summary, the model of the evolution of high-order control serves as a framework for a more specific model of the evolution of strategic agency, which is focused on the latter stages depicted in figure 12.2. In essence, strategic agency is a form of high-order control and
evolves under the same general kinds of evolutionary pressures. High-order control itself is subject to articulation pressure and becomes increasingly elaborated. Integration pressure favors the formation of integrated management of whole-system relations, including internal state and environmental interaction. The specific model will be developed in Christensen (in preparation) and is as follows:1

- Articulation and integration pressures acting on high-order control will favor strategic action control.
- Strategic action capacity is improved through the articulation of mechanisms of action-outcome control to permit relational action management.
- Relational action management is subserved by relational information processing and valuation, and these capacities increase with integration capacity.
- Integration pressure acting on relational action management capacity drives the evolution of capacities for high-order representation of self-information, abstractive conceptual learning, and executive control, and in combination these constitute the basis of integrated strategic agency.
Defining Functional Complexity

The account is based on the following definitions of order and complexity:

Order: The scale of the correlations in a pattern; low order corresponds to local correlation and high order corresponds to wider correlation.

Complexity: The amount of correlational structure present in a pattern.
The definition of order can be understood in the following way. Low-order organization corresponds to correlations that can be specified in terms of few, typically spatially local, elements of the system. High-order organization corresponds to correlations that must be specified in terms of many, typically spatially widespread, elements of the system. The complexity of a pattern is determined by how much information is required to specify the pattern: simple patterns can be easily described, while complex patterns require a lot of information.2 Although a high-order pattern is specified over many elements, it need not be complex, as is the case for a simple gradient or regional difference. On the other hand, complex patterns typically will be of high order, involving relations among many system elements. Some of the most important factors that contribute to the organizational complexity of a system are (1) the number of system elements, (2) the number of types of system elements, (3) the number and type of interactions between system elements.
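One rough way to make "the amount of information required to specify the pattern" concrete is compressed description length, in the spirit of algorithmic information theory. The snippet below is an illustration of mine, not a measure used in this chapter, and the proxy has a known limitation: it would also score pure noise as maximally complex, whereas the complexity at issue here is correlational structure. For structured patterns, though, it tracks the intended contrast, including the point that a high-order pattern such as a simple gradient need not be complex.

```python
import zlib

def description_length(pattern: bytes) -> int:
    """Crude complexity proxy: bytes needed to specify the pattern after
    compression. Patterns with little structure to describe compress well."""
    return len(zlib.compress(pattern, 9))

n = 10_000
repetitive = b"ab" * (n // 2)                     # low-order and simple
gradient = bytes(i * 256 // n for i in range(n))  # spans the whole 'system'
                                                  # (high order), yet simple

for name, pattern in [("repetitive", repetitive), ("gradient", gradient)]:
    print(f"{name}: {description_length(pattern)} bytes")
# Both compress to a tiny description: the gradient's correlation is global
# (high order), but the amount of correlational structure is small.
```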
High complexity will tend to show correlational structure at multiple scales, and consequently a combination of regional heterogeneity and coherent larger scale patterning. If we restrict our focus to living systems, adaptivity appears as a key constraint requiring global functional coherence. This leads to the following definition:

Functional complexity: Richly structured organization of functional systems and processes featuring regional heterogeneity and global coherence.

Increased Functional Complexity Provides a Competitive Advantage

The central idea of the model outlined above is that functional complexity offers major functional advantages but carries with it a core tension that drives the evolution of increasingly complex hierarchically structured regulation. This tension stems from the fact that functional complexity involves a combination of regional diversity coupled with global coherence. Increases in complexity must somehow balance these two competing factors. But why become more complex at all? It is now accepted wisdom in biology that there is no essential adaptive advantage to complexity; viruses and bacteria are as adaptive as, for instance, large primates. The apparent increase in organismic complexity during the course of evolution can ostensibly be explained by the fact that starting from a simple base, there was nowhere else to go. Simple diffusion through morphospace can produce an apparent trend toward increased complexity. However, although it is true that there is no essential direct link between complexity and adaptiveness, they may nevertheless not be wholly independent either. Being adaptively successful requires performing the right actions3 at the right time. In a competitive environment the conditions for successful action targeting tend to become more demanding, and this can create pressure for the differentiation and separate control of key aspects of action production. For example, in order to pick up a glass without knocking it over, it is helpful to be able to independently control the force and direction of arm movement. The simplest action production systems in biology lack this kind of differentiated control: in the case of a paramecium the direction of swimming motion is determined randomly and the force of the motor action is fixed. Clearly, articulated action control can confer advantages by improving action targeting. I will refer to the relative advantage of more differentiated action production when there is selection for improved action targeting as articulation pressure. In these circumstances organisms with less articulated action production systems are out-competed by their more accurate conspecifics.
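The targeting advantage of articulated control is easy to exhibit numerically. The following toy comparison is mine, not the chapter's: one 'organism' emits a fixed-force action in a random direction, paramecium-style, while another controls direction and force separately, each with a little noise.

```python
import math
import random

random.seed(1)
TARGET = (3.0, 4.0)   # distance 5 from the origin

def undifferentiated_attempt():
    """Paramecium-style control: random direction, fixed force."""
    angle = random.uniform(0, 2 * math.pi)
    force = 5.0                                   # fixed, cannot be modulated
    return (force * math.cos(angle), force * math.sin(angle))

def articulated_attempt():
    """Separately controlled direction and force, each with some noise."""
    angle = math.atan2(TARGET[1], TARGET[0]) + random.gauss(0, 0.05)
    force = 5.0 + random.gauss(0, 0.1)
    return (force * math.cos(angle), force * math.sin(angle))

def hit(pos, radius=0.5):
    return math.dist(pos, TARGET) <= radius

for name, attempt in [("undifferentiated", undifferentiated_attempt),
                      ("articulated", articulated_attempt)]:
    hits = sum(hit(attempt()) for _ in range(10_000))
    print(f"{name}: {hits / 10_000:.1%} target hits")
# The articulated system lands near the target almost every time; the
# undifferentiated one only by chance.
```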
The effect of sustained articulation pressure is complexification. Increased complexity through differentiation and specialization permits more complex production processes through interaction between differentiated components. This allows more resources of greater variety to be brought to bear on action production. The list of adaptive benefits of complexity includes:

Power: The ability to concentrate energy, making an action stronger, faster, more sustained.

Specificity: The ability to produce an action type matched to a particular context.

Diversity: A greater range of action types that can be produced.

Accuracy: Further improvements gained to the targeting of action.

These benefits are recursive inasmuch as they apply to the production mechanisms themselves. The effect of this is to facilitate further articulation as enhanced production capacity allows the manufacture of more specifically structured system components able to participate in more precisely structured functional processes. These are powerful adaptive advantages, and hence there is reason to expect selection to lead to increased functional complexity in many circumstances.

Functional Complexity Produces Integration Pressure, Which Selects for Regulation

Although increasing organizational complexity can confer substantial adaptive benefits, it brings with it associated problems. The advantages of functional complexity stem from integrating heterogeneous components and processes, but diversity and coherence are in tension with one another. As the number of heterogeneous system components increases, and as the complexity of the components themselves increases, the coordination demands for achieving a globally coherent functional state increase. This is compounded by the fact that functional complexity will gain an adaptive advantage by enabling more complex morphologies (in the case of developmental mechanisms) and more complex ways of interacting with the environment (in the case of physiological and behavioral mechanisms), which will tend to expose the organism to a greater range of developmental and environmental conditions. These will require different patterns of activity at different times. The organism must consequently be able to maintain and switch between multiple functional regimes, where each regime is a particular set of coordinated functional states and processes. At the same time the cost of failing to achieve global coherence increases. This is because increases in functional complexity inherently tend to
increase functional interdependency, but increased interdependency increases the likelihood that a functional failure somewhere in the system will propagate to downstream processes, resulting in a cascade of dysfunction. Thus, while the advantages of functional complexity depend on integration, increases in complexity make integration harder to achieve, and the costs of failing to achieve integration increase. I will refer to the escalating need for integration as complexity increases as integration pressure. Achieving the benefits of increased functional complexity will be dependent on mechanisms that promote functional coherence and thereby resolve integration pressure. There are three main ways in which coherence can be produced: through structural constraints, through parallel interactions that produce emergent patterns, and through regulation that directly controls for a pattern. Each has strengths and weaknesses. The most straightforward way to ensure functional coherence is to limit the degrees of freedom of the system elements through structural constraints, for example, structures introduced in development that constrain physiology and behavior. This has the advantage of simplifying functional processing requirements because the functional restrictions don’t need to be dynamically generated as part of ongoing functional processing. However, structural constraints limit diversity, thereby inherently limiting functional complexity.4 More complex action abilities depend on opening up degrees of freedom, and achieving coherence in these circumstances must be, at least in part, via some means other than structural constraints. Functional coherence can also be achieved through parallel interactions that generate emergent outcomes (i.e., that are not directly controlled). In this case the collective organization in question is the product of many local interactions, with no functionally distinct global instruction signal. This has the advantage of imposing minimal infrastructure requirements and can take advantage of spontaneous pattern-formation processes. But, while ‘self-organization’ is celebrated for its capacity to generate global patterns, it has significant limitations as a means of resolving the problems presented by integration pressure. The most important of these are slow action and poor targeting capacity. Precisely because achieving the global state depends on propagating state changes through many local interactions, the time taken to achieve the final state can be long, and increases with the size of the system. Moreover, since there is no regulation of global state, the ability of the system to find the appropriate collective pattern depends on the fidelity of these interactions. Here there is a tension: if the self-organization process is robust against variations in specific conditions,
the process will be reliable, but it will be difficult for the system to generate multiple finely differentiated global states. Alternatively, if the dynamics are sensitive to specific conditions, it will be easy for the system to generate multiple finely differentiated global states but difficult to reliably reach a specific state. Slow action and poor targeting capacity severely limit the capacity of self-organization to achieve the kind of coherence that functional complexity requires. As described above, the adaptive advantages of functional complexity stem in large part from precise, varied interactions that may shift rapidly. Consequently the most effective means for achieving the type of global coherence required for functional complexity is through regulation, including feedback mechanisms and instructive signals operating at both local and larger scales. The key feature that distinguishes regulation from self-organization is the presence of a functionally specialized system that differentially specifies one or a restricted set of states from the range of possible states the regulated system might take, based on the sensing of system conditions and the production of control signals that induce changes in functional state.5 Regulation can mitigate the negative effects of organizational complexity in a variety of ways. Regulatory processes can correct errors, repair damage, and adjust process activity to changing circumstances. Error correction and repair make processes more reliable, reducing the likelihood of errors that manifest as functional failures affecting other processes. The ability to adjust activity to changing circumstances can allow downstream compensation if an upstream functional failure does occur. It also permits dynamic mutual tuning of activity that can help to ensure that systems and processes remain within mutually required ranges of activity. In combination these capacities are able to provide robustness in the context of close functional linkage. This regulation ameliorates the increasing cost of error that accompanies increasing functional complexity. A further important feature of regulation is that it is an enabler for greater levels of functional coordination, since a local regulative ability to modify functional activity in response to signals from other systems facilitates large-scale correlated functional change. Thus selection for functional complexity will tend to give rise to derived selection for regulative ability. However, it should be noted that the adaptivity of an organism will always be the result of all three kinds of mechanisms operating together. That is, it will be the result of some mixture of structural constraints, self-organization, and regulation. I have suggested that each kind of mechanism has different adaptive trade-offs, and such trade-offs will play an important role in defining differentiated adaptive strategies. Thus we can
expect to find a range of adaptive strategies in biology that emphasize some kinds of coherence-inducing mechanisms rather than others. For instance, r-selected organisms6 will tend to rely strongly on structural constraints, in the form of simple, highly constrained morphologies with limited regulatory complexity. It is in species where selection for functional complexity has been most prominent, and integration pressure therefore greatest, that we should expect to find the most elaborated regulative systems.

Strong Integration Pressure Selects for High-Order Regulation

The simplest response to integration pressure is local regulation of a functional process, allowing it to respond to changing circumstances. Extensive selection for regulative ability is, consequently, likely to give rise to a multitude of local regulatory mechanisms, resulting in high levels of distributed control. In addition to its local functional effects this improves the capacity for globally integrated functional behavior: collectively the system can integrate functional activity through many local adjustments. Indeed, by making local systems more sensitive to ambient conditions, local regulatory mechanisms can promote collective self-organization processes and ensure that they result in functional end-states. However, this 'self-organization' mechanism has the limitations identified above: slow action and poor targeting capacity. Moreover, as the global state changes required become more complex, global self-organization becomes increasingly inefficient because it requires local controllers to be "too intelligent": that is, to have sophisticated information-processing capacities that can determine precisely what the global context is and what the appropriate local response is. Not only can this impose prohibitive demands on local information-processing capacities (e.g., of individual cells); from an evolutionary point of view, the heterogeneity of local control systems requires many coordinated adaptations in local controls to achieve specific changes in globally coordinated behavior, making adaptive change increasingly unlikely. Consequently, as integration pressure increases, specialized regulative systems that have wide effect will be selectively favored. By directly modulating large-scale functional activity, such systems can more effectively promote globally coordinated functional activity. A specialized regulative system can function as an integration center, gathering information from wide-ranging sources and subjecting it to processing, to extract highly specific control information. Functionally specialized systems can provide spatially and/or temporally and/or qualitatively precise delivery of control signals across wide regions.
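The speed limitation, at least, is simple to demonstrate. In the toy comparison below (mine, and highly stylized), a ring of units must reach a common global state. Left to local averaging alone, the number of steps grows rapidly with system size; a specialized regulator that senses conditions and broadcasts an instructive signal settles the matter in one sweep, which is the contrast the argument turns on.

```python
import random

def self_organized_steps(n=200, tol=0.01):
    """Global coherence via local interactions only: each unit repeatedly
    averages with its ring neighbors; no unit knows the global state."""
    random.seed(0)
    state = [random.random() for _ in range(n)]
    steps = 0
    while max(state) - min(state) > tol:
        state = [(state[i - 1] + state[i] + state[(i + 1) % n]) / 3
                 for i in range(n)]
        steps += 1
    return steps

def regulated_steps(n=200):
    """A specialized regulator with wide control scope broadcasts one
    instructive signal; every unit is set in a single sweep."""
    return 1

print("self-organization:", self_organized_steps(), "steps")  # thousands,
                                                              # growing with n
print("regulation:       ", regulated_steps(), "step")
```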
In addition a regulatory system can serve as a target of selection for variations that produce globally correlated changes. At this point it is necessary to examine the definition of order given above. According to this definition the scale of a correlation determines order. Applied to control, this gives the following definition:

Control order: The scope of control influence (how much of the system is subject to the control signal).

Thus, a regulatory system with wide effect is a high-order controller. However, high-order control is also used to mean metacontrol, a sense that may be more common and intuitive:

Metacontrol: The control of another control system.
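A homely illustration of mine, not the chapter's: a thermostat's control scope is one room, while a building manager that resets thermostat setpoints is a metacontroller, and because one of its signals changes state across every room it is high-order in both senses at once.

```python
class Thermostat:
    """First-order controller: its control scope is one room's temperature."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def signal(self, temperature):
        return "heat" if temperature < self.setpoint else "idle"

class BuildingManager:
    """High-order in both senses: its control scope spans every room
    (control order), and it acts by controlling other controllers
    (metacontrol) rather than by driving the heaters directly."""
    def __init__(self, thermostats):
        self.thermostats = thermostats

    def set_regime(self, occupied: bool):
        for t in self.thermostats:
            t.setpoint = 21.0 if occupied else 15.0   # one signal, wide effect

rooms = [Thermostat(21.0) for _ in range(3)]
manager = BuildingManager(rooms)
manager.set_regime(occupied=False)       # night setback across the whole system
print([t.signal(18.0) for t in rooms])   # ['idle', 'idle', 'idle']
```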
Since there will typically be widespread local regulation, and since regulatory systems with wide effect will interact with local controllers, regulative systems with wide effect will generally be high-order controllers in both the control scope and metacontrol senses. Some readers may feel inclined to restrict the meaning of high-order control to metacontrol. However, the definition in terms of control scope is more organizationally fundamental and more important. It plays a vital role in characterizing the architecture of high-order control, which cannot be understood in terms of metacontrol alone. For instance, many forms of cortical control are of high order in the sense that they control for wide-ranging aspects of the animal’s internal state and behavior, independently of whether this control is directly mediated by elaborated hierarchical systems. Increases in control ability can occur through a variety of routes, including the modification or refinement of an existing control system, the expansion of a control system, the addition of a new control system, improvement of coordination among control systems, and the hierarchical structuring of control systems. Thus we need to ask whether there are any factors that will tend to promote hierarchical structuring in particular. As just noted, integration pressure will select for regulative systems with wide effect, and regulative systems with wide effect will interact directly or indirectly with local control systems. Hierarchical organization will tend to arise as a natural product of this. However there are in addition specific adaptive benefits provided by hierarchical organization. It is a very efficient way to increase diversity because the same components can produce different outputs as a result of differing modulatory input. Further, structuring regulatory systems hierarchically provides a way of partitioning the control problem that allows increased global coordination while keeping the
overall management problem for any given control system tractable. Specialized higher order control systems reduce the coordination burden on local regulation. Conversely, effective local regulation reduces the problem facing high-order control systems. Differential specialization between low- and high-order controllers allows low-order controllers to optimize for local coordination problems, while high-order controllers specialize for high-order coordination problems. This frees high-order controllers from the problem of "micromanaging" local responses. High-order control can extend and refine existing competencies, allowing incremental, efficient improvement of functional capacity. The net effect is to allow limited capacity systems to cooperate on a complex overall problem with intimate yet partitioned structuring of control burden. Thus hierarchically structured regulation can provide an effective solution to the problems presented by integration pressure, thereby making available the adaptive benefits of greater levels of functional complexity. However, for high-order control to be selectively favored, these conditions must actually obtain within the population. Specifically: (1) The existing architecture has a structurally available pathway for the evolutionary increase of regulative capacity. (2) The control benefits (which can include enhanced specificity, power, accuracy, diversity, and coordination of action) yield overall higher returns (possibly including reduced error costs) within the niche. Given that increased regulation can present substantial infrastructure costs as well as, potentially, negative effects of increased complexity, these conditions will by no means be universal. If the adaptive contingencies of the niche that fall within the range of variation of the population do not offer increased return for increased control ability, there will be no directional selection for control ability. Indeed there may be selection against high-order control if the costs of increased infrastructure, energy demands, and complexity are greater than the returns gained. With respect to behavior and ecology, then, two kinds of circumstances are likely to be especially important for generating selection for high-order control: (1) There are high-value, difficult-to-obtain resources. (2) More complex action capacities open up new adaptive possibilities inaccessible to simpler control systems. In addition we can expect contingency to play a major role in the evolution of high-order control. The structural pathways that are evolutionarily available will be highly constrained by the nature of the existing regulatory systems. Some regulatory systems may result in evolutionary dead ends, while major adaptations may depend on a prior sequence of adaptations to occur. Conversely, however, the advent of a novel regulatory system
is likely to have major evolutionary effects by significantly changing the adaptive possibilities that are available. The evolution of high-order control systems is also likely to exhibit ratcheting effects. Assuming that high-order control will often be selected for when there are high-value benefits that are difficult to obtain, and since each additional regulatory adaptation will present costs, a regulatory adaptation increases the adaptive need to obtain high-value returns, increasing the selective pressure favoring further improvements to high-order control ability. The effect of such feedback can be to sustain extended directional selection, with several important evolutionary consequences. First, it could act to promote episodes of rapid evolution. Second, however, the evolution of a major regulatory system will involve an extended suite of adaptations. Feedback that sustains directional selection can maintain selection over the extended periods of evolutionary time it takes for such adaptations to occur. From these factors it should be clear that the account above does not presume a scala naturae, orthogenesis, or teleological evolution. Given the complexity of the trade-offs involved, high-order control is only one of a variety of adaptive strategies for behavior control. As noted, in many cases increased high-order control will not be advantageous. The selection pressures can nevertheless be seen to play a role across the full stretch of metazoan history. This is not a unique case. Oriented evolution, involving consistent directional changes along one or a few dimensions, such as increase in size or speed, is a ubiquitous biological phenomenon (Simpson 1949). Intelligence is just an example of this pattern, with human intelligence constituting an extreme elaboration of a widespread adaptive strategy. Indeed, precisely because the adaptive pressures are widespread, the account predicts that there should be independent evolutionary pathways in which selection for high-order control has played a prominent role. This conforms to the evidence, which indicates that evolutionary increases in intelligence have occurred independently in a variety of different taxa, including birds and mammals, and cetaceans and primates (Roth and Dicke 2005, p. 250).

Supportive Evidence from Neural Evolution

Having presented the account of the evolution of high-order control abstractly, I now outline evidence concerning the evolution of nervous systems that supports the account. The effects of articulation and integration pressures are discernible in the earliest stages of neural evolution, in the
differentiation of specialized cell types, the formation of specialized control centers, and the trend toward centralized neural organization. Extended hierarchical structuring is apparent in the vertebrate autonomic and somatic motor systems.
Centralization Is a Prominent Feature of Early Neural Evolution
The first multicellular animals were sponge-like creatures with little in the way of differentiated tissue structure, and behavioral abilities limited to the control of water flow through pores by adjusting the contraction of muscle cells.7 These cells, called myocytes, perform both the sensory and the motor functions of the organism. Nervous systems first appeared in Cnidaria, carnivorous radially or biradially symmetric animals with a saclike body and a single body opening (the mouth) surrounded by tentacles. The evolution of neurons from ectoderm constituted a major advance in regulatory capacity by permitting the specialization of sensory function (through sensory neurons), and by permitting the rapid and precise transmission of signals to muscle cells (through motor neurons). Sensory discrimination could become more sensitive, precise, and functionally differentiated (e.g., into different modalities). The addition of an intermediate layer of specialized communication between sensory function and motor output allows point-to-point, longer range information transmission, and creates the potential for divergence and convergence of information flow. Divergent signal paths allow a sensory neuron or sensory area to broadcast to many distant parts of the animal, permitting a rapid, coordinated whole-organism response to a sensory stimulus. Convergent signal pathways allow a given muscle cell to be sensitive to many different sensory neurons, and to the activity of other muscle cells. Thus, early neural evolution made possible much greater behavioral complexity and integration. Cnidarian nervous systems are diffusely organized nerve nets. Flatworms represent the next grade of complexity of neural organization. They have bilateral symmetry, a dorsal-ventral (top-bottom) axis and a rostral-caudal (front-rear) axis. They move by swimming or crawling, and have a concentration of sensory neurons at the head end. The cell bodies of neurons are clustered in ganglia connected by bundles of axons called nerve cords. The clustering of neurons, in contrast with the diffuse organization of Cnidaria, is a phylogenetic trend referred to as centralization. The concentration of ganglia at the head end of flatworms is the simplest form of brain, and is referred to as cephalization. Centralization and cephalization are more pronounced in annelid worms and arthropods, and are highly elaborated in vertebrates.
These examples illustrate the principles governing the evolution of high-order control described above in the following way: Conceptually, the most parallel form of organization possible is a homogeneous matrix. Sponges represent the closest biological approximation to this type of organization, and within most of the major metazoan taxa there are trends toward more complex functional forms.8 The separation of sensory and effector functions into separate cell types, seen in the evolution of neurons, is plausibly viewed as a response to articulation pressure. This articulated control arrangement allows the activity of effector cells to be regulated in much more complex ways. In particular, it allows the activity of effector cells to be rapidly coordinated so as to achieve specific global goals. This regulative capacity fundamentally expands the functional capacities possible with a multicellular body, and its emergence is thus a keystone event in animal evolution. As such it exemplifies the kind of evolutionary impact that regulatory innovations can have. The predacious lifestyle of Cnidaria is consistent with the hypothesis that selection for high-order control is based on the capacity to obtain high-value, difficult-to-obtain resources. Prey capture will typically deliver high-value returns, but prey will also tend to be sporadically distributed and capable of defensive measures. The centralization of neurons into ganglia, and their rostral concentration in cephalization, observed in flatworms, annelids, arthropods, and vertebrates, concentrates control and provides the basis for the formation of specialized high-order regulatory systems. Centralization and cephalization are consistent with the hypothesis that the increasingly complex functional forms found in metazoan evolution have generated associated integration pressure.
The Vertebrate Autonomic System Is a High-Order Control System
The autonomic system is generally thought of as an automatic, low-order, noncognitive system.9 To casual observation the continuous unconscious bodily adjustments of the autonomic system might seem like a marvelous example of distributed organization. In some respects they are. But a proper appreciation of the functional organization of the autonomic system depends on the right comparative framework. When assessed in terms of anatomy and function, rather than in comparison with conscious control, the autonomic system is a centrally organized high-order control system. Invertebrates lack a specialized regulative system of comparable complexity and have much more limited capacity for coordinated bodywide physiological changes.10 Thus the autonomic system can be seen as a regulatory
adaptation to the integration pressure posed by the complexity of vertebrate bodies and lifestyles. The following description indicates some of the reactions likely to occur in response to hearing a sudden loud noise behind you in a dark alley: In literally the time of a heartbeat or two, your physiology moves into high gear. Your heart races; your blood pressure rises. Blood vessels in muscles dilate, increasing the flow of oxygen and energy. At the same time, blood vessels in the gastrointestinal tract and skin constrict, reducing blood flow through these organs and making more blood available to be shunted to skeletal muscle. Pupils dilate, improving vision. Digestion in the gastrointestinal tract is inhibited; release of glucose from the liver is facilitated. You begin to sweat, a response serving several functions, including reducing friction between limbs and trunk, improving traction, and perhaps promoting additional dissipation of heat so muscles can work efficiently if needed for defense or running. Multiple other smooth and cardiac muscle adjustments occur automatically to increase your readiness to fight or to flee, and almost all of them are effected by the sympathetic division of the ANS [autonomic system]. (Powley 2003, pp. 913–14)
In broad outline the integrative action of the autonomic system is well known, but remarkable nevertheless. In light of the model of the evolution of high-order control outlined above several points stand out. The level of coordination of autonomic action provides a guide to just how deeply integrated vertebrate physiology is. This is evidence for strong integration pressure. Modularity has been greatly emphasized in recent biological and psychological thinking. The organ systems certainly perform modularized functions such as digestion and fluid transport, but they also interact continuously in the production of behavior. Consequently the state of high action readiness exemplified in the fight or flight response requires coordinated changes of activity across almost all of the physiological systems of the body. This is a good illustration of functional complexity, in which highly diverse systems are coordinated to achieve globally coherent patterns of activity. The high level of global coherence enhances the effectiveness of the adaptive response. The autonomic system also provides a clear example of the role of specialized regulatory systems in achieving high levels of global coherence. Complex, systemwide changes in activity must occur on very rapid timescales in response to specific conditions. The autonomic system provides the specialized information processing and signal delivery required to achieve this. The autonomic system also illustrates the role of hierarchical organization in enabling tractable global coordination. The most localized control
provided by the autonomic system is mediated by what are known as axon reflexes: stimulation of visceral afferent neurons results in the central propagation of an action potential, but it can also produce local release of neurotransmitter directly from the site of stimulation and local collaterals. These axon reflexes produce a range of inflammatory and vascular responses. The next most localized form of control is mediated by reflex arcs passing through the spinal cord. Visceral afferents project to laminae I and V of the spinal dorsal horn, sending sensory information about visceral volume, pressure, contents, or nociceptive stimuli to spinal circuits that interpret the information and generate patterned responses via efferent connections back to the viscera, for example, increasing heart rate and vasoconstriction. The activity of reflex arcs is integrated and coordinated by a supraspinal system known as "the central autonomic network," consisting of a hierarchically organized network of sites in the mesencephalon, hypothalamus, amygdala, bed nucleus of the stria terminalis, septal region, hippocampus, cingulate cortex, orbital frontal cortex, and insular and rhinal cortices. Many of these centers are part of the limbic system. The integrative functions performed by this system can be divided into three types (Powley 2003, p. 928): (1) coordination and sequencing of local reflexes, such as the autonomic responses of the mouth, stomach, intestines, and pancreas during and after a meal; (2) integration between autonomic and somatic motor activity, for example, adjusting blood flow through the body in response to postural changes to preserve blood supply to the brain; (3) organizing autonomic activity in anticipation of key events, such as major homeostatic imbalances. These are good examples of high-order control functions.
The Vertebrate Somatic Motor System Exhibits Extensive Hierarchical Structuring
If we assume functional distinctions among sensors, effectors, and interneurons, then conceptually the most parallel form of organization approximates that of the Cnidarian nerve net: even distribution of sensory and effector cells, and diffuse spread of information by interneurons.11 Vertebrate sensorimotor organization is clearly nothing like this. Direct muscle-to-muscle neural connections, common in arthropods, are absent in vertebrates. Control of muscle activity is entirely located within the spinal cord and higher sites of the central nervous system. While we have an intuitive sense when performing skilled activity that our body "knows what to do"—the phenomenology that informs the lay concept of muscle memory—the information guiding skilled action is not in fact stored in the
muscles but in the brain stem, cerebellum, basal ganglia, and cortex. In other words, skill memory is stored in high-order control systems rather than distributed through the muscle system. Vertebrate motor control shows a similar, though more elaborate, hierarchical structuring to that of the autonomic system. The nature of this hierarchy was first demonstrated in the early twentieth century by experiments involving sectioning the central nervous systems of cats (Brown 1911; Sherrington 1947). When the brain stem and spinal cord are isolated from the forebrain, a cat is still able to breathe, swallow, stand, walk, and run. However, the movements are produced in a highly stereotyped, "robotic" fashion. The animal is not goal-directed, nor does it respond to the environment. Thus the brain stem and spinal cord are responsible for producing basic movement coordination but not higher level environmental sensitivity or goal-directedness. A cat with intact basal ganglia and hypothalamus, but disconnected cortex, will move around spontaneously and avoid obstacles. It will eat and drink and display emotions such as rage. This level of motor control is thus responsible for the core elements of motivated behavior. The hypothalamus plays an especially prominent role in integrating the activity of the autonomic and somatic motor systems. The cortex is required for the most complex forms of action control. The somatotopic organization of the primary motor cortex is well known. The proportionately much greater area devoted to the hands, face, and tongue, compared with other body areas, is an anatomical correlate of the fact that the cortex plays an important role in the control of complex skilled movements. Body areas requiring fine control are represented in more detail. The cortex is connected to the spinal cord via descending pathways of several kinds. Corticospinal pathways project directly to the spinal cord, while rubrospinal and reticulospinal pathways project to motor centers in the brain stem that in turn project to the spinal cord. The cortico- and rubrospinal connections transmit commands for skilled movements and corrections of motor patterns generated by the spinal cord. The reticulospinal connections activate spinal motor programs for stereotypic movements such as stepping, and are involved in the control of posture. This distinction is reflected in the two kinds of descending pathways depicted in figure 12.1. Functionally it allows high-order control influence either to be mediated through existing motor programs (e.g., habits and reflexes) or to act more directly on the patterning of muscle action. This dual routing permits fine-grained, contextually sensitive control when necessary, and is a major source of flexibility.
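The logic of acting through existing motor programs rather than micromanaging them can be caricatured in a few lines of code. The sketch below is purely illustrative, not a biological model: the class names and parameter values are invented, and the low-level rhythm generator it posits anticipates the spinal pattern-generating circuits discussed in the next paragraph.

import math

# Illustrative sketch only: a low-level generator produces a stereotyped
# rhythm, while a higher-order controller merely retunes its parameters
# to suit current goals, rather than recomputing the pattern itself.

class PatternGenerator:
    """Stands in for a low-level motor program (e.g., a spinal circuit)."""
    def __init__(self, frequency=1.0, amplitude=1.0):
        self.frequency = frequency
        self.amplitude = amplitude

    def output(self, t):
        # The detailed cyclic patterning is handled entirely at this level.
        return self.amplitude * math.sin(2 * math.pi * self.frequency * t)

class HighOrderController:
    """Stands in for descending control: it sets goals such as speed
    without specifying the moment-to-moment muscle commands."""
    def command(self, generator, desired_speed):
        generator.frequency = 0.5 + 0.5 * desired_speed   # retune, not micromanage
        generator.amplitude = 1.0 + 0.2 * desired_speed

legs = PatternGenerator()
controller = HighOrderController()
controller.command(legs, desired_speed=2.0)   # e.g., "walk faster"
print([round(legs.output(t / 10.0), 2) for t in range(5)])

Because the generator carries the pattern, the controller's command amounts to just two parameter settings; this is the computational saving that the layering described in the next paragraph provides.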
Thus the hierarchical structuring of the motor system can be understood in terms of the layering of control systems responsible for successively more complex forms of control. The relationship between the control of basic walking movement in the spinal cord and contextual and posture control in the brain stem exemplifies this layering. The core elements of walking motion are produced by a repeated pattern of muscle firing, and a simple circuit in the spinal cord (a central pattern generator, or CPG) can produce this basic cycle of activity. However, walking must also be adapted to context and goals. Refinement of movement and posture control is provided by higher centers that integrate a wider range of sensory information and perform more complex information processing. This higher level control acts by modulating spinal circuits, adjusting the basic walking pattern to the circumstances. But because spinal CPGs contribute a substantial component of the movement pattern, higher control is relieved of this computational burden. Control responsibility is thus efficiently distributed. Skilled and goal-directed actions present the most challenging control problems. In the case of skilled action, performance may need to be precisely adapted to the context, and require extended sequences of motor activity. While much of the sequencing may be routinizable, success may still require continuous monitoring and adjustment of performance because the context for the skill may be complex and variable. Acquiring skilled action is especially challenging because it requires assembling component motor actions into larger structures that may be partly or wholly novel. Successfully learning a skill will be strongly dependent on the capacity to monitor the relations among multiple actions, context, and goals. Goal-directed action more generally presents formidable control problems because it requires the ability to opportunistically identify action possibilities, which may shift dramatically as context varies, to form instrumental goals that effectively satisfy the animal's requirements given the contextual opportunities, and to flexibly coordinate actions in relation to those goals. Thus effective goal-directed action may depend on complex valuation processes, high levels of bodily awareness, rich long-term memory for context, and intensive processing of episodic information. Complex, skilled motor actions are associated with volition in motor control research. Such actions are referred to as voluntary because they are performed 'at will'. Nevertheless, the concept is acknowledged to be ambiguous, inasmuch as almost all types of motor behavior, including basic reflexes, can be influenced by will (Grillner 2003, p. 762). For example, if one touches an object that unexpectedly turns out to be hot, a withdrawal reflex will be triggered. However, if the object is known to be hot, the withdrawal reflex
can be overridden if grasping it is important. We will return to these issues below. For now it is important to note that the concept of voluntary action plays an important role in motor research and is associated with anatomically distinct systems involved in the most complex forms of action control. This provides a prima facie case that volition has a grounding in motor control. Moreover, it suggests that volition is a product of the evolutionary pressures for high-order control that I have been characterizing.
Cognitive Control and the Central System
The evidence outlined so far indicates that the main features of the human sensorimotor system conform to the high-order control model depicted in figure 12.1. This is not very surprising, since the model was in part abstracted from this evidence. The more substantial proposal is the theoretical account of the properties and evolution of the architecture presented earlier in this chapter. Based on this account we can make a further prediction. High-level cognition should be an extension of the general pattern exemplified in basic sensorimotor architecture, being the product of the same articulation and integration pressures. The highest level of control should integrate the greatest amount of information, have the widest control scope, and be responsible for the most complex action control problems. In other words, the high-order control model predicts that there should be a central system.12 Support for this prediction is provided by cognitive neuroscience research on cognitive control. Cognitive control research is concerned with the mechanisms of flexible, goal-directed behavior, and distinguishes these from "automatic" forms of action production, including reflexes and habits. Cognitive control is closely associated with the prefrontal cortex, which has a pattern of broad connections with the rest of the brain that allows it to synthesize information from many sources and exert wide control influence (Miller 2000). A variety of lines of research have shown that it has the properties of a high-order control system. It is in fact the highest order system in the brain.
The Architecture of Top-down Control
Koechlin et al. (2003) experimentally demonstrated a model of the architecture of cognitive control consisting of a nested three-level control hierarchy (figure 12.3). In the first level of control (sensory control) motor actions are selected in response to perceptual stimuli.
Figure 12.3 Model of the architecture of cognitive control. LPFC: lateral prefrontal cortex. (Reprinted from Koechlin et al. 2003, fig. 1. Reproduced with permission.)
This basic level of control is associated with the lateral premotor regions. In the second level of control (contextual control) premotor activation is regulated by information about the context. Anatomically it is associated with the caudal lateral PFC. The highest level of control (episodic control) regulates the contextual control system according to the temporal episode and goals. It is subserved by the rostral lateral PFC. The architecture has a cascade structure, with episodic control influencing sensory control via the contextual control level. The model was tested by presenting subjects with tasks in which stimulus, context, and episodic factors were systematically manipulated, making it possible to determine which areas of brain activation correlate with each type of factor. fMRI results showed that activation in lateral premotor, caudal lateral PFC, and rostral lateral PFC correlated with stimulus, context, and episode factors, respectively, as predicted by the model. A notable feature of this hierarchy is that higher levels perform more complex forms of integration.13 Sensory control requires the least information, activating a stored motor program in response to an innate or learned stimulus. Contextual control is more demanding: information provided by the nature of the stimulus is not sufficient to determine what response is appropriate. It is necessary to draw on memory of the context in which the stimulus occurs. Episodic control is more complex again, requiring information about both the context and the current circumstances in order to determine the appropriate response. Here we see that the same pattern found in subcortical motor organization is repeated in the cortex, consistent with
the high-order control model. This is counterevidence to the distributed cognition prediction that brain organization should feature multiple control centers with no significant hierarchical organization.14
Attentional Control and Fluid Intelligence
One of the more significant achievements of cognitive control research has been to begin to resolve the neural and cognitive mechanisms underlying fluid intelligence.15 As recently as 1997 Deary and Caryl were able to write in a review that there was "a dearth of explanatory accounts to link cognitive performance differences with variance in brain mechanisms" (1997, p. 365). Hypotheses under consideration included faster neural conduction, reliability of neural conduction, "neural adaptability," and greater metabolic efficiency. Duncan et al. (2000) substantially narrowed the field of possibilities by showing that tasks that impose high demands on fluid intelligence produce focal activation in the lateral frontal cortices. Given the general association between the lateral PFC and cognitive control, this implicated cognitive control in intelligence. Providing a more specific cognitive association, Gray et al. (2003) used an individual differences approach to show a relationship between fluid intelligence and attentional control. This evidence is in many respects remarkable. That a phenomenon as complex as intelligence should be closely associated with a cognitive function as apparently straightforward as attentional control is surprising. Yet attentional control is necessary for a variety of functions that are required for complex cognitive processes. For example, in order to learn difficult concepts and form complex plans, it is necessary to notice relationships among disparate locations, entities, and events across extended periods of time. This requires close attention to specific features based on background knowledge and expectations. Once again, this evidence supports the high-order control model. The high-order control model explains major differences in intelligence among species in terms of differences in the elaboration of high-order control. Evidence that I am unable to discuss at length in this chapter indicates that the anterior cingulate cortex has been under selection in the evolution of great apes (Allman et al. 2001), and is associated with generalized high-order control functions, in particular, conflict monitoring (Botvinick et al. 2004). This supports the hypothesis that the evolution of intelligence in great apes has been associated with selection for high-order control functions. The findings of Gray et al. (2003) indicate that high-order control
capacity is also able to account for individual differences in intelligence in humans. Thus the high-order control model provides an empirically supported explanation of variations in cognitive ability.
Abstractive Task Learning
Many studies have shown that the PFC functions to assemble task-relevant information. Miller (2000) suggested that the PFC extracts the features of experience relevant for achieving goals, detects the relationships important for performing the task (the task contingencies), and forms abstract rules, such as "do not pick up the telephone in someone else's house." The "adaptive coding" model of Duncan (2001) provides some information concerning the neural basis of task-oriented processing in the PFC. According to the adaptive coding model the response properties of single neurons are highly adaptable throughout much of the PFC, and become tuned to information relevant to the task. Duncan notes that the model has connections to the idea that the PFC acts as a kind of "global workspace" (Baars 1989; Dehaene et al. 1998).
In Sum
Taken together, these lines of research support the view that the PFC is a highest order executive control system with the kinds of properties that would be expected on the basis of the model of the evolution of high-order control described previously in this chapter. To reiterate, the highest level of control should integrate the greatest amount of information, have the widest control scope, and be responsible for the most complex action control problems. This conclusion is at odds with current distributed cognition principles. In light of the theoretical considerations outlined earlier, and the empirical evidence presented, there is a basis for suggesting that these principles should be reexamined. A reasonable conclusion to draw from the evidence is that the major claims of no significant hierarchical structure and no central system are falsified. There is certainly no central system of the kind envisioned by classical cognitive science, but there is a central system nevertheless. Yet in other respects the account developed here can be seen as supporting and extending the basic distributed cognition approach.
Volition
This chapter has explored terrain not often considered in discussions of volition. But volition concerns action control, and hence a proper under-
standing of it depends on placing it within the context of the evolution of action control mechanisms. The account presented above provides an evolutionary and cognitive framework within which volition can be understood as a natural phenomenon. We can divide this project into two aspects: understanding the nature and status of the folk concept, and developing theoretical accounts of volition. Developing an adequate understanding of the structure of the folk concept of volition will depend on empirical concept research, but in advance of such research we can identify broad characteristics of the concept that such research will plausibly support. Features commonly associated with volition include intention, desire, choice, evaluation, command, resolution, effort, strategic flexibility, conscious awareness, and responsibility. Furthermore, these features are schematically organized, such that paradigm cases of volitional action control exhibit the properties of volition in an integrated fashion. For instance, the agent may form a goal that is supposed to bring about some condition that is valuable to the agent. The agent will evaluate the goal for efficacy and monitor the process of performing the action. In difficult conditions the agent may show resolve by increasing focus and strengthening goal-oriented action control processes. The agent might show strategic flexibility by switching to an alternate approach to achieving the goal. And so on. As well as schematic structure, the concept is likely to have prototype structure: some examples will be better than others, and there will be marginal cases that are difficult to decide. The important point here is that, to a first approximation, these properties correspond reasonably well to properties of action control processes mediated by the prefrontal cortex. The PFC assembles task-relevant information and provides high-level goal- and context-based regulation of motor output. This suggests that scientific research is likely to result in a conservative naturalization of volition.16 Attention and memory provide relevant comparisons here: these are folk concepts that have gained a secure scientific footing as major scientific research fields. This research has revealed a great deal of structure not present in the folk concepts, but it has not shown them to be basically wrong: attention really does involve selective focus, and memory really does involve retrieving information about the past. Plausibly, volition really does involve forming goals and controlling action in relation to intentions. With respect to developing a theoretical understanding of volition, the high-order control account can help cast light on some of its more puzzling aspects. One of these is the so-called freedom of the will. This issue is
usually posed in terms of freedom with respect to the fundamental laws of nature. But if we set that notion aside, we can identify a more adaptively relevant form of freedom: flexibility of action control. Recall that the concept of voluntary motor action used in motor control research is defined as action that can be performed ‘at will’. The high-order control architecture is organized to open up action control flexibility, and this is exemplified in the somatic motor hierarchy. Low levels provide stereotypic forms of motor activation such as basic walking movement, while higher levels adjust the action to the circumstances (e.g., brain stem postural control) and set goals such as direction and speed (determined by the cortex). The model of Koechlin et al. (2003) shows that this hierarchical organization is extended in the frontal cortex, with increasingly complex, flexible forms of control being performed by successively anterior lateral regions moving forward from the primary motor cortex. The rostral lateral PFC performs episodic control, adjusting goal-directed action in relation to local contingencies. Research such as that of Duncan (2001) shows that this control is based on rich representations of task-related information. From an adaptive perspective the highest level of action control should be extremely flexible because task contingencies can be very complex and can change dramatically. The highest level system should be able to form and reform goals for action based on shifts in any of a large range of agent-based and environment-based factors. The formation of volitions, then, should be based on the agent’s goals, values, and the environmental context but shouldn’t be consistently determined by any particular factor or type of factor. Thus action performed ‘at will’ is determined episodically in relation to a constellation of factors, and so can exhibit high levels of spontaneity and variability. However, the high-order control account presented here does not by itself furnish a theory of volition. Such a theory must address many further issues, including the subjective perception of volition (Wegner and Sparrow, chapter 2 in this volume), social and psychological processes of self formation (Ross, chapter 10 in this volume), and the ability to pursue longer term objectives that require resisting short-term temptations (Ainslie, chapter 9 in this volume). Given the diversity of perspectives on volition presented here, there is clearly a long way to go before a theoretical synthesis is likely to emerge. The high-order control proposal doesn’t grapple with some of the more complex phenomena associated with volition, but I suggest that it is an important element in the mix and can provide a basis for integrating a range of agency-related phenomena within an adaptive, control-based framework.
Acknowledgments
Thanks to David Spurrett for a number of very helpful editorial suggestions, and collectively to the organizers of the second Mind and World Conference for an absorbing and enjoyable event.
Notes
1. Definitions used in the model: Strategic action control: orchestration of actions in relation to goals. Action-outcome control: control of action production with respect to outcomes. Relational action management: action control based on the relational properties of actions, entities, and goals.
2. I am describing here in intuitive terms the definition of complexity provided by Bennett (1985). See Collier and Hooker (1999) for a more general discussion.
3. I am using the term "action" here broadly to mean the product of a functional process. This encompasses all of the internal functional processes of the system, such as protein manufacture.
4. Training wheels on a child's bike are an example of a structural constraint that allows functional performance by restricting the range of available states. They also illustrate some of the limitations that can be associated with this kind of solution: once the child is able to dynamically maintain balance for herself, the training wheels are a hindrance. The regulative ability to maintain balance provides a much more powerful, flexible solution to the problem of staying upright.
5. It should be noted that while regulation is distinguished from self-organization, it can both contribute to self-organization and take advantage of it, as is explained below.
6. Organisms selected for a high rate of reproduction.
7. The discussion in this section follows Swanson (2003a, b).
8. Whether this represents widespread selection for the adaptive benefits of functional complexity, or whether it simply represents diffusion through morphospace, can only be resolved through detailed phylogenetic analysis. The present account predicts that the former constitutes a substantial component of the phylogenetic pattern.
9. This section is based on Swanson (2003a), Card et al. (2003), and Powley (2003).
10. The stomatogastric nervous system does indeed perform analogous functions in insects; see Hartenstein (1997).
11. This section is based on Gazzaniga et al. (1998), Swanson (2003a), Grillner (2003), Floeter (2003), and Scheiber and Baker (2003).
12. Although it is not directly implied by the account of the evolution of high-order control that has been developed in this chapter, the model of the evolution of strategic agency briefly described at the start of the third section serves as the basis for an associated hypothesis that subjective awareness is the product of mechanisms for assembling and processing high-order control information. Integrated self-representation enhances strategic action capacity. We should then expect consciousness to be associated with the highest order control processes. For related proposals see Baars's 'global workspace' model of consciousness (Baars 1989), Metzinger's 'self model' theory of consciousness (Metzinger 2003), and Legrand's 'action monitoring' approach (Legrand 2006).
13. See Fuster (1997, 2004) and Fuster et al. (2000).
14. For example, Dennett (2001, p. 222) claims that "there is no single organizational summit to the brain." Here he is cautioning Dehaene and Naccache against using the term "top-down" in describing their version of the global workspace model, claiming that reference to "top-down" influence in attention can at best refer to the aggregate effects of sideways influences. The results of Koechlin et al. (2003) demonstrate that, to the contrary, there is a very literal anatomical basis to top-down control.
15. Defined in terms of reasoning and flexible problem solving ability (Cattell 1971).
16. It is important to differentiate the folk concept from the complex, theory-laden associations that volition has in philosophy. Philosophical theorizing is likely to generate much stronger and more structured commitments (e.g., to metaphysical claims) than folk concepts. Given that philosophical theorizing about volition has occurred prior to any significant scientific information about the subject, it is a sensible naturalist stance to treat it with suspicion pending a systematic reevaluation.
References
Allman, J. M., A. Hakeem, J. M. Erwin, E. Nimchinsky, and P. Hof. 2001. The anterior cingulate cortex: The evolution of an interface between emotion and cognition. Annals of the New York Academy of Sciences 935: 107–17.
Baars, B. J. 1989. A Cognitive Theory of Consciousness. New York: Cambridge University Press.
Baddeley, A. D. 1986. Working Memory. Oxford: Oxford University Press.
Bennett, C. H. 1985. Dissipation, information, computational complexity and the definition of organization. In D. Pines, ed., Emerging Syntheses in Science: Proceedings of the Founding Workshops of the Santa Fe Institute, pp. 215–34. Redwood City, CA: Addison-Wesley.
Botvinick, M. M., J. D. Cohen, and C. S. Carter. 2004. Conflict monitoring and anterior cingulate cortex: An update. Trends in Cognitive Sciences 8: 539–46.
Brooks, R. A. 1991. Intelligence without reason. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pp. 569–95. San Mateo, CA: Morgan Kaufmann.
Brown, T. S. 1911. The intrinsic factors in the act of progression in the mammal. Proceedings of the Royal Society of London, Series B, 84: 308–19.
Byrne, R. W. 2000. Evolution of primate cognition. Cognitive Science 24: 543–70.
Cajal, S. R. 1894. Les nouvelles idées sur la structure du système nerveux chez l'homme et chez les vertébrés. Paris: Reinwald.
Card, J. P., L. W. Swanson, and R. Y. Moore. 2003. The hypothalamus: An overview of regulatory systems. In L. R. Squire, F. E. Bloom, S. K. McConnell, J. L. Roberts, N. C. Spitzer, and M. J. Zigmond, eds., Fundamental Neuroscience, 2nd ed., pp. 897–908. New York: Elsevier Academic Press.
Cattell, R. B. 1971. Abilities: Their Structure, Growth and Action. Boston: Houghton Mifflin.
Christensen, W. D. 2007. Agency and the evolution of intelligence. Synthese, forthcoming.
Clark, A. 1997. Being There: Putting Brain, Body and World Together Again. Cambridge: MIT Press.
Collier, J. D., and C. A. Hooker. 1999. Complexly organised dynamical systems. Open Systems and Information Dynamics 6: 241–302.
Cosmides, L., and J. Tooby. 1997. The modular nature of human intelligence. In A. B. Scheibel and J. W. Schopf, eds., The Origin and Evolution of Intelligence, pp. 71–101. London: Jones and Bartlett.
Deary, I. J., and P. G. Caryl. 1997. Neuroscience and human intelligence differences. Trends in Neurosciences 20: 365–71.
Dehaene, S., M. Kerszberg, and J.-P. Changeux. 1998. A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the National Academy of Sciences USA 95: 14529–34.
Dennett, D. C. 2001. Are we explaining consciousness yet? Cognition 79: 221–37.
Duncan, J. 2001. An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience 2: 820–29.
Duncan, J., R. J. Seitz, J. Kolodny, D. Bor, H. Herzog, A. Ahmed, et al. 2000. A neural basis for general intelligence. Science 289(5478): 457–60.
Floeter, M. K. 2003. The spinal cord, muscle, and locomotion. In L. R. Squire, F. E. Bloom, S. K. McConnell, J. L. Roberts, N. C. Spitzer, and M. J. Zigmond, eds., Fundamental Neuroscience, 2nd ed., pp. 767–90. New York: Elsevier Academic Press.
Fuster, J. M. 1997. The Prefrontal Cortex. Philadelphia: Lippincott-Raven.
Fuster, J. M. 2004. Upper processing stages of the perception-action cycle. Trends in Cognitive Sciences 8: 143–45.
Fuster, J. M., M. Bodner, and J. K. Kroger. 2000. Cross-modal and cross-temporal association in neurons of frontal cortex. Nature 405: 347–51.
Gazzaniga, M. S., R. B. Ivry, and G. R. Mangun. 1998. Cognitive Neuroscience: The Biology of the Mind. New York: Norton.
Gray, J. R., C. F. Chabris, and T. S. Braver. 2003. Neural mechanisms of general fluid intelligence. Nature Neuroscience 6: 316–22.
Grillner, S. 2003. Fundamentals of motor systems. In L. R. Squire, F. E. Bloom, S. K. McConnell, J. L. Roberts, N. C. Spitzer, and M. J. Zigmond, eds., Fundamental Neuroscience, 2nd ed., pp. 755–65. New York: Elsevier Academic Press.
Hartenstein, V. 1997. Development of the insect stomatogastric nervous system. Trends in Neurosciences 20: 421–27.
Koechlin, E., C. Ody, and F. Kouneiher. 2003. The architecture of cognitive control in the human prefrontal cortex. Science 302(5648): 1181–85.
Legrand, D. 2006. The bodily self: The sensorimotor roots of pre-reflexive self-consciousness. Phenomenology and the Cognitive Sciences 5: 89–118.
Lockard, R. B. 1971. Reflections on the fall of comparative psychology: Is there a message for us all? American Psychologist 26: 168–79.
Metzinger, T. 2003. Being No One: The Self-Model Theory of Subjectivity. Cambridge: MIT Press.
Miller, E. K. 2000. The prefrontal cortex and cognitive control. Nature Reviews Neuroscience 1: 59–65.
Norman, D., and T. Shallice. 1986. Attention to action: Willed and automatic control of behavior. In R. J. Davidson, G. E. Schwartz, and D. Shapiro, eds., Consciousness and Self-Regulation, vol. 4, pp. 1–18. New York: Plenum Press.
Posner, M. I., and M. E. Raichle. 1994. Images of Mind. New York: Scientific American Library.
Powley, T. L. 2003. Central control of autonomic functions: Organization of the autonomic nervous system. In L. R. Squire, F. E. Bloom, S. K. McConnell, J. L. Roberts, N. C. Spitzer, and M. J. Zigmond, eds., Fundamental Neuroscience, 2nd ed., pp. 913–32. New York: Elsevier Academic Press.
Roth, G., and U. Dicke. 2005. Evolution of the brain and intelligence. Trends in Cognitive Sciences 9: 250–57.
Scheiber, M. H., and J. F. Baker. 2003. Descending control of movement. In L. R. Squire, F. E. Bloom, S. K. McConnell, J. L. Roberts, N. C. Spitzer, and M. J. Zigmond, eds., Fundamental Neuroscience, 2nd ed., pp. 792–814. New York: Elsevier Academic Press.
Sherrington, C. S. 1947. The Integrative Action of the Nervous System, 2nd ed. New Haven: Yale University Press.
Simpson, G. G. 1949. The Meaning of Evolution. New Haven: Yale University Press.
Swanson, L. W. 2003a. Brain Architecture: Understanding the Basic Plan. Oxford: Oxford University Press.
Swanson, L. W. 2003b. The architecture of nervous systems. In L. R. Squire, F. E. Bloom, S. K. McConnell, J. L. Roberts, N. C. Spitzer, and M. J. Zigmond, eds., Fundamental Neuroscience, 2nd ed., pp. 15–44. New York: Elsevier Academic Press.
13 What Determines the Self in Self-Regulation? Applied Psychology's Struggle with Will
Jeffrey B. Vancouver and Tadeusz W. Zawidzki
When does a personality simulation become the bitter mote of a soul?
—Alfred J. Lanning, I, Robot
Introduction
The tension between mechanistic understandings of human behavior and traditional notions of agency has been a topic of vigorous debate in philosophy for thousands of years, and in theoretical psychology since William James's pioneering discussions over a century ago (1897). Discussion of whether concepts of the self, agency, and personal responsibility can be reconciled with the mechanistic explanation of human behavior has not abated.1 In fact these discussions have recently surfaced in what appears to be a rather unlikely domain: applied psychology. Applied psychology includes such subfields as industrial/organizational (I/O), educational, health, and clinical psychology. One primary concern of this field is human motivation. It is therefore not surprising that researchers in this field are interested in concepts of self and agency. What is surprising is that leading researchers in applied psychology take explicit and underargued positions on the philosophical problem of freedom of the will and agency. These researchers then appeal to these positions in arguing against certain theoretical approaches and experimental methodologies. Specifically, theoretical and empirical work based on mechanistic paradigms like control theory is dismissed on philosophical grounds: it is claimed that such approaches are incompatible with standard notions of will, self, and agency (e.g., Bandura and Locke 2003). In essence, these theorists endorse philosophical libertarianism: the view that the will cannot be explained using mechanistic models because it is free and freedom is incompatible with mechanization.
To philosophical admirers of the mechanistic paradigm in cognitive science, which has been fruitfully applied to cognitive phenomena via control theory and its descendants, such as artificial intelligence, connectionism, and situated robotics, the debate in applied psychology should appear quite shocking, and perhaps depressing. Philosophers like Dennett (1984, 2003) have spent decades battling the anti-scientific prejudices of philosophical libertarianism, all the while assuming that they have psychologists on their side. Yet here, in the gritty, down-to-earth domain of applied psychology, we see a resurgence of libertarianism! Our discussion proceeds as follows. In the next section we give a brief overview of relevant current debates in applied psychology. We contrast three different theories in the field: (1) goal-setting theory (Locke and Latham 1990), (2) social cognitive theory (Bandura 1986), and (3) control theory (Powers 1973; Carver and Scheier 1981). Defenders of the first two theories are united in their opposition to the third, largely on the ground that the mechanistic assumptions of control theory are allegedly incompatible with agency. We instead argue that all three theories focus on agency, falling under the self-regulation paradigm, according to which human beings regulate variables to satisfy desires represented within the "self." Indeed, in the third section we describe, in some detail, empirical research that supports a control-theoretic view of agency that is generally consistent with the other self-regulation theories or helps clear up ambiguities in these other theories. This consistency, rather than representing unnecessary redundancy (see Locke 1991), arises from control theory's ability to explain agency via distributed cognitive structures (i.e., its mechanistic assumptions). To be sure, human agency is complex and so far the explanations are incomplete. In the fourth section we thus respond to more abstract concerns about applying control theory to modeling certain capacities hypothesized by the self-regulation paradigm: (1) the worry that control theory is too abstract to yield concrete, testable proposals, (2) the claim that many capacities are best modeled by a "lower level" science, like neurophysiology, (3) the claim that control theory cannot handle irreducibly indeterministic processes, (4) the claim that control theory is uninformative because it merely expresses findings of other theories in different terminology, and (5) the claim that control theory cannot scale up to "higher level" cognitive processes. We focus on the last of these by discussing how control theory might resolve the frame problem that has so bedeviled artificial intelligence research. In the last section we conclude our discussion by making explicit the ultimately philosophical nature of objections to control
theory in applied psychology. In the earlier sections we put to rest many of the empirical concerns of control theory's critics, so all that is left is an unargued intuition: that 'free will' and other concepts of our commonsense or "folk" psychology, like self, belief, and agency, are incompatible with mechanistic models, like control theory. We argue that intuition alone is not sufficient ground to reject control theory as a paradigm in applied psychology. Philosophers like Dennett have spent their careers defending the compatibility of folk psychological notions with mechanistic models of the mind, and until such arguments are rebutted, appeals to incompatibilist intuitions have little force.
Self-regulation in Applied Psychology
Applied psychology tends to focus on the human in context (e.g., school, work, family). As a result, the interaction among psychological and environmental variables has long been considered important when attempting to predict or explain behavior (Mischel 2004). This view has developed into a set of theories called self-regulation theories, which are now recognized as the leading means to understanding human motivation in applied settings (Boekaerts, Maes, and Karoly 2005; Latham and Pinder 2005). Self-regulation theories describe individuals as choosing and shaping their environments while simultaneously being shaped and affected by those environments (Kanfer 1970). The theories tend to place goals, defined as desired states represented within the individual (Austin and Vancouver 1996), in a central explanatory role. The goals on which applied psychologists focus tend to relate the individual to his or her environment. Moreover, goals are usually conceptualized as existing within a hierarchy, where subgoals (e.g., get an A on this test) serve goals (e.g., graduate from college), which are means to even higher level goals (e.g., have a successful career); a minimal sketch of such a hierarchy appears below. Despite the agreement among self-regulation theories regarding the interaction of person, environment, and behavior, and the role of goals in directing behavior, substantial disagreements over the "person," or "human agency," side of the equation have emerged (e.g., Bandura 1989; Locke 1991; Powers 1991). One aspect of this disagreement concerns the nature of agency, specifically, whether or not will can be conceptualized as lawful as opposed to unlawful (i.e., free), or, in other words, whether or not behavior can be explained mechanistically. A second aspect of the disagreement concerns the degree to which agency resides exclusively in individuals, as opposed to being distributed in the environment.2
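Before contrasting the theories, the nested, means-end structure of the goal hierarchies described above can be made concrete with a small illustration. The sketch is ours, not drawn from any of the cited theories, and the class and field names are invented for exposition.

from dataclasses import dataclass, field

# Illustrative only: a minimal representation of a goal hierarchy in
# which subgoals serve as means to higher-level goals.

@dataclass
class Goal:
    description: str
    subgoals: list = field(default_factory=list)

    def add_means(self, subgoal):
        # A subgoal is a means to this (higher-level) goal.
        self.subgoals.append(subgoal)
        return subgoal

career = Goal("have a successful career")
degree = career.add_means(Goal("graduate from college"))
exam = degree.add_means(Goal("get an A on this test"))

def show(goal, depth=0):
    print("  " * depth + goal.description)
    for sub in goal.subgoals:
        show(sub, depth + 1)

show(career)  # prints the hierarchy from the highest-level goal down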
For example, goal-setting theory (Locke and Latham 1990) is arguably the leading self-regulation theory in I/O psychology (Latham and Pinder 2005). The main claim of goal-setting theory, based on a great deal of empirical evidence, is that difficult and specific goals will lead to higher performance than easy or vague goals. In most of the empirical work, the goals are assigned, which is something a manager might do, making for a very practical theory of motivation and accounting for its popularity (Locke and Latham 2004). However, attempts to explain the behavioral phenomena on which goal-setting theory focuses have been more controversial (e.g., Naylor and Ilgen 1984). According to Locke (1991), goal-setting theory's primary formulator and advocate, the "central core of goal-setting theory is the proposition, based on introspective evidence, that conscious goals regulate much human action and specifically performance on work tasks" (p. 18). This statement is somewhat disconcerting given Wegner and Sparrow's (chapter 2 in this volume) research suggesting that we should not rely on introspection for understanding the sources of will. However, for Locke (1995), the primacy and efficacy of consciousness are axiomatic: it is beyond question that conscious decision-making plays the most important causal role in determining the goals and the actions needed to pursue the goals. Moreover, for Locke (1995), it is self-evidently true that the exercise of consciousness is based on free will. Specifically, Locke (1995) states, "free will actually means the choice to focus one's mind conceptually or to let it drift passively at the level of sense perception (or to deliberately unfocus), that is, to think or not to think" (p. 271). Although this clearly places Locke in the libertarian category, theoretical psychologists are also expected to describe mechanisms that account for the relationships predicted within their theories (Sutton and Staw 1995). For goal-setting theory (Locke and Latham 2002) this chore has largely been subsumed by another self-regulation theory: social cognitive theory. Social cognitive theory is explicitly a self-regulation theory of human agency (Bandura 1991). More specifically, Bandura claims, "most human behavior, being purposive, is regulated by forethought" (p. 248). Forethought is composed of goals and predictions regarding actions, all represented symbolically and at least somewhat "controlled" by the individual. Specifically, "people possess self-reflective and self-reactive capabilities that enable them to exercise some control over their thoughts, feelings, motivation, and actions" (Bandura 1991, p. 249). Mechanism shows up in social cognitive theory in the form of beliefs in self-efficacy: "among the mechanisms of human agency, none is more central or pervasive than beliefs of
personal efficacy" (Bandura and Locke 2003, p. 87). These beliefs in self-efficacy are "proximal determinants of human self-regulation" (Bandura 1991, p. 257) across the array of self-functions. Moreover, these processes feed through a "self" that is free to influence the values of variables and the processes (Bandura 1989). Yet how these processes occur is not specified within social cognitive theory. Social cognitive theory attempts to describe the symbolic processes occurring within the human mind (as well as outside of it) and to use this to explain human behavior. However, another theory, control theory, attempts to examine these processes at a deeper level (Vancouver 2005). A theory of self-regulation can be articulated at the level of the meaning of the symbols and how those meanings affect one another and behavior. Alternatively, it can be articulated at the level of the subsystems that explain the functioning of these symbols, give these symbols their meaning, and explain how they are translated into action.3 This is the level of explanation sought by control-theoretic models of self-regulation.4 Like social cognitive theory, self-regulation theories based on control theory can be found in all domains of applied psychology (e.g., Carver and Scheier 1998). Indeed, control-theoretic models as applied to human behavior can be found in fairly basic human research domains like the physiological basis for motivation (Beck 2000), psychomotor behavior (Pressing 1999), and learning (Anderson 1995), as well as more applied areas like decision-making (e.g., Beach 1990) and human factors (Jagacinski and Flach 2003). All these theories share the concept of the negative feedback loop, or control system (see figure 13.1), as a central architecture for understanding why humans behave as they do. Adapted from Wiener's (1948) cybernetic structure, self-regulatory versions of the theory (e.g., Powers 1973) highlight the interaction of the environment and person via a dynamic process of acting to maintain (i.e., regulate) perceptions of environmental variables at desired levels (i.e., goal levels). For example, individuals maintain the speed of their automobiles by constantly (or sporadically) monitoring their speed, comparing it to their desired speed, and pressing or easing up on the gas pedal as discrepancies arise. The mechanisms for doing this, according to control theories, are fairly straightforward. The crucial component is a comparator function that subtracts one signal from another. One of the signals is a symbolic representation of the current state of some variable derived from an input function (i.e., a function that translates information about some variable in the system's environment into a value representing the state of that variable). The other signal represents the desired state (i.e., goal level or reference signal)
for the variable in question.
Figure 13.1 Generic control theory model.
For example, a control system could assess relative position and compare it to desired position (Jagacinski and Flach 2003). Meanwhile the result coming from the comparator goes to an output function, where it is amplified or dampened by a factor called gain, and sent on. For future reference we will call this constellation of three functions a "goal agent" (see figure 13.1). At this point the nature of the information emerging from the output depends on what system and agent is being modeled. In the abstract, the information changes (i.e., acts on) the state of the variable in question. In many versions of control theory (e.g., Powers 1973) the signals emerging from the output functions determine the reference signals and gains for lower level agents. At the lowest level the signals determine desired tensions for muscles, which, if below current tension levels, will cause the muscles to contract. These are actuator functions; they translate neural signals into physical actions on the variable in the environment. Given that the current state of the variable in question is constantly represented by the input function, the resulting changes to the variable represent feedback of the effect of the agent's actions. To be a control system, the actions must move the state of the variable, as represented by the input function, closer to the desired state (i.e., reduce the discrepancy between current and desired state).
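To make the loop concrete, here is a minimal sketch of a single goal agent in the driving example above. It is our illustration, not a model from the cited literature; all numerical values are invented, and the agent is the simplest possible case, a proportional controller whose output is just gain times discrepancy.

# A minimal sketch (ours, not from the cited literature) of one "goal
# agent": an input function reports the perceived state, a comparator
# subtracts it from the reference signal, and an output function scales
# the discrepancy by a gain before acting on the environment.

desired_speed = 60.0   # reference signal (the goal level)
gain = 0.5             # output-function gain
speed = 40.0           # actual state of the controlled variable

for step in range(25):
    perceived = speed                        # input function (perfect sensing)
    discrepancy = desired_speed - perceived  # comparator
    throttle = gain * discrepancy            # output function
    disturbance = -2.0                       # e.g., a headwind or hill; the agent
                                             # never represents its cause, only its
                                             # effect on the perceived variable
    speed += throttle + disturbance          # actuation plus environmental feedback

print(round(speed, 1))  # settles near 56.0: a residual discrepancy remains,
                        # just large enough for gain * discrepancy to offset
                        # the constant disturbance

Note that this simple proportional agent never quite reaches its goal under a constant disturbance; raising the gain shrinks the residual discrepancy, which is one reason gain appears as a distinct, adjustable parameter in the generic model. In hierarchical versions of the sort Powers describes, the desired_speed and gain values would themselves be set by the outputs of higher level goal agents.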
The model takes an interactionist approach to explaining the causes of behavior. That is, behavior is not simply a function of the stimuli from the environment because the desired goals and the levels of gains are internally determined. However, as we show later, the nature of this determination is controversial. Finally, the model allows complex responses to environmental stimuli without relying on an internal representation of the relationship between variables in the environment (see Ashby 1956). That is, a goal agent need not know the nature of disturbances (see figure 13.1) on the variable of interest to control it, just as a thermostat need not know what causes a room to chill in order to maintain a steady temperature. This greatly simplifies the computational requirements of the system (more on this later).

Such control-theoretic models have come under heavy criticism from goal-setting and social cognitive theorists (e.g., Bandura and Locke 2003). Specifically, criticisms seem to focus on control theory's relationship to engineering models and, by implication, the mechanistic assumptions of such models. Because the mathematics derived from control theory is used to design and understand machine behavior, critics of the theory focus on the quality of the engineered machine as a metaphor for understanding human behavior. Naturally they see this metaphor as wanting. For example, Bandura and Locke (2003) note:

In the machine metaphor of human functioning, the organism is stripped of qualities that are distinctly human—a phenomenal and functional consciousness, intentionality, self-reactiveness, and self-reflectiveness. Many problems arise in a cybernetic model of human motivation and action. Machines are not conscious. They operate automatically and non-volitionally. People are not nonconscious organisms locked in negative feedback loops driven automatically to reduce disparity between sensed feedback and inner referents. . . . The connections between sensing and action are not mechanical. The cybernetic model ignores the vast knowledge of cognitive self-regulation of human motivation and action. (p. 93, emphasis added)
First, one might wonder whether these authors' invocation of concepts like 'consciousness' represents the type of conceptual imperialism Davies (chapter 3 in this volume) warns against. In fact, the sentiment expressed by Bandura and Locke (2003), as opposed to Davies, seems to represent the beliefs of many applied psychologists about control theory (e.g., Sandelands, Glynn, and Larson 1991). Yet psychological control theories of self-regulation are seeking to understand the nature of the machine that humans are, not the nature of actual, contemporary artifacts (Vancouver 2005). The confusion over this distinction seems to be at the heart of the conflict.5
To resolve this conflict, one of two paths can be taken. The first is to claim that humans are not machines of any type, where "machine" is defined as a physical entity constrained in its behavior by the laws governing its architecture. This seems to be the path taken by Bandura and Locke, bolstered by a libertarian position on free will. The second path is to narrow the distance between humans and machines, if not real ones then abstract conceptualizations. This second path is more consistent with a science seeking to understand the laws governing human behavior. It seeks to recognize the physical limits on the operations of an architecture given the medium in which it is instantiated, as well as the special properties that might emerge as those operations are realized. This is the position taken by most control theorists. In particular, control theories begin with physically realizable, simple control systems (or goal agents), and explore the properties that can emerge from configurations of these agents. The more properties of humans that can be explained with these configurations, the stronger the control theorists' argument that the path is a correct one.6

In the following sections, empirical work is described that addresses particular phenomena that control-theoretic models allegedly cannot address. For many of the studies, computational models composed of goal agents and negative feedback loops have been rendered and simulated to confirm that control-theoretic explanations are at least possible. The results of the simulations have also been compared to the empirical data as a further test of the veracity of the models. Of course, this process does not ensure that the models are correct, but it does indicate that control-theoretic models can account for self-regulation phenomena.

Specific Control-Theoretic Models of Self-regulation Phenomena

A Model of Goal Striving

The first model we consider was designed to provide a control-theoretic explanation of one of goal-setting theory's core findings: the positive relationship between goal difficulty and performance. In an experimental study, participants were asked to manipulate the work schedules of nurses in a simulated hospital environment (Vancouver, Putka, and Scherbaum 2005). Budgetary goals were set to test for the effect of goal difficulty on motivation and performance. Participants interacted with a computer interface that provided information about the state of the schedules (e.g., the current cost of the schedule in terms of total hourly wage given the time blocks scheduled) and means for adding, moving, or deleting time blocks (see figure 13.2).
Figure 13.2 SimNurse interface. (Reprinted from J. B. Vancouver and D. J. Putka 2000. Analyzing goal-striving behavior and a test of the generalizability of perceptual control theory. Organizational Behavior and Human Decision Processes 82: 334–62. Copyright © Elsevier.)
During the study sessions, participants were given several schedules to revise, and the budgetary goal was changed for different schedules in order to assess the effect of goal-level changes. A control-theoretic model of the individual participant and the simulated environment was developed (see figure 13.3). The model consisted of two closed-loop control systems. The model reduced discrepancy from the goal state (target budgetary goals) by acting on the time blocks to alter perceptions such that they approached their goal states. Model and participant performance showed impressive correlation (see figure 13.4), supporting the closed-loop, control-theoretic explanation of this behavior.

Figure 13.3 Schematic depicting control-theoretic model of scheduling agents. (Reprinted from J. B. Vancouver, D. J. Putka, and C. A. Scherbaum 2005. Testing a computational model of the goal-level effect: An example of a neglected methodology. Organizational Research Methods 8: 100–27. Copyright © Sage Publications.)

Figure 13.4 Correspondence between behavior of computational model and a participant. (Reprinted from J. B. Vancouver, D. J. Putka, and C. A. Scherbaum 2005. Testing a computational model of the goal-level effect: An example of a neglected methodology. Organizational Research Methods 8: 100–27. Copyright © Sage Publications.)
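For readers who want a concrete sense of how such a model operates, here is a deliberately simplified sketch of a goal-striving loop on the scheduling task. The published model consisted of two coupled control systems with fitted parameters; this single-loop version, with hypothetical wage, block, and goal values, conveys only the basic logic.

```python
# Illustrative single-loop sketch of goal striving on the scheduling task.
# Wage, block size, and goal values are hypothetical.

HOURLY_WAGE = 20.0
BLOCK_HOURS = 4
BLOCK_COST = HOURLY_WAGE * BLOCK_HOURS

def schedule_cost(blocks):
    # Input function: perceive the schedule's total cost.
    return blocks * BLOCK_COST

blocks = 30            # current schedule, in time blocks
budget_goal = 2000.0   # target budgetary goal (the reference signal)

while True:
    discrepancy = schedule_cost(blocks) - budget_goal  # comparator
    if abs(discrepancy) < BLOCK_COST:
        break                                # within one block of the goal
    # Output function: act on the environment by deleting or adding a block.
    blocks += -1 if discrepancy > 0 else 1
```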
Discrepancy Creation

The study just described confirmed that a closed-loop, control-theoretic model could account for the way a human system strives for goals. It does not eliminate the goal construct, as Bandura and Locke (2003) suggest a machine-metaphor model would, but instead confirms the explanatory utility of the goal construct. Yet perhaps this study does not address the central problem these psychologists have with control-theoretic explanations of human motivation. These researchers also argue that control-theoretic models cannot explain other phenomena that other self-regulation theories, particularly social cognitive theory, can explain. Moreover these other phenomena are likely to be, according to Bandura (1991), the more interesting phenomena. In particular, Bandura refers to what he calls "discrepancy creation" (p. 260). Discrepancy creation is the antithesis
of discrepancy reduction and thus, according to Bandura, cannot be explained by control-theoretic models. The discrepancy creation aspect of motivation refers to the setting of goal levels above perceptions of current states. According to Bandura (1986), once these new goal levels are set, discrepancy-reducing (i.e., negative feedback) loops may take over, but they do not start the process. However, ironically, control theory was actually introduced to applied psychology to explain a discrepancy creation phenomenon. Specifically, Campion and Lord (1982) describe in control-theoretic terms how failures in past performances lead individuals to raise goal levels in subsequent performance episodes. Indeed the hierarchical architecture they propose is based on the notion that discrepancies in higher level goal agents are reduced by creating discrepancies, via the raising of goal levels, in lower level goal agents. Yet this argument has had little impact (Bandura and Locke 2003).

There are two possible reasons for disregarding Campion and Lord's arguments. First, the arguments are verbal. That is, they were not backed by a working simulation demonstrating that such an explanation can hold water. The lack of a concrete model encourages irrelevant criticisms. For example, Locke (1991) argues that if discrepancy-reducing systems could change the goal level, they would reduce the goal to the perception of the current state rather than raise it. Certainly discrepancy-reducing systems could be constructed that work that way, but that is not the way any control theorists suggest they work in humans (or should work in machines). Computational models of the hypothesized architecture, and simulations of these models that replicate human behavior, should put such criticisms to rest. Given that Bandura has no computational model of discrepancy creation against which to compare a control-theoretic model, it is likely that the lack of a model is not the issue for him. Instead, the issue is likely the context for discrepancy creation and what Bandura believes it says about human nature. Specifically, Bandura (1986) and others describe discrepancy creation as the result of past successes, not past failures, as in Campion and Lord's (1982) proposal. Indeed evidence that at least some individuals are likely to raise their goals following previous success is considered, by some, as an empirical demonstration of the predictive superiority of social cognitive theory over control theory (e.g., Phillips, Hollenbeck, and Ilgen 1996).

Both of the issues above can be resolved by constructing a control-theoretic computational model of discrepancy creation under a condition of improving performance. Such a model, involving four goal agents, was constructed (see figure 13.5), and a research protocol was devised using the scheduling task simulation to test the model (Scherbaum 2001). Although not all the research participants exhibited discrepancy creation, some did and so did the model. Moreover no one exhibited more discrepancy creation than the model, and simple parameter changes to the model "turned off" discrepancy creation.7

Figure 13.5 Schematic of discrepancy-producing property given multiple discrepancy-reducing agents.
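A sketch of the Campion and Lord mechanism may help here. The model actually tested involved four goal agents; the two-level fragment below, with hypothetical variables and parameter values, shows only the core idea: a higher-level agent reduces its own discrepancy by raising the reference signal, and hence creating a discrepancy, in a lower-level agent.

```python
# Two-level sketch of discrepancy creation (hypothetical values throughout):
# the higher-level agent's output raises the lower-level agent's goal.

higher_reference = 100.0   # desired level on a higher-level variable
lower_reference = 50.0     # the lower-level agent's current performance goal
performance = 50.0         # perceived performance

for episode in range(5):
    higher_discrepancy = higher_reference - performance
    # Higher-level output function: raise the lower-level reference signal
    # in proportion to the higher-level discrepancy (discrepancy creation).
    lower_reference += 0.3 * higher_discrepancy
    # The lower-level agent then reduces the discrepancy it was just handed.
    performance += 0.8 * (lower_reference - performance)
    print(episode, round(lower_reference, 1), round(performance, 1))
```

Across episodes the lower-level goal is repeatedly set above current performance, which is exactly the pattern Locke claims negative feedback loops cannot produce.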
A Role for Beliefs

Although the previously described models rely heavily on the goal construct, and appear to explain self-regulation phenomena, they do not involve components that represent beliefs. Explaining behavior in terms of expectancy beliefs (i.e., beliefs about the relationships between events or between actions and events) has had a long, distinguished history in psychology (e.g., Olson, Roese, and Zanna 1996). One expectancy belief in particular, belief in self-efficacy, plays a dominant role in both social cognitive theory (Bandura 1997) and goal-setting theory (Locke and Latham 2002). According to these theories a strong belief in self-efficacy is the key determinant of goal acceptance (i.e., increasing the probability of accepting a difficult goal), as well as a direct, positive influence on motivation, regardless of goal difficulty level. Moreover these relationships appear to have a great deal of empirical backing (Bandura and Locke 2003).

Given the strong empirical evidence in favor of the role these types of beliefs play in human motivation, applied control-theoretic models of self-regulation include them in some way or another, though often referring to them with different terms (e.g., Carver and Scheier 1998; Klein 1989). It would appear that this inclusion should counter criticisms that machine-metaphor models neglect beliefs. Yet Bandura and Locke (2003) argue that "sociocognitive factors grafted on the negative feedback loop" (p. 93) do not make a theory. Rather, they state that "the ingredients in this theoretical conglomerate may contain valid ideas, but none of them have anything to do with control theory" (p. 93). Indeed Vancouver (2005) finds some legitimacy in Bandura and Locke's claim, noting that self-regulation control theorists often talk about these constructs but do not incorporate them within the cybernetic architecture. However, some control theorists have described architectures into which such beliefs could easily be incorporated (Jagacinski and Flach 2003; Powers 1973). Perhaps more important, Vancouver (2000) proposed a model that not only included components representing such beliefs, but described how they are used, based on structures that deviate only minimally from the basic goal agent (see figure 13.6).
Figure 13.6 Schematic representing structures needed for mental modeling and resource allocation.
The model depicted in figure 13.6 is used to analyze human behavior on tasks in which subjects must decide whether to accept certain difficult goals, and then indicate the amount of resources (e.g., time) they are willing to allocate to accomplishing each goal. In the model a signal is passed from the output function to the input function, creating an internal feedback loop (i.e., one that does not run through the environment). This signal is weighted by the expectancy belief, which represents a belief in the effectiveness of one's actions. It acts, via the output function, by changing the perception of the variable to be regulated, which is generally how self-efficacy is measured. This link allows the agent to mentally simulate acting on the variable. Meanwhile a second, "adaptive agent" monitors the mental simulation and records the time it takes for the focal agent to reduce discrepancy given its expectancy belief. Should the simulated time exceed a threshold, the adaptive agent reduces the gain of the focal system to zero, effectively rejecting the goal of the focal agent and allowing the adaptive agent to maintain its goal of resource use. If the focal agent reduces its simulated discrepancy to zero without exceeding the threshold, the amount of time the agent "thinks" it needs is represented in the gain signal. In this way a computational model composed of negative feedback loops can represent a kind of feedforward process for determining anticipated need and allocating resources in contexts where immediate, unambiguous information about the current state of the variable is lacking.
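The following sketch, our own reconstruction with hypothetical names and thresholds, illustrates the internal loop just described: the output is fed back to the input side, weighted by the expectancy belief, so the agent can simulate progress toward the goal, and an adaptive agent rejects the goal if the simulated time runs too long.

```python
# Sketch of mental simulation via an internal feedback loop. The expectancy
# weight stands in for belief in self-efficacy; all values are hypothetical.

def simulate_goal(perception, reference, gain, expectancy, time_threshold=50):
    """Return the simulated steps needed to reach the goal, or None if the
    adaptive agent rejects the goal (simulated time exceeds the threshold)."""
    for step in range(1, time_threshold + 1):
        discrepancy = reference - perception
        if abs(discrepancy) < 0.01:
            return step              # anticipated need, in simulated time
        output = gain * discrepancy
        # Internal loop: the output changes the perception directly,
        # weighted by expectancy, instead of acting on the environment.
        perception += expectancy * output
    return None                      # gain set to zero: goal rejected

print(simulate_goal(0.0, 10.0, gain=0.5, expectancy=0.9))   # accepted quickly
print(simulate_goal(0.0, 10.0, gain=0.5, expectancy=0.05))  # rejected: too slow
```

With high expectancy the simulated discrepancy closes well within the threshold and the goal is accepted; with low expectancy simulated progress is too slow and the goal is rejected.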
The specific relationship between changes in expectancy (and thus beliefs about self-efficacy) and resource allocation in the model above includes both positive and negative (i.e., nonmonotonic) and discontinuous (i.e., not smooth) aspects. Although such relationships are explicitly described in some control-theoretic self-regulation theories (e.g., Carver and Scheier 1998), it is not clear whether such a relationship would be predicted in Bandura and Locke's theories of self-regulation. Regardless, several studies have now found support for the relationship (Vancouver et al. 2005) and thus, by inference, for the model.

This model, and subjects' behavior in the series of studies that confirmed it, are relevant to a debate between Bandura (1989, 1991) and Powers (1991) regarding whether or not belief in self-efficacy has a direct effect on motivation and behavior. Bandura claims that belief in self-efficacy has a positive effect on motivation and behavior, mediated through goal acceptance as well as some other, unspecified process. In contrast, Powers argues that belief in self-efficacy has an indirect positive effect on performance by virtue of influencing goal selection, but that any direct effect is often negative, depending on circumstances. Specifically, if the self-efficacy manipulation leads individuals to believe they are doing better than they are, then they are likely to prematurely reduce their efforts (i.e., motivation) on the task.

In the series of studies that inspired the model, Vancouver et al. (2001, 2002) examined the relationship between belief in self-efficacy and performance within an individual over time. Four studies found that belief in self-efficacy is a positive function of past performance, but that it affects subsequent performance negatively, substantiating Powers's (1991) control-theoretic account. Other studies, which manipulated goal difficulty, showed that belief in self-efficacy correlates positively with the acceptance of difficult goals. These studies also show that goal difficulty has a positive effect on performance. Thus, via the indirect route of goal acceptance, self-efficacy is positively related to performance. This finding is also consistent with Powers's arguments, as well as with the claims of social cognitive theory and goal-setting theory regarding the positive influence of belief in self-efficacy.

This set of studies and the model they inspired provide an important demonstration of the advantages of the control-theoretic approach. Not only do control-theoretic models throw light on how belief in self-efficacy affects performance; the control-theoretic perspective also highlights complexities that are hidden to theories operating at what Dennett calls the "semantic" or "phenomenological" level (1987, p. 44), that is, the level of meanings. This is the level at which goal-setting theory and social cognitive theory operate, and for this reason such theories are blind to all but the
most commonplace observations regarding the relationship between beliefs in self-efficacy and performance.

The Origins of Model Components

Undaunted by these types of findings, Locke (1991) "moves the goal posts" once again: he argues that although control theory can account for phenomena like discrepancy creation via multiple goal agents, these agents are "programmed" by control theorists. Ultimately, he argues, only a concept like will and the desire for life can account for the motivation and goals of humans. Control-theoretic models, he argues, cannot account for these ultimate sources of agency because engineers construct machines and engineers did not construct humans. This is a variation of what philosophers call "the homunculus problem": theories of the mind cannot ultimately rely on some agent performing key functions while this agent itself remains unexplained. This is not a problem Locke need concern himself with, given his perspective, but relying on such an unexplained agent is inconsistent with a control-theoretic perspective, and many researchers in applied psychology have commented on this problem (e.g., Pinder 1998). The complete rendering of all the goal agents, how they come about (or came to be encoded in the genome), and how they interact are all explanatory goals of control theory. Although control theory is a long way from meeting these explanatory goals, the form of such explanations is presumed to involve emergence from the combined activity of simple goal agents and control systems. Moreover some of the specific mechanisms by which the functions of goal agents develop and change have been hypothesized and modeled for some time (Powers 1973).

Consider the development of the expectancy term in the model depicted in figure 13.6. Expectancy is depicted in that figure as an exogenous variable (i.e., nothing in the model influences it). As a weight used to transform the output signal into a perception, it represents an aspect of an input function. What is not depicted in figure 13.6 is that the expectancy is a variable that changes as the individual interacts with his or her environment. Specifically, a delta-learning rule, which is another variation of the negative feedback loop commonly used in neural network models of learning (Anderson 1995), was incorporated into the computational model so that it could "learn from experience" with the various options. The delta-learning rule compares a prediction with an experience and changes the weights of the prediction model based on the discrepancies between the predictions and the experience.
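In code, the delta rule is essentially a one-line update. The sketch below is a generic version, not the model used in the studies, and the experience values are invented.

```python
# Generic delta-learning rule: nudge the expectancy weight in proportion to
# the discrepancy between prediction and experience. Data are invented.

learning_rate = 0.2
expectancy = 0.5   # initial belief in the effectiveness of one's actions

for experienced in [0.9, 0.8, 0.9, 0.85]:   # observed effectiveness per trial
    prediction = expectancy                  # the weight serves as the prediction
    delta = experienced - prediction         # prediction error
    expectancy += learning_rate * delta      # weight change proportional to error
    print(round(expectancy, 3))              # drifts toward experienced values
```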
The model learned to determine its effectiveness as readily as the participants did in the studies. Thus the source of beliefs in self-efficacy is not really much of a mystery. Indeed Bandura (1986) not only acknowledges the role of past experience as a source of expectancies and beliefs in self-efficacy; he considers it the primary source.

That said, Bandura (1986) also describes other sources of beliefs in self-efficacy, which Vancouver's lab has not simulated. These other sources are important conceptually because they go beyond the simple learning model represented in the delta-learning rule. For example, Bandura argues that one source of beliefs in self-efficacy is what he calls "modeling." In this context, "modeling" refers to observing another and learning from that observation. It requires linking the other to the self, seemingly a difficult computational process. But are such processes intractable for computational models? Do they necessarily require some kind of irreducible indeterminacy? This seems unlikely. In fact Arbib and Rizzolatti (1996) propose a control-theoretic model of imitation in monkeys, based on neurophysiological studies of "mirror neurons" in area F5 of monkey cerebral cortex.

Although computational models of the source of beliefs in self-efficacy are not complete, the general approach has applicability to the question of where all components of control system agents come from. Indeed the model of how belief in self-efficacy arises from experience, described above, is taken from general computational models of learning (Gluck and Bower 1988). As mentioned earlier, neural network models, which are examples of these, can learn to categorize complex patterns of simple stimuli into meaningful representations (Rumelhart et al. 1986). Although the literature is too extensive to review here, these models at least hint at the mechanisms potentially involved in developing functions that can be altered by experience. The bonus is that they are based on the basic negative feedback architecture of control theory. Less malleable functions may be innate products of evolution.

The Frame Problem and Other General Objections to Control Theory

The previous section addressed objections in applied psychology to control theory using specific models. As we have shown, it is possible to give concrete, experimental and model-theoretic interpretations of both the hypotheses of goal-setting and social cognitive theorists, and their objections to control-theoretic accounts. These interpretations yield interesting results, some of which vindicate and explain certain goal-setting and social
cognitive hypotheses, others of which appear to confute some of their objections to control-theoretic models. In addition control-theoretic models raise new questions regarding goal-setting and social cognitive theories, and open up new avenues of research. For example, a control-theoretic approach uncovered previously unappreciated complexities in the relationship between self-efficacy and motivation.

However, some of the objections to control-theoretic approaches are more abstract, and not easily addressed experimentally or model-theoretically. Specifically, some argue that there are principled reasons why it is unwise to squander the limited institutional resources of applied psychology on control-theoretic models of key self-regulation capacities. We can think of five such reasons. First, control theory is too abstract8 to yield testable predictions; it amounts to nothing more than the claim that some systems reduce discrepancy between a start state and a goal state, and this we already knew: after all, applied psychology studies goal-directed behavior. Second, some capacity that is "black-boxed" at the psychological level is best studied by a different science. For example, some might argue that belief in self-efficacy should only be modeled by neurophysiologists; there is no interesting psychological account of how such beliefs are determined. At the level of psychology, belief in self-efficacy must be taken as an unexplained given that can predict performance and motivation. Third, some self-regulation capacity is governed by an irreducibly indeterministic process, and therefore cannot be explained in terms of deterministic, control-theoretic models. Fourth, control theories are not informative because they merely use a different vocabulary for conceptualizations found in other theories, particularly goal-setting and social cognitive theory. Fifth, although control theory is an appropriate framework for studying certain "low-level" capacities, like the capacity to regulate blood sugar levels, it does not "scale up": "higher level" capacities, like decision-making and problem-solving, cannot be modeled control-theoretically.

Our earlier discussion of specific experimental studies and models goes some way toward addressing these concerns. With respect to the first concern, none of these studies tests control theory per se. There is always a specific model, with many detailed parameters fixed, that is tested. For example, the model of subject performance on the scheduling task includes specific input and output functions, specific settings of gain value, and so forth. With respect to the second concern, the models discussed earlier involve psychological variables, such as desires and perceptions, and no neurophysiological variables. If these are good models of self-regulation capacities, then control theory can provide models at the psychological
level. With respect to the third concern, although it is possible that some self-regulation capacities are implemented in mechanisms that have irreducibly indeterministic components, it is unlikely that these mechanisms are entirely indeterministic. Furthermore it is possible for a control-theoretic model to include components that it treats as (smaller) black boxes because they are irreducibly indeterministic (Glimcher 2005). For example, Powers (1973) argues that the process by which new output functions are selected in response to insufficient discrepancy reduction may have random components. However, the fact that these models illuminate the functioning of self-regulation capacities shows that there is plenty of deterministic mechanism underlying these capacities for control theory to model. With respect to the fourth concern, the studies above demonstrate that control-theoretic models predict different effects than those predicted by the other theories, so they are not just old theories repackaged with new terminology.

The fifth concern is not so easily dismissed. The models discussed above apply to capacities that are, arguably, "high level": determining the optimal schedule for a dozen or so employees is, on the face of it, a high-level, problem-solving task. Nevertheless, the problem of scaling up to higher level capacities has a specific version, familiar to philosophers and researchers in AI, which deserves further discussion. According to an influential line of argument, computational models of cognitive capacities, all of which are instantiations of the basic control-theoretic architecture, are destined to fail when applied to higher level cognitive processes, for a principled reason known as the "frame problem" (Pylyshyn 1987).9

The frame problem arises whenever a cognitive process can draw on any information available to the whole of a sufficiently complex cognitive system that is potentially relevant to succeeding at some task. The problem is that searching through all available information and evaluating it for potential relevance to some task is computationally intractable: no algorithm could accomplish this in real time. Not every cognitive process has this property. For many and perhaps all physically possible cognitive systems, many tasks can be solved by computational subsystems or modules that make use of a very limited, precisely specified information store. For example, Fodor argues that perceptual input systems, like language parsers, or visual scene parsers, can be modeled computationally precisely because they are "informationally encapsulated" modules of this kind (Fodor 1983). However, according to Fodor, high-level cognitive processes, like belief fixation and problem-solving, occurring within "central systems," can draw on any information available to the whole system that is potentially relevant, a property that Fodor calls "isotropy" (Fodor 1983, p. 105). The
problem is that there appears to be no way to computationally model such central systems. As Fodor points out (1983, p. 105), the clearest examples of tasks that exhibit isotropy involve scientific reasoning. In principle, any information might be relevant to formulating a well-confirmed hypothesis to explain some set of experimental data. For example, theories of fluid behavior proved relevant to formulating well-confirmed hypotheses to explain the behavior of light. Cognitive processes that apparently exhibit isotropy operate in mundane contexts as well. According to the consensus understanding of the frame problem, whenever a cognitive system acts on the world, and must therefore update its representation of the world to reflect both intended and many unintended consequences of its actions, it may encounter the frame problem.10

On the face of it, human beings appear to solve the frame problem all of the time. Our commonsense, background knowledge about how objects behave in different contexts enables us to immediately judge the relevance or irrelevance of different facts. Common sense allows for the possibility that any information might be relevant to some task, depending on context, while efficiently treating most information as irrelevant relative to specific contexts. In Fodor's terms, common sense is isotropic. Therefore, Fodor argues, common sense cannot be modeled by any computational model that we know of (1983, pp. 114–15). The reason is that when we model a capacity computationally, or try to build an artificially intelligent system, the goal is to formalize the knowledge human beings deploy so that unintelligent, mechanical algorithms can use it to solve tasks. But the only way to do this appears to be to represent all potentially relevant facts in an explicit formalism, and then use an algorithm to evaluate each fact for relevance. And this task is computationally intractable.

Here is how the frame problem might arise for control-theoretic models of the kind described above. Typically, in more complicated models, tasks are solved through the orderly activation of multiple control systems. First, there is the control system that works to reduce discrepancy between an anticipated or current state and the goal state assigned by the task. It does this by activating lower level goal agents via changes in the desired states or gains for those agents. These then work to reduce discrepancies between their perceived and desired states. In this way the system treats these various subgoals as means to achieving the higher level goal state. For example, in the scheduling task the goal state set by the task is a certain cost level for the overall payroll. The initial control system
might work toward this goal by activating another control system that works to achieve some other goal that is likely to contribute to achieving the system's overall goal. It might activate a control system that works to achieve some kind of visual feedback from the computer interface that correlates with the overall goal, for example, the total cost value depicted on the computer screen (see figure 13.2). This hierarchy of control systems is consistent with Christensen's (chapter 12 in this volume) description of evolved executive functioning in higher order organisms.

However, it is also in this delegating of subtasks to other control systems that the frame problem looms. How does the system know how to decompose the overall task into subtasks and, consequently, which other control systems to activate? Depending on context, very different subtasks, and hence very different control systems, might be relevant to accomplishing the overall task. In the scheduling task example, a different program with a different interface would require pursuing different subgoals, and hence activating different control systems. The commonsense, background knowledge about which interface is being used is crucial to knowing which subgoals and control systems are relevant to solving the overall task in context.

Before discussing how control theory might tackle this challenge, it is important to make a distinction between the frame problem, as abstractly conceived, and specific tasks that allegedly exemplify it. The frame problem, as abstractly conceived, is just the problem as Fodor understands it: how does one model the capacity to select, from the vast amount of information that is potentially relevant to solving some task, only the information that is likely to be actually relevant, given the context in which the system must solve the task? This must be distinguished from specific tasks that appear to require a solution to this problem: the task of formulating a specific well-confirmed scientific hypothesis, the task of pulling a wagon with a battery and a bomb on it out of a room without being destroyed (Dennett 1987), the task of making a midnight snack (Dennett 1987), the task of finding someone's phone number in a phonebook and phoning her (Fodor 1983, p. 113), and so on.

Explaining the frame problem in terms of specific examples engenders a kind of illusion. The specific examples are invariably of tasks that human beings can accomplish. The point is to show how smart we are compared to extant computational models. The natural inference is that while computational models cannot solve the frame problem, we can. But this is a non sequitur. We might not be able to solve the frame problem any more than computational models can. Perhaps we can solve many specific tasks
that are offered as examples of the frame problem, and that computational models cannot solve, using more limited means: heuristics or good tricks that our brains have discovered through learning and evolution, and that work, for the most part, where and when we need them to work.11

There are good reasons to expect that such heuristics and good tricks are very difficult to make explicit or implement in artificial systems. For example, as Dennett points out frequently, natural selection often takes advantage of "unintended" by-products to cobble together unobvious solutions to problems (Dennett 1991, p. 175). Sometimes these unintended by-products are medium-dependent: they are a result of the physical properties of the materials of which we happen to be composed. For example, Clark (chapter 7 in this volume) describes recent "artificial walkers" that require minimal computation, taking advantage of the dynamics inherent in certain physical materials. Such good tricks do not constitute solutions to the frame problem. However, if the brain is a bag of such tricks, it might fool us into thinking it has solved the frame problem; it might use such limited means to solve problems that we treat as examples of the frame problem, only because we notice that certain computational models cannot solve them. Perhaps the reason they cannot solve them is that we have not yet been able to model all the good tricks employed by human brains, not that human brains have somehow solved the frame problem.12

If this argument is cogent, then all that control-theoretic models, or any other computational models, need provide is some kind of explanation of how a limited set of mechanisms might approximate or create the illusion of solving the frame problem in certain contexts. And control theory does have the resources to model this capacity. The key idea is what we referred to above as an "adaptive agent" (see also figure 13.6). Any task requires the deployment of numerous control systems of the kind we called "goal agents" above. These simply function to reduce discrepancy between a current state and a certain goal state, where the current state is specified by input from the specific agent's environment. As we have seen, part of the way one goal agent might reduce discrepancy between its current and goal states is by activating other goal agents with goals that the system treats as means to the overall goal. Call the initial goal agent the "focal" goal agent, the goal agents it activates "subsidiary" goal agents, and their goals "subgoals." An adaptive agent takes as input certain "diagnostic variables": variables that indicate the progress that the focal goal agent is making toward its goal. For example, the rate at which the focal goal agent reduces discrepancy might constitute input to an adaptive agent.
The goal of the adaptive agent might be to keep this rate of discrepancy reduction at or above a certain level. If it falls below this level, the adaptive agent might, in order to reduce this discrepancy, effect a reorganization of the resources on which the focal goal agent is relying to achieve its goal. For example, the adaptive agent might redirect focal agent output to different subsidiary goal agents, retaining this redirection if focal goal agent discrepancy starts getting smaller. If some number of such redirections does not work, another adaptive agent might suspend the activity of the focal goal agent by reducing the priority of its goal relative to other goals of the system, yet continue trying other reorganizations off line, and use an emulation of the environment (see figure 13.6) to determine whether any would increase the rate of discrepancy reduction in the focal goal agent.13

How might such an architecture approximate or create the illusion of solving the frame problem in certain contexts? The idea is that, initially, most information that is potentially relevant to a task is ignored.14 The focal goal agent simply activates (i.e., sends signals that change reference or gain levels) certain default subsidiary goal agents that pursue goals that the system treats, by default, as means to achieving the focal goal agent's goal. Inflexible, "sphexish"15 behavior is avoided thanks to the adaptive agents. These agents constantly monitor the progress that the focal goal agent is making toward its goal, and explore alternatives to the default response if progress is insufficient. Which alternatives are explored, and the order in which they are explored, may be fixed by evolution or influenced by learned associations. Of course, there is the possibility that the higher order task of exploring alternatives to the focal goal agent's default response is itself inflexible and "sphexish." However, the architecture here can, in principle, be indefinitely recursive: there can be higher order adaptive agents monitoring the progress that lower order adaptive agents make toward their goals of maximizing the rate of discrepancy reduction by focal goal agents. Furthermore, given continued failure to achieve a desired rate of discrepancy reduction by some focal agent, the adaptive agent that monitors it may effect random reorganizations of the subgoal/subsidiary agent structure.16
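As a rough illustration (entirely ours; the subsidiary "strategies," rates, and thresholds are hypothetical), the sketch below shows an adaptive agent monitoring a diagnostic variable, namely the focal agent's rate of discrepancy reduction, and reorganizing which subsidiary agent the focal agent relies on when progress stalls.

```python
# Sketch of an adaptive agent monitoring a focal goal agent. Each subsidiary
# agent is reduced to an "effectiveness" number; all values are hypothetical.

def focal_step(discrepancy, effectiveness):
    # One pass of the focal loop: the active subsidiary shrinks the discrepancy.
    return discrepancy - effectiveness * discrepancy

def run_with_adaptation(subsidiaries, min_rate=0.1, max_reorgs=3):
    discrepancy, reorgs, current = 10.0, 0, 0
    while discrepancy > 0.5:
        new_discrepancy = focal_step(discrepancy, subsidiaries[current])
        rate = discrepancy - new_discrepancy      # diagnostic variable
        if rate < min_rate:                       # progress has stalled
            reorgs += 1
            if reorgs > max_reorgs:
                return None                       # suspend the focal goal agent
            current = (current + 1) % len(subsidiaries)  # redirect output
        discrepancy = new_discrepancy
    return discrepancy

# The default subsidiary fails (effectiveness 0.0); reorganization finds one
# that works (0.3), and the focal goal is still achieved.
print(run_with_adaptation([0.0, 0.3]))
```

In the example that follows, the adaptive agent's redirection corresponds to checking the dishwasher when the usual drawer fails.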
Although such an architecture is not guaranteed to solve the frame problem, it has the resources to implement a capacity for behaviors that approximate this competence in certain contexts. Applied to an actual case, such a model could display properties that are often treated as symptomatic of the capacity to solve the frame problem. Consider Dennett's example of making a midnight snack. Molly enters the kitchen and moves immediately toward the refrigerator. Most information available to her is ignored: she does not ponder the relevance of the wallpaper on the kitchen wall. This is because, in control-theoretic terms, only default subsidiary goal agents have been activated. Suppose that Molly encounters an unexpected problem: the knife she wants to use for spreading mayonnaise is not in its usual drawer. In Molly's mind, some adaptive agent "notices" because the rate at which the focal goal agent is reducing discrepancy between its goal state (sandwich-in-mouth) and its current state has slowed below some threshold. The adaptive agent works to compensate: it triggers a reorganization of the subsidiary goal agents originally activated by the focal goal agent. Molly checks the dishwasher. The knife is found. The rate at which the focal goal agent effects discrepancy reduction returns to normal. Eventually this discrepancy is removed: the sandwich is in Molly's mouth. The model succeeds in explaining the subtle balance of efficiency and flexibility that, in this context, appears to indicate a solution to the frame problem.

Of course, there is no guarantee that such an architecture can perform this well in any context. If Molly cannot find a knife in the dishwasher or anywhere in the house, the adaptive agent may run out of possible "reorganizations" of subsidiary goal agents and would have to explore more radical responses. For example, it might turn off the focal goal agent, thereby abandoning the original goal, and activate other agents aimed at figuring out what happened to all of the knives. Or it might activate an agent aimed at figuring out how to spread mayonnaise without a knife. The point is, however, that in most contexts in which human beings are likely to operate, such an architecture has the resources to produce behavior with an efficiency and flexibility sufficient to accomplish many of the sorts of tasks that exemplify the frame problem. This is sufficient to explain how human beings approximate or create the illusion of solving the frame problem.

There is at least one other factor, drawn from recent models of distributed cognition, that is relevant here. As Clark (1997, 2003) argues, human cognition is distinguished by its nearly continuous structuring and restructuring of the environment to ease the cognitive load borne by the human brain. Human beings routinely engage in "epistemic action" (Clark 1997, pp. 64–66): physical actions that transform the environment in such a way that the limited cognitive resources of our brains can accomplish apparently miraculous cognitive feats. Epistemic action can include everything from building artifacts, to organizing physical and artificial environments, to creating social institutions.

A dramatic example of epistemic action, of particular relevance in the context of a discussion of the frame problem, comes from studies of patients suffering from advanced Alzheimer's who nonetheless live close to
functional, independent lives. It turns out that such patients rely on very specific ways of organizing space and time: furniture, utensils, appliances, and so forth must all be kept in specific places, and routines, such as bus schedules, must be strictly followed. Any modifications of their carefully structured environments cause dramatic crashes in cognitive competence (Clark 1997, p. 66). Such patients are examples of cognitive systems that clearly cannot solve the frame problem. They only have access to a very limited amount of information at a time, due to memory impairment, so they clearly cannot flexibly consider all information that is potentially relevant to a task and select only information that is likely to be actually relevant. However, as long as their physical environment is set up in a very stereotyped, invariant way, they do not need such flexibility. They can rely on the inflexible, "sphexish" neural capacities that are still intact to approximate normal competence, given that their environments are structured appropriately.

Perhaps the predicament and response of such patients is merely an extreme case of the general human predicament and response. We appear to process information relevant to tasks in which we are engaged flexibly and efficiently. But much of this may be a result of our constant structuring of our social and physical environments into stereotyped patterns such that we can succeed with far less flexible neural competencies than appear necessary for the tasks in which we engage. Indeed, in an unpublished study involving the nurse scheduling task, Vancouver and students found that in the absence of the usual interface feedback, many subjects automatically tuned in to other variables in the interface that indirectly tracked budgetary expenditures. Rather than try to calculate appropriate modifications to schedules "in their heads," subjects automatically searched for other environmental variables that they could track perceptually, which correlated with discrepancy reduction. In many subjects the default response to the absence of feedback appears to be a "retargeting" of the same perception/action loop on different environmental variables: environmental structures are relied on to keep on-board processing at a minimal level. Thus there are resources available to control theory, and related approaches, to explain the kinds of competencies that, in human beings, are taken to require a solution to the frame problem.

Let us end this discussion by reconnecting with the central topic of this volume: the free will problem. Fodor's skepticism regarding computational models of central systems, apparently capable of isotropic reasoning, bears interesting affinities with libertarian skepticism regarding computational models of human motivation and decision-making. Both are skeptical of applying
control-theoretic, computational models to certain important capacities hypothesized to exist in the human mind. While Bandura and Locke doubt that control theory can illuminate self-regulation capacities, like belief in self-efficacy, Fodor doubts that computational models can explain "central systems," responsible for tasks like scientific reasoning, problem-solving, and belief fixation. And their reasons for skepticism are remarkably similar. In both cases there is an alleged creativity or spontaneity that escapes algorithmic or deterministic understanding. For Bandura and Locke, no mechanistic model can explain the self's freedom to decide on different courses of action, or the will's capacity to overcome obstacles in the pursuit of difficult, specific goals. For Fodor, no computational model can replicate the creative integration of disparate cognitive domains apparent in analogical reasoning (Fodor 1983, p. 107).17

The responses to these challenges are similar as well. In both cases specific, and admittedly impressive, human capacities are inflated by being conflated with general capacities that no physically possible cognitive system can implement. The ability to decide among different courses of action, and the capacity to willfully pursue the favored course, is conflated with freedom from all constraints. The ability to draw on potentially relevant information from diverse cognitive domains to flexibly and efficiently solve tasks is conflated with the capacity to select, from the vast amount of information that is potentially relevant to solving some task, only the information that is likely to be actually relevant, given the context in which the system must solve the task. Once these capacities are appropriately deflated, once we are clear on the "varieties of free will worth wanting" (Dennett 1984) and the flexibility and efficiency of which human beings are actually capable, mechanistic and algorithmic models appear viable. Such capacities are distributed across multiple control systems interacting with each other, including higher order control systems that monitor discrepancy reduction in lower order control systems. In addition they rely heavily on "external scaffolding," like language, artifacts, and social institutions, molded by "epistemic action" to simplify the tasks faced by neural mechanisms. It is possible that randomizing processes play a role in getting such systems out of ruts: as we saw above, this might explain the creative exploration of potential solutions to tasks that exemplify the frame problem. Such randomizing elements may also play a role in free, deliberate decision-making (Dennett 2003, pp. 133–34).

We now turn our attention back to the main issue. Given that we have addressed the major scientific reasons that have been offered against the application of control-theoretic models to self-regulation phenomena, are
there any other reasons for this skepticism? In the concluding section, we argue that the opponents of control theory in applied psychology have only one reason left: an unargued allegiance to incompatibilism and libertarianism.

Conclusion: Philosophical Objections Require Work Too!

Our examination of the debate between control theorists in applied psychology and their critics leads to the following conclusions. The insights of goal-setting theory and social cognitive theory can be reconciled with a control-theoretic approach to modeling human motivation and performance. Goal-setting theorists may be right that specific, difficult goals lead to higher performance than easy "do your best" goals. Social cognitive theorists may be right that belief in self-efficacy plays a key role in many human behaviors. However, such claims raise obvious questions: How and why do specific, difficult goals lead to higher performance than easy "do your best" goals? How and why does belief in self-efficacy influence behavior? Control theory provides a paradigm for addressing such questions. One would think that those who first posited such self-regulation phenomena would welcome explanations that legitimized their theories, as Crick and Watson legitimized Mendel.

This raises a further question: Why do goal-setting and social cognitive theorists bear such animosity toward control theory? If one assumes that the resources of any scientific community are limited, one role of the members of that community is to see to their prudent distribution. The libertarian perspective of goal-setting and social cognitive theorists leads them to cordon off some causes of behavior as nonscientific elements. Specifically, certain components of the mind are considered intractable black boxes. On this view, resources spent trying to illuminate such black boxes are wasted. However, control theorists are gathering at the edge of the black boxes and talking about what might be inside. They are developing scientific paradigms, or using ones developed elsewhere, to test their speculations. Thus it is not surprising that the old guard is trying to close ranks. Yet the libertarian argument is fundamentally unscientific. It seeks to put boundaries on what can be studied, limiting empirical scrutiny and thus stifling scientific progress. Moreover it defends this by appealing to an intuition that many philosophers find wanting.

It is entirely legitimate to raise philosophical objections to scientific enterprises. However, just as one needs to work to make an empirical objection stick, one needs to work to make a philosophical objection stick. Mere
appeals to intuition are not enough. Locke, Bandura, and other opponents of control theory within applied psychology pitch some of their worries as empirical. We have documented the extensive work that control theorists have put into responding to these empirical problems. Not only can control-theoretic approaches model the kinds of self-regulation phenomena on which applied psychologists focus, but they can also model alternative proposals in order to test between them, and they reveal properties of such phenomena that are otherwise not obvious (e.g., the complex relationship between beliefs in self-efficacy and motivation). Although control-theoretic models are by no means the whole story, their empirical success constitutes a sufficient response to most empirical problems that have been raised for them.

This leaves the philosophical problems. But here, all that critics have to go on are unargued incompatibilist intuitions. It is taken as just obvious that mechanistic explanations are incompatible with folk psychological notions such as the will, the self, belief, and agency. Such intuitions underlie the equation of mechanistic models with eliminativism about folk psychological notions (Bandura 1989; Locke 1995). But such incompatibilism is far from obvious. Philosophers of mind and philosophers of psychology have been defending various proposals for reconciling folk psychological notions with mechanistic explanations for over a century now. These philosophical arguments must be addressed before incompatibilist intuitions can trump mechanistic paradigms like control theory. Herein lies the work of philosophy. As with the empirical objections to control theory, such philosophical criticisms do not come for free.

Notes

1. In cognitive psychology, Wegner (2002) has provided compelling evidence that our introspections of willed control are largely illusory. In philosophy, Dennett (1984, 2003) has argued forcefully that any variety of free will worth wanting is compatible with mechanistic and deterministic explanations of human behavior. Other philosophers vigorously disagree (Kane 1996).

2. Naturally, other disagreements exist between self-regulation theorists and theories, but these are the ones of primary concern for us here.

3. See Dennett (1987, pp. 44–45) for an explicit and insightful discussion of this distinction. He distinguishes between the "purely semantic level . . . or . . . phenomenological level [where] all items in view are individuated by their meanings . . . and one just assumes that the items behave as items with those meanings ought to behave," and the "mechanical" level, at which we ask the question "how can ideas be designed so that their effects are what they ought to be, given what they mean?"
4. Some control-theoretic models emphasize this deeper level of explanation more than others (see Klein 1989).

5. It also appears that symbol meanings are the bottom-most layer of explanation allowed by goal-setting and social cognitive theorists.

6. The construction of configurations represents half the work of the theorist. The other half, likely to take much longer, is confirming that the particular configurations are the correct ones.

7. There are several ways to prevent the model from exhibiting discrepancy creation, but the data collected did not allow one to distinguish which way or ways best represent the subjects who did not exhibit it.

8. We thank Andy Clark for raising this issue during the question and answer period of the version of this chapter presented at the Mind and World Conference in Birmingham, Alabama.

9. We thank a member of the audience for raising this issue during the question and answer period of the version of this chapter presented at the Birmingham conference.

10. See many of the papers contained in Pylyshyn (1987).

11. Indeed, if it were true that humans never suffered from the frame problem, it would seem reasonable to expect all scientific theories to be complete. That is, we would know what all the relevant information was, and only the relevant information. We would not waste time with experiments that are uninformative or nondiagnostic. However, even a small amount of time with the leading journals in a discipline will reveal that this is not the case.

12. See Dennett (1991, pp. 279–80) for a version of this argument.

13. This off-line "experimentation" constitutes a control-theoretic model of the kind of environment-independent thinking that many critics claim control theory cannot model (Bandura and Locke 2003). Off-line simulations have the advantage of preserving resources (i.e., trying alternative problem-solving strategies in the real world is limited by a behavioral bottleneck, and hence must be carried out serially).

14. This is an example of what AI researchers used to call the "sleeping dog" strategy (Haugeland 1987, p. 83).

15. This term was coined by Hofstadter (1985, p. 529) to refer to stereotyped, reflex-like routines that are completely insensitive to significant variations in environmental variables, such as the Sphex wasp's routine for verifying that the hole in which it has chosen to lay its eggs is clear.

16. In fact the earliest application of control theory to issues in psychology (Powers 1973) proposed just this role for randomization in control-theoretic models.
17. As with much of our discussion here, this point regarding the analogy between libertarian notions of the self and Fodor’s central systems derives from a discussion by Dennett (1991, p. 261).

References

Anderson, J. R. 1995. An Introduction to Neural Networks. Cambridge: MIT Press.
Arbib, M., and G. Rizzolatti. 1996. Neural expectations: A possible evolutionary path from manual skills to language. Communication and Cognition 29: 393–424.
Ashby, W. R. 1956. An Introduction to Cybernetics. London: Methuen.
Austin, J. T., and J. B. Vancouver. 1996. Goal constructs in psychology: Structure, process, and content. Psychological Bulletin 120: 338–75.
Bandura, A. 1986. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice-Hall.
Bandura, A. 1989. The concept of agency in social-cognitive theory. American Psychologist 44: 1175–84.
Bandura, A. 1991. Social cognitive theory of self-regulation. Organizational Behavior and Human Decision Processes 50: 248–87.
Bandura, A. 1997. Self-Efficacy: The Exercise of Control. New York: Freeman.
Bandura, A., and E. A. Locke. 2003. Negative self-efficacy and goal effects revisited. Journal of Applied Psychology 88: 87–99.
Beach, L. R. 1990. Image Theory: Decision Making in Personal and Organizational Contexts. New York: Wiley.
Beck, R. C. 2000. Motivation: Theories and Principles, 4th ed. Upper Saddle River, NJ: Prentice-Hall.
Boekaerts, M., S. Maes, and P. Karoly. 2005. Self-regulation across domains of applied psychology: Is there an emerging consensus? Applied Psychology: An International Review 54: 149–54.
Campion, M. A., and R. G. Lord. 1982. A control systems conceptualization of the goal-setting and changing processes. Organizational Behavior and Human Performance 30: 265–87.
Carver, C. S., and M. F. Scheier. 1981. Attention and Self-Regulation: A Control-Theory Approach to Human Behavior. New York: Springer-Verlag.
Carver, C. S., and M. F. Scheier. 1998. On the Self-regulation of Behavior. New York: Cambridge University Press.
Clark, A. 1997. Being There. Cambridge: MIT Press. Clark, A. 2003. Natural Born Cyborgs. Oxford: Oxford University Press. Cronbach, L. J. 1957. The two disciplines of scientific psychology. American Psychologist 12: 671–84. Dennett, D. 1984. Elbow Room. Cambridge: MIT Press. Dennett, D. 1987. Cognitive wheels: The frame problem of AI. In Z. Pylyshyn, ed., The Robot’s Dilemma, pp. 41–64. Norwood, NJ: Ablex. Dennett, D. 1991. Consciousness Explained. Boston: Little Brown. Dennett, D. 2003. Freedom Evolves. New York: Viking. Fodor, J. 1983. The Modularity of Mind. Cambridge: MIT Press. Fodor, J. 1987. Modules, frames, fridgeons, sleeping dogs, and the music of the spheres. In Z. Pylyshyn, ed., The Robot’s Dilemma, pp. 139–49. Norwood, NJ: Ablex. Glimcher, P. W. 2005. Indeterminacy in brain and behavior. Annual Review of Psychology 56: 25–56. Gluck, M. A., and G. H. Bower. 1988. Evaluating an adaptive network model of human learning. Journal of Memory and Language 27: 166–95. Haugeland, J. 1987. An overview of the frame problem. In Z. Pylyshyn, ed., The Robot’s Dilemma, pp. 77–93. Norwood, NJ: Ablex. Hofstadter, D. 1985. Metamagical Themas. New York: Basic Books. Jagacinski, R. J., and J. M. Flach. 2003. Control Theory for Humans: Quantitative Approaches to Modeling Performance. Mahwah, NJ: Erlbaum. James, W. 1897. The Will to Believe and Other Essays. New York: Dover. Kane, R. 1996. The Significance of Free Will. Oxford: Oxford University Press. Kanfer, F. H. 1970. Self-regulation: Research, issues, and speculation. In C. Neuringer and J. L. Michael, eds., Behavior Modification in Clinical Psychology, pp. 178–220. New York: Appleton-Century-Crofts. Karoly, P., M. Boekaerts, and S. Maes. 2005. Toward consensus in the psychology of self-regulation: How far have we come? How far do we have yet to travel? Applied Psychology: An International Review 54: 300–11. Klein, H. J. 1989. An integrated control theory model of work motivation. Academy of Management Review 14: 150–72. Latham, G. P., and C. C. Pinder. 2005. Work motivation theory and research at the dawn of the twenty-first century. Annual Review of Psychology 56: 485–516.
Locke, E. 1991. Goal theory vs. control theory: Contrasting approaches to understanding work motivation. Motivation and Emotion 15: 9–28. Locke, E. A. 1995. Beyond determinism and materialism, or Isn’t it time we took consciousness seriously? Journal of Behavior Therapy and Experimental Psychiatry 26: 265–73. Locke, E. A., and G. P. Latham. 1990. A Theory of Goal Setting and Task Performance. Englewood Cliffs, NJ: Prentice-Hall. Locke, E. A., and G. P. Latham. 2002. Building a practically useful theory of goal setting and task motivation: A 35-year odyssey. American Psychologist 57: 705–17. Locke, E. A., and G. P. Latham. 2004. What should we do about motivation theory? Six recommendations for the twenty-first century. Academy of Management Review 29: 388–403. Mischel, W. 2004. Toward an integrative science of the person. Annual Review of Psychology 55: 1–22. Naylor, J. C., and D. R. Ilgen. 1984. Goal-setting: A theoretical analysis of a motivational technology. In B. Staw and L. Cummings, eds., Research in Organizational Behavior, vol. 6, pp. 95–140. Greenwich, CT: JAI Press. Olson, J. M., N. J. Roese, and M. P. Zanna. 1996. Expectancies. In E. T. Higgins and A. W. Kruglanski, eds., Social Psychology: Handbook of Basic Principles, pp. 211–38. New York: Guilford Press. Phillips, J. M., J. R. Hollenbeck, and D. R. Ilgen. 1996. Prevalence and prediction of positive discrepancy creation: Examining a discrepancy between two self-regulation theories. Journal of Applied Psychology 81: 498–511. Pinder, C. C. 1998. Work Motivation in Organizational Behavior. Upper Saddle River, NJ: Prentice-Hall. Powers, W. T. 1973. Behavior: The Control of Perception. Chicago: Aldine. Powers, W. T. 1991. Commentary on Bandura’s ‘‘human agency’’. American Psychologist 46: 151–53. Pressing, J. 1999. The referential dynamics of cognition and action. Psychological Review 106: 714–47. Pylyshyn, Z., ed. 1987. The Robot’s Dilemma. Norwood, NJ: Ablex. Rumelhart, D., J. L. McClelland, and the PDP Research Group, eds. 1986. Parallel Distributed Processing. Vol. 1: Foundations. Cambridge: MIT Press. Sandelands, L., M. A. Glynn, and J. R. Larson. 1991. Control theory and social behavior in the workplace. Human Relations 44: 1107–29.
Scherbaum, C. A. 2001. Testing a Computational Goal-Discrepancy Reducing Model of Goal-Discrepancy Creation. Master’s thesis, Ohio University.
Sutton, R. I., and B. M. Staw. 1995. What theory is not. Administrative Science Quarterly 40: 371–84.
Vancouver, J. B. 2000. Self-regulation in Industrial/Organizational Psychology: A tale of two paradigms. In M. Boekaerts, P. R. Pintrich, and M. Zeidner, eds., Handbook of Self-Regulation, pp. 303–41. San Diego, CA: Academic Press.
Vancouver, J. B. 2005. The depth of history and explanation as benefit and bane for psychological control theories. Journal of Applied Psychology 90: 38–52.
Vancouver, J. B., D. J. Putka, and C. A. Scherbaum. 2005. Testing a computational model of the goal-level effect: An example of a neglected methodology. Organizational Research Methods 8: 100–27.
Vancouver, J. B., K. L. Scherbaum, K. More, R. Yoder, and L. Kendal. 2005. Self-efficacy and resource allocation: Support for a nonmonotonic, discontinuous model. Working paper.
Vancouver, J. B., C. M. Thompson, E. C. Tischner, and D. J. Putka. 2002. Two studies examining the negative effect of self-efficacy on performance. Journal of Applied Psychology 87: 506–16.
Vancouver, J. B., C. M. Thompson, and A. A. Williams. 2001. The changing signs in the relationships between self-efficacy, personal goals and performance. Journal of Applied Psychology 86: 605–20.
Wegner, D. M. 2002. The Illusion of Conscious Will. Cambridge: MIT Press.
Wiener, N. 1948. Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge: MIT Press.
14 Civil Schizophrenia

Dan Lloyd
Some farm houses in a farm yard time With a horse and horseman time Where going across the field as if they’re ploughing the field time With ladies or collecting crops time work is Coming with another lady time work is And where she’s holding a book time Thinking of things time work is And time work is where you see her coming Time work is on the field and where work is Where her time is where working is and thinking of people And where work is and where you see the hills Going up and time work is Where you see the grass time work is Time work is and where the fields are Where growing is and where work is —Transcript of patient with chronic schizophrenia describing a farming scene, collected by Heidi Allen and quoted in Frith 1992. Line breaks indicate pauses.
Schizophrenia is a devastating disorder that affects approximately 1 percent of humankind. It usually strikes in early adulthood, and by every measure its outcomes are poor: About half of discharged patients will be rehospitalized within a year. Two-thirds of first-episode patients will continue to have positive symptoms (like hallucinations or delusions) one year later, and one-third will still have these symptoms six to ten years later. Less than 20 percent of individuals with the illness are gainfully employed. Subjective and objective measures of the quality of life with schizophrenia are very low, even when compared to other chronically ill patients. One in ten will end their lives through suicide (Robins et al. 1984; Karno and Norquist 1995; Hafner and Heiden 1997; Weiden et al. 1996).
The humane need to address this illness is clear, and there has long been a massive effort to understand and treat it. For example, the PubMed database currently references more than 66,000 papers on schizophrenia. More than three thousand were published last year alone; that’s about eight schizophrenia papers every day. Yet, despite all this effort, and despite significant progress in managing the illness, schizophrenia has been singularly resistant to explanation. It is elusive for every discipline, including philosophy, phenomenology, cognitive science, and neuroscience. Its elusiveness is frustrating but also fascinating. Many psychological disorders and diseases have been crucial to discovering the functions of mind and brain in both illness and health. It would seem that a disorder as manifold as schizophrenia would be especially revealing of the machinery of mind, but so far it poses more riddles than answers.

In this chapter, I propose that schizophrenic cognition does open a window on the working brain, but what is revealed is seen through a glass darkly. This is in part because schizophrenia is a complex disorder (or group of disorders), and in part because the explanatory frameworks we bring to bear on it are inappropriate for its distinctive complexity. To address its challenges, I’ll recommend a double shift of approach, motivated by dynamical systems and phenomenology. The two perspectives illuminate an aspect of cognition that is often overlooked but may be important in understanding schizophrenia.

The Deficit Schema and Its Dysfunctions

Cognitive neuroscience is in some ways the oldest of the disciplines of cognitive science. It can be traced to nineteenth-century neuropsychology, and the clinical genius of researchers like Broca and Wernicke. They and their colleagues inaugurated a search for a physiological implementation of a much older faculty psychology. Their overarching hypothesis was that the faculties reside at specific anatomical addresses, or in other words that the brain was predominantly organized around functional localization. The logic of localization was a logic of lesions, linking the loss of specific brain regions to specific deficits in behavior. If insult to some region R led to a deficit in the behaviors supported by some function f, then, conversely, R’s normal function would seem to be f. Neuropsychology is full of lesion studies, which provide much of the foundation of modern cognitive neuroscience. An old example that
remains germane is the case of H.M., whose hippocampi (and more) were ablated to control his epilepsy. Following his operation, H.M. suffered from severe anterograde amnesia. He was never again able to create long-term episodic memories. His case demonstrated the role of the hippocampus in memory, and subsequent experiments, many with H.M. as their subject, teased apart distinctions between declarative memory and procedural memory, among others. Many of the lessons of H.M. remain central to cognitive neuroscience and cognitive psychology (Corkin 2002).

In the case of H.M. it was not hard to fill in the blanks of the deficit schema. A distinct external cause (his surgery) resulted in a specific and observable single lesion in his hippocampi in both hemispheres, and as a further result brought about specific deficits that could be readily observed and described. None of these clean distinctions hold in schizophrenia. It is multiform at each stage in the deficit schema. First, schizophrenia arises from the joint effect of several causes, none of which is fully understood. One factor is genetic. If your identical twin has schizophrenia, your chance of acquiring it will be around 50 percent; risk tails off as one moves along the branches of the family tree. For example, if you have a cousin with schizophrenia, your own chances double from the baseline rate of 1 percent. But even as these ratios point to a genetic contribution, they just as clearly indicate that something else is also involved, affecting some stage of pre- or postnatal development, or arising in the environment or experience of those who ultimately acquire the disease. Moreover there may be multiple paths to schizophrenic disorders. Many potential contributing causes have been discussed, but there is no consensus yet.

The multiple ambiguous causes of schizophrenia lead to multiple and ambiguous differences in the brain. Early in the last century it seemed that schizophrenic brains were anatomically indistinguishable from healthy brains, but more careful measurement revealed an enlargement of cerebral ventricles in those with the disorder. This then seemed to be due to smaller volumes of gray matter overall, particularly in the frontal lobe but also in the temporal lobe. At this point statistically significant variations in volume have been found in parts of all four lobes, the basal ganglia, and cerebellum (Niznikiewicz et al. 2003). These volume differences may (or may not) be related to observed differences in cytoarchitecture in several regions (Frith 1992). Relating all these variations to their expression in the illness would be complicated in any case, but hypotheses are additionally confounded from two directions.
On the one hand, it’s not clear that schizophrenia causes all these differences, as individuals with the illness are also affected by chronic medication and a very different life experience. On the other hand, it’s also unclear that any of these anatomical differences reflect mechanisms of schizophrenic cognition. Many people exhibit similar anatomical variations—siblings of schizophrenics, for example—without developing the illness. Emblematic of both confounds is the relationship of gray matter loss to schizophrenic symptoms. Individuals with schizophrenia have less gray matter volume than those without, but then, between ages twenty and sixty, the average healthy individual will lose around 15 percent of his or her cortical gray matter (Lim et al. 1992). Loss of gray matter in itself does not give rise to schizophrenia, fortunately. In short, the root causes and neural expressions of schizophrenia are both obscure.

But perhaps the most perplexing aspect of the illness is its symptomatology. Only in the 1980s were the symptoms organized into positive and negative categories (Crow 1980). More recently the positive symptoms were further divided, leading to a ‘‘three-compartment’’ conception of the illness (Liddle 1987). These are, first, the positive symptoms including delusions of various types and hallucinations. Second, there are the ‘‘disorganized symptoms,’’ including disorganized speech and behavior, impaired attention, or catatonia. Third, there are the negative symptoms, including diminished speech, decreased ability to initiate goal-directed behavior, flattened affect, inability to express pleasure, and social isolation. (These diagnostic symptoms are frequently accompanied by numerous cognitive deficits; see Barch 2005.) The clarity of the three compartments is somewhat illusory, however. Individual patients will express the illness in idiosyncratic ways, and furthermore the form the illness takes changes over time. Moreover, under scrutiny, some of the symptoms take on the enigmatic cast of the disease overall. For example, a commonsense understanding of delusions might define them as patently false beliefs held by persons who are unaware of that falseness. Bill Fulford has pointed out that the differentia of this definition fail to capture what seems to have gone wrong in delusion (Fulford 1994). First, delusions are not simply false (or many of us would be in their grip, most of the time), nor are they necessarily false. Someone may suffer the paranoid delusion that the government is spying on him, but the belief may nonetheless be true. More important, Fulford points out that individuals with delusions often have a great deal of insight into their beliefs. They know that they hold the belief and can appropriately relate it to many other beliefs, some delusional and some not. They often know that the delusion is exceptional, that it is denied by everyone around them, that it is radically dysfunctional in the context of practical action and social life, and that it has made life hell for them.
Even more perplexing, it’s not even clear that delusions are beliefs. Deluded patients often fail to act on their delusions or to register appropriate affect about them. Citing observations like these, Stephens and Graham propose a unique propositional attitude, the ‘‘delusional stance’’ (Stephens and Graham 2004). In short, schizophrenic delusion has something to do with falsehood and evidence, something to do with a lack of insight, and something to do with belief, but a precise description of the cognitive dislocation turns out to be elusive.

One indicator of the complex symptomatology of schizophrenia is the proliferation of diagnostic tests for the illness and its variants, like ‘‘schizotypal disorder’’ and ‘‘schizoaffective disorder.’’ Some of these can be revealing of assumptions about the subjective experience of schizophrenia. For example, consider the opening questions of the Schizotypal Personality Questionnaire:
1. Do you sometimes feel that things you see on the TV or read in the newspaper have a special meaning for you?
2. I sometimes avoid going to places where there will be many people because I will get anxious.
3. Have you had experiences with the supernatural?
4. Have you often mistaken objects or shadows for people, or noises for voices?
5. Other people see me as slightly eccentric (odd).
6. I have little interest in getting to know other people.
7. People sometimes find it hard to understand what I am saying.
8. People sometimes find me aloof and distant.
9. I am sure I am being talked about behind my back. (http://www-rcf.usc.edu/~raine/spq.htm)1
Indirect questions dominate this and other tests. Imagine, by analogy, a diagnostic test for pain that asked not ‘‘Does it hurt a lot?’’ but instead asked ‘‘Do you find yourself saying ‘ouch’ a lot?’’ or ‘‘Do other people often offer you aspirin?’’ The circumlocutions imply that self-awareness is impaired in schizophrenic disorders, even in comparison to other mental illnesses. This failure of insight is one facet of the common view that individuals with schizophrenia suffer a ‘‘break with reality.’’ The break with reality, then, is the subjective counterpart of the breakdown of communicative abilities. Many of the signs listed in the three compartments of schizophrenic symptomatology are variants on the twin themes of mistaken perception and derailed expression. Excerpts from schizophrenic discourse, sprinkled through every textbook treatment, offer many particular examples of delusional belief and fractured language.
At the limit, the break with reality can seem to exclude schizophrenic experience from any normal understanding. Karl Jaspers, following Franz Brentano, referred to ‘‘genetic understanding’’ to denote the understanding of how one mental state arises from another, and claimed that the mark of madness was the failure of the sane to achieve a genetic understanding of the mad attempts to communicate (Jaspers 1963). In Jaspers’s account, the failure was absolute for severely delusional patients. Quoted out of context, the utterances of individuals with schizophrenia express beliefs that can’t easily fit into the spectrum of typical world views, circa 2006. With all these assumptions—the break in perception and communication and the failure of insight—in mind, then, let us consider the dialogue below, a transcription of five minutes of a hospital interview of a woman with psychosis:

Interviewer: I want you to tell me something about what’s been happening to you.
Patient: Well, I’ve been exposed to guerilla warfare, and some of these Dutch Confusia . . .
I: Dutch Confusia?
P: Yes, Dutch Confusia, that’s what he is, he’s a con-fus-ia. He will cause confusion. A great deal of British and Dutch use those people in nations where they want to break into, create some strife, oftentimes steal something, or do something, get away with something. They’ve done it for thousands of years. They planned and plot world conquest. About every thousand years, my grandmother says, the confusia overrun the world. They break into nations: that’s what Russia is, and they’re doing it in South America. They’re to blame for the trouble in South America. They will deceive our Royals and Royals cannot fight back and do the things they do. We have to return good for evil—
I: What are Royals?
P: A Royal is a human being as God created him, that lives by God’s law, that will not slay another man, will not touch his person. Lives by the Bill of Rights, in other words, as God gave the commandments for man to conduct himself. It’s come down to us, it’s called the Bill of Rights today, because William the Bastard and the Duke of York came over during the American Revolution and overran this continent, and they meddled around in our civil affairs over here and now we call it the Bill of Rights. He renamed that and they’ve written in death penalty, which is forbidden by God—any man can err, he can change hands, but you cannot take his life. Disprove that, my grandfather the Baptiste’s first wife—he detected Christ’s human rights—
I: What do you mean, your grandfather—
P: Great grandfather, John the Baptiste, St. John.
I: I see—
P: John the Baptiste, great great great grandfather. I can’t repeat the number of patterns of removal, but he is a great great grandfather and really the head of the
American, as we say, the house of Aabel, the American nation over here. He was the founding father— I: What do you mean, the house of Aabel? P: Aabel—it’s the clan of Aabel, our tribe, as you would say— I: Would you explain that word, Aabel? P: The body of Americans carry Aabel blood more than they do Cain and Satan blood— I: What does that word Aabel mean? P: Abel, Abel, A with the first Adam and Bel means one that lived by the Bell of Rights, that set down the eighty marked lands by the acorn and the oak, and the acorn became the symbol of our Liberty Bell and that Bell of Rights, or law, set of commandments as God gave us, become known as the Bell of Rights, because they would call Congress, they would ring that bell, they would call, gather ye, hear ye, hear ye—town crier, or the first bell in the harbor up there, holds a torch aloft. Our grandfather was the first to be given light by God. That happened right down here in the Cove of Anton, that’s one reason that we came to this home— I: The what? P: The Cove of San Anton is a cave, a volcanic formation, and our grandfather, the first Jo-an and the first Adam— I: The first what? P: My great grandfather is Adam. The first Adam, and his son Jo-an and his wife, and his son’s wife. He had a small child already, he had Anton. The T means a crucifix in ancient hieroglyphics. I: Tell me more about the meaning of the letters— P: They had wandered around the United States—I’m coming to that—they came down here, to, they call this the Fountain of Youth region. This water here will mend bones, and they had discovered, God said, ‘‘Seeking my light and the right water to heal’’—our people were homeopathic. Now if you want to cleanse your system, you seek the right water. If you have an ailment, you have to first seek the right water. And some of our people heal with the sun, but they sought this water because if they had broken bones or sores or anything that wouldn’t heal, this water flows through limestone here, the famous Bell Fountain in Ohio here—you’ve heard of that I suppose—those were Grandpa Bell’s Fountains, and the mineral lode is the best right down where we are, and those 13,000 acres were an ancient reserve that belonged to my grandfather when he went to the other side, the General— I: Now why are you here? P: I don’t know why I’m being routed out here. I understood some tests were to be made. I: What kind of place is this? P: This is Mountain View, a hospital for mentally deficient, or mental patients. I: Are you a mental patient? P: No, I am not, but they’re trying to make it appear that I am. The British and Dutch are doing that, the Confusia . . .
Undeniably, there is something deeply ‘‘off’’ about this patient’s attempt to communicate with her doctor. She seems almost oblivious to the social environment and the pragmatic demands and constraints that normally support successful communication. From this passage one could extract dozens of explicit claims that could serve as textbook examples of deluded belief or logical chaos. But reading the whole passage, slowly, leaves a very different impression. One is struck, I think, by the patient’s urgency to explain herself, by an indefatigable need to make sense of her world. Bleuler (1913) originally characterized schizophrenic thought process by its ‘‘loose associations,’’ but when we settle in with this transcript the associative paths begin to seem rule-governed and thoroughly explored. By the third reading, the passage almost seems like the theoretical discourse of a systematic philosopher, the ruminations of the Leibniz of Dutch confusionism. As Einstein labored on the general theory of relativity in an office across the street from an asylum, he remarked on a feeling of kinship with the inmates: ‘‘They were the madmen who did not concern themselves with physics. I was the madman who did.’’ At the same time, the passage is reminiscent of modernist literature, with echoes of Beckett or Molly Bloom. In short, genetic understanding of this speaker is not impossible. This is not to deny that the speaker’s illness is greatly distorting her discourse, which is neither literature nor philosophy (but compare the nonschizophrenic novels of Kathy Acker, or the nonschizophrenic philosophy of, e.g., Lacan). But, just as with the understanding of any discourse, this text becomes clearer with the addition of its context. More text yields more understanding, and even the glimmers of the organization of the patient’s worldview, as she returns to her first topic at the end of the passage. Isolated sound bites from schizophrenic discourse suggest an illness characterized by an incomprehensible break with reality. But the lesson of the passage above is that providing even limited context for those same data suggests a method in the madness, and the possibility of partial understanding. This simple observation, that context renders schizophrenic perception more understandable, is the emblem for the main claim of this chapter. Schizophrenia, I will suggest, is a disorder of context (see also Hemsley 2005). The context in question is the temporal context of episodes of thought and perception, schizophrenic and healthy alike. Time, I’ll suggest, is the fundamental structure of our experience (and essential to every aspect of cognition). It is so basic as to be invisible and thus largely overlooked in both philosophy and cognitive science. My proposal here is that schizophrenia can be approached as a partial dislocation in fundamental temporal cognition. Its symptoms reflect a struggle to stabilize and normalize a shifting framework that we who are fortunate in health call reality.
My ‘‘brief history of time’’ will unfold on two levels at once. The first level is that of dynamical systems with reference to a few main theories of the pathophysiology of schizophrenia. The second level is phenomenological, considering the normally invisible structures of temporality and their emergence from the underlying dynamic architecture. The links across levels will then suggest an interpretation for what is both strange and inexpressible in schizophrenic experience.

The Brain as a Recurrent Network, in Health and Illness

In 1990 Jeffrey Elman introduced the connectionist architecture known as the Simple Recurrent Network (SRN; Elman 1990). Figure 14.1 depicts the SRN architecture schematically. Essentially, a recurrent network is based on a standard three-layer feedforward architecture, designed for backpropagation learning. Its innovation is a feedback loop, functionally implemented in an auxiliary set of ‘‘neural’’ units that copy the information in the hidden layer, making it available for the next cycle of information processing. Thus at each processing cycle two inputs are available to determine the output: the occurrent input, and a copy of the pattern of activity in the hidden layer from the immediately preceding cycle.
Figure 14.1 A simple recurrent network.
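To make the architecture concrete, the forward pass of an Elman network can be sketched in a few lines of Python with NumPy. This is a toy rendering of figure 14.1 of my own devising, not code from Elman or from this chapter; the layer sizes are arbitrary and the weights are left random rather than trained by backpropagation.

    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_HID, N_OUT = 4, 8, 4                  # arbitrary layer sizes
    W_in = rng.normal(0.0, 0.5, (N_HID, N_IN))    # input -> hidden
    W_ctx = rng.normal(0.0, 0.5, (N_HID, N_HID))  # context -> hidden (the feedback loop)
    W_out = rng.normal(0.0, 0.5, (N_OUT, N_HID))  # hidden -> output

    def cycle(x, context):
        # One processing cycle: the occurrent input and a copy of the
        # previous hidden state jointly determine the new hidden state.
        hidden = np.tanh(W_in @ x + W_ctx @ context)
        return np.tanh(W_out @ hidden), hidden

    context = np.zeros(N_HID)                 # no retained past at the first cycle
    for t, x in enumerate(rng.normal(size=(5, N_IN))):  # a short input stream
        output, context = cycle(x, context)   # the hidden state becomes the next context
        print(t, np.round(output, 2))

Because each hidden state is computed partly from the previous hidden state, the context comes to carry information of ever greater temporal depth, which is what allows a trained network to maintain the ‘‘working memory’’ discussed below.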
The simple recurrent network is a generalized computational architecture and a highly abstract model. But it is at the right level of abstraction, in that it specifies a minimally sufficient architecture to support a class of functions no ordinary feedforward net can handle. Elman’s 1990 paper is entitled ‘‘Finding Structure in Time,’’ and it demonstrated that recurrence gave networks the power to learn temporal contingencies of many sorts. SRNs can accommodate delayed responses, and iterative and recursive structures, like embedded clauses. A moment’s reflection reminds us that almost all our cognitive power depends on finding structure in time, that is, building complex representations from the serial presentation of independently ambiguous ‘‘takes’’: language, action planning and execution, scene perception and object constancy, reasoning, narrative, and memory of every sort all presuppose an ability to embed temporal information in perception and cognition.

The SRN also illustrates how temporal information could be folded into the general information stream. Parallel distributed processing is characterized, as the name suggests, by distributed representation, in which information is spread across units and connections, each contributing to many separate functions. In a recurrent network distributed representations also encode time. Temporal information is not localized in specialized time-keeping units but rather is folded into whatever other information the network is processing. This enfolding is ongoing, and as a result the temporal depth of information extends beyond the single cycle mirrored in the feedback loop. The current state of the hidden layer incorporates the previous state, which incorporates the state before that, and so on into the past. Through a regular backpropagation learning process a recurrent net learns to maintain a ‘‘working memory’’ as demanded by the temporal contingencies of the task at hand.

Recurrent networks offer a useful framework for thinking about perception and cognition. They also remind us of a pervasive feature of biological brains, in which as many as 90 percent of all connections are feedback. Figure 14.1 thus also represents a highly schematized picture of the brain. Even at this level of abstraction we can think of ordinary brain function as the flow of information through a recurrent network. And we can think of schizophrenia as a modification of that flow.

One usual suspect in the pathophysiology of schizophrenia is dopamine, which was implicated in the disease more than fifty years ago. The ‘‘dopamine hypothesis’’ proposes that schizophrenia is an effect of a dysregulation in neuromodulatory pathways, originating in midbrain areas (in and near the substantia nigra) and projecting to many cortical regions, especially prefrontal cortex and the mesolimbic system (including nucleus accumbens, ventral striatum, amygdala, hippocampus, entorhinal cortex, medial frontal cortex, and anterior cingulate).
The dopamine hypothesis has been adjusted over the years, and there is room to question whether the evidence unequivocally points to dopamine dysregulation, but these issues will not be discussed here. (See Byne et al. 1999.)

For present purposes we need only consider a schematic translation of neuromodulation into the dynamical system architecture before us. Neuromodulatory inputs can be modeled as a tonic bias input to a recurrent network, as in figure 14.2. The schematic leaves unspecified, for now, the manner of modulation, but the overall causal cascade is clear enough: change in the modulator system leads to change in the information processing in the main recurrent loop. Altered neuromodulation translates into a dynamical systems hypothesis as interference or noise in a recurrent circuit. At the dynamic systems level, then, schizophrenia is the expression of dysregulated, noisy recurrent processing. This is oversimplified, but perhaps not grossly, as it will provide a way to think about several interesting recent proposals about schizophrenia. I’ll summarize three views, to suggest the range of ideas in play in this literature.

Figure 14.2 Neuromodulator input, schematically representing the ‘‘dopamine hypothesis’’ in the exploration of schizophrenia.
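Before turning to those three views, the schematic of figure 14.2 can be rendered in the same toy terms as the earlier sketch. The code below is again my own illustrative construction, not a published model: neuromodulation enters as a tonic bias (with optional noise) on the hidden layer, and a ‘‘healthy’’ and a ‘‘dysregulated’’ copy of the same network are run on identical inputs.

    import numpy as np

    rng = np.random.default_rng(1)
    N_IN, N_HID = 4, 8
    W_in = rng.normal(0.0, 0.5, (N_HID, N_IN))
    W_ctx = rng.normal(0.0, 0.5, (N_HID, N_HID))

    def cycle(x, context, bias=0.0, noise=0.0):
        # Neuromodulation enters as a tonic bias (plus optional noise) on
        # the hidden layer, alongside the input and context streams.
        modulation = bias + noise * rng.normal(size=N_HID)
        return np.tanh(W_in @ x + W_ctx @ context + modulation)

    healthy = np.zeros(N_HID)
    dysregulated = np.zeros(N_HID)
    for t, x in enumerate(rng.normal(size=(10, N_IN))):
        healthy = cycle(x, healthy)
        dysregulated = cycle(x, dysregulated, bias=0.3, noise=0.1)
        # Both runs see identical inputs and weights; they differ only in
        # the modulatory term, yet the discrepancy is copied back through
        # the context loop on every cycle.
        print(t, round(float(np.linalg.norm(healthy - dysregulated)), 3))

Printing the distance between the two runs shows that a disturbance confined to the modulatory term does not stay local: it is re-injected through the context loop and redistributed across the whole hidden state, a point that will matter again below.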
For example, in several papers Nancy Andreasen (1999) has proposed that some of the symptoms of schizophrenia arise through ‘‘cognitive dysmetria,’’ ‘‘a defect in timing or sequencing the flow of information as required during normal ‘thought’ or speech.’’ The flow in question is a loop through thalamus, frontal cortex, and cerebellum. Andreasen points out that just as action requires a fine-grained coordination of sensory inputs and motor outputs, so also thought. On this proposal, then, the motor difficulties observed in individuals with schizophrenia are one expression of dysmetria, and cognitive deficits another.

Christopher Frith identifies recurrent circuits involved in the initiation and monitoring of willed action, and considers the manifold effects of disrupting these loops (Frith 1992). Functionally the loop includes a ‘‘supervisory attentional system,’’ responsible for forming goals and plans. This system communicates with a system for selecting which action to undertake next, among competing actions. The resultant choice initiates the action at the same time as it is communicated back to the supervisory system. A disruption of the link between the supervisory system and the action scheduler would explain several of the negative symptoms of schizophrenia. At the same time, action is accompanied by self-monitoring, which Frith proposes occurs through a corollary discharge from the supervisory system. The corollary discharge, then, is the signal of an intention to act. As in Andreasen’s proposal, Frith’s involves an analogy between action and thought. An internal monitoring system compares the corollary discharge to what actually happens, as indicated by proprioception and other forms of self-perception. Disruption of the corollary intention signal would result in actions or thoughts that appear unintended, a common thread among the positive symptoms of the illness. Frith cautiously identifies the brain areas that may implement the loop, relying on lesion studies, animal models, and behavioral experiments with subjects who have schizophrenia. The attentional supervisory system includes the anterior cingulate cortex, dorsolateral prefrontal cortex, and supplementary motor area. The action selector includes the striatum, especially the putamen, and the globus pallidus.

Karl Friston’s ‘‘disconnection hypothesis’’ considers the implications of dysregulated neuromodulation for two areas of the brain that seem implicated in schizophrenia: the ubiquitous prefrontal cortex and the medial temporal lobe, including amygdala, hippocampus, and parahippocampal gyrus (Friston 1999). The effect of aberrant modulation is to alter long-term synaptic plasticity (i.e., learning). Schizophrenia disrupts the learning of contingencies in the environment, including the social environment,
and it disrupts the learning of complex action patterns. This could lead to negative symptoms, especially those involving communication and social interaction. It could also lead to positive symptoms, through misattribution of the relationships between one’s own intentions and perceived events.

All of the authors above draw on multiple strands of evidence, which my capsule summaries completely omit. All reward study and deserve further discussion, but for present purposes they illustrate a single point: each proposal is a variant on the theme of a disrupted recurrent network. They vary in their ideas about the brain regions involved in the circuit, and the proposed mechanism of the pathophysiology. (Andreasen and Frith seem to suggest that schizophrenic symptoms are the immediate effect of a disruption of connectivity among brain areas; Friston proposes that the symptoms are a longer-term secondary consequence of disrupted modulation of otherwise normal processes of synaptic change.) The exemplary theories further develop their proposals to explain several schizophrenic symptoms and deficits. In these theories, localized dysfunction is a secondary manifestation of the illness. No particular component fails in schizophrenia, they claim, but rather all fail together as their interconnections unhinge. The networks they scrutinize comprise mutually interconnected components, and in different ways each theory proposes aberrant feedback as part of the mechanism of cognition in schizophrenia.

But there is a deeper consequence of recurrence that the individual theories omit but is prominent in the dynamic architecture of a generic recurrent network. In addition to disrupting inputs and outputs, dysfunction in a recurrent network both accumulates and spreads. Returning to figure 14.1 and the Simple Recurrent Network, a modification in one connection in the recurrent loop can affect one unit, but that unit can affect several others in the next cycle, and those can affect still others as the cycles continue. If the connections are plastic, then the spreading dysfunction embeds itself in many connections between units, and that in turn contributes to further distortion. Like compound interest reinvested in a savings account, dysfunction accumulates and compounds over time. In short, the disconnection hypotheses considered above regard schizophrenia as a modification of neural networks that are distributed in space and fail to act in concert. But recurrence provides the functional architecture to accommodate information that is distributed in time. Its failures are deficits in sequence, with the secondary effect of compounded problems.
If schizophrenia is a consequence of dysfunction in a recurrent loop, its expression will be quite different from deficits limited to spatial, synchronous patterns.

Temporality

‘‘The time is out of joint,’’ complains Hamlet, and in the previous section I suggested a possible neural network implementation of his predicament: dysfunction in a recurrent network will distort structures in time. But to understand how this distortion might appear in schizophrenia, we must first understand the normal, undistorted starting point. The basic structures of temporality in cognition are little discussed in cognitive science, cognitive neuroscience, and analytic philosophy. But temporality is foundational in continental phenomenology, and despite variations over the decades, the original phenomenological description of temporal experience is still accepted. The definitive account of time is due to Edmund Husserl, who described it in lectures given in 1905 (although they were only published in 1928). His fundamental observation was that our conscious perceptual experience of a scene before us right now is not exhaustively constituted by the occurrent sensory information available. In addition to sensation, all perception incorporates a nonsensory ‘‘apprehension’’; appearances include both sensations and apprehensions. The contents of the apprehension are manifold, but central to all of them is the awareness of the temporal context of the present sensation. The presently experienced context enfolds both future and past. The future ‘‘appears’’ as an anticipation of what will or might happen in the seconds and minutes ahead. Husserl called this anticipation ‘‘protention.’’ At the same time the past appears as a nonsensory awareness of what has just transpired. Husserl called this form of primary memory ‘‘retention.’’ Between protention and retention, the incoming stream of sensation is the ‘‘primal impression.’’ Our experience of the present, then, is not simply the intake of information before us, but is a triptych of protention, primal impression, and retention. Subjective temporality is ‘‘thick’’ with protentive and retentive layers, in effect adding phenomenal temporal dimensions to the thin line of linear, objective time.

Husserl claimed not only that temporality was apparent, but that it was a necessary condition for anything we would recognize as conscious experience. For example, imagine a watch hanging from a watch chain. Is it moving or stationary? Present sensation of the watch right there is ambiguous—it could just as easily be a snapshot of an immobile as of a moving object. The perception that the watch is moving can only be
achieved by retaining the context of its positions in the immediate past. This is equally true of the perception that the watch is stationary. Husserl’s deeper point is that we cannot imagine experience of either change or stasis without temporal information being part of the present consciousness of things. Two further points deserve emphasis in this account. First, this tripartite structure of time is all packed into the present moment. Protention and retention are both here now. Second, this fundamental temporal structure is distinct from our explicit attention to the future, through conscious prediction or planning, or to the past, through recollection. These distinct processes receive their own analysis (of their own distinctive tripartite structure). Once we have clearly in mind a present that includes a nonsensory anticipation of the future and a nonsensory trace of the past, we are ready to follow Husserl and launch the present, which is time, in motion through time. What appears as time passes is a continuous slippage of the present into retention (along with a continuous resolution of protention into primal impression). What slides into retention is not merely the present primal impression, the momentary sensory inputs, but rather the entire tripartite structure, moment by moment in a continuous temporal flow. At 10:10, present consciousness includes the sensory content at 10:10, along with an occurrent retention of (formerly) present consciousness at 10:09. But that lapsed present consciousness at 10:09 included its primal impression (sensory information at 10:09) and retentional consciousness at 10:09, itself enfolding retentional consciousness from 10:08, and so on into the past, as if into a bottomless well. But all this recursive nesting is experienced, all at once, at 10:10. Similar recursion opens into protention. We anticipate not just the next primal impression, at 10:11, but a next moment that will include a retention of the present package at 10:10 (and a further protention toward 10:12 and beyond). Figure 14.3 presents a schematic outline of the present moment of consciousness, as understood (in outline) by Husserl. Both the example just above and the diagram suggest discrete time steps and sharp boundaries between phases of temporal experience, but this is just for clarity. Husserl imagined a continuous slippage or flow of time. In addition the nesting depicted reaches into retention only, omitting equally complicated structures of protention. What then is time? For humans, at least, it is more than meets the eye, and more than the clock reports. That ‘‘more’’ goes beyond locating one’s experience in a framework of subjective history. Rather, temporality is
the basis for the subjective sense of reality itself. Since both stability and change are essentially temporal experiences, every element of the experienced world is inflected by temporality. Objects get (or lose) their objectivity by their trajectory through a complex counterpoint of protention and retention, and by exactly the same calculus we siphon off the subjective component of consciousness. Subjective and objective are both aspects of experienced time.

Figure 14.3 Phenomenology of the present moment of consciousness according to Husserl.

Temporality adds a significant complication to any model of cognition, in the form of a new dimension in perception. It entails, for example, that no two sensations are the same, no matter how similar the physical stimuli are. By the same token, nothing endures. Even the serene contemplation of a good size boulder yields an evanescent flow of experience. At a minimum, duration itself is a changing variable as the boulder evolves from first glance to steadfastness. In addition anything else straying into perception might remain in the temporal field even after it has disappeared from sensation.
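The recursive nesting described above can be made vivid with a small data structure. The sketch below is my own toy formalization, not Husserl’s and not anything proposed in this chapter: each moment packages a primal impression together with a retention of the entire previous moment and a (here crudely stringlike) protention.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Moment:
        impression: str                  # occurrent sensory content
        retention: Optional["Moment"]    # the entire previous moment, retained
        protention: str                  # nonsensory anticipation of what comes next

    now = None
    for impression in ["watch at left (10:08)",
                       "watch at center (10:09)",
                       "watch at right (10:10)"]:
        now = Moment(impression, retention=now,
                     protention="watch continues its swing")

    # The present at 10:10 still enfolds 10:09, which enfolds 10:08,
    # "as if into a bottomless well."
    depth, m = 0, now
    while m is not None:
        print("  " * depth + m.impression)
        depth, m = depth + 1, m.retention

Unwinding the retentions recovers 10:09 inside 10:10, and 10:08 inside that, while everything remains available at once in the present moment.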
Braiding Time and Recurrent Networks

Husserl never considered how temporal consciousness might be implemented in any physical system. Nowadays things have changed a bit. However, the new complexities of time do not lend themselves to a classical box-and-arrow functional decomposition. Instead, we turn again to dynamic systems in general, and to the simple recurrent network. The recurrent architecture offers information processing to support retention in very much the way Husserl imagined. The ‘‘context layer’’ reproduces the entire state of the hidden layer from one cycle past and makes it available along with the current inputs. Thus the basis of each cycle of processing includes two streams of information: an analogue of the primal impression (i.e., the current input) and an analogue of the retentional contents retained from the immediate past (i.e., the context layer) preserving the information encoded from the previous cycle. Figure 14.4 presents the analogy diagrammatically.

Figure 14.4 Speculative mapping of temporal information processing in a recurrent neural network, expressed in the language of phenomenology.

The case for this intertwining of temporality and its network implementation appears elsewhere (Lloyd 2003). Briefly, I simulated a recurrent network faced with a simple predictive task and developed methods for analyzing the model to demonstrate the presence of information required by Husserlian temporality. Then I used the same analytical techniques to confirm that temporal information processing is a conspicuous global feature of brain activity, as detected by functional neuroimaging (see also Lloyd 2002). At this point several pieces of a larger puzzle are before us. To begin, our conscious experience of the world continually presents its own context.
Although temporal context is not sensed, it is woven into every experience, and is essential to the constitution of consciousness. From an information-processing standpoint, any record of the past can influence the present only by some mechanism where information is gathered, stored, and represented in ongoing processing. The simple recurrent network offers a functional architecture to support this contextual information processing, and it has just the right components to implement the flow of experience Husserl posited, and to produce an integrated temporal structure of conscious experience. Computer models of recurrent networks demonstrate that all this is feasible for a material system. Phenomenology makes temporality conspicuous and emphasizes its importance. Phenomenology and network models together remind us that perceptual time is not a physical fact, but rather an ongoing construction of cognitive processes that are distinct from basic sensation. In ordinary, healthy cognition, temporal cognition is so exquisitely attuned to the causal patterns of the real world that we forget that it is a construct.

What, then, is the consequence when temporality fails? It’s difficult to articulate a normal, functional sense of temporal reality, but even harder to imagine its disruption. Consider again the example of the watch on its fob, this time as it might be misperceived. The ordinary case of misperception is a momentary failure of sensation. Perhaps the watch suddenly flashes red. The perplexed misperceiver inquires into the cause of the unexpected flash, which may be external or internal. This inquiry is entirely framed by normal temporality, in which the red flash and events before and after it are fixed, offering stable objects for reflective scrutiny and further exploration. The goal is to save the appearances, which at the most fundamental level entails saving reality itself. That is, the best explanation is always measured against an edifice of one’s understanding of the way the world works and the circumstances of the anomaly, all expressed within the infrastructure of time. At the outcome of the investigation one might conclude that the sensation had no physical cause, and was therefore an internally generated hallucination. But this is nonetheless an account that preserves reality as such. Sensation may be in doubt (no small matter), but the world sticks.

All the clues of schizophrenia surveyed here draw attention to another sort of breakdown, however. In schizophrenia, perhaps, sensory channels are preserved and operate normally, while recurrent circuits fail. In perceptual experience, how might this sort of anomaly appear, and what remedies would it require? Disruptions in recurrence will appear as anomalies in retention.
Since a recurrent net stores both spatial and temporal information in distributed representations, its dysfunction could affect either the ‘‘what’’ of the immediate past or its ‘‘when.’’ The unstable network might insert or delete content with the perplexing property of seeming to have occurred. Or the distortion may be temporal, dilating or shrinking the felt duration of processes. (For a similar proposal regarding the relationship of stored material, sensory input, and cognition in schizophrenia, see Hemsley 1996.) In the case of the swinging watch, these effects would give rise to unusual hallucinations. The otherwise normal watch may appear to have been changed (a content shift) or to have accelerated oddly (a temporal shift) or both—even though its present sensory presentation is normal. Mystifying alterations like these naturally attract attention and demand explanation. If this happened once in a while and seemed to affect only some types of experience, it might be possible to wall off the anomaly, leaving it unexplained, while preserving a normal outlook on ordinary reality. Déjà vu may be an example of this phenomenology. Déjà vu experiences are not sensory hallucinations, but illusions of temporal context that charge the present sensory experience with the feeling of having been experienced before. They are typically perceptual and (as the name implies) often visual. We don’t ordinarily experience déjà vu with respect to internally generated experiences like thoughts or mental imagery. But even déjà vu inspires some people to leap to mystical or occult explanations. In the case of the temporally distorted swinging watch, an almost inevitable inference might posit unusual causes, forces operating in addition to the normal mechanics of the situation. Hypothesizing occult causes restores temporal experience to normalcy, at the expense of physics. To the subject, perception would be operating correctly in a world where watches are shoved around and transformed by mysterious forces. The anomaly remains, but the new explanatory context preserves the nonsensory appearances. Perhaps schizophrenic experience is like this, but amplified: the conjectured schizophrenic breakdown of recurrent processing is not a sporadic occurrence, nor is it limited to external sensory modalities. In dysfunctional recurrent networks, both externally and internally generated processes are candidates for derailment over time. In schizophrenia, anomaly is the rule. This paradoxical assault on one’s efforts to make sense of the world will intensify a desperate struggle to explain an endless slippage suffusing the objective world. But the process of explanation itself suffers the compounded effects of dysfunction. Things misbehave, and so do ideas. But an individual in this horrifying predicament cannot easily
give up a claim on reality. He or she cannot doubt the temporality of experience. Acknowledging the pervasiveness of dysfunction would be to surrender the very possibility of perception. A dynamic systems hypothesis of schizophrenia will find ultimate grounding in the brain. Many regions seem affected in the illness, but not all, and a full story will need to balance spared function against the cataclysm described above. Nonetheless, the hypothesis illuminates the primary datum of schizophrenia, namely its elusive resistance to explanation. The symptomatology, phenomenology, and general characterization of schizophrenic thought processes are elusive because every aspect of information in the affected brain areas is potentially subject to disruption. That is, no concept, distinction, or pattern of thought, expression, or feeling is immune from potential distortion. Accordingly it is unlikely that there is a single ‘‘essential confusion’’ that defines the conceptual content of the illness. The idiosyncrasy of schizophrenic symptoms is rooted in the particularity of the experiences, habits, concerns, and personality of each patient. Although no particular ideation accompanies the disorder, the effort to preserve phenomenal reality in the face of random disruption may frequently elicit certain strategies, including posits of occult causes, hyperelaboration of systematic explanations, and, when all else fails, a catatonic withdrawal from the tumult. Each of these strategies is an attempt to preserve perception, and to the extent that they succeed, patients in the grip of psychosis will believe that their reality is Reality, and thus are likely to assume their reality is shared. They will neglect to communicate their world effectively. These comments outline an ‘‘epistemology of schizophrenia,’’ a characterization of information processing in a dysfunctional recurrent network. In humans the pathophysiology of the disease seems dependent on dysfunctional neuromodulatory pathways. In us, then, the epistemology of schizophrenia is a dopaminergic epistemology. But through the unique lens of schizophrenia, we see some aspects of healthy cognition that are not usually conspicuous. Healthy cognition depends on the edifice of temporality. The phenomenal experience of reality, and the experience of perception as perception (i.e., as our subjective access to reality), is the product of a carefully tuned internal recurrent mechanism that may implicate many regions of the brain. The explanatory struggle of a mind with schizophrenia conforms to the smaller scale reactions to anomaly that those of us who are lucky in health experience routinely: cognition pounces on anomaly and labors to embed it in the warp and woof of reality. Schizophrenic symptomatology and phenomenology suggest that phenomenal reality is
robust. But although the feel of the real is not easily overcome, its actual correspondence to the physical world is fragile, dependent on the smooth functioning of a recurrent net with billions of nodes. In the dialectic between phenomenal, felt reality and one's antecedent knowledge of the objective world, phenomenal reality wins easily. Delusions and other positive symptoms of schizophrenia indicate a striking willingness to give up bedrock ideas of agency and causality. But this, I suggest, is the price of maintaining an even more basic hold on perception. Any reality, no matter how bizarre, is better than no reality. As King Lear says:

O, let me not be mad, not mad, sweet heaven,
Keep me in temper: I would not be mad. (I.v)
Civil Schizophrenia

So far this discussion has circulated through several aspects of a single idea, variously expressed as phenomenological temporality, dynamic-systems recurrence, and neurophysiologically modulated circuits and systems. In all three incarnations, I've considered schizophrenic cognition and its healthy counterpart in terms of internal information processing. The loop of recurrence on which reality hangs is a loop inside the head. But this book and a good deal of contemporary discussion have explored the outsourcing of cognition. "Distributed cognition" has come to mean more than parallel distributed processing implemented inside networks and nervous systems; distribution now reaches into the world, and worldly processes are considered proper parts of cognition itself. Accordingly, in this penultimate section I'll revisit the ideas above in the framework of distributed cognition in this broader sense.

At the core of this discussion is the theme of recurrence, a functionally loopy notion of information produced by a system and then reintroduced to the same system. Elman's Simple Recurrent Network showed the cognitive power of the architecture, phenomenology already had a niche for the lived experience of the loop, and neuroscience showed that the brain is broadly organized accordingly. If anything, this loop is even more apparent outside the box. In its tightest form, action changes the world and perception detects the change. The action-perception loop is crucial to self-monitoring. Action, of course, is more than mere bodily movement. In the physical and social world, our actions ramify in their effects, and these ripples return to our perception, where they shape further action, and so forth.
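To make the architecture concrete, here is a minimal sketch of an Elman-style simple recurrent network in Python with NumPy. The dimensions, weight values, and names (srn_step, W_xh, and so on) are illustrative, not drawn from Elman 1990 or from this chapter; the defining feature is only the context loop, in which each hidden state is computed from the current input together with a copy of the immediately preceding hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions for an Elman-style simple recurrent network (SRN).
n_in, n_hidden, n_out = 4, 8, 4

# Weights: input -> hidden, context (previous hidden) -> hidden, hidden -> output.
W_xh = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # the recurrent loop
W_hy = rng.normal(scale=0.5, size=(n_out, n_hidden))

def srn_step(x, h_prev):
    """One time step: blend the current input with the copied-back context."""
    h = np.tanh(W_xh @ x + W_hh @ h_prev)  # hidden state = f(now, just-past)
    return h, W_hy @ h                     # the present output reflects both

# Feed a short random sequence; the hidden state plays the 'context layer' role.
h = np.zeros(n_hidden)
for x in rng.normal(size=(6, n_in)):
    h, y = srn_step(x, h)

# The same input yields different hidden states depending on history,
# which is how the net carries the 'when' as well as the 'what'.
probe = np.ones(n_in)
h_fresh, _ = srn_step(probe, np.zeros(n_hidden))
h_primed, _ = srn_step(probe, h)
print("context-dependent response:", not np.allclose(h_fresh, h_primed))
```

Because "what happened" and "when it happened" are folded into the same distributed vector, the two kinds of distortion discussed above, content shifts and temporal shifts, are exactly what one would expect when such a vector is corrupted.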
Schizophrenia invited us to consider a disruption in the loop, and it highlighted some perplexing implications of recurrence run amok. First, I labored above to conceive of a nonsensory dislocation of context, especially temporal context, and to conceive of the wobbly self-corrections of a recurrent net and the experienced struggle to fit context slippage into a real world, however weird. Second, arrayed against the hope of self-correction is the re-entrant magnification of dysfunction: in general, recurrence allows a local dysfunction to spread as it recycles through the system. Finally, these two discussions underscored the fragility of our home in time. However comforted we may be by the immutable laws of nature, we seem to live in a house of cognitive cards. A dopaminergic tweak can bring it all down.
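The re-entrant magnification of dysfunction lends itself to the same toy treatment. The sketch below (again Python with NumPy; the names run, gain, and kick are illustrative, and the linear update is a deliberate caricature rather than a model of any real circuit, with the gain parameter standing in loosely for neuromodulatory tuning) injects a single small disturbance into a recurrent loop and compares two gain settings: in the well-modulated loop the disturbance washes out, while a modest overshoot in gain lets it compound on every cycle.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Random recurrent weights, rescaled so their spectral radius is exactly 1;
# the 'gain' below then directly determines whether the loop damps or amplifies.
W = rng.normal(size=(n, n))
W /= np.max(np.abs(np.linalg.eigvals(W)))

def run(gain, kick=0.0, steps=30):
    """Iterate h <- gain * W h, injecting a one-time disturbance at t = 5."""
    h = np.full(n, 0.1)
    history = []
    for t in range(steps):
        if t == 5:
            h = h + kick  # a single, local perturbation of the state
        h = gain * (W @ h)
        history.append(h.copy())
    return np.array(history)

for gain in (0.8, 1.2):  # well-modulated loop vs. slightly dysregulated loop
    drift = np.linalg.norm(run(gain, kick=1e-3) - run(gain), axis=1)
    print(f"gain={gain}: drift at t=10 is {drift[10]:.1e}, at t=29 is {drift[29]:.1e}")
```

Nothing here depends on the details. The point is only that a loop tuned slightly past its stable range turns a one-off disturbance into a standing, growing distortion, where the same disturbance in a well-tuned loop decays toward nothing.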
Now consider these same observations, applied to the manifold concentric and overlapping loops of the inhabited world. As a metaphorical handle, I propose the "Gaslight Effect," after the movie Gaslight, in which greedy Charles Boyer attempts to convince his spouse, wealthy Ingrid Bergman, that she is going mad. The intriguing proposal of the film is that the husband's project is not hard to pursue. A few misplaced items and earnest lies push the heroine into a state of high anxiety. That is, small external events ramify in her mind toward a terrifying global hypothesis about her own psyche. The Gaslight Effect, accordingly, will name this potential for destabilization, in which some disruption of the information flow in the environment spreads to the epicycles of the brain. Another name for it might be "social schizogenesis."

In Gaslight the nefarious plot unravels as Ingrid Bergman seizes on another small clue, a flickering gaslight, to deduce the conspiracy against her. Like the patients considered in the sections above, she labors to preserve her hold on reality, and succeeds. In the movie the plotting of an evil husband dislocates the reflective stability of the heroine. One hopes that no one is a victim of deliberate manipulation toward schizogenic ends. In any case, we can't begin with a literal interpretation of the effect, nor assume that there is a villainous agency at work in society. Casting our net into the great sea of culture, how might we detect a Gaslight Effect in the wild?

Of course, our cultural milieu, like any cultural milieu, breeds misinformation along with knowledge. In this respect culture is not unlike cognition, muddling through. Schizophrenia, in contrast, is a specific kind of disruption in the cognitive flow. In the sections above I have emphasized alterations in recurrent information processing due to alterations in modulation. In addition I've discussed some characteristic responses to the massive threat to perceptual reality posed by the illness. The Gaslight analogy lends itself to various loose interpretations and many applications; here I'll comment on a few instances.

Consider, for example, mass media in its contribution to one's perception and understanding of the world. There are many reasons to question the accuracy of mediated "perceptions," but here the issue is a different sort of distortion. We should consider not how information is presented in the first instance but rather how it is re-presented through recurrent processes. In this respect it seems to me that text-based media and photographic media diverge, especially in the era of television and easy video recording. Modern video affords rapid and exact repetition of scenes. Where other information is sparse, particular images recur as placeholders, despite the low information value of their nth presentation. With repetition, certain images acquire iconic significance and heightened availability to recall and reflection. In this way they amplify one path through an ongoing recurrent loop. Commentators have noted that this repetition leads either to numbness or to overreaction ("hysteria"), which could be cultural analogues of negative and positive schizophrenic symptoms. Whether these responses correspond exactly to schizophrenic symptoms is not as important as the recognition that repetition in the external environment may have psychological effects that distort cognitively appropriate responses to the original event. (Providing examples from recent events is left as an exercise for the reader.)

Looking more closely at mass-media news gathering, we can see many instances of modulation of information flow, and we can speculate about their cognitive effects as well. For example, The New York Times reported in March 2005 that several government agencies were producing television news stories for inclusion in local news broadcasts. Providing information to citizens is, of course, an essential function of government, but these segments took the liberty of disguising their governmental origins. Instead, they were presented as independent reporting, complete with a fictitious on-air reporter. The Potemkin correspondents offered laudatory descriptions of US military prison guard training, successful businesswomen in Afghanistan, and airport security. Setting aside bias in the content of these reports, consider their deceptive framing as objective reporting. This is an example of aberrant modulation of information. Like repetition, the framing exaggerates the information value of the depicted content, making it more available to recall and reflection (as "fact" rather than propaganda). In this case ideas have been moved from one category to another. Nowadays we refer to this as "spin," a term of art that entered American English only a little more than a decade ago.
One arena of ongoing spin is the appropriation of terms across categories, a practice that especially affects evaluative terms. Language itself is a recurrent medium, and the long-term effects of spin on thought may be subtler than those of outright deception. Consider this description of an earlier instance of out-of-control spin doctoring:

Revolution thus ran its course from city to city, and the places which it arrived at last, from having heard what had been done before, carried to a still greater excess the refinement of their inventions, as manifested in the cunning of their enterprises and the atrocity of their reprisals. Words had to change their ordinary meaning and to take that which was now given them. Reckless audacity came to be considered the courage of a loyal ally; prudent hesitation, specious cowardice; moderation was held to be a cloak for unmanliness; ability to see all sides of a question, inaptness to act on any. Frantic violence became the attribute of manliness; cautious plotting, a justifiable means of self-defense. The advocate of extreme measures was always trustworthy; his opponent a man to be suspected. (History of the Peloponnesian War, 3.82, Richard Crawley, trans.)
This is Thucydides's description of the erosion of discourse in the later stages of the Peloponnesian War. I think his diagnosis of the cumulative effect of spin is correct: words change their meaning. In this respect civil discourse gradually unhinges in a manner similar to the breakdown of language in schizophrenia.

Nothing in this section should come as a surprise to any witness to recent cultural and political developments in the United States. But I invite their reconsideration under the rubric of the Gaslight Effect: external events and processes that disrupt healthy cognition, with the potential for intractable runaway feedback.

The prognosis for the mental disorder of schizophrenia remains poor. For now, there is no cure for this illness. Civil schizophrenia is another matter, however. The Gaslight Effect is an epistemic dysfunction, and its cure is a sustained course of treatment with the therapeutic practice of rational inquiry. Searching for evidence for one's claims, especially disconfirming evidence, weighing alternatives, careful communication, and plain old honesty are the modulators needed to restrain civil schizophrenia. In an era when "values" are themselves subject to spin, perhaps we would do well to keep these epistemic values before us. They are essential in the struggle to achieve every other value. To abandon these fundamental epistemic values is to abandon civil discourse.

If there is a Gaslight Effect, then civil society always faces a dual threat. Dysregulated discourse and misinformation undermine a society's collective grasp of consensual truth, while at the same time dislocating cognition itself. As civil schizophrenia spirals inward, it infects our capacity for rational reflection.
Through the inner and outer dynamics discussed in this chapter, these processes of disintegration can ramify and compound. As we lose our communities of discourse, we may also lose our minds.

Acknowledgments

Thanks to Matthew Broome and Kara Carvalho for helpful comments on earlier drafts.

Note

1. It seems to me that these questions do a fairly good job of identifying not only schizotypal personalities but academics as well, particularly the type of professor who often gives the plenary address at conferences. To be fair, these are warm-up questions compared to the other sixty-five questions in this assessment tool, which probe increasingly bizarre experiences.

References

Andreasen, N. C., P. Nopoulos, D. S. O'Leary, D. D. Miller, T. Wassink, and M. Flaum. 1999. Defining the phenotype of schizophrenia: Cognitive dysmetria and its neural mechanisms. Biological Psychiatry 46: 908–20.

Barch, D. 2005. Cognitive neuroscience of schizophrenia. Annual Review of Clinical Psychology 1: 321–53.

Bleuler, E. 1913. Dementia praecox or the group of schizophrenias. In J. Cutting and M. Shepherd, eds., The Clinical Roots of the Schizophrenia Concept. Cambridge: Cambridge University Press.

Byne, W., E. Kemether, L. Jones, V. Haroutunian, and K. Davis. 1999. Neurochemistry of schizophrenia. In D. S. Charney, E. J. Nestler, and B. S. Bunney, eds., Neurobiology of Mental Illness, pp. 236–57. New York: Oxford University Press.

Corkin, S. 2002. What's new with the amnesic patient H.M.? Nature Reviews Neuroscience 3: 153–60.

Corkin, S., D. G. Amaral, R. G. Gonzalez, K. A. Johnson, and B. T. Hyman. 1997. H.M.'s medial temporal lobe lesion: Findings from magnetic resonance imaging. Journal of Neuroscience 17: 3964–79.

Crow, T. J. 1980. Molecular pathology of schizophrenia: More than one disease process? British Medical Journal 280: 66–68.

Elman, J. 1990. Finding structure in time. Cognitive Science 14: 179–211.
Friston, K. J. 1999. Schizophrenia and the disconnection hypothesis. Acta Psychiatrica Scandinavica 395 (suppl.): 68–79.

Frith, C. 1992. Cognitive Neuropsychology of Schizophrenia. Hove, UK: Erlbaum.

Fulford, K. 1994. Value, illness, and failure of action: Framework for a philosophical psychopathology of delusions. In G. Graham and G. L. Stephens, eds., Philosophical Psychopathology, pp. 205–33. Cambridge: MIT Press.

Hafner, H., and W. Van der Heiden. 1997. Epidemiology of schizophrenia. Canadian Journal of Psychiatry 42: 139–51.

Hemsley, D. R. 1996. Schizophrenia: A cognitive model and its implications for psychological intervention. Behavior Modification 20: 139–69.

Hemsley, D. R. 2005. The development of a cognitive model of schizophrenia: Placing it in context. Neuroscience and Biobehavioral Reviews 29: 977–88.

Husserl, E. 1964. Phenomenology of Internal Time Consciousness, trans. J. S. Churchill. Indianapolis: Indiana University Press.

Jaspers, K. 1963. General Psychopathology. Manchester: Manchester University Press.

Karno, M., and G. S. Norquist. 1995. Schizophrenia: Epidemiology. In H. Kaplan and B. J. Sadock, eds., Kaplan and Sadock's Psychiatry, pp. 902–10. Baltimore: Williams and Wilkins.

Liddle, P. F. 1987. The symptoms of chronic schizophrenia: A re-examination of the positive-negative dichotomy. British Journal of Psychiatry 151: 145–51.

Lim, K. O., R. B. Zipursky, M. C. Watts, and A. Pfefferbaum. 1992. Decreased gray matter in normal aging: An in vivo magnetic resonance study. Journal of Gerontology 47: B26–30.

Lloyd, D. 2002. Functional MRI and the study of human consciousness. Journal of Cognitive Neuroscience 14: 1–14.

Lloyd, D. 2003. Radiant Cool: A Novel Theory of Consciousness. Cambridge: MIT Press.

Niznikiewicz, M., M. Kubicki, and M. Shenton. 2003. Recent structural and functional imaging findings in schizophrenia. Current Opinion in Psychiatry 16: 123–47.

Robins, L. N., J. E. Helzer, M. M. Weissman, H. Orvaschel, E. Gruenberg, J. D. Burke Jr., and D. A. Regier. 1984. Lifetime prevalence of specific psychiatric disorders in three sites. Archives of General Psychiatry 41: 949–58.

Stephens, G. L., and G. Graham. 2004. Reconceiving delusion. International Review of Psychiatry 16: 236–41.

Weiden, P., R. Aquila, and J. Standard. 1996. Atypical antipsychotic drugs and long-term outcome in schizophrenia. Journal of Clinical Psychiatry 57 (suppl. 11): 53–60.